Infinidat Power Panel | CUBEconversation
[Music]

>> Dave: Hello, and welcome to this power panel, where we go deep with three storage industry vets, two from Infinidat and an analyst view, to find out what's happening in the high-end storage business and what's new with Infinidat, which has recently added significant depth to its executive ranks. We're also going to review the progress on Infinidat's InfiniBox SSA, a low-latency, all-solid-state system designed for the most intensive enterprise workloads. To do that we're joined by Phil Bullinger, the Chief Executive Officer of Infinidat; Ken Steinhardt, the Field CTO at Infinidat; and we bring in the analyst view with Eric Burgener, who's the Vice President of Research, Infrastructure Systems, Platforms and Technologies Group at IDC. All three Cube alums. Gents, welcome back to theCUBE, good to see you.

>> Phil, Ken, and Eric: Thanks very much, Dave, good to be here. Thanks, David. As always, a pleasure.

>> Dave: Phil, let me start with you. As I mentioned up top, you've been topgrading your team. We covered the Herzog news, beefing up your marketing, and also upping your game in EMEA and APJ go-to-market recently. Give us the business update on the company since you became CEO earlier this year.

>> Phil: Yeah, Dave, I'd be happy to. I joined the company in January and it's been a fast eleven months, exciting times at Infinidat. As you know, really beginning last fall, the company has gone through quite a renaissance, a change in the executive leadership team. I was really excited to join the company. We brought on a new CFO, a new Chief Human Resources Officer, a new Chief Legal Officer, a new head of operations, and most recently, as has been widely reported, we brought in Eric to head up our marketing organization as CMO, and then, last week, Richard Bradbury in London to head up international sales. So I'm very excited about the team we've brought together. It's been the culmination of a lot of work this year to accelerate the growth of Infinidat, and that's exactly what we've done. The company has posted quarter after quarter of significant revenue growth, and we've been accelerating our rate and pace of adding large new Fortune 500 and Global 2000 accounts, and the results show it. One of the most exciting things this year has been that Infinidat has pretty rapidly evolved from a single-product-line company built around the InfiniBox architecture, which is what made us unique at the start and still makes us very unique as a company. We've really expanded out from there, on that same common software-defined architecture, to the SSA, the solid-state array, which we're going to talk about in some depth today, and then to our backup appliance, our data protection appliance, as well, all running the same software. What we see now in the field is many customers expanding quickly beyond the traditional InfiniBox business to the other parts of our portfolio, and our sales teams in turn are expanding their selling motion from kind of an InfiniBox approach to a portfolio approach, and it's really helping accelerate the growth of the company.

>> Dave: Yeah, that's great to hear. You've really got a deep bench, and of course you know a lot of people in the industry, so you're tapping a lot of your colleagues. Okay, let's get into the market. I want to bring in the analyst perspective. Eric, can you give us some context? When we talk about things like ultra-low-latency storage, what does the market look like to you? Help us understand the profile of the customer, the workloads, the market segment, if you would.
>> Eric: Well, you bet. So I'll start off with a macro trend, which is that clearly there's more real-time data being captured every year; in fact, by 2024, 24% of all of the data captured and stored will be real-time, and that puts very different performance requirements on the storage infrastructure than what we've seen in years past. A lot of this is driven by digital transformation. We've seen new workload types come in, big data analytics, real-time big data analytics, and obviously we've got legacy workloads that need to be handled as well. One other trend I'll mention that is really pointing up this need for low latency, consistent low latency, is workload consolidation. We're seeing a lot of enterprises look to move to fewer storage platforms, to consolidate more storage workloads onto fewer systems, and to do that they really need low-latency, consistently low-latency platforms to be able to achieve that and continue to meet their service level agreements.

>> Dave: Great, thank you for that. All right, Ken, let's bring you into the conversation. Steiny, what are the business impacts of latency? I want you to help us understand when and why high latency is a problem, what the positive impacts of having a consistent low-latency option are, and what kind of workloads and customers need that.

>> Ken: Right. The world has really changed. When dinosaurs like me started in this industry, the only people that really knew about performance were the people in the data center, and then, as things moved into online computing over the years, people within your own organization would care about performance if things weren't going well. It was really the ERP revolution of the 1990s that sort of opened people's eyes to the need for performance, particularly storage performance, where now it's not just your internal users, but your suppliers are seeing what your systems look like. Fast forward to today, in a web-based internet world, everyone can see, with customer-facing applications, whether you're delivering what they want or not. To answer your question, it really comes down to competitive differentiation for the users that can deliver a better customer experience. I'm sure everybody can relate: if you go online and try to place an order, especially with the holiday season coming up, and there's one particular site that is able to give you instantaneous response, you're more likely to do business there than somewhere you're going to be waiting, and it literally is that simple. It used to be that we cared about bandwidth, and we used to care about I/Os per second, and the third attribute, latency, really has become the only one that matters going forward. We've found that most customers tell us that these days almost anyone can meet their requirements for bandwidth and I/Os per second, with very few outlying cases where that's not true. But the ever-unachievable zero latency, instantaneous response, that's always going to be able to give people competitive differentiation in everything that they do, and whoever can provide that is going to be in a very good position to help them serve their customers better.
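Ken's point about averages versus consistency is easy to make concrete. The following is a minimal synthetic sketch (my own illustration with made-up numbers, not Infinidat data): two systems with essentially the same mean latency, one of which stalls on a small fraction of I/Os. The tail percentiles, not the mean, are what expose the difference a customer actually feels.

```python
# Synthetic comparison: same average latency, very different tails.
import numpy as np

rng = np.random.default_rng(42)

# System A: consistent ~0.4 ms responses.
lat_a = rng.normal(loc=0.40, scale=0.05, size=1_000_000).clip(min=0.05)

# System B: usually ~0.3 ms, but 1% of I/Os stall around 10 ms
# (cache misses, garbage collection, media hiccups, and so on).
fast = rng.normal(loc=0.30, scale=0.05, size=990_000).clip(min=0.05)
slow = rng.normal(loc=10.0, scale=2.0, size=10_000).clip(min=0.05)
lat_b = np.concatenate([fast, slow])

for name, lat in [("A (consistent)", lat_a), ("B (spiky tail)", lat_b)]:
    print(f"{name}: mean={lat.mean():.2f} ms  "
          f"p99={np.percentile(lat, 99):.2f} ms  "
          f"p99.9={np.percentile(lat, 99.9):.2f} ms")
```

Both systems report a mean around 0.4 ms, but B's p99.9 lands an order of magnitude or more higher, which is the gap between "fast on average" and "fast for every single I/O."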
>> Dave: Yeah, Eric, that stat you threw out, 24% real-time, sort of underscores the need. But Phil, I wonder how this fits into your TAM expansion strategy. I think that's the job of every CEO, to think about expanding the TAM. It seems like a lot of people might say it's not necessarily the largest market, but it's strategic and maybe opens up some downstream opportunities. Is that how you're thinking about it, or, based on what Ken just said, do you expect this to grow over time?

>> Phil: Oh, we definitely expect it to grow, Dave. The history of Infinidat has been around our InfiniBox product, targeting the primary storage market at the higher end of that market. We've enjoyed operating in an eight, nine, ten billion dollar TAM through the years, and it continues to grow, and we continue to outpace market growth within that TAM, which is exciting. What the SSA really does is open up a tier of workload performance that we see more and more emerging in the primary data center. The classic InfiniBox architecture is very, very fast; as we say, it typically outperforms most of our all-flash array competitors. But clearly there is a tier of workloads growing in the data center that require very, very tight tail latencies, and that segment is certainly growing; it's where some of the most demanding workloads are. The InfiniBox SSA was really built to expand our participation in those segments of the market, and, as I mentioned up front, at the same time we're also taking that software architecture and moving it into the data protection space as well, which is a whole other market space that we're opening up for the company. So we really see our TAM this year, with more of this portfolio approach, expanding quite a bit.

>> Dave: Eric, how do you see it? Will those real-time applications that you talked about, the ones that require that consistent ultra-low latency, grow kind of in parallel with that time curve? Will they become a bigger part of the overall storage TAM and the workload mix? How does IDC see it?

>> Eric: Yeah, so they actually are going to be growing over time, and a lot of that's driven by the expectations that Steinhardt mentioned a little bit earlier on the part of customers: what they expect when they interact with your IT infrastructure. So we see that absolutely growing going forward. I will make a quick comment about when all-flash arrays first hit, back in 2012. In the ten years since they started shipping, they now generate over 80% of the primary revenues out there in the primary storage arena, so clearly they've taken over. An interesting aspect of what's going on here is that a lot of companies now write RFPs specifically requiring an all-flash array, and what's going to be interesting for Infinidat is that, despite the fact that they could deliver better performance than many of those systems in the past, they couldn't really go after the business where that RFP was written for an AFA spec. Well, now they'll certainly have the opportunity to do that, and in my estimation that's going to give them access to about an additional $5 billion in TAM by 2025. So this is big for them as a company.

>> Dave: Yeah, that's a 50% increase in TAM. Okay, well, Eric, you just set up my follow-up question to you, Ken, which was going to be the tougher question. You and I have had some healthy debates about this, but I know you'll have answers. For years you've argued that your cache architecture and magic-sauce algorithms, if I got that right, could outperform all-flash arrays while using spinning disks. Eric talked about the sort of check-off item, but are there other reasons for the change of heart? Why does the world need another AFA, and doesn't this cut against your petabyte-scale messaging? I wonder if you could add some color to that.
>> Ken: Sure, a great question. And the good news is InfiniBox still does typically outperform all-flash arrays, but usually that's for average latency performance, and we're getting that because we're a caching architecture, not a tiered architecture, and we're caching to DRAM, which is an order of magnitude faster than flash or even storage-class memory technologies. It's our software magic, that software-defined storage approach we've had, that now effectively is extended to solid-state arrays. Some customers told us, "We love your performance, it's incredible, but if you could let us be confident that we're seeing sub-half-millisecond performance consistently for every single I/O, you're going to give us competitive differentiation." And this is one of the reasons why we chose to call the product a solid-state array, as opposed to merely an all-flash array, the more common, ubiquitous term: it's because we're not dependent on a specific technology. We're using DRAM, we can use virtually any technology on the back end, and in this case we've chosen to use flash, but it's the software that is able to provide that caching to the front-end DRAM that makes things different. So that's one aspect: it's the software that really makes the difference. It's been the software all along, and, as Phil mentioned, going across the multiple products, it's still the software. It's also that, in that class of ultra-high performance, architecturally, because it is based on the InfiniBox architecture, we're able to deliver 100% availability, which is another aspect that the market has evolved to come to expect. And it's not rocket science or magic how we do it. The godfather of computer science, John von Neumann, all the way back in the 1950s, theorized that the right way to do ultra-high availability and integrity in IT systems of any type is in threes: triple redundancy. In our case, amazingly, we're the only architecture that uses triple-redundant, active-active components for every single mission-critical component on the system, and that gives people a level of confidence, from an availability perspective to go with that performance, that is just unmatched in the market. Then bring all of that together with a set-it-and-forget-it mentality for ease of use and simplicity of management, and, as Phil mentioned, a single architecture that can address not only ultra-high performance but the entire swath of, as Eric mentioned, consolidation, which is a key aspect driving this in addition to those real-time applications he mentioned, and we can even take it down into our InfiniGuard data protection device, all with the same common base of software, common interface, common user experience, and unmatched availability. We've got something that we really think people are going to like, and they've certainly been proving that of late.

>> Dave: Well, I was going to ask you what makes the InfiniBox SSA different, but I think you just laid it out. Your contention is this is totally unique in the marketplace. Is that right, Ken?

>> Ken: Yes, indeed, this is a unique architecture, and, literally, as a computer scientist myself, I truly am genuinely surprised that no other vendor in the market has taken the wisdom of the godfather of computer science, John von Neumann, and put it into practice, except, in the storage world, for this particular architecture, which transcends our entire realm, all the way from the performance down to the data protection.
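One way to see why the architecture Ken describes can hold its own on average latency, and why the SSA then tightens the tail, is a back-of-the-envelope cache model. This is my own sketch with assumed, round numbers, not vendor specifications: the effective read latency is just the hit-rate-weighted blend of the DRAM cache latency and whatever sits behind it.

```python
# Toy model: effective read latency of a DRAM cache in front of a slower
# persistent backend (all latency figures are illustrative assumptions).
def effective_read_latency_us(hit_rate: float,
                              cache_us: float,
                              backend_us: float) -> float:
    """Expected read latency for a given cache hit rate."""
    return hit_rate * cache_us + (1.0 - hit_rate) * backend_us

DRAM_US = 5.0      # assumed DRAM cache hit, end to end
FLASH_US = 150.0   # assumed flash read behind the cache
DISK_US = 5000.0   # assumed disk read behind the cache

for hit_rate in (0.90, 0.95, 0.99):
    hybrid = effective_read_latency_us(hit_rate, DRAM_US, DISK_US)
    ssa = effective_read_latency_us(hit_rate, DRAM_US, FLASH_US)
    print(f"hit rate {hit_rate:.0%}: DRAM+disk ~{hybrid:7.1f} us, "
          f"DRAM+flash ~{ssa:6.1f} us")
```

With a high DRAM hit rate the average is dominated by the cache, which is why a cache-plus-disk design can post strong mean numbers; the backend still governs what a cache miss costs, and swapping disk for flash is what pulls in the worst-case, every-single-I/O latency that the SSA is aimed at.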
>> Dave: Phil, you have a very wide observation space in this industry and a good, strong historical perspective. Do you think the expectations for performance, this notion of ultra-low latency, are becoming more demanding? Is there a parallel to the way availability expectations escalated over the years, because downtime was such a problem, and now it's really become table stakes, and that last mile is so hard? What are your thoughts on that?

>> Phil: I think absolutely, Dave. The hallmark of Infinidat is this white-glove, concierge-level customer experience that we deliver, and it's affirmed year after year in unsolicited enterprise customer feedback, above every other competitor in our space. Infinidat sets itself apart for this, and I think that's a big part of what continues to drive and fuel the growth and success of the company. I just want to touch on a couple of things that Ken and Eric mentioned. The SSA absolutely opens up our TAM, because we get a lot more at-bats now. But I think a lot of the industry looks at Infinidat and says, "Those guys are hard-drive zealots, right? Their architecture is all based on rotating disk, that's what they believe in, and in a hybrid-versus-AFA world they were increasingly not on the right bus." That's just absolutely not true, in that our neural cache, what Ken talked about, what made us unique at the start, I think only increasingly differentiates us going forward: the set it and forget it, the intelligence of our architecture, the ability of that DRAM-based cache to adapt so dynamically, without any knobs and configuration changes, to massive changes in workload scale and user scale. And it does it with no drama. The most common feedback we get from customers is that your platform just kind of disappears into our data infrastructure; we don't think about it, we don't worry about it. When we install an Infinidat rack, our intention is never to come back. We're not there showing up with trays of disk under our arms, trying to upgrade a mission-critical platform; that's just not our model. What the SSA does is give our customers choice. It's not about Infinidat saying that used to be the shiny object, now this is our new shiny object, please everybody now go buy that. Where we position our SSA is as a TCO and latency-SLA choice that they can make between exactly identical customer experiences. So instead of an old hybrid and a new AFA, we've got that same software architecture, set it and forget it, the neural cache, and customers can choose what back-end persistent store they want based on the TCO and the SLA that they want to deliver to a given set of applications. Probably the most significant thing I've seen happen in the last six months at Infinidat is that a lot of our largest customers, the Fortune 15s, the Fortune 50s, the Fortune 100s, who have been long-standing Infinidat customers, are now, on almost every subsequent tranche of purchase orders into us, showing a mix. We're seeing a mix of some SSA and some classic InfiniBox, because they're mixing and matching in a given data center, down a given row: these applications need this SLA, those applications need that SLA, and we're able to give them that choice.
And frankly, we don't intentionally try to steer them one direction or the other. They're smart, they do the math, they can pick and choose what experience they want, knowing that, irrespective of which front door they go through into the Infinidat portfolio, they're going to get that same experience.

>> Dave: So I'm hearing it's not just an RFP check-off item, it's more than that; the market is heading in that direction, Eric's data on real-time shows it, and we're certainly seeing the data-driven applications, the injection of AI, systems making decisions in real time. And I'm also hearing, Phil, that you're building on your core principles. I'm hearing the white-glove service, the media-agnostic, set-it-and-forget-it sort of principles that you guys were founded on, and you're carrying that through to this opportunity.

>> Phil: We absolutely are, and you asked a good question before that I want to answer more completely. I think availability and customer experience are incredibly important today, more so than ever, because data center economics and data center efficiency are more important than ever before. As customers evaluate what workloads belong in the public cloud and what workloads they want on-prem, irrespective of those decisions they're trying to optimize their operational expenses and their CapEx. So one thing that Infinidat has always excelled at is consolidation: bringing multiple users, multiple workloads onto the same common platform in the data center. It saves floor space and watts and storage administration resources. But to do consolidation well, you've got to be incredibly reliable and incredibly predictable, without a lot of fuss and drama associated with it. So I think the thing that has made Infinidat really strong through the years, being a very good consolidation platform, is more important now than ever before in the enterprise storage space, because it really is about data center efficiency and the administration efficiency associated with that.

>> Dave: Yeah, thank you for that, Phil. Now, Ken, let me come back to you. I want to ask you a question about consolidation. You and I, and Doc, our business friend, rest his soul, have had some great conversations about this over time. But as you consolidate, people are sometimes worried about the blast radius. Could you address that concern?

>> Ken: Sure. Well, Phil alluded to software, and it is the cornerstone of everything we bring to the table, and it's not just that deep learning that transcends all the intelligence Phil talked about across that full, wide range of product; it's also protection of data across multiple sites and in multiple ways. We were very fortunate in that when we started to create this product, since it is a modern product, we got to start with a clean sheet of paper and basically look at everything that had been done before, and, with some of the very people who created some of the original replication software in the market, we were able to ask: if I could do it again, how would I do it today, and how would it be better? So we started with local replication and snapshot technology, which is the foundation for being able to do full active-active replication across two sites today, where you can have true zero RPO, no data loss, even in the face of any kind of failure of a site, of a server, of a network, of a storage device, of a connection, as well as zero RTO, immediate, consistent operation with no human intervention.
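For readers less familiar with the jargon, the difference between asynchronous replication and the zero-RPO, active-active behavior Ken is describing can be sketched with a deliberately trivial model (my own simplification with assumed numbers, not a description of Infinidat's implementation): RPO is essentially the window of acknowledged writes that can be lost when a site fails.

```python
# Toy RPO model: acknowledged writes that have not yet reached the remote
# copy when the primary site fails. Numbers are illustrative assumptions.
def writes_at_risk(write_iops: float, replication_lag_s: float) -> float:
    """Writes acknowledged to hosts but not yet replicated."""
    return write_iops * replication_lag_s

WRITE_IOPS = 20_000  # assumed write rate at the primary site

for mode, lag_s in [("async, 30 s lag", 30.0),
                    ("async, 5 s lag", 5.0),
                    ("synchronous active-active", 0.0)]:
    print(f"{mode:>26}: ~{writes_at_risk(WRITE_IOPS, lag_s):>9,.0f} "
          f"writes at risk (RPO ~ {lag_s:g} s)")
```

RTO is the separate question of how quickly service resumes; the "zero RTO" in Ken's description means the surviving side keeps serving I/O with no manual failover step.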
>> Ken: And we can extend from that out to remote sites literally anywhere in the world, in multiples, where you can have additional copies of information, and any of them can be used not only for protection against natural disasters and floods and things like that, but also, from a cybersecurity perspective, with immutable snapshots, being able to provide data that you know the bad actors can't compromise, in multiple locations. So we can protect today against virtually any kind of failure scenario, across the swath of InfiniBox or InfiniBox SSA. You can even connect InfiniBoxes and InfiniBox SSAs, because they are the same architecture, exactly as Phil said. What we're seeing is people deploying mostly InfiniBox, because it addresses the wide swath from a consolidation perspective, and usually just InfiniBox SSA for those ultra-high-performance environments, but the beauty of it is that it looks, feels, runs and operates as that one single, simple environment: set it and forget it, and just let it run.

>> Dave: Okay, so you can consolidate with confidence. Let's end with the independent analyst perspective. Eric, how do you see this offering, and what do you think it means for the market? Is this a new category, is it an extension of an existing space? How do you look at it?

>> Eric: So I don't see it as a new category; it clearly falls into the current definition of AFAs. I think it's more important from the point of view of the customer base that likes this architecture, likes the availability, the functionality, the flexibility that it brings to the table, and now they can leverage it with tier-zero workloads, which was something they didn't have the latency consistency to do in the past. I'll just make one final comment on the software side as well. The reason software is eating the world, as Marc Andreessen put it, is basically the flexibility, the ease of use and the economics, and if you take a look at how this particular vendor, Infinidat, designed their product with a software-based definition, they were able to swap out what's underneath and create a different set of characteristics with this new platform because of the flexibility in the software design, and that's critical. If you think about how software is dominating: for 2021, 68% of the revenue in the external storage market is software-defined storage, and that's going to almost 80% by 2024. So clearly things are moving in the direction of systems that are defined in a software-defined manner.

>> Dave: Yeah, and data is eating software, which is why you're going to need ultra-low latency. Okay, we've got to wrap it. Eric, you just published a piece this summer called "Enterprise Storage Vendor Infinidat Expands Total Available Market Opportunities with All-Flash System Introduction." I'm sure they can get that on your website; here's a little graphic that shows you how to get that. So, guys, thanks so much for coming on theCUBE. Congratulations on the progress, and we'll be watching.

>> Phil, Ken, and Eric: Thanks, Dave. Thanks very much, Dave. Thank you, as always a pleasure.

>> Dave: All right, thank you for watching this CUBE Conversation, everybody. This is Dave Vellante, and we'll see you next time.

[Music]
Towards Understanding the Fundamental Limits of Analog, Continuous Time Computing
>> Zoltan Toroczkai: Hello everyone. My name is Zoltan Toroczkai. I am from the University of Notre Dame, Physics department, and I'd like to thank the organizers for their kind invitation to participate in this very interesting and promising workshop. I'd also like to say that I look forward to collaborations with the Redefine Lab and Yoshian collaborators on the topics of this work. So today I'll briefly talk about our attempt to understand the fundamental limits of analog, continuous-time computing, at least from the point of view of Boolean satisfiability (SAT) problem solving using ordinary differential equations. But I think the issues that we raise on this occasion actually apply to other analog approaches as well, and to other problems as well.

I think everyone here knows what Boolean satisfiability problems are. You have N Boolean variables and M clauses, each a disjunction of k literals; a literal is a variable or its negation. The goal is to find an assignment to the variables such that all the clauses are true. This is a decision-type problem from the NP class, which means you can check the satisfiability of any assignment in polynomial time, and 3-SAT is NP-complete for k of 3 or larger, which means an efficient 3-SAT solver implies an efficient solver for all the problems in the NP class, because all the problems in the NP class can be reduced in polynomial time to 3-SAT. As a matter of fact, you can reduce the NP-complete problems into each other: you can go from 3-SAT to Set Packing, or to Maximum Independent Set (which is set packing in graph-theoretic terms), or to the Ising spin-glass problem in its decision version. This is useful when you are comparing different approaches or working on different kinds of problems. When not all the clauses can be satisfied, you're looking at the optimization version of SAT, called Max-SAT, and the goal there is to find the assignment that satisfies the maximum number of clauses; this is from the NP-hard class. In terms of applications, if we had an efficient SAT solver, or NP-complete problem solver, it would literally, positively influence thousands of problems and applications in industry and science. I'm not going to read this list, but it of course gives us some motivation to work on these kinds of problems.

Now, our approach to SAT solving involves embedding the problem in a continuous space, and we use ODEs to do that. So instead of working with zeros and ones, we work with minus ones and plus ones, and we allow the corresponding variables to change continuously between the two bounds. We formulate the problem with the help of a clause matrix: if a clause does not contain a variable or its negation, the corresponding matrix element is zero; if it contains the variable in positive form, it's one; if it contains the variable in negated form, it's minus one. We then use this matrix to formulate products called clause violation functions, one for every clause, which vary continuously between zero and one and are zero if and only if the clause itself is true. We also define the search dynamics in the N-dimensional hypercube, where the search happens, and if solutions exist they're sitting in some of the corners of this hypercube. We define this energy, potential or landscape function, as shown here, in a way that it is zero if and only if all the clause violation functions, all the K_m's, are zero, that is, all the clauses are satisfied, while keeping these auxiliary variables, the a_m's, always positive.
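The formulas referred to as "shown here" are on the speaker's slides rather than in the transcript. A reconstruction consistent with the verbal description, and with the published continuous-time dynamical system (CTDS) formulation this talk appears to follow (so treat the exact normalizations as my assumption), is:

\[
K_m(\mathbf{s}) = 2^{-k}\prod_{i=1}^{N}\bigl(1 - c_{mi}\,s_i\bigr),
\qquad c_{mi}\in\{-1,0,+1\},\quad s_i\in[-1,1],
\]
\[
V(\mathbf{s},\mathbf{a}) = \sum_{m=1}^{M} a_m\,K_m(\mathbf{s})^{2}, \qquad a_m > 0,
\]
\[
\frac{ds_i}{dt} = -\frac{\partial V}{\partial s_i},
\qquad
\frac{da_m}{dt} = a_m\,K_m(\mathbf{s}).
\]

The factor \(2^{-k}\), for clauses of \(k\) literals, keeps each \(K_m\) between zero and one, and \(K_m = 0\) exactly when clause \(m\) is satisfied, which is the property the speaker states.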
And therefore what we do here is a dynamics that is essentially a gradient descent on this potential energy landscape. If you were to keep all the a_m's constant, it would get stuck in some local minimum. What we do instead is couple their dynamics with the clause violation functions, as shown here. If you didn't have the a_m here, just the K_m's, you would essentially have a positive feedback with a decreasing variable, and in that case you would still get stuck; it would still find solutions better than the constant version, but it would still get stuck. Only when we put in this a_m, which makes the dynamics in this variable exponential-like, does it keep searching until it finds a solution. And there's a reason for that, which I'm not going to talk about here, but it essentially boils down to performing gradient descent on a globally time-varying landscape. And this is what works.

Now, I'm going to talk about the good, the bad, and maybe the ugly. What's good is that it's a hyperbolic dynamical system, which means that if you take any domain in the search space that doesn't have a solution in it, then the number of trajectories in it decays exponentially quickly, and the decay rate is an invariant characteristic of the dynamics itself, what dynamical-systems people call the escape rate. The inverse of that is the timescale on which you find solutions with this dynamical system. You can see here some trajectories; they are curved because the system is not linear but transiently chaotic, yet, if there are solutions, of course, eventually it does lead to the solutions.

Now, in terms of performance: here is what we show for a bunch of constraint densities, defined by M over N, the ratio of clauses to variables, for random 3-SAT problems. As a function of N we monitor the wall time, the wall-clock time, and it behaves quite well, it behaves polynomially, until you actually reach the SAT-UNSAT transition, where the hardest problems are found. But what's more interesting is if you monitor the analog continuous time t, the performance in terms of continuous time, because that seems to be polynomial. The way we show that is we consider random k-SAT, random 3-SAT, for a fixed constraint density, and what we show here is right at the threshold, where it's really hard. We monitor the fraction of problems that we have not been able to solve: we select thousands of problems at that clause-to-variable ratio, we solve them with our algorithm, and we monitor the fraction of problems that have not yet been solved by continuous time t. As you see, these decay exponentially, with different decay rates for different system sizes, and this plot shows that the decay rate behaves polynomially, or actually as a power law. So if you combine these two, you find that the time needed to solve all problems, except maybe a small fraction of them, scales polynomially with problem size. So you have polynomial continuous-time complexity.
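A minimal numerical sketch of this search dynamics, using the reconstructed equations above on a tiny hand-built 3-SAT instance, is shown below. This is my own toy code, not the speaker's implementation, and it is only meant to make the mechanics concrete; the interesting stiff behavior discussed later appears at much larger N near the SAT-UNSAT threshold.

```python
# Toy integration of the continuous-time SAT dynamics on a small
# satisfiable 3-SAT instance (reconstruction, not the speaker's code).
import numpy as np
from scipy.integrate import solve_ivp

# Clause matrix c[m, i]: +1 for x_i, -1 for NOT x_i, 0 if x_i is absent.
C = np.array([
    [+1, +1, +1,  0],
    [-1, +1,  0, +1],
    [ 0, -1, +1, -1],
    [-1,  0, -1, +1],
], dtype=float)
M, N = C.shape
K_LIT = 3  # literals per clause

def clause_violation(s):
    """K_m(s) in [0, 1]; zero iff clause m is satisfied."""
    factors = np.where(C != 0.0, 1.0 - C * s, 1.0)
    return factors.prod(axis=1) / 2.0 ** K_LIT

def rhs(t, y):
    s, a = y[:N], y[N:]
    factors = np.where(C != 0.0, 1.0 - C * s, 1.0)
    K = factors.prod(axis=1) / 2.0 ** K_LIT
    # dK_m/ds_i = -c_mi * (product of the other factors) / 2^k
    prod_except = np.empty_like(C)
    for i in range(N):
        prod_except[:, i] = np.delete(factors, i, axis=1).prod(axis=1)
    dK_ds = -C * prod_except / 2.0 ** K_LIT
    ds = -(2.0 * a * K) @ dK_ds   # gradient descent on V = sum_m a_m K_m^2
    da = a * K                     # auxiliary variables grow while unsatisfied
    return np.concatenate([ds, da])

rng = np.random.default_rng(0)
y0 = np.concatenate([rng.uniform(-0.1, 0.1, N), np.ones(M)])
sol = solve_ivp(rhs, (0.0, 50.0), y0, rtol=1e-8, atol=1e-10)

assignment = np.sign(sol.y[:N, -1])
print("assignment:", assignment)
print("violations:", clause_violation(assignment))
```

For an easy instance like this one the trajectory settles into a satisfying corner quickly and the violation functions go to zero; the auxiliary variables only grow while clauses remain unsatisfied, which is the "keeps searching until it finds a solution" mechanism described above.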
And this is also true for other types of very hard constraint problems, such as exact cover, because you can always transform them into 3-SAT as we discussed before, or Ramsey coloring; on these problems, even algorithms like survey propagation will fail. But this doesn't mean that P equals NP, because, first of all, if you were to implement these equations in a device whose behavior is described by these ODEs, then of course t, the continuous-time variable, becomes a physical wall-clock time, and that would be polynomially scaling; but you have other variables, the auxiliary variables, which fluctuate in an exponential manner. So if they represent currents or voltages in your realization, it would be an exponential-cost algorithm. It is some kind of trade between time and energy: I don't know how to generate time, but I know how to generate energy, so it could be useful. But there are other issues as well, especially if you're trying to do this on a digital machine, and other problems appear in physical devices too, as we'll discuss later.

If you implement this on a GPU, you can get an order of two magnitudes of speedup, and you can also modify it to solve Max-SAT problems quite efficiently; we are competitive with the best heuristic solvers on all the problems of the 2016 Max-SAT competition. So this is definitely a good approach, but there are of course interesting limitations, and I would say interesting because they make you think about what is needed and how you can use these observations to better understand analog continuous-time complexity. If you monitor the number of discrete steps taken by the Runge-Kutta integrator when you solve this on a digital machine, using the same approach but now measuring the fraction of problems you haven't solved within a given number of discrete steps, you find that you have exponential discrete-time complexity. And of course this is a problem. If you look closely at what happens, even though the analog mathematical trajectory, that's the red curve here, moves along, in discrete time the integrator advances very little, at the level of the third or fourth digit of precision, while its step size fluctuates like crazy. So the integration essentially freezes out, and this is because of the phenomenon of stiffness that I'll talk a little bit more about later. It may look like an integration issue on your digital machine that you could improve, and you definitely could improve it, but actually the issue is bigger, deeper than that, because on a digital machine there is no time-energy conversion: the auxiliary variables are efficiently represented in a digital machine, so there's no exponentially fluctuating current or voltage in your computer when you do this. So if P is not equal to NP, then the exponential time complexity or exponential cost complexity has to hit you somewhere, and this is how. One would be tempted to think that maybe this wouldn't be an issue in an analog device, and to some extent that's true: analog devices can be orders of magnitude faster, but they also suffer from their own problems, because P not equal to NP affects that class of solvers as well.
So indeed, if you look at other systems, like Coherent Ising Machines with measurement feedback, or polariton condensate graphs, or oscillator networks, they all hinge on some kind of ability to control real variables with arbitrarily high precision. In oscillator networks you want to read out arbitrarily close frequencies; in the case of CIMs, we require identical analog amplitudes, which is hard to maintain, as they fluctuate and shift away from one another. And if you can control that, of course, then you can control the performance. So one can ask whether or not this is a universal bottleneck, and it seems so, as I will argue next.

We can recall a fundamental result by Arnold Schönhage from 1978, a purely computer-science proof, which says that if you are able to compute the addition, multiplication and division of real variables with infinite precision, then you could solve NP-complete problems in polynomial time. He doesn't actually propose a solver; he just shows mathematically that this would be the case. Now, of course, in the real world you have loss of precision, so the next question is: how does that affect the computation of these problems? This is what we are after. Loss of precision means information loss, or entropy production, so what we are really looking at is the relationship between the hardness and the cost of computing a problem. And according to Sean Harget, there is this left branch, which in principle could be polynomial time, but the question is whether or not this is achievable. That is not achievable, but something more achievable is on the right-hand side: there's always going to be some information loss, some entropy generation, that could keep you away from polynomial time. So this is what we'd like to understand. And the source of this information loss is not just noise, which, as I will argue, is present in any physical system, but it is also of algorithmic nature.

So that may seem a questionable approach, since Schönhage's result is purely theoretical and no actual solver is proposed. But we can ask, just theoretically, out of curiosity: would there in principle be such solvers? Because he's not proposing one. Would a solver with such properties, if you were to look mathematically and precisely at what it does, have the right properties? And I argue yes: I don't have a mathematical proof, but I have some arguments that this would be the case. And this is the case for our continuous-time solver: if you could compute its trajectories without loss of precision, then it would solve NP-complete problems in polynomial continuous time. Now, as a matter of fact, this is a bit more subtle, because time in all of these systems can be re-scaled however you want. So, as Bournez points out, you actually have to measure the length of the trajectory, which is an invariant, a property of the dynamical system, not of its parametrization. And we did that. My student Shubha Kharel did that, by first improving on the stiffness problem of the integrations, using implicit solvers and some smart tricks so that you actually stay closer to the true trajectory, and then using the same approach of measuring what fraction of problems you can solve. When we measure the length of the trajectory, we find that it scales polynomially with the problem size. So we have polynomial-length complexity.
That means that our solver is both poly-length and, as it is defined, also a poly-time analog solver. But if you look at it as a discrete algorithm, where you measure the discrete steps on a digital machine, it is an exponential solver, and the reason is all this stiffness. Every integrator has to digitize and truncate the equations, and what it has to do is keep the integration within the so-called stability domain for that scheme: you have to keep the product of the eigenvalues of the Jacobian and the step size within this region if you use explicit methods. But what happens is that for stiff problems some of the eigenvalues grow fast, and then you're forced to reduce delta t so that the product stays in this bounded domain, which means you are forced to take smaller and smaller time steps; you're freezing out the integration, and as I will show you, that's the case. Now you can move to implicit solvers, which is a nice trick; in this case your stability domain is actually on the outside. But what happens here is that some of the eigenvalues of the Jacobian, in this instance, start to move toward zero, and as they move toward zero they are going to enter this instability region, so your solver is going to try to keep them out, so it's going to increase delta t; but if you increase delta t, you increase the truncation errors, so you get randomized in the large search space. So it's really not going to work out.

Now, one can introduce a theory, or a language, to discuss analog computational complexity using the language of dynamical systems theory. I don't have time to go into this, but basically, for hard problems, you have a chaotic object, a chaotic saddle, somewhere in the middle of the search space, and that dictates how the dynamics happens, and the invariant properties of that saddle are what determine performance and many other things. An important measure that we find helpful in describing this analog complexity is the so-called Kolmogorov, or metric, entropy. Intuitively, it describes the rate at which the uncertainty contained in the insignificant digits of a trajectory, in the back, flows towards the significant ones, as you lose information because errors grow into larger errors at an exponential rate, because you have positive Lyapunov exponents. But this is an invariant property; it's a property of the set of trajectories, not of how you compute them, and it's really the intrinsic rate of accuracy loss of a dynamical system. As I said, in such a high-dimensional dynamical system you have positive and negative Lyapunov exponents, as many in total as the dimension of the space: the number of unstable manifold dimensions and the number of stable manifold dimensions. And there's an interesting and, I think, important equality, called the Pesin equality, that connects the information-theoretic rate of information loss with the geometric rate at which trajectories separate, minus the escape rate that I already talked about.
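As I read it, the "Pesin equality" invoked here is the open-system, escape-rate generalization of Pesin's identity, which ties together exactly the quantities the speaker lists:

\[
h_{\mathrm{KS}} = \sum_{\lambda_i > 0} \lambda_i \;-\; \kappa,
\]

where \(h_{\mathrm{KS}}\) is the Kolmogorov-Sinai (metric) entropy, the \(\lambda_i\) are the Lyapunov exponents on the chaotic saddle, and \(\kappa\) is the escape rate. For a closed system, \(\kappa = 0\) and this reduces to the familiar Pesin identity \(h_{\mathrm{KS}} = \sum_{\lambda_i > 0} \lambda_i\).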
Now, one can actually prove a simple theorem, sort of a back-of-the-envelope calculation. The idea here is that you know the largest rate at which closely started trajectories separate from one another. So you can say: that is fine, as long as my trajectory finds the solution before the trajectories separate too quickly. In that case I can have the hope that if I start from some region of phase space, several closely started trajectories will go into the same solution over time, and that gives this upper bound, this limit. And it really shows that it has to be an exponentially small number, but it depends on the N-dependence of the exponent right here, which combines the information loss rate and the solution-time performance. If this exponent has a large N-dependence, even a linear N-dependence, then you really have to start trajectories exponentially closer to one another in order to end up at the same solution. So this is sort of the direction you are going in, and this formulation is applicable to all deterministic dynamical systems. I think we can expand this further, because there is a way of getting the expression for the escape rate in terms of N, the number of variables, from cycle expansions, which I don't have time to talk about, but it's the kind of program you can try to pursue.

And this is it. The conclusions, I think, are self-explanatory. I think there is a lot of future in analog continuous-time computing. Such systems can be more efficient, by orders of magnitude, than digital ones in solving NP-hard problems, because, first of all, many of these systems lack the von Neumann bottleneck, there's parallelism involved, and you can also have a larger spectrum of continuous-time dynamical algorithms than discrete ones. But we also have to be mindful of what the possibilities and the limits are, and one important open question is: what are these limits? Is there some kind of no-go theorem that tells you that you can never perform better than this limit or that limit? I think that's the exciting part: to derive these limits and to get to an understanding of what's possible in this area. Thank you.
Nutanix .NEXT Morning Keynote Day1
Speaker 1: Ladies and gentlemen, our program will begin momentarily. Thank you.

(singing)

This presentation and the accompanying oral commentary may include forward-looking statements that are subject to risks, uncertainties and other factors beyond our control. Our actual results, performance or achievements may differ materially and adversely from those anticipated or implied by such statements because of various risk factors, including those detailed in our annual report on Form 10-K for the fiscal year ended July 31, 2017, filed with the SEC. Any future product or roadmap information presented is intended to outline general product direction and is not a commitment to deliver any functionality, and should not be used when making any purchasing decision.

(singing)

Ladies and gentlemen, please welcome Vice President, Corporate Marketing, Nutanix, Julie O'Brien.

Julie O'Brien: All right. How about those Nutanix .NEXT dancers, were they amazing or what? Did you see how I blended right in? You didn't even notice I was there. [French 00:07:23] to .NEXT 2017 Europe. We're so glad that you could make it today. We have such a great agenda for you. First off, do not miss tomorrow morning: we're going to share the outtakes video of the handclap video you just saw. Where are the customers, the partners, the Nutanix employees who starred in our handclap video? Please stand up, take a bow. You are not going to want to miss tomorrow morning, let me tell you. That is going to be truly entertaining, just like the next two days we have in store for you: a content-rich, highly interactive set of sessions throughout our agenda.

Wow, look around. It is amazing to see how many cloud builders we have with us today. Side by side, you're here with more than 2,200 people who have traveled from all corners of the globe, double the attendance from last year at our first .NEXT Conference in Europe. Now, perhaps some of you are here to learn the basics of hyperconverged infrastructure. Others of you might be here to build your enterprise cloud strategy. And maybe some of you are here just to network with the best and brightest in the industry, in this beautiful French Riviera setting. Well, wherever you are in your journey, you'll find customers just like you throughout all our sessions over the next two days, from Sligro to Schroders to Societe Generale. You'll hear from cloud builders sharing their best practices and their lessons learned, and how they're going all in with Nutanix for all of their workloads and applications, whether it's SAP or Splunk, Microsoft Exchange, unified communications, Cloud Foundry or Oracle. You'll also hear how customers just like you are saving millions of euros by moving from legacy hypervisors to Nutanix AHV. And you'll have a chance to pose some of your most challenging technical questions to the Nutanix experts that we have on hand: our Nutanix Technology Champions, our NPXs, our NPSs. Where are all the people out there with an N in front of their certification and an X, an R, an S, an E or a C at the end? Can you wave hello?
I'd also like to say thank you to our growing ecosystem of partners and sponsors who are here with us over the next two days. The companies that you meet here are the ones who are committed to driving innovation in the enterprise cloud. Over the next few days you can look forward to hearing from them and seeing some fantastic technology integration that you can take home to your data center come Monday morning. Together, with our partners, and you our customers, Nutanix has had such an exciting year since we were gathered this time last year. We were named a leader in the Gartner Magic Quadrant for integrated systems two years in a row. Just recently Gartner named us the revenue market share leader in their recent market analysis report on hyper-converged systems. We know enjoy more than 35% revenue share. Thanks to you, our customers, we received a net promoter score of more than 90 points. Not one, not two, not three, but four years in a row. A feat, I'm sure you'll agree, is not so easy to accomplish, so thank you for your trust and your partnership in us. We went public on NASDAQ last September. We've grown to more than 2,800 employees, more than 7,000 customers and 125 countries and in Europe and the Middle East alone, in our Q4 results, we added more than 250 customers just in [Amea 00:11:38] alone. That's about a third of all of our new customer additions. Today, we're at a pivotal point in our journey. We're just barely scratching the surface of something big and Goldman Sachs thinks so too. What you'll hear from us over the next two days is this: Nutanix is on it's way to building and becoming an iconic enterprise software company. By helping you transform your data center and your business with Enterprise Cloud Software that gives you the power of freedom of choice and flexibility in the hardware, the hypervisor and the cloud. The power of one click, one OS, any cloud. And now, to tell you more about the digital transformation that's possible in your business and your industry and share a little bit around the disruption that Nutanix has undergone and how we've continued to reinvent ourselves and maybe, if we're lucky, share a few hand clap dance moves, please welcome to stage Nutanix Founder, CEO and Chairman, Dheeraj Pandey. Ready? Alright, take it away [inaudible 00:13:06]. >> Dheeraj P: Thank you. Thank you, Julie and thank you every one. It looks like people are still trickling. Welcome to Acropolis. I just hope that we can move your applications to Acropolis faster than we've been able to move people into this room, actually. (laughs) But thank you, ladies and gentlemen. Thank you to our customers, to our partners, to our employees, to our sponsors, to our board members, to our performers, to everybody for their precious time. 'Cause that's the most precious thing you actually have, is time. I want to spend a little bit of time today, not a whole lot of time, but a little bit of time talking about the why of Nutanix. Like why do we exist? Why have we survived? Why will we continue to survive and thrive? And it's simpler than an NQ or category name, the word hyper-convergence, I think we are all complicated. Just thinking about what is it that we need to talk about today that really makes it relevant, that makes you take back something from this conference. That Nutanix is an obvious innovation, it's very obvious what we do is not very complicated. 
Because the more things change, the more they remain the same. So can we draw some parallels from life, from what's going on around us in our own personal lives, that make this whole thing very natural, as opposed to "Oh, it's hyperconverged, it's a category, it's analysts and pundits and media"? I actually think it's something new, but it's not that different, so I want to start with some of that today.

If you look at our personal lives, everything that we had has been digitized. If anything, a lot of these gadgets became apps; they got digitized into the phone itself. And what's Nutanix? What have we done in the last seven, eight years? We digitized a lot of hardware. We made everything that used to be single-purpose hardware look like pure software. We digitized storage, we digitized the systems manager role, the operations manager role. We are digitizing scripts: people don't need to write scripts anymore when they automate, because we can visually design automation with Calm. And we're also trying to make the case that the cloud itself is not just a physical destination, that it can be digitized and must be digitized as well. So we learn that from our personal lives too, and it goes on. Look at music: it used to be tons of physical things; if you used to go to [inaudible 00:15:55] Records, and I'm sure there were European versions of [inaudible 00:15:57] Records as well, the physical things around us then got digitized as well. And it goes on and on. Look at entertainment, it's very similar: the idea that you go to a movie hall, the idea that you buy these tickets, the idea that we'd have these DVD players and DVDs; they all got digitized, or, as [inaudible 00:16:20] would want to call it, virtualized, actually. The same is basically happening in pretty much everything, including things that we never thought would look this different. One of the most exciting things happening around us is the car industry. It's getting digitized faster than we know, and in many ways that we'd not even imagined 10 years ago. The driver will get digitized: autonomous cars. The engine is definitely gone; it's a different kind of an engine. In fact, we'll re-skill a lot of automotive engineers who used to work on mechanical things to look at chemical things like battery technologies and so on. A lot of those things that used to be physical are now in software in the car itself. Media itself got digitized: think about a physical newspaper, or physical ads in newspapers; now we talk about virtual ads, digital ads, all over websites and so on, a digital experience now. Education is no different: look back at the kind of things we used to do physically, with physical things; they're now all digital. The experience has become that digital. And I can go on and on. You look at retail, you look at healthcare, you look at a lot of these industries; they are all at the cusp of a digital disruption. And in fact, if you look at the data, everybody wants it. We all want a digital transformation for the industries and companies around us. In fact, the whole idea of a cloud is a highly digitized data center. It's not just about digitizing servers and storage and networks and security; it's about virtualizing, digitizing, the entire data center itself. That's what cloud is all about. So we all know that it's a very natural phenomenon, because it's happening around us, and that's the obviousness of Nutanix, actually. Why is it actually a good thing?
Because obviously anything that we digitize, when we work in the digital world, brings 10X more productivity and decision-making efficiencies as well. And there are challenges, obviously there are challenges, but before I talk about the challenges of digitization, think about why things are moving this fast. Why are things becoming digitally disrupted quicker than we ever imagined? There are some reasons for it. One of the big reasons is obviously we all know about Moore's Law. The fact that a lot of hardware's been commoditized, and we have really miniaturized hardware. Nutanix today runs on a palm-sized server. Obviously it runs on the other end of the spectrum with high-end IBM Power systems, but it also runs on palm-sized servers. Moore's Law has made a tremendous difference in the way we actually think about consuming software itself. Of course, the internet is also a big part of this. The fact that there's a bandwidth glut, there's Trans-Pacific cables and Trans-Atlantic cables and so on, has really connected us a lot faster than we ever imagined, actually, and a lot of this was also the telecom revolution of the '90s, where we really produced a ton of glut for the internet itself. There's obviously a more subtle reason as well, because software development is democratizing. There are consumer-grade programming languages that we never imagined 10, 15, 20 years ago, and that's making it so much faster to write code, with this crowdsourcing that never existed before with GitHubs and things like that, open source. There's a lot more stuff happening outside the boundary of a corporation itself, which is making things so much faster in terms of getting disrupted and writing things at 10x the speed it used to be 20 years ago. There is obviously this technology at the tip of our fingers, and we all want it in our mobile experience, while we're driving, while we're in a coffee shop, and so on; and there's a tremendous focus on design, on consumer-grade simplicity, that's making digital disruption that much more compressed. In some sense this whole cycle of creative disruption that we talk about is compressed because of mobility, because of design, because of APIs, the fact that machines are talking to machines, developers are talking to developers. We are miniaturizing the experience of organizations because we talk about micro-services and small two-pizza teams, and they all want to talk to each other using APIs and so on. Massive influence on this digital disruption itself. Of course, one of the reasons why this is also happening is because we want it faster, we want to consume it faster than ever before. And our attention spans are reducing. I like the fact that not many people are watching their cell phones right now, but you can imagine the multi-tasking mode that we are all in today in our lives makes us want to consume things at a faster pace, which is one of the big drivers of digital disruption. But most importantly, and this is a very dear slide to me, a lot of this is happening because of infrastructure. And I can't overemphasize the importance of infrastructure. If you look at why Google succeeded, it was the ninth search engine, after eight of them before, and if you take a step back at why Facebook succeeded over MySpace and so on, a big reason was infrastructure.
They believed in scale, they believed in low latency, they believed in being able to crunch information at 10x, 100x bigger scale than anyone else before. Even in our geopolitical lives, look at why China is succeeding. Because they've made infrastructure seamless. They've basically said, look, governance is about making infrastructure seamless and invisible, and then letting the businesses flourish. So for all you CIOs out there who actually believe in governance, you have to think about: what's my first role? What's my primary responsibility? It's to provide such a seamless infrastructure that lines of business can flourish with their applications, with their developers who can write code 10x faster than ever before. And a lot of these tenets of infrastructure: the fact of the matter is you need to have this always-on philosophy. The fact that it's a breach-safe culture. Or the fact that operating systems are hardware agnostic. A lot of these tenets basically embody what Nutanix really stands for. And that's the core of what we really have achieved in the last eight years and want to achieve in the coming five to ten years as well. There's a nuance, though. Obviously we talk about digital, we talk about cloud, we talk about everything actually going to the cloud and so on. What are the things that could slow us down? What are the things that challenge us today? Which is the reason for Nutanix. Again, I go back to this very important point: the reason why we think enterprise cloud is a nuanced term is because the word "cloud" itself doesn't solve for a lot of the problems. The public cloud itself doesn't solve for a lot of the problems. One of the big ones, and obviously we face it here in Europe as well, is laws of the land. We have bureaucracy, which we need to deal with and respect; we have data sovereignty and computing sovereignty needs that we need to actually fulfill as well, while we think about going at breakneck speed in terms of disrupting our competitors and so on. So there's laws of the land, there's laws of physics. This is probably one of the big ones for what the architecture of cloud will look like over the coming five to ten years. Our take is that cloud will need to be more dispersed than anyone has ever imagined, because computing has to be local to business operations. Computing has to be in hospitals and factories and shop floors and power plants and on and on and on. That's where you really can have operations and computing co-exist together, because speed is important there as well. Data locality is one of our favorite things; the fact that computing and data have to be local, at least the most relevant data has to be local as well. And the fact that electrons travel way faster when it's actually local, versus when you have to have them go over a Wide Area Network, is one of the big reasons why we think that the cloud will actually be more nuanced than just some large data centers. You need to disperse them, you need to actually think about software (cloud is about software), where the data plane itself could be dispersed and even miniaturized in small factories and shop floors and hospitals, but the control plane of the cloud is centralized. And that's the way you can have the best of both worlds; the control plane is centralized. You think as if you're managing one massive data center, but it's not, because you're really managing hundreds or thousands of these sites.
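The locality point can be made concrete with a rough back-of-the-envelope calculation (mine, not from the talk): even at the speed of light in fiber, round-trip time grows with distance before any queuing or processing delay is added.

```python
# Rough, illustrative numbers only: propagation delay in optical fiber,
# ignoring routing, queuing, and protocol overhead.
SPEED_OF_LIGHT_IN_FIBER_KM_S = 200_000  # roughly 2/3 of c in vacuum

def min_round_trip_ms(distance_km: float) -> float:
    """Lower bound on round-trip time (milliseconds) over fiber."""
    return 2 * distance_km / SPEED_OF_LIGHT_IN_FIBER_KM_S * 1000

for label, km in [("same site", 0.1), ("metro DR site", 100),
                  ("regional cloud", 1_000), ("trans-Atlantic", 6_000)]:
    print(f"{label:>15}: >= {min_round_trip_ms(km):6.2f} ms per round trip")
```

The distances are assumptions chosen only to show the order-of-magnitude gap between staying local and crossing a wide-area link.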
Especially if you think about edge-based computing and IoT, where you really have your tentacles in tens of thousands of smaller devices and so on. We've talked about laws of the land, which are going to really make this digital transformation nuanced; laws of physics; and the third one, which is really laws of entropy. These are hackers that do this for adrenaline. These are parochial rogue states. These are parochial geo-politicians; you know, good thing I actually left the torture sign there, because apparently for our creative designer, geo-politics is equal to torture as well. So imagine: one bad tweet can actually result in big changes to the way we actually live in this world today. And it's important. Geo-politics itself is digitized to a point where you don't need a ton of media people to go and talk about your principles and what you stand for and what your strategy for running a country is, and so on. And these are all human reasons, political reasons, bureaucratic reasons, compliance and regulation reasons; and of course, laws of physics is yet another one. So laws of physics, laws of the land, and laws of entropy really make us take a step back and say, "What does cloud really mean, then?" Because obviously we want to digitize everything, and it all should appear like it's invisible, but then you have to nuance it for the Global 5000, the Global 10000. There are lots of companies out there that need to really think about GDPR and Brexit and a lot of the things that you all deal with on an everyday basis, actually. And that's what Nutanix is all about: balancing what we think is all about technology with things that are more real and practical, to deal with, grapple with these laws of the land and laws of physics and laws of entropy. And that's where we believe we need to go and balance the private and the public. That's the architecture, that's the why of Nutanix. To be able to really think about frictionless control. You want things to be frictionless, but you also realize that you are a responsible citizen of this continent, of your countries, and you need to actually do governance of things around you, which is computing governance, and data governance, and so on. So this idea of melding the public and the private is really about melding control and frictionless together. I know these are paradoxical things to talk about, like how do you really have frictionless control, but that's the life you all lead, and as leaders we have to think about this series of paradoxes. And that's what the Nutanix strategy, the roadmap, the definition of enterprise cloud is really thinking about: frictionless control. And in fact, one of the things that is also very interesting: think about what's disrupting Nutanix as a company. We will be getting disrupted along the way as well. It's this idea of true invisibility, the public cloud itself. I'd like to actually bring on board somebody who I have a ton of respect for, the leader of a massive company, which itself is undergoing disruption, which is helping a lot of its customers undergo disruption as well, and which is thinking about how the life of a business analyst is getting digitized. And what about the laws of the land, the laws of physics, and laws of entropy, and so on? We're learning a lot from this partner, a massively giant company, called IBM. So without further ado, Bob Picciano. >> Bob Picciano: Thanks. >> Speaker 1: Thank you so much, Bob, for being here.
I really appreciate your presence here- >> Bob Picciano: My pleasure! >> Speaker 1: And for those of you who actually don't know Bob, Bob is a Senior VP and General Manager at IBM, and is all things cognitive. Obviously, I learn a lot from a lot of leaders that have spent decades really looking at digital disruption. >> Bob: Did you just call me old? >> Speaker 1: No. (laughing) I want to talk about experience and talk about the meaning of history, because I love history, actually, you know, and I don't want to make you look old, actually; you're too young right now. When we talk about digital disruption, we look at ourselves and say, "Look, we are invisible, but we are not extremely invisible; we have not made something as invisible as the public clouds themselves." And hence the question: what does digital disruption mean for IBM itself? Now, obviously a lot of hardware is being digitized into software and cloud services. >> Bob: Yep. >> Speaker 1: What does it mean for IBM itself? >> Bob: Yeah, if you allow me to take a step back for a moment, I think there is some good foundational understanding that'll come from a particular point of view. And you talked about it with the number of these dimensions that are affecting the way businesses need to consider their competitiveness, how they offer their capabilities into the marketplace. And as you reflected upon IBM, you know, we've had decades of involvement in information technology. And there's a big disruption going on in the information technology space. But it's what I call an accretive disruption. It's a disruption that can add value. If you were to take a step back and look at that digital trajectory at IBM, you'd see our involvement with information technology in a space where it was all oriented around adding value and capability to how organizations managed processes at scale. Thinking about the way they were going to represent their businesses in a digital form. We came to call them applications. But it was how do you open an account, how do you process a claim, how do you transfer money, how do you hire an employee? All the policies of a company, the way the people used to do it mechanically, became digital representations. And that foundation of the digital business process is something that IBM helped define. We invented the role of the CIO to help really sponsor and usher in this notion that businesses could re-represent themselves in a digital way, and that allowed them to scale predictably with the qualities of their brand, from local operations, to regional operations, to international operations, and show up the same way. And that added a lot of value to business for many decades. And we thrived. Many companies, SAP among them, all thrived during that span. But now we're in a new space where the value of information technology is hitting a new inflection point. Which is not about how you scale process, but how you scale insight, and how you scale wisdom, and how you scale knowledge and learning from those operational systems and the data that's in those operational systems. >> Speaker 1: How is it different from 1993? We're talking about disruption. There was a time when IBM reinvented itself, 20-25 years ago. >> Bob: Right. >> Speaker 1: And you said it's bigger than 25 years ago. Tell us more. >> Bob: You know, it gets down...
Everything we know about that process space, right down to the very foundation, the very architecture of the CPU itself and the computer architecture, the von Neumann architecture, was all optimized around those relatively static, scaled business processes. When you move into the notion where you're going to scale insight, scale knowledge, you enter the era that we call the cognitive era, or the era of intelligence. The algorithms are very different. You know, the data semantically doesn't integrate well across those traditional process-based pools of information. So new capabilities like deep learning, machine learning, the whole field of artificial intelligence, allow us to reach into that data. Much of it unstructured, much of it dark, because it hasn't been indexed and brought into the space where it is directly affecting decision-making processes in a business. And you have to be able to apply that capability to those business processes. You have to rethink the computer, the circuitry itself. You have to think about how the infrastructure is designed and organized, the network that is required to do that; the experience of the applications, as you talked about, has to be very natural, very engaging. So IBM does all of those things. So as a function of the transformation that we're on now, we've had to reach back, all the way back, from rethinking the CPU and what we dedicate our time and attention to, to our services organization, which is over 130,000 people on the consulting side helping organizations add digital intelligence to this notion of a digital business. Because the two things are really a confluence of what will make this vision successful. >> Speaker 1: It sounds like massive amounts of change for half a million people who work at the company. >> Bob: That's right. >> Speaker 1: I'm sure there are a lot of large customers out here who will also read into this and say, "If IBM feels disrupted..." >> Bob: Uh hm. >> Speaker 1: "...how can we actually stay not vulnerable?" Actually, there are massive amounts of change around their own competitive landscape as well. >> Bob: Look, I think every company should feel vulnerable, right? If you're in this age, this cognitive era, the age of digital intelligence, and you're not making a move into being able to exploit the capabilities of cognition in the business process, you are vulnerable. If you're at that intersection, and your competitor is passing through it, and you're not taking action to be able to deploy cognitive infrastructure in conjunction with the business processes, you're going to have a hard time keeping up, because it's about using the machines to do the training to augment the intelligence of our employees, of our professionals. Whether that's a lawyer, or a doctor, an educator, or whether that's somebody in a business function who's trying to make a critical business decision about risk or about opportunity. >> Speaker 1: Interesting, very interesting. You used the phrase cognitive infrastructure. >> Bob: Uh hm. >> Speaker 1: There's obviously compute infrastructure, data infrastructure, storage infrastructure, network infrastructure, security infrastructure, and the core of cognition has to be infrastructure as well. >> Bob: Right. >> Speaker 1: Which is one of the two things that the two companies are working together on. Tell us more about the collaboration that we are actually doing.
>> Bob: We are so excited about our opportunity to add value in this space, so we do think very differently about the cognitive infrastructure that's required for this next generation of computing. You know, I mentioned the original CPU was built for very deterministic, very finite operations; large-precision floating point capabilities to be able to accurately calculate the exact balance, the exact amount of a transfer. When you're working in the field of AI and cognition, you actually want variable precision. Right? The data is very sparse, as opposed to the way that deterministic or stochastic operations work, which is very dense or very structured. So the algorithms are redefining the processes that the circuitry actually has to run. About five years ago, we dedicated a huge effort to rethink everything about the chip and what we made, to facilitate an orchestra of participation to solve that problem. We all know the GPU has a great benefit for deep learning. But the GPU in many cases, in many architectures, specifically Intel architectures, is dramatically confined by a very small amount of IO bandwidth that Intel allows to go on and off the chip. At IBM, we looked at all of the roughly 686 square millimeters of our chip and said, how do we reuse that square area to open up that IO bandwidth? So the innovation of a GPU or an FPGA could really be utilized to its maximum extent. And we could be an orchestrator of all of the diverse compute that's going to be necessary for AI to really compel these new capabilities. >> Speaker 1: It's interesting that you mention the fact that, you know, Power chips have been redefined for the cognitive era. >> Bob: Right, for Linux for the cognitive era. >> Speaker 1: Exactly, and now the question is how do you make it simple to use as well? How do you bring simplicity, which is where ... >> Bob: That's why we're so thrilled with our partnership. Because you talked about the why of Nutanix. And it really is about that empowerment. Doing what's natural. You talked about the benefits of Calm and being able to really create that liberation of an information technology professional, whether it's in operations or in development. Having the freedom of action to make good decisions about defining the infrastructure and deploying that infrastructure, and not having to second-guess the physical limitations of what they're going to have to be dealing with. >> Speaker 1: That's why I feel really excited about the fact that you have the power of software to really meld the two farms together. The Intel farm and the Power farm come together. And we have some interesting use cases that our CIO Randy Phiffer is also really exploring, like how a Power farm can serve as a storage farm for our Intel farm. >> Bob: Sure. >> Speaker 1: It can serve files and blocks and things like that. >> Bob: Any data-intensive application where we have seen massive growth in our Linux business. Now, for our business, Linux is 20% of the revenue of our Power Systems. You know, we started enabling native little-endian Linux distributions on top of the Power capabilities just a few years ago, and it's rocketed. And the reason for that is, for any data-intensive application like a database, a NoSQL database or a structured database, Hadoop in the unstructured space, they typically run at about three to four times better price performance on top of Linux on Power than they will on top of an Intel alternative. >> Speaker 1: Fascinating.
>> Bob: So all of these applications that we're talking about either create or consume a lot of data, have to manage a lot of flexibility in that space, and Power is a tremendous architecture for that. And you mentioned also the cohabitation, if you will, between Intel and Power. What we want is that optionality: for you to utilize the benefits of the 3X better price performance where they apply, and utilize the commodity base where it applies. So you get the cost benefits in that space, and the depth and capability in the space for Power. >> Speaker 1: Your tongue-in-cheek remark about commodity Intel is not lost on people, actually. But tell us about ... Obviously we digitized Linux 10, 15 years ago with [inaudible 00:40:07]. Have you thought about digitizing AIX? That is the core of IBM's business for the last 20, 25, 30 years. >> Bob: Again, it's about this ability to complement and extend the investments that businesses have made during their previous generations of decision making. This industry loves to talk about shifts. We talked about this earlier. That was old, this is new. That was hard, this is easy. It's not about shift, it's about using the inflection point, the new capability, to extend what you already have to make it better. And that's one thing that I must compliment you, and the entire Nutanix organization, on. It's really empowering those applications as a catalog to be deployed, managed, and integrated in a new way, and to have seamless interoperability into the cloud. We see the AIX workload having that same benefit for those businesses. And there are many, many tens of thousands around the world that are critically dependent on every element of their daily operations and productivity on that operating platform. But to introduce that into that network effect as well. >> Speaker 1: Yeah. I think we're looking forward to how we bring the same cloud experience to AIX as well, because as a company it keeps us honest when we don't scoff at legacy. We look at these applications from the last 10, 15, 20 years and say, "Can we bring them into the new world as well?" >> Bob: Right. >> Speaker 1: That's what design is all about. >> Bob: Right. >> Speaker 1: That's what Apple did with music. We'll take an old-world thing and make it really new-world. >> Bob: Right. >> Speaker 1: The way we consume things. >> Bob: That governance. The capability to help protect against the bad actors, the nefarious entropy players, if you will. That's what it's all about. That's really what it takes to do this for the enterprise. It's okay, and possibly easier, to do it in smaller islands of containment, but when you think about bringing these classes of capabilities into an enterprise, and really helping an organization drive both the flexibility and empowerment benefits of that, but really be able to depend upon it for international operations, you need that level of support. You need that level of capability. >> Speaker 1: Awesome. Thank you so much, Bob. Really appreciate you coming. [crosstalk 00:42:14] Look forward to your [crosstalk 00:42:14]. >> Bob: Cheers. Thank you. >> Speaker 1: Thanks again to all of you. I know that people are sitting all the way up there as well, which is remarkable. I hope you can actually see some of the things that Sunil and the team will bring out and talk about: live demos. We do real stuff here, which is truly live.
I think one of the requests that I have is: help us help you navigate the digital disruption that's upon you and the competitive landscape that's around you that's really creating that disruption. Thank you again for being here, and welcome again to Acropolis. >> Speaker 3: Ladies and gentlemen, please welcome Chief Product and Development Officer, Nutanix, Sunil Potti. >> Sunil Potti: Okay, so I'm going to just jump right in, because I know a bunch of you guys are here to see the product as well. We have a lot of demos lined up for you guys, and we'll try to mix in the slides and the demos as well. Here's just an example of the things I always bring up at these conferences: to look around and say, in the last few months, are we making progress in simplifying infrastructure? You guys have heard this again and again, this has been our mantra from the beginning: that the hotter things get, the more differentiated a company like Nutanix can be, if we can make things simple, or keep things simple. Even though I like this a lot, we found something a little bit more interesting, I thought, from our European marketing team. If you guys need these tea bags, which you will need pretty soon. It's a new tagline for the company... not really. I thought it was apropos. But before I get into the product and the demos, to give you an idea: every time I go to an event, you find ways to memorialize the event. You meet people, you build relationships, you see something new. Last night, nothing to do with the product, I sat beside someone. It was a customer event. I had no idea who I was sitting beside. He was a speaker. How many of you guys know him, by the way? Sir Ranulph Fiennes. Few hands. Good for you. I had no idea who I was sitting beside. I said, "Oh, somebody called Sir. I should be respectful." It's kind of hard for me to be respectful, but I tried. He says, "No, I didn't do anything in that sense. My grandfather was knighted about 100 years ago because he was the governor of Antigua. And when he dies, his son becomes one." And apparently Sir Ranulph's dad also died in the war, and so that's how he is a sir. But then I started looking him up, because he was obviously getting ready to present. And the background for him is, in my opinion, even though the term goes that he's the World's Greatest Living Explorer, I would have actually called him the World's Number One Stag, and I'll tell you why. Really, you should go look it up. So this guy, at the age of 21, gets admitted to the Special Forces. If you're from the UK, this is as good as it gets: SAS. Six, seven years into it, he rebels, helps out his local partner because he doesn't like a movie company that's building a dam inside this pretty village. And he goes and blows up the dam, and he's thrown out of the Special Forces. Obviously he was in demolitions. Goes all the way. This is the '60s, by the way. Remember, he's 74 right now. In the '60s he goes to Oman, all by himself, as the only guy, the only white guy there. And then around the '70s, he starts truly exploring, truly exploring. And this is where he becomes really, really famous. You have to see this in real life, or see these videos, to really appreciate the impact of this guy. All by himself, he's gone across the world. He's actually gone across Antarctica. Now, he tells me that Antarctica is the size of China and India put together, and he was prepared for -50 to -60 degrees, and obviously he got -130 degrees. Again, you have to see the videos, see his frostbite. Two of his fingers are cut off, by the way.
He hacksawed them off himself. True story. And then as he, obviously, aged, his body couldn't keep up with him, but his will kept up with him. So after a recent heart attack, he actually ran seven marathons. But most importantly, he was telling me this story: at 65 he wanted to do something different because his body was letting him down. He said, "Let me do something easy." So he climbed Mount Everest. My point being, what does this have to do with Nutanix? It's that if Nutanix, as a company, with our technology, allows you to spend more time on life, then we've accomplished a piece of our vision. So keep that in mind. Keep that in mind. Now comes the boring part, which is the product. The why, what, how of Nutanix. Dheeraj talked about this. We have two acts in this company. Invisible Infrastructure was what we started with. You heard us talk about it. How did we do it? Using one-click technologies, by converging infrastructure: compute, storage, virtualization, et cetera, et cetera. What we are now about is changing the game. Saying that just like we replicated what powers Google and Amazon inside the data center, could we now make them all invisible? Whether it be inside or outside, could we now make clouds invisible? Clouds can be made invisible by a new level of convergence, not about compute and storage, but converging public and private, converging CAPEX and OPEX, converging consumption models. And there, beyond our core products, Acropolis and Prism, are these new products. As you know, we have this core thesis, right? The core thesis says what? Predictable workloads will stay inside the data center, elastic workloads will go outside, as long as the experience on both sides is the same. So if you can genuinely have a cloud-like experience delivered inside a data center, then that's the right answer for predictable workloads. It's absolutely the answer for elastic workloads too, no matter the security or compliance concerns: eventually a public cloud will have a data center right beside your region, whether through a local partner or a top-three cloud partner, and you should use it as your public cloud of choice. And so, our goal is to ensure that those two worlds are converged. And that's what Calm does, and we'll talk about that. But at the same time, what we found in late 2015 was that we had a bunch of customers come to us and say, "Look, I love this, I love the fact that you're going to converge public and private and all that good stuff. But I have these environments and these apps that I want to be delivered as a service, but I want the same operational tooling. I don't want to have two different environments, but I don't want to manage my data centers. Especially my secondary data centers, DR data centers." And that's why we created Xi, right? And you'll hear a lot more about this; obviously it's going to start off in the U.S., but very rapidly launch in Europe and APJ globally in the next 9-12 months. And so we'll spend some quality time on those products as well today. So, on the journey that we're on, we're starting with the core cloud, which essentially says, "Look, your public and private need to be the same." We call that the first instantiation of your cloud architecture, and we essentially, as a company, want to build this enterprise cloud operating system as a fabric across public and private. But that's just the starting point.
The starting point evolves to the core architecture that we believe in: that the cloud is being dispersed. Just like you have a public and a private cloud in the core data centers and so forth, you'll need a similar experience inside your remote office/branch office, inside your DR data centers, inside your branches, and it won't stop there. It'll go all the way to the edge. And we're already seeing this, right? Not just in the army, where forward operating bases in Afghanistan have a three-node cluster sitting inside a tent, but we're seeing this in a variety of enterprise scenarios. And here's an example. So, here's a customer, a global oil and gas company. It has a couple of primary data centers running Nutanix, uses GCP as its core public cloud platform, has a whole bunch of remote offices, but it also has these interesting new edge locations in the form of these small, medium, and large-size rigs. And today, they're in the process of building a next-generation cloud architecture that's completely dispersed. They're using one node, coming out in version 5.5 with Nutanix. They're going to use two nodes, they're going to throw in three nodes, multi-cluster architectures. Day one, they're going to centrally manage it using Prism, with one-click upgrades, right? And then on top of that, they're also now provisioning, using Calm, purpose-built apps for the various locations. So, for example, there will be a rig control app at the edge, there's an exploration data lake in Google, and so forth. My point being that increasingly this architecture that we're talking about is happening in real time. It's no longer just an existing server virtualization data center that's being replatformed to look like a private cloud and so forth, or a hybrid cloud. The fact that you're going into this multi-cloud era is getting accelerated; the more someone consumes AWS, GCP or any public cloud, the more they're accelerating their internal transformation to this multi-cloud architecture. And so that's what we're going to talk about today: this construct of ONE OS and ONE Click. And when you think about it, every company has a standard stack. So, this is the only slide you're going to see from me today that's a stack, okay? And if you look at the new release coming out, version 5.5, it's coming out imminently; the easiest way to say it is that it's got a ton of functionality. We've jammed as much as we can onto one slide and then built the product, basically, okay? But I would encourage you guys to check out the release, it's coming out shortly. And we could go into each and every feature here, and we'd be spending a lot of time, but the way that we look at building Nutanix products, as many of you know, is not feature at a time. It's experience at a time. And so, when you really look at Nutanix using a lateral view, and that's how we approach problems with our customers and partners, we think about it as a life cycle, all the way from learning to using, operating, and then getting support and experiences. And today, we're going to go through each of these stages with you. And who better to talk about it than our local version of an architect, Steven Poitras. Please come up on stage. I don't know where you are, Steven, come on up. You tucked your shirt in? >> Steven: Just for you guys today. >> Sunil: Okay. Alright. He's sort of putting on his weight. I know you used a couple of tight buckles there. But, okay, so Steven, I know we're looking for the demo here.
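A minimal, invented sketch of that "dispersed data plane, centralized control plane" pattern may help make it concrete; the `Site` and `ControlPlane` names below are made up for illustration and are not Prism's actual API.

```python
# Hypothetical sketch: edge sites of different sizes register with one central
# management plane, which can then drive an operation (e.g. an upgrade) across
# all of them. Purely illustrative; not the product's real interface.

from dataclasses import dataclass

@dataclass
class Site:
    name: str
    nodes: int          # 1-, 2-, or 3-node edge clusters, larger in core DCs
    version: str

class ControlPlane:
    def __init__(self):
        self.sites: list[Site] = []

    def register(self, site: Site) -> None:
        self.sites.append(site)

    def one_click_upgrade(self, target_version: str) -> None:
        for site in self.sites:
            if site.version != target_version:
                print(f"Upgrading {site.name} ({site.nodes} nodes) "
                      f"{site.version} -> {target_version}")
                site.version = target_version

cp = ControlPlane()
for s in [Site("core-dc-1", 16, "5.1"), Site("rig-north-sea", 1, "5.1"),
          Site("rig-gulf", 3, "5.1")]:
    cp.register(s)
cp.one_click_upgrade("5.5")
```

The point of the sketch is only the shape of the idea: many small data planes, one place to operate them from.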
So, what we're going to do is, the first step, most of you guys know this: we've been quite successful with CE, it's been a great product. How many of you guys like CE? Come on. Alright. I know you had a hard time downloading it yesterday, apparently; a bunch of guys had a hard time downloading it. But it's been a great way for us not just to get you guys to experience it, with more than 25,000 downloads and so forth, but it's also a great way for us to see new features like IEME and so forth. So, keep an eye on CE, because we're going to, if anything, explode the way that we actually use it as a way to get new features out in the next 12 months. Now, one thing beyond CE that we did, and this was something that we did about ... it took us about 12 months to get it out. While people were using CE to learn a lot, a lot of customers were actually getting into full-blown competitive evals, right? Especially with HCI being so popular and so forth. So, we came up with our own version, called X-Ray. >> Steven: Yup. >> Sunil: What does X-Ray do, before we show it? >> Steven: Yeah. Absolutely. So, if we think about back in the day, we were really the only HCI platform out there on the market. Now there are a few others. So, to basically enable the customer to objectively test these, we came out with X-Ray. And rather than talking about the slide, let's go ahead and take a look. Okay, I think it's ready. Perfect. So, here's our X-Ray user interface. And essentially what you do is you specify your targets. So, in this case we have a Nutanix 80150 as well as some of our competitors' products which we've actually tested. Now, on the left-hand side here we see a series of tests. So, what we do is we go through and specify certain workloads, like OLTP workloads or database colocation, and while we do that we actually inject certain test cases or scenarios. So, this can be snapshots or component failures. Now, one of the key things is having the ability to test these against each other. So, what we see here is we're actually taking an OLTP workload where we're running two virtual machines, and then we can see the IOPS the OLTP VMs are actually delivering here on the left-hand side. Now, as we go through this test we perform a series of snapshots, which are identified by these red lines here. And as you can see, the Nutanix platform, which is shown by this blue line, stays consistent as we go through this test. However, our competitor's product actually degrades in performance over time as these snapshots are taken. >> Sunil: Gotcha. And some of these tests, by the way, are not just about failure or benchmarking, right? It's a variety of tests that we have that mimic real-life production workloads. So, every couple of months we actually look at our production workloads out there, distill those into test cases, and put them into X-Ray. So, X-Ray is one of those that has been more recently announced to the public, but it's already gotten a lot of uptake. I would strongly encourage you, even if you're an existing Nutanix customer: it's a great way to keep us honest, it's a great way for you to actually expand your usage of Nutanix by putting a lot of these real-life tests into production, and as and when you look at new alternatives as well, there'll be certain situations that we don't do as well, and that's a great way to give us feedback on it.
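To give a feel for what such a scenario means, here is a small, invented sketch (not X-Ray's actual scenario format; the `Workload` and `run_scenario` names are made up): a steady OLTP workload is measured over time while snapshot events are injected, and the resulting per-interval IOPS series is what you would compare across platforms.

```python
from dataclasses import dataclass
import random

# Hypothetical illustration of an X-Ray-style comparison: measure IOPS over
# time while injecting snapshot events, then compare the series per platform.

@dataclass
class Workload:
    name: str
    baseline_iops: int
    snapshot_penalty: float  # fraction of IOPS lost per outstanding snapshot

def run_scenario(workload: Workload, duration_s: int, snapshot_every_s: int) -> list[float]:
    series, snapshots = [], 0
    for t in range(duration_s):
        if t and t % snapshot_every_s == 0:
            snapshots += 1                      # inject a snapshot event
        degradation = workload.snapshot_penalty * snapshots
        noise = random.uniform(-0.02, 0.02)     # measurement jitter
        series.append(workload.baseline_iops * max(0.0, 1 - degradation + noise))
    return series

platform_a = Workload("consistent-platform", 40_000, snapshot_penalty=0.0)
platform_b = Workload("degrading-platform", 40_000, snapshot_penalty=0.05)
for p in (platform_a, platform_b):
    iops = run_scenario(p, duration_s=600, snapshot_every_s=120)
    print(f"{p.name}: start={iops[0]:.0f} IOPS, end={iops[-1]:.0f} IOPS")
```

The numbers are assumptions; the takeaway is the shape of the curve before and after each injected event, which is exactly what the demo's red snapshot lines and blue IOPS line show.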
And so, X-Ray is there. The other one, which is more recent by the way, addresses the fact that most of you have spent many days, if not weeks, after you've chosen Nutanix, moving non-Nutanix workloads, i.e. VMware on three-tier architectures, to AHV on Nutanix. And to do that, we took a hard look and came out with a new product called Xtract. Steven: Yeah. So essentially, if we think about what Nutanix has done for the data center, it really enables that iPhone-like experience, really bringing simplicity and intuitiveness to the data center. Now what we wanted to do is provide that same experience for migrating existing workloads to us. So, with Xtract, essentially what we do is we scan your existing environment, we create a design spec, and we handle the migration process as well as the cutover. Now, let's go ahead and take a look at our Xtract user interface here. What we can see is we have a source environment. In this case, this is a vCenter environment. This can be any vCenter, whether it's traditional three-tier or hyperconverged. We also see our Nutanix target environments. Essentially, these are our AHV target clusters where we're going to be migrating the data and performing the cutover to. Sunil: Gotcha. Steven: The first thing that we do here is we go ahead and create a new migration plan. Here, I'm just going to specify this as DB Wave 2. I'll click okay. What I'm doing here is selecting my target Nutanix cluster, as well as my target Nutanix container. Once I do that, I'll click next. Now in this case, we actually like to do it big. We're actually going to migrate some production virtual machines over to this target environment. Here, I'm going to select a few Windows instances, which are in our database cluster. I'll click next. At this point, essentially what's occurring is it's going through, taking a look at these virtual machines as well as the target environment. It takes a look at the resources to ensure that we actually have ample capacity to facilitate the workload. The next thing we'll do is go ahead and type in our credentials here. These are actually going to be used for logging into the virtual machines. We can do a new device driver installation, as well as get any static IP configuration. We'll specify our network mapping. Then from there, we'll click next. What we'll do is we'll actually save and start. This will go through and create the migration plan. It'll do some analysis on these virtual machines to ensure that we can actually log in before we actually start migrating data. Here we have a migration which has been in progress. We can see we have a few virtual machines, obviously some Linux, some Windows here. We've cut over a few. What we do to actually cut over these VMs is go ahead and select the VMs. Sunil: This is the actual task of doing the final stage of cutover. Steven: Yeah, exactly. That's one of the nice things. Essentially, we can migrate the data whenever we want. We actually hook into the VADP APIs to do this. Then every 10 minutes, we send over a delta to sync the data. Sunil: Gotcha, gotcha. That's how one-click migration can now be possible. This is something that, if you guys haven't used it, has been out in the wild just for a month or so. It's been probably one of our best-selling, because it's free, features of the recent product release.
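A simplified, invented sketch of the seed-then-delta-sync-then-cutover pattern Steven describes follows; the `SourceVM` and `migrate` names are made up, and the real product drives change tracking through the VADP APIs rather than anything shown here.

```python
import time

# Hypothetical sketch of the migrate-then-cutover pattern described above:
# seed a full copy, ship periodic deltas, then do a short final sync at cutover.

class SourceVM:
    def __init__(self, name: str):
        self.name = name
        self.changed_blocks: list[bytes] = []

    def snapshot_changes(self) -> list[bytes]:
        """Return and clear blocks changed since the last sync (stand-in for change tracking)."""
        delta, self.changed_blocks = self.changed_blocks, []
        return delta

def migrate(vm: SourceVM, delta_interval_s: int, ready_to_cut_over):
    print(f"Seeding full copy of {vm.name} to the target cluster...")
    while not ready_to_cut_over():
        time.sleep(delta_interval_s)          # e.g. every 10 minutes in the demo
        delta = vm.snapshot_changes()
        print(f"Shipped delta of {len(delta)} changed blocks for {vm.name}")
    # Final, short delta while the VM is briefly quiesced, then power on at target.
    final = vm.snapshot_changes()
    print(f"Cutover: applied final delta ({len(final)} blocks), powering on {vm.name} on the target")

# Usage: cut over after two delta cycles (purely illustrative timings).
vm = SourceVM("db-vm-01")
cycles = iter([False, False, True])
migrate(vm, delta_interval_s=1, ready_to_cut_over=lambda: next(cycles))
```

The design point the demo makes is that the long data copy happens in the background, so the operator-visible cutover step is short.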
I've had customers come to me and say, "Look, there are situations where it's taken us weeks to move data." That is now minutes from the operator's perspective. Forget the director or the VP; it's the line architect and operator that really loves these tools, which is essentially the core of Nutanix. That's one of our core things: to make sure that if we can keep the engineer and the architect truly happy, then everything else will be fine for us, right? That's Xtract. Then we have a lot of things, right? We've done the usual things; there's a ton of functionality across day zero, day one, day two kinds of capabilities. Why don't we start with something around Prism Central, now that we can do one-click PC installs? We can do PC scale-outs, we can go from managing thousands of VMs to tens of thousands of VMs, while doing all the one-click operations, right? Steven: Yep. Sunil: Why don't we take a quick look at what's new in Prism Central? Steven: Yep. Absolutely. Here, we can see our Prism Element interface. As you mentioned, one of the key things we added here was the ability to deploy Prism Central very simply, just with a few clicks. We'll actually go through a distributed PC scale-out deployment here. Here, we're actually going to deploy, as this is a new instance. We're going to select our 5.5 version. In this case, we're going to deploy a scale-out Prism Central cluster. Obviously, availability and uptime are very critical for us, as we're fundamentally distributed systems. In this case we're going to deploy a scale-out PC cluster. Here we'll select our number of PC virtual machines. Based upon the number of VMs, we can actually select the size of VM that we'd deploy. If we want to deploy with support for 25K VMs, we can do that as well. Sunil: Basically a thousand to tens of thousands of VMs are possible now. Steven: Yep. That's the nice thing: you can start small, and then scale out as necessary. We'll select our PC network. Go ahead and input our IP address. Now, we'll go to deploy. And here we can see it's actually kicked off the deployment, so it'll go provision these virtual machines and apply the configuration. In a few minutes, we'll be up and running. Sunil: Right. While Steven's doing that: one of the things that we've obviously invested a ton in is making VM operations invisible. Now with Calm, what we've done is uplevel that abstraction to applications. At the end of the day, more and more, when you go to AWS, when you go to GCP, you go to [inaudible 01:04:56], right? The level of abstraction is now at the app level; it's CloudFormation and so forth. Essentially, what Calm is able to do is give you this marketplace that you can go into and self-service [inaudible 01:05:05], to create this internal cloud-like environment for your end users, whether they be business owners or technology users, to self-serve themselves. The process is pretty straightforward. You, as an operator, or an architect, or [inaudible 01:05:16] create these blueprints. Consumers within the enterprise, whether they be self-service users or end business users, are able to consume them from a simple marketplace and deploy them, whether it be on a private cloud using Nutanix, or on public clouds using any of the public choices. Then, on a single pane of glass, as operators, you're doing converged operations at an application-centric level between [inaudible 01:05:41] across any of these clouds. It's this combination of producer, consumer, and operator in a curated sense.
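To make the producer/consumer split concrete, here is a minimal, invented sketch of a multi-tier blueprint as data; this is not Calm's actual blueprint schema or API, just an illustration of the idea of publishing a blueprint once and launching instances of it on a chosen target.

```python
# Hypothetical illustration of the blueprint idea: a producer publishes a
# multi-tier app definition once; consumers launch instances of it on a
# target cloud from a marketplace. Names and structure are invented.

from dataclasses import dataclass, field

@dataclass
class Tier:
    name: str
    image: str
    instances: int = 1

@dataclass
class Blueprint:
    name: str
    tiers: list[Tier] = field(default_factory=list)

marketplace: dict[str, Blueprint] = {}

def publish(bp: Blueprint) -> None:
    marketplace[bp.name] = bp                 # producer role

def launch(bp_name: str, target_cloud: str) -> None:
    bp = marketplace[bp_name]                 # consumer role
    for tier in bp.tiers:
        print(f"Provisioning {tier.instances} x {tier.image} "
              f"for tier '{tier.name}' of '{bp.name}' on {target_cloud}")

publish(Blueprint("hr-app", [Tier("web", "nginx", 2),
                             Tier("app", "tomcat", 2),
                             Tier("db", "mysql", 1)]))
launch("hr-app", target_cloud="nutanix-ahv")
```

The operator's "single pane of glass" role would then be watching and managing the launched instances, wherever they landed.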
Much like an iPhone with an app store. It's the core construct that we're trying to get to with Calm: to uplevel the abstraction interface across multiple clouds. Maybe we'll do a quick demo of this, and then get into the rest of the stuff, right? Steven: Sure. Let's check it out. Here we have our Prism Central user interface. We can see we have two Nutanix clusters, our cloudy04 as well as our Power8 cluster. One of the key things here that we've added is this apps tab. Clicking on this apps tab, we can see that we have a few [inaudible 01:06:19] solutions, we have a TensorFlow solution, a [inaudible 01:06:22], et cetera. The nice thing about this is that it's essentially a marketplace where vendors as well as developers can produce these blueprints for consumption by the public. Now, let's actually go ahead and deploy one of these blueprints. Here we have an HR employee engagement app. We can see we have three different tiers of services as part of this. Sunil: You need a lot of engagement at HR, you know that. Okay, keep going. Steven: Then the next thing we'll do here is we'll go and click on it. Based upon this, we'll specify our blueprint name, HR app. The nice thing when I'm deploying is I can actually put in back doors. We'll click clone. Now what we can see here is our blueprint editor. As a developer, I could actually go make modifications, or even as an end user, given the simple, intuitive user interface. Sunil: This is the consumer side right here, but it's also the [inaudible 01:07:11]. Steven: Yep, absolutely. Yeah, if I wanted to make any modifications, I could select a tier, I could scale out the number of instances, I could modify the packages. Then to actually deploy, all I do is click launch, specify HR app, and click create. Sunil: Awesome. Again, this is coming in 5.5. There's one other feature, by the way, that is coming in 5.5 around Calm, and Prism Pro, and everything else, that seems to be a much-awaited feature for us. What was that? Steven: Yeah. Obviously when we think about multi-tenant, multi-cloud, role-based access control is a very critical piece of that. Obviously within the organization, we're going to have multiple business groups, multiple units. RBAC is a very critical piece. Now, if we go over here to our projects, we can see in this scenario we just have a single project. What we've added is the ability to specify certain roles; in this case we're going to add our good friend John Doe. We can add them, it could be a user or a group, but then we specify their role. We can give a developer the ability to edit and create these blueprints, or a consumer the ability to actually provision based upon them. Sunil: Gotcha. Basically in 5.5, you'll have role-based access control now, with Prism and Calm baked into that, and I believe it'll support custom roles shortly after. Steven: Yep, okay. Sunil: Good stuff, good stuff. I think this is where the Nutanix guys are supposed to clap, by the way, so that the rest of the guys can clap. Steven: Thank you, thank you. Okay. What do we have? Sunil: We have day one stuff; obviously there's a ton of stuff that's coming in the core data path capabilities that most of you guys use. One of the most popular things is synchronous replication, especially in Europe. Everybody wants to do Metro for whatever reason. But we've got something new, something even more enhanced than Metro, right? Steven: Yep. Sunil: Do you want to talk a little bit about it? Steven: Yeah, let's talk about it.
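Stepping back for a second to the role-based access control piece just shown: conceptually it reduces to a role-to-permission mapping that is checked before each operation. The sketch below is invented for illustration; the role names and the `is_allowed` helper are not Prism's actual model.

```python
# Minimal, invented sketch of role-based access control as just described:
# roles map to allowed actions; a check gates each operation.

ROLE_PERMISSIONS = {
    "developer": {"blueprint.create", "blueprint.edit", "app.launch"},
    "consumer":  {"app.launch"},
    "operator":  {"app.launch", "app.manage", "project.report"},
}

def is_allowed(role: str, action: str) -> bool:
    return action in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("developer", "blueprint.edit")
assert not is_allowed("consumer", "blueprint.edit")   # consumers only launch
print("RBAC checks passed")
```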
If we think about what we had previously, we started out with asynchronous replication. That's essentially going to be your higher RPO. Then we moved into Metro cluster, which is RPO zero. Those are the two ends of the gamut. What we did is we introduced near-synchronous replication, which really gives you the best of both worlds, where you have very, very low RPOs, but zero impact on mainstream performance. Sunil: That's it. Let's show something. Steven: Yeah, yeah. Let's do it. Here, we're back at our Prism Element interface. We'll go over here. At this point, we've provisioned our HR app; the next thing we need to do is protect that data. Let's go here to protection domains. We'll create a new PD for our HR app. Sunil: You clearly love HR. Steven: Spent a lot of time there. Sunil: Yeah, yeah, yeah. Steven: Here, you can see we have our production LAMP DB VM. We'll go ahead and protect that entity. We can see it's protected. The next thing we'll do is create a schedule. Now, what would you say would be a good schedule we should actually shoot for? Sunil: I don't know, 15 minutes? Steven: 15 minutes is not bad, but I think the people here deserve much better than that, so I say let's shoot for ... what about 15 seconds? Sunil: Yeah. They definitely need a bathroom break, so let's do 15 seconds. Steven: Alright, let's do 15 seconds. Sunil: Okay, sounds good. Steven: K. Then we'll select our retention policy and the remote cluster to replicate to, which in this case is wedge. And we'll go ahead and create the schedule here. Now at this point we can see our protection domain. Let's go ahead and look at our entities. We can see our database virtual machine. We can see our 15-second schedule and our local snapshots, and we'll start seeing our remote snapshots as well. Now, essentially what occurs is we take two very quick snapshots to essentially seed the initial data, and then based upon that we'll start taking our continuous 15-second snaps. Sunil: 15-second snaps, and obviously near-sync has less of an impact than synchronous, right? From an architectural perspective. Steven: Yeah, and that's the nice thing: essentially within the cluster it's truly pure synchronous, but externally it's just a lagged async. Sunil: Gotcha. So there you see some 15-second snapshots. So near-sync is also built into five-five; it's a long-awaited feature. So then we expand into the rest of the capabilities, I would say, operations. A lot of you guys obviously have started using Prism Pro. Okay, okay, you can clap. You can clap. It's okay. It was a lot of work, by the way, by the core data path team; it was a lot of time. So Prism Pro ... I don't know if you guys know this, Prism Central has now gone from zero percent to more than 50 percent attach on the install base within 18 months. And normally that's a sign of true usage, and true value being delivered. And so, many things are new in five-five on Prism Pro, starting with the fact that you can do data [inaudible 01:11:49] baselining and alerting, so that you're not capturing a ton of false positives and tons of alerts. We go beyond that, because we have this core machine-learning technology powering it; we call it X-Fit.
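Returning briefly to the 15-second schedule in that demo: the practical meaning of a snapshot-based RPO can be shown with a toy calculation. The function below is not how the product computes anything; it is an invented sketch of "worst-case RPO is roughly the snapshot interval plus the time to ship the delta," under assumed numbers.

```python
# Toy illustration (not Nutanix internals): with snapshot-based replication,
# the worst-case RPO is roughly the snapshot interval plus the time it takes
# to ship a delta to the remote site.

def worst_case_rpo_seconds(snapshot_interval_s: float,
                           delta_size_mb: float,
                           link_mb_per_s: float) -> float:
    ship_time = delta_size_mb / link_mb_per_s
    return snapshot_interval_s + ship_time

for interval, label in [(3600, "hourly async"), (15, "15-second near-sync")]:
    rpo = worst_case_rpo_seconds(interval, delta_size_mb=200, link_mb_per_s=100)
    print(f"{label:>20}: worst-case RPO ~ {rpo:.0f} s")
```

The assumed delta size and link speed are placeholders; the comparison is only meant to show why shrinking the interval from hours to seconds dominates the RPO.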
And what we've done is we've used that as a foundation now for pretty much all kinds of operations benefits, such as auto-RCA, where you're able to actually map a particular [inaudible 01:12:12] back to who's actually causing it, whether it's the network, the compute, and so forth. But then the last thing that we've also done in five-five, which is quite differentiating, is the fact that you can now have a lot of these one-click recommendations and remediations, such as right-sizing, the fact that you can actually move around [inaudible 01:12:28] VMs, constrained VMs, and so forth. So, I know we've packed a lot of functionality into Prism Pro, so why don't we spend a couple of minutes quickly giving a sneak peek into a few of those things. Steven: Yep, definitely. So here we're back at our Prism Central interface, and one of the things we've added here, if we take a look at one of our clusters, is this new anomalies portion here. So, let's go ahead and select that and hop into it. Now let's click on one of these anomaly events. Essentially what the system does is monitor all the entities and everything running within the system, and then based upon that, we can actually determine the band of values we expect for these metrics. So in this scenario, we can see we have a CPU usage anomaly event. Normally, we expect this to be right around 86 to 100 percent utilization, but at this point we can see it has drastically dropped from 99 percent to near zero. So, this might be a point where, as an administrator, I want to go check out this virtual machine and ensure that certain services and applications are still up and running. Sunil: Gotcha, and then also it changes the baseline based on- Steven: Yep. Yeah, so essentially we apply machine-learning techniques to this, so the system will dynamically adjust the baseline as the observed values change. Sunil: Gotcha. What else? Steven: Yep. So the other thing here that we mentioned was capacity planning. So if we go over here, we can take a look at our runway. In this scenario we have about 30 days' worth of runway, which is most constrained by memory. Now, obviously, more nodes is all good for everyone, but we also want to ensure that you get the maximum value out of your investment. So here we can actually see a few recommendations. We have 11 overprovisioned virtual machines. These are essentially VMs which have more resources than are necessary. As well as 19 inactive; these are essentially dead VMs that haven't been powered on and aren't utilized. We can also see we have six constrained, as well as one bully. Constrained VMs are essentially VMs which are requesting more resources than they actually have access to. This could be running at 100 percent CPU utilization, or 100 percent memory or storage utilization. So we could actually go in and modify these. Sunil: Gotcha. So these are all part of the auto-remediation capabilities that are now possible? Steven: Yeah. Sunil: What else, do you want to take reporting? Steven: Yeah. Yeah, so I know reporting is a very big thing, so if we think about it, we can't rely on an administrator to constantly go into Prism. We need to provide some mechanism to allow them to get emailed reports. So what we've done is we actually autogenerate reports which can be sent via email. So we'll go ahead and add one of these sample reports which was created today.
And here we can actually get specific, detailed information about our cluster without actually having to go into Prism to get it. Sunil: And you can customize these reports and all? Steven: Yep. Yeah, if we hop over here and click on our new report, we can actually see a list of views we could add to these reports, and we can mix and match and customize as needed. Sunil: Yeah, so that's the operational side. Now we also have new services like AFS, which has been quite popular with many of you folks. We've had hundreds of customers already live on it with SMB functionality. Do you want to show a couple of things that are new in five-five? Steven: Yeah. Yep, definitely. So ... let's wait for my screen here. So one of the key things is, if we looked at that runway tab, what we saw is we had over a year's worth of storage capacity. So, what we saw is customers had a requirement for filers, and they had some excess storage, so why not actually build a software filer natively into the cluster? And that's essentially what we've done with AFS. So here we can see we have our AFS cluster, and one of the key things is the ability to scale. This particular cluster has around 3.1, or 3.16, billion files running on it, as well as around 3,000 active concurrent sessions. Sunil: So basically thousands of concurrent sessions with billions of files? Steven: Yeah, and the nice thing with this is that it's actually only a four-node Nutanix cluster, so as the cluster scales, these numbers will actually scale linearly as a function of those nodes. Sunil: Gotcha, gotcha. There's got to be one more bullet here on this slide, so what's it about? Steven: Yeah, so obviously the initial use case was realistically for home folders as well as user profiles. That was a good start, but it wasn't the only thing. So what we've done is we've also introduced, in an important upcoming release, NFS. So now you can use NFS to also interface with our [crosstalk 01:16:44]. Sunil: NFS coming soon with AFS, by the way; it's a big deal. Big deal. So one last thing, obviously, as you go operationalize it. We've talked a lot about features and functions, but one of the cool things that's always been seminal to this company is the fact that we strive for a really good customer service and support experience. Right now a lot of it is around the product, the people, the support guys, and so forth. So, fundamental to the product, we have found ways, using Pulse, to instrument everything. With Pulse HD, which has been around for a little bit longer now, we have fine-grained [inaudible 01:17:20] around everything that's being done, so if you turn on this functionality, there's a lot of information that we've built up and that we use when you make a phone call, or send an email, and so forth. There's a ton of context now available to support you guys. What we've now done is taken that and externalized it for your own consumption, so that you don't have to necessarily call support. You can log in and look at your entire profile across your own alerts, your own advisories, your own recommendations. You can look at collective intelligence, which is coming soon, which is the fact that, look, here are 50 other customers just like you. These are the kinds of customers that are running workloads like yours; what are their configuration profiles?
Through this centralized customer insights portal you're going to get a lot more insight, not just about your own operations, but also how everybody else is using it. So let's take a quick look at that upcoming functionality. Speaker 2: Yep. Absolutely. So this is our customer 360 portal, so as [inaudible 01:18:18] mentioned, as a customer I can actually log in here, I can get a high-level overview of my existing environment, my cases, the status of those cases, as well as any relevant announcements. So here, based upon my cluster version, if there are any updates available, I can see that here immediately. And then one of the other things that we've added here is this insights page. Essentially this is information that support would previously leverage to proactively look out at the cluster, but now we've exposed it to you as the customer. So, clicking on this insights tab, we can see an overview of our environment, in this case we have three Nutanix clusters, right around 550 virtual machines, and over here what's critical is we can actually see our cases. And one of the nice things about this is these are all autogenerated by the cluster itself, so no human interaction, no manual intervention was required to actually create these alerts. The cluster itself will actually facilitate that, send it over to support, and then support can get back out to you automatically. Speaker 1: OK, so look for customer insights coming soon. And obviously that's the full life cycle. One cool thing, though, that's always been unique to Nutanix was the fact that we had [inaudible 01:19:28] security from day one built in. And [inaudible 01:19:31] chunk of functionality coming in five-five just around this, because every release we try to insert more and more security capabilities, and the first one is around data. What are we doing? Speaker 2: Yeah, absolutely. So previously we had support for data at rest encryption, but this did have the requirement to leverage self-encrypting drives. These can be very expensive, so what we've done, typical to our fashion, is we've actually built this in natively via software. So, here within Prism Element, I can go to data at rest encryption, and then I can go and edit this configuration here. From here I can add my CSRs, I can specify a KMS server, and leverage native software-based encryption without the requirement of SEDs. Sunil: Awesome. So data at rest encryption [inaudible 01:20:15] coming soon in five-five. Now data security is only one element; the other element was around network security, obviously. We've always had this request about what are we doing about networking, and our philosophy has always been simple and clear, right. It is that the problem in networking is not the data plane. The problem in networking is the control plane. As in, if packet loss happens at a top-of-rack switch, what do we do? If there's a misconfigured port, what do we do? So we've invested a lot in full-blown new network visualization, which we'll show you a preview of, all new in five-five. But then once you can visualize, you can take action, so using our APIs now in five-five you can provision VLANs on the switch, you can update VIPs on your load balancing pools.
You can update obviously rules on your firewall. And then we've taken that to the next level, which is beyond all that, just let you go to AWS right now, what do you do? You take 100 VM's, you put it in an AWS security group, boom. That's how you get micro segmentation. You don't need to buy expensive products, you don't need to virtualize your network to get micro segmentation. That's what we're doing with five five, is built in one click micro segmentation. That's part of the core product, so why don't we just quickly show that. Okay? Steve: Yeah, let's take a look. So if we think about where we've been so far, we've done the comparison test, we've done a migration over to a Nutanix. We've deployed our new HR app. We've protected it's data, now we need to protect the network's. So one of the things you'll see that's new here is this security policies. What we'll do is we'll actually go ahead and create a new security policy and we'll just say this is HR security policy. We'll specify the application type, which in this case is HR. Sunil: HR of course. Steve: Yep and we can see our app instance is automatically populated, so based upon the number of running instances of that blueprint, that would populate that drop-down. Now we'll go ahead and click next here and what we can see in the middle is essentially those three tiers that composed that app blueprint. Now one of the important things is actually figuring out what's trying to communicate with this within my existing environment. So if I take a look over here on my left hand side, I can essentially see a few things. I can see a Ha Proxy load balancer is trying to communicate with my app here, that's all good. I want to allow that. I can see some sort of monitoring service is trying to communicate with all three of the tiers. That's good as well. Now the last thing I can see here is this IP address which is trying to access my database. Now, that's not designed and that's not supposed to happen, so what we'll do is we'll actually take a look and see what it's doing. Now hopping over to this database virtual machine or the hack VM, what we can see is it's trying to perform a brute force log in attempt to my MySQL database. This is not good. We can see obviously it can connect on the socket, however, it hasn't guessed the right password. In order to lock that down, we'll go back to our policies here and we're going to click deny. Once we've done that, we'll click next and now we'll go to Apply Now. Now we can see our newly created security policy and if we hop back over to this VM, we can now see it's actually timing out and what this means is that it's not able to communicate with that database virtual machine due to micro segmentation actively blocking that request. Sunil: Gotcha and when you go back to the Prism site, essentially what we're saying now is, it's as simple as that, to set up micro segmentation now inside your existing clusters. So that's one click micro segmentation, right. Good stuff. One other thing before we let Steve walk off the stage and then go to the bathroom, but is you guys know Steve, you know he spends a lot time in the gym, you do. Right. He and I share cubes right beside each other by the way just if you ever come to San Jose Nutanix corporate headquarters, you're always welcome. Come to the fourth floor and you'll see Steve and Sunil beside each other, most of the time I'm not in the cube, most of the time he's in the gym. If you go to his cube, you'll see all kinds of stuff. Okay. 
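The security policy built in that demo is essentially an allow-list per application tier: the HAProxy load balancer may reach the web tier, the monitoring service may reach every tier, and anything unexpected, like the VM brute-forcing the database, is denied by default. A rough sketch of that model follows; the tier names, ports, and structures are hypothetical, not the actual Nutanix policy API.

```python
# Hypothetical allow-list model for an HR app with web / app / db tiers.
POLICY = {
    "web": {("haproxy", "tcp", 80), ("monitoring", "tcp", 9100)},
    "app": {("web", "tcp", 8080), ("monitoring", "tcp", 9100)},
    "db":  {("app", "tcp", 3306), ("monitoring", "tcp", 9100)},
}

def is_allowed(src_category: str, dst_tier: str, proto: str, port: int) -> bool:
    """Default-deny: a flow is permitted only if the destination tier lists it."""
    return (src_category, proto, port) in POLICY.get(dst_tier, set())

# The load balancer reaching the web tier is fine...
print(is_allowed("haproxy", "web", "tcp", 80))              # True
# ...but an unknown source brute-forcing MySQL on the db tier is dropped.
print(is_allowed("unknown-10.1.1.99", "db", "tcp", 3306))   # False
```

The design point here is the default-deny stance: rather than enumerating what to block, the policy enumerates the only flows the application legitimately needs, which is why the rogue VM in the demo simply times out once the policy is applied.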
It's true, it's true, but the reason why I brought this up, was Steve recently became a father, his first kid. Oh by the way this is, clicker, this is how his cube looks like by the way but he left his wife and his new born kid to come over here to show us a demo, so give him a round of applause. Thank you, sir. Steve: Cool, thanks, Sunil. That was fun. Sunil: Thank you. Okay, so lots of good stuff. Please try out five five, give us feedback as you always do. A lot of sessions, a lot of details, have fun hopefully for the rest of the day. To talk about how their using Nutanix, you know here's one of our favorite customers and partners. He normally comes with sunglasses, I've asked him that I have to be the best looking guy on stage in my keynotes, so he's going to try to reduce his charm a little bit. Please come on up, Alessandro. Thank you. Alessandro R.: I'm delighted to be here, thank you so much. Sunil: Maybe we can stand here, tell us a little bit about Leonardo. Alessandro R.: About Leonardo, Leonardo is a key actor of the aerospace defense and security systems. Helicopters, aircraft, the fancy systems, the fancy electronics, weapons unfortunately, but it's also a global actor in high technology field. The security information systems division that is the division I belong to, 3,000 people located in Italy and in UK and there's several other countries in Europe and the U.S. $1 billion dollar of revenue. It has a long a deep experience in information technology, communications, automation, logical and physical security, so we have quite a long experience to expand. I'm in charge of the security infrastructure business side. That is devoted to designing, delivering, managing, secure infrastructures services and secure by design solutions and platforms. Sunil: Gotcha. Alessandro R.: That is. Sunil: Gotcha. Some of your focus obviously in recent times has been delivering secure cloud services obviously. Alessandro R.: Yeah, obviously. Sunil: Versus traditional infrastructure, right. How did Nutanix help you in some of that? Alessandro R.: I can tell something about our recent experience about that. At the end of two thousand ... well, not so recent. Sunil: Yeah, yeah. Alessandro R.: At the end of 2014, we realized and understood that we had to move a step forward, a big step and a fast step, otherwise we would drown. At that time, our newly appointed CEO confirmed that the IT would be a core business to Leonardo and had to be developed and grow. So we decided to start our digital transformation journey and decided to do it in a structured and organized way. Having clear in mind our targets. We launched two programs. One analysis program and one deployments programs that were essentially transformation programs. We had to renew ourselves in terms of service models, in terms of organization, in terms of skills to invest upon and in terms of technologies to adopt. We were stacking a certification of technologies that adopted, companies merged in the years before and we have to move forward and to rationalize all these things. So we spent a lot of time analyzing, comparing technologies, and evaluating what would fit to us. We had two main targets. The first one to consolidate and centralize the huge amount of services and infrastructure that were spread over 52 data centers in Italy, for Leonardo itself. The second one, to update our service catalog with a bunch of cloud services, so we decided to update our data centers. 
One of the building blocks of our new data center architecture was Nutanix. We evaluated a lot, we had spent a lot of time in analysis, so it wasn't a bet, but you were quite pioneers at that time. Sunil: Yeah, you took a lot of risk, right, as an Italian company- Alessandro R.: At that time, my colleague used to say, "Hey, Alessandro, think it over, remember that no CEO has ever been fired for having chosen IBM." I apologize, Bob, but at that time Nutanix didn't run on [inaudible 01:29:27]. We still have a good bunch of [inaudible 01:29:31] in our data center, so that will be the chance to ... Audience Member: [inaudible 01:29:37] Alessandro R.: So much you must [inaudible 01:29:37] what you announced it. Sunil: So you took a risk and you got into it. Alessandro R.: Yes, we got into it, and we are very satisfied with the results we have reached. Sunil: Gotcha. Alessandro R.: Most of the targets we expected to fulfill have come, and so we are satisfied, but that doesn't mean that we won't go on asking you for a big discount ... Sunil: Sure, sure, sure, sure. Alessandro R.: On the price list. Sunil: Sure, sure. So what's next? In terms of, I know there's some interesting stuff that you're thinking of. Alessandro R.: The next, we have to move forward, obviously. The name Leonardo is inspired by Leonardo da Vinci; he was a guy who, in terms of innovation and technology, had some good ideas. And so I think that Leonardo, with Nutanix, could go on following an innovation target and following a really mutual ... Sunil: Partnership. Alessandro R.: Useful partnership, yes. We surely want to investigate the micro segmentation technologies you showed a minute ago, because we are looking at it, particularly from the economical point of view ... Sunil: Yeah, the costs and expenses. Alessandro R.: And we have to give an alternative to the technology we are using. We want to use AHV more intensively, again as an alternative to the solution we are using. We are selecting a couple of services, a couple of quite big projects, to build using AHV. Talking of Calm, we are very eager to understand the announcements that you are going to show to all of us, because the solution we are currently using is quite [crosstalk 01:31:30] Sunil: Complicated. Alessandro R.: Complicated, yeah. To move a step of automation, to elaborate and implement [inaudible 01:31:36], you spend 500 hours of manual activities, and that's nonsense, so ... Sunil: Manual automation. Alessandro R.: (laughs) Yes. And in the end we are very interested also in the Prism features, mostly the new features that you ... Sunil: Talked about. Alessandro R.: You showed yesterday in the preview, because any bit of benefit that we receive from the solution in the operations field means a plus to our customers, a distinctive plus to our customers, so we are very interested in that ... Sunil: Gotcha, gotcha. Thanks for taking the risk, thanks for being a customer and partner. Alessandro R.: It has been a pleasure. Sunil: Appreciate it. Alessandro R.: Bless you, bless you. Sunil: Thank you. So, you know, obviously one OS, one click was one of our core things; as you can see, the tagline doesn't stop there, it also says "any cloud".
So, that's the rest of the presentation right now it's about; what are we doing, to now fulfill on that mission of one OS, one cloud, one click with one support experience across any cloud right? And there you know, we talked about Calm. Calm is not only just an operational experience for your private cloud but as you can see it's a one-click experience where you can actually up level your apps, set up blueprints, put SLA's and policies, push them down to either your AWS, GCP all your [inaudible 01:33:00] environments and then on day one while you can do one click provisioning, day two and so forth you will see new and new capabilities such as, one-click migration and mobility seeping into the product. Because, that's the end game for Calm, is to actually be your cloud autonomy platform right? So, you can choose the right cloud for the right workload. And talk about how they're building a multi cloud architecture using Nutanix and partnership a great pleasure to introduce my other good Italian friend Daniele, come up on stage please. From Telecom Italia Sparkle. How are you sir? Daniele: Not too bad thank you. Speaker 1: You want an espresso, cappuccino? Daniele: No, no later. Speaker 1: You all good? Okay, tell us a little about Sparkle. Daniele: Yeah, Sparkle is a fully owned subsidy of Telecom Italia group. Speaker 1: Mm-hmm (affirmative) Daniele: Spinned off in 2003 with the mission to develop the wholesale and multinational corporate and enterprise business abroad. Huge network, as you can see, hundreds of thousands of kilometers of fiber optics spread between; south east Asia to Europe to the U.S. Most of it proprietary part of it realized on some running cables. Part of them proprietary part of them bilateral part of them[inaudible 01:34:21] with other operators. 37 countries in which we have offices in the world, 700 employees, lean and clean company ... Speaker 1: Wow, just 700 employees for all of this. Daniele: Yep, 1.4 billion revenues per year more or less. Speaker 1: Wow, are you a public company? Daniele: No, fully owned by TIM so far. Speaker 1: So, what is your experience with Nutanix so far? Daniele: Well, in a way similar to what Alessandro was describing. To operate such a huge network as you can see before, and to keep on bringing revenues for the wholesale market, while trying to turn the bar toward the enterprise in a serious way. Couple of years ago the management team realized that we had to go through a serious transformation, not just technological but in terms of the way we build the services to our customers. In terms of how we let our customer feel the Sparkle experience. So, we are moving towards cloud but we are moving towards cloud with connectivity attached to it because it's in our cord as a provider of Telecom services. The paradigm that is driving today is the on-demand, is the dynamic and in order to get these things we need to move to software. Most of the network must become invisible as the Nutanix way. So, we decided instead of creating patchworks onto our existing systems, infrastructure, OSS, BSS and network systems, to build a new data center from scratch. And the paradigm being this new data center, the mantra was; everything is software designed, everything must be easy to manage, performance capacity planning, everything must be predictable and everything to be managed by few people. 
Nutanix is at the moment the baseline of this data center for what concern, let's say all the new networking tools, meaning as the end controllers that are taking care of automation and programmability of the network. Lifecycle service orchestrator, network orchestrator, cloud automation and brokerage platform and everything at the moment runs on AHV because we are forcing our vendors to certify their application on AHV. The only stack that is not at the moment AHV based is on a specific cloud platform because there we were really looking for the multi[inaudible 01:37:05]things that you are announcing today. So, we hope to do the migration as soon as possible. Speaker 1: Gotcha, gotcha. And then looking forward you're going to build out some more data center space, expose these services Daniele: Yeah. Speaker 1: For the customers as well as your internal[crosstalk 01:37:21] Daniele: Yeah, basically yes for sure we are going to consolidate, to invest more in the data centers in the markets on where we are leader. Italy, Turkey and Greece we are big data centers for [inaudible 01:37:33] and cloud, but we believe that the cloud with all the issues discussed this morning by Diraj, that our locality, customer proximity ... we think as a global player having more than 120 pops all over the world, which becomes more than 1000 in partnerships, that the pop can easily be transformed in a data center, so that we want to push the customer experience of what we develop in our main data centers closer to them. So, that we can combine traditional infrastructure as a service with the new connectivity services every single[inaudible 01:38:18] possibly everything running. Speaker 1: I mean, it makes sense, I mean I think essentially in some ways to summarize it's the example of an edge cloud where you're pushing a micro-cloud closer to the customers edge. Daniele: Absolutely. Speaker 1: Great stuff man, thank you so much, thank you so much. Daniele: Pleasure, pleasure. Thank you. Speaker 1: So, you know a couple of other things before we get in the next demo is the fact that in addition to Calm from multi-cloud management we have Zai, we talked about for extended enterprise capabilities and something for you guys to quickly understand why we have done this. In a very simple way is if you think about your enterprise data center, clearly you have a bunch of apps there, a bunch of public clouds and when you look at the paradigm you currently deploy traditional apps, we call them mode one apps, SAP, Exchange and so forth on your enterprise. Then you have next generation apps whether it be [inaudible 01:39:11] space, whether it be Doob or whatever you want to call it, lets call them mode two apps right? And when you look at these two types of apps, which are the predominant set, most enterprises have a combination of mode one and mode two apps, most public clouds primarily are focused, initially these days on mode two apps right? And when people talk about app mobility, when people talk about cloud migration, they talk about lift and shift, forklift [inaudible 01:39:41]. And that's a hard problem I mean, it's happening but it's a hard problem and ends up that its just not a one time thing. Once you've forklift, once you move you have different tooling, different operation support experience, different stacks. What if for some of your applications that mattered ... 
that are your core enterprise apps, you could retain the same tooling, the same operational experience, and so forth? And that is what we aim to do with Xi. It is truly making hybrid invisible, which is the next act for this company. It'll take us a few years to really fulfill the vision here, but the idea is that you shouldn't think about public cloud as a different silo. You should think of it as an extension of your enterprise data centers, for any services such as DR, whether it be dev/test, whether it be backup, and so forth. You can use the same tooling, the same experience, and get a public cloud-like capability without lift and shift, right? So it's making this lift and shift invisible by, sort of, homogenizing the data plane, the network plane, and the control plane; that is what we really want to do with Xi. Okay? And we'll show you some more details here. But the simplest way to understand this is to think of it as the iPhone, right? Dheeraj mentioned this a little bit. This is how we built this experience. You can view iOS as the core IP; we wrap it up with a great package called the iPhone. But then, a few years into the iPhone era, came iTunes and iCloud. They're not separate apps, per se; that's fused into iOS. And similarly, think about Xi that way. The more you move VMs into a Nutanix environment, stuff like DR comes burned into the fabric. And to give us a sneak peek into a bunch of the Calm and Xi capabilities, let me bring back Binny, who's always a popular guy on stage. Come on up, Binny. I'd be surprised if Binny untucked his shirt. He's always tucking in his shirt. Binny Gill: Okay, yeah. Let's go. Speaker 1: So the first thing is Calm. And to show how we can actually deploy apps, not just across private and public clouds, but across multiple public clouds as well. Right? Binny Gill: Yeah, basically, you know, Calm is about simplifying the disparity between the various public clouds out there. So it's very important for us to be able to take one application blueprint and then quickly deploy it in whatever cloud you choose, without having to understand how one cloud is different from another. Speaker 1: Yeah, that's the goal. Binny Gill: So here, as you can see, I have the marketplace list. And by the way, this marketplace has great partner community interest, and all sorts of apps show up here. Let me take a sample app here, Hadoop, and click launch. And now, where do you want me to deploy? Speaker 1: Let's start with GCP. Binny Gill: GCP, okay. So I click on GCP, and let me give it a name, Hadoop GCP 30, right, and create. So this is one-click deployment of anything from our marketplace onto a cloud of your choice. Right now, what the system is doing is taking the intent-based description of what the application should look like, not just at the infrastructure level but also within the virtual machines, and creating the set of workflows that it needs to go deploy. So as you can see, while we were talking, it's loading the application, making sure that the provisioning workflows are all set up. Speaker 1: And so this is actually, in real time, extracting out some of the GCP requirements. It's actually talking to GCP, setting up the constructs so that we can actually push it up onto GCP. Binny Gill: Right. So it takes a couple of minutes. It'll provision. Let me go back and show you.
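The "one blueprint, any cloud" idea being demonstrated can be pictured as a cloud-neutral blueprint plus per-cloud provisioners that translate it into native calls. The sketch below is purely illustrative; the blueprint fields and provisioner classes are made up for this example and are not Calm's actual data model or API.

```python
from abc import ABC, abstractmethod

BLUEPRINT = {                      # cloud-neutral description of the app
    "name": "hadoop",
    "services": [
        {"tier": "master", "count": 1, "cpu": 4, "ram_gb": 16},
        {"tier": "slave",  "count": 5, "cpu": 8, "ram_gb": 32},
    ],
}

class Provisioner(ABC):
    @abstractmethod
    def create_instance(self, tier: str, cpu: int, ram_gb: int) -> str: ...

class GCPProvisioner(Provisioner):
    def create_instance(self, tier, cpu, ram_gb):
        # A real implementation would call the Compute Engine API here.
        return f"gcp://instances/{tier}-{cpu}cpu-{ram_gb}gb"

class AWSProvisioner(Provisioner):
    def create_instance(self, tier, cpu, ram_gb):
        # A real implementation would call the EC2 API here.
        return f"aws://ec2/{tier}-{cpu}cpu-{ram_gb}gb"

def deploy(blueprint, provisioner: Provisioner):
    ids = []
    for svc in blueprint["services"]:
        for _ in range(svc["count"]):
            ids.append(provisioner.create_instance(svc["tier"], svc["cpu"], svc["ram_gb"]))
    return ids

print(deploy(BLUEPRINT, GCPProvisioner()))   # same blueprint...
print(deploy(BLUEPRINT, AWSProvisioner()))   # ...different cloud
```

The point of the pattern is that the blueprint never changes; only the provisioner behind it does, which is what lets the same marketplace item launch on either cloud with one click.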
Say you want to deploy to AWS instead. So you take Hadoop, name it Hadoop AWS, and that's it. So again, the same workflow. Speaker 1: Same process, I see. Binny Gill: It's going to now deploy in AWS. Speaker 1: See, one of the key things is that we actually extracted out all the isms of each of these clouds into this logical substrate. Binny Gill: Yep. Speaker 1: That you can now piggy-back off of. Binny Gill: Absolutely. And it makes it extremely simple for the average consumer. And you know, we'll add more cloud support here over time. Speaker 1: Sounds good. Binny Gill: Now let me go back and show you an app that I had already deployed, about 13 days ago. It's on GCP. And essentially what I want to show you is the view of the application. Firstly, it shows you the cost summary: hourly, daily, and what the cost is going to look like. The other is how you manage it. So, you know, one-click ways of upgrading, scaling out, starting, deleting, and so on. Speaker 1: So common actions, but independent of the type of cloud. Binny Gill: Independent. And you can also add to these actions over time. Right? Then services. It's running two services, Hadoop slave and Hadoop master; the Hadoop slave service is running right now. And auditing. It shows you the important actions you've taken on this app. Not just, for example, on the infrastructure front, this is, you know, how the VMs were created, but also, if you scroll down, how the application was deployed and brought up. You know, the slaves have to discover each other, and so on. Speaker 1: Yeah, got you. So fine-grained visibility into whatever you were doing with clouds, because that's been one of the complaints in general, that the cloud abstractions have been pretty high level. Binny Gill: Yeah. Speaker 1: Yeah. Binny Gill: Yeah. So that's how we make the differences between the public clouds all go away for the end user ... Speaker 1: Got you. So why don't we now give folks ... Now, a lot of this stuff is coming in five-five, so you'll see it pretty soon. You'll get your hands around it, with AWS support and so forth. What we wanted to show you next is an emerging alpha version that is being baked, real production code for Xi. So why don't we just jump right into it, because we're running short of time. Binny Gill: Yep. Speaker 1: Give folks a flavor for what the production-level code is already being baked around. Binny Gill: Right. So the idea of the design is to make sure that the public cloud is no longer any different from your private cloud. It's a true, seamless extension of your private cloud. Here I have my test environment. As you can see, I'm running the HR app. It has the DB tier and the Web tier. Yeah. Alright? And the DB tier is running Oracle DB; employee payroll is the Web tier. And if you look at the availability zones that I have, this is my data center. Now I want to protect this application, right? From disaster. What do I do? I need another data center. Speaker 1: Sure. Binny Gill: Right? With Xi, what we are doing is ... you go here and click on Xi Cloud Services. Speaker 1: And essentially, as the slide says, you are adding AZs with one click. Binny Gill: Yep, so this is what I'm going to do. Essentially, you log in using your existing my.nutanix.com credentials. So here I'm going to use my guest credentials and log in. Now, while I'm logging in, what's happening is we are creating a seamless network between the two sides, and then making the Xi cloud availability zone appear as if it was my own. Right? Speaker 1: Gotcha.
Binny Gill: So in a couple of seconds, what you'll notice in this list here is that now I don't have just one availability zone; another one appears. Speaker 1: So you have essentially, in real time now, paired your one data center with another availability zone. Binny Gill: Yep. Speaker 1: Cool. Okay. Let's see what else we can do. Binny Gill: So now think about DR setup. Now that I'm armed with another data center, let's do DR. DR setup is going to be extremely simple. Speaker 1: Okay, but it's also because of the fact that it is the same stack on both sides. Right? Binny Gill: It's the same stack on both sides. We have a secure network link connecting the two sides, and on top of that secure network plane, data can now flow back and forth. So now applications can go back and forth, securely. Speaker 1: Gotcha, okay. Let's look at one-click DR. Binny Gill: So for one-click DR setup, there are a couple of things we need to know. One is a protection rule. This is the RPO: where does it apply, right, and the direction of the replication. The other one is recovery plans: in case disaster happens, you know, how do I bring up my machines and applications, in what order, and so on. So let me first show you a protection rule. Right? So here's the protection rule. I'll create one right now. Let me call it Platinum. Alright, and the source is my own data center. Destination, you know, Xi appears now. Recovery point objective: so maybe every one hour these snapshots go to the public cloud. I want to retain three on the public side, three locally. And now I select the entities that I want to protect. Now, instead of picking VMs by name, what I can do is app type employee payroll, app type Oracle database. It covers both the categories of the application tiers that I have. And save. Speaker 1: So one of the things here, by the way, I don't know if you guys have noticed this, is that more and more of Nutanix's constructs are being elevated to become app-centric, instead of VM-centric. And essentially what that allows one to do is to create that as the new service-level API and abstraction, so that under the covers, over a period of time, it may be VMs today, maybe containers tomorrow, or functions the day after. Binny Gill: Yep. What I just did was all that needs to be done to set up replication from your own data center to Xi. So we started off with no second data center, to actually having replication happening. Speaker 1: Gotcha. Binny Gill: Okay? Speaker 1: Now, you want to set up some recovery plans? Binny Gill: Yeah, so now let's set up a recovery plan. Recovery plans are going to be extremely simple. You select a bunch of VMs or apps, and then you can say what scripts you want to run, what order you want to boot things in, and, you know, you can execute these things with one click, monthly or weekly, and so on. Speaker 1: Gotcha. And that sets up the IPs as well as subnets and everything. Binny Gill: So you have the option. You can maintain the same IPs on-prem as they move to Xi. Or you can make them- Speaker 1: Remember, you can maintain your own IPs when you actually use the Xi service. There was a lot of work done to actually accommodate that capability. Binny Gill: Yeah. Speaker 1: So let's take a look at some of- Binny Gill: You know, the same thing as a VPC, for example. Speaker 1: Yeah. Binny Gill: ... that you need to provision on Xi. So, let's create a recovery plan. For a recovery plan, you select the destination: where does the recovery happen?
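The protection rule just created boils down to a small piece of declarative data: an RPO, retention counts on each side, and the app categories it applies to. Expressed as a structure, with hypothetical field names rather than the real Xi schema, it might look like this, with a trivial helper showing how a snapshot schedule falls out of the RPO.

```python
from datetime import datetime, timedelta

protection_rule = {
    "name": "Platinum",
    "source": "on-prem-dc1",            # placeholder site names
    "destination": "xi-us-west",
    "rpo": timedelta(hours=1),          # one snapshot per hour replicated to Xi
    "retain_local": 3,
    "retain_remote": 3,
    "categories": ["AppType:EmployeePayroll", "AppType:OracleDatabase"],
}

def next_snapshot_times(rule, start: datetime, count: int):
    """The RPO implies a snapshot cadence; retention caps how many are kept."""
    return [start + i * rule["rpo"] for i in range(count)]

for ts in next_snapshot_times(protection_rule, datetime(2017, 11, 8, 9, 0), 4):
    print(ts.isoformat())
```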
Now, after that, you have to think of the runbook that you want to run when disaster happens, right? So you're preparing for that, so let me call it "HR App Recovery." The next thing is the first stage. For the first stage, let me add some entities by category. I want to bring up my database first, right? Let's click on the database, and that's it. Speaker 2: So essentially, you're building the script now. Speaker 1: Building the script- Speaker 2: ... on the [inaudible 01:50:30] Speaker 1: ... but in a visual way. It's simple for folks to understand. You can add a custom script, add a delay, and so on. Let me add another stage, and this stage is about bringing up the web tier after the database is up. Speaker 2: So basically, bring up the database first, then bring up the web tier, et cetera, et cetera, right? Speaker 1: That's it. I've created a recovery plan. I mean, usually it's complicated stuff, but we've made it extremely simple. Now, if you click on "Recovery Points," these are snapshots. Snapshots of your applications. As you can see, the system has already taken three snapshots in response to the protection rule that we created just a couple of minutes ago. And these are now being seeded to Xi data centers. Of course, this takes time for seeding, so what I have is a setup already, and that's the production environment. I'll cut over to that. This is my production environment. Click "Explore," and now you see the same application running in production, and I have a few other VMs that are not protected. Let's go to "Recovery Points." It has been running for some time, these recovery points are there, and they have been replicated to Xi. Speaker 2: So let's do the failover then. Speaker 1: Yeah, so to fail over, you'll have to go to Xi, so let me log in to Xi. This time I'll use my production account for logging into Xi. I'm logging in. The first thing that you'll see in Xi is a dashboard that gives you a quick summary of what your DR testing has been so far, if there are any issues with the replication that you have, and, most importantly, the monthly charges. So right now I've spent, with my own credit card, close to 1,000 bucks. You'll have to refund it quickly. Speaker 2: It depends. If the- Speaker 1: If this works- Speaker 2: If the demo works. Speaker 1: Yeah, if it works, okay. As you see, there are no VMs right now here. If I go to the recovery points, they are there. I can click on the recovery plan that I had created, and let's see how hard it's going to be. I click "Failover." It says there are three entities that, based on the snapshots, it knows it can recover from source to destination, which is Xi. And one click for the failover. Now we'll see what happens. Speaker 2: So this is essentially failing over my production now. Speaker 1: Failing over your production now. [crosstalk 01:52:53] If you click on the "HR App Recovery," here you can see it has now started the recovery plan. The simple recovery plan that we had created actually gets converted into a series of tasks that the system has to do. Each VM has to be hydrated, powered on in the right order, and so on and so forth. You don't have to worry about any of that. You can keep an eye on it. But in the meantime, let's talk about something else.
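The recovery plan built here is essentially an ordered runbook: stage one boots the database category, stage two boots the web tier, with optional scripts or delays in between. A minimal, hypothetical sketch of executing such a plan follows; the power-on call is a stub standing in for whatever the destination site actually does, not a real API.

```python
import time

recovery_plan = [
    {"stage": 1, "categories": ["AppType:OracleDatabase"], "delay_after_s": 5},
    {"stage": 2, "categories": ["AppType:EmployeePayroll"], "delay_after_s": 0},
]

def power_on(category: str):
    # Stub: a real implementation would restore the latest recovery point for
    # every VM in this category at the destination site and power it on there.
    print(f"powering on VMs in {category}")

def execute_failover(plan):
    for stage in sorted(plan, key=lambda s: s["stage"]):
        for category in stage["categories"]:
            power_on(category)
        time.sleep(stage["delay_after_s"])   # give services time to come up

execute_failover(recovery_plan)
```

Note that the same plan can be replayed in the opposite direction for failback; only the source and destination swap, which is the "direction reversal" mentioned later in the demo.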
We are doing failover, but after you failover, you run in Xi as if it was your own setup and environment. Maybe I want to create a new VM. I create a VM and I want to maybe extend my HR app's web tier. Let me name it as "HR_Web_3." It's going to boot from that disk. Production network, I want to run it on production network. We have production and test categories. This one, I want to give it employee payroll category. Now it applies the same policies as it's peers will. Here, I'm going to create the VM. As you can see, I can already see some VMs coming up. There you go. So three VMs from on-prem are now being filled over here while the fourth VM that I created is already being powered. Speaker 2: So this is basically realtime, one-click failover, while you're using Xi for your [inaudible 01:54:13] operations as well. Speaker 1: Exactly. Speaker 2: Wow. Okay. Good stuff. What about- Speaker 1: Let me add here. As the other cloud vendors, they'll ask you to make your apps ready for their clouds. Well we tell our engineers is make our cloud ready for your apps. So as you can see, this failover is working. Speaker 2: So what about failback? Speaker 1: All of them are up and you can see the protection rule "platinum" has been applied to all four. Now let's look at this recovery plan points "HR_Web_3" right here, it's already there. Now assume the on-prem was already up. Let's go back to on-prem- Speaker 2: So now the scenario is, while Binny's coming up, is that the on-prem has come back up and we're going to do live migration back as in a failback scenario between the data centers. Speaker 1: And how hard is it going to be. "HR App Recovery" the same "HR App Recovery", I click failover and the system is smart enough to understand the direction is reversed. It's also smart enough to figure out "Hey, there are now the four VMs are there instead of three." Xi to on-prem, one-click failover again. Speaker 2: And it's rerunning obviously the same runbook but in- Speaker 1: Same runbook but the details are different. But it's hidden from the customer. Let me go to the VMs view and do something interesting here. I'll group them by availability zone. Here you go. As you can see, this is a hybrid cloud view. Same management plane for both sides public and private. There are two availability zones, the Xi availability zone is in the cloud- Speaker 2: So essentially you're moving from the top- Speaker 1: Yeah, top- Speaker 2: ... to the bottom. Speaker 1: ... to the bottom. Speaker 2: That's happening in the background. While this is happening, let me take the time to go and look at billing in Xi. Speaker 1: Sure, some of the common operations that you can now see in a hybrid view. Speaker 2: So you go to "Billing" here and first let me look at my account. And account is a simple page, I have set up active directory and you can add your own XML file, upload it. You can also add multi-factor authentication, all those things are simple. On the billing side, you can see more details about how did I rack up $966. Here's my credit card. Detailed description of where the cost is coming from. I can also download previous versions, builds. Speaker 1: It's actually Nutanix as a service essentially, right? Speaker 2: Yep. Speaker 1: As a subscription service. Speaker 2: Not only do we go to on-prem as you can see, while we were talking, two VMs have already come back on-prem. They are powered off right now. The other two are on the wire. Oh, there they are. Speaker 1: Wow. Speaker 2: So now four VMs are there. 
Speaker 1: Okay. Perfect. Sometimes it works, sometimes it doesn't work, but it's good. Speaker 2: It always works. Speaker 1: Always works. All right. Speaker 2: As you can see, the platinum protection rule is now already applied to them, and it has now reversed the direction of [inaudible 01:57:12]- Speaker 1: Remember, we showed one-click DR, failover, failback, built into the product when Xi ships, to any Nutanix fabric. You can start with ESX on premise and, obviously, fail over to Xi. You can start with AHV. Things that are going to take the same paradigm of one-click operations into this hybrid view. Speaker 2: Let's stop doing lift and shift. The era has come for click and shift. Speaker 1: Binny's now been promoted to Chief Marketing Officer too, by the way. Right? So, one more thing. Speaker 2: Okay. Speaker 1: You know we don't finish any conference without a couple of things that are new. The first one is something that we should have done, I guess, a couple of years ago. Speaker 2: It depends how you look at it. Essentially, if you look at the cloud vendors, one of the key things they have done is they've built services as building blocks for the apps that run on top of them. What we have done at Nutanix is we've built core services like block services, file services, and now, with Calm, a marketplace. Now, if you look at [inaudible 01:58:14] applications, one of the core building pieces is the object store. I'm happy to announce that we have the object store service coming up. Again, in true Nutanix fashion, it's going to be elastic. Speaker 1: Let's- Speaker 2: Let me show you. Speaker 1: Yeah, let's show it. It's an object store service, by the way, that's not just for your primary, but for your secondary. It's obviously not just for on-prem, it's hybrid. So this is being built as a next-gen object service, as an extension of the core fabric, but accommodating a bunch of these new paradigms. Speaker 2: Here is the object browser. I've created a bunch of buckets here. Again, object stores can be used in various ways: as a primary object store, or for secondary use cases. I'll show you both. I'll show you a Hadoop use case where Hadoop is using this as a primary store, and a backup use case. Let's just jump right in. This is a Hadoop bucket. As you can see, there's a temp directory; there's nothing interesting there. Let me go to my Hadoop VM. There it is. And let me run a Hadoop job. This Hadoop job essentially is going to create a bunch of files, write them out, and after that do MapReduce on top. Let's wait for the job to start. It's running now. If we go back to the object store and refresh the page, now you see it's writing to the benchmarks directory; there's a bunch of files that it will write here over time. This is going to take time, so let's not wait for it, but essentially it is showing that Hadoop, which uses the AWS S3-compatible API, can run with our object store, because our object store exposes AWS S3-compatible APIs. The other use case is the HYCU backup. As you can see, that's backup software that can back up to AWS S3; if you point it to Nutanix objects, it can back up there as well. There are a bunch of backup files in there.
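Because the object store speaks the S3 API, anything that already talks S3, whether the Hadoop S3A connector, a backup tool, or a plain SDK, can be pointed at it just by overriding the endpoint. A small boto3 sketch of that idea is below; the endpoint URL and credentials are placeholders, and the bucket layout mirrors the demo only loosely.

```python
import boto3

# Point a standard S3 client at an S3-compatible object store instead of AWS.
s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example.internal:9440",  # placeholder endpoint
    aws_access_key_id="ACCESS_KEY",                        # placeholder credentials
    aws_secret_access_key="SECRET_KEY",
)

s3.create_bucket(Bucket="hadoop")
s3.put_object(Bucket="hadoop", Key="benchmarks/part-00000", Body=b"sample output")

for obj in s3.list_objects_v2(Bucket="hadoop").get("Contents", []):
    print(obj["Key"], obj["Size"])
```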
Now, with object stores, it's very important for us to be able to view what's going on there and make sure there's no object sprawl, because once it's easy to write objects, you just accumulate a lot of them. So what we wanted to do, in true Nutanix style, is give you a quick overview of what's happening with your object store. So here, as you can see, you can look at the buckets, where the load is, you can look at the bucket sizes, where the data is, and also what kind of data is there. Now, this is a dashboard that you can optimize and customize for yourself as well, right? So that's the object store. Then we go back here, and I have one more thing for you as well. Speaker 2: Okay. Sounds good. I already clicked through a slide, by the way, by mistake, but keep going. Vineet: That's okay. That's okay. It is actually a quiz, so it's good for people- Speaker 2: Okay. Sounds good. Vineet: It's good for people to have some clues. So the quiz is, how big is my SAP HANA VM, right? I have to show it to you before you can answer it. Okay. So here it is. The SAP HANA VM here has 96 vCPUs. Pretty beefy. Memory is 1.5 terabytes. The question to all of you is, what's different in this screen? Speaker 2: Who's a real Prism user here, by the way? Come on, it's got to be at least a few. Those guys. Let's see if they'll notice something. Vineet: What's different here? Speaker 3: There's zero CVM. Vineet: Zero CVM. Speaker 2: That's right. Yeah. Yeah, go ahead. Vineet: So, essentially, in the Nutanix fabric, every server has to run a controller virtual machine, right? That's where the storage comes from. I am happy to announce the Acropolis Compute Cloud, where you will be able to run AHV on servers that are storage-less, and add them to your existing cluster. So it's a compute cloud that can now be managed from Prism Central, and that way you can preserve your investments in your existing server farms and add them to the Nutanix fabric. Speaker 2: Gotcha. So, essentially ... I mean, essentially, imagine, now you have the equivalent of S3 and EC2 for the enterprise on-premises, like you have the equivalent compute and storage services on GCP and AWS, and so forth, right? So the full flexibility for any kind of workload is now becoming available on the same Nutanix fabric. Thanks a lot, Vineet. Before we wrap up, I'd sort of like to bring this home. We've announced a pretty strategic partnership with someone that has always inspired us for many years. In fact, one would argue that the genesis of Nutanix actually was inspired by Google. We've spent a lot of time in the last few months to really get into the product capabilities, and you're going to see some upcoming capabilities in the five-five release timeframe. To talk more about that stuff, as well as some of the long-term synergies, let me invite Bill onstage. C'mon up, Bill. Tell us a little bit about Google's view of the cloud. Bill: First of all, I want to compliment the demo people on what you did. Phenomenal work that you're doing to make very complex things look really simple. I actually started several years ago as a product manager in high availability and disaster recovery, and I remember, as a product manager, my engineers coming to me and saying, "We have a shortage of engineers and we want you to write the failover routines for the SAP instance that we're supporting."
And so, here's the Perl handbook, you know, I haven't written in Perl yet, go and do all that work, including all the network setup and all of that. So that's amazing, what you are doing right there, and I think that's the spirit of the partnership that we have. From a Google perspective, obviously what we believe is that it's time now to harness the power of scale, security, and these innovations that are coming out. At Google we've spent a lot of time trying to solve these really large problems at scale, and a lot of the technology that's been inserted into the industry right now, things like MapReduce, things like TensorFlow algorithms for AI, and things like Kubernetes and Docker, was first invented at Google to solve problems, because we had to do it to be able to support the business we have. You think about search, alright? When you type search terms into the search box, you see a white screen; what I see is all the data-center work that's happening behind that, and the map reduction needed to be able to give you a search result back in seconds. Think about that work, think about that process. Taking and parsing those search terms, dividing that over thousands of [inaudible 02:05:01], being able to then search segments of the index of the internet, and being able to intelligently reduce that to get you an answer within seconds that is prioritized, that is sorted. How many of you out there have to go to page two and page three to get the results you want today? You don't, because of the power of that technology. We think it's time to bring that to the consumers of the data center and enterprise space, and that's what we're doing at Google. Speaker 2: Gotcha, man. So I know we've done a lot of things now over the last year's worth of collaboration. Why don't we spend a few minutes talking through a couple of things that we've started on, starting with [inaudible 02:05:36], going into Calm, and then we'll talk a little bit about Xi. Bill: I think one of the advantages here, as we start to move up the stack and virtualize things, to your point, right, is that virtual machines and the work required for them still take a fair amount of effort, which you're doing a lot to reduce, right; you're making that a lot simpler and seamless across both on-prem and the cloud. The next step in the journey is to really leverage the power of containers: lightweight objects that allow you to add and surface functionality without being dependent upon the operating system or the VM to be able to do that work. And then having the orchestration layer to be able to run that in the context of cloud and on-prem. We've been very successful in building out the Kubernetes and Docker infrastructure for everyone to use. The challenge that you're solving is how do we actually bridge the gap, how do we actually make that work seamlessly between the on-premise world and the cloud, and that's where our partnership, I think, is so valuable. Because you're bringing the secret sauce to be able to make that happen.
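Bill's description of a search query, parse the terms, fan them out over many machines, then reduce the partial results into one ranked answer, is the map/reduce pattern in miniature. A toy, single-process illustration follows; the shards and documents are invented data, and a real system would of course distribute the map step across machines.

```python
from collections import Counter
from functools import reduce

index_shards = [  # each shard stands in for a machine holding part of the index
    {"nutanix": [1, 4], "cloud": [1, 2, 9]},
    {"cloud": [12, 15], "storage": [12]},
    {"nutanix": [21], "cloud": [21, 30]},
]

def map_shard(shard, terms):
    """Map step: each 'machine' scores only the documents it holds."""
    scores = Counter()
    for term in terms:
        for doc in shard.get(term, []):
            scores[doc] += 1
    return scores

def reduce_scores(a, b):
    """Reduce step: merge partial scores into one ranking."""
    a.update(b)
    return a

query = "nutanix cloud".split()
partials = [map_shard(s, query) for s in index_shards]             # fan out
ranked = reduce(reduce_scores, partials, Counter()).most_common()  # fan in
print(ranked)  # documents matching the most query terms come first
```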
If you think about building out a global network, there's a lot of effort to do that. Google is doing that as a byproduct of serving our consumers. So, if you think about YouTube, there are approximately a billion hours of YouTube watched every single day. If you think about search, we have approximately two trillion searches done in a year, and if you think about the number of containers that we run in a given week, we run about two billion containers per week. So the advantage of being able to move these workloads through Xi, in a disaster recovery scenario first, is that you get to take advantage of that scale. Secondly, because of the network that we've built out, we had to push the network out to the edge. So every single one of our consumers is using YouTube and search and Google Play and all those services (by the way, we have over eight services today that have more than a billion simultaneous users), and you get to take advantage of that network capacity and capability just by moving to the cloud. And then the last piece, which is a real advantage, we believe, is that it's not just about the workloads you're moving, but about getting access to the new services that cloud providers, like Google, provide. For example, are you taking advantage of the next-generation Hadoop, which is our BigQuery capability? Are you taking advantage of the artificial intelligence derivative APIs that we have: the video API, the image API, the speech-to-text API, mapping technology? All those additional capabilities are now exposed to you with the availability of Google Cloud, and you can leverage them directly from systems that are failing over and systems that are running in our combined environment. Speaker 2: A true converged fabric across public and private. Bill: Absolutely. Speaker 2: Great stuff, Bill. Thank you, sir. Bill: Thank you, appreciate it. Speaker 2: Good to have you. So, the last few slides. You know, we've talked about, obviously, One OS, One Click, Any Cloud. At the end of the day, it's pretty obvious that we're evolving from a form factor perspective, where it's not just an OS across multiple platforms, but it's also being delivered differently, moving from being consumed as an appliance to a software form factor, to a subscription form factor. What you saw today, obviously, is the fact that, look, you know, we're still continuing; the velocity has not slowed down. In fact, in some cases it's accelerated. If you ask my quality guys, if you ask some of our customers, we're coming out fast and furious with a lot of these capabilities. And some of this directly reflects not just in features, but also in performance, just like a public cloud, where our performance curve is going up while our price-performance curve is becoming more attractive over a period of time. And balancing this with quality is what differentiates great companies from good companies, right? So when you look at the number of nodes that have been shipping, it's around ten times more nodes than where we were a few years ago. But if you look at the number of customer-found defects as a percentage of the number of nodes shipped, it has not only stabilized, it has actually been coming down. And that's directly reflected in the NPS score that most of you guys love. How many of you guys love your Customer Support engineers? Give them a round of applause. Great support. So this balance of velocity plus quality is what differentiates a company.
And, before we call it a wrap, I just want to leave you with one thing. You know, obviously, we've talked a lot about technology, innovation, inspiration, and so forth. But, as I mentioned from last night's discussion with Sir Ranulph, let's think about a few things tonight. Don't take technology too seriously. I'll give you a simple story that he shared with me that puts things into perspective. The year was 1971. He had come back from Oman, from his service. He was figuring out what to do. This was before he became a world-class explorer. In 1971, he had a job interview; he came down from Scotland and applied for a role in a movie. And he failed that job interview. But he was selected from thousands of applicants, came down to a shortlist, he was a ... that's a hint ... he was a good-looking guy, and he lost out on that role. And the reason why I say this is, if he had gotten that job, first of all I wouldn't have met him, but most importantly, the world wouldn't have had an explorer like him. The guy he lost out to was Roger Moore, and the role was James Bond. And so, when you go out tonight, enjoy it with your friends, [inaudible 02:12:06] or otherwise, and try to take life a little bit easy once in a while, or more than once in a while. Have fun guys, thank you. Speaker 5: Ladies and gentlemen, please make your way to the coffee break; your breakout sessions will begin shortly. Don't forget about the women's lunch today, everyone is welcome. Please join us. You can find the details in the mobile app. Please share your feedback on all sessions in the mobile app. There will be prizes. We will see you back here at 5:30; doors will open at 5, after your last breakout session. Breakout sessions will start sharply at 11:10. Thank you and have a great day.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Steve | PERSON | 0.99+ |
Binny Gill | PERSON | 0.99+ |
Daniele | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Europe | LOCATION | 0.99+ |
Binny | PERSON | 0.99+ |
Steven | PERSON | 0.99+ |
Julie | PERSON | 0.99+ |
Nutanix | ORGANIZATION | 0.99+ |
Italy | LOCATION | 0.99+ |
UK | LOCATION | 0.99+ |
Telecom Italia | ORGANIZATION | 0.99+ |
Acropolis | ORGANIZATION | 0.99+ |
100 percent | QUANTITY | 0.99+ |
Gartner | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Alessandro | PERSON | 0.99+ |
2003 | DATE | 0.99+ |
Sunil | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
20% | QUANTITY | 0.99+ |
Steven Poitras | PERSON | 0.99+ |
15 seconds | QUANTITY | 0.99+ |
1993 | DATE | 0.99+ |
Leonardo | PERSON | 0.99+ |
Lennox | ORGANIZATION | 0.99+ |
hundreds | QUANTITY | 0.99+ |
Six | QUANTITY | 0.99+ |
two companies | QUANTITY | 0.99+ |
John Doe | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Josh Stella, Fugue Inc. | AWS Public Sector Summit 2017
(energetic techno music) >> Announcer: Live from Washington D.C., it's theCUBE. Covering AWS Public Sector Summit 2017. Brought to you by Amazon Web Services and its partner ecosystem. >> Interviewer: So what can Fugue do for you? Well, I'm going to guess that they can take your agency to the Cloud. >> Josh: You're, you're correct, Jeff. >> John W.: That's exactly what I'm looking at over here, the Fugue booth here, on the show floor at AWS Public Sector Summit 2017. Welcome inside, live on theCUBE channel, John Walls and John Furrier, and Josh Stella, who is the founder and CEO of Fugue. Did I get it right, by the way? >> You did get it right. >> Jeff: You're taking the agencies to the Cloud, correct. >> Taking agencies to the Cloud, taking companies to the Cloud, too, but of course, this is worldwide public sector, so we're focused on the agencies today. >> Yeah, we were just talking before this even started, just a little historical background here, you were with Amazon back in 2012, when this show started, and you told me that your mission with your colleagues was to get 600 attendees. >> Yeah, we wanted to get 600, I think we got 750, which is classic Amazon style, right. >> John W.: Bonus year. >> We go over. But yeah, over 10,000 registered this year, it's amazing. >> Which shows you the explosive growth of this area, in terms of the public sector. So let's talk about Fugue a little bit. >> Sure. >> Before we dive in a little bit, share with our viewers your core competencies, what your primary mission is. >> So Fugue is an automation system. Fugue is a way to completely automate the Cloud API surface. It's true infrastructure as code, so unlike a deployment tool that just builds something on Cloud, Fugue builds it, monitors it, self-heals it, modifies it every time, alerts if anything drifts, and we've added a layer to that for policy as code. So you can actually express the rules of your organization, so if you're a government agency, those might be NIST or FISMA rules. If you're a start-up, those might be, we don't open SSH to the world. Those can be just expressed as code. So Fugue fully automates the stack, it doesn't just do deployment, and we just released the Team Conductor, which will manage dozens of AWS accounts for you, so many of our customers in financial services and other enterprises have many, many AWS accounts. Fugue allows you to kind of centralize all of that control without slowing down your developers. Without getting in the way of going fast. >> John W.: And what, why is that big news? >> It's big news because in the past, the whole core value prop of Cloud is to go fast, is to innovate, iterate, be disruptive, and move quickly. What happens, though, is as you do that, at the beginning, when you're starting small, it looks pretty easy. You can go fast. But you learn pretty quickly over time that things get very messy and complex. So Fugue accelerates that going-fast part, but keeps everything kind of within the bounds of knowing who's running things, knowing what resources you're actually using. Who built what, who has permissions to do what. So it's really this foundational layer for organizations to build and control Cloud environments. >> Josh, one of the things we talked about in the opening was the government's glacial pace of innovation over the years. But the pressure is on to innovate. So there's a lot of emphasis on innovation. In an environment that's constrained by regulation, governance, policies.
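To make the policy-as-code idea above concrete, here is a minimal sketch of the "we don't open SSH to the world" rule Josh mentions, expressed as an executable check. The security-group structure and function names are illustrative stand-ins, not Fugue's actual rule language or data model.

```python
# Hypothetical illustration of "policy as code": the organizational rule
# "we don't open SSH to the world" written as a check that a machine can run.

WORLD = "0.0.0.0/0"
SSH_PORT = 22

def violates_no_public_ssh(security_group: dict) -> bool:
    """Return True if any ingress rule exposes SSH to the whole internet."""
    for rule in security_group.get("ingress", []):
        covers_ssh = rule["from_port"] <= SSH_PORT <= rule["to_port"]
        open_to_world = WORLD in rule.get("cidr_blocks", [])
        if covers_ssh and open_to_world:
            return True
    return False

if __name__ == "__main__":
    proposed = {
        "name": "app-servers",
        "ingress": [
            {"from_port": 22, "to_port": 22, "cidr_blocks": ["0.0.0.0/0"]},
            {"from_port": 443, "to_port": 443, "cidr_blocks": ["0.0.0.0/0"]},
        ],
    }
    if violates_no_public_ssh(proposed):
        print("POLICY VIOLATION: SSH (port 22) is open to the world in 'app-servers'")
```

Once a rule is code like this, it can be evaluated automatically against every proposed change rather than audited by hand.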
So they have kind of an Achilles heel there, but Cloud gives them an opportunity at a scale point to do something differently, I want to dig into that, but I'll set this question up by quoting a CIO I chatted with who's in the government sector. He's like, "Look, Cloud's like jumping out of a plane with a parachute I didn't even know was going to open up." So this is kind of a mindset, he was over-generalizing, but again, the point is: trust, scale, execution, risk. >> Mhm, mhm. >> It's a huge thing. >> Absolutely. >> How do you guys solve that problem for the agencies that want to go to the Cloud, because certainly they want to go there, I think it's a new normal as Werner said. What do you guys do to make that go away? How do you make it go faster? >> Sure, so Amazon and other Cloud vendors have done a great job of building a very highly trusted, low-level infrastructure that you can put together into systems. That's really the core offering. But there's still, in government agencies, as you point out, this need to follow rules and regulations and policies, and check those. So, one of the things Fugue does is allow you to actually turn those rules into executable compiled code. So, instead of finding out you're breaking a rule a month later, in some meeting somewhere, that's going to loop back, it'll tell you in ten milliseconds. And how to fix it. So we allow you to go just as fast as anyone can on Cloud, but meeting all those extra constraints and so on. >> So you codify policies, and governance type stuff, right? >> That's part of what we do, but we also automate the entire infrastructure, end to end. >> So this is the key, this is where I want to kind of jump to that next point. That's cool, but it would make sense that machine learning would probably be like an interesting takeaway. 'Cause everyone talks about training data models, and it sounds like what you're doing, if you codify the policies, you're probably set up well for growing and scaling in that world. Is that something that's on your radar? >> Sure. >> How do you guys look at that whole, okay I've got machine learning coming down the pike, everyone wants to get their hands on some libraries, and they want to get to unsupervised at some point. >> Yes, yeah it's a great question. So Fugue is really a bridge to that future where the entire infrastructure layer is automated and dynamic. And that's what you're talking about, where you have machine learning helping you make decisions about how to do computing. A lot of folks aren't ready for that yet. They're still thinking about the Cloud as kind of a remote data center; in our view, it's actually just a big distributed computer. And so, when you think about things like machine learning, or just algorithms that run over time and modify these environments to make them more efficient, Fugue is definitely built to get you there, but we start where you're comfortable now, which is just the first thing we have. >> Yeah of course, when you're still just dipping your toes in the water, all kinds of data issues, you see the growth there. So the question is, what is the low-hanging fruit for you? What are the use cases? Where are you guys winning, and what's new with your codifying of the policies that you're releasing here? What are the use cases, and what are you guys releasing? >> Yeah, so a common use case for us is integration with CI/CD and DevOps for the entire infrastructure chain.
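Josh's point about being told "in ten milliseconds, and how to fix it" can be pictured as a pre-deployment validator: run a set of rule functions over a proposed composition and return every violation along with a suggested remediation. A minimal sketch, assuming an invented resource model and rule names; this is not Fugue's compiler.

```python
# Hypothetical sketch of fast, pre-deployment policy validation: violations are
# reported before anything is built, each with a hint about how to fix it.

from typing import Callable, List, NamedTuple

class Violation(NamedTuple):
    resource: str
    message: str
    suggested_fix: str

def require_encryption(resource: dict) -> List[Violation]:
    if resource["type"] == "storage" and not resource.get("encrypted", False):
        return [Violation(resource["name"],
                          "storage volume is not encrypted at rest",
                          "set encrypted=True on this volume")]
    return []

def forbid_public_buckets(resource: dict) -> List[Violation]:
    if resource["type"] == "bucket" and resource.get("acl") == "public-read":
        return [Violation(resource["name"],
                          "bucket is publicly readable",
                          "change acl to 'private'")]
    return []

RULES: List[Callable[[dict], List[Violation]]] = [require_encryption, forbid_public_buckets]

def validate(composition: List[dict]) -> List[Violation]:
    """Run every rule against every resource in the proposed composition."""
    violations: List[Violation] = []
    for resource in composition:
        for rule in RULES:
            violations.extend(rule(resource))
    return violations

if __name__ == "__main__":
    proposed = [
        {"type": "storage", "name": "db-volume", "encrypted": False},
        {"type": "bucket", "name": "reports", "acl": "public-read"},
    ]
    for v in validate(proposed):
        print(f"{v.resource}: {v.message} -> fix: {v.suggested_fix}")
```

Because the checks run against a description rather than live infrastructure, the feedback loop is milliseconds instead of a compliance review weeks later.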
So, you'll have organizations that want to go to a fully automated deployment and management of infrastructure. And what they've learned in the past is, without Fugue, they might get some of the deployment automated, with a traditional CM tool or something like this, but because they're not doing the self-healing, the constant maintenance on the environment, the updating of the environment, the alerting on it, there's a big missing link in terms of that automation. So, we're getting a lot of resonance in the financial services sector, and with folks who are sophisticated on Cloud and are doing large-scale Cloud operations. So, if you think about it, Netflix can build full automation for themselves, because they're Netflix. But not everyone fits in that boat. So Fugue offers the sorts of capabilities that Netflix built in a very specific way for themselves; we don't use their tools. We're a general-purpose solution to that same class of problems. So, really, where we're winning is in automation of, again, deployments and operations of those deployments, but also in things like policy. We're seeing that not just in government but in the private sector as well. >> What are the big bottlenecks, what are the roadblocks for the industry? >> The roadblocks for the industry certainly are bringing, sort of, legacy patterns to Cloud. Imagining it's a remote data center, thinking of it as virtual machines and storage, instead of just infinite compute and infinite resources that you put together. >> John F.: So the mindset's the bottleneck. >> Absolutely, it's cultural, yeah, yeah. And skillset, because in the DevOps Cloud world, everything should be code, and therefore everyone has to be a developer. And so, that's a little new. >> Is scale a big issue for you guys, with your customers? Is that something that they're looking for? And what's the kind of scope of some of your customers and your use cases in government Clouds. >> Yeah, sure, absolutely. I mean, a lot of us came from AWS, so we know how to build things at scale. But yeah, y'know, a lot of folks start small with Fugue, but they go very large, very quickly, in our experience. So, scale across dozens, or hundreds, of AWS accounts-- >> That's where the automation, if they're not set up properly, bites them in the butt pretty much, right? >> Absolutely, absolutely. So yeah, we get a lot of that too. Going back in and helping people put their system back together the right way for Cloud, because they went there from the-- >> Alright, so what are the magnified learnings from this, from your experience with your company, multiple rounds of financing, you guys are well financed, one of the best venture capital firms, NEA, a great backer, you guys are doing well. Over the years, what have you learned, what's the magnification of the learnings, and how do you apply it to today's marketplace? >> Um, we are in a massive transition. We're just beginning to see the effects of this transition. So, from 1947 until the Cloud, you just had faster, smaller von Neumann machines in a box. You had ENIAC that got down to the size of your wristwatch. The Cloud is intrinsically different. And so there is an opportunity now, that's a challenge, but it's a massive opportunity to get this new generation of computing right.
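The "missing link" Josh points to earlier in this exchange, ongoing monitoring, alerting, and restoration rather than one-shot deployment, can be sketched as a reconciliation loop: compare a declared state to the actual environment, alert on drift, and repair it. A minimal sketch with simulated state; get_actual_state() and apply() stand in for real cloud API calls and are not Fugue's API.

```python
# Hypothetical sketch of self-healing infrastructure: the declared state is
# continuously compared to what actually exists, and drift is alerted on and
# reconciled, instead of a deploy tool running once and walking away.

import time

DECLARED = {
    "web-1": {"type": "instance", "size": "m5.large"},
    "web-2": {"type": "instance", "size": "m5.large"},
}

# Simulated live environment; in practice this would be read from cloud APIs.
_live = {
    "web-1": {"type": "instance", "size": "m5.large"},
    # web-2 was deleted out-of-band, so the environment has drifted.
}

def get_actual_state() -> dict:
    return dict(_live)

def apply(name: str, spec: dict) -> None:
    print(f"reconciling {name}: recreating with {spec}")
    _live[name] = dict(spec)

def reconcile_once(declared: dict) -> None:
    actual = get_actual_state()
    for name, spec in declared.items():
        if actual.get(name) != spec:
            print(f"ALERT: drift detected on {name}")
            apply(name, spec)

if __name__ == "__main__":
    for _ in range(2):          # a real conductor would loop indefinitely
        reconcile_once(DECLARED)
        time.sleep(0.1)
```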
So I'd say that the learnings for me, as a technologist coming into a CEO role, are how to relate these deeply technical concepts to the world in ways that are approachable, and that can show people a path that they want to get involved with. But I think the learnings that I've had at AWS and at Fugue are, this is the beginning of this ride. It's not going to end at containers, it's not going to end at Lambda, it's going to continue to evolve. And the Cloud in ten years is going to look massively different than it does now. >> So, when you said, "to get it right," the computer, I mean, such as what, or in what way? I mean, we have paths, right, routes you could take. So you're saying that there are a lot of options that will be pitfalls, and others that would be great opportunities. >> Well, that's absolutely right. So, for example, betting on the wrong technologies too soon, in terms of where the Cloud is going to finally land, is a box canyon, right. That's an architectural dead end. If you cannot compose systems across all these disparate Cloud surfaces, the application boundary, the system boundary, is now drawn across services. You used to be able to open an IDE and see your application. Well, now that might be spread across virtual machines, containers, Lambda, virtual disks, block storage, machine learning services, human language recognition services. That's your application boundary. So, if you can't understand all of that in context, you're in real trouble. Because the change is accelerating. If you look at the rate of new services, year over year in the Cloud, it's going up, not down. So the future's tougher. >> So, if I'm a government service, though, and I think John just talked about this, I'm just now getting confidence, right? >> Yes. >> I'm really feeling a little bit better, because I met somebody to hold my hand. And then I hear on the other hand, say, we have to make sure we get this right. So now all of a sudden, I'm backing off the edge again. I'm not so sure. So how do you get your public sector client base to take those risks, or take those daring steps, if you will? >> You know, we've had a lot of really great conversations and have a lot of great relationships in public sector, and what we're seeing there is like the commercial world. I mean, public sector wasn't that far behind commercial on Cloud. When I was at Amazon, y'know, five years ago, I worked mostly with public sector customers, and they were trying hard there, they were champions already, moving there. So, one of the things that Fugue does very effectively is, because we have this ability to deterministically, programmatically follow the rules, it takes it off of the humans, having to go and check. And that's always the slow and expensive part. So we can give a lot of assurance to these government agencies that, for example, if one of their development teams chooses to deploy something to Cloud, in the past, they'd have to go look for that. Well, with Fugue, they literally cannot deploy it, unless it's correct. And that's what I mean by "get it right." It's the developer who's sitting there, and I've been a developer for decades, they want to do things by the rules. They want to do things correctly. But they don't always want to read a stack of books like this and follow, y'know, check the boxes. So, with Fugue, you just get a compiler error and you keep going.
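The "literally cannot deploy it unless it's correct" behavior amounts to a gate in the deployment path: run the checks, and refuse to proceed on any violation, the way a compiler error stops a build. Here is a minimal sketch of that gate as a script that exits non-zero, as a CI/CD step would when failing a pipeline; the rules, resource model, and deploy() helper are invented for illustration and are not Fugue's actual behavior.

```python
# Hypothetical sketch of a deployment gate: validation runs before anything is
# provisioned, and any violation stops the deployment entirely.

import sys
from typing import List

def check_composition(resources: List[dict]) -> List[str]:
    """Return human-readable violations for the proposed resources."""
    problems = []
    for r in resources:
        if r.get("type") == "bucket" and r.get("acl") == "public-read":
            problems.append(f"{r['name']}: public buckets are not allowed")
        if r.get("type") == "database" and not r.get("encrypted", False):
            problems.append(f"{r['name']}: databases must be encrypted at rest")
    return problems

def deploy(resources: List[dict]) -> None:
    # In a real system this is where cloud resources would be created.
    for r in resources:
        print(f"deploying {r['type']} '{r['name']}'")

if __name__ == "__main__":
    proposed = [
        {"type": "bucket", "name": "public-assets", "acl": "public-read"},
        {"type": "database", "name": "citizen-records", "encrypted": True},
    ]
    violations = check_composition(proposed)
    if violations:
        for v in violations:
            print(f"BLOCKED: {v}", file=sys.stderr)
        sys.exit(1)          # the pipeline fails; nothing gets deployed
    deploy(proposed)
```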
>> Josh, I wanted to ask you about a new category we see emerging, it's really not kind of mainstream yet, by Wikibon research, and sitting here in theCUBE, we get to see things a little bit early. Plus we have a data science team to skim through the predictive analytics. One thing that's clear is SaaS businesses are emerging. So, SaaS is growing at an astounding rate, platform as a service, and infrastructure as a service, I mean, I don't necessarily see it that way, and I don't think you do either. It's infrastructure and SaaS, pretty much. So pretty much everyone's going to, at some point, be a Cloud service provider. And there'll be a long-tail distribution, we believe, from niche to completely huge, and the big ones are going to be the Amazons, the Facebooks, the Googles, but then there's going to be service providers that are going to emerge. They're going to be on Clouds, with governments, so we believe that to be true. If you believe that to be true, then the question is, how do I scale it? So, now I'm a solution architect in an enterprise. And like you said, it's intrinsically different in the Cloud than it was, say, on premise, or even in traditional enterprise computing. I've got to now completely change my architectural view. >> Yes. >> If you think it's a big computer, then you've got to be an operating systems guy. (laughs) You've got to say, okay, there's a linker, there's a loader, there's a compiler, I've got subsystems, I've got IO. You've got to start thinking that way. How do you talk to your friends, and colleagues, and customers around how to be a new solutions architect? >> Yeah, so I think it's a balancing act. Because we are in this transition stage, right. The modern Cloud is still a Prius. (chuckles) And the future Cloud is the Tesla, in terms of how customers use it. We're in this transition phase in technologies, so you have to have one foot in both camps. Immutable infrastructure patterns are incredibly important to any kind of new development, and if you go to the Fugue.co site or O'Reilly, we wrote a little book with them on immutable infrastructure patterns. So, the notion there is, you don't maintain anything, you just replace it. So you stand up a compute instance, Werner likes to talk about, these are cattle, not pets, y'know, or paper cup computing, that's right. You never touch it, you never do configuration management, you crumple it up, throw it away, and make a new one. That's the right new pattern, but a lot of the older systems that people still rely upon don't work that way. So, you have to have a foot in each camp as a solutions architect in Cloud, or as the CEO of a Cloud company. You have to understand both of those, and understand how to bridge between them. And understand it's an evolution-- >> And the roles within the architecture, as well. >> That's right. >> They coexist, this coexistence. >> Absolutely. You know, it's interesting you said, "everyone's going to become a service provider." I'd put that a little differently, the only surface that matters in the future is APIs. Everything is APIs. And how you express your APIs is a business question. But, fundamentally, that's where we are. So, whether you're a Salesforce with a SaaS, I really don't like the infrastructure and SaaS delineation, because I think the line's very blurred. It's just APIs that you compose into applications. >> Well, it's a tough one, this is a good debate we could have, certainly, we aren't going to do it live on theCUBE, and arm wrestle ourselves here, and talk about it.
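The immutable-infrastructure pattern Josh describes just above, crumple it up, throw it away, and make a new one, can be sketched as replacement instead of in-place change: build a fresh instance from a new image, check it, move traffic over, and discard the old one. The launch(), health_check(), shift_traffic(), and terminate() functions below are hypothetical stand-ins for real provisioning calls, not any particular vendor's API.

```python
# Hypothetical sketch of immutable infrastructure: roll forward by replacement
# rather than patching or reconfiguring a running instance.

import itertools

_ids = itertools.count(1)

def launch(image: str) -> str:
    instance_id = f"i-{next(_ids):04d}"
    print(f"launched {instance_id} from image {image}")
    return instance_id

def health_check(instance_id: str) -> bool:
    print(f"health check passed for {instance_id}")
    return True

def shift_traffic(old: str, new: str) -> None:
    print(f"traffic moved from {old} to {new}")

def terminate(instance_id: str) -> None:
    print(f"terminated {instance_id} (crumpled up and thrown away)")

def replace_instance(current_id: str, new_image: str) -> str:
    """Upgrade by replacement; the old instance is never modified in place."""
    candidate = launch(new_image)
    if not health_check(candidate):
        terminate(candidate)          # failed candidate is discarded, old one stays
        return current_id
    shift_traffic(current_id, candidate)
    terminate(current_id)
    return candidate

if __name__ == "__main__":
    running = launch("web-v1.0")
    running = replace_instance(running, "web-v1.1")
```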
But, one of the things about the Cloud that's amazing is the horizontal scalability of it. So, you have great scalability horizontally, but also, you need to have specialty, specialization at the app layers. >> Josh: Yes. >> You can't pick one or the other, they're not mutually exclusive. >> Josh: That's right. >> So, you say, okay, what does a stack look like? (laughs) If everything's an API, where the hell's the stack? >> Yeah, well that's why we wrote Fugue. Because Fugue does unify all that. Right, you can design one composition in Fugue. One description of that stack. And then run the whole thing as a process, like you would run Apache. >> So you're essentially wrapping a system around it, almost like what Docker containers are for microservices, you are for computing. >> And including the container managers. (John F. laughs) So that's just one more service to us, that's exactly right. And, y'know, you asked me earlier, "how does this affect agencies?" So one thing we're really excited about today is, we just announced today, we're live on GovCloud, so we support GovCloud now, you can run in the commercial regions, you can run in GovCloud, and one of the cool things you can do with Fugue, because of that system-wrapping capability, is build systems in public regions, and deploy them on GovCloud and they'll just work, instead of having to figure out the differences. >> Oh, that's the thing you think about with the Cloud, standing something up, that's a verb now. "Hey I'm going to stand this up." That's what used to be Cloud language, now that's basically app language. >> I think what you're getting at here is something near to my heart, which is that all there are anymore are applications. Talking about infrastructure is kind of like calling a chair an assembly of wood. What we're really about are these abstractions, and the application is the first-class citizen. >> I want to be comfortable, and sit down, take a load off. >> Josh: That's right, that's right. >> That's what a chair does. And there's different versions. >> John W.: You don't want to stand up, you want to sit down. >> And there's different, there's the Tesla of the chairs, and then there's the wooden hard chair for your lower back, for your back problems. >> Josh: Exactly, exactly. >> The Tesla really is a good use case, because that points to the, what I call, the fine jewelry of a product. Right, they really artistically built an amazing product, where the value is not so much the car, yeah there's some innovations with the car, you've got that, with electric. But it's the data. The data powering the car, that brings back the question of the apps and the data, again, I want to spend all my time thinking about how to create a sustainable, competitive advantage, and serve my customers, rather than figure out how to architect solutions that require configuration management, and tons of labor. This is where the shift is. This is where the shift is going: from non-differentiated operations to high-value-added capabilities. So, it's not like jobs are just going away. Yeah, some jobs are going away, I believe that. But it's like saying the ATM was going to kill the bank teller. Actually, more branches opened up as a result. >> Oh yeah, this is the democratization of computing as a service. And that's only going to grow computing as a whole. Getting back to the, kind of, fine jewelry, you talked about data as part of that, I believe another part of that is the human experience of using something.
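Josh's "one description of that stack... run the whole thing as a process" can be pictured as a single declarative composition spanning otherwise disparate service types, with the target region as just another input, which is also the spirit of the build-in-a-commercial-region, run-it-in-GovCloud point. A minimal sketch using invented dataclasses; this is not Fugue's actual composition language, only an illustration of the idea.

```python
# Hypothetical sketch of one declarative composition covering disparate
# services (a VM, a container service, a function), parameterized by region.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Instance:
    name: str
    size: str

@dataclass
class ContainerService:
    name: str
    image: str
    replicas: int

@dataclass
class Function:
    name: str
    runtime: str
    handler: str

@dataclass
class Composition:
    name: str
    region: str
    instances: List[Instance] = field(default_factory=list)
    containers: List[ContainerService] = field(default_factory=list)
    functions: List[Function] = field(default_factory=list)

    def describe(self) -> None:
        print(f"composition '{self.name}' targeting {self.region}:")
        for resource in [*self.instances, *self.containers, *self.functions]:
            print(f"  - {type(resource).__name__}: {resource.name}")

if __name__ == "__main__":
    # The same description can target different regions without being rewritten.
    for region in ("us-east-1", "us-gov-west-1"):
        app = Composition(
            name="records-app",
            region=region,
            instances=[Instance("batch-worker", "m5.xlarge")],
            containers=[ContainerService("api", "records-api:1.4", replicas=3)],
            functions=[Function("redact", "python3.11", "handler.main")],
        )
        app.describe()
```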
And I think that is often missing in enterprise software. So, you'll see in the current release of Fugue, we just put into beta, we've spent about two years on it, a graphical user interface that shows you everything about the system in an easily digestible way. And so, I think that the, kind of, the effect of the iPhone on computing in the enterprise is important to understand, too. The person that's sitting there in an enterprise environment during their day job gets in their Tesla, because they also love beautiful things. >> Well, I mean, there's no better place for you guys to do that democratization, and liberation, if you will, than the government Cloud, and the public sector. They need it, right now they've been on antiquated systems for (chuckles) yeah, not only just antiquated, but siloed, y'know, COBOL systems, mainframes, and they've got a lot of legacy stuff. >> There is, there's a lot of legacy stuff, and there are a lot of inefficiencies in the process model, in how things get done, and so, we love that AWS has come in, and when we were there, we helped do that part. And now with Fugue, we want to take these customers to, kind of, the next level of being able to move forward quickly. >> Well, if you want to take your agency to the Cloud, Fugue is your vehicle to do that. Josh Stella, founder, CEO. Thanks for being with us here on theCUBE. >> Thanks so much. >> We appreciate it. We'll continue, live from Washington, D.C., the nation's capital, here at AWS Public Sector Summit 2017 on theCUBE. >> John F.: Alright, great job, well done. (upbeat techno music)