How The Trade Desk Reports Against Two 320-Node Clusters Packed with Raw Data
Hi everybody, thank you for joining us today for the virtual Vertica BDC 2020. Today's breakout session is entitled "Vertica Eon Mode at The Trade Desk." My name is Sue LeClair, director of marketing at Vertica, and I'll be your host for this webinar. Joining me is Ron Cormier, senior Vertica database engineer at The Trade Desk. Before we begin, I encourage you to submit questions or comments during the virtual session. You don't have to wait, just type your question or comment in the question box below the slides and click submit. There will be a Q&A session at the end of the presentation, and we'll answer as many questions as we're able to during that time. Any questions that we don't address, we'll do our best to answer offline. Alternatively, you can visit the Vertica forums to post your questions there after the session; our engineering team is planning to join the forums to keep the conversation going. Also, a quick reminder that you can maximize your screen by clicking the double-arrow button in the lower right corner of the slide. And yes, this virtual session is being recorded and will be available to view on demand this week. We'll send you a notification as soon as it's ready. So let's get started. Over to you, Ron.

Thanks, Sue. Before I get started, I'll just mention that my slide template was created before social distancing was a thing, so hopefully some of the images will harken us back to a time when we could actually all be in the same room. With that, before I get into the technology, I just wanted to cover my background real quick, because I think it speaks to where we're coming from with Vertica Eon at The Trade Desk. I'll start out by pointing out that prior to my time at The Trade Desk I was a tech consultant at HP, and I traveled the world working with Vertica customers, helping them configure, install, and tune their Vertica databases and get them working properly. So I've seen the biggest and the smallest implementations and everything in between, and now I'm a principal database engineer at The Trade Desk. The reason I mention this is to let you know that I'm a practitioner; I'm working with the product every day, or most days. This isn't marketing material, so hopefully the technical details in this presentation are helpful. I work with Vertica, of course, and that is most relevant to our ETL and reporting stack: what we're doing is taking data into Vertica and running reports for our customers.

We're in ad tech, so I did want to briefly describe what that means and how it affects our implementation. I'm not going to cover all the details of this slide, but basically The Trade Desk is a DSP, a demand-side platform, and so we place ads on behalf of our customers: ad agencies and their customers, the advertisers, the brands themselves. The ads get placed onto websites and mobile applications, anywhere digital advertising happens. Publishers are what you see here, espn.com, msn.com, and so on. Every time a user goes to one of these sites or one of these digital places, an auction takes place, and what people are bidding on is the privilege of showing one or more ads to that user. This is really important because it helps fund the internet; ads can be annoying sometimes, but they are incredibly helpful in how we get much of our content. And this is happening in real time at very high volumes: on the open internet there are anywhere from seven to thirteen million auctions happening every second, and of those, The Trade Desk bids on hundreds of thousands per second. Any time we bid, we have an event that ends up in Vertica, and that's one of the main drivers of our data volume. Certainly other events make their way into Vertica as well, but I wanted to give you a sense of the scale of the data and how it's driven by real people in the world.

So let's dig a little more into the workload. We have the three V's in spades, like many people listening: massive volume, velocity, and variety. In terms of data sizes, I've got some stats here on the raw data we deal with on a daily basis. We ingest 85 terabytes of raw data per day, and once we get it into Vertica we do some transformations: we do matching, which is basically joins, and we do aggregation, group-bys, to reduce the data and clean it up so it's more efficient to consume by our reporting layer. That matching and aggregation produces about ten new terabytes of raw data per day. It all comes from the data that was ingested, but it's new data, so it's reduced quite a bit, though still pretty high volume. We then run reports on that aggregated data on behalf of our customers, about 40,000 reports per day; actually, that's an older number, it's probably closer to 50 or 55,000 reports per day at this point. I think it's probably a pretty common use case for Vertica customers. It's maybe a little different in the sense that most of the reports are batch reports; it's not a user sitting at a keyboard waiting for the result. Basically, we have a workflow where we do the ingest, we do the transform, and then, once all the data is available for a day, we run reports on that daily data on behalf of our customers, and then we send the reports out via email or drop them in a shared location, and they look at the reports at some later point in time.
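The pipeline just described, ingest raw events, match (join) them, then aggregate (group by) to shrink the data before reporting, can be sketched in miniature. This is an illustrative toy, not The Trade Desk's actual code; the event shapes and field names are invented for the example:

```python
from collections import defaultdict

def match_and_aggregate(bid_events, win_events):
    """Join bid events to win events on auction_id (the "matching" step),
    then group by campaign to produce a small aggregate (the "group-by" step)."""
    wins_by_auction = {w["auction_id"]: w for w in win_events}  # build side of the join
    totals = defaultdict(lambda: {"bids": 0, "wins": 0, "spend": 0.0})
    for bid in bid_events:  # probe side of the join
        row = totals[bid["campaign"]]
        row["bids"] += 1
        win = wins_by_auction.get(bid["auction_id"])
        if win:
            row["wins"] += 1
            row["spend"] += win["price"]
    return dict(totals)

bids = [
    {"auction_id": 1, "campaign": "shoes"},
    {"auction_id": 2, "campaign": "shoes"},
    {"auction_id": 3, "campaign": "travel"},
]
wins = [{"auction_id": 1, "price": 0.42}, {"auction_id": 3, "price": 1.10}]
agg = match_and_aggregate(bids, wins)
print(agg)
```

Three raw bid rows collapse into two aggregate rows, which is the same shape of reduction the talk describes at 85 TB/day in and roughly 10 TB/day of new aggregates out.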
So until Eon, we did all this work on enterprise Vertica. At our peak we had four production enterprise clusters, each of which held two petabytes of raw data, and I'll give you some details on how those enterprise clusters were configured in terms of hardware. But before I do that, I want to talk about the reporting workload specifically. The reporting workload is particularly lumpy, and what I mean by that is there's a bunch of work, a bunch of queries that we need to run, in a short period of time after the day's ingest and aggregation is completed, and then the clusters are relatively quiet for the remaining portion of the day. That's not to say they're not doing anything as far as read workload, they certainly are, but it's much less activity after that big spike. What I'm showing here is our reporting queue, and the spike is when all those reports become available to be processed. We can't run the reports until we've done the full ingest and matching and aggregation for the day, and so right around 1:00 or 2:00 a.m. UTC every day is when we get this spike. We affectionately call it the UTC hump, but basically it's a huge number of queries that need to be processed as soon as possible. We have service levels that dictate what "as soon as possible" means, but I think the spike illustrates our use case pretty accurately, and, as we'll see, it's really well suited for Vertica Eon.

So we had our enterprise clusters that I mentioned earlier, and just to give you some details on what they looked like: they were independent and mirrored, and what that means is all four clusters held the same data. We did this intentionally because we wanted to be able to run any report anywhere. We've got this big queue, a big number of reports that need to be run. We started with one cluster and found it couldn't keep up, so we added a second; then the number of reports we needed to run in that short period of time went up, and so on, and we eventually ended up with four enterprise clusters. Like I said, they were mirrored, they all had the same data; they weren't, however, synchronized, they were independent. Basically, we would run the entire pipeline, the ingest and the matching and the aggregation, on all the clusters in parallel. It wasn't as if each cluster proceeded to the next step in sync with the other clusters; they ran independently, so it was sort of like each cluster would eventually become consistent. This worked pretty well for us, but it created some imbalances, and there were some cost concerns that we'll dig into. Just to tell you about each of these clusters: they each had 50 nodes, with 72 logical CPU cores, half a terabyte of RAM, a bunch of RAIDed disk drives, and 2 petabytes of raw data, as I stated before.
So pretty big, beefy nodes; these were physical nodes that we had in our data centers. Actually, we leased these nodes, so they sat in our data center provider's data centers, and these were what we built our business on. But there were a number of challenges that we ran into as we continued to build our business and add data and add workload.

The first one, which I'm sure many can relate to, is capacity planning. We had to think about the future and try to predict the amount of work that was going to need to be done and how much hardware we were going to need to meet that demand, and that's just generally a hard thing to do. It's very difficult to predict the future, as we can probably all attest to given how much the world has changed even in the last month. So it's a very difficult thing to look six, twelve, eighteen months into the future and get it right, and what we tended to do is make our plans and estimates very conservative, so we overbought in a lot of cases. Not only that, we had to plan for the peak: we were planning for that point in time, those hours in the early morning, when we had all those reports to run. So we ended up buying a lot of hardware, and we actually overbought at times, and then as the hardware aged, our workload would gradually approach matching the capacity. That was one of the big challenges.

The next challenge is that we were running out of disk. We wanted to add data in two dimensions: we wanted to add more columns to our big aggregates, and we wanted to keep our big aggregates for longer periods of time. So both horizontally and vertically we wanted to expand the datasets, but we were basically running out of disk. There was no more disk, and it's hard to add disk to Vertica in enterprise mode, not impossible, but certainly hard. And one cannot add disk without adding compute, because in enterprise mode the disk is local to each of the nodes for most people. You can do node storage with SANs and other external arrays, but there are a number of other challenges with that. So in order to add disk we had to add compute, and that basically kept us out of balance: we were adding more compute than we needed for the amount of disk. That was a problem. And certainly with physical nodes, getting them ordered, delivered, racked, and cabled, even before we start installing Vertica, there are lead times there. It's also a long commitment: since, like I mentioned, we lease hardware, we were committing to these physical servers for two or three years at a time, and I mentioned that can be a hard thing to do, but we wanted to lease to keep our capex down.

We wanted to keep our aggregates for a long period of time, and we could have done more exotic things to help with this in enterprise mode if we had to. We could have started to daisy-chain clusters together, and that would have been a non-trivial engineering effort, because we would need to figure out how to reshard the data across all the clusters, how to migrate data from one cluster to another, and how to run queries across clusters. If a sharded dataset spans two clusters, we would have had to aggregate within each cluster, maybe, and then build something on top to aggregate the data from each of those clusters. Not impossible things, but certainly not easy things. Luckily for us, we started talking to Vertica about separation of compute and storage, and I know other customers were talking to Vertica as well.
People had these problems, and so Vertica Eon mode came to the rescue. What I want to do is talk about Eon mode really briefly for those in the audience who aren't familiar. It's basically Vertica's answer to the separation of compute and storage: it allows one to scale compute and/or storage separately, and there are a number of advantages to doing that. Whereas in the old enterprise days, when you added compute you added storage and vice versa, now we can add one or the other, or both, according to what we want. Really briefly, here's how it works; this figure was taken directly from the Vertica documentation. It takes advantage of the cloud, in this case Amazon Web Services, and the elasticity in the cloud. You see EC2 instances, elastic cloud compute servers, that access data in an S3 bucket: three EC2 nodes, and the bucket with the blue objects in this diagram. There are a couple of big differences. One, the persistent storage of the data, where the data lives, is no longer on each of the nodes; the persistent store of the data is the S3 bucket. What that does is basically solve the first of our big problems, which was that we were running out of disk: S3 has, for all intents and purposes, infinite storage, so we can keep much more data there. So the persistent data lives on S3 now. What happens when a query runs is that it runs on one of the three nodes you see here, and, setting the depot aside for a second, in a brand-new cluster that's just been spun up, the query will run on those EC2 nodes but there will be no data local to them. So those nodes will reach out to S3 and run the query on remote storage; the nodes are literally reaching out to the communal storage for the data and processing it entirely without using any data on the nodes themselves. That works pretty well, but it's not as fast as if the data were local to the nodes, so what Vertica did is build a caching layer on each of the nodes, and that's what the depot represents. The depot is some amount of disk that is local to the EC2 node, and when a query runs on remote storage, on the S3 data, it queues up that data for download to the nodes. The data will then reside in the depot, so the next query, or subsequent queries, can run on local storage instead of remote storage, and that speeds things up quite a bit. That's the role of the depot: it's basically a caching layer, and we'll talk about the details of how we size our depot.

The other thing I want to point out is that since this is the cloud, another problem Eon helps us solve is the concurrency problem. You can imagine that these three nodes are one sort of cluster; what we can do is spin up another three nodes and have them point to the same S3 communal storage bucket. Now we've got six nodes pointing to the same data, but we've isolated each set of three nodes so that they act as if they are their own cluster; Vertica calls them subclusters. So we've got two subclusters, each of which has three nodes, and what this has essentially done is double the concurrency, doubled the number of queries that can run at any given time, because we've now got this new chunk of compute which can answer queries. That has given us the ability to add concurrency much faster. And I'll point out that since it's the cloud and there are on-demand pricing models, we can have significant savings, because when a subcluster is not needed we can stop it and pay almost nothing for it.
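The depot behaves like a local cache over communal storage: queries pull the files they need onto local disk, and cold files are evicted to make room. As a mental model only (Vertica's actual depot policy has more nuance than plain LRU, with pinning and fetch controls), here is a least-recently-used cache sketch:

```python
from collections import OrderedDict

class DepotSketch:
    """Toy LRU model of an Eon depot: a fixed-size local cache in front of
    communal (S3) storage. Illustrative only; not Vertica's real algorithm."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.files = OrderedDict()  # file_id -> size, least recently used first

    def read(self, file_id, size, fetch_from_s3):
        if file_id in self.files:             # depot hit: serve locally, mark recent
            self.files.move_to_end(file_id)
            return "local"
        source = fetch_from_s3(file_id)       # depot miss: query runs on remote storage
        while self.used + size > self.capacity and self.files:
            _, evicted = self.files.popitem(last=False)  # evict least recently used
            self.used -= evicted
        if size <= self.capacity:             # cache so subsequent queries run locally
            self.files[file_id] = size
            self.used += size
        return source

depot = DepotSketch(capacity_bytes=100)
fetch = lambda fid: "remote"
print(depot.read("a", 60, fetch))  # remote (cold cache, first query pays the S3 trip)
print(depot.read("a", 60, fetch))  # local  (now resident in the depot)
print(depot.read("b", 60, fetch))  # remote (evicts "a" to make room)
print(depot.read("a", 60, fetch))  # remote (was evicted)
```

This captures the behavior described in the talk: the first query against cold data is slower, and subsequent queries speed up as the cache populates.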
That's really important and really helpful, especially for our workload, which, as I pointed out before, is so lumpy: during those hours of the day when it's relatively quiet, I can go stop a bunch of subclusters and pay almost nothing for them, and that yields nice cost savings. So that's Eon in a nutshell; obviously the engineers and the documentation can give you a lot more information, and I'm happy to field questions later on as well. But I want to talk about how we implemented Eon at The Trade Desk.

I'll start on the left-hand side at the top. What we're representing here are subclusters. Subcluster 0 is our ETL subcluster, and it is our primary subcluster. When you get into the world of Eon there are primary subclusters and secondary subclusters, and it has to do with quorum: primary subclusters are the subclusters that we always expect to be up and running, and they contribute to quorum; they decide whether there are enough nodes for the database to start up. This is where we run our ETL workload, the ingest, the matching, and the aggregation parts of the work that I talked about earlier. These nodes are always up and running because our ETL pipeline is always on; we're an internet ad tech company, like I mentioned, so we're constantly running ads, there's always data flowing into the system, and the matching and the aggregation are happening 24/7. So those nodes will always be up and running, and those processes need to be super efficient, and that is reflected in our instance type. Each of our subclusters is sixty-four nodes; we'll talk about how we came to that number, but the instance type for the ETL subcluster, the primary subcluster, is i3.8xlarge. That is one of the instance types that has quite a bit of NVMe storage attached, and we'll talk about that, plus 32 cores and 244 gigs of RAM on each node. I should have put the amount of NVMe on the slide, but I think it's about seven terabytes of NVMe storage per node. What that allows us to do is basically ensure that everything this subcluster does is always in depot, and that makes sure it's always fast.

Now we get to the secondary subclusters. These are, as mentioned, secondary, so they can stop and start, and it won't affect the cluster going up or down; they're sort of independent. We've got four of what we call read subclusters, and they're not read-only by definition; technically, any subcluster can ingest and create data within the database, and that will all get pushed to the S3 bucket. But logically, for us, they're read-only: most of the work they happen to do is read-only, which is nice, because if it's read-only it doesn't need to worry about commits. We let the primary subcluster, the ETL subcluster, worry about committing data, so we don't have to have all the nodes in the database participating in transaction commits. So we've got four read subclusters and one ETL subcluster, a total of five subclusters, each running sixty-four nodes, and that gives us a 320-node database all told. Not all those nodes are up at the same time, as I mentioned; often, for big chunks of the day, most of the read nodes are down, but they do all spin up during our busy time. For the read subclusters we've got i3.4xlarge, so again the i3 instance family, which has NVMe storage. These nodes have, I think, three and a half terabytes of NVMe per node, two NVMe drives that we RAID-0 together, plus 16 cores and 122 gigs of RAM. These are smaller, you'll notice, but it works out well for us, because the read workload is typically dealing with much smaller datasets than the ingest or the aggregation workload.
So we can run these workloads on smaller instances, save a little bit of money, and get more granularity in how many subclusters are stopped and started at any given time. The NVMe doesn't persist; the data on it isn't preserved when you stop and start. That's an important detail, but it's okay, because the depot does a pretty good job with its algorithm: it pulls in data that's recently used, and the victim that gets pushed out is the data that's least recently used, data that was used a long time ago and probably isn't going to be used again soon.

So we've got five subclusters, and we've actually got two of these clusters: a 320-node cluster in us-east and a 320-node cluster in us-west, so we've got high availability and region diversity. They're peers, like I talked about before; they're independent, and each runs 128 shards. What shards are is basically similar to segmentation: you take the dataset and divide it into chunks, and each subcluster can see the dataset in its entirety. So each subcluster is dealing with 128 shards. We chose 128 because it gives us even distribution of the data on 64-node subclusters, since 128 divides evenly by 64, so there's no data skew. And we chose 128 to future-proof, in case we wanted to double the size of any of the subclusters: we could double the number of nodes and still have no skew; the data would still be distributed evenly.

For disk, we've got a couple of RAID arrays. We've got an EBS-based array that the catalog uses, the catalog storage location; I think we take four EBS volumes and RAID-0 them together and come up with a 128-gigabyte drive. We wanted EBS for the catalog because we can stop and start nodes and that data will persist; it will come back when the node comes up, so we don't have to run a bunch of configuration when the node starts up.
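The shard-count choice described here (128 shards over 64-node subclusters, with headroom to double the node count) comes down to simple divisibility: as long as shards per node is a whole number, no node carries extra shards and no skew appears. A quick check:

```python
def shard_skew(shard_count, node_count):
    """Return (min, max) shards per node when shard_count shards are dealt
    round-robin across node_count nodes. Equal min and max means an even
    distribution, i.e. no data skew."""
    base, remainder = divmod(shard_count, node_count)
    return (base, base + 1) if remainder else (base, base)

# 128 shards on 64 nodes: exactly 2 shards per node everywhere -> no skew
print(shard_skew(128, 64))   # (2, 2)
# Future-proofing: doubling to 128 nodes still divides evenly
print(shard_skew(128, 128))  # (1, 1)
# A non-divisible choice would skew: e.g. 128 shards on 48 nodes
print(shard_skew(128, 48))   # (2, 3)
```

This is why "a power of two, at least double your node count" falls out as the general recommendation repeated later in the talk.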
Basically, the node starts, it automatically joins the cluster, and very shortly thereafter it starts processing work. So that's the catalog on EBS. Now, the NVMe is another RAID-0, as I mentioned, and this data is ephemeral: when we stop and start, it goes away. Basically, we take 512 gigabytes of the NVMe and give it to the data and temp storage location, and then we take whatever is remaining and give it to the depot. Since the ETL and the read subclusters are different instance types, the depot is sized differently, but otherwise it's the same across all subclusters.

It all adds up. We've now stopped purging data for some of our big aggregates, and we've added a bunch more columns, and at this point we have 8 petabytes of raw data in each Eon cluster, which is obviously about 4 times what we could hold in our enterprise clusters. And we can continue to add to this; maybe we need to add compute, maybe we don't, but the amount of data that can be held there can obviously grow much more.

We've also built an auto-scaling tool, a service that basically monitors the queue I showed you earlier, monitors for those spikes, and when it sees a spike, it goes and starts up instances in one or more of the subclusters. That's how we have compute match the demand.

I'll also point out that we actually have one subcluster of specialized nodes; it's not strictly a customer-reports subcluster. We have this tool called Planner, which basically optimizes ad campaigns for our customers. We built it, it runs on Vertica, it uses data in Vertica, it runs Vertica queries, and it was wildly successful. So we wanted to have some dedicated compute for it, and with Eon it was really easy to basically spin up a new subcluster and say: here you go, Planner team, do what you want; you can completely maximize the resources on these nodes, and it won't affect any of the other operations that we're doing, the ingest, the matching, the aggregation, or the reports. So it gave us a great deal of flexibility and agility, which is super helpful.
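The auto-scaling service is described only at a high level, so here is a hedged sketch of just the decision logic: watch the report-queue depth and decide how many subclusters should be running. The thresholds, the drain-in-one-hour target, and the per-subcluster throughput figure are all invented for illustration; in practice, the actual start/stop calls would go to the cloud API and Vertica's subcluster management, which this sketch deliberately leaves out.

```python
def subclusters_to_run(queue_depth, reports_per_subcluster_hour,
                       max_subclusters, min_subclusters=1):
    """Decide how many read subclusters should be up so the current report
    queue could drain in roughly one hour. Pure decision logic; actually
    starting or stopping instances would be a separate cloud-API step."""
    needed = -(-queue_depth // reports_per_subcluster_hour)  # ceiling division
    return max(min_subclusters, min(needed, max_subclusters))

# Quiet part of the day: keep one subcluster up for ad-hoc analyst queries
print(subclusters_to_run(100, 10_000, max_subclusters=4))     # 1
# The UTC hump: tens of thousands of reports queued -> spin everything up
print(subclusters_to_run(40_000, 10_000, max_subclusters=4))  # 4
```

Keeping the decision a pure function of queue depth makes it trivial to test, while the side-effecting start/stop calls stay in a thin wrapper around the cloud SDK.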
So the question is: has it been worth it? And for us the answer has been a resounding yes. We're doing things that we never could have done at reasonable cost before: we've got more data, we've got these specialized nodes, and we're much more agile. So how do you quantify that? Well, it's not quite as simple and straightforward as you might hope. We still have enterprise clusters; of the four that we had at peak, we've still got two of those around, and we've got our two Eon clusters, but they're running different workloads, and they're comprised of entirely different hardware. The number of nodes is different, 64 per subcluster versus 50, which is going to give different performance. The workload itself is different: the aggregation is aggregating more columns on Eon, because that's where we have disk available, and the queries themselves are different; we're running more data-intensive queries on Eon, because that's where the data is available. So in a sense, Eon is doing the heavy lifting for our workload.

In terms of query performance, it's still a little anecdotal, but when the data is in the depot, performance matches that of the enterprise clusters quite closely. When the data is not in the depot and Vertica has to go out to S3 to get it, performance degrades, as you might expect, and it depends on the queries: things like counts are really fast, but if you need lots of data from the materialized columns, lots of columns, queries can run slower. Not orders of magnitude slower, but certainly a multiple of the amount of time. The good news is that after the data downloads, those Eon clusters quickly catch up as the cache populates.

In terms of cost, I'll give a little bit more quantification here. What I tried to do is multiply it out: if I wanted to run the entire workload on enterprise, and the entire workload on Eon, with all the data we have today, all the queries, everything, to get an apples-to-apples comparison. For enterprise, the estimate is that we would need approximately 18,000 CPU cores all together. That's a big number, and it doesn't even cover all the non-trivial engineering work that would be required, which I referenced earlier: things like sharding the data among multiple clusters, migrating the data from one cluster to another, the daisy-chain type stuff. So that's one data point. Now for Eon, to run the entire workload, the estimate is that we would need about twenty thousand four hundred and eighty CPU cores, so more CPU cores than enterprise. However, about half of those, roughly ten thousand CPU cores, would only run for about six hours per day, and with the on-demand pricing and elasticity of the cloud, that is a huge advantage. So we are definitely moving as fast as we can to being all Eon; we have time left on our contract with the enterprise clusters, so we're not able to get rid of them quite yet, but Eon is certainly the way of the future for us.

I also want to point out that we have found Eon to be the most efficient MPP database on the market, and what that refers to is that for a given dollar of spend, we get the most out of Vertica compared to other cloud MPP database platforms. So our business is really happy with what we've been able to deliver with Eon.
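The apples-to-apples estimate can be turned into rough core-hour arithmetic. The core counts and the six-hours-per-day figure for about half the Eon cores come from the talk; the assumptions that the remaining Eon cores run 24/7 and that cost scales linearly with core-hours are mine, for illustration only:

```python
# Enterprise: all ~18,000 cores must exist (and be paid for) around the clock.
enterprise_core_hours = 18_000 * 24

# Eon: ~20,480 cores total, but roughly half (~10,240) only run ~6 hours/day
# thanks to on-demand subclusters; assume the rest run 24/7.
eon_always_on = 10_240 * 24
eon_burst = 10_240 * 6
eon_core_hours = eon_always_on + eon_burst

print(enterprise_core_hours)  # 432000 core-hours/day
print(eon_core_hours)         # 307200 core-hours/day
print(round(1 - eon_core_hours / enterprise_core_hours, 2))  # 0.29
```

So despite needing more peak cores, Eon consumes roughly 29% fewer core-hours per day under these assumptions, which is the elasticity advantage the talk is pointing at.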
Eon has also given us the ability to begin a new use case, one that is probably pretty familiar to folks on the call: it's UI-based. We'll have a website that our customers can log into, and on that website they'll be able to run reports and queries through the website and have them run directly on a separate, dedicated Eon cluster. So it's much more latency-sensitive and concurrency-sensitive. The workload that I've described up until this point has been pretty steady throughout the day; then we get our spike, and then it goes back to normal for the rest of the day. This workload will potentially be more variable: we don't know exactly when our engineers are going to deliver some huge feature that is going to make a lot of people want to log into the website and check how their campaigns are doing. But Eon really helps us with this, because we can add capacity so easily; we can add compute and scale up and down as needed, and it allows us to match the concurrency, even though the concurrency is much more variable, without a big, long lead time. So we're really excited about this.

For the last slide here, I just want to leave you with some things to think about if you're about to embark, or are getting started, on your journey with Vertica Eon. One of the things you'll have to think about is the node count and the shard count; they're kind of tightly coupled. We determined the node count by spinning up some instances in a single subcluster and measuring performance, finding an acceptable performance level considering the current workload, future workload, and the queries we had when we started. So we went with 64: we certainly wanted to increase over 50, but we didn't want the subclusters to be too big, because of course it costs money, and we like to do things in powers of two. Then the shard count: the shards, again, are like data segmentation, a new type of segmentation on the data. Starting out we went with 128, and the reason is so that we could have no skew, with each node processing the same amount of data, and so that we could future-proof it. That's probably a nice general recommendation: double the shard count relative to the node count.

The instance type, and how much depot space, are certainly things you're going to want to consider. Like I was talking about, we went for the i3.4xlarge and i3.8xlarge because they offer good depot storage, which gives us really consistent, good performance with everything in depot. I think we're going to use the r5 or r4 instance types for our UI cluster: the data there is much smaller, so there's much less dependence on the depot, and we don't need the NVMe storage.

You're also going to want to have a mix of reserved and on-demand instances if you're a 24/7 shop like we are. Our ETL subclusters are reserved instances, because we know we're going to run those 24 hours a day, 365 days a year; there's no advantage to having them be on-demand, since on-demand costs more than reserved. So we get cost savings by figuring out what we're going to keep running. It's the read subclusters that are, for the most part, on-demand, though one of our read subclusters is actually on 24/7, because we keep it up for ad-hoc queries, analyst queries: we don't know exactly when they're going to hit, and the analysts want to be able to continue working whenever they want.

In terms of the initial data load, the initial data ingest, what we had to do, and how it still works today, is you've got to basically load all your data from scratch; there isn't great tooling just yet for populating data or moving from enterprise to Eon. What we did is export all the data in our enterprise cluster into Parquet files, put those out on S3, and then ingested them into our first Eon cluster.
first Eon cluster so it's kind of a pain we script it out a bunch of stuff obviously but they worked and the good news is that once you do that like the second yon cluster is just a bucket copy in it and so there's tools missions that can help help with that you're going to want to manage your fetches and addiction so this is the data that's in the cache is what I'm referring to here the data that's in the default and so like I talked about we have our ETL cluster which has the most recent data that's just an injected and the most difficult data that's been aggregated so this really recent data so we wouldn't want anybody logging into that ETL cluster and running queries on big aggregates to go back one three years because that would invalidate the cache the depot would start pulling in that historical data and it was our assessing that historical data and evicting the recent data which would slow things out flow down that ETL pipelines so we didn't want that so we need to make sure that users whether their service accounts or human users are connecting to the right phone cluster and I mean we just created the adventure users with IPS and target groups to palm those pretty-pretty it was definitely something to think about lastly if you're like us and you're going to want to stop and start nodes you're going to have to have a service that does that for you we're where we built this very simple tool that basically monitors the queue and stops and starts subclusters accordingly we're hoping that that we can work with Vertica to have it be a little bit more driven by the cloud configuration itself so for us all amazon and we love it if we could have it have a scale with the with the with the eight of us can take through points do things to watch out for when when you're working with Eon is the first is system table queries on storage layer or metadata and the thing to be careful of is that the storage layer metadata is replicated it's caught as a copy for each of the 
sub clusters that are out there so we have the ETL sub cluster and our resources so for each of the five sub clusters there is a copy of all the data in storage containers system table all the data and partitions system table so when you want to use this new system tables for analyzing how much data you have or any other analysis make sure that you filter your query with a node name and so for us the node name is less than or equal to 64 because each of our sub clusters at 64 so we limit we limit the nodes to the to the 64 et 64 node ETL collector otherwise if we didn't have this filter we would get 5x the values for counts and some sort of stuff and lastly there is a problem that we're kind of working on and thinking about is a DC table data for sub clusters that are our stops when when the instances stopped literally the operating system is down and there's no way to access it so it takes the DC table DC table data with it and so I cannot after after my so close to scale up in the morning and then they scale down I can't run DC table queries on how what performed well and where and that sort of stuff because it's local to those nodes so we're working on something so something to be aware of and we're working on a solution or an implementation to try to suck that data out of all the notes you can those read only knows that stop and start all the time and bring it in to some other kind of repository perhaps another vertical cluster so that we can run analysis and monitoring even you want those those are down that's it um thanks for taking the time to look into my presentation really do it thank you Ron that was a tremendous amount of information thank you for sharing that with everyone um we have some questions come in that I would like to present to you Ron if you have a couple min it your first let's jump right in the first one a loading 85 terabytes per day of data is pretty significant amount what format does that data come in and what does that load process 
look like yeah a great question so the format is a tab separated files that are Jesus compressed and the reason for that could basically historical we don't have much tabs in our data and this is how how the data gets compressed and moved off of our our bidders the things that generate most of this data so it's a PSD gzip compressed and how you kind of we kind of have how we load it I would say we have actually kind of a Cadillac loader in a couple of different perspectives one is um we've got this autist raishin layer that's homegrown managing the logs is the data that gets loaded into Vertica and so we accumulate data and then we take we take some some files and we push them to redistribute them along the ETL nodes in the cluster and so we're literally pushing the file to through the nodes and we then run a copy statement to to ingest data in the database and then we remove the file from from the nodes themselves and so it's a little bit extra data movement which you may think about changing in the future assisting we move more and more to be on well the really nice thing about this especially for for the enterprise clusters is that the copy' statements are really fast and so we the coffee statements use memory but let's pick any other query but the performance of the cautery statement is really sensitive to the amount of available memory and so since the data is local to the nodes literally in the data directory that I referenced earlier it can access that data from the nvme stores and the kabhi statement runs very fast and then that memory is available to do something else and so we pay a little bit of cost in terms of latency and in terms of downloading the data to the nose we might as we move more and more PC on we might start ingesting it directly from s3 not copying the nodes first we'll see about that what's there that's how that's how we read the data interesting works great thanks Ron um another question what was the biggest challenge you found when 
migrating from on-prem to AWS uh yeah so um a couple of things that come to mind the first was the baculum the data load it was kind of a pain I mean like I referenced in that last slide only because I mean we didn't have tools built to do this so I mean we had to script some stuff out and it wasn't overly complex but yes it's just a lot of data to move I mean even with starting with with two petabytes so making sure that there there is no missed data no gaps making and moving it from the enterprise cluster so what we did is we exported it to the local disk on the enterprise buses and we then we push this history and then we ingested it in ze on again Allspark X oh so it's a lot of days to move around and I mean we have to you have to take an outage at some point stop loading data while we do that final kiss-up phase and so that was that was a challenge a sort of a one-time challenge the other saying that I mean we've been dealing with a week not that we're dealing with but with his challenge was is I mean it's relatively you can still throw totally new product for vertical and so we are big advantages of beyond is allow us to stop and start nodes and recently Vertica has gotten quite good at stopping in part starting nodes for a while there it was it was it took a really long time to start to Noah back up and it could be invasive but we worked with with the engineering team with Yan Zi and others to really really reduce that and now it's not really an issue that we think that we think too much about hey thanks towards the end of the presentation you had said that you've got 128 shards but you have your some clusters are usually around 64 nodes and you had talked about a ratio of two to one why is that and if you were to do it again would you use 128 shards ah good question so that is a reference the reason why is because we wanted to future professionals so basically we wanted to make sure that the number of stars was evenly divisible by the number of nodes and 
you could I could have done that was 64 I could have done that with 128 or any other multiple entities for but we went with 128 is to try to protect ourselves in the future so that if we wanted to double the number of nodes in the ECL phone cluster specifically we could have done that so that was double from 64 to 128 and then each node would have happened just one chart that it had would have to deal with so so no skew um the second part of question if I had to do it if I had to do it over again I think I would have done I think I would have stuck with 128 we still have I mean so we either running this cluster for more than 18 months now I think especially in USC and we haven't needed to increase the number of nodes so in that sense like it's been a little bit extra overhead having more shards but it gives us the peace of mind that we can easily double that and not have to worry about it so I think I think everyone is a nice place to start and you may even consider a three to one or four to one if if you're if you're expecting really rapid growth that you were just getting started with you on and your business and your gates that's a small now but what you expect to have them grow up significantly less powerful green thank you Ron that's with all the questions that we have out there for today if you do have others please feel free to send them in and we will get back to you and we'll respond directly via email and again our engineers will be available on the vertical forums where you can continue the discussion with them there I want to thank Ron for the great presentation and also the audience for your participation in questions please note that a replay of today's event and a copy of the slides will be available on demand shortly and of course we invite you to share this information with your colleagues as well again thank you and this concludes this webinar and have a great day you
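Ron's enterprise-versus-Eon sizing can be sanity-checked with quick back-of-the-envelope arithmetic. This is a sketch of his rough figures from the talk — the 18,000 and 20,480 core counts and the "about half the cores for about six hours a day" split are his characterizations, not measured billing data:

```javascript
// Rough daily core-hours implied by the two sizings quoted in the talk.
// Assumes the enterprise fleet runs 24h/day and that, on Eon, half of
// the 20,480 cores run 24h while the other half run only ~6h.
const enterpriseCoreHours = 18000 * 24;

const eonAlwaysOnCores = 20480 / 2; // ~10,240 cores up all day
const eonBurstyCores = 20480 / 2;   // ~10,240 cores up ~6 h/day
const eonCoreHours = eonAlwaysOnCores * 24 + eonBurstyCores * 6;

console.log(enterpriseCoreHours); // 432000
console.log(eonCoreHours);        // 307200
// Eon needs more peak cores but ~29% fewer core-hours per day.
console.log((1 - eonCoreHours / enterpriseCoreHours).toFixed(2)); // "0.29"
```

That gap between peak cores and core-hours is exactly where the on-demand elasticity he mentions pays off.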
Monica Ene-Pietrosanu, Intel Corporation | Node Summit 2017
>> Hey, welcome back, everybody. Jeff Frick here with theCUBE. We are in downtown San Francisco at the Mission Bay Convention Center at Node Summit 2017. We've been coming to Node Summit off and on for a number of years. And it's pretty amazing, the growth of this platform for development — it really seems to have taken off. There are about 800 or 900 people here, which is kind of the limit of the facility here at Mission Bay. But we're really excited to be here, and it's not surprising to see that Intel is here in full force. Our first guest is Monica Ene-Pietrosanu. She is the Director of Software Engineering for Intel, welcome. >> Thank you, hello, and thank you very much for inviting me. It's definitely exciting to be here. Node is this dynamic community that grows in one year like few others can, so it's always exciting to be part of one of these events and present the work we are doing for Node. >> So you're on a panel later on, Taking Benchmarking to the Next Level. What is that all about? >> That is part of the work we are doing for Node. And I want to mention here the word stewardship. Intel is a long-time contributor to open source communities and has assumed performance leadership in many of them. We are doing the same for Node: we are trying to be a steward for performance in Node.js. What this means is we are watching to make sure that every check-in that happens doesn't impact performance. We are also optimizing Node so it gets the best out of the hardware — Node runs best on the newest hardware that we have. And also, we are developing, right now, new measures, new benchmarks, which better reflect the reality of data center use cases: the way Node is getting used in the cloud, the way Node is getting used in the data center. There are very few ways to measure that today.
And with this fast development of the ecosystem, my team has also taken on this role of working with industry partners and coming up with realistic measures of performance. >> Right, so these new benchmarks that you're defining around the capabilities of Node — or are you using old benchmarks? How are you addressing that challenge? >> We started by running what was available. And most of the benchmarks were quite, let's say, isolated. They were focused on a single node, one operation — not realistic in terms of how measurements should be done for the data center. Especially since, in the data center, everything is interacting: nothing runs on just one single computer. Everything is impacted by network latencies. We have a significant number of servers out there. We have multiple software components interacting. So it's way more complex. And then you have containers coming into the picture, and everything makes it harder and harder to evaluate from the performance perspective. And I think Node is doing a pretty good job from the performance perspective. But who's watching that it stays that way? I think performance is one of those things that you only value when you don't have it, right? Otherwise you just take it for granted, like it's there. So, my team at Intel is focused on top-tier scripting languages. We are part of this larger software organization called the Software and Services Group, and we are, right now, optimizing and driving the performance of Python, Node.js, PHP on HHVM, and some of the other top-tier languages used in the data center. Node is actually our most interesting story in terms of evolution, because we've seen an extraordinary growth there. It's probably the one that has doubled over the past three years: the community has doubled, everything has doubled for Node, right? Even the number of commits — it depends on which statistics you look at-- >> They're all up and to the right, very steep.
>> Yeah, so it's very fast progress, which we need to keep pace with. And one thing that is important for us is to make sure that we expose the best of our hardware to the software. Node takes an interesting approach here, because Node is one of what we call CPU front-end bound workloads. It has a large footprint — one of the largest-footprint applications that we've seen. And for this, we want to make sure that the newest CPUs we bring to market are able to handle it. >> I was just going to say, they had Trevor Livingston from HomeAway kick things off today. We're talking about the growth. He said a year ago they had one Node.js project — and this is a big site that competes with the likes of Airbnb, and is now owned by Expedia. Now, he said, they have "15 projects in production, 22 almost in production, and 75 other internal projects." In one year, from one. So that shows pretty amazing growth and the power of the platform. And from Intel's point of view, you guys are all in on cloud, you're all in on data centers — we've all seen the ads. So you're really, aggressively taking on optimization for the unique challenges and special environment that is cloud. Which is computing everywhere, computing nowhere — but at the end of the day, it's got to sit on somebody's servers, and there's got to be a CPU in the background. So as you look at all these different languages, why do you think Node has gone so crazy? >> I think there are several reasons. My background is as a C++ developer, coming from security. So coming into the Node space, one thing amazed me: only 2% of the code is yours when you write an application. >> Jeff: 2%? >> So where is the other 98% coming from? It's already developed — it's an ecosystem, you just pull in those libraries. In addition to the security risks that brings, it brings a fantastic time to market.
So it enables you, as the developer, to launch an application in a matter of days instead of months or a year. Time to market is an unbeatable proposition, and I think that's what drives this space: the need to launch new applications, and upgrade them, faster and faster. For us, that's also an interesting challenge, because our CPU road maps aren't measured in days, right? They're years. So we want to make sure that we feed the developments we are seeing in this space back into the CPU road map. I have several principal engineers on my team who work with the CPU architects to make sure we are continuously providing this information back. One thing I wanted to mention is, as you probably know, since you've been talking to other Intel people, we recently launched our latest generation server, Skylake, and on this latest generation we've been optimizing and measuring all the Node workloads. We see a 1.5x performance improvement over the prior generation. That is a fantastic boost, and it doesn't come from hardware alone — it comes from a combination of hardware and software. And we are continuing to work with the CPU architects to make sure that future generations also keep pace with these developments. >> It's interesting — kind of the three horsemen of computing, if you will, right? There's compute, there's store, and there's IO. And it's funny that they brought up Ryan Dahl — we interviewed him back at Node.js, I think back in 2011? Still one of our most popular segments on theCUBE. We do thousands of interviews a year, and he's still one of the most popular. Really rethinking the IO problem in this asynchronous form seems to be another real breakthrough that opens up all types of capacity in compute and store, when you don't have to sit and wait.
So that must be another thing that you guys have addressed, coming from the hardware and the software perspective? >> You are spot on, because I think Node, compared to other scripting languages, brings more of the whole platform into the picture. It's not only the CPU — it's also networking, it's also storage. It makes the entire platform shine if it's optimized with the right capabilities, and we've been investing a lot in this. All our work is made available as open source, and all our contributions are upstreamed back into the mainline. We also started an effort to work with the industry on developing these new workloads. So last year at Node Interactive, we launched one new workload — a benchmark for Node — which we called Node-DC, with its first use case, an employee information system, simulating what a large, distributed data center application would be doing. This year at Node Summit, we will be presenting the updated version of that, version 1.0 this time — it was version 0.9 last time — where we added support for containers and included several capabilities so it can run, in a configurable manner, in as many configurations as needed. And we are contributing this back as well. We submitted it to the Node Foundation, so it becomes an official benchmark for the Node Foundation, which means that every night, after the build system runs, it will be run as part of the regressions, to make sure that performance doesn't degrade. So that's part of our work. And it continues an effort we started with what we call the languages performance portal. If you go to languagesperformance.intel.com, we have an entire lab behind that portal, in which every night we build these top-tier scripting languages — including Python, including Node, including PHP — and we run performance regressions on the latest Intel architecture.
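The nightly gate described here — build, run, compare, alert — reduces to comparing each night's numbers against a baseline with some tolerance for noise. A minimal sketch, with made-up numbers and a made-up threshold, not the portal's actual logic:

```javascript
// A toy version of a nightly performance-regression gate: flag any
// result that is slower than the baseline by more than a noise
// threshold (here 5%, an arbitrary choice for illustration).
function isRegression(baselineMs, nightlyMs, thresholdPct = 5) {
  const deltaPct = ((nightlyMs - baselineMs) / baselineMs) * 100;
  return deltaPct > thresholdPct;
}

const baseline = 120.0; // hypothetical median ms from prior builds
console.log(isRegression(baseline, 121.0)); // small jitter: false
console.log(isRegression(baseline, 140.0)); // ~17% slower: true
```

A real pipeline would use a rolling baseline over several builds and per-benchmark thresholds, but the comparison at the core is this simple.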
So we are contributing the results back into the open source community, to make sure that the community is aware if any regression happens. And we have a team of engineers who jump on those regressions, root-cause them, and analyze them to figure it out. >> So, Monica, we're almost out of time, but before I let you go — we talked before we got started, and I love Kim Stevenson, I've interviewed her a bunch of times. One of the conversations we had was about Moore's Law, and that Moore's Law is really an attitude — a way to do things — more than the physical limitations on chips, which I think is a silly conversation. You're in the role of constantly optimizing, making things better, faster, cheaper. As you sit back and look at what you've done to date, and looking forward, do you see any slowdown in this ability to continue to tweak, optimize, tweak, optimize, and just get more and more performance out of some of these new technologies? >> I don't see a slowdown. At least from where I sit, on the software side, I'm seeing only acceleration. The hardware brings a 30%, 40% improvement, and on top of that we add the software optimizations, which bring 10%, 20% improvements as well. That's continuously going on, and I'm not seeing it stop. What I am seeing is a growing need for customization. So when we design the workloads, we need to make them customizable, because there are different use cases across data center customers — they are used differently — and we want to make sure we reflect how they're really used in the world. That's also how our customers and partners can leverage them, to measure something that's meaningful for them. So in terms of speed, we want to make sure that we fully utilize the CPU, and we grow to more and more cores and increase frequency. We also grow to more capabilities.
And our focus is also on making the entire platform shine. When we talk about platform, we talk about networking, we talk about non-volatile memory, we talk about storage as well as CPU. >> So Gordon's safe. You're safe, Gordon Moore — your law's still solid. Monica, thanks for taking a few minutes out of your day, and good luck on your panel later this afternoon. >> Thank you very much for having me here. It was a pleasure. >> Absolutely. All right, Jeff Frick checking in from Node Summit 2017 in San Francisco. We'll be right back after this short break. Thanks for watching. (upbeat music)
Nick O'Leary, IBM | Node Summit 2017
>> Hey, welcome back, everybody. Jeff Frick here with theCUBE. We're at Node Summit 2017 in downtown San Francisco at the Mission Bay Convention Center. About 800 hardcore developers talkin' about Node and really the crazy growth and acceleration in this community as well as the applications. We're excited to have our next guest. He's Nick O'Leary, Developer Advocate from IBM for Watson IoT, and you're workin' on somethin' kind of cool called Node-RED. First off, welcome. >> Thank you, thank you very much for havin' me. >> Absolutely, so what is Node-RED? >> So, Node-RED is an open source project we started working on about four years ago now, in the Emerging Technologies group in the UK part of IBM. It's a Node.js application that gives you a visual programming tool for Internet of Things-type applications. When you run it, you point your web browser at it, and it gives you this visual workspace where you start dragging nodes onto your canvas that represent some sort of functionality — like connect to Twitter and get some tweets, or save something to a database, or read some sensor data, whatever it might be — and you start drawing wires between those nodes to express how you want your application to flow, how you want data to flow through your application. So it's quite a lightweight tool and really accessible to a wide range of developers, whether sort of seasoned, experienced Node developers or your kids just learning how to program, because it hides complexity. And, yeah, it's Node.js-based, so it runs down on a Raspberry Pi, it runs up in the cloud like IBM Bluemix, wherever you want to run it. So it's a really flexible developer platform. >> Pretty interesting, 'cause we just had Monica on from Intel, and she was talking about how one of the interesting things in this development world of Node.js is that so much of the code was written by somebody else.
I think she said in a lot of projects the actual original code may be 2%, because you're using all this other stuff, and the libraries have already been created. And it sounds like you're really leveraging that infrastructure to be able to do something like this. >> Absolutely. So, one of the key things we enabled very early on, 'cause we recognized the power of our tool, is those nodes in the palette that you drag on. We built the system so that people could write their own nodes and extend the palette, and we used the same node packaging as the standard npm ecosystem. As of a couple weeks ago, we have over a thousand third-party nodes people have written, so there's probably already a module for most hardware devices, online APIs, databases, whatever you want. People are creating and extending the platform in all sorts of ways, just building on top of that incredible ecosystem that Node.js has. >> And then how does that tie back to Watson? You said you're involved in Watson, and Watson isn't necessarily what people think of as a simple interface, or a simple application. So what's the tie between Watson and Node.js and Node-RED? >> So, Node-RED is a development tool. I'd say it all hinges on those nodes and what they connect to. We have got nodes for the Watson IoT platform, so that's great for getting — if you're running Node-RED on a Raspberry Pi — connected up to our IoT platform and to applications in the Bluemix space. But we also have nodes for the Watson cognitive services — the machine learning things, visual recognition, text to speech — all of those services we have nodes for.
So, again, it allows people to start playing with the rich capabilities of the Watson platform without having to dive straight into understanding lines of code, and you can start being productive and create real meaningful solutions without having to understand whether it's Node.js or Java, whatever language you would normally write to access low-level APIs. >> And can the visual tool connect to things that are not necessarily Node specific? >> So, anything that provides some sort of API. If it's got a programmatic API, then it's easier to do with Node 'cause we are in a Node ecosystem. But we've got established patterns for talking to other languages, and also things often provide a REST API, HTTP, MQTT, many other protocols, and we have all of that support built straight into the platform. >> Right, and so what was the motivation to build this, just to have an easier development interface? >> Yeah, it was twofold really. One was in the Emerging Technologies group where I was, we do proofs of concept for clients we have to turn around really quickly, so whereas we're more than capable of writing individual lines of code, having a tool that lets us experiment much quicker and solve real client problems much quicker was of great value to us. But then we also saw the advantage for the developers who don't understand individual lines of code, for educational purposes, whatever it might be. Those were great motivators in the various communities we're involved with, in IoT, home hobbyists, all that sort of space as well; it's found a really incredible user community across the board. >> And when it started, was it designed to be an open source project, or did that kind of realization, if you will, come along the way? >> I think on day one it wasn't the first thing to mind. You know, we were just experimenting with technology, which is kind of how we operated.
But we very quickly got to the point where we realized we didn't have the time and resource to write all the nodes that could be written, and there was a much broader audience than just us doing our day job that this tool could tap into. So, maybe not on day one but maybe on a month in we thought this has to be open source. So, it was about six months after we started it we moved to an open source project, and that was September 2013. And then in October last year, IBM contributed the project to be a founding project of the JavaScript Foundation. Whereas it's a project that has come from IBM, it's now a project that is independently governed. It's not owned by IBM, it's part of the foundation. So, look at the wide range of other companies getting involved, making use of it, contributing back, and really good to see that ecosystem build. >> Oh, that's great, so I'm just curious, you said you deal with a lot of customer prototyping. Obviously you're involved in Watson, which is kind of the pointy end of the spear right now with IBM, with the cognitive and the IoT. As you kind of look at the landscape and stuff you're workin' on over the next, I would never say multiple years 'cause that's way too long, six months, nine months, what are some of your priorities, what are some of the things you're seeing, kind of that customers are doing today that they couldn't do before that gets you excited to get up out of bed and go to work every day? >> From my perspective, with our focus on Node-RED, which is kind of where my focus is right now, it's really that developer experience. We've gone so far with our really intuitive to use tooling, but we recognize there's more to do. 
So, how can we enable better collaboration, better basic workflows within our particular tooling, because there are people using Node-RED, in particular happily in production today. It's funny 'cause we don't have a 1.0 version number because, for us, that wasn't interesting; we are delivering meaningful function. But in the project, we have just published our road map to a one point zero to really give that firm statement to people who are unsure about it as a technology that this is good for production. And we've got a wealth of use cases of companies who are using it today, so that's very much our focus, my focus within Node-RED. And all of it does then tie back to, yes, it's a JS Foundation project, but then with my developer advocate hat on, making sure that the path from Node-RED into the Watson platform is as seamless and intuitive as possible, because that helps everyone. >> Right, right, okay, so before I let you go, two things: One begs the question what version are you on, and where can people go to find more information so they can see when that 1.0 lands and obviously contribute? >> So as a Node project, we've stuck to semantic versioning, so we are currently version naught dot 17. So we've done 17 major releases over the last about three and a bit years, and that's how we're moving forward. We've got this road map to get to 1.0 first quarter of next year. And if you want to find out more, nodered.org is where we're based, or you can find us through links by the JS Foundation as well. >> Alright, well, Nick, thanks for takin' a little bit of your time and safe travels home at the end of the show. >> Thank you very much. >> Alright, he's Nick O'Leary from IBM. I'm Jeff Frick, you're watchin' theCUBE. Thanks for watchin', see ya next time. (bubbly electronic music)
Stephen Fluin, Google | Node Summit 2017
>> Hey, welcome back everybody. Jeff Frick with theCUBE. We're at Node Summit 2017, downtown San Francisco Mission Bay Conference Center, 800 people, a lot of developers, pretty much all developers talking about what's going on with Node, the Node community and some tangential things that are involved in Node, as well. We're excited to have our next guest on, he's Stephen Fluin, he's a developer advocate for Google, Stephen, welcome. >> Thank you so much for having me. >> Absolutely. First off, just kind of impressions of the show. You said you were here last year, the community's obviously very active, growing, I don't know that they're going to be able to come back to this space for very much longer. >> I know. >> What do you think? >> Probably not, I love how the community's continuing to grow and evolve, right? This technology is moving faster than almost any technology I've seen before. I call it a combinatorial explosion of complexity because there's always new tools coming out, new ways of thinking, and that's really rich and a great way to have a lot of innovation happening. >> Right, there was a great, one of the early ones this morning, the speaker said they had one Node app a year ago, and now they have 15 in production, 22 almost ready and 75 other internal projects, in one year! >> Yeah, it's definitely crazy. >> So why, I mean there's lots of things as to why Node's successful, but from your perspective, why is it growing so fast? >> I think it's fast because it's the first time that we've had a real extended eco-system where a lot of developers are coming together, bringing their own perspectives, and it's a very collaborative environment. Everyone's trying to help each other. >> So you just got off stage, you had your own session >> I did. >> But Angular on the Server. >> Yes. >> Even for the folks that missed it, kind of what was the main theme of your talk?
Sure, sure, so I'm on the Angular Team, which is a client-side framework for building applications. We've really been focused a lot on really great web experiences for the client. How do we run code as close as possible to the browser so that you get these very rich, engaging applications. >> Right. >> But one of the things that we've been focused on, and has been one of our design goals since the beginning, is how do we write JavaScript and TypeScript in a way that you can run it on the client or the server? And so just last week we announced new support has landed in our CLI that makes this process easier, so that you can run your applications on the server and then bootstrap a client-side application on top of that. >> Why is that important? >> It's important for a few different reasons. You want to run applications sometimes on the server, first, because there's a lot of computers that are processing the web and browsing the web across the internet >> Right. >> so there's search engines, there's things like Facebook and Twitter, which are scraping websites looking for metadata, looking for thumbnails and other sorts of content. But then also there's a human aspect where, by rendering things on the server, you can actually have an increased perception of your load times, so things look like they're loading faster while you can still then, on top of that, deliver a very rich, engaging client side experience with animations and transitions and all those sorts of things. >> That's interesting. Before we got started you had talked about thinking of the world in terms of the user experience at the end of the line versus thinking of it from the server. I thought you were going down kind of the server optimization, power, those types of things when you say think about the server, but you're talking about a whole different set of reasons to think about the server >> Yeah, absolutely. >> and the way that that connects to the rest of the web.
>> Yes, because there's a lot of consumers of content that we don't necessarily think about when we're building applications >> Right, right. >> we normally think about the human side of things, but having an application, whether it's a single application or whatever, that is also well optimized for servers can be very helpful. >> Yeah, that's pretty >> Servers as the consumers. >> servers as the consumers, which I guess makes sense, right? Because Google's indexes and all the other ones are crawling servers >> Absolutely. >> they're not scraping web pages, hopefully, I assume, I assume we're past that stage. Alright, good, so what else is going on, in terms of the Angular community, that you're working on next? >> Sure, sure. I think we're really just focused on continuing to make things easier, smaller and faster to use, so those are kind of the three focus points we've got as we continue to invest and evolve in the platforms. So, how do we make it easier for new developers to come into the kind of Angular platform and take advantage of all we have to offer? How do we make smaller bundles so that the experience is faster for users? >> Right, right. >> And then how do we make all these things understandable and digestible for developers? >> It's like the bionic man never went away, right? It's still better, stronger, faster. >> Exactly. >> Alright, Steve, thanks for taking a few minutes out of your day and sharing your story with us. >> Thanks so much for having me. >> Absolutely, Stephen Fluin, from Google. I'm Jeff Frick, you're watching theCUBE. Thanks for watching, we'll catch you next time. Take care.
Michael Dawson, IBM | Node Summit 2017
>> Welcome back everybody, Jeff Frick here with theCUBE. We're at Node Summit 2017 in downtown San Francisco Mission Bay Conference Center, we've been coming here for years. The vibe is growing and exciting, and there were some really interesting use cases in earlier sessions about how fast Node adoption is happening in some of these enterprises, and we're excited to have Michael Dawson. He's a software developer, but more importantly, he's a Node.js community lead for IBM. Michael welcome. >> Alright, thank you. It's great to be here. Nice to be able to talk to you and talk about Node.js and what's going on in the community.
We also support it through our platforms like Bluemix, and so I work with the team who supports those. You know if you're running Bluemix in Node it's the code that we've contributed and built. And our development approach is very much do that out in the community, so if a particular product needs some sort of feature we'll go out and work in the community to do that and then pull that back in to use it. So you see we have about 10 collaborators. I'm one of them and the great thing is that I get to be involved in a lot of the working group efforts like the N-API, the build work groups, the LTS work groups. And, you know, so my role is really to sort of bridge the community work that we do there to our internal needs and consumers as well. >> Right, so how is the uptake in the IBM world of this technology within all the different stats that you guys have? >> I work in the run time technologies team and we were called the Java Technology Center for a number of years, we're now called the Run Time Technology Center because we see it's a polyglot world with Node.js being one of the three key run times you know, it's Node.js, Java and Swift. [Jeff] - Right. >> And, we see that because we see our costumers as well as our products, you know, really embracing Node and using it in all sorts of places. They've mentioned earlier that Bluemix ARPAs is a very heavy user of Node.js in terms of the implementation of the UIs and the backend services, as well as Node.js is the biggest run time in terms of deployments in that environment as well. >> So it's interesting, we had Monica on earlier from Intel. I think you're going to be on a panel with her later today about benchmarking. >> Yeah. >> And she talked about that there's some unique challenges in trying to figure out how to benchmark these types of applications against kind of the benchmark standards of old. 
I wondered if you could share some of your thoughts on this challenge, and for the folks that aren't going to be here watching the panel, what are some of the topics that you want to make sure that get exposed in that panel. >> So, you know, I've been working with the benchmarking work group. I actually kicked it off a number of years back. The approach that we're following is we want to document the key use cases for Node, as well as the key attributes of the run time, like you know, like starting up fast, being small, the things that have made it successful. [Jeff] - Right. >> As well as the key use cases like a web front end, backend services for mobile, and then fill in that matrix with important benchmarks. I mean that's where one of the challenges comes in; other languages have a more mature and established set of benchmarks that different vendors and different people can use. >> Right. >> Whereas the work in the working group is to try and either find benchmarks and encourage people to write those benchmarks, and pull together a more comprehensive suite that we can use because performance is important to people, and as a community, we really want to make sure that we encourage a rapid pace of change, but be able to have a good handle on what's going on on the other side. >> Jeff: Right. >> And, having the benchmarks in place should be an enabler, in that if we can easily and quickly find out what a change impact has, a positive or negative, that'll help us move things forward as opposed to if you're uncertain it's a lot harder to make the decision as to which way you should go. >> It's funny on benchmarking, right, because on one hand, people can just poo-poo benchmarks because I'm writing my benchmark so that it beats your product and my benchmark, and you can write a benchmark the other way. But I think what you've just touched on is really important; it's really for optimization of what you're doing for improving your own performance over time. 
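The regression-testing style of benchmarking Michael describes, measuring a change's impact instead of guessing at it, can be sketched with plain Node timers. These names and the workload are illustrative only, not the benchmarking work group's actual harness:

```javascript
// Minimal micro-benchmark sketch: time an operation over many iterations
// so a code change's effect, positive or negative, becomes measurable.
function bench(name, fn, iterations = 100000) {
  // Warm-up pass so the JIT settles before we start the clock.
  for (let i = 0; i < 1000; i++) fn();
  const start = process.hrtime.bigint();
  for (let i = 0; i < iterations; i++) fn();
  const elapsedNs = Number(process.hrtime.bigint() - start);
  return { name, nsPerOp: elapsedNs / iterations };
}

// Example workload: a JSON round-trip, a common Node hot path.
const result = bench("json-roundtrip", () =>
  JSON.parse(JSON.stringify({ a: 1, b: [2, 3] })));
console.log(`${result.name}: ${result.nsPerOp.toFixed(1)} ns/op`);
```

Run before and after a change, the per-operation numbers give the kind of regression signal the work group is after; the harder community problem the panel covers is agreeing on which workloads are representative.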
That's really the key to the benchmarks. >> Yeah, absolutely, the focus of the work in the benchmarking work group has been on a framework for like regression testing, and letting us make the right decision, not competition. >> Jeff: Right. >> I think that some of the pieces that we develop will perhaps factor into that, but the core focus is to get a good established set, and other individual companies can then maybe use it for other purposes as well. >> Jeff: Right. So Michael before I let you go I just wanted to get your perspective. You work for a big company. >> Michael: Yep. >> I don't think it's this as much anymore; there used to be a lot of opened source conferences people like oh we don't want the big people coming in, they're going to take it over. And to get your perspective of being kind of that liaison between kind of this really organic open source community with Node and big Blue back behind you, and how you kind of navigate that and in your experience of the acceptance of IBM into this community as well as your ability to bring some of that open source essos back into IBM. >> Right. You know, I found that it's been really great. I love this community, they've been very welcoming. I've had no issues at all, you know, getting involved. I think IBM is respected in the way that we've contributed. We're trying to contribute in a very constructive and collaborative way, you know, nothing that we do, do we really do on our own. If you look at the N-API, we're working with other individuals. People from different companies or just individual contributors to come to a consensus on what it should be, and to basically move things forward. So yeah, in terms of a big company coming in, you do hear some concerns, but I haven't seen any on the ground impediments or problems. You know, it's been very welcoming and it's been a great experience. >> Alright, very good. Alright, well, before I let you go, kind of final thoughts on this event where we are. 
It's a great event, I always enjoy being able to come and meet people. A lot of the time you work on GitHub you know somebody's handle, but there's nothing like making that personal connection, to be able to put the face to the name, and I think it affects your ongoing sort of interactions when you're not face-to-face. >> Jeff: Absolutely. >> So it's a really important thing to do, and that's why I like to come to a lot of these events. >> Alright, well Michael Dawson, we'll let you get back to meeting some more developers. Thanks for taking a few minutes out of your day. >> Thank you very much, bye. >> Absolutely, he's Michael Dawson from IBM. I'm Jeff Frick, you're watching theCUBE. Thanks for watching, we'll catch you next time.
James Bellenger, Twitter | Node Summit 2017
>> Hey welcome back everybody. Jeff Frick, with the Cube. We're at Node Summit 2017 in downtown San Francisco. About 800 people, developers talking about Node and Node.js. And really the crazy adoption of Node as a development platform. Enterprise adoption. Everything's up and to the right. Some crazy good stories. And we're excited to have somebody coming right off his keynote. It's James Bellenger. He is an engineer at Twitter. James, welcome. >> Thank you, thank you for having me. >> Yeah, absolutely. So you just got off stage and you were talking all about Twitter Lite. What is Twitter Lite? I like Twitter as it is. >> Ah, so Twitter Lite is an optimized, it's a mobile web app. So if you pull up your phone, open up the web browser and go to twitter.com in your smart phone web browser, you get a Twitter experience that we're calling Twitter Lite. >> Okay. >> And it used to be a little bit out of date. But we've been able to update it using a lot of new exciting web technologies. And so now we have this thing that feels very much like a native app. >> Okay. >> They call them progressive web apps these days. And so we're using that as sort of a way to compete in areas and markets where maybe native apps are less able to compete. Where you know, people don't want to download a 200 megabyte iOS app. They want something that fits under 600 kilobytes. >> Okay. So you had the Twitter Lite app before. And then this was really a re-deployment? Or am I getting it wrong? >> I think, well we had, we had a web app at mobile.twitter.com. >> Okay. >> And it was just sort of the mobile web app. >> Okay. >> But you know we sort of really rewrote everything. And that includes the back end on Node. And then we're now sort of pushing that and calling it Twitter Lite. >> Okay. And when did that go live or GA? >> About three months ago. >> Three months ago, okay. Super. So obviously you're here at Node. You just spoke at Node.
You know, how was the experience using a Node tool set versus whatever you had it built on before? >> It's definitely faster in every way. Well, I mean, >> Faster in every way. That's a good thing. >> So well, let me, let me catch that. Be more specific. It is ... >> It's those benchmarking people. We need them back over here. >> It is very fast for how we apply it. It's really fast for development speed. And perhaps the biggest win is that on both sort of areas of our stack, whether it's the part of the application that runs on the browser or the part of the application that runs inside the Twitter data center, we have one language and technology. So when a problem comes up and an engineer needs to go and find the problem and fix it, they don't need to say "Oh, well that's server code. I don't know how it works. And it's written in this language I don't understand." We really just have one application, and it happens to run in both places. And so it really improves engineering efficiency. >> And you saw that in the development process, QA and the ongoing. >> Yeah. >> Yeah. And was it more ... So it's more like the guys that were more front end now have access to the back end, and then the other way around. Is that correct? Yeah, it's a little bit of both. >> Okay. >> You know, I think before, there's people that really like Scala. And they only want to work in Scala. Or there's people that really don't like it. So you end up, I think, having engineers kind of get balkanized by their technology choices, and their preferred systems. But I think it really sort of tears down a couple walls. And so it improves engineering efficiency that way. But we found also that some of the tool sets and the tool chains that we're using allow engineers to just sort of move faster. >> Right. >> So you know, whether that's like recompiling the service in like one second. Instead of having to wait for multiple minutes.
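The "one language, both places" point can be illustrated with a hypothetical shared module. The function and its name are invented for illustration, nothing here is Twitter's actual code; the point is only that identical JavaScript source runs unchanged in the browser bundle and on the Node server:

```javascript
// Hypothetical shared utility: with client and server both in JavaScript,
// logic like this lives in one module, so a bug is fixed once, not once
// per language.
function formatTweetCount(n) {
  if (n >= 1000000) return (n / 1000000).toFixed(1).replace(/\.0$/, "") + "M";
  if (n >= 1000) return (n / 1000).toFixed(1).replace(/\.0$/, "") + "K";
  return String(n);
}

// Server-side rendering and the client UI call the same function.
console.log(formatTweetCount(999));     // prints "999"
console.log(formatTweetCount(1200));    // prints "1.2K"
console.log(formatTweetCount(5000000)); // prints "5M"
```

This is the efficiency James describes: an engineer chasing a display bug never has to ask whether the server's copy or the client's copy of the logic is at fault, because there is only one copy.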
There's just sort of less time spent waiting. >> Right. And in terms of don't share anything you're not supposed to share but in terms of, you know, frequency of releases and ongoing maintenance and kind of the development of the I won't say the app, not the app. I guess it is the app. Going forward, you know, how has that been impacted by moving to this platform? >> I think it might be too early to say. >> Okay. >> We've, you know, right now we've got about 12 to 15 engineers and we're ramping up. And it, I think it might, we're kind of looking to finish around 25 engineers, by the end of the year. >> Okay. >> So the sort of team and contributor base of the kind of like core team that are working on the app is growing. But you know, otherwise there's, you know, we're releasing every day. We're, you know, we try to you know, we're always pushing code. We're running experiments a lot. >> Right. I don't know if that answers your question but. >> So it sounds like it's a little easier but you're still doing everything you were doing before but now it just feels like it's easier because of this. >> Well, you know, talk to me in a couple months. >> Okay. >> Then maybe we'll have some better answers for you. >> Okay. So the other thing I want, if I talk to you in a couple months, I talk to you a year from now, just in terms of as you look down the road, you know, what this opens up. You know, kind of what are some of your priorities now that you've got it out. You said you've been out there for three months. What's kind of next on your roadmap, your horizon? >> So far, I think we've been really encouraged by the success of using this stack for development. So we're looking to kind of double down on that. >> Okay. >> So that means looking at some of the other Twitter web apps. Oh, sorry, Twitter apps in general. The other ways people use Twitter. And to sort of look at how they were built. 
And to see, because we're using React, and because we're using, I think technologies that make it very easy to you know, be responsive and you know, either be have a wide layout or a very narrow layout, or work offline. We have a lot of potential to sort of cannibalize or replace and also update some of the existing apps >> Right. >> That maybe don't get the attention that they need. >> Right. >> So there's some of that. And then I think Twitter Lite as a product I think that we're going, you know, we're looking to really expand it's reach. And make a big push in some of the developing areas. >> Yeah. Because the other thing people don't know, I mean, Twitter's acquired a bunch of companies, you know, over the years. So we've heard some examples earlier today, where that's a use case when you do have the opportunity to maybe redo an acquired application. You know, that those are kind of natural opportunities to look to redo them with this method. >> Yeah. Sure. >> All right. Cool. Well, James, thanks for taking a few minutes. >> Thank you. >> Congratulations on the talk. And I'll think of you next time I go to Twitter Lite. >> Yeah. Thank you so much. >> All righty. He's James Bellenger from Twitter. I'm Jeff Frick. You're watching the Cube from Node Summit 2017. Thanks for watching. (techno music)
Jacob Groundwater, Github | Node Summit 2017
(click) >> Hey, welcome back everybody. Jeff Frick here with theCUBE. We're at Node Summit 2017 in San Francisco at the Mission Bay Convention Center. We've been coming here for years. A really active community, a lot of good mojo, about 800 developers here. Just about the limits that the Mission Bay center can hold. Now we're excited to have our next guest. He just came off a panel. It's Jacob Groundwater. He's an engineering manager for Electron at Github. Jacob, welcome. >> Thank you, it's great to be here. >> So really interesting panel, Electron. I hadn't heard about Electron before, I was kind of digging in a little bit while the panel was going on, but for the folks that aren't familiar, what is Electron? >> Yeah. Electron, there's a good chance that people who haven't even heard of it might already be using it. >> (chuckles) That's always a good thing. >> Yeah. Electron is a project that's started by Github and it's open source and you can use it to build desktop applications but with web technologies. We're leveraging the Google Chrome project to do a lot of that. And Node. Node.js is a big part of it as well. >> So build desktop apps using web technologies. >> Yep. >> And why would somebody want to do that? >> You know, I think at the root of that question, it's always the same answer which is just economics right now. Developers are in demand, software developers are in demand. The web is taking over and the web is becoming the most common skillset that people have. So you get a few benefits by using Electron. You get to distribute to three platforms automatically, you get Linux, Mac, and Windows. Sometimes it's like super easy. Sometimes you do a little bit of building to get that to happen, but it's, you know, you could cut your team size down by maybe two thirds if you do it that way. >> Wow, that's a pretty significant cut. Now, you said the 1.0 released last year, and how's the, how's the adoption?
>> I actually can't even keep up with the number of applications that are being published on top of Electron. I'm often surprised, I'll go to a company and I'll say, oh I work on Electron at Github. And they'll be like, oh we're developing an Electron app, or we're working on an Electron app. So it, it's kind of unreal. Like I've never really been in this situation before where something that I'm working on is being used so much. I think it's out, it's out there, it's in production, it's running in millions of laptops and desktops. >> Yeah. That's great though, 'cause that's the whole promise of software, right? That's why people want to get into software. >> Yeah. >> 'Cause you can actually write something that people use and you can change the world. It could be distributed all over the world with millions of users before you even know it. >> There's this wonderful thought of like writing something once and then it running in millions of places potentially. I just love it. I love it. I think it's super cool. >> Yeah. So as it's grown, what have been some of the main kind of concerns, issues, what are some of the things you're managing within that growth that's not pure technical? >> Yeah. That's a great question. One of the biggest things that I found interesting is when I got on our website and checked the analytics, it's almost uniform across the globe. People are interested in it from everywhere. So there's challenges like, right now I had to set up a core meeting to talk about some of the, like, updates to Electron and that had to be at midnight Pacific time because we had to include the Prague time zone, Tokyo time zone, and Chennai in India. And we're trying to see if we can squeeze in someone from Australia. And just the globally distributed nature of Electron, like people around the world are working on this and using it. >> Right. The other part you mentioned in the session was the management of the community.
And you made an interesting, you know, we go to a lot of conferences, everyone's got their code of conduct published these days, which is kind of sad. It's good, but it's kind of sad that people don't have basic manners it seems like anymore. We've covered a lot of open source communities. One that jumps to mind is OpenStack, and we've watched that evolve over time, and there's kind of community management issues that come up as these things grow. And you brought up kind of an interesting paradigm, if you've got a great technical contributor who's just not a good person for, I don't know, you didn't really define kind of the negative side, but got some issues that may impact the cohesiveness of the community going forward, especially because community is so important in these projects. But if you've got a great technical mind, I never really heard that particular challenge. >> I think it comes up a lot more than people realize. And it's something that I think about a lot. And one thing I want to focus on is, what we're really zeroing in on is bad behavior. >> Bad behavior. That was the word. >> And not a bad person. >> Right, right. >> One of the best ways to maybe get around that happening is to set an expectation early about what is acceptable behavior and alert people early when they're doing things that are going to cause harm to the community or cause harm to others. And also frame it in a way where they know, we're trying to keep other people safe, but we're also trying to keep those offenders, give them the space to change. If you choose not to change, that's a whole different story. So I think that by keeping the community strong, we encourage people around the globe to work on this project and we've already seen great returns by doing this so far, so that's why I'm really focused on keeping it, keeping it a place where you know you can come and show up and do your work and do your best work. >> Right. Right.
Well hopefully that's not taking too many of your cycles, you don't got too many of those, of those characters. >> Every hour I put in, I get like 10, 20, like hours and hours back in return from the people who give back. So it's well worth it. It's the best use of my time. >> Alright good. So great growth over the year. As you look forward to next calendar year, kind of what are some of your priorities? What are some of the community's priorities? Where is Electron going? And if we touch base a year from now, what are we going to be talking about? >> Excellent question. So strengthening, formalizing some aspects of the community that we have so far, it's a little ad hoc, would be great. We want to look to having people outside of Github that feel more ownership over the project. For example, we have contributors who probably should be reviewing and committing code on their own, without necessarily needing to loop in someone from my team. So really turning this into a community project. In addition, we are focusing on what might go into a version 2 release. And we're really focusing on security as a key feature in version two. >> Yeah, security's key and it's got to be baked in all the way to the bottom. >> Yeah. >> Alright Jacob, well it sounds like you've got your work cut out for you >> Thank you. >> and it should be an exciting year. >> Yeah, thanks very much. >> Alright. He's Jacob Groundwater. He's from the Electron project at Github. I'm Jeff Frick. You're watching theCUBE. We'll see you next time. Thanks for watching. (sharp music)
Guy Podjarny, Snyk | Node Summit 2017
>> Hey welcome back everybody, Jeff Frick here with theCUBE. We're at Node Summit 2017 in Downtown San Francisco at the Mission Bay Conference Center. About 800 people talking about Node, Node JS. The crazy growth in this application development platform, and we're excited to have our next guest to talk about security. Which I don't think we've talked about yet. He's Guy Podjarny, I'm sorry. >> Podjarny. Correct. >> Welcome, he's the CEO of Snyk, not spelled like it sounds. (laughing) You'll see it on the lower third. >> It's amazing how often we get that question. How do you pronounce Snyk? >> Well I know, obviously people that have never heard of this start up and tried to go through a URL search. >> Indeed. >> Just don't know what it's all about. >> It's sort of Google dominance. It's short for "so now you know." So now you know. >> Oh, so now you know. Okay perfect, super. First off welcome, great to see you. >> Thank you. Thanks for having me. >> You said this is your second year at the conference. Just kind of share your general impressions of what's going on here. >> Sure, well I think Node Summit is an awesome conference. I think this year's event is bigger, better organized. I don't know if it's bigger people-wise but it definitely feels that way. It sort of feels more structured. It's nice to see in the audience as well. Just an increased amount of larger organizations that are around and talking about their challenges, and a little bit, a lot earlier in the conference but a little bit of more experienced conversations. So conversations about hey, we've used Node and we've encountered these issues, versus we're about to use it, we're thinking of using it. So you can definitely see the enterprise adoption kind of growing up. That's my primary impression so far. >> Yeah and it's interesting 'cause you're a start up, but Microsoft is here, Google's here, Intel is here, IBM is here, so a lot of the big players.
Who've demonstrated in other open source communities that they have completely embraced open source as a method and a way to get, more than just the software, actually closer to the development community. >> Yeah, agreed, and I think another adjacent trend that's happening is serverless, and serverless has grown ridiculously, by massive amounts in this last while. And Node JS is sort of the de facto default language for serverless. Lambda just started with it, and AWS and many of the other platforms only support it. I think that contribution also brings the giants a little bit more in here. The Cloud giants, but also I think again it just sort of boosts Node JS. As though the Node JS ecosystem needed a boost. They get another amplifier. Just raise enterprise awareness and general usage. >> Okay, so what's Snyk all about? Give us, some people aren't familiar with the company. >> Cool, so Snyk deals with open source security and specifically in Node JS, the world of npm. npm is amazing and it allows us to build on the shoulders of giants and all the others in the community. But there are some inherent security risks with just pulling code off the internet and running it in your application. >> Jeff: Right, right. >> What we do at Snyk is we help you find known security flaws, known vulnerabilities in npm packages, and do that in a natural fashion as part of your continuous development process, and then fix those efficiently and monitor for them over time. That's basically it. >> That's your focus is really keeping track of all these other packages that people are using in their development. >> Precisely, and we're helping you just use open source code and stay secure. The world of Node is our flagship and it's where we started and built, and now we support a bunch of other systems as well. >> It's interesting, Monica from Intel said that in some of their work they found that some of these applications.
The actual developers are only contributing 2% of the code, 'cause they're pulling in all this other stuff. >> Precisely, I have this example I use in a bunch of my talks that shows a serverless example that has 19 lines of code. It copies a file from a URL and puts it on S3. That's 19 lines of code, which is awesome. It uses two packages, which in turn use 19 packages, which bring in 190,000 lines of code. >> Wow. >> That's a massive-- >> So what is that step function again? Start from the beginning. >> 19 to 190,000. >> It starts at two? >> 19 lines of code use two npm packages. They use 19 packages because every package uses other packages as well, and combined those 19 packages bring in 190,000 lines of code. >> Wow, that's amazing. That's an extreme example but you see that pattern. You see this again and again, that the majority of the code in your applications, especially Node, is not first party, it's third party code. >> Jeff: Right. >> And that means most of your security risks, most of your vulnerabilities, they come from there, so there are a lot of challenges around managing dependencies. I know it's called dependency hell for a reason, but specifically security is still not sufficiently taken care of. It's still overlooked and we need to make sure that it's not just addressed by security people, but that it's addressed as part of the development process by developers. >> How do you keep up? Both with the number, as the proliferation grows, as well as the revisions and versions inside of any particular package? You're kind of chasing a multi-headed beast there. >> It's definitely tough. First of all the short answer is automation. Any scale solution has to start with automation. I've got a security research team in Israel that has a vulnerability pipeline that feeds in from activity in the open source world. Some developer opens an issue that says SQL injection in some package, and that disappears into the ether.
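The 19-to-190,000 blowup described above comes from transitive dependencies: each package pulls in its own dependencies, recursively. The sketch below uses a made-up, hypothetical dependency graph (none of these package relationships are from the interview) to show how a breadth-first walk counts everything two direct dependencies actually drag in:

```javascript
// Hypothetical dependency graph: 'app' has two direct dependencies,
// each of which has its own dependencies, and so on.
const deps = {
  app: ['request', 'aws-sdk'],
  'request': ['mime-types', 'qs', 'tough-cookie'],
  'aws-sdk': ['xml2js', 'uuid'],
  'mime-types': ['mime-db'],
  'qs': [], 'tough-cookie': ['punycode'],
  'xml2js': ['sax'], 'uuid': [], 'mime-db': [], 'punycode': [], 'sax': [],
};

// Breadth-first walk of the graph, de-duplicating shared packages.
function transitiveDeps(graph, root) {
  const seen = new Set();
  const queue = [...(graph[root] || [])];
  while (queue.length > 0) {
    const pkg = queue.shift();
    if (seen.has(pkg)) continue;
    seen.add(pkg);
    queue.push(...(graph[pkg] || []));
  }
  return seen;
}

const all = transitiveDeps(deps, 'app');
console.log(deps.app.length, 'direct ->', all.size, 'total packages');
// -> 2 direct -> 10 total packages
```

Even in this toy graph, two direct dependencies become ten installed packages; at npm scale the same mechanism produces the hundreds of packages behind a 19-line function.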
So we try to surface those, get them to our security analysts, determine if it's a real vulnerability, curate it into our database, and then just build that database with our own research, but a lot of it is around tapping into the community. And then subsequently when you consume this, if you want to be able to apply security correctly as you develop your applications, Node JS or otherwise, it has to come to you. The security tool has to be a seamless integration with how you currently work. If you impose another step, another two steps, another three steps on the developers, they're just not going to use it. So a lot of our emphasis is scale on the consumption and the tracking of the database, and simplicity and ease of use on the developer, on the user side. >> And do you help with just, like, flagging? Flagging it as a problem, or is there an alternative? I mean I would imagine with all these interdependencies, one rotten apple can kind of have a huge impact. It's a huge scale of impact, right. >> Absolutely, so really what our moniker is, is that we don't just find vulnerabilities, we fix them, and our goal is to fix vulnerabilities. So we actually, first of all in the flow we have single click, open a fix PR. We figure out what changes you need to make, what upgrades you need to make the vulnerability go away. Literally click a button to fix it, put it in one batch for everything. And then what we also do, we build patches. Sort of a little known fact is, in the world of operating systems, RedHat and Canonical, they build a lot of fixes, or they back-port a lot of open source fixes, and they put them into their repository. You can just run updates or upgrades and just get those fixes. You don't even know which vulnerabilities you're fixing. You're just getting the fixes, so we build patches for npm packages as well, to allow you to patch vulnerabilities you cannot upgrade away. A lot of it is around fix. Make fix easy.
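The "find the vulnerable range, suggest the upgrade" flow described above can be sketched in a few lines. This is a toy version under stated assumptions: the advisory entries, package names, and version numbers below are all made up for illustration, and real tools like Snyk use full semver ranges and a curated vulnerability database rather than this naive triple comparison:

```javascript
// Hypothetical advisory list: package, versions below which it is vulnerable,
// the first fixed version, and a short issue label.
const advisories = [
  { pkg: 'left-padder', below: [1, 2, 0], fixedIn: '1.2.0', issue: 'ReDoS' },
  { pkg: 'yaml-ish', below: [4, 0, 1], fixedIn: '4.0.1', issue: 'code injection' },
];

const parse = (v) => v.split('.').map(Number);

// Lexicographic compare of [major, minor, patch] triples.
const lessThan = (a, b) => {
  for (let i = 0; i < 3; i++) {
    if (a[i] !== b[i]) return a[i] < b[i];
  }
  return false;
};

// Check each installed package against the advisories and report the
// minimal upgrade that makes the finding go away.
function audit(installed) {
  const findings = [];
  for (const [pkg, version] of Object.entries(installed)) {
    for (const adv of advisories) {
      if (adv.pkg === pkg && lessThan(parse(version), adv.below)) {
        findings.push({ pkg, version, issue: adv.issue, upgradeTo: adv.fixedIn });
      }
    }
  }
  return findings;
}

const findings = audit({ 'left-padder': '1.1.3', 'yaml-ish': '4.2.0' });
console.log(findings);
// -> [ { pkg: 'left-padder', version: '1.1.3', issue: 'ReDoS', upgradeTo: '1.2.0' } ]
```

The "fix PR" step is then just turning each finding's `upgradeTo` into a one-line change to `package.json`.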
>> Right, and then the other part, as you said, is baking security into the development all the way through, which we hear over and over and over. >> Build it in, not bolt it on. >> The bolt-it-on method doesn't work anymore. You've got to have it throughout the application. So you said you're speaking on a panel tomorrow. And I wondered if you can just highlight some of the topics for tomorrow for the folks that aren't going to be here and see the panel. When you look at serverless security. Say that three times fast. What are some of the real special challenges that people need to be thinking about? >> Sure, so you know I actually have two talks tomorrow. One is a panel on Node JS security as a whole, and that's sort of a broader panel. We have a few other colleagues in there and we talk about the evolution of Node JS security. That includes the platform itself, which is increasingly well handled by the foundation. Definitely some improvements there over the years. And some of it is around best practices, like the ones that were just discussed, which is understanding known pitfalls and Node JS sort of security mistakes that you might make, as well as handling the npm ecosystem. The other talk that I have later in the day is around serverless security. Serverless security is interesting because a lot of the promise of serverless, of function as a service, is that a lot of the concerns, a lot of the earlier or lower levels, get abstracted away from you. You don't need to manage servers. You don't need to manage operating systems, and with those, a lot of security concerns go away. Which in turn focuses the attackers, and should focus you, on the application. As attackers are not just going to give up because they can't hack the operating system that the pros are managing. They would look at the next low-hanging fruit, and that would be the application. Platform as a service and function as a service really increase the importance of dealing with application security as a whole.
So my talk is a lot about that, but also deals with other security concerns that you might have. Of course, any new methodology introduces its own concerns, so I talk a little bit about how to address those. Serverless, like Node JS, is an opportunity to build security into the culture and into our methodologies from the early days, so I'm trying to help us get that right. >> Alright, as you look forward, the next 12 months. I won't say more than 12 months, 6 months, 9 months, 12 months. What are some of your priorities at Snyk? What are you working on, if we get together a year from now, what will we be talking about? >> I think, so, two primary ones. One is continuing the emphasis on fix. Making fixing trivial in the Node JS environments as well as others. I think we've done well there but there is more work to be done. It needs to be as seamless as possible. The other aspect is indeed in this sort of PaaS and FaaS world, in platform and function as a service, where increasingly there is this awareness, as we work with different platforms, of the blind spot that they have to open source libraries. They fix your NGINX vulnerabilities, but not your Express vulnerabilities. I sometimes refer to npm packages, or open source packages, as sprinkles of infrastructure that are just scattered through your application. And today, all of these Cloud platforms are blind to it, so I expect us at Snyk to be helping PaaS and FaaS users deal with those security concerns efficiently. >> Alright, well I look forward to the conversation. >> Thanks. >> Thanks for stopping by. >> Thank you. >> He's Guy Podjarny. He is from Snyk. The CEO of Snyk. I'm Jeff Frick, you're watching theCUBE. (uptempo techno music)
Gaurav Seth, Microsoft | Node Summit 2017
(switch clicking) >> Hey, welcome back, everybody. Jeff Frick, here with theCUBE. We're at the Mission Bay Conference Center in downtown San Francisco at Node Summit 2017. TheCUBE's been coming here for a number of years. In fact, Ryan Dahl's one of our most popular interviews in the history of the show, talking about Node. And, the community's growing, the performance is going up and there's a lot of good energy here, so we're excited to be here and there's a lot of big companies that maybe you would or wouldn't expect to be involved. And, we're excited to have Gaurav Seth. He is the Product Manager for Several Things JavaScript. I think that's the first time we've ever had that title on. He's from Microsoft. Thanks for stopping by. >> Yeah, hey, Jeff, nice to be here. Thanks for having me over. >> Absolutely, >> Yes. >> so let's just jump right into it. What is Microsoft doing here in such a big way? >> So, one of the things that Microsoft is, like, I think we really are, now, committed and, you know, we have the mantra that we are trying to follow which is any app, any developer, any platform. You know, Node actually is a great growing community and we've been getting soaked more and more and trying to help the community and build the community and play along and contribute and that's the reason that brings us here, like, it's great to see the energy, the passion with people around here. It's great to get those connections going, have those conversations, hear from the customers as to what they really need, hear from developers about their needs and then having, you know, a close set of collaboration with the Core community members to see how we can even evolve the project further. >> Right, right, and specifically on Azure, which is interesting. You know, it's been interesting to watch Microsoft really go full bore into cloud, via Azure. >> Right. >> I just talked to somebody the other day, I was talking about 365 being >> Uh huh. 
>> such a game-changer in terms of cloud implementation, as a big company. There was a report that came out about, you know, the path to 20 billion, >> Right. >> so, clearly, Microsoft is not only all-in, but really successfully >> Right. >> executing on that strategy >> Yeah, I mean-- >> and you're a big piece of that. >> Yes, I mean, I think one of the big, big, big pieces, really, is as the developer paradigms are changing, as the app paradigms are changing, you know, how do you really help developers make this transition to a cloud-native world? >> Right, right. >> How do you make sure that the app platforms, the underlying infrastructure, the cloud, the tools that developers use, how do you combine all of them and make sure that you're making it a much easier experience for developers to move on >> Right. >> from their existing paradigms to these new cloud-native paradigms? You know, one of the things we've been doing on the Azure side of the house, especially when we look at Node.js as a platform, is we've been working on making sure that Node.js has a great story across all the different compute models that we support on Azure, starting from, like, hey, if you want to do serverless functions, if you want to do PaaS, if you want to go the container way, if you want to just use VMs, and, in fact, we just announced the Azure Container Instances, today, >> Right. >> so some of the work we are doing is really focused on making sure that the developer experiences, as you migrate your workloads from old traditional, monolithic apps, are also getting ready to move to this cloud-native era. >> Right, so it's an interesting point of view from Microsoft 'cause some people, again, people in-the-know already know, but a lot of people maybe don't know, kind of, Microsoft's heritage in open source. We think, you know, that I used to buy my Office CD, >> Right. >> and my Outlook CD >> Right.
>> you know, it's different, especially as you guys go more heavily into cloud, >> Right. >> you need to be more open to the various tools of the developer community. >> That's absolutely true, and one of the focus areas for us, really, has been, you know, as we think through the cloud-native transition, what are the big pieces, the main open source tools, the frameworks that are available, and how do we provide great experiences for those on Azure? >> Right, right. >> Right, because, at times, people come with the notion that, hey, Azure probably might just be good for .NET or might just be good for Windows, but, you know, the actual fact, today, is really that Azure has a great supporting story for Linux, Azure has a great story for a lot of these open source tools, and we are continuing to grow our story in that perspective. >> Right. >> So, we really want to make sure that open source developers who come and work on our platform are successful. >> And then, specifically for Node, and you're actually on the Board, so you've got >> Right. >> a leadership position, >> Yep. >> when you look at Node.js within the ecosystem of open source projects and the growth that we keep hearing about in the sessions, >> Yep. >> you know, how are you, and you specifically and Microsoft generally, kind of helping to guide the growth of this community and the development of this community as it gets bigger and bigger and bigger? >> Right, I think that's a great question. I think from my perspective, and also Microsoft's perspective, there are a bunch of things we are actually doing to engage with the community, so I'll kind of list out three or four things that we are doing. I think the first and foremost is, you know, we are a participant in the Node.js Foundation. >> Right. >> You know, that's where, like, hey, we kind of look at the administrative stuff.
We are a sponsor, you know, at the needed levels, et cetera, so that's just the initial monetary support, but then it gets to really being a part of the Node Core Committee, like, as we work on some of the Core pieces, as we evolve Node, how can we actually bring more perspectives, more value, into the actual project? So, you know, we have a number of engineers who are, right now, working across different working groups with Node and helping evolve Node. You know, you might have heard about the N-API effort. We are working with the Diagnostics Working Group, we are working with the Benchmarking Working Group, and, you know, bringing those things in. The third thing that we did, a while back, was we also did this integration of bringing in Chakra, which is the JavaScript runtime from Microsoft that powers Microsoft Edge. We made Node work with Chakra because we wanted to bring the power of Node to this new platform called Windows IoT >> Right, right. >> and, you know, the existing Node could not get there because of some of the platform limitations. So, those are some examples of how we've actually been communicating and contributing. And then, I think the biggest and the foremost for me, really, are the two pillars, like, when I think about Microsoft's contribution, it's really, like, you know, the big story or the big pivot for us is, we kind of go create developer tools and help make developers' lives easier by giving them the right set of tools to achieve what they want to achieve in less time, be more productive >> Right, right. >> and the second thing is, really, like, the cloud platforms, as things are moving. I think across both of those areas, our focus really has been to make sure that Node as a language, Node as a platform, has great first-class experiences that we can help define. >> Right. Well, you guys are so fortunate. You have such a huge install base of developers, >> Right.
>> but, again, traditionally, it wasn't necessarily cloud application developers and that's been changing >> Yep. >> over time >> Yep. >> and there's such a fierce competition for that guy, >> Yep. >> or gal, who wakes up >> Yep. >> in the morning or not, maybe, the morning, at 10:00, >> Yep. >> has a cup of coffee >> Yep. >> and has to figure out what they're going to develop today >> Right. >> and there's so many options >> Right. >> and it's a fierce competition, >> Right. >> so you need to have an easy solution, you need to have a nice environment, you need to have everything that they want, so they're coding on your stuff and not on somebody else's. >> That's true, I mean, you know, somehow, instead of calling it competition, I have started using this term coopetition because, between a lot of the companies and vendors that we talk about, right, it's more about, for all of us, it's working together to grow the community. >> Right. >> It's working together to grow the pie. You know, with open source, it's not really one over the other. It's like, the more players you have and the more players who engage with great ideas, I think better things come out of that, so it's all about that coopetition, >> rather than competition, >> Right. >> I would say. >> Well, certainly, around an open source project, here, >> Yes, exactly. >> and we see a lot of big names, >> Exactly. >> but I can tell you, I've been to a lot of big shows where they are desperately trying to attract >> Right, right, yes. >> the developer ecosystem. "Come develop on our platforms." >> Yes, yes. >> So, you're in a fortunate spot, you started, >> Yes, I mean that-- >> not from zero, but, but open source is different >> Yes. >> and it's an important ethos because it is much more community >> Exactly, exactly. >> and people look at the name, they don't necessarily look at the title >> Exactly. >> or even the company >> Yep, exactly. >> that people work for.
>> Exactly, and I think having more players involved also means, like, it's going to be great for the developer ecosystem, right, because everybody's going to keep pushing for making it better and better, >> Right. >> so, you know, as we grow from a smaller stage to, like, hey, there's actually a lot of enterprise adoption of these use-case scenarios that people are coming up with, et cetera, it's always great to have more parties involved and more people involved. >> Gaurav, thank you very much >> Yeah. >> and, again, congratulations on your work here in Node. Keep this community strong. >> Sure. >> It looks like you guys are well on your way. >> Yeah. Thanks, Jeff. >> All right. >> Thanks for your time, take care, yeah. >> Gaurav Seth, he's a Project Lead at Microsoft. I'm Jeff Frick. You're watching theCUBE from Node Summit 2017. Thanks for watching. (upbeat synthpop music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Jeff Frick | PERSON | 0.99+ |
Jeff | PERSON | 0.99+ |
Ryan Dahl | PERSON | 0.99+ |
Gaurav Seth | PERSON | 0.99+ |
Gaurav | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
20 billion | QUANTITY | 0.99+ |
three | QUANTITY | 0.99+ |
Node.js Foundation | ORGANIZATION | 0.99+ |
Node.js | TITLE | 0.99+ |
Guarav Seth | PERSON | 0.99+ |
both | QUANTITY | 0.99+ |
Node | TITLE | 0.99+ |
first | QUANTITY | 0.99+ |
two pillars | QUANTITY | 0.99+ |
second thing | QUANTITY | 0.98+ |
Outlook | TITLE | 0.98+ |
Chakra | TITLE | 0.98+ |
Node Summit 2017 | EVENT | 0.98+ |
one | QUANTITY | 0.97+ |
theCUBE | ORGANIZATION | 0.97+ |
JavaScript | TITLE | 0.97+ |
Mission Bay Conference Center | LOCATION | 0.97+ |
10:00 | DATE | 0.97+ |
Windows | TITLE | 0.97+ |
WEAMS | TITLE | 0.97+ |
Linux | TITLE | 0.96+ |
third thing | QUANTITY | 0.96+ |
first time | QUANTITY | 0.95+ |
TheCUBE | ORGANIZATION | 0.95+ |
Office | TITLE | 0.95+ |
today | DATE | 0.95+ |
Node Core Committee | ORGANIZATION | 0.94+ |
Azure | TITLE | 0.93+ |
four things | QUANTITY | 0.86+ |
NAPI | ORGANIZATION | 0.83+ |
San Francisco | LOCATION | 0.81+ |
Node | ORGANIZATION | 0.8+ |
NET | ORGANIZATION | 0.75+ |
zero | QUANTITY | 0.75+ |
Azure | ORGANIZATION | 0.7+ |
Node Summit | LOCATION | 0.69+ |
Diagnostics Working Group | ORGANIZATION | 0.64+ |
2017 | DATE | 0.58+ |
365 | QUANTITY | 0.54+ |
Edge | TITLE | 0.53+ |
Things | ORGANIZATION | 0.52+ |
BasS | TITLE | 0.52+ |
Group | ORGANIZATION | 0.47+ |
Charles Beeler, Rally Ventures | Node Summit 2017
>> Hey welcome back everybody. Jeff Frick here at theCUBE. We're at Node Summit 2017 in Downtown San Francisco. 800 people hanging out at the Mission Bay Conference Center talking about development and a really monumental growth curve. One of the earlier presenters had one project last year; I think 15 this year, 22 in development and another 75 toy projects. The development curve is really steep. IBM's here, Microsoft, Google, all the big players, so there is a lot of enterprise momentum as well, and we're happy to have our next guest, who really started this show and is one of the main sponsors of the show. He's Charles Beeler. He's a general partner at Rally Ventures. Charles, great to see you. >> Good to be back. Good to see you. >> Yeah, absolutely. Just kind of general impression. You've been doing this for a number of years, I think, when we talked earlier. The Ryan Dahl interview from, I don't even know what year it is, I'd have to look. >> 2012, January 2012. >> 2012. It's still one of our most popular interviews of all the thousands we've done on theCUBE, and now I kind of get it. >> Right place, right time, but it was initially a lot. In 2011, we were talking about Node. It seemed like a really interesting project. No one was really using it in a meaningful way. Bryan Cantrill from Joyent, I know you all have talked before, walked me through the Hello World example on our board in my office, and we decided let's go for it. Let's see if we can get a bunch of enterprises to come and start talking about what they're doing. So January 2012, there were almost none who were actually doing it, but they were talking about why it made sense. And you fast forward to 2017, so HomeAway was the company that actually had no Node apps. Now 15, 22 in development like you were mentioning, and right now on stage you've got Twitter talking about Twitter Lite. The breadth, and it's not just internet companies when you look at Capital One.
You look at some of the other big banks and true enterprise companies who are using this. It's been fun to watch, and for us, we do enterprise investing so it fits well, but selfishly this community is just a fun group of people to be around. So as much as this helps Rally and things, we've always been in awe of what the folks around the Node community have meant to try to do, and it did start with Ryan and kind of went from there. It's fun to be back and see it again for the fifth annual installment. >> It's interesting, some of the conversations on stage were also about community development and community maturation, and people doing bad behavior even though they're technically strong. We've seen some of these kinds of growing pains in some other open source communities. The one that jumps out is OpenStack, as we've watched that one kind of grow and morph over time. So these are good. There's bad problems and good problems. These are good growing pain problems. >> And that's an interesting one, because you read the latest press about the venture industry and the issues are there, and people talk more generally about the tech industry. And it is a problem. It's a challenge, and it starts with encouraging a broad, diverse group of people who would be interested in this business. >> Jeff: Right, right. >> And getting into it, and so the Node community to me has always been, and I think almost any other open source community could benefit from looking at, not just how they've done it, but who the people are and what they've driven. For us, one of the things we've always tried to do is bring a diverse set of speakers to come and get engaged. And it's really hard to go and find enough people who have the time and willingness to come up on stage, and it's so rewarding when you start to really expose the breadth of who's out there engaged and doing great stuff. Last year, we had Stacy Kirk, who runs a company down in L.A.
Her entire team pretty much is based in Jamaica, and she brought the whole team out. >> Jeff: Really? >> It was so much fun to have a whole new group of people the community just didn't know, to get to know and be in awe of what they're building. I thought the Electron conversation, they were talking about community, that was Jacob from GitHub. It's an early community though. They're trying to figure it out. On the OpenStack side, it's very corporate-driven. It's harder to have those conversations. In the Node community, it's still more community-driven, and as a result they're able to have more of the conversation around how do we build a very inclusive group of people who can frankly do a more effective job of changing development. >> Jeff: Right, well kudos to you. I mean, you opened up the conference in your opening remarks talking about the code of conduct, and it's kind of like good news, bad news. Like, really, we have to talk about what should basically be common sense, but you have to do it, and that's part of the program. It was Women in Tech Wednesday today, so we've got a boatload of cards going out today with a lot of the women, and it's been proven time and time again that the diversity of opinions tackling any problem is going to lead to a better solution, and hopefully this is not new news to anybody either. >> No, and we have a few scholarship folks from Women Who Code over here. We've done that with them for the last few years, but there are so many organizations for anyone who actually wants to spend a little time figuring out how can I be a part of the, I don't know if I'd call it solution, but help with a challenge that we have to face. It's Women Who Code. It's Girls Who Code. It's Black Girls Code, and it's not just women. There's a broad, diverse set of people we need to engage. >> Jeff: Right, right.
>> We have a group here, Operation Code, who's working with veterans who would like to find a career and are starting to become developers, and we have three or four sponsored folks from Operation Code too. And again, it's just rewarding to watch people who are some of the key folks who helped really make Node happen walking up to some stranger who's sort of staring around, hasn't met anybody, introducing themselves and saying, "Hey, what are you interested in and how can I help?" And it's one of the things that frankly brings us back to do this year after year. It's rewarding. >> Well, it's kind of an interesting piece of what Node is. Again, we keep hearing time and time again, it's an easy language. Use the same language for the front end or the back end. >> Yep. >> Use a bunch of pre-configured modules. I think Monica from Intel, she said that a lot of the code they see is 2% your code and everything else you're leveraging from other people. And we see in all these tech conferences that the way to have innovation is to enable more people to contribute, that have the tools and the data, and that's really kind of part of what this whole ethos is here.
Erstwhile competitors sitting comparing notes and ideas and someone said to me. One of the Google folks, Miles Boran had said. Mostly I love coming to this because the hallway chatter here is just always so fascinating. So you go hear these great talks and you walk out and the speakers are there. You get to talk to them and really learn from them. >> I want to shift gears a little. I always great to get a venture capitalist on it. Everybody wants to hear your thoughts and you see a lot of stuff come across your desk. As you just look at the constant crashing of waves of innovation that we keep going through here and I know that's apart of why you live here and why I do too. And Cloud clearly is probably past the peak of the wave but we're just coming into IoT and internet of things and 5G which is going to be start to hit in the near future. As you look at it from an enterprise perspective. What's getting you excited? What are some of the things that maybe people aren't thinking about that are less obvious and really the adoption of enterprises of these cutting edge technologies. Of getting involved in open source is really phenomenal thing of environment for start ups. >> Yeah and what you're seeing as the companies, the original enterprises that were interested in nodes. You decided to start deploying. The next question is alright this worked, what else can we be doing? And this is where you're seeing the advent of first Cloud but now how people are thinking about deployment. There's a lot of conversation here this week about ServerList. >> Jeff: Right, right. We were talking about containers. Micro services and next thing you know people are saying oh okay what else can we be doing to push the boundaries around this? So from our perspective, what we think about when we think about when we think of enterprise and infrastructure and Dev Ops et cetera is it is an ever changing thing. 
So cloud as we know it today is sort of, it's done, but it's not close to being finished when you think about how people are making cloud-native apps and deploying them. How that keeps changing, the questions they keep asking, but also now, to your point, when you look at 5G, when you look at IoT, the deployment methodology, they're going to have to change. The development languages are going to change, and that will once again result in further change across the entire infrastructure. How am I going to deploy? So I would say that we have not stopped seeing innovative stuff in any of those categories. You asked about where do we see kind of future things that we like. Like, as a VC, if I don't say AI and ML, what are the other ones I'm supposed to say? Virtual reality, augmented reality, drones obviously are huge. >> It's anti-drones. Drone detection. >> We look at those as enabling technologies. We're more interested, from a Rally perspective, in applied use of those technologies, so there's some folks from GrowBio here today. And I'm sure you know Grail, right, they raised a billion dollars. The first question I asked the VP who is here, I said, did you cure cancer yet? 'Cause it's been like a year and a half. They haven't yet, sorry. But what's real interesting is when you talk to them about what they're doing. So first, they're using Node, but the approach they're taking to try to make their software get smarter and smarter and smarter by the stuff they see, how they're changing, it's just fundamentally different than things people were thinking about a few years ago. So for us, the applied piece is we want to see companies like a Grail come in and say, here's what we're doing, here's why, and here's how we're going to leverage all of these enabling technologies to go accomplish something that no one has ever been able to do before. >> Jeff: Right, right. And that's what gets us excited. The idea of artificial intelligence. It's cool, it's great. I love talking about it.
Walk me through how you're going to go do something compelling with that. Blockchain is an area that we're spending, have been, but continue to spend a lot of time looking at right now, not so much from a currency perspective. It's just very compelling technology, and the breadth of our capability there is incredible. We've met, in the last week, I met four entrepreneurs. There are three of them who are here talking about just really novel ways to take advantage of a technology that is still just kind of in the early stages, from our perspective, of getting to a point where people can really deploy within large enterprise. And then I'd say the final piece for us, and it's not a new space, but kind of sitting over all of this, is security. And as these things change constantly, the security needs are going to change, right. The footprint in terms of what the attack surface looks like, it gets bigger and bigger. It gets more complex, and the unfortunate reality of simplifying the development process is you also sometimes sort of move out the security thought process from a developer perspective. From a deployment perspective, you assume, I've heard companies say, well, we don't need to worry about security because we keep our stuff on Amazon. As a security investor, I love hearing that. As a user of some of those solutions, it scares me to death, and so we see this constant evolution there. And what's interesting, you have, today I think we have five security companies who are sponsoring this conference. The first few years, no one even wanted to talk about security. And now you have five different companies who are here really talking about why it matters if you're building out apps and deploying in the cloud, what you should be thinking about from a security perspective. >> Security is so interesting because to me, it's kind of like insurance. How much is enough? And ultimately you can just shut everything down and close it off, but that's not the solution.
So where's the happy medium, and the other thing that we hear over and over is it's got to be baked into all the layers of the cake. It can't just be the castle-and-moat methodology anymore. >> Charles: Absolutely. >> How much do you have? Where do you put it in? But where do you stop? 'Cause ultimately it's like insurance. You can just keep buying more and more. >> And recognize the irony of sitting here in San Francisco while Black Hat's taking place. We should both be out there talking about it too. (laughing) >> Well, no, 'cause you can't go there with your phone, your laptop. No, you're just supposed to bring your car anymore. >> This is the first year in four years that my son won't be at DEF CON. He just turned seven, so he set the record at four, five and six as the youngest DEF CON attendee. A little bitter we're not going this year, and shout out because he was first place in the kid's capture the flag last year. >> Jeff: Oh, very good. >> Until he decided to leave and go play video games. So the way we think about the question you just asked on security, and this is actually, I give a lot of credit to Art Coviello. He's one of our venture partners. He was the CEO at RSA for a number of years, ran it post-EMC acquisition as well. It's not so much of a, okay, I've got this issue, it could be ransomware or whatever it is, and people come in and say, we solve that. You might solve the problem today, but you don't solve the problem for the future typically. The question is, what is it that you do in my environment that covers a few things. One, how does it reduce the time and energy my team needs to spend on solving these issues so that I can use them? Because the people problem in security is huge. >> Right. >> And if you can reduce the amount of time people are spending on what could be automated tasks, manual tasks, and instead get them focused on higher-value work, you get to cover more. So how does it reduce the stress level for my team?
What do I get to take out? I don't have unlimited budget. That could be buying point solutions. What is it that you will allow me to replace so that the net cost to me to add your solution is actually neutral or negative, so that I can simplify my environment? Again, going back to making these work for the people, and then, what is it that you do beyond claiming that you're going to solve a problem I have today? Walk me through how this fits into the future. There are not a lot of the thousands of-- >> Jeff: Those are not easy questions. >> They're not easy questions, and so when you ask that and apply that to every company who's at Black Hat today, every company at RSA, there are not very many of those companies who can really answer that in a concise way. And you talk to CISOs, those are the questions they're starting to ask. Great, I love what you're doing. It's not a question of whether I have you in my budget this year or next. What do I get to do in my environment differently that makes my life easier or my organization's life easier, and ultimately nets it out at a lower cost? It's a theme we invest in. About 25% of our investments have been in the security space, and I feel like so far every one of those deals fits in some way in that category. We'll see how they play out, but so far so good. >> Well, very good. So before we let you go, just a shout out, I think we've talked before. You sold out sponsorship, so people that want to get involved in Node 2018, they better step up pretty soon. >> 2018 will happen. It's the earliest we've ever confirmed and announced next year's conference. It usually takes me five months before >> Jeff: To recover. >> I'm willing to think about it again. It will happen. It will probably happen within the same one week timeframe, two week timeframe. Actually, someone put a ticket tier up for next year, or if you buy tickets during the conference in the next two days, you can buy a ticket for $395 today. They're a thousand bucks otherwise.
It's a good deal if people want to go, but the nice thing is we've never had a team that does outreach to sponsors. It's always been inbound interest, people who want to be involved, and it's made the entire thing just a lot of fun to be a part of. We'll do it next year, and it will be really fascinating to see how much additional growth we see between now and then, because based on some of the enterprises we're seeing here, I mean true Fortune 500, nothing to do with technology from a revenue perspective, they just use it internally, you're seeing some really cool development taking place, and we're going to get some of that on stage next year. >> Good, well congrats on a great event. >> Thanks. And thanks for being here. It's always fun to have you guys. >> He's Charles Beeler. I'm Jeff Frick. You're watching theCUBE, Node Summit 2017. Thanks for watching. (uptempo techno music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Charles Beeler | PERSON | 0.99+ |
Stacy Kirk | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
Charles | PERSON | 0.99+ |
Jeff Frick | PERSON | 0.99+ |
Jeff | PERSON | 0.99+ |
Monica | PERSON | 0.99+ |
$1000 | QUANTITY | 0.99+ |
January 2012 | DATE | 0.99+ |
Jamaica | LOCATION | 0.99+ |
Bryan Cantrell | PERSON | 0.99+ |
2011 | DATE | 0.99+ |
three | QUANTITY | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
San Francisco | LOCATION | 0.99+ |
seven | QUANTITY | 0.99+ |
2012 | DATE | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Ryan Dawles | PERSON | 0.99+ |
$395 | QUANTITY | 0.99+ |
Last year | DATE | 0.99+ |
Miles Boran | PERSON | 0.99+ |
next year | DATE | 0.99+ |
GrowBio | ORGANIZATION | 0.99+ |
first question | QUANTITY | 0.99+ |
four years | QUANTITY | 0.99+ |
last year | DATE | 0.99+ |
today | DATE | 0.99+ |
2017 | DATE | 0.99+ |
L.A. | LOCATION | 0.99+ |
Home Away | ORGANIZATION | 0.99+ |
800 people | QUANTITY | 0.99+ |
RSA | ORGANIZATION | 0.99+ |
six | QUANTITY | 0.99+ |
2018 | DATE | 0.99+ |
one week | QUANTITY | 0.99+ |
2% | QUANTITY | 0.99+ |
five | QUANTITY | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
75 toy projects | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
Mission Bay Conference Center | LOCATION | 0.99+ |
Jacob | PERSON | 0.99+ |
Capital One | ORGANIZATION | 0.99+ |
last week | DATE | 0.99+ |
this week | DATE | 0.99+ |
Rally Ventures | ORGANIZATION | 0.99+ |
first year | QUANTITY | 0.98+ |
DMC | ORGANIZATION | 0.98+ |
first place | QUANTITY | 0.98+ |
first | QUANTITY | 0.98+ |
Ryan | PERSON | 0.98+ |
both | QUANTITY | 0.98+ |
GitHub | ORGANIZATION | 0.98+ |
thousands | QUANTITY | 0.98+ |
five security companies | QUANTITY | 0.98+ |
five different companies | QUANTITY | 0.98+ |
Wednesday | DATE | 0.98+ |
a year and a half | QUANTITY | 0.98+ |
Node Summit 2017 | EVENT | 0.98+ |
DEF CON. | EVENT | 0.98+ |
One | QUANTITY | 0.97+ |
four | QUANTITY | 0.97+ |
four entrepreneurs | QUANTITY | 0.97+ |
Chris Jones QA Session **DO NOT PUBLISH**
(upbeat music) >> Okay, welcome back everyone. I'm John Furrier here in theCUBE, in Palo Alto for "CUBE Conversation" with Chris Jones, Director of Product Management at Platform9. I've got a series of questions, had a great conversation earlier. Chris, I have a couple questions for you, what do you think? >> Let's do it, John. >> Okay, the Platform9 solution, can it be used on any infrastructure anywhere, cloud, edge, on-premise? >> It can, that's the beauty of our control plane, right? It was born in the cloud, and we primarily deliver that as SaaS, which allows it to work in your data center, on bare metal, on VMs, or with public cloud infrastructure. We now give you the ability to take that control plane, install it in your data center, and then use it with anything, or even air-gapped. And that includes capabilities with bare metal orchestration as well. >> Second question. How does Platform9 ensure maximum uptime and proactive issue resolution? >> Oh, that's a good question. So if you come to Platform9, we're going to talk about always-on assurance. What is driving that is a system of three components around self-healing, monitoring, and proactive assistance. So our software will heal broken things on nodes, right? If something stops running that should be running, it will attempt to restart that. We also have monitoring that's deployed with everything. So you build a cluster in AWS, well, we put open source monitoring agents, which are actually Prometheus, on every single node. That means it's resilient, right? So if you lose a node, you don't lose monitoring. But that data importantly comes back to our control plane, and that's the control plane that you can put in your data center as well. That data is what alerts us, and you as a user, any time of the day that something's going wrong. Let's say etcd latency, good example, etcd is going slow.
We'll find out. We might not be able to take restorative action immediately, but we're definitely going to reach out and say, "You have a problem, let's get ahead of this and let's prevent that from becoming a bigger problem." And that's what we're delivering. When we say always-on assurance, we're talking about self-healing, we're talking about remote monitoring, we're talking about being proactive with our customers, not waiting for the phone call or the support desk ticket saying, "Oh, we think something's not working." Or worse, the customer has an outage. >> Awesome. Thanks for sharing. Can you explain the process for implementing Platform9 within a company's existing infrastructure? >> Are we doing air gap, or on-prem, or the SaaS approach? The SaaS approach, I think, is by far the easiest, right? We can build a dedicated Platform9 control plane instance in a matter of minutes, for any customer. So when we do a proof of concept or onboarding, we just literally put in an email address, put in the name you want for your fully qualified domain name, and your instance is up. From that point onwards, the user can just log in, and using our CLI, talk to any number of, say, virtual machines or physical servers in their environment, for, you know, doing this in a data center or colo, and say, "I want these to be my Kubernetes control plane nodes. Here's the five of them. Here's the VIP for load balancing the API server, and here are all of my compute nodes." And that CLI will work with the SaaS control plane and go and build the cluster. That's as simple as it gets: CentOS, Ubuntu, just a plain old operating system. Our software takes care of all the prerequisites, installing all the pieces, putting down MetalLB, CoreDNS, Metrics Server, Kubernetes dashboard, etcd backups. You built some servers. That's essentially what you've done, and the rest is being handled by Platform9. It's as simple as that. >> Great, thanks for that.
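The always-on assurance answer above describes three moving parts: self-healing (restart what stopped), per-node Prometheus monitoring, and proactive alerting on signals like etcd latency. A minimal sketch of what one decision cycle of such a node agent might look like, purely illustrative: the function name, action format, and threshold here are invented for this example, not Platform9's actual implementation.

```python
# Hypothetical sketch of one "always-on assurance" cycle: restart
# anything that should be running but isn't (self-healing), and raise
# an alert when etcd latency crosses a threshold (proactive assistance).
# Threshold and action shapes are invented for illustration.

ETCD_LATENCY_ALERT_SECONDS = 0.5

def assurance_actions(services, etcd_latency_seconds):
    """Decide this cycle's actions.

    services: dict mapping service name -> "running" or "stopped"
    Returns a list of ("restart", name) and ("alert", message) tuples.
    """
    actions = []
    for name, state in services.items():
        if state == "stopped":
            # Self-healing: attempt to restart the broken service.
            actions.append(("restart", name))
    if etcd_latency_seconds > ETCD_LATENCY_ALERT_SECONDS:
        # Proactive: surface the problem to the control plane (and the
        # user) before slowness becomes an outage.
        actions.append(("alert", "etcd latency high"))
    return actions
```

In the product as described, a loop like this would run on every node and report back to the control plane, whether that control plane is SaaS-hosted or installed in your own data center.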
What are the two traditional paths for companies considering the cloud native journey? The two paths. >> The traditional paths. I think one is your engineering team running so fast that before you even realize it, you've got, you know, 10 EKS clusters. Or, "hey, we can do this," you know, the I-can-build-it mentality: let's go DIY completely open source Kubernetes on our infrastructure, and we're going to piecemeal build it all up together. Those are, I think, the pathways that people traditionally look at for this journey, as opposed to having that third alternative saying, can I just consume it on my infrastructure, be it cloud or on-premise or at the edge? >> Third is the new way, you guys do that. >> That's been our focus since the company was, you know, brought together back in the OpenStack days. >> Awesome, what's the makeup of your customer base? Is there a certain pattern to the size or environments that you guys work with? Is there a pattern or consistency to your customer base? >> It's a spread, right? We've got large enterprises like Juniper, and we go all the way down to people with 20, 30, 50 nodes in total. We've got people in banking and finance, we've got things all the way through to telecommunications and storage infrastructure. >> What's your favorite feature of Platform9? >> My favorite feature? You know, if I answer as a pre-sales engineer, "let me show you my favorite thing," my immediate response is, I should never do this. (John laughs) To me, it's just being able to define my cluster and say, go. And in five minutes I have that environment, I can see everything that's running, right? It's all unified, it's one spot, right? I'm a cluster admin. I said I wanted three control plane nodes, 25 workers. Here's the infrastructure, it creates it, and once it's built, I can see everything that's running, right? All the applications that are there. One UI, I don't have to go click around. I'm not trying to solve things or download things.
It's the fact that it's unified and just delivered in one hit. >> What is the one thing that people should know about Platform9 that they might not know about it? >> I think it's that we help developers and engineers as much as we can help our operations teams. I think, for a long time, we've sort of targeted that operations user and said, hey, we really help you. But why are they doing this? Why are they building any infrastructure or any cloud platform? Well, it's to run applications and services, to help their customers. But how do they get there? There are people building and writing those things, and we're helping them, right? For the last two years, we've been really focused on making it simple, and I think that's an important thing to know. >> Chris, thanks so much, appreciate it. >> Yeah, thank you, John. >> Okay, that's theCUBE Q&A session here with Platform9. I'm John Furrier, thanks for watching. (light music)
Google's PoV on Confidential Computing
>> Welcome Nelly and Patricia, great to have you. >> Great to be here. >> Thank you so much for having us. >> You're very welcome. Nelly, why don't you start, and then Patricia you can weigh in. Just tell the audience a little bit about each of your roles at Google Cloud. >> So I'll start. I'm owning a lot of interesting activities in Google, and again, it's security, or infrastructure security, that I usually own. We're talking about encryption, end-to-end encryption, and confidential computing is a part of that portfolio. An additional area that I contribute to, together with my team, for Google and our customers, is secure software supply chain, because you need to trust your software. Is it operating in your confidential environment? To have an end-to-end story about whether you can believe that your software and your environment are doing what you expect, that's my role. >> Got it, okay. Patricia? >> Well, I am a technical director in the Office of the CTO, OCTO for short, in Google Cloud. And we are a global team. We include former CTOs like myself and senior technologists from large corporations, institutions, and a lot of successful startups as well. And we have two main goals. First, we work side by side with some of our largest, more strategic, or most strategic customers, and we help them solve complex engineering technical problems. And second, we advise Google and Google Cloud engineering and product management on emerging trends and technologies to guide the trajectory of our business. We are a unique group, I think, because we have created this collaborative culture with our customers. And within OCTO, I spend a lot of time collaborating with customers and the industry at large on technologies that can address privacy, security, and sovereignty of data in general. >> Excellent, thank you for that, both of you. Let's get into it. So Nelly, what is confidential computing from Google's perspective? How do you define it? >> Confidential computing is a tool.
And it's one of the tools in our toolbox. Confidential computing is a way to help our customers complete this very interesting end-to-end lifecycle of their data. When customers bring their data to the Cloud and want to protect it, they protect it as they ingest it into the Cloud, and they protect it at rest when they store data in the Cloud. But what was missing for many, many years is the ability for us to continue protecting the data and workloads of our customers when they are running them. And again, data is not brought to the Cloud to sit in some huge graveyard; we need to ensure that this data is actually indexed, that insights are driven and drawn from this data. You have to process this data, and confidential computing is here to help. Now we have end-to-end protection of our customers' data when they bring their workloads and data to the Cloud, thanks to confidential computing. >> Thank you for that. Okay, we're going to get into the architecture a bit, but before we do, Patricia, why do you think this topic of confidential computing is such an important technology? Can you explain, do you think it's transformative for customers, and if so, why? >> Yeah, I would maybe like to use one thought, one way, one intuition behind why confidential computing matters. Because at the end of the day, it reduces more and more the customer's trust boundaries and the attack surface. It's about reducing that periphery, the boundary, in which the customer needs to mind trust and safety. And in a way, it's a natural progression of using encryption to secure and protect data: in the same way that we are encrypting data in transit and at rest, now we are also encrypting data while in use. And among other benefits, I would say one of the most transformative is that organizations will be able to collaborate with each other and retain the confidentiality of the data. And that is across industry.
Even though it's, I wouldn't say highly focused on, but very beneficial for highly regulated industries, it applies to all industries. And if you look at financing, for example, where banks are trying to detect fraud, and specifically double financing, where a customer is actually trying to get financing on an asset, let's say a boat or a house, and then goes to another bank and gets another loan on that same asset. Now banks would be able to collaborate and detect fraud while preserving confidentiality and privacy of the data. >> Interesting, and I want to understand that a little bit more, but I'm going to push you a little bit on this, Nelly, if I can, because there's a narrative out there that says confidential computing is a marketing ploy. I talked about this upfront, by Cloud providers that are just trying to placate people that are scared of the Cloud. And I'm presuming you don't agree with that, but I'd like you to weigh in here. The argument is confidential computing is just memory encryption, it doesn't address many other problems, it is overhyped by Cloud providers. What do you say to that line of thinking? >> I absolutely disagree, as you can imagine; it's a crazy statement. But most importantly, we are mixing multiple concepts, I guess. And exactly as Patricia said, we need to look at the end-to-end story, not just the mechanism of how confidential computing tries to execute and protect the customer's data, and why it's so critically important. Because what confidential computing was able to do, in addition to isolating our tenants in the multi-tenant environments the Cloud offers, is to offer additional, stronger isolation; we call it cryptographic isolation. It's why customers will have more trust toward other customers, the tenants running on the same host, but also toward us, because they don't need to worry about threats and malicious attempts to penetrate the environment.
So what confidential computing is helping us offer our customers is stronger isolation between tenants in this multi-tenant environment, but also, incredibly important, stronger isolation of our customers from us. So, tenants from us: we are also writing code, and we as software providers will also make mistakes or have some zero days, sometimes introduced by us, sometimes introduced by our adversaries. But what I'm trying to say is that by creating this cryptographic layer of isolation between us and our tenants, and amongst those tenants, we're really providing meaningful security to our customers, and eliminating some of the worries that they have running in multi-tenant spaces or even collaborating together on this very sensitive data, knowing that this particular protection is available to them. >> Okay, thank you, appreciate that. And I, you know, I think malicious code is often a threat model missed in these narratives. You know, operator access: yeah, maybe I trust my Cloud provider, but if I can fence off your access, even better, I'll sleep better at night. Separating the code from the data, everybody's on board: Arm, Intel, AMD, Nvidia, others, they're all doing it. I wonder, Nelly, if we could stay with you and bring up the slide on the architecture. What's architecturally different with confidential computing versus how operating systems and VMs have worked traditionally? We're showing a slide here with some VMs, maybe you could take us through that. >> Absolutely. And Dave, the whole idea for Google and the industry's way of dealing with confidential computing is to ensure that its three main properties are actually preserved. Customers don't need to change their code. They can operate in those VMs exactly as they would with normal, non-confidential VMs. But giving them this opportunity of lift and shift, of not changing their apps, while performing with very, very, very low latency and scaling as any Cloud can, is something that Google actually pioneered in confidential computing.
I think we need to open up and explain how this magic was actually done. And as I said, the whole entire system had to change to be able to provide this magic. And I would start with this: we have the concept of a root of trust, where we will ensure that this machine, the whole entire host, has an integrity guarantee, meaning nobody changed my code at the lowest level of the system. And we introduced this in 2017 with a chip called Titan. It's our own specific ASIC, a chip on every single motherboard that we have, that ensures that your low-level firmware, your actual system code, your kernel, the most powerful parts of the system, are properly configured and not changed, not tampered with. We do it for everybody, confidential computing included. But for confidential computing, what we had to change is that we bring in AMD and, in the future, other silicon vendors, and we have to trust their firmware, their way of dealing with our confidential environments. And that's why we have an obligation to validate the integrity not only of our software and our firmware, but also of the firmware and software of our vendors, the silicon vendors. So when we are booting this machine, as you can see, we validate that the integrity of all of this system is in place. It means nobody touched it, nobody changed it, nobody modified it. But then we have the concept of the secure processor. It's a special ASIC-based piece that generates a key for every single VM that our customers will run, or every single node in Kubernetes, or every single worker in our Spark capability. We offer all of that, and those keys are not available to us. They're the best keys ever in the encryption space, because when we are talking about encryption, the first question I'm receiving all the time is: where's the key, who will have access to the key? Because if you have access to the key, then it doesn't matter how well you encrypt it.
But in the case of confidential computing, and this is quite a revolutionary technology, us Cloud providers don't have access to the keys. They're sitting in the hardware, and they're fed to the memory controller. And it means that when the hypervisor, which also knows about these wonderful things, says "I need to get access to the memory of this particular VM," it does not get to decrypt the data; it doesn't have access to the key, because those keys are random, ephemeral, and per-VM, and most importantly, kept in hardware, not exportable. And it means you will now be able to have this very interesting world where we, the Cloud provider, will not be able to get access to your memory. And what we do, again, as you can see: our customers don't need to change their applications. Their VMs run exactly as they should run. And when you're running in the VM, you actually see your memory in the clear; it's not encrypted from your point of view. But if, God forbid, somebody tries to read it from outside of my confidential box: no, no, no, you will not be able to do it. You'll only see ciphertext. And that's exactly what this combination of multiple hardware pieces and software pieces has to do. So the OS is also modified, and it's modified in such a way as to provide integrity. It means even the OS that you're running in your VM box is not modifiable, and you as a customer can verify that. But the most interesting thing, I guess, is how to ensure the super performance of this environment, because you can imagine, Dave, that encrypting adds additional overhead: additional time, additional latency. So we were able to mitigate all of that by providing an incredibly interesting capability in the OS itself. So our customers get no changes needed, fantastic performance, and scale as they would expect from a Cloud provider like Google. >> Okay, thank you. Excellent, appreciate that explanation.
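The ephemeral, per-VM, non-exportable key model Nelly describes above can be illustrated with a deliberately simplified sketch. The XOR keystream below is a toy stand-in for the AES hardware encryption that real memory controllers use, and `ConfidentialVM` is a hypothetical name invented for the example; the point is only that the same bytes look like plaintext from inside the VM and ciphertext from outside.

```python
import hashlib
import secrets

def keystream(key, n):
    # Toy keystream: SHA-256 in counter mode. Illustration only,
    # NOT a production cipher.
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

class ConfidentialVM:
    """Toy model of hardware memory encryption with an ephemeral per-VM key."""

    def __init__(self):
        # Ephemeral key generated "in hardware": never exported.
        self._key = secrets.token_bytes(32)
        self._memory = b""

    def write(self, plaintext):
        ks = keystream(self._key, len(plaintext))
        self._memory = bytes(a ^ b for a, b in zip(plaintext, ks))

    def read(self):
        # Inside the VM: memory appears in the clear.
        ks = keystream(self._key, len(self._memory))
        return bytes(a ^ b for a, b in zip(self._memory, ks))

    def dump_from_hypervisor(self):
        # Outside the VM (hypervisor, operator): ciphertext only.
        return self._memory
```

Because `dump_from_hypervisor` returns the raw encrypted bytes while `read` applies the key, the same region of "memory" is usable inside the VM and opaque outside it, which is the isolation property being described.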
So you know, again, the narrative on this is: well, you know, you've already given me guarantees as a Cloud provider that you don't have access to my data, but this gives another level of assurance. Key management, as they say, is key. Now humans aren't managing the keys, the machines are managing them. So Patricia, my question to you is: compared to, you know, the pre-confidential-computing days, what are the new guarantees that these hardware-based technologies are going to provide to customers? >> So if I am a customer, I am saying I now have full guarantee of confidentiality and integrity of the data and of the code. So if you look at code and data confidentiality, what the customer cares about is they want to know whether their systems are protected from outside or unauthorized access. And as we covered with Nelly, it is. Confidential computing actually ensures that the application and data internals remain secret, right? The code is actually looking at the data; only in memory is the data decrypted, with a key that is ephemeral, per-VM, and generated on demand. Then you have the second point, where you have code and data integrity, and now customers want to know whether their data was corrupted, tampered with, or impacted by outside actors. And what confidential computing ensures is that application internals are not tampered with. So the application, the workload as we call it, that is processing the data has also not been tampered with and preserves integrity. I would also say that this is all verifiable. So you have attestation, and this attestation actually generates a log trail, and the log trail provides a proof that integrity was preserved. And I think this also offers a guarantee of what we call sealing: this idea that the secrets have been preserved and not tampered with. Confidentiality and integrity of code and data. >> Got it, okay, thank you.
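Patricia's attestation point, a verifiable log trail proving the workload was not tampered with, rests on a measurement chain: each boot or workload component's hash is folded into a running digest, and a "quote" binds that digest to a verifier's nonce. The sketch below is a generic illustration, not Google's or any TPM vendor's actual protocol, and it uses an HMAC where real hardware would use an asymmetric signature from a hardware-protected key.

```python
import hashlib
import hmac

def extend(measurement, component):
    # Fold the next component into the running digest, TPM-PCR style:
    # new = H(old || H(component)).
    return hashlib.sha256(measurement + hashlib.sha256(component).digest()).digest()

def measure(components):
    # Measure a boot/workload chain starting from a known-good value.
    m = b"\x00" * 32
    for c in components:
        m = extend(m, c)
    return m

def quote(device_key, measurement, nonce):
    # The "quote" binds the measurement to the verifier's fresh nonce.
    # HMAC keeps this sketch stdlib-only; real attestation signs with
    # an asymmetric key that never leaves the hardware.
    return hmac.new(device_key, measurement + nonce, hashlib.sha256).digest()

def verify(device_key, q, expected_components, nonce):
    expected = quote(device_key, measure(expected_components), nonce)
    return hmac.compare_digest(q, expected)
```

Any modified stage changes the final digest, so a verifier holding the expected component list can detect tampering anywhere in the chain, which is what makes the log trail a proof rather than a claim.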
You know, Nelly, you mentioned, I think I heard you say, that for applications it's transparent: you don't have to change the application, it just comes for free, essentially. And we showed some various parts of the stack before. I'm curious as to what's affected, but really more importantly, what is specifically Google's value-add? You know, how do partners participate in this, the ecosystem? Or maybe said another way, how does Google ensure the compatibility of confidential computing with existing systems and applications? >> A fantastic question, by the way. And it's a very difficult and definitely complicated world, because to be able to provide these guarantees, a lot of work was done by the community. Google very much operates in the open. So again, for our operating system, we work in the operating-system repositories with OS vendors to ensure that all the capabilities that we need are part of their kernels, part of their releases, and available for customers to understand and even explore, if they find it fun to explore a lot of code. We have also, together with our silicon vendors, modified the kernel, the host kernel, to support this capability, and that means working with this community to ensure that all of those patches are there. We also worked with every single silicon vendor, as you've seen, and that's where I feel that Google contributed quite a bit in this role. We moved our industry, our community, our vendors to understand the value of easy-to-use confidential computing, of removing barriers. And now, I don't know if you noticed, Intel is following the lead and also announcing their trust domain extensions, a very similar architecture, and no surprise: it's again a lot of work done with our partners to convince them, work with them, and make this capability available. The same with Arm: this year, actually last year, Arm announced their future design for confidential computing. It's called the confidential computing architecture.
And it's also influenced very heavily by similar ideas from Google and the industry overall. So there's a lot of work in the Confidential Computing Consortium that we are doing, for example, simply to ensure interop, as you mentioned, between the different confidential environments of Cloud providers. We want to ensure that they can attest to each other, because when you're communicating with different environments, you need to trust them. And if it's running on different Cloud providers, you need to ensure that you can trust your receiver when you are sharing your sensitive data workloads or secrets with them. So we're coming together as a community, and we have these attestation, community-based systems that we want to build and influence, and we work with Arm and every other Cloud provider to ensure that they can interop. And it means it doesn't matter where confidential workloads are hosted: they can exchange data in a way that is secure, verifiable, and controlled by customers. And to do it, we need to continue what we are doing: working in the open, again, and contributing our ideas and the ideas of our partners to this role, for confidential computing to become what we see it has to become. It has to become a utility. It doesn't need to be so special, but that's what we want it to become. >> Let's talk about, thank you for that explanation. Let's talk about data sovereignty, because when you think about data sharing, you think about data sharing across, you know, the ecosystem and different regions, and then of course data sovereignty comes up. Typically public policy lags, you know, the technology industry, and sometimes that's problematic. I know, you know, there's a lot of discussions about exceptions, but Patricia, we have a graphic on data sovereignty. I'm interested in how confidential computing ensures that data sovereignty and privacy edicts are adhered to, even if they're out of alignment, maybe, with the pace of technology.
One of the frequent examples is when you, you know, when you delete data, can you actually prove the data is deleted with a hundred percent certainty? You've got to prove that, and a lot of other issues. So looking at this slide, maybe you could take us through your thinking on data sovereignty. >> Perfect. So for us, data sovereignty is only one of the three pillars of digital sovereignty, and I don't want to give the impression that confidential computing addresses it all. That's why we want to step back and say, hey, digital sovereignty includes data sovereignty, where we are giving you full control and ownership of the location, encryption, and access to your data. Operational sovereignty, where the goal is to give our Google Cloud customers full visibility and control over the provider's operations, right? So if there are any updates on hardware, software, the stack, any operations, there is full transparency, full visibility. And then the third pillar is around software sovereignty, where the customer wants to ensure that they can run their workloads without dependency on the provider's software. This is sometimes referred to as survivability: that you can actually survive if you are untethered from the Cloud, and that you can use open source. Now let's take a deep dive on data sovereignty, which by the way is one of my favorite topics. We typically focus on saying, hey, we need to care about data residency. We care where the data resides, because where the data is at rest or in processing, it typically abides by the jurisdiction, the regulations of the jurisdiction, where the data resides. And others say, hey, let's focus on data protection. We want to ensure the confidentiality, integrity, and availability of the data, and confidential computing is at the heart of that data protection. But there is yet another element that people typically don't talk about when talking about data sovereignty, which is the element of user control.
And here, Dave, it's about what happens to the data when I give you access to my data. And this reminds me of security two decades ago, even a decade ago, where we started the security movement by putting up firewall protections and login accesses. But once you were in, you were able to do everything you wanted with the data; an insider had access to all the infrastructure, the data, and the code. And that's similar, because with data sovereignty we care about where the data resides and who is operating on it. But the moment that the data is being processed, I need to trust that the processing of the data will abide by user control, by the policies that I put in place for how my data is going to be used. And if you look at a lot of the regulation today, and a lot of the initiatives around the International Data Spaces Association, IDSA, and Gaia-X, there is a movement toward the two parties, the provider of the data and the receiver of the data, agreeing on a contract that describes what my data can be used for. The challenge is to ensure that once the data crosses boundaries, the data will be used for the purposes that were intended and specified in the contract. And if you actually bring together, and this is the exciting part, confidential computing together with policy enforcement, now the policy enforcement can guarantee that the data is only processed within the confines of a confidential computing environment: that the workload is cryptographically verified, that it is the workload that was meant to process the data, and that the data will only be used while abiding by the confidentiality and integrity safety of the confidential computing environment. And that's why we believe confidential computing is a necessary and essential technology that will allow us to ensure data sovereignty, especially when it comes to user control. >> Thank you for that.
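The combination Patricia describes, policy enforcement gated on confidential computing attestation, reduces to a simple rule: release data only when the environment has attested successfully and the requested purpose matches the data-sharing contract. The contract shape below (a requester mapped to its permitted purposes) is an assumption invented for illustration, not any IDSA or Gaia-X schema.

```python
def allowed(contract, requester, purpose, attested):
    # Release data only if (1) the confidential environment attested
    # successfully, i.e. the workload's measurement was verified, AND
    # (2) the requested purpose appears in the data-sharing contract
    # for this requester.
    return attested and purpose in contract.get(requester, ())

# Hypothetical contract echoing the double-financing example:
# bank B may use bank A's data for fraud detection, nothing else.
contract = {"bank-b": ("fraud-detection",)}
```

The interesting property is that neither check suffices alone: a valid purpose in an unattested environment is refused (the workload might be tampered with), and an attested environment asking for an off-contract purpose is refused (user control still applies).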
I mean, it was a deep dive; brief, but really detailed, so I appreciate that, especially the verification of the enforcement. Last question. I met you two because, as part of my year-end prediction post, you guys sent in some predictions, and I wasn't able to get to them in the predictions post. So I'm thrilled that you were able to make the time to come on the program. How widespread do you think the adoption of confidential computing will be in '23, and what does the maturity curve look like, you know, this decade, in your opinion? Maybe each of you could give us a brief answer. >> So my prediction: in five, seven years, as I said, it'll become a utility. It'll become like TLS. Again, 10 years ago we couldn't believe that websites would have certificates and we would support encrypted traffic. Now we do, and it's become ubiquitous. It's exactly where confidential computing is heading, and I don't know if we are there yet. It'll take a few years of maturity for us, but we'll do that. >> Thank you. And Patricia, what's your prediction? >> I would double down on that and say, hey, in the future, in the very near future, you will not be able to afford not having it. I believe as digital sovereignty becomes ever more top of mind with sovereign states, and also for multinational organizations and for organizations that want to collaborate with each other, confidential computing will become the norm. It'll become the default, if I may say, mode of operation. I like to compare it this way: today it is inconceivable, if we talk to the young technologists, it's inconceivable to think that at some point in history, and I happen to have been alive then, we had data at rest that was not encrypted, data in transit that was not encrypted. And I think it will be inconceivable at some point in the near future to have unencrypted data while in use. >> You know, and plus, I think the beauty of this industry is that because there's so much competition, this essentially comes for free.
I want to thank you both for spending some time on Breaking Analysis. There's so much more we could cover. I hope you'll come back to share the progress that you're making in this area and we can double click on some of these topics. Really appreciate your time. >> Anytime. >> Thank you so much.
Breaking Analysis: Google's PoV on Confidential Computing
>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> Confidential computing is a technology that aims to enhance data privacy and security by providing encrypted computation on sensitive data, isolating data from apps in a fenced-off enclave during processing. The concept of confidential computing is gaining popularity, especially in the cloud computing space, where sensitive data is often stored and of course processed. However, there are some who view confidential computing as an unnecessary technology and a marketing ploy by cloud providers aimed at calming customers who are cloud phobic. Hello and welcome to this week's Wikibon CUBE Insights, powered by ETR. In this Breaking Analysis, we revisit the notion of confidential computing, and to do so, we'll invite two Google experts to the show. But before we get there, let's summarize briefly. There's not a ton of ETR data on the topic of confidential computing; I mean, it's a technology that's deeply embedded into silicon and computing architectures. But at the highest level, security remains the number one priority being addressed by IT decision makers in the coming year, as shown here. And this data is pretty much across the board: by industry, by region, by size of company. I mean, we dug into it, and the only slight deviation from the mean is in financial services.
The second and third most cited priorities, cloud migration and analytics, are noticeably closer to cybersecurity in financial services than in other sectors, likely because financial services has always been hyper security conscious, but security is still a clear number one priority in that sector. The idea behind confidential computing is to better address threat models for data in execution. Protecting data at rest and data in transit have long been a focus of security approaches, but more recently, silicon manufacturers have introduced architectures that separate data and applications from the host system. ARM, Intel, AMD, Nvidia and other suppliers are all on board, as are the big cloud players. Now, the argument against confidential computing is that it narrowly focuses on memory encryption and it doesn't solve the biggest problems in security. Multiple system images, updates, different services and the entire code flow aren't directly addressed by memory encryption. Rather, to truly attack these problems, many believe that OSs need to be re-engineered with the attacker and hacker in mind. There are so many variables and at the end of the day, critics say the emphasis on confidential computing made by cloud providers is overstated and largely hype. This tweet from security researcher Rodrigo Branco sums up the sentiment of many skeptics. He says, "Confidential computing is mostly a marketing campaign for memory encryption. It's not driving the industry towards the hard open problems. It is selling an illusion." Okay. Nonetheless, encrypting data in use and fencing off key components of the system isn't a bad thing, especially if it comes with the package essentially for free. There has been a lack of standardization and interoperability between different confidential computing approaches. But the Confidential Computing Consortium was established in 2019 ostensibly to accelerate the market and influence standards.
Notably, AWS is not part of the consortium, likely because the politics of the consortium were probably a conundrum for AWS because the base technology defined by the consortium is seen as limiting by AWS. This is my guess, not AWS' words. But I think joining the consortium would validate a definition which AWS isn't aligned with. And two, it's got a lead with its Annapurna acquisition. It was way ahead with ARM integration, and so it probably doesn't feel the need to validate its competitors. Anyway, one of the premier members of the Confidential Computing Consortium is Google, along with many high profile names, including Arm, Intel, Meta, Red Hat, Microsoft, and others. And we're pleased to welcome two experts on confidential computing from Google to unpack the topic. Nelly Porter is Head of Product for GCP Confidential Computing and Encryption and Dr. Patricia Florissi is the Technical Director for the Office of the CTO at Google Cloud. Welcome Nelly and Patricia, great to have you. >> Great to be here. >> Thank you so much for having us. >> You're very welcome. Nelly, why don't you start and then Patricia, you can weigh in. Just tell the audience a little bit about each of your roles at Google Cloud. >> So I'll start, I'm owning a lot of interesting activities in Google and again, security or infrastructure securities that I usually own. And we are talking about encryption, end-to-end encryption, and confidential computing is a part of portfolio. Additional areas that I contribute to together with my team to Google and our customers is secure software supply chain, because you need to trust your software. Does it operate in your confidential environment to have end-to-end security, about if you believe that your software and your environment doing what you expect, it's my role. >> Got it. Okay, Patricia? >> Well, I am a Technical Director in the Office of the CTO, OCTO for short, in Google Cloud.
And we are a global team, we include former CTOs like myself and senior technologists from large corporations, institutions and a lot of successful startups as well. And we have two main goals. First, we walk side by side with some of our largest, most strategic customers and we help them solve complex engineering technical problems. And second, we advise Google and Google Cloud Engineering, product management on emerging trends and technologies to guide the trajectory of our business. We are a unique group, I think, because we have created this collaborative culture with our customers. And within OCTO I spend a lot of time collaborating with customers in the industry at large on technologies that can address privacy, security, and sovereignty of data in general. >> Excellent. Thank you for that both of you. Let's get into it. So Nelly, what is confidential computing from Google's perspective? How do you define it? >> Confidential computing is a tool and one of the tools in our toolbox. And confidential computing is a way how we would help our customers to complete this very interesting end-to-end lifecycle of the data. And when customers bring in the data to cloud and want to protect it as they ingest it to the cloud, they protect it at rest when they store data in the cloud. But what was missing for many, many years is ability for us to continue protecting data and workloads of our customers when they run them. And again, because data is not brought to cloud to have huge graveyard, we need to ensure that this data is actually indexed. Again, there is some insights driven and drawn from this data. You have to process this data, and confidential computing is here to help. Now we have end-to-end protection of our customer's data when they bring the workloads and data to cloud, thanks to confidential computing. >> Thank you for that.
Okay, we're going to get into the architecture a bit, but before we do Patricia, why do you think this topic of confidential computing is such an important technology? Can you explain? Do you think it's transformative for customers and if so, why? >> Yeah, I would maybe like to use one thought, one way, one intuition behind why confidential computing matters, because at the end of the day, it reduces more and more the customer's trust boundaries and the attack surface. That's about reducing that periphery, the boundary in which the customer needs to mind about trust and safety. And in a way it is a natural progression that you're using encryption to secure and protect data, in the same way that we are encrypting data in transit and at rest. Now, we are also encrypting data while in use. And among other benefits, I would say one of the most transformative ones is that organizations will be able to collaborate with each other and retain the confidentiality of the data. And that is across industry, even though it's highly focused on, I wouldn't say highly focused but very beneficial for highly regulated industries, it applies to all industries. And if you look at financing for example, where bankers are trying to detect fraud and specifically double financing, where a customer is actually trying to get financing on an asset, let's say a boat or a house, and then it goes to another bank and gets another loan on that asset. Now bankers would be able to collaborate and detect fraud while preserving confidentiality and privacy of the data. >> Interesting, and I want to understand that a little bit more but I got to push you a little bit on this, Nelly, if I can, because there's a narrative out there that says confidential computing is a marketing ploy, I talked about this up front, by cloud providers that are just trying to placate people that are scared of the cloud. And I'm presuming you don't agree with that, but I'd like you to weigh in here.
The argument is confidential computing is just memory encryption, it doesn't address many other problems. It is over hyped by cloud providers. What do you say to that line of thinking? >> I absolutely disagree as you can imagine Dave, with this statement. But the most importantly is we mixing a multiple concepts I guess, and exactly as Patricia said, we need to look at the end-to-end story, not again, is a mechanism. How confidential computing trying to execute and protect customer's data and why it's so critically important. Because what confidential computing was able to do, it's in addition to isolate our tenants in multi-tenant environments the cloud offering to offer additional stronger isolation, they called it cryptographic isolation. It's why customers will have more trust to customers and to other customers, the tenants running on the same host, but also us, because they don't need to worry about threats and more malicious attempts to penetrate the environment. So what confidential computing is helping us to offer our customers stronger isolation between tenants in this multi-tenant environment, but also incredibly important, stronger isolation of our customers, the tenants, from us. We also writing code, we also software providers, we also make mistakes or have some zero days. Sometimes again introduced by us, sometimes introduced by our adversaries. But what I'm trying to say, by creating this cryptographic layer of isolation between us and our tenants and among those tenants, we really providing meaningful security to our customers and eliminate some of the worries that they have running on multi-tenant spaces or even collaborating together with very sensitive data, knowing that this particular protection is available to them. >> Okay, thank you. Appreciate that. And I think malicious code is often a threat model missed in these narratives. You know, operator access.
Yeah, maybe I trust my cloud provider, but if I can fence off your access even better, I'll sleep better at night, separating the code from the data. Everybody, ARM, Intel, AMD, Nvidia and others, they're all doing it. I wonder, Nelly, if we could stay with you and bring up the slide on the architecture. What's architecturally different with confidential computing versus how operating systems and VMs have worked traditionally? We're showing a slide here with some VMs, maybe you could take us through that. >> Absolutely, and Dave, the whole idea for Google and now industry way of dealing with confidential computing is to ensure that three main properties are actually preserved. Customers don't need to change the code. They can operate in those VMs exactly as they would with normal non-confidential VMs. But to give them this opportunity of lift and shift though, no changing the apps, and performing and having very, very, very low latency and scale as any cloud can, some things that Google actually pioneered in confidential computing. I think we need to open and explain how this magic was actually done, and as I said, it's again the whole entire system have to change to be able to provide this magic. And I would start with we have this concept of root of trust, and root of trust where we will ensure that this machine within the whole entire host has integrity guarantee, means nobody changing my code on the most low level of system, and we introduced this in 2017, called Titan. So our specific ASIC, a specific inch-by-inch system on every single motherboard that we have, that ensures that your low level firmware, your actually system code, your kernel, the most powerful system, is actually properly configured and not changed, not tampered. We do it for everybody, confidential computing included, but for confidential computing is what we have to change, we bring in AMD or future silicon vendors and we have to trust their firmware, their way to deal with our confidential environments.
And that's why we have obligation to validate integrity of not only our software and our firmware, but also firmware and software of our vendors, silicon vendors. So we actually, when we booting this machine as you can see, we validate that integrity of all of this system is in place. It means nobody touching, nobody changing, nobody modifying it. But then we have this concept of AMD Secure Processor, it's a special ASIC-based component that generates a key for every single VM that our customers will run, or every single node in Kubernetes, or every single worker thread in our Hadoop Spark capability. We offer all of that, and those keys are not available to us. It's the best case ever in encryption space, because when we are talking about encryption, the first question that I'm receiving all the time is, "Where's the key? Who will have access to the key?" because if you have access to the key then it doesn't matter if you encrypted or not. So, but the case in confidential computing, why it's so revolutionary technology, is that us cloud providers don't have access to the keys. They're sitting in the hardware and they're fed to the memory controller. And it means when hypervisors, that also know about these wonderful things, say I need to get access to the memory of this particular VM I'm trying to get access to, they do not decrypt the data, they don't have access to the key, because those keys are random, ephemeral and per VM, but most importantly in hardware, not exportable. And it means now you will be able to have this very interesting world where customers or cloud providers will not be able to get access to your memory. And what we do, again as you can see, our customers don't need to change their applications. Their VMs are running exactly as they should run. And what you're running in VM, you actually see your memory clear, it's not encrypted. But God forbid somebody is trying to do it outside of my confidential box, no, no, no, no, no, you will not be able to do it.
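The per-VM key model Nelly describes can be sketched in a few lines of Python. This is a toy illustration only, not Google's or AMD's implementation: the class names are hypothetical, the XOR "cipher" stands in for the AES engine in the memory controller, and in real hardware (e.g. AMD SEV) the keys never exist in software at all.

```python
# Toy model of per-VM memory encryption with ephemeral, non-exportable keys.
import secrets

def _xor(data: bytes, key: bytes) -> bytes:
    # Stand-in for the AES engine in the memory controller.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class MemoryController:
    """Holds per-VM keys; software (including the hypervisor) never sees them."""
    def __init__(self):
        self._keys = {}  # vm_id -> key, modeled as private state

    def launch_vm(self, vm_id: str):
        # A fresh ephemeral key is generated at VM launch and never exported.
        self._keys[vm_id] = secrets.token_bytes(32)

    def write(self, vm_id: str, plaintext: bytes) -> bytes:
        return _xor(plaintext, self._keys[vm_id])   # ciphertext lands in DRAM

    def read(self, vm_id: str, ciphertext: bytes) -> bytes:
        return _xor(ciphertext, self._keys[vm_id])  # decrypted only for the owning VM

mc = MemoryController()
mc.launch_vm("vm-a")
secret = b"cardholder data"
in_dram = mc.write("vm-a", secret)

# Inside the owning VM, memory appears in the clear:
assert mc.read("vm-a", in_dram) == secret
# Anything scraping DRAM from outside sees only ciphertext:
assert in_dram != secret
```

The point of the sketch is the access pattern, not the cryptography: the key is scoped to one VM, created at launch, and reachable only through the controller's read/write path, which is why a hypervisor mapping the same physical pages gets nothing useful.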
Now, you'll see attestation, and it's exactly what the combination of these multiple hardware pieces and software pieces have to do. So the OS is also modified, and the OS is modified in such a way to provide integrity. It means even the OS that you're running in your VM box is not modifiable and you as customer can verify. But the most interesting thing I guess is how to ensure the super performance of this environment, because you can imagine Dave, that this is additional processing, additional time, additional latency. So we were able to mitigate all of that by providing incredibly interesting capability in the OS itself. So our customers will get no changes needed, fantastic performance and scale as they would expect from cloud providers like Google. >> Okay, thank you. Excellent, appreciate that explanation. So you know again, the narrative on this is, well, you've already given me guarantees as a cloud provider that you don't have access to my data, but this gives another level of assurance, key management as they say is key. Now humans aren't managing the keys, the machines are managing them. So Patricia, my question to you is, in addition to, let's go pre-confidential computing days, what are the sort of new guarantees that these hardware based technologies are going to provide to customers? >> So if I am a customer, I am saying I now have full guarantee of confidentiality and integrity of the data and of the code. So if you look at code and data confidentiality, the customer cares and they want to know whether their systems are protected from outside or unauthorized access, and that we covered with Nelly, that it is. Confidential computing actually ensures that the applications and data in memory remain secret. The code is actually looking at the data, and only in memory is the data decrypted, with a key that is ephemeral, and per VM, and generated on demand.
Then you have the second point, where you have code and data integrity, and now customers want to know whether their data was corrupted, tampered with or impacted by outside actors. And what confidential computing ensures is that application internals are not tampered with. So the application, the workload as we call it, that is processing the data also has not been tampered with and preserves integrity. I would also say that this is all verifiable, so you have attestation, and this attestation actually generates a log trail, and the log trail guarantees that, provides a proof that it was preserved. And I think that this offers also a guarantee of what we call sealing, this idea that the secrets have been preserved and not tampered with, confidentiality and integrity of code and data. >> Got it. Okay, thank you. Nelly, you mentioned, I think I heard you say that the application is transparent, you don't have to change the application, it just comes for free essentially. And we showed some various parts of the stack before, I'm curious as to what's affected, but really more importantly, what is specifically Google's value add? How do partners participate in this, the ecosystem, or maybe said another way, how does Google ensure the compatibility of confidential computing with existing systems and applications? >> And a fantastic question by the way, and it's a very difficult and definitely complicated world, because to be able to provide these guarantees, actually a lot of work was done by the community. Google is very much operating in the open. So again our operating system, we working in this operating system repository with OS vendors to ensure that all capabilities that we need are part of the kernels, are part of the releases, and it's available for customers to understand and even explore if they have fun to explore a lot of code.
We have also modified, together with our silicon vendors, the kernel, the host kernel, to support this capability, and it means working with this community to ensure that all of those pages are there. We also worked with every single silicon vendor as you've seen, and it's what I probably feel that Google contributed quite a bit in this world. We moved our industry, our community, our vendors to understand the value of easy to use confidential computing, or removing barriers. And now I don't know if you noticed, Intel is following the lead and also announcing Trust Domain Extensions, very similar architecture, and no surprise, it's a lot of work done with our partners to convince, work with them and make this capability available. The same with ARM, this year, actually last year, ARM announced future design for confidential computing, it's called Confidential Compute Architecture. And it's also influenced very heavily with similar ideas by Google and industry overall. So it's a lot of work in the Confidential Computing Consortium that we are doing, for example, simply to mention, to ensure interop as you mentioned, between different confidential environments of cloud providers. They want to ensure that they can attest to each other, because when you're communicating with different environments, you need to trust them. And if it's running on different cloud providers, you need to ensure that you can trust your receiver when you sharing your sensitive data workloads or secret with them. So we coming as a community and we have this Attestation SIG, the community-based systems that we want to build, and influence, and work with ARM and every other cloud providers to ensure that they can interop. And it means it doesn't matter where confidential workloads will be hosted, but they can exchange the data in secure, verifiable and controlled by customers really.
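The attestation Nelly and Patricia both invoke, measuring what booted and letting a remote party verify it, has a simple measure-and-recompute shape that can be sketched in Python. This is a toy only: real attestation chains are signed by a hardware root of trust (a Titan chip or the AMD Secure Processor), and the component names below are made up.

```python
# Toy sketch of launch measurement and an attestation log trail.
import hashlib

def measure(component: bytes) -> str:
    # Hash of a boot component, standing in for a hardware measurement.
    return hashlib.sha256(component).hexdigest()

# At launch, each component is measured and appended to an event log.
boot_chain = [b"firmware-v2", b"kernel-5.15", b"workload-image"]
event_log = [measure(c) for c in boot_chain]

def quote(log):
    # Fold the log into one running value, like extending a PCR register.
    acc = "0" * 64
    for entry in log:
        acc = hashlib.sha256((acc + entry).encode()).hexdigest()
    return acc

reported = quote(event_log)

# A remote verifier recomputes the quote from its own golden measurements;
# any tampered component produces a different value.
golden = quote([measure(c) for c in boot_chain])
tampered = quote([measure(b"firmware-v2"),
                  measure(b"evil-kernel"),
                  measure(b"workload-image")])

assert reported == golden
assert tampered != reported
```

This is also why the cross-cloud interop work matters: two environments can only "attest to each other", as Nelly puts it, if they agree on how such a log is formed, signed, and verified.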
And to do it, we need to continue what we are doing, working open and contributing our ideas and ideas of our partners to this goal, to become what we see confidential computing has to become, it has to become utility. It doesn't need to be so special, but it's what we want it to become. >> Let's talk about, thank you for that explanation. Let's talk about data sovereignty, because when you think about data sharing, you think about data sharing across the ecosystem in different regions, and then of course data sovereignty comes up. Typically public policy lags the technology industry, and sometimes it's problematic. I know there's a lot of discussions about exceptions, but Patricia, we have a graphic on data sovereignty. I'm interested in how confidential computing ensures that data sovereignty and privacy edicts are adhered to, even if they're out of alignment maybe with the pace of technology. One of the frequent examples is when you delete data, can you actually prove the data is deleted with a hundred percent certainty, you got to prove that, and a lot of other issues. So looking at this slide, maybe you could take us through your thinking on data sovereignty. >> Perfect. So for us, data sovereignty is only one of the three pillars of digital sovereignty. And I don't want to give the impression that confidential computing addresses it all, that's why we want to step back and say, hey, digital sovereignty includes data sovereignty, where we are giving you full control and ownership of the location, encryption and access to your data. Operational sovereignty, where the goal is to give our Google Cloud customers full visibility and control over the provider operations, right? So if there are any updates on hardware, software stack, any operations, there is full transparency, full visibility.
And then the third pillar is around software sovereignty, where the customer wants to ensure that they can run their workloads without dependency on the provider's software. This is sometimes referred to as survivability, that you can actually survive if you are untethered to the cloud and that you can use open source. Now, let's take a deep dive on data sovereignty, which by the way is one of my favorite topics. And we typically focus on saying, hey, we need to care about data residency. We care where the data resides, because where the data is at rest or in processing, it typically needs to abide by the regulations of the jurisdiction where the data resides. And others say, hey, let's focus on data protection, we want to ensure the confidentiality, and integrity, and availability of the data, and confidential computing is at the heart of that data protection. But there is yet another element that people typically don't talk about when talking about data sovereignty, which is the element of user control. And here Dave, it is about what happens to the data when I give you access to my data. And this reminds me of security two decades ago, even a decade ago, where we started the security movement by putting firewall protections and logging accesses. But once you were in, you were able to do everything you wanted with the data. An insider had access to all the infrastructure, the data, and the code. And that's similar, because with data sovereignty, we care about where it resides, who is operating on the data, but the moment that the data is being processed, I need to trust that the processing of the data abides by the user's control, by the policies that I put in place of how my data is going to be used.
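The user-control idea Patricia raises, data usable only under the owner's policy by a verified workload, can be sketched as a simple gate. This is a hypothetical illustration, not a real policy engine: the contract fields, workload names, and data are all invented, and a production system would release a wrapped key usable only inside an attested enclave rather than the data itself.

```python
# Toy sketch of policy enforcement gated on workload identity.
import hashlib

# The "contract" both parties agreed on: which workload may process the
# data, and for what purpose.
APPROVED = hashlib.sha256(b"fraud-model-v3").hexdigest()
CONTRACT = {"allowed_workload": APPROVED, "allowed_purpose": "fraud-detection"}

def release_data(workload_image: bytes, purpose: str, data: bytes) -> bytes:
    """Data owner's gate: release data only to a permitted, verified workload."""
    measurement = hashlib.sha256(workload_image).hexdigest()
    if measurement != CONTRACT["allowed_workload"]:
        raise PermissionError("workload does not match the contract")
    if purpose != CONTRACT["allowed_purpose"]:
        raise PermissionError("purpose not permitted by the contract")
    return data  # in a real system: an unwrapped key, usable only in the enclave

records = b"loan-application-batch"

# The contracted workload, used for the contracted purpose, gets the data:
granted = release_data(b"fraud-model-v3", "fraud-detection", records)

# The same workload asking for a different purpose is refused:
denied = False
try:
    release_data(b"fraud-model-v3", "marketing", records)
except PermissionError:
    denied = True
```

The double-financing example earlier fits this shape: two banks agree on a contract, and each releases its records only to the agreed fraud-detection workload running in a confidential environment.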
And if you look at a lot of the regulation today and a lot of the initiatives around the International Data Space Association, IDSA, and Gaia-X, there is a movement of saying the two parties, the provider of the data and the receiver of the data, are going to agree on a contract that describes what my data can be used for. The challenge is to ensure that once the data crosses boundaries, the data will be used for the purposes that it was intended and specified in the contract. And if you actually bring together, and this is the exciting part, confidential computing together with policy enforcement. Now, the policy enforcement can guarantee that the data is only processed within the confines of a confidential computing environment, that the workload is cryptographically verified to be the workload that was meant to process the data, and that the data will be only used when abiding by the confidentiality and integrity safety of the confidential computing environment. And that's why we believe confidential computing is one necessary and essential technology that will allow us to ensure data sovereignty, especially when it comes to user's control. >> Thank you for that. I mean it was a deep dive, I mean brief, but really detailed. So I appreciate that, especially the verification of the enforcement. Last question, I met you two because as part of my year-end prediction post, you guys sent in some predictions and I wasn't able to get to them in the predictions post, so I'm thrilled that you were able to make the time to come on the program. How widespread do you think the adoption of confidential computing will be in '23 and what's the maturity curve look like this decade in your opinion? Maybe each of you could give us a brief answer. >> So my prediction, in five, seven years as I started, it will become utility, it will become TLS. Just 10 years ago, we couldn't believe that websites would have certificates and we would support encrypted traffic.
Now we do, and it's become ubiquitous. It's exactly where confidential computing is heading, I don't know if we're there yet. It'll take a few years of maturity for us, but we'll do that. >> Thank you. And Patricia, what's your prediction? >> I would double that and say, hey, in the very near future, you will not be able to afford not having it. I believe as digital sovereignty becomes ever more top of mind with sovereign states and also for multinational organizations, and for organizations that want to collaborate with each other, confidential computing will become the norm, it will become the default, if I may say, mode of operation. I like to compare it to today, when it's inconceivable, if we talk to the young technologists, it's inconceivable to think that at some point in history, and I happen to be alive, we had data at rest that was non-encrypted, data in transit that was not encrypted. And I think it will be inconceivable at some point in the near future to have unencrypted data while in use. >> You know, and plus I think the beauty of this industry is because there's so much competition, this essentially comes for free. I want to thank you both for spending some time on Breaking Analysis, there's so much more we could cover. I hope you'll come back to share the progress that you're making in this area and we can double click on some of these topics. Really appreciate your time. >> Anytime. >> Thank you so much, yeah. >> In summary, while confidential computing is being touted by the cloud players as a promising technology for enhancing data privacy and security, there are also those, as we said, who remain skeptical. The truth probably lies somewhere in between, and it will depend on the specific implementation and the use case as to how effective confidential computing will be.
Look, as with any new tech, it's important to carefully evaluate the potential benefits, the drawbacks, and make informed decisions based on the specific requirements in the situation and the constraints of each individual customer. But the bottom line is silicon manufacturers are working with cloud providers and other system companies to include confidential computing into their architectures. Competition in our view will moderate price hikes and at the end of the day, this is under-the-covers technology that essentially will come for free, so we'll take it. I want to thank our guests today, Nelly and Patricia from Google. And thanks to Alex Myerson who's on production and manages the podcast. Ken Schiffman as well out of our Boston studio. Kristin Martin and Cheryl Knight help get the word out on social media and in our newsletters, and Rob Hof is our editor-in-chief over at siliconangle.com, does some great editing for us. Thank you all. Remember all these episodes are available as podcasts. Wherever you listen, just search Breaking Analysis podcast. I publish each week on wikibon.com and siliconangle.com where you can get all the news. If you want to get in touch, you can email me at david.vellante@siliconangle.com or DM me at D Vellante, and you can also comment on my LinkedIn post. Definitely you want to check out etr.ai for the best survey data in the enterprise tech business. I know we didn't hit on a lot today, but there's some amazing data and it's always being updated, so check that out. This is Dave Vellante for theCUBE Insights powered by ETR. Thanks for watching and we'll see you next time on Breaking Analysis. (subtle music)
Subbu Iyer, Aerospike | AWS re:Invent 2022
>> Hey everyone, welcome to theCUBE's coverage of AWS re:Invent 2022. Lisa Martin here with you with Subbu Iyer, one of our alumni who's now the CEO of Aerospike. Subbu, great to have you on the program. Thank you for joining us. >> Great as always, to be on theCUBE. Lisa, good to meet you. >> So, you know, every company these days has got to be a data company, whether it's a retailer, a manufacturer, a grocer, an automotive company. But for a lot of companies, data is underutilized, yet a huge asset that is value added. Why do you think companies are struggling so much to make data a value added asset? >> Well, you know, we see this across the board when I talk to customers and prospects. There's a desire from the business and from IT actually to leverage data to really fuel newer applications, newer services, newer business lines, if you will, for companies. I think the struggle is, one, you know, the plethora of data that is created. Surveys say that, you know, by 2025, around 175 zettabytes, right, 175 zettabytes of data is gonna be created. And that's really a growth of north of 30% year over year. But the more important, and the interesting, thing is the real time component of that data is actually growing at, you know, 35% CAGR. And what enterprises desire is decisions that are made in real time or near real time. And a lot of the challenges that do exist today is that either the infrastructure that enterprises have in place was never built to actually manipulate data in real time. The second is really the ability to actually put something in place which can handle spikes yet be cost efficient, if you will, so you can build for really peak loads, but then it's very expensive to operate that particular service at normal loads. So how do you build something which actually works for both, for both use cases, so to speak?
And the last point that we see out there is that even if you're able to bring in all that data, you don't have the processing capability to run through it. So as a result, most enterprises struggle with capturing the data, making decisions from it in real time, and really operating it at the cost point they need to operate it at. >>You know, you bring up a great point with respect to real-time data access. And I think one of the things we've learned the last couple of years is that access to real-time data is not a nice-to-have anymore. It's business critical for organizations in any industry. Talk about that as one of the challenges that organizations are facing. >>Yeah. When we started Aerospike, it started with the premise that data is gonna grow, number one, exponentially. Two, when applications open up to the internet, there's gonna be a flood of users and demands on those applications. And that was true primarily when we started the company in the ad tech vertical. So ad tech was the first vertical where there was a lot of data, both on the supply side and the demand side, from an inventory of ads that were available. And on the other hand, they had like microseconds or milliseconds in which they could make a decision on which ad to put in front of you and me so that we would click or engage with that particular ad. But over the last three to five years, what we've seen is, as digitization has permeated every industry out there, the need to harness data in real time is pretty much present in every industry. >>Whether that's retail, financial services, telecommunications, e-commerce, gaming and entertainment, every industry has that desire. One, the innovative companies, the smaller companies rather, are innovating at a pace and standing up new businesses to compete with the larger companies in each of these verticals.
And the larger companies don't wanna be left behind. So they're standing up their own competing services or getting into new lines of business that really harness and are driven by real-time data. So there are competing pressures here: on one hand, customer experience is paramount, and we as customers expect answers in an instant, in real time. On the other hand, the way they make decisions is based on large data sets, because larger data sets actually propel better decisions. So there are competing pressures, one from a business perspective, two from a customer perspective, which essentially drive the need to harness all of this data in real time. So that's what's driving an incessant need to actually make decisions in real or near real time. >>You know, I think one of the things that's been in short supply over the last couple of years is patience. We do expect as consumers, whether we're in our business lives or our personal lives, that we're going to be given information and data that's relevant and personal to help us make those real-time decisions. So having access to real-time data is really business critical for organizations across any industry. Talk about some of the main capabilities that modern data applications and data platforms need to have. What are some of the key capabilities of a modern data platform that need to be delivered to meet demanding customer expectations? >>So, you know, going back to your initial question, Lisa, around why data is a high-value but underutilized or underleveraged asset: one of the reasons we see is that a lot of the data platforms some of these applications were built on have been around for a decade plus, and they were never built for the needs of today, which is really driving a lot of data and driving insight in real time from a lot of data. So there are four major capabilities that we see as essential ingredients of any modern data platform.
One is really the ability to operate at unlimited scale. What we mean by that is the ability to scale from gigabytes to even petabytes without any degradation in performance or latency or throughput. The second is predictable performance: can you actually deliver predictable performance as your data size grows, or your throughput grows, or the concurrent users on that application or service grow? >>It's really easy to build an application that operates at low scale or low throughput or low concurrency, but performance usually starts degrading as you start scaling one of these attributes. The third thing is the ability to operate an always-on, globally resilient application. And that requires a really robust data platform that can be up on a five-nines basis globally and can support global distribution, because a lot of these applications have global users. And the last point goes back to my first answer, which is: can you operate all of this at a cost point which is not prohibitive, but makes sense from a TCO perspective? 'Cause a lot of times what we see is people make choices of data platforms, and ironically, as their service or application becomes more successful and more users join the journey, the revenue starts going up, the user base starts going up, but the cost basis starts crossing over the revenue, and they're losing money on the service, ironically, as the service becomes more popular. So really: unlimited scale, predictable performance, always on, on a globally resilient basis, and low TCO. These are the four essential capabilities of any modern data platform. >>So then talk to me, with those as the four main core functionalities of a modern data platform, how does Aerospike deliver that? >>So we were built, as I said, from day one to operate at unlimited scale and deliver predictable performance.
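For readers who want to make the cost-crossover failure mode above concrete, here is a toy model. All numbers are hypothetical, not taken from the talk: revenue scales linearly with users, while on a platform with poor scaling economics the per-user infrastructure cost creeps up with scale, so total cost eventually overtakes revenue.

```python
# Toy model of the "cost crosses over revenue" failure mode described above.
# All numbers are hypothetical. Revenue is linear in users; on a poorly
# scaling platform, cost grows superlinearly, so it eventually overtakes.

def revenue(users, arpu=1.00):
    """Total revenue: flat average revenue per user."""
    return users * arpu

def cost(users, base=50_000, unit=0.40, drag=1e-8):
    """Total cost: fixed base + per-user cost that worsens with scale."""
    return base + users * unit + drag * users ** 2

def crossover_users(step=100_000, limit=200_000_000):
    """First user count (in `step` increments, past startup scale) where
    total cost overtakes total revenue."""
    for users in range(1_000_000, limit, step):
        if cost(users) > revenue(users):
            return users
    return None

if __name__ == "__main__":
    u = crossover_users()
    print(f"Service starts losing money again at ~{u:,} users")
```

With these made-up constants the service is profitable through mid-scale and starts losing money around 60 million users, which is exactly the "more popular, less profitable" irony described in the interview.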
And then over the years, as we worked with customers, we built this incredible high-availability capability which helps us deliver the always-on operations. So we have customers who have been on the platform 10 years with no downtime, for example, right? So we are talking about an amazing continuum of high availability that we provide for customers who operate these globally resilient services. The key to our innovation here is what we call the hybrid memory architecture. So, going a little bit technically deep here, essentially what we built out in our architecture is the ability on each node or each server to treat a bank of SSDs, or solid state devices, as essentially extended memory. You're accessing these SSDs, you're not paying memory prices, but you're getting memory performance. >>As a result of that, you can attach a lot more data to each node or each server in your distributed cluster. And when you scale that across a distributed cluster, you can do with Aerospike the same things at 60 to 80% lower server count, and as a result 60 to 80% lower TCO, compared to some of the other options that are available in the market. That's the key starting point to the innovation. We layer on capabilities like replication, change data notification, synchronous and asynchronous replication, and the ability to stretch a single cluster across multiple regions. So for example, if you're operating a global service, you can have a single Aerospike cluster with one node in San Francisco, one in northern New York, another one in London, and this would be seamlessly operating. And, you know, this is strongly consistent. >>Very few NoSQL data platforms are strongly consistent, or if they are strongly consistent, they will actually suffer performance degradation.
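The hybrid memory architecture idea described above, keeping a small index in RAM while the data itself lives on SSD, can be sketched in a few lines. This is an illustration of the general pattern, not Aerospike's implementation; an in-memory `BytesIO` buffer stands in for the SSD device.

```python
import io

class HybridStore:
    """Toy key-value store in the spirit of a hybrid memory architecture:
    a small in-RAM index maps each key to the (offset, length) of its value
    in an append-only log that would live on SSD. Here a BytesIO buffer
    stands in for the device; this is a sketch, not Aerospike's design."""

    def __init__(self):
        self.index = {}             # key -> (offset, length), stays in RAM
        self.device = io.BytesIO()  # stand-in for the SSD log

    def put(self, key, value: bytes):
        self.device.seek(0, io.SEEK_END)
        offset = self.device.tell()
        self.device.write(value)                 # value goes to the "SSD"
        self.index[key] = (offset, len(value))   # only metadata stays in RAM

    def get(self, key) -> bytes:
        offset, length = self.index[key]
        self.device.seek(offset)
        return self.device.read(length)

if __name__ == "__main__":
    store = HybridStore()
    store.put("user:42", b'{"segment": "sports"}')
    print(store.get("user:42"))
```

The property worth noticing is that RAM holds only the index entries, while values of arbitrary size sit on the device; that asymmetry is what lets each node hold far more data than a pure in-memory store at the same memory price.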
And what strongly consistent means is all your data is always available, it's guaranteed to be available, there is no data loss at any time. So in the configuration that I talked about, if the node in London goes down, your application still continues to operate, right? Your users see no downtime, and when London comes up, it rejoins the cluster and everything is back to the way it was before London left the cluster, so to speak. So the ability to do this globally resilient, highly available kind of model is really, really powerful. A lot of our customers actually use that kind of scenario, and we offer other deployment scenarios from a high-availability perspective. So everything starts with HMA, or hybrid memory architecture, and then we build out a lot of these other capabilities around the platform. >>And then over the years, what our customers have guided us to do is, as they're putting together a modern data infrastructure, we don't live in a silo. So Aerospike gets deployed with other technologies, like streaming technologies or analytics technologies. So we built connectors into Kafka and Pulsar, so that as you're ingesting data from a variety of data sources, you can ingest it at very high speeds and store it persistently into Aerospike. Once the data is in Aerospike, you can actually run Spark jobs across that data in a multithreaded, parallel fashion to get insight from that data at really high throughput and high speed. >>High throughput, high speed, incredibly important, especially as today's landscape is increasingly distributed: data centers, multiple public clouds, edge, IoT devices, the workforce embracing more and more hybrid these days. How are you helping customers to extract more value from data while also lowering costs? Go into some customer examples, 'cause I know you have some great ones.
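Before the customer examples, the failover behavior Subbu described a moment ago (a node dropping out while the service keeps answering) can be illustrated with a toy majority-quorum replica set. This is a sketch of the general quorum idea only, not Aerospike's actual consistency protocol, and it omits the resync a rejoining node would need.

```python
class Replica:
    """One node in the toy replica set."""
    def __init__(self, name):
        self.name = name
        self.data = {}
        self.up = True

class QuorumKV:
    """Toy replica set: operations succeed as long as a majority of
    replicas are up, so losing one node out of three is invisible to
    the application. Illustrative only, not Aerospike's protocol."""

    def __init__(self, names):
        self.replicas = [Replica(n) for n in names]
        self.quorum = len(self.replicas) // 2 + 1

    def _live(self):
        return [r for r in self.replicas if r.up]

    def put(self, key, value):
        live = self._live()
        if len(live) < self.quorum:
            raise RuntimeError("not enough replicas for a consistent write")
        for r in live:            # write to every live replica
            r.data[key] = value

    def get(self, key):
        live = self._live()
        if len(live) < self.quorum:
            raise RuntimeError("not enough replicas for a consistent read")
        return live[0].data[key]

cluster = QuorumKV(["san-francisco", "new-york", "london"])
cluster.put("ad:123", "won")
cluster.replicas[2].up = False   # London goes down...
print(cluster.get("ad:123"))     # ...reads still succeed, prints "won"
```

With two of three nodes alive, both reads and writes continue; drop a second node and the quorum check refuses to proceed rather than return possibly stale data, which is the trade strong consistency makes.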
>>Yeah, you know, I think we have built an amazing set of customers, and customers actually use us for some really mission-critical applications. So before I get into specific customer examples, let me talk to you about some of the use cases we see out there. We see a lot of Aerospike being used in fraud detection. We get used in recommendations, in customer data profiles or customer 360 stores, in multiplayer gaming and entertainment; these are kind of the repeated use cases. And digital payments: we power most of the digital payment systems across the globe. From a specific example perspective, the first one I would love to talk about is PayPal. If you use PayPal today, then when you're actually paying somebody, your transaction is being sent through Aerospike to decide whether this is a fraudulent transaction or not. >>And when you do that, you know, you and I as customers are not gonna wait around for 10 seconds for PayPal to say yay or nay. We expect the decision to be made in an instant. So we are powering that fraud detection engine at PayPal for every transaction that goes through PayPal. Before us, PayPal was missing out on about 2% of their SLAs, which was essentially millions of dollars they were losing, because they were letting transactions go through and taking the risk that it's not a fraudulent transaction. With Aerospike, they can now actually get a much better SLA, and the data set on which they compute the fraud score has gone up by several factors, by 30x if you will. So not only has the data size that is powering the fraud engine actually grown 30x with Aerospike, but they're actually making decisions in an instant for 99.95% of their transactions. >>And that's what we expect as consumers, right?
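A quick back-of-the-envelope on the PayPal figures just quoted makes the difference concrete. The interview gives the two miss rates (roughly 2% before, and 0.05% after, i.e. 100% minus 99.95%); the transaction volume below is hypothetical, since none was given.

```python
# Back-of-the-envelope on the fraud-check SLA figures quoted above.
# Miss rates come from the talk (~2% before, ~0.05% after); the daily
# transaction volume is a hypothetical stand-in, since none was given.

daily_transactions = 40_000_000            # hypothetical volume

missed_before = daily_transactions * 0.02   # ~2% of checks missed the SLA
missed_after = daily_transactions * 0.0005  # 100% - 99.95% miss the SLA

print(f"SLA misses before: {missed_before:,.0f}/day")
print(f"SLA misses after:  {missed_after:,.0f}/day")
print(f"Reduction: {missed_before / missed_after:.0f}x fewer misses")
```

Whatever the real volume, the ratio is fixed by the two rates: going from 2% to 0.05% is a 40x reduction in SLA misses.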
We want to know that there's fraud detection on the swipe, regardless of who we're interacting with. >>Yes. And so that's a really powerful use case, and it's a great customer success story. The other one I would talk about is Wayfair, right? From retail and e-commerce. Everybody knows Wayfair, a global leader in online home furnishings, and they use us to power their recommendations engine: basically, if you're purchasing this, people who bought this also bought these five other things, so on and so forth. They have actually seen the cart size at checkout go up by up to 30% as a result of powering their recommendation engine through Aerospike. And they were able to do this while reducing the server count by 9x. So on one ninth of the servers that were there before Aerospike, they're now powering their recommendation engine and seeing cart size at checkout go up by 30%. Really, really powerful in terms of the business outcome and what we are able to drive at Wayfair. >>Hugely powerful as a business outcome. And that's also what the consumer wants. The consumer is expecting these days to have a very personalized, relevant experience that's gonna show me, if I bought this, something else that's related to it. We have this expectation that needs to be fueled by technology. >>Exactly. And, you know, another great example, since you asked about customer stories: Adobe. Who doesn't know Adobe? They're on a mission to deliver the best customer experience that they can, and they're talking about great customer 360 experience at scale, and they're modernizing their entire edge compute infrastructure to support this with Aerospike. Going to Aerospike, basically what they have seen is their throughput go up by 70% and their cost reduced by 3x.
So essentially doing it at one third of the cost while their annual data growth continues at about north of 30%. So not only is their data growing, they're able to reduce the cost of delivering this great customer experience to one third and continue to deliver a great customer 360 experience at scale. A really powerful example of how you deliver customer 360 in a world which is dynamic, on a data set which is constantly growing at north of 30% in this case. >>Those are three great examples: PayPal, Wayfair, Adobe. Especially with Wayfair, when you talk about increasing their cart checkout sizes, but also with Adobe, increasing throughput by over 70%, I'm looking at my notes here, while data is growing at 32%. That's something that every organization has to contend with: data growth is continuing to scale and scale and scale. >>Yep. I'll give you a fun one here. So you may not have heard about this company, it's called Dream11, and it's a company based out of India, but it's a fun story because it's the world's largest fantasy sports platform, and India is a nation which is cricket crazy. So when they have their premier league going on, there's millions of users logged onto the Dream11 platform, building their fantasy league teams and playing on that particular platform. It has a hundred million plus users on the platform, 5.5 million concurrent users, and they have been growing at 30%. So they are considered an amazing success story in terms of what they have accomplished and the way they have architected their platform to operate at scale.
And all of that is really powered by Aerospike. Think about that: they are able to deliver all of this and support a hundred million users, 5.5 million concurrent users, all with 99-plus percent of their transactions completing in less than one millisecond. Just an incredible success story. Not a brand that is world-renowned, but from what we see out there, an amazing success story of operating at scale. >>Amazing success story, huge business outcomes. Last question for you, as we're almost out of time: talk a little bit about Aerospike and AWS, the partnership, Graviton2, better together. What are you guys doing together there? >>Great partnership. AWS has multiple layers in terms of partnerships. So we engage with AWS at the executive level. They plan out the rollout of new instances in partnership with us, making sure that those instance types work well for us. And then we just released support for Aerospike on the Graviton platform, and we just announced a benchmark of Aerospike running on Graviton on AWS. And what we see with the benchmark is a 1.6x improvement in price-performance: about an 18% increase in throughput while maintaining a 27% reduction in cost on Graviton. So this is an amazing story from a price-performance perspective, and in performance per watt for greater energy efficiency, which a lot of our customers are starting to talk to us about leveraging to further meet their sustainability targets. So a great story from Aerospike and AWS, not just from a partnership perspective on a technology and an executive level, but also in terms of what joint outcomes we are able to deliver for our customers. >>And it sounds like a great sustainability story.
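As a side note, the three Graviton benchmark figures quoted above are internally consistent, which is a quick sanity check worth showing: 18% more throughput at 27% lower cost composes to roughly the quoted 1.6x price-performance (throughput per dollar).

```python
# Sanity check on the Graviton benchmark figures quoted above:
# +18% throughput and -27% cost should compose to the quoted ~1.6x
# price-performance improvement (throughput per dollar).

throughput_gain = 1.18    # 18% more throughput
cost_factor = 1 - 0.27    # 27% lower cost

price_performance = throughput_gain / cost_factor
print(f"Price-performance improvement: {price_performance:.2f}x")  # ~1.62x
```

1.18 / 0.73 is about 1.62, matching the "1.6x" headline number to rounding.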
I wish we had more time so we could talk more about this, but thank you so much for talking about the main capabilities of a modern data platform, what's needed, why, and how you guys are delivering that. We appreciate your insights and appreciate your time. >>Thank you very much. And if folks are at re:Invent this week or next week, come and see us at our booth. We are in the data analytics pavilion. You can find us pretty easily. Would love to talk to you. >>Perfect. We'll send them there. So Subbu, thank you so much for joining me on the program today. We appreciate your insights. >>Thank you, Lisa. >>I'm Lisa Martin. You're watching theCUBE's coverage of AWS re:Invent 2022. Thanks for watching.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Lisa Martin | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
London | LOCATION | 0.99+ |
Ira | PERSON | 0.99+ |
Lisa | PERSON | 0.99+ |
60 | QUANTITY | 0.99+ |
Luisa | PERSON | 0.99+ |
Adobe | ORGANIZATION | 0.99+ |
San Francisco | LOCATION | 0.99+ |
PayPal | ORGANIZATION | 0.99+ |
30% | QUANTITY | 0.99+ |
70% | QUANTITY | 0.99+ |
10 seconds | QUANTITY | 0.99+ |
Wayfair | ORGANIZATION | 0.99+ |
35% | QUANTITY | 0.99+ |
Aerospike | ORGANIZATION | 0.99+ |
each server | QUANTITY | 0.99+ |
One | QUANTITY | 0.99+ |
India | LOCATION | 0.99+ |
27% | QUANTITY | 0.99+ |
nine | QUANTITY | 0.99+ |
10 years | QUANTITY | 0.99+ |
30 x | QUANTITY | 0.99+ |
32% | QUANTITY | 0.99+ |
99.95% | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
aws | ORGANIZATION | 0.99+ |
each node | QUANTITY | 0.99+ |
next week | DATE | 0.99+ |
2025 | DATE | 0.99+ |
five | QUANTITY | 0.99+ |
less than one millisecond | QUANTITY | 0.99+ |
millions of users | QUANTITY | 0.99+ |
Subaru | ORGANIZATION | 0.99+ |
both | QUANTITY | 0.99+ |
second | QUANTITY | 0.99+ |
first answer | QUANTITY | 0.99+ |
one third | QUANTITY | 0.99+ |
this week | DATE | 0.99+ |
millions of dollars | QUANTITY | 0.99+ |
over 70% | QUANTITY | 0.99+ |
Sabu | PERSON | 0.99+ |
both users | QUANTITY | 0.99+ |
three | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
80% | QUANTITY | 0.98+ |
Kafka | TITLE | 0.98+ |
1.6 x | QUANTITY | 0.98+ |
northern New York | LOCATION | 0.98+ |
5.5 million concurrent users | QUANTITY | 0.98+ |
GRAVITON | ORGANIZATION | 0.98+ |
hundred million users | QUANTITY | 0.97+ |
Dream 11 | ORGANIZATION | 0.97+ |
Two | QUANTITY | 0.97+ |
each | QUANTITY | 0.97+ |
Aerospike | TITLE | 0.97+ |
third thing | QUANTITY | 0.96+ |
hundred million users | QUANTITY | 0.96+ |
The Cubes | TITLE | 0.95+ |
around 175 zetabytes | QUANTITY | 0.95+ |
Chuck Svoboda, Red Hat & Ted Stanton, AWS | AWS re:Invent 2022
>>Hey everyone, it's Vegas. Welcome back. We know you've been watching all day. We appreciate that. We always love being able to bring you some great content on theCUBE, live from AWS re:Invent 22. Lisa Martin here with Paul Gillin. And Paul, we've had such a great event. I think we've done nearly 70 interviews since we started on the Cube on >>Monday night. I believe we just hit 70. >>Yeah, we just hit 70. You must feel like you've done half of >>them. I really do. But we've been having great conversations. There's so much innovation going on at AWS; nothing slowed them down during the pandemic. We love also talking about the innovation, the flywheel that is their partner ecosystem. We're gonna have a great conversation about that >>next. And as we've said going back to day one, the energy of the show is remarkable. And here we are, getting late in the afternoon on day two, and there's just as much activity, just as much energy out there as at the beginning of the first day. I have no doubt day three will be the >>same. I agree. There's been no slowdown. We've got two guests here. We're gonna have a great conversation. Chuck Svoboda joins us, Senior Director of Cloud Services, GTM, at Red Hat. Great to have you on the program. And Ted Stanton, Global Head of Sales, Red Hat and IBM, at AWS. Welcome. >>Thanks for having us. >>How's the show going so far for you guys? >>It's a blur. >>Is it? Oh my gosh. >>Don't they all >>blur? Well, yes, yes. I actually liked last year a bit better. It was half the size, and a lot easier to get around. But this is back to normal, so >>it is back to normal. Yeah. And Ted, we're hearing north of 50,000 in-person attendees, and I heard secondhand over 300,000 online attendees. This is maybe the biggest one we've ever had. >>Yeah, I would agree.
And frankly, it's my first time here, so I am massively impressed with the overall show, the meetings with partners, the meetings with customers, the announcements that were made. Just fantastic. >>And if you remember back to two years ago, there were a lot of questions about whether in-person conferences would ever return at the volume that we used to see them. And that appears to be >>the case. I think AWS has answered that for us, which I'm very pleased to see. Talk about some of those announcements, Ted. There's been so much; that's always one of the things we know and love about re:Invent, there's a slew of announcements. You were saying this morning, Paul, in the keynote you stopped counting after >>I lost count at 15. I think it was over 30 announcements this morning alone. >>Where IBM and Red Hat are concerned, what are some of the things that you are excited about in terms of some of the news, the innovation, and where the partnership is going? >>Well, definitely where the partnership is going. And I think even as we're speaking right now, there's a keynote going on with Ruba, talking about some of the partners and the way in which we support partners, and the new technologies and the new abilities for partners to take advantage of these technologies to frankly delight our customers, is really what most excites me. >>Chuck, what about you? What's going on with Red Hat? You've been there a long time. Sales, everything picking up, customers massively transforming. What are some of the things that you're seeing and that you're excited >>about?
So that would be more beyond the operating system, the application platforms like OpenShift. And now we have a managed application platform built on OpenShift called Red Out OpenShift service on AWS or Rosa. And then we're even further going up the stack with that with, we just announced this week that red out OpenShift data science is available in the AWS marketplace, runs on Rosa, helps break the land speed record to getting those data models out there that are so important to make, you know, help organizations become more, much more data driven to remain competitive themselves. >>So talk about Rosa and how it differs from previous iterations of, of OpenShift. I mean, you had, you had an online version of OpenShift several years ago. What's different about Rosa? >>Yeah, so the old OpenShift online that was several years old, right? For one thing, wasn't a joint partnership between Amazon and Red Hat. So we work together, right? Very closely on this, which is great. Also, the awesome thing about Rosa, you know, if you think about like OpenShift for, for, as a matter of fact, Amazon is the number one cloud that OpenShift runs on, right? So a lot of those customers want to take advantage of their committed spins, their EDPs, they want one bill. And so Rosa comes through the one bill comes through the marketplace, right? Which is, which is totally awesome. Not only that or financially backing OpenShift with a 99.95% financially backed sla, right? We didn't have that before either, right? >>When you say financially backed sla, >>What do you mean? That means that if we drop below 99.95% of availability, we're gonna give you some money back, right? So we're really, you know, for lack of better words, putting our money where our mouth is. Absolutely right. 
>>And some of the key reasons that we even worked together to build Rosa: frankly, we've had a myriad of customers, in virtually every single region and every single industry, using OpenShift on AWS for years, right? And we listened to them; they wanted a more managed version of it, and we worked very closely together. And what's really great about Rosa too is we built some really fantastic integrations with some of the AWS native services, like API Gateway, Amazon RDS, and PrivateLink, to make it very simple and easy for customers to get started. We talked a little bit about the marketplace, but it's also available just on the AWS console, right? So customers can get started in a pay-as-you-go fashion and start to use it. And if they wanna move into more of a commitment, more of a set schedule of payments, they can move into a marketplace private offer. >>Chuck, talk about Rosa. How is it unlocking the power of technologies like containers and Kubernetes for customers while dialing down some of the complexity that's there? >>Yeah, I mean, if you think about kind of what we did earlier on, think about virtualization, how it dialed down the complexity of having to get a blade racked, stacked, cabled, and cooled every time you wanted to deploy a new application, right? So our message is this: we want developers to focus on what matters most, and that's building, deploying, and running applications. Most of our customers are not in the business of building app platforms; think banks, financials, government, et cetera. So what we do is enable those developers that know Java and Node and Spring and what have you to keep writing what they know.
And then, you know, I don't wanna get too technical here, but you just do a git push, and OpenShift takes care of the rest: builds it for them, runs it through a CI/CD pipeline, goes through all the testing and quality gates and things like that, deploys it, and auto-wires it up to monitoring, which is what you need.
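The push-to-prod flow just described (build, pipeline, quality gates, deploy, wire up monitoring) can be sketched as a toy pipeline. The stage names and gates below are illustrative, not OpenShift's actual pipeline API.

```python
# Toy sketch of a build -> quality gates -> deploy flow, in the spirit of
# the push-to-prod description above. Stage names, gate names, and the
# registry URL are illustrative; this is not OpenShift's pipeline API.

def build(source):
    """Turn source into a (pretend) container image reference."""
    return {"image": f"registry.example/app:{abs(hash(source)) & 0xffff:04x}"}

def quality_gates(artifact):
    """Each gate returns True/False; all must pass before deploy."""
    return {
        "image_built": bool(artifact.get("image")),
        "trusted_registry": artifact["image"].startswith("registry.example/"),
    }

def run_pipeline(source):
    artifact = build(source)
    gates = quality_gates(artifact)
    failed = [name for name, ok in gates.items() if not ok]
    if failed:
        return {"status": "blocked", "failed_gates": failed}
    # deploy succeeded: wire the app into monitoring as the final step
    return {"status": "deployed", "image": artifact["image"],
            "monitoring": "wired"}

result = run_pipeline("def handler(): return 'ok'")
print(result["status"])  # prints "deployed"
```

The point of the shape is that the developer supplies only `source`; everything after the push (build, gating, deploy, monitoring hookup) is the platform's job.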
In fact, we're working on a pretty cool joint go to market with that right now. But generally speaking, the OpenShift experience that the customers that we have know and love and those who have never used OpenShift either are coming to it as well via Rosa, right? The experience is primarily the same. You don't have to really retrain your people, right? If anything, there's a reduction in operational cost. We increase developer productivity cuz we manage so much of the stack for you. We have SRE site reliability engineers that are backing the platform that proactively get ahead of anything that may go wrong. So maybe you don't even notice if something went wrong, wrong. And then also reactively fixing it if it comes to that, right? So, you know, all those kind of things that your customers are having to do on their own or hire a contractor, a consultant, what have to do Now we benefit from a managed offering in the cloud, right? In Amazon, right? And your developers still have that great experience too, like to say, you know, again, break the land speed record to prod. >>I >>Like that. And, and I would actually say migrations from OpenShift are on premise. OpenShift to Rosa maybe only represents about a third of the customers we have. About another third of the customers is frankly existing AWS customers. Maybe they're doing Kubernetes, do it, the, you know, do it themselves. We're struggling with some of the management of that. And so actually started to lean on top of using Rosa as a better platform to actually build upon their applications. And another third, we have quite a few customers that were frankly new OpenShift customers, new Red Hat customers and new AWS customers that were looking to build that next cloud native application. Lots of in the startup space that I've actually chosen to go with Rosa. >>It's funny you mention that because the largest Rosa consumer is new to OpenShift. Oh wow. Right. That's pretty, that's pretty powerful, right? 
It's not just for existing OpenShift customers. If you're running OpenShift, you know, on EC2, right, self-managed, there's really no better way to run it than Rosa. You know, I think about, this is the 10th year, the 10-year anniversary of re:Invent, right? Right. Yep. This is also the 10-year anniversary of OpenShift. Yeah, right. I think 1.0 came out sometime around 10 years ago, right? When I came over to Red Hat in 2015, you know, if you, if you know your Kubernetes history, July 25th, I think, was when Kubernetes GA'd. July 25th, 2015 is when it GA'd. >>You have a good memory. >>Well, I remember those days back then, right? Those were fun, right? We had a, a large customer roll out on OpenShift 3, which is our OpenShift re-based on Kubernetes. And where do you think they ran? Amazon, right? Naturally. So, you know, as you move forward and, and, and OpenShift v4 came out, it reduces the operational complexity and becomes even more powerful through our operator framework and things like that. Now they've evolved up to Rosa, right? And again, to help those customers focus on what matters most. And that's the applications, not the containers, not those underlying implementation and technical details, which, while critically important, are not necessarily core to the business of most of our customers. >>Tremendous amount of innovation in OpenShift in a decade. >>Pardon me? >>Tremendous amount of innovation in OpenShift in the last decade. >>Oh absolutely. And, and tons more to come, like every day. Right. I think what you're gonna see more of is, you know, as Kubernetes becomes more and more of the plumbing, you know, I call 'em productive abstractions on top of it, as you mentioned earlier, unlocking the power of these technologies while minimizing, even hiding the complexity of them so that you can just move fast. Yeah. And safely move fast.
>>I wanna be sure we get to, to marketplaces, because Red Hat has really stepped up its commitment to the AWS marketplace. Why are you doing that now, and how are the marketplaces evolving as a channel for you? >>Well, cuz our customers want us to be there, right? I mean, we, we have a customer-centric, customer-first approach. Our customers want to buy through the marketplace. If you're an Amazon customer, it's really easy for you to go procure software through the marketplace, instead of having to call up Red Hat and get on paper and write a second check, right? One-stop shop, one bill. Right? That is very, very attractive to our customers. Not only that, it opens up other ways to buy. You know, Ted mentioned earlier, you know, pay-as-you-go, buy-by-the-drink pricing, using exactly what you need right now. Right? You know, AWS pioneered that, right? That provides that elasticity, you know, one of the core tenets at AWS, the AWS cloud, right? And we weren't able to get that with the traditional self-managed Red Hat paper subscriptions. >>Talk a little bit about the go-to-market. You talked about, Ted, the kind of three tenets of customer types. But talk a little bit about the GTM, the joint go-to-market, the joint engineering, so we get an understanding of how customers engage multiple options. >>Yeah, I mean, so if you think about go-to-market, you know, the way I think of it is it's the intersection of a few areas, right? So the product and the product experience that we work on together has to be so good that a customer or user, actually we've started talking about users now cuz it's self-service, has a more than likely chance of getting their application to prod without ever talking to a person. Which is historically not what a lot of enterprise software companies are able to do, right? So that's one of the biggest things we do.
We want customers to just be successful: turn it on, get going, be productive, right? At the same time, we want to position the product in such a way that's differentiating, that you can't get that experience anywhere else. And then part of that is ensuring that the education and enablement of our customers and our partners is such that they use the platform the right way, to get as much value out of it as possible. >>All backed by, you know, a very smart field that ensures that the customer is making the right decision. A customer success org, which is attached to my org now, that can go on site and team with our customers to make sure that they get their first workloads up as quickly as possible, by the way, on our, our dime. And then SRE and CEA backing that up with support and operational integrity to ensure that the service is always up and available, so you can sleep well at night. Right? Right. One of our PMs of, of Rosa, he says, what does he say? He says, Rosa allows organizations, enables organizations to go from 24/7 operations to nine-to-five innovation. Right? And that's powerful. That's how our customers remain more competitive running on Rosa with AWS. >>When you're in customer conversations and you have 30 seconds, what are the key differentiators of the solution, that you go boom, boom, boom, and they just go, I get it? >>Well, I mean, my 30-second elevator pitch I think I've already said; I'll say it again. And that is, OpenShift allows you to focus on your applications, build, deploy, and run applications, while unlocking the power of technologies like containers and Kubernetes and hiding or minimizing those complexities, so you can move as fast as possible. >>Mic drop. Ted, question for you? Sure. Here we are at, I believe, the 11th re:Invent, the 10th anniversary, the 11th event. You've been in the industry a long time.
What is your biggest takeaway from what's been announced and discussed so far at re:Invent 22, where AWS and its partner ecosystem are concerned? If you had 30 seconds, or if you had a bumper sticker to put on your DeLorean, what would you say? >>I would say we're continuing to innovate on behalf of our customers, but making sure we bring all of our partners and ecosystems along in that innovation. >>Yeah. I love the customer obsession on both sides there. Great work, guys. Congrats on the 10th anniversary of OpenShift and so much evolution; the customer obsession is really clear for both of you guys. We appreciate your time. You're gonna have to come back now. Absolutely. Absolutely. Thank you. All right. Thank you so much for joining us. For our guests and for Paul Gillin, I'm Lisa Martin. You're watching theCUBE, the leader in live enterprise and emerging tech coverage.
Haseeb Budhani & Anant Verma | AWS re:Invent 2022 - Global Startup Program
>> Well, welcome back here to the Venetian. We're in Las Vegas. It is Wednesday, Day 2 of our coverage here of AWS re:Invent, 22. I'm your host, John Walls on theCUBE, and it's a pleasure to welcome in two more guests as part of our AWS startup showcase, which is again part of the startup program globally at AWS. I've got Anant Verma, who is the Vice President of Engineering at Elation. Anant, good to see you, sir. >> Good to see you too. >> Good to be with us. And Haseeb Budhani, who is the CEO and co-founder of Rafay Systems. Good to see you, sir. >> Good to see you again. >> Thanks for having, yeah. A cuber, right? You've been on theCUBE? >> Once or twice. >> Many occasions. But a first timer here, as a matter of fact, glad to have you aboard. All right, tell us about Elation. First, for those who might not be familiar with what you're up to these days, just give it to us at a little 30,000-foot level. >> Sure, sure. So, yeah, Elation is a startup and a leader in the enterprise data intelligence space. That really includes a lot of different things, including data search, data discovery, metadata management, data cataloging, data governance, data policy management, a lot of different things that companies want to do with the hoards of data that they have, and Elation, our product, is the answer to solve some of those problems. We've been doing pretty good. Elation has been running for about 10 years now. We are a Series E startup now; we just raised a round a couple of months ago. We are already a hundred million plus in revenue. So. >> John: Not shabby. >> Yeah, it's a big benchmark for companies to, startup companies, to cross that milestone. So, yeah.
Yeah, so I actually joined Elation in January, and this is part of the move of Elation to a more cloud native solution. So, we have been running on AWS since last year, and as part of making our solution more cloud native, we have been looking to containerize our services and run them on Kubernetes. So, that's the reason why I joined Elation in the first place, to kind of make sure that this migration or move to cloud native actually works out really well for us. This is a big move for companies. A lot of companies have done it in the past, including, you know, Confluent or MongoDB, and when they did that, they actually reaped great benefits out of it. So to do that, of course, you know, as we were looking at Kubernetes as a solution, I was personally more looking for a way to speed up things and get things out in production as fast as possible. And that's where I think, Janeb introduced us... >> That's right. >> Two of us. I think we share the same investor actually, so that's how we found each other. And yeah, it was a pretty simple decision in terms of, you know, getting the solution, figuring out if it's useful for us and then of course, putting it out there. >> So you've hit the keyword, Kubernetes, right? And, Haseeb, if you would jump in here, there are challenges, right? That you're trying to help them solve, and you're working on the Kubernetes platform. So, you know, just talk about that and how that's influenced the work that the two of you are doing together. >> Absolutely. So, the business we're in is to help companies who adopt Kubernetes as an orchestration platform do it easier, faster. It's a simple story, right? Everybody is using Kubernetes, but it turns out that Kubernetes is actually not that easy to operationalize; playing in a sandbox is one thing. Operationalizing this at a certain level of scale is not easy.
Now, we have a lot of enterprise customers who are deploying their own applications on Kubernetes, and we've had many, many of them. But when it comes to a company like Elation, it's a more complicated problem set, because they're taking a very complex application, their application, but then they're providing that as a service to their customers. So then we have a chain of customers we have to make happy: Anant's team, the platform organization; his internal customers, who are the developers deploying applications; and then the company's customers. We have to make sure that they get a good experience as they consume this application that happens to be running on Kubernetes. So that presented a really interesting challenge, right? How do we make this partnership successful? So I will say that we've learned a lot from each other, right? And, end of the day, the goal is, my customer, Anant specifically, right? He has to feel that this investment, 'cause he has to pay us money, we would like to get paid. >>John: Sure. (John laughs) >>It reduces his internal expenditure, because otherwise he'd have to do it himself. And most importantly, it's not the money part; it's that he can get to a certain goalpost significantly faster, because the invention time for Kubernetes management, the platform that you have to build to run Kubernetes, is a very complex exercise. It took us four and a half years to get here. You want to do that again, as a company, right? Why? Why do you want to do that? We, as Rafay, the way I think about what we deliver, yes, we sell a product, but to what end? The product is the what; the why is that every enterprise, every ISV is building a Kubernetes platform in house. They shouldn't, they shouldn't need to. They should be able to consume that as a service. They consume the Kubernetes engine, EKS, Amazon's Kubernetes; they consume that as an engine. But the management layer was a gap in the market.
How do I operationalize Kubernetes? And what we are doing is we're going to customers, as Anant said, and saying, "Hey, your team is technical, you understand the problem set. Would you like to build it, or would you rather consume this as a service so you can go faster?" And resoundingly the answer is, I don't want to do this anymore; I would rather buy. >>Well, you know, as Haseeb is saying, speed is, again, when we started talking, it only took us like a couple of months to figure out if Rafay was the right solution for us. And so we ended up purchasing Rafay in April. We launched our product based on Rafay, on Kubernetes, on EKS, in August. >>August. >>So that's about four months. I've done things like this before. It takes a couple of years just to sort of figure out how you really work with Kubernetes, right? In production at a large scale. Right now, we are running about a 600-node cluster on Rafay, and that's serving our customers. Like, one of the biggest things that's actually happening on December 8th is we are running what we call a virtual hands-on lab. >>A virtual? >>Hands-on lab. >>Okay. >>For Elation. And there are probably going to be about 500 people attending it. It's like a webinar style. But what we do in that hands-on lab is we will spin up an Elation instance for each attendee, right on the spot. Okay? Now, think about this: enterprise software running, and people just sign up for it and it's there for you, right on the spot. And that's the beauty of the software that we have been building. That's the beauty of the work that Rafay has helped us to do over the last few months. >>Okay. >>I think we need to charge them more money, I'm gathering from this conversation. I'm going to go work on that. >>I'm going to let the two of you work that out later. All right. I don't want to get in the way of a big deal.
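As an aside, the per-attendee provisioning described in the hands-on-lab discussion, one isolated instance spun up for each of roughly 500 sign-ups on a shared cluster, maps naturally onto one Kubernetes namespace plus one deployment per attendee. The sketch below is hypothetical: the names, image, and resource numbers are invented for illustration and are not Elation's or Rafay's actual tooling.

```python
# Hypothetical sketch: build the Kubernetes objects for one attendee's
# isolated lab instance (a dedicated namespace plus one deployment).
# Image name and resource figures are invented for illustration.

def attendee_manifests(attendee_id: str, image: str = "example/app:latest"):
    """Return [Namespace, Deployment] manifests for a single attendee."""
    ns = f"lab-{attendee_id}"
    namespace = {
        "apiVersion": "v1",
        "kind": "Namespace",
        "metadata": {"name": ns},
    }
    deployment = {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": "app", "namespace": ns},
        "spec": {
            "replicas": 1,
            "selector": {"matchLabels": {"attendee": attendee_id}},
            "template": {
                "metadata": {"labels": {"attendee": attendee_id}},
                "spec": {
                    "containers": [{
                        "name": "app",
                        "image": image,
                        # Per-instance limits keep ~500 concurrent labs from
                        # starving the shared cluster.
                        "resources": {
                            "requests": {"cpu": "500m", "memory": "1Gi"},
                            "limits": {"cpu": "2", "memory": "4Gi"},
                        },
                    }],
                },
            },
        },
    }
    return [namespace, deployment]
```

In practice these objects would be applied through the cluster API or a management layer, and cleaned up by deleting the attendee's namespace when the lab ends.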
But you mentioned that, we heard about it earlier, that it's you that would offer these services to your, to your clients. I assume they have their different levels of tolerance and their different challenges, right? They've got their own complexities and their own organizational barriers. So how are you juggling that end of it? Because you're kind of learning as, well, not learning, but you're experiencing some of the same things. >>Right. >>And yet you've got this other client base that has a multitude of experiences that they're going through. >>Right. So I think, you know, a lot of our customers, they are large enterprise companies. They've got a whole bunch of data that they want to work with us on. So one of the things that we have learned over the past few years is that we used to actually ship our software to the customers, and then they would manage it, for their privacy and security reasons. But now, since we're running in the cloud, they're really happy about that, because they don't need to juggle with the infrastructure and the software management and upgrades and things like that; we do it for them, right? And that's the speed for them, because now they are only interested in solving the problems with the data that they're working with. They don't need to deal with all these software management issues, right? So that frees our customers up to do the thing that they want to do. Of course, it makes our job harder, and I'm sure in turn it makes his job harder. >>We get the short end of the stick, for sure. >>That's why he is going to get more money. >>Exactly. >>Yeah, this is a great conversation. >>No, no, no. We'll talk about that. >>So, let's talk about the cloud then.
How, in terms of being the platform where all this is happening, and AWS, talk about your relationship with them as part of the startup program and what kind of value that brings to you. What does that do for you when you go out and are looking for work, and what kind of cachet does that bring you? >>Talk about AWS? >>Yes, sir. >>Okay. Well, so, the thing is really, like, of course AWS has a lot of programs in terms of making sure that as we move our customers into AWS, they can give us some, I wouldn't call it discount, but there are some credits that you can get as you move your workloads onto AWS. So that's a really great program. Our customers love it. They want us to do more things with AWS. It's a pretty seamless way for us, as we were talking about or thinking about moving into the cloud; AWS was our number one choice, and that's the only cloud that we are in today. We're not going to go to any other place. >>That's it. >>Yeah. >>How would you characterize it? I mean, we've already heard from one side of the fence here, but. >>Absolutely. So for us, AWS is a make-or-break partner, frankly. As the EKS team knows very well, we support Azure's Kubernetes and Google's Kubernetes and the community Kubernetes as well. But the number of customers on our platform who are AWS native, either a hundred percent or a large percentage, is, you know, that's the majority of our customer base. >>John: Yeah. >>And AWS has made it very easy for us in a variety of ways to make us successful and our customers successful. So Anant mentioned the credit program they have, which is very useful, 'cause we can, you know, readily kind of bring a customer to try things out, and they can do that at no cost, right? So they can spin up infrastructure, play with things, and AWS will cover the cost, as one example. So that's a really good thing. Beyond that, there are multiple programs at AWS, ISV Accelerate, et cetera.
Those, you know, you get into over time; you kind of keep climbing taller and taller, and you keep taking on bigger and bigger programs. And as you make progress, what I'm finding is that there's a great ecosystem of support that they provide us. They introduce us to customers, they help us, you know, think through architecture issues. We get access to their roadmap. We work very, very closely with the EKS team, for example. Like, the GM for Kubernetes at AWS is a gentleman named Barry Cooks, who was my sponsor, right? So, we spend a lot of time together. In fact, right after this, I'm going to be spending time with him, because look, they take us seriously as a partner. They spend time with us because, end of the day, they understand that if they make their partners, in this case Rafay, successful, at the end of the day that helps the customer, right? Anant's customer, my customer, their AWS customers, also. So they benefit, because we are collectively helping them solve a problem faster. The goal of the cloud is to help people modernize, right? Reduce operational costs, because data centers are expensive, right? But then, these are complex solutions; this is an enterprise product, and Kubernetes at the enterprise level is a complex problem. If we don't collectively work together to save the customer effort, essentially, right? Reduce their TCO for whatever it is they're doing, right? Then the cost of the cloud is too high. And AWS clearly understands and appreciates that, and that's why they are going out of their way, frankly, to make us successful and make other companies successful in the startup program. >>Well. >>I would just add a couple of things there. Yeah, so, you know, cloud is not new. It's been there for a while. You know, people used to build things on their own. And so what AWS has really done is they have advanced technology enough where everything is as simple as just turning on a switch and using it, right?
So, just a recent example, and, by the way, I love managed services, right? The reason is really because I don't need to put my own people on building and managing those things, right? So, if you want to use search, they've got OpenSearch; if you want to use caching, they've got ElastiCache, and stuff like that. So it's really simple and easy to just pick and choose which services you want to use, and they're ready to be consumed right away. And that's the beauty of it, and that's how we can move really fast and get things done. >>Ease of use, right? Efficiency, saving money. It's a winning combination. Thanks for sharing this story, appreciate it. Anant, Haseeb, thanks for being with us. >>Yeah, thank you so much for having us. >>We appreciate it. >>Thank you so much. >>You have been a part of the global startup program at AWS and startup showcase. Proud to feature this great collaboration. I'm John Walls. You're watching theCUBE, which is of course the leader in high tech coverage.
Subbu Iyer
>> Hey everyone, welcome to theCUBE's coverage of AWS re:Invent 2022. Lisa Martin here with you with Subbu Iyer, one of our alumni, who's now the CEO of Aerospike. Subbu, great to have you on the program. Thank you for joining us. >> Great as always to be on theCUBE, Lisa, good to meet you. >> So, you know, every company these days has got to be a data company, whether it's a retailer, a manufacturer, a grocer, an automotive company. But for a lot of companies, data is underutilized yet a huge asset that is value added. Why do you think companies are struggling so much to make data a value-added asset? >> Well, you know, we see this across the board. When I talk to customers and prospects, there is a desire from the business and from IT, actually, to leverage data to really fuel newer applications, newer services, newer business lines, if you will, for companies. I think the struggle is, one, the plethora of data that is created. Surveys say that over the next three years, by 2025, around 175 zettabytes, right, 175 zettabytes of data is going to be created. And that's really a growth of north of 30% year over year. But the more important and the interesting thing is the real-time component of that data is actually growing at, you know, a 35% CAGR. And what enterprises desire is decisions that are made in real time, or near real time. And a lot of the challenges that do exist today is that, one, the infrastructure that enterprises have in place was never built to actually manipulate data in real time.
The second is really the ability to actually put something in place which can handle spikes, yet be cost efficient to run. So you can build for really peak loads, but then it's very expensive to operate that particular service at normal loads. So how do you build something which actually works for you for both uses, so to speak? And the last point that we see out there is, even if you're able to, you know, bring in all that data, you don't have the processing capability to run through that data. So as a result, most enterprises struggle with, one, capturing the data, making decisions from it in real time, and really operating it at the cost point that they need to operate it at. >> You know, you bring up a great point with respect to real-time data access. And I think one of the things that we've learned the last couple of years is that access to real-time data, it's not a nice-to-have anymore. It's business critical for organizations in any industry. Talk about that as one of the challenges that organizations are facing. >> Yeah, when we started Aerospike, right? When the company started, it started with the premise that data is going to grow, number one, exponentially. Two, when applications open up to the internet, there's going to be a flood of users and demands on those applications. And that was true primarily when we started the company in the ad tech vertical. So ad tech was the first vertical where there was a lot of data, both on the supply side and the demand side, from an inventory of ads that were available. And on the other hand, they had like microseconds or milliseconds in which they could make a decision on which ad to put in front of you and me so that we would click or engage with that particular ad. But over the last three to five years, what we've seen is, as digitization has actually permeated every industry out there, the need to harness data in real time is pretty much present in every industry.
Whether that's retail, whether that's financial services, telecommunications, e-commerce, gaming and entertainment. Every industry has a desire. One, the innovative companies, the small companies rather, are innovating at a pace and standing up new businesses to compete with the larger companies in each of these verticals. And the larger companies don't want to be left behind. So they're standing up their own competing services, or getting into new lines of business that really harness and are driven by real-time data. So these are compelling pressures: one, you know, customer experience is paramount, and we as customers expect answers in, you know, an instant, in real time. And on the other hand, the way they make decisions is based on a large data set, because, you know, larger data sets actually propel better decisions. So there are competing pressures here which essentially drive the need, one from a business perspective, two from a customer perspective, to harness all of this data in real time. So that's what's driving an incessant need to actually make decisions in real or near real time. >> You know, I think one of the things that's been in short supply over the last couple of years is patience. We do expect as consumers, whether we're in our business lives or our personal lives, that we're going to be given information and data that's relevant, it's personal, to help us make those real-time decisions. So having access to real-time data is really business critical for organizations across any industry. Talk about some of the main capabilities that modern data applications and data platforms need to have. What are some of the key capabilities of a modern data platform that need to be delivered to meet demanding customer expectations? >> So, you know, going back to your initial question, Lisa, around why is data really a high-value but underutilized or under-leveraged asset?
One of the reasons we see is a lot of the data platforms that, you know, some of these applications were built on have been around for a decade plus. And they were never built for the needs of today, which is really driving a lot of data and driving insight in real time from a lot of data. So there are four major capabilities that we see that are essential ingredients of any modern data platform. One is really the ability to, you know, operate at unlimited scale. So what we mean by that is really the ability to scale from gigabytes to even petabytes without any degradation in performance or latency or throughput. The second is really, you know, predictable performance. So can you actually deliver predictable performance as your data size grows, or your throughput grows, or your concurrent users on that application or service grow? It's really easy to build an application that operates at low scale or low throughput or low concurrency, but performance usually starts degrading as you start scaling one of these attributes. The third thing is the ability to operate an always-on, globally resilient application. And that requires a really robust data platform that can be up on a five-nines basis globally, can support global distribution, because a lot of these applications have global users. And the last point goes back to my first answer, which is, can you operate all of this at a cost point which is not prohibitive but makes sense from a TCO perspective? 'Cause a lot of times what we see is people make choices of data platforms and, ironically, as their service or applications become more successful and more users join their journey, the revenue starts going up, the user base starts going up, but the cost basis starts crossing over the revenue and they're losing money on the service, ironically as the service becomes more popular. So really: unlimited scale, predictable performance, always on a globally resilient basis, and low TCO.
These are the four essential capabilities of any modern data platform. >> So then talk to me, with those as the four main core functionalities of a modern data platform, how does Aerospike deliver that? >> So we were built, as I said, from day one to operate at unlimited scale and deliver predictable performance. And then over the years, as we worked with customers, we built this incredible high availability capability which helps us deliver the always-on, you know, operations. So we have customers who have been on the platform 10 years with no downtime, for example, right? So we are talking about an amazing continuum of high availability that we provide for customers who operate these, you know, globally resilient services. The key to our innovation here is what we call the hybrid memory architecture. So, you know, going a little bit technically deep here, essentially what we built out in our architecture is the ability on each node or each server to treat a bank of SSDs or solid-state devices as essentially extended memory. So you're getting memory performance but you're accessing these SSDs. You're not paying memory prices but you're getting memory performance. As a result of that you can attach a lot more data to each node or each server in a distributed cluster. And when you kind of scale that across basically a distributed cluster, you can do with Aerospike the same things at 60 to 80% lower server count. And as a result, 60 to 80% lower TCO compared to some of the other options that are available in the market. Then basically, as I said, that's the key kind of starting point to the innovation. We layer on capabilities like, you know, replication, change data notification, you know, synchronous and asynchronous replication. The ability to actually stretch a single cluster across multiple regions.
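The hybrid memory idea he describes, a small in-memory index pointing at records that live on flash, can be sketched in miniature. This is only a toy illustration of the concept (the class, the temporary file standing in for an SSD, and all names here are invented for the example, not Aerospike's actual implementation):

```python
import json
import tempfile

class HybridStore:
    """Toy key-value store: the index lives in memory, values live on 'SSD'.

    Loosely illustrates the hybrid-memory idea: DRAM holds only small
    (key -> offset, length) entries, so each node can address far more
    data than would fit in RAM alone, at roughly one device read per get.
    """
    def __init__(self):
        self.index = {}                      # in-memory: key -> (offset, length)
        self.ssd = tempfile.TemporaryFile()  # stand-in for a raw SSD device

    def put(self, key, record):
        blob = json.dumps(record).encode()
        self.ssd.seek(0, 2)                  # append at the end of the device
        offset = self.ssd.tell()
        self.ssd.write(blob)
        self.index[key] = (offset, len(blob))

    def get(self, key):
        offset, length = self.index[key]     # one memory lookup...
        self.ssd.seek(offset)                # ...then one device read
        return json.loads(self.ssd.read(length))

store = HybridStore()
store.put("user:42", {"name": "Ada", "last_ad": "a9f3"})
print(store.get("user:42")["name"])          # -> Ada
```

The memory cost per record is just the index entry, which is what lets a node with modest DRAM serve a much larger data set from flash.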
So for example, if you're operating a global service you can have a single Aerospike cluster with one node in San Francisco, one node in New York, another one in London, and this would be basically seamlessly operating. And, you know, this is strongly consistent; very few NoSQL data platforms are strongly consistent, or if they are strongly consistent they will actually suffer performance degradation. And what strongly consistent means is, you know, all your data is always available, it's guaranteed to be available, there is no data loss at any time. So in this configuration that I talked about, if the node in London goes down your application still continues to operate, right? Your users see no kind of downtime and, you know, when London comes up it rejoins the cluster and everything is back to kind of the way it was before, you know, London left the cluster so to speak. So the ability to do this globally resilient, highly available kind of model is really, really powerful. A lot of our customers actually use that kind of a scenario and we offer other deployment scenarios from a high availability perspective. So everything starts with HMA or Hybrid Memory Architecture and then we start building a lot of these other capabilities around the platform. And then over the years what our customers have guided us to do is, as they're putting together a modern kind of data infrastructure, we don't live in a silo. So Aerospike gets deployed with other technologies like streaming technologies or analytics technologies. So we built connectors into Kafka, Pulsar, so that as you're ingesting data from a variety of data sources you can ingest them at very high ingest speeds and store them persistently into Aerospike. Once the data is in Aerospike you can actually run Spark jobs across that data in a multi-threaded parallel fashion to get real insight from that data at really high throughput and high speed.
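The failover behavior described here can be illustrated with a deliberately naive simulation: full replication in plain dictionaries, with invented region names. Real cross-region strong consistency involves consensus protocols and partition handling far beyond this sketch; the point is only the observable behavior, reads keep succeeding through a single-region outage and the region catches up on rejoin:

```python
class Cluster:
    """Toy fully-replicated cluster: every write goes to all live nodes,
    so reads keep working when any single node drops out."""
    def __init__(self, regions):
        self.nodes = {r: {} for r in regions}

    def write(self, key, value):
        for data in self.nodes.values():   # replicate to every live replica
            data[key] = value

    def read(self, key):
        for data in self.nodes.values():   # any live node can serve the read
            if key in data:
                return data[key]
        raise KeyError(key)

    def fail(self, region):
        del self.nodes[region]             # region outage

    def rejoin(self, region):
        survivor = next(iter(self.nodes.values()))
        self.nodes[region] = dict(survivor)  # catch up from a surviving node

cluster = Cluster(["san-francisco", "new-york", "london"])
cluster.write("session:7", "active")
cluster.fail("london")                     # London goes down...
print(cluster.read("session:7"))           # ...reads still succeed -> active
cluster.rejoin("london")                   # ...and London catches back up
```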
>> High throughput, high speed, incredibly important especially as today's landscape is increasingly distributed. Data centers, multiple public clouds, Edge, IoT devices, the workforce embracing more and more hybrid these days. How are you helping customers to extract more value from data while also lowering costs? Go into some customer examples 'cause I know you have some great ones. >> Yeah, you know, I think, we have built an amazing set of customers and customers actually use us for some really mission critical applications. So, you know, before I get into specific customer examples let me talk to you about some of kind of the use cases which we see out there. We see a lot of Aerospike being used in fraud detection. We see us being used in recommendations engines, we get used in customer data profiles, or customer profiles, Customer 360 stores, you know, multiplayer gaming and entertainment. These are kind of the repeated use cases, and digital payments. We power most of the digital payment systems across the globe. From a specific example perspective, the first one I would love to talk about is PayPal. So if you use PayPal today, then you know when you're actually paying somebody your transaction is, you know, being sent through Aerospike to really decide whether this is a fraudulent transaction or not. And when you do that, you know, you and I as customers are not going to wait around for 10 seconds for PayPal to say yay or nay. We expect, you know, the decision to be made in an instant. So we are powering that fraud detection engine at PayPal for every transaction that goes through PayPal. Before us, you know, PayPal was missing out on about 2% of their SLAs, which was essentially millions of dollars which they were losing because, you know, they were letting transactions go through and taking the risk that it's not a fraudulent transaction.
With Aerospike they can now actually get a much better SLA and the data set on which they compute the fraud score has gone up by, you know, several factors. So by 30X if you will. So not only has the data size that is powering the fraud engine actually gone up 30X with Aerospike, but they're actually making decisions in an instant for, you know, 99.95% of their transactions. So that's- >> And that's what we expect as consumers, right? We want to know that there's fraud detection on the swipe regardless of who we're interacting with. >> Yes, and so that's a really powerful use case and, you know, it's a great customer success story. The other one I would talk about is really Wayfair, right, from retail and, you know, from e-commerce. So everybody knows Wayfair, a global leader in online home furnishings, and they use us to power their recommendations engine. And, you know, it's basically if you're purchasing this, people who bought this also bought these five other things, so on and so forth. They have actually seen their cart size at checkout go up by up to 30% as a result of actually powering their recommendations engine through Aerospike. And they were able to do this by reducing the server count by 9X. So on one ninth of the servers that were there before Aerospike, they're now powering their recommendations engine and seeing cart size at checkout go up by 30%. Really, really powerful in terms of the business outcome and what we are able to, you know, drive at Wayfair. >> Hugely powerful as a business outcome. And that's also what the consumer wants. The consumer is expecting these days to have a very personalized, relevant experience that's going to show me, if I bought this, something else that's related to it. We have this expectation that needs to be really fueled by technology. >> Exactly, and you know, another great example, you asked about, you know, customer stories, Adobe. Who doesn't know Adobe, you know.
They're on a mission to deliver the best customer experience that they can. And they're talking about, you know, great Customer 360 experience at scale and they're modernizing their entire edge compute infrastructure to support this with Aerospike. Going to Aerospike, basically what they have seen is their throughput go up by 70%, their cost has been reduced by 3X. So essentially doing it at one third of the cost while their annual data growth continues at, you know, about north of 30%. So not only is their data growing, they're able to actually reduce the cost of delivering this great customer experience to one third, and continue to deliver great Customer 360 experience at scale. Really, really powerful example of how you deliver Customer 360 in a world which is dynamic and, you know, on a data set which is constantly growing at north of 30% in this case. >> Those are three great examples, PayPal, Wayfair, Adobe, talking about, especially with Wayfair when you talk about increasing their cart checkout sizes, but also with Adobe increasing throughput by over 70%. I'm looking at my notes here. While data is growing at 32%, that's something that every organization has to contend with, data growth is continuing to scale and scale and scale. >> Yep, I'll give you a fun one here. So, you know, you may not have heard about this company, it's called Dream11, and it's a company based out of India, but it's a very, you know, it's a fun story because it's the world's largest fantasy sports platform. And you know, India is a nation which is cricket crazy. So you know, when they have their premier league going on and there's millions of users logged onto the Dream11 platform building their fantasy league teams and, you know, playing on that particular platform, it has a hundred million plus users on the platform, 5.5 million concurrent users, and they have been growing at 30%.
So they are considered an amazing success story in terms of what they have accomplished and the way they have architected their platform to operate at scale. And all of that is really powered by Aerospike. Think about that: they're able to deliver all of this and support a hundred million users, 5.5 million concurrent users, all with, you know, 99 plus percent of their transactions completing in less than one millisecond. Just an incredible success story. Not a brand that is, you know, world renowned, but at least, you know, from what we see out there it's an amazing success story of operating at scale. >> Amazing success story, huge business outcomes. Last question for you as we're almost out of time is talk a little bit about Aerospike, AWS, the partnership, Graviton2, better together. What are you guys doing together there? >> Great partnership. AWS has multiple layers in terms of partnerships. So, you know, we engage with AWS at the executive level. They plan out the rollout of new instances in partnership with us, making sure that, you know, those instance types work well for us. And then we just released support for Aerospike on the Graviton platform and we just announced a benchmark of Aerospike running on Graviton on AWS. And what we see with the benchmark is a 1.6X improvement in price performance. And, you know, about an 18% increase in throughput while maintaining a 27% reduction in cost, you know, on Graviton. So this is an amazing story from a price performance perspective, and performance per watt for greater energy efficiencies, which basically a lot of our customers are starting to kind of talk to us about leveraging to further meet their sustainability targets. So a great story from Aerospike and AWS, not just from a partnership perspective on a technology and an executive level, but also in terms of what joint outcomes we are able to deliver for our customers. >> And it sounds like a great sustainability story.
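The three Graviton numbers quoted here are mutually consistent: an 18% throughput gain combined with a 27% cost reduction works out to roughly the 1.6X price-performance figure, if price performance is read as work done per dollar relative to the baseline. A quick check:

```python
baseline_throughput = 1.00   # normalized non-Graviton baseline
baseline_cost = 1.00

graviton_throughput = baseline_throughput * 1.18    # ~18% more throughput
graviton_cost = baseline_cost * (1 - 0.27)          # ~27% lower cost

# price performance = throughput per dollar, relative to the baseline
price_performance = (graviton_throughput / graviton_cost) / (baseline_throughput / baseline_cost)
print(f"{price_performance:.2f}x")   # -> 1.62x, in line with the quoted 1.6X
```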
I wish we had more time so we could talk more about this, but thank you so much for talking about the main capabilities of a modern data platform, what's needed, why, and how you guys are delivering that. We appreciate your insights and appreciate your time. >> Thank you very much. I mean, if folks are at re:Invent next week or this week, come on and see us at our booth; we are in the data analytics pavilion and you can find us pretty easily. Would love to talk to you. >> Perfect, we'll send them there. Subbu Iyer, thank you so much for joining me on the program today. We appreciate your insights. >> Thank you, Lisa. >> I'm Lisa Martin, you're watching theCUBE's coverage of AWS re:Invent 2022. Thanks for watching.
Ian Colle, AWS | SuperComputing 22
(lively music) >> Good morning. Welcome back to theCUBE's coverage at Supercomputing Conference 2022, live here in Dallas. I'm Dave Nicholson with my co-host Paul Gillin. So far so good, Paul? It's been a fascinating morning, three days in, and we have a fascinating guest, Ian from AWS. Welcome. >> Thanks, Dave. >> What are we going to talk about? Batch computing, HPC. >> We've got a lot, let's get started. Let's dive right in. >> Yeah, we've got a lot to talk about. I mean, first thing is we recently announced our batch support for EKS. EKS is our Kubernetes, managed Kubernetes offering at AWS. And so batch computing is still a large portion of HPC workloads. While the interactive component is growing, the vast majority of systems are just kind of fire and forget, and we want to run thousands and thousands of nodes in parallel. We want to scale out those workloads. And what's unique about our AWS Batch offering is that we can dynamically scale based upon the queue depth. And so customers can go from seemingly nothing up to thousands of nodes, and while they're executing their work they're only paying for the instances while they're working. And then as the queue depth starts to drop and the number of jobs waiting in the queue starts to drop, then we start to dynamically scale down those resources. And so it's extremely powerful. We see lots of distributed machine learning, autonomous vehicle simulation, and traditional HPC workloads taking advantage of AWS Batch. >> So when you have a Kubernetes cluster does it have to be located in the same region as the HPC cluster that's going to be doing the batch processing, or does the nature of batch processing mean, in theory, you can move something from here to somewhere relatively far away to do the batch processing? How does that work? 'Cause look, we're walking around here and people are talking about lengths of cables in order to improve performance.
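The queue-depth-driven scaling Ian describes can be caricatured in a few lines. This is not the actual AWS Batch scaling algorithm, just a sketch of the behavior he outlines: the fleet grows toward the queue's demand up to a cap, and shrinks back toward zero as the queue drains, so you only pay while jobs are running. The jobs-per-node ratio and the node cap below are invented for the example.

```python
def desired_nodes(queue_depth, jobs_per_node=4, max_nodes=1000):
    """Size the fleet to the work waiting in the queue (ceiling division),
    capped at the compute environment's maximum node count."""
    return min(max_nodes, -(-queue_depth // jobs_per_node))

fleet = 0
for depth in [0, 120, 4000, 900, 40, 0]:      # queue depth sampled over time
    fleet = desired_nodes(depth)
    print(f"queue={depth:4d} -> nodes={fleet}")
# queue=   0 -> nodes=0
# queue= 120 -> nodes=30
# queue=4000 -> nodes=1000
# queue= 900 -> nodes=225
# queue=  40 -> nodes=10
# queue=   0 -> nodes=0
```

Since billing stops when the fleet scales to zero, an idle queue costs nothing, which is the cost model he contrasts with a fixed-size on-premises cluster.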
So what does that look like when you peel back the cover and you look at it physically, not just logically, AWS is everywhere, but physically, what does that look like? >> Oh, physically, for us, it depends on what the customer's looking for. We have workflows that are all entirely within a single region. And so where they could have a portion of say the traditional HPC workflow, is within that region as well as the batch, and they're saving off the results, say to a shared storage file system like our Amazon FSx for Lustre, or maybe aging that back to an S3 object storage for a little lower cost storage solution. Or you can have customers that have a kind of a multi-region orchestration layer to where they say, "You know what? "I've got a portion of my workflow that occurs "over on the other side of the country "and I replicate my data between the East Coast "and the West Coast just based upon business needs. "And I want to have that available to customers over there. "And so I'll do a portion of it in the East Coast "a portion of it in the West Coast." Or you can think of that even globally. It really depends upon the customer's architecture. >> So is the intersection of Kubernetes with HPC, is this relatively new? I know you're saying you're, you're announcing it. >> It really is. I think we've seen a growing perspective. I mean, Kubernetes has been a long time kind of eating everything, right, in the enterprise space? And now a lot of CIOs in the industrial space are saying, "Why am I using one orchestration layer "to manage my HPC infrastructure and another one "to manage my enterprise infrastructure?" And so there's a growing appreciation that, you know what, why don't we just consolidate on one? And so that's where we've seen a growth of Kubernetes infrastructure and our own managed Kubernetes EKS on AWS. >> Last month you announced a general availability of Trainium, of a chip that's optimized for AI training. 
Talk about what's special about that chip, or how it is customized for training workloads. >> Yeah, what's unique about Trainium is you'll see 40% better price performance over any other GPU available in the AWS cloud. And so we've really geared it to be the most price-performant of options for our customers. And that's what we like about the silicon team, that we're part of that Annapurna acquisition, is because it really has enabled us to have this differentiation and to not just be innovating at the software level but the entire stack. That Annapurna Labs team develops our network cards, they develop our ARM chips, they developed this Trainium chip. And so that silicon innovation has become a core part of our differentiator from other vendors. And what Trainium allows you to do is perform similar workloads, just at a better price performance. >> And you also have a chip several years older, called Inferentia- >> Um-hmm. >> Which is for inferencing. What is the difference between, I mean, when would a customer use one versus the other? How would you move the workload? >> What we've seen is customers traditionally have looked for a certain class of machine, more of a compute type that is not as accelerated or as heavy as you would need for Trainium, for their inference portion of their workload. So when they do that training they want the really beefy machines that can grind through a lot of data. But when you're doing the inference, it's a little lighter weight. And so it's a different class of machine. And so that's why we've got those two different product lines, with the Inferentia being there to support those inference portions of their workflow and the Trainium to be that kind of heavy duty training work.
We help them work through what does that design of that workflow look like? And some customers are very comfortable doing self-service and just kind of building it on their own. Other customers look for a more professional services engagement to say like, "Hey, can you come in and help me work "through how I might modify my workflow to "take full advantage of these resources?" >> The HPC world has been somewhat slower than commercial computing to migrate to the cloud because- >> You're very polite. (panelists all laughing) >> Latency issues, they want to control the workload, they want to, I mean there are even issues with moving large amounts of data back and forth. What do you say to them? I mean what's the argument for ditching the on-prem supercomputer and going all-in on AWS? >> Well, I mean, to be fair, I started at AWS five years ago. And I can tell you when I showed up at Supercomputing, even though I'd been part of this community for many years, they said, "What is AWS doing at Supercomputing?" I know you care, wait, it's Amazon Web Services. You care about the web, can you actually handle supercomputing workloads? Now the thing that very few people appreciated is that yes, we could. Even at that time in 2017, we had customers that were performing HPC workloads. Now that being said, there were some real limitations on what we could perform. And over those past five years, as we've grown as a company, we've started to really eliminate those frictions for customers to migrate their HPC workloads to the AWS cloud. When I started in 2017, we didn't have our elastic fabric adapter, our low-latency interconnect. So customers were stuck with standard TCP/IP. So for their highly demanding open MPI workloads, we just didn't have the latencies to support them. So the jobs didn't run as efficiently as they could. 
We didn't have Amazon FSx for Lustre, our managed Lustre offering, a high-performance, POSIX-compliant file system, which is key to a large portion of HPC workloads: you have to have a high-performance file system. We didn't even, I mean, we had about 25 gigs of networking when I started. Now you look at, with our accelerated instances, we've got 400 gigs of networking. So we've really continued to grow across that spectrum and to eliminate a lot of those frictions to adoption. I mean, one of the key ones, we had an open source toolkit that was jointly developed by Intel and AWS called CfnCluster that customers were using to even instantiate their clusters. So, and now we've migrated that all the way to a fully functional supported service at AWS called AWS ParallelCluster. And so you've seen over those past five years we have had to develop, we've had to grow, we've had to earn the trust of these customers and say come run your workloads on us and we will demonstrate that we can meet your demanding requirements. And at the same time, there's been, I'd say, more of a cultural acceptance. People have gone away from the, again, five years ago, "what are you doing walking around the show," to saying, "Okay, I'm not sure I get it. "I need to look at it. "I, okay, I, now, oh, it needs to be a part "of my architecture but the standard questions, "is it secure? "Is it price performant? "How does it compare to my on-prem?" And really culturally, a lot of it is just getting IT administrators used to, we're not eliminating a whole field, right? We're just upskilling the people that used to rack and stack actual hardware, to now you're learning AWS services and how to operate within that environment. And it's still key to have those people that are really supporting these infrastructures.
And so I'd say it's a little bit of a combination of cultural shift over the past five years, to see that cloud is a super important part of HPC workloads, and part of it's been us meeting the market where we needed to by innovating both at the hardware level and at the software level, which we're going to continue to do. >> You do have an on-prem story though. I mean, you have Outposts. We don't hear a lot of talk about Outposts lately, but these innovations, like Inferentia, like Trainium, like the networking innovation you're talking about, are these going to make their way into Outposts as well? Will that essentially become this supercomputing solution for customers who want to stay on-prem? >> Well, we'll see what the future holds, but we believe that we've got the, as you noted, we've got the hardware, we've got the network, we've got the storage. All those put together gives you a high-performance computer, right? And whether you want it to be redundant in your local data center or you want it to be accessible via APIs from the AWS cloud, we want to provide that service to you.
So that would sort of conjure, in the imagination, multi-tenancy, what does that look like? >> Definitely, and that's been, let me start with your second part first is- >> Yeah. That's been a a core area within AWS is we do not see as, okay we're going to, we're going to carve out this super computer and then we're going to allocate that to you. We are going to dynamically allocate multi-tenant resources to you to perform the workloads you need. And especially with the batch environment, we're going to spin up containers on those, and then as the workloads complete we're going to turn those resources over to where they can be utilized by other customers. And so that's where the batch computing component really is powerful, because as you say, you're releasing resources from workloads that you're done with. I can use those for another portion of the workflow for other work. >> Okay, so it makes a huge difference, yeah. >> You mentioned, that five years ago, people couldn't quite believe that AWS was at this conference. Now you've got a booth right out in the center of the action. What kind of questions are you getting? What are people telling you? >> Well, I love being on the show floor. This is like my favorite part is talking to customers and hearing one, what do they love, what do they want more of? Two, what do they wish we were doing that we're not currently doing? And three, what are the friction points that are still exist that, like, how can I make their lives easier? And what we're hearing is, "Can you help me migrate my workloads to the cloud? "Can you give me the information that I need, "both from a price for performance, "for an operational support model, "and really help me be an internal advocate "within my environment to explain "how my resources can be operated proficiently "within the AWS cloud." And a lot of times it's, let's just take your application a subset of your applications and let's benchmark 'em. 
And really, at AWS, one of the key things is we are a data-driven environment. And so when you take that data and you can help a customer say, like, "Let's just not look at hypothetical, "at synthetic benchmarks, let's take "actually the LS-DYNA code that you're running, perhaps. "Let's take the OpenFOAM code that you're running, "that you're running currently "in your on-premises workloads, "and let's run it on AWS cloud "and let's see how it performs." And then we can take that back to the decision makers and say, okay, here's the price for performance on AWS, here's what we're currently doing on-premises, how do we think about that? And then that also ties into your earlier question about CapEx versus OpEx. We have models where, actually, you can capitalize a longer-term purchase at AWS. So it doesn't have to be, I mean, depending upon the accounting models you want to use, we do have a majority of customers that will stay with that OpEx model, and they like that flexibility of saying, "Okay, spend as you go." We need to have true-ups, and make sure that they have insight into what they're doing. I think one of the boogeymen is that, oh, I'm going to spend all my money and I'm not going to know what's available. And so we want to provide the cost visibility, the cost controls, to where you feel like, as an HPC administrator, you have insight into what your customers are doing and that you have control over that. And so once you kind of take away some of those fears and give them the information that they need, what you start to see too is, you know what, we really didn't have a lot of those cost visibility and controls with our on-premises hardware. And we've had some customers tell us we had one portion of the workload where this work center was spending thousands of dollars a day. And we went back to them and said, "Hey, we started to show this, "what you were spending on-premises." They went, "Oh, I didn't realize that."
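The benchmark-first comparison he describes reduces to simple arithmetic once you have measured runtimes on both sides. The sketch below uses entirely invented figures (the rates, runtimes, and node counts are placeholders, not real LS-DYNA or OpenFOAM results); the point is the shape of the calculation, not the numbers:

```python
def cost_per_job(runtime_hours, rate_per_node_hour, nodes):
    """Cost to complete one benchmark job on a given cluster."""
    return runtime_hours * rate_per_node_hour * nodes

# Hypothetical measurements for one solver case on 16 nodes each:
onprem = cost_per_job(runtime_hours=6.0, rate_per_node_hour=3.20, nodes=16)  # amortized on-prem $/node-hr
cloud = cost_per_job(runtime_hours=4.5, rate_per_node_hour=2.75, nodes=16)

print(f"on-prem ${onprem:.2f}/job vs cloud ${cloud:.2f}/job")
# -> on-prem $307.20/job vs cloud $198.00/job
```

The on-prem rate is the part that is usually invisible, which is his point about "free" on-premises hardware: once power, space, staff, and depreciation are amortized into a per-node-hour figure, the two sides become directly comparable.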
And so I think that's part of a cultural thing: in HPC, the question was, well, on-premises is free, how do you compete with free? And so we need to really change that culturally, to where people see there is no free lunch. You're paying for the resources whether it's on-premises or in the cloud. >> Data scientists don't worry about budgets. >> Wait, on-premises is free? Paul mentioned something that reminded me: you said you were here in 2017, and people said, AWS, web, what are you even doing here? Now in 2022, you're talking in terms of migrating to cloud. Paul mentioned Outposts. Let's say that a customer says, "Hey, I'd like you to put in a thousand-node cluster in this data center that I happen to own, but from my perspective, I want to interact with it just like it's in your data center." In other words, the location doesn't matter. My experience is identical to interacting with AWS in an AWS data center, or in a CoLo that works with AWS, but instead it's my physical data center. When we're tracking the percentage of IT that is on-prem versus off-prem, what is that? Is what I just described cloud? And in five years, are you no longer going to be talking about migrating to cloud, because people will go, "What do you mean, migrating to cloud? What are you even talking about? What difference does it make?" It's either something that AWS is offering or it's something that someone else is offering. Do you think we'll be at that point in five years, where in this world of virtualization and abstraction, you talked about Kubernetes, we should be there already, thinking in terms of it doesn't matter as long as it meets latency and sovereignty requirements? So that's your prediction question; we're all about insights in supercomputing... >> My prediction... >> In five years, will you still be talking about migrating to cloud, or will that be something from the past? >> In five years, I still think there will be a component.
I think the majority assumption will be that things are cloud-native, that you start in the cloud, and that there is perhaps an aspect that will be interacting with some sort of an edge device or some sort of an on-premises device. And we hear more and more customers that are saying, "Okay, I can see the future. I can see that I'm shrinking my footprint." And you can see them still saying, "I'm not sure how small that beachhead will be, but right now I want to at least say that I'm going to operate in that hybrid environment." And so I'd say, again, given the pace of this community, in five years we're still going to be talking about migrations, but I'd say the vast majority will be a cloud-native, cloud-first environment. And how do you classify that Outpost sitting in someone's data center? I'll leave that up to the analysts, but I think it would probably come down as cloud spend. >> Great place to end. Ian, you and I now officially have a bet. In five years we're going to come back. My contention is, no, we're not going to be talking about it anymore. >> Okay. >> And kids in college are going to be like, "What do you mean, cloud? It's all IT, it's all IT." And they won't remember this whole phase of moving to cloud and back and forth. With that, join us in five years to see the result of this mega-bet between Ian and Dave. I'm Dave Nicholson with theCUBE, here at Supercomputing Conference 2022, day three of our coverage, with my co-host Paul Gillin. Thanks again for joining us. Stay tuned; after this short break, we'll be back with more action. (lively music)
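A footnote on the dynamic batch model Ian describes in this segment: containers spin up on shared, multi-tenant capacity, and slots released by finished workloads are immediately reused by queued work. The toy scheduler below is a sketch of that idea only; `CapacityPool`, its tick-based loop, and the job names are all hypothetical and not any AWS Batch API.

```python
from collections import deque

class CapacityPool:
    """Toy multi-tenant pool: slots freed by one finished workload are
    immediately reusable by other queued work (hypothetical, illustrative)."""
    def __init__(self, slots):
        self.free = slots
        self.queue = deque()
        self.completed = []

    def submit(self, name, slots_needed, runtime):
        self.queue.append((name, slots_needed, runtime))

    def run(self):
        running = []  # each entry: [name, slots, remaining ticks]
        while self.queue or running:
            # Start queued jobs while shared capacity allows.
            while self.queue and self.queue[0][1] <= self.free:
                name, slots, runtime = self.queue.popleft()
                self.free -= slots
                running.append([name, slots, runtime])
            # Advance one time step; finished jobs release their slots back.
            for job in running:
                job[2] -= 1
            for job in [j for j in running if j[2] <= 0]:
                running.remove(job)
                self.free += job[1]
                self.completed.append(job[0])

pool = CapacityPool(slots=4)
pool.submit("preprocess", 4, runtime=2)
pool.submit("train", 2, runtime=3)   # reuses slots freed by "preprocess"
pool.submit("report", 2, runtime=1)
pool.run()
print(pool.completed)  # ['preprocess', 'report', 'train']
```

Note that the pool never grows: the same four slots serve all three jobs in turn, which is the resource-reuse point being made above.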
Dhabaleswar “DK” Panda, Ohio State University | SuperComputing 22
>> Welcome back to theCUBE's coverage of Supercomputing Conference 2022, otherwise known as SC22, here in Dallas, Texas. This is day three of our coverage, the final day of coverage here on the exhibition floor. I'm Dave Nicholson, and I'm here with my co-host, tech journalist extraordinaire, Paul Gillin. How's it going, Paul? >> Hi, Dave. It's going good. >> And we have a wonderful guest with us this morning, Dr. Panda from the Ohio State University. Welcome, Dr. Panda, to theCUBE. >> Thanks a lot. Thanks a lot. >> Paul, I know you're chomping at the bit. >> You have incredible credentials, over 500 papers published. The impact that you've had on HPC is truly remarkable. But I wanted to talk to you specifically about a project you've been working on for over 20 years now called MVAPICH, a high performance computing platform that's used by more than 3,200 organizations across 90 countries. You've shepherded this from its infancy. What is the vision for what MVAPICH will be, and how is it a proof of concept that others can learn from? >> Yeah, Paul, that's a great question to start with. I mean, I started with this conference in 2001. That was the first time I came. It's very coincidental: if you remember, InfiniBand networking technology was introduced in October of 2000. Okay? So in my group, we were working on MPI for Myrinet and Quadrics. Those are the old technologies, if you can recollect. When InfiniBand came, we were the very first ones in the world to really jump in. Nobody knew how to use InfiniBand in an HPC system. So that's how the MVAPICH project was born. And in fact, at Supercomputing 2002, on this exhibition floor in Baltimore, we had the first demonstration, the open source MVAPICH actually running on an eight-node InfiniBand cluster. And that was a big challenge. But now over the years, I mean, we have continuously worked with all InfiniBand vendors and the MPI Forum.
>> We are a member of the MPI Forum, and we work with all other network interconnects as well. So we have steadily evolved this project over the last 21 years. I'm very proud of my team members, working nonstop, continuously bringing not only performance but scalability. If you see now, InfiniBand is being deployed in 8,000, 10,000 node clusters, and many of these clusters actually use our software stack, MVAPICH. So our focus is like we first do research, because we are in academia. We come up with good designs, we publish, and in six to nine months we actually bring it to the open source version, and people can just download and then use it. And that's how currently it's been used by more than 3,000 organizations in 90 countries. But the interesting thing is happening, your second part of the question. Now, as you know, the field is moving into not just HPC, but AI, big data, and we have that support. This is where, like, we look at the vision for the next 20 years: we want to design this MPI library so that not only HPC but also all other workloads can take advantage of it. >> We have seen libraries become critical development platforms supporting AI, TensorFlow and PyTorch, and the emergence of some sort of default languages that are driving the community. How important are these frameworks to making progress in the HPC world? >> Yeah, those are great. I mean, PyTorch or TensorFlow, those are now the bread and butter of deep learning, machine learning, am I right? But the challenge is that people use these frameworks, but continuously models are becoming larger. You need very fast turnaround time. So how do you train faster? How do you do inferencing faster?
So this is where HPC comes in, and exactly what we have done is we have linked PyTorch to our MVAPICH library, because now you see the MPI library is running on million-core systems. Now PyTorch and TensorFlow can also be scaled to those large numbers of cores and GPUs. So we have actually done that kind of a tight coupling, and that helps the researchers really take advantage of HPC. >> So if a high school student is thinking in terms of interesting computer science, looking for a place, looking for a university, the Ohio State University, world renowned, widely known, talk about what that looks like on a day to day basis in terms of the opportunity for undergrad and graduate students to participate in the kind of work that you do. What does that look like? And is that a good pitch for people to consider the university? >> Yes. I mean, from a university perspective, by the way, the Ohio State University is one of the largest single campuses in the US, one of the top three, top four. We have 65,000 students. >> Wow. >> It's one of the very largest campuses. And especially within computer science, where I am located, high performance computing is a very big focus. And we are one of the, again, the top schools all over the world for high performance computing. And we also have a very big strength in AI. So we always encourage the new students who like to really work on state-of-the-art solutions, get exposed to the concepts, principles, and also practice. So we encourage those people, and we can really bring them those kinds of experiences. And many of my past students and staff, they're all in top companies now; they've all become big managers. >> How long did you say you've been at it? >> 31 years. >> 31 years. 31 years. So you've had people who weren't alive when you were already doing this stuff? >> That's correct. >> They then were born. >> Yes.
>> They then grew up, went to university, graduate school, and now they're on... >> Now they're in many top companies, national labs, universities all over the world. So they have been trained very well. >> You've touched a lot of lives, sir. >> Yes, thank you. >> Thank you. We've seen really a burgeoning of AI-specific hardware emerge over the last five years or so, and architectures going beyond just CPUs and GPUs, to ASICs and FPGAs and accelerators. Does this excite you? I mean, are there innovations that you're seeing in this area that you think have great promise? >> Yeah, there is a lot of promise. I think every time with supercomputing technology, you see there is sometimes a big barrier jump. Rather, I'll say, some new, disruptive technology comes, and then you move to the next level. So that's what we are seeing now. A lot of these AI chips and AI systems are coming up, which take you to the next level. But the bigger challenge is whether it is cost effective or not, can that be sustained longer? And this is where commodity technology comes in, which tries to take you far longer. So we might see, like Gaudi, a lot of new chips coming up. Can they really bring down the cost? If that cost can be reduced, you will see a much bigger push for AI solutions which are cost effective. >> What about on the interconnect side of things? Obviously, your start sort of coincided with the initial standards for InfiniBand. You know, Intel was really big in that architecture originally. Do you see interconnects like RDMA over Converged Ethernet playing a part in that sort of democratization or commoditization of things? >> Yes. Yes. >> What are your thoughts there, for the interconnect? >> No, this is a great thing. So we saw InfiniBand coming. Of course, InfiniBand is commodity, it is available.
But then over the years people have been trying to see how those RDMA mechanisms can be used for Ethernet, and then RoCE was born. So RoCE is also being deployed. But besides these, I mean, now you talk about Slingshot, the Cray Slingshot, it is also an Ethernet-based system, and a lot of those RDMA principles are actually being used under the hood. Okay? So any modern network you see, whether it is an InfiniBand network, a RoCE network, a Slingshot network, a Rockport network, you name any of these networks, they are using all the very latest principles. And of course everybody wants to make it commodity. And this is what you see on the show floor. Everybody's trying to compete against each other to give you the best performance with the lowest cost, and we'll see whoever wins over the years. >> Sort of a macroeconomic question: Japan, the US, and China have been leapfrogging each other for a number of years in terms of the fastest supercomputer performance. How important do you think it is for the US to maintain leadership in this area? >> It's a big thing, significant, right? I'd say that for the last five to seven years, I think we lost that lead. But now with Frontier being number one, starting from the June ranking, I think we are getting that leadership back. And I think it is very critical, not only for fundamental research but for national security, to really keep the US at the leading edge. So I hope the US will continue to lead the trend for the next few years, until another new system comes out. >> And one of the gating factors is there is a shortage of people with data science skills. Obviously you're doing what you can at the university level. What do you think can change at the secondary school level to prepare students better for data science careers? >> Yeah, I mean, that is also very important.
We always talk about a pipeline, you know; that means at the PhD level we are expecting this, but we also want students to get exposed to many of these concepts from the high school level. And things are actually changing. I mean, these days I see a lot of high school students who know Python, how to program in Python, how to program in C, object-oriented things. They're even being exposed to AI at that level. So I think that is a very healthy sign. And in fact, even from the Ohio State side, we are always engaged with K-12 in many different programs, and then gradually trying to take them to the next level. And I think we need to accelerate that in a very significant manner, because we need that kind of workforce. It is not just about building a number one system, but how do we really utilize it? How do we utilize that science? How do we propagate that to the community? Then we need all these trained personnel. So in fact, in my group, we are also involved in a lot of cyber-training activities for HPC professionals. In fact, today there is a BoF at, yeah, I think 12:15 to 1:15; we'll be talking more about that. >> About education. >> Yeah, cyber training: how do we do it for professionals? So we had funding together with my co-PI, Dr. Karen Tomko from the Ohio Supercomputer Center. We have a grant from the National Science Foundation to really educate HPC professionals about cyberinfrastructure and AI. Even though they work on some of these things, they don't have the complete knowledge, they don't get the time to learn, and the field is moving so fast. So this is how it has been. We got the initial funding, and in fact, the first time we advertised, in 24 hours we got 120 applications. 24 hours! We couldn't even take all of them. So we are trying to offer that in multiple phases. So there is a big need for those kinds of training sessions to take place. I also offer a lot of tutorials at all
different conferences. We had a high performance networking tutorial; here we have a high performance deep learning tutorial, a high performance big data tutorial. So I've been offering tutorials, even at this conference, since 2001. >> Good. So, in the last 31 years at the Ohio State University, as my friends remind me it is properly called, you've seen the world get a lot smaller. Because 31 years ago, Ohio, roughly in the middle of North America and the United States, was not as connected as it is now to everywhere else in the globe. It kind of boggles the mind when you think of that progression over 31 years. But globally, and we talk about the world getting smaller, we're sort of in the thick of the celebratory seasons, where many groups of people exchange gifts for varieties of reasons. If I were to offer you a holiday gift that is the result of what AI can deliver the world, what would that be? What would the first thing be? It's like the genie, but you only get one wish. >> I know, I know. >> So what would the first one be? >> Yeah, it's very hard to answer in one way, but let me bring a little bit different context and I can answer this. I talked about the MVAPICH project and all, but recently, last year actually, we got awarded an NSF AI Institute award. It's a 20 million dollar award. I am the overall PI, but there are 14 universities involved. >> And who is in that institute? What is it called? >> Oh, it's ICICLE. You can just go to icicle.ai. Okay? And that aligns with exactly what you are asking about: how to bring a lot of AI to the masses, democratizing AI. That's the overall goal of this institute. Think of it like this: we have three verticals we are working on, and one is digital agriculture. So that will be my, like, first wish.
How do you take HPC and AI to agriculture? The world, as you know, just crossed 8 billion people. >> Yeah, that's right. >> We need continuous food and food security. How do we grow food with the lowest cost and with the highest yield? >> Water consumption. >> Water consumption. Can we minimize the water consumption, or the fertilization? Don't do it blindly. Technologies are out there. Like, let's say there is a wheat field. A traditional farmer sees that, yeah, there is some disease; they will just go and spray pesticides. It is not good for the environment. Now I can fly a drone, get images of the field in real time, check them against the models, and then it'll tell me that, okay, this part of the field has disease one, this part of the field has disease two. I indicate to the tractor or the sprayer saying, okay, spray only pesticide one here, pesticide two there. That has a big impact. So this is what we are developing in that NSF AI institute, ICICLE. We have also chosen two additional verticals. One is animal ecology, because that is very much related to wildlife conservation and climate change: how do you understand how the animals move? Can we learn from them, and then see how human beings need to act in the future? And the third one is food insecurity and logistics, smart food distribution. So these are our three broad goals in that institute. How do we develop cyberinfrastructure from below, combining HPC, AI, and security? We have a large team; as I said, there are 40 PIs and 60 students. We are a hundred-member team, working together. So that will be my wish: how do we really democratize AI? >> Fantastic. I think that's a great place to wrap the conversation here on day three at Supercomputing Conference 2022 on theCUBE. It was an honor, Dr.
Panda. Working tirelessly at the Ohio State University with his team for 31 years, toiling in the field of computer science, and the end result: improving the lives of everyone on Earth. That's not a stretch. If you're in high school thinking about a career in computer science, keep that in mind. It isn't just about the bits and the bobs and the speeds and the feeds. It's about serving humanity. Maybe a little too profound a statement; I would argue not even close. I'm Dave Nicholson with theCUBE, with my co-host Paul Gillin. Thank you again, Dr. Panda. Stay tuned for more coverage from theCUBE at Supercomputing 2022, coming up shortly. >> Thanks a lot.
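A concrete footnote to Dr. Panda's point about linking PyTorch and TensorFlow to the MPI library: that coupling rests on collectives such as allreduce, which leave every rank holding the element-wise average of all ranks' gradients after each training step. The sketch below simulates the ranks in-process, with no real MPI; mapping it onto an actual allreduce call in MVAPICH or mpi4py is an assumption of this sketch, not something shown in the interview.

```python
def allreduce_average(rank_gradients):
    """Simulated allreduce with an averaging op: every rank ends up with
    the element-wise mean of all ranks' gradients (no real MPI here)."""
    n_ranks = len(rank_gradients)
    length = len(rank_gradients[0])
    # Reduce: element-wise sum across all ranks.
    summed = [sum(g[i] for g in rank_gradients) for i in range(length)]
    # Average, then "broadcast" the identical result back to every rank.
    averaged = [s / n_ranks for s in summed]
    return [list(averaged) for _ in range(n_ranks)]

# Four simulated data-parallel workers, each holding local gradients.
grads = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]]
synced = allreduce_average(grads)
print(synced[0])  # [4.0, 5.0] on every rank
```

In a real MPI-backed framework the reduce and broadcast happen in one optimized collective across the cluster, which is what lets the same training step scale to the core counts described above.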
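And a footnote to the digital-agriculture example: the targeted-spraying decision reduces to mapping per-cell disease predictions from drone imagery to spray commands for the matching pesticide. A toy sketch only; the labels, `DISEASE_TO_PESTICIDE`, and `spray_plan` are hypothetical illustrations, not part of ICICLE.

```python
# Hypothetical names throughout: a model classifies each grid cell of a
# drone image, and only diseased cells get a command for the matching spray.
DISEASE_TO_PESTICIDE = {"disease_1": "pesticide_1", "disease_2": "pesticide_2"}

def spray_plan(cell_predictions):
    """cell_predictions: {(row, col): label}, label 'healthy' or a disease."""
    plan = {}
    for cell, label in cell_predictions.items():
        if label in DISEASE_TO_PESTICIDE:
            plan[cell] = DISEASE_TO_PESTICIDE[label]
    return plan

predictions = {
    (0, 0): "healthy",
    (0, 1): "disease_1",
    (1, 0): "healthy",
    (1, 1): "disease_2",
}
plan = spray_plan(predictions)
print(plan)  # only the two diseased cells get sprayed
```

The point of the design is in what is absent: healthy cells produce no command at all, which is exactly the reduction in blanket pesticide use the interview describes.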
Daniel Rethmeier & Samir Kadoo | Accelerating Business Transformation
(upbeat music) >> Hi everyone. Welcome to theCUBE special presentation here in Palo Alto, California. I'm John Furrier, host of theCUBE. We've got two great guests, one videoing in from Germany, one from Maryland. We've got VMware and AWS. This is the customer successes with VMware Cloud on AWS Showcase: Accelerating Business Transformation. Here in the Showcase is Samir Kadoo, worldwide VMware strategic alliance solution architect leader with AWS. Samir, great to have you. And Daniel Rethmeier, principal architect, global AWS synergy at VMware. Guys, you're working together; you're the key players in this relationship as it rolls out and continues to grow. So welcome to theCUBE. >> Thank you, greatly appreciate it. >> Great to have you guys both on. As you know, we've been covering this since 2016, when Pat Gelsinger, then CEO of VMware, and then-CEO of AWS Andy Jassy did this. It kind of caught people by surprise, but it really cleaned up the positioning in the enterprise for the success of VMware workloads in the cloud. VMware's had great success with it since, and you guys have the great partnership. So this has been a really strategic, successful partnership. Where are we right now? You know, years later, we've got this whole inflection point coming: you're starting to see this idea of higher-level services, more performance coming in at the infrastructure side, more automation, more serverless, and AI. I mean, it's just getting better and better every year in the cloud. Kind of a whole 'nother level. Where are we? Samir, let's start with you on the relationship. >> Yeah, totally. So I mean, there are several things to keep in mind, right? So in 2016, that's when the partnership between AWS and VMware was announced. And then less than a year later, that's when we officially launched VMware Cloud on AWS.
Years later, we've been driving innovation, working with our customers, jointly engineering this between AWS and VMware, day in, day out, as far as advancing VMware Cloud on AWS. You know, even if you look at the innovation that takes place with the solution, things have modernized, things have changed, there have been advancements. You know, whether it's security focus, whether it's platform focus, whether it's networking focus, there have been modifications along the way, even storage, right, more recently. One of the things to keep in mind is we're looking to deliver value to our customers together. These are our joint customers. So there are hundreds of VMware and AWS engineers working together on this solution. And then factor in even our sales teams, right? We have VMware and AWS sales teams interacting with each other on a constant, daily basis. We're working together with our customers at the end of the day, too. Then we're looking to even offer and develop jointly engineered solutions specific to VMware Cloud on AWS, and even with VMware on other platforms as well. The other thing comes down to where we have dedicated teams around this at both AWS and VMware. So from solutions architects, to our sales specialists, to our account teams, to specific engineering teams within the organizations, they all come together to drive this innovation forward with VMware Cloud on AWS and the jointly engineered solution partnership as well. And then I think one of the key things to keep in mind comes down to this: we have nearly 600 channel partners that have achieved VMware Cloud on AWS service competency. So think about it from that standpoint: there are 300 certified or validated technology solutions now available to our customers. So that's even innovation right off the top as well. >> Great stuff. Daniel, I want to get to you in a second on this principal architect position you have.
In your title, you're the global AWS synergy person. Synergy means bringing things together, making it work. Take us through the architecture, because we heard a lot from folks at VMware Explore this year, formerly VMworld, talking about how the workloads on IT have been completely transforming into cloud and hybrid, right? This is where the action is. Where are you? Are your customers taking advantage of that new shift? You've got AIOps, you've got ITOps changing a lot, you've got a lot more automation, edge is right around the corner. This is like a complete transformation from where we were just five years ago. What are your thoughts on the relationship? >> So at first, I would like to emphasize that our collaboration is not just that we have dedicated teams to help our customers get the most and the best benefits out of VMware Cloud on AWS; we are also enabling each other mutually. So AWS learns from us about the VMware technology, while VMware people learn about the AWS technology. We are also enabling our channel partners, and we are working together on customer projects. So we have regular assemblies, globally and also virtually, on Slack and the usual suspect tools, working together and listening to customers. That's very important. Asking our customers, where are their needs? And we are driving the solution in the direction that our customers get the best benefits out of VMware Cloud on AWS. And over time, we have really evolved the solution. As Samir mentioned, we just added additional storage solutions to VMware Cloud on AWS. We now have three different instance types that cover a broad range of workloads. So for example, we just added the i4i host, which is ideal for workloads that require a lot of CPU power, such as, you mentioned it, AI workloads.
>> Yeah, so I want to get into specifically the customer journey and their transformation. You know, we've been reporting on SiliconANGLE and theCUBE in the past couple of weeks in a big way that the ops teams are now the new devs, right? I mean, that sounds a little bit weird, but IT operations is now part of a lot more DataOps, security, writing code, composing. You know, with open source, a lot of great things are changing. Can you share specifically what customers are looking for? As you guys come in and assess their needs, what are they doing? What are some of the things that they're doing with VMware on AWS specifically that's a little bit different? Can you share some highlights there? >> That's a great point, because originally VMware and AWS came from very different directions when it comes to speaking to people and customers. So for example, AWS is very developer focused, whereas VMware has a very great footprint in the ITOps area. And usually these are very different teams, groups, different cultures, but it's coming together. However, we always try to address the customer needs, right? There are customers that want to build up a new application from scratch and build resiliency, availability, recoverability, scalability into the application. But there are still a lot of customers that say, "Well, we don't have all of the skills to redevelop everything, to refactor an application to make it highly available. So we want to have all of that as a service: recoverability as a service, scalability as a service. We want to have this from the infrastructure." That was one of the unique selling points for VMware on-premise, and now we are bringing this into the cloud. >> Samir, talk about your perspective.
I want to get your thoughts, and not to take a tangent, but we had covered AWS re:MARS, actually it was Amazon re:MARS, machine learning, automation, robotics and space, which was really kind of the confluence of industrial IoT, software, physical. And so when you look at like the IT operations piece becoming more software, you're seeing things about automation, but the skill gap is huge. So you're seeing low code, no code, automation, you know, "Hey Alexa, deploy a Kubernetes cluster." Yeah, I mean that's coming, right? So we're seeing this kind of operating automation meets higher level services, meets workloads. Can you unpack that and share your opinion on what you see there from an Amazon perspective and how it relates to this? >> Yeah. Yeah, totally, right? And you know, look at it from the point of view where we said this is a jointly engineered solution, but it's not migrating to one option or the other option, right? It's more or less together. So even with VMware Cloud on AWS, yes it is utilizing AWS infrastructure, but your environment is connected to that AWS VPC in your AWS account. So if you want to leverage any of the native AWS services, so any of the 200 plus AWS services, you have that option to do so. So that's going to give you that power to do certain things, such as, for example, like how you mentioned with IoT, even with utilizing Alexa, or if there's any other service that you want to utilize, that's the joining point between both of the offerings right off the top. Though with digital transformation, right, you have to think about how it's not just about the technology, right? There's also where you want to drive growth with the underlying technology, even in your business. Leaders are looking to reinvent their business, they're looking to take different steps as far as pursuing a new strategy, maybe it's a process, maybe it's with the people, the culture, like how you said before, where people are coming in from a different background, right?
They may not be used to the cloud, they may not be used to AWS services, but now you have that capability to mesh them together. >> Okay. >> Then also- >> Oh, go ahead, finish your thought. >> No, no, no, I was going to say what it also comes down to is you need to think about the operating model too, where it is a shift, right? Especially for that vSphere admin that's used to their on-premises environment. Now with VMware Cloud on AWS, you have that ability to leverage a cloud, but the investment that you made in certain things as far as automation, even with monitoring, even with logging, you still have that methodology where you can utilize that in VMware Cloud on AWS too. >> Daniel, I want to get your thoughts on this because at Explore and after the event, as we prep for CubeCon and re:Invent coming up, the big AWS show, I had a couple conversations with a lot of the VMware customers and operators, and it's like hundreds of thousands of users and millions of people talking about VMware, interested in VMware. The common thread was one person said, "I'm trying to figure out where I'm going to put my career in the next 10 to 15 years." And they've been very comfortable with VMware in the past, very loyal, and they're kind of talking about, I'm going to be in the next cloud, but there's no defined role yet. Architects, is it solution architect, SRE? So you're starting to see the psychology of the operators who now are going to try to make these career decisions. Like what am I going to work on? And then it's kind of fuzzy, but I want to get your thoughts, how would you talk to that persona about the future of VMware on, say, cloud for instance? What should they be thinking about? What's the opportunity? And what's going to happen? >> So digital transformation definitely is a huge change for many organizations, and leaders are perfectly aware of what that means. And that also means, to some extent, concerns among your existing employees.
Concerns about do I have to relearn everything? Do I have to acquire new skills and trainings? Is everything I learned over the last 15 years of my career worthless? And the answer is: to make digital transformation a success, we need to talk not just about technology, but also about process, people, and culture. And this is where VMware really can help, because if you are applying VMware Cloud on AWS to your infrastructure, to your existing on-premise infrastructure, you do not need to change many things. You can use the same tools and skills, you can manage your virtual machines as you did in your on-premise environment, you can use the same managing and monitoring tools, and if you have written, and many customers did this, hundreds of scripts that automate tasks, and if you know how to troubleshoot things, then you can use all of that in VMware Cloud on AWS. And that gives not just leaders, but also the architects and operators at customers, the confidence in such a complex project. >> The consistency, very key point, gives them the confidence to go. And then now that once they're confident, they can start committing themselves to new things. Samir, you're reacting to this because on your side, you've got higher level services, you've got more performance at the hardware level. I mean, a lot of improvements. So, okay, nothing's changed, I can still run my job, now I got goodness on the other side. What's the upside? What's in it for the customer there? >> Yeah, so I think what it comes down to is they've already been so used to or entrenched with that VMware admin mentality, right? But now extending that to the cloud, that's where VMware Cloud on AWS bridges that VMware knowledge with that AWS knowledge. So I will look at it from the point of view where now one has that capability and that ability to just learn about the cloud.
But if they're comfortable with certain aspects, no one's saying you have to change anything. You can still leverage that, right? But now if you want to utilize any other AWS service in conjunction with that VM that resides maybe on-premises or even in VMware Cloud on AWS, you have that option to do so. So think about it where you have that ability to be someone who's curious and wants to learn. And then if you want to expand on the skills, you certainly have that capability to do so. >> Great stuff, I love that. Now that we're peeking behind the curtain here, I'd love to have you guys explain, 'cause people want to know what goes on behind the scenes. How does innovation happen? How does it happen with the relationships? Can you take us through a day in the life of kind of what goes on to make innovation happen with the joint partnership? Do you guys just have a Zoom meeting, do you fly out, do you write code, do you ship things? I mean, I'm making it up, but you get the idea. How does it work? What's going on behind the scenes? >> So we hope to get together in-person more frequently, but of course we had some difficulties over the last two to three years. So we are very used to Zoom conferences and Slack meetings. You always have to have the time difference in mind if you are working globally together. But what we try, for example, we have regular assemblies now also in-person, geo-based, so for EMEA, for the Americas, for APJ. And we are bringing up interesting customer situations, architectural bits and pieces together. We are discussing it always to share and to contribute to our community. >> What's interesting, you know, as events are coming back, Samir, before you weigh in on this, I'll comment as theCUBE's been going back out to events, we're hearing comments like, "What pandemic? We were more productive in the pandemic."
I mean, developers know how to work remotely and they've been on all the tools there, but then they get in-person, they're happy to see people, but no one's really missed a beat. I mean, it seems to be very productive, you know, workflow, not a lot of disruption. More, if anything, productivity gains. >> Agreed, right? I think one of the key things to keep in mind is even if you look at AWS's, and even Amazon's leadership principles, right? Customer obsession, that's key. VMware is carrying that forward as well. Where we are working with our customers, like Daniel mentioned earlier, right? We might have meetings at different time zones, maybe it's in-person, maybe it's virtual, but together we're working to listen to our customers. You know, we're taking and capturing that feedback to drive innovation in VMware Cloud on AWS as well. But one of the key things to keep in mind is yes, there has been the pandemic, we might have been disconnected to a certain extent, but together through technology, we've been able to still communicate, work with our customers, even with VMware in between, with AWS and whatnot, we had that flexibility to innovate and continue that innovation. So even if you look at it from the point of view, right? VMware Cloud on AWS Outposts, that was something that customers have been asking for. We've been able to leverage the feedback and then continue to drive innovation even around VMware Cloud on AWS Outposts. So even with the on-premises environment, if you're looking to handle maybe data sovereignty or compliance needs, maybe you have low latency requirements, that's where certain advancements come into play, right? So the key thing is always to maintain that communication track. >> In our last segment we did here on this Showcase, we listed the accomplishments and they were pretty significant. I mean geo, you got the global rollouts of the relationship.
It's just really been interesting and people can reference that, we won't get into it here. But I will ask you guys to comment on, as you guys continue to evolve the relationship, what's in it for the customer? What can they expect next? Because again, I think right now, we're at an inflection point more than ever. What can people expect from the relationship and what's coming up with re:Invent? Can you share a little bit of kind of what's coming down the pike? >> So one of the most important things we have announced this year, and we will continue to evolve in that direction, is independent scaling of storage. That absolutely was one of the most important items customers asked for over the last years. Whenever you require additional storage to host your virtual machines, in VMware Cloud on AWS you usually have to add additional nodes. Now we have three different node types with different ratios of compute, storage, and memory. But if you only require additional storage, you always also have to get additional compute and memory, and you have to pay for it. And now with two solutions which offer choice for the customers, like FSx for NetApp ONTAP and VMware Cloud Flex Storage, you now have two cost effective opportunities to add storage to your virtual machines. And that offers opportunities for other instance types, maybe ones that don't have local storage. We are also very, very keen looking forward to announcements, exciting announcements, at the upcoming events. >> Samir, what's your reaction take on what's coming down on your side? >> Yeah, I think one of the key things to keep in mind is we're looking to help our customers be agile and even scale with their needs, right? So with VMware Cloud on AWS, that's one of the key things that comes to mind, right? There are going to be announcements, innovations, and whatnot with upcoming events. But together, we're able to leverage that to advance VMware Cloud on AWS.
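Daniel's point about coupled node scaling can be made concrete with a little arithmetic. The sketch below is illustrative only: the prices, per-host storage capacity, and function name are invented for the example, not AWS or VMware list prices.

```python
# Illustrative only: these prices and capacities are made up for the
# example -- they are NOT AWS or VMware list prices.
NODE_COST = 10.0              # hypothetical $/hr for one full host
NODE_STORAGE_TB = 20          # hypothetical local storage per host (TB)
EXTERNAL_STORAGE_COST = 0.25  # hypothetical $/hr per TB of external storage


def cost_to_add_storage(extra_tb, decoupled):
    """Hourly cost of adding `extra_tb` of storage to a cluster."""
    if decoupled:
        # Supplemental storage (the FSx for NetApp ONTAP /
        # Cloud Flex Storage style): pay for storage only.
        return extra_tb * EXTERNAL_STORAGE_COST
    # Coupled model: storage only comes bundled with whole hosts,
    # so you also buy compute and memory you may not need.
    hosts_needed = -(-extra_tb // NODE_STORAGE_TB)  # ceiling division
    return hosts_needed * NODE_COST


coupled = cost_to_add_storage(30, decoupled=False)   # buys 2 full hosts
decoupled = cost_to_add_storage(30, decoupled=True)  # buys storage only
```

With these made-up numbers, 30 TB of growth costs 20.0/hr coupled versus 7.5/hr decoupled; the real figures differ, but the shape of the trade-off is exactly the point Daniel is making about independent storage scaling.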
To Daniel's point, storage for example, even with host offerings. And then even with decoupling storage from compute and memory, right? Now you have the flexibility where you can do all of that. So look at it from the standpoint where now, with 21 regions where we have VMware Cloud on AWS available as well, customers can utilize that as needed, when needed, right? So it comes down to, you know, transformation will be there. Yes, there's going to be maybe where workloads have to be adapted where they're utilizing certain AWS services, but you have that flexibility and option to do so. And I think with the continuing events, that's going to give us the options to even advance our own services together. >> Well you guys are in the middle of it, you're in the trenches, you're making things happen, you've got a team of people working together. My final question is really more of a kind of a current situation, kind of future evolutionary thing that you haven't seen before. I want to get both of your reactions to it. And we've been bringing this up in the open conversations on theCUBE: in the old days, let's go back a generation, you had ecosystems, VMware had an ecosystem, AWS had an ecosystem. You know, we have a product, you have a product, biz dev deals happen, people sign relationships, and they do business together and they sell each other's products or do some stuff. Now it's more about architecture, 'cause we're now in a distributed large scale environment where the roles of ecosystems are intertwining and you guys are in the middle of two big ecosystems. You mentioned channel partners, you both have a lot of partners on both sides, they come together. So you have this now almost a three dimensional or multidimensional ecosystem interplay. What's your thoughts on this? Because it's about the architecture, integration is a value, not so much innovation only.
You got to do innovation, but when you do innovation, you got to integrate it, you got to connect it. So how do you guys see this as an architectural thing, starting to see more technical business deals? >> So we are removing dependencies from individual ecosystems and from individual vendors. So a customer no longer has to decide for one vendor and then face a very expensive and high effort project to move away from that vendor, which ties customers even closer to specific vendors. We are removing these obstacles. So with VMware Cloud on AWS, moving to the cloud, firstly, is not a dead end. If you decide at one point in time, because of latency requirements or maybe some compliance requirements, that you need to move back into on-premise, you can do this. If you decide you want to stay with some of your services on-premise and just run a couple of dedicated services in the cloud, you can do this, and you can manage it through a single pane of glass. That's quite important. So cloud is no longer a dead end, it's no longer a binary decision, whether it's on-premise or the cloud, it is the cloud. And the second thing is you can choose the best of both worlds, right? If you are migrating virtual machines that have been running in your on-premise environment to VMware Cloud on AWS in a very, very fast, cost effective, and safe way, then you can later on enrich these virtual machines with services that are offered by AWS, more than 200 different services ranging from object-based storage, load balancing, and so on. So the possibilities are endless. >> We call that SuperCloud in the way that we are generically defining it, where everyone's innovating, but yet there's some common services. But the differentiation comes from innovation, where the lock-in is the value, not some spec, right? Samir, this is kind of where cloud is right now. You guys are not commodity, Amazon's completely differentiating, but there's some commodity things happening.
You got storage, you got compute, but then you got now advances in all areas. But partners innovate with you on their terms. >> Absolutely. >> And everybody wins. >> Yeah, I 100% agree with you. I think one of the key things, you know, as Daniel mentioned before, is that it's a cross education, where there might be someone who's more proficient on the cloud side with AWS, and maybe someone more proficient with VMware's technology. But then for partners, right? They bridge that gap as well, where they come in and they might have a specific niche or expertise, where their background can help our customers go through that transformation. So then that comes down to, hey, maybe I don't know how to connect to the cloud, maybe I don't know what the networking constructs are, maybe I can leverage that partner. That's one aspect to go about it. Now maybe you migrated that workload to VMware Cloud on AWS. Maybe you want to leverage any of the native AWS services, or even just off the top, 200 plus AWS services, right? But it comes down to that skillset, right? So again, with solutions architecture, at the end of the day, what it comes down to is being able to utilize the best of both worlds. That's what we're giving our customers at the end of the day. >> I mean, I just think it's a refactoring and innovation opportunity at all levels. I think now more than ever, you can take advantage of each other's ecosystems and partners and technologies and change how things get done while keeping the consistency. I mean, Daniel, you nailed that, right? I mean you don't have to do anything. You still run it the way you've been working on it, and now do new things. This is kind of a cultural shift. >> Yeah, absolutely. And if you look, not every customer, not every organization has the resources to refactor and re-platform everything. And we give them a very simple and easy way to move workloads to the cloud.
Simply run them, and at the same time, they can free up resources to develop new innovations and grow their business. >> Awesome. Samir, thank you for coming on. Daniel, thank you for coming on from Germany. >> Thank you. Oktoberfest, I know it's evening over there, weekend's here. And thank you for spending the time. Samir, I'll give you the final word. AWS re:Invent's coming up. We're preparing, we're going to have an exclusive with Adam, with Fryer, we'll do a curtain raiser, and do a little preview. What's coming down on your side with the relationship and what can we expect to hear about what you got going on at re:Invent this year? The big show? >> Yeah, so I think Daniel hit upon some of the key points, but what I will say is we do have, for example, specific sessions, both that VMware's driving and also that AWS is driving. We even have what are called chalk talks, and then workshops as well, right? So for the customers, the attendees who are there, if they're looking to sit and listen to a session, yes that's there, but if they want to be hands-on, that is also there too. So personally for me, with an IT background, having been in the sysadmin world and whatnot, being hands-on, that's one of the key things that I personally am looking forward to. But I think that's one of the key ways just to learn and get familiar with the technology. >> Yeah, and re:Invent's an amazing show for the in-person. You guys nail it every year. We'll have three sets this year at theCUBE and it's becoming popular. We have more and more content. You guys got live streams going on, a lot of content, a lot of media. So thanks for sharing that. Samir, Daniel, thank you for coming on this part of the Showcase episode of really the customer successes with VMware Cloud on AWS, really accelerating business transformation with AWS and VMware. I'm John Furrier with theCUBE, thanks for watching. (upbeat music)
Madhura Maskasky, Platform9 | Cloud Native at Scale
(uplifting music) >> Hello and welcome to The Cube, here in Palo Alto, California, for a special program on cloud-native at scale, enabling next generation cloud, or SuperCloud, for modern application cloud-native developers. I'm John Furrier, host of The Cube. My pleasure to have here Madhura Maskasky, co-founder and VP of Product at Platform9. Thanks for coming in today for this cloud-native at scale conversation. >> Thank you for having me. >> So, cloud-native at scale, something that we're talking about because we're seeing the next level of mainstream success of containers, Kubernetes and cloud-native developers, basically DevOps and the CI/CD pipeline. It's changing the landscape of infrastructure as code, it's accelerating the value proposition, and the SuperCloud, as we call it, has been getting a lot of traction because this next generation cloud is looking a lot different, but kind of the same as the first generation. What's your view on SuperCloud as it fits into cloud-native as it scales up? >> Yeah, you know, I think what's interesting, and I think the reason why SuperCloud is a really good and a really fit term for this, and I think, I know my CEO was chatting with you as well, and he was mentioning this as well, but I think there needs to be a different term than just multi-cloud or cloud. And the reason is because as cloud-native and cloud deployments have scaled, I think we've reached a point now where, instead of having the traditional data center style model where you have a few large distributors of infrastructure and workload at a few locations, I think the model is kind of flipped around, right, where you have a large number of micro sites. These micro sites could be your public cloud deployment, your private, on-prem infrastructure deployments, or it could be your edge environment, right? And every single enterprise, every single industry is moving in that direction.
And so you've got to refer to that with terminology that indicates the scale and complexity of it. And so I think SuperCloud is an appropriate term for that. >> So, you brought up a couple things I want to dig into. You mentioned edge nodes. We're seeing not only edge nodes being the next kind of area of innovation, mainly because it's just popping up everywhere. And that's just the beginning. Who even knows what's around the corner. You got buildings, you got IoT, OT and IT kind of coming together, but you also got this idea of regions, global infrastructure is a big part of it. I just saw some news around Cloudflare shutting down a site here. There's policies being made at scale. These new challenges are there. Can you share, because you got to have edge. So, hybrid cloud is a winning formula. Everybody knows that, it's a steady state. >> Madhura: Yeah. >> But going across multiple clouds brings in this new un-engineered area that hasn't been done yet. Spanning clouds. People say they're doing it, but you start to see the toe in the water, it's happening, it's going to happen. It's only going to get accelerated with the edge and beyond, globally. So I have to ask you, what are the technical challenges in doing this? Because there are business consequences as well, but there are also technical challenges. Can you share your view on what the technical challenges are for the SuperCloud, or across multiple edges and regions? >> Yeah, absolutely. So, I think, you know, in the context of this term of SuperCloud, I think it's sometimes easier to visualize things in terms of two axes, right? I think on one end you can think of the scale in terms of just the pure number of nodes that you have deployed, the number of clusters in the Kubernetes space. And then, on the other axis you would have your distribution factor, right? Which is, do you have these tens of thousands of nodes in one site or do you have them distributed across tens of thousands of sites with one node at each site? Right?
And if you have just one flavor of this, there is enough complexity, but it's potentially manageable. But when you are expanding on both these axes, you really get to a point where that scale really needs some well thought out, well structured solutions to address it. Right? A combination of homegrown tooling along with your, you know, favorite distribution of Kubernetes is not a strategy that can help you in this environment. It may help you when you have one of these, or when your scale is not at that level.
And the production deployment could be going at the radio cell tower at the edge location where a cluster is running there, or it could be sending, you know, these applications and having them run at my customer site where they might not have configured that cluster exactly the same way as I configured it, or they configured the cluster right. But maybe they didn't deploy the security policies or they didn't deploy the other infrastructure plugins that my app relies on. All of these various factors add their own layer of complexity. And there really isn't a simple way to solve that today. And that is just, you know, one example of an issue that happens. I think another, you know, whole new ballgame of issues come in the context of security, right? Because when you are deploying applications at scale in a distributed manner, you got to make sure someone's job is on the line to ensure that the right security policies are enforced regardless of that scale factor. So, I think that's another example of problems that occur. >> Okay. So, I have to ask about scale because there are a lot of multiple steps involved when you see the success of cloud native. You know, you see some, you know, some experimentation. They set up a cluster, say, it's containers and Kubernetes, and then you say, okay, we got this, we configure it. And then, they do it again and again, they call it day two. Some people call it day one, day two operation, whatever you call it. Once you get past the first initial thing, then you got to scale it. Then you're seeing security breaches, you're seeing configuration errors. This seems to be where the hotspot is. And when companies transition from, I got this to, oh no, it's harder than I thought at scale. Can you share your reaction to that and how you see this playing out? >> Yeah, so, you know, I think it's interesting. There's multiple problems that occur when, you know, the two factors of scale, as we talked about start expanding. 
I think, one of them is what I like to call the, you know, it works fine on my cluster problem, which is back in, when I was a developer, we used to call this, it works on my laptop problem, which is, you know, you have your perfectly written code that is operating just fine on your machine, your sandbox environment. But the moment it runs production, it comes back with P zeros and P ones from support teams, et cetera. And those issues can be really difficult to triage. Right. And so, in the Kubernetes environment, this problem kind of multi-folds, it goes, you know, escalates to a higher degree because you have your sandbox developer environments, they have their clusters and things work perfectly fine in those clusters because these clusters are typically handcrafted or a combination of some scripting and handcrafting. And so, as you give that change to then run at your production edge location, like say your radio cell tower site or you hand it over to a customer to run it on their cluster, they might not have configured that cluster exactly how you did, or they might not have configured some of the infrastructure plugins. And so the things don't work. And when things don't work, triaging them becomes like (indistinct) hard, right? It's just one of the examples of the problem. Another whole bucket of issues is security, which is you have these distributed clusters at scale, you got to ensure someone's job is on the line to make sure that the security policies are configured properly. >> So, this is a huge problem. I love that comment. That's not happening on my system. It's the classic, you know, debugging mentality. >> Madhura: Yeah. >> But at scale it's hard to do that with error prone. I can see that being a problem. And you guys have a solution you're launching. Can you share what Arlon is this new product? What is it all about? Talk about this new introduction. >> Yeah, absolutely. I'm very, very excited. 
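The "works on my cluster" failure mode Madhura describes is, at its core, configuration drift between a reference cluster and the one actually running the app. A minimal, hypothetical sketch of a drift report (this is illustrative Python, not Platform9 or Arlon code, and the resource names are invented):

```python
def drift_report(reference, actual):
    """Compare a reference cluster's configuration against another's.

    Both arguments map a resource name (a security policy, an
    infrastructure plugin, ...) to its configuration.  Returns the
    resources that are missing or configured differently -- exactly
    the gaps that make an app work on one cluster and fail on another.
    """
    missing = sorted(set(reference) - set(actual))
    mismatched = sorted(
        name for name in set(reference) & set(actual)
        if reference[name] != actual[name]
    )
    return {"missing": missing, "mismatched": mismatched}


# The developer's sandbox cluster has a network policy and a logging
# plugin installed...
dev = {
    "network-policy/default-deny": {"ingress": "deny"},
    "plugin/fluentd": {"version": "1.14"},
}
# ...while the customer's cluster never got the policy and runs an
# older version of the plugin.
customer = {"plugin/fluentd": {"version": "1.12"}}

report = drift_report(dev, customer)
```

At the scale of thousands of sites, a comparison like this has to be automated and continuous, which is exactly the gap where handcrafted clusters and homegrown scripts stop covering.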
You know, it's one of the projects that we've been working on for some time now, because we are very passionate about this problem and just solving problems at scale, whether on-prem, in the cloud, or in edge environments. And what Arlon is, it's an open source project and it is a tool, it's a Kubernetes native tool for complete end-to-end management of not just your clusters, but your clusters, all of the infrastructure that goes within and alongside those clusters, security policies, your middleware plugins, and finally your applications. So, what Arlon lets you do in a nutshell is, in a declarative way, it lets you handle the configuration and management of all of these components at scale. >> So, what's the elevator pitch, simply put, for what this solves in terms of the chaos you guys are reining in, what's the bumper sticker? >> Yeah. >> What would it do? >> There's a perfect analogy that I love to reference in this context, which is: think of your assembly line, you know, in a traditional, let's say, you know, auto manufacturing factory or et cetera, and the level of efficiency at scale that assembly line brings, right? Arlon, and if you look at the logo we've designed, it's this funny little robot, and it's because when we think of Arlon, we think of these enterprise large scale environments, you know, sprawling at scale, creating chaos, because there isn't necessarily a well thought through, well-structured solution that's similar to an assembly line, which is taking each component, you know, addressing them, manufacturing, processing them in a standardized way, then handing to the next stage where again, it gets, you know, processed in a standardized way. And that's what Arlon really does. That's the elevator pitch. If you have problems of scale in managing your distributed infrastructure, Arlon brings the assembly line level of efficiency and consistency to those.
>> So keeping it smooth, the assembly line, things are flowing, CI/CD, pipelining. >> Madhura: Exactly. >> So, that's what you're trying to simplify, that ops piece for the developer. I mean, it's not really ops, it's their ops, it's coding. >> Yeah. Not just developer, the ops, the operations folks as well, right? Because developers, you know, there is, developers are responsible for one picture of that layer, which is my apps, and then maybe that middle layer of applications that they interface with, but then they hand it over to someone else who's then responsible to ensure that these apps are secured properly, that they are logging, logs are being collected properly, monitoring and observability is integrated. And so, it solves problems for both those teams. >> Yeah, it's DevOps. So, the DevOps is the cloud-native developer. The ops teams have to kind of set policies. Is that where the declarative piece comes in? Is that why that's important? >> Absolutely. Yeah. And, you know, Kubernetes really introduced or elevated this declarative management, right? Because you know, Kubernetes clusters are, or your, yeah, you know, specifications of components that go in Kubernetes are defined in a declarative way, and Kubernetes always keeps that state consistent with your defined state. But when you go outside of that world of a single cluster, and when you actually talk about defining the clusters or defining everything that's around it, there really isn't a solution that does that today. And so Arlon addresses that problem at the heart of it, and it does that using existing open source, well-known solutions. >> And, I want to get into the benefits, what's in it for me as the customer, developer, but I want to finish this out real quick and get your thoughts. You mentioned open source. Why open source? What's the current state of the product? You run the product group over there, Platform9, is it open source? And you guys have a product that's commercial.
Can you explain the open-source dynamic? And first of all, why open source? >> Madhura: Yeah. >> And what is the consumption? I mean, open source is great, people want open source, they can download it, look up the code, but you know, maybe want to buy the commercial. So, I'm assuming you have that thought through, can you share? >> Madhura: Yeah. >> The open source and commercial relationship. >> Yeah. I think, you know, starting with why open source, I think, it's, you know, we as a company, we have, you know, one of the things that's absolutely critical to us is that we take mainstream open-source technologies, components, and then we, you know, make them available to our customers at scale through either a SaaS model or on-prem model, right? But, so as we are a company or startup or a company that benefits, you know, in a massive way by this open-source economy, it's only right, I think in my mind, that we do our part of the duty, right? And contribute back to the community that feeds us. And so, you know, we have always held that strongly as one of our principles. And we have, you know, created and built independent products starting all the way with Fission, which was a serverless product, you know, that we had built, to various other, you know, examples that I can give. But that's one of the main reasons why open source, and also open source because we want the community to really firsthand engage with us on this problem, which is very difficult to achieve if your product is behind a wall, you know, behind a black box. >> Well, and that's what the developers want too. I mean, what we're seeing in reporting with SuperCloud is the new model of consumption is I want to look at the code and see what's in there. >> Madhura: That's right. >> And then also, if I want to use it, I'll do it. Great. That's open source, that's the value. But then at the end of the day, if I want to move fast, that's when people buy in. So it's a new kind of freemium, I guess, business model.
I guess that's the way it is, well, but that's the benefit of open source. This is why standards and open source are growing so fast; you have that confluence of, you know, a way to try before you buy, but also actually kind of date the application, if you will. We, you know, Adrian (indistinct) uses the dating metaphor, you know, hey, you know, I want to check it out first before I get married. >> Madhura: Right. >> And that's what open source is. So, this is the new, this is how people are selling. This is not just open source, this is how companies are selling. >> Absolutely. Yeah. Yeah. You know, I think, you know, two things, I think one is just, you know, this cloud-native space is so vast that if you're building a closed-source solution, sometimes there's also a risk that it may not apply to every single enterprise's use cases. And so having it open source gives them an opportunity to extend it, expand it, to make it fit their use case if they choose to do so, right? But at the same time, what's also critical to us is we are able to provide a supported version of it with an SLA that's backed by us, a SaaS-hosted version of it as well, for those customers who choose to go that route, you know, once they have used the open-source version and loved it and want to take it at scale and in production and need a partner to collaborate with, who can, you know, support them for that production environment. >> I have to ask you. Now, let's get into what's in it for the customer. I'm a customer, why should I be enthused about Arlon? What's in it for me? You know. 'Cause if I'm not enthused about it, I'm not going to be confident and it's going to be hard for me to get behind this. Can you share your enthusiastic view of, you know, why I should be enthused about Arlon? I'm a customer. >> Yeah, absolutely.
And so, and there's multiple, you know, enterprises that we talk to, many of them, you know, our customers, where this is a very kind of typical story that you hear, which is: we have, you know, a Kubernetes distribution. It could be on premise, it could be public cloud-native Kubernetes, and then, we have our CI/CD pipelines that are automating the deployment of applications, et cetera. And then, there's this gray zone. And the gray zone is, well before your CI/CD pipelines can deploy the apps, somebody needs to do all of that groundwork of, you know, defining those clusters and, yeah, you know, properly configuring them. And these things start out being done by hand. And then, as you scale, what typically enterprises would do today is they will have their homegrown DIY solutions for this. I mean, a number of folks that I talk to have built Terraform automation, and then, you know, some of those key developers leave. So, it's a typical open source or typical, you know, DIY challenge. And the reason that they're writing it themselves is not because they want to. I mean, of course, technology is always interesting to everybody, but it's because they can't find a solution that's out there that perfectly fits the problem. And so that's that pitch. I think, (indistinct) would be delighted. The folks that we've, you know, spoken with have been absolutely excited and have, you know, shared that this is a major challenge we have today because we have, you know, a few hundred clusters on Amazon EKS, and we want to scale them to a few thousand, but we don't think we are ready to do that. And this will give us the ability to, >> Yeah, I think, people are scared. I won't say scared, that's a bad word. Maybe I should say that they feel nervous because, you know, at scale, small mistakes can become large mistakes. This is something that is concerning to enterprises.
And I think, this is going to come up at (indistinct) this year where enterprises are going to say, okay, I need to see SLAs. I want to see track record, I want to see other companies that have used it. >> Madhura: Yeah. >> How would you answer that question to, or challenge, you know, hey, I love this, but is there any guarantees? Is there any, what's the SLA? I'm an enterprise, I got tight, you know, I love the open source, it's free, fast and loose, but I need hardened code. >> Yeah, absolutely. So, two parts to that, right? One is Arlon leverages existing open-source components, products that are extremely popular. Two specifically. One is Arlon uses ArgoCD, which is probably one of the highest rated and used CD open-source tools that's out there, right? It's created by folks that are part of the Intuit team now, you know, really brilliant team. And it's used at scale across enterprises. That's one. Second is Arlon also makes use of Cluster API (indistinct), which is a Kubernetes sub-component, right? For life cycle management of clusters. So, there are enough, you know, community users, et cetera, around these two products, right? Or open-source projects, that will find Arlon to be right up their alley because they're already comfortable, familiar with ArgoCD. Now, Arlon just extends the scope of what ArgoCD can do. And so, that's one. And then, the second part is going back to your point of the comfort. And that's where, you know, Platform9 has a role to play, which is when you are ready to deploy Arlon at scale, because you've been, you know, playing with it in your (indistinct) test environments, you're happy with what you get with it, then Platform9 will stand behind it and provide that SLA. >> And what's been the reaction from customers you've talked to, Platform9 customers, that are familiar with Argo and then Arlon? What's been some of the feedback? >> Yeah, I think, the feedback's been fantastic.
I mean, I can give examples of customers where, you know, initially, you know, when you are telling them about your entire portfolio of solutions, it might not strike a chord right away. But then we start talking about Arlon, and we talk about the fact that it uses ArgoCD, and they start opening up, they say, we have standardized on Argo and we have built these components homegrown, we would be very interested. Can we co-develop? Does it support these use cases? So, we've had that kind of validation. We've had validation all the way at the beginning of Arlon before we even wrote a single line of code, saying, this is something we plan on doing. And the customer said, if you had it today, I would've purchased it. So, it's been really great validation. >> All right. So, next question is, what is the solution to the customer? If I asked you, look, I'm so busy, my team's overworked. I got a skills gap, I don't need another project, I'm so tied up right now, and I'm just chasing my tail. How does Platform9 help me? >> Yeah, absolutely. So I think, you know, one of the core tenets of Platform9 has always been that we try to bring that public cloud like simplicity by hosting, you know, this and a lot of similar tools in a SaaS-hosted manner for our customers, right? So, our goal behind doing that is taking away, or trying to take away, all of that complexity from customers' hands and offloading it to our hands, right? And giving them that full white glove treatment, as we call it. And so, from a customer's perspective, one, something like Arlon will integrate with what they have, so they don't have to rip and replace anything. In fact, it will, even in the next versions, it may even discover your clusters that you have today, and, you know, give you an inventory. And then, >> So, customers have clusters that are growing, that's a sign, >> Correct. >> Call you guys. >> Absolutely. Either they have massive large clusters. Right.
That they want to split into smaller clusters, but they're not comfortable doing that today, or they've done that already on, say, public cloud or otherwise. And now, they have management challenges. >> So, especially, operationalizing the clusters, whether they want to kind of reset everything and move things around and reconfigure >> Madhura: Yeah. >> And or scale out. >> That's right. Exactly. >> And you provide that layer of policy. >> Absolutely. Yes. >> That's the key value here. >> That's right. >> So, policy-based configuration for cluster scale up. >> Profile and policy-based, declarative configuration and life cycle management for clusters. >> If I asked you how this enables SuperCloud, what would you say to that? >> I think, this is one of the key ingredients to SuperCloud, right? If you think about a SuperCloud environment, there are at least a few key ingredients that come to my mind that are really critical. Like they are, you know, life-saving ingredients at that scale. One is having a really good strategy for managing that scale, you know, going back to the assembly line, in a very consistent, predictable way. So, that Arlon solves. Then you need to complement that with the right kind of observability and monitoring tools at scale, right? Because ultimately issues are going to happen and you're going to have to figure out, you know, how to solve them fast. And Arlon, by the way, also helps in that direction, but you also need observability tools. And then, especially if you're running on the public cloud, you need some cost management tools. In my mind, these three things are like the most necessary ingredients to make SuperCloud successful. And you know, Arlon fits in one, >> Okay, so now, the next level is, okay, that makes sense. It's under the covers, so to speak, under the hood. >> Madhura: Yeah. >> How does that impact the app developers of the cloud-native modern application workflows?
Because the impact to me seems the apps are going to be impacted. Are they going to be faster, stronger? I mean, what's the impact? If you do all those things, as you mentioned, what's the impact on the apps? >> Yeah, the impact is that your apps are more likely to operate in production the way you expect them to, because the right checks and balances have gone through, and any discrepancies have been identified prior to those apps, prior to your customer running into them, right? Because developers run into this challenge today where there's a split responsibility, right? I'm responsible for my code, I'm responsible for some of these other plugins, but I don't own the stack end to end. I have to rely on my ops counterpart to do their part, right? And so, this really gives them, you know, the right tooling for that. >> So, this is actually a great kind of relevant point. You know, as cloud becomes more scalable, you're starting to see this fragmentation; gone are the days of the full-stack developer, to the more specialized role. But this is a key point, and I have to ask you, because if this Arlon solution takes place, as you say, and the apps are going to be (indistinct), they're designed to do, the question is, what does the current pain look like? Are the apps breaking? What are the signals to the customer, >> Madhura: Yeah. >> That they should be calling you guys up and implementing Arlon, Argo, and all the other goodness to automate? What are some of the signals? Is it downtime? Is it failed apps, is it latency? What are some of the things that, >> Madhura: Yeah, absolutely. >> Would be indications of things are F'ed up a little bit. >> Yeah. More frequent down times, down times that take longer to triage. And so your, you know, your mean time to resolution, et cetera, is escalating or growing larger, right?
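The mean time to resolution signal discussed here is easy to quantify. A back-of-the-envelope sketch, with made-up numbers purely for illustration:

```python
# MTTR (mean time to resolution): average of (resolved - opened) across
# incidents. The timestamps below are invented for illustration; the point
# is that a rising MTTR is the operational signal being described.

incidents_hours = [(0.0, 2.0), (5.0, 9.0), (12.0, 18.0)]  # (opened, resolved)

def mttr(incidents):
    """Mean resolution time across (opened, resolved) pairs."""
    return sum(resolved - opened for opened, resolved in incidents) / len(incidents)

print(mttr(incidents_hours))  # → 4.0 (hours)
```

Tracking this number per cluster fleet, rather than per app, is what surfaces the "works on my cluster" class of failures early.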
Like we have environments of customers where they have a number of folks in the field that have to take these apps and run them at customer sites. And that's one of our partners, and they're extremely interested in this because of the rate of failures they're encountering, you know, in the field when they're running these apps on site, because the field is automating their clusters that are running on sites using their own scripts. So, these are the kinds of challenges, and those are the pain points, which is, you know, if you're looking to reduce your mean time to resolution, if you're looking to reduce the number of failures that occur on your production site, that's one. And second, if you're looking to manage these at-scale environments with a relatively small, focused, nimble ops team, which has an immediate impact on your budget. So, those are the signals. >> This is the cloud-native at scale situation, the innovation going on. Final thought is your reaction to the idea that, if the world goes digital, which it is, and the confluence of physical and digital coming together, and cloud continues to do its thing, the company becomes the application, not where IT used to be supporting the business, you know, the back office and the (indistinct) terminals and some PCs and handhelds. Now, if technology's running the business, the business is the business. >> Yeah. >> Company is the application. >> Yeah. >> So, it can't be down. So, there's a lot of pressure on CSOs and CIOs now and boards are saying, how is technology driving the top-line revenue? That's the number one conversation. >> Yeah. >> Do you see the same thing? >> Yeah, it's interesting. I think there's multiple pressures at the CXO, CIO level, right? One is that there needs to be that visibility and clarity and guarantee, almost, that, you know, the technology that's, you know, that's going to drive your top line is going to drive that in a consistent, reliable, predictable manner.
And then second, there is the constant pressure to do that while always lowering your costs of doing it, right? Especially when you're talking about, let's say, retailers or those kinds of large-scale vendors, they many times make money by lowering the amount that they spend on, you know, providing those goods to their end customers. So, I think both those factors kind of come into play, and the solution to all of them is usually in a very structured strategy around automation. >> Final question. What does cloud-native at scale look like to you? If all the things happen the way we want them to happen, the magic wand, the magic dust, what does it look like? >> What that looks like to me is a CIO sipping coffee at his desk, production is running absolutely smooth, and he's running that with a nimble team size of, at the most, a handful of folks that are just looking after things, but things are just taking care of themselves. >> John: And the CIO doesn't exist and there's no CISO, they're at the beach. >> (laughs) Yeah. >> Thank you for coming on, sharing the cloud-native at scale here on The Cube. Thank you for your time. >> Fantastic. Thanks for having me. >> Okay. I'm John Furrier here, for this special program presentation, special programming cloud-native at scale, enabling SuperCloud modern applications with Platform9. Thanks for watching. (gentle music)
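The "profile and policy-based, declarative configuration" described in the interview can be illustrated with a small sketch. The data structures below are hypothetical, not Arlon's real API: a profile bundles the add-ons and policies every cluster of a given class must carry, so drift on a handcrafted cluster becomes mechanically checkable instead of something triaged by hand:

```python
# Illustrative only (invented names, not Arlon's actual API): a profile is
# the assembly-line template; drift() reports what a cluster is missing
# relative to the profile it is assigned.

EDGE_PROFILE = {
    "addons": ["cert-manager", "fluent-bit", "prometheus"],
    "policies": {"pod_security": "restricted", "network_default_deny": True},
}

def drift(cluster: dict, profile: dict) -> list:
    """List the add-ons and policy settings a cluster lacks vs. its profile."""
    findings = []
    for addon in profile["addons"]:
        if addon not in cluster.get("addons", []):
            findings.append(f"missing addon: {addon}")
    for key, want in profile["policies"].items():
        if cluster.get("policies", {}).get(key) != want:
            findings.append(f"policy drift: {key}")
    return findings

# A handcrafted cluster of the kind discussed above: it runs, but not to spec.
handcrafted = {"addons": ["prometheus"], "policies": {"pod_security": "baseline"}}
print(drift(handcrafted, EDGE_PROFILE))
```

Run across a few hundred clusters, a report like this is the difference between the "works on my cluster" problem and an assembly-line level of consistency.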
Platform9, Cloud Native at Scale
>> Everyone, welcome to The Cube here in Palo Alto, California for a special presentation on cloud native at scale, enabling SuperCloud modern applications with Platform9. I'm John Furrier, your host of The Cube. We've got a great lineup of three interviews we're streaming today. Madhura Maskasky, who's the co-founder and VP of Product of Platform9. She's gonna go into detail around Arlon, the open source product, and also the value of what this means for infrastructure as code and for cloud native at scale. Bich Le, the chief architect of Platform9, Cube alumni going back to the OpenStack days. He's gonna go into why Arlon, why this infrastructure as code implication, what it means for customers and the implications in the open source community and where that value is. Really great wide-ranging conversation there. And of course, Bhaskar Gorti, the CEO of Platform9, is gonna talk with me about his views on SuperCloud and why Platform9 has scalable solutions to bring cloud native at scale. So enjoy the program, see you soon. Hello and welcome to The Cube here in Palo Alto, California for a special program on cloud native at scale, enabling next generation cloud, or SuperCloud, for modern application cloud native developers. I'm John Furrier, host of The Cube. Pleasure to have here with me Madhura Maskasky, co-founder and VP of product at Platform9. Thanks for coming in today for this cloud native at scale conversation. >> Thank you for having me. >> So, cloud native at scale, something that we're talking about because we're seeing the next level of mainstream success of containers, Kubernetes, and cloud native development, basically DevOps in the CI/CD pipeline. It's changing the landscape of infrastructure as code, it's accelerating the value proposition, and the SuperCloud, as we call it, has been getting a lot of traction because this next generation cloud is looking a lot different, but kind of the same as the first generation.
What's your view on Super cloud as it fits to cloud native as scales up? >>Yeah, you know, I think what's interesting, and I think the reason why Super Cloud is a really good and a really fit term for this, and I think, I know my CEO was chatting with you as well, and he was mentioning this as well, but I think there needs to be a different term than just multi-cloud or cloud. And the reason is because as cloud native and cloud deployments have scaled, I think we've reached a point now where instead of having the traditional data center style model, where you have a few large distributors of infrastructure and workload at a few locations, I think the model is kind of flipped around, right? Where you have a large number of micro sites. These micro sites could be your public cloud deployment, your private on-prem infrastructure deployments, or it could be your edge environment, right? And every single enterprise, every single industry is moving in that direction. And so you gotta rougher that with a terminology that, that, that indicates the scale and complexity of it. And so I think super cloud is a, is an appropriate term for >>That. So you brought a couple things I want to dig into. You mentioned Edge Notes. We're seeing not only edge nodes being the next kind of area of innovation, mainly because it's just popping up everywhere. And that's just the beginning. Wouldn't even know what's around the corner. You got buildings, you got iot, o ot, and it kind of coming together, but you also got this idea of regions, global infrastructures, big part of it. I just saw some news around cloud flare shutting down a site here, there's policies being made at scale. These new challenges there. Can you share because you can have edge. So hybrid cloud is a winning formula. Everybody knows that it's a steady state. Yeah. But across multiple clouds brings in this new un engineered area, yet it hasn't been done yet. Spanning clouds. 
People say they're doing it, but you start to see the toe in the water, it's happening, it's gonna happen. It's only gonna get accelerated with the edge and beyond globally. So I have to ask you, what is the technical challenges in doing this? Because it's something business consequences as well, but there are technical challenge. Can you share your view on what the technical challenges are for the super cloud across multiple edges and >>Regions? Yeah, absolutely. So I think, you know, in in the context of this, the, this, this term of super cloud, I think it's sometimes easier to visualize things in terms of two access, right? I think on one end you can think of the scale in terms of just pure number of nodes that you have, deploy number of clusters in the Kubernetes space. And then on the other access you would have your distribution factor, right? Which is, do you have these tens of thousands of nodes in one site or do you have them distributed across tens of thousands of sites with one node at each site? Right? And if you have just one flavor of this, there is enough complexity, but potentially manageable. But when you are expanding on both these access, you really get to a point where that skill really needs some well thought out, well-structured solutions to address it, right? A combination of homegrown tooling along with your, you know, favorite distribution of Kubernetes is not a strategy that can help you in this environment. It may help you when you have one of this or when you, when you scale, is not at the level. >>Can you scope the complexity? Because I mean, I hear a lot of moving parts going on there, the technology's also getting better. We we're seeing cloud native become successful. There's a lot to configure, there's a lot to install. Can you scope the scale of the problem? Because we're talking about at scale Yep. Challenges here. >>Yeah, absolutely. 
And I think, you know, I I like to call it, you know, the, the, the problem that the scale creates, you know, there's various problems, but I think one, one problem, one way to think about it is, is, you know, it works on my cluster problem, right? So, you know, I come from engineering background and there's a, you know, there's a famous saying between engineers and QA and the support folks, right? Which is, it works on my laptop, which is I tested this change, everything was fantastic, it worked flawlessly on my machine, on production, It's not working. The exact same problem now happens and these distributed environments, but at massive scale, right? Which is that, you know, developers test their applications, et cetera within the sanctity of their sandbox environments. But once you expose that change in the wild world of your production deployment, right? >>And the production deployment could be going at the radio cell tower at the edge location where a cluster is running there, or it could be sending, you know, these applications and having them run at my customer's site where they might not have configured that cluster exactly the same way as I configured it, or they configured the cluster, right? But maybe they didn't deploy the security policies or they didn't deploy the other infrastructure plugins that my app relies on all of these various factors at their own layer of complexity. And there really isn't a simple way to solve that today. And that is just, you know, one example of an issue that happens. I think another, you know, whole new ball game of issues come in the context of security, right? Because when you are deploying applications at scale in a distributed manner, you gotta make sure someone's job is on the line to ensure that the right security policies are enforced regardless of that scale factor. So I think that's another example of problems that occur. >>Okay. 
So I have to ask about scale because there are a lot of multiple steps involved when you see the success cloud native, you know, you see some, you know, some experimentation. They set up a cluster, say it's containers and Kubernetes, and then you say, Okay, we got this, we can configure it. And then they do it again and again, they call it day two. Some people call it day one, day two operation, whatever you call it. Once you get past the first initial thing, then you gotta scale it. Then you're seeing security breaches, you're seeing configuration errors. This seems to be where the hotpot is. And when companies transition from, I got this to, Oh no, it's harder than I thought at scale. Can you share your reaction to that and how you see this playing out? >>Yeah, so, you know, I think it's interesting. There's multiple problems that occur when, you know, the, the two factors of scale is we talked about start expanding. I think one of them is what I like to call the, you know, it, it works fine on my cluster problem, which is back in, when I was a developer, we used to call this, it works on my laptop problem, which is, you know, you have your perfectly written code that is operating just fine on your machine, your sandbox environment. But the moment it runs production, it comes back with p zeros and POS from support teams, et cetera. And those issues can be really difficult to try us, right? And so in the Kubernetes environment, this problem kind of multi folds, it goes, you know, escalates to a higher degree because yeah, you have your sandbox developer environments, they have their clusters and things work perfectly fine in those clusters because these clusters are typically handcrafted or a combination of some scripting and handcrafting. 
>>And so as you give that change to then run at your production edge location, like say you radio sell tower site, or you hand it over to a customer to run it on their cluster, they might not have not have configured that cluster exactly how you did it, or they might not have configured some of the infrastructure plugins. And so the things don't work. And when things don't work, triaging them becomes like ishly hard, right? It's just one of the examples of the problem. Another whole bucket of issues is security, which is, is you have these distributed clusters at scale, you gotta ensure someone's job is on the line to make sure that these security policies are configured properly. >>So this is a huge problem. I love that comment. That's not not happening on my system. It's the classic, you know, debugging mentality. Yeah. But at scale it's hard to do that with error prone. I can see that being a problem. And you guys have a solution you're launching, Can you share what our lawn is, this new product, What is it all about? Talk about this new introduction. >>Yeah, absolutely. I'm very, very excited. You know, it's one of the projects that we've been working on for some time now because we are very passionate about this problem and just solving problems at scale in on-prem or at in the cloud or at edge environments. And what arwan is, it's an open source project and it is a tool, it's a Kubernetes native tool for complete end to end management of not just your clusters, but your clusters. All of the infrastructure that goes within and along the sites of those clusters, security policies, your middleware plugins, and finally your applications. So what alarm lets you do in a nutshell is in a declarative way, it lets you handle the configuration and management of all of these components in at scale. >>So what's the elevator pitch simply put for what this solves in, in terms of the chaos you guys are reigning in. What's the, what's the bumper sticker? 
Yeah, >>What would it do? There's a perfect analogy that I love to reference in this context, which is: think of your assembly line in a traditional, let's say, auto manufacturing factory, and the level of efficiency at scale that that assembly line brings, right? And if you look at the logo we've designed, it's this funny little robot. That's because when we think of Arlon, we think of these enterprise large-scale environments, sprawling at scale, creating chaos, because there isn't necessarily a well-thought-through, well-structured solution similar to an assembly line, which takes each component, addresses it, processes it in a standardized way, and hands it to the next stage, where again it gets processed in a standardized way. And that's what Arlon really does. That's the elevator pitch. If you have problems of scale in managing your distributed infrastructure, Arlon brings that assembly-line level of efficiency and consistency >>For those. So keeping it smooth, the assembly line, things are flowing. CI/CD pipelining. Exactly. So you're trying to simplify that ops piece for the developer. I mean, it's not really ops, it's their ops, it's coding. >>Yeah, and not just developers, the operations folks as well, right? Because developers are responsible for one picture of that layer, which is "my apps," and maybe the middleware the application interfaces with. But then they hand it over to someone else who's responsible for ensuring that those apps are secured properly, that logs are being collected properly, that monitoring and observability are integrated. And so it solves problems for both those >>Teams. Yeah, it's DevOps. So the DevOps is the cloud native developer. The ops teams have to kind of set policies. Is that where the declarative piece comes in?
Is that why that's important? >>Absolutely. Yeah. And, you know, Kubernetes really introduced, or elevated, this declarative management, right? Because the specifications of components that go into Kubernetes are defined in a declarative way, and Kubernetes always keeps that state consistent with your defined state. But when you go outside of that world of a single cluster, when you actually talk about defining the clusters themselves, or defining everything around them, there really isn't a solution that does that today. And so Arlon addresses that problem at the heart of it, and it does that using existing, well-known open source solutions. >>And I do wanna get into the benefits, what's in it for me as the customer or developer, but I want to finish this out real quick and get your thoughts. You mentioned open source. Why open source? What's the current state of the product? You run the product group over at Platform9. Is it open source? And you guys have a product that's commercial? Can you explain the open source dynamic? First of all, why open source? And what is the consumption? I mean, open source is great, people want open source, they can download it, look at the code, but maybe they wanna buy the commercial version. So I'm assuming you have that thought through. Can you share the open source and commercial relationship? >>Yeah. I think, you know, starting with why open source: one of the things that's absolutely critical to us as a company is that we take mainstream open source technologies and components and make them available to our customers at scale, through either a SaaS model or an on-prem model, right? So as a company, a startup, that benefits in a massive way from this open source economy, it's only right, I think, that we do our part of the duty, right?
And contribute back to the community that feeds us. So we have always held that strongly as one of our principles. And we have created and built independent products, starting all the way with Fission, which was a serverless product we built, through various other examples I could give. But that's one of the main reasons for open source. It's also open source because we want the community to really engage with us firsthand on this problem, which is very difficult to achieve if your product is behind a wall, behind a black box. >>Well, and that's what the developers want too. I mean, what we're seeing in reporting with Super Cloud is that the new model of consumption is "I wanna look at the code and see what's in there." That's right. And then also, if I want to use it, I'll do it. Great, that's open source, that's the value. But at the end of the day, if I wanna move fast, that's when people buy in. So it's a new kind of freemium, I guess, business model. But that's the benefit of open source. This is why standards and open source are growing so fast: you have that confluence of a way for people to try before they buy, but also to kind of date the application, if you will. You know, Adrian Karo uses the dating metaphor: hey, I wanna check it out first before I get married, right? And that's what open source is. So this is the new, this is how people are selling. This is not just open source, this is how companies are selling. >>Absolutely. Yeah. You know, two things. I think one is just that this cloud native space is so vast that if you're building a closed-source solution, sometimes there's also a risk that it may not apply to every single enterprise's use cases.
And so having it open source gives them an opportunity to extend it, to expand it, to make it fit their use case if they choose to do so, right? But at the same time, what's also critical to us is that we're able to provide a supported version of it with an SLA that's backed by us, and a SaaS-hosted version of it as well, for those customers who choose to go that route once they've used the open source version, loved it, want to take it to scale and into production, and need a partner to collaborate with who can support them in that production >>Environment. I have to ask you now, let's get into what's in it for the customer. I'm a customer: why should I be enthused about Arlon? What's in it for me? You know, 'cause if I'm not enthused about it, I'm not gonna be confident, and it's gonna be hard for me to get behind this. Can you share your enthusiastic view of why I should be enthused about Arlon as a customer? >>Yeah, absolutely. There are multiple enterprises that we talk to, many of them our customers, where this is a very typical story that you hear: we have a Kubernetes distribution, it could be on-premise, it could be a public cloud's native Kubernetes, and then we have our CI/CD pipelines that are automating the deployment of applications, et cetera. And then there's this gray zone. And the gray zone is: well, before your CI/CD pipelines can deploy the apps, somebody needs to do all of the groundwork of defining those clusters and properly configuring them. And these things start out being done by hand, and then as you scale, what enterprises typically do today is build their own homegrown DIY solutions for this. >>I mean, the number of folks that I talk to that have built Terraform automation, and then, you know, some of those key developers leave.
So it's a typical open source or typical DIY challenge. And the reason they're writing it themselves is not because they want to. I mean, of course technology is always interesting to everybody, but it's because they can't find a solution out there that perfectly fits the problem. And so that's the pitch. The folks that we've spoken with have been absolutely excited, and have shared that this is a major challenge they have today: they have a few hundred clusters on Amazon and wanna scale them to a few thousand, but don't think they're ready to do that. And this will give them >>Stability. Yeah, I think people are scared, no, I won't say scared, that's a bad word, maybe I should say they feel nervous, because at scale, small mistakes can become large mistakes. This is something that is concerning to enterprises. And I think this is gonna come up at KubeCon this year, where enterprises are gonna say, okay, I need to see SLAs, I wanna see a track record, I wanna see other companies that have used it. Yeah. How would you answer that question, or challenge: hey, I love this, but are there any guarantees? What are the SLAs? I'm an enterprise, I've got tight requirements. I love the open source, trying it fast and loose, but I need hardened code. >>Yeah, absolutely. So, two parts to that, right? One is that Arlon leverages existing open source components, products that are extremely popular, two specifically. One is that Arlon uses Argo CD, which is probably one of the highest-rated and most used open source CD tools out there, right? It was created by folks that are now part of the Intuit team, a really brilliant team, and it's used at scale across enterprises. That's one. Second is that Arlon also makes use of Cluster API, CAPI, which is a Kubernetes sub-project for lifecycle management of clusters.
So there are enough community users, et cetera, around these two open source projects that they'll find Arlon to be right up their alley, because they're already comfortable and familiar with Argo CD. Arlon just extends the scope of what Argo CD can do. So that's one. And then the second part goes back to the point about comfort, and that's where Platform9 has a role to play: when you are ready to deploy Arlon at scale, because you've been playing with it in your dev and test environments and you're happy with what you get from it, then Platform9 will stand behind it and provide that SLA. >>And what's been the reaction from customers you've talked to, Platform9 customers that are familiar with Argo and now Arlon? What's been some of the feedback? >>Yeah, I think the feedback's been fantastic. I can give you examples of customers where, initially, when you're telling them about your entire portfolio of solutions, it might not strike a chord right away. But then we start talking about Arlon, and we talk about the fact that it uses Argo CD, and they start opening up. They say, we have standardized on Argo and we have built these components homegrown, we would be very interested. Can we co-develop? Does it support these use cases? So we've had that kind of validation. We had validation all the way at the beginning of Arlon, before we even wrote a single line of code, saying this is something we plan on doing, and the customer said, if you had it today, I would've purchased it. So it's been really great validation. >>All right. So the next question is, what is the solution for the customer? If I ask you, look, I'm so busy, my team's overworked, I've got a skills gap, I don't need another project, I'm so tied up right now and I'm just chasing my tail. How does Platform9 help me? >>Yeah, absolutely.
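For readers who haven't used the Argo CD building block mentioned above: its central object is an Application resource that points a cluster at a Git path and keeps the two in sync. A rough sketch of what one looks like, shown here as a Python dict; the repo URL, paths, and names below are placeholders, not anything from the interview:

```python
# A rough sketch of an Argo CD `Application` resource, the unit that tells
# Argo CD "keep this destination in sync with this Git path". The repo URL,
# path, and namespace are illustrative placeholders.
app = {
    "apiVersion": "argoproj.io/v1alpha1",
    "kind": "Application",
    "metadata": {"name": "guestbook", "namespace": "argocd"},
    "spec": {
        "project": "default",
        "source": {
            "repoURL": "https://example.com/org/deploy-configs.git",
            "path": "clusters/prod/guestbook",
            "targetRevision": "HEAD",
        },
        "destination": {
            "server": "https://kubernetes.default.svc",
            "namespace": "guestbook",
        },
        # Automated sync with self-heal is what gives the GitOps
        # "converge back on drift" behavior discussed in this interview.
        "syncPolicy": {"automated": {"prune": True, "selfHeal": True}},
    },
}
print(app["kind"])
```

In practice this would be a YAML manifest applied to the cluster where Argo CD runs; the dict form above is only to show the shape of the object.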
So I think one of the core tenets of Platform9 has always been that we try to bring that public-cloud-like simplicity by hosting this, and a lot of similar tools, in a SaaS-hosted manner for our customers, right? Our goal is to take all of that complexity out of the customer's hands and offload it to ours, and give them that full white-glove treatment, as we call it. And so from a customer's perspective, something like Arlon will integrate with what they have, so they don't have to rip and replace anything. In fact, in future versions it may even discover the clusters you have today and give you an inventory. >>So customers that have clusters that are growing, that's a signal to call you guys. >>Absolutely. Either they have massive large clusters that they wanna split into smaller clusters but aren't comfortable doing that today, or they've done that already on, say, public cloud or otherwise, and now they have management challenges. So >>Especially operationalizing the clusters, whether they want to kind of reset everything, move things around, and reconfigure. Yeah. And/or scale out. >>That's right. Exactly. >>And you provide that layer of policy. >>Absolutely. >>Yes. That's the key value >>Here. That's right. >>So policy-based configuration for cluster scale-up. >>Profile- and policy-based declarative configuration and lifecycle management for clusters. >>If I asked you how this enables Super Cloud, what would you say to that? >>I think this is one of the key ingredients of Super Cloud, right? If you think about a Super Cloud environment, there are at least a few key ingredients that come to my mind that are really critical, life-saving ingredients at that scale.
One is having a really good strategy for managing that scale, going back to the assembly line, in a very consistent, predictable way, and that's what Arlon solves. Then you need to complement that with the right kind of observability and monitoring tools at scale, right? Because ultimately issues are gonna happen, and you're gonna have to figure out how to solve them fast. Arlon, by the way, also helps in that direction, but you also need observability tools. And then, especially if you're running on the public cloud, you need some cost management tools. In my mind, these three things are the most necessary ingredients to make Super Cloud successful. And Arlon fits >>In one. Okay, so now the next level is, okay, that makes sense, there's the under-the-covers, so to speak, under-the-hood stuff. Yeah. How does that impact the app developers and the cloud native modern application workflows? Because the impact, it seems to me, is that the apps are gonna be affected. Are they gonna be faster, stronger? I mean, if you do all those things you mentioned, what's the impact on the apps? >>Yeah, the impact is that your apps are more likely to operate in production the way you expect them to, because the right checks and balances have gone through, and any discrepancies have been identified before your customer runs into them, right? Because developers run into this challenge where there's split responsibility, right? I'm responsible for my code, I'm responsible for some of these other plugins, but I don't own the stack end to end. I have to rely on my ops counterpart to do their part, right? And so this really gives them the right tooling for >>That. So this is actually a great and relevant point: as cloud becomes more scalable, you're starting to see this fragmentation, gone are the days of the full-stack developer, moving to more specialized roles.
But this is a key point, and I have to ask you, because if this Arlon solution takes hold, as you say, and the apps do what they're designed to do, the question is: what does the current pain look like when the apps break? What are the signals to the customer, Yeah, that they should be calling you guys up and implementing Arlon, Argo, and all the other goodness to automate? What are some of the signals? Is it downtime? Is it failed apps? Is it latency? What are some of the things that, Yeah, absolutely, would be indications that things are effed up a little bit? >>Yeah. More frequent downtimes, downtimes that take longer to triage, so your mean time to resolution, et cetera, is escalating or growing larger, right? For example, we have customer environments where they have a number of folks in the field who have to take these apps and run them at customer sites, and that's one of our partners. And they're extremely interested in this because of the rate of failures they're encountering in the field when running these apps on site, because the field is automating the clusters running on those sites using their own scripts. So those are the kinds of challenges, and those are the pain points: if you're looking to reduce your mean time to resolution, if you're looking to reduce the number of failures that occur on your production site, that's one. And second, if you are looking to manage these at-scale environments with a relatively small, focused, nimble ops team, this has an immediate impact on that. So those are the >>Signals. This is the cloud native at scale situation, the innovation going on.
Final thought: your reaction to the idea that if the world goes digital, which it is, with the confluence of physical and digital coming together, and cloud continues to do its thing, the company becomes the application. Not where it used to be, supporting the business, the back office, the terminals, some PCs and handhelds. Now, if technology's running the business, the business is the business. Yeah. The company's the application. Yeah. So it can't be down. So there's a lot of pressure on CSOs and CIOs now, and boards are asking, how is technology driving the top-line revenue? That's the number one conversation. Yeah. Do you see that same thing? >>Yeah, it's interesting. I think there are multiple pressures at the CXO and CIO level, right? One is that there needs to be visibility, clarity, and almost a guarantee that the technology that's gonna drive your top line will drive it in a consistent, reliable, predictable manner. And then second, there's the constant pressure to do that while always lowering your cost of doing it, right? Especially when you're talking about, say, retailers or those kinds of large-scale vendors: they often make money by lowering the amount they spend on providing those goods to their end customers. So I think both those factors come into play, and the solution to all of them is usually a very structured strategy around automation. >>Final question. What does cloud native at scale look like to you? If all the things happen the way we want 'em to happen, the magic wand, the magic dust, what does it look like? >>What that looks like to me is a CIO sipping coffee at his desk. Production is running absolutely smooth, and he's running that with a nimble team of, at most, a handful of folks that are just looking after things. So just >>Taking care of it, and the CIO doesn't exist.
There's no CSO, they're at the beach. >>Yeah. >>Thank you for coming on and sharing cloud native at scale here on theCUBE. Thank you for your time. >>Fantastic. Thanks for having >>Me. Okay, I'm John Furrier here for this special program presentation, special programming on cloud native at scale, enabling Super Cloud modern applications with Platform9. Thanks for watching. Welcome back, everyone, to the special presentation of cloud native at scale, theCUBE and Platform9 special presentation, going in and digging into the next-generation Super Cloud, infrastructure as code, and the future of application development. We're here with Bich Le, who's the chief architect and co-founder of Platform9. Great to see you, CUBE alumni. We met at an OpenStack event about eight years ago, or even earlier, when OpenStack was going. Great to see you, and congratulations on the success of Platform9. >>Thank you very much. >>Yeah. You guys have been at this for a while, and this is really the year we're seeing the crossover of Kubernetes because of what's happening with containers. Everyone now has realized, and you've seen what Docker's doing with the new Docker, the open source Docker, now just a success, Exactly, of containerization, right? And now the Kubernetes layer that we've been working on for years is bearing fruit. This is huge. >>Exactly. Yes. >>And so as infrastructure as code comes in, we talked to Bhaskar about Super Cloud, and we talked about the new Arlon you guys just launched; infrastructure as code is going to another level. And it's always been DevOps, infrastructure as code, that's been the ethos from day one: developers just code. Then you saw the rise of serverless, and you see now multi-cloud on the horizon. Connect the dots for us: what is the state of infrastructure as code today?
>>So I think, I'm glad you mentioned it. Everybody, or most people, know about infrastructure as code, but with Kubernetes, I think that project has evolved the concept even further. These days, it's infrastructure as configuration, right? Which is an evolution of infrastructure as code. So instead of telling the system how you want your infrastructure, by telling it, you know, do step A, B, C, and D, with Kubernetes you can describe your desired state declaratively, using things called manifests and resources, and then the system kind of magically figures it out and tries to converge the state towards the one you specified. So I think it's an even better version of infrastructure as code. >>Yeah, and that really means it's developers just accessing resources, okay, declaring: okay, give me some compute, stand me up some, turn the lights on, turn 'em off, turn 'em on. That's kind of where we see this going. And I like the configuration piece. Some people say composability. I mean, now with open source so popular, you don't have to write a lot of code; the code's being developed. And so it's integration, it's configuration. These are areas where we're starting to see computer science principles around automation and machine learning assisting open source, 'cause you've got a lot of code out there, and you're hearing about software supply chain issues. So infrastructure as code has to factor in these new dynamics. Can you share your opinion on these new dynamics of, as open source grows, the glue layers, the configurations, the integration? What are the core issues? >>I think one of the major core issues is that with all that power comes complexity, right? So, you know, despite their expressive power, systems like Kubernetes and declarative APIs let you express a lot of complicated and complex stacks, right? But you're dealing with hundreds, if not thousands, of these YAML files or resources.
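The declarative model described above, where you state the desired state and the system converges toward it, can be illustrated with a toy reconcile function. This is a sketch of the concept only, not Kubernetes' actual controller machinery, and the resource specs are invented:

```python
# Toy illustration of declarative convergence: instead of running an ordered
# script of steps, a controller repeatedly diffs desired state against
# observed state and emits only the actions needed to close the gap.

def reconcile(desired: dict, observed: dict) -> dict:
    """Return the actions needed to move `observed` toward `desired`."""
    actions = {}
    for name, spec in desired.items():
        if observed.get(name) != spec:
            actions[name] = ("apply", spec)   # create or update to match spec
    for name in observed:
        if name not in desired:
            actions[name] = ("delete", None)  # prune what was never declared
    return actions

desired = {"web": {"replicas": 3}, "db": {"replicas": 1}}
observed = {"web": {"replicas": 2}, "cache": {"replicas": 1}}
print(reconcile(desired, observed))
```

Running this repeatedly against fresh observations is the essence of the "system keeps your defined state consistent" behavior the interview describes: the loop is idempotent, so once observed equals desired it emits no actions at all.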
And so I think, you know, the emergence of systems and layers to help you manage that complexity is becoming a key challenge, and opportunity, in this space. >>I wrote a LinkedIn post today with comments about how the enterprise is the new breed. The trend of SaaS companies moving consumer-like thinking into the enterprise has been happening for a long time, but now more than ever you're seeing it. The old way used to be to solve complexity with more complexity and then lock the customer in. Now with open source, it's speed, simplification, and integration, right? These are the new power dynamics for developers. Yeah. So as companies are starting to now deploy and look at Kubernetes, what are the things that need to be in place? Because you have some, I won't say technical debt, but maybe some shortcuts, some scripts here and there that make it look like infrastructure as code. People have done some things to simulate or make infrastructure as code happen. Yes. But to do it at scale, Yes, is harder. What's your take on this? What's your >>View? It's hard because there's a proliferation of methods, tools, and technologies. So for example, today it's very common for DevOps and platform engineering teams to have to deploy a large number of Kubernetes clusters, but then also apply the applications and configurations on top of those clusters. And they're using a wide range of tools to do this, right? For example, maybe Ansible or Terraform or bash scripts to bring up the infrastructure and then the clusters, and then they may use a different set of tools, such as Argo CD, to apply configurations and applications on top of the clusters. So you have this sprawl of tools. You also have this sprawl of configurations and files, because the more objects you're dealing with, the more resources you have to manage.
And there's a risk of what people call drift, where you think you have things under control, but people from various teams make changes here and there, and then before the end of the day, systems break and you have no way of tracking them. So I think there's a real need to unify, simplify, and try to solve these problems using a smaller, more unified set of tools and methodologies. And that's something that we try to do with this new project, Arlon. >>Yeah. So we're gonna get into Arlon in a second. I wanna get into the why of Arlon. You guys announced it at ArgoCon, which was put on here in Silicon Valley by Intuit; they had their own little day over there at their headquarters. But before we get there: Bhaskar, your CEO, came on and talked about Super Cloud at our inaugural event. What's your definition of Super Cloud? If you had to explain it to someone at a cocktail party, or someone technical in the industry, how would you look at the Super Cloud trend that's emerging? It's become a thing. What would be your contribution to that definition or the narrative? >>Well, it's funny, because I actually heard the term for the first time today, speaking to you earlier today. But based on what you said, I already get some of the gist and the main concepts. It seems like Super Cloud, the way I interpret it, is: clouds and infrastructure, programmable infrastructure, all of those things are becoming commodities in a way, and everyone's got their own flavor. But there's a real opportunity for people to solve real business problems by trying to abstract away all of those various implementations, and then building better abstractions that are perhaps business- or application-specific, to help companies and businesses solve real business problems. >>Yeah, that's a great definition.
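The drift problem described above, where hand edits make live state diverge from what was declared in Git, is exactly what GitOps tools surface by continuously diffing the two. A minimal sketch of that diff, with invented resource names; real tools compare full manifests, not flat dicts:

```python
# Minimal drift report: which declared resources no longer match live state?
# Resource names and fields are invented for illustration.

def detect_drift(declared: dict, live: dict) -> list:
    """List the resources whose live state has drifted from what was declared."""
    drifted = []
    for name, spec in declared.items():
        if live.get(name) != spec:
            drifted.append(name)
    return sorted(drifted)

declared = {"ingress": {"tls": True}, "logging": {"retention_days": 30}}
live     = {"ingress": {"tls": False}, "logging": {"retention_days": 30}}  # someone disabled TLS by hand
print(detect_drift(declared, live))  # ['ingress']
```

A tool in the Argo CD mold would go one step further on detection: with self-heal enabled, it reapplies the declared spec rather than just reporting the mismatch.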
I remember, not to date myself, but back in the old days, IBM had a proprietary network protocol, and so did DEC for the minicomputer vendors: SNA and DECnet, respectively. But TCP/IP came along, out of the OSI, the Open Systems Interconnect, and remember, Ethernet beat Token Ring. So, not to get all nerdy for all the young kids out there, just look up Token Ring, you've probably never heard of it. It was IBM's network connection, and at layer two the analog is Ethernet, right? So TCP/IP, like Kubernetes and the container abstraction now, made the industry completely change at that point in history. At every major inflection point where there's been serious industry change, wealth creation, and business value, there's been an abstraction, Yes, somewhere. Yes. What's your reaction to that? >>I think, and this is a saying that's been heard many times in this industry, and I forget who originated it, but the saying goes: there's no problem that can't be solved with another layer of indirection, right? And we've seen this over and over and over again, where Amazon and its peers have inserted this layer that has simplified computing and infrastructure management. And I believe this trend is going to continue, right? The next set of problems are going to be solved with these insertions of additional abstraction layers. I think that's really, yeah, it's gonna continue. >>It's interesting. I just wrote another post today on LinkedIn, called the Silicon Wars: AMD stock is down, Arm has been on the rise. We've been pointing out for many years now that Arm was gonna be huge, and it has become true. Look at the success of the infrastructure-as-a-service layer across the clouds, Azure, AWS; Amazon's clearly way ahead of everybody.
The stuff that they're doing with the silicon and the physics, down at the atoms, you know, this is where the innovation is. They're going so deep and so strong, down to the ISAs; the more of that they own, the more performance they get. So if you're an app developer, wouldn't you want the best performance, and wouldn't you want the best abstraction layer that gives you the most ability to do infrastructure as code, or infrastructure as configuration, for provisioning, for managing services? And you're seeing that today with service meshes, a lot of action going on in the service mesh area in this KubeCon community, which we'll be covering. So that brings up the whole "what's next?" You guys just announced Arlon at ArgoCon, and Argo came out of Intuit. We had Mariana Tessel at our Super Cloud event, she's their CTO, you know, they're all in on the cloud, and they contributed that project. Where did Arlon come from? What was the origination? What's the purpose? Why Arlon, why this announcement? >>Yeah, so the inception of the project was the result of us realizing that problem we spoke about earlier, which is complexity, right? With all of these clouds, this infrastructure, all the variations around compute, storage, and networks, and the proliferation of tools we talked about, the Ansibles and Terraforms, and Kubernetes itself, which you can think of as another tool: we saw a need to solve that complexity problem, especially for people who use Kubernetes at scale. So when you have hundreds of clusters, thousands of applications, thousands of users spread out over many, many locations, there needs to be a system that helps simplify that management, right? That means fewer tools, more expressive ways of describing the state that you want, and more consistency.
And that's why we built Arlon, and we built it recognizing that many of these problems, or sub-problems, have already been solved. So Arlon doesn't try to reinvent the wheel. It instead rests on the shoulders of several giants, right? For example, Kubernetes is one building block; GitOps and Argo CD are another, which provide a very structured way of applying configuration; and then we have projects like Cluster API and Crossplane, which provide APIs for describing infrastructure. So Arlon takes all of those building blocks and builds a thin layer which gives users a very expressive way of defining configuration and desired state. So that's kind of the inception of it. >>And what's the benefit of that? What does that give the developer, the user, in this case? >>The developers, the platform engineering team members, the DevOps engineers, they get a way to provision not just infrastructure and clusters, but also applications and configurations. They get a system for provisioning, configuring, deploying, and doing lifecycle management in a much simpler way. Okay. Especially, as I said, if you're dealing with a large number of applications. >>So it's like an operating fabric, if you will. Yes. For them. Okay, so let's get into what that means for up above and below this abstraction, this thin layer, and the infrastructure below it. We talked a lot about what's going on below that. Yeah. Above are our workloads. At the end of the day, I talk to CXOs and IT folks that are now DevOps engineers: they care about the workloads, and they want the infrastructure as code to work. They wanna spend their time getting in the weeds, figuring out what happened when someone made a push and something happened. They need observability, and they need to know that it's working. That's right. And "here are my workloads, running effectively." So how do you guys look at the workload side of it?
Cuz now you have multiple workloads on this fabric, right? >>So, workloads. Kubernetes has defined kind of a standard way to describe workloads, and you can, you know, tell Kubernetes, I want to run this container this particular way, or you can use other projects that are in the Kubernetes cloud native ecosystem, like Knative, where you can express your application at a higher level, right? But what's also happening is that, in addition to the workloads, DevOps and platform engineering teams very often need to deploy the applications with the clusters themselves. Clusters are becoming a commodity. The cluster is becoming this host for the application, and it kind of comes bundled with it. In many cases it is like an appliance, right? So DevOps teams have to provision clusters at a really incredible rate, and they need to tear them down. Clusters are becoming more... >>It's becoming like an EC2 instance: spin up a cluster. We've heard people use words like that. >>That's right. And before Arlon, you kind of had to do all of that using a different set of tools, as I explained. So with Arlon you can express everything together. You can say: I want a cluster with a health monitoring stack and a logging stack and this ingress controller, and I want these applications and these security policies. You can describe all of that using something we call the profile. And then you can stamp out your applications and your clusters and manage them in a very consistent way. >>So it essentially creates a standard mechanism. Exactly. Standardized, declarative kind of configurations. And it's like a playbook; you just deploy it. Now, what's the difference between that and, say, a script? I have scripts; I can just automate scripts. >>Or yes, this is where that declarative API and infrastructure as configuration comes in, right? Because scripts, yes, you can automate scripts, but the order in which they run matters, right?
They can break, things can break in the middle, and sometimes you need to debug them. Whereas the declarative way is much more expressive and powerful. You just tell the system what you want, and then the system kind of figures it out. And there are these things called controllers, which will in the background reconcile all the state to converge towards your desired state. It's a much more powerful, expressive, and reliable way of getting things done. >>So infrastructure as configuration is kind of built on, it's a superset of, infrastructure as code, because it's... >>An evolution. >>You need infrastructure as code, but then you can configure the code by just saying, do it. You're basically declaring: go do that. That's right. Okay, so, alright, so cloud native at scale, take me through your vision of what that means. Someone says, hey, what does cloud native at scale mean? What does success look like? How does it roll out, not in the far future, but the next couple of years? I mean, people are now starting to figure out, okay, it's not as easy as it sounds. Kubernetes has value. We're gonna hear this a lot this year at KubeCon. What does cloud native at scale mean? >>Yeah, there are different interpretations, but if you ask me, when people think of scale, they think of a large number of deployments, right? Geographies, many, you know, supporting thousands or even millions of users. There's that aspect to scale. There's also an equally important aspect of scale, which is also something that we try to address with Arlon, and that is just complexity for the people operating this or configuring this, right? So in order to describe that desired state, and in order to perform things like maybe upgrades or updates on a very large scale, you want the humans behind that to be able to express and direct the system to do that in relatively simple terms, right?
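The controller pattern described here, where the user declares a desired state and background loops reconcile the actual state toward it, can be sketched in a few lines. This is a toy illustration for intuition only; it is not Kubernetes or Arlon code, and the state model (app name mapped to replica count) is invented for the example.

```python
# Toy sketch of declarative reconciliation: you declare desired state, and a
# controller loop converges actual state toward it. Not real Kubernetes or
# Arlon code; the replica-count state model is an invented example.

def reconcile(desired, actual):
    """One reconcile pass: create, scale, or delete so actual matches desired."""
    for app, replicas in desired.items():
        if actual.get(app) != replicas:
            actual[app] = replicas      # scale up/down or create
    for app in list(actual):
        if app not in desired:
            del actual[app]             # garbage-collect apps no longer declared
    return actual

desired = {"web": 3, "cache": 2}        # what the user declared
actual = {"web": 1}                     # what the system currently runs

# Controllers run this repeatedly; reruns are safe (idempotent) and no
# ordering needs to be scripted, which is the reliability argument above.
for _ in range(3):
    reconcile(desired, actual)

print(actual)  # {'web': 3, 'cache': 2}
```

The point of the sketch is that, unlike an ordered script, a reconcile pass can be re-run at any time and always moves the system toward the declared state.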
And so we want the tools and the abstractions and the mechanisms available to the user to be as powerful but as simple as possible. So I think there's gonna be a number, and there have been a number, of CNCF and cloud native projects that are trying to attack that complexity problem as well. And Arlon kind of falls in that category. >>Okay, so I'll put you on the spot: KubeCon is coming up, and we'll be shipping this segment series out before it. What do you expect to see at this year's event? What's the big story this year? What's the most important thing happening? Is it in the open source community, and also within a lot of the people jockeying for leadership? I know there's a lot of projects, and still some white space in the overall systems map around the different areas, runtime and observability and all these different areas. Where's the action? Where's the smoke? Where's the fire? Where's the peace? Where's the tension? >>Yeah, so I think one thing that has been happening over the past couple of KubeCons, and that I expect to continue, is that the word on the street is Kubernetes is getting boring, right? Which is good, right? >>Boring means simple. >>Well, maybe. >>Yeah. >>Invisible. >>No drama, right? So the rate of change of the Kubernetes features and all that has slowed, but in a positive way. But there's still a general sentiment and feeling that there's just too much stuff. If you look at a stack necessary for hosting applications based on Kubernetes, there are just still too many moving parts, too many components, right? Too much complexity. I keep going back to the complexity problem. So I expect KubeCon and all the vendors and the players and the startups and the people there to continue to focus on that complexity problem and introduce further simplifications to the stack. >>Yeah.
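Before moving on, the "profile" idea from earlier in this segment, one declarative bundle describing a cluster plus the add-ons, applications, and policies that should come with it, stamped out many times, can be made concrete with a small sketch. Everything here (the `Profile` model, `stamp_out`, the add-on names) is a hypothetical illustration, not Arlon's actual API.

```python
from dataclasses import dataclass, field

# Hypothetical model of a "profile": one declarative bundle describing a
# cluster plus everything that should run on it. Names are illustrative
# assumptions, not Arlon's real API.

@dataclass
class Profile:
    name: str
    cluster_flavor: str                           # e.g. a small edge cluster
    addons: list = field(default_factory=list)    # monitoring, logging, ingress
    apps: list = field(default_factory=list)      # business applications
    policies: list = field(default_factory=list)  # security policies

def stamp_out(profile, count):
    """'Stamp out' N identical cluster definitions from one profile."""
    return [{
        "cluster": f"{profile.name}-{i}",
        "flavor": profile.cluster_flavor,
        "workloads": profile.addons + profile.apps,
        "policies": profile.policies,
    } for i in range(count)]

edge = Profile("retail-edge", "edge-1node",
               addons=["prometheus", "fluentd", "nginx-ingress"],
               apps=["pos", "inventory"],
               policies=["no-privileged-pods"])

clusters = stamp_out(edge, 3)
print(len(clusters), clusters[0]["cluster"])  # 3 retail-edge-0
```

The design point being illustrated: because the whole bundle is one declared unit, provisioning a hundred stores is the same operation as provisioning three.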
Bich, you've had a storied career: VMware over a decade, 12 or 14 years or something like that, big number, and co-founder here at Platform9. You've been around for a while at this game, man. We talked about OpenStack, that project; we interviewed at one of their events. So OpenStack was the beginning of this new revolution. I remember the early days: it wasn't supposed to be an alternative to Amazon, but it was a way to do more cloud, cloud native. I think we had a Clouderati team at that time; we would joke, you know, about the dream. It's happening now, now at Platform9. You guys have been doing this for a while. What are you most excited about as the chief architect? What did you guys double down on? What did you guys pivot from, or did you do any pivots? Did you extend out certain areas? Cuz you guys are in a good position right now, a lot of DNA in cloud native. What are you most excited about, and what does Platform9 bring to the table for customers and for people in the industry watching this? >>Yeah, so I think our mission really hasn't changed over the years, right? It's always been about taking complex open source software, because open source software is powerful. It solves new problems, you know, every year, and you have new things coming out all the time, right? OpenStack was an example, and then Kubernetes took the world by storm. But there's always that complexity of, you know, just configuring it, deploying it, running it, operating it. And our mission has always been that we will take all that complexity and just make it, you know, easy for users to consume, regardless of the technology, right? So, the successor to Kubernetes: you know, I don't have a crystal ball, but you know, you have some indications that people are coming up with new and simpler ways of running applications. There are many projects out there; who knows what's coming next year or the year after that.
But Platform9 will be there, and we will, you know, take the innovations from the community. We will contribute our own innovations and make all of those things very consumable to customers. >>Simpler, faster, cheaper. Exactly. Always a good business model, technically, to make that happen. Yes. Yeah, I think the reining in of the chaos is key, you know. Now we have visibility into the scale. Final question before we depart this segment: what is at scale? How many clusters do you see that would be a watermark for an at-scale conversation around an enterprise? Is it workloads we're looking at, or clusters? How would you describe that? When people try to squint through and evaluate what's at scale, what's the at-scale kind of threshold? >>Yeah. And the number of clusters doesn't tell the whole story, because clusters can be small in terms of the number of nodes, or they can be large. But roughly speaking, when we say, you know, large scale cluster deployments, we're talking about maybe hundreds to thousands. >>Yeah. And final, final question: what's the role of the hyperscalers? You've got AWS continuing to do well, but they've got their core IaaS, they've got a PaaS, they're not too much putting SaaS out there. They have some SaaS apps, but mostly it's the ecosystem. They have marketplaces doing over $2 billion of transactions a year, and it's just sitting there. They're now innovating on it, but that's gonna change ecosystems. What's the role the hyperscalers play in cloud native at scale? >>The hyperscalers? >>Yeah, yeah. AWS, Azure, Google. >>You mean from a business perspective? They have their own interests that, you know, they will keep catering to. They will continue to find ways to lock their users into their ecosystem of services and APIs. So I don't think that's gonna change, right?
They're just gonna keep going. >>Well, they've got great performance. I mean, from a hardware standpoint, yes. That's gonna be key. >>Right, yes. I think the move from x86 being the dominant way and platform to run workloads is changing, right? And I think the hyperscalers really want to be in the game in terms of, you know, the new RISC-V and Arm ecosystems, the platforms. >>Yeah. Now, joking aside, Paul Maritz, when he was the CEO of VMware, when he took over, once said, and I remember it from our first year doing theCUBE: the cloud is one big distributed computer. It's hardware, and you've got software, and you've got middleware. He was kind of tongue in cheek, but really you're talking about large compute and sets of services that are essentially a distributed computer. >>Yes, exactly. >>We're back in the same game. Thank you for coming on the segment. Appreciate your time. This is Cloud Native at Scale, a special presentation with Platform9, really unpacking super cloud, Arlon, open source, and how to run large scale applications on the cloud, cloud native, for developers. I'm John Furrier with theCUBE. Thanks for watching. We'll stay tuned for another great segment coming right up. Hey, welcome back, everyone, to Supercloud 22. I'm John Furrier, host of theCUBE, here all day talking about the future of cloud. Where's it all going? Making it super. Multi-cloud is around the corner, and public cloud is winning. We've got the private cloud on premise and edge. Got a great guest here, Bhaskar Gorti, CEO of Platform9, just on the panel on Kubernetes: an enabler or blocker? Welcome back. Great to have you on. >>Good to see you again. >>So, Kubernetes: a blocker or enabler, with a question mark, I put on there. The panel was really to discuss the role of Kubernetes. Now, great conversation. Operations is impacted. What's interesting about what you guys are doing at Platform9?
Is your role there as CEO, and the company's position, kind of like the world spun into the direction of Platform9 while you're at the helm, right? >>Absolutely. In fact, things are moving very well. It was an insight to call ourselves the platform company eight years ago, right? So absolutely, whether you are doing it in public clouds or private clouds, you know, the application world is moving very fast in trying to become digital and cloud native. There are many options for you to run the infrastructure. The biggest blocking factor now is having a unified platform. And that's where we come in. >>Bhaskar, we were talking before we came on stage here about your background, and we were kind of talking about the glory days in 2000, 2001, when the first ASPs, application service providers, came out. Kind of a SaaS vibe, but that was kind of cloud-like. >>It wasn't. >>And web services started then too. So you saw that whole growth. Now fast forward 20 years later, 22 years later, to where we are now. When you look back from then to here and all the different cycles... >>In fact, you know, as we were talking offline, I was in one of those ASPs in the year 2000, where it was a novel concept to say we are providing a software and a capability as a service, right? You sign up and start using it. I think a lot has changed since then. The tooling, the tools, the technology have really skyrocketed. The app development environment has taken off exceptionally well. There are many, many choices of infrastructure now, right? So I think things are in a way the same, but also extremely different. But more importantly, now, for any company, regardless of size, to be a digital native, to become a digital company, is extremely mission critical. It's no longer a nice-to-have; everybody's on the journey somewhere. >>Everyone is going through digital transformation here, even in a so-called downturn: recession upcoming, inflation's here.
It's interesting. This is the first downturn in the history of the world where the hyperscale clouds have been pumping on all cylinders as an economic input. And if you look at the tech trends, GDP's down, but not tech. Nope. Cuz the pandemic showed everyone digital transformation is here, and more spend and more growth is coming, even in tech. So this is a unique factor, which proves that digital transformation is happening, and every company will need a super cloud. >>Everyone, every company, regardless of size, regardless of location, has to modernize their infrastructure. And modernizing infrastructure is not just, you know, new servers and new application tools. It's your approach, how you're serving your customers, how you're bringing agility into your organization. I think that is becoming a necessity for every enterprise to survive. >>I wanna get your thoughts on super cloud, because one of the things Dave Vellante and I wanted to do with super cloud, and calling it that, was... I personally, and I know Dave as well, he can speak for himself, we didn't like multi-cloud. I mean, not because Amazon said don't call things multi-cloud; it just didn't feel right. I mean, everyone has multiple clouds by default. If you're running productivity software, you have Azure and Office 365. But it wasn't truly distributed. It wasn't truly decentralized. It wasn't truly cloud enabled. It felt like the market wasn't ready yet. Yet public cloud's booming, and on premise, private cloud and edge are much more, you know, dynamic, more real. >>Yeah. I think the reason why we think super cloud is a better term than multi-cloud: multi-cloud is more than one cloud, but they're disconnected. Okay, you have a productivity cloud, you have a Salesforce cloud, you may have, everyone has, an internal cloud, right? But they're not connected. So you can say, okay, it's more than one cloud. So it's, you know, multi-cloud.
But super cloud is where you are actually trying to look at this holistically. Whether it is on-prem, whether it is public, whether it's at the edge, at a store, at the branch, you are looking at this as one unit. And that's where we see the term super cloud as more applicable. Because what are the qualities that you require if you're in a super cloud, right? You need choice of infrastructure, but at the same time you need a single pane, a single platform, for you to build your innovations on, regardless of which cloud you're doing it on, right? So I think super cloud is actually a more tightly integrated, orchestrated management philosophy, we think. >>So let's get into some of the super cloud type trends that we've been reporting on. Again, the purpose of this event is as a pilot to get the conversations flowing with the influencers like yourselves who are running companies and building products, and the builders. Amazon and Azure are doing extremely well. Google's coming up third in public cloud. We see the use cases, on-premises use cases. Kubernetes has been an interesting phenomenon, because it's become a developer-side thing a little bit, but a lot of ops people love Kubernetes. It's really more of an ops thing. You mentioned OpenStack earlier. Kubernetes kind of came out of that OpenStack era; we needed an orchestration. And then containers had a good shot with Docker; they re-pivoted the company. Now they're all in on open source. So you've got containers booming and Kubernetes as a new layer there. >>What's the take on that? What does that really mean? Is that a new de facto enabler? >>It is here, it's here for sure. Every enterprise is somewhere on the journey. And you know, most companies, 70-plus percent of them, have one, two, three container-based, Kubernetes-based applications now being rolled out. So it's very much here. It is in production at scale by many customers.
And the beauty of it is, yes, open source, but the biggest gating factor is the skill set. And that's where we have a phenomenal engineering team, right? So it's one thing to buy a tool and... >>Just to be clear, you're a managed service for Kubernetes? >>We provide a software platform for cloud acceleration as a service, and it can run anywhere. It can run in public, private. We have customers who do it in truly multi-cloud environments. It runs on the edge; it runs in stores, about thousands of stores in a retailer. So we provide that, and also, for specific segments where data sovereignty and data residency are key regulatory reasons, we also run on-prem as an air-gapped version. >>Can you give an example of how you guys are deploying your platform to enable a super cloud experience for your customer? >>Right. So I'll give you two different examples. One is a very large networking company, a public networking company. They have hundreds of products, hundreds of R&D teams that are building different products. And if you look at a few years back, each one was doing it on different platforms, but they really needed to bring the agility. And they've worked with us now over three years, where we are their build, test, dev, prod platform that all their products are built on, right? And it has dramatically increased their agility to release new products. Number two, it's actually a lights-out operation. In fact, the customer says it's like the Maytag service person, cuz we provide it as a service, and it barely takes one or two people to maintain it for them. >>So it's kind of like an SRE vibe: one person managing a... >>Large 4,000 engineers building infrastructure... >>On their tools. >>Whatever they want, on their tools. They're using whatever app development tools they use, but they use our platform. >>What benefits are they seeing? Are they seeing speed? >>Speed, definitely. Okay, definitely they're seeing speed.
And uniformity, because now they're able to build, so their customers who are using product A and product B are seeing a similar set of tools being used. >>So a big problem that's coming out of this super cloud event, and we heard it all here: ops and security teams, cuz they're kind of part of one thing. Ops and security specifically need to catch up speed-wise. Are you delivering that value to ops and security? Right? >>So we work with ops and security teams and infrastructure teams, and we layer on top of that. We have like a platform team. If you think about it, depending on where you have data centers, where you have infrastructure, you have multiple teams, okay? But you need a unified platform. >>Who's your buyer? >>Our buyer is usually, you know, the product divisions of companies that are looking at it, or the CTO would be a buyer for us; functionally, the CIO, definitely. So it's somewhere in the DevOps-to-infrastructure space. But the ideal one, which we are beginning to see now, is that many large corporations are really looking at it as a platform and saying: we have a platform group on which any app can be developed, and it runs on any infrastructure. So, the platform engineering teams. >>So you're working two sides of that coin: you've got the dev side and then... >>And then the infrastructure side. >>Okay. Another customer example I'll give, which I would say is kind of the edge, the store. So they have thousands of stores. Retail, you know, a food retailer, right? They have thousands of stores around the globe, 50,000, 60,000. And they really want to enhance the customer experience that happens when you either order the product, or go into the store and pick up your product, or buy or browse or sit there. They have applications that were written in the nineties, and then they have very modern AI/ML applications today.
They want something that will not require sending an IT person to install a rack in the store, but they can't move everything to the cloud, because the store operations have to be local. The menu changes based on location; it's classic edge. >>It's classic edge, yeah. >>Right? They can't send IT people to go install racks and servers, and they can't send software people to go install the software, and any change you wanna push through is, you know, a truck roll. So they've been working with us, where all they do is ship, depending on the size of the store, one or two or three little servers with instructions. >>When you say little servers, how big? Like a box, like a small little box? >>Right. And all the person in the store has to do is what you and I do at home when we get a router: connect the power, connect the internet, and turn the switch on. And from there, we pick it up. >>Yep. >>We provide the operating system, everything, and then the applications are put on it. And so that dramatically brings up the velocity for them. They manage thousands of... >>True plug and play. >>True plug and play, thousands of stores. They manage it centrally; we do it for them, right? So that's another example, on the edge. Then we have some customers who have both a large private presence and one of the public clouds, okay? But they want to have the same platform layer of orchestration and management that they can use regardless of the location. >>So you guys have got some success. Congratulations. Got some traction there. It's awesome. The question I want to ask you, and it's come up, is: what is truly cloud native? Cuz there's lift and shift to the cloud... >>That's not cloud native. >>Then there's cloud native. Cloud native seems to be the driver for the super cloud. How do you talk to customers? How do you explain, when someone asks, what's cloud native and what isn't? >>Right.
Look, I think, first of all, the best place to look at the definition, and the attributes and characteristics of what is truly cloud native, is the CNCF, the Cloud Native Computing Foundation. And I think it's very well documented there. >>KubeCon, of course; Detroit's coming. >>So it's already there, right? So we follow that very closely. I think just lifting and shifting your 20-year-old application onto a data center somewhere is not cloud native. Okay? You can't just move it to the cloud; you have to rewrite and redevelop your application and business logic using modern tools, hopefully more open source. And I think that's what cloud native is, and we are seeing a lot of our customers on that journey. Now, everybody wants to be cloud native, but it's not that easy, okay? Because, I think, first of all, skill set is very important. Then uniformity of tools: there are so many tools out there, thousands and thousands, that you could spend all your time figuring out which tool to use. Okay? So I think the complexity is there, but the business benefits of agility and uniformity and customer experience are truly being realized. >>And I'll give you an example. I don't know how cloud native they are, right? And they're not a customer of ours. But you order pizzas, you do, right? If you just watch the pizza industry, how Domino's actually increased their share and mind share and wallet share, it was not because they were making better pizzas or not; I don't know anything about that. But the whole experience of how you order, how you watch what's happening, how it's delivered: they were a pioneer in it. To me, those are the kinds of customer experiences that cloud native can provide. >>Being agile and having that flow through to the application changes what the expectations are for the customer. >>The customer's expectations change, right? Once you get used to a better customer experience, you learn. >>To wrap it up, I wanna just get your perspective again.
One of the benefits of chatting with you here, and having you as part of Supercloud 22, is that you've seen many cycles and you have a lot of insights. I want to ask you: given your career, where you've been and what you've done, and now as CEO of Platform9, how would you compare what's happening now with other inflection points in the industry? And again, you've been an entrepreneur, you sold your company to Oracle, you've seen the big companies, you've seen the different waves. What's going on right now? Put this moment in time around super cloud into context. >>Sure. I think, as you said, a lot of battle scars: being in an ASP, being in a real-time software company, being in large enterprise software houses and through transformations. I've been on the app side, I did infrastructure, and then I tried to build our own platforms. I've gone through all of this myself, with a lot of lessons learned along the way. I think this is an event which is happening now, for companies to go through, to become cloud native and digitalize. If I were to look back at some parallels to the tsunami that's going on, a couple of parallels come to me. One is Y2K, which was forced upon us: everybody around the world had to have a plan, a strategy, and an execution for Y2K. I would say the next big thing was e-commerce. I think e-commerce has been pervasive right across all industries. >>And disruptive. >>And disruptive, extremely disruptive. If you did not adapt and accelerate your e-commerce initiative, it was an existential question. Yeah. I think we are at that pivotal moment now, with companies trying to become digital and cloud native. You know, that is what I see happening there. >>I think e-commerce is interesting, and just to riff with you on that: it's disrupting and refactoring the business models.
I think that's something that's coming out of this: it's not just completely changing the game, it's changing how you operate. >>How you think and how you operate. See, if you think about the early days of e-commerce, just putting up a shopping cart made you an e-commerce company or an e-retailer, right? I think it's the same thing now. This is a fundamental shift in how you're thinking about your business. How are you gonna operate? How are you gonna service your customers? I think it requires that; just lift and shift is not gonna work. >>Bhaskar, thank you for coming on, spending the time to come in and share with our community, and being part of Supercloud 22. We really appreciate it. We're gonna keep this open; we're gonna keep this conversation going even after the event, to open up and look at the structural changes happening now, and continue to look at it in the open, in the community. And we're gonna keep this going for a long, long time as we get answers to the problems that customers are looking for with cloud computing. I'm John Furrier with Supercloud 22 on theCUBE. Thanks for watching. >>Thank you. Thank you. >>Hello and welcome back. This is the end of our program, our special presentation with Platform9 on cloud native at scale, enabling the super cloud. We're continuing the theme here. You heard the interviews: super cloud and its challenges, and new opportunities and solutions from players like Platform9 and others, with Arlon. This is really about the edge, situations on the internet and managing the edge across multiple regions, avoiding vendor lock-in. This is what this new super cloud is all about: the business consequences we heard about, and the wide-ranging conversations around what it means for open source and the complexity problem, all being solved. I hope you enjoyed this program.
There are a lot of moving pieces and things to configure with a cloud native install, and all of this is about making it easier for you, here with super cloud and, of course, Platform9 contributing to that. Thank you for watching.
Once you get used to a better customer experience, One of the benefits of chatting with you here and been on the app side, I did the infrastructure right and then tried to build our If you did not adapt and adapt and accelerate I think that that e-commerce is interesting and I think just to riff with you on that is that it's disrupting How are you gonna service your Nascar, thank you for coming on, spending the time to come in and share with our community and being part of Thank you. I hope you enjoyed this program.
Madhura Maskasky, Platform9 | Cloud Native at Scale
>>Hello everyone. Welcome to theCUBE here in Palo Alto, California for a special program on cloud native at scale, enabling next generation cloud, or super cloud, for modern application cloud native developers. I'm John Furrier, host of theCUBE. My pleasure to have here with me Madhura Maskasky, co-founder and VP of product at Platform9. Thanks for coming in today for this cloud native at scale conversation. >>Thank you for having me. >>So cloud native at scale is something that we're talking about because we're seeing the next level of mainstream success of containers, Kubernetes, and cloud native development, basically DevOps in the CI/CD pipeline. It's changing the landscape of infrastructure as code, it's accelerating the value proposition, and the super cloud, as we call it, has been getting a lot of traction because this next generation cloud is looking a lot different, but kind of the same as the first generation. What's your view on super cloud as it fits into cloud native as it scales up? >>Yeah. You know, I think what's interesting, and I think the reason why super cloud is a really good and fitting term for this, and I know my CEO was chatting with you as well and mentioned this too, is that there needs to be a different term than just multi-cloud or cloud. And the reason is that as cloud native and cloud deployments have scaled, I think we've reached a point now where instead of having the traditional data center style model, where you have a few large distributions of infrastructure and workload at a few locations, the model is kind of flipped around, right? You have a large number of micro sites. These micro sites could be your public cloud deployment, your private on-prem infrastructure deployments, or your edge environment, right? And every single enterprise, every single industry is moving in that direction. And so you've gotta capture that with a terminology that indicates the scale and complexity of it.
And so I think super cloud is an appropriate term >>For that. So you brought up a couple things I want to dig into. You mentioned edge nodes. We're seeing edge nodes being the next kind of area of innovation, mainly because it's just popping up everywhere. And that's just the beginning; we don't even know what's around the corner. You got buildings, you got IoT, OT and IT kind of coming together. But you've also got this idea of regions; global infrastructure is a big part of it. I just saw some news around Cloudflare shutting down a site here. There's policies being made at scale. These are new challenges there. Can you share, because you've gotta have edge. So hybrid cloud is a winning formula, everybody knows that, it's a steady state. Yeah. But going across multiple clouds brings in this new un-engineered area that hasn't been done yet. Spanning clouds. People say they're doing it, but you start to see the toe in the water. It's happening, it's gonna happen, and it's only gonna get accelerated with the edge and beyond, globally. So I have to ask you, what are the technical challenges in doing this? Because there are some business consequences as well, but there are technical challenges. Can you share your view on what the technical challenges are for the super cloud across multiple edges and regions? >>Yeah, absolutely. So I think, you know, in the context of this term of super cloud, it's sometimes easier to visualize things in terms of two axes, right? On one end you can think of the scale in terms of just pure number of nodes that you have deployed, number of clusters in the Kubernetes space. And then on the other axis you would have your distribution factor, right? Which is: do you have these tens of thousands of nodes in one site, or do you have them distributed across tens of thousands of sites with one node at each site? Right? And if you have just one flavor of this, there is enough complexity, but it's potentially manageable.
But when you are expanding on both these axes, you really get to a point where that scale needs some well thought out, well structured solutions to address it, right? A combination of homegrown tooling along with your favorite distribution of Kubernetes is not a strategy that can help you in this environment. It may help you when you have one of these, or when your scale is not at that level. >>Can you scope the complexity? Because, I mean, I hear a lot of moving parts going on there; the technology's also getting better. We're seeing cloud native become successful. There's a lot to configure, there's a lot to install. Can you scope the scale of the problem? Because we're talking about at-scale challenges here. >>Yeah, absolutely. And I like to call it, you know, the problem that the scale creates. There are various problems, but one way to think about it is the 'it works on my cluster' problem, right? I come from an engineering background, and there's a famous saying between engineers, QA, and the support folks, right? Which is 'it works on my laptop': I tested this change, everything was fantastic, it worked flawlessly on my machine; on production, it's not working. And the exact same problem now happens in these distributed environments, but at massive scale, right? Which is that developers test their applications, et cetera, within the sanctity of their sandbox environments. But once you expose that change to the wild world of your production deployment, right? And the production deployment could be going to the radio cell tower at the edge location where a cluster is running, or it could be sending these applications and having them run at my customer site, where they might not have configured that cluster exactly the same way as I configured it, or they configured the cluster, right?
But maybe they didn't deploy the security policies, or they didn't deploy the other infrastructure plugins that my app relies on. All of these various factors add their own layer of complexity, and there really isn't a simple way to solve that today. And that is just one example of an issue that happens. I think another whole new ball game of issues comes in the context of security, right? Because when you are deploying applications at scale in a distributed manner, you gotta make sure someone's job is on the line to ensure that the right security policies are enforced regardless of that scale factor. So I think that's another example of problems that >>Occur. Okay. So I have to ask about scale, because there are a lot of multiple steps involved when you see the success of cloud native. You know, you see some experimentation. They set up a cluster, say it's containers and Kubernetes, and then they say, okay, we got this, we can figure it out. And then they do it again and again; they call it day two. Some people call it day one, day two operations, whatever you call it. Once you get past the first initial thing, then you gotta scale it. Then you're seeing security breaches, you're seeing configuration errors. This seems to be where the hotspot is, when companies transition from 'I got this' to 'oh no, it's harder than I thought at scale.' Can you share your reaction to that and how you see this playing out? >>Yeah, so, you know, I think it's interesting. There are multiple problems that occur when the two factors of scale, as we talked about, start expanding. I think one of them is what I like to call the 'it works fine on my cluster' problem, which, back when I was a developer, we used to call the 'it works on my laptop' problem: you have your perfectly written code that is operating just fine on your machine, your sandbox environment.
But the moment it runs in production, it comes back with P0s and P1s from support teams, et cetera, and those issues can be really difficult to triage, right? And so in the Kubernetes environment, this problem kind of multiplies; it escalates to a higher degree, because you have your sandbox developer environments, they have their clusters, and things work perfectly fine in those clusters, because these clusters are typically handcrafted or a combination of some scripting and handcrafting.
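A rough sketch of the "works on my cluster" failure mode being described here: compare what an app expects from a cluster against what a target cluster actually provides, and report the drift before the app ships to that site. The profile format and field names below are invented for illustration; they are not Arlon's real schema.

```python
# Hypothetical illustration of configuration drift between a handcrafted
# dev cluster and a production edge cluster. All names and fields are
# invented for this sketch.

def find_drift(required: dict, actual: dict) -> list[str]:
    """Return human-readable differences between what an app needs
    and what a target cluster actually provides."""
    problems = []
    for plugin in required.get("plugins", []):
        if plugin not in actual.get("plugins", []):
            problems.append(f"missing plugin: {plugin}")
    for policy, value in required.get("policies", {}).items():
        if actual.get("policies", {}).get(policy) != value:
            problems.append(f"policy mismatch: {policy}")
    return problems

# The app was tested on a handcrafted sandbox cluster that had everything...
app_needs = {
    "plugins": ["ingress", "cert-manager"],
    "policies": {"pod_security": "restricted"},
}
# ...but the customer's edge cluster was configured differently.
edge_cluster = {
    "plugins": ["ingress"],
    "policies": {"pod_security": "privileged"},
}

print(find_drift(app_needs, edge_cluster))
# Both differences are caught before the change ships to that site,
# instead of surfacing as a hard-to-triage production failure.
```

The point of the sketch is that the check is mechanical once the requirements are written down declaratively; the hard part at scale is that handcrafted clusters never wrote them down.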
And what arwan is, it's an open source project and it is a tool, it's a Kubernetes native tool for complete end-to-end management of not just your clusters, but your clusters. All of the infrastructure that goes within and along the sites of those clusters, security policies, your middleware plugins, and finally your applications. So what Arlan lets you do in a nutshell is in a declarative way, it lets you handle the configuration and management of all of these components in at scale. >>So what's the elevator pitch simply put for what dissolves in, in terms of the chaos you guys are reigning in, what's the, what's the bumper sticker? Yeah, >>What would it do? There's a perfect analogy that I love to reference in this context, which is think of your assembly line, you know, in a traditional, let's say, you know, an auto manufacturing factory or et cetera, and the level of efficiency at scale that that assembly line brings, right? Lon. And if you look at the logo we've designed, it's this funny little robot, and it's because when we think of lon, we think of these enterprise large scale environments, you know, sprawling at scale creating chaos because there isn't necessarily a well thought through, well-structured solution that's similar to an assembly line, which is taking each component, you know, addressing them, manufacturing, processing them in a standardized way, then handing to the next stage where again, it gets, you know, processed in a standardized way. And that's what Alon really does. That's like the deliver pitch. If you have problems of scale of managing your infrastructure, you know, that is distributed. Arlon brings the assembly line level of efficiency and consistency for those. >>So keeping it smooth, the assembly line, things are flowing. See c i CD pipelining. Exactly. So that's what you're trying to simplify that ops piece for the developer. I mean, it's not really ops, it's their ops is coding. >>Yeah. 
Not just the developer, the operations folks as well, right? Because developers are responsible for one picture of that layer, which is my apps, and then maybe the middleware of applications that they interface with. But then they hand it over to someone else, who's then responsible to ensure that these apps are secured properly, that logs are being collected properly, that monitoring and observability are integrated. And so it solves problems for both those teams. >>Yeah, it's DevOps. So the DevOps is the cloud native developer, and they kind of have to set policies. Is that where the declarative piece comes in? Is that why that's important? >>Absolutely. Yeah. And, you know, Kubernetes really introduced, or elevated, this declarative management, right? Because Kubernetes clusters, or your specifications of components that go in Kubernetes, are defined in a declarative way, and Kubernetes always keeps that state consistent with your defined state. But when you go outside of that world of a single cluster, and when you actually talk about defining the clusters, or defining everything that's around them, there really isn't a solution that does that today. And so Arlon addresses that problem at the heart of it, and it does that using existing open source, well-known solutions. >>Madhura, I want to get into the benefits, what's in it for me as the customer developer, but I want to finish this out real quick and get your thoughts. You mentioned open source. Why open source? What's the current state of the product? You run the product group over there at Platform9. Is it open source? And you guys have a product that's commercial. Can you explain the open source dynamic? And first of all, why open source? And what is the consumption? I mean, open source is great, people want open source, they can download it, look at the code, but maybe they wanna buy the commercial.
So I'm assuming you have that thought through. Can you share that open source and commercial relationship? >>Yeah, I think, you know, starting with why open source: we as a company, one of the things that's absolutely critical to us is that we take mainstream open source technology components and then we make them available to our customers at scale through either a SaaS model or an on-prem model, right? So as a company, or a startup, that benefits in a massive way from this open source economy, it's only right, I think in my mind, that we do our part of the duty and contribute back to the community that feeds us. And so we have always held that strongly as one of our principles. And we have created and built independent products, starting all the way with Fission, which was a serverless product we had built, to various other examples that I could give. But that's one of the main reasons why open source, and also because we want the community to really engage with us firsthand on this problem, which is very difficult to achieve if your product is behind a wall, behind a black box. >>Well, and that's what the developers want too. And what we're seeing in reporting with super cloud is the new model of consumption is 'I wanna look at the code and see what's in there.' That's right. And then also, if I want to use it, I'll do it. Great. That's open source, that's the value. But then at the end of the day, if I wanna move fast, that's when people buy in. So it's a new kind of freemium, I guess, business model. But that's the benefit of open source. This is why standards and open source are growing so fast: you have that confluence of a way for folks to try before they buy, but also to actually kind of date the application, if you will.
We, you know, Adrian Karo uses the dating metaphor: 'Hey, I wanna check it out first before I get married.' Right? And that's what open source is. So this is the new way; this is how people are selling. This is not just open source, this is how companies are selling. >>Absolutely. Yeah. You know, I think two things. One is just that this cloud native space is so vast that if you're building a closed solution, sometimes there's also a risk that it may not apply to every single enterprise's use cases. And so having it open source gives them an opportunity to extend it, expand it, to make it fit their use case if they choose to do so, right? But at the same time, what's also critical to us is that we are able to provide a supported version of it with an SLA that's backed by us, and a SaaS-hosted version of it as well, for those customers who choose to go that route, once they have used the open source version and loved it and want to take it to scale and into production, and need a partner to collaborate with who can support them for that production environment. >>I have to ask you now, let's get into what's in it for the customer. I'm a customer. Why should I be enthused about Arlon? What's in it for me? You know, 'cause if I'm not enthused about it, I'm not gonna be confident, and it's gonna be hard for me to get behind this. Can you share your enthusiastic view of why I should be enthused about Arlon if I'm a customer? >>Yeah, absolutely. And so there are multiple enterprises that we talk to, many of them our customers, where this is a very typical story that you will hear, which is: we have a Kubernetes distribution, it could be on premise, it could be public cloud native Kubernetes, and then we have our CI/CD pipelines that are automating the deployment of applications, et cetera.
And then there's this gray zone. And the gray zone is, well, before your CI/CD pipelines can deploy the apps, somebody needs to do all of that groundwork of defining those clusters and properly configuring them. And these things start by being done by hand. And then as you scale, what enterprises would typically do today is build their own homegrown DIY solutions for this. >>I mean, the number of folks that I talk to that have built Terraform automation, and then some of those key developers leave... So it's a typical open source or typical DIY challenge. And the reason that they're writing it themselves is not because they want to. I mean, of course technology is always interesting to everybody, but it's because they can't find a solution out there that perfectly fits the problem. And so that's the pitch. I think folks would be delighted. The folks that we've spoken with have been absolutely excited and have shared that this is a major challenge they have today, because 'we have a few hundred clusters on, say, Amazon, and we wanna scale them to a few thousand, but we don't think we are ready to do that, and this will give us the ability.' >>Yeah, I think people are scared. No, I won't say scared, that's a bad word; maybe I should say they feel nervous, because at scale small mistakes can become large mistakes. This is something that is concerning to enterprises, and I think this is gonna come up at KubeCon this year, where enterprises are gonna say, okay, I need to see SLAs, I wanna see track record, I wanna see other companies that have used it. Yeah. How would you answer that question, or challenge? You know, 'Hey, I love this, but are there any guarantees? What's the SLA? I'm an enterprise, I'm tight, you know, I love the open source kind of free, fast and loose, but I need hardened code.'
>>Yeah, absolutely. So, two parts to that, right? One is, Arlon leverages existing open source components, products that are extremely popular; two specifically. One is, Arlon uses Argo CD, which is probably one of the highest rated and most used CD open source tools out there, right? It's created by folks that are now part of the Intuit team, a really brilliant team, and it's used at scale across enterprises. That's one. Second is, Arlon also makes use of Cluster API, CAPI, which is a subproject, right, for lifecycle management of clusters. So there are enough community users, et cetera, around these two products, or open source projects, that they will find Arlon to be right up their alley, because they're already comfortable and familiar with Argo CD. Now Arlon just extends the scope of what Argo CD can do. And so that's one. And then the second part goes back to your point about comfort. And that's where Platform9 has a role to play, which is, when you are ready to deploy Arlon at scale, because you've been playing with it in your dev/test environments and you're happy with what you get with it, then Platform9 will stand behind it and provide that SLA. >>And what's been the reaction from customers you've talked to, Platform9 customers that are familiar with Argo and then Arlon? What's been some of the feedback? >>Yeah, I think the feedback's been fantastic. I mean, I can give you examples of customers where, initially, when you're telling them about your entire portfolio of solutions, it might not strike a chord right away. But then we start talking about Arlon, and we talk about the fact that it uses Argo CD, and they start opening up. They say, 'We have standardized on Argo, and we have built these components homegrown; we would be very interested. Can we co-develop? Does it support these use cases?' So we've had that kind of validation.
We've had validation all the way back at the beginning of Arlon, before we even wrote a single line of code, saying 'this is something we plan on doing,' and the customer said, 'If you had it today, I would've purchased it.' So it's been really great validation. >>All right. So the next question is, what is the solution for the customer? If I asked you, 'Look, I'm so busy, my team's overworked, I got a skills gap, I don't need another project, I'm so tied up right now and I'm just chasing my tail,' how does Platform9 help me? >>Yeah, absolutely. So I think, you know, one of the core tenets of Platform9 has always been that we try to bring that public cloud-like simplicity by hosting this, and a lot of similar tools, in a SaaS-hosted manner for our customers, right? So our goal behind doing that is taking away, or trying to take away, all of that complexity from customers' hands and offloading it to our hands, giving them that full white-glove treatment, as we call it. And so from a customer's perspective, something like Arlon will integrate with what they have, so they don't have to rip and replace anything. In fact, in future versions it may even discover the clusters that you have today and give you an inventory. >>So if customers have clusters that are growing, that's a sign to call you guys. >>Absolutely. Either they have massive large clusters, right, that they wanna split into smaller clusters but they're not comfortable doing that today, or they've done that already on, say, public cloud or otherwise, and now they have management challenges. >>So especially operationalizing the clusters, whether they want to kind of reset everything, move things around and reconfigure, Yep, and/or scale out. >>That's right. Exactly. >>And you provide that layer of policy. >>Absolutely. Yes. >>That's the key value here. >>That's right.
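The policy layer just mentioned can be pictured as one declarative profile stamped out across a whole fleet of clusters, with compliance checked centrally instead of per site. The profile shape and fields below are hypothetical, purely to make the idea concrete; they are not Platform9's actual API.

```python
# Hypothetical sketch of profile-based fleet management: one declarative
# profile applies to many clusters, and out-of-policy clusters are
# reported centrally. Names and fields are invented for the sketch.

PROFILE = {
    "addons": {"ingress", "logging", "monitoring"},  # must all be present
    "max_pods_per_node": 110,                        # upper bound allowed
}

def compliant(cluster: dict) -> bool:
    """A cluster complies if it has every required add-on and
    respects the profile's pod-density limit."""
    return (PROFILE["addons"] <= cluster["addons"]
            and cluster["max_pods_per_node"] <= PROFILE["max_pods_per_node"])

# A small "fleet": one healthy store, one that drifted, one edge site
# that someone hand-tuned past the allowed limit.
fleet = {
    "store-001": {"addons": {"ingress", "logging", "monitoring"},
                  "max_pods_per_node": 100},
    "store-002": {"addons": {"ingress"},            # drifted: add-ons missing
                  "max_pods_per_node": 100},
    "edge-tower-17": {"addons": {"ingress", "logging", "monitoring"},
                      "max_pods_per_node": 250},    # violates the limit
}

out_of_policy = sorted(name for name, c in fleet.items() if not compliant(c))
print(out_of_policy)  # the clusters that need remediation
```

With hundreds or thousands of sites, this central "who is out of policy" view is the difference between a nimble ops team and the homegrown-scripts situation described earlier in the conversation.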
>>So policy-based configuration for cluster scale-up? >>Well, profile- and policy-based declarative configuration and lifecycle management for clusters. >>If I asked you how this enables super cloud, what would you say to that? >>I think this is one of the key ingredients of super cloud, right? If you think about a super cloud environment, there are at least a few key ingredients that come to my mind that are really critical, like life-saving ingredients at that scale. One is having a really good strategy for managing that scale, going back to the assembly line, in a very consistent, predictable way; that, Arlon solves. Then you need to complement that with the right kind of observability and monitoring tools at scale, right? Because ultimately issues are gonna happen, and you're gonna have to figure out how to solve them fast. And Arlon, by the way, also helps in that direction, but you also need observability tools. And then, especially if you're running on the public cloud, you need some cost management tools. In my mind, these three things are the most necessary ingredients to make super cloud successful, and Arlon flows into one of them. >>Okay, so now the next level is, okay, that makes sense. Under the covers, so to speak, under the hood: how does that impact the app developers and the cloud native modern application workflows? Because the impact to me seems like the apps are gonna be impacted. Are they gonna be faster, stronger? I mean, what's the impact, if you do all those things as you mentioned? What's the impact on the apps? >>Yeah, the impact is that your apps are more likely to operate in production the way you expect them to, because the right checks and balances have gone through, and any discrepancies have been identified prior to those apps, prior to your customer, running into them, right? Because developers run into this challenge today where there's a split responsibility, right?
I'm responsible for my code, I'm responsible for some of these other plugins, but I don't own the stack end to end; I have to rely on my ops counterpart to do their part, right? And so this really gives them the right tooling for >>That. So this is actually a great, relevant point. You know, as cloud becomes more scalable, you're starting to see this fragmentation: gone are the days of the full-stack developer, moving to more specialized roles. But this is a key point, and I have to ask you, because if this Arlon solution takes place, as you say, and the apps are gonna do what they're designed to do, the question is: what does the current pain look like? Are the apps breaking? What are the signals to the customer, yeah, that they should be calling you guys up and implementing Arlon, Argo, and all the other goodness to automate? What are some of the signals? Is it downtime? Is it failed apps? Is it latency? What are some of the things, yeah, absolutely, that would be indications that things are effed up a little bit? >>Yeah. More frequent downtimes; downtimes that take longer to triage, so your mean time to resolution, et cetera, is escalating or growing larger, right? Like, we have environments of customers where they have a number of folks in the field that have to take these apps and run them at customer sites. That's one of our partners, and they're extremely interested in this, because of the rate of failures they're encountering in the field when they're running these apps on site, because the field is automating the clusters that run on those sites using their own scripts. So these are the kinds of challenges, and those are the pain points, which is: if you're looking to reduce your mean time to resolution, if you're looking to reduce the number of failures that occur on your production site, that's one.
And second, if you're looking to manage these at scale environments with a relatively small, focused, nimble ops team, which has an immediate impact on your budget. So those are, those are the signals. >>This is the cloud native at scale situation, the innovation going on. Final thought is your reaction to the idea that if the world goes digital, which it is, and the confluence of physical and digital coming together, and cloud continues to do its thing, the company becomes the application. Not where it used to be, supporting the business, you know, the back office and the terminals and some PCs and handhelds. Now, if technology's running the business, the business is the business. Yeah. The company's the application. Yeah. So it can't be down. So there's a lot of pressure on CISOs and CIOs now, and boards are saying, how is technology driving the top line revenue? That's the number one conversation. Yep. Do you see the same thing? >>Yeah, it's interesting. I think there are multiple pressures at the CXO, CIO level, right? One is that there needs to be that visibility and clarity and guarantee, almost, that, you know, the technology that's gonna drive your top line is gonna drive that in a consistent, reliable, predictable manner. And then second, there is the constant pressure to do that while always lowering your costs of doing it, right? Especially when you're talking about, let's say, retailers or those kinds of large scale vendors, they many times make money by lowering the amount that they spend on, you know, providing those goods to their end customers. So I think both those factors kind of come into play, and the solution to all of them is usually a very structured strategy around automation. >>Final question. What does cloud native at scale look like to you? If all the things happen the way we want 'em to happen, the magic wand, the magic dust, what does it look like? 
>>What that looks like to me is a CIO sipping coffee at his desk; production is running absolutely smooth, and he's running that with a nimble team size of, at the most, a handful of folks that are just looking after things, but things are just taking care of themselves. >>And the CIO doesn't exist. There's no CISO; they're at the beach. >>Yep. >>Thank you for coming on, sharing the cloud native at scale story here on theCube. Thank you for your time. >>Fantastic. Thanks for having me. >>Okay, I'm John Furrier here for the special program presentation, special programming: Cloud Native at Scale, Enabling Super Cloud Modern Applications with Platform9. Thanks for watching.
Platform9, Cloud Native at Scale
>>Hello, welcome to theCube here in Palo Alto, California for a special presentation on Cloud Native at Scale, Enabling Super Cloud Modern Applications with Platform9. I'm John Furrier, your host of theCube. We have a great lineup of three interviews we're streaming today. Madhura Maskasky, who's the co-founder and VP of Product at Platform9: she's gonna go into detail around Arlon, the open source product, and also the value of what this means for infrastructure as code and for cloud native at scale. Bich Le, the chief architect of Platform9, a Cube alumni going back to the OpenStack days: he's gonna go into why Arlon, why this infrastructure as code implication, what it means for customers, and the implications in the open source community and where that value is. Really great, wide ranging conversation there. And of course, Bhaskar Gorti, the CEO of Platform9, is gonna talk with me about his views on Super Cloud and why Platform9 has scalable solutions to bring cloud native at scale. So enjoy the program. See you soon. >>Hello everyone, welcome to theCube here in Palo Alto, California for a special program on cloud native at scale, enabling next generation cloud, or super cloud, for modern application cloud native developers. I'm John Furrier, host of theCube. A pleasure to have here Madhura Maskasky, co-founder and VP of Product at Platform9. Thanks for coming in today for this cloud native at scale conversation. >>Thank you for having me. >>So, cloud native at scale, something that we're talking about because we're seeing the next level of mainstream success of containers, Kubernetes, and cloud native development, basically DevOps in the CI/CD pipeline. It's changing the landscape of infrastructure as code, it's accelerating the value proposition, and the super cloud, as we call it, has been getting a lot of traction, because this next generation cloud is looking a lot different, but kind of the same as the first generation. 
What's your view on super cloud as it fits into cloud native as it scales up? 
People say they're doing it, but you start to see the toe in the water; it's happening, it's gonna happen. It's only gonna get accelerated with the edge and beyond, globally. So I have to ask you, what are the technical challenges in doing this? Because there are business consequences as well, but there are technical challenges. Can you share your view on what the technical challenges are for the super cloud, or across multiple edges and regions? >>Yeah, absolutely. So I think, you know, in the context of this term of super cloud, I think it's sometimes easier to visualize things in terms of two axes, right? I think on one end you can think of the scale in terms of just pure number of nodes that you have deployed, the number of clusters in the Kubernetes space. And then on the other axis you would have your distribution factor, right? Which is, do you have these tens of thousands of nodes in one site, or do you have them distributed across tens of thousands of sites with one node at each site? Right? And if you have just one flavor of this, there is enough complexity, but it's potentially manageable. But when you are expanding on both these axes, you really get to a point where that scale really needs some well thought out, well structured solutions to address it, right? A combination of homegrown tooling along with your, you know, favorite distribution of Kubernetes is not a strategy that can help you in this environment. It may help you when you have one of these, or when your scale is not at that level. >>Can you scope the complexity? Because, I mean, I hear a lot of moving parts going on there; the technology's also getting better. We're seeing cloud native become successful. There's a lot to configure, there's a lot to install. Can you scope the scale of the problem? Because we're talking about at scale challenges here. >>Yeah, 
Absolutely. And I think, you know, I like to call it, you know, the problem that the scale creates. You know, there are various problems, but I think one way to think about it is, you know, the "it works on my cluster" problem, right? So, you know, I come from an engineering background, and there's a famous saying between engineers and QA and the support folks, right? Which is, "it works on my laptop": I tested this change, everything was fantastic, it worked flawlessly on my machine; on production, it's not working. The exact same problem now happens in these distributed environments, but at massive scale, right? Which is that, you know, developers test their applications, et cetera, within the sanctity of their sandbox environments. But then you expose that change to the wild world of your production deployment, right? And the production deployment could be going to the radio cell tower at the edge location where a cluster is running, or it could be sending, you know, these applications and having them run at my customer site, where they might not have configured that cluster exactly the same way as I configured it; or they configured the cluster, right, but maybe they didn't deploy the security policies, or they didn't deploy the other infrastructure plugins that my app relies on. All of these various factors are their own layer of complexity, and there really isn't a simple way to solve that today. And that is just, you know, one example of an issue that happens. I think another, you know, whole new ball game of issues comes in the context of security, right? Because when you are deploying applications at scale in a distributed manner, you gotta make sure someone's job is on the line to ensure that the right security policies are enforced regardless of that scale factor. So I think that's another example of problems that occur. >>Okay. 
So I have to ask about scale, because there are a lot of multiple steps involved when you see the success of cloud native. You know, you see some, you know, some experimentation. They set up a cluster, say it's containers and Kubernetes, and then you say, okay, we got this, we can figure it out. And then they do it again and again. They call it day two; some people call it day one, day two operations, whatever you call it. Once you get past the first initial thing, then you gotta scale it. Then you're seeing security breaches, you're seeing configuration errors. This seems to be where the hotspot is, when companies transition from "I got this" to "oh no, it's harder than I thought" at scale. Can you share your reaction to that and how you see this playing out? >>Yeah, so, you know, I think it's interesting. There are multiple problems that occur when, you know, the two factors of scale, as we talked about, start expanding. I think one of them is what I like to call the, you know, "it works fine on my cluster" problem, which is: back when I was a developer, we used to call this the "it works on my laptop" problem, which is, you know, you have your perfectly written code that is operating just fine on your machine, your sandbox environment. But the moment it runs in production, it comes back with P0s and P1s from support teams, et cetera. And those issues can be really difficult to triage, right? And so in the Kubernetes environment, this problem kind of multiplies; it, you know, escalates to a higher degree, because you have your sandbox developer environments, they have their clusters, and things work perfectly fine in those clusters, because these clusters are typically handcrafted or a combination of some scripting and handcrafting. 
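The "works on my cluster" failure mode described here comes down to configuration drift between a handcrafted sandbox cluster and the one in production. A minimal sketch of surfacing that drift, where the cluster-config dictionaries and setting names are illustrative only, not Arlon's actual data model:

```python
# Illustrative sketch: compare a sandbox cluster's configuration against
# production to surface the drift behind "it works on my cluster".
# The keys and values below are hypothetical, not Arlon's real schema.

def find_drift(sandbox: dict, production: dict) -> dict:
    """Return settings that differ or are missing between two cluster configs."""
    drift = {}
    for key in sandbox.keys() | production.keys():
        if sandbox.get(key) != production.get(key):
            drift[key] = {"sandbox": sandbox.get(key), "production": production.get(key)}
    return drift

sandbox_cluster = {
    "kubernetes_version": "1.24",
    "network_policy": "calico",
    "pod_security_policy": "restricted",
}
production_cluster = {
    "kubernetes_version": "1.23",
    "network_policy": "calico",
    # pod_security_policy was never deployed on site -- the app's dependency is missing
}

drift = find_drift(sandbox_cluster, production_cluster)
for setting, values in sorted(drift.items()):
    print(setting, values)
```

In this toy run, the diff flags the version skew and the missing security policy while the matching network plugin stays quiet; the point is that at tens of thousands of sites, only a tool can keep doing this check, not a person.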
>>And so as you give that change to then run at your production edge location, like, say, your radio cell tower site, or you hand it over to a customer to run it on their cluster, they might not have configured that cluster exactly how you did, or they might not have configured some of the infrastructure plugins, and so things don't work. And when things don't work, triaging them becomes nightmarishly hard, right? It's just one of the examples of the problem. Another whole bucket of issues is security, which is: as you have these distributed clusters at scale, you gotta ensure someone's job is on the line to make sure that these security policies are configured properly. >>So this is a huge problem. I love that comment: "that's not happening on my system." It's the classic, you know, debugging mentality. Yeah. But at scale it's hard to do that, and it's error prone. I can see that being a problem. And you guys have a solution you're launching. Can you share what Arlon, this new product, is? What is it all about? Talk about this new introduction. >>Yeah, absolutely. Very, very excited. You know, it's one of the projects that we've been working on for some time now, because we are very passionate about this problem and just solving problems at scale, in on-prem, or in the cloud, or at edge environments. And what Arlon is: it's an open source project, and it is a tool, a Kubernetes native tool, for complete end to end management of not just your clusters, but all of the infrastructure that goes within and alongside those clusters: security policies, your middleware, plugins, and finally your applications. So what Arlon lets you do, in a nutshell, is, in a declarative way, handle the configuration and management of all of these components at scale. >>So what's the elevator pitch, simply put, for what this solves in terms of the chaos you guys are reining in? What's the bumper sticker? Yeah, what would it do? 
There's a perfect analogy that I love to reference in this context, which is: think of your assembly line, you know, in a traditional, let's say, auto manufacturing factory, et cetera, and the level of efficiency at scale that that assembly line brings, right? Arlon, and if you look at the logo we've designed, it's this funny little robot, and it's because when we think of Arlon, we think of these enterprise large scale environments, you know, sprawling at scale, creating chaos, because there isn't necessarily a well thought through, well structured solution that's similar to an assembly line, which is taking each component, you know, addressing it, manufacturing, processing it in a standardized way, then handing it to the next stage, where again it gets, you know, processed in a standardized way. And that's what Arlon really does. That's like the elevator pitch. If you have problems of scale in managing your infrastructure, you know, that is distributed, Arlon brings the assembly line level of efficiency and consistency to those. >>So keeping it smooth, the assembly line, things are flowing: CI/CD pipelining. Exactly. So that's what you're trying to do: simplify that ops piece for the developer. I mean, it's not really ops, it's their ops, it's coding. >>Yeah, and not just the developer, the ops, the operations folks as well, right? Because developers are responsible for one picture of that layer, which is my apps, and then maybe the middleware of applications that they interface with, but then they hand it over to someone else who's then responsible to ensure that these apps are secured properly, that they are logging, logs are being collected properly, monitoring and observability are integrated. And so it solves problems for both those teams. >>Yeah. It's DevOps. So the DevOps is the cloud native developers. That's right. The ops teams have to kind of set policies. Is that where the declarative piece comes in? 
Is that why that's important? >>Absolutely. Yeah. And, you know, Kubernetes really introduced, or elevated, this declarative management, right? Because, you know, Kubernetes clusters are, yeah, or your, you know, specifications of components that go in Kubernetes are defined in a declarative way, and Kubernetes always keeps that state consistent with your defined state. But when you go outside of that world of a single cluster, and when you actually talk about defining the clusters, or defining everything that's around them, there really isn't a solution that does that today. And so Arlon addresses that problem at the heart of it, and it does that using existing, well known open source solutions. >>And I do want to get into the benefits, what's in it for me as the customer, the developer, but I want to finish this out real quick and get your thoughts. You mentioned open source. Why open source? What's the current state of the product? You run the product group over at Platform9: is it open source? And you guys have a product that's commercial? Can you explain the open source dynamic? And first of all, why open source? And what is the consumption? I mean, open source is great, people want open source, they can download it, look at the code, but maybe they wanna buy the commercial. So I'm assuming you have that thought through. Can you share the open source and commercial relationship? >>Yeah, I think, you know, starting with why open source: I think it's, you know, we as a company have, you know, one of the things that's absolutely critical to us is that we take mainstream open source technology components and then we, you know, make them available to our customers at scale through either a SaaS model or an on-prem model, right? But, so as we are a company, or a startup, or a company that benefits, you know, in a massive way from this open source economy, it's only right, I think in my mind, that we do our part of the duty, right? 
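The declarative model referenced above boils down to a reconciliation loop: you declare desired state, a controller observes actual state and computes the actions needed to converge the two. A toy version of that loop, where the resource names and the action vocabulary are made up for illustration and are not Kubernetes' or Arlon's actual APIs:

```python
# Toy reconciliation loop: converge observed state toward declared state.
# Cluster names and the create/update/delete action tuples are illustrative.

def reconcile(desired: dict, observed: dict) -> list:
    """Compute the actions needed to make `observed` match `desired`."""
    actions = []
    for name, spec in desired.items():
        if name not in observed:
            actions.append(("create", name, spec))
        elif observed[name] != spec:
            actions.append(("update", name, spec))
    for name in observed:
        if name not in desired:
            actions.append(("delete", name, None))
    return actions

desired_state = {
    "cluster-east": {"nodes": 3, "version": "1.24"},
    "cluster-west": {"nodes": 5, "version": "1.24"},
}
observed_state = {
    "cluster-east": {"nodes": 3, "version": "1.23"},  # version drifted
    "cluster-old": {"nodes": 1, "version": "1.21"},   # no longer declared
}

for action in reconcile(desired_state, observed_state):
    print(action)
```

Kubernetes runs exactly this kind of loop for resources inside one cluster; the gap being described is running the same loop over the clusters themselves and everything deployed alongside them.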
And contribute back to the community that feeds us. And so, you know, we have always held that strongly as one of our principles. And we have, you know, created and built independent products, starting all the way with Fission, which was a serverless product, you know, that we had built, through various other, you know, examples that I can give. But that's one of the main reasons why open source. And also open source because we want the community to really engage with us firsthand on this problem, which is very difficult to achieve if your product is behind a wall, you know, behind a black box. >>Well, and that's what the developers want too. And what we're seeing in reporting with Super Cloud is the new model of consumption is: I wanna look at the code and see what's in there. That's right. And then also, if I want to use it, I'll do it. Great. That's open source, that's the value. But then at the end of the day, if I wanna move fast, that's when people buy in. So it's a new kind of freemium, I guess, business model. But that's the benefit of open source. This is why standards and open source are growing so fast. You have that confluence of, you know, a way for developers to try before they buy, but also actually kind of date the application, if you will. You know, Adrian Karo uses the dating metaphor, you know: hey, I wanna check it out first before I get married. Right? And that's what open source is. So this is the new, this is how people are selling. This is not just open source, this is how companies are selling. >>Absolutely. Yeah. You know, I think two things. I think one is just, you know, this cloud native space is so vast that if you're building a closed source solution, sometimes there's also a risk that it may not apply to every single enterprise's use cases. 
And so having it open source gives them an opportunity to extend it, expand it, to make it proper to their use case if they choose to do so, right? But at the same time, what's also critical to us is that we are able to provide a supported version of it with an SLA that's backed by us, and a SaaS hosted version of it as well, for those customers who choose to go that route, you know, once they have used the open source version and loved it and want to take it to scale and into production and need a partner to collaborate with, who can, you know, support them for that production environment. >>I have to ask you now, let's get into what's in it for the customer. I'm a customer. Yep. Why should I be enthused about Arlon? What's in it for me? You know, 'cause if I'm not enthused about it, I'm not gonna be confident, and it's gonna be hard for me to get behind this. Can you share your enthusiastic view of, you know, why I should be enthused about Arlon? I'm a customer. >>Yeah, absolutely. And so there are multiple, you know, enterprises that we talk to, many of them, you know, our customers, where this is a very typical story that you hear, which is: we have, you know, a Kubernetes distribution, it could be on premise, it could be public cloud native Kubernetes, and then we have our CI/CD pipelines that are automating the deployment of applications, et cetera. And then there's this gray zone. And the gray zone is: well, before your CI/CD pipelines can deploy the apps, somebody needs to do all of that groundwork of, you know, defining those clusters and, yeah, you know, properly configuring them. And these things start by being done by hand. And then as you scale, what enterprises would typically do today is they will have their homegrown DIY solutions for this. 
So it's a typical open source or typical, you know, DIY challenge. And the reason that they're writing it themselves is not because they want to. I mean, of course technology is always interesting to everybody, but it's because they can't find a solution that's out there that perfectly fits the problem. And so that's that pitch. I think Ops FICO would be delighted. The folks that we've talk, you know, spoken with, have been absolutely excited and have, you know, shared that this is a major challenge we have today because we have, you know, few hundreds of clusters on ecos Amazon, and we wanna scale them to few thousands, but we don't think we are ready to do that. And this will give us the >>Ability to, Yeah, I think people are scared. Not sc I won't say scare, that's a bad word. Maybe I should say that they feel nervous because, you know, at scale small mistakes can become large mistakes. This is something that is concerning to enterprises. And, and I think this is gonna come up at co con this year where enterprises are gonna say, Okay, I need to see SLAs. I wanna see track record, I wanna see other companies that have used it. Yeah. How would you answer that question to, or, or challenge, you know, Hey, I love this, but is there any guarantees? Is there any, what's the SLAs? I'm an enterprise, I got tight, you know, I love the open source trying to free fast and loose, but I need hardened code. >>Yeah, absolutely. So, so two parts to that, right? One is Arlan leverages existing open source components, products that are extremely popular. Two specifically. One is Arlan uses Argo cd, which is probably one of the highest and used CD open source tools that's out there. Right's created by folks that are as part of into team now, you know, really brilliant team. And it's used at scale across enterprises. That's one. Second is Alon also makes use of Cluster api cappi, which is a Kubernetes sub-component, right? For lifecycle management of clusters. 
So there is enough of, you know, a community, users, et cetera, around these two products, right, or open source projects, that will find Arlon to be right up their alley, because they're already comfortable, familiar with Argo CD. Now Arlon just extends the scope of what Argo CD can do. And so that's one. And then the second part is going back to the point of comfort. And that's where, you know, Platform9 has a role to play, which is: when you are ready to deploy Arlon at scale, because you've been, you know, playing with it in your dev/test environments and you're happy with what you get from it, then Platform9 will stand behind it and provide that SLA. >>And what's been the reaction from customers you've talked to, Platform9 customers that are familiar with Argo and then Arlon? What's been some of the feedback? >>Yeah, I think the feedback's been fantastic. I mean, I can give you examples of customers where, you know, initially, when you're telling them about your entire portfolio of solutions, it might not strike a chord right away. But then we start talking about Arlon, and we talk about the fact that it uses Argo CD, and they start opening up. They say, we have standardized on Argo and we have built these components homegrown; we would be very interested. Can we co-develop? Does it support these use cases? So we've had that kind of validation. We've had validation all the way at the beginning of Arlon, before we even wrote a single line of code, saying this is something we plan on doing, and the customer said, if you had it today, I would've purchased it. So it's been really great validation. >>All right. So the next question is, what is the solution for the customer? If I asked you: look, I'm so busy, my team's overworked, I got a skills gap, I don't need another project, I'm so tied up right now and I'm just chasing my tail; how does Platform9 help me? >>Yeah, absolutely. 
So I think, you know, one of the core tenets of Platform9 has always been that we try to bring that public cloud like simplicity by hosting, you know, this and a lot of similar tools in a SaaS hosted manner for our customers, right? So our goal behind doing that is taking away, or trying to take away, all of that complexity from customers' hands and offloading it to our hands, right? And giving them that full white glove treatment, as we call it. And so from a customer's perspective, one, something like Arlon will integrate with what they have, so they don't have to rip and replace anything. In fact, in the next versions it may even discover the clusters that you have today and, you know, give you an inventory. And that will, >>So if customers have clusters that are growing, that's a signal, correct, to call you guys? >>Absolutely. Either they have massive large clusters, right, that they wanna split into smaller clusters, but they're not comfortable doing that today, or they've done that already on, say, public cloud or otherwise, and now they have management challenges. >>So especially operationalizing the clusters, whether they want to kind of reset everything and move things around and reconfigure, yep, and/or scale out. >>That's right. Exactly. >>And you provide that layer of policy. >>Absolutely. >>Yes. That's the key value here. >>That's right. >>So, policy based configuration for cluster scale-up. >>Well, profile and policy based declarative configuration and lifecycle management for clusters. >>If I asked you how this enables Super Cloud, what would you say to that? >>I think this is one of the key ingredients to Super Cloud, right? If you think about a super cloud environment, there are at least a few key ingredients that come to my mind that are really critical. Like, they are, you know, life saving ingredients at that scale. 
One is having a really good strategy for managing that scale, you know — going back to the assembly line — in a very consistent, predictable way, which is what Arlon solves. Then you need to complement that with the right kind of observability and monitoring tools at scale, right? Because ultimately issues are gonna happen, and you're gonna have to figure out, you know, how to solve them fast. And Arlon, by the way, also helps in that direction, but you also need observability tools. And then, especially if you're running it on the public cloud, you need some cost management tools. In my mind, these three things are the most necessary ingredients to make Super Cloud successful, and, you know, Arlon fills in one. >>Okay. So now the next level is — okay, that makes sense — under the covers, so to speak, under the hood. Yeah. How does that impact the app developers and the cloud native modern application workflows? Because the impact, it seems to me, is the apps are gonna be impacted. Are they gonna be faster, stronger? I mean, what's the impact, if you do all those things as you mentioned — what's the impact on the apps? >>Yeah, the impact is that your apps are more likely to operate in production the way you expect them to, because the right checks and balances have gone through, and any discrepancies have been identified prior to those apps — prior to your customer — running into them, right? Because developers run into this challenge where there's a split responsibility, right? I'm responsible for my code, I'm responsible for some of these other plugins, but I don't own the stack end to end. I have to rely on my ops counterpart to do their part, right? And so this really gives them, you know, the right tooling for that. >>So this is actually a great, kind of relevant point. You know, as cloud becomes more scalable, you're starting to see this fragmentation: gone are the days of the full-stack developer, toward the more specialized role.
But this is a key point, and I have to ask you, because if this Arlon solution takes place as you say, and the apps are gonna do what they're designed to do, the question is: what does the current pain look like when the apps are breaking? What are the signals to the customer — yeah — that they should be calling you guys up and implementing Arlon, Argo, and all the other goodness to automate? What are some of the signals? Is it downtime? Is it failed apps? Is it latency? What are some of the things that — yeah, absolutely — would be indications that things are effed up a little bit? Yeah. >>More frequent downtimes; downtimes that take longer to triage, so your mean time to resolution, et cetera, is escalating or growing larger, right? Like, we have environments of customers where they have a number of folks in the field that have to take these apps and run them at customer sites — that's one of our partners — and they're extremely interested in this because of the rate of failures they're encountering in the field when they're running these apps on site, because the field is automating the clusters running on those sites using their own scripts. So these are the kinds of challenges, and those are the pain points — which is, you know, if you're looking to reduce your mean time to resolution, if you're looking to reduce the number of failures that occur on your production site, that's one. And second, if you are looking to manage these at-scale environments with a relatively small, focused, nimble ops team — which has an immediate impact on your budget. So those are the signals. >>This is the cloud native at scale situation — the innovation going on.
A final thought is your reaction to the idea that if the world goes digital — which it is — and with the confluence of physical and digital coming together, and cloud continuing to do its thing, the company becomes the application. Not where it used to be, supporting the business, you know — the back office and maybe terminals and some PCs and handhelds. Now, if technology's running the business, the business is the business. Yeah. The company's the application. Yeah. So it can't be down. So there's a lot of pressure on CSOs and CIOs now, and boards are saying: how is technology driving the top-line revenue? That's the number one conversation. Yep. Do you see that same thing? >>Yeah, it's interesting. I think there are multiple pressures at the CXO, CIO level, right? One is that there needs to be that visibility and clarity and guarantee, almost, that the technology that's gonna drive your top line is gonna drive it in a consistent, reliable, predictable manner. And then second, there is the constant pressure to do that while always lowering your cost of doing it, right? Especially when you're talking about, let's say, retailers or those kinds of large-scale vendors — many times they make money by lowering the amount that they spend on, you know, providing those goods to their end customers. So I think both those factors come into play, and the solution to all of them is usually a very structured strategy around automation. >>Final question. What does cloud native at scale look like to you? If all the things happen the way we want 'em to happen — the magic wand, the magic dust — what does it look like? >>What that looks like to me is a CIO sipping coffee at his desk while production is running absolutely smooth, and he's running that with a nimble team size of, at the most, a handful of folks that are just looking after things, but things are — >>Just taken care of. The CIO doesn't exist. There's no CISO; they're at the beach. >>Yep.
>>Thank you for coming on, sharing the cloud native at scale here on theCUBE. Thank you for your time. >>Fantastic. Thanks for having me. >>Okay, I'm John Furrier here for a special program presentation — special programming, cloud native at scale, enabling super cloud modern applications with Platform nine. Thanks for watching. Welcome back, everyone, to the special presentation of cloud native at scale — theCUBE and Platform nine special presentation — going in and digging into the next generation super cloud, infrastructure as code, and the future of application development. We're here with Bich Le, who's the chief architect and co-founder of Platform nine. Bich, great to see you — Cube alumni. We met at an OpenStack event about eight years ago, or earlier, when OpenStack was going. Great to see you, and congratulations on the success of Platform nine. >>Thank you very much. >>Yeah. You guys have been at this for a while, and this is really the year we're seeing the crossover of Kubernetes because of what happens with containers. Everyone now has realized — and you've seen what Docker's doing with the new Docker, the open source Docker now — just the success, exactly, of containerization, right? And now the Kubernetes layer that we've been working on for years is coming, bearing fruit. This is huge. >>Exactly. Yes. >>And so as infrastructure as code comes in — we talked to Bhaskar about Super Cloud, about, you know, the new Arlon — you guys just launched it: infrastructure as code is going to another level. And it's always been DevOps, infrastructure as code; that's been the ethos from day one: developers just code. Then you saw the rise of serverless, and now you see multi-cloud on the horizon. Connect the dots for us. What is the state of infrastructure as code today? >>So I think I'm glad you mentioned it — everybody, or most people, know about infrastructure as code.
But with Kubernetes, I think that project has evolved the concept even further. These days, it's infrastructure as configuration, right? Which is an evolution of infrastructure as code. So instead of telling the system here's how I want my infrastructure — by telling it, you know, do step A, B, C, and D — with Kubernetes you can describe your desired state declaratively, using things called manifests, resources. And then the system kind of magically figures it out and tries to converge the state towards the one that you specified. So I think it's an even better version of infrastructure as code. >>Yeah. And that really means it's developers just accessing resources. Okay — that declarative: okay, give me some compute, stand me up some, turn the lights on, turn 'em off, turn 'em on. That's kind of where we see this going. And I like the configuration piece. Some people say composability — I mean, now with open source so popular, you don't have to write a lot of code; this code is being developed. And so it's integration, it's configuration. These are areas where we're starting to see computer science principles around automation, machine learning assisting open source — cuz you got a lot of code that you're inheriting, software supply chain issues. So infrastructure as code has to factor in these new dynamics. Can you share your opinion on these new dynamics of, as open source grows, the glue layers, the configurations, the integration — what are the core issues? >>I think one of the major core issues is that with all that power comes complexity, right? So, you know, despite its expressive power, systems like Kubernetes and declarative APIs let you express a lot of complicated and complex stacks, right? But you're dealing with hundreds if not thousands of these YAML files or resources. And so I think, you know, the emergence of systems and layers to help you manage that complexity is becoming a key challenge and opportunity in this space.
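The declarative, desired-state model described above can be sketched in a few lines of Python — a toy illustration of the concept, not Kubernetes itself (the `converge` function and the dict shapes are invented for this sketch): instead of scripting steps A, B, C, and D, you declare what you want, and the system computes the actions needed to get there.

```python
# Toy model of declarative configuration: you state *what* you want,
# and the system works out *how* to get there.

def converge(desired: dict, actual: dict) -> list:
    """Return the actions needed to move `actual` toward `desired`."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("create", name, spec))
        elif actual[name] != spec:
            actions.append(("update", name, spec))
    for name in actual:
        if name not in desired:
            actions.append(("delete", name, None))
    return actions

# Declarative "manifest": three replicas of a web deployment.
desired = {"web": {"replicas": 3, "image": "nginx:1.25"}}
# What is actually running right now.
actual = {"web": {"replicas": 2, "image": "nginx:1.25"}}

print(converge(desired, actual))  # -> [('update', 'web', {'replicas': 3, 'image': 'nginx:1.25'})]
```

In Kubernetes the same idea plays out with YAML manifests and controllers, but the contract is identical: you own the desired state, the system owns the steps.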
>>I wrote a LinkedIn post today with comments about, you know, hey, the enterprise is a new breed. The trend of SaaS companies moving consumer-like thinking into the enterprise has been happening for a long time, but now more than ever you're seeing it. The old way used to be: solve complexity with more complexity, and then lock the customer in. Now with open source, it's speed, simplification, and integration, right? These are the new power dynamics for developers. Yeah. So as companies are starting to deploy and look at Kubernetes, what are the things that need to be in place? Because you have some — I won't say technical debt, but maybe some shortcuts, some scripts here and there — that make it look like infrastructure as code. People have done some things to simulate, or make, infrastructure as code happen. Yes. But to do it at scale — yes — is harder. What's your take on this? What's your view? >>It's hard because there's a proliferation of methods, tools, technologies. So for example, today it's very common for DevOps and platform engineering tools — I mean, sorry, teams — to have to deploy a large number of Kubernetes clusters, but then apply the applications and configurations on top of those clusters. And they're using a wide range of tools to do this, right? For example, maybe Ansible or Terraform or bash scripts to bring up the infrastructure and then the clusters, and then they may use a different set of tools, such as Argo CD or other tools, to apply configurations and applications on top of the clusters. So you have this sprawl of tools. You also have this sprawl of configurations and files, because the more objects you're dealing with, the more resources you have to manage. And there's a risk of what people call drift, where, you know, you think you have things under control, but some people from various teams will make changes here and there, and then before the end of the day systems break and you have no idea how to track them.
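The drift problem just described — the source of truth says one thing, the live clusters say another — can be illustrated with a small Python sketch. The data shapes here are invented for illustration; this is not any real GitOps tool's API:

```python
# Sketch of drift detection across a fleet: compare the state recorded in
# Git (the source of truth) with what is actually running on each cluster.

def find_drift(git_state: dict, live_states: dict) -> dict:
    """Map cluster name -> list of resources that differ from Git."""
    drift = {}
    for cluster, live in live_states.items():
        diffs = [res for res, spec in git_state.items() if live.get(res) != spec]
        if diffs:
            drift[cluster] = diffs
    return drift

git_state = {"ingress": {"class": "nginx"}, "logging": {"level": "info"}}
live_states = {
    "us-east": {"ingress": {"class": "nginx"}, "logging": {"level": "info"}},
    "eu-west": {"ingress": {"class": "nginx"}, "logging": {"level": "debug"}},  # hand-edited
}
print(find_drift(git_state, live_states))  # -> {'eu-west': ['logging']}
```

Tools like Argo CD do essentially this comparison continuously — flagging resources as out of sync with Git, and optionally reverting the drift automatically.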
So I think there's a real need to unify, simplify, and try to solve these problems using a smaller, more unified set of tools and methodologies. And that's something that we try to do with this new project, Arlon. >>Yeah. So we're gonna get into Arlon in a second. I wanna get into the why of Arlon. You guys announced it at ArgoCon, which was put on here in Silicon Valley at the community meeting by Intuit — they had their own little day over there at their headquarters. But before we get there: Bhaskar, your CEO, came on and talked about Super Cloud at our inaugural event. What's your definition of super cloud? If you had to explain that to someone at a cocktail party, or someone in the industry, technical — how would you look at the super cloud trend that's emerging? It's become a thing. What would be your contribution to that definition, or the narrative? >>Well, it's funny, because I actually heard the term for the first time today, speaking to you earlier today. But I think based on what you said, I already get some of the gist and the main concepts. It seems like super cloud, the way I interpret it, is: you know, clouds and infrastructure — programmable infrastructure — all of those things are becoming commodity in a way. And everyone's got their own flavor, but there's a real opportunity for people to solve real business problems by perhaps trying to abstract away, you know, all of those various implementations, and then building better abstractions that are perhaps business- or application-specific, to help companies and businesses solve real business problems. >>Yeah, that's a great definition. I remember — not to date myself, but back in the old days, you know — IBM had a proprietary network operating system, and so did DEC for the minicomputer vendors: SNA and DECnet, respectively.
But TCP/IP came out of OSI, the Open Systems Interconnection effort, and remember, Ethernet beat out Token Ring. So, not to get all nerdy for all the young kids out there — look, just look up Token Ring, you've probably never heard of it. It was IBM's, you know, layer-two connection, and Ethernet won out, right? So TCP/IP could be the Kubernetes — and the container abstraction — that made the industry completely change at that point in history. So at every major inflection point, where there's been serious industry change and wealth creation and business value, there's been an abstraction — yes — somewhere. Yes. What's your reaction to that? >>I think this is a saying that's been heard many times in this industry — and I forgot who originated it — but I think the saying goes: there's no problem that can't be solved with another layer of indirection, right? And we've seen this over and over again, where Amazon and its peers have inserted this layer that has simplified, you know, computing and infrastructure management. And I believe this trend is going to continue, right? The next set of problems are going to be solved with these insertions of additional abstraction layers. I think that's really — yeah, it's gonna continue. >>It's interesting. I wrote another post today on LinkedIn called the Silicon Wars. AMD stock is down; Arm has been on a rise. We've been pointing out for many years now that Arm's gonna be huge, and it has become true. If you look at the success of the infrastructure-as-a-service layer across the clouds — Azure, AWS — Amazon's clearly way ahead of everybody. The stuff that they're doing with the silicon and the physics and the atoms — you know, this is where the innovation is. They're going so deep and so strong at the ISA level that the more they do that, the more performance they get.
So if you're an app developer, wouldn't you want the best performance? And you'd wanna have the best abstraction layer that gives you the most ability to do infrastructure as code, or infrastructure as configuration — for provisioning, for managing services. And you're seeing that today with service meshes — a lot of action going on in the service mesh area in this KubeCon community, which we'll be covering. So that brings up the whole what's next. You guys just announced Arlon at ArgoCon — Argo came out of Intuit. We had Marianna Tessel at our super cloud event; she's the CTO, you know, they're all in on the cloud. So they contributed that project. Where did Arlon come from? What was the origination? What's the purpose? Why Arlon, why this announcement? >>Yeah, so the inception of the project — this was the result of us realizing that problem we spoke about earlier, which is complexity, right? With all of these clouds, this infrastructure, all the variations around, you know, compute, storage, networks, and the proliferation of tools we talked about — the Ansibles and Terraforms and Kubernetes itself; you can think of that as another tool, right? We saw a need to solve that complexity problem, especially for people and users who use Kubernetes at scale. So when you have, you know, hundreds of clusters, thousands of applications, thousands of users spread out over many, many locations, there needs to be a system that helps simplify that management, right? That means fewer tools, more expressive ways of describing the state that you want, and more consistency. And that's why, you know, we built Arlon, and we built it recognizing that many of these problems, or sub-problems, have already been solved. So Arlon doesn't try to reinvent the wheel; it instead rests on the shoulders of several giants, right?
So for example, Kubernetes is one building block; GitOps and Argo CD is another one, which provides a very structured way of applying configuration. And then we have projects like Cluster API and Crossplane, which provide APIs for describing infrastructure. So Arlon takes all of those building blocks and builds a thin layer which gives users a very expressive way of defining configuration and desired state. So that's kind of the inception of it. >>And what's the benefit of that? What does that give the developer, the user, in this case? >>The developers, the platform engineering team members, the DevOps engineers — they get a way to provision not just infrastructure and clusters, but also applications and configurations. They get a system for provisioning, configuring, deploying, and doing lifecycle management in a much simpler way — especially, as I said, if you're dealing with a large number of applications. >>So it's like an operating fabric, if you will. Yes. For them. Okay, so let's get into what that means, above and below this abstraction or thin layer. Below is the infrastructure — we talked a lot about what's going on below that. Yeah. Above are the workloads. At the end of the day, you know, I talk to CXOs and IT folks that are now DevOps engineers. They care about the workloads, and they want the infrastructure as code to work. They wanna spend their time getting in the weeds, figuring out what happened when someone made a push and something happened. They need observability, and they need to know that it's working. That's right. And: are my workloads running effectively? So how do you guys look at the workload side of it? Cuz now you have multiple workloads on this fabric. >>Right?
So, workloads — Kubernetes has defined kind of a standard way to describe workloads, and you can, you know, tell Kubernetes, I want to run this container this particular way. Or you can use other projects in the Kubernetes cloud native ecosystem, like Knative, where you can express your application at a higher level, right? But what's also happening is, in addition to the workloads, DevOps and platform engineering teams very often need to deploy the applications with the clusters themselves. Clusters are becoming this commodity. They're becoming this host for the application, and it kind of comes bundled with it — in many cases it is like an appliance, right? So DevOps teams have to provision clusters at a really incredible rate, and they need to tear them down. Clusters are becoming more — >>It's kinda like an EC2 instance: spin up a cluster. People use words like that. >>That's right. And before Arlon, you kind of had to do all of that using a different set of tools, as I explained. So with Arlon you can express everything together. You can say: I want a cluster with a health monitoring stack and a logging stack and this ingress controller, and I want these applications and these security policies. You can describe all of that using something we call a profile. And then you can stamp out your applications and your clusters and manage them in a very — >>So essentially it creates a standard mechanism. Exactly: standardized, declarative kind of configurations. And it's like a playbook — you deploy it. Now, what's the difference between that and, say, a script? Like, I have scripts; I could just automate scripts. >>Or, yes — this is where that declarative API and infrastructure as configuration comes in, right? Because scripts — yes, you can automate scripts, but the order in which they run matters, right? They can break; things can break in the middle, and sometimes you need to debug them.
Whereas the declarative way is much more expressive and powerful. You just tell the system what you want, and then the system figures it out. And there are these things called controllers, which will in the background reconcile all the state to converge towards your desired state. It's a much more powerful, expressive, and reliable way of getting things done. >>So infrastructure as configuration is kind of built on — it's a superset of — infrastructure as code, because it's — >>An evolution. >>You need infrastructure as code, but then you can configure the code by just saying, do it. You're basically declaring and saying, go, go do that. That's right. Okay, so, all right — cloud native at scale: take me through your vision of what that means. Someone says, hey, what does cloud native at scale mean? What does success look like? How does it roll out — not in the far future, the next couple years? I mean, people are now starting to figure out, okay, it's not as easy as it sounds. It could be nice, it has value. We're gonna hear a lot of this at KubeCon this year. What does cloud native at scale mean? >>Yeah, there are different interpretations, but if you ask me, when people think of scale, they think of a large number of deployments, right? Geographies, many, you know, supporting thousands or tens of millions of users — there's that aspect to scale. There's also an equally important aspect of scale, which is also something we try to address with Arlon, and that is just complexity, for the people operating this or configuring this, right? So in order to describe that desired state, and in order to perform things like maybe upgrades or updates on a very large scale, you want the humans behind that to be able to express and direct the system to do that in relatively simple terms, right? And so we want the tools and the abstractions and the mechanisms available to the user to be as powerful but as simple as possible.
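The profile idea described a moment ago — bundling a cluster spec with its add-ons and policies, then stamping out many clusters from it — can be sketched in Python. This is purely illustrative: the field names here are invented for the sketch, not Arlon's actual API.

```python
# Hypothetical sketch of the "profile" concept: one profile bundles the
# cluster shape, add-ons, and policies, and many clusters are stamped
# from it. (Field names are invented, not Arlon's real schema.)

PROFILE = {
    "cluster": {"nodes": 3, "version": "1.25"},
    "addons": ["monitoring", "logging", "ingress-nginx"],
    "policies": ["restrict-privileged-pods"],
}

def stamp(profile: dict, names: list) -> dict:
    """Create one cluster definition per name, all sharing the profile."""
    return {n: {"name": n, **profile} for n in names}

fleet = stamp(PROFILE, ["dev", "staging", "prod"])
# Every cluster carries the same add-ons and policies by construction.
assert all(c["addons"] == PROFILE["addons"] for c in fleet.values())
print(sorted(fleet))  # -> ['dev', 'prod', 'staging']
```

The point of the pattern: consistency falls out of construction, rather than depending on someone running the right scripts in the right order on every cluster.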
So I think there's gonna be a number — and there have been a number — of CNCF and cloud native projects that are trying to attack that complexity problem as well. And Arlon kind of falls in that category. >>Okay, so I'll put you on the spot. With KubeCon coming up — and obviously we'll be shipping this segment series out before — what do you expect to see at KubeCon this year? What's the big story this year? What's the most important thing happening? Is it in the open source community, and also within a lot of the people jockeying for leadership? I know there's a lot of projects, and there's still some white space in the overall systems map — the runtime, observability, all these different areas. Where's the action? Where's the smoke? Where's the fire? Where's the peace? Where's the tension? >>Yeah, so I think one thing that has been happening over the past couple of KubeCons, and that I expect to continue, is — the word on the street is: Kubernetes is getting boring, right? Which is good, right? >>Boring means simple. >>Well — >>Maybe. >>Yeah — >>Invisible. >>No drama, right? So the rate of change of the Kubernetes features and all that has slowed, but in a positive way. But there's still a general sentiment and feeling that there's just too much stuff. If you look at a stack necessary for hosting applications based on Kubernetes, there are still just too many moving parts, too many components, right? Too much complexity — I keep going back to the complexity problem. So I expect KubeCon, and all the vendors and the players and the startups and the people there, to continue to focus on that complexity problem and introduce further simplifications to the stack. >>Yeah. Bich, you've had a storied career — VMware, over a decade with them, 12 years, 14 years or something like that. Big number. Co-founder here at Platform nine.
Now, you guys have been around for a while at this game. Man, we talked about OpenStack — that project, we interviewed you at one of their events. So OpenStack was the beginning of that, this new revolution. And I remember the early days: it wasn't supposed to be an alternative to Amazon, but it was a way to do more cloud — cloud native. I think we had a cloud team at that time. We used to joke, you know, about the dream. It's happening now, now at Platform nine. You guys have been doing this for a while. What are you most excited about as the chief architect? What did you guys double down on? What did you guys pivot from, or to — did you do any pivots? Did you extend out certain areas? Cuz you guys are in a good position right now — a lot of DNA in cloud native. What are you most excited about, and what does Platform nine bring to the table for customers and for people in the industry watching this? >>Yeah, so I think our mission really hasn't changed over the years, right? It's always been about taking complex open source software — because open source software is powerful. It solves new problems, you know, every year, and you have new things coming out all the time, right? OpenStack was an example, and then Kubernetes took the world by storm. But there's always that complexity of, you know, just configuring it, deploying it, running it, operating it. And our mission has always been that we will take all that complexity and just make it, you know, easy for users to consume, regardless of the technology, right? So, the successor to Kubernetes — you know, I don't have a crystal ball, but you have some indications that people are coming up with new and simpler ways of running applications. There are many projects out there; who knows what's coming next year, or the year after that. But Platform nine will be there, and we will, you know, take the innovations from the community.
We will contribute our own innovations and make all of those things very consumable to customers. >>Simpler, faster, cheaper. >>Exactly. >>Always a good business model, technically, to make that happen. Yes. Yeah, I think the reining in of the chaos is key, you know. Now we have visibility into the scale. Final question before we depart this segment: what is "at scale"? How many clusters do you see that would be a watermark for an at-scale conversation around an enterprise? Is it workloads we're looking at, or clusters? How would you describe that? When people try to squint through and evaluate what's at scale, what's the at-scale kind of threshold? >>Yeah. And the number of clusters doesn't tell the whole story, because clusters can be small in terms of the number of nodes, or they can be large. But roughly speaking, when we say, you know, large-scale cluster deployments, we're talking about maybe hundreds to thousands. >>Yeah. And final, final question: what's the role of the hyperscalers? You got AWS continuing to do well, but they got their core IaaS, they got a PaaS — they're not too much putting a SaaS out there. They have some SaaS apps, but mostly it's the ecosystem. They have marketplaces doing over $2 billion — billions of transactions a year — and it's just sitting there. They're now innovating on it, but that's gonna change ecosystems. What's the role the cloud plays in cloud native at scale? >>The hyperscalers — >>Yeah: AWS, Azure, Google. >>You mean from a business perspective? Yeah, they have their own interests that, you know, they will keep catering to. They will continue to find ways to lock their users into their ecosystem of services and APIs. So I don't think that's gonna change, right? They're just gonna keep — >>Well, they got great performance, I mean from a hardware standpoint. Yes, that's gonna be key, right? >>Yes.
I think the move away from x86 being the dominant platform to run workloads is happening, right? And I think the hyperscalers really want to be in the game in terms of, you know, the new RISC and Arm ecosystems and platforms. >>Yeah. Joking aside, Paul Maritz, when he was the CEO of VMware, when he took over, once said — I remember, it was our first year doing theCUBE — the cloud is one big distributed computer. It's hardware, and you got software, and you got middleware. He was kind of tongue in cheek, but really you're talking about large compute and sets of services that are essentially a distributed computer. >>Yes. >>Exactly. We're back on the same game. Bich, thank you for coming on the segment. Appreciate your time. This is cloud native at scale, a special presentation with Platform nine — really unpacking super cloud, Arlon, open source, and how to run large-scale cloud native applications for developers. I'm John Furrier with theCUBE. Thanks for watching. We'll stay tuned for another great segment coming right up. Hey, welcome back, everyone, to Supercloud 22. I'm John Furrier, host of theCUBE, here all day talking about the future of cloud — where's it all going, making it super. Multi-cloud is around the corner, and public cloud is winning. Got the private cloud on premise and edge. Got a great guest here: Bhaskar Gorti, CEO of Platform nine, just on the panel on Kubernetes — an enabler or blocker? Welcome back. Great to have you on. >>Good to see you again. >>So, Kubernetes: a blocker or enabler, with a question mark — I put that on the panel really to discuss the role of Kubernetes. Now, great conversation: operations is impacted. What's interesting about what you guys are doing at Platform nine is your role there as CEO, and the company's position — kind of like the world spun into the direction of Platform nine while you're at the helm? Yeah, right. >>Absolutely.
In fact, things are moving very well. It was an insight to call ourselves the platform company, eight years ago, right? So absolutely, whether you are doing it in public clouds or private clouds, you know, the application world is moving very fast in trying to become digital and cloud native. There are many options for you on the infrastructure. The biggest blocking factor now is having a unified platform. And that's where we come in. >>Bhaskar, we were talking before we came on stage here about your background, and we were gonna talk about the glory days in 2000, 2001, when the first ASPs — application service providers — came out. Kind of a SaaS vibe, but that was kind of all cloud-like. >>It wasn't. >>And web services started then too. So you saw that whole growth. Now, fast forward 20 years later — 22 years later — to where we are now. When you look back then to here, and all the different cycles — >>In fact, you know, as we were talking offline, I was in one of those ASPs in the year 2000, where it was a novel concept of saying we are providing a software and a capability as a service, right? You sign up and start using it. I think a lot has changed since then. The tooling, the tools, the technology have really skyrocketed. The app development environment has really taken off exceptionally well. There are many, many choices of infrastructure now, right? So I think things are in a way the same, but also extremely different. But more importantly, now, for any company, regardless of size, to be a digital native, to become a digital company, is extremely mission critical. It's no longer a nice-to-have; everybody's in the journey somewhere. >>Everyone is going through digital transformation here, even in a so-called downturn — a recession that's upcoming, inflation's here. It's interesting: this is the first downturn in the history of the world where the hyperscale clouds have been pumping on all cylinders as an economic input.
And if you look at the tech trends: GDP's down, but not tech. >>Nope. >>Cuz the pandemic showed everyone digital transformation is here, and more spend and more growth is coming, even in tech. So this is a unique factor which proves that digital transformation's happening, and every company will need a super cloud. >>Every company, regardless of size, regardless of location, has to modernize their infrastructure. And modernizing infrastructure is not just some new servers and new application tools; it's your approach, how you're serving your customers, how you're bringing agility into your organization. I think that is becoming a necessity for every enterprise to survive. >>I wanna get your thoughts on super cloud, because one of the things Dave Vellante and I wanted to do with super cloud, and calling it that, was — I personally, and I know Dave as well (he can speak for himself) — we didn't like "multi-cloud." Not because Amazon said don't call things multi-cloud; it just didn't feel right. I mean, everyone has multiple clouds by default: if you're running productivity software, you have Azure and Office 365. But it wasn't truly distributed, it wasn't truly decentralized, it wasn't truly cloud-enabled. It felt like the market wasn't ready yet. Yet public cloud is booming, and on-premise private cloud and edge are much more, you know, dynamic, more real. >>Yeah. I think the reason why we think super cloud is a better term than multi-cloud: multi-cloud means more than one cloud, but they're disconnected. Okay, you have a productivity cloud, you have a Salesforce cloud, everyone has an internal cloud, right? But they're not connected. So you can say, okay, it's more than one cloud, so it's, you know, multi-cloud. But super cloud is where you are actually trying to look at this holistically.
Whether it is on-prem, whether it is public, whether it's at the edge — at a store, at the branch — you are looking at this as one unit. And that's where we see the term super cloud as more applicable, because what are the qualities that you require if you're in a super cloud, right? You need choice of infrastructure, but at the same time you need a single pane, a single platform, for you to build your innovations on, regardless of which cloud you're doing it on, right? So I think super cloud is actually a more tightly integrated, orchestrated management philosophy, we think. >>So let's get into some of the super cloud type trends that we've been reporting on. Again, the purpose of this event is, as a pilot, to get the conversations flowing with the influencers like yourselves who are running companies and building products, and the builders. Amazon and Azure are doing extremely well; Google's coming up in third in public cloud. We see the use cases, on-premises use cases. Kubernetes has been an interesting phenomenon — it came from the developer side a little bit, but a lot of ops people love Kubernetes; it's really more of an ops thing. You mentioned OpenStack earlier; Kubernetes kind of came out of that OpenStack "we need an orchestration," and then containers had a good shot with Docker — they re-pivoted the company, and now they're all in on open source. So you've got containers booming and Kubernetes as a new layer there. What's the take on that? What does that really mean? Is that a new de facto enabler? >>It is here, it's here for sure. Every enterprise is somewhere along in the journey. And you know, most companies — 70-plus percent of them — have one, two, three container-based, Kubernetes-based applications now being rolled out. So it's very much here; it is in production at scale by many customers. And the beauty of it is, yes, open source, but the biggest gating factor is the skill set.
And that's where we have a phenomenal engineering team, right? So it's one thing to buy a tool — >>And just to be clear, you're a managed service for Kubernetes. >>We provide a software platform for cloud acceleration as a service, and it can run anywhere. It can run in public, in private. We have customers who do it in truly multi-cloud environments. It runs on the edge — it runs in stores; there are thousands of stores in a retailer. So we provide that. And also, for specific segments where data sovereignty and data residency are key regulatory reasons, we also run on-prem as an air-gapped version. >>Can you give an example of how you guys are deploying your platform to enable a super cloud experience for your customer? >>Right. So I'll give you two different examples. One is a very large networking company — a public networking company. They have, I dunno, hundreds of products, hundreds of R&D teams that are building different products. And if you look at a few years back, each one was doing it on a different platform, but they really needed to bring the agility, and they've worked with us now over three years, where we are their build-test-dev-prod platform on which all their products are built, right? And it has dramatically increased their agility to release new products. Number two, it actually is a lights-out operation. In fact, the customer says we're like the Maytag service person, cuz we provide it as a service, and it barely takes one or two people to maintain it for them. >>So it's kinda like an SRE vibe. One person managing — >>A large 4,000 engineers building infrastructure — >>On their tools. >>Whatever they want, on their tools. They're using whatever app development tools they use, but they use our platform. >>What benefits are they seeing? Are they seeing speed? >>Speed, definitely. Okay. Definitely they're seeing speed.
And uniformity, because now their customers who are using product A and product B are seeing a similar set of tools being used. >>So a big problem that's coming out of this super cloud event — we're seeing it, and we've heard it all here — is ops and security teams, cuz they're kind of two parts of one theme: ops and security specifically need to catch up speed-wise. Are you delivering that value to ops and security? >>Right. So we work with ops and security teams and infrastructure teams, and we layer on top of that. We have, like, a platform team. If you think about it, depending on where you have data centers, where you have infrastructure, you have multiple teams, okay, but you need a unified platform. >>Who's your buyer? >>Our buyer is usually, you know, the product divisions of companies that are looking at it, or the CTO would be a buyer for us; functionally, the CIO, definitely. So it's somewhere in the DevOps-to-infrastructure space. But ideally — and we are beginning to see this now — many large corporations are really looking at it as a platform, saying: we have a platform group on which any app can be developed, and it is run on any infrastructure. So the platform engineering teams. >>You're working two sides of that coin. You've got the dev side, and then — >>And then the infrastructure side. >>Side, okay. >>Another customer — let me give you an example — which I would say is kind of the edge-of-the-store case. So they have thousands of stores. Retail — you know, a food retailer, right? They have thousands of stores around the globe — 50,000, 60,000. And they really want to enhance the customer experience that happens when you either order the product, or go into the store and pick up your product, or buy or browse or sit there. They have applications that were written in the nineties, and then they have very modern AI/ML applications today.
They want something that will not require sending an IT person to install a rack in the store, and they can't move everything to the cloud, because store operations have to be local — the menu changes based on... >>It's a classic edge. >>It's classic edge. Yeah. Right. They can't send IT people to go install racks of servers, and they can't send software people to go install the software, and any change you wanna push through is, you know, a truck roll. So they've been working with us, where all they do is they ship — depending on the size of the store — one or two or three little servers, with instructions. >>You say "little servers" — like how big? Like a net-top box, a small little — >>Box. And all the person in the store has to do is what you and I do at home when we get a, you know, a router: connect the power, connect the internet, and turn the switch on. And from there, we pick it up. >>Yep. >>We provide the operating system, everything, and then the applications are put on it. And so that dramatically brings velocity for them. They manage — >>Thousands of them. True plug and play. >>True plug and play, thousands of stores. They manage it centrally; we do it for them, right? So that's another example, on the edge. Then we have some customers who have both a large private presence and one of the public clouds, okay, but they want to have the same platform layer of orchestration and management that they can use regardless of the location. >>So you guys have got some success. Congratulations. Got some traction there — it's awesome. The question I want to ask you, one that's come up, is: what is truly cloud native? Cuz there's lift-and-shift to the cloud — >>That's not cloud native. >>Then there's cloud native. Cloud native seems to be the driver for the super cloud. How do you talk to customers? How do you explain, when someone asks, what's cloud native and what isn't? >>Right.
Look, I think, first of all, the best place to look at what the definition is, and what the attributes and characteristics of what is truly cloud native are, is the CNCF — the Cloud Native Computing Foundation. And I think it's very well documented there. >>Well, KubeCon, of course — Detroit's coming up. >>Coming here, so it's already there, right? So we follow that very closely, right? I think just lifting and shifting your 20-year-old application onto a data center somewhere is not cloud native. Okay? You can't just port to cloud native; you have to rewrite and redevelop your application and business logic using modern tools, hopefully more open source. And I think that's what cloud native is, and we are seeing a lot of our customers on that journey. Now, everybody wants to be cloud native, but it's not that easy, okay? Because, I think, first of all, skill set is very important. Uniformity of tools — there are so many tools out there; you could spend your time just figuring out which tool to use. Okay? So I think the complexity is there, but the business benefits of agility and uniformity and customer experience are truly there. >>And I'll give you an example. I don't know how cloud native they are, right, and they're not a customer of ours — but you order pizzas, you do, right? If you just watch the pizza industry: how Domino's actually increased their share and mind share and wallet share was not because they were making better pizzas or not — I don't know anything about that — but the whole experience of how you order, how you watch what's happening, how it's delivered. They were a pioneer in it. To me, those are the kinds of customer experiences that cloud native can provide. >>Being agile and having that flow to the application changes the expectations for the customer. >>The customer's expectations change, right? Once you get used to a better customer experience, you learn. >>Bhaskar, to wrap it up, I wanna just get your perspective again.
One of the benefits of chatting with you here, and having you as part of Super Cloud 22, is that you've seen many cycles; you have a lot of insights. I want to ask you: given your career, where you've been and what you've done, and now as CEO of Platform nine, how would you compare what's happening now with other inflection points in the industry? Again, you've been an entrepreneur, you sold your company to Oracle, you've been at the big companies, you've seen the different waves. What's going on right now? Put this moment in time around super cloud into context. >>Sure. I think, as you said — a lot of battle scars. I've been in an ASP, been in a real-time software company, been in large enterprise software houses and through a transformation. I've been on the app side, I did infrastructure, and then tried to build our own platforms. I've gone through all of this myself, with a lot of lessons learned along the way. I think this is an event which is happening now, for companies to go through, to become cloud native and digitalize. If I were to look back and look for some parallels to the tsunami that's going on, a couple of parallels come to me. One is something that was forced on us — like Y2K. Everybody around the world had to have a plan, a strategy, and an execution for Y2K. I would say the next big thing was e-commerce. I think e-commerce has been pervasive, right across all industries. >>And disruptive. >>And disruptive — extremely disruptive. If you did not adapt and accelerate your e-commerce initiative, it was an existence question. Yeah. I think we are at that pivotal moment now, with companies trying to become digital and cloud native. That is what I see happening. >>I think that e-commerce parallel was interesting, and just to riff with you on that: it's disrupting and refactoring the business models.
I think that is something that's coming out of this: it's not just completely changing the game, it's changing how you operate. >>How you think, and how you operate. See, if you think about the early days of e-commerce, just putting up a shopping cart didn't make you an e-commerce company or an e-retailer, right? I think it's the same thing now: this is a fundamental shift in how you're thinking about your business. How are you gonna operate? How are you gonna service your customers? I think it requires that — just lift and shift is not gonna work. >>Bhaskar, thank you for coming on, spending the time to come in and share with our community, and being part of Super Cloud 22. We really appreciate it. We're gonna keep this open; we're gonna keep this conversation going even after the event, to open up and look at the structural changes happening now, and continue to look at it in the open, in the community. And we're gonna keep this going for a long, long time as we get answers to the problems that customers are looking for with cloud computing. I'm John Furrier with Super Cloud 22 in theCUBE. Thanks for watching. >>Thank you. Thank you, John. >>Hello, welcome back. This is the end of our program, our special presentation with Platform nine on Cloud Native at Scale: Enabling the Super Cloud. We're continuing the theme here. You heard the interviews — super cloud and its challenges, and new opportunities around solutions like Platform nine and others, with Arlon. This is really about the edge situations on the internet: managing the edge, multiple regions, avoiding vendor lock-in. This is what this new super cloud is all about. The business consequences we heard, and the wide-ranging conversations around what it means for open source and the complexity problem all being solved. I hope you enjoyed this program.
There's a lot of moving pieces and things to configure with a cloud native install — all made easier for you here with super cloud and, of course, Platform nine contributing to that. Thank you for watching.
Horizon3.ai Signal | Horizon3.ai Partner Program Expands Internationally
Hello, I'm John Furrier with theCUBE, and welcome to this special presentation of theCUBE and Horizon3.ai. They're announcing a global partner-first approach, expanding their successful pen testing product, NodeZero. You're going to hear from leading experts on their staff and their CEO, positioning themselves for a successful channel distribution expansion internationally — in Europe, the Middle East, Africa, and Asia Pacific. In this CUBE special presentation, you'll hear about the expansion — the expanded partner program — giving partners a unique opportunity to offer NodeZero to their customers. Innovation in pen testing is going international with Horizon3.ai. Enjoy the program. [Music] Welcome back, everyone, to theCUBE and Horizon3.ai special presentation. I'm John Furrier, host of theCUBE. We're here with Jennifer Lee, head of channel sales at Horizon3.ai. Jennifer, welcome to theCUBE. Thanks for coming on. >>Great — well, thank you for having me. >>So, big news around Horizon3.ai driving a channel-first commitment. You guys are expanding the channel partner program to include all kinds of new rewards, incentives, training programs — helping educate, you know, partners, really driving more recurring revenue. Certainly cloud and cloud scale have done that. You've got a great product that fits into that kind of channel model, great services you can wrap around it — good stuff. So let's get into it. What are you guys doing with this news? Why is this so important? >>Yeah, for sure. So, um, like you said, we recently expanded our channel partner program. The driving force behind it was really just to align with, like you said, our channel-first commitment, and to create awareness around the importance of our partner ecosystems. That's really how we go to market — it's through the channel. >>And a great international focus. I've talked with the CEO about the solution, and he broke down all the action on why it's important on the product side. But why now on the go-to-market
change? What's the why behind this big news on the channel? >>Yeah, for sure. So we are doing this now really to align with our business strategy, which is built on the concept of enabling our partners to create a high-value, high-margin business on top of our platform. We offer a solution called NodeZero. It provides autonomous pen testing as a service, and it allows organizations to continuously verify their security posture. Our company vision — we have this tagline that states that our pen testing enables organizations to see themselves through the eyes of an attacker — and we use the attacker's perspective to identify exploitable weaknesses and vulnerabilities. So we created this partner program from the perspective of the partner; we've built it through the eyes of our partner, right? We're prioritizing really what the partner is looking for, and that will ensure mutual success for us. >>The partners always want to get in front of the customers and bring new stuff to them. Pen tests have traditionally been really expensive, and so bringing it down to a service level that's, one, affordable, and that has flexibility to it, allows a lot of capability — so I imagine people are getting excited by it. So I have to ask you about the program: what specifically are you guys doing? Can you share any details around what it means for the partners — what they get, what's in it for them? Can you just break down some of the mechanics and mechanisms, or details? >>Yeah. Yep. You know, we're really looking to create business alignment and, like I said, establish mutual success with our partners. So we've got two key elements that we're really focused on, that we bring to the partners. One of them is the opportunity — the profit margin expansion — and a way for our partners to really differentiate themselves and stay relevant in the market. So we've restructured our discount model, really highlighting
profitability — maximizing profitability. This includes our deal registration: we've created a deal registration program, we've increased the discount for partners who take part in our partner certification trainings, and we have some other partner incentives that we've created that are going to help out there. We've also recently gone live with our partner portal. It's a consolidated experience for our partners where they can access our sales tools, and we really view our partners as an extension of our sales and technical teams, so we've taken all of the training material that we use internally and made it available to our partners through the partner portal. What else is in that partner portal? We've got our partner certification information — all the content that's delivered during that training can be found in the portal — plus deal registration, co-branded marketing materials, and pipeline management. So this portal gives our partners a one-stop place to go to find all that information. And then, just really quickly, on the second part that I mentioned: our technology really is disruptive to the market. Like you said, autonomous pen testing — it's still a relatively new topic for security practitioners, and it's proven to be really disruptive. On top of that, we recently found an article from MarketsandMarkets reporting that the global pen testing market is really expanding — it's expected to grow to about 2.7 billion by 2027.
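The discount mechanics described here — a base reseller discount that steps up with deal registration and again with partner certification — can be sketched as a simple calculation. The percentages and tier thresholds below are purely hypothetical placeholders for illustration, not Horizon3.ai's actual rates:

```python
# Hypothetical sketch of a tiered channel discount model: a base discount,
# plus bumps for deal registration and for the number of certified sellers.
# All percentages and thresholds are made up for illustration.

def partner_discount(base_pct=20, deal_registered=False, certified_sellers=0):
    """Return the total discount percentage a partner earns on a deal."""
    discount = base_pct
    if deal_registered:
        discount += 10          # deal-registration bump (hypothetical)
    if certified_sellers >= 5:
        discount += 10          # top certification tier (hypothetical)
    elif certified_sellers >= 2:
        discount += 5           # mid certification tier (hypothetical)
    return discount

def partner_margin(list_price, **kwargs):
    """Partner's gross margin when selling at list price."""
    cost = list_price * (1 - partner_discount(**kwargs) / 100)
    return list_price - cost

# A registered deal with a well-certified bench earns a deeper discount:
print(partner_discount())                                           # 20
print(partner_discount(deal_registered=True, certified_sellers=5))  # 40
print(partner_margin(100_000, deal_registered=True, certified_sellers=5))
```

The point of structuring it this way is the one Jennifer makes: the levers that raise a partner's margin (registering deals early, getting sellers certified) are exactly the behaviors the vendor wants to encourage.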
So the market's there, right? The market's expanding, it's growing, and so for our partners it really allows them to grow their revenue across their customer base, expand their customer base, and offer this high-profit-margin, disruptive technology while getting in early to market. >>Big market, a lot of opportunities to make some money. People love to put more margin on those deals, especially when you can bring a great solution that everyone knows is hard to do — so I think that's going to provide a lot of value. Is there a type of partner that you guys see emerging, or that you're aligning with? You mentioned the alignment with the partners; I can see how the training and the incentives are all there — sounds like it's all going well. Is there a type of partner that's resonating the most, or are there categories of partners that can take advantage of this? >>Yeah, absolutely. So we work with all different kinds of partners. We work with our traditional resale partners, we're working with systems integrators, and we have a really strong MSP/MSSP program. We've got consulting partners, and with the consulting partners — especially the ones that offer pen test services — we act as a force multiplier, really offering them profit-margin-expansion opportunity there. We've got some technology partners that we work with for co-sell opportunities, and then we've got our cloud partners — you mentioned that earlier. So we are in AWS Marketplace, we have our CPPO partners, and we're part of the ISV Accelerate program, so we're doing a lot there with our cloud partners. And of course we go to market with distribution partners as well. >>Gotta love the opportunity for more margin expansion — every kind of partner wants to put more gross profit on their deals. Is there a certification involved? I have to ask: do people get certified, or do you just get trained? Is it self-paced
training, is it in person? How are you doing the whole training and certification thing? Is that a requirement? Yeah, absolutely, we do offer a certification program, and it's been very popular. It includes a seller's portion and an operator portion, and it's at no cost to our partners. We run it virtually but live (it's not self-paced), and we also have in-person sessions as well. We can also customize these for any partner that has a large group of people; we can do one in person or virtually just for that partner. Well, any kind of incentive opportunities and marketing opportunities? Everyone loves to get the deals just kind of rolling in. From what we can see in our early reporting, this looks like a hot product, price-wise and service-level-wise. What incentives are you thinking about, and what about joint marketing? You mentioned co-sell earlier, and pipeline, so I was kind of honing in on that piece. Sure, yes. To follow along with our partner certification program, we do incentivize our partners there: if they have a certain number certified, their discount increases, so that's part of it. We have our deal registration program that increases discount as well, and then we do have some partner incentives that are wrapped around meeting-setting and moving opportunities along to proof of value. Got to love the education driving value. I have to ask you: you've been around the industry, you've seen the channel relationships out there, you're seeing companies old school and new school. Horizon3.ai is kind of that new school, very cloud-specific, a lot of leverage with, as we mentioned, AWS and all the clouds. Why is the company so hot right now? Why did you join them, and why are people attracted to this company? What's the attraction, what's the vibe, what do you see?
What did you see in this company? Well, like I said, it's very disruptive, it's really in high demand right now, and it's new to market, a newer technology. We can collaborate with a manual pen tester, we can allow our customers to run their pen tests with no specialty teams, and, like I said, we allow our partners to actually build profitable businesses: they can use our product to increase their services revenue and build their business model around our services. What's interesting about the pen test thing is that it's very expensive and time-consuming, and the people who do them are very talented people who could be working on bigger things for customers. Absolutely. So bringing this into the channel: if you look at the price delta between a pen test and what you're offering, that's a huge margin gap between the street price of today's pen test and what you offer. When you show people that, do they say it's too good to be true? What are some of the things people say when you show them that? Do they scratch their heads, like, come on, what's the catch here? Right, so the cost savings is huge for us, and then also, like I said, working as a force multiplier with a pen testing company that offers the services: they can do their annual manual pen tests that may be required around compliance regulations, and then we can act as the continuous verification of their security that they can run weekly. So it's just an addition to what they're offering already, and an expansion. So, Jennifer, thanks for coming on theCUBE, really appreciate you coming on and sharing the insights on the channel. What's next, what can we expect from the
channel group? What are you thinking, what's going on? Right, so we're really looking to expand our channel footprint, very strategically, and we've got some big plans for Horizon3.ai. Awesome, well, thanks for coming on, really appreciate it. You're watching theCUBE, the leader in high-tech enterprise coverage. [Music] Hello and welcome to theCUBE's special presentation with Horizon3.ai, with Rainer Richter, Vice President of EMEA (Europe, Middle East, and Africa) and Asia Pacific (APAC) for Horizon3. Welcome to this special CUBE presentation, thanks for joining us. Thank you for the invitation. So, Horizon3.ai is driving global expansion, big international news with a partner-first approach. You're expanding internationally, let's get into it. You're driving this new expanded partner program to new heights; tell us about it. What are you seeing in the momentum, why the expansion, what's all the news about? Well, I would say internationally we have a similar situation to the US. There is a global shortage of well-educated penetration testers on the one hand, and on the other side we have a rising demand for network and infrastructure security, and with our approach of autonomous penetration testing I believe we are totally on top of the game. Especially as we are now also starting with an international instance: that means, for example, if a customer in Europe is using our service, NodeZero, he will be connected to a NodeZero instance which is located inside the European Union, and therefore he doesn't have to worry about the conflict between the European GDPR regulations and the US CLOUD Act. I would say we have a really good package there for our partners, so they can provide differentiators to their customers. You know, we've had great conversations here on theCUBE with the CEO and founder of the company around the leverage of the cloud and how successful that's
been for the company. Honestly, I can just connect the dots here, but I'd like you to weigh in more on how that translates into the go-to-market here, because you've got great cloud scale with the security product, you're having success with great leverage there, and I've seen a lot of success. What's the momentum on the channel partner program internationally? Why is it so important to you? Is it just the regional segmentation, is it the economics, why the momentum? Well, there are multiple issues. First of all, there is a rising demand for penetration testing, and don't forget that internationally we have a much higher number, or percentage, of SMB and mid-market customers. Most of these customers typically didn't even have a pen test done once a year, because for them pen testing was just too expensive. Now, with our offering together with our partners, we can provide different ways customers can get autonomous pen testing done more than once a year, at even lower cost than they had with a traditional manual pen test. That is because we have our Consulting Plus package, which is typically for pen testers: they can go out and do much faster, much quicker pen tests at many customers, one after the other, so they can do more pen tests at a lower, more attractive price. On the other side, there are others, or even the same ones, who are providing NodeZero as an MSSP service, so they can go after SMB customers saying, okay, you only have a couple of hundred IP addresses, no worries, we have the perfect package for you. And then you have, let's say, the mid-market, the companies with thousands and more employees; they might even have a very traditional annual subscription. But for all of them it's the same: the customer or the service provider doesn't need a piece of hardware, they only need to install a small Docker container, and that's it, and that makes it so smooth to
go in and say, okay, Mr. Customer, we just put this virtual attacker into your network, and that's it, all the rest is done, and within three clicks they can act like a pen tester with 20 years of experience. That's going to be very channel-friendly and partner-friendly, I can almost imagine. So I have to ask you, and thank you for calling out that breakdown and segmentation, that was very helpful for me to understand: what type of partners are you seeing the most traction with, and why? Well, I would say at the beginning you typically have the innovators, the early adopters, typically boutique-sized partners. They start because they are always looking for innovation, so those are the ones who start in the beginning. We have a wide range of partners, mostly managed by the owner of the company, so they immediately understand, okay, there is the value, and they can change their offering. They're changing their offering in terms of penetration testing because they can do more pen tests, and they can then add other ones. Or we have those who offer pen test services but did not have their own pen testers, so they had to go out on the open market and source pen testing experts to get the pen test at a particular customer done. Now, with NodeZero, they're totally independent: they can go out and say, okay, Mr. Customer, here's the service, that's it, we turn it on, and within an hour you're up and running. Totally. And those pen tests are usually expensive and hard to do; now it's right in line with the sales delivery, pretty interesting for a partner. Absolutely. But on the other hand, we are not killing the pen testers' business. With NodeZero we're providing something I would call the foundation work, the foundational work of having ongoing penetration testing of the infrastructure, the operating
system, and the pen testers themselves can concentrate in the future on things like application pen testing, for example, those services which we're not touching. So we're not killing the pen tester market, we're just taking away the ongoing, let's say, foundation work, call it that way. Yeah, that was one of my questions. There's a lot of interest in this autonomous pen testing, one, because it's expensive to do, and because the skills that are required are in need and expensive, so you kind of cover the entry level and the blockers that are in there. I've seen people say to me that the pen test becomes a blocker for getting things done, so there's been a lot of interest in autonomous pen testing and for organizations to have that posture. And it's an ongoing issue too, because now you have that continuous thing, so can you explain that particular benefit for an organization, of continuously verifying the organization's posture? Yep, certainly. I would say, typically, you have to do your patches, you have to bring in new versions of operating systems, of different services, of components, and they are always bringing new vulnerabilities. The difference here is that with NodeZero we are telling the customer, or the partner, which are the executable vulnerabilities. Previously they might have had a vulnerability scanner, and this vulnerability scanner brought up hundreds or even thousands of CVEs but didn't say anything about which of them are really executable, and then you need an expert digging into one CVE after the other, finding out whether it is really executable, yes or no. That is where you need highly paid experts, of which we have a shortage. So with NodeZero now we can say, okay, we tell you exactly which ones you should work on, because those are the ones which are executable; we rank them according to the risk level, how
easily they can be used. And then the good thing is, in contrast to the traditional penetration test, they don't have to wait a year for the next pen test to find out if the fixing was effective: they just run the next scan and see, yes, closed, the vulnerability is gone. The time is really valuable, and if you're doing any DevOps or cloud-native work, you're always pushing new things, so ongoing pen testing is actually a benefit just in general, as a kind of hygiene. Really interesting solution, and bringing that global scale is going to be a new coverage area for us, for sure. I have to ask, if you don't mind answering: what particular region are you focused on, or plan to target, for this next phase of growth? Well, at this moment we are concentrating on the countries inside the European Union plus the United Kingdom. I'm based in the Frankfurt area, so logically we cover more or less the countries just around: the DACH region (Germany, Switzerland, Austria) plus the Netherlands, but we also already have partners in the Nordics, like in Finland and Sweden. We have partners already in the UK, and it's rapidly growing. For example, we are now starting some activities in Singapore, and also in the Middle East area. Very importantly, depending on, let's say, the way business is done, we currently try to concentrate on those countries where we can have at least English as an accepted business language. Great. Is there any particular region you're having the most success with right now? It sounds like the European Union is kind of the first wave. Yes, that's definitely the first wave, and now we're also getting the European instance up and running. It's clearly our commitment to the market, saying, okay, we know there are certain dedicated requirements, and we take care of this,
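The prioritization Rainer describes, keeping only the vulnerabilities that are proven executable and ranking them by how easily they can be used, can be sketched roughly like this. The field names, weights, and sample data are illustrative assumptions, not NodeZero's actual model:

```python
# Hypothetical sketch of exploitability-based prioritization, as
# described above: discard findings that are not proven executable,
# then rank the rest easiest-first.

def prioritize(findings):
    """Keep only findings proven exploitable, ranked easiest-first."""
    exploitable = [f for f in findings if f["proven_exploitable"]]
    # Lower attack effort and higher impact float to the top.
    return sorted(exploitable,
                  key=lambda f: (f["attack_effort"], -f["impact"]))

# Toy scanner output: CVE-A is reported but never proven exploitable,
# so it drops out of the worklist entirely.
scanner_output = [
    {"cve": "CVE-A", "proven_exploitable": False, "attack_effort": 1, "impact": 9},
    {"cve": "CVE-B", "proven_exploitable": True,  "attack_effort": 2, "impact": 7},
    {"cve": "CVE-C", "proven_exploitable": True,  "attack_effort": 1, "impact": 5},
]

worklist = prioritize(scanner_output)
# CVE-C (easiest to use) comes before CVE-B; CVE-A is filtered out.
```

This mirrors the point made above: a scanner's raw CVE count matters less than the short, ranked list of findings an attacker could actually use.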
and we're just launching it. We're building up this instance in the AWS service center here in Frankfurt, also with some dedicated hardware in a data center in Frankfurt, where we have, with DE-CIX by the way, the highest internet interconnection bandwidth on the planet, so we have very short latency to wherever you are on the globe. That's a great call-out, and a benefit too; I was going to ask that. What are some of the benefits your partners are seeing in EMEA and Asia Pacific? Well, I would say the benefit for them is clearly that they can talk with customers and offer them penetration testing which before they didn't even think about, because penetration testing in the traditional way was simply too expensive for them, too complex, the preparation time was too long, and they didn't even have the capacity to support an external pen tester. Now, with this service, you can go in and say, Mr. Customer, we can do a test with you in a couple of minutes: once we have installed the Docker container, within 10 minutes we have the pen test started, that's it, and then we just wait. And I would say we are seeing so many aha moments now, because on the partner side, when they see NodeZero working for the first time, it's like, wow, that is great, and then they go out to customers and show it, typically at the beginning to mostly the friendly customers, who say, wow, that's great, I need that. And I would say the feedback from the partners is that this is a service where I do not have to evangelize the customer: everybody understands penetration testing, I don't have to describe what it is, the customer understands immediately: yes, penetration testing, good, I know I should do it, but it was too complex, too expensive. Now, with NodeZero, for example as an MSSP service provided by one of our partners, it's
getting easy. Yeah, it's a great benefit there. I've got to say, I'm a huge fan of what you're doing. I like this continuous automation; that's a major benefit for anyone doing DevOps or any kind of modern application development. This is just a godsend for them, this is really good. And like you said, the pen testers who were doing it were kind of coming down from their expertise to do things that should have been automated; they get to focus on the bigger-ticket items. That's a really big point. So we free them, we free the pen testers for the higher-level elements of the penetration testing segment, and that is typically the application testing, which is currently far away from being automated. Yeah, and that's where the most critical workloads are, and I think this is the nice balance. Congratulations on the international expansion of the program, and thanks for coming on this special presentation, I really appreciate it. Thank you, you're welcome. Okay, this is theCUBE special presentation: check out pen test automation, international expansion, Horizon3.ai, a really innovative solution. In our next segment, Chris Hill, Sector Head for Strategic Accounts, will discuss the power of Horizon3.ai and Splunk in action. You're watching theCUBE, the leader in high-tech enterprise coverage. [Music] Welcome back, everyone, to theCUBE and Horizon3.ai special presentation. I'm John Furrier, host of theCUBE. We're with Chris Hill, Sector Head for Strategic Accounts and Federal at Horizon3.ai, a great innovative company. Chris, great to see you, thanks for coming on theCUBE. Yeah, like I said, great to meet you, John, long-time listener, first-time caller, so excited to be here with you guys. Yeah, we were talking before camera: you were at Splunk back in 2013, and I think 2012 was our first Splunk .conf, and boy, man, talk about being in the right place at the right time. Now we're at another inflection point, and Splunk continues to be
relevant, continuing to have that data driving security and that interplay, and your CEO, a former CTO of Splunk as well, at Horizon3, who's been on before: a really innovative product you guys have. But you know, don't wait for a breach to find out if you're logging the right data; this is the topic of this thread. Splunk is very much part of this new international expansion announcement with you guys. Tell us, what are some of the challenges you see where this is relevant for Splunk and Horizon3.ai, as you expand NodeZero out internationally? Yeah, well, my role within Splunk was working with our most strategic accounts, and so I look back to 2013 and think about the sales process, like working with our small customers. It was still very siloed back then: I was selling to an IT team that was using this for IT operations, and we generally would always even say, yeah, although we do security, we weren't really designed for it, we're a log management tool. And I'm sure you remember back then, John, we were sort of stepping into the security space, and in the public sector domain that I was in, security was 70% of what we did. When I look back at the transformation I was witnessing, that digital transformation, when I look at 2019 to today, you look at how the IT teams and the security teams have been forced to break down the barriers they used to be siloed behind and would not communicate across. The security guys would be like, oh, this is my box, IT, you're not allowed in. Today you can't get away with that, and I think the value that we bring, and of course Splunk has been a huge leader in that space and continues to do innovation across the board, but I think what we're seeing in the space, and I was talking with Patrick Coughlin, the SVP of security markets, about this, is that what we've
been able to do with Splunk is build a purpose-built solution that allows Splunk to eat more data. Splunk itself, you know, is an ingest engine, right? The great reason people bought it was you could build these really fast dashboards and grab intelligence out of it, but without data it doesn't do anything, right? So how do you drive and bring more data in, and most importantly, from a customer perspective, how do you bring the right data in? And so if you think about what NodeZero and what we at Horizon3 are doing: sure, we do pen testing, but because we're an autonomous pen testing tool, we do it continuously. So this whole thought of, oh, crud, my customers going, oh yeah, we've got a pen test coming up, it's going to be six weeks, and everyone's going to sit on their hands, call me back in two months, Chris, we'll talk to you then, right? Not a real efficient way to test your environment, and shoot, we saw that with Uber this week, right? And that's a case where we could have helped. Oh, that's right. Could you explain the Uber thing, because it was a contractor? Just give a quick highlight of what happened, so you can connect the dots. Yeah, no problem. It was, I think, one of those games where they would try and test an environment, and what the pen tester did was he kept on calling their MFA guys, saying, I need to reset my password, we need to reset my password, and eventually the customer service guy said, okay, I'm resetting it. Once he had reset and bypassed the multi-factor authentication, he was able to get in and get access to, I think not the whole domain, but a partial part of that network. He then pivoted over to what I would assume was a VMware host or some virtual machine that had notes with all of the credentials for logging into various domains, and so within minutes they had access, and that's the
sort of stuff that we do. You know, a lot of these tools, you think about the cacophony of tools that are out there in a zero-trust architecture, right? I'm going to get a Zscaler, or I'm going to have Okta, and I have a Splunk, I've got SolarWinds in there, I don't mean to name names, we have CrowdStrike or SentinelOne in there. It's just a cacophony of things that don't work together; they weren't designed to work together. And we have seen so many times in our business, through our customer support and just working with customers when we do their pen tests, that there will be 5,000 servers out there, three are misconfigured, and those three misconfigurations will create the open door. Because remember, the hacker only needs to be right once; the defender needs to be right all the time, and that's the challenge. And so that's what I'm really passionate about, what we're doing here at Horizon3: I see this digital transformation migration and security going on, and we're at the tip of the spear. It's why I joined Snehal on this journey, and I'm just super excited about where the path's going, and super excited about the relationship with Splunk. I'll get into more details on some of the specifics of that, but, you know... Well, you're nailing it. I mean, we've been doing a lot of things on supercloud and this next-gen environment, we're calling it next gen. You're really seeing DevOps, obviously DevSecOps has already won, the IT role has moved to the developer, shift left is an indicator of that, it's one of many examples: higher-velocity code, software supply chain. You hear these things, and that means it is now in the developer's hands; it's been replaced by the new ops, the data ops teams, and security, where there's a lot of horizontal thinking. To your point about access, there's no more perimeter, and the attacker only needs to be right one time: once you get in there, once you're in, then you can hang out, move around, move laterally, big problem. Okay, so we
get that now. The challenge for these teams, as they transition organizationally: how do they figure out what to do? Okay, this is the next step. They already have Splunk, so now they're kind of in transition while protecting for a hundred percent ratio of success. So how would you look at that and describe the challenge? What do they do? What are the teams facing with their data, and what action do they take? So let's use some vernacular that folks will know. If I think about DevSecOps, right, we both know what that means: I'm going to build security into the app. It normally talks about SecDevOps, right: how am I building security around the perimeter of what's going on inside my ecosystem, and what are they doing? And so, if you think about what we're able to do with somebody like Splunk, we can pen test the entire environment from soup to nuts, right? So I'm going to test the endpoints through to the infrastructure. I'm going to look for misconfigurations, I'm going to look for exposed credentials, I'm going to look for anything I can in the environment, and again, I'm going to do it at light speed. And what we're doing for that SecDevOps space is: did you detect that we were in your environment? So did we alert Splunk or the SIM that there's someone in the environment laterally moving around? More importantly, did they log us in their environment, and when did they detect that log, did that log trigger, did they alert on us? And then finally, most importantly for every CISO out there, is going to be: did they stop us? And so that's how we do this, and I think, when speaking with Snehal before, we've come up with this, we call it find, fix, verify. So what we do is we go in and act as the attacker, right? We act in a production environment, so we're not a passive attacker, but we will go in uncredentialed, with no agents, but we have
to have an assumed-breach model, which means we're going to put a Docker container in your environment, and then we're going to fingerprint the environment: we're going to go out and do an asset survey. Now that's not something that Splunk does super well, you know, so: can Splunk see all the assets, do the same assets marry up? We're going to log all that data and then load it into the Splunk SIM, or the logging tools, just to have it in the enterprise, right? That's an immediate value-add that they've got. And then we've got the fix: once we've completed our pen test, we are then going to generate a report, and we can talk about these in a little bit, but the reports will show an executive summary, the assets that we found, which would be your asset discovery aspect of that, and a fix report. And the fix report, I think, is probably the most important one: it will go down and identify what we did, how we did it, and then how to fix that. And then from that, the pen tester or the organization should fix those; then they go back and run another test, and then they validate, like a change-detection environment, to see, hey, did those fixes take place? And, you know, Snehal, when he was the CTO of JSOC, shared with me a number of times that it was like, man, there would be 15 more items on next week's punch sheet that we didn't know about, and it has to do with how they were prioritizing the CVEs and whatnot, because they would take all CVEs, whether critical or non-critical, and it's like, we are able to create context in that environment that feeds better information into Splunk. That brings up the efficiency for Splunk, specifically the teams out there. By the way, the burnout thing is real: this whole, I just finished my list and I got 15 more, or whatever, the list just keeps growing. How does NodeZero specifically help Splunk teams be more efficient? That's the question I want to get at,
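The find / fix / verify loop described above, pen test, remediate the findings, then re-test to confirm the fixes actually took, can be sketched as follows. The function names and the toy environment are hypothetical stand-ins for real tooling, not Horizon3.ai's API:

```python
# Minimal sketch of the find / fix / verify cycle described above.
# run_pentest and apply_fix stand in for real tooling.

def find_fix_verify(environment, apply_fix, run_pentest, max_rounds=5):
    """Repeat pen tests until a round comes back clean (or rounds run out)."""
    for round_no in range(1, max_rounds + 1):
        findings = run_pentest(environment)      # find
        if not findings:
            return round_no                      # verify: clean round
        for finding in findings:
            apply_fix(environment, finding)      # fix
    return None                                  # still dirty after max_rounds

# Toy environment: a set of weaknesses; each "pen test" reports what is
# left, each "fix" removes one weakness.
env = {"open_smb_share", "default_admin_password"}
rounds = find_fix_verify(
    env,
    apply_fix=lambda e, f: e.discard(f),
    run_pentest=lambda e: sorted(e),
)
# rounds == 2: round 1 finds and fixes both issues, round 2 verifies clean.
```

The point of the loop, as in the transcript, is that verification is a re-test, not a checkbox: a fix only counts once a subsequent run can no longer exploit it.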
because this seems like a very scalable way for Splunk customers and service teams to be more efficient. So the question is: how does NodeZero help make Splunk service teams more efficient? So, today, in our early interactions with customers, we've seen five things, and I'll start with identifying the blind spots, right? Kind of what I just talked about with you: did we detect, did we log, did we alert, did they stop NodeZero, right? And so, to put that in more layman's, third-grade terms, as if I were going to beat a fifth-grader at this game: we can be the sparring partner for a Splunk Enterprise customer, a Splunk Essentials customer, someone using Splunk SOAR, or even just an enterprise Splunk customer that may be a small shop with three people that just wants to know where am I exposed. So by creating and generating these reports, and then having the API that actually generates the dashboard, they can take all of these events that we've logged and log them in. And then number two is: how do we prioritize those logs, right? How do we create visibility into the logs that have critical impact? Again, as I mentioned earlier, not all CVEs are high-impact, and also not all are low, right? So if you daisy-chain a bunch of low CVEs together, boom, I've got a mission-critical CVE that needs to be fixed now, such as a credential moving to an NT box that's got a text file with a bunch of passwords on it; that would be very bad. And then third would be verifying that you have all of the hosts. One of the things that Splunk's not particularly great at, and they'll readily admit it themselves, is asset discovery: so, what assets do we see, and what are they logging from? And then, fourth, for every event that they are able to identify, one of the cool things that we can do is actually create this low-code, no-code environment, so Splunk customers
can use Splunk SOAR to actually triage events and prioritize them, so where they're being routed within it, to optimize the SOC team's time to market, or time to triage, for any given event, obviously reducing MTTR. And then finally, I think one of the neatest things that you'll be seeing us develop is our ability to build glass tables. So behind me you'll see one of our triage events and how we build a Lockheed Martin kill chain on that with a glass table, which is very familiar to the community. We're going to have the ability, in the not-too-distant future, to allow people to search on those IOCs, and if people aren't familiar with it, an IOC is an indicator of compromise; that's a vector that we want to drill into, and of course, who's better at drilling into the data than Splunk? Yeah, this is an awesome synergy there. I mean, I can see a Splunk customer going, man, this just gives me so much more capability, actionability, and also real understanding, and I think this is what I want to dig into, if you don't mind: understanding that critical impact, okay? That's kind of where I see this coming. You've got the data, data ingest, now data's data, but the question is what not to log, you know, where things are misconfigured; these are critical questions. So can you talk about what it means to understand critical impact? Yeah, so I think, going back to what I just spoke about: a lot of those CVEs where you'll see low, low, low, and then you daisy-chain them together and they're suddenly like, oh, this is high now. But then there's your other impact: if you're a Splunk customer, and I had several of them, I had one customer with terabytes of McAfee data being brought in, and it was like, all right, there's a lot of other data that you probably also want to bring in, but they could only afford, or wanted to do, certain data sets, because they didn't know how to prioritize or filter those data sets. And so we provide that opportunity to say,
hey, these are the critical ones to bring in, but there are also ones that you don't necessarily need to bring in, because a low CVE in this case really does mean a low CVE. An iLO server would be one, or the print server, where your admin credentials are on, like, a printer, and so there will be credentials on that; that's something a hacker might go in to look at. So although the CVE on it is low, if you daisy-chain it with somebody who's able to get into that, you might say, ah, that's high, and we would then potentially re-rank it, using our AI logic, to say that's a moderate. So we put it on the scale and prioritize those, versus all of these scanners that are just going to give you a bunch of CVEs and good luck. And translating that, if I can, and tell me if I'm wrong: that kind of speaks to that whole lateral-movement challenge, right? The print server is a great example: looks stupid, low-end, who's going to want to deal with the print server? Oh, but it's connected into a critical system; there's a path. Is that kind of what you're getting at? Yeah, I use daisy chain, I think that came from the community, but it's just lateral movement; it's exactly what they're doing, and those low-level, low-criticality lateral movements are where the hackers are getting in, right? So that's the beauty of the Uber example: who would have thought, you know, I've got my multi-factor authentication going, and a human made a mistake. We can't expect humans not to make mistakes; we're fallible, right? The reality is, once they were in the environment, they could have protected themselves by running enough pen tests to know that they had certain exposed credentials that would have stopped the breach, and they had not done that in their environment, and I'm not poking at them. Yeah, but it's an interesting trend, though. I mean, it's obvious: sometimes those low-end items are also not protected well, so it's easy to get at from a hacker standpoint,
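The daisy-chain escalation Chris describes, individually low CVEs that together form a path to a critical asset, can be sketched like this. The severity scale and the escalation rule are illustrative assumptions, not the product's actual AI logic:

```python
# Hypothetical sketch of "daisy chain" severity escalation: a chain of
# low findings that reaches a critical asset is rated high, even though
# each link on its own looks harmless.

SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3}
RANK_NAME = {v: k for k, v in SEVERITY_RANK.items()}

def chain_severity(chain):
    """A chain ending at a critical asset rates at least 'high',
    regardless of how low each individual link scores."""
    worst = max(SEVERITY_RANK[f["severity"]] for f in chain)
    if chain[-1]["reaches_critical_asset"]:
        worst = max(worst, SEVERITY_RANK["high"])
    return RANK_NAME[worst]

# Print server -> exposed credential file -> domain admin: three "low"
# findings that together warrant a high rating.
path = [
    {"severity": "low", "reaches_critical_asset": False},  # print server
    {"severity": "low", "reaches_critical_asset": False},  # credential file
    {"severity": "low", "reaches_critical_asset": True},   # domain admin
]
rating = chain_severity(path)
# rating == "high"
```

An isolated low finding that reaches nothing critical would keep its low rating under the same rule, which is exactly the filtering-versus-escalation distinction made in the transcript.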
but also the people in charge of them can be phished easily, or spearphished, because they're not paying attention, because they don't have to, no one ever told them, hey, be careful. Yeah, for the community that I came from, John, that's exactly how they would do it: they would meet you at an international event, introduce themselves as a graduate student, and these are nation-state actors, and ask, would you mind reviewing my thesis on such and such? And I was at Adobe at the time that I was working on this. You'd open the PDF, and whatever that payload was launches. And I don't know if you remember, back in like the 2008 time frame there were a lot of issues around IP being stolen from the United States by nation states, and that's exactly how they did it. And John, there's also LinkedIn: hey, we want to hire you, double the salary. Oh, I'm gonna click on that for sure, you know. Yeah, right, exactly. Yeah, the one thing I would say to you is, when we look at it, you know, because I think we did 10,000 pen tests last year, it's probably over that now, we have these sort of top 10 ways that we think and find people coming into the environment. The funniest thing is that only one of them is a CVE-related vulnerability, like, you know, you guys know what they are, right? So it's like two percent of the attacks are occurring through the CVEs, but there's all that attention spent on that, and very little attention spent on this pen testing side, which is sort of this continuous threat monitoring space and this vulnerability space where I think we play such an important role, and I'm so excited to be a part of the tip of the spear on this one. Yeah, I'm old enough to know the movie Sneakers, which I loved, you know, watching that movie, you know, professional hackers are testing, testing, always testing the environment. I love this. I've got to ask you as we kind of wrap up here, Chris, if you don't mind: the
benefits to professional services from this alliance. Big news, Splunk and you guys work well together, we see that clearly. What other benefits do professional services teams see from the Splunk and Horizon3.ai alliance? So I think, for both of our partners, as we bring these guys together, and many of them already are the same partner, right, first off, the licensing model is probably one of the key areas that we really excel at. So if you're an end user, you can buy for the enterprise by the number of IP addresses you're using, but if you're a partner working with this, there are solution ways that you can go in, and we'll license to MSPs, and there's what that business model for MSPs looks like. But the unique thing that we do here is this Consulting Plus license. The Consulting Plus license allows somebody, from small to mid-sized to some very large, you know, Fortune 100 consulting firms that use this, to buy into a license called Consulting Plus, where they can have unlimited access to as many IPs as they want, but you can only run one test at a time. And as you can imagine, when we're going and hacking passwords and checking hashes and decrypting hashes, that can take a while, but for the right customer it's a perfect tool. And so I'm so excited about our ability to go to market with our partners, so that we understand ourselves, understand how to not just sell to, or not just sell through, but we know how to sell with them as a good vendor partner. I think that's one thing that we've done a really good job of building and bringing into the market. Yeah, I think also Splunk has had great success with how they've enabled partners and professional services. Absolutely, you know, the services that layer on top of Splunk are multi-fold, tons of great benefits, so you guys vector right into that, ride that wave without friction. And the cool thing is that, you know, one of our reports, which could be
totally customized with someone else's logo, we're going to generate, you know, so I used to work in another organization, it wasn't Splunk, but we did, you know, pen testing for customers, and my pen testers would come on site, they'd do the engagement, and they would leave. And then a release later, someone would say, oh shoot, we've got another sector that was breached, and they'd call you back, you know, four weeks later, and so by August our entire pen testing teams would be sold out, and it would be like, well, maybe in March, and they're like, no, no, I've got a breach now. And then when they do go in, they go through, do the pen test, and they hand over a PDF and a pat on the back and say, there's where your problems are, you need to fix them. And the reality is that what we're going to generate, completely autonomously, with no human interaction, is we're going to go and find all the permutations of anything we found, and the fix for those permutations, and then once you've fixed everything, you just go back and run another pen test. You know, for what people pay for one pen test, they can have a tool that does that every Patch Tuesday, and then on Wednesday, you know, you triage throughout the week: green, yellow, red. I wanted to see the colors, show me green, green is good, right, not red. And what CIO doesn't want that dashboard, right? It's exactly it, and we can help bring that. I think that, you know, I'm really excited about helping drive this with the Splunk team, because they get that, they understand that it's the green-yellow-red dashboard, and how do we help them find more green, so that the other guys are in red. Yeah, and get in the data and do the right thing and be efficient with how you use the data, know what to look at, so many things to pay attention to, you know, the combination of both, and then the go-to-market strategy, real brilliant. Congratulations, Chris, thanks for coming on and sharing this news with the detail around the Splunk in
action around the alliance. Thanks for sharing, John, my pleasure, thanks, look forward to seeing you soon. All right, great, we'll follow up and do another segment on DevOps and IT and security teams as the new ops, and supercloud, a bunch of other stuff, so thanks for coming on. And in our next segment, the CEO of Horizon3.ai will break down all the new news for us here on theCUBE. You're watching theCUBE, the leader in high-tech enterprise coverage. [Music] Yeah, the partner program for us has been fantastic. You know, I think prior to that, most organizations, most partners, most MSSPs might not necessarily have a bench at all for penetration testing; maybe they subcontract this work out, or maybe they do it themselves, but trying to staff that kind of position can be incredibly difficult. For us, this was a differentiator, a new partnership that allowed us to not only perform services for our customers, but be able to provide a product by which they can do it themselves. So we work with our customers in a variety of ways: some of them want more routine testing and perform this themselves, but we're also a certified service provider of Horizon 3, being able to perform penetration tests, help review the data, provide color, provide analysis for our customers in a broader sense, right, not necessarily the black and white elements of, you know, what's critical, what's high, what's medium, what's low, what you need to fix, but are there systemic issues. This has allowed us to onboard new customers, this has allowed us to migrate some penetration testing services to us from competitors in the marketplace, but ultimately this is occurring because the product and the outcome are special, they're unique, and they're effective. Our customers like what they're seeing, they like the routineness of it; many of them, you know, again, like doing this themselves, you know, being able to kind of pen test themselves, parts of their
networks. And the new use cases, right: I'm a large organization, I have eight to ten acquisitions per year, wouldn't it be great to have a tool to be able to perform a penetration test, both internal and external, of that acquisition before we integrate the two companies and maybe bring on some risk. It's a very effective partnership, one that really has kind of taken our engineers and our account executives by storm. You know, this is a partnership that's been very valuable to us. [Music] A key part of the value and business model at Horizon 3 is enabling partners to leverage Node Zero to make more revenue for themselves. Our goal is that sixty percent of our revenue this year will be originated by partners, and that 95 percent of our revenue next year will be originated by partners, and so a key to that strategy is making us an integral part of your business models as a partner. A key quote from one of our partners is that we enable every one of their business units to generate revenue. So let's talk about that in a little bit more detail. First is that if you have a pen test consulting business, take Deloitte as an example, what was six weeks of human labor at Deloitte per pen test has been cut down to four days of labor, using Node Zero to conduct reconnaissance, find all the juicy, interesting areas of the enterprise that are exploitable, and being able to go assess the entire organization, with all of those details then served up to the human to be able to look at, understand, and determine where to probe deeper. So what you see in that pen test consulting business is that Node Zero becomes a force multiplier, where those consulting teams are able to cover way more accounts, and way more IPs within those accounts, with the same or fewer consultants, and that directly leads to profit margin expansion for the pen testing business itself, because Node Zero is a force multiplier. The second business model here is if you're an MSSP. As an MSSP, you're already making
money providing defensive cyber security operations for a large volume of customers, and so what they do is they'll license Node Zero and use us as an upsell to their MSSP business, to start to deliver either continuous red teaming, continuous verification, or purple teaming as a service. And so in that particular business model, they've got an additional line of revenue where they can increase the spend of their existing customers by bolting on Node Zero as a purple-team-as-a-service offering. The third business model, or customer type, is if you're an IT services provider. As an IT services provider, you make money installing and configuring security products like Splunk or CrowdStrike or Humio, you also make money reselling those products, and you also make money generating follow-on services to continue to harden your customer environments. And so what those IT service providers will do is use us to verify that they've installed Splunk correctly, prove to their customer that Splunk was installed correctly, or that CrowdStrike was installed correctly, using our results, and then use our results to drive follow-on services and revenue. And then finally, we've got the value-added reseller, which is just a straight-up reseller. Because of how fast our sales cycles are, these VARs are able to typically go from cold email to deal close in six to eight weeks. At Horizon 3, a single sales engineer is able to run 30 to 50 POCs concurrently, because our POCs are very lightweight and don't require any on-prem customization or heavy pre-sales and post-sales activity, so as a result we're able to have a small number of sellers driving a lot of revenue and volume for us. Well, the same thing applies to VARs: there isn't a lot of effort to sell the product or prove its value, so VARs are able to sell a lot more Horizon 3 Node Zero product without having to build up a huge specialist sales organization. So what I'm going to do is talk through scenario three here, as an IT service provider,
and just how powerful Node Zero can be in driving additional revenue. So here, think of it as: for every one dollar of Node Zero license purchased by the IT service provider to do their business, it'll generate ten dollars of additional revenue for that partner. So in this example, Kidney Group uses Node Zero to verify that they have installed and deployed Splunk correctly. Kidney Group is a Splunk partner; they sell IT services to install, configure, deploy, and maintain Splunk, and as they deploy Splunk, they're going to use Node Zero to attack the environment and make sure that the right logs and alerts and monitoring are being handled within the Splunk deployment. So it's a way of doing QA, or verifying that Splunk has been configured correctly, and that's going to be used internally by Kidney Group to prove the quality of the services that they've just delivered. Then what they're going to do is show and leave behind that Node Zero report with their client, and that creates a resell opportunity for Kidney Group to resell Node Zero to their client, because their client is seeing the reports and the results and saying, wow, this is pretty amazing, and those reports can be co-branded, where it's a pen testing report branded with Kidney Group, but it says powered by Horizon 3 under it. From there, Kidney Group is able to take the fix-actions report that's automatically generated with every pen test through Node Zero, and use that as the starting point for a statement of work, to sell follow-on services to fix all of the problems that Node Zero identified: fixing LLMNR misconfigurations, fixing or patching VMware, updating credential policies, and so on. So what happens is Node Zero has found a bunch of problems, the client often lacks the capacity to fix them, and so Kidney Group can use that lack of capacity by the client as a follow-on sales opportunity for follow-on services. And finally, based on the findings from Node Zero, Kidney Group can look at that report
and say to the customer, you know, customer, if you bought CrowdStrike, you'd be able to prevent Node Zero from attacking and succeeding in the way that it did, or if you bought Humio, or if you bought Palo Alto Networks, or if you bought some privileged access management solution, because of what Node Zero was able to do with credential harvesting and attacks. And so as a result, Kidney Group is able to resell other security products within their portfolio, CrowdStrike Falcon, Humio, Palo Alto Networks, Demisto, Phantom, and so on, based on the gaps that were identified by Node Zero in that pen test. And what that creates is another feedback loop, where Kidney Group will then go use Node Zero to verify that the CrowdStrike product has actually been installed and configured correctly, and then this becomes the cycle: using Node Zero to verify a deployment, using that verification to drive a bunch of follow-on services and resell opportunities, which then further drives more usage of the product. Now, the way that we license is that it's a usage-based licensing model, so that the partner will grow their Node Zero Consulting Plus license as they grow their business. So for example, if you're Kidney Group, then in week one you're going to use Node Zero to verify your Splunk install; in week two, if you have a pen testing business, you're going to go off and use Node Zero to be a force multiplier for your pen testing client opportunity; and then if you have an MSSP business, then in week three you're going to use Node Zero to go execute a purple team MSSP offering for your clients. So not necessarily a Kidney Group, but if you're a Deloitte or an AT&T, these larger companies, and you've got multiple lines of business, if you're Optiv for instance, all you have to do is buy one Consulting Plus license and you're going to be able to run as many pen tests as you want, sequentially. So now you can buy a single license and use that one license to meet your week one client commitments, and then
meet your week two, and then meet your week three, and as you grow your business, you start to run multiple pen tests concurrently. So if in week one you've got to verify a Splunk install, and you've got to run a pen test, and you've got to do a purple team opportunity, you just simply expand the number of Consulting Plus licenses from one license to three licenses, and so now, as you systematically grow your business, you're able to grow your Node Zero capacity with you, giving you predictable COGS, predictable margins, and once again a 10x additional revenue opportunity for that investment in the Node Zero Consulting Plus license. My name is Snehal, I'm the co-founder and CEO here at Horizon 3. I'm going to talk to you today about why it's important to look at your enterprise through the eyes of an attacker. The challenge I had when I was a CIO in banking, the CTO at Splunk, and serving within the Department of Defense, is that I had no idea whether I was secure until the bad guys showed up. Am I logging the right data? Am I fixing the right vulnerabilities? Are the security tools that I've paid millions of dollars for actually working together to defend me? And the answer is, I don't know. Does my team actually know how to respond to a breach in the middle of an incident? I don't know. I've got to wait for the bad guys to show up. And so the challenge I had was, how do we proactively verify our security posture? I tried a variety of techniques. The first was the use of vulnerability scanners, and the challenge with vulnerability scanners is that being vulnerable doesn't mean you're exploitable. I might have a hundred thousand findings from my scanner, of which maybe five or ten can actually be exploited in my environment. The other big problem with scanners is that they can't chain weaknesses together from machine to machine, so if you've got a thousand machines in your environment, or more, what a vulnerability scanner will do is tell you you have a problem on machine one, and separately a
problem on machine two, but what they can't tell you is that an attacker could use a low from machine one plus a low from machine two to equal a critical in your environment. And what attackers do in their tactics is they chain together misconfigurations, dangerous product defaults, harvested credentials, and exploitable vulnerabilities into attack paths across different machines. So to address the attack paths across different machines, I tried layering in consulting-based pen testing, and the issue is, when you've got thousands of hosts, or hundreds of thousands of hosts, in your environment, human-based pen testing simply doesn't scale to test an infrastructure of that size. Moreover, when they actually do execute a pen test and you get the report, oftentimes you lack the expertise within your team to quickly retest to verify that you've actually fixed the problem, and so what happens is you end up with these pen test reports that are incomplete snapshots and quickly going stale. And then, to mitigate that problem, I tried using breach and attack simulation tools, and the struggle with these tools is, one, I had to install credentialed agents everywhere; two, I had to write my own custom attack scripts, which I didn't have much talent for, but also had to maintain as my environment changed; and three, these types of tools were not safe to run against production systems, which was the majority of my attack surface. So that's why we went off to start Horizon 3.
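The chaining idea described above, where two "low" findings on different machines combine into a critical attack path, can be sketched as a simple graph search. This is a hypothetical illustration, not Node Zero's actual algorithm; the hosts and weaknesses in it are made up for the example.

```python
from collections import deque

# Each edge: (from_host, to_host, weakness) -- an attacker on from_host
# can reach to_host by abusing the named weakness. Individually, every
# weakness here would score "low" on a scanner.
EDGES = [
    ("internet", "print-server", "default iLO credentials (low CVSS)"),
    ("print-server", "file-server", "reused admin password (low CVSS)"),
    ("file-server", "domain-controller", "cached domain admin hash (low CVSS)"),
]

def find_attack_path(start, target, edges):
    """Breadth-first search for a chain of weaknesses from start to target."""
    graph = {}
    for src, dst, weakness in edges:
        graph.setdefault(src, []).append((dst, weakness))
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        host, path = queue.popleft()
        if host == target:
            return path
        for nxt, weakness in graph.get(host, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [(host, nxt, weakness)]))
    return None  # no chain exists

# Three individually "low" findings chain into a domain compromise.
path = find_attack_path("internet", "domain-controller", EDGES)
for src, dst, weakness in path:
    print(f"{src} -> {dst}: {weakness}")
```

A per-host scanner sees three unrelated low-severity findings; the path search is what surfaces that, taken together, they reach the domain controller.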
So Tony and I met when we were in Special Operations together, and the challenge we wanted to solve was, how do we do infrastructure security testing at scale, by putting the power of a 20-year pen testing veteran into the hands of an IT admin or a network engineer, in just three clicks. And the whole idea is that we enable these fixers, the blue team, to be able to run Node Zero, our pen testing product, to quickly find problems in their environment. That blue team will then go off and fix the issues that were found, and then they can quickly rerun the attack to verify that they fixed the problem. And the whole idea is delivering this without requiring custom scripts to be developed, without requiring credentialed agents to be installed, and without requiring the use of external third-party consulting services or professional services: self-service pen testing to quickly drive find, fix, verify. There are three primary use cases that our customers use us for. The first is the SOC manager, who uses us to verify that their security tools are actually effective: to verify that they're logging the right data in Splunk or in their SIEM, to verify that their managed security services provider is able to quickly detect and respond to an attack and hold them accountable for their SLAs, or that the SOC understands how to quickly detect and respond, measuring and verifying that, or that the variety of tools you have in your stack (most organizations have 130-plus cyber security tools, none of which are designed to work together) are actually working together. The second primary use case is proactively hardening and verifying your systems. This is when the IT admin, the network engineer, is able to run self-service pen tests to verify that their Cisco environment is installed and hardened and configured correctly, or that their credential policies are set up right, or that their vCenter or WebSphere or Kubernetes environments are actually designed to be secure. And what this allows the IT
admins and network engineers to do is shift from running one or two pen tests a year to 30, 40, or more pen tests a month, and you can actually wire those pen tests into your DevOps process, or into your detection engineering and change management processes, to automatically trigger pen tests every time there's a change in your environment. The third primary use case is for those organizations lucky enough to have their own internal red team: they'll use Node Zero to do reconnaissance and exploitation at scale, and then use the output as a starting point for the humans to step in and focus on the really hard, juicy stuff that gets them on stage at DEF CON. And so those are the three primary use cases, and what we'll do is zoom into the find-fix-verify loop, because what I've found in my experience is that find, fix, verify is the future operating model for cyber security organizations. And what I mean here is, in the find, using continuous pen testing, what you want to enable is on-demand, self-service pen tests. You want those pen tests to find attack paths at scale, spanning your on-prem infrastructure, your cloud infrastructure, and your perimeter, because attackers don't only stay in one place: they will find ways to chain together a perimeter breach and a credential from your on-prem to gain access to your cloud, or some other permutation. And then the third part of continuous pen testing is that attackers don't focus on critical vulnerabilities anymore; they know we've built vulnerability management programs to reduce those vulnerabilities, so attackers have adapted, and what they do is chain together misconfigurations in your infrastructure and software and applications with dangerous product defaults, with exploitable vulnerabilities, and with the collection of credentials, through a mix of techniques, at scale. Once you've found those problems, the next question is, what do you do about it? Well, you want to be able to prioritize fixing problems that are actually exploitable in your environment, that
truly matter, meaning they're going to lead to domain compromise, or domain user compromise, or access to your sensitive data. The second thing you want to fix is making sure you understand what risk your crown jewels data is exposed to. Where is your crown jewels data? Is it in the cloud? Is it on-prem? Has it been copied to a share drive that you weren't aware of? If a domain user was compromised, could they access that crown jewels data? You want to be able to use the attacker's perspective to secure the critical data you have in your infrastructure. And then finally, as you fix these problems, you want to quickly remediate and retest that you've actually fixed the issue, and this find-fix-verify cycle becomes the accelerator that drives purple team culture. The third part here is verify, and what you want to be able to do in the verify step is verify that your security tools and processes and people can effectively detect and respond to a breach. You want to be able to integrate that into your detection engineering processes, so that you know you're catching things with the right security rules, or that you've deployed the right configurations. You also want to make sure that your environment is adhering to the best practices around systems hardening and cyber resilience, and finally, you want to be able to prove your security posture over time to your board, to your leadership, and to your regulators. So what I'll do now is zoom into each of these three steps. When we zoom into find, here's the first example using Node Zero and autonomous pen testing, and what an attacker will do is find a way to break through the perimeter. In this example, it's very easy to misconfigure Kubernetes to allow an attacker to gain remote code execution into your on-prem Kubernetes environment and break through the perimeter, and from there, what the attacker is going to do is conduct network reconnaissance, and then find ways to gain code execution on other machines in the environment, and as they get code execution, they start to
dump credentials, collect a bunch of NTLM hashes, crack those hashes using open source and dark-web-available data as part of those attacks, and then reuse those credentials to log in and laterally maneuver throughout the environment. And as they laterally maneuver, they can reuse those credentials, and use credential spraying techniques and so on, to compromise your business email and to log in as admin into your cloud, and this is a very common attack, and rarely is a CVE actually needed to execute it; often it's just a misconfiguration in Kubernetes, with a bad credential policy or password policy, combined with bad practices of credential reuse across the organization. Here's another example of an internal pen test, and this is from an actual customer. They had 5,000 hosts within their environment, they had EDR and UBA tools installed, and they initiated an internal pen test on a single machine. From that single initial access point, Node Zero enumerated the network, conducted reconnaissance, and found five thousand hosts were accessible. What Node Zero will do under the covers is organize all of that reconnaissance data into a knowledge graph that we call the cyber terrain map, and that cyber terrain map becomes the key data structure that we use to efficiently maneuver and attack and compromise your environment. So what Node Zero will do is try to find ways to get code execution, reuse credentials, and so on. In this customer example, they had Fortinet installed as their EDR, but Node Zero was still able to get code execution on a Windows machine. From there, it was able to successfully dump credentials, including sensitive credentials, from the LSASS process on the Windows box, and then reuse those credentials to log in as domain admin in the network. And once an attacker becomes domain admin, they have the keys to the kingdom; they can do anything they want. So what happened here? Well, it turns out Fortinet was misconfigured on three out of 5,000 machines, bad automation; the customer had
no idea this had happened. They would have had to wait for an attacker to show up to realize that it was misconfigured. The second thing is, well, why didn't Fortinet stop the credential pivot and the lateral movement? And it turned out the customer hadn't bought the right modules or turned on the right services within that particular product, and we see this not only with Fortinet, but with Trend Micro and all the other defensive tools, where it's very easy to miss a checkbox in the configuration that will do things like prevent credential dumping. The next story I'll tell you is: attackers don't have to hack in, they log in. So, another infrastructure pen test: a typical technique attackers will take is man-in-the-middle attacks that collect hashes. In this case, what an attacker will do is leverage a tool or technique called Responder to collect NTLM hashes that are being passed around the network, and there's a variety of reasons why these hashes are passed around, and it's a pretty common misconfiguration. But as an attacker collects those hashes, they start to apply techniques to crack them, or they'll pass the hash, and from there they will use open source intelligence, common password structures and patterns, and other types of techniques to try to crack those hashes into clear-text passwords. So here Node Zero automatically collected hashes, it automatically passed the hashes and cracked those credentials, and then from there it starts to take the domain user IDs and passwords that it's collected and tries to access different services and systems in your enterprise. In this case, Node Zero was able to successfully gain access to the Office 365 email environment, because three employees didn't have MFA configured, so now what happens is Node Zero has placement and access in the business email system, which sets up the conditions for fraud, lateral phishing, and other techniques. But what's especially insightful here is that 80 percent of the hashes that were collected in this
pen test were cracked in 15 minutes or less. 80 percent. 26 percent of the user accounts had a password that followed a pretty obvious pattern: first initial, last initial, and four random digits. The other thing that was interesting is that 10 percent of service accounts had their user ID the same as their password, so VMware admin, VMware admin; websphere admin, websphere admin; and so on and so forth. And so attackers don't have to hack in, they just log in with credentials that they've collected. The next story here is becoming AWS admin. So in this example, once again an internal pen test, Node Zero gets initial access, it discovers 2,000 hosts are network reachable from that environment, it fingerprints and organizes all of that data into a cyber terrain map, and from there it fingerprints that HP iLO, the Integrated Lights-Out service, was running on a subset of hosts. HP iLO is a service that is often not instrumented or observed by security teams, nor is it easy to patch; as a result, attackers know this and immediately go after those types of services. So in this case, that iLO service was exploitable, and we were able to get code execution on it. iLO stores all the user IDs and passwords in clear text in a particular set of processes, so once we gained code execution, we were able to dump all of the credentials, and then from there laterally maneuver to log in to the Windows box next door as admin, and then on that admin box we were able to gain access to the share drives, and we found a credentials file saved on a share drive. From there, it turned out that credentials file was the AWS admin credentials file, giving us full admin authority to their AWS accounts. Not a single security alert was triggered in this attack, because the customer wasn't observing the iLO service, and every step thereafter was a valid login in the environment. And so what do you do? Step one, patch the server. Step two, delete the credentials file from the share drive. And then step three is get better instrumentation on privileged access users and
logins. The final story I'll tell is a typical pattern that we see across the board, one that combines the various techniques I've described, where an attacker is going to go off and use open source intelligence to find all of the employees that work at your company. From there, they're going to look up those employees in dark web breach databases and other sources of information, and then use that as a starting point to password spray to compromise a domain user. All it takes is one employee to reuse a breached password for their corporate email, or all it takes is a single employee to have a weak password that's easily guessable; all it takes is one. And once the attacker is able to gain domain user access, in most shops the domain user is also the local admin on their laptop, and once you're local admin, you can dump SAM and get local NTLM hashes, and you can use those to reuse credentials again as local admin on neighboring machines, and attackers will start to rinse and repeat. Then eventually they're able to get to a point where they can dump LSASS, whether by unhooking the anti-virus, defeating the EDR, or finding a misconfigured EDR, as we talked about earlier, to compromise the domain. And what's consistent is that the fundamentals are broken at these shops: they have poor password policies, they don't have least-privilege access implemented, Active Directory groups are too permissive, where the domain admin or domain user is also the local admin, AV or EDR solutions are misconfigured or easily unhooked, and so on. And what we found in 10,000 pen tests is that user behavior analytics tools never caught us in that lateral movement, in part because those tools require pristine logging data in order to work, and also because it becomes very difficult to find that baseline of normal usage versus abnormal usage of credential logins. Another interesting insight is that there were several marquee brand-name MSSPs defending our customers' environments, and for them it took seven hours to detect and
respond to the pen test. Seven hours, when the pen test was over in less than two hours. So what you had was an egregious violation of the service level agreements that MSSP had in place, and the customer was able to use us to get service credit and drive accountability of their SOC and of their provider. The third interesting thing is, in one case it took us seven minutes to become domain admin in a bank. That bank had every Gucci security tool you could buy, yet in 7 minutes and 19 seconds NodeZero started as an unauthenticated member of the network and was able to escalate privileges through chaining misconfigurations, lateral movement, and so on to become domain admin. If it's seven minutes today, we should assume it'll be less than a minute a year or two from now, making it very difficult for humans to detect and respond to that type of blitzkrieg attack. So that's the find. It's not just about finding problems, though; the bulk of the effort should be what to do about it, the fix and the verify. As you find those problems, back to Kubernetes as an example: we will show you the path, here is the kill chain we took to compromise that environment. We'll show you the impact, here is the proof of exploitation that we were able to use to compromise it, and there's the actual command that we executed, so you could copy and paste that command and compromise that kubelet yourself if you want. Then the impact: we got code execution, and we'll show you here is the impact, this is a critical, here's why, it enabled perimeter breach, affected applications. We'll tell you the specific IPs where you've got the problem, how it maps to the MITRE ATT&CK framework, and then we'll tell you exactly how to fix it. We'll also show you what this problem enabled, so you can accurately prioritize why this is or is not important. The next part is accurate prioritization. The hardest part of my job as a CIO was deciding what not
to fix. So if you take SMB signing not required as an example: by default that CVSS score is a one out of 10, but this misconfiguration, it's not a CVE, it's a misconfig, enabled an attacker to gain access to 19 credentials, including one domain admin and two local admins, plus access to a ton of data. Because of that context, this is really a 10 out of 10; you'd better fix this as soon as possible. However, of the seven occurrences that we found, it's only a critical in three out of the seven, and these are the three specific machines, and we'll tell you the exact way to fix it, and you'd better fix those as soon as possible. These four machines over here didn't allow us to do anything of consequence, so because the hardest part is deciding what not to fix, you can justifiably choose not to fix those four issues right now, just add them to your backlog, and surge your team to fix the three critical ones as quickly as possible. Once you fix those three, you don't have to re-run the entire pen test; you can select those three, click one-click verify, and run a very narrowly scoped pen test that tests only those specific issues. What that creates is a much faster cycle of finding and fixing problems. The other part of fixing is verifying that you don't have sensitive data at risk. Once we become a domain user, we're able to use those domain user credentials to try to gain access to databases, file shares, S3 buckets, git repos, and so on, and help you understand what sensitive data you have at risk. In this example, a green checkbox means we logged in as a valid domain user and were able to get read-write access on the database; this is how many records we could have accessed. We don't actually look at the values in the database, but we'll show you the schema so you can quickly characterize that PII data was at risk. We'll do that for your file shares and other sources of data, so now you can accurately articulate the data you have at risk and prioritize cleaning that data up
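The SMB-signing example above, a CVSS base score of one escalated to a ten by what the weakness actually enabled, can be sketched as a small context-based scoring function. The weights, thresholds, and field names here are illustrative assumptions for the sake of the example, not Horizon3.ai's actual scoring model:

```python
# Sketch of context-based scoring: a finding's base CVSS score is
# adjusted per host by what the weakness actually enabled there.
# Weights and impact fields are invented for illustration.

def contextual_score(base_score, impact):
    """Raise a low base score when the finding enabled real impact."""
    score = base_score
    if impact.get("credentials_gained", 0) > 0:
        score = max(score, 7.0)          # credential access is serious
    if impact.get("domain_admin"):
        score = 10.0                     # domain admin trumps everything
    return min(score, 10.0)

# SMB signing not required: base ~1, but on this host it yielded
# 19 credentials, including a domain admin -> contextually a 10.
print(contextual_score(1.0, {"credentials_gained": 19, "domain_admin": True}))
# The same misconfiguration on a host where it enabled nothing stays a 1.
print(contextual_score(1.0, {}))
```

A real model would weigh far more signals (reachability, data sensitivity, blast radius), but the shape is the same: score the occurrence, not just the weakness.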
especially data that will lead to a fine or a big news issue. So that's the find, that's the fix; now we're going to talk about the verify. The key part in verify is embracing and integrating with detection engineering practices. When you think about your layers of security tools, you've got lots of tools in place, on average 130 tools at any given customer, but these tools were not designed to work together. So when you run a pen test, what you want to ask is: did you detect us, did you log us, did you alert on us, did you stop us? From there, what you want to see is, okay, what are the techniques that are commonly used to defeat an environment, to actually compromise it. If you look at the top 10 techniques we use, and there are far more than just these 10, but these are the most often executed, nine out of ten have nothing to do with CVEs. It has to do with misconfigurations, dangerous product defaults, bad credential policies, and how we chain those together to become a domain admin or compromise a host. So what customers will do is this: every single attacker command we executed is provided to you as an attack activity log, so you can see every single attacker command we ran, the timestamp it was executed, the host it executed on, and how it maps to the MITRE ATT&CK tactics. Our customers will have these attacker logs on one screen, and then they'll go look into Splunk or Exabeam or SentinelOne or CrowdStrike and ask: did you detect us, did you log us, did you alert on us, or not? To make that even easier, take this example: hey Splunk, what logs did you see at this time on the VMware host, because that's when NodeZero was able to dump credentials, and that allows you to identify and fix your logging blind spots. To make that easier still, we've got app integration. This is an actual Splunk app in the Splunk app store, and inside the Splunk console itself you can fire up the Horizon3 NodeZero app. All of the pen test results are there, so that you can see all of
the results in one place, and you don't have to jump out of the tool. What we'll show you, as I skip forward, is: hey, there's a pen test, here are the critical issues that we've identified, for that weak default issue here are the exact commands we executed, and then we will automatically query into Splunk all terms between these times on that endpoint that relate to this attack. So you can now, within the Splunk environment itself, quickly figure out whether you're missing logs or appropriately catching this issue, and that becomes incredibly important in the detection engineering cycle that I mentioned earlier. So how do our customers end up using us? They shift from running one pen test a year to 30 or 40 pen tests a month, oftentimes wiring us into their deployment automation to automatically run pen tests. The other thing they'll do is, as they run more pen tests they find more issues, but eventually they hit an inflection point where they're able to rapidly clean up their environment, and that inflection point comes because the red and the blue teams start working together in a purple team culture, proactively hardening their environment. Our customers will also run us from different perspectives. They'll first run an RFC 1918 scope to see, once the attacker gained initial access in a part of the network that had wide access, what could they do. Then from there they'll run us within a specific network segment: okay, from within that segment, could the attacker break out and gain access to another segment? Then they'll run us from their work-from-home environment: could they traverse the VPN and do something damaging, and once they're in, could they traverse the VPN and get into my cloud? Then they'll break in from the outside. All of these perspectives are available to you in Horizon3 and NodeZero as a single SKU, and you can run as many pen tests as you want. If you run a phishing campaign and find that
an intern in the finance department had the worst phishing behavior, you can then inject their credentials and actually show the end-to-end story of how an attacker phished, gained the credentials of an intern, and used that to gain access to sensitive financial data. So what our customers end up doing is running multiple attacks from multiple perspectives and looking at those results over time. I'll leave you with two things. One is: what is the AI in Horizon3.ai? Those knowledge graphs are the heart and soul of everything that we do, and we use machine learning, reinforcement learning techniques, Markov decision models, and so on to efficiently maneuver and analyze the paths in those really large graphs. We also use context-based scoring to prioritize weaknesses, and we're able to drive collective intelligence across all of the operations, so the more pen tests we run, the smarter we get, and all of that is based on the knowledge graph analytics infrastructure that we have. Finally, I'll leave you with the decision criteria I used when I was a buyer for my security testing strategy. What I cared about was coverage: I wanted to be able to assess my on-prem, cloud, perimeter, and work-from-home environments, and be safe to run in production. I wanted to be able to do that as often as I wanted. I wanted to be able to run pen tests in hours or days, not weeks or months, so I could accelerate that find, fix, verify loop. I wanted my IT admins and network engineers with limited offensive experience to be able to run a pen test in a few clicks through a self-service experience, without having to install agents or write custom scripts. And finally, I didn't want to get nickel-and-dimed on having to buy different types of attack modules or different types of attacks; I wanted a single annual subscription that allowed me to run any type of attack as often as I wanted, so I could look at my trends and directions over time. So I hope you found this talk valuable. We're easy to find,
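The knowledge-graph idea mentioned here, efficiently maneuvering through paths in a large graph of hosts and credentials, can be illustrated with a minimal breadth-first search over a toy attack graph. The graph, node names, and edge semantics below are invented for illustration (loosely echoing the iLO-to-AWS-admin story from earlier) and bear no relation to NodeZero's actual models:

```python
from collections import deque

# Toy attack graph: an edge X -> Y means "compromising X enables
# reaching Y" (via a credential, misconfiguration, or exploit).
# All node names are hypothetical.
ATTACK_GRAPH = {
    "initial-foothold": ["ilo-server", "workstation-7"],
    "ilo-server": ["windows-admin-box"],
    "workstation-7": [],
    "windows-admin-box": ["share-drive"],
    "share-drive": ["aws-admin"],
    "aws-admin": [],
}

def shortest_attack_path(graph, start, goal):
    """Breadth-first search: the shortest chain from foothold to goal."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # goal unreachable from this foothold

print(shortest_attack_path(ATTACK_GRAPH, "initial-foothold", "aws-admin"))
```

A production system would layer reinforcement learning and scoring on top of a graph many orders of magnitude larger, but the core question is the same one BFS answers: what chain of steps connects initial access to the objective?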
and I look forward to seeing you use the product and letting our results do the talking. When you look at the way our pen testing algorithms work, we dynamically select how to compromise an environment based on what we've discovered, and the goal is to become a domain admin, compromise a host, compromise domain users, find ways to encrypt data, steal sensitive data, and so on. But when you look at the top 10 techniques that we ended up using to compromise environments, the first nine have nothing to do with CVEs, and that's the reality. CVEs are, yes, a vector, but less than two percent of CVEs are actually used in a compromise. Oftentimes it's some sort of credential collection, credential cracking, or credential pivoting, using that to become an admin, and then compromising environments from that point on. I'll leave this up for you to read through, and you'll have the slides available, but I found it very insightful that organizations, ourselves at GE included when I was there, invested heavily in just standard vulnerability management programs. When I was at DOD, all DISA cared about asking us about was our CVE posture. But attackers have adapted to not rely on CVEs to get in, because they know that organizations are actively looking at and patching those CVEs; instead they're chaining together credentials from one place with misconfigurations and dangerous product defaults in another to take over an environment. A concrete example: by default, vCenter backups are not encrypted, so if an attacker finds vCenter, what they'll do is find the backup location, and there are specific vCenter MDB files where the admin credentials are persisted in the binaries. As an attacker you can find the right MDB file, parse out the binary, and now you've got the admin credentials for the vCenter environment and can start to log in as admin. There's a bad habit by signal officers and signal
practitioners in the Army and elsewhere, where the VM notes section of a virtual image has the password for the VM. Those VM notes are not stored encrypted, and attackers know this; they're able to go off and find the VMs that are unencrypted, find the notes section, pull out the passwords for those images, and then reuse those credentials across the board. So I'll pause here. Patrick, I'd love to get some commentary on these techniques and other things that you've seen, and what we'll do in the last, say, 10 to 15 minutes is roll through a little bit more on what to do about it. Yeah, no, I love it. I think this is pretty exhaustive. What I like about what you've done here is, you know, we've seen double-digit increases in the number of organizations that are reporting actual breaches year over year for the last three years, and often, in the zeitgeist, we peg that on ransomware, which of course is incredibly important and very top of mind. But what I like about what you have here is that we're reminding the audience that the attack surface area, the vectors that matter, have to be more comprehensive than just thinking about ransomware scenarios. Yeah, right on. So let's build on this. When you think about your defense in depth, you've got multiple security controls that you've purchased and integrated, and you've got that redundancy if a control fails. But the reality is that these security tools aren't designed to work together, so when you run a pen test, what you want to ask yourself is: did you detect NodeZero, did you log NodeZero, did you alert on NodeZero, and did you stop NodeZero? When you think about how to do that, every single attacker command executed by NodeZero is available in an attacker log, so you can see, at the bottom here, a vCenter exploit at that time on that IP, and how it aligns to MITRE ATT&CK. What you want to be
able to do is go figure out whether your security tools caught this or not, and that becomes very important in using the attacker's perspective to improve your defensive security controls. The way we've tried to make this easier, back to, you know, I bleed green in many ways still from my Splunk background, is what our customers do: they'll look at the attacker logs on one screen, they'll look at what Splunk saw or missed on another screen, and then they'll use that to figure out what their logging blind spots are. Where that becomes really interesting is that we've actually built out an integration into Splunk. There's a Splunk app you can download off of Splunkbase, and you'll get all of the pen test results right there in the Splunk console. From that Splunk console you're able to see: these are all the pen tests that were run, these are the issues that were found. You can look at a particular pen test, here are all of the weaknesses that were identified for it, and how they categorize out. For each of those weaknesses, you can click on any one of them, the criticals in this case, and then, and this is where the punch line comes in, so I'll pause the video here, for that weakness these are the commands that were executed on these endpoints at this time. Then we'll actually query Splunk for that IP address, or events containing that IP, and these are the source types that surfaced any sort of activity. What we try to do is help you as quickly and efficiently as possible identify the logging blind spots in your Splunk environment based on the attacker's perspective. As this video plays through, you can see it. Patrick, I'd love to get your thoughts, having seen so many Splunk deployments and the effectiveness of those deployments, on how this is going to help really elevate the effectiveness for all of your
Splunk customers. Yeah, I'm super excited about this. I think these kinds of purpose-built integrations will really move the needle for our customers. At the end of the day, when I think about the power of Splunk, I think about a product I was first introduced to 12 years ago that was an on-prem piece of software, and at the time it was sold on perpetual and term licenses, but what made it special was that it could eat data at a speed that nothing else I'd ever seen could. You could ingest massively scalable amounts of data, it did cool things like schema-on-read which facilitated that, there was this language called SPL that you could nerd out about, and you went to a conference once a year and talked about all the cool things you were Splunking. But now, as we think about the next phase of our growth, we live in a heterogeneous environment where our customers have so many different tools and data sources that are ever expanding, and as you look at the role of the CISO, it's mind-blowing to me the number of sources, services, and apps that have come into the CISO's span of influence in the last three years. You know, we're seeing things like infrastructure service level visibility and application performance monitoring, stuff that just never made sense for the security team to have visibility into, at least not at the size and scale we're demanding today. That's different, and this is why it's so important that we have these joint purpose-built integrations that really provide more prescription to our customers about how they walk that journey towards maturity: what does zero to one look like, what does one to two look like. Whereas 10 years ago customers were happy with platforms, today they want integration, they want solutions, and they want to drive outcomes, and I think this is a great example of how together we are stepping up to the
evolving nature of the market, the ever-evolving nature of the threat landscape, and what I would say is the maturing needs of the customer in that environment. Yeah, for sure. I think, especially as we all anticipate budget pressure over the next 18 months due to the economy and elsewhere, while security budgets are not going to get cut, I don't think, they're not going to grow as fast, and there's a lot more pressure on organizations to extract more value from their existing investments, as well as more value and more impact from their existing teams. So security effectiveness, fierce prioritization, and automation I think become the three key themes of security over the next 18 months. What I'll do very quickly is run through a few other use cases. Every host that we identified in the pen test we're able to score and say: this host allowed us to do something significant, therefore it's really critical, you should be increasing your logging here. Hey, these hosts down here, we couldn't really do anything with as an attacker, so if you do have to make trade-offs, you can trade off your logging resolution at the lower end in order to increase logging resolution at the upper end. So you've got that level of justification for where to increase or adjust your logging resolution. Another example: every host we've discovered as an attacker we expose, and you can export it, and what we want to make sure is that every host we found as an attacker is being ingested from a Splunk standpoint. A big issue I had as a CIO and user of Splunk and other tools is that I had no idea if there were rogue Raspberry Pis on the network, or if a new box was installed and whether Splunk was installed on it or not. So now you can quickly start to correlate what hosts we saw and how that reconciles with what you're logging. Finally, or second-to-last use case here on the Splunk integration side: for every single problem we've found, we give
multiple options for how to fix it. This becomes a great way to prioritize which fix actions to automate in your SOAR platform, and what we want to get to eventually is being able to automatically trigger SOAR actions to fix well-known problems, like automatically invalidating poor passwords in our credentials findings, amongst a whole bunch of other things we could go off and do. And then finally, if there is a well-known kill chain or attack path, one of the things I really wish I could have done when I was a Splunk customer was take this type of kill chain, one that actually shows a path to domain admin that I'm sincerely worried about, and use it as a glass table over which I could start to layer possible indicators of compromise. Now you've got a great starting point for glass tables and IOCs for actual kill chains that we know are exploitable in your environment, and that becomes some super cool integrations that we've got on the roadmap between us and the Splunk security side of the house. So, what I'll leave with, actually, Patrick, before I do that, I'd love to get your comments, and then I'll leave with one last slide on this wartime security mindset, assuming there are no other questions. No, I love it. I mean, I think this glass tables approach to how you visualize these workflows and then use things like SOAR, orchestration, and automation to operationalize them is exactly where we see all of our customers going: getting away from what I think is an over-engineered approach to SOAR, where it has to be super technical-heavy with, you know, Python programmers, and getting more to this visual view of workflow creation that really demystifies the power of automation and also democratizes it, so you don't have to have these programming languages on your resume in order to start really moving the needle on workflow creation, policy enforcement, and ultimately driving automation
coverage across more and more of the workflows that your team is seeing. Yeah, I think that between us being able to visualize the actual kill chain or attack path, and, you know, the SOAR market going towards this no-code, low-code, configurable SOAR versus coded SOAR, that's going to really be a game changer in giving security teams a force multiplier. So what I'll leave you with is this: a peacetime mindset of security is no longer sustainable. We really have to get out of checking the box and then waiting for the bad guys to show up to verify whether security tools are working or not. And the reason we've got to do that quickly is that there are over a thousand companies that withdrew from the Russian economy over the past nine months due to the Ukrainian war. You should expect every one of them to be punished by the Russians for leaving, and punished from a cyber standpoint. This is no longer about financial extortion, that is, ransomware; this is about punishing and destroying companies. And you can punish any one of these companies by going after them directly, or by going after their suppliers and their distributors. Suddenly your attack surface is no longer just your own enterprise; it's how you bring your goods to market and how you get your goods created, because while I may not be able to disrupt your ability to harvest fruit, if I can get those trucks stuck at the border, I can increase spoilage and have the same effect. What we should expect to see is this idea of cyber-enabled economic warfare, where if we issue a sanction, like banning the Russians from traveling, there is a cyber-enabled counterpunch: corrupt and destroy the American Airlines database. That is below the threshold of war; it's not going to trigger the 82nd Airborne to be mobilized, but it's going to achieve the right effect. Ban the sale of luxury goods: disrupt the supply chain and create shortages. Ban Russian oil
and gas: attack refineries to cause a 10x spike in gas prices three days before the election. This is the future, and therefore I think what we have to do is shift towards a wartime mindset, which is: don't trust your security posture, verify it; see yourself through the eyes of the attacker; build that incident response muscle memory; and drive better collaboration between the red and the blue teams, your suppliers and distributors, and the information sharing organizations you have in place. What was really valuable for me as a Splunk customer was this: when a router crashes, at that moment you don't know if it's due to an IT administration problem or an attacker, and what you want to have are different people asking different questions of the same data. You want an integrated triage process that puts an IT lens on the problem and a security lens on the problem, and from there figures out: is this an IT workflow to execute, or a security incident to execute? And you want to have all of that as an integrated team, an integrated process, and an integrated technology stack. This is something I cared very deeply about as both a Splunk customer and a Splunk CTO, and that I see time and time again across the board. So, Patrick, I'll leave you with the last word, the final three minutes here, and I don't see any open questions, so please take us home. Oh man, and to think we spent hours and hours prepping for this together; that last 40 seconds of your talk track is probably one of the things I'm most passionate about in this industry right now. I think NIST has done some really interesting work here around building cyber resilient organizations, work that has really helped the industry see that incidents can come from adverse conditions, you know, stress and performance taxation in the infrastructure, service, or app layer, and they can come from malicious compromises, insider threats, external threat actors. And the more that we look
at this from the perspective of a broader cyber resilience mission, in a wartime mindset, I think we're going to be much better off. And as you talk about with operationally minded ISACs, information sharing and intelligence sharing become so important in these wartime situations, and, you know, we know not all ISACs are created equal, but we're also seeing a lot more ad hoc information sharing groups popping up. So look, I think you framed it really well. I love the concept of the wartime mindset, and I like the idea of applying a cyber resilience lens: if you add one more layer on top of that bottom-right cake, you know, the IT lens and the security lens roll up to this concept of cyber resilience, and I think NIST has done some great work there for us. Yeah, you're spot on, and that is, I think, going to be the next terrain that you're going to see vendors try to get after, but one that I think Splunk is best positioned to win. Okay, that's a wrap for this special Cube presentation. You heard all about the global expansion of Horizon3.ai's partner program, where their partners have a unique opportunity to take advantage of their NodeZero product: international go-to-market expansion, North America channel partnerships, and overall relationships with companies like Splunk to make things more comprehensive in this disruptive cybersecurity world we live in. I hope you enjoyed this program. All the videos are available on thecube.net, and check out Horizon3.ai for their pen test automation and ultimately the defense system they use for always testing the environment that you're in. A great, innovative product, and I hope you enjoyed the program. Again, I'm John Furrier, host of theCUBE. Thanks for watching.