How The Trade Desk Reports Against Two 320-Node Clusters Packed with Raw Data
Hi everybody, thank you for joining us today for the virtual Vertica BDC 2020. Today's breakout session is entitled "Vertica in Eon Mode at The Trade Desk." My name is Sue LeClair, director of marketing at Vertica, and I'll be your host for this webinar. Joining me is Ron Cormier, senior Vertica database engineer at The Trade Desk. Before we begin, I encourage you to submit questions or comments during the virtual session. You don't have to wait, just type your question or comment in the question box below the slides and click submit. There will be a Q&A session at the end of the presentation, and we'll answer as many questions as we're able to during that time. Any questions that we don't address, we'll do our best to answer offline. Alternatively, you can visit the Vertica forums to post your questions there after the session; our engineering team is planning to join the forums to keep the conversation going. Also, a quick reminder that you can maximize your screen by clicking the double-arrow button in the lower right corner of the slide. And yes, this virtual session is being recorded and will be available to view on demand this week; we'll send you a notification as soon as it's ready. So let's get started. Over to you, Ron.

Thanks, Sue. Before I get started, I'll just mention that my slide template was created before social distancing was a thing, so hopefully some of the images will harken us back to a time when we could actually all be in the same room. With that, before diving into the technology, I just wanted to cover my background real quick, because I think it's relevant to where we're coming from with Vertica Eon at The Trade Desk. I'll start out by pointing out that prior to my time at The Trade Desk, I was a tech consultant at HP, and I traveled the world working with Vertica customers, helping them configure, install, and tune their Vertica databases and get them working properly.
So I've seen the biggest and the smallest implementations and everything in between. Now I'm a principal database engineer at The Trade Desk, and the reason I mention this is to let you know that I'm a practitioner; I'm working with the product every day, or most days. This isn't marketing material, so hopefully the technical details in this presentation are helpful.

I work with Vertica, of course, and it's most relevant to our ETL and reporting stack. What we're doing is taking data in Vertica and running reports for our customers. We're an ad tech company, so I did want to briefly describe what that means and how it affects our implementation. I'm not going to cover all the details of this slide, but basically I want to point out that The Trade Desk is a DSP, a demand-side platform, so we place ads on behalf of our customers: ad agencies and their customers, the advertisers, the brands themselves. The ads get placed onto websites and mobile applications, anywhere digital advertising happens. Publishers are what you'd think of: sites like espn.com, msn.com, and so on. Every time a user goes to one of these sites or one of these digital places, an auction takes place, and what people are bidding on is the privilege of showing an ad, one or more ads, to users. This is really important because it helps fund the internet; ads can be annoying sometimes, but they're actually incredibly helpful in how we get much of our content. And this is happening in real time at very high volumes: on the open internet there are anywhere from seven to thirteen million auctions happening every second. Of those seven to thirteen million auctions per second, The Trade Desk bids on hundreds of thousands per second, and any time we bid, we have an event that ends up in Vertica. That's one of the main drivers of our data volume.
Certainly other events make their way into Vertica as well, but I wanted to give you a sense of the scale of the data and how it's driven by real people in the world.

So let's dig a little more into the workload. We have the three V's in spades, like many people listening: massive volume, velocity, and variety. In terms of data sizes, here are some stats on the raw data sizes that we deal with on a daily basis. We ingest 85 terabytes of raw data per day, and once we get it into Vertica, we do some transformations: we do matching, which is basically joins, and we do some aggregation, GROUP BYs, to reduce the data and clean it up so it's more efficient to consume by our reporting layer. That matching and aggregation produces about ten new terabytes of raw data per day. It all comes from the data that was ingested, but it's new data, so it is reduced quite a bit, but it's still pretty high volume. We then have this aggregated data that we run reports on on behalf of our customers: about 40,000 reports per day. Actually, that's a little bit of an older number; it's probably closer to 50 or 55,000 reports per day at this point. I think it's probably a pretty common use case for Vertica customers. It's maybe a little different in the sense that most of the reports themselves are scheduled batch reports, so it's not a user sitting at a keyboard waiting for the result. Basically we have a workflow where we do the ingest, we do the transform, and then once all the data is available for a day, we run reports on behalf of our customers on that daily data. We send the reports out via email or drop them in a shared location, and they look at the reports at some later point in time.
So, up until Eon, we did all this work on enterprise Vertica. At our peak we had four production enterprise clusters, each of which held two petabytes of raw data, and I'll give you some details on how those enterprise clusters were configured in terms of hardware. But before I do that, I want to talk about the reporting workload specifically. The reporting workload is particularly lumpy, and what I mean by that is there's a bunch of work, a bunch of queries that we need to run in a short period of time after the day's ingest and aggregation is completed, and then the clusters are relatively quiet for the remaining portion of the day. That's not to say they're not doing anything as far as read workload, they certainly are, but it's much less activity after that big spike. What I'm showing here is our reporting queue, and the spike is when all those reports become available to be processed. We can't run the reports until we've done the full ingest and matching and aggregation for the day, so right around 1:00 or 2:00 a.m. UTC every day is when we get this spike.
We affectionately call that spike the UTC hump, but basically it's a huge number of queries that need to be processed as soon as possible. We have service levels that dictate what "as soon as possible" means, but I think the spike illustrates our use case pretty accurately, and as we'll see, it's really well suited for Vertica Eon.

So we had the enterprise clusters that I mentioned earlier, and just to give you some details on what they looked like: they were independent and mirrored, and what that means is all four clusters held the same data. We did this intentionally because we wanted to be able to run our reports anywhere. We've got this big queue, a big number of reports that need to be run. We started with one cluster, found that it couldn't keep up, so we added a second; then the number of reports that we needed to run in that short period of time went up, and so on, so we eventually ended up with four enterprise clusters. Like I said, they were mirrored, they all had the same data; they weren't, however, synchronized, they were independent. Basically we would run the ETL pipeline, the ingest, the matching, and the aggregation, on all the clusters in parallel, so it wasn't as if each cluster proceeded to the next step in sync with the other clusters; they ran independently. It was sort of like each cluster would eventually get consistent. This worked pretty well for us, but it created some imbalances, and there were some cost concerns that we'll dig into. But just to tell you about each of these clusters: they each had 50 nodes, with 72 logical CPU cores, half a terabyte of RAM, a bunch of RAIDed disk drives, and two petabytes of raw data, as I stated before.
So, pretty big, beefy nodes. These were physical nodes that we had in our data centers; we actually leased these nodes, so they sat in our data center provider's data centers. These were what we built our business on, basically, but there were a number of challenges that we ran into as we continued to build our business and add data and add workload.

The first one, which I'm sure many can relate to, is capacity planning. We had to think about the future and try to predict the amount of work that was going to need to be done and how much hardware we were going to need to meet that demand, and that's just generally a hard thing to do. It's very difficult to predict the future, as we can probably all attest to given how much the world has changed even in the last month. It's a very difficult thing to look six, twelve, eighteen months into the future and get it right, and what we tended to do was make our plans, our estimates, very conservative, so we overbought in a lot of cases. Not only that, we had to plan for the peak, that point in time, those number of hours in the early morning, when we had all those reports to run. So we ended up buying a lot of hardware, and we actually sort of overbought at times, and then as the hardware aged, our workload would come to approach matching the capacity. That was one of the big challenges.

The next challenge is that we were running out of disk. We wanted to add data in two dimensions: we wanted to add more columns to our big aggregates, and we wanted to keep our big aggregates for longer periods of time, so both horizontally and vertically we wanted to expand the datasets.
But we were basically running out of disk; there was no more disk, and it's hard to add disk to Vertica in enterprise mode. Not impossible, but certainly hard. And one cannot add disk without adding compute, because in enterprise mode the disk is all local to each of the nodes for most people. You can do node exchanges, with SANs and other external arrays, but there are a number of other challenges with that. So in order to add disk, we had to add compute, and that basically kept us out of balance: we were adding more compute than we needed for the amount of disk. That was the problem. Certainly with physical nodes, getting them ordered, delivered, racked, and cabled, even before we start Vertica, there are lead times there. It's also a long commitment, since like I mentioned, we lease the hardware, so we were committing to these nodes, these physical servers, for two or three years at a time, and I mentioned that can be a hard thing to do, but we wanted to lease to keep our capex down. We wanted to keep our aggregates for a long period of time. We could have done more exotic things to help us with this if we had to in enterprise mode. We could have started to daisy-chain clusters together, and that would have been a non-trivial engineering effort, because we would need to figure out how to shard the data across all the clusters, how to migrate data from one cluster to another, and we would have to think about how to run queries across clusters. If a sharded data set spans two clusters, we would have had to aggregate within each cluster, maybe, and then build something on top to aggregate the data from each of those clusters. Not impossible things, but certainly not easy things. Luckily for us, we started talking to Vertica about separation of compute and storage, and I know other customers were talking to Vertica as well; people had these problems. And so Vertica in Eon Mode came to the rescue.
What I want to do is talk about Eon Mode really briefly for those in the audience who aren't familiar. It's basically Vertica's answer to the separation of compute and storage. It allows one to scale compute and/or storage separately, and there are a number of advantages to doing that. Whereas in the old enterprise days, when you added compute you added storage and vice versa, now we can add one or the other, or both, according to how we want to. Really briefly, here's how it works; this figure was taken directly from the Vertica documentation. It takes advantage of the cloud, in this case Amazon Web Services, and the elasticity of the cloud. Basically you've got EC2 instances, Elastic Compute Cloud servers, that access data in an S3 bucket: three EC2 nodes and a bucket, the blue objects in this diagram. There are a couple of big differences. One, the persistent storage of the data, where the data lives, is no longer on each of the nodes; the persistent storage of the data is in the S3 bucket. What that does is basically solve the first of our big problems, which was that we were running out of disk. S3 has, for all intents and purposes, infinite storage, so we can keep much more data there, and that mostly solved one of our big problems. So the persistent data lives on S3 now. What happens when a query runs is that it runs on one of the three nodes that you see here. We'll talk about the depot in a second, but in a brand new cluster where the hardware has just been spun up, the query will run on those EC2 nodes, but there will be no local data, so those nodes will reach out to S3 and run the query on remote storage.
The nodes are literally reaching out to the communal storage for the data and processing it without using any data on the nodes themselves. That works pretty well; it's not as fast as if the data were local to the nodes, but what Vertica did is build a caching layer on each of the nodes, and that's what the depot represents. The depot is some amount of disk that is local to the EC2 node, and when the query runs on remote storage, on the S3 data, it queues up that data for download to the nodes. The data will then reside in the depot so that subsequent queries can run on local storage instead of remote storage, and that speeds things up quite a bit. So that's the role of the depot: the depot is basically a caching layer, and we'll talk about the details of how we size our depot.

The other thing I want to point out is that since this is the cloud, another problem it helps us solve is the concurrency problem. You can imagine that these three nodes are one sort of cluster, and what we can do is spin up another three nodes and have them point to the same S3 communal storage bucket. Now we've got six nodes pointing to the same data, but we've isolated each set of three nodes so that they act as if they are their own cluster; Vertica calls them subclusters. So we've got two subclusters, each of which has three nodes, and what this has essentially done is double the concurrency, doubled the number of queries that can run at any given time, because we've now got this new chunk of compute which can answer queries. That has given us the ability to add concurrency much faster. And I'll point out that since it's the cloud and there are on-demand pricing models, we can see significant savings, because when a subcluster is not needed we can stop it and pay almost nothing for it.
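The depot described above behaves essentially like a size-bounded least-recently-used cache. As a toy sketch of the idea only (the real depot manages file-level storage containers on disk, not Python objects):

```python
from collections import OrderedDict

class ToyDepot:
    """A size-bounded LRU cache: roughly the policy the depot applies to data files."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.files = OrderedDict()  # filename -> size, least recently used first

    def read(self, name: str, size: int) -> str:
        """Return where the read was served from, caching the file on a miss."""
        if name in self.files:
            self.files.move_to_end(name)    # hit: mark as recently used
            return "depot"
        self.files[name] = size             # miss: download from S3 into the depot
        while sum(self.files.values()) > self.capacity:
            self.files.popitem(last=False)  # evict the least recently used file
        return "s3"
```

The first query against a file pays the S3 round trip; repeat reads are served locally until the file is evicted by newer data.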
That's really helpful, especially for our workload, which as I pointed out before is so lumpy: during those hours of the day when it's relatively quiet, I can go and stop a bunch of subclusters and barely pay for them, and that yields nice cost savings. So that's Eon in a nutshell; obviously the documentation has a lot more information, and I'm happy to field questions later on as well, but I want to talk about how we implemented Eon at The Trade Desk.

I'll start on the left-hand side at the top. What we're representing here are subclusters: there's subcluster 0, our ETL subcluster, and it is our primary subcluster. When you get into the world of Eon, there are primary subclusters and secondary subclusters, and it has to do with quorum. Primary subclusters are the subclusters that we always expect to be up and running, and they contribute to quorum: they decide whether there are enough nodes for the database to start up. This is where we run our ETL workload, the ingest, the matching, and the aggregation parts of the work that I talked about earlier. These nodes are always up and running because our ETL pipeline is always on; we're an internet ad tech company, like I mentioned, so we're constantly running ads, there's always data flowing into the system, and the matching and the aggregation are happening 24/7. So those nodes will always be up and running, and those processes need to be super efficient, and that is reflected in our instance type. Each of our subclusters is sixty-four nodes; we'll talk about how we came to that number, but the instance type for the ETL subcluster, the primary subcluster, is i3.8xlarge. That is one of the instance types that has quite a bit of NVMe storage attached, with 32 cores and 244 gigs of RAM on each node.
I should have put the amount of NVMe on the slide, but I think it's about seven terabytes of NVMe storage per node. What that allows us to do is basically ensure that everything this subcluster does is always in depot, and that makes sure it's always fast.

Now when we get to the secondary subclusters, these are, as mentioned, secondary, so they can stop and start and it won't affect the cluster going up or down; they're sort of independent. We've got four of what we call read subclusters, and they're not read-only by definition; technically any subcluster can ingest and create data within the database, and that'll all get pushed to the S3 bucket. But logically, for us, they're read-only: most of the work they happen to do is read-only, which is nice, because if it's read-only it doesn't need to worry about commits. We let the primary subcluster, the ETL subcluster, worry about committing data, and we don't have to have all the nodes in the database participating in transaction commits. So we've got four read subclusters and one ETL subcluster, for a total of five subclusters, each running sixty-four nodes, and that gives us a 320-node database all told. Not all those nodes are up at the same time, as I mentioned; often, for big chunks of the day, most of the read nodes are down, but they do all spin up during our busy time. For the read subclusters we've got i3.4xlarge, so again the i3 instance family, which has NVMe storage. These nodes have, I think, three and a half terabytes of NVMe per node across two NVMe drives that we RAID-0 together, with 16 cores and 122 gigs of RAM. These are smaller, you'll notice, but it works out well for us, because the read workload is typically dealing with much smaller data sets than the ingest or the aggregation workload.
So we can run these workloads on smaller instances, save a little bit of money, and get more granularity in how many subclusters are stopped and started at any given time. The NVMe doesn't persist; the data on it isn't persisted when you stop and start. That's an important detail, but it's okay, because the depot does a pretty good job with its algorithm: it pulls in data that's recently used, and the victim that gets pushed out is the data that's least recently used, data that was used a long time ago, so it's probably not going to be used again.

So we've got five subclusters, and we've actually got two of these setups: a 320-node cluster in US East and a 320-node cluster in US West, so we've got high availability and region diversity. They're peers; like I talked about before, they're independent. They each run 128 shards. What shards are is similar to segmentation: you take the data set and divide it into chunks, and each subcluster can see the data set in its entirety. Each subcluster is dealing with 128 shards; we chose 128 because it gives us even distribution of the data on 64-node subclusters, 128 divides evenly by 64, so there's no data skew. And 128 also sort of future-proofs it: in case we want to double the size of any of the subclusters, we can double the number of nodes and still have no skew, the data would still be distributed evenly.

For the disk, we've got a couple of RAID arrays. We've got an EBS-based array that the catalog uses, the catalog storage location; I think we take four EBS volumes and RAID-0 them together to come up with a 128-gigabyte drive. We wanted EBS for the catalog because we can stop and start nodes and that data will persist; it will come back when the node comes up, so we don't have to run a bunch of configuration when the node starts up.
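The arithmetic behind that shard-count choice is easy to check; a small sketch using only the numbers from the talk:

```python
def skew_free(shard_count: int, node_count: int) -> bool:
    """Data distributes evenly when every node owns the same number of shards."""
    return shard_count % node_count == 0

# 128 shards on a 64-node subcluster: each node owns exactly 2 shards, no skew.
assert skew_free(128, 64)
print(128 // 64, "shards per node")

# Future-proofing: doubling the subcluster to 128 nodes still leaves no skew,
# with exactly 1 shard per node.
assert skew_free(128, 128)

# A count that doesn't divide evenly (say, 48 nodes) would leave some nodes
# with more shards than others, i.e. data skew.
assert not skew_free(128, 48)
```

This is why "shard count = 2 × node count, both powers of two" works as a rule of thumb: it leaves exactly one doubling of the subcluster in reserve.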
Basically the node starts, it automatically joins the cluster, and very shortly thereafter it starts processing work. So that's the catalog on EBS. Now the NVMe is another RAID-0; as I mentioned, this data is ephemeral, so when we stop and start, it goes away. Basically we take 512 gigabytes of the NVMe and give it to the data and temp storage location, and then we take whatever is remaining and give it to the depot. Since the ETL and the read subclusters are different instance types, the depot is sized differently, but otherwise it's the same across subclusters.

It all adds up. What we have now is: we stopped purging data for some of our big aggregates, we added a bunch more columns, and at this point we have 8 petabytes of raw data in each Eon cluster, which is about four times what we could hold in our enterprise clusters, and we can continue to add to this. Maybe we need to add compute, maybe we don't, but the amount of data that can be held there can obviously grow much more. We've also built an auto-scaling tool, a service that basically monitors the queue that I showed you earlier, monitors for those spikes, and when it sees those spikes it goes and starts up instances in any of the subclusters. That's how we have compute capacity match the demand.

I'll also point out that we actually have one subcluster of specialized nodes; it's not strictly a customer-reports subcluster. We have this tool called Planner, which basically optimizes ad campaigns for our customers. We built it, it runs on Vertica, uses data in Vertica, runs Vertica queries, and it was wildly successful, so we wanted to have some dedicated compute for it. With Eon it was really easy to basically spin up a new subcluster and say: here you go, Planner team, do what you want, you can completely maximize the resources on these nodes and it won't affect any of the other operations that we're doing, the ingest, the matching, the aggregation, or the reports.
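Our auto-scaling service is internal, but the control loop it implements is simple. A minimal sketch, where the queue source, the thresholds, and the start/stop callables are all hypothetical stand-ins for the real monitoring and AWS/Vertica plumbing:

```python
# Hypothetical thresholds: start compute when the report queue backs up,
# stop it again once the queue has drained.
START_THRESHOLD = 500   # queued reports that trigger scale-up
STOP_THRESHOLD = 10     # queue depth considered "drained"

def autoscale(queue_depth, stopped, running, start_subcluster, stop_subcluster):
    """One control-loop iteration: match running subclusters to queue demand.

    queue_depth is a callable returning the current report-queue depth;
    stopped/running are lists of subcluster names; start/stop_subcluster are
    callables that do the actual work (e.g. wake or stop the EC2 nodes).
    """
    depth = queue_depth()
    if depth > START_THRESHOLD and stopped:
        # The UTC hump has arrived: bring another read subcluster online.
        name = stopped.pop()
        start_subcluster(name)
        running.append(name)
    elif depth < STOP_THRESHOLD and running:
        # Quiet period: stop a subcluster so we pay almost nothing for it.
        name = running.pop()
        stop_subcluster(name)
        stopped.append(name)
```

Run once a minute (or on queue events), this is enough to ride the daily spike up and back down; the real service layers on service-level deadlines and per-subcluster policy.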
It gave us a great deal of flexibility and agility, which is super helpful.

So the question is: has it been worth it? And for us the answer has been a resounding yes. We're doing things that we never could have done at reasonable cost before; we've got more data, we've got specialized nodes, and we're much more agile. So how do you quantify that? I'd love to be able to tell you that we're running 2x the number of reports or things are finishing 8x faster, but it's not quite as simple and straightforward as you might hope. We still have enterprise clusters; of the four that we had at peak, we've still got two of those around, and we've got our two Eon clusters, but they're running different workloads and they're comprised of entirely different hardware. The number of nodes is different, 64 per subcluster versus 50, which is going to have different performance. The workload itself is different: the aggregation is aggregating more columns on Eon, because that's where we have disk available, and the queries themselves are different; we're running more data-intensive queries on Eon, because that's where the data is available. So in a sense Eon is doing the heavy lifting for our workload. In terms of query performance, this is still a little anecdotal, but when the data is in the depot, performance matches that of the enterprise cluster quite closely. When the data is not in the depot and Vertica has to go out to S3 to get the data, performance degrades, as you might expect, though it depends on the query.
Things like counts are really fast, but if you need lots of the data, lots of columns, that can run slower: not orders of magnitude slower, but certainly a multiple of the amount of time.

In terms of cost, I'll give a little more quantification here. What I tried to do is multiply it out: if I wanted to run the entire workload on enterprise, and the entire workload on Eon, with all the data we have today, all the queries, everything, to try to get it apples to apples. For enterprise, we estimate we'd need approximately 18,000 CPU cores all together. That's a big number, and it doesn't even cover all the non-trivial engineering work that would be required, the things I referenced earlier like sharding the data among multiple clusters and migrating data from one cluster to another, the daisy-chain type stuff. So that's one data point. Now for Eon, to run the entire workload, we estimate we'd need about 20,480 CPU cores, so more CPU cores than enterprise; however, about half of those, roughly 10,000 CPU cores, would only run for about six hours per day, and with the on-demand pricing and elasticity of the cloud, that is a huge advantage. So we are definitely moving as fast as we can to being all Eon. We have time left on our contracts for the enterprise clusters, so we're not able to get rid of them quite yet, but Eon is certainly the way of the future for us. I'll also point out that we've found Eon to be the most efficient MPP database on the market, and what that refers to is: for a given dollar of spend, we get the most out of Vertica compared to other cloud MPP database platforms. So our business is really happy with what we've been able to deliver with Eon.
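Those core counts are easier to compare as daily core-hours. A rough back-of-the-envelope using only the numbers above, taking "about half" of the Eon cores to mean 10,240, and ignoring the price difference between reserved and on-demand instances (which would make Eon look even better):

```python
# Enterprise: all 18,000 cores must exist (and be paid for) around the clock.
enterprise_core_hours = 18_000 * 24          # 432,000 core-hours/day

# Eon: 20,480 cores total, but roughly half run only ~6 hours/day
# thanks to stoppable on-demand subclusters.
always_on = 10_240 * 24                      # 245,760 core-hours/day
peak_only = 10_240 * 6                       #  61,440 core-hours/day
eon_core_hours = always_on + peak_only       # 307,200 core-hours/day

savings = 1 - eon_core_hours / enterprise_core_hours
print(f"Eon needs {savings:.0%} fewer core-hours per day")
```

So despite needing more cores on paper, Eon consumes roughly 29% fewer core-hours per day, before even counting the reserved-vs-on-demand price gap.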
Eon has also given us the ability to take on a new use case, which is probably pretty familiar to folks on the call: it's UI-based. We'll have a website that our customers can log into, and on that website they'll be able to run reports and queries, and have those run directly on a separate, dedicated Eon cluster. That's much more latency-sensitive and concurrency-sensitive. The workload that I've described up until this point has been pretty steady throughout the day: we get our spike, and then it goes back to normal for the rest of the day. This new workload will potentially be more variable; we don't know exactly when our engineers are going to deliver some huge feature that makes a lot of people want to log into the website and check how their campaigns are doing. But Eon really helps us with this, because we can add capacity so easily: we can add compute and scale up and down as needed, and it allows us to match the concurrency. With concurrency that's much more variable, we don't need a big, long lead time, so we're really excited about this.

So, last slide here. I just want to leave you with some things to think about if you're about to embark on, or are getting started with, your journey with Vertica Eon. One of the things you'll have to think about is the node count and the shard count; they're kind of tightly coupled. The node count we determined by spinning up some instances in a single subcluster and getting performance numbers until we found acceptable performance, considering current and future workload for the queries we had when we started, and so we went with 64. We certainly wanted to increase over the 50 we'd had, but we didn't want them to be too big, because of course it costs money, and we like to do things in powers of two, so 64 nodes. Then the shard count: shards, again, are a new type of segmentation on the data.
type of segmentation on the data. Starting out, we went with 128, and the reason is so that we could have no skew (every node would process the same amount of data) and so that we could future-proof it. That's probably a nice general recommendation: double the shard count relative to the node count. The instance type, and how much Depot space, are certainly things you're going to consider. Like I was talking about, we went with the i3.4xlarge and i3.8xlarge because they offer good Depot stores, which gives us really consistent, good performance when everything is in Depot. I think we're going to use the r5 or r4 instance types for our UI cluster; the data there is smaller, so there's much less emphasis on Depot, and we don't need the NVMe stores. You're going to want to have a mix of reserved and on-demand instances if you're a 24/7 shop like we are. Our ETL subclusters are reserved instances, because we know we're going to run those 24 hours a day, 365 days a year, so there's no advantage to having them be on-demand; on-demand costs more than reserved, so we get cost savings by figuring out what we're going to run and keep running. It's the read subclusters that are, for the most part, on-demand, though one of our read subclusters is actually on 24/7, because we keep it up for ad-hoc queries, analyst queries that we don't know exactly when they're going to hit, and the analysts want to be able to continue working whenever they want to. In terms of the initial data load, the initial data ingest, what we had to do, and how it still works today, is you've got to basically load all your data from scratch. There isn't great tooling just yet for populating or moving data from Enterprise to Eon. So what we did is export all the data in our Enterprise cluster into Parquet files and put those out on S3, and then we ingested them into our
first Eon cluster. It's kind of a pain, and we scripted out a bunch of stuff, obviously, but it worked, and the good news is that once you do that, the second Eon cluster is just a bucket copy, and there are tools that can help with that. You're going to want to manage your fetches and evictions. This is the data that's in the cache, the data that's in the Depot, that I'm referring to here. Like I talked about, we have our ETL cluster, which has the most recent data that's just been ingested and the most recent data that's been aggregated, so really recent data. We wouldn't want anybody logging into that ETL cluster and running queries on big aggregates that go back, say, three years, because that would invalidate the cache: the Depot would start pulling in that historical data, accessing the historical data and evicting the recent data, which would slow down the ETL pipelines. We didn't want that, so we needed to make sure that users, whether they're service accounts or human users, are connecting to the right Eon cluster, and we just managed that with IPs and target groups to point users at the right place. It was definitely something to think about. Lastly, if you're like us and you're going to want to stop and start nodes, you're going to have to have a service that does that for you. We built a very simple tool that basically monitors the queue and stops and starts subclusters accordingly. We're hoping we can work with Vertica to have it be a little bit more driven by the cloud configuration itself; for us it's all Amazon, and we'd love it if we could have it scale with the AWS capabilities. Two things to watch out for when you're working with Eon. The first is system table queries on storage-layer metadata, and the thing to be careful of is that the storage-layer metadata is replicated: there's a copy for each of the
subclusters that are out there. We have the ETL subcluster and our read subclusters, so for each of the five subclusters there is a copy of all the data in the storage_containers system table and all the data in the partitions system table. So when you want to use these system tables for analyzing how much data you have, or any other analysis, make sure that you filter your query on the node name. For us, the node name is less than or equal to node 64, because each of our subclusters has 64 nodes, so we limit the query to the 64-node ETL cluster. Otherwise, without that filter, we would get 5x the values for counts and that sort of stuff. And lastly, there's a problem that we're working on and thinking about: DC (Data Collector) table data for subclusters that are stopped. When the instances are stopped, literally the operating system is down and there's no way to access it, so it takes the DC table data with it. After my subclusters scale up in the morning and then scale down, I can't run DC table queries on what performed well and where, because that data is local to those nodes. So, something to be aware of, and we're working on a solution, an implementation to try to pull that data out of all the nodes, those read-only nodes that stop and start all the time, and bring it into some other kind of repository, perhaps another Vertica cluster, so that we can run analysis and monitoring even when those nodes are down. That's it. Thanks for taking the time to look at my presentation, I really appreciate it. Thank you, Ron, that was a tremendous amount of information; thank you for sharing that with everyone. We have some questions that have come in that I would like to present to you, Ron, if you have a couple of minutes. Let's jump right in with the first one: loading 85 terabytes of data per day is a pretty significant amount. What format does that data come in, and what does that load process
look like? Yeah, a great question. The format is tab-separated files that are gzip-compressed, and the reason for that is basically historical: we don't have many tabs in our data, and this is how the data gets compressed and moved off of our bidders, the things that generate most of this data. So it's TSV, gzip-compressed. As for how we load it, I would say we actually have kind of a Cadillac loader, from a couple of different perspectives. One is that we've got this homegrown orchestration layer managing the logs, the data that gets loaded into Vertica. We accumulate data, then we take some files and push them out, distributing them among the ETL nodes in the cluster, so we're literally pushing the files to the nodes; we then run a COPY statement to ingest the data into the database, and then we remove the files from the nodes themselves. So it's a little bit of extra data movement, which we may think about changing in the future as we move more and more to Eon. The really nice thing about this, especially for the enterprise clusters, is that the COPY statements are really fast. COPY statements use memory like any other query, and the performance of a COPY statement is really sensitive to the amount of available memory. Since the data is local to the nodes, literally in the data directory that I referenced earlier, it can access that data from the NVMe stores, the COPY statement runs very fast, and then that memory is available to do something else. So we pay a little bit of cost in terms of latency, in terms of downloading the data to the nodes. As we move more and more to Eon, we might start ingesting directly from S3 instead of copying to the nodes first; we'll see about that. But that's how we load the data. Interesting, thanks, Ron. Another question: what was the biggest challenge you found when
migrating from on-prem to AWS? Yeah, a couple of things come to mind. The first was the backfill, the data load. It was kind of a pain, like I referenced on that last slide, only because we didn't have tools built to do this, so we had to script some stuff out. It wasn't overly complex, but it's just a lot of data to move (we were starting with two petabytes), so making sure that there's no missed data, no gaps, when moving it from the enterprise cluster. What we did is export it to the local disk on the enterprise clusters, push it to S3, and then ingest it into Eon, again as Parquet files. So it's a lot of data to move around, and you have to take an outage at some point, stop loading data while you do that final catch-up phase. That was a challenge, a sort of one-time challenge. The other thing, not something we're dealing with now, but it was a challenge, is that Eon is still a relatively new product for Vertica. One of the big advantages of Eon is that it allows us to stop and start nodes, and recently Vertica has gotten quite good at stopping and starting nodes. For a while there, it took a really long time to start a node back up, and it could be invasive, but we worked with the engineering team, with Yan Zi and others, to really reduce that, and now it's not really an issue that we think too much about. Hey, thanks. Towards the end of the presentation you said that you've got 128 shards, but your subclusters are usually around 64 nodes, and you talked about a ratio of two to one. Why is that, and if you were to do it again, would you use 128 shards? Ah, good question. The reason why is because we wanted to future-proof ourselves. Basically, we wanted to make sure that the number of shards was evenly divisible by the number of nodes, and
I could have done that with 64, I could have done that with 128, or any other multiple of 64, but we went with 128 to try to protect ourselves in the future, so that if we wanted to double the number of nodes in the ETL Eon cluster specifically, we could do that, doubling from 64 to 128, and then each node would have just one shard to deal with, so no skew. On the second part of the question: if I had to do it over again, I think I would have stuck with 128. We've been running this cluster for more than 18 months now, and we haven't needed to increase the number of nodes, so in that sense it's been a little bit of extra overhead having more shards, but it gives us the peace of mind that we can easily double and not have to worry about it. So I think two to one is a nice place to start, and you may even consider three to one or four to one if you're expecting really rapid growth, if you're just getting started with Eon and your business and your data sets are small now but you expect them to grow significantly. Great, thank you, Ron. That's all the questions that we have for today. If you do have others, please feel free to send them in and we will get back to you; we'll respond directly via email. And again, our engineers will be available on the Vertica forums, where you can continue the discussion with them. I want to thank Ron for the great presentation, and also the audience for your participation and questions. Please note that a replay of today's event and a copy of the slides will be available on demand shortly, and of course we invite you to share this information with your colleagues as well. Again, thank you. This concludes this webinar; have a great day.
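As a footnote to the session, the Enterprise-versus-Eon sizing that Ron walks through (approximately 18,000 always-on Enterprise cores versus about 20,480 Eon cores, roughly half of which run only around six hours per day) can be made concrete with a little core-hour arithmetic. The even 50/50 split of the Eon cores and the core-hour framing below are assumptions layered on his rough numbers, not figures from the talk:

```python
# Back-of-the-envelope core-hour comparison using the rough figures from the
# talk. The 50/50 split of Eon cores between an always-on pool and a ~6h/day
# pool is an assumption based on "about half of those ... six hours per day".

HOURS_PER_DAY = 24

def daily_core_hours(always_on_cores, part_time_cores=0, part_time_hours=0):
    """CPU-core-hours consumed per day by an always-on pool plus a part-time pool."""
    return always_on_cores * HOURS_PER_DAY + part_time_cores * part_time_hours

enterprise = daily_core_hours(18_000)                    # all cores up 24/7
eon = daily_core_hours(10_240,                           # half up 24/7,
                       part_time_cores=10_240,           # half up ~6h/day
                       part_time_hours=6)

print(f"Enterprise: {enterprise:,} core-hours/day")      # 432,000
print(f"Eon:        {eon:,} core-hours/day")             # 307,200
print(f"Eon needs {eon / enterprise:.0%} of the Enterprise core-hours")
```

On these assumptions, Eon consumes roughly 71% of the Enterprise core-hours despite having more total cores, which is the elasticity advantage Ron describes.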
Monica Ene-Pietrosanu, Intel Corporation | Node Summit 2017
>> Hey welcome back everybody, Jeff Frick here with theCUBE. We are in downtown San Francisco at the Mission Bay Convention Center at Node Summit 2017. We've been coming to Node Summit off and on for a number of years, and it's pretty amazing, the growth of this development platform. It really seems to have taken off. There's about 800 or 900 people here; it's kind of the limits of the facility here at Mission Bay. But we're really excited to be here, and it's not surprising to see Intel is here in full force. Our first guest is Monica Ene-Pietrosanu, and she is the Director of Software Engineering for Intel. Welcome. >> Thank you, hello, and thank you very much for inviting me. It's definitely exciting to be here. Node is this dynamic community that grows in one year like others can't, so it's always exciting to be part of one of these events and present the work we are doing for Node. >> So you're on a panel later on, Taking Benchmarking to the Next Level. What is that all about? >> That is part of the work we are doing for Node, and I want to mention here the word stewardship. Intel is a long-time contributor in the open source communities and has assumed a performance leadership role in many of these communities. We are doing the same for Node: we are trying to be a steward for the performance of Node.js. What this means is we are watching to make sure that every check-in that happens doesn't impact performance. We are also optimizing Node so it gets the best out of the hardware; Node runs best on the newest hardware that we have. And also, we are right now developing new measures, new benchmarks, which better reflect the reality of the data center use cases: the way Node is getting used in the cloud, the way Node is getting used in the data center. There are very few ways to measure that today.
And with this fast development of the ecosystem, my team has also taken on this role of working with the industry partners and coming up with realistic measures for the performance. >> Right, so are these new benchmarks that you're defining around the capabilities of Node, or are you using old benchmarks? How are you kind of addressing that challenge? >> We started by running what was available, and most of the benchmarks were quite, let's say, isolated. They were focused on a single node, one operation; not realistic in terms of how the measurements should be done for the data center. Especially since in the data center everything is evolving, nothing is just running on one single computer. Everything is impacted by network latencies, we have a significant number of servers out there, and we have multiple software components interacting, so it's way more complex. And then you have containers coming into the picture, and everything makes it harder and harder to evaluate from the performance perspective. I think Node is doing a pretty good job from the performance perspective, but who's watching that it stays the same? I think performance is one of those things that you value when you don't have it, right? Otherwise you just take it for granted, like it's there. So, my team at Intel is focused on top-tier scripting languages. We are part of this larger software organization called the Software and Services Group, and we are right now optimizing and driving the performance for Python, Node.js, PHP/HHVM, and some of the other top-tier languages used in the data centers. Node is actually our interesting story in terms of evolution, because we've seen an extraordinary growth there as well. It's probably the one that's doubled for the past three years: the community has doubled, everything has doubled for Node, right? Even the number of commits, though it depends on which statistics you look-- >> They're all up and to the right, very steep.
>> Yeah, so it's very fast progress, which we need to keep pace with. And one thing that is important for us is to make sure that we expose the best of our hardware to the software. With Node, that takes an interesting approach, because Node is what we call CPU front-end bound. It has a large footprint; it's one of the largest-footprint applications that we've seen. And for this we want to make sure that the newest CPUs we bring to market are able to handle it. >> I was just going to say, they had Trevor Livingston from HomeAway kick things off today. We're talking about the growth: he said a year ago they had one Node.js project, and this is a big site that competes with, like, Airbnb; it's now owned by Expedia. Now, he said, they have 15 projects in production, 22 almost in production, and 75 other internal projects. In one year, from one. So that shows pretty amazing growth and the power of the platform. And from Intel's point of view, you guys are all in on cloud, you're all in on data centers; you've all seen the ads. So you guys are really aggressively taking on the optimization for the unique challenges and special environment that is cloud, which is computing everywhere, computing nowhere. But at the end of the day, it's got to sit on somebody's servers, and there's got to be a CPU in the background. So you look at all these different languages: why do you think Node has gone so crazy? >> I think there are several reasons. My background is as a C++ developer, coming from security, so coming into the Node space, one thing amazed me: only 2% of the code is yours when you write an application. So that is like-- >> Jeff: 2%?
So it enables you as the developer to launch an application in a matter of days, instead of months or a year. So time to market is an unbeatable proposition, and I think that's what drives this space: when you need to launch new applications faster and faster, and upgrade. For us, that's also an interesting challenge, because our roadmaps are not days, right? They're years. So what we want to make sure is that we feed the developments we are seeing in this space back into the CPU roadmap. I have several principal engineers on my team who are working with the CPU architects to make sure that we are continuously providing this information back. One thing I wanted to mention is, as you probably know, since you've been talking to other Intel people, we've recently launched the latest-generation server, Skylake, and on this latest generation, for the Node workloads we've been optimizing and measuring, we see a 1.5x performance improvement from the prior generation. So this is a fantastic boost, and it doesn't happen only from hardware; it happens from a combination of hardware and software. And we are continuing to work with the CPU architects to make sure that the future generation also keeps pace with the developments. >> It's interesting, kind of the three horsemen of computing, if you will, right? There's compute, there's store, and there's IO. And it's funny that they brought up Ryan Dahl; we interviewed him back at Node.js, I think back in 2011? Still one of our most popular segments on theCUBE. We do thousands of interviews a year, and he's still one of the most popular. But to really rethink the IO problem, in this asynchronous form, seems to be just another real breakthrough that opens up all types of capacity in compute and store, when you don't have to sit and wait.
So that must be another thing that you guys have addressed, coming from the hardware and the software perspective? >> You are right on the spot, because Node, compared to other scripting languages, brings more into the picture: the whole platform. So it's not only the CPU; it's also networking, it's also storage. It makes the entire platform shine if it's optimized to the right capability, and we've been investing a lot in this. All our work is made available as open source, and all our contributions are upstreamed back into the mainstream. We also started an effort to work with the industry on developing these new workloads. So last year at Node Interactive, we launched one new workload benchmark for Node, which we called Node-DC, with its first use case, an employee information system, simulating what a large distributed data center application would be doing. This year, now at Node Summit, we will be presenting the updated version of that, version 1.0 this time; it was version 0.9 last time. We added support for containers, and we included several capabilities to be able to run, in a configurable manner, in as many configurations as needed. And we are also contributing this back: we submitted it to the Node Foundation, so it becomes an official benchmark for the Node Foundation, which means every night, after the build system runs, it will be run as part of the regressions, to make sure that the performance doesn't degrade. So that's part of our work, and it continues an effort we started with what we call the languages performance portal. If you go to languagesperformance.intel.com, we have an entire lab behind that portal in which, every night, we build these top-tier scripting languages, including Python, including Node, including PHP, and we run performance regressions on the latest Intel architecture.
So we are contributing the results back into the open source community, to make sure that the community is aware if any regression happens. And we have a team of engineers who jump on those regressions, root-cause them, and analyze them to figure it out. >> So, Monica, we're almost out of time, but before I let you go: we talked before we got started, and I love Kim Stevenson, I've interviewed her a bunch of times. One of the conversations that we had was about Moore's Law, and that Moore's Law is really an attitude. It's kind of a way to do things, more than hitting the physical limitations on chips, which I think is a silly conversation. You're in the role of constantly optimizing, and making things better, faster, cheaper. As you sit back and look at, kind of, what you've done to date, and looking forward, do you see any slowdown in this ability to continue to tweak, optimize, tweak, optimize, and just get more and more performance out of some of these new technologies? >> I don't see a slowdown. At least from where I sit, on the software side, I'm seeing only acceleration. The hardware brings a 30%, 40% improvement, and we add on top of that the software optimizations, which bring 10%, 20% improvements as well. So that is continuously going on, and I'm not seeing it slowing. What I am seeing is more of a need for customization. So that's where, when we design the workloads, we need to make them customizable, because there are different use cases across the data center customers. They are used differently, and we want to make sure that we reflect the reality, how they're used in the real world. That's how our customers and partners can also leverage them, to measure something that's meaningful for them. So in terms of speed, we want to make sure that we fully utilize our CPU, and we grow to more and more cores and increase frequency. We also grow to more capabilities.
And our focus is also to make the entire platform shine. When we talk about platform, we talk about networking, we talk about non-volatile memory, we talk about storage as well as CPU. >> So Gordon's safe. You're safe, Gordon Moore; your law's still solid. Monica, thanks for taking a few minutes out of your day, and good luck on your panel later this afternoon. >> Thank you very much for having me here. It was a pleasure. >> Absolutely, all right, Jeff Frick checking in from Node Summit 2017 in San Francisco. We'll be right back after this short break. Thanks for watching. (upbeat music)
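The nightly cadence Monica describes (build the runtimes, run the workloads, and make sure nothing degrades) comes down to a threshold comparison against the previous run. Here is a minimal sketch of that idea; the benchmark names, the scores, and the 5% tolerance are illustrative assumptions, not Intel's actual tooling:

```python
# Minimal sketch of a nightly performance-regression gate: compare tonight's
# benchmark scores (higher = better) against a baseline and flag any score
# that dropped by more than a tolerance. Names, numbers, and the 5% tolerance
# are illustrative only.

def find_regressions(baseline, tonight, tolerance=0.05):
    """Return {benchmark: fractional_drop} for scores that fell below
    their baseline by more than `tolerance`."""
    regressions = {}
    for name, base in baseline.items():
        score = tonight.get(name)
        if score is None:
            continue  # benchmark did not run tonight
        drop = (base - score) / base
        if drop > tolerance:
            regressions[name] = drop
    return regressions

baseline = {"node-dc-eis": 1000.0, "acmeair": 480.0, "startup": 92.0}
tonight = {"node-dc-eis": 940.0, "acmeair": 485.0, "startup": 91.0}

flagged = find_regressions(baseline, tonight)
print(flagged)  # only node-dc-eis is flagged: a 6% drop, worth root-causing
```

The same shape extends naturally to per-architecture baselines or to alerting hooks; the point is simply that "team jumps on regressions" implies an automated nightly diff like this one.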
Nick O'Leary, IBM | Node Summit 2017
>> Hey, welcome back, everybody. Jeff Frick here with theCUBE. We're at Node Summit 2017 in downtown San Francisco at the Mission Bay Convention Center. About 800 hardcore developers talkin' about Node and really the crazy growth and acceleration in this community as well as the applications. We're excited to have our next quest. He's Nick O'Leary, Developer Advocate from IBM for Watson IoT, and you're workin' on somethin' kind of cool called Node-REDS. First off, welcome. >> Thank you, thank you very much for havin' me. >> Absolutely, so what is Node-RED? >> So, Node-RED is an open source project we started working on about four years ago now in the Emerging Technologies group in the UK parts of IBM, and it's a Node.js application that gives you a visual programming tool for Internet of Things-type applications. So when you run it, you point your web browser at it, and it gives you this visual workspace to start dragging in nodes into your canvas that represent some sort of functionality, like connect to Twitter and get some tweets or save something to a database or read some sensor data, whatever it might be, and you start drawing wires between those nodes to express how you want your application to flow, how you want data to flow through your application. So it's quite a lightweight tool and really accessible to a wide range of developers whether sort of seasoned, experienced Node developers or your kids just learning how to program because it hides complexity. And, yeah, it's Node.js-based, so it runs down on a Raspberry Pi, it runs up in the cloud like IBM Bluemix, wherever you want to run it. So really flexible developer platform. >> Pretty interesting 'cause we just had Monica on from Intel, and she was talking about one of the interesting things in this development world of Node.js is so much of the code was written by somebody else. 
I think she said in a lot of projects the actual original code may be 2%, because you're using all this other stuff, libraries that have already been created. And it sounds like you're really kind of leveraging that infrastructure to be able to do something like this. >> Absolutely. So, one of the key things we enabled very early on, 'cause we recognized the power of our tool is those nodes in our palette that you drag on, is that we built the system so that people could write their own nodes and extend the palette, and we used the same node packaging as the standard npm ecosystem. And as of a couple weeks ago, we have over a thousand third-party nodes people have written, so there's probably already a module for most hardware devices, online APIs, databases, whatever you want. People are creating and extending the platform in all sorts of ways, just building on top of that incredible ecosystem that Node.js has. >> And then how does that tie back to Watson? You said you're involved in Watson, and Watson isn't something people necessarily think of as a simple interface; it's not a simple application. So what's the tie between Watson and Node.js and Node-RED? >> So, Node-RED is a development tool. I'd say it all hinges on those nodes and what they connect to, so we have got nodes for the Watson IoT platform, so that's great for getting, if you're running Node-RED on a Raspberry Pi, connected up to our IoT platform and to applications in the Bluemix space. But we also have nodes for the Watson cognitive services, like the machine learning things, visual recognition, text to speech; all of those services we have nodes for.
So, again, it allows people to start playing with the rich capabilities of the Watson platform without having to dive straight into lines of code, and you can start being productive and create real, meaningful solutions without having to understand Node.js or Java or whatever language you would normally write to access low-level APIs. >> And can the visual tool connect to things that are not necessarily Node specific? >> So, anything that provides some sort of API. If it's got a programmatic API, then it's easier to do with Node, 'cause we are in a Node ecosystem. But we've got established patterns for talking to other languages, and things often provide a REST API over HTTP, or MQTT, or many other protocols, and we have all of that support built straight into the platform. >> Right, and so what was the motivation to build this, just to have an easier development interface? >> Yeah, it was twofold really. One was, in Emerging Technologies where I was, we do proofs of concept for clients that we have to turn around really quickly, so whereas we're more than capable of writing individual lines of code, having a tool that lets us experiment much quicker and solve real client problems much quicker was of great value to us. But then we also saw the advantage for developers who don't understand individual lines of code, for educational purposes, whatever it might be. Those were great motivators in the various communities we're involved with, in IoT, home hobbyists, all that sort of space as well; it's found a really incredible user community across the board. >> And when it started, was it designed to be an open source project, or did that realization kind of come along the way? >> I think on day one it wasn't the first thing in mind. You know, we were just experimenting with technology, which is kind of how we operated.
But we very quickly got to the point where we realized we didn't have the time and resource to write all the nodes that could be written, and there was a much broader audience than just us doing our day job that this tool could tap into. So, maybe not on day one, but maybe a month in we thought this has to be open source. It was about six months after we started it that we moved to an open source project, and that was September 2013. And then in October last year, IBM contributed the project to be a founding project of the JavaScript Foundation. So whereas it's a project that came from IBM, it's now a project that is independently governed; it's not owned by IBM, it's part of the foundation. So we see a wide range of other companies getting involved, making use of it, contributing back, and it's really good to see that ecosystem build. >> Oh, that's great. So I'm just curious, you said you deal with a lot of customer prototyping. Obviously you're involved in Watson, which is kind of the pointy end of the spear right now with IBM, with the cognitive and the IoT. As you look at the landscape and the stuff you're workin' on over the next, I would never say multiple years 'cause that's way too long, six months, nine months, what are some of your priorities, what are some of the things you're seeing, kind of that customers are doing today that they couldn't do before, that gets you excited to get up out of bed and go to work every day? >> From my perspective, with our focus on Node-RED, which is kind of where my focus is right now, it's really that developer experience. We've gone so far with our really intuitive-to-use tooling, but we recognize there's more to do.
So, how can we enable better collaboration, better basic workflows within our particular tooling? Because there are people using Node-RED quite happily in production today, but it's funny, 'cause we don't have a 1.0 version number; for us, that wasn't interesting, because we are delivering meaningful function. But in the project, we have just published our road map to a 1.0 to really give that firm statement, to people who are unsure about it as a technology, that this is good for production. And we've got a wealth of use cases of companies who are using it today. So that's very much our focus, my focus within Node-RED, and all of it does then tie back to, yes, it's a JS Foundation project, but then with my developer advocate hat on, making sure that the draw from Node-RED into the Watson platform is as seamless and intuitive as possible, because that helps everyone. >> Right, right. Okay, so before I let you go, two things: one begs the question what version are you on, and where can people go to find more information so they can see when that 1.0 lands and obviously contribute? >> So as a Node project, we've stuck to semantic versioning, so we are currently version 0.17. We've done 17 major releases over the last three and a bit years, and that's how we're moving forward. We've got this road map to get to 1.0 in the first quarter of next year. And if you want to find out more, nodered.org is where we're based, or you can find us through links from the JS Foundation as well. >> Alright, well, Nick, thanks for takin' a little bit of your time, and safe travels home at the end of the show. >> Thank you very much. >> Alright, he's Nick O'Leary from IBM. I'm Jeff Frick, you're watchin' theCUBE. Thanks for watchin', see ya next time. (bubbly electronic music)
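For reference, semantic versioning means MAJOR.MINOR.PATCH with defined rules about what each bump may break, so 0.17 is the seventeenth minor release of a pre-1.0 line. A hand-rolled comparator sketch (real projects would typically reach for the `semver` npm package rather than this):

```javascript
// Compare two MAJOR.MINOR.PATCH strings numerically, field by field.
// Returns 1 if a is newer, -1 if b is newer, 0 if equal.
function compareSemver(a, b) {
  const pa = a.split(".").map(Number);
  const pb = b.split(".").map(Number);
  for (let i = 0; i < 3; i++) {
    const diff = (pa[i] || 0) - (pb[i] || 0);
    if (diff !== 0) return Math.sign(diff);
  }
  return 0;
}

console.log(compareSemver("0.17.0", "0.9.1"));  // 1  (0.17 is newer than 0.9)
console.log(compareSemver("0.17.0", "1.0.0"));  // -1 (still pre-1.0)
```

The numeric comparison matters: a plain string compare would wrongly order "0.17" before "0.9", which is exactly the kind of confusion a 17th minor release invites.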
Stephen Fluin, Google | Node Summit 2017
>> Hey, welcome back everybody. Jeff Frick with theCUBE. We're at Node Summit 2017, downtown San Francisco, Mission Bay Conference Center: 800 people, a lot of developers, pretty much all developers, talking about what's going on with Node, the Node community and some tangential things that are involved in Node as well. We're excited to have our next guest on, he's Stephen Fluin, he's a developer advocate for Google. Stephen, welcome. >> Thank you so much for having me. >> Absolutely. First off, just kind of impressions of the show. You said you were here last year; the community's obviously very active, growing, I don't know that they're going to be able to come back to this space for very much longer. >> I know. >> What do you think? >> Probably not. I love how the community's continuing to grow and evolve, right? This technology is moving faster than almost any technology I've seen before. I call it a combinatorial explosion of complexity, because there's always new tools coming out, new ways of thinking, and that's really rich and a great way to have a lot of innovation happening. >> Right, there was a great, one of the early ones this morning, the speaker said they had one Node app a year ago, and now they have 15 in production, 22 almost ready and 75 other internal projects, in one year! >> Yeah, it's definitely crazy. >> So why, I mean there's lots of things as to why Node's successful, but from your perspective, why is it growing so fast? >> I think it's fast because it's the first time that we've had a real extended ecosystem where a lot of developers are coming together, bringing their own perspectives, and it's a very collaborative environment. Everyone's trying to help each other. >> So you just got off stage, you had your own session. >> I did. >> But Angular on the Server. >> Yes. >> Even for the folks that missed it, kind of what was the main theme of your talk?
Sure, sure, so I'm on the Angular team, which is a client-side framework for building applications. We've really been focused a lot on really great web experiences for the client: how do we run code as close as possible to the browser so that you get these very rich, engaging applications? >> Right. >> But one of the things that we've been focused on, and has been one of our design goals since the beginning, is how do we write JavaScript and TypeScript in a way that you can run it on the client or the server? And so just last week we announced that new support has landed in our CLI that makes this process easier, so that you can run your application on the server and then bootstrap a client-side application on top of that. >> Why is that important? >> It's important for a few different reasons. You want to run applications sometimes on the server, first, because there's a lot of computers that are processing the web and browsing the web across the internet, >> Right. >> so there's search engines, there's things like Facebook and Twitter, which are scraping websites looking for metadata, looking for thumbnails and other sorts of content. But then also there's a human aspect, where by rendering things on the server, you can actually have an increased perception of your load times, so things look like they're loading faster, while you can still then, on top of that, deliver a very rich, engaging client-side experience with animations and transitions and all those sorts of things. >> That's interesting. Before we got started you had talked about thinking of the world in terms of the user experience, at the end of the line, versus thinking of it from the server. I thought you were going down kind of the server optimization, power, when you say think about the server, those types of things, but you're talking about a whole different set of reasons to think about the server >> Yeah, absolutely. >> and the way that that connects to the rest of the web.
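The render-on-the-server-then-bootstrap pattern can be sketched without any framework: one rendering function is shared by server and client (the "universal" idea), the server ships fully formed HTML so crawlers and first paint see real content, and the serialized state lets the client take over afterwards. Nothing below is Angular API; it is a framework-free illustration of the idea, with all names invented:

```javascript
// One rendering function used on both server and client ("universal" code).
function renderApp(state) {
  return `<h1>${state.title}</h1><p>${state.tweets.length} tweets</p>`;
}

// Server side: embed the rendered HTML plus the state the client will reuse
// when it bootstraps and re-renders on top of the server output.
function renderPage(state) {
  return [
    "<!doctype html><html><body>",
    `<div id="app">${renderApp(state)}</div>`,
    `<script>window.__STATE__ = ${JSON.stringify(state)};</script>`,
    "</body></html>"
  ].join("\n");
}

const page = renderPage({ title: "Timeline", tweets: ["a", "b", "c"] });
console.log(page.includes("<h1>Timeline</h1>")); // true: content is in the HTML itself
```

Because the content is present in the HTML before any JavaScript runs, a scraper looking for metadata sees it immediately, which is the "servers as consumers" point made below.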
Yes, because there are a lot of consumers of content that we don't necessarily think about when we're building applications. >> Right, right. >> We normally think about the human side of things, but having an application, whether it's a single-page application or whatever, that is also well optimized for servers can be very helpful. >> Yeah, that's pretty... >> Servers as the consumers. >> Servers as the consumers, which I guess makes sense, right? Because Google's indexes and all the other ones are crawling servers >> Absolutely. >> they're not scraping web pages, hopefully, I assume, I assume we're past that stage. Alright, good, so what else is going on, in terms of the Angular community, that you're working on next? >> Sure, sure. I think we're really just focused on continuing to make things easier, smaller and faster to use, so those are kind of the three focus points we've got as we continue to invest in and evolve the platform. So, how do we make it easier for new developers to come into the Angular platform and take advantage of all we have to offer? How do we make smaller bundles so that the experience is faster for users? >> Right, right. >> And then how do we make all these things understandable and digestible for developers? >> It's like the bionic man never went away, right? It's still better, stronger, faster. >> Exactly. >> Alright, Steve, thanks for taking a few minutes out of your day and sharing your story with us. >> Thanks so much for having me. >> Absolutely, Stephen Fluin, from Google. I'm Jeff Frick, you're watching theCUBE. Thanks for watching, we'll catch you next time. Take care.
Michael Dawson, IBM | Node Summit 2017
>> Welcome back everybody, Jeff Frick here with theCUBE. We're at Node Summit 2017 in downtown San Francisco Mission Bay Conference Center, we've been coming here for years. The vibe is growing and exciting and some really interesting use cases in earlier sessions about how fast a Node adoption is happening in some of these enterprises and we're excited to have Michael Dawson. He's a software developer, but more importantly, he's a Node.js community lead for IBM. Michael welcome. >> Alright, thank you. It's great to be here. Nice to be able to talk to you and talk about Node.js and what's going on in the community. >> Just to get your impressions in terms of a temporal perspective, of how this has changed and evolved over time. A lot of talk about the community. I think the facility here only holds like 800 people. I think it's full to the capacity. You know, how has it been growing and kind of what's your perspective from a little bit of a higher point of view. >> It's really great, you know I was at Node Summit three years ago, and other conferences, and it's great to see that over the years how we get more and more people involved. Different constituencies, you know, more people who are deploying Node.js. And even just, you know, day-to-day we see a larger and larger number of collaborators who are getting involved in contributing to make the success of Node really grow and the functionality and all that great stuff. >> Jeff: Right. So what's your function inside of IBM as being kind of a Node advocate for the community I assume outside the walls of IBM, but then also inside the walls of IBM? >> So, I really have sort of the pleasure to be able to work out in the community. That's the large part of my job. But I also work very closely with our internal teams with a focus on Node.js, supporting it for our bundling products. IBM has about 50-60 products that bundle Node.js. 
We also support it through our platforms like Bluemix, and so I work with the team who supports those. You know, if you're running Bluemix and Node, it's the code that we've contributed and built. And our development approach is very much to do that out in the community, so if a particular product needs some sort of feature, we'll go out and work in the community to do that and then pull it back in to use it. So you see we have about 10 collaborators; I'm one of them, and the great thing is that I get to be involved in a lot of the working group efforts like the N-API, the build work group, the LTS work group. And, you know, so my role is really to bridge the community work that we do there to our internal needs and consumers as well. >> Right, so how is the uptake of this technology in the IBM world, within all the different stacks that you guys have? >> I work in the run time technologies team, and we were called the Java Technology Center for a number of years; we're now called the Run Time Technology Center because we see it's a polyglot world, with Node.js being one of the three key run times: Node.js, Java and Swift. [Jeff] - Right. >> And we see that because we see our customers as well as our products really embracing Node and using it in all sorts of places. As mentioned earlier, the Bluemix PaaS is a very heavy user of Node.js in terms of the implementation of the UIs and the backend services, and Node.js is the biggest run time in terms of deployments in that environment as well. >> So it's interesting, we had Monica on earlier from Intel. I think you're going to be on a panel with her later today about benchmarking. >> Yeah. >> And she talked about the fact that there are some unique challenges in trying to figure out how to benchmark these types of applications against kind of the benchmark standards of old.
I wondered if you could share some of your thoughts on this challenge, and for the folks that aren't going to be here watching the panel, what are some of the topics that you want to make sure get exposed in that panel? >> So, you know, I've been working with the benchmarking work group; I actually kicked it off a number of years back. The approach that we're following is we want to document the key use cases for Node, as well as the key attributes of the run time, like starting up fast and being small, the things that have made it successful. [Jeff] - Right. >> As well as the key use cases, like a web front end or backend services for mobile, and then fill in that matrix with important benchmarks. I mean, that's where one of the challenges comes in; other languages have a more mature and established set of benchmarks that different vendors and different people can use. >> Right. >> Whereas the work in the working group is to try and either find benchmarks or encourage people to write those benchmarks, and pull together a more comprehensive suite that we can use, because performance is important to people, and as a community, we really want to make sure that we encourage a rapid pace of change but are able to have a good handle on what's going on on the other side. >> Jeff: Right. >> And having the benchmarks in place should be an enabler, in that if we can easily and quickly find out what impact a change has, positive or negative, that'll help us move things forward; whereas if you're uncertain, it's a lot harder to make the decision as to which way you should go. >> It's funny on benchmarking, right, because on one hand, people can just poo-poo benchmarks, because I can write my benchmark so that it beats your product, and you can write a benchmark the other way. But I think what you've just touched on is really important; it's really for optimization of what you're doing, for improving your own performance over time.
That's really the key to the benchmarks. >> Yeah, absolutely, the focus of the work in the benchmarking work group has been on a framework for regression testing, and letting us make the right decision, not competition. >> Jeff: Right. >> I think that some of the pieces that we develop will perhaps factor into that, but the core focus is to get a good established set, and other individual companies can then maybe use it for other purposes as well. >> Jeff: Right. So Michael, before I let you go, I just wanted to get your perspective. You work for a big company. >> Michael: Yep. >> I don't think it's this as much anymore, but there used to be a lot of open source conferences where people were like, oh, we don't want the big companies coming in, they're going to take it over. I'd like to get your perspective on being kind of that liaison between this really organic open source community with Node and big Blue back behind you, how you navigate that, your experience of the acceptance of IBM into this community, as well as your ability to bring some of that open source ethos back into IBM. >> Right. You know, I found that it's been really great. I love this community, they've been very welcoming. I've had no issues at all, you know, getting involved. I think IBM is respected in the way that we've contributed. We're trying to contribute in a very constructive and collaborative way; nothing that we do, do we really do on our own. If you look at the N-API, we're working with other individuals, people from different companies or just individual contributors, to come to a consensus on what it should be, and to basically move things forward. So yeah, in terms of a big company coming in, you do hear some concerns, but I haven't seen any on-the-ground impediments or problems. You know, it's been very welcoming and it's been a great experience. >> Alright, very good. Alright, well, before I let you go, kind of final thoughts on this event where we are.
It's a great event, I always enjoy being able to come and meet people. A lot of the time you work on GitHub and you know somebody's handle, but there's nothing like making that personal connection, being able to put the face to the name, and I think it affects your ongoing interactions when you're not face-to-face. >> Jeff: Absolutely. >> So it's a really important thing to do, and that's why I like to come to a lot of these events. >> Alright, well Michael Dawson, we'll let you get back to meeting some more developers. Thanks for taking a few minutes out of your day. >> Thank you very much, bye. >> Absolutely, he's Michael Dawson from IBM. I'm Jeff Frick, you're watching theCUBE. Thanks for watching, we'll catch you next time.
James Bellenger, Twitter | Node Summit 2017
>> Hey, welcome back everybody. Jeff Frick with theCUBE. We're at Node Summit 2017 in downtown San Francisco. About 800 people, developers, talking about Node and Node.js, and really the crazy adoption of Node as a development platform. Enterprise adoption. Everything's up and to the right. Some crazy good stories. And we're excited to have somebody coming right off his keynote. It's James Bellenger. He is an engineer at Twitter. James, welcome. >> Thank you, thank you for having me. >> Yeah, absolutely. So you just got off stage and you were talking all about Twitter Lite. What is Twitter Lite? I like Twitter as it is. >> Ah, so Twitter Lite is an optimized, it's a mobile web app. So if you pull up your phone, open up the web browser and go to twitter.com in your smart phone web browser, you get a Twitter experience that we're calling Twitter Lite. >> Okay. >> And it used to be a little bit out of date, but we've been able to update it using a lot of new, exciting web technologies. And so now we have this thing that feels very much like a native app. >> Okay. >> They call them progressive web apps these days. And so we're using that as sort of a way to compete in areas and markets where maybe native apps are less able to compete. Where, you know, people don't want to download a 200 megabyte iOS app. They want something that fits under 600 kilobytes. >> Okay. So you had the Twitter Lite app before. And then this was really a re-deployment? Or am I getting it wrong? >> I think, well, we had a web app at mobile.twitter.com.
You know, how was the experience using a Node tool set versus whatever you had it built on before? >> It's definitely faster in every way. Well, I mean... >> Faster in every way. That's a good thing. >> So, well, let me qualify that and be more specific. It is... >> It's those benchmarking people. We need them back over here. >> It is very fast for how we apply it. It's really fast for development speed. And perhaps the biggest win is that on both areas of our stack, whether it's the part of the application that runs in the browser or the part that runs inside the Twitter data center, we have one language and technology. So when a problem comes up and an engineer needs to go and find the problem and fix it, they don't need to go, "Oh, well that's server code. I don't know how it works. And it's written in this language I don't understand." We really just have one application, and it happens to run in both places. And so it really improves engineering efficiency. >> And you saw that in the development process, QA and the ongoing maintenance. >> Yeah. >> And was it more... So it's more like the guys that were more front end now have access to the back end, and then the other way around. Is that correct? >> Yeah, it's a little bit of both. >> Okay. >> You know, I think before, there were people that really like Scala and only want to work in Scala, or people that really don't like it. So you end up, I think, having engineers kind of get balkanized by their technology choices and their preferred systems. But I think it really tears down a couple of walls. And so it improves engineering efficiency that way. But we found also that some of the tool sets and tool chains that we're using allow engineers to just move faster. >> Right.
There's just sort of less time spent waiting. >> Right. And in terms of, don't share anything you're not supposed to share, but in terms of, you know, frequency of releases and ongoing maintenance and kind of the development of the, I won't say the app, not the app. I guess it is the app. Going forward, you know, how has that been impacted by moving to this platform? >> I think it might be too early to say. >> Okay. >> You know, right now we've got about 12 to 15 engineers and we're ramping up; I think we're looking to finish around 25 engineers by the end of the year. >> Okay. >> So the team and contributor base of the core team that's working on the app is growing. But, you know, otherwise we're releasing every day, we're always pushing code, and we're running experiments a lot. >> Right. I don't know if that answers your question, but. >> So it sounds like it's a little easier, but you're still doing everything you were doing before; now it just feels like it's easier because of this. >> Well, you know, talk to me in a couple months. >> Okay. >> Then maybe we'll have some better answers for you. >> Okay. So the other thing I want, if I talk to you in a couple months, or I talk to you a year from now, just in terms of, as you look down the road, you know, what this opens up. You know, kind of what are some of your priorities now that you've got it out. You said you've been out there for three months. What's kind of next on your roadmap, your horizon? >> So far, I think we've been really encouraged by the success of using this stack for development. So we're looking to kind of double down on that. >> Okay. >> So that means looking at some of the other Twitter web apps. Oh, sorry, Twitter apps in general. The other ways people use Twitter. And to sort of look at how they were built.
And to see, because we're using React, and because we're using technologies that make it very easy to be responsive, to have either a wide layout or a very narrow layout, or to work offline, we have a lot of potential to cannibalize or replace and also update some of the existing apps >> Right. >> that maybe don't get the attention that they need. >> Right. >> So there's some of that. And then I think Twitter Lite as a product, I think we're looking to really expand its reach and make a big push in some of the developing areas. >> Yeah. Because the other thing people don't know, I mean, Twitter's acquired a bunch of companies, you know, over the years. So we've heard some examples earlier today where that's a use case: when you do have the opportunity to maybe redo an acquired application, those are kind of natural opportunities to look to redo them with this method. >> Yeah. Sure. >> All right. Cool. Well, James, thanks for taking a few minutes. >> Thank you. >> Congratulations on the talk. And I'll think of you next time I go to Twitter Lite. >> Yeah. Thank you so much. >> All righty. He's James Bellenger from Twitter. I'm Jeff Frick. You're watching theCUBE from Node Summit 2017. Thanks for watching. (techno music)
Jacob Groundwater, Github | Node Summit 2017
(click) >> Hey, welcome back everybody. Jeff Frick here with theCUBE. We're at Node Summit 2017 in San Francisco at the Mission Bay Convention Center. We've been coming here for years. A really active community, a lot of good mojo, about 800 developers here. About to the limits that the Mission Bay center can hold. Now we're excited to have our next guest. He just came off a panel. It's Jacob Groundwater. He's an engineering manager for Electron at GitHub. Jacob, welcome. >> Thank you, it's great to be here. >> So really interesting panel, Electron. I hadn't heard about Electron before, I was kind of digging in a little bit while the panel was going on, but for the folks that aren't familiar, what is Electron? >> Yeah. Electron, there's a good chance that people who haven't even heard of it might already be using it. >> (chuckles) That's always a good thing. >> Yeah. Electron is a project that was started by GitHub and it's open source and you can use it to build desktop applications but with web technologies. We're leveraging the Google Chrome project to do a lot of that. And Node. Node.js is a big part of it as well. >> So build desktop apps using web technologies. >> Yep. >> And why would somebody want to do that? >> You know, I think at the root of that question, it's always the same answer which is just economics right now. Developers are in demand, software developers are in demand. The web is taking over and the web is becoming the most common skillset that people have. So you get a few benefits by using Electron. You get to distribute to three platforms automatically, you get Linux, Mac, and Windows. Sometimes it's like super easy. Sometimes you do a little bit of building to get that to happen, but it's, you know, you could cut your team size down by maybe two thirds if you do it that way. >> Wow, that's a pretty significant cut. Now, you said 1.0 released last year, and how's the adoption?
>> I actually can't even keep up with the number of applications that are being published on top of Electron. I'm often surprised, I'll go to a company and I'll say, oh I work on Electron at GitHub. And they'll be like, oh we're developing an Electron app, or we're working on an Electron app. So it, it's kind of unreal. Like I've never really been in this situation before where something that I'm working on is being used so much. I think it's out, it's out there, it's in production, it's running in millions of laptops and desktops. >> Yeah. That's great though, 'cause that's the whole promise of software, right? That's why people want to get into software. >> Yeah. >> 'Cause you can actually write something that people use and you can change the world. It could be distributed all over the world with millions of users before you even know it. >> There's this wonderful thought of like writing something once and then it running in millions of places potentially. I just love it. I love it. I think it's super cool. Yeah. So as it's grown what have been some of the main kind of concerns, issues, what are some of the things you're managing within that growth that's not pure technical? >> Yeah. That's a great question. One of the biggest things that I found interesting is when I go on our website and check the analytics, it's almost uniform across the globe. People are interested in it from everywhere. So there's challenges like, right now I had to set up a Core meeting to talk about some of the, like, updates to Electron and that had to be at midnight Pacific time because we had to include the Prague time zone, Tokyo time zone, and Chennai in India. And we're trying to see if we can squeeze in someone from Australia. And just the global distributed nature of Electron, like people around the world are working on this and using it. >> Right. The other part you mentioned in the session, was the management of the community.
And you made an interesting, you know, we go to a lot of conferences, everyone's got their code of conduct published these days which is kind of sad. It's good, but it's kind of sad that people don't have basic manners it seems like anymore. We've covered a lot of open source communities. One that jumps to mind is OpenStack and we've watched that evolve over time and there's kind of community management issues that come up as these things grow. And you brought up kind of an interesting paradigm, if you've got a great technical contributor who's just not a good person for, I don't know, you didn't really define kind of the negative side, but got some issues that may impact the cohesiveness of the community going forward, especially because community is so important in these projects. But if you've got a great technical mind, I never really heard that particular challenge. >> I think it comes up a lot more than people realize. And it's something that I think about a lot. And one thing I want to focus on is, what we're really zeroing in on is bad behavior. >> Bad behavior. That was the word. >> And not a bad person. >> Right, right. >> One of the best ways to, to maybe get around that happening is to set an expectation early about what is acceptable behavior and alert people early when they're doing things that are going to cause harm to the community or cause harm to others. And also frame it in a way where they know, we're trying to keep other people safe, but we're also trying to keep those offenders, give them the space to change. If you choose not to change, that's a whole different story. So I think that by keeping the community strong, we encourage people around the globe to work on this project and we've already seen great returns by doing this so far, so that's why I'm really focused on keeping it, keeping it a place where you know you can come and show up and do your work and do your best work. >> Right. Right.
Well hopefully that's not taking too many of your cycles, you don't got too many of those, of those characters. >> Every hour I put in, I get like 10s and 20, like hours and hours back in return from the people who give back. So it's well worth it. It's the best use of my time. >> Alright good. So great growth over the year. As you look forward to next calendar year, kind of what are some of your priorities? What are some of the community's priorities? Where is Electron going? And if we touch base a year from now, what are we going to be talking about? >> Excellent question. So strengthening, formalizing some aspects of the community that we have so far, it's a little ad hoc, would be great. We want to look to having people outside of Github that feel more ownership over the project. For example, we have contributors who probably should be reviewing and committing code on their own, without necessarily needing to loop in someone from my team. So really turning this into a community project. In addition, we are focusing up on what might go into a version 2 release. And we're really focusing on security as a key feature in version two. >> Yeah, security's key and it's got to be baked in all the way to the bottom. >> Yeah. >> Alright Jacob, well it sounds like you've got your work cut out for you >> Thank you. and it should be an exciting year. >> Yeah, thanks very much. >> Alright. He's Jacob Groundwater. He's from the Electron project at Github. I'm Jeff Frick. You're watching theCUBE. We'll see you next time. Thanks for watching. (sharp music)
Guy Podjarny, Snyk | Node Summit 2017
>> Hey welcome back everybody Jeff Frick here with theCUBE. We're at Node Summit 2017 in Downtown San Francisco Mission Bay Conference Center. About 800 people talking about nodes, Node JS. The crazy growth in this application development platform and we're excited to have our next guest to talk about security. Which I don't think we've talked about yet. He's Guy Podjarny, I'm sorry. >> Podjarny Correct. >> Welcome, he's the CEO of Snyk, not spelled like it sounds. (laughing) You'll see it on the lower third. >> It's amazing how often we get that question. How do you pronounce Snyk? >> Well I know, obviously people that have never had a start up and tried to go through a URL search >> Indeed. >> just don't know what it's all about. >> It's sort of Google dominance. It's short for so now you know. So now you know. >> Oh, so now you know. Okay perfect, super. First off welcome, great to see you. >> Thank you. Thanks for having me. >> You said this is your second year at the conference. Just kind of share your general impressions of what's going on here. >> Sure, well I think Node Summit is an awesome conference. I think this year's event is bigger, better organized. I don't know if it's bigger people wise but definitely feels that way. It sort of feels more structured. It's nice to see in the audience as well. Just an increased amount of larger organizations that are around and talking about their challenges and, a little bit, a lot earlier in the conference, a little bit of more experienced conversations. So conversations about hey, we've used node and we've encountered these issues versus we're about to use it. We're thinking of using it, so you can definitely see the enterprise adoption kind of growing up. That's my primary impression so far. >> Yeah and it's interesting 'cause you're a start up but Microsoft is here, Google's here, Intel is here, IBM is here so a lot of the big players.
Who've demonstrated in other open source communities that they have completely embraced open source as a method and a way to get, more than just the software, closer to the development community. >> Yeah, agreed and I think another adjacent trend that's happening is serverless, and serverless has grown ridiculously, by massive amounts in this last while. And Node JS is sort of the de facto default language for serverless. Lambda just started with it and AWS and many of the other platforms only support it. I think that contribution also brings the giants a little bit more in here. The Cloud giants but also, I think, again it just sort of boosts Node JS. As though the Node JS ecosystem needed a boost. They get another amplifier. It just raises enterprise awareness and general usage. >> Okay, so what's Snyk all about? Give us, some people aren't familiar with the company. >> Cool, so Snyk deals with open source security and specifically in Node JS, the world of npm. npm is amazing and it allows us to build on the shoulders of giants and all the others in the community. But there are some inherent security risks with just pulling code off the internet and running it in your application. >> Jeff: Right, right. >> What we do at Snyk is we help you find known security flaws, known vulnerabilities in npm packages, and do that in a natural fashion as part of your continuous development process, and then fix those efficiently and monitor for them over time. That's basically it. >> That's your focus, is really keeping track of all these other packages that people are using in their development. >> Precisely, and we're helping you just use open source code and stay secure. The world of Node is our flagship and it's where we started and built, and now we support a bunch of other systems as well. >> It's interesting, Monica from Intel said that in some of their work they found that in some of these applications.
The actual developers are only contributing 2% of the code 'cause they're pulling in all this other stuff. >> Precisely, I have this example I use in a bunch of my talks that shows a serverless example that has 19 lines of code. Copies some file from a URL and puts it on S3. That's 19 lines of code which is awesome. Uses two packages which in turn use 19 packages which bring in 190,000 lines of code. >> Wow. >> That's a massive-- >> So what is that step function again? Start from the beginning. >> 19 to 190,000. >> It starts at two? >> 19 lines of code use two npm packages. They use 19 packages because every package uses other packages as well, and combined those 19 packages bring in 190,000 lines of code. >> Wow, that's amazing. That's an extreme example but you see that pattern. You see this again and again, that the majority of the code in your applications, especially Node, is not first party, it's third party code. >> Jeff: Right. >> And that means most of your security risks. Most of your vulnerabilities, they come from there, so there are a lot of challenges around managing dependencies. I know it's called dependency hell for a reason, but specifically security is still not sufficiently taken care of. It's still overlooked and we need to make sure that it's not just addressed by security people. But it's addressed as part of the development process by developers. >> How do you keep up? Both with the number as the proliferation grows as well as the revisions and versions inside of any particular package? You're kind of chasing a multi-headed beast there. >> It's definitely tough. First of all the short answer is automation. Any scale solution has to start with automation. I've got a security research team in Israel that has a vulnerability pipeline that feeds in from activity in the open source world. Some developer opens an issue that says SQL injection in some package and that disappears into the ether.
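To make the kind of flaw he just mentioned concrete, here is a minimal, hypothetical illustration (no real database or driver involved, just the query text) of how SQL injection works when queries are built by string concatenation, versus a parameterized query that keeps the payload inert:

```javascript
// Hypothetical illustration only; no real database driver is used here.
// Building SQL by string concatenation lets input rewrite the query itself.
function unsafeQuery(userInput) {
  return `SELECT * FROM users WHERE name = '${userInput}'`;
}

// A parameterized query keeps the SQL text fixed; the driver sends the
// values separately, so the payload stays inert data.
function safeQuery(userInput) {
  return { text: 'SELECT * FROM users WHERE name = $1', values: [userInput] };
}

const payload = "' OR '1'='1";
console.log(unsafeQuery(payload)); // the injected OR clause now matches every row
console.log(safeQuery(payload));   // the payload is just a value in an array
```

In a report like the one described, the vulnerable concatenation would typically be buried in one of those transitive packages rather than in the application's own handful of lines.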
So we try to surface those, get it to our security analysts, determine if it's a real vulnerability, curate it in our database, and then just build that database with our own research, but a lot of it is around tapping into the community. And then subsequently, when you consume this, if you want to be able to apply security correctly as you develop your applications, Node JS or otherwise, it has to come to you. The security tool has to be a seamless integration with how you currently work. If you impose another step, another two steps, another three steps on the developers, they're just not going to use it. So a lot of our emphasis is scale on the consumption and the tracking of the database, and simplicity and ease of use on the developer, on the user side. >> And do you help with just, like, flagging? Flagging a problem, or is there an alternative? I mean I would imagine with all these interdependencies, you find one rotten apple can kind of have a huge impact. It's a huge scale of impact, right? >> Absolutely, so really our moniker is that we don't find vulnerabilities, we fix them, and our goal is to fix vulnerabilities. So we actually, first of all in the flow we have single click, open a fix PR. We figure out what changes we need to do. What upgrades you need to make the vulnerability go away. Literally click a button to fix it. Put it all in one PR for everything. And then what we also do: we build patches. Sort of a little known fact is, in the world of operating systems, RedHat and Canonical build a lot of fixes, or they backport a lot of open source fixes, and they put them into their repository. You can just run updates or upgrades and just get those fixes. You don't even know which vulnerabilities you're fixing. You're just getting the fixes, so we build patches for npm packages as well, to allow you to patch vulnerabilities you cannot upgrade away. A lot of it is around fix. Make fix easy.
>> Right and then the other part as you said is baking security into the development all the way through, which we hear over and over and over. >> Build it in, not bolt it on. >> The bolted-on method doesn't work anymore. You've got to have it throughout the application. So you said you're speaking on a panel tomorrow. And I wondered if you can just highlight some of the topics for tomorrow for the folks that aren't going to be here and see the panel. When you look at serverless security. Say that three times fast. What are some of the real special challenges that people need to be thinking about? >> Sure, so you know I actually have two talks tomorrow. One is a panel on Node JS security as a whole and that's sort of a broader panel. We have a few other colleagues in there and we talk about the evolution of Node JS security, that includes the platform itself which is increasingly well handled by the foundation. Definitely some improvements there over the years, and some of it is around best practices like the ones that were just discussed, which is understanding known pitfalls and Node JS sort of security mistakes that you might make, as well as handling the npm ecosystem. The other talk that I have later in the day is around serverless security. Serverless security is interesting because a lot of the promise of serverless and function as a service is that a lot of the concerns, a lot of the earlier or lower levels, get abstracted away from you. You don't need to manage servers. You don't need to manage operating systems, and with those, a lot of security concerns go away. Which in turn focuses the attackers, and should focus you, on the application. As attackers are not just going to give up because they can't hack the operating system that the providers are managing. They would look at the next low hanging fruit and that would be the application. Platform as a service and function as a service really increase the importance of dealing with application security as a whole.
So my talk is a lot about that but also deals with other security concerns that you might have; of course, any new methodology introduces its own concerns, so I talk a little bit about how to address those. Serverless, like Node JS, is an opportunity to build security into the culture and into our methodologies from the early days, so we're trying to help get that right. >> Alright, as you look forward, the next 12 months. I won't say more than 12 months, 6 months, 9 months, 12 months. What are some of your priorities at Snyk? What are you working on if we get together a year from now, what will we be talking about? >> I think, so two primary ones. One is continuing the emphasis on fix. Making fixing trivial in the Node JS environments as well as others. I think we've done well there but there is more work to be done. It needs to be as seamless as possible. The other aspect is indeed in this sort of PaaS and FaaS world of platform as a service and function as a service. Where increasingly there is this awareness, as we work with different platforms, of the blind spot that they have to open source libraries. They fix your nginx vulnerabilities but not your Express vulnerabilities. I sometimes refer to npm packages or open source packages as sprinkles of infrastructure that are just scattered through your application. And today, all of these Cloud platforms are blind to it, so I expect us at Snyk to be helping PaaS and FaaS users deal with those security concerns efficiently. >> Alright, well I look forward to the conversation. >> Thanks. >> Thanks for stopping by. >> Thank you. >> He's Guy Podjarny. He is from Snyk. The CEO of Snyk. I'm Jeff Frick, you're watching theCUBE. (uptempo techno music)
Gaurav Seth, Microsoft | Node Summit 2017
(switch clicking) >> Hey, welcome back, everybody. Jeff Frick, here with theCUBE. We're at the Mission Bay Conference Center in downtown San Francisco at Node Summit 2017. TheCUBE's been coming here for a number of years. In fact, Ryan Dahl's one of our most popular interviews in the history of the show, talking about Node. And, the community's growing, the performance is going up and there's a lot of good energy here, so we're excited to be here and there's a lot of big companies that maybe you would or wouldn't expect to be involved. And, we're excited to have Gaurav Seth. He is the Product Manager for Several Things JavaScript. I think that's the first time we've ever had that title on. He's from Microsoft. Thanks for stopping by. >> Yeah, hey, Jeff, nice to be here. Thanks for having me over. >> Absolutely, >> Yes. >> so let's just jump right into it. What is Microsoft doing here in such a big way? >> So, one of the things that Microsoft is, like, I think we really are, now, committed and, you know, we have the mantra that we are trying to follow which is any app, any developer, any platform. You know, Node actually is a great growing community and we've been getting soaked more and more and trying to help the community and build the community and play along and contribute and that's the reason that brings us here, like, it's great to see the energy, the passion with people around here. It's great to get those connections going, have those conversations, hear from the customers as to what they really need, hear from developers about their needs and then having, you know, a close set of collaboration with the Core community members to see how we can even evolve the project further. >> Right, right, and specifically on Azure, which is interesting. You know, it's been interesting to watch Microsoft really go full bore into cloud, via Azure. >> Right. >> I just talked to somebody the other day, I was talking about 365 being >> Uh huh. 
>> such a game-changer in terms of cloud implementation, as a big company. There was a report that came out about, you know, the path to 20 billion, >> Right. >> so, clearly, Microsoft is not only all-in, but really successfully >> Right. >> executing on that strategy >> Yeah, I mean-- >> and you're a big piece of that. >> Yes, I mean, I think one of the big, big, big pieces, really, is as the developer paradigms are changing, as the app paradigms are changing, you know, how do you really help developers make this transition to a cloud-native world? >> Right, right. >> How do you make sure that the app platforms, the underlying infrastructure, the cloud, the tools that developers use, how do you combine all of them and make sure that you're making it a much easier experience for developers to move on >> Right. >> from their existing paradigms to these new cloud-native paradigms? You know, one of the things we've been doing on the Azure side of the house and when, especially when we look at Node.js as a platform, we've been working on making sure that Node.js has a great story across all the different compute models that we support on Azure, starting from, like, hey, if you want to do serverless functions, if you want to do PaaS, if you want to go the container way, if you want to just use VMs, and, in fact, we just announced the Azure Container Instances, today, >> Right.
>> you know, it's different, especially as you guys go more heavily into cloud, >> Right. >> you need to be more open to the various tools of the developer community. >> That's absolutely true and one of the focus areas for us, really, has been, you know, as we think through the cloud-native transition, what are the big pieces, the main open source tools, the frameworks that are available and how do we provide great experiences for those on Azure? >> Right, right. >> Right, because, at times, people come with the notion that, hey, Azure probably might just be good for dot NET or might just be good for Windows, but, you know, the actual fact, today, is really that Azure has great supporting story for Linux, Azure has great story for a lot of these open source tools and we are continuing to grow our story in that perspective. >> Right. >> So, we really want to make sure that open source developers who come and work on our platform are successful. >> And then, specifically for Node, and you're actually on the Board, so you've got >> Right. >> a leadership position, >> Yep. >> when you look at Node.js within the ecosystem of opensource projects and the growth that we keep hearing about in the sessions, >> Yep. >> you know, how are you, and you specifically and Microsoft generally, kind of helping to guide the growth of this community and the development of this community as it gets bigger and bigger and bigger? >> Right, I think that's a great question. I think from my perspective, and also Microsoft's perspective, there are a bunch of things we are actually doing to engage with the community, so I'll kind of list out three or four things that we are doing. I think the first and foremost is, you know, we are a participant in the Node.js Foundation. >> Right. >> You know, that's where like, hey, we kind of look at the administrative stuff. 
We are a sponsor, you know, at the needed levels, et cetera, so that's just the initial monetary support, but then it gets to really being a part of the Node Core Committee, like, as we work on some of the Core pieces, as we evolve Node, how can we actually bring more perspectives, more value, into the actual project? So, you know, we have many sets of engineers who are, right now, working across different working groups with Node and helping evolve Node. You know, you might have heard about the N-API effort. We are working with the Diagnostics Working Group, we are working with the Benchmarking Working Group and, you know, bringing those things in. The third thing that we did, a while back, was we also did this integration of bringing in Chakra, which is the JavaScript runtime from Microsoft that powers Microsoft Edge. We made Node work with Chakra because we wanted to bring the power of Node to this new platform called Windows IoT >> Right, right. >> and, you know, the existing Node could not get there because of some of the platform limitations. So, those are some of the examples of how we've been actually communicating and contributing. And then, I think the biggest and the foremost for me, really, are the two pillars, like when I think about Microsoft's contribution, it's really, like, you know, the big story or the big pivot for us is, we kind of go create developer tools and help make developers' lives easier by giving them the right set of tools to achieve what they want to achieve in less time, be more productive >> Right, right. >> and the second thing is, really, like the cloud platforms, as things are moving. I think across both of those areas, our focus really has been to make sure that Node as a language, Node as a platform has great first-class experiences that we can help define. >> Right. Well, you guys are so fortunate. You have such a huge install base of developers, >> Right. 
>> but, again, traditionally, it wasn't necessarily cloud application developers and that's been changing >> Yep. >> over time >> Yep. >> and there's such a fierce competition for that guy, >> Yep. >> or gal, who wakes up >> Yep. >> in the morning or not, maybe, the morning, at 10:00, >> Yep. >> has a cup of coffee >> Yep. >> and has to figure out what they're going to develop today >> Right. >> and there's so many options >> Right. >> and it's a fierce competition, >> Right. >> so you need to have an easy solution, you need to have a nice environment, you need to have everything that they want, so they're coding on your stuff and not on somebody else's. >> That's true, I mean I, you know, somehow, instead of calling it competition, I have started using this term coopetition because between a lot of the companies and vendors that we talk about, right, it's more about, for all of us, it's working together to grow the community. >> Right. >> It's working together to grow the pie. You know, with open source, it's not really one over the other. It's like the more players you have and the more players who engage with great ideas, I think better things come out of that, so it's all about that coopetition, >> rather than competition, >> Right. >> I would say. >> Well, certainly, around an open source project, here, >> Yes, exactly. >> and we see a lot of big names, >> Exactly. >> but I can tell you, I've been to a lot of big shows where they are desperately trying to attract >> Right, right, yes. >> the developer ecosystem. "Come develop on our platforms." >> Yes, yes. >> So, you're in a fortunate spot, you started, >> Yes, I mean that-- >> not from zero, but open source is different >> Yes. >> and it's an important ethos because it is much more community >> Exactly, exactly. >> and people look at the name, they don't necessarily look at the title >> Exactly. >> or even the company >> Yep, exactly. >> that people work for. 
>> Exactly, and I think having more players involved also means, like, it's going to be great for the developer ecosystem, right, because everybody's going to keep pushing for making it better and better, >> Right. >> so, you know, as we grow from a smaller stage to, like, hey, there's actually a lot of enterprise adoption of these use-case scenarios that people are coming up with, et cetera, it's always great to have more parties involved and more people involved. >> Gaurav, thank you very much >> Yeah. >> and, again, congratulations on your work here in Node. Keep this community strong. >> Sure. >> It looks like you guys are well on your way. >> Yeah. Thanks, Jeff. >> All right. >> Thanks for your time, take care, yeah. >> Gaurav Seth, he's a Project Lead at Microsoft. I'm Jeff Frick. You're watching theCUBE from Node Summit 2017. Thanks for watching. (upbeat synthpop music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Jeff Frick | PERSON | 0.99+ |
Jeff | PERSON | 0.99+ |
Ryan Dahl | PERSON | 0.99+ |
Gaurav Seth | PERSON | 0.99+ |
Gaurav | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
20 billion | QUANTITY | 0.99+ |
three | QUANTITY | 0.99+ |
Node.js Foundation | ORGANIZATION | 0.99+ |
Node.js | TITLE | 0.99+ |
Guarav Seth | PERSON | 0.99+ |
both | QUANTITY | 0.99+ |
Node | TITLE | 0.99+ |
first | QUANTITY | 0.99+ |
two pillars | QUANTITY | 0.99+ |
second thing | QUANTITY | 0.98+ |
Outlook | TITLE | 0.98+ |
Chakra | TITLE | 0.98+ |
Node Summit 2017 | EVENT | 0.98+ |
one | QUANTITY | 0.97+ |
theCUBE | ORGANIZATION | 0.97+ |
JavaScript | TITLE | 0.97+ |
Mission Bay Conference Center | LOCATION | 0.97+ |
10:00 | DATE | 0.97+ |
Windows | TITLE | 0.97+ |
WEAMS | TITLE | 0.97+ |
Linux | TITLE | 0.96+ |
third thing | QUANTITY | 0.96+ |
first time | QUANTITY | 0.95+ |
TheCUBE | ORGANIZATION | 0.95+ |
Office | TITLE | 0.95+ |
today | DATE | 0.95+ |
Node Core Committee | ORGANIZATION | 0.94+ |
Azure | TITLE | 0.93+ |
four things | QUANTITY | 0.86+ |
NAPI | ORGANIZATION | 0.83+ |
San Francisco | LOCATION | 0.81+ |
Node | ORGANIZATION | 0.8+ |
NET | ORGANIZATION | 0.75+ |
zero | QUANTITY | 0.75+ |
Azure | ORGANIZATION | 0.7+ |
Node Summit | LOCATION | 0.69+ |
Diagnostics Working Group | ORGANIZATION | 0.64+ |
2017 | DATE | 0.58+ |
365 | QUANTITY | 0.54+ |
Edge | TITLE | 0.53+ |
Things | ORGANIZATION | 0.52+ |
BasS | TITLE | 0.52+ |
Group | ORGANIZATION | 0.47+ |
Charles Beeler, Rally Ventures | Node Summit 2017
>> Hey welcome back everybody. Jeff Frick here at theCUBE. We're at Node Summit 2017 in Downtown San Francisco, 800 people hanging out at the Mission Bay Conference Center talking about development and a really monumental growth curve. One of the earlier presenters had one project last year, I think 15 this year, 22 in development and another 75 toy projects. The development curve is really steep. IBM's here, Microsoft, Google, all the big players, so there is a lot of enterprise momentum as well and we're happy to have our next guest, who really started this show and is one of the main sponsors of the show. He's Charles Beeler. He's a general partner at Rally Ventures. Charles, great to see you. >> Good to be back. Good to see you. >> Yeah, absolutely. Just kind of general impression. You've been doing this for a number of years. I think when we talked earlier, the Ryan Dahl interview, from I don't even know what year it is, I'd have to look. >> 2012, January 2012. >> 2012. It's still one of our most popular interviews of all the thousands we've done on theCUBE, and now I kind of get it. >> Right place, right time, but it was initially a lot. In 2011, we were talking about node. Seemed like a really interesting project. No one was really using it in a meaningful way. Bryan Cantrill from Joyent, I know you all have talked before, walked me through the Hello World example on our board in my office, and we decided let's go for it. Let's see if we can get a bunch of enterprises to come and start talking about what they're doing. So January 2012, there were almost none who were actually doing it, but they were talking about why it made sense. And you fast forward to 2017, so HomeAway was the company that actually had one app. Now 15, 22 in development like you were mentioning, and right now on stage you got Twitter talking about Twitter Lite. The breadth, and it's not just internet companies when you look at Capital One. 
You look at some of the other big banks and true enterprise companies who are using this. It's been fun to watch and for us, we do enterprise investing so it fits well, but selfishly this community is just a fun group of people to be around. So as much as this helps Rally and things, we've always been in awe of what the folks around the node community have meant to try to do, and it did start with Ryan and kind of went from there. It's fun to be back and see it again for the fifth annual installment. >> It's interesting, some of the conversations on stage were also about community development and community maturation and people doing bad behavior even though they're technically strong. We've seen some of these kind of growing pains in some other open source communities. The one that jumps out is OpenStack as we've watched that one kind of grow and morph over time. So these are good. There's bad problems and good problems. These are good growing pain problems. >> And that's an interesting one because you read the latest press about the venture industry and the issues are there, and people talk more generally about the tech industry. And it is a problem. It's a challenge and it starts with encouraging a broad diverse group of people who would be interested in this business. >> Jeff: Right, right. >> And getting into it, and so the node community to me has always been, and I think almost any other open source community could benefit from looking at, not just how they've done it, but who the people are and what they've driven. For us, one of the things we've always tried to do is bring a diverse set of speakers to come and get engaged. And it's really hard to go and find enough people who have the time and willingness to come up on stage, and it's so rewarding when you start to really expose the breadth of who's out there engaged and doing great stuff. Last year, we had Stacy Kirk, who runs a company down in L.A. 
Her entire team pretty much is based in Jamaica, and she brought the whole team out. >> It was so much fun to have a whole new group of people the community just didn't know, get to know them and be in awe of what they're building. I thought the Electron conversation, they were talking about community, that was Jacob from GitHub. It's an early community though. They're trying to figure it out. On the OpenStack side, it's very corporate driven. It's harder to have those conversations. In the node community, it's still more community driven, and as a result they're able to have more of the conversation around how do we build a very inclusive group of people who can frankly do a more effective job of changing development. >> Jeff: Right, well kudos to you. I mean you opened up the conference in your opening remarks talking about the code of conduct, and it's kind of like good news bad news. Like really, we have to talk about what should basically be common sense, but you have to do it and that's part of the program. It was Women in Tech Wednesday today, so we've got a boatload of cards going out today with a lot of the women, and it's been proven time and time again that the diversity of opinions tackling any problem is going to lead to a better solution, and hopefully this is not new news to anybody either. >> No, and we have a few scholarship folks from Women Who Code over here. We've done that with them for the last few years, but there are so many organizations that anyone who actually wants to spend a little time figuring out how can I be a part of the, I don't know if I'd call it solution, but help with a challenge that we have to face. It's Women Who Code. It's Girls Who Code. It's Black Girls Code, and it's not just women. There's a broad diverse set of people we need to engage. >> Jeff: Right, right. 
>> We have a group here, Operation Code, who's working with veterans who would like to find a career and are starting to become developers, and we have three or four sponsored folks from Operation Code too. And again, it's just rewarding to watch people who are some of the key folks who helped really make node happen walking up to some stranger who's sort of staring around, hasn't met anybody, introducing themselves and saying, "Hey, what are you interested in and how can I help?" And it's one of the things that frankly brings us back to do this year after year. It's rewarding. >> Well it's kind of an interesting piece of what node is. Again we keep hearing time and time again, it's an easy language. Use the same language for the front end or the back end. >> Yep. >> Use a bunch of pre-configured modules. I think Monica from Intel, she said that a lot of the code they see is 2% your code and everything else you're leveraging from other people. And we see in all these tech conferences that the way to have innovation is to enable more people to contribute, that have the tools and the data, and that's really kind of part of what this whole ethos is here. >> And making it, just generally the ethos around making it easier to develop and deploy. And so when we first started, Google was nowhere to be found and Microsoft was actually already here. IBM wasn't here yet, and now you look at those folks, the number of submissions we saw for talk proposals, the depth of engagement within those organizations. Obviously Google's got their Go and a bunch of it, but node is a key part of what they're doing. Node, I think for both IBM and also for Google, is the most deployed language or the most deployed stack in terms of what they're seeing on their cloud, which is why they're here. And they're seeing just continued growth, so yeah, it drives that view of how can we make software easier to work with, easier to put together, create and deploy, and it's fun to watch. 
Erstwhile competitors sitting comparing notes and ideas, and someone said to me, one of the Google folks, Miles Boran, had said, mostly I love coming to this because the hallway chatter here is just always so fascinating. So you go hear these great talks and you walk out and the speakers are there. You get to talk to them and really learn from them. >> I want to shift gears a little. It's always great to get a venture capitalist on. Everybody wants to hear your thoughts and you see a lot of stuff come across your desk. As you just look at the constant crashing of waves of innovation that we keep going through here, and I know that's a part of why you live here and why I do too. And cloud clearly is probably past the peak of the wave, but we're just coming into IoT, internet of things, and 5G, which is going to start to hit in the near future. As you look at it from an enterprise perspective, what's getting you excited? What are some of the things that maybe people aren't thinking about that are less obvious? And really the adoption by enterprises of these cutting edge technologies, of getting involved in open source, is a really phenomenal environment for startups. >> Yeah, and what you're seeing is the companies, the original enterprises that were interested in node, decided to start deploying. The next question is alright, this worked, what else can we be doing? And this is where you're seeing the advent of first cloud, but now how people are thinking about deployment. There's a lot of conversation here this week about serverless. >> Jeff: Right, right. >> We were talking about containers, microservices, and next thing you know people are saying okay, what else can we be doing to push the boundaries around this? So from our perspective, what we think about when we think of enterprise and infrastructure and DevOps et cetera is it is an ever changing thing. 
So cloud as we know it today is sort of, it's done, but it's not close to being finished when you think about how people are making cloud-native apps and deploying them. How that keeps changing, the questions they keep asking, but also now to your point, when you look at 5G, when you look at IoT, the deployment methodologies, they're going to have to change. The development languages are going to change, and that will once again result in further change across the entire infrastructure, how am I going to go deploy. So I would say that we have not stopped seeing innovative stuff in any of those categories. You asked about where do we see kind of future things that we like. Like any VC, if I don't say AI and ML, and what are the other ones I'm supposed to say? Virtual reality, augmented reality, drones obviously are huge. >> It's anti drones, drone detection. >> We look at those as enabling technology. We're more interested, from a Rally perspective, in applied use of those technologies, so there's some folks from Grail Bio here today. And I'm sure you know Grail, right, they raised a billion dollars. The first question I asked the VP who is here, I said, did you cure cancer yet? 'Cause it's been like a year and a half. They haven't yet, sorry. But what's real interesting is when you talk to them about what they're doing. So first, they're using node, but the approach they're taking to try to make their software get smarter and smarter and smarter by the stuff they see, how they're changing, it's just fundamentally different than things people were thinking about a few years ago. So for us, the applied piece is we want to see companies like a Grail come in and say, here's what we're doing, here's why, and here's how we're going to leverage all of these enabling technologies to go accomplish something that no one has ever been able to do before. >> Jeff: Right, right. And that's what gets us excited. The idea of artificial intelligence, it's cool, it's great. I love talking about it. 
Walk me through how you're going to go do something compelling with that. Blockchain is an area that we're spending, have been and continue to spend, a lot of time looking at right now, not so much from a currency perspective. Just a very compelling technology, and the breadth of capability there is incredible. We've met, in the last week I met four entrepreneurs. There are three of them who are here talking about just really novel ways to take advantage of a technology that is still just kind of early stages, from our perspective, of getting to a point where people can really deploy within large enterprise. And then I'd say the final piece for us, and it's not a new space, but kind of sitting over all of this, is security. And as these things change constantly, the security needs are going to change, right. The footprint in terms of what the attack surface looks like, it gets bigger and bigger. It gets more complex, and the unfortunate reality of simplifying the development process is you also sometimes sort of move out the security thought process from a developer perspective. From a deployment perspective, you assume, I've heard companies say, well we don't need to worry about security because we keep our stuff on Amazon. As a security investor, I love hearing that. As a user of some of those solutions, it scares me to death, and so we see this constant evolution there. And what's interesting, today I think we have five security companies who are sponsoring this conference. The first few years, no one even wanted to talk about security. And now you have five different companies who are here really talking about why it matters if you're building out apps and deploying in the cloud, what you should be thinking about from a security perspective. >> Security is so interesting because to me, it's kind of like insurance. How much is enough? And ultimately you can just shut everything down and close it off, but that's not the solution. 
So where's the happy medium? And the other thing that we hear over and over is it's got to be baked into all the layers of the cake. It can't just be the castle and moat methodology anymore. >> Charles: Absolutely. >> How much do you have? Where do you put it in? But where do you stop? 'Cause ultimately it's like insurance. You can just keep buying more and more. >> And recognize the irony of sitting here in San Francisco while Black Hat's taking place. We should both be out there talking about it too. (laughing) >> Well no, 'cause you can't go there with your phone, your laptop. No, you're not supposed to bring your car anymore. >> This is the first year in four years that my son won't be at DEF CON. He just turned seven, so he set the record at four, five and six as the youngest DEF CON attendee. A little bitter we're not going this year, and shout out because he was first place in the kid's capture the flag last year. >> Jeff: Oh very good. >> Until he decided to leave and go play video games. So the way we think about the question you just asked on security, and this is actually, I give a lot of credit to Art Coviello. He's one of our venture partners. He was the CEO at RSA for a number of years, ran it post-EMC acquisition as well. It's not so much of, okay, I've got this issue, it could be paying ransom or whatever it is, and people come in and say we solve that. You might solve the problem today, but you don't solve the problem for the future typically. The question is, what is it that you do in my environment that covers a few things? One, how does it reduce the time and energy my team needs to spend on solving these issues so that I can use them? Because the people problem in security is huge. >> Right. >> And if you can reduce the amount of time people are doing what could be automated tasks, manual tasks, and instead get them focused on higher-order stuff, you get to cover more. So how does it reduce the stress level for my team? 
What do I get to take out? I don't have unlimited budget. That could be buying point solutions. What is it that you will allow me to replace so that the net cost to me to add your solution is actually neutral or negative, so that I can simplify my environment? Again, going back to making these work for the people. And then, what is it that you do beyond claiming that you're going to solve a problem I have today? Walk me through how this fits into the future. There are not a lot of the thousands of-- >> Jeff: Those are not easy questions. >> They're not easy questions, and so when you ask that and apply that to every company who's at Black Hat today, every company at RSA, there's not very many of those companies who can really answer that in a concise way. And you talk to CISOs, those are the questions they're starting to ask. Great, I love what you're doing. It's not a question of whether I have you in my budget this year or next. What do I get to do in my environment differently that makes my life easier or my organization's life easier, and ultimately nets it out at a lower cost? It's a theme we invest in. About 25% of our investments have been in the security space, and I feel like so far every one of those deals fits in some way in that category. We'll see how they play out, but so far so good. >> Well very good, so before we let you go, just a shout out. I think we've talked before, you sold out sponsorship, so people that want to get involved in node 2018, they better step up pretty soon. >> 2018 will happen. It's the earliest we've ever confirmed and announced next year's conference. It usually takes me five months before >> Jeff: To recover. >> I'm willing to think about it again. It will happen. It will probably happen within the same one week timeframe, two week timeframe. Actually, someone put a ticket tier up for next year, or if you buy tickets during the conference the next two days, you can buy a ticket for $395 today. Otherwise they're $1000 bucks. 
It's a good deal if people want to go, but the nice thing is we've never had a team that does outreach to sponsors. It's always been inbound interest, people who want to be involved, and it's made the entire thing just a lot of fun to be a part of. We'll do it next year, and it will be really fascinating to see how much additional growth we see between now and then. Because based on some of the enterprises we're seeing here, I mean true Fortune 500, nothing to do with technology from a revenue perspective, they just use it internally, you're seeing some really cool development taking place, and we're going to get some of that on stage next year. >> Good, well congrats on a great event. >> Thanks. And thanks for being here. It's always fun to have you guys. >> He's Charles Beeler. I'm Jeff Frick. You're watching theCUBE, Node Summit 2017. Thanks for watching. (uptempo techno music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Charles Beeler | PERSON | 0.99+ |
Stacy Kirk | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
Charles | PERSON | 0.99+ |
Jeff Frick | PERSON | 0.99+ |
Jeff | PERSON | 0.99+ |
Monica | PERSON | 0.99+ |
$1000 | QUANTITY | 0.99+ |
January 2012 | DATE | 0.99+ |
Jamaica | LOCATION | 0.99+ |
Bryan Cantrell | PERSON | 0.99+ |
2011 | DATE | 0.99+ |
three | QUANTITY | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
San Francisco | LOCATION | 0.99+ |
seven | QUANTITY | 0.99+ |
2012 | DATE | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Ryan Dahl | PERSON | 0.99+ |
$395 | QUANTITY | 0.99+ |
Last year | DATE | 0.99+ |
Miles Boran | PERSON | 0.99+ |
next year | DATE | 0.99+ |
GrowBio | ORGANIZATION | 0.99+ |
first question | QUANTITY | 0.99+ |
four years | QUANTITY | 0.99+ |
last year | DATE | 0.99+ |
today | DATE | 0.99+ |
2017 | DATE | 0.99+ |
L.A. | LOCATION | 0.99+ |
Home Away | ORGANIZATION | 0.99+ |
800 people | QUANTITY | 0.99+ |
RSA | ORGANIZATION | 0.99+ |
six | QUANTITY | 0.99+ |
2018 | DATE | 0.99+ |
one week | QUANTITY | 0.99+ |
2% | QUANTITY | 0.99+ |
five | QUANTITY | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
75 toy projects | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
Mission Bay Conference Center | LOCATION | 0.99+ |
Jacob | PERSON | 0.99+ |
Capital One | ORGANIZATION | 0.99+ |
last week | DATE | 0.99+ |
this week | DATE | 0.99+ |
Rally Ventures | ORGANIZATION | 0.99+ |
first year | QUANTITY | 0.98+ |
DMC | ORGANIZATION | 0.98+ |
first place | QUANTITY | 0.98+ |
first | QUANTITY | 0.98+ |
Ryan | PERSON | 0.98+ |
both | QUANTITY | 0.98+ |
GitHub | ORGANIZATION | 0.98+ |
thousands | QUANTITY | 0.98+ |
five security companies | QUANTITY | 0.98+ |
five different companies | QUANTITY | 0.98+ |
Wednesday | DATE | 0.98+ |
a year and a half | QUANTITY | 0.98+ |
Node Summit 2017 | EVENT | 0.98+ |
DEF CON. | EVENT | 0.98+ |
One | QUANTITY | 0.97+ |
four | QUANTITY | 0.97+ |
four entrepreneurs | QUANTITY | 0.97+ |