The Shortest Path to Vertica – Best Practices for Data Warehouse Migration and ETL


 

Hello everybody, and thank you for joining us today for the Virtual Vertica BDC 2020. Today's breakout session is entitled "The Shortest Path to Vertica – Best Practices for Data Warehouse Migration and ETL." I'm Jeff Healey, I lead Vertica marketing, and I'll be your host for this breakout session. Joining me today are Marco Gessner and Mauricio Felicia, who are joining us from the EMEA region. Before we begin, I encourage you to submit questions or comments during the virtual session — you don't have to wait, just type your question or comment in the question box below the slides and click Submit. As always, there will be a Q&A session at the end of the presentation, and we'll answer as many questions as we're able to during that time; any questions we don't address, we'll do our best to answer offline. Alternatively, visit the Vertica forums at forum.vertica.com to post your questions there after the session — our engineering team is planning to join the forums to keep the conversation going. Also, a reminder that you can maximize your screen by clicking the double-arrow button in the lower-right corner of the slides, and yes, this virtual session is being recorded and will be available to view on demand this week; we'll send you a notification as soon as it's ready. Now let's get started — over to you, Marco.

Hello everybody, this is Marco speaking, a sales engineer from EMEA, and I'll just get going. This is the agenda: part one will be done by me, part two by Mauricio. The agenda is, as you can see: big bang or piece by piece; migration of the DDL, that is, the physical data model; migration of ETL and BI functionality; what to do with stored procedures; what to do with any possible existing user-defined functions; and the migration of the data itself. The data warehouse optimization part will be covered by Mauricio — do you want to introduce yourself, Mauricio?

Yeah, hello everybody, my name is Mauricio Felicia and I'm a Vertica pre-sales engineer. Like Marco said, I'm going to talk about how to optimize the data warehouse using some specific Vertica techniques like table flattening and live aggregate projections. Let me start with a quick overview of the data warehouse migration process we are going to talk about today. Normally we suggest starting by migrating the current data warehouse to the new database with limited or minimal changes in the overall architecture. Clearly we will have to port the DDL and redirect the data access tools to the new platform, but we should minimize the amount of changes in the initial phase in order to go live as soon as possible. In the second phase we can start optimizing the data warehouse, again with no or minimal changes in the architecture as such; during this optimization phase we can create, for example, projections for some specific queries, optimize encoding, or change some of the resource pools — this is something we do if and when needed. Finally, and again if and when needed, we go through an architectural redesign using the full set of Vertica techniques in order to take advantage of all the features we have in Vertica. This is normally an iterative approach, so we go back and forth between the specific features and the architecture and design. We will go through this process in the next few slides.

OK. In order to encourage everyone to keep using their common sense when migrating to a new database management system — people are often afraid of it — it's often useful to use the analogy of a house move.
In your old home you might have developed solutions for your everyday life that made perfect sense there. For example, if your old Saint Bernard dog can't walk anymore, you might be using a forklift to heave him in through your window. In the new home, consider the elevator, and don't complain that the window is too small to fit the dog through. It is very much the same when migrating to Vertica.

To make the transition gentle — and I'd love to remain in my analogy with the house move — picture your new house as your new holiday home: begin to install everything you miss and everything you like from your old home, and once you have everything you need in your new house, you can shut down the old one. So move piece by piece and go for quick wins to make your audience happy. You do big bang only if they are going to retire the platform you are sitting on, or you are really on a sinking ship. Otherwise, identify quick wins, implement and publish them quickly in Vertica, reap the benefits, enjoy the applause, and use the gained reputation for further funding; and if you find that nobody is using the old platform anymore, you can shut it down. If you really have to migrate in one go, do a big bang only if you absolutely have to; otherwise migrate by subject area and group all similar or related divisions.

Having said that, you start off by migrating objects in the database — that's one of the very first steps. It consists of migrating the places where you can put the other objects into, that is, owners and locations, which are usually schemas. Then you extract the tables and views, convert the object definitions and deploy them to Vertica. Mind that you shouldn't do it manually: never type what you can generate, and automate whatever you can. For users and roles, there is usually a system table in the old database that contains all the roles; you can export those to a file, reformat them, and then you have CREATE ROLE and CREATE USER scripts that you can apply to Vertica. If LDAP or Active Directory was used for authentication in the old database, Vertica supports anything within the LDAP standard. Catalogs and schemas should be relatively straightforward, with maybe one difference: Vertica does not restrict you by defining a schema as the collection of all objects owned by a user, but it supports and emulates that for old times' sake. Vertica does not need the catalog; if you absolutely need a catalog name for the old tools that you use, it is always set to the name of the database in the case of Vertica.

Having the schemas, catalogs, users and roles in place, move on to the data definition language of the tables. If you are allowed to, it's best to use a tool that translates the data types in the DDL it generates. You will see a mention of odb — written by Mauricio, by the way — several times in this presentation; we are very happy to have it. It can export the old database table definitions because it works over ODBC: it takes what the old database's ODBC driver exposes and then uses internal translation tables to map to several target DBMS flavors, the most important of which is obviously Vertica. If you are forced to use something else, there are always tools like SQL*Plus in Oracle, the SHOW TABLE command in Teradata, and so on — each DBMS has a set of tools to extract the object definitions so they can be deployed in another instance of the same DBMS.
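As a hedged illustration of that last point — assuming an Oracle source, where the DBMS_METADATA package and the DBA_ROLES/DBA_USERS catalog views are available — scripting the extraction from SQL*Plus could look like the sketch below. The schema name is made up, and the generated DDL still needs data-type translation before it runs on Vertica.

```sql
-- SQL*Plus formatting so the generated statements come out on clean lines
SET LONG 100000 PAGESIZE 0 LINESIZE 32767

-- Table definitions to be converted and replayed on Vertica
SELECT DBMS_METADATA.GET_DDL('TABLE', table_name, owner)
  FROM all_tables
 WHERE owner = 'SALES_DW';

-- Roles and users: generate CREATE ROLE / CREATE USER scripts from the catalog
SELECT 'CREATE ROLE ' || role     || ';' FROM dba_roles;
SELECT 'CREATE USER ' || username || ';' FROM dba_users;
```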
Views are usually also defined in the old database catalog, so they can be extracted the same way. One thing that needs a bit of special care is synonyms. Synonyms have to be emulated in different ways depending on the specific needs — a synonym lets you refer to a view or table under another name. What Vertica has instead, and what is really neat but other databases don't have, is the search path. It works very much like the PATH environment variable in Windows or Linux: you specify an object name without the schema name, and Vertica searches for it first in the first entry of the search path, then in the second, then in the third, which makes synonyms largely or completely unneeded.

When you generate the DDL — to remain in the analogy of moving house — dust and clean your stuff before placing it in the new house. If you see a table like the one here at the bottom, it is usually the corpse of a bad migration in the past: an ID is usually an integer and not a floating-point data type, a first name hardly ever has 256 characters, and if a column is called HIRE_DT it is not necessarily needed to store the second at which somebody was hired. So take good care, while you are moving, to dust off your stuff and use better data types.

The same applies especially to strings. How many bytes does a string containing four euro signs need? It's not four — it's actually 12, because in UTF-8, which is the way Vertica encodes strings, an ASCII character takes one byte but the euro sign takes three. That means that when you have a single-byte character set in the source, you very often have to oversize strings first, because otherwise data gets rejected or truncated, and then carefully check what the best size really is. The most promising approach is to initially dimension strings in multiples of the original length — odb, with the option you see on the slide, will double the length of what would otherwise be single-byte character columns and multiply by four the length of columns that hold wide characters in the traditional database — then load a representative sample of your source data, profile it using the tools we present, find the actual longest values, and make the columns shorter again. We'll come back to the issues that too long and too big data types cause when we talk about projection design.
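A quick way to see the byte-versus-character difference from a Vertica prompt — a minimal check, assuming the standard OCTET_LENGTH and CHARACTER_LENGTH functions and remembering that VARCHAR lengths in Vertica are counted in bytes, not characters:

```sql
-- Four euro signs: four characters, but twelve bytes in UTF-8, so a
-- VARCHAR(4) sized for a single-byte source would truncate or reject them.
SELECT CHARACTER_LENGTH('€€€€') AS chars,   -- 4
       OCTET_LENGTH('€€€€')     AS bytes;   -- 12
```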
We live and die with our projections. You might remember the rules on how default projections come to exist. The way we do it initially would be, just like for the profiling, to load a representative sample of the data, collect a representative set of already known queries, and feed them to the Vertica Database Designer — and you don't have to decide immediately, you can always amend things later. Otherwise follow the laws of physics: avoid moving data back and forth across nodes, avoid heavy I/O, and if you can, design your projections initially by hand.

Encoding matters. You know that the Database Designer is a very tight-fisted thing: it optimizes to use as little space as possible. You have to keep in mind that if you compress very well, you might end up spending more time reading the data back. This is a test run once using several encoding types, and you see that RLE — run-length encoding — on sorted data is not even visible in the chart, while the others are considerably slower. You can get the slides and look at them in detail; I won't go into detail here.

Now to BI migrations. Usually you can expect 80 percent of everything to be lifted and shifted. You don't need most of the pre-aggregated tables, because we have live aggregate projections. Many BI tools have specialized query objects for the dimensions and the facts, and we have the possibility to use flattened tables, which will be talked about later — you might have to rewrite those query objects by hand. You will be able to switch off caching, because Vertica speeds up everything with live aggregate projections, and if you have worked with MOLAP cubes before, you very probably won't need them at all.

ETL tools: if you load row by row into the old database, consider changing everything to very big transactions, and if you use INSERT statements with parameter markers, consider writing to named pipes and using Vertica's COPY command instead of mass inserts.
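A hedged sketch of that pattern — the table name and the pipe path are made up for illustration — where an ETL process writes rows to a named pipe and a single COPY transaction bulk-loads them:

```sql
-- One big COPY transaction instead of many parameterized single-row INSERTs.
-- /tmp/sales_pipe is a named pipe (created with mkfifo) fed by the ETL process.
COPY store.store_sales
FROM '/tmp/sales_pipe'
DELIMITER '|' NULL ''
DIRECT
REJECTED DATA '/tmp/sales_rejects.txt';
```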
Custom functionality: you can see on this slide that Vertica has by far the biggest number of functions in the database — we compare them regularly against other databases. You might find that many of the functions you have written won't be needed on the new database, so look at the Vertica catalog before trying to migrate a function that you don't need. Stored procedures are very often used in the old database to overcome shortcomings that Vertica doesn't have. Very rarely will you have to actually write a procedure that involves a loop — really, in our experience, very rarely — usually you can just switch to standard scripting. And since this is basically repeating what Mauricio said, in the interest of time I will skip this slide.

Look at this one here: most of a data warehouse migration should be automated. You can automate DDL migration using odb, which is crucial; data profiling is not crucial, but game-changing; encoding is the same — you can automate it using our Database Designer; the physical data model optimization in general is game-changing, and you have the Database Designer for it; use the old platform's tools to generate the SQL for provisioning; having no objects without their owners is crucial; and functions and procedures are only crucial if they embody the company's intellectual property, otherwise you can almost always replace them with something else. That's it from me for now.

Thank you, Marco. We will now continue our presentation talking about some of the Vertica optimization techniques we can implement in order to improve the general efficiency of the data warehouse. Let me start with a few simple messages. The first one is that you are supposed to optimize only if and when this is needed: in most cases, just a lift and shift from the old data warehouse to Vertica will give you the performance you were looking for, or even better, so in that case it is probably not really needed to optimize anything. In case you want to optimize, or you need to optimize, then keep in mind some of the Vertica peculiarities: for example, implement deletes and updates in the Vertica way, use live aggregate projections in order to avoid — or better, to limit — the GROUP BY executions at query time, use table flattening in order to avoid or limit joins, and then you can also use some specific Vertica extensions, for example time series analysis or machine learning, on top of your data.

Let's start by reviewing the first of these points: optimize if and when needed. Well, if, when you migrate from the old data warehouse to Vertica without any optimization, the performance level is already OK, then your job is basically done. If this is not the case, one very easy optimization technique you can use is to ask Vertica itself to optimize the physical data model using the Vertica Database Designer. The DBD — the Vertica Database Designer — has several interfaces; here I'm going to use what we call the DBD programmatic API, so basically SQL functions. With other databases you might need to hire experts to look at your data, your data warehouse, your table definitions, creating indexes or whatever; in Vertica all you need is to run something like six simple SQL statements to get a very well optimized physical data model. You see that we start by creating a new design, then we add to the design the tables and the queries we want to optimize, we set our target — in this case we are tuning the physical data model in order to maximize query performance, which is why we use the query optimization objective in our statement; another possible option would be to tune in order to reduce storage, or a mix between storage and query — and finally we ask Vertica to produce and deploy the optimized design. In a matter of literally minutes you can get a fully optimized physical data model. This is something very, very easy to implement.
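A sketch of that six-statement flow. The DESIGNER_* functions below are Vertica's Database Designer programmatic API, but the argument lists shown here are simplified assumptions — the design name, file paths and table filter are made up, and exact signatures vary by Vertica version, so check the SQL reference before running this.

```sql
-- 1. Create a design workspace
SELECT DESIGNER_CREATE_DESIGN('dw_design');
-- 2. Add the tables to consider (here: everything in schema "public")
SELECT DESIGNER_ADD_DESIGN_TABLES('dw_design', 'public.*', true);
-- 3. Add the representative query file collected from the old platform
SELECT DESIGNER_ADD_DESIGN_QUERIES('dw_design', '/home/dbadmin/queries.sql', true);
-- 4. Tune for query performance (alternatives: LOAD or BALANCED)
SELECT DESIGNER_SET_OPTIMIZATION_OBJECTIVE('dw_design', 'QUERY');
-- 5. Produce the projection scripts and deploy them
SELECT DESIGNER_RUN_POPULATE_DESIGN_AND_DEPLOY('dw_design',
       '/home/dbadmin/dw_design.sql', '/home/dbadmin/dw_deploy.sql');
-- 6. Clean up the design workspace
SELECT DESIGNER_DROP_DESIGN('dw_design');
```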
Next, keep in mind some of the Vertica peculiarities. Vertica is very well tuned for load and query operations, and it writes data into ROS containers on disk; a ROS container is a group of files, and we will never, ever change the content of these files. The fact that ROS container files are never modified is one of the Vertica peculiarities, and this approach leads to minimal locking: we can run multiple load operations in parallel against the very same table — assuming we don't have a primary key or unique constraint enforced on the target table — because they will simply end up in different ROS containers. A SELECT in read committed isolation requires no lock at all and can run concurrently with an INSERT ... SELECT, because the SELECT works on a snapshot of the catalog taken when the transaction starts; this is what we call snapshot isolation. And recovery, because we never change the ROS files, is very simple and robust. So we get a huge number of advantages from the fact that we never change the content of the ROS containers; on the other side, deletes and updates require a little attention.

So, what about deletes? When you delete in Vertica you basically create a new object, called a delete vector — it may sit on disk or in memory — and this vector points to the rows being deleted, so that when a query is executed Vertica will simply ignore the rows listed in the delete vector. And it's not just about deletes: an update in Vertica consists of two operations, a delete and an insert, and a merge consists of either an insert or an update, which in turn is made of a delete plus an insert. So if we tune how deletes work, we will also have tuned updates and merges. What should we do in order to optimize deletes? Remember what we said: every time we delete, we actually create a new object, a delete vector. So avoid committing deletes and updates too often — this reduces the work for the mergeout and replay-delete activities that run afterwards — and be sure that all the projections involved contain the columns referenced in the delete predicate: this lets Vertica access each projection directly, without having to go through the super projection in order to create the delete vector, and the delete will be much, much faster. Finally, another very interesting optimization technique is to segregate update and delete operations from the query workload in order to reduce lock contention, and this can be done using partition operations.

This is exactly what I want to talk about now. Here you have a typical data warehouse architecture: data arrives in a landing zone, where it is loaded from the data sources; then a transformation layer writes into a staging area, which in turn feeds the partitioned blocks of data in the green data structures at the end — the green data structures are the ones used by the data access tools when they run their queries. Sometimes we might need to change old data, for example because we have late-arriving records, or because we want to fix some errors that originated in the source systems. What we do in this case is copy the partition we want to adjust from the green query area at the end back to the staging area — a very fast copy-partition operation — then we run our updates, or our adjustment procedure, or whatever we need in order to fix the errors in the data, in the staging area, and at the very same time users continue to query the green data structures at the end, so we never have contention between the two operations. When the update in the staging area is completed, all we have to do is run a swap partition between the two tables, in order to swap the data we just finished adjusting in the staging zone into the query area — the green one at the end. This swap partition is very fast and is an atomic operation: basically, all that happens is that the pointers to the data are exchanged. It is a very effective technique and a lot of customers use it.
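A hedged sketch of that round trip — table names and partition-key values are made up; COPY_PARTITIONS_TO_TABLE and SWAP_PARTITIONS_BETWEEN_TABLES are Vertica's partition-management functions, and the key range must match the tables' partition expression:

```sql
-- Pull the partitions to be fixed into a staging table...
SELECT COPY_PARTITIONS_TO_TABLE('dw.fact_sales', '2020-02-01', '2020-02-29',
                                'stage.fact_sales_fix');

-- ...run the UPDATE / DELETE / reload work against stage.fact_sales_fix,
-- while queries keep running undisturbed on dw.fact_sales...

-- ...then atomically swap the corrected partitions back.
SELECT SWAP_PARTITIONS_BETWEEN_TABLES('stage.fact_sales_fix',
                                      '2020-02-01', '2020-02-29',
                                      'dw.fact_sales');
```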
So why flattened tables and live aggregate projections? Basically, we use flattened tables and live aggregate projections to minimize or avoid joins — that is what flattened tables are for — and GROUP BYs, which is what live aggregate projections are for. Now, compared to traditional data warehouses, Vertica can store, process, aggregate and join orders of magnitude more data — it is a true columnar database — and joins and GROUP BYs are normally not a problem at all: they run faster than on any traditional data warehouse. But there are still scenarios where the datasets are so big — we are talking about petabytes of data — and growing so quickly that we need something to boost GROUP BY and join performance, and this is why you can use live aggregate projections to perform aggregations at load time and limit the need to group at query time, and flattened tables to combine information from different entities at load time and, again, avoid running joins at query time.

So, live aggregate projections. At this point in time we can use live aggregate projections with four built-in aggregate functions: SUM, MIN, MAX and COUNT. Let's see how this works. Suppose you have a normal table — in this case a table unit_sold with three columns: pid, date_time and quantity — which has been segmented in a given way, and on top of this base table (we call it the anchor table) we create a projection. We create the projection using a SELECT that aggregates the data: we take the pid, the date portion of date_time, and the sum of quantity from the base table, grouping on the first two columns — pid and the date portion of date_time.

What happens when we load data into the base table? All we have to do is load data into the base table. When we do, we of course fill the regular projections — assuming we are running with K-safety 1, we will have two projections, and we load into those two projections all the detailed data we are loading into the table, so pid, date_time and quantity — but at the very same time, without having to do anything, without any particular operation, without having to run any ETL procedure, we also automatically get, in the live aggregate projection, the data pre-aggregated: pid, the date portion of date_time, and the sum of quantity in the total-quantity projection. This is something we get for free, without having to run any specific procedure, and it is very efficient. The key concept is that the loading operation, from the DML point of view, is executed against the base table; we do not explicitly aggregate data, and there is no procedure doing the aggregation — it is automatic, and Vertica feeds the live aggregate projection every time we load into the base table.

You see the two SELECTs on this slide: they produce exactly the same result, so running SELECT pid, date, SUM(quantity) against the base table, or running SELECT * against the live aggregate projection, gives exactly the same data. This is of course very useful, but what is much more useful — and we can observe it if we run an EXPLAIN — is that if we run the SELECT against the base table asking for this grouped data, what happens behind the scenes is that Vertica sees there is a live aggregate projection containing data that has already been aggregated during the loading phase, and rewrites your query to use the live aggregate projection. This happens automatically: here is a query that ran a GROUP BY against unit_sold, and Vertica decided to rewrite it as something to be executed against the live aggregate projection, because this saves a huge amount of time and effort — and you save the ETL cycle. And it is not limited to exactly the information you chose to aggregate: other GROUP BY queries — a count, for example — will also take advantage of the live aggregate projection, and again this happens automatically; you don't have to do anything to get it.
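A minimal sketch of the example just described. The column names follow the slide (pid, date_time, quantity); the projection name and schema are assumptions, and the segmentation clause is omitted for brevity:

```sql
CREATE TABLE public.unit_sold (
    pid       INT,
    date_time TIMESTAMP,
    quantity  INT
);

-- Live aggregate projection: aggregation happens at load time, not query time.
CREATE PROJECTION public.total_qty AS
SELECT pid,
       date_time::DATE AS sale_date,
       SUM(quantity)   AS total_quantity
FROM   public.unit_sold
GROUP  BY pid, date_time::DATE;

-- These two queries return the same result; EXPLAIN on the first shows the
-- optimizer rewriting it against the total_qty projection.
SELECT pid, date_time::DATE, SUM(quantity)
FROM   public.unit_sold
GROUP  BY 1, 2;

SELECT * FROM public.total_qty;
```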
One thing we have to keep very clearly in mind: what we store in the live aggregate projection is partially aggregated data. In this example we have two inserts — the first insert enters four rows and the second one five rows — and for each of these inserts we get a partial aggregation. Vertica can never know that after the first insert there will be a second one, so it calculates the aggregation of the data every time you run an insert. This is a key concept, and it also means that you maximize the effectiveness of this technique by inserting large chunks of data: if you insert data row by row, the live aggregate projection is not very useful, because for every row you insert you get one aggregated row, so the live aggregate projection ends up containing the same number of rows as the base table. But if you insert a large chunk of data every time, the number of aggregated rows in the live aggregate structure is much smaller than the base data. You can see how this works by counting the rows in the live aggregate projection: if you run the SELECT COUNT(*) against the live aggregate projection — the query on the left side — you get four rows, but if you EXPLAIN this query you will see that it was reading six rows. That is because each of those two inserts put three rows into the live aggregate projection. So, key concept: live aggregate projections keep partially aggregated data, and the final aggregation always happens at run time.

Another structure which is very similar to the live aggregate projection is what we call a Top-K projection. We do not aggregate anything in a Top-K projection; we just keep the last rows, or limit the amount of rows we keep, using a LIMIT ... OVER (PARTITION BY ... ORDER BY) clause. In this case we create, on top of the base table, two Top-K projections: one to keep the last quantity that has been sold, and the other to keep the maximum quantity. In both cases it is just a matter of ordering the data — in the first case by the date_time column, in the second case by quantity — and in both cases we fill the projection with just the last row. Again, this is something that happens automatically when we insert data into the base table. If, after the insert, we run our SELECT against either the max-quantity or the last-quantity projection, we get just those very last values — you can see that we have far fewer rows in the Top-K projections.
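A hedged sketch of those two Top-K projections on the same hypothetical unit_sold table (the projection names are made up):

```sql
-- Keep, per product, only the most recent sale...
CREATE PROJECTION public.last_qty AS
SELECT pid, date_time, quantity
FROM   public.unit_sold
LIMIT  1 OVER (PARTITION BY pid ORDER BY date_time DESC);

-- ...and, per product, the row with the largest quantity ever sold.
CREATE PROJECTION public.max_qty AS
SELECT pid, date_time, quantity
FROM   public.unit_sold
LIMIT  1 OVER (PARTITION BY pid ORDER BY quantity DESC);

-- Both projections are maintained automatically at load time.
SELECT * FROM public.last_qty;
```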
We said at the beginning that we can use four built-in functions — SUM, MAX, MIN and COUNT. What if I want to create my own specific aggregation on top of the loaded data? Our customers sometimes have very specific needs in terms of live aggregate projections. In that case you can code your own live aggregation as a user-defined function: you can create a user-defined transform function to implement any sort of complex aggregation while loading data. After you have implemented this UDTF, you can deploy it using a pre-pass approach, which basically means the data is aggregated at load time, during the data ingestion, or a batch approach, which means the aggregation runs afterwards as a batch on top of the loaded data.

Things to remember about live aggregate projections: they are limited to the built-in functions — again SUM, MAX, MIN and COUNT — but you can call your own UDTFs, so you can do whatever you want; they can reference only one table; and for Vertica versions before 9.3 it was impossible to update or delete on the anchor table — this limit has been removed in 9.3, so you can now update and delete data in the anchor table. A live aggregate projection follows the segmentation of the GROUP BY expression, and in some cases the optimizer can decide to pick the live aggregate projection or not, depending on whether the aggregation is convenient. And remember that if we insert and commit every single row to the anchor table, we end up with a live aggregate projection that contains exactly the same number of rows as the base table — in that case, reading the live aggregate projection or the base table would be the same.

So this is the first of the two fantastic techniques we can implement in Vertica — the live aggregate projection, basically to avoid or limit GROUP BYs. The other one, which we are going to talk about now, is the flattened table, and it is used in order to avoid the need for joins. Remember that Vertica is very fast at running joins, but when we scale up to petabytes of data we need a boost, and this is what we have in order to get this problem fixed regardless of the amount of data we are dealing with.

So, what about flattened tables? Let me start with normalized schemas. Everybody knows what a normalized schema is and how the related tables look in this slide. The main purpose of a normalized schema is to reduce data redundancy, and reducing redundancy is a good thing because we obtain fast writes: we only have to write small chunks of data into the right tables. The problem with normalized schemas is that when you run your queries you have to put together the information that comes from different tables, and you are required to run joins. Again, Vertica is normally very good at running joins, but sometimes the amount of data makes joins hard to deal with, and joins are sometimes not easy to tune. What happens in the normal — let's say traditional — data warehouse is that we denormalize the schemas, either manually or using an ETL. So basically we have, on one side of this slide, the normalized schemas where we get very fast writes, and on the other side the wide table where all the joins and pre-aggregations have already been run in order to prepare the data for the queries: fast writes on one side, fast reads on the other. The problem sits in the middle, because we have pushed all the complexity into the middle — into the ETL that has to transform the normalized schema into the wide table. The way we normally implement this, either manually with procedures or with an ETL tool, is to code an ETL layer that runs INSERT ... SELECT statements reading from the normalized schema and writing into the wide table at the end, the one used by the data access tools we are going to use to run our queries. This approach is costly, because someone has to code the ETL; it is slow, because someone has to execute those batches, normally overnight after loading the data, and maybe someone has to check the following morning that everything went fine with the batch; it is resource intensive, and also people intensive, because of the people who have to code it and check the results; it is error prone, because it can fail; and it introduces latency, because there is a gap on the time axis between the time t0, when you load the data into the normalized schema, and the time t1, when the data is finally ready to be queried.
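For concreteness, a hedged sketch of the kind of hand-coded batch being described — all table names are made up — the overnight step that rebuilds the wide table from the normalized schema, which is exactly the layer flattened tables remove:

```sql
-- The classic overnight denormalization batch: join the normalized tables
-- and rewrite the wide table that the BI tools will query at time t1.
TRUNCATE TABLE dw.orders_wide;

INSERT INTO dw.orders_wide (o_id, cust_id, total, cust_name, cust_city)
SELECT o.o_id, o.cust_id, o.total, c.name, c.city
FROM   stage.orders             o
JOIN   stage.customer_dimension c ON c.cust_id = o.cust_id;

COMMIT;
```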
What Vertica offers to facilitate this process is the flattened table. With flattened tables, first, you avoid data redundancy, because you don't need both the wide table and the normalized schema on the left side; second, it is fully automatic — you don't have to do anything, you just insert the data into the wide table, and the ETL you would have coded is turned into an INSERT ... SELECT by Vertica automatically; it is robust; and the latency is zero — as soon as you load the data into the wide table, you get all the joins executed for you.

Let's have a look at how it works. In this case we have the table we are going to flatten, and we have to focus on two different clauses. You see that there is one column here, the dimension value, which can be defined either with DEFAULT followed by a SELECT, or with SET USING. The difference between DEFAULT and SET USING is when the data is populated: if we use DEFAULT, the data is populated as soon as we load data into the base table; if we use SET USING, we have to run a refresh. But everything is there — you don't need an ETL, you don't need to code any transformation, because everything is in the table definition itself, it comes for free, and with DEFAULT the latency is zero: as soon as you load the other columns, you get the dimension value as well.

Let's see an example. Suppose we have a dimension table, customer_dimension, on the left side, and a fact table on the right. You see that the fact table uses columns like o_name or o_city which are basically the result of a SELECT on top of the customer dimension. This is where the join is executed: as soon as we load data into the fact table — directly into the fact table, without of course loading the data that comes from the dimension — all the data from the dimension is populated automatically. So suppose we run this INSERT: as you can see, we insert directly into the fact table, and we load o_id, customer_id and total; we are not loading name or city. Name and city will be automatically populated by Vertica for you, because of the definition of the flattened table. That is all you need in order to have your wide table — your flattened table — built for you, and it means that at run time you won't need any join between the fact table and the customer dimension we used to derive name and city, because the data is already there.

That was using DEFAULT. The other option is SET USING, and the concept is absolutely the same: you see that on the right side we have basically replaced o_name DEFAULT with o_name SET USING, and the same is true for city. The difference is that with SET USING we have to refresh: we run SELECT REFRESH_COLUMNS with the name of the table — in this case all columns will be refreshed, or you can specify only certain columns — and this brings in the values for name and city, reading from the customer dimension. This technique is extremely useful. To summarize the most important difference between DEFAULT and SET USING: DEFAULT populates the target column when you load, SET USING when you refresh, and in some cases you might want to use them both. In this example we define o_name using both DEFAULT and SET USING, which means the data is populated either when we load into the base table or when we run the refresh.
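A hedged sketch of such a flattened fact table — the table and column names are illustrative and loosely follow the slide's customer_dimension example:

```sql
CREATE TABLE dw.customer_dimension (
    cust_id INT PRIMARY KEY,
    name    VARCHAR(64),
    city    VARCHAR(64)
);

-- o_name is derived at load time (DEFAULT); o_city on demand (SET USING).
CREATE TABLE dw.fact_orders (
    o_id    INT,
    cust_id INT,
    total   NUMERIC(12,2),
    o_name  VARCHAR(64) DEFAULT   (SELECT name FROM dw.customer_dimension c
                                   WHERE c.cust_id = fact_orders.cust_id),
    o_city  VARCHAR(64) SET USING (SELECT city FROM dw.customer_dimension c
                                   WHERE c.cust_id = fact_orders.cust_id)
);

-- Only the measures are loaded; o_name is filled in automatically.
INSERT INTO dw.fact_orders (o_id, cust_id, total) VALUES (1, 42, 99.90);

-- SET USING columns are brought up to date with an explicit refresh.
SELECT REFRESH_COLUMNS('dw.fact_orders', 'o_city', 'REBUILD');
```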
This is a summary of the techniques we can implement in Vertica in order to make our data warehouse even more efficient — and, well, this is basically the end of our presentation. Thank you for listening, and now we are ready for the Q&A session.

Published Date : Mar 30 2020


CloudNativeCon Keynote Analysis | KubeCon 2017


 

From Austin, Texas, it's theCUBE, covering KubeCon and CloudNativeCon 2017, brought to you by Red Hat, the Linux Foundation, and theCUBE's ecosystem partners.

Hello everyone, welcome to theCUBE, live in Austin, Texas, for exclusive coverage of CloudNativeCon and KubeCon. I'm John Furrier, co-founder of SiliconANGLE Media, with Stu Miniman, also covering the developer community. We just came off Amazon re:Invent last week, and we're now in Austin for a continuation of the builder theme around this new generation of developers — exclusive coverage of CloudNativeCon and KubeCon: KubeCon with a K, like Kubernetes, not to be confused with theCUBE, of course. In 2018 we're going to do our own CubeCon, right John? Yeah, CubeCon is coming — check your local listings for an area near you; we'll be there.

Stu, what a great event. This is one of my favorite events, as you know. For theCUBE audience out there who know us and have been following us: we've been growing up with this community, and we've been covering the Linux Foundation from the beginning, going back to our roots around 2010. We've always been on the next wave — whether it was big data, converged infrastructure in the enterprise, and then cloud — theCUBE is always on the wave. We were there when Kubernetes was formed, with the principals — JJ and his team, with Lew Tucker — kind of brainstorming, hey, we should do Kubernetes, and we said then that Kubernetes would be huge, that orchestration would be the battleground in what we were at the time calling the middleware of the cloud. Turns out that was true, and it's happening: a huge change in the ecosystem, with containerization — Docker originally starting it — and then the evolution of how software developers are voting with their workloads, voting with their code, and there's no better place than the Linux Foundation.

To your analysis: obviously we're super excited, but there are some dynamics going on. There's a class of venture-backed companies that, I won't say are groping for a strategic position, but are certainly investing in open source, and that brings up the questions of the business model: where is the value being created, what is the right strategy, do I do services, do I take a different approach? There are a lot of different opinions, and if the customers choose wrong they could be on the wrong side of history, as this massive wave of innovation with AI and machine learning impacts infrastructure and DevOps. It's awesome — we heard Netflix on stage. Stu, what's your take — what's going on here at CloudNativeCon?

Yeah, so John, I love that Dan, who runs the CNCF, gets out on stage and says it's an exciting time for boring infrastructure — maybe too exciting, I'd even say. We've been watching this wave of containerization and Kubernetes, and this whole CNCF ecosystem has really taken that container piece and exploded beyond it, really talking about how I build for these cloud native environments. There are 14 projects here; Kubernetes is the one that kicked it off, but there are so many pieces of what's happening here. John, AWS last week was phenomenal — like 45,000 people — but a lot of the real builders, the ones heavily involved in projects, are saying, I might actually skip AWS and come to KubeCon. This is where so many people — we've seen founders of companies — are working on so many projects,
and there's a large community here with a great community focus. I know you liked Netflix up there talking about culture, and big diversity — I think it was 130 scholarships for people of diversity. Really phenomenal stuff. This is where that multi-cloud world is being built.

Yeah, and good points, because that's really the elephant in the room: the profits and the monetization of developer communities. It's not the primary driver, but it's a big driver in how people are behaving, and Amazon re:Invent and this world are parallel universes. It's interesting — you don't see a lot of re:Invent hoodies; I wore mine last night and got a couple of dirty looks — but you do see a lot of Google and a lot of Microsoft.

John, we had Adrian Cockcroft in the keynote this morning; everybody's saying AWS is embracing this community. You don't think that's the case?

I think everyone does embrace it. AWS is number one, and there isn't really a second place — they're far back, as I said at re:Invent, and I still stand by that. But you've got big players. Dan Kohn basically said on stage: it's projects, products and profits, and they're putting profits in the narrative because they're not shying away from it. But it's not a pay-to-play kind of ecosystem here. The visibility of the cloud has shone a light on the fact that there is an opportunity to create value, and that value can then be translated into monetization — and developers like to get paid; nobody likes to do things totally for free. That is the scoreboard of value. It's not just about chasing the dollar, and I like how the CNCF is putting profits out there, saying look, there is real value here in businesses, real value in products that come from these projects. This is a new era in open source, and I think that's legit. Again, pay-to-play is a completely different animal — vendors come in, control the standards, pay, pay, pay. Not anymore. Brendan Burns told me last year: Microsoft, no pay-to-play. Microsoft's got a big platform; they're going to come in and make things happen.

OK, so John, the money thing is a big question I have coming into this week. Dan talked on stage about certified service providers for Kubernetes and certified Kubernetes partners — 42 certified Kubernetes partners. For the most part Kubernetes has been commoditized. Today that certification doesn't mean that one hundred percent of everything works, but over a short period of time it will mean that if I choose any platform that uses certified Kubernetes, I can move from one to the other. It doesn't mean I'm actually going to make money selling Kubernetes — it's part of the platform or services that I'm offering, and it is an enabler. That's what's a little different. Think about it, John: for years with OpenStack we thought we were going to make money on it — how are we going to make money? — and you can go even further back to Linux: it's about what can be built using this set of tools. People have said this is really rebuilding the foundation for the cloud environment, but the money is a derivative of it; it's an enabler to build great software.

Brilliant. Stu, this is the bottom line here: it's a tale of two stories in the industry, and the backdrop is this — enterprise IT, specifically development-team platforms, are shifting big time. There is an old guard and, as Andy Jassy said, invest in the new guard. The dynamics are: containerization drove megatrend number one, which
turbocharged the cloud infrastructure and gave developers some freedom; microservices then take it to another level. What that has actually done is change two theaters in the industry. Theater one is the vendors getting funded — the participants in open source trying to create value — and then there's what I would call the rest of the market: there is an onboarding, a tsunami, of new developers coming in. In theater one I see all the people we know in the industry, and then I'm seeing new faces — people who are going to the light, and the light is the monetization, the value creation. So you're seeing people here for the first time, developers who have a clear line of sight that this community creates value. Those are the two dynamics: the companies that got a hundred million dollars in funding from venture capital are trying to figure out whether they can take advantage of that wave of new developers. There's been an in-migration of new developers into cloud native, and they are the ones who are going to be creating the value, the creativity, the solutions — and certainly the cultural impact from those solutions will be great. I see a great opportunity if people just don't get scared, hold the line and keep hitting value; it will figure itself out. So the evolution is natural, and that's something I'm interested to see.

OK, and John, the things I'm looking for this week: first of all, when we talk about containers and this whole cloud native environment, that boring infrastructure stuff still matters. Networking has matured a little bit — there's CNI, the container networking interface (I almost said cloud native interface), which is approaching 1.0, and they're getting feedback here. Second one is storage: most of these solutions really started out talking about stateless environments, but state absolutely has to be a piece of this — data, AI, ML, all of these things — data is critically, critically important, so that needs to be there. And then the new technology we spent a lot of time talking about at AWS: serverless. There's actually a half-day track here at this show talking about how all of these solutions fit with serverless. There was a question of whether serverless replaces this, because then I don't need to think about it, but really a lot of the same tooling and a lot of these usages will fit into those serverless frameworks — so it's not either/or, but really more of a continuum of environments. Definitely something we expect to hear more about this week. We've got a phenomenal lineup: some of these builders, big players, startups, authors — a good, diverse audience coming on theCUBE — and, near and dear to your heart, lots of developer talk.

Stu, this is a fun time. The commoditization of Kubernetes is actually a good thing in my mind: I think a lot of value will be created, and this really is about multi-cloud. You mentioned all three of the major clouds, and now Alibaba is on stage, and in China you've got a lot more growth. Kubernetes really is an opportunity for Google and Microsoft and the rest of the community to run as fast as they can to create services so that customers can have a choice — choice is the new black, that's what's going on — and multi-cloud is not yet here, but it's certainly on the horizon. And if Google and Azure do not establish a
multi-cloud environment, Amazon could run away with it — that's my take, that's my visibility on it. The bottom line is whoever creates the value. So what I'm going to look for is the impact of the continued Kubernetes commoditization and the new formations, the new relationships. The existing players like Red Hat are going to continue to kick ass, you're going to start to see new players come in, and you can expect to see new partnerships, because the stack is being developed very fast. Announcements for Istio, Fluentd and containerd, Windows support coming with Kubernetes 1.9 — what's happening is they're running as fast as they can, they're pedaling as fast as they can, because if they do not, they will be blown away. That's theCUBE coverage here, kicking off day one. I'm John Furrier with Stu Miniman — exciting times here at CloudNativeCon and KubeCon. We're back after this short break.

Published Date : Dec 6 2017
