
The Shortest Path to Vertica – Best Practices for Data Warehouse Migration and ETL


 

Hello everybody, and thank you for joining us today for the Virtual Vertica Big Data Conference 2020. Today's breakout session is entitled "The Shortest Path to Vertica – Best Practices for Data Warehouse Migration and ETL." I'm Jeff Healey, I lead Vertica marketing, and I'll be your host for this breakout session. Joining me today are Marco Gessner and Maurizio Felici, Vertica product engineers joining us from the EMEA region.

Before we begin, I encourage you to submit questions or comments during the virtual session. You don't have to wait: just type your question or comment in the question box below the slides and click Submit. As always, there will be a Q&A session at the end of the presentation, and we will answer as many questions as we are able to during that time. Any questions we don't address, we'll do our best to answer offline. Alternatively, visit the Vertica forums at forum.vertica.com to post your questions there after the session; our engineering team is planning to join the forums to keep the conversation going. Also, a reminder that you can maximize your screen by clicking the double-arrow button in the lower right corner of the slides. And yes, this virtual session is being recorded and will be available to view on demand this week; we'll send you a notification as soon as it's ready. Now let's get started. Over to you, Marco.

Hello everybody, this is Marco speaking, a sales engineer from EMEA, and I'll just get going. This is the agenda: part one will be done by me, part two will be done by Maurizio. The agenda is, as you can see: big bang or piece by piece; migration of the DDL; migration of the physical data model; migration of ETL and BI functionality; what to do with stored procedures; what to do with any possible existing user-defined functions; and the migration of the data itself, which will be covered by Maurizio. Do you want to introduce yourself?

Yes, hello everybody, my name is Maurizio Felici and I'm a Vertica pre-sales engineer like Marco. I'm going to talk about how to optimize the data warehouse using some specific Vertica techniques like table flattening and live aggregate projections.
So let me start with a quick overview of the data warehouse migration process we are going to talk about today. Normally we suggest starting by migrating the current data warehouse to the new platform with limited or minimal changes in the overall architecture. Clearly we will have to port the DDL and redirect the data access tools to the new platform, but we should minimize the amount of changes in this initial phase in order to go live as soon as possible. In the second phase we can start optimizing the data warehouse, again with no or minimal changes in the architecture as such. During this optimization phase we can create, for example, projections for some specific query, optimize encodings, or change some of the physical design; this is something that we normally do if and when needed. Finally, and again if and when needed, we go through an architectural redesign of the data warehouse using full Vertica techniques, in order to take advantage of all the features we have in Vertica. This is normally an iterative approach, so we may go back from some specific feature to the architecture and design. We will go through this process in the next few slides.

OK. First, in order to encourage everyone to keep using their common sense when migrating to a new database management system — people are often afraid of it — it's useful to use the analogy of a house move. In your old home you might have developed solutions for your everyday life that make perfect sense there. For example, if your old Saint Bernard dog can't walk anymore, you might be using a forklift to heave him in through the window. Well, in the new home, consider the elevator, and don't complain that the window is too small to fit the dog through. It's very much the same with Vertica; let's make the transition gentle.
Again, I'd love to remain with my house-move analogy. Picture your new house as your new holiday home: begin to install everything you miss and everything you like from your old home, and once you have everything you need in your new house, you can shut down the old one. So move bit by bit and go for quick wins to make your audience happy. You do big bang only if they are going to retire the platform you are sitting on, or you're really on a sinking ship. Otherwise, again: identify quick wins, implement and publish them quickly in Vertica, reap the benefits, enjoy the applause, use the gained reputation for further funding, and if you find that nobody's using the old platform anymore, you can shut it down. You can still really go big bang in one go, but only if you absolutely have to; otherwise migrate by subject area, and group all similar or related divisions.

Having said that, you start off by migrating objects in the database; that's one of the very first steps. It consists of migrating first the places where you can put the other objects — that is, owners and locations, which are usually users and schemas. Then you extract the tables and views, you convert the object definitions, and you deploy them to Vertica. And mind that you shouldn't do it manually: never type what you can generate, automate whatever you can. For users and roles, usually there is a system table in the old database that contains all the roles; you can export those to a file, reformat them, and then you have CREATE ROLE and CREATE USER scripts that you can apply to Vertica. If LDAP or Active Directory was used for authentication in the old database, Vertica supports anything within the LDAP standard. Catalogs and schemas should be relatively straightforward, with maybe sometimes a difference: Vertica does not restrict you by defining a schema as a collection of all objects owned by a user, but it supports and emulates that for old times' sake. And Vertica does not need a catalog.
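As a sketch of the "never type what you can generate" idea for users and roles: assuming the old database is Oracle, where DBA_ROLES and DBA_USERS are the relevant system views (an assumption — the talk doesn't name a specific source platform), you could generate the Vertica statements with something like:

```sql
-- Hypothetical example: run against an Oracle source to generate
-- Vertica CREATE ROLE / CREATE USER statements. Spool the output
-- to a file, review it, then run it in Vertica.
SELECT 'CREATE ROLE ' || role || ';' FROM dba_roles;
SELECT 'CREATE USER ' || username || ';'
  FROM dba_users
 WHERE username NOT IN ('SYS', 'SYSTEM');
```

Other source platforms have equivalent system tables; the pattern of generating DDL from them stays the same.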
If you absolutely need a catalog name for the old tools that you use, in the case of Vertica it is always set to the name of the database.

Having now the schemas, the catalogs, the users, and the roles in place, move on to the data definition language of the tables. If you are allowed to, it's best to use a tool that translates the data types in the DDL it generates. You might have seen a mention of our old ODB tool already, and you will several times in this presentation; we are very happy to have it. It can actually export the old database's table definitions because it works with ODBC: it takes what the old database's ODBC driver translates to ODBC types, and then it has internal translation tables to several target DBMS flavors, the most important of which is obviously Vertica. If they force you to use something else, there are always tools like SQL*Plus in Oracle, the SHOW TABLE command in Teradata, etc.; each DBMS should have a set of tools to extract the object definitions to be deployed in another instance of the same DBMS.

If I talk about views: usually you find the view definitions also in the old database catalog. One thing that might need a bit of special care is synonyms; this is something that gets emulated in different ways depending on the specific needs — I could set a synonym on the view or table to be referred to. But something that is really neat, and that other databases don't have, is the search path. That works very much like the PATH environment variable in Windows or Linux: you specify an object name without the schema name, and then it is searched for first in the first entry of the search path, then in the second, then in the third, which makes synonyms completely unneeded.

When you generate DDL, to remain in the analogy of moving house: dust and clean your stuff before placing it in the new house. If you see a table like the one here at the bottom, this is usually the corpse of a bad migration in the past.
An ID is usually an integer and not a floating-point data type, a first name hardly ever has 256 characters, and if a column is called HIRE_DT it's not necessarily needed to store the second when somebody was hired. So take good care in dusting off your stuff while you are moving, and use better data types.

The same applies especially to strings. How many bytes does a string containing four euro signs contain? It's not four, it's actually twelve: in UTF-8, which is the way that Vertica encodes strings, an ASCII character takes one byte, but the euro sign takes three. That means that when you have a single-byte character set in the source, you very often have to pay attention: oversize the columns first, because otherwise data gets rejected or truncated, and then very carefully check what the best size is. The most promising approach is to initially dimension strings in multiples of the original length — and again ODB, with the command you see there (the option would be -i u 2,4), will double the length of what would otherwise be single-byte characters, and multiply by four the length of characters that are wide characters in traditional databases — then load a representative sample of your source data, profile it to find the actual longest values, and then make the columns shorter. Note that we'll come back to the issues of having too long and too big data types when we talk about projection design.

In Vertica we live and die with our projections. You might remember the rules on how the default projections come to exist. The way that we do it initially would be, just like for the profiling, to load a representative sample of the data, collect a representative set of already known queries, and run the Vertica Database Designer. You don't have to decide immediately; you can always amend things. Otherwise, follow the laws of physics: avoid moving data back and forth across nodes, and avoid heavy I/O.
If you can, design your projections initially by hand. Encoding matters: you know that the Database Designer is a very tight-fisted thing, it will optimize to use as little space as possible, and you will have to consider the fact that if you compress very well, you might end up using more time in reading the data. This is a test we ran once using several encoding types, and you see that RLE, the run-length encoding, if the data is sorted, is not even visible, while the others are considerably slower. You can download these slides and look at them in detail later; I won't go into detail now.

Now about BI migrations. Usually you can expect 80% of everything to be able to be lifted and shifted. You don't need most of the pre-aggregated tables, because we have live aggregate projections. Many BI tools have specialized query objects for the dimensions and the facts, and we have the possibility to use flattened tables, which are going to be talked about later; you might have to rewrite those by hand. You will be able to switch off caching, because Vertica speeds up everything, and with live aggregate projections, if you have worked with MOLAP cubes before, you very probably won't need them at all.

ETL tools: what you will have to do is, if you do it row by row against the old database, consider changing everything to very big transactions, and if you use INSERT statements with parameter markers, consider writing to named pipes and using Vertica's COPY command instead of mass inserts. Yeah, the COPY command, that's what I have here.

As to custom functionality: you can see on this slide that Vertica has the biggest number of functions in the database — we compare them regularly — by far compared to any other database. You might find that many of the functions that you have written won't be needed on the new database, so look at the Vertica catalog instead of trying to migrate a function that you don't need. Stored procedures are very often used in the old database to overcome shortcomings that Vertica doesn't have.
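Coming back to the ETL point for a moment: the row-by-row-to-bulk switch described above might look like the following sketch, where the table and file names are made up for illustration:

```sql
-- Instead of thousands of single-row statements like:
--   INSERT INTO sales VALUES (?, ?, ?);
-- write the rows to a file (or a named pipe fed by the ETL tool)
-- and load them in one shot with COPY:
COPY sales FROM '/data/sales.csv' DELIMITER ',' DIRECT;
```

The DIRECT hint, which writes straight to ROS, is typical for large batches; for a continuous trickle of smaller loads it can be left out.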
Very rarely will you have to actually write a procedure that involves a loop; it's really, in our experience, very very rare, and usually you can just switch to standard scripting. And this is basically repeating what Maurizio said, so in the interest of time I will skip this.

Look at this one here: most of the data warehouse migration tasks should be automatic. You can automate DDL migration using ODB, which is crucial. Data profiling is not crucial, but game-changing, and the same goes for the encoding. You can automate the physical data model optimization using our Database Designer; in general it is game-changing. For the provisioning, use the old platform's tools to generate the SQL; having no objects without their owners is crucial. And as to functions and procedures, they are only crucial if they embody the company's intellectual property; otherwise you can almost always replace them with something else. That's it from me for now.

Thank you, Marco. So we will now continue our presentation by talking about some of the Vertica optimization techniques that we can implement in order to improve the general efficiency of the data warehouse, and let me start with a few simple messages. Well, the first one is that you are supposed to optimize only if and when this is needed. In most of the cases, just a little shift from the old data warehouse to Vertica will provide you exactly the performance that you were looking for, or even better, so in that case it is probably not really needed to optimize anything. In case you want to optimize, or you need to optimize, then keep in mind some of the Vertica peculiarities: for example, implement deletes and updates in the Vertica way; use live aggregate projections in order to avoid, or better, in order to limit, the GROUP BY executions at query time; use table flattening in order to avoid or limit joins; and then you can also implement some specific Vertica extensions, for example time series analysis or machine learning on top of your data.
We will now start by reviewing the first of these points: optimize if and when needed. Well, if everything is OK — I mean, if when you migrate from the old data warehouse to Vertica without any optimization the performance level is OK — then probably you don't need to optimize anything. But if this is not the case, one very easy optimization technique that you can use is to ask Vertica itself to optimize the physical data model using the Vertica Database Designer. Well, DBD, which is the Vertica Database Designer, has several interfaces; here I'm going to use what we call the DBD programmatic API, so basically SQL functions. Using other databases you might need to hire experts looking at your data, your data warehouse, your table definitions, creating indexes or whatever; in Vertica all you need is to run something as simple as six single SQL statements to get a very well optimized physical data model. You see that we start by creating a new design, then we add to the design the tables and the queries — the queries that we want to optimize — and we set our target: in this case we are tuning the physical data model in order to maximize query performance, and this is why we are using the "query" objective in our statement. Other possible choices would be to tune in order to reduce storage, or a mix between tuning storage and tuning queries. And finally we ask Vertica to produce and deploy this optimized design. It's a matter of literally minutes; in a few minutes what you can get is a fully optimized physical data model. OK, this is something very, very easy to implement.

Now, keep in mind some of the Vertica peculiarities. Vertica is very well tuned for load and query operations, and Vertica writes rows into ROS containers. A ROS container is a group of files, and we will never, ever change the content of these files. The fact that the ROS container files are never modified is one of the Vertica peculiarities.
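The six statements just described might look like the following sketch of the DBD programmatic API; the design name, table name, and file paths are assumptions for illustration:

```sql
-- 1. Create a new design
SELECT DESIGNER_CREATE_DESIGN('my_design');
-- 2./3. Add the tables and the queries we want to optimize
SELECT DESIGNER_ADD_DESIGN_TABLES('my_design', 'public.unit_sold');
SELECT DESIGNER_ADD_DESIGN_QUERIES('my_design', '/home/dbadmin/queries.sql', TRUE);
-- 4./5. Set the design type and the optimization objective
--       (QUERY here; LOAD or BALANCED would tune for storage or a mix)
SELECT DESIGNER_SET_DESIGN_TYPE('my_design', 'COMPREHENSIVE');
SELECT DESIGNER_SET_OPTIMIZATION_OBJECTIVE('my_design', 'QUERY');
-- 6. Produce and deploy the optimized design
SELECT DESIGNER_RUN_POPULATE_DESIGN_AND_DEPLOY('my_design',
       '/tmp/my_design_ddl.sql', '/tmp/my_design_deploy.sql');
```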
This approach lets us use minimal locks. We can run multiple load operations in parallel against the very same table — assuming we don't have a primary or unique constraint on the target table — and this is safe because they will end up in different ROS containers. A SELECT in read committed mode requires no lock at all and can run concurrently with an INSERT...SELECT, because the SELECT will work on a snapshot of the catalog taken when the transaction starts; this is what we call snapshot isolation. And recovery, because we never change our ROS files, is very simple and robust. So we have a huge amount of advantages due to the fact that we never change the content of the ROS files contained in the ROS containers; but on the other side, deletes and updates require a little attention.

So, what about deletes? First: when you delete in Vertica, you basically create a new object called a delete vector, which appears a bit later in the ROS or in memory, and this vector points to the data being deleted, so that when a read is executed, Vertica will just ignore the rows listed in the delete vectors. And it's not just about the delete: an update in Vertica consists of two operations, a delete and an insert, and a merge consists of either an insert or an update, which in turn is made of a delete plus an insert. So basically, if we tune how the delete works, we will also have tuned the update and the merge.

So what should we do in order to optimize deletes? Well, remember what we said: every time we delete, we actually create a new object, a delete vector. So avoid committing deletes and updates too often; this reduces the work for the mergeout and for the delete-vector removal activities that are run afterwards. And be sure that all the interested projections contain the columns used in the delete predicate: this will let Vertica directly access the projection without having to go through the super projection in order to create the delete vector, and the delete will be much, much faster.
Finally, another very interesting optimization technique is trying to segregate the update and delete operations from your querying workload in order to reduce lock contention, and this can be done using partition operations. This is exactly what I want to talk about now. Here you have a typical data warehouse architecture: we have data arriving in a landing zone, where the data is loaded as-is from the data sources; then we have a transformation layer writing into a staging area, which in turn feeds the partitions or blocks of data in the green data structures we have at the end. Those green data structures at the end are the ones used by the data access tools when they run their queries.

Sometimes we might need to change old data, for example because we have late records, or maybe because we want to fix some errors that originated in the source systems. What we do in this case is just copy the partition we want to change or adjust back from the green query area at the end to the staging area — this is a very fast operation, a partition copy. Then we run our updates, or our adjustment procedures, or whatever we need in order to fix the errors in the data, in the staging area, and at the very same time people continue to query the green data structures at the end, so we will never have contention between the two operations. When the update in the staging area is completed, all we have to do is run a swap partition between the tables, in order to swap the data that we just finished adjusting from the staging zone to the query area, the green one at the end. This swap partition is very fast, it is an atomic operation, and basically what happens is just that we exchange the pointers to the data. This is a very, very effective technique, and a lot of customers use it.

So, why flattened tables and live aggregate projections? Well, basically we use flattened tables and live aggregate projections to minimize or avoid joins and GROUP BYs.
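The copy-adjust-swap flow described a moment ago might be sketched like this; the table names, the partition range, and the adjustment itself are assumptions for illustration:

```sql
-- 1. Copy the partition to adjust (say, March 2020) back to staging
SELECT COPY_PARTITIONS_TO_TABLE('sales_fact', '2020-03-01', '2020-03-31',
                                'sales_staging');
-- 2. Fix the data in staging while queries keep hitting sales_fact
UPDATE sales_staging SET amount = 0 WHERE amount < 0;
COMMIT;
-- 3. Atomically swap the adjusted partition back into the query table
SELECT SWAP_PARTITIONS_BETWEEN_TABLES('sales_staging', '2020-03-01',
                                      '2020-03-31', 'sales_fact');
```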
Flattened tables are what we use to avoid joins, and live aggregate projections are what we use to avoid GROUP BYs. Now, compared to traditional data warehouses, Vertica can store, process, aggregate, and join orders of magnitude more data: it is a true columnar database, and joins and GROUP BYs normally are not a problem at all, they run faster than on any traditional data warehouse. But there are still scenarios where the data sets are so big — and we are talking about petabytes of data — that we need something in order to boost GROUP BY and join performance, and this is why you can use live aggregate projections to perform aggregations at loading time and limit the need for GROUP BYs at query time, and flattened tables to combine information from different entities at loading time and, again, avoid running joins at query time.

OK, so, live aggregate projections. At this point in time we can build live aggregate projections using four built-in aggregate functions, which are SUM, MIN, MAX, and COUNT. OK, let's see how this works. Suppose that you have a normal table; in this case we have a table unit_sold with three columns — pid, date_time, and quantity — which has been segmented in a given way, and on top of this base table, which we call the anchor table, we create a projection. You see that we create the projection using a SELECT that will aggregate the data: we take the pid, we take the date portion of date_time, and we take the sum of quantity from the base table, grouping on the first two columns, so pid and the date portion of date_time.

OK, what happens in this case when we load data into the base table? All we have to do is load data into the base table. When we do, we will of course fill the regular projections — assuming we are running with K-safety 1, we will have two of them — and we will load into those two projections all the detailed data we are loading into the table, so pid, date_time, and quantity.
But at the very same time, without having to perform any particular operation or run any ETL procedure, we will also get, automatically, in the live aggregate projection, the data pre-aggregated by pid and the date portion of date_time, with the sum of quantity in a column named total_quantity. You see, this is something that we get for free, without having to run any specific procedure, and this is very, very efficient.

So the key concept is that the loading operation, from the DML point of view, is executed against the base table: we do not explicitly aggregate data, we don't have any procedure doing the aggregation. The aggregation is automatic, and Vertica will bring the data into the live aggregate projection every time we load into the base table. You see the two SELECTs that we have on the left side of this slide: those two SELECTs will produce exactly the same result, so running SELECT pid, date, SUM(quantity) against the base table, or running SELECT * from the live aggregate projection, will result in exactly the same data.

Now, this is of course very useful, but what is much more useful — and we can observe this if we run an EXPLAIN — is that if we run the SELECT against the base table asking for the grouped data, what happens behind the scenes is that Vertica sees there is a live aggregate projection holding the data that has already been aggregated during the loading phase, and rewrites your query to use the live aggregate projection. This happens automatically. You see, this is a query that ran a GROUP BY against unit_sold, and Vertica decided to rewrite the query as something that is executed against the live aggregate projection, because it knows this will save a huge amount of time and effort during the query execution. OK, and it's not just limited to the information you want to aggregate: another query, for example a SELECT COUNT(DISTINCT ...), will also take advantage of the live aggregate projection.
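The unit_sold example might be written like this; the exact column and projection names are assumptions based on the slide:

```sql
CREATE TABLE unit_sold (
    pid       INTEGER,
    date_time TIMESTAMP,
    quantity  INTEGER
);

-- Live aggregate projection: data is pre-aggregated at load time
CREATE PROJECTION unit_sold_agg (pid, sale_date, total_quantity) AS
SELECT pid, date_time::DATE, SUM(quantity)
  FROM unit_sold
 GROUP BY pid, date_time::DATE;

-- These two queries return the same data; EXPLAIN on the first one
-- shows the optimizer rewriting it to read from unit_sold_agg:
SELECT pid, date_time::DATE, SUM(quantity)
  FROM unit_sold GROUP BY pid, date_time::DATE;
SELECT * FROM unit_sold_agg;
```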
Again, this is something that happens automatically; you don't have to do anything to get this.

OK, one thing that we have to keep very, very clear in mind: what we store in the live aggregate projection is basically partially aggregated data. In this example we have two inserts: you see that we have the first insert, which is inserting four rows, and the second insert, which is inserting five rows. Well, for each of these inserts we will have a partial aggregation, because Vertica will never know, after the first insert, whether there will be a second one, so Vertica calculates the aggregation of the data every time you run an insert. This is a key concept, and it also means that you can maximize the effectiveness of this technique by inserting large chunks of data. If you insert data row by row, the live aggregate projection technique is not very useful, because for every row that you insert you will have one aggregation, so the live aggregate projection will end up containing the same number of rows that you have in the base table. But if you insert a large chunk of data every time, the number of aggregations that you will have in the live aggregate structure is much less than the base data. So this is a key concept.

You can see how this works by counting the number of rows that you have in a live aggregate projection. You see that if you run the SELECT COUNT(*) against the unit_sold live aggregate projection — the query on the left side — you will get four rows, but actually, if you EXPLAIN this query, you will see that it was reading six rows. This is because each of those two inserts that were executed added three rows to the live aggregate projection. So this is the key concept: live aggregate projections keep partially aggregated data, and the final aggregation will always happen at run time.

OK. Another technique, which is very similar to the live aggregate projection, is what we call the top-K projection.
In a top-K projection we actually do not aggregate anything; we just keep the last rows, or limit the amount of rows that we keep, using the LIMIT ... OVER (PARTITION BY ... ORDER BY ...) clause. Again, in this case we create on top of the base table two top-K projections: one to keep the last quantity that has been sold, and the other one to keep the max quantity. In both cases it is just a matter of ordering the data — in the first case using the date_time column, in the second case using quantity — and in both cases we fill the projection with just the last row. And again, this is something that happens automatically when we insert data into the base table. OK, if we now run, after the inserts, our SELECT against either the max quantity or the last quantity, we will get just the very last values; you see that we have much fewer rows in the top-K projections.

OK, we said at the beginning that we can use four built-in functions — you might remember: MIN, MAX, SUM, and COUNT. What if I want to create my own specific aggregation on top of the live aggregate projections? Because our customers have very specific needs in terms of live aggregate projections. Well, in this case you can code your own live aggregate projection user-defined functions: you can create a user-defined transform function to implement any sort of complex aggregation while loading data. Basically, after you have implemented this UDTF, you can deploy it using the pre-pass approach, which basically means the data is aggregated at loading time, during the data ingestion, or the batch approach, which means that the aggregation runs afterwards on the loaded data.

Things to remember about live aggregate projections: they are limited to the built-in functions — again, SUM, MAX, MIN, and COUNT — but you can code your own UDTFs, so you can do whatever you want. They can reference only one table. And for Vertica versions before 9.3 it was impossible to update or delete on the anchor table; this limit has been removed in 9.3.
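The two top-K projections described earlier might look like this sketch, again with assumed names:

```sql
-- Keep the most recent sale per pid (ordered by date_time)
CREATE PROJECTION unit_sold_last (pid, date_time, quantity) AS
SELECT pid, date_time, quantity
  FROM unit_sold
 LIMIT 1 OVER (PARTITION BY pid ORDER BY date_time DESC);

-- Keep the row with the maximum quantity per pid
CREATE PROJECTION unit_sold_maxq (pid, date_time, quantity) AS
SELECT pid, date_time, quantity
  FROM unit_sold
 LIMIT 1 OVER (PARTITION BY pid ORDER BY quantity DESC);
```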
So you now can update and delete data from the anchor table. OK, a live aggregate projection will follow the segmentation of the GROUP BY expression, and in some cases the Vertica optimizer can decide to pick the live aggregate projection or not, depending on whether the aggregation is convenient or not. Remember that if we insert and commit every single row to the anchor table, then we will end up with a live aggregate projection that contains exactly the same number of rows as the base table; in that case using the live aggregate projection or using the base table would be the same.

OK, so this was the first of the two fantastic techniques that we can implement in Vertica: the live aggregate projection, basically to avoid or limit GROUP BYs. The other one, which we are going to talk about now, is the flattened table, and it is used in order to avoid the need for joins. Remember that Vertica is very fast at running joins, but when we scale up to petabytes of data we need a boost, and this is what we have in order to get this problem fixed regardless of the amount of data we are dealing with.

So, what about flattened tables? Let me start with normalized schemas; everybody knows what a normalized schema is, but let's relate it to the stuff in this slide. The main scope of a normalized schema is to reduce data redundancy, and the fact that we reduce data redundancy is a good thing, because we obtain fast writes: we only have to write small chunks of data into the right tables. The problem with these normalized schemas is that when you run your queries, you have to put together the information that arrives from the different tables, and you are required to run joins. Again, Vertica is normally very good at running joins, but sometimes the amount of data makes it not easy to deal with joins, and joins sometimes are not easy to tune. So what happens in the normal — let's say traditional — data warehouse is that we denormalize the schemas, normally either manually or using an ETL.
So basically we have, on the left side of this slide, the normalized schemas, where we get very fast writes, and on the right side we have the wide table, where we have run all the joins and pre-aggregations in order to prepare the data for the queries. So we will have fast writes on the left and fast reads on the right side of this slide; the problem lies in the middle, because we have pushed all the complexity into the middle, into the ETL that has to transform the normalized schema into the wide table. The way we normally implement this, either manually using procedures or using an ETL tool — this is what happens in traditional data warehouses — is that we have to code an ETL layer in order to run the INSERT...SELECT that reads from the normalized schema and writes into the wide table at the end, the one that is used by the data access tools we are going to use to run our queries.

This approach is costly, because of course someone has to code this ETL, and it is slow, because someone has to execute those batches, normally overnight after loading the data, and maybe someone has to check the following morning that everything was OK with the batch. It is resource intensive, of course, and it is also human intensive, because of the people that have to code and check the results. It is error prone, because it can fail. And it introduces a latency, because there is a gap on the time axis between the time t0, when you load the data into the normalized schema, and the time t1, when you get the data finally ready to be queried.

So, what we do in Vertica to facilitate this process is to create flattened tables. With the flattened tables, first, you avoid data redundancy, because you don't need both the wide table and the normalized schema on the left side. Second, it is fully automatic: you don't have to do anything, you just have to insert the data into the wide table, and the ETL that you would have had to code is transformed into an INSERT...SELECT.
Vertica automatically; you don't have to do anything. It's robust, and the latency is zero: as soon as you load the data into the wide table, you get all the joins executed for you. So let's have a look at how it works. In this case we have the table we are going to flatten, and basically we have to focus on two different clauses. You see that there is one column here, dimension value 1, which can be defined with either DEFAULT followed by a SELECT, or SET USING. The difference between DEFAULT and SET USING is when the data is populated: if we use DEFAULT, the data is populated as soon as we load the data into the base table; if we use SET USING, we will have to refresh. But everything is there: you don't need an ETL, you don't need to code any transformation, because everything is in the table definition itself, it's for free, and of course the latency is zero. As soon as you load the other columns, you will have the dimension value populated as well. Okay, let's see an example. Suppose we have a dimension table, the customer dimension, on the left side, and a fact table on the right. You see that the fact table uses columns like o_name or o_city, which are basically the result of a SELECT on top of the customer dimension. This is where the join is executed: as soon as we load data into the fact table, directly into the fact table, without, of course, loading the data that comes from the dimension; all the data from the dimension will be populated automatically. So let's have an example. Suppose we are running this INSERT: as you can see, we are inserting directly into the fact table, and we are loading o_id, customer_id and total; we are not loading name or city. Name and city will be automatically populated by Vertica for you, because of the definition of the flattened table. Okay? You see? Basically that's all you need in order to have your wide tables built for you: your flattened table. And this
means that at runtime you won't need any join between the base fact table and the customer dimension that we have used in order to calculate name and city, because the data is already there. This was using DEFAULT; the other option is SET USING. The concept is absolutely the same. You see that in this case, on the right side, we have basically replaced the o_name DEFAULT with o_name SET USING, and the same is true for city. The concept, as I said, is the same, but in this case, with SET USING, we will have to refresh: you see that we have to run this SELECT REFRESH_COLUMNS with the name of the table; in this case all columns will be refreshed, or you can specify only certain columns, and this will bring in the values for name and city, reading from the customer dimension. So this technique is extremely useful. The difference between DEFAULT and SET USING, just to summarize the most important point: you just have to remember that DEFAULT populates your target when you load, SET USING when you refresh, and in some cases you might need to use them both. So in some cases you might want to use both DEFAULT and SET USING; in this example you see that we define o_name using both DEFAULT and SET USING, and this means that we get the data populated either when we load the data into the base table or when we run the refresh. This is a summary of the techniques that we can implement in Vertica in order to make our data warehouse even more efficient. And, well, basically this is the end of our presentation. Thank you for listening, and now we are ready for the Q&A session.
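The two techniques described above, live aggregate projections and flattened tables with DEFAULT and SET USING columns, can be sketched roughly as follows. This is a minimal sketch, not taken from the presentation's slides: the table and column names (customer_dim, orders_fact, o_name, o_city) are invented for illustration, and the exact clauses should be checked against the Vertica SQL reference.

```sql
-- Dimension table (the normalized side)
CREATE TABLE customer_dim (
    customer_id INT PRIMARY KEY,
    name VARCHAR(64),
    city VARCHAR(64)
);

-- Flattened fact table: o_name is filled at load time (DEFAULT),
-- o_city is filled when REFRESH_COLUMNS is run (SET USING)
CREATE TABLE orders_fact (
    o_id        INT,
    customer_id INT,
    total       NUMERIC(10,2),
    o_name VARCHAR(64) DEFAULT (SELECT name FROM customer_dim c
                                WHERE c.customer_id = orders_fact.customer_id),
    o_city VARCHAR(64) SET USING (SELECT city FROM customer_dim c
                                  WHERE c.customer_id = orders_fact.customer_id)
);

-- Load only the base columns; Vertica populates o_name automatically
INSERT INTO orders_fact (o_id, customer_id, total) VALUES (1, 42, 99.90);

-- Bring the SET USING column up to date from the dimension
SELECT REFRESH_COLUMNS('orders_fact', 'o_city');

-- Live aggregate projection: Vertica maintains the aggregate at load
-- time, so queries grouping by customer_id avoid a full GROUP BY scan
CREATE PROJECTION orders_by_customer (customer_id, total_sum)
AS SELECT customer_id, SUM(total)
   FROM orders_fact
   GROUP BY customer_id;
```

As described in the talk, the DEFAULT column is populated during the INSERT itself, while the SET USING column stays empty until the refresh is run.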

Published Date : Mar 30 2020

Doug Davis, IBM | KubeCon + CloudNativeCon EU 2019


 

>> Live from Barcelona, Spain, it's theCUBE, covering KubeCon CloudNativeCon Europe 2019, brought to you by Red Hat, the Cloud Native Computing Foundation, and ecosystem partners. >> Welcome back to theCUBE's live coverage of KubeCon CloudNativeCon 2019. I'm Stu Miniman, my co-host is Corey Quinn, and we're happy to welcome back to the program Doug Davis, who's a senior technical staff member and PM of Knative, and happens to be employed by IBM. Thanks so much for joining us. >> Thanks for inviting me. >> Alright. So, Corey, I got really excited when we saw this session, because serverless is something that, you know, he's been doing for a while; I've been poking at it, trying to understand all the pieces. And, you know, I guess, lay out for our audience a little bit, you know, Knative: I look at it as kind of a bridging solution, but, you know, it's not a containers-versus-serverless question; we understand that world, they're spectrums, and there's overlap. So maybe, as a setup, you know, what is the Serverless Working Group's charter? >> Right, so the Serverless Working Group is a CNCF working group. It was originally started back in mid-2017 by the Technical Oversight Committee in the CNCF. They basically wanted to know what serverless is all about: is it new technology, should the CNCF get involved, stuff like that. So they started up the Serverless Working Group, and our main mission was just doing some investigation. And so the output of this working group was a white paper, basically describing serverless, how it compares with the other aaSes out there, what the good use cases are, when to use it and when not to, common architectures; basically just explaining what the heck is going on in that space.
And then we also produced a landscape document, basically laying out what's out there from a proprietary perspective as well as an open-source perspective. And then the third piece was, at the tail end of the white paper, a set of recommendations for the TOC, or the CNCF in general: what should they do next? It basically came down to three different things. One was education: we want to educate the community on what serverless is, when it's appropriate, stuff like that. Two: what other projects should we pull into the CNCF, what other serverless projects, you know, should we encourage to join, to grow the community? And third: what should we do around interoperability? Because obviously, when it comes to open source, standards, stuff like that, we want interoperability, portability, stuff like that. And one of the low-hanging fruits they identified was: well, serverless seems to be all about events, so there's something in the eventing space we can do. And we recognized that if we could help the processing of events as they move from point A to point B, that might help people in terms of middleware, in terms of routing of events, filtering events, stuff like that. And so that's how the CloudEvents project got started, right? And that's where most of the Serverless Working Group members are nowadays: the CloudEvents project. They're basically defining a specification around CloudEvents, and you can kind of think of it as defining metadata to add to your current events, because we're not going to tell you, oh, here's yet another one-size-fits-all cloud event format, right? Take your current events, sprinkle a little extra metadata in there just to help routing. And that's really what it's all about. >> One of the first things people say about serverless, quoted directly from the cover of Missing the Point magazine: serverless runs on servers. Wonderful.
Thank you for your valuable contribution; go away. A slightly less naive approach, I think, and one I've seen a couple of times so far at this conference when talking to people, is that they think of it in terms of functions as a service, of being able to take arbitrary code and run it. I have a wristwatch I can run arbitrary code on; that's not really the point. It's, I think you're right, it's talking more about the event model and what that unlocks as your application more or less starts to become more self-aware. Are you finding that acceptance of that point is taking time to take root? >> Yeah, I think what's interesting is, when we first started looking at serverless, I think a lot of people did think of serverless as equal to functions as a service, and that's all it was. I think what we're finding now is that people are more open to the idea of, as I think you're alluding to, merging these worlds, because if we look at the functionality serverless offers, things like being event-based, all that really means is a message is coming in; it just happens to look like an event. Okay, fine. A message comes in, you auto-scale based upon, you know, load and stuff like that; scale down to zero is one of the key features. It was really like, all these other things, all these features: why should you limit those to serverless? Why not a PaaS platform? Why not containers as a service? Why would you want those just for one little aaS column? And so my goal with things like Knative, and I'm glad you mentioned it, is because I think Knative does try to span those, and I'm hoping it kind of merges them all together and says: look, I don't care what you call it, use this piece of technology because it does what you need to do. If you want to think of it as a PaaS, go for it, I don't care. This guy over here, he wants to think of it as a FaaS? Great. It's the same piece of technology. Does the feature do what you need? Yes or no?
Ignore the terminology around it more than anything else. >> So I agree. We had a good, great discussion with a user earlier, and he said, from a developer standpoint, I actually don't want to think too much about which one of these paths I go down; I want to reduce the friction and make it easy. So, you know, how does Knative help us move towards that, you know, ideal >> world? Right. And, in line with what I said earlier, one of the things I think Knative does, aside from trying to bridge all the various aaS columns, is that I also look at Knative as a simplification of Kubernetes, because as much as everybody here loves Kubernetes, it is kind of complicated, right? It is not the easiest thing in the world to use, and it kind of forces you to be a Kubernetes expert, which almost goes against the direction we were headed when you think of Cloud Foundry and stuff like that, where it's like: hey, don't worry about this stuff, just give us your code. Kubernetes says: no, you've got to know about networking, ingress, all these values, and everything else. It's like, I'm sorry, isn't this going the wrong way? Well, Knative tries to back up a little and say: we give you all the features of Kubernetes, but in a simplified platform, an API experience similar to what you can get with Cloud Foundry or Docker and stuff, while still giving you all the benefits of Kubernetes. But the important thing is, if for some reason you need to go around Knative, because it's a little too simplified or opinionated, you can still go around it to get to the complicated stuff. And it's not like you're leaving for a different world or entering a different world, because it's the same infrastructure: the stuff that you deploy on Knative can integrate very nicely with the stuff you deploy through vanilla Kubernetes if you have to. So it is really nice at merging these two worlds, and I'm really excited by that.
>> One thing that I found always strange about serverless is that at first it was defined by what it's not, and then it quickly came to be defined almost by its constraints. If you take a look at public cloud offerings around this, most notably AWS Lambda and many others, it comes down to: well, you can only run it for a certain amount of time, or it only runs in certain runtimes, or cold starts become a problem. I think that taking a viewpoint from that perspective artificially hobbles what this might wind up unlocking down the road, just because those constraints move. Right now it might be a bit of a toy; I don't think it will stay that way, because it needs to become more capable. The big value proposition that I keep hearing around serverless, and that I've mostly bought into, has been that it's about business logic and solving the things that are core to your business, and not even having to think about infrastructure. Where do you stand on that >> viewpoint? I completely agree. I think a lot of the limitations you see today are completely artificial. I kind of understand why they're there, because of the way things have progressed, but again, it's one reason I'm excited about Knative: a lot of those limitations aren't there. Now, Knative does have its own set of limitations, and personally I do want to try to remove those. Like I said, I would love it if Knative, aside from the serverless features it offers, became this simplified Kubernetes experience. If you think about what you can do with Kubernetes, right, you can deploy a pod and it can run forever, until the system decides to crash for some reason. Why not do that with Knative? And you can, with Knative: technically, I have demos that I've been running here where I set the min scale to one, it lives forever, and Knative doesn't care, right? And so, deploying an application through Knative or Kubernetes, I don't care: it's the same thing to me.
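The min-scale demo Doug mentions can be sketched as a Knative Service manifest. This is a hypothetical example, not Doug's actual demo: the service name and image are placeholders, and the annotation and API version shown are from the stable Knative Serving API, which at the time of this interview was still pre-1.0.

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                # placeholder name
spec:
  template:
    metadata:
      annotations:
        # Keep at least one replica warm so the service never
        # scales to zero, as in the demo described above
        autoscaling.knative.dev/minScale: "1"
    spec:
      containers:
        - image: example.com/hello:latest   # placeholder image
```

With minScale set to "1", the revision behaves like a long-running Kubernetes deployment while still getting Knative's routing and autoscale-up behavior.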
And so, yes, I do want to merge those two worlds. I want to lower those constraints, as long as we keep it a simplified model and support the eighty to ninety percent of use cases that it's actually meant to address; leave the hard stuff for going around it a little. >> Alright. So, Doug, you know, oftentimes we get caught in this bubble of arguing over, you know, what we call it, how the different pieces fit. Yesterday you had a practitioner summit for serverless. So I want to hear, you know, what's the practitioners' viewpoint? What are they excited about? What are they using today, and what are the things that they're asking for to help it become, you know, more usable and useful for them in the future? >> So, in full disclosure, we actually had kind of a quiet audience, so they weren't very vocal. But what little I did hear is that they seemed very excited by Knative, and I think a lot of it was because, as we were just talking about, of the merging of the worlds, because I do think there is still some confusion around, as you said, when to use one versus the other, and I think Knative is helping to bring those together. And I did hear some excitement around that. In terms of what people actually expect from us going into the future, I don't know, to be honest; they didn't actually say a whole lot there. I have my own personal opinion, and a lot of it is what I already stated in terms of merging: stop having me pick a technology or pick a terminology, right? Let me just pick the technology that gets my job done, and hopefully that one will solve a lot of my needs. But for the most part, I think it was really more about Knative than anything else yesterday. >> I think, like Linux before it, any technology, at some point, you saw this with virtualization, with cloud, with containers, with Kubernetes.
And now we're starting to see this with serverless, where some of its most vocal proponents are also the most obnoxious, in that they're looking at this from a perspective of: what's your problem? I'm not even going to listen to the answer; the solution is my favorite technology here. So to that end, today, what workloads are not appropriate for serverless, in your >> mind? Um, so this is hardly the official answer, because I have the IBM army running through my head. What's interesting is, I do hear people talk about: serverless is good for this and not this, or Knative is good for this and not this. And I hear those things, and I'm not sure I actually buy it, right? I actually think that the only limitations I've seen, in terms of what you should not run on something like Knative or any of these platforms, are whatever that platform actually binds you to. So, for example, on AWS, they may have a time limit in terms of how long you can run; if that's a problem for you, don't use it. To me, that's not an artifact of serverless, that's an artifact of that particular choice of how they implement serverless. With Knative, you don't have that problem; you can let it run forever if you want. So in terms of what workloads are good or bad, honestly, I don't have a good answer, because I don't necessarily buy some of the stories I'm hearing. I personally think: try to run everything you can through something like Knative, and then, when it fails, go someplace else. It's the same story we had when containers first came around: people would ask when to use VMs versus containers, and my go-to answer was always: try containers first, your life will be a whole lot easier; when it doesn't work, then look at the other things. I don't want to try to pigeonhole something like serverless or Knative and say, oh, don't even think about it for these things, because it may actually work just fine for you, right? I don't want people to believe negative hype. >> In a way, that makes sense,
I don't want people to believe negative hype in a way that makes sense, >> and that's very fair. I tend to see most of the constraints around. This is being implementation details of specific providers and that that will dictate answers to that question. I don't want to sound like I'm coming after you, and that's very thoughtful of measured >> thank you Usual response back. Teo >> I'LL give you the tough one. The critical guy had in Seattle when I looked at K Native is there's a lot of civilised options out there yet, but when I talked to users, the number one out there is a ws lambda, and number two is probably as your functions and as of Seattle, neither of those was fully integrated since then. I talked a little startup called I Believe his Trigger Mash that that has made some connections between Lambda on K Native. And there was an announcement a couple of weeks ago, Kedia or Keita? That's azure and some kind of future to get Teo K native. So it feels like it's a maturity thing. And, you know, what can you tell us about, you know, the big cloud guys on Felicia? Google's involved IBM Red Hat on and you know Oracle are involved in K Native. So where do those big cloud players? Right? >> So from my perspective, what I think Kenya has going for it over the others is one A lot of other guys do run on Cooper Netease. I feel like they're sort of like communities as well as everything else, like some of them can run. Incriminate is Dr anything else, and so they're not necessary. Tightly integrated and leveraging the carbonates features the way Kay native is doing, and I think that's a little bit unique right there. But the other thing that I think K native has going for it is the community around it. I think people were doing were noticing. Is that what you said? There's a lot of other players out there and his heart feel the choose and what? 
What I think Google did a great job of is sort of bringing the community together and saying: look, can we stop bickering and develop a sort of common infrastructure, like Kubernetes is, that we can all then base our serverless platforms on? And I think that rallying cry, to bring the community together across a common base, is something a little bit unique for Knative when you compare it with the others. I think that's a big draw for people, at least from my perspective. I know it is from IBM's as well, because community is a big thing for us, obviously. >> Okay, so will there be a bridge to those other cloud players? Is that on the roadmap? >> For Knative itself? Yeah, I am not sure I can answer that one, because I'm not sure I've heard a lot of talk about bridging per se. I know that when you talk about things like getting events from other platforms and stuff, obviously, through the eventing side of Knative, we do. But from a serving perspective, I'm not sure, I have to be >> honest. All right. Well, Doug Davis, we're done for this one; we really appreciate all the updates, and I definitely look forward to seeing the progress that the Serverless Working Group continues to make. Thank you so much. >> Thank you for having me. >> Alright. For Corey Quinn, I'm Stu Miniman, and we'll be back with more coverage here on theCUBE. Thanks for watching.

Published Date : May 22 2019

Doug Davis, IBM | KubeCon + CloudNativeCon EU 2019


 

>> about >> fifteen live from basically about a room that is a common club native con Europe twenty nineteen by Red Hat, The >> Cloud, Native Computing Foundation and Ecosystem Partners. >> Welcome back to the Cubes. Live coverage of Cloud Native Con Cube Khan, twenty nineteen I'm stupid in my co host is Corey Quinn and having a welcome back to the program, Doug Davis, who's a senior technical staff member and PM of a native. And he happens to be employed by IBM. Thanks so much for joining. Thanks for inviting me. Alright, So Corey got really excited when he saw this because server Lis is something that you know he's been doing for a while. I've been poking in, trying to understand all the pieces have done marvelous conflict couple of times and, you know, I guess, I guess layout for our audience a little bit, you know, k native. You know, I look at it kind of a bridging a solution, but, you know, we're talking. It's not the, you know, you know, containers or server lists. And, you know, we understand that world. They're spectrums and there's overlap. So maybe as that is a set up, you know, What is the surveillance working groups? You know, Charter. Right. So >> the service Working Group is a Sand CF working group. It was originally started back in mid two thousand seventeen by the technical recite committee in Cincy. They basically wanted know what is service all about his new technology is that some of these get involved with stuff like that. So they started up the service working group and our main mission was just doing some investigation. And so the output of this working group was a white paper. Basically describing serval is how it compares with the other as is out there. What is the good use cases for when to use that went out through it? Common architectures, basically just explaining what the heck is going on in that space. 
And then we also produced a landscape document basically laying out what's out there from a proprietors perspective as well is open source perspective. And then the third piece was at the tail end of the white paper set of recommendations for the TOC or seen stuff in general. What do they do next? And basic came down to three different things. One was education. We want to be educate the community on what services when it's appropriate stuff like that. Two. What should wait? I'm sorry I'm getting somebody Thinks my head recommendations. What other projects we pull into the CNC f others other service projects, you know, getting encouraged in the joint to grow the community. And third, what should we do around improbability? Because obviously, when it comes to open source standards of stuff like that, we want in our ability, portability stuff like that and one of the low hang your food should be identified was, well, service seems to be all about events. So there's something inventing space we could do, and we recognize well, if we could help the processing of events as it moves from Point A to point B, that might help people in terms of middleware in terms of routing, of events, filtering events, stuff like that. And so that's how these convents project that started. Right? And so that's where most of service working group members are nowadays. Is cod events working or project, and they're basically divine, Eva said specification around cloud events, and you kind of think of it as defining metadata to add to your current events because we're not going to tell you. Oh, here's yet another one size fits all cloud of in format, right? It's Take your current events. Sprinkle a little extra metadata in there just to help routing. And that's really what it's all about. >> One of the first things people say about server list is quoted directly from the cover of Missing the Point magazine Server list Runs on servers. Wonderful. Thank you for your valuable contribution. 
Go away slightly less naive is, I think, an approach, and I've seen a couple of times so far at this conference. When talking to people that they think of it in terms of functions as a service of being able to take arbitrary code and running, I have a wristwatch I can run arbitrary code on. That's not really the point. It's, I think you're right. It's talking more about the event model and what that unlocks As your application. Mohr less starts to become more self aware. Are you finding that acceptance of that viewpoint is taking time to take root? >> Yeah, I think what's interesting is when we first are looking. A serval is, I think, very a lot of people did think of service equals function of the service, and that's all it was. I think what we're finding now is this this mode or people are more open to the idea of sort of as you. I think you're alluding to merging of these worlds because we look at the functionality of service offers, things like event based, which really only means is the messages coming in? It just happens to look like an event. Okay, fine. Mrs comes in you auto scale based upon, you know, loaded stuff like that scale down to zero is a the monkey thought it was really like all these other things are all these features. Why should you limit those two service? Why not a past platform? Why not? Container is a service. Why would you want those just for one little as column? And so my goal with things like a native though I'm glad you mentioned it is because I think he does try to span those, and I'm hoping it kind of merges them altogether and says, Look, I don't care what you call it. Use this piece of technology because it does what you need to do. If you want to think of it as a pass, go for I don't care. This guy over here he wants think that is a FAZ Great. It's the same piece of technology. Does the feature do what you need? Yes or no? Ignore that, nor the terminology around it more than anything >> else. So I agree. 
Ueda Good, Great discussion with the user earlier and he said from a developer standpoint, I actually don't want to think too much about which one of these pass I go down. I want to reduce the friction for them and make it easy. So you know, how does K native help us move towards that? You know, ideal >> world, right? And I think so fine. With what I said earlier, One of the things I think a native does, aside from trying to bridge all the various as columns is I also look a K native as a simplification of communities because as much as everybody here loves communities, it is kind of complicated, right? It is not the easiest thing in the world to use, and it kind of forced you to be a nightie expert which almost goes against the direction we were headed. When you think of Cloud Foundry stuff like that where it's like, Hey, you don't worry about this something, we're just give us your code, right? Cos well says No, you gotta know about Network Sing Gris on values that everything else it's like, I'm sorry, isn't this going the wrong way? Well, Kania tries to back up a little, say, give you all the features of Cooper Netease, but in a simplified platform or a P I experience that you can get similar Tokat. Foundry is Simo, doctor and stuff, but gives you all the benefits of communities. But the important thing is if for some reason you need to go around K native because it's a little too simplified or opinionated, you could still go around it to get to the complicated stuff. And it's not like you're leaving that a different world or you're entering a different world because it's the same infrastructure they could stuff that you deploy on. K Native can integrate very nicely with the stuff you deploy through vanilla communities if you have to. So it is really nice emerging these two worlds, and I'm I'm really excited by that. 
>> One thing that I've always found strange about serverless is that at first it was defined by what it's not, and then it quickly came to be defined almost by its constraints. If you take a look at public cloud offerings around this, most notably AWS Lambda, and there are many others, it comes down to: well, you can only run it for X amount of time, or it only runs in certain runtimes, or the cold starts become a problem. I think that taking a viewpoint from that perspective artificially hobbles what this might wind up unlocking down the road, just because those constraints move. And right now it might be a bit of a toy; I don't think it will stay that way, because it needs to become more capable. The big value proposition that I keep hearing around serverless, and I've mostly bought into, has been that it's about business logic and solving the things that are core to your business, and not even having to think about infrastructure. Where do you stand on that viewpoint? >> I completely agree. I think a lot of the limitations you see today are completely artificial. I kind of understand why they're there, because of the way things have progressed. But again, that's one reason I'm excited about Knative, because a lot of those limitations aren't there. Now, Knative does have its own set of limitations, and personally I do want to try to remove those. Like I said, I would love it if Knative, aside from the serverless features it offers up, became this simplified Kubernetes experience. So if you think about what you can do with Kubernetes, right, you can deploy a pod and it can run forever, until the system decides to crash for some reason, right? Why not do that with Knative? And you can with Knative: technically, I have demos that I've been running here where I set the minScale to one, it lives forever, and Knative doesn't care, right? And so, deploying an application through Knative or Kubernetes, I don't care; it's the same thing to me.
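The "minScale to one, it lives forever" demo Doug mentions maps to a Knative Serving autoscaling annotation. A minimal sketch, with an illustrative service name and Knative's published sample image:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                      # illustrative name
spec:
  template:
    metadata:
      annotations:
        # Keep at least one replica alive instead of scaling to zero,
        # so the app runs indefinitely, like a plain Kubernetes Deployment.
        autoscaling.knative.dev/minScale: "1"
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go  # sample image
```

With the annotation removed, the same service scales to zero when idle; this one knob is what blurs the line between "serverless function" and "long-running app" in his argument.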
And so, yes, I do want to merge those two worlds. I want to lower those constraints, as long as you keep it a simplified model and support the eighty to ninety percent of use cases that it's actually meant to address. Leave the hard stuff for going around it a little. >> All right. So, Doug, you know, oftentimes, you know, we get caught in this bubble of arguing over, you know, what we call it, how the different pieces are. Yesterday you had a practitioner summit for serverless. So what I want to hear is, you know, what's the practitioners' view? What are they excited about? What are they using today, and what are the things that they're asking for to help it become, you know, more usable and useful for them in the future? >> So, in full disclosure, we actually had kind of a quiet audience, so they weren't very vocal. But what little I did hear is they seem very excited by Knative, and I think a lot of it was because we were just talking about that sort of merging of the worlds, because I do think there is still some confusion around, as you said, when you use one versus the other, and I think Knative is helping to bring those together. And I did hear some excitement around that. In terms of what people actually expect from us going forward, I don't know, to be honest; they didn't actually say a whole lot there. I have my own personal opinion, and a lot of theirs was already stated in terms of merging: stop having me pick a technology or pick a terminology, right? Let me just pick the technology that gets my job done, and hopefully that one will solve a lot of my needs. But for the most part, I think it was really more about Knative than anything else yesterday. >> I think, like Linux before it, any technology, at some point — you saw this with virtualization, with cloud, with containers, with Kubernetes.
And now we're starting to see it with serverless, where some of its most vocal proponents are also the most obnoxious, in that they're looking at this from a perspective of: what's your problem? I'm not even going to listen to the answer; the solution is [insert favorite technology here]. So to that end, today, what workloads are not appropriate for serverless in your mind? >> Um, >> so this is a hard one to answer, because I have the IBM army running through my head. What's interesting is I do hear people talk about how serverless is good for this and not this, or Knative is good for this and not this. And I hear those things, and I'm not sure I actually buy it, right? I actually think that the only limitations I've seen, in terms of what you should not run on something like Knative or any of the platforms, are whatever that platform actually binds you to. So, for example, on AWS they may have a time limit in terms of how long you can run. If that's a problem for you, don't use it. To me, that's not an artifact of serverless; that's an artifact of that particular choice of how they implement serverless. With Knative, they don't have that problem; you can let it run forever if you want. So in terms of what workloads are good or bad, I honestly don't have a good answer, because I don't necessarily buy some of the stories I'm hearing. I personally think: try to run everything you can through something like Knative, and then when it fails, go someplace else. It's the same story we had when containers first came around. They would ask, you know, when to use VMs versus containers? My go-to answer was always: try containers first. Your life will be a whole lot easier. When it doesn't work, then look at the other things. I don't want to try to pigeonhole something like serverless or Knative and say, oh, don't even think about it for these things, because it may actually work just fine for you, right?
I don't want people to believe negative hype, in a way, if that makes sense. >> And that's very fair. I tend to see most of the constraints around this as being implementation details of specific providers, and that will dictate the answers to that question. I don't want to sound like I'm coming after you; that was very thoughtful and measured. >> Thank you. That's not the usual response I get back. >> So, Doug, I'll give you the tough one, the critical question I had in Seattle. Okay, when I look at Knative, there are a lot of serverless options out there, but when I talk to users, the number one out there is AWS Lambda, and number two is probably Azure Functions. And as of Seattle, neither of those was fully integrated. Since then, I've talked to a little startup called TriggerMesh that has made some connections between Lambda and Knative. And there was an announcement a couple of weeks ago, KEDA, that's Azure and some kind of path toward Knative. So it feels like it's a maturity thing. And, you know, what can you tell us about the big cloud guys? Obviously Google's involved, and IBM, Red Hat, and, you know, Oracle are involved in Knative. So where do those big cloud players fit, right? >> So, from my perspective, what I think Knative has going for it over the others is, one, a lot of the other guys do run on Kubernetes, but I feel like they run on Kubernetes as well as everything else; some of them can run on Kubernetes, Docker, anything else, and so they're not necessarily tightly integrated with and leveraging Kubernetes' features the way Knative is doing. And I think that's a little bit unique right there. But the other thing that I think Knative has going for it is the community around it. I think what people are noticing is, as you said, there's a lot of other players out there, and it's hard for people to choose. And what
I think Google did a great job of is sort of bringing the community together and saying, look, can we stop bickering and develop a sort of common infrastructure, like Kubernetes is, that we can all then base our serverless platforms on? And I think that rallying cry to bring the community together across a common base is something a little bit unique for Knative when you compare it with the others. I think that's a big draw for people, at least from my perspective. I know it is from IBM's as well, because community is a big thing for us, >> obviously. Okay, so will there be a bridge to those other cloud players soon? Is that on the roadmap for >> Knative itself? Yeah, I am not sure I can answer that one, because I'm not sure I've heard a lot of talk about bridging per se. I know that when you talk about things like getting events from other platforms and stuff, obviously, through the eventing side of Knative, we do. From a serving perspective, I'm not sure that holds water, to be honest. >> All right. Well, Doug Davis, we're done for this one. Really appreciate all the updates there, and I definitely look forward to seeing the progress that the serverless working group continues to make. So thank you so much. >> Thank you for having me. >> All right. For Corey Quinn, I'm Stu Miniman, and we'll be back with more coverage here on theCUBE. Thanks for watching.

Published Date : May 21 2019

