The Shortest Path to Vertica – Best Practices for Data Warehouse Migration and ETL


 

Hello everybody, and thank you for joining us today for the virtual Vertica BDC 2020. Today's breakout session is entitled "The Shortest Path to Vertica: Best Practices for Data Warehouse Migration and ETL." I'm Jeff Healey, I lead Vertica marketing, and I'll be your host for this breakout session. Joining me today are Marco Gessner and Mauricio Felicia, Vertica product engineers joining us from the EMEA region. Before we begin, I encourage you to submit questions or comments during the virtual session; you don't have to wait, just type your question or comment in the question box below the slides and click Submit. As always, there will be a Q&A session at the end of the presentation, and we'll answer as many questions as we're able to during that time. Any questions we don't address, we'll do our best to answer offline; alternatively, visit the Vertica forums at forum.vertica.com to post your questions there after the session. Our engineering team is planning to join the forums to keep the conversation going. Also a reminder that you can maximize your screen by clicking the double-arrow button in the lower right corner of the slides. And yes, this virtual session is being recorded and will be available to view on demand this week; we'll send you a notification as soon as it's ready. Now let's get started. Over to you, Marco.

Hello everybody, this is Marco speaking, a sales engineer from EMEA, so I'll just get going. This is the agenda: part one will be done by me, part two by Mauricio. As you can see, the agenda covers big bang versus piece-by-piece migration, migration of the DDL, migration of the physical data model, migration of ETL and BI functionality, what to do with stored procedures, what to do with any existing user-defined functions, and the migration of the data itself. Mauricio, do you want to introduce yourself?

Yeah, hello everybody, my name is Mauricio Felicia and I'm a Vertica pre-sales engineer like Marco. I'm going to talk about how to optimize data warehouses using some specific Vertica techniques like table flattening and live aggregate projections. Let me start with a quick overview of the data warehouse migration process we are going to talk about today. Normally we suggest starting by migrating the current data warehouse as it is, with limited or minimal changes in the overall architecture. Clearly we will have to port the DDL and redirect the data access tools to the new platform, but we should minimize the amount of changes in this initial phase in order to go live as soon as possible. In the second phase we can start optimizing the data warehouse, again with no or minimal changes in the architecture as such; during this optimization phase we can, for example, create additional projections for some specific query, optimize encoding, or change some of the resource pools. This is something we normally do if and when needed. And finally, again if and when needed, we go through an architectural redesign using full Vertica techniques, in order to take advantage of all the features we have in Vertica. This is normally an iterative approach, so we may go back to tune some specific feature before moving back to the architecture. We'll go through this process in the next few slides.

OK. To encourage everyone to keep using their common sense when migrating to a new database management system (people are often afraid of it), it's useful to use the analogy of a house move.
In your old home you might have developed solutions for your everyday life that made perfect sense there. For example, if your old Saint Bernard can't walk anymore, you might be using a forklift to heave him in through the window. Well, in the new home, consider the elevator, and don't complain that the window is too small to fit the dog through. Migrating databases works very much the same way.

But to make the transition gentle, let me stay with the house-move analogy: picture your new house as your new holiday home. Begin to install everything you miss and everything you like from your old home; once you have everything you need in the new house, you can shut down the old one. So move bit by bit, and go for quick wins to make your audience happy. You do big bang only if they are going to retire the platform you are sitting on, or you're really on a sinking ship. Otherwise, again: identify quick wins, implement and publish them quickly in Vertica, reap the benefits, enjoy the applause, and use the gained reputation for further funding; if you then find that nobody is using the old platform anymore, you can shut it down. Go big bang in one go only if you absolutely have to; otherwise migrate by subject area, grouping all similar areas together.

Having said that, you start off by migrating the objects in the database; that's one of the very first steps. It consists of first migrating the places where you put the other objects, that is, owners and locations, which usually means schemas. Then you extract the tables and views, convert the object definitions, and deploy them to Vertica. And mind that you shouldn't do this manually: never type what you can generate, automate whatever you can. Users and roles: usually there are system tables in the old database that contain all the roles; you can export those to a file, reformat them, and then you have CREATE ROLE and CREATE USER scripts that you can apply to Vertica. If LDAP or Active Directory was used for authentication in the old database, Vertica supports anything within the LDAP standard.

Catalogs and schemas should be relatively straightforward, with maybe some small differences. Vertica does not restrict you by defining a schema as the collection of all objects owned by a user, but it emulates that for old times' sake. Vertica also does not need a catalog; if the old tools you use absolutely need one, in Vertica's case it is always set to the name of the database.

Having the schemas, the catalogs, the users and the roles in place, move on to the data definition language, the DDL. If you are allowed to, it's best to use a tool that translates the data types in the generated DDL. You will hear the odb tool mentioned several times in this presentation; we are very happy to have it. It can export the old database's table definitions because it works over ODBC: it takes what the old database's ODBC driver reports and applies internal translation tables for several target DBMS flavors, the most important of which is obviously Vertica. If circumstances force you to use something else, there are always tools like SQL*Plus in Oracle or the SHOW TABLE command in Teradata; each DBMS has a set of tools to extract object definitions, originally meant for deployment in another instance of the same DBMS.
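As a hedged illustration of the "never type what you can generate" advice for users and roles: if the old platform were Oracle, a generator script along these lines could be spooled to a file and, after cleanup, applied to Vertica. The Oracle system views used here are standard, but the exact views and columns depend on your source platform, so treat this as a sketch.

    -- Generate CREATE ROLE statements from an Oracle source.
    SELECT 'CREATE ROLE ' || role || ';' FROM dba_roles;

    -- Generate CREATE USER statements and the role grants the same way.
    SELECT 'CREATE USER ' || username || ';' FROM dba_users;
    SELECT 'GRANT ' || granted_role || ' TO ' || grantee || ';'
      FROM dba_role_privs;

Spool the output, review it, and then run it through vsql against the new database.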
As for views, usually you will find the view definitions in the old database catalog as well. One thing that needs a bit of special care is synonyms; these get emulated in different ways depending on the specific needs, for example with a view on top of the table being referred to, or with something really neat that other databases don't have: the search path. The search path works very much like the PATH environment variable in Windows or Linux: you specify an object name without the schema name, and it is searched for first in the first entry of the search path, then in the second, then in the third, which makes synonyms completely unneeded.

When you generate DDL, to remain in the analogy of moving house, dust off and clean your stuff before placing it in the new house. If you see a table like the one at the bottom of the slide, it is usually the corpse of a bad migration in the past: an ID is usually an integer and not a floating-point data type; a first name hardly ever has 256 characters; and if a column is called HIRE_DT, it's not necessarily needed to store the second when somebody was hired. So take good care, while you are moving, to dust off your stuff and use better data types.

The same applies especially to strings: how many bytes does a string contain? For four euro signs it's not 4, it's actually 12, because in UTF-8, the way Vertica encodes strings, an ASCII character takes one byte but the euro sign takes three. That means that when you have a single-byte character set at the source, you very often have to oversize strings first, because otherwise data gets rejected or truncated, and then carefully check what the best size really is. The most promising approach is to initially dimension strings in multiples of their original length; odb, with the command option you see on the slide, will double the length of what would otherwise be single-byte characters, and multiply further for characters that are wide characters in traditional databases. Then load a representative sample of your source data, profile it to find the actual longest values, and make the columns shorter.

Why talk about the issue of too-long and too-big data types in the context of projection design? Because in Vertica we live and die with our projections. You may remember the rules for how default projections come to exist. What we do initially is, just like for the profiling, load a representative sample of the data, collect a representative set of already-known queries, and run the Vertica Database Designer. You don't have to decide immediately; you can always amend things later. Otherwise, follow the laws of physics: avoid moving data back and forth across nodes, avoid heavy I/O, and if you can, design your projections initially by hand.

Encoding matters too. You should know that the Database Designer is a very tight-fisted thing; it optimizes to use as little space as possible. But if you compress very well, you might end up spending more time reading the data back. Here is a test we ran once using several encoding types: you can see that RLE, run-length encoding, if the data is sorted, is not even visible in the chart, while the others are considerably slower. You can get the slides and look at the numbers in detail.
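A small sketch of the string-profiling step just described, in Vertica SQL; the table and column names are invented for the example, and OCTET_LENGTH is used because Vertica sizes VARCHARs in bytes rather than characters.

    -- Find the real maximum byte length of an oversized string column.
    SELECT MAX(OCTET_LENGTH(first_name)) AS max_bytes
    FROM staging.customers;

    -- Once the real maxima are known, rebuild with right-sized, cleaner types.
    CREATE TABLE staging.customers_clean AS
    SELECT id::INT                  AS id,
           first_name::VARCHAR(48)  AS first_name,
           hire_dt::DATE            AS hire_dt
    FROM   staging.customers;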
Now about BI migrations. Usually you can expect 80% of everything to be liftable and shiftable. You don't need most of the pre-aggregated tables, because we have live aggregate projections. Many BI tools have specialized query objects for the dimensions and the facts, and we have the possibility to use flattened tables, which will be talked about later; you might have to rewrite those objects by hand. You will be able to switch off caching, because Vertica speeds up everything, and with live aggregate projections, if you have worked with MOLAP cubes before, you very probably won't need them at all.

ETL tools: if you load row by row into the old database, consider changing everything to very big transactions; and if you use INSERT statements with parameter markers, consider writing to named pipes and using Vertica's COPY command instead of mass inserts (a sketch of such a COPY follows at the end of this passage).

Custom functionality: you can see on this slide that Vertica has the biggest number of functions in the database; we compare regularly, and it is by far the most compared to any other database. You might find that many of the functions you have written won't be needed in the new database, so look at the Vertica catalog before trying to migrate a function you don't need. Stored procedures are very often used in the old database to overcome shortcomings that Vertica doesn't have. Very rarely will you have to actually write a procedure that involves a loop; in our experience it's really very, very rare, and usually you can just switch to standard scripting.

This part basically repeats what Mauricio said, so in the interest of time I will skip it. But look at this one here: most of the data warehouse migration work can be automated. You can automate DDL migration using odb, which is crucial. Data profiling is not crucial, but game-changing, and encoding is the same: you can automate it using our Database Designer. Physical data model optimization in general is game-changing, and again you have the Database Designer for that. Use the old platform's tools to generate the SQL. Having no objects without their owners is crucial. And as for functions and procedures, they are only crucial if they embody the company's intellectual property; otherwise you can almost always replace them with something else. That's it from me for now.

Thank you, Marco. So we will now pivot our presentation to some of the Vertica optimization techniques we can implement in order to improve the general efficiency of the data warehouse. Let me start with a few simple messages. The first one is that you are supposed to optimize only if and when it is needed: in most cases, just a lift and shift from the old data warehouse to Vertica will give you exactly the performance you were looking for, or even better, and in that case there is probably no need to optimize anything. In case you want or need to optimize, keep in mind some of the Vertica peculiarities: for example, implement deletes and updates in the Vertica way; use live aggregate projections in order to avoid, or better, limit GROUP BY executions at query time; use table flattening in order to avoid or limit joins; and then you can also use some Vertica-specific extensions, for example time series analysis or machine learning, on top of your data. We will now start by reviewing the first of these bullets: optimize if and when needed.
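Here is the COPY sketch promised in the ETL discussion above. The file path, delimiter and table are placeholders; the source can also be a named pipe that your ETL tool writes into, which is the pattern Marco recommends over parameterized single-row INSERTs.

    -- Bulk-load a delimited file (or named pipe) instead of many small INSERTs.
    -- DIRECT writes straight to disk (ROS), which suits large batches.
    COPY sales.orders
    FROM '/data/orders.csv'
    DELIMITER ','
    NULL ''
    REJECTED DATA '/data/orders.rejects'
    DIRECT;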
Well, if when you migrate from the old data warehouse to Vertica without any optimization the performance level is okay, then probably your job is done. If this is not the case, one very easy technique you can use is to ask Vertica itself to optimize the physical data model, using the Vertica Database Designer. DBD, the Vertica Database Designer, has several interfaces; here I'm going to use what we call the DBD programmatic API, so basically SQL functions. With other databases you might need to hire experts to look at your data, your data warehouse and your table definitions, creating indexes or whatever; in Vertica all you need is to run something as simple as six single SQL statements to get a very well optimized physical data model. We start by creating a new design; then we add to the design the tables and the queries we want to optimize; we set our target, in this case tuning the physical data model in order to maximize query performance, which is why we use the query objective in our statement (other possible objectives would be tuning to reduce storage, or a mix between storage and queries); and finally we ask Vertica to produce and deploy the optimized design (the call sequence is sketched at the end of this passage). In a matter of literally minutes, what you get is a fully optimized physical data model. This is something very, very easy to implement.

Next, keep in mind some of the Vertica peculiarities. Vertica is very well tuned for load and query operations: in Vertica, writes go to ROS containers on disk, and a ROS container is a group of files whose content we will never, ever change. The fact that ROS container files are never modified is one of the Vertica peculiarities, and this approach leads to minimal locking. We can run multiple load operations in parallel against the very same table (assuming we don't have a primary or unique constraint on the target table), because parallel loads end up in different ROS containers. A SELECT in READ COMMITTED requires no locks and can run concurrently with an INSERT...SELECT, because the SELECT works on a snapshot of the catalog taken when the transaction starts; this is what we call snapshot isolation. Backup and recovery, because we never change the ROS files, are very simple and robust. So we get a huge number of advantages from the fact that we never change the content of the ROS containers.

On the other side, deletes and updates require a little attention. What about delete, first? When you delete in Vertica, you basically create a new object called a delete vector, which is stored a bit later in ROS or in memory, and this vector points to the data being deleted, so that when a query is executed Vertica simply ignores the rows listed in the delete vectors. And it's not just about delete: an update in Vertica consists of two operations, delete and insert, and a merge consists of either an insert or an update, which in turn is made of delete plus insert. So if we tune how delete works, we will also have tuned update and merge.

What should we do to optimize delete? Remember what we said: every time we delete, we create a new object, a delete vector. So avoid committing deletes and updates too often; this reduces the work for the mergeout and delete-vector removal activities that run afterwards. And make sure that all the projections involved contain the columns used in the delete predicate:
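The Database Designer call sequence described above, as a hedged sketch of the DBD programmatic API. The design name, table filter and file paths are placeholders, and exact function signatures vary a bit across Vertica versions, so check the documentation for the release you run.

    -- 1. Create a new design.
    SELECT DESIGNER_CREATE_DESIGN('my_design');
    -- 2. Add the tables to consider.
    SELECT DESIGNER_ADD_DESIGN_TABLES('my_design', 'public.*');
    -- 3. Add the queries to optimize for.
    SELECT DESIGNER_ADD_DESIGN_QUERIES('my_design', '/tmp/queries.sql', TRUE);
    -- 4. Target query performance (other objectives tune for load or a balanced mix).
    SELECT DESIGNER_SET_OPTIMIZATION_OBJECTIVE('my_design', 'QUERY');
    -- 5. Produce the design and deploy it.
    SELECT DESIGNER_RUN_POPULATE_DESIGN_AND_DEPLOY(
               'my_design', '/tmp/my_design.sql', '/tmp/my_design_deploy.sql');
    -- 6. Drop the design workspace when done.
    SELECT DESIGNER_DROP_DESIGN('my_design');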
this lets Vertica access each projection directly, without having to go through the super projection in order to create the delete vector, and the delete will be much, much faster.

Finally, another very interesting optimization technique is to segregate the update and delete operations from the query workload, in order to reduce lock contention, and this can be done using partition operations. This is exactly what I want to talk about now. Here you have a typical data warehouse architecture: data arrives in a landing zone, where it is loaded from the data sources; then a transformation layer writes into a staging area, which in turn feeds the partitioned green data structures at the end, the ones used by the data access tools when they run their queries. Sometimes we might need to change old data, for example because we have late records, or because we want to fix errors that originated in the source systems. What we do in this case is copy the partition we want to adjust from the green query area at the end back to the staging area, which is a very fast copy-partition operation. Then we run our updates, or our adjustment procedures, or whatever we need in order to fix the errors, in the staging area, and at the very same time people continue to query the green data structures at the end, so we never have contention between the two operations. When the update in the staging area is completed, all we have to do is run a swap partition between the tables, to swap the data we just finished adjusting from the staging zone into the green query area at the end. This swap partition is very fast; it is an atomic operation, and basically all that happens is an exchange of pointers to the data. This is a very effective technique, and a lot of customers use it.
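A hedged sketch of the partition-maintenance pattern just described, using Vertica's partition functions; the table names and the partition-key range are placeholders, and both tables must share the same definition and partition expression.

    -- 1. Copy the partition to fix from the query table into a staging table.
    SELECT COPY_PARTITIONS_TO_TABLE('prod.fact_sales', '2020-03', '2020-03',
                                    'stage.fact_sales_fix');

    -- 2. Adjust the staging copy while queries keep running against prod.
    UPDATE stage.fact_sales_fix SET amount = amount * 1.1 WHERE region = 'EMEA';
    COMMIT;

    -- 3. Atomically swap the corrected partition back into the query table.
    SELECT SWAP_PARTITIONS_BETWEEN_TABLES('stage.fact_sales_fix', '2020-03',
                                          '2020-03', 'prod.fact_sales');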
So, why flattened tables and live aggregate projections? Basically, we use flattened tables to minimize or avoid joins, and live aggregate projections to minimize or avoid GROUP BYs. Compared to traditional data warehouses, Vertica can store, process, aggregate and join orders of magnitude more data; it is a true columnar database, and joins and GROUP BYs are normally not a problem at all, running faster than in any traditional data warehouse. But there are still scenarios where the data sets are so big (we are talking about petabytes of data, and quickly growing) that we need something to boost GROUP BY and join performance. This is why you can use live aggregate projections, to perform aggregations at load time and limit the need for GROUP BY at query time, and flattened tables, to combine information from different entities at load time and avoid running joins at query time.

Live aggregate projections, at this point in time, can use four built-in aggregate functions: SUM, MIN, MAX and COUNT. Let's see how this works. Suppose you have a normal table, in this case a table unit_sold with three columns, PID, date_time and quantity, which has been segmented in a given way. On top of this base table (we call it the anchor table) we create a projection, using a SELECT that aggregates the data: we take the PID, the date portion of date_time, and the sum of quantity from the base table, grouping on the first two columns, PID and the date portion of date_time.

What happens when we load data into the base table? All we have to do is load data into the base table. Assuming we are running with K-safety 1, we will have two projections holding the detail data we load into the table, so PID, date_time and quantity; but at the very same time, without having to run any particular operation or any ETL procedure, we also automatically get, in the live aggregate projection named total_quantity, the data pre-aggregated by PID and the date portion of date_time, with the sum of quantity. This is something we get for free, and it is very, very efficient. The key concept is that the loading operation, from the DML point of view, is executed against the base table; we do not explicitly aggregate data, there is no ETL to run, the aggregation is automatic, and Vertica brings the data into the live aggregate projection every time we load into the base table.

Consider the two SELECTs on the slide: running SELECT PID, date, SUM(quantity) against the base table, or running SELECT * from the live aggregate projection, produces exactly the same result. This is of course useful, but something much more useful, which we can observe if we run an EXPLAIN, is this: if we run the SELECT against the base table asking for the grouped data, what happens behind the scenes is that Vertica sees there is a live aggregate projection with the data already aggregated during the loading phase, and rewrites the query to use the live aggregate projection. This happens automatically: the query ran a GROUP BY against unit_sold, and Vertica decided to rewrite it against the live aggregate projection, because this saves a huge amount of time and effort at query time. And it is not limited to exactly the information you chose to aggregate: another query, for example one with a COUNT, will also take advantage of the live aggregate projection through its GROUP BY, and again this happens automatically, without you doing anything.
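The unit_sold example from the slides, sketched in SQL. The names follow the talk; the column types and the segmentation clause are assumptions, and Vertica's documented live aggregate projection examples use the same cast-in-GROUP-BY pattern for the date portion.

    -- Anchor table: product id, time of sale, quantity.
    CREATE TABLE unit_sold (
        pid      INT,
        dtime    TIMESTAMP,
        quantity INT
    ) SEGMENTED BY HASH(pid) ALL NODES;

    -- Live aggregate projection: maintained automatically at load time.
    CREATE PROJECTION total_quantity AS
    SELECT pid, dtime::DATE AS sale_date, SUM(quantity) AS total_qty
    FROM unit_sold
    GROUP BY pid, dtime::DATE;

    -- The optimizer rewrites this query to read the projection instead.
    SELECT pid, dtime::DATE, SUM(quantity)
    FROM unit_sold
    GROUP BY pid, dtime::DATE;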
One thing we have to keep very clear in mind: what we store in the live aggregate projection is partially aggregated data. In this example we have two INSERTs: the first inserts four rows, and the second inserts five rows. For each of these INSERTs we get a partial aggregation; Vertica can never know, after the first insert, that a second one will come, so it calculates the aggregation of the data every time you run an insert. This is a key concept, and it also means that you maximize the effectiveness of this technique by inserting large chunks of data. If you insert data row by row, the live aggregate projection is not very useful, because for every row you insert you get one aggregation, and the live aggregate projection ends up containing the same number of rows as the base table. But if you insert a large chunk of data every time, the number of aggregate rows in the live aggregate structure is much smaller than the base data.

You can see how this works by counting the rows in a live aggregate projection. If you run SELECT COUNT(*) against the live aggregate projection (the query on the left side of the slide), you get four rows; but if you EXPLAIN the query, you see that it was reading six rows. This is because each of the two inserts independently contributed three rows to the live aggregate projection. So, again: live aggregate projections keep partially aggregated data, and the final aggregation always happens at run time.

Another feature, very similar to live aggregate projections, is what we call top-K projections. We don't actually aggregate anything in a top-K projection; we just keep the last rows, or limit the number of rows we keep, using a LIMIT ... OVER (PARTITION BY ... ORDER BY ...) clause. In this case we create two top-K projections on top of the base table: one to keep the last quantity that was sold, and the other to keep the max quantity. In both cases it is just a matter of ordering the data, in the first case by the date_time column and in the second by quantity, and in both cases we fill the projection with just the last row. Again, this is filled when we insert data into the base table, and it happens automatically. If after the insert we run our SELECT against either the max quantity or the last quantity, we get the answer directly; you can see we have many fewer rows in the top-K projections.

We said at the beginning that we can use four built-in functions: SUM, MAX, MIN and COUNT. What if you want your own specific aggregation on top of this? Our customers sometimes have very specific needs in terms of live aggregate projections, and in that case you can code your own live aggregate projection with user-defined functions: you can create a user-defined transform function (UDTF) to implement any sort of complex aggregation while loading data. After you have implemented these UDTFs, you can deploy them using a pre-pass approach, which basically means the data is aggregated at load time during data ingestion, or a batch approach, which means the aggregation runs on top afterwards.

Things to remember about live aggregate projections: they are limited to the built-in functions (SUM, MAX, MIN and COUNT), but you can code your own UDTFs, so you can do whatever you want. They can reference only one table. For Vertica versions before 9.3 it was impossible to update or delete on the anchor table; this limit has been removed in 9.3, so you can now update and delete data in the anchor table. A live aggregate projection follows the segmentation of the GROUP BY expression, and in some cases the optimizer can decide whether or not to pick the live aggregate projection, depending on whether the aggregation is convenient. Remember that if we insert and commit every single row to the anchor table, we end up with a live aggregate projection that contains exactly the same number of rows as the base table; in that case, reading the live aggregate projection or the base table would be the same.
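And the two top-K projections described above, again on the talk's unit_sold table; the projection names are made up.

    -- Keep only the most recent sale per product.
    CREATE PROJECTION last_sale AS
    SELECT pid, dtime, quantity
    FROM unit_sold
    LIMIT 1 OVER (PARTITION BY pid ORDER BY dtime DESC);

    -- Keep only the largest sale per product.
    CREATE PROJECTION max_sale AS
    SELECT pid, dtime, quantity
    FROM unit_sold
    LIMIT 1 OVER (PARTITION BY pid ORDER BY quantity DESC);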
So this is the first of the two fantastic techniques we can implement in Vertica: live aggregate projections, basically to avoid or limit GROUP BYs. The other, which we are going to talk about now, is flattened tables, which we use in order to avoid the need for joins. Remember that Vertica is very fast at running joins, but when we scale up to petabytes of data we need a boost, and this is what we have in order to fix the problem regardless of the amount of data we are dealing with.

So, what about flattened tables? Let me start with normalized schemas. Everybody knows what a normalized schema is and the related concepts on this slide. The main purpose of a normalized schema is to reduce data redundancy, and that is a good thing, because we obtain fast writes: we write small chunks of data into the right tables. The problem with normalized schemas is that when you run your queries, you have to put together the information that arrives from the different tables, and that requires running joins. Again, Vertica is normally very good at running joins, but sometimes the amount of data makes joins hard to deal with, and joins are sometimes not easy to tune.

What happens in a traditional data warehouse is that we denormalize the schemas, normally either manually or using an ETL tool. So on one side, on the left of this slide, we have the normalized schemas, where we get very fast writes; on the other side we have the wide table, where all the joins and pre-aggregations have already been run in order to prepare the data for the queries. So we get fast writes on the left and fast reads on the right; the problem is in the middle, because we have pushed all the complexity into the middle, into the ETL that has to transform the normalized schema into the wide table. The way we normally implement this, either manually using procedures or using an ETL tool, is to code an ETL layer that runs the INSERT...SELECTs that read from the normalized schema and write into the wide table at the end, the one used by the data access tools to run the queries. This approach is costly, because someone has to code the ETL; slow, because someone has to execute those batches, normally overnight after loading the data, and maybe someone has to check the following morning that everything went okay with the batch; resource-intensive, of course, and also people-intensive, because of the people who have to code the ETL and check the results; error-prone, because it can fail; and it introduces latency, because there is a gap on the time axis between time t0, when you load the data into the normalized schema, and time t1, when the data is finally ready to be queried.
What we do in Vertica to facilitate this process is to create flattened tables. With flattened tables, first, you avoid data redundancy, because you don't need both the wide table and the normalized schema on the left side. Second, it is fully automatic: you don't have to do anything, you just insert the data into the wide table, and the ETL you would otherwise have coded is transformed into an INSERT...SELECT by Vertica automatically. It is robust, and the latency is zero: as soon as you load the data into the wide table, you get all the joins executed for you.

Let's have a look at how it works. Here we have the table we are going to flatten, and basically we have to focus on two different clauses. You see that one column, dimension value, can be defined either with DEFAULT followed by a SELECT, or with SET USING. The difference between DEFAULT and SET USING is when the data is populated: with DEFAULT, the data is populated as soon as we load data into the base table; with SET USING, we have to run a refresh. But everything is there: you don't need an ETL, you don't need to code any transformation, because everything is in the table definition itself, it's for free, and with DEFAULT the latency is zero, because as soon as you load the other columns you have the dimension value as well.

Let's see an example. Suppose we have a dimension table, customer_dimension, on the left side, and a fact table on the right. The fact table uses columns like o_name and o_city, which are basically the result of a SELECT on top of the customer dimension. This is where the join is executed: as soon as we load data into the fact table, directly into the fact table, without of course loading the data that comes from the dimension, all the data from the dimension is populated automatically. Suppose we run the INSERT on the slide: we insert directly into the fact table, loading o_id, customer_id and total; we are not loading name or city. Name and city are automatically populated by Vertica for you, because of the definition of the flattened table. This is all you need to have your wide table, your flattened table, built for you, and it means that at run time you won't need any join between the base fact table and the customer dimension used to calculate name and city, because the data is already there.

This was using DEFAULT; the other option is SET USING. The concept is absolutely the same: on the right side we have basically replaced o_name DEFAULT with o_name SET USING, and the same is true for city. The difference is that with SET USING we have to refresh: we run the REFRESH_COLUMNS function with the name of the table, in which case all columns are refreshed, or we can specify only certain columns, and this brings in the values for name and city, reading from the customer dimension. This technique is extremely useful.

To summarize the most important difference between DEFAULT and SET USING: DEFAULT populates your target when you load; SET USING populates when you refresh. And in some cases you might want to use them both: in the example here we define o_name using both DEFAULT and SET USING, which means the data is populated either when we load into the base table or when we run the refresh. This is the summary of the techniques we can implement in Vertica in order to make our data warehouses even more efficient.
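A hedged sketch of the DEFAULT / SET USING pattern just walked through, mirroring the slide's customer example; the column types are assumptions.

    -- Dimension table.
    CREATE TABLE customer_dimension (
        customer_id INT PRIMARY KEY,
        name        VARCHAR(64),
        city        VARCHAR(64)
    );

    -- Flattened fact table: o_name is filled at load time (DEFAULT),
    -- o_city at refresh time (SET USING).
    CREATE TABLE fact_orders (
        o_id        INT,
        customer_id INT,
        total       NUMERIC(12,2),
        o_name VARCHAR(64) DEFAULT (
            SELECT name FROM customer_dimension c
            WHERE c.customer_id = fact_orders.customer_id),
        o_city VARCHAR(64) SET USING (
            SELECT city FROM customer_dimension c
            WHERE c.customer_id = fact_orders.customer_id)
    );

    -- Loading only the base columns populates o_name immediately...
    INSERT INTO fact_orders (o_id, customer_id, total) VALUES (1, 42, 99.90);
    -- ...while the SET USING column is filled on demand:
    SELECT REFRESH_COLUMNS('fact_orders', 'o_city');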
And well, basically, this is the end of our presentation. Thank you for listening, and now we are ready for the Q&A session.

Published Date : Mar 30 2020


Keynote Analysis | AWS Summit New York 19


 

>> Live from New York, it's theCUBE, covering AWS Global Summit 2019. Brought to you by Amazon Web Services. >> Hi, and welcome to New York City, the Big Apple. I'm Stu Miniman, and my co-host for today is Corey Quinn, and this is AWS Summit New York City. It is one of the regional events that they have, but these regional events actually tend to be bigger and more exciting than many companies' big events. Not, you know, to say that companies don't do good shows, but look, we've got 11,500 people in attendance and over 125 sponsoring partners here in the ecosystem. We just had Werner Vogels up on stage, and a number of the customers, such as FINRA and Guardian, who we will have on the program. Good energy at a local show, and it is free to attend. Corey, before we get into the technology, though, there's a little bit of a protest going on here. This is actually the second Amazon show in a row: there was Amazon re:MARS, where a protester, talking about, I believe, something about chickens and Whole Foods, basically got really close to the richest man in the world. But the protest here, outside and in the keynote, is about ICE and border control, and it was actually a very well organized protest; security had to take many of them out during the first half hour of the keynote. Werner stopped a few times and said, look, I'll be happy to talk to you after, but please let me finish. I thought he handled it respectfully. But what was your take? >> Very much so. And I think it's an issue where there aren't too many people you'd want to associate with on the other side of it; kids in cages is not something anyone sensible wants to endorse. The challenge that I continually have, I think, is that it's easy to have these conversations and say "now is not the time." Okay, great. Typically, it's difficult to get big companies to say "and now is the time for us to address this" in anything outside of very carefully worded statements. So I empathize, I really do. I mean, as a speaker myself, it's terrifying to me, the idea that I could go up and have to have that level of conversation and be suddenly interrupted by people yelling at me. It's got to be nerve-wracking. Speaking to 10,000 people on its own is not easy, and having to carry that forward with something that effectively comes down to a morality question, it's got to be tough. I have sympathy for the people going through this who work at Amazon, and I don't know that there's a great answer right now. >> So, Corey, I know you are not deep in the government space, but you were at the public sector show, and there's always this discussion: well, you're supplying the technology. While Amazon might not be providing, you know, bombers and guns, they are providing the technology underneath. Facial recognition causes a lot of concern, and rightfully so; they make security products and the like. So, you know, when you have the Department of Defense and border control as your clients, they do open themselves up for some criticism, right? >> At some point you have to wonder who you do business with versus who you don't do business with, and the historical approach is, well, as long as there are no sanctions or laws preventing us from doing business with someone, we'll be open to all comers. At some level I find that incredibly compelling. In practice, the world is messy. If things were that black and white, we wouldn't have the social media content-moderation issues.
It would be a very different story with a very different narrative. >> Yeah, definitely. Amazon as a whole has a platform, and they have relationships. You know, Jeff Bezos has met with the highest levels of power in this country; they've got Jay Carney, who was part of the Obama administration, helping with policy. So I'd absolutely love to see Amazon take a strong public statement, because tech for good is something we're hugely a part of, and therefore we want to see all the suppliers having a dialogue and helping to move this forward. >> I think the lesson that we take from it, too, is that there are multiple ways to agitate for change and protest. One is to disrupt the keynote, and I understand that it gets attention and it's valuable. You can do that, or you can have a seat at the table and start lobbying for change, either internally or with stakeholders. There are a bunch of different paths to get there, and I don't blame anyone who's protesting today, and I don't blame anyone who chose not to. >> All right, so let's talk now about some of the content. So Corey, lately, you know, in the Amazon ecosystem, every day we wake up and there are multiple new announcements; as a matter of fact, we're always saying, oh my gosh, how do I keep up with everything happening there? Well, one of the ways we keep up with it is reading Last Week in AWS, which is your newsletter; I'll do the shameless plug for you there, Corey. But Amazon CloudWatch Container Insights, Amazon EventBridge, you know, new developer kits, Fluent Bit, talking about the momentum of the company, security, databases, and the general adoption overall. A quick take for me: I love to hear Werner up there talking about applications. It's not purely "oh, everything's going to live in the cloud and it'll be sunshine and unicorns and rainbows"; we understand that there are challenges here with your data and how we manage it, and that requires a broad ecosystem. EventBridge is something I definitely want to drill into, because for a serverless environment it's not just one thing, it's lots of different things, and how do we play between all of them? But since you do sort through and sift through all of these announcements, give us your take. Was there anything new here? Did you already know all of this because it's in your RSS feeds and newsletters? What grabbed you? >> Surprisingly, it turns out, in the weeks leading up, obviously, re:Invent is a firehose torrent that no human being can wind up consuming, and you see a few releases in Santa Clara and a few in New York. But I thought I knew most of the things that were coming out, and I did. I missed one that I just noticed about two minutes before we went on the air, called CloudWatch Anomaly Detection. The idea is that it uses machine learning, so someone can check that off the bingo card, and at that point you take all the CloudWatch data and start running machine learning over it, looking for anomalies and discrepancies. Yes, it uses machine learning, but rather than "go figure out what it's for," it's applied to a very specific problem, and those are the AI and ML products I like the best: where it's "we're solving a problem with your data for you," riding guardrails, as opposed to "step one, hire $2,000,000 worth of data scientists; step two, we're still working on that."
>> All right, so Corey, CloudWatch. Actually, you saw EventBridge, which I mentioned, that event ecosystem around Lambda. Deepak, who we're going to have on the program, said that it was the learnings from CloudWatch that helped them build this. Maybe for the audience, just give us CloudWatch: there are a lot of different products under that. Give us what you hear from your customers about where CloudWatch fits. >> You know, let's start at the beginning for those who are fortunate enough never to have used it. CloudWatch is AWS's internal monitoring solution. It gathers metrics, it gathers logs, it presents them in different ways, and it has interesting bill impacts. As a cloud economist, I see it an awful lot: every monitoring company (and walking around the expo hall you'll trip over 40 of those) is gathering their data on the infrastructure from CloudWatch and interpreting it. Now you're paying for the monitoring company and you're paying for the API charges against it. And it was sort of frozen in amber, more or less, for a good five years or so. I wrote a bit of a hit piece late last year and had some fascinating conversations afterwards, and it hasn't aged well; they're really coming to the fore with a lot of enhancements that are valuable. The problem is, there's a tremendous amount of data. How do you get signal from it? How do you look at actionable things? If you're running 10,000 instances, you're not looking at individual metrics or individual logs; you care about aggregates, but you also care about observability, about drilling down into things. Werner talked about X-Ray, the distributed tracing framework, today, and I think we're rapidly seeing across the board that it all ties back to events. CloudWatch Events is what's driving a lot of things like EventBridge, and the idea of an event-centric architecture is really what we're seeing software evolve into. >> Yeah, it's one of those things: when you talk about that serverless term out there, events are at the center of it. And how do I get some standardization across the industry? There are open source groups that are trying to insert themselves and give some flexibility here. What I want to understand about EventBridge is, okay, it's Lambda and their ecosystem, but is this going to be a Lambda-only ecosystem, or will this lay the groundwork so that, yes, there are other clouds out there, you know, Azure has other environments, will this eventually be able to extend beyond? Or is this an Amazon proprietary system? Do you have any insight there? >> It's a great question. I would argue, taking a step back for a second, that it would have to be almost irrelevant in some cases. When you start looking at serverless lock-in, it's not the fact that there's this magic system in only one provider that will take my crappy code and run it for me; it's that it's tied into the entire event ecosystem, tied into a bunch of primitives that do not translate very well. Now, inherently, by looking at what EventBridge is, and the fact that anyone who wants to can be integrated into their applications, you absolutely could wind up with a deep native integration coming from another large hyperscale cloud provider. The only question is, will they?
It's that skills on understanding, you know, each environment because today, doing A W S versus doing azure, there's still a lot of differences there. Sure, I could learn it, but >> yeah, and one of the things that I think is fascinating to is we've seen a couple attempts of this before from other start ups that are doing very similar things in open stores or trying to do something themselves. But one of the things that change this tremendously here is it this is AWS doing that? It doesn't matter what they do, what ridiculous name they give it when they want something. World generally tends to sit up and notice, just by sheer virtue of its scale and the fact that it's already built out. And you don't have to build the infrastructure, help to run these things. If anything has a chance to start driving a cohesive standard around this, it's something coming from someone like Amazon. >> Yeah, absolutely. All right, Cory, you know, database is always a hot topic. Latest stat from Warner is I believe it was 150,000 databases migrated. You know, you called and said, Hey, why is amazon dot com on there? Jeff Faris like, Well, they have a choice. And of course, Amazon would point out they were using a traditional database for a long time and now have completely unplug the last in a >> long time. But they finally got off of a database that was produced by a law firm, and I understand the reasons behind that. But I was talking with people afterwards. Amazon does have a choice. Do they use, And if AWS wants to win them over to use their service is they have to sell them just like any other customer. And that's why it's on that slide as a customer. Now, if you're not in the ecosystem like some of us are, it looks a little disjointed of weight. So successfully sold yourself and put yourself on the slide. Okay, >> Yes, it was actually. So so. The biggest thing I learned at the Amazon remarks show when you talk about all the fulfillment centers in the robotics in machine, learning almost everything underneath there it's got eight of us. Service is underneath it. So absolutely, it is one company. But yes, Amazon is the biggest customer of AWS. But that doesn't mean that there isn't somewhere, you know. You know, I still haven't gotten the word if they're absolutely 100% on that, because we expect that there's some 400 sitting in the back ground running one of those financial service things. Maybe they finally micro did that one >> that's building in AWS 400. >> All right, Cory, what else you know either from the key note or from your general observations about Amazon that you want to share, >> I I want to say that it's very clear that Amazon is getting an awful lot of practice at putting these events on and just tracking a year to year, not just the venue. Logistics, which, Okay, great. Get a bunch of people in a conference room, have a conversation. Do Aquino throw him out the end. But the way they're pacing the Gino's, the way they're doing narratives. The customer stories that are getting up on stage are a lot less challenging. But then they were in years past. Where people get on stage, they seem more comfortable. It's very clear that a number of Amazon exacts not just here but another. Summits have been paying serious attention to how to speak publicly to 10,000 people at once. It's its own unique skill. >> Yeah, and you gotta like that, You know that. You know, the two first customers that they put on which will have on financial service is, of course, a big presence here in New York City. 
Gord Ash has their headquarters, you know, just a few blocks uptown from good, deep stories. Isn't you know, there there's that mixed that they did a good job. I thought of kind of cloud 101 because still many customers are very early on that journey. We're not all cloud native, you know, run by the developers and everything there. But, you know, good looks of technology and the new pieces for those people that have been in a while, But still, you know, welcoming and embracing offer how to get started >> and the stories we're moving up the stack to. It's not >> We had a bunch of B. >> M s and we put them in a different place. >> Hey, >> which is great news. Everyone starts there. But now the stories are moving into running serious regulated workloads with higher level of service is And that's great because it's also not the far extreme Twitter for pets. We built this toy project last week when someone else fell through. And now we have to give this talk. It's very clearly something large enterprises. >> Yeah. So, Corey, last thing I want to ask you is you remember in the early days, you know that public cloud? Oh, it was It was cheap and easy to use today. They have 200 instance types up there, you know? What does that mean for customers. You know you are a cloud economist. So need your official opinion diagnosis. >> I think it reduces the question, too, before you buy a bunch of reserve businesses. Are you on the right instance? Types. And the answer is almost certainly not just based on statistics alone. So now it's a state of indecision. It's rooted in an epic game of battleship between two Amazon s VVS, and I really hope one of the winds already so we can stop getting additional instance dives every couple of months. But so far no luck. >> So in your your your perfect world, you know what the announcement reinvented, fixes the problem. >> That's a really good question. I think that fundamentally, I don't I don't And I don't think I have any customers who care what type of incidents they're running on. They want certain resource levels. They want certain performance characteristics. But whatever you call that does not matter to them and having to commit to, though what you picked for 1 to 3 years, that's a problem. You don't have to. You can go on demand, but you're leaving 30% of the day. >> Yeah, and I love that point is actually taken. Notes fin rot. I want to talk to them because they say they've done three major re architectures in four years. So therefore, how did they make sure that they get the latest price performance but still get you no good? Good economics on the outdated >> regulatory authority? I just assume they get there with audit threats when it comes time for >> renegotiating. All right. You're Cory Quinn. I am stupid. I mean, we have a full day here of world Wall coverage from eight of US. Summit, New York City. Thank you so much for watching.

Published Date : Jul 11 2019


Adrian Scott, DecentBet | Cube Conversation


 

(bright music) >> Hello everyone, welcome to a special Cube Conversation here in the Palo Alto studios for theCUBE. I'm John Furrier, the founder of SiliconANGLE Media and cohost of theCUBE. My next guest is Adrian Scott, who is the CEO of Soma Capital and Head of Technology of decent.bet. You can get an idea of what that's going to be all about, but, industry legend-- >> Yeah. >> Star of the big screen, good to see you, thanks for comin' in. >> Thank you John, it's great to see you. >> I'm glad, I wanted to talk to you, because I know you've been doing a lot of traveling, you've been living in Panama, and overseas, outside the US, mainly around the work you've been doing on the crypto side, obviously Blockchain, and with the start of decent.bet, a lot of great stuff. Congratulations on a successful initial coin offering! >> Thank you. >> Great stuff, but you're also notable in the industry: initial investor in Napster, our generation, the first P2P, the first renegade, you know, breaking down the music business, but the beginning of what we're now seeing as that decentralized revolution. And you've seen many waves of innovation. You've seen 'em come and go. But this one in particular, Blockchain, decentralized internet, decentralized applications, crypto, is pretty awesome, and a lot of young guns are coming in, a lot of older, experienced alpha entrepreneurs are coming in like yourself, and we're lookin' at it too. What's your take on it? I mean, how do you talk to people that are like, "Well, hey, is this just a scam on the ICO side, "is this real, is it a bubble?" Share your vision on what this is all about, this whole mega-trend, crypto, decentralized. >> And I'll also add, in addition to what you mentioned, the other neat thing here is just the global nature of it. Because we're so used to being Silicon Valley-centric, and having to dig around for funding here, and also looking only at talent that would move here, whereas with this whole new industry, it's very global, there's global teams, international teams, and some of the Silicon Valley folks are just struggling to stay relevant and stay in the game. So that's a fascinating aspect to this new revolution as well. >> And also, the thing I love about this market: it's very efficient, it takes away inefficiencies. In venture capital right now, and private equity, that's what's being disrupted, that's where the arbitrage is, hence the ICO bubble, but there are real, legit opportunities. You have Soma Capital, an investment fund that you're doing token investments with. The global nature is interesting, and I want to ask you about this, because my view is, it changes valuation, it changes valuation mechanisms, it changes the makeup of the venture architecture, it changes how people recruit teams, the technology used, and with open source, I mean, this is a first-time view at a new landscape. You can't apply a pattern-matching model to this. Your thoughts? >> Agree completely, and the efficiency you mentioned, applied to teams, and surfacing engineering talent, and the mathematical minds that can handle crypto internationally, the formation of teams internationally, online, is actually something special as well. So, with Decent Bet, our founding team includes folks from the US, Panama, and Australia, who met up in a Facebook chat group! And that's how they initially connected, they didn't know each other physically before this connection online, and that led to this project, Decent Bet, and the ICO, and so on.
So it's-- >> You created value from essentially a digital workforce. I mean, it reminds me of the old days: you'd chat, and there wasn't a lot of face-to-face, but now there's the video gaming culture, you know, "Hey, you want to play a game?" People don't even know each other, and they get a visual, and also an immersive experience with each other. This is now being applied to entrepreneurship, so this kind of gaming, the game is startups! So how are you looking at this, and how are you investing in it? What are some of the things, and what can people learn from what we're seeing in this new gamified, if you will, you know, world of starting companies? >> I think one of the things you alluded to there has really become visible, which is the importance of video as a medium, and I'm still absorbing and adjusting to that myself. For example, we do video communications, we do conversations at Decent Bet, of the founding team, and it really connects to the community, and it's so important, and I'm still absorbing it, like I mentioned, 'cause I'm just so used to publishing articles that are very clearly written, and detailed, and so on. We just did an AMA video, an Ask Me Anything video, in Las Vegas, with the executive team, and it went for 80 minutes, answering the questions that the community had all submitted! And I just try and imagine that five years ago. It's a new way of relating-- >> 'Cause there was no blogging; the linkback was the only thing you could do in blogging. >> Yeah. >> And then write a perfect blog post, or white paper. >> Exactly. >> And that was who you were. >> Yeah. >> Not anymore, it's more community driven. >> Exactly, and video as a piece of it has become so, so important, as a way of communicating the character of the team, and-- >> Before we get into decent.bet, I want to drill down on this, I think it's a great use case, and again, congratulations on great work there. I want to ask you about something that I've been fascinated with, because obviously, our generation, we grew up on open source when it was a second-class citizen; now it runs the whole world as a first-tier, first-class citizen in the software world. The role of the community was really important in software development, 'cause that kept a balance: there was governance, there was consensus, these are words that you hear in the crypto world. And now, whether it's content and/or ICOs, the role of the community, and certainly there are areas that are out of control on the ICO side, people are cracking down, certainly, like you see Facebook and Twitter trying to do something, but you can't stop the wisdom of the crowd. The role of the community in this crypto, decentralized market, ICOs and whatnot, is super important. Can you share your thoughts and color commentary on why the community's so important, how do you deal with it (laughs), any best practices, either through scar tissue or successes? Share your thoughts on this. >> Oh yeah, it's totally become a factor, and it's 24/7, right? So, when you are running a crypto project, you need your community management team to be there, in the community channels, 24/7. You need to have somebody there, and they need to be at a certain level that they can handle the challenging questions!
And we've definitely had moments where we have people who try to create FUD, potentially, you know, and bring up stuff, and bring it up again later and whatnot, and we need to be proactive. So when questions come up, we were there to be able to explain, "Okay, here's where you can see this on the Blockchain. "You can verify it yourself." And sometimes it happens when the team is just about to get on a plane (laughs), and be out of internet communication for a while, so it's a real challenge, and there's been the voice of experience on that. >> So talk about how you guys connect, because obviously, being connected is important with community access, but also, with connection, it increases the surface area for hacks. Are you guys carrying five burner phones each? How do you handle email? How have you guys dealt with the whole, you know, there is a lot of online activity, certainly people trying to do some spear phishing, or whatever tactics there are. Telegram has been littered with a lot of spoofing and whatnot. So all this is going on, and you've got to have accessible communication, but there's a security component that can have really big impacts on these businesses that are in tokens, because hacking can be easy if you don't protect yourself. >> We really like the Signal app as a communications medium; there's a new one starting to grow now, called Threema, which is pretty interesting. Telegram is just a real challenge, and it's unfortunate, because it's now become this metric-- >> How many people are active on your channels-- >> That investors like to look at, the size of the Telegram group, but we don't actually have a Telegram group for Decent Bet. And we've used Slack; we are going to be rolling out an internally hosted Slack replacement soon, based on Rocket.Chat, we really like Rocket.Chat. As you mentioned, there is spear phishing, we do see that, and one of the nice things is, a few years ago you had trouble convincing a team to take security seriously! But you know, when you have team members who may have lost $10,000 in a hack-- >> Or more! >> Or more, you know, there's no question that this needs to be a priority, and everybody buys in on it. So that is one net positive out of this. >> Well, let's talk about Decent Bet, fascinating use case, it's in the gaming area, gaming as in betting. My friend Paul Martino invested, I think in DraftKings, or one of those other companies, I forget which one it was. In the US there were regulatory issues, but, you know, outside the US, where I think you guys are, there's not as much of an issue. Perfect use case for tokens, in my opinion. So, take a minute to explain Decent Bet, what you guys are all about, and talk about the journey from conception, when you guys conceived it, to ICO. >> Yeah. Decent Bet was founded about a year ago by the CEO, Jedidiah Taylor, who developed an interesting idea and plan. So, the neat thing about Decent Bet is, first of all, you have all the benefits of the Ethereum Blockchain, in terms of verifying transactions and verifying the house's take. Additionally, what Decent Bet does is distribute all the profits of the casino back to the token holders: 95% goes out proportionally, and then 5% is awarded in a lottery, so there's no profit for any Decent Bet entity, it all goes back to the token holders. So you use the token to play, by gambling, but you can also use your token to convert into house shares, for a quarter, and participate in-- >> So the house always wins; that's a good model, right? >> Yes. >> You could become the house, through the tokens. >> Exactly, so the motto we use is, our house is your house (laughs). >> Don't bet against the house. >> Yeah.
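A minimal sketch of the payout model just described: 95% of a period's house profit split pro rata across house-share holders, 5% awarded by lottery. The holder names, balances, and lottery weighting here are illustrative assumptions, not Decent Bet's actual contract logic.

import random

# Hypothetical house-share balances; the units are arbitrary.
house_shares = {"alice": 600, "bob": 300, "carol": 100}

def distribute_profit(profit, shares):
    """Split 95% of profit pro rata by shares, then award 5% to a lottery winner."""
    total = sum(shares.values())
    payouts = {holder: 0.95 * profit * held / total for holder, held in shares.items()}
    # Assumption: lottery odds weighted by share count (one ticket per share).
    winner = random.choices(list(shares), weights=list(shares.values()))[0]
    payouts[winner] += 0.05 * profit
    return payouts

print(distribute_profit(10_000.0, house_shares))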
>> Alright, so, I love the gambling aspect of it, I think that's going to be a winner. Tech involved, ICO process, bumps, learnings, things you could share with folks? >> Yeah, so, on the technology, one of the neat things we are doing is, we do offer a slots game, which is a primary component of online gambling and casinos, a pretty dominant piece of the action. But if you are going to do a simple slots game on the Blockchain and wait around for blocks to be mined, you're not going to have a great experience, 'cause you're going to be waiting around more than you're going to be clicking that button. So what we use is a technology called state channels, which allows us to do a session kind of on a side channel, so to speak, and through this state channel, at the end of the session, you post back the results. So you get the verifiability of the Blockchain, but without the delay. So that's a major difference. >> That's off-chain, right? >> Yeah. >> Or the on-chain is off-chain. >> It's kind of-- >> So you're managing the ledger off the chain, so you still get the experience, and then get to preserve it on the chain. >> Exactly-- >> Okay. >> In terms of the ICO experience, we initiated the ICO at the end of September, ran for a month, and raised more than 52,000 Ether, so a very productive ICO process, but with actually some interesting details. The ICO structure limited the amount that a particular address could purchase, in the first phases, to $10,000 worth, and then $20,000 worth, with the idea of getting the tokens into the hands of people who are going to potentially use them for betting, not just-- >> The more the merrier for you; no one taking down big allocations. >> Exactly. >> Or whales. >> Not just a whales-take-all kind of thing. So that was an interesting structure. >> And that worked well? >> Yeah! >> Alright, talk about the dynamic post-ICO, because now you guys are building. Can you give an update on the state of where you guys are at with the product, availability, how that's going? 'Cause obviously you raised the capital through the ICO, democratized it, if you will, through a clever mechanism, which is cool, thanks for sharing that. Now what happens? What's going on? >> Yeah, I mean, I think we're doing pretty well in terms of hitting milestones and showing progress compared to a lot of projects. We released our test net with slots at the beginning of January, and then sportsbook in mid-January. We also did some upgrades to our wallet, we released that for some enhanced usability and handling during high peaks on the Ethereum network. And then, also, our move to main net. So we did some newer versions of the test net-- >> When did the main net come in? >> Main net is coming out end of April, and we're on track with that. >> Great, awesome. Congratulations, congratulations on a great job, 52,000 Ether, great raise there, and an awesome opportunity. Soma Capital. >> Mm-hmm.
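A rough sketch of the state-channel pattern Scott describes above: play many rounds off-chain, co-sign every intermediate state, and post only the final balances back on-chain. This is illustrative Python, not Decent Bet's implementation, and it uses HMAC as a stand-in for the real cryptographic signatures an Ethereum channel would use.

import hashlib, hmac, json

# In a real channel these would be ECDSA keypairs; HMAC keys stand in here.
PLAYER_KEY, HOUSE_KEY = b"player-secret", b"house-secret"

def sign(key, state):
    """Deterministically serialize the channel state and sign it."""
    blob = json.dumps(state, sort_keys=True).encode()
    return hmac.new(key, blob, hashlib.sha256).hexdigest()

# Off-chain session: each spin updates the state and both parties re-sign it.
state = {"nonce": 0, "player_balance": 100, "house_balance": 1000}
for outcome in (-10, -10, +25, -10):   # made-up spin results
    state = {
        "nonce": state["nonce"] + 1,
        "player_balance": state["player_balance"] + outcome,
        "house_balance": state["house_balance"] - outcome,
    }
    signatures = (sign(PLAYER_KEY, state), sign(HOUSE_KEY, state))

def settle(final_state, sigs):
    """What the on-chain contract would check: both signatures over the last state."""
    assert sigs[0] == sign(PLAYER_KEY, final_state), "bad player signature"
    assert sigs[1] == sign(HOUSE_KEY, final_state), "bad house signature"
    print("settling on-chain:", final_state)

settle(state, signatures)

Only the settlement touches the chain, which is why the player never waits for a block between spins.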
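The per-address purchase caps mentioned above ($10,000, then $20,000 by phase) reduce to a simple running-total check. The address and amounts here are hypothetical; a real ICO contract would enforce this on-chain.

# Running total of purchases per address, in dollars.
purchases = {}

PHASE_CAPS = {1: 10_000, 2: 20_000}   # per-address caps by sale phase

def buy(address, amount_usd, phase):
    """Reject any purchase that would push an address past its phase cap."""
    spent = purchases.get(address, 0.0)
    if spent + amount_usd > PHASE_CAPS[phase]:
        raise ValueError(f"{address} exceeds the phase {phase} cap")
    purchases[address] = spent + amount_usd

buy("0xabc", 8_000, phase=1)
# buy("0xabc", 5_000, phase=1)   # would raise: exceeds the $10,000 phase-1 cap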
>> You're investing now, what do you look for in deals? There's more money chasing good deals now; as we can see, there's been a flight to quality, obviously. Still a great global landscape. What are you looking for, and what's your advice to folks who are looking to do a token sale? >> The big thing we look for is real projects (laughs), and there are not that many out there. So we do look for a real use case that makes sense, because there's a lot of folks out there just sticking a Blockchain tag onto anything. >> Like Kodak, for instance. >> Yeah. >> Kodak's the prime example. >> Yes. There are projects out there doing interesting things. Guardium is doing some neat things in terms of 911 response, opening that up and creating an alternative to government services. There's WorkCoin, which is-- >> Did you invest in Guardium? >> Yeah, in Guardium, yeah. >> I interviewed them in Puerto Rico. >> Okay, great. >> Great project. >> So very interesting. I was recently giving a talk at a university in Guatemala, and for the students there at the business school, the message really resonated: okay, government 911 is maybe not the ultimate solution for getting help when you need it. >> Well, I think there's a lot of this "AI for good" concept going to "Blockchain for good," because you're seeing a lot of these easy, low-hanging-fruit applications around these old structural institutions. And that's where the action is, right? I mean, do you agree? >> Yeah, yes. And the other thing we're looking at is not just Blockchain. So I really like talking about the field more as crypto, and I have a little video I did on calling it kind of decentralized, crypto-enabled applications, or platforms. So, beyond Blockchain, we have DAGs, Directed Acyclic Graphs, one interesting-- >> Like Hashgraph. >> Yeah, Ha-- >> Hashgraph's a DAG, isn't it? It's kind of a DAG, Hashgraph? >> Yeah, so, I'm not a huge fan of Hashgraph; one that I do like is called Guld, G-U-L-D, which is, again, thinking beyond the Blockchain. 'Cause we get so tied into Blockchain, Blockchain, Blockchain-- >> What does thinking beyond the Blockchain mean to you? >> So, the proof-of-work process, the mining process, the creating-new-blocks process, is one way of doing things. But we have all these other things going on in crypto, like the signing process, and so on, and you can use those in a DAG, a different architecture than just this mining-new-blocks, you know, mental model. And so that can be used for different use cases: for publishing, for group consensus, and so on. And so Guld is an example of a project where it looks like there is something real there, and that's a very interesting project. >> Advice for folks that are looking at tokenizing? Because, again, we've said this on theCUBE many times, people know I'm beating this drum: you've got the startups that see an opportunity, which is fantastic, and then on the other end of the spectrum you've got the "Oh, shit, we're out of business, "let's pivot, throw the Hail Mary, put Blockchain on it, "crypto, and get an ICO going." And then you've got these growth companies, either self-funded or growing, that have a decentralized kind of feel to them, an architecture that's compatible with tokenization. >> Yeah. >> So we see those three categories. Do you agree? Am I missing anything, in terms of the profile? And which ones do you like? >> Well, I think one thing that we need to look at, in each of those cases, is: is decentralization actually happening in the project? And are people actually thinking about decentralization? Because it can be scary for a traditional company!
Because, if it truly becomes decentralized, you're not controlling it anymore. And so, that is-- >> If you're based on control, then it's incompatible. >> And that's the real Hail Mary, right? (laughs) When you give up that control, if you give it up. So we have examples coming out where, you know, Ripple is running just a few nodes, Neo's running a few more, and, you know, things that are not really decentralized, and they're saying, "Well, we're going to be" (laughs), you know? >> Will they ever? >> Is it going to be in the future-- >> Yeah, that's always the question, will they ever be? They've already made their money, well, certainly Ripple's done well, but, I mean, what's the incentive to go-- >> Yeah. >> Decentralized. >> Yeah, so if you are creating a new project, the benefit from this architecture, beyond the money, is to think about it in that decentralized way, and figure out token economics that work in that context, in that paradigm! And that's really where the challenge is, but also really where some of the benefits can arise, because that is what enables truly new ways of doing things. >> Talk about the dynamic, because I live in Silicon Valley, I've been here 19 years, going on 20, you know, I moved from the east coast, and basically, if you weren't here, well, this is where the action is. If tech is a sport, this is where all the athletes are. That's now changed, as you mentioned earlier when we started; it's everywhere. Now, there are also jurisdictional issues. I mean, the US, one guy told me, is turning into Europe with all these regulations; there's not as much free capital as you think. And we certainly know that, with the SEC and others putting the clamps down. But structuring the token is a concern, right? Or a consideration. >> Yes. >> And a concern. So, you know, a US entrepreneur, what should they do, in your opinion? And if someone's outside the US, what do they do? What's the playbook, or, not playbook, what's the best path right now? >> Leave the US (laughs). Move out of the US. >> Tell that to the wife and four kids: "See you later." Yeah, but that's real, that's legit. >> Come and check out Panama; one of my friends is building a Blockchain incubator, a crypto incubator, down there. I mean, I think if you're-- >> What's it like to move out of the United States? I know you just recently went to Panama for this, but what's it like? Is it scary down there? I mean, is it entrepreneurially friendly? What's the vibe, what's the scene like? Take a minute to explain that. >> So I've actually been out there 12 years now, in Panama. One of the neat things is, you want a place that has an international outlook, international perspectives, so you want to think in terms of a Dubai, a Singapore, a Hong Kong. And Panama has some aspects of that. It's not perfect, but it does have that international perspective, thanks to the Canal! So it has, you know, a hundred years of that! (laughs) >> It also has the Panama Papers, which was negative blowback for those guys. So it's a safe place to do commerce, in your opinion? >> Um, it is a nice geographic base to do international commerce. >> Got it. >> So you don't necessarily want to rely on the local jurisdiction, but in terms of a geographic base that is US time zone, US dollar, no hurricanes, it's a very interesting place. >> Puerto Rico's got the hurricanes, we know that. >> Yeah.
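To make Scott's earlier "beyond the Blockchain" point concrete: in a DAG ledger, each signed entry can reference several parent entries directly, so publishing does not wait on block mining. The sketch below is purely illustrative and is not Guld's or Hashgraph's actual design; a real system would sign entries with keypairs rather than bare hashes.

import hashlib, json

class DagEntry:
    """A ledger entry that points at any number of parent entries."""
    def __init__(self, author, payload, parents):
        self.author, self.payload = author, payload
        self.parent_ids = [p.entry_id for p in parents]
        body = json.dumps([author, payload, self.parent_ids]).encode()
        self.entry_id = hashlib.sha256(body).hexdigest()

genesis = DagEntry("root", "genesis", [])
# Two entries published concurrently off the same parent, no mining race...
a = DagEntry("alice", "post #1", [genesis])
b = DagEntry("bob", "post #2", [genesis])
# ...and a later entry that merges both branches by citing them as parents.
merge = DagEntry("carol", "ack both", [a, b])
print(merge.entry_id, "parents:", merge.parent_ids)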
>> Final thoughts, just overall perspective. You've been around the block, we've both been around the block. I mean, I kind of have these pinch-me moments, like, "Damn, this is a great time, "I wish I was 22." I mean, do you have those? What's it like? How do you explain this environment if people ask you, "Hey, what was it like in the old days?" You know, when you had to provision your own stack and do all the stuff. It's pretty interesting right now. What are your thoughts? >> Yeah, I mean, I think we're going through an interesting moment right now, where we are getting to a point where the forces of centralization are coming up against the forces of decentralization, and that includes the regulatory as well as the business side. And so I think it is important, as we look at where to dedicate our efforts, to really find ways to increase decentralization as a factor that encourages creativity and entrepreneurship. >> Yeah, it really is. I think it's a great environment. Decent.bet, make your bets. Any updates on how to get tokens, what people can expect? A quick plug for Decent. >> Yeah, check out our website, we've got links to exchanges; the token is currently listed on Cryptopia, HitBTC, and a couple of other exchanges. And yeah, please check out the test net, please check out the white paper, and just learn about how this protocol works, how this platform works. I think it is very inspiring as a structure. >> Adrian Scott here, inside theCUBE. Soma Capital, also an experienced entrepreneur himself, a technologist who has been through the ICO process, head of technology at decent.bet. We'll be checkin' it out. It's theCUBE Conversation, I'm John Furrier, here in Palo Alto, California. Thanks for watching. (bright music)

Published Date : Mar 29 2018


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Panama | LOCATION | 0.99+
FCC | ORGANIZATION | 0.99+
Guatemala | LOCATION | 0.99+
Adrian Scott | PERSON | 0.99+
Adrian Scott | PERSON | 0.99+
Puerto Rico | LOCATION | 0.99+
Paul Martino | PERSON | 0.99+
John | PERSON | 0.99+
John Furrier | PERSON | 0.99+
US | LOCATION | 0.99+
United States | LOCATION | 0.99+
10,000 | QUANTITY | 0.99+
80 minutes | QUANTITY | 0.99+
$10,000 | QUANTITY | 0.99+
Soma Capital | ORGANIZATION | 0.99+
Silicon Valley | LOCATION | 0.99+
20,000 dollars | QUANTITY | 0.99+
95% | QUANTITY | 0.99+
Palo Alto, California | LOCATION | 0.99+
12 years | QUANTITY | 0.99+
19 years | QUANTITY | 0.99+
Las Vegas | LOCATION | 0.99+
end of April | DATE | 0.99+
Decent Bet | ORGANIZATION | 0.99+
Palo Alto | LOCATION | 0.99+
Australia | LOCATION | 0.99+
5% | QUANTITY | 0.99+
Hong Kong | LOCATION | 0.99+
Jedidiah Taylor | PERSON | 0.99+
four kids | QUANTITY | 0.99+
20 | QUANTITY | 0.99+
first phases | QUANTITY | 0.99+
SiliconeANGLE Media | ORGANIZATION | 0.99+
Singapore | LOCATION | 0.99+
DraftKings | ORGANIZATION | 0.99+
each | QUANTITY | 0.99+
end of September | DATE | 0.99+
Guardium | ORGANIZATION | 0.99+
Dubai | LOCATION | 0.98+
first | QUANTITY | 0.98+
theCUBE | ORGANIZATION | 0.98+
mid-January | DATE | 0.98+
Decent Bet | ORGANIZATION | 0.98+
Kodak | ORGANIZATION | 0.98+
a month | QUANTITY | 0.98+
both | QUANTITY | 0.98+
22 | QUANTITY | 0.98+
Twitter | ORGANIZATION | 0.98+
Facebook | ORGANIZATION | 0.98+
52,000 | QUANTITY | 0.97+
one | QUANTITY | 0.97+
One | QUANTITY | 0.97+
decent.bet | ORGANIZATION | 0.97+
first-time | QUANTITY | 0.97+
Europe | LOCATION | 0.96+
Rocket.Chat | TITLE | 0.96+
three categories | QUANTITY | 0.96+
one thing | QUANTITY | 0.95+
five years ago | DATE | 0.95+
DecentBet | ORGANIZATION | 0.94+
ICO | ORGANIZATION | 0.94+
Ether | PERSON | 0.94+
Ripple | ORGANIZATION | 0.93+
911 | OTHER | 0.93+
a hundred years | QUANTITY | 0.92+
Ask Me | TITLE | 0.92+
first-tier | QUANTITY | 0.92+
more than 52,000 Ether | QUANTITY | 0.91+