Breaking Analysis: Snowflake's IPO, the Rewards & Perils of Early Investing
From theCUBE studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR, this is Breaking Analysis with Dave Vellante.

Snowflake's eye-popping IPO this week has the industry buzzing. We've had dozens and dozens of inbound PR pitches from firms trying to hook us, offering perspectives on the Snowflake IPO so they can pitch us on their latest and greatest product. People are pumped, and why not? An event like this doesn't happen very often. Hello everyone, and welcome to this week's Wikibon CUBE Insights, powered by ETR. In this Breaking Analysis we'll give you our take on the Snowflake IPO and address the many questions we've been getting on the topic. At the end of this segment I'm also going to discuss an angle for getting in on the ground floor of investments; it's not for the faint of heart, but it's something I believe is worth talking about.

Now let's first talk about the hottest IPO in software industry history. First, congratulations to the many people at Snowflake. The big hitters are all in the news: Slootman, Muglia, Speiser, Buffett, Benioff, even Scarpelli. Interestingly, you don't hear much about the founders. They're quite humble, and we'll talk about them in future episodes, but they created Snowflake; they had the vision and the smarts to bring in operators who could get the company to this point. So awesome for them. But I'm especially happy for the rank and file, the many Snowflake people for whom an event like this really can be life-changing, versus the billionaires on the leaderboard. Fantastic for you.

Okay, but let's get into the madness. As you know by now, Snowflake IPO'd at a price of $120.
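As a back-of-the-envelope sketch of the day-one math discussed in this episode (the $120 offer price and the roughly $245 open; the figures are approximate, not exact quotes), here's what the pop looks like:

```python
# Back-of-the-envelope IPO "pop" math using the approximate figures
# cited in this episode; these are illustrative, not exact quotes.
offer_price = 120.0   # the IPO offer price per share
open_price = 245.0    # roughly what the stock opened at

# Return captured by anyone allocated shares at the offer price.
day_one_pop = open_price / offer_price - 1
print(f"Day-one pop: {day_one_pop:.0%}")  # roughly 104%
```

That doubling at the open is exactly what a small float and huge demand will do.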
Now, unless you knew a guy, you paid around $245 at the open, and that's if you got in; otherwise you bought at a higher price, so you kind of just held your nose and made the trade. Snowflake's value went from $33 billion to more than $80 billion in a matter of minutes. There's a lot of finger-pointing going on: people are claiming the offering was underpriced and that Snowflake left $4 billion on the table. Please, stop. That's just crazy to me. Snowflake's balance sheet is in great shape thanks to this offering, and I'm not sure jamming later-stage investors even more would have been the right thing to do. This was a small float, around 10 percent of the company I think, so you would expect a sharp uptick on day one. I had predicted a doubling to a $66 billion valuation, and it ended up around $70 billion.

The big question we now get is: is this a fair valuation, and can Snowflake grow into its value? We'll address this in more detail, but the short answer is that Snowflake is overvalued right now, in my opinion, but it can grow into its valuation, and of course, as always, there are going to be challenges. The other comment we get is, "Yeah, but the company is losing tons of money," and I say, no kidding, that's why they're so valuable. We've been saying for years that the street is rewarding growth right now, because investors understand that to compete in software you need massive scale. So I'm not worried in the least about Snowflake's bottom line, not yet. Eventually I'll pay much closer attention to operating cash flow, but right now I want to see growth; I want to see them grow into their valuation. The other common question we get: should I buy, when should I buy, what are the risks, and can Snowflake compete with the biggest cloud vendors? I'll say this before we get into it, and I've said it before: it's very rare that you won't get better buying opportunities than day one of an IPO, and I think in this case
you will. I remember back in 2015, I think it was the first calendar quarter, ServiceNow missed its earnings and the stock got hit, and we had the opportunity to interview Frank Slootman, then CEO of ServiceNow, right after that. I think it's instructive to hear what he said. Let's listen; roll the clip.

"Well, yeah, I think that a lot of the high-flying cloud companies, and obviously we're one of them, we're priced to perfection, right? And that's not an easy place to be for anybody. We're not really focused on that. This is a marathon; every quarter is one mile marker. You can't get too excited about one versus the other. We're really pacing ourselves. We're building an enterprise that's going to be here for a long time."

After that we saw the stock drop as low as $50. Today ServiceNow is a $450 stock. My point is that Snowflake, like ServiceNow, is going to be priced to perfection, and there will be bumps in the road, possibly macro factors or others, and if you're a believer you'll have opportunities to get in. So be patient.

Finally, I'm going to make some comments later, but I'll give you the bumper sticker right now. I calculated the weighted average price that the insiders paid for Snowflake from the S-1, and it came out to around six dollars a share. I heard somebody say on TV it was five dollars, but my weighted average math got me to six. Regardless, on day one of the IPO the insiders made a 50x return on their investment. If you bought on day one, you're probably losing some money or maybe about even. And there are some ground-floor opportunities out there that are complicated and may be risky, but if you're young and motivated, or older and have some time to research, I think you'll be interested in what I have to say later on.

All right, let's compare Snowflake to some other companies on a valuation basis. This ought to be interesting. This
chart shows some high flyers as compared to Snowflake. We show the company, the trailing-twelve-month revenue, the market cap at the close of the 16th, the day Snowflake IPO'd, and then we calculate and sort the data on the trailing-twelve-month revenue multiple. The last column is the year-on-year growth rate of the last quarter. I used trailing twelve months because it's simple, it's easy to understand, and it makes the revenue multiple bigger, so it's more dramatic. Many prefer to use forward revenue, and that's why I put the growth rate there: you can pick your own projected revenue growth and do the math yourself.

So let's start with Snowflake: $400 million in revenue, and that's based on a newish pricing model of consumption, not a SaaS subscription that locks you in for a year or two years or three years. I love this model because it's true cloud. I've talked about it for a while, so I'm not going to dwell on it today, but you can see the trailing-twelve-month revenue multiple is massive, and the growth rate is 120 percent, which is very, very impressive for a company this size. We put Zoom in the chart just because, why not, and the growth rate is sick, so who knows how that correlates to the revenue multiple; but as you can see, Snowflake actually tops Zoom's frothiness on that metric. Now, maybe Zoom is undervalued; I should take that back.

I think CrowdStrike is really interesting here, as a company we've been following and talking about quite a bit. In my last security Breaking Analysis they were at a 65x trailing-twelve-month revenue multiple, and you can see how that's jumped since they reported and beat expectations. But they're similar in size to Snowflake, with a slower growth rate and a lower revenue multiple, so there's some correlation between that growth rate and the revenue multiple, sort of. Now, Snowflake pulled back on day two. It was down early this morning, as you would expect, with both the market being off
and maybe some profit-taking; if you got an allocation at $120, why not take some profits and play with house money? Snowflake's value, which actually bounced back, is hovering today at just under $70 billion, and that brings the revenue multiple down a bit, but it's still very elevated. Now, if you project 2x growth, let's say 100 percent for next year, and the stock stays in some kind of range, which I think it likely will, you could see Snowflake coming down to CrowdStrike's revenue multiples in twelve months. It will depend, of course, on Snowflake's earnings reports, which I'm sure are going to beat estimates for the next several quarters, and if it's growing faster than these others at that time, it should command a premium. Wherever the market goes, up or down, on a relative basis Snowflake still should command a premium at higher growth rates. You can also see in this chart Shopify, MongoDB, Twilio, and ServiceNow and their respective growth rates. Shopify is incredibly impressive, MongoDB and Twilio as well; ServiceNow is like the old dog in this mix, so that's kind of interesting.

Now, the other big question we get is: can Snowflake grow into its valuation? This is a chart we shared with you a bit ago, and it speaks to Snowflake's total available market and its TAM expansion opportunity. This is something we saw Slootman execute at ServiceNow, when everybody underestimated that company's value, and I'll briefly explain here. Look, Snowflake is disrupting the traditional data warehouse and data lake markets. Data lake spending is relatively small, under $2 billion, but data lakes are inexpensive, and that's what made them attractive. The EDW market, however, the enterprise data warehouse market, is much, much larger. Traditional EDWs are big, slow, cumbersome, expensive, and complicated, but they've been
operationalized, and they're critical for companies' reporting and basic analytics. But they've failed to live up to their promise of the 360-degree view of the customer and real-time analytics. A customer told me a while ago, "My data warehouse is like a snake swallowing a basketball." He gave me an example where a change in a regulation, this was a financial company, would force a change in the data model in their data warehouse; they'd have to ingest all this new data, and the data warehouse choked. And every time Intel came out with a new processor, they'd rush out and throw more compute at the problem. He called this "chasing the chips."

What Snowflake did was envision a cloud-native world where you could bring compute to massive data volumes on an elastic basis and only pay for what you use. It sounds so simple, but technically, Snowflake's founders' innovation of separating compute from storage to leverage the flexibility of the cloud really was profound, and clearly, based on this week's performance, it was the right call. I'll come back to this in a bit.

Where we think Snowflake is going is to build a data cloud, and you can see this in the chart: a place where your data can be ingested and accessed to perform near-real-time analytics with machine learning and AI. Snowflake's advantage, as we've discussed in the past, is that it runs on any cloud and can ingest data from a variety of sources. Now, there are some challenges here. We're not saying that Snowflake is going to participate in all the use cases we show. However, with its resources, we expect Snowflake to create new capabilities organically and then do tuck-in acquisitions that will allow it to attack many more use cases in adjacent markets. So you look at this chart, and if the third layer is $60 billion, it means Snowflake needs to extend into the fourth layer, because its valuation is already over $60 billion and it's not going to
get 100 percent market share. We call this next layer automated decision making: real-time analytics and systems making decisions for humans and acting in real time. Clearly, data is going to be a pretty critical part of this equation. At this point it's unclear that Snowflake has the capability to go after this space, as much of the data in this area is probably going to live at the edge, but Snowflake is betting on becoming a data layer across clouds and presumably at the edge, and as you can see, this market is enormous. So there's no lack of TAM, in our view, for Snowflake.

That brings us to the other big question, around competition. Everybody's talking about this. Look, a lot of the investment thesis behind Snowflake is that what Slootman and his army, including CFO Mike Scarpelli, did at ServiceNow will be repeated. Scarpelli is an operational guru; he keeps the engine running with very, very tight controls, and you know what, it's a pretty good bet on Slootman and Scarpelli and their team. I'm not denying that. But I will tell you that Snowflake's competition is much more capable than what ServiceNow faced in its early days.

Here's a picture of some of the key competitors. This is one of our favorites, the XY graph. On the vertical axis is net score, or spending momentum; that is ETR's version of velocity, based on their quarterly surveys. I'm showing the July survey; October is in the works and in the field as I speak. On the horizontal axis is market share, or pervasiveness in the data set. It's a proxy for market share; it's based on mentions, not dollars, and that's why Microsoft is so far to the right: they're huge, they're everywhere, and they get a lot of mentions. The more relevant data to us is the position of Snowflake. It remains one of the highest net scores in the entire ETR survey, not just the database sector. AWS is its biggest competitor, because most of Snowflake's business runs on AWS, but
Google BigQuery, you can see, is technically the most capable relative to Snowflake, because it's a true cloud-native database built from the ground up, whereas AWS took a database that was built for on-prem, ParAccel, and brilliantly made it work in the cloud by re-architecting many of the pieces; but it still has legacy parts to it. Then there's Oracle. Oracle is huge and slow-growth overall, but it's making investments in R&D, which we've talked about a lot, and that's going to allow it to hold on to its huge customer base. And you can see Teradata and Cloudera. Cloudera is a proxy for data lakes, which are low-cost, as I said, and Cloudera, which acquired Hortonworks, is credited with the commercialization of the whole big data and Hadoop movement. Teradata is in there as well, and of course they've been around forever. There are a zillion other database players; we've heard from a lot of them this week via that inbound PR I talked about, but these are the ones we wanted to focus on today. The bottom line is we expect Snowflake's vertical-axis spending momentum to remain elevated, and we think it will continue to steadily move to the right.

Now let's drill into this data a bit more. Here we break down the components of ETR's net score, specifically for Snowflake over time. Remember: lime green is new adoptions; forest green is spending five percent or more relative to last year; gray is flat spending; pink is less spending; and bright red is "we're leaving the platform." The line up top is net score, which subtracts the red from the green and is an indicator of spending velocity. The yellow line at the bottom is market share, or pervasiveness in the survey, based on mentions. Note the blue text; that's ETR's number one takeaway on Snowflake: second-half 2020 spending intentions on Snowflake continue to trend robustly, mostly characterized by high customer acquisition
and expansion rates; new-adoption market share among all customers is simultaneously growing. Impressive.

Let's now look at Snowflake against the competition in Fortune 500 customers. Here we show net score, again spending momentum, over time for some of the key competitors, and you can see Snowflake's net score has actually increased since the April survey. Again, this is the July survey; the April survey was taken at the height of the US lockdown. Snowflake's net score is actually higher in the Fortune 500 than it was overall, which is a good sign for spend, because the Fortune 500 spends more. Google, MongoDB, and Microsoft also show meaningful momentum growth since the April survey. Notably, AWS has come off its elevated levels from last October and April; it's still strong, but that's something we're going to continue to watch.

Finally, let's look at Snowflake's market share, or pervasiveness, within the big three cloud vendors. Again, this is a cut on the Fortune 500, and you can see there are 125 respondents within the big three clouds and the Fortune 500, and 21 Snowflake respondents within that base of 125. You can see steady and consistent growth of share, not huge Ns, but enough to give some confidence in the data. And again, note the ETR callout: this trend is occurring despite the fact that each of the big three cloud vendors has its own competitive offering.

Okay, but I want to stress: this is not a layup for Snowflake. As I've said, this is not ServiceNow part two; it's a different situation. So let's talk about that. The competition here is not BMC, which was ServiceNow's target, as much as I love the folks at BMC. We're talking here about AWS, Microsoft, and Google. Amazon, with Redshift, is dialed into this. I've said often that they have copycatted Snowflake in many cases, and last fall at re:Invent we heard Andy Jassy make a big deal about separating compute from storage, and he took a kind of swipe at Snowflake without mentioning them by
name. But let's listen to what Andy Jassy had to say, and then we'll come back and talk about it. Play the clip.

"What we did, because we have Nitro, like I was talking about earlier, is we built unique instances that have very fast bandwidth, so that if you actually need some of that data from S3 for a query, it moves much faster than if you just had to leave it there without that high-speed-bandwidth instance. And so with RA3s you get to separate your storage from your compute. If it turns out, by the way, that you're not using all the SSD on that local SSD, you only pay for what you use. So a pretty significant enhancement for customers using Redshift. At the same time, if you think about the prevailing way that people are thinking about separating storage from compute, letting people scale separately, as well as how you're going to do this large-scale compute where you move the storage to a bunch of awaiting compute nodes, there are some issues with this that you've got to think about. The first is, think about how much data you're going to have at the scale that we're at. But then just fast-forward a few years and think about how much data you're going to actually have to move over the network to get to the compute."

So look, first of all, Jassy is awesome. He stands up at these events, like re:Invent, for two hours, and he connects trends and business to technology. He's got a very deep understanding of the tech; he's amazing. However, what AWS has done in separating compute and storage is good, but it's not as elegant architecturally as Snowflake. AWS has essentially tiered the storage off the cluster to lower the overall costs, but you can't turn off the compute completely. With Snowflake, they've truly separated compute and storage, and the reason is that Redshift, while great, is built on what was originally an on-prem architecture that they had to redo. So when Jassy talks about moving the data to compute, what
he's really saying is, "Our architecture is such that we had to do this workaround," which is actually quite clever. But this whole narrative about the prevailing ways to separate compute from storage, that's Snowflake, and moving the data to the compute really doesn't apply to Snowflake, because they'll just move the compute to the data. Thank you, Hadoop, for that profound concept.

Now, does this mean Snowflake is going to cakewalk over Redshift? Not at all. AWS is going to continue to innovate, so Snowflake had better keep moving fast: multi-cloud, new workloads, adjacent markets, TAM expansion, et cetera. Microsoft: they're huge, but as usual there's not a lot to say about them. They're everywhere; they put out 1.0 products and eventually get them right, because with their heft they get mulligans that they turn into pars or birdies. But I think Snowflake is going to bring some innovations to Azure and get good traction there, in my opinion. Google BigQuery is interesting. By all accounts it gets very high technical marks. Google's playing the long game, and I would expect that Snowflake is going to have a harder time competing in Google Cloud than it does within AWS, and harder than what I'm predicting for Azure, but we'll see.

The last point here is that many are talking about the convergence of analytic and operational, transactional databases, and the thinking is that this doesn't necessarily bode well for specialists like Snowflake. I would say a couple of things. First, while it's definitely true that you're not seeing Snowflake positioning today as responding at the point of transaction, to, say, influence an order in real time, and this may have implications at the edge, which is going to have a lot of real-time inferencing, we've learned there are a lot of ways to skin a cat, and we see integration layers and innovative approaches emerging in the cloud that could address this gap and present opportunities for Snowflake.
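ETR's net score methodology, described earlier in this segment (the greens, new adoptions plus spending more, minus the reds, spending less plus leaving), can be sketched as follows; the survey percentages below are made up for illustration, not actual ETR data:

```python
# Sketch of ETR's net score calculation: the greens (new adoptions +
# spending more) minus the reds (spending less + leaving the platform).
# The survey percentages below are hypothetical, for illustration only.

def net_score(new_adoption, spend_more, flat, spend_less, leaving):
    """All inputs are percentages of survey respondents; they should sum to 100."""
    assert abs(new_adoption + spend_more + flat + spend_less + leaving - 100) < 1e-9
    greens = new_adoption + spend_more
    reds = spend_less + leaving
    return greens - reds

# A hypothetical Snowflake-like breakdown: lots of green, very little red.
score = net_score(new_adoption=30, spend_more=45, flat=20, spend_less=4, leaving=1)
print(score)  # 70
```

Note that flat spending counts in neither direction, which is why a high net score requires genuinely expanding accounts, not just retention.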
The other thing I'd say is that maybe that thinking misses something altogether: the idea of Snowflake in that third data layer we showed you in our TAM chart, the data-as-a-service layer, or data cloud, which is maybe a giant opportunity that they are uniquely positioned to address, because they're cloud-agnostic, they've got the vision, and they've got the architecture to very simply ingest data and then serve it up to businesses. Nonetheless, we're going to see this battle continue between what I've often talked about, these integrated suites, converged databases in the case of Oracle, converged pipelines in the case of the cloud guys, versus best-of-breed players like Snowflake. We talk about this all the time, and there really isn't one single answer; it's horses for courses and customer preferences.

Okay, I know you've been waiting for me to tell you about the angles on ground-floor investing, and you probably think this is going to be crazy, but bear with me. I've got to caution you: this is a bit tongue-in-cheek, and it's one big buyer-beware. But as I said, the insiders on Snowflake had a 50x return on day one; you probably didn't. So I want to talk about the confluence of software engineering, cryptography, and game theory, powered by the underlying value of blockchain. We're talking here about innovations around a new internet, a distributed web or dWeb, where many distributed computers come together to form one computer that guarantees trust between two or more users for a variety of use cases, not just a financial store of value like Bitcoin, but that too. The motivation behind this is the fact that a small number of companies, say five or six today, control the internet and have essentially co-opted the major protocols like TCP, HTTP, SMTP, POP3, et cetera. The people we're showing here on this chart are working on these new innovations. There are many of them, but I'll just name a few. Olaf Carlson-Wee
started Polychain Capital to invest in core infrastructure around these new computing paradigms. Mark Nadal is working on new dApps. Tim Berners-Lee, who invented the web, has a project called Solid at MIT that emphasizes data ownership and privacy. And of course Satoshi got it all started with the invention of Bitcoin and the notion of fractional shares. By the way, the folks at Andreessen Horowitz are actively making bets in this space, so maybe this is not so crazy.

Here's the premise. If you're a little guy and you wanted to invest in Snowflake, you couldn't until late in the game. If you wanted to invest in the LAMP stack directly in the late '90s, there was no way to do that; you had to wait for Red Hat to go public to get a piece of the Linux action. But in this world we're talking about, there are opportunities that are not mainstream, and often they're based, yes, on cryptocurrencies. Again, it's dangerous; there are scams and losers. But if you do your homework, there are actually vehicles for you to get in on the ground floor, and some of these innovations are going to take off. You could get a 50x or a 100-bagger, but you have to do your research, and there's no guarantee that these innovations will be able to take on the big internet giants. There are really smart technologists and software engineers who are young and mission-driven, forming a collective voice against a dystopian future, because they want to level the playing field on the internet. This may be the disruptive force that challenges today's giants, and if you're game, I would take a look at the space and see if it's worth throwing a few dollars at it.

Okay, a little tangent from Snowflake, but I wanted to put that out there. Snowflake, wow, closes its first trading week as a company worth $66 billion, roughly the same as Goldman Sachs, worth more than VMware, and the list goes on. What
more is there to say, other than: remember, these episodes are all available as podcasts, so please subscribe. I publish weekly on wikibon.com and siliconangle.com, so please check that out, and please comment on my LinkedIn posts, or feel free to email me at david.vellante@siliconangle.com. This is Dave Vellante for theCUBE Insights, powered by ETR. Thanks for watching, everyone. We'll see you next time.
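As a footnote to the insider-return math mentioned in this episode (the roughly six-dollar weighted average price from the S-1 and the 50x day-one return), here's how that calculation works. The tranches below are hypothetical, since the actual S-1 figures aren't reproduced here; only the method mirrors the episode's math:

```python
# Sketch of a weighted-average price-paid calculation from an S-1.
# The tranches below are hypothetical; only the method mirrors the
# roughly-six-dollar weighted average discussed in the episode.
tranches = [
    # (shares, price paid per share in dollars)
    (50_000_000, 2.00),
    (30_000_000, 8.00),
    (20_000_000, 13.00),
]

total_shares = sum(shares for shares, _ in tranches)
total_cost = sum(shares * price for shares, price in tranches)
weighted_avg = total_cost / total_shares
print(f"Weighted average price paid: ${weighted_avg:.2f}")  # $6.00

# Return multiple at a price in the day-one trading range.
day_one_price = 300.0
print(f"Return multiple: {day_one_price / weighted_avg:.0f}x")  # 50x
```

The same two-line pattern (total dollars in, divided by total shares) is all the "bumper sticker" math amounts to; the hard part is pulling the tranche data out of the filing.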
Vijoy Pandey, Cisco | KubeCon + CloudNativeCon Europe 2020 - Virtual
>> From around the globe, it's theCUBE, with coverage of KubeCon and CloudNativeCon Europe 2020 Virtual, brought to you by Red Hat, the Cloud Native Computing Foundation, and ecosystem partners. >> Hi, and welcome back to theCUBE's coverage of KubeCon + CloudNativeCon 2020 in Europe, of course the virtual edition. I'm Stu Miniman, and I'm happy to welcome back to the program one of the keynote speakers, who's also a board member of the CNCF: Vijoy Pandey, vice president and chief technology officer for Cloud at Cisco. Vijoy, nice to see you, and thanks so much for joining us. >> Thank you, Stu, and nice to see you again. It's a strange setting to be in, but as long as we are both healthy, everything is good. >> Yeah, we still get to be together a little bit, even while we're apart. We love the engagement and interaction that we normally get through the community, but we just have to do it a little bit differently this year. So we're going to get to your keynote. We've had you on the program to talk about "Network, Please Evolve"; we've been watching that journey. But why don't we start first with you: you've had a little bit of change in roles and responsibility. I know there's been some restructuring at Cisco since the last time we got together. So give us the update on your role. >> Yeah, so let's start there. I've taken on a new responsibility: VP of Engineering and Research for a new group that's been formed at Cisco. It's called Emerging Tech and Incubation. Liz Centoni leads it, and she reports to Chuck. The charter for this new team is to incubate the next bets for Cisco. And, as you can imagine, it's natural for Cisco to start with bets which are closer to its core business, but the charter for this group is to move further and further out from Cisco's core business and take that core into newer markets, newer products, and newer businesses.
I am running the engineering and research for that group. And, again, the whole deal behind this is to be a little bit nimble, to be a little startupy in nature, where you bring in ideas, you incubate them, you iterate pretty fast, and you throw out 80% of those and concentrate on the 20% that make sense to take forward as a venture. >> Interesting. So it reminds me a little bit, but different, of when John Chambers, a number of years back, talked about various adjacencies, trying to grow those next multi-billion-dollar businesses inside Cisco. In some ways, Vijoy, it also reminds me a little bit of your previous company, very well known for driving innovation and giving engineering 20% of their time to work on things. Give us a little bit of insight: what's an example of a bet that you might be looking at in this space? Bring us inside a little bit. >> Well, that's actually a good question, and a little bit of that comparison, those are conversations taking place within Cisco as well: how far out from Cisco's core business do we want to get when we're incubating these bets? And, yes, my previous employer, Google X, actually goes pretty far out when it comes to incubations. The core business is primarily around ads, now Google Cloud as well, but you have things like Verily and Calico and others which are pretty far out from where Google started. The way we are looking at these things within Cisco is that it's a new muscle for Cisco, so we want to prove ourselves first. So the first few bets that we are betting on are pretty close to Cisco's core, but still not fitting into a Cisco BU when it comes to go-to-market or business alignment. The first bet we are taking on is around the API being the queen when it comes to the future of infrastructure, so to speak.
So it's not just making our infrastructure consumable as infrastructure as code, but also talking about developer relevance, talking about how developers are actually influencing infrastructure deployments. So if you think about the problem statement in that sense, then networking needs to evolve. And I talked a lot about this in the past couple of keynotes, where Cisco's core business has been around connecting and securing physical endpoints, physical I/O endpoints, wherever they happen to be, of whatever type they happen to be. And one of the bets that we are, actually two of the bets that we are going after, are around connecting and securing API endpoints, wherever they happen to be, of whatever type they happen to be. And so API networking, or app networking, is one big bet that we're going after. Our other big bet is around API security, and that has a bunch of other connotations to it, where we think about security moving from runtime security, where traditionally Cisco has played in that space, especially on the infrastructure side, but moving into API security, which is earlier in the development pipeline and higher up in the stack. So those are two big bets that we're going after, and as you can see, they're pretty close to Cisco's core business but also very differentiated from where Cisco is today. And once you prove some of these bets out, you can walk further and further away, or a few degrees away, from Cisco's core as it exists today. >> All right, well Vijoy, I mentioned you're also on the board for the CNCF, maybe let's talk a little bit about open source. How does that play into what you're looking at for emerging technologies and these bets? You know, for so many companies, that's an integral piece, and we've watched, you know, really, the maturation of Cisco's journey, participating in these open source environments. So help us tie in where Cisco is when it comes to open source.
>> So, yeah, so I think we've been pretty deeply involved in open source in our past. We've been deeply involved in Linux Foundation Networking. We've actually chartered FD.io as a project there, and we still are. We've been involved in OpenStack. We are big supporters of OpenStack. We have a couple of products that are on the OpenStack offering. And as you all know, we've been involved in CNCF right from the get-go as a foundational member. We brought NSM in as a project. It's in the Sandbox currently. We're hoping to move it forward. But even beyond that, I mean, we are big users of open source. You know, a lot of the SaaS offerings that we have from Cisco, and you would not know this if you're not inside of Cisco, but Webex, for example, is a big, big user of Linkerd, right from the get-go, from version 1.0. But we don't talk about it, which is sad. For example, we use Kubernetes pretty deeply in our DNAC platform on the enterprise side. We use Kubernetes very deeply in our security platforms. So we are pretty deep users internally in all our SaaS products. But we want to press the accelerator and accelerate this whole journey towards open source quite a bit moving forward as part of ET&I, Emerging Tech and Incubation, as well. So you will see more of us in open source forums, not just the CNCF, but very recently we joined the Linux Foundation for Public Health as a premier foundational member. Dan Kohn, our old friend, is actually chartering that initiative, and we actually are big believers in handling data in ethical and privacy-preserving ways. So that's actually something that enticed us to join the Linux Foundation for Public Health, and we will be working very closely with Dan and the foundational companies there to not just bring open source, but also evangelize and use what comes out of that forum. >> All right. Well, Vijoy, I think it's time for us to dig into your keynote.
We've spoken with you in previous KubeCons about the "Network, Please Evolve" theme that you've been driving on, and a big focus you talked about was SD-WAN. Of course anybody that's been watching the industry has watched the real ascension of SD-WAN. We've called it one of those just critical foundational pieces of companies enabling Multicloud, so help us, you know, help explain to our audience a little bit, you know, what do you mean when you talk about things like CloudNative SD-WAN, and how does that help people really enable their applications in the modern environment? >> Yeah, so, well, we've been talking about SD-WAN for a while. I mean, it's one of the transformational technologies of our time, where prior to SD-WAN existing, you had to stitch all of these MPLS labels together and actually get data connectivity across to your enterprise or branch, and SD-WAN came in and changed the game there. But I think SD-WAN as it exists today is application-unaware. And that's one of the big things that I talk about in my keynote. Also, we've talked about how NSM, or Network Service Mesh, on the other side of the spectrum, has actually helped us simplify operational complexities, simplify the ticketing and process hell that any developer needs to go through just to get a multicloud, multicluster app up and running. So the keynote actually talked about bringing those two things together, where we've talked about using NSM in the past, in chapter one and chapter two. This is chapter three, and at some point I would like to stop the chapters. I don't want this to be like an encyclopedia of networking (mumbling). But we are at chapter three, and we are talking about how you can take the same consumption models that I talked about in chapter two, which is just adding a simple annotation in your CRD, and extend that notion of multicloud, multicluster wires within the components of our application, but extending it all the way down to the user in an enterprise.
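As an editor's aside, the "simple annotation" consumption model described here would look roughly like the sketch below: in Network Service Mesh, a workload requests an NSM-managed "wire" through a single pod annotation instead of tickets and manual network configuration. The annotation value and the network service name are hypothetical, and the exact key and schema may differ between NSM versions, so treat this as an illustration rather than the keynote's actual demo config.

```yaml
# Illustrative only: requesting a multicloud "wire" via one annotation.
# The network service name (secure-sdwan) is a made-up example.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: holo-frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: holo-frontend
  template:
    metadata:
      labels:
        app: holo-frontend
      annotations:
        # NSM-style annotation: attach this pod to a named network service
        networkservicemesh.io: "kernel://secure-sdwan/nsm-1"
    spec:
      containers:
      - name: frontend
        image: example/holo-frontend:1.0
```

The point of the design is that the developer expresses intent in one line and the control plane stitches the underlying connectivity, which is what makes extending the same model down to SD-WAN plausible.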
And as you saw in an example, Gavin Belson is trying to give a keynote holographically and he's suffering from SD-WAN being application-unaware. And using this construct of a simple annotation, we can actually make SD-WAN CloudNative. We can make it application-aware, and we can guarantee the SLOs that Gavin is looking for in terms of 3D video, in terms of file access or audio, just to make sure that he's successful and Ross doesn't come in and take his place. >> Well, I expect Gavin will do something to mess things up on his own even if the technology works flawlessly. You know, Vijoy, the modernization journey that customers are on is a never-ending story. I understand the chapters need to end on the current volume that you're working on. But, you know, we'd love to get your viewpoint. You talk about things like service mesh. It's definitely been a hot topic of conversation for the last couple of years. What are you hearing from your customers? What are some of the kind of real challenges, but also opportunities, that they see in today's CloudNative space? >> In general, service meshes are here to stay. In fact, they're here to proliferate to some degree, and we are seeing a lot of that happening, where not only are we seeing different service meshes coming into the picture through various open source mechanisms. You've got Istio there, you've got Linkerd, you've got various proprietary notions around control planes like App Mesh from Amazon. There's Consul, which is an open source project but not part of CNCF today. So there's a whole bunch of service meshes in terms of control planes coming in. Envoy is becoming a de facto sidecar data plane, whatever you would like to call it, a de facto standard there, which is good for the community, I would say. But this proliferation of control planes is actually a problem. And I see customers actually deploying a multitude of service meshes in their environment. And that's here to stay.
In fact, we are seeing a whole bunch of things that we would use different tools for, like API gateways in the past, and those functions are actually rolling into service meshes. And so I think service meshes are here to stay. I think the diversity of service meshes is here to stay. And so some work has to be done in bringing these things together, and that's something that we are trying to focus in on as well, because that's something that our customers are asking for. >> Yeah, actually you connected for me something I wanted to get your viewpoint on. Dial back, you know, 10, 15 years ago, and everybody would say, "Ah, you know, I really want to have a single pane of glass to be able to manage everything." Cisco's partnering with all of the major cloud providers. I saw, you know, not that long before this event, Google had their Google Cloud show talking about the partnership that you have with, Cisco with Google. They have Anthos. You look at Azure, which has Arc. You know, VMware has Tanzu. Everybody's talking about, really, kind of this multicluster management type of solution out there. And I just want to get your viewpoint on this, Vijoy: you know, how are we doing on the management plane, and what do you think we need to do as an industry as a whole to make things better for customers? >> Yeah, I think this is where we need to be careful as an industry, as a community, and make things simpler for our customers. Because, like I said, the proliferation of all of these control planes begs the question, do we need to build something else to bring all of these things together? And I think the SMI proposal from Microsoft is bang on on that front, where you're trying to unify at least the consumption model around how you consume these service meshes. But it's not just a question of service meshes.
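For readers who haven't seen SMI: the unification credited here to the proposal is a set of mesh-agnostic Kubernetes resources, so the same manifest can be consumed by different service mesh implementations. Below is a sketch of an SMI TrafficSplit; the API version, service names, and weights are illustrative assumptions and should be checked against the current spec.

```yaml
# Mesh-agnostic canary split under the SMI spec: any conforming mesh
# (e.g. Linkerd, or Istio via an adapter) can act on this one resource.
apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
  name: checkout-rollout
spec:
  service: checkout        # root service that clients address
  backends:
  - service: checkout-v1   # stable version keeps most traffic
    weight: 90
  - service: checkout-v2   # canary receives a small share
    weight: 10
```

The design choice mirrors what the discussion calls for: unify the consumption model rather than build yet another monolithic control plane.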
As you saw in the SD-WAN example, and going back to the Google conference that we just talked about, it's also how SD-WANs are going to interoperate with the services that exist within these cloud silos to some degree. And how does that happen? And there was a teaser there that you saw earlier in the keynote, where we are taking those constructs that we talked about in the Google conference and bringing them all the way to a CloudNative environment in the keynote. But I think the bigger problem here is how do we manage this complexity of disparate stacks, whether it's service meshes, whether it's development stacks, or whether it's SD-WAN deployments? How do we manage that complexity? And single pane of glass is overloaded as a term, because it brings in these notions of big, monolithic panes of glass. And I think that's not the way we should be solving it. We should be solving it using API simplicity and API interoperability. I think that's where we as a community need to go. >> Absolutely. Well, Vijoy, as you said, you know, the API economy should be able to help on these, and the services architecture should allow things to be more flexible and give me the visibility I need without trying to have to build something that's completely monolithic. Vijoy, thanks so much for joining. Looking forward to hearing more about the big bets coming out of Cisco, and congratulations on the new role. >> Thank you Stu. It was a pleasure to be here. >> All right, and stay tuned for much more coverage of theCUBE at KubeCon, CloudNativeCon. I'm Stu Miniman and thanks for watching. (light digital music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dan Kohn | PERSON | 0.99+ |
Cisco | ORGANIZATION | 0.99+ |
Liz Centoni | PERSON | 0.99+ |
CloudNative Computing Foundation | ORGANIZATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
one | QUANTITY | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
20% | QUANTITY | 0.99+ |
Vijoy Pandey | PERSON | 0.99+ |
80% | QUANTITY | 0.99+ |
Linux Foundation for Public Health | ORGANIZATION | 0.99+ |
Gavin | PERSON | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Vijoy | PERSON | 0.99+ |
Stu | PERSON | 0.99+ |
Dan | PERSON | 0.99+ |
Emerging Tech | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
CNCF | ORGANIZATION | 0.99+ |
ET&I | ORGANIZATION | 0.99+ |
KubeCon | EVENT | 0.99+ |
first bets | QUANTITY | 0.99+ |
Gavin Russom | PERSON | 0.99+ |
CloudNativeCon | EVENT | 0.99+ |
Verily | ORGANIZATION | 0.99+ |
Ross | PERSON | 0.99+ |
Europe | LOCATION | 0.99+ |
Chuck | PERSON | 0.99+ |
Webex | ORGANIZATION | 0.99+ |
Ecosystem Partners | ORGANIZATION | 0.99+ |
John Chambers | PERSON | 0.99+ |
NSM | ORGANIZATION | 0.98+ |
Calico | ORGANIZATION | 0.98+ |
two big bets | QUANTITY | 0.98+ |
both | QUANTITY | 0.98+ |
NCF | ORGANIZATION | 0.98+ |
VMware | ORGANIZATION | 0.97+ |
Linux | TITLE | 0.97+ |
two things | QUANTITY | 0.97+ |
CloudNativeCon 2020 | EVENT | 0.97+ |
today | DATE | 0.96+ |
SAS | ORGANIZATION | 0.96+ |
Emerging Tech and Incubation | ORGANIZATION | 0.96+ |
first | QUANTITY | 0.96+ |
one big bet | QUANTITY | 0.96+ |
chapter two | OTHER | 0.95+ |
this year | DATE | 0.95+ |
first few bets | QUANTITY | 0.95+ |
chapter one | OTHER | 0.94+ |
Tanzu | ORGANIZATION | 0.94+ |
theCUBE | ORGANIZATION | 0.94+ |
chapter three | OTHER | 0.93+ |
Vertica Big Data Conference Keynote
>> Joy: Welcome to the Virtual Big Data Conference. Vertica is so excited to host this event. I'm Joy King, and I'll be your host for today's Big Data Conference Keynote Session. It's my honor and my genuine pleasure to lead Vertica's product and go-to-market strategy. And I'm so lucky to have a passionate and committed team who turned our Vertica BDC event into a virtual event in a very short amount of time. I want to thank the thousands of people, and yes, that's our true number, who have registered to attend this virtual event. We were determined to balance your health, safety and your peace of mind with the excitement of the Vertica BDC. This is a very unique event, because as I hope you all know, we focus on engineering and architecture, best practice sharing and customer stories that will educate and inspire everyone. I also want to thank our top sponsors for the virtual BDC, Arrow and Pure Storage. Our partnerships are so important to us and to everyone in the audience, because together, we get things done faster and better. Now for today's keynote, you'll hear from three very important and energizing speakers. First, Colin Mahony, our SVP and General Manager for Vertica, will talk about the market trends that Vertica is betting on to win for our customers. And he'll share the exciting news about our Vertica 10 announcement and how this will benefit our customers. Then you'll hear from Amy Fowler, VP of Strategy and Solutions for FlashBlade at Pure Storage. Our partnership with Pure Storage is truly unique in the industry, because together, modern infrastructure from Pure powers modern analytics from Vertica. And then you'll hear from John Yovanovich, Director of IT at AT&T, who will tell you about the Pure Vertica Symphony that plays live every day at AT&T. Here we go, Colin, over to you. >> Colin: Well, thanks a lot joy. And, I want to echo Joy's thanks to our sponsors, and so many of you who have helped make this happen.
>> Joy: Welcome to the Virtual Big Data Conference. Vertica is so excited to host this event. I'm Joy King, and I'll be your host for today's Big Data Conference Keynote Session. It's my honor and my genuine pleasure to lead Vertica's product and go-to-market strategy. And I'm so lucky to have a passionate and committed team who turned our Vertica BDC event, into a virtual event in a very short amount of time. I want to thank the thousands of people, and yes, that's our true number who have registered to attend this virtual event. We were determined to balance your health, safety and your peace of mind with the excitement of the Vertica BDC. This is a very unique event. Because as I hope you all know, we focus on engineering and architecture, best practice sharing and customer stories that will educate and inspire everyone. I also want to thank our top sponsors for the virtual BDC, Arrow, and Pure Storage. Our partnerships are so important to us and to everyone in the audience. Because together, we get things done faster and better. Now for today's keynote, you'll hear from three very important and energizing speakers. First, Colin Mahony, our SVP and General Manager for Vertica, will talk about the market trends that Vertica is betting on to win for our customers. And he'll share the exciting news about our Vertica 10 announcement and how this will benefit our customers. Then you'll hear from Amy Fowler, VP of strategy and solutions for FlashBlade at Pure Storage. Our partnership with Pure Storage is truly unique in the industry, because together modern infrastructure from Pure powers modern analytics from Vertica. And then you'll hear from John Yovanovich, Director of IT at AT&T, who will tell you about the Pure Vertica Symphony that plays live every day at AT&T. Here we go, Colin, over to you. >> Colin: Well, thanks a lot joy. And, I want to echo Joy's thanks to our sponsors, and so many of you who have helped make this happen. 
This is not an easy time for anyone. We were certainly looking forward to getting together in person in Boston during the Vertica Big Data Conference and Winning with Data. But I think all of you and our team have done a great job, scrambling and putting together a terrific virtual event. So really appreciate your time. I also want to remind people that we will make both the slides and the full recording available after this. So for any of those who weren't able to join live, that is still going to be available. Well, things have been pretty exciting here. And in the analytic space in general, certainly for Vertica, there's a lot happening. There are a lot of problems to solve, a lot of opportunities to make things better, and a lot of data that can really make every business stronger, more efficient, and frankly, more differentiated. For Vertica, though, we know that focusing on the challenges that we can directly address with our platform, and our people, and where we can actually make the biggest difference is where we ought to be putting our energy and our resources. I think one of the things that has made Vertica so strong over the years is our ability to focus on those areas where we can make a great difference. So for us, as we look at the market and we look at where we play, there are really three market trends, some recent and some not so recent but certainly picking up a lot, that have become critical for every industry that wants to Win Big With Data. We've heard this loud and clear from our customers and from the analysts that cover the market. If I were to summarize these three areas, this really is the core focus for us right now. We know that there's massive data growth. And if we can unify the data silos so that people can really take advantage of that data, we can make a huge difference. We know that public clouds offer tremendous advantages, but we also know that balance and flexibility are critical.
And then there's machine learning, everything up through full data science. We all need the benefits that it can bring to every single use case, but only if it can really be operationalized at scale, accurately and in real time. And the power of Vertica is, of course, how we're able to bring so many of these things together. Let me talk a little bit more about some of these trends. So one of the first industry trends that we've all been following, probably now for over the last decade, is Hadoop and specifically HDFS. So many companies have invested time, money, and more importantly, people in leveraging the opportunity that HDFS brought to the market. HDFS is really part of a much broader storage disruption that we'll talk a little bit more about. But HDFS itself was really designed for petabytes of data, leveraging low-cost commodity hardware and the ability to capture a wide variety of data formats, from a wide variety of data sources and applications. And I think what people really wanted was to store that data before having to define exactly what structures it should go into. So over the last decade or so, the focus for most organizations has been figuring out how to capture, store and frankly manage that data. And as a platform to do that, I think, Hadoop was pretty good. It certainly changed the way that a lot of enterprises think about their data and where it's locked up. In parallel with Hadoop, particularly over the last five years, Cloud Object Storage has also given every organization another option for collecting, storing and managing even more data. That has led to a huge growth in data storage, obviously, up on public clouds like Amazon with S3, Google Cloud Storage and Azure Blob Storage, just to name a few. And then when you consider regional and local object storage offered by cloud vendors all over the world, the explosion of data leveraging this type of object storage is very real.
And I think, as I mentioned, it's just part of this broader storage disruption that's been going on. But with all this growth in the data, and all these new places to put this data, every organization we talk to is facing even more challenges now around the data silos. Sure, the data silos are certainly getting bigger. And hopefully they're getting cheaper per bit. But as I said, the focus has really been on collecting, storing and managing the data. But between the new data lakes and many different cloud object stores, combined with all sorts of data types and the complexity of managing all this, getting to that business value has been very limited. This actually takes me to big bet number one for Team Vertica, which is to unify the data. Our goal, backed by some of the announcements we have made today plus roadmap announcements I'll share with you throughout this presentation, is to ensure that all the time, money and effort that has gone into storing that data turns into business value. So how are we going to do that? With a unified analytics platform that analyzes the data wherever it is: HDFS, Cloud Object Storage, External tables in any format (ORC, Parquet, JSON), and of course, our own native ROS Vertica format. Analyze the data in the right place, in the right format, using a single unified tool. This is something that Vertica has always been committed to, and you'll see in some of our announcements today, we're just doubling down on that commitment. Let's talk a little bit more about the public cloud. This is certainly the second trend. It's maybe the second wave of data disruption, with object storage. And there are a lot of advantages when it comes to public cloud. There's no question that the public clouds give rapid access to compute and storage, with the added benefit of eliminating the data center maintenance that so many companies want to get out of themselves. But maybe the biggest advantage that I see is the architectural innovation.
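As a concrete illustration of the "analyze the data wherever it is" idea, here is a sketch of what querying data in place can look like with a Vertica external table. The bucket path, table name, and columns are hypothetical placeholders, not from the talk; check the Vertica documentation for the exact syntax in your release.

```sql
-- Hypothetical example: expose Parquet files sitting in S3 as an
-- external table, then query them with ordinary SQL, no loading step.
CREATE EXTERNAL TABLE sales_ext (
    order_id  INT,
    region    VARCHAR(32),
    amount    NUMERIC(12,2)
) AS COPY FROM 's3://example-bucket/sales/*.parquet' PARQUET;

-- The same SQL works whether the files live in HDFS, cloud object
-- storage, or Vertica's own native ROS format.
SELECT region, SUM(amount) AS total
FROM sales_ext
GROUP BY region;
```

The point of the design is that the query layer stays the same while the storage location and file format vary underneath it.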
The public clouds have introduced so many methodologies around how to provision quickly, separating compute and storage and really dialing-in the exact needs on demand, as you change workloads. When public clouds began, it made a lot of sense for the cloud providers and their customers to charge and pay for compute and storage in the ratio that each use case demanded. And I think you're seeing that trend proliferate all over the place, not just up in public cloud. That architecture itself is really becoming the next generation architecture for on-premise data centers as well. But there are a lot of concerns. I think we're all aware of them. They're out there. Many times, for different workloads, there are higher costs, especially for some of the workloads that are being run through analytics, which tend to run all the time. Just like some of the silo challenges that companies are facing with HDFS, data lakes and cloud storage, the public clouds have similar types of silo challenges as well. Initially, there was a belief that they were cheaper than data centers, and when you added in all the costs, it looked that way. And again, for certain elastic workloads, that is the case, but I don't think that's true across the board overall. Even to the point where a lot of the cloud vendors aren't just charging lower costs anymore. We hear from a lot of customers that they don't really want to tether themselves to any one cloud because of some of those uncertainties. Of course, security and privacy are a concern. We hear a lot of concerns with regards to cloud and even some SaaS vendors around shared data catalogs across all the customers and not enough separation. But security concerns are out there, you can read about them. I'm not going to jump on that bandwagon. But we hear about them.
And then, of course, I think one of the things we hear the most from our customers is that each cloud stack is starting to feel even more locked in than the traditional data warehouse appliance. And as everybody knows, the industry has been running away from appliances as fast as it can. And so they're not eager to get locked into another, quote, unquote, virtual appliance, if you will, up in the cloud. They really want to make sure they have flexibility in which clouds they're going to today, tomorrow and in the future. And frankly, we hear from a lot of our customers that they're very interested in eventually mixing and matching compute from one cloud with, say, storage from another cloud, which I think is something that we'll hear a lot more about. And so for us, that's why we've got our big bet number two. We love the cloud. We love the public cloud. We love the private clouds on-premise, and other hosting providers. But our passion and commitment is for Vertica to be able to run in any of the clouds that our customers choose, and to make it portable across those clouds. We have supported on-premises and all public clouds for years. And today, we have announced even more support for Vertica in Eon Mode, the deployment option that leverages the separation of compute from storage, with even more deployment choices, which I'm going to also touch more on as we go. So super excited about our big bet number two. And finally, as I mentioned, for all the hype that there is around machine learning, I actually think that most importantly, this third trend that Team Vertica is determined to address is the need to bring business-critical analytics, machine learning and data science projects into production. For so many years, there just wasn't enough data available to justify the investment in machine learning. Also, processing power was expensive, and storage was prohibitively expensive.
But to train and score and evaluate all the different models to unlock the full power of predictive analytics was tough. Today, you have those massive data volumes. You have the relatively cheap processing power and storage to make that dream a reality. And if you think about this, I mean, with all the data that's available to every company, the real need is to operationalize the speed and the scale of machine learning so that these organizations can actually take advantage of it where they need to. I mean, we've seen this for years with Vertica, going back to some of the most advanced gaming companies in the early days; they were incorporating this with live data directly into their gaming experiences. Well, every organization wants to do that now. And accuracy and real-time action are key to separating the leaders from the rest of the pack in every industry when it comes to machine learning. But if you look at a lot of these projects, the reality is that there's a ton of buzz, there's a ton of hype spanning every acronym that you can imagine. But most companies are struggling, due to separate teams, different tools, silos, and the limitations that many platforms are facing: driving down-sampling to get a small subset of the data to try to create a model that then doesn't apply, compromising accuracy, and making it virtually impossible to replicate models and understand decisions. And if there's one thing that we've learned when it comes to data, it's the value of prescriptive data at the atomic level, being able to show an "N of one," as we refer to it, meaning individually tailored data. No matter what it is, healthcare, entertainment experiences like gaming, or other, being able to get at the granular data, make these decisions and do that scoring applies to machine learning just as much as it applies to giving somebody a next-best-offer. But the opportunity has never been greater.
The need is to integrate this end-to-end workflow and support the right tools without compromising on that accuracy. Think about it as no downsampling: using all the data really is key to machine learning success. It should be no surprise, then, that the third big bet from Vertica is one that we've actually been working on for years. And we're so proud to be where we are today, helping the data disruptors across the world operationalize machine learning. This big bet truly has the potential to unlock the power of machine learning. And today, we're announcing some very important new capabilities specifically focused on unifying the work being done by the data science community, with their preferred tools and platforms, and the volume of data and performance at scale available in Vertica. Our strategy has been very consistent over the last several years. As I said in the beginning, we haven't deviated from our strategy. Of course, there are always things that we add. Most of the time, it's customer driven; it's based on what our customers are asking us to do. But I think we've also done a great job not trying to be all things to all people. Especially as these hype cycles flare up around us, we absolutely love participating in these different areas without getting completely distracted. I mean, there's a variety of query tools and data warehouses and analytics platforms in the market. We all know that. There are tools and platforms that are offered by the public cloud vendors, and by other vendors that support one or two specific clouds. There are appliance vendors, whom I was referring to earlier, who can deliver packaged data warehouse offerings for private data centers. And there's a ton of popular machine learning tools, languages and other kits. But Vertica is the only advanced analytics platform that can do all this, that can bring it together. We can analyze the data wherever it is, in HDFS, S3 Object Storage, or Vertica itself.
Natively, we support multiple clouds and on-premise deployments. And maybe most importantly, we offer that choice of deployment modes to allow our customers to choose the architecture that works for them right now. It also gives them the option to change, move and evolve over time. And Vertica is the only analytics database with end-to-end machine learning that can truly operationalize ML at scale. And I know it's a mouthful. But it is not easy to do all these things. It is one of the things that highly differentiates Vertica from the rest of the pack. It is also why our customers, all of you, continue to bet on us and see the value that we are delivering and will continue to deliver. Here are a couple of examples of some of our customers who are powered by Vertica. It's the scale of data. It's the millisecond response times. Performance and scale have always been a huge part of what we have been about, though not the only thing; there's also the functionality, all the capabilities that we add to the platform, the ease of use, and the flexibility, obviously, with the deployment. But look at some of the numbers that are under these customers on this slide. And I've shared a lot of different stories about these customers, which, by the way, still amaze me every time I talk to one and get the updates; you can see the power and the difference that Vertica is making. Equally important, if you look at a lot of these customers, they are the epitome of being able to deploy Vertica in a lot of different environments. Many of the customers on this slide are not using Vertica just on-premise or just in the cloud. They're using it in a hybrid way. They're using it in multiple different clouds. And again, we've been with them on that journey throughout, which is what has made this product and, frankly, our roadmap and our vision exactly what they are. It's been quite a journey. And that journey continues now with the Vertica 10 release.
The Vertica 10 release is obviously a massive release for us. But if you look back, you can see that it builds on the native columnar architecture that started a long time ago, obviously, with the C-Store paper. We built it to leverage commodity hardware, because it was an architecture that was never tightly integrated with any specific underlying infrastructure. I still remember hearing the initial pitch from Mike Stonebraker about the vision of Vertica as a software-only solution and the importance of separating the company from hardware innovation. And at the time, Mike basically said to me, "there's so much R&D and innovation that's going to happen in hardware, we shouldn't bake hardware into our solution. We should do it in software, and we'll be able to take advantage of that hardware." And that is exactly what has happened. But one of the most recent innovations that we embraced with hardware is certainly that separation of compute and storage. As I said previously, the public cloud providers offered this next generation architecture really to ensure that they could provide customers exactly what they needed, more compute or more storage, and charge for each, respectively. The separation of compute from storage is a major milestone in data center architectures. If you think about it, it's really not only a public cloud innovation, though. It fundamentally redefines the next generation data architecture for on-premise and for pretty much every way people are thinking about computing today. And that goes for software too. Object storage is an example of a cost-effective means of storing data. And even more importantly, separating compute from storage for analytic workloads has a lot of advantages, including the opportunity to manage much more dynamic, flexible workloads, and more importantly, to truly isolate those workloads from others.
And by the way, once you start having something that can truly isolate workloads, then you can have the conversations around autonomic computing, around setting up some nodes, some compute resources, on the data that won't affect any of the other workloads, to do some things on their own, maybe some self-analytics by the system, etc. A lot of things that many of you know we've already been exploring in terms of our own system data in the product. But it was May 2018, believe it or not, it seems like a long time ago, when we first announced Eon Mode. And I want to make something very clear about Eon Mode. It's a mode, it's a deployment option for Vertica customers. And I think this is another huge benefit that we don't talk about enough. But unlike a lot of vendors in the market who will charge you for every single add-on, you name it, you get this with the Vertica product. If you continue to pay support and maintenance, this comes with the upgrade. This comes as part of the new release. So any customer who owns or buys Vertica has the ability to set up either Enterprise Mode or Eon Mode, which is a question I know comes up sometimes. Our first announcement of Eon was obviously for AWS customers, including The Trade Desk and AT&T, most of whom will be speaking here later at the Virtual Big Data Conference. They saw a huge opportunity. Eon Mode not only allowed Vertica to scale elastically with that specific compute and storage that was needed, but it really dramatically simplified database operations, including things like workload balancing, node recovery, compute provisioning, etc. So one of the most popular functions is that ability to isolate the workloads and really allocate those resources without negatively affecting others. And even though traditional data warehouses, including Vertica Enterprise Mode, have been able to do lots of different workload isolation, it's never been as strong as Eon Mode.
Well, it certainly didn't take long for our customers to see that value across the board with Eon Mode, and not just up in the cloud. In partnership with one of our most valued partners and a platinum sponsor here, whom Joy mentioned at the beginning, we announced Vertica in Eon Mode for Pure Storage FlashBlade in September 2019. And again, just to be clear, this is not a new product; it's one Vertica with yet more deployment options. With Pure Storage, Vertica in Eon Mode is not limited in any way by variable cloud network latency. The performance is actually amazing when you take the benefits of separating compute from storage and you run it with a Pure environment on-premise. Vertica in Eon Mode has a super smart cache layer that we call the depot. It's a big part of our secret sauce around Eon Mode. And combined with the power and performance of Pure's FlashBlade, Vertica became the industry's first advanced analytics platform that actually separates compute and storage for on-premises data centers. Something that a lot of our customers are already benefiting from, and we're super excited about it. But as I said, this is a journey. We don't stop, we're not going to stop. Our customers need the flexibility of multiple public clouds. So today with Vertica 10, we're super proud and excited to announce support for Vertica in Eon Mode on Google Cloud. This gives our customers the ability to use their Vertica licenses on Amazon AWS, on-premise with Pure Storage, and on Google Cloud. Now, we were talking about HDFS, and a lot of our customers who have invested quite a bit in HDFS, especially as a place to store data, have been pushing us to support Eon Mode with HDFS. So as part of Vertica 10, we are also announcing support for Vertica in Eon Mode using HDFS as the communal storage.
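For a sense of what an Eon Mode deployment involves in practice, here is a rough sketch using Vertica's admintools utility. The hosts, bucket, depot path, and shard count below are hypothetical placeholders, and option names can vary by release, so treat this as illustrative only and consult the Vertica documentation for your version.

```shell
# Hypothetical sketch: create an Eon Mode database whose communal
# storage lives in an S3 bucket, with a local depot cache on each node.
# All hosts, paths, and values are placeholders.
admintools -t create_db \
  -d verticadb \
  -s 10.0.0.1,10.0.0.2,10.0.0.3 \
  --communal-storage-location=s3://example-bucket/verticadb \
  --depot-path=/vertica/depot \
  --shard-count=6 \
  -x auth_params.conf   # credentials for the communal storage endpoint
```

The same pattern, a communal storage URL plus a per-node depot, is what lets the communal location be S3, Google Cloud Storage, HDFS, or an on-premise object store like FlashBlade while the compute nodes stay interchangeable.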
Vertica's own ROS format data can be stored in HDFS, and the full functionality of Vertica, its complete analytics, geospatial, pattern matching, time series, machine learning, everything that we have in there, can be applied to this data. And on the same HDFS nodes, Vertica can also analyze data in ORC or Parquet format, using External tables. We can also execute joins between the ROS data and the data the External tables hold, which powers a much more comprehensive view. So again, it's that flexibility to be able to support our customers wherever they need us to support them, on whatever platform they have. Vertica 10 gives us a lot more ways that we can deploy Eon Mode in various environments for our customers. It allows them to take advantage of Vertica in Eon Mode and the power that it brings, with that separation, with that workload isolation, on whichever platform they are most comfortable with. Now, there's a lot that has come in Vertica 10. I'm definitely not going to be able to cover everything. But we also introduced complex types, as an example. And complex data types fit very well into Eon as well, in this separation. They significantly reduce the data pipeline and the cost of moving data around, provide much better support for unstructured data, which a lot of our customers mix with structured data, of course, and they leverage the columnar execution that Vertica provides. So you get complex data types in Vertica now, a lot more data, stronger performance. It goes great with the broader Eon Mode announcements that we made. Let's talk a little bit more about machine learning. We've actually been doing work in and around machine learning, with various regressions and a whole bunch of other algorithms, for several years. We saw the huge advantage that MPP offered, not just as a SQL engine or a database, but for ML as well.
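The join between native data and external data described above can be sketched in SQL like this. All table names, columns, and paths are hypothetical placeholders; verify the exact external-table syntax against the Vertica documentation for your release.

```sql
-- Hypothetical example: join a native Vertica table with an external
-- table over ORC files in HDFS, the kind of combined view described
-- in the talk. Every name and path here is a placeholder.
CREATE EXTERNAL TABLE clicks_ext (
    user_id  INT,
    url      VARCHAR(2048),
    ts       TIMESTAMP
) AS COPY FROM 'hdfs:///data/clicks/*.orc' ORC;

SELECT u.user_id, u.signup_date, COUNT(*) AS click_count
FROM users u                           -- native (ROS) Vertica table
JOIN clicks_ext c ON c.user_id = u.user_id
GROUP BY u.user_id, u.signup_date;
```

The external table is read in place at query time, so the HDFS data never has to be loaded into Vertica to participate in the join.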
It didn't take us long to realize that there's a lot more to operationalizing machine learning than just those algorithms. It's the data preparation, it's the model training. It's the scoring, the shaping, the evaluation. That is so much of what machine learning and, frankly, data science is about. You know, everybody always wants to jump to the sexy algorithms, but we handle those tasks very, very well, and that makes Vertica a terrific platform to do this. A lot of work in data science and machine learning is done in other tools. I had mentioned that there are just so many tools out there. We want people to be able to take advantage of all that. We never believed we were going to be the best algorithm company or come up with the best models for people to use. So with Vertica 10, we support PMML. We can now import and export PMML models. It's a huge step for us around operationalizing machine learning projects for our customers, allowing the models to get built outside of Vertica, yet be imported in and then applied to that full scale of data with all the performance that you would expect from Vertica. We are also more tightly integrating with Python. As many of you know, we've been doing a lot of open source projects with the community, driven by many of our customers, like Uber. And so now, through Python, we've integrated with TensorFlow, allowing data scientists to build models in their preferred language and take advantage of TensorFlow, but again, to store and deploy those models at scale with Vertica. I think both these announcements are proof of our big bet number three, and really our commitment to supporting innovation throughout the community by operationalizing ML with that accuracy, performance and scale of Vertica for our customers. Again, there are a lot of steps when it comes to the workflow of machine learning. These are some of them that you can see on the slide, and it's definitely not linear either. We see this as a circle.
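A sketch of what the train-elsewhere, score-in-Vertica PMML workflow can look like in SQL. The file path, model name, table, and columns are hypothetical placeholders; functions of this shape appear in the Vertica 10 machine learning documentation, but verify the names and parameters against your release before relying on them.

```sql
-- Hypothetical sketch: import a model that was trained elsewhere and
-- exported as PMML, then score the full table inside Vertica without
-- down-sampling. All names and paths are placeholders.
SELECT IMPORT_MODELS('/models/churn_model.pmml'
                     USING PARAMETERS category = 'PMML');

SELECT customer_id,
       PREDICT_PMML(age, tenure, monthly_spend
                    USING PARAMETERS model_name = 'churn_model')
         AS churn_score
FROM customers;
```

Per the announcement in the talk, imported TensorFlow models are handled along the same lines, built in Python and then stored and applied at scale inside the database.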
And companies that do it well just continue to learn; they continue to rescore, they continue to redeploy, and they want to operationalize all that within a single platform that can take advantage of all those capabilities. And that is the platform, with a very robust ecosystem, that Vertica has always been committed to as an organization and will continue to be. This graphic, many of you have seen it evolve over the years. Frankly, if we put everything and everyone on here, it wouldn't fit on a slide. But it will absolutely continue to evolve and grow as we support our customers where they need the support most. So, again, being able to deploy everywhere, being able to take advantage of Vertica, not just as a business analyst or a business user, but as a data scientist or as an operations or BI person. We want Vertica to be leveraged and used by the broader organization. So I think it's fair to say, and I encourage everybody to learn more about Vertica 10, because I'm just highlighting some of the bigger aspects of it. But we talked about those three market trends: the need to unify the silos, the need for hybrid multiple cloud deployment options, the need to operationalize business-critical machine learning projects. Vertica 10 has absolutely delivered on those. But again, we are not going to stop. It is our job not to, and this is how Team Vertica thrives. I always joke that the next release is the best release. And, of course, even after Vertica 10, that is also true, although Vertica 10 is pretty awesome. But, you know, from the first line of code, we've always been focused on performance and scale, right. And like any really strong data platform, the optimizer and the execution engine are the two core pieces of that. Beyond Vertica 10, one of the big things that we're already working on is the next generation execution engine. We're already actually seeing incredible early performance from this.
And this is just one example of how important it is for an organization like Vertica to constantly go back and re-innovate. Every single release, we do the sit-ups and crunches on our performance and scale. How do we improve? And there are so many parts of the core server, so many parts of our broader ecosystem. We are constantly looking at how we can go back to all the code lines that we have and make them better in the current environment. And it's not an easy thing to do when you're doing that while also expanding into new environments to take advantage of the different deployments, which is a great segue to this slide. Because if you think about today, we're obviously already available with Eon Mode on Amazon AWS, on Pure, and actually on MinIO as well. As I talked about, in Vertica 10 we're adding Google and HDFS. And coming next, obviously, Microsoft Azure and Alibaba Cloud. So being able to expand into more of these environments is really important for the Vertica team and how we go forward. And it's not just running in these clouds; for us, we want it to be a SaaS-like experience in all these clouds. We want you to be able to deploy Vertica in 15 minutes or less on these clouds. You can also consume Vertica in a lot of different ways on these clouds, as an example, in Amazon, Vertica by the Hour. So for us, it's not just about running, it's about taking advantage of the ecosystems that all these cloud providers offer, and really optimizing the Vertica experience as part of them. Optimization around automation, around self-service capabilities, extending our management console. We now have products like the Vertica Advisor Tool, which our Customer Success Team has created to actually use our own smarts in Vertica to take data from customers who give it to us and help them automatically tune their environment.
You can imagine that we're taking that to the next level, in a lot of different endeavors that we're doing, around how Vertica as a product can actually be smarter, because we all know that simplicity is key. There just aren't enough people in the world who are good at managing data and taking it to the next level. And of course, there are other things that we all hear about, whether it's Kubernetes or containerization. You can imagine that that probably works very well with Eon Mode and separating compute and storage. But innovation happens everywhere. We innovate around our community and documentation. Many of you have taken advantage of the Vertica Academy. The numbers there are through the roof in terms of the number of people coming in and certifying on it. So there are a lot of things within the core products. There's a lot of activity and action beyond the core products that we're taking advantage of. And let's not forget why we're here, right? It's easy to talk about a platform, a data platform; it's easy to jump into all the functionality, the analytics, the flexibility, how we can offer it. But at the end of the day, somebody, a person, she's got to take advantage of this data, she's got to be able to take this data and use this information to make a critical business decision. And that doesn't happen unless we explore lots of different and, frankly, new ways to get that predictive analytics UI and interface, beyond just the standard BI tools, in front of her at the right time. And so there's a lot of activity, I'll tease you with that, going on in this organization right now about how we can do that and deliver it for our customers. We're in a great position to be able to see exactly how this data is consumed and used, and to start with this core platform that we have and go out from there. Look, I know the plan wasn't to do this as a virtual BDC. But I really appreciate you tuning in. Really appreciate your support.
I think if there's any silver lining to us not being able to do this in person, it's the fact that the reach has actually gone significantly higher than what we would have been able to do in person in Boston. We're certainly looking forward to doing a Big Data Conference in the future. But if I could leave you with anything, know this: since that first release of Vertica, and our very first customers, we have been very consistent. We respect all the innovation around us, whether it's open source or not. We understand the market trends. We embrace those new ideas and technologies, and for us, true north, the most important thing, is: what does our customer need to do? What problem are they trying to solve? And how do we use the advantages that we have without disrupting our customers? We know that you depend on us to deliver that unified analytics strategy, and we will deliver that performance and scale, not only today, but tomorrow and for years to come. We've added a lot of great features to Vertica. I think we've said no to a lot of things, frankly, that we just knew we wouldn't be the best company to deliver. When we say we're going to do things, we do them. Vertica 10 is a perfect example of so many of those things that we have heard loud and clear from you, our customers, and we have delivered. I am incredibly proud of this team across the board. I think the culture of Vertica, a customer-first culture, jumping in to help our customers win no matter what, is also something that sets us massively apart. I hear horror stories about support experiences with other organizations. And people always seem to be amazed at Team Vertica's willingness to jump in, their aptitude for certain technical capabilities, and their understanding of the business. And I think sometimes we take that for granted. But that is the team that we have as Team Vertica. We are incredibly excited about Vertica 10. I think you're going to love the Virtual Big Data Conference this year.
I encourage you to tune in. Maybe one other benefit is, I know some people were worried about not being able to see different sessions because they were going to overlap with each other; well now, even if you can't do it live, you'll be able to do those sessions on demand. Please enjoy the Vertica Big Data Conference here in 2020. Please, you and your families and your co-workers, be safe during these times. I know we will get through it. And analytics is probably going to help with a lot of that, and we already know it is helping in many different ways. So believe in the data, believe in data's ability to change the world for the better. And thank you for your time. And with that, I am delighted to now introduce Micro Focus CEO Stephen Murdoch to the Vertica Big Data Virtual Conference. Thank you Stephen. >> Stephen: Hi, everyone, my name is Stephen Murdoch. I have the pleasure and privilege of being the Chief Executive Officer here at Micro Focus. Please let me add my welcome to the Big Data Conference. And also my thanks for your support, as we've had to pivot to this being virtual rather than a physical conference. It's amazing how quickly we all reset to a new normal. I certainly didn't expect to be addressing you from my study. Vertica is an incredibly important part of the Micro Focus family. It's key to our goal of trying to enable and help customers become much more data driven across all of their IT operations. Vertica 10 is a huge step forward, we believe. It allows for multi-cloud innovation and genuinely hybrid deployments, lets enterprises begin to leverage machine learning properly, and also allows the opportunity to unify currently siloed lakes of information. We operate in a very noisy, very competitive market, and there are people in that market who can do some of those things. The reason we are so excited about Vertica is we genuinely believe that we are the best at doing all of those things.
And that's why we've announced publicly, and are executing internally, incremental investment into Vertica. That investment is targeted at accelerating the roadmaps that already exist, and getting that innovation into your hands faster. The idea is that speed is key. It's not a question of if companies have to become data driven organizations, it's a question of when. So that speed now is really important. And that's why we believe that the Big Data Conference gives a great opportunity for you to accelerate your own plans. You will have the opportunity to talk to some of our best architects, some of the best development brains that we have. But more importantly, you'll also get to hear from some of our phenomenal Vertica customers. You'll hear from Uber, from the Trade Desk, from Philips, and from AT&T, as well as many many others. And just hearing how those customers are using the power of Vertica to accelerate their own plans, I think, is the highlight. And I encourage you to use this opportunity to its fullest. Let me close by again saying thank you. We genuinely hope that you get as much from this virtual conference as you could have from a physical conference. And we look forward to your engagement, and we look forward to hearing your feedback. With that, thank you very much. >> Joy: Thank you so much, Stephen, for joining us for the Vertica Big Data Conference. Your support and enthusiasm for Vertica is so clear, and it makes a big difference. Now, I'm delighted to introduce Amy Fowler, the VP of Strategy and Solutions for FlashBlade at Pure Storage, who was one of our BDC Platinum Sponsors, and one of our most valued partners. It was a proud moment for me, when we announced Vertica in Eon Mode for Pure Storage FlashBlade and we became the first analytics data warehouse that separates compute from storage for on-premise data centers. Thank you so much, Amy, for joining us. Let's get started. >> Amy: Well, thank you, Joy, so much for having us.
And thank you all for joining us today, virtually, as we may all be. So, as we just heard from Colin Mahony, there are some really interesting trends happening right now in the big data analytics market. From the end of the Hadoop hype cycle, to the new cloud reality, and even the opportunity to help the many data science and machine learning projects move from labs to production. So let's talk about these trends in the context of infrastructure. And in particular, look at why a modern storage platform is relevant as organizations take on the challenges and opportunities associated with these trends. The answer is that the Hadoop hype cycle left a lot of data in HDFS data lakes, or reservoirs, or swamps, depending upon the level of the data hygiene, but without the ability to get the value that was promised from Hadoop as a platform rather than a distributed file store. And when we combine that data with the massive volume of data in cloud object storage, we find ourselves with a lot of data and a lot of silos, but without a way to unify that data and find value in it. Now when you look at the infrastructure data lakes are traditionally built on, it is often direct attached storage, or DAS. The approach that Hadoop took when it entered the market was primarily bound by the limits of networking and storage technologies: one gig ethernet and slower spinning disk. But today, those barriers do not exist. All-flash storage has fundamentally transformed how data is accessed, managed and leveraged. The need for local data storage for significant volumes of data has been largely mitigated by the performance increases afforded by all-flash. At the same time, organizations can achieve superior economies of scale with the segregation of compute and storage, because compute and storage don't always scale in lockstep. Would you want to add an engine to the train every time you add another boxcar? Probably not.
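Amy's engine-and-boxcar point, that compute and storage don't scale in lockstep, can be made concrete with a toy cost model. This is an illustrative sketch only; the node sizes and prices below are made-up assumptions, not Pure, Vertica, or cloud pricing:

```python
# Toy model of coupled vs. disaggregated scaling.
# With coupled nodes you buy compute and storage together; disaggregated,
# you add only the dimension that actually ran out. All numbers invented.

COUPLED_NODE = {"compute_units": 1, "storage_tb": 10, "cost": 20_000}
COMPUTE_UNIT_COST = 8_000   # assumed cost to add one unit of compute alone
STORAGE_TB_COST = 400       # assumed cost to add one TB of storage alone

def coupled_cost(need_compute, need_storage_tb):
    # Must buy whole nodes until BOTH requirements are met:
    # a storage-heavy workload drags idle engines along with the boxcars.
    nodes = max(need_compute, -(-need_storage_tb // COUPLED_NODE["storage_tb"]))
    return nodes * COUPLED_NODE["cost"]

def disaggregated_cost(need_compute, need_storage_tb):
    # Each dimension scales independently.
    return need_compute * COMPUTE_UNIT_COST + need_storage_tb * STORAGE_TB_COST

if __name__ == "__main__":
    # Storage-heavy workload: 4 units of compute, 400 TB of data.
    print(coupled_cost(4, 400))        # forced to buy 40 nodes just for capacity
    print(disaggregated_cost(4, 400))  # pays for 4 compute units + 400 TB
```

Under these assumed prices the coupled design costs several times more for a storage-heavy workload, which is the economics argument for separating the two tiers.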
But from a Pure Storage perspective, FlashBlade is uniquely architected to allow customers to achieve better resource utilization for compute and storage, while at the same time reducing the complexity that has arisen from the siloed nature of the original big data solutions. The second and equally important recent trend we see is something I'll call cloud reality. The public clouds made a lot of promises, and some of those promises were delivered. But cloud economics, especially usage-based and elastic scaling without the control that many companies need to manage the financial impact, is causing a lot of issues. In addition, the risk of vendor lock-in, from data egress charges to integrated software stacks that can't be moved or deployed on-premise, is causing a lot of organizations to back off an all-in cloud strategy and move toward hybrid deployments. Which is kind of funny in a way, because it wasn't that long ago that there was a lot of talk about no more data centers. For example, one large retailer, I won't name them, but I'll admit they are my favorites, told us several years ago that they were completely done with on-prem storage infrastructure, because they were going 100% to the cloud. But they just deployed FlashBlade for their data pipelines, because they need predictable performance at scale, and the all-cloud TCO just didn't add up. Now, that being said, while there are certainly challenges with the public cloud, it has also brought some things to the table that we see most organizations wanting. First of all, in a lot of cases applications have been built to leverage object storage platforms like S3. So they need that object protocol, but they may also need it to be fast. And fast object may have been an oxymoron only a few years ago, and this is an area of the market where Pure and FlashBlade have really taken a leadership position.
Second, regardless of where the data is physically stored, organizations want the best elements of a cloud experience. And for us, that means two main things. Number one is simplicity and ease of use. If you need a bunch of storage experts to run the system, that should be considered a bug. The other big one is the consumption model: the ability to pay for what you need when you need it, and seamlessly grow your environment over time, totally nondestructively. This is actually pretty huge, and something that a lot of vendors try to solve for with finance programs. But no finance program can address the pain of a forklift upgrade when you need to move to next-gen hardware. To scale nondestructively over long periods of time, five to 10 years plus, crucial architectural decisions need to be made at the outset. Plus, you need the ability to pay as you use it. And we offer something for FlashBlade called Pure as a Service, which delivers exactly that. The third cloud characteristic that many organizations want is the option for hybrid, even if that is just a DR site in the cloud. In our case, that means supporting replication to S3 at AWS. And the final trend, which to me represents the biggest opportunity for all of us, is the need to help the many data science and machine learning projects move from labs to production. This means bringing all the machine learning functions and model training to the data, rather than moving samples or segments of data to separate platforms. As we all know, machine learning needs a ton of data for accuracy. And there is just too much data to retrieve from the cloud for every training job. At the same time, predictive analytics without accuracy is not going to deliver the business advantage that everyone is seeking. You can kind of visualize data analytics as it is traditionally deployed as being on a continuum, with the thing we've been doing the longest, data warehousing, on one end, and AI on the other end.
But the way this manifests in most environments is a series of silos that get built up. So data is duplicated across all kinds of bespoke analytics and AI environments and infrastructure. This creates an expensive and complex environment. Historically, there was no other way to do it, because some level of performance is always table stakes, and each of these parts of the data pipeline has a different workload profile. A single platform to deliver on the multi-dimensional performance this diverse set of applications requires didn't exist three years ago. And that's why the application vendors pointed you towards bespoke things like the DAS environments that we talked about earlier. And the fact that better options exist today is why we're seeing them move towards supporting this disaggregation of compute and storage. And when it comes to a platform that is a better option, one with a modern architecture that can address the diverse performance requirements of this continuum, and allow organizations to bring a model to the data instead of creating separate silos, that's exactly what FlashBlade is built for. Small files, large files, high throughput, low latency, and scale to petabytes in a single namespace. And this, importantly a single namespace, is what we're focused on delivering for our customers. At Pure, we talk about it in the context of the modern data experience, because at the end of the day, that's what it's really all about: the experience for your teams in your organization. And together Pure Storage and Vertica have delivered that experience to a wide range of customers.
From a SaaS analytics company, which uses Vertica on FlashBlade to authenticate the quality of digital media in real time, to a multinational car company, which uses Vertica on FlashBlade to make thousands of decisions per second for autonomous cars, or a healthcare organization, which uses Vertica on FlashBlade to enable healthcare providers to make real-time decisions that impact lives. And I'm sure you're all looking forward to hearing from John Yovanovich from AT&T, to hear how he's been doing this with Vertica and FlashBlade as well. He's coming up soon. We have been really excited to build this partnership with Vertica. And we're proud to provide the only on-premise storage platform validated with Vertica Eon Mode, and to deliver this modern data experience to our customers together. Thank you all so much for joining us today. >> Joy: Amy, thank you so much for your time and your insights. Modern infrastructure is key to modern analytics, especially as organizations leverage next generation data center architectures, and object storage for their on-premise data centers. Now, I'm delighted to introduce our last speaker in our Vertica Big Data Conference Keynote, John Yovanovich, Director of IT for AT&T. Vertica is so proud to serve AT&T, and especially proud of the harmonious impact we are having in partnership with Pure Storage. John, welcome to the Virtual Vertica BDC. >> John: Thank you, Joy. It's a pleasure to be here. And I'm excited to go through this presentation today, and in a unique fashion, because as I was thinking through how I wanted to present the partnership that we have formed together between Pure Storage, Vertica and AT&T, I wanted to emphasize how well we all work together and how these three components have really driven home my desire for a harmonious, to use your word, relationship. So I'm going to move forward here. The theme of today's presentation is the Pure Vertica Symphony, live at AT&T.
And if anybody is a Westworld fan, you can appreciate the sheet music on the right hand side. What I'm going to highlight here, in a musical fashion, is how we at AT&T leverage these technologies to save money, to deliver a more efficient platform, and to actually make our customers happier overall. So as we look back, as early as just maybe a few years ago here at AT&T, I realized that we had many musicians to help the company. Or maybe you might want to call them data scientists, or data analysts. For the theme we'll stay with musicians. None of them were singing or playing from the same hymn book or sheet music. And so what we had was many organizations chasing a similar dream, but not exactly the same dream. The best way to describe that, and I think this might resonate with a lot of people in your organizations: how many organizations are chasing a customer 360 view in your company? Well, I can tell you that I have at least four in my company. And I'm sure there are many that I don't know of. That is our problem, because what we see is a repetitive sourcing of data. We see a repetitive copying of data. And there's just so much money to be spent. This is where I asked Pure Storage and Vertica to help me solve that problem with their technologies. What I also noticed was that there was no coordination between these departments. In fact, if you look here, nobody really wants to play with finance. Sales, marketing and care, sure, they all copied each other's data. But they actually didn't communicate with each other as they were copying the data. So the data became replicated and out of sync. This is a challenge throughout, not just my company, but all companies across the world. And that is, the more we replicate the data, the more problems we have at chasing or conquering the goal of a single version of truth.
In fact, I kid that at AT&T we have actually adopted the multiple-versions-of-truth techno theory, which is not where we want to be, but this is where we are. But we are conquering that with the synergies between Pure Storage and Vertica. This is what it leaves us with. And this is where we are challenged: each one of our siloed business units had their own storage, their own dedicated storage, and some of them had more money than others, so they bought more storage. Some of them anticipated storing more data than they really did. Others are running out of space, but can't put any more in because their budgets haven't been replenished. So if you look at it from this side view here, we have a limited amount of compute, or fixed compute, dedicated to each one of these silos. And that's because of the wanting to own your own. And the other part is that you are limited, or wasting space, depending on where you are in the organization. So the synergies aren't just about the data, but actually the compute and the storage. And I wanted to tackle that challenge as well. So I was tackling the data, I was tackling the storage, and I was tackling the compute, all at the same time. So my ask across the company was: can we just please play together, okay? And to do that, I knew that I wasn't going to tackle this by getting everybody in the same room and getting them to agree that we needed one account table, because they will argue about whose account table is the best account table. But I knew that if I brought the account tables together, they would soon see that they had so much redundancy that I could now start retiring data sources. I also knew that if I brought all the compute together, that they would all be happy. But I didn't want them to tackle each other. And in fact, that was one of the things that all business units really enjoy.
They enjoy the silo of having their own compute, and more or less being able to control their own destiny. Well, Vertica's subclustering allows just that. And this is exactly what I was hoping for, and I'm glad they came through. And finally, how did I solve the problem of the single account table? Well, you don't need dedicated storage when you can separate compute and storage, as Vertica in Eon Mode does, and we store the data on FlashBlades, which you see on the left and right hand side of our container, which I can describe in a moment. Okay, so what we have here is a container full of compute, with all the Vertica nodes sitting in the middle. Two loader subclusters, as we'll call them, sit on the sides, dedicated to just putting data onto the FlashBlades, which sit on both ends of the container. Now today, I have two dedicated storage racks, or common, dedicated might not be the right word, but two storage racks, one on the left, one on the right. And I treat them as separate storage racks. They could be one, but I created them separately for disaster recovery purposes, so things keep working in case one rack were to go down. But that being said, there's no reason why I wouldn't add a couple more of them here in the future. So I can just have a, say, five to 10 petabyte storage setup, and I'll have my DR in another, because the DR shouldn't be in the same container. Okay, but I'll DR outside of this container. So I got them all together, I leveraged subclustering, I leveraged separation of storage and compute. I was able to convince many of my clients that they didn't need their own account table, that they were better off having one. I reduced latency, I reduced our data quality issues, AKA ticketing, okay. I was able to expand. I was able to leverage elasticity within this cluster. As you can see, there are racks and racks of compute.
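The container layout John describes, loader subclusters on the sides, query compute in the middle, and shared FlashBlade storage on both ends, can be sketched as a simple topology. The subcluster names, node counts, and routing rule below are illustrative assumptions, not AT&T's actual configuration or a Vertica API:

```python
# Hypothetical sketch of the Eon-Mode-style layout described above:
# shared object storage on both ends, loader subclusters dedicated to
# ingest, and per-business-unit compute subclusters in the middle.

cluster = {
    "storage_racks": ["flashblade-left", "flashblade-right"],  # shared, DR-separated
    "subclusters": {
        "loader-a":  {"role": "ingest", "nodes": 4},
        "loader-b":  {"role": "ingest", "nodes": 4},
        "finance":   {"role": "query",  "nodes": 8},
        "marketing": {"role": "query",  "nodes": 8},
        "care":      {"role": "query",  "nodes": 6},
    },
}

def route(workload_kind, cluster):
    """Pick target subclusters for a workload: loads go to the ingest
    subclusters, queries go to the business unit's own subcluster."""
    if workload_kind == "load":
        # Ingest never competes with the query subclusters for compute.
        return [name for name, sc in cluster["subclusters"].items()
                if sc["role"] == "ingest"]
    return [name for name, sc in cluster["subclusters"].items()
            if sc["role"] == "query" and name == workload_kind]

print(route("load", cluster))     # both loader subclusters
print(route("finance", cluster))  # finance's dedicated compute only
```

The design choice this illustrates is the one John credits subclustering for: every business unit keeps the feeling of owning its own compute, while all of them read from the same shared account tables on communal storage.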
We set up what we'll call the fixed capacity that each of the business units needed. And then I'm able to ramp up and release the compute that's necessary for each one of my clients based on their workloads throughout the day. And so while some of the compute, towards the right there, has already more or less dedicated itself, all the others are free for anybody to use. So in essence, what I have is a concert hall with a lot of seats available. So if I want to run a 10-chair symphony or an 80-chair symphony, I'm able to do that. And all the while, I can also do the same with my loader nodes. I can expand my loader nodes to actually have their own symphony all to themselves, and not compete with any other workloads of the other clusters. What does that change for our organization? Well, it really changes the way our database administrators actually do their jobs. This has been a big transformation for them. They have actually become data conductors. Maybe you might even call them composers, which is interesting, because what I've asked them to do is morph into less technology and more workload analysis. And in doing so, we're able to write auto-detect scripts that watch the queues, watch the workloads, so that we can ramp up and trim down the cluster and subclusters as necessary. This has been an exciting transformation for our DBAs, who I now need to classify as something maybe like DCAs. I don't know, I'll have to work with HR on that. But I think it's an exciting future for their careers. And if we bring it all together, our clusters start looking like this. Where everything is moving in harmony, we have lots of seats open for extra musicians, and we are able to emulate a cloud experience on-prem. And so, I want you to sit back and enjoy the Pure Vertica Symphony, live at AT&T.
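The auto-detect scripts John mentions, watching queues and workloads to ramp subclusters up and trim them down, might look something like the following sketch. The thresholds, the doubling/halving policy, and the data shapes are all illustrative assumptions, not AT&T's actual scripts or Vertica's API:

```python
# Hypothetical queue-watching autoscaler for per-business-unit subclusters.
# It only plans actions; actually adding or removing nodes would be done
# by whatever cluster-management tooling the site uses.

def plan_scaling(queue_depths, scale_up_at=50, scale_down_at=5,
                 min_nodes=2, max_nodes=16):
    """Return a scaling action per subcluster from observed queue depth.

    queue_depths maps subcluster name -> (queued_jobs, current_nodes).
    """
    plan = {}
    for name, (queued, nodes) in queue_depths.items():
        if queued >= scale_up_at and nodes < max_nodes:
            plan[name] = ("scale_up", min(nodes * 2, max_nodes))
        elif queued <= scale_down_at and nodes > min_nodes:
            plan[name] = ("scale_down", max(nodes // 2, min_nodes))
        else:
            plan[name] = ("hold", nodes)
    return plan

if __name__ == "__main__":
    observed = {
        "finance":   (120, 4),  # long queue: add chairs to the symphony
        "marketing": (2, 8),    # idle: release compute back to the pool
        "loaders":   (20, 4),   # steady: leave alone
    }
    print(plan_scaling(observed))
```

This is the "workload analysis, not technology" shift in miniature: the DBA-turned-conductor tunes the thresholds and policy, and the script handles the moment-to-moment seat count.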
(soft music) >> Joy: Thank you so much, John, for an informative and very creative look at the benefits that AT&T is getting from its Pure Vertica symphony. I do really like the idea of engaging HR to change the title to Data Conductor. That's fantastic. I've always believed that music brings people together. And now it's clear that analytics at AT&T is part of that musical advantage. So, now it's time for a short break. And we'll be back for our breakout sessions, beginning at 12 pm Eastern Daylight Time. We have some really exciting sessions planned later today, and then again, as you can see, on Wednesday. Now, because all of you are already logged in and listening to this keynote, you already know the steps to continue to participate in the sessions that are listed here and on the previous slide. In addition, everyone received an email yesterday and today, and you'll get another one tomorrow, outlining the simple steps to register, login and choose your session. If you have any questions, check out the emails or go to www.vertica.com/bdc2020 for the logistics information. There are a lot of choices, and that's always a good thing. Don't worry if you want to attend more than one session, or can't listen to the live sessions due to your timezone. All the sessions, including the Q&A sections, will be available on demand, and everyone will have access to the recordings, as well as even more pre-recorded sessions that we'll post to the BDC website. Now I do want to leave you with two other important sites. First, our Vertica Academy. Vertica Academy is available to everyone. And there's a variety of very technical, self-paced, on-demand training, virtual instructor-led workshops, and Vertica Essentials Certification. And it's all free. Because we believe that Vertica expertise helps everyone accelerate their Vertica projects and the advantage that those projects deliver.
Now, if you have questions or want to engage with our Vertica engineering team, we're waiting for you on the Vertica forum. We'll answer any questions or discuss any ideas that you might have. Thank you again for joining the Vertica Big Data Conference Keynote Session. Enjoy the rest of the BDC, because there's a lot more to come.
SUMMARY :
Colin Mahony closes the keynote by recapping Vertica's customer-first focus and the move to a virtual BDC; Micro Focus CEO Stephen Murdoch announces incremental investment in Vertica; Amy Fowler of Pure Storage explains why FlashBlade's disaggregated compute and storage suits Vertica in Eon Mode; and John Yovanovich of AT&T describes consolidating siloed business units onto a shared Vertica-on-FlashBlade platform.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Stephen | PERSON | 0.99+ |
Amy Fowler | PERSON | 0.99+ |
Mike | PERSON | 0.99+ |
Amy | PERSON | 0.99+ |
Colin Mahony | PERSON | 0.99+ |
AT&T | ORGANIZATION | 0.99+ |
Boston | LOCATION | 0.99+ |
John Yovanovich | PERSON | 0.99+ |
Vertica | ORGANIZATION | 0.99+ |
Joy King | PERSON | 0.99+ |
Mike Stonebraker | PERSON | 0.99+ |
John | PERSON | 0.99+ |
May 2018 | DATE | 0.99+ |
100% | QUANTITY | 0.99+ |
Wednesday | DATE | 0.99+ |
Colin | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Vertica Academy | ORGANIZATION | 0.99+ |
five | QUANTITY | 0.99+ |
Joy | PERSON | 0.99+ |
2020 | DATE | 0.99+ |
two | QUANTITY | 0.99+ |
Uber | ORGANIZATION | 0.99+ |
Stephen Murdoch | PERSON | 0.99+ |
Vertica 10 | TITLE | 0.99+ |
Pure Storage | ORGANIZATION | 0.99+ |
one | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
Philips | ORGANIZATION | 0.99+ |
tomorrow | DATE | 0.99+ |
September 2019 | DATE | 0.99+ |
Python | TITLE | 0.99+ |
www.vertica.com/bdc2020 | OTHER | 0.99+ |
One gig | QUANTITY | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Second | QUANTITY | 0.99+ |
First | QUANTITY | 0.99+ |
15 minutes | QUANTITY | 0.99+ |
yesterday | DATE | 0.99+ |
Bruce Chizen, Informatica - Informatica World 2017 - #INFA17 - #theCUBE
>> Narrator: Live, from San Francisco, it's the Cube, covering Informatica World 2017. Brought to you by Informatica. (techno music) >> Hey, welcome back, everyone. Live here in San Francisco, this is the Cube's exclusive coverage of Informatica World 2017, our third year covering Informatica, and more to come. I'm John Furrier with Silicon Angle, the Cube. My co-host, Peter Burris, Head of Research for Silicon Angle Media, as well as General Manager of Wikibon.com; check out the great research at Wikibon. Some great stuff there on IOT, cloud, big data, great stuff. Of course, go to SiliconAngle.com for all the coverage, and YouTube.com/SiliconAngle for all the Cube videos. Our next guest is Bruce Chizen, board member of a lot of private companies, also Special Advisor at Informatica. You're on the board of Informatica, no? >> Executive Chair. >> John: Executive Chair of Informatica. Not only a Special Advisor, Executive Chair. Welcome back, good to see you. >> Great to be here. >> You were on last year, great to have you back. What a popular video. Jerry Held was on yesterday. Let's get some Board insights, so first question, when are you going public? (laughing) >> Good one. >> John: Warmed you up, and then, no. I mean the performance is doing well. Give us a quick update. >> Company's doing well. Q4 was a good quarter, Q1 was a good quarter. I think we will be positioned to do something late 2018, early 2019. A lot depends on how the company continues to do. A lot depends on the market. The private equity investors are in no hurry. >> John: Yeah. >> But it's always nice to have that option. >> So it's one of the things, yeah, great option. Doing well. We heard that also from some of the management. We got Anil coming on, we'll press him on some of the performance side, but you've always had good products out, we talked about it last year. But the industry's going through a massive transformation. You've seen many waves over the years. The waves are hitting.
What's your perspective right now? I mean, it's a pretty big wave. You got to get the surfboard out there, there's a set coming in. What's the big wave right now? >> So, data is driving every transformation within every organization. Any company that is not using and taking advantage of data will be left behind. You look at how companies like Amazon and Google, and now a lot of our customers like Schwab and Tesla and others, the way they're using data, that will allow them to continue to either be successful, in the case of a Schwab, or be a disruptor, like somebody like Tesla. Fortunately for us at Informatica, we are helping to drive that digital transformation. >> One of the things that I always observe, younger than you are, I've only seen a few waves in my day, but the waves that were the most impactful in terms of creating wealth, and opportunity, and innovation, have had a cool and relevant factor. Meaning, if you go back to the PC days, it was cool and relevant. If you go back to the mini computer, cool and relevant. And it goes on and on and on. And certainly the internet, cool and relevant. But now, you mention Tesla. I'm test driving one on Friday. My kids are like "Don't buy the Audi, buy the Tesla." This is my kids. So it's cooler, it's a spaceship, it's cooler than the other cars. >> Bruce: Or an iPhone on wheels. >> Peter: (laughs) Exactly. A computer on wheels. >> So cool and relevant, talk about what is the cool and relevant thing right now. You talk about user experience, that's one. Data's changing it. So how is data being the cool and relevant trend? Point to some things that... >> If you look at what's happening from the chip on up, everything, everything will be intelligent. And I hate to use the term "internet of things," but the reality is everything will have intelligence. And that intelligent information will be able to be taken advantage of because of the scale of the cloud.
Which means that any company will be able to take information, data, analyze it on the cloud, and then use it to do something with. And it's happening now. Fortunately, Informatica sits right in the middle of that, because they're the ones who can rationalize that data on behalf of their customers. 'Cause there's going to be a lot of it, and somebody needs to govern it, secure it, homogenize it. >> John: You consider them an enabling platform? >> Absolutely, absolutely. I was joking, we just went through a rebranding exercise. And it's kind of cute, a new logo, and it's kind of bold and sleek, and it shows we're a leader, but it's a logo. But really, around the messaging, we are finally getting across that we are the ones unleashing the power of data. That's what Informatica does. We just never really told anybody about it. We were very product focused, not really helping customers understand how uniquely positioned the company was. >> And it's also, you guys have done some things. Let's just go back and look at going private. Brought in a new management team, have product chops again, we've talked about that in previous years. Last year in particular. So, okay, you have the wind at your back. Now you got Sally as a CMO, now you got to start being a humble braggart about the cool stuff you're doing. Which is marketing, basically. >> That's correct. >> John: But now, it's digital. >> Yeah. >> So, what's the Board conversation like, you say "Go, go build the brand!" >> So first of all, being private is great. (laughing) Because we get to do things you couldn't do as a public company. A lot of our customers want to buy the products and solutions via subscription, and that has a huge impact on the P&L, especially in the short term. Cash flow's fine. So the PE guys are going, okay, it's great, because we'll come out of this as a better company, and our customers like it because that's the way they want to buy products. So, that helps a lot.
The conversation at the Board level has been, "Wow, we're number one in every category in which we participate. Everything from big data to cloud integration to traditional on-premise, to real-time streaming, and, and, and data security." >> You're only one of three vendors in the Google general availability banner which went out yesterday. We covered that on Silicon Angle. >> We're number one there, we had AWS speak at our conference, we had Azure speak at our conference. All of the cloud guys love Informatica because we are the ones who are uniquely positioned to deal with all this data on behalf of their customers. As a private company, we're able to take advantage of that, spend some extra money on marketing. You know, a lot of our customers know about us, but a lot more should know about us. So, part of coming out, having a new logo, having a new digital campaign, changing the website, that costs money. But as a private company, we get to do that. Because the fruits of those efforts will end up occurring a couple of years down the road, which is fine. >> So let me see if I can weave those two thoughts together in what I thought was an interesting way. Given that increasingly a lot of data's going to be in the cloud, and that's where the longer analysis is going to be required, that means a lot of the tools are going to have to be in the cloud. Amazon Marketplace is going to be a place where a lot of tools are going to be chosen. People are going to go into the Amazon Marketplace and see a lot of different options, including some that are free. They may not work as well, but they're free. You guys, what happens with marketing, and what's happening with that kind of a trend, is you need, as customers, to choose tools that are actually going to work, to serve or to solve the problem, to do the work that you need them to perform.
And so what Sally Jenkins, the CMO, has done, with this new branding, is introduce the process of how do you get more customers to choose the right tool to do the right job? Does that make sense to you? >> It makes absolute sense, free is good. But be careful what you ask for. Sometimes you get what you pay for. You're talking about enterprise data. You want it to be governed, you want it to be secure. You want it to be accurate. >> John: Now there's laws coming out where you have to do it. >> You look at GTB... >> Peter: GDPR. >> GDPR in Europe, the privacy issues. You look at what's happening with Facebook, or what was reported today with France and how they're not happy with Facebook's privacy behaviors. It's an issue. It's an issue for anybody who does business anywhere, especially if you're a global company and you do business in Europe. You have to worry about corporate governance. Data security, data governance, data security. That's Informatica. The other thing is, while there will be some customers who will say "I'm going to AWS," there will be more customers who will either say "I have some legacy systems that I'm going to leave on-premise, and new projects will be in the cloud." Or they're going to say "I'm moving everything to the cloud, but I don't want to be held hostage by one cloud provider." And they're going to go with Amazon and Azure and Google and maybe Oracle, and, and, and. And again, because Informatica is Swiss, we're able to provide them with a solution that allows them to accomplish their data needs. >> Well, congratulations on the performance, I want to get that out of the way. But I want to ask a specific question on the historical, holistic picture of Informatica. Going back, what were the key bets that you guys made? 'Cause you guys sit around, and you got the private equity now coming to the table, they have expectations, but at the end of the day you've got to build a business.
What were the key bets that are yielding the fruit that we're seeing? >> The number one bet was that the company had great products and a great R&D organization. We believed that, and fortunately, we got it right. Because if you don't have great products and passionate R&D organizations around the world, you can't make up for that. It doesn't make a difference how much you spend on marketing. At least not in the business that we're in. So that was the number one bet, and that proved to play out well. The second thing was, this was a company that had done so well for so long that they never needed to change their business processes to behave like a billion, two billion, three billion, four billion dollar company. Many of their business processes were like that of a 200 million dollar company. And that's easier to fix. So things around back end, IT, legal, finance, go-to-market, marketing, sales. >> John: Less of a risk from an investment standpoint. >> That's correct. So that's what we believed, and we were right. And where we've been spending most of our energy and effort is helping the company, through the new management team, improve their business processes and their go-to-market. >> So we had a critical analysis yesterday during our wrap up session, and one of the comments I made, I want to get your reaction to this, was although impressive, you're number one in all these Gartner Magic Quadrant categories, but that's an old scoreboard. If we're really living in digital transformation, those shouldn't really be a telltale sign for what the performance of the new KPIs or the new metrics are. And so we were pontificating and analyzing what that would be, still unknown, we're going to see it. But Peter had a good point, he said "At the end of the day, customer wins." >> Yeah, that was my reaction. It's like at the end of the day, all that matters is, do the customers.... >> What does the scoreboard look like for customer wins?
I know you were at the executive summit they had yesterday at the Intercontinental right around the corner. I had a chance to meet some of them at that dinner, some conversation. But I want to get your perspective. What is the vibe of the customers, what are those customer wins, and how does that translate into future growth for Informatica? >> Any customer who is looking at data, data management, strategically, is going with Informatica. >> Mmm hmm. >> There are a number of competitors that we have who try to compete with Informatica at the product level, and they end up doing okay through pricing, through better sales tactics, but when we have the opportunity to speak to the Chief Data Officer, the CIO, the CEO, they go with Informatica. It's the reason why Tesla went with Informatica on their project where they're trying to tie together the auto business with the solar business. Because if they get to know both sets of customers and are able to sync that up, one plus one will be greater than two for them, and that's why they did that deal. Or it's why Amazon has chosen our MDM solution for their sales operations. So you look at leading companies who are able to look at the enterprise level, at the strategic level, they are going with Informatica. That's why we know we're winning. >> So Bruce, give us three sentences, what is strategic data management? >> Strategic data management is being able to take reams and reams of data from all different platforms, traditional legacy, big data, real-time solutions, and data from the cloud and be able to look at it intelligently. Use artificial intelligence and machine learning to be able to analyze that data in a more intelligent way, and then act on it. >> So two questions on that point, I was going to ask about the AI washing going on in the industry. Every event now is like, "Oh my god, AI, we've got AI," but that's not really AI. 
What is AI? We call it augmented intelligence, because you're really augmenting with the data, but even Google IO's got a little neural net throwback to the 80s. What's your thoughts on how customers should look through the lens of b.s. to say, "Wow, that's the real AI, or the real augmented intelligence." >> Does it do anything? That's ultimately the question that a Chief Data Officer or CIO or CEO...is something changing because of the artificial intelligence being applied? In the case of Informatica, we announced an AI platform called Clair, "clairvoyant," so artificial intelligence. What is Clair? It allows you to develop solutions like our enterprise information catalog, where an organization has thousands and thousands of databases, it's able to look at the metadata within those databases and then over time keep disclosing more and more data appropriate to the information that you're looking for. So then, if I'm an analyst or a businessperson, a marketing person, a sales person, I can take action on the right set of data. That's true artificial intelligence. >> Bruce, I want to get to one final point as we are winding down here. Again, you've seen many waves. But I want to talk about the companies that are trying to get through the transition of this transformation. Informatica certainly cleared the runway, they've got some things to work on, certainly brand-building. I see that as their air cover, and a rising tide will float a lot of boats in the ecosystem. But there are companies where they have been in the infrastructure business and the cloud is one big infrastructure, selling boxes and whatnot. Other companies have traditional software models, download, whatever you want to call it, on-prem licenses, not subscriptions. They're working hard. Your advice to them if you are on their Board, or as a friend, what do you say to them, what do they got to do to get through this?
And how should customers look at who's winning and who's losing, in terms of progress? >> The world of enterprise computing is moving to the cloud. Legacy systems will remain for a while. They need to figure out how to take their legacy solutions and make them relevant to the world of cloud computing. And if they can't do that, they should sell their company or get out of business. (laughing) >> And certainly data is the oil, it's the gold, it's the lifeblood of an organization. >> Of any organization. Even at Informatica, internally, we're using our own intelligent data platform to do our own marketing. Sally Jenkins is working closely with our CIO Graeme Thompson on working on solutions where we could help better understand what our customers want and need, so we can provide them with the right solution, leveraging our intelligent data lake. >> Bruce, thanks for coming on the Cube. Really appreciate your insight. Again, you've seen a lot of waves, you've been in the industry a long time, you have great Board presence, as well as other companies. Thanks for sharing the insight, and the data here on the Cube. A lot of insights and analytics being extracted here and sharing it with you. Certainly we're not legacy, we don't need to sell our business, we're doing great. If you haven't, make the transition. Good advice, thanks so much. >> Bruce: Great to be here. >> Bruce Chizen inside the Cube here. I'm John Furrier with Peter Burris. Stay with us for more coverage after this short break. (techno music)
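Bruce's description of Clair above, an enterprise information catalog that looks at the metadata of thousands of databases and surfaces the data most relevant to what you're looking for, boils down to metadata relevance ranking. A minimal, hypothetical sketch of that idea in Python; every name here is invented for illustration and has no relation to Informatica's actual implementation:

```python
from dataclasses import dataclass, field


@dataclass
class TableMetadata:
    """Metadata about one table: its name, column names, and tags."""
    name: str
    columns: list
    tags: set = field(default_factory=set)


class CatalogIndex:
    """Toy enterprise data catalog: ranks registered tables by how much
    their metadata (name, columns, tags) overlaps a search query."""

    def __init__(self):
        self.tables = []

    def register(self, table):
        self.tables.append(table)

    def search(self, query):
        terms = set(query.lower().split())
        scored = []
        for t in self.tables:
            # the catalog only ever inspects metadata, never the rows themselves
            vocab = {t.name.lower(), *[c.lower() for c in t.columns], *t.tags}
            score = len(terms & vocab)
            if score:
                scored.append((score, t.name))
        # highest metadata overlap first
        return [name for score, name in sorted(scored, reverse=True)]
```

In a real catalog the scoring would come from learned relevance models rather than simple term overlap; the point is only that the ranking happens over metadata, which is why it scales to thousands of databases.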
Oliver Chiu, IBM & Wei Wang, Hortonworks | BigData SV 2017
>> Narrator: Live from San Jose, California, it's the CUBE, covering Big Data Silicon Valley 2017. (upbeat music) >> Okay, welcome back everyone, live in Silicon Valley, this is the CUBE coverage of Big Data Week, Big Data Silicon Valley, our event, in conjunction with Strata + Hadoop. This is the CUBE for two days of wall-to-wall coverage. I'm John Furrier with George Gilbert, our Big Data analyst from Wikibon, as well as Peter Burris, covering all of the angles. And our next guest is Wei Wang, Senior Director of Product Marketing at Hortonworks, a CUBE alumni, and Oliver Chiu, Senior Product Marketing Manager for Big Data and Microsoft Cloud at Azure. Guys, welcome to the CUBE, good to see you again. >> Yes. >> John: On the CUBE, appreciate you coming on. >> Thank you very much. >> So Microsoft and Hortonworks, you guys are no strangers. We have covered you guys many times on the CUBE, on HD Insights. You have some stuff happening here, and I was just talking about you guys this morning on another segment, like, saying hey, you know the distros need a Cloud strategy. So you have something happening tomorrow. Blog post going out. >> Wei: Yep. >> What's the news with Microsoft? >> So essentially I think that we are truly adopting CloudFirst. And you know that we have been really acquiring a lot of customers in the Cloud. We have announced in our earnings that more than a quarter of our customers actually already have a Cloud strategy. I want to give out a few statistics that Gartner told us actually last year. The increase for their end users went up 57%, just to talk about Hadoop and Microsoft Azure. So what we're here to talk about is the next generation. We're putting out our latest and greatest innovation, which comes in the package of the release of HDP 2.6, that's our last release. I think our last conversation was on 2.5. So 2.6's great latest and newest innovations to put on CloudFirst, hence our partner, here, Microsoft. We're going to put it on Microsoft HD Insight.
>> That's super exciting. And, you know, Oliver, one of the things that we've been really fascinated with and covering for multiple years now is the transformation of Microsoft. Even prior to Satya, who's a CUBE alumni by the way, been on the CUBE, when we were at the Accel event at Stanford. So, CEO of Microsoft, CUBE alumni, good to have that. But, it's interesting, right? I mean, the Open Compute Project. They donated a boatload of IP into the open-source. Heavily now open-source, Brendan Burns works for Microsoft. We're seeing a huge transformation of Microsoft. You've been working with Hortonworks for a while. Now, it's kind of coming together, and one of the things that's interesting, the trend that's teasing out on the CUBE all the time now, is integration. You're seeing this flash point where okay, I've got some Hadoop, I've got a bunch of other stuff in the enterprise equation that's kind of coming together. And you know, things like IoT, and AI is all around the corner as well. How are you guys getting this all packaged together? 'Cause this kind of highlights some of the things that are now integrated in with the tools you have. Give us an update. >> Yeah, absolutely. So for sure, just to kind of respond to the trend, Microsoft kind of made that transformation of being CloudFirst, you know, many years ago. And it's been great to partner with someone like Hortonworks actually for the last four years of bringing HD Insight as a first party Microsoft Cloud service. And because of that, as we're building other Cloud services around in Azure, we have over 60 services. Think about that. That's 60 PaaS and IaaS services in Microsoft, part of the Azure ecosystem. All of this is starting to get completely integrated with all of our other services. So HD Insight, as an example, is integrated with all of our relational investments, our BI investments, our machine learning investments, our data science investments.
And so, it's really just becoming part of the fabric of the Azure Cloud. And so that's a testament to the great partnership that we're having with Hortonworks. >> So the inquiry comment from Gartner, and we're seeing similar things on the Wikibon site on our research team, is that now the legitimacy of, say, of seeing how Hadoop fits into the bigger picture, not just Hadoop being the pure-play Big Data platform, which many people were doing. But now they're seeing a bigger picture where I can have Hadoop, and I can have some other stuff all integrating. Is that all kind of where this is going from you guys' perspective? >> So yeah, again, some statistics: we have done a TechValidate survey, and our customers are telling us that 43% of the respondents are actually using that integrated approach, the hybrid. They're using the Cloud. They're using our stuff on-premise to actually provide integrated end-to-end processing workloads. I think a couple years ago, people probably thought a little bit about what kind of data they want to put in the Cloud, what kind of workload they want to actually execute in the Cloud, versus their own premises. I think what we see is that line starting to blur a little bit. And given the partnership we have with Microsoft, the kind of enterprise-ready functionalities, and we talked about that for a long time last time I was here. Talk about security, talk about governance, talk about just an integrated layer to manage the entire thing, either on-premise, or in the Cloud. I think those are some of the functionalities or some of the innovations that make people a lot more at ease with the idea of putting entire mission-critical applications in the Cloud. And I want to mention, especially with our blog going out tomorrow, that we will actually announce Spark 2.1, in which, in Microsoft Azure HD Insight, we're actually going to guarantee a 99.9% SLA.
Right, so that's for enterprise customers. For many of us, that is truly an assurance, so that people not only feel at ease about their data and where it's going to be located, either in the Cloud or within their data center, but also about the kind of speed and response and reliability. >> Oliver, I want to queue off something you said which was interesting, that you have 60 services, and that they're increasingly integrated with each other. The idea that Hadoop itself is made up of many projects or services, and I think in some amount of time, we won't look at it as a discrete project or product, but something that's integrated, that together makes a pipeline, a mix-and-match. I'm curious if you can share with us a vision of how you see Hadoop fitting in with a richer set of Microsoft services, where it might be SQL Server, it might be streaming analytics, what that looks like, and so the issue of sort of a mix-and-match toolkit fades into a more seamless set of services.
>> Yeah, absolutely. And you're right, Hadoop, and Wei will definitely reiterate this, is that Hadoop is a platform, right, and certainly there are multiple different workloads and projects on that platform that do a lot of different things. You have Spark that can do machine learning and streaming, and SQL-like queries, and you have Hadoop itself that can do batch, interactive, streaming as well. So, you see kind of a lot of workloads being built on open-source Hadoop. And as you bring it to the Cloud, it's really for customers that what we found, and kind of this new Microsoft that is often talked about, is it's all about choice and flexibility for our customers. And so, some customers want to be 100% open-source Apache Hadoop, and if they want that, HD Insight is the right offering, and what we can do is we can surround it with other capabilities that are outside of maybe core Hadoop-type capabilities. Like if you want media services, all the way down to, you know, other technologies nothing related specifically to data and analytics. And so they can combine that with the Hadoop offering, and blend it into a combined offering. And there are some customers that will blend open-source Hadoop with some of our Azure data services as well, because it offers something unique or different. But it's really a choice for our customers. Whatever they're open to, whatever their kind of strategy for their organization.
>> Is there, just to kind of then compare it with other philosophies, do you see that notion that Hadoop now becomes a set of services that might or might not be mixed and matched with native services? Is that different from how Amazon or Google, you know, you perceive them to be integrating Hadoop into their sort of pipelines and services? >> Yeah, it's different, because I see Amazon and Google, like, for instance, Google kind of is starting to change their philosophy a little bit with the introduction of Dataproc. But before, you could see them as an organization that was really focused on bringing some of the internal learnings of Google into the marketplace with their own, you could say proprietary-type services with some of the offerings that they have. But now, they're kind of realizing the value that the Apache Hadoop ecosystem brings. And so, with that comes the introduction of their own managed service. And for AWS, their roots, so to speak, is IaaS, that's kind of the roots of their Cloud, and they're starting to bring kind of other systems, very similar to, I would say, Microsoft's strategy. For us, we are all about making things enterprise-ready. So that's the unique differentiator and kind of what you alluded to. And so for Microsoft, all of our data services are backed by a 99.9% service-level agreement, including our relationship with Hortonworks. So that's kind of one, >> Just say that again, one more time.
>> 99.9% up-time, and if, >> SLA. >> Oliver: SLA, and so that's a guarantee to our customers. So if anything we're, >> John: One more time. >> It's a guarantee to our customers. >> No, this is important. SLA, I mean Google Next, their Cloud event last week, didn't talk much about that. They talked about speeds and feeds, >> Exactly >> Not a lot of SLAs. This is a mandate for the enterprise. They care more about SLA, so, not that they don't care about price, but they'd much rather have solid, bulletproof SLAs than the best price. 'Cause of the total cost of ownership. >> Right. And that's really the heritage of where Microsoft comes from, is we have been serving our on-premises customers for so long, we understand what they want and need and require for a mission-critical enterprise-ready deployment. And so, our relationship with Hortonworks, absolutely, 99.9% service-level agreement that we will guarantee to our customers, and across all of the Hadoop workloads, whether it would be Hive, whether it would be Spark, whether it'd be Kafka, any of the workloads that we have on HD Insight, is enterprise-ready by virtue, mission-critical, built-in, all that stuff that you would expect.
>> Yeah, you guys certainly have a great track record with enterprise. No debate about that, 100%. Um, back to you guys, I want to take a step back and look at some things we've been observing kicking off this week at Strata + Hadoop. This is our eighth year covering it. Hadoop World now has evolved into a whole huge thing with Big Data SV and Big Data NYC that we run as well. The bets that were made. And so, I've been intrigued by HD Insights from day one. >> Yep. >> Especially the relationship with Microsoft. Got our attention right away, because of where we saw the dots connecting, which is kind of where we are now. That's a good bet. We're looking at what bets were made and who's making which bets when, and how they're panning out, so I want to just connect the dots. Bets that you guys have made, and the bets that you guys have made that are now paying off, and certainly we've covered Revolution Analytics before on camera. Obviously, now, looking real good, middle of the fairway, as they say. Bets you guys have made that, hey, that was a good call.
>> Right, and we think that, first and foremost, we were early to support machine learning, we don't call it AI, but we are probably the first to put Spark, right, in Hadoop. I know that Spark has gained a lot of traction, but I remember that in the early days, we are the ones that, as a distro, going out there not only just verbally talking about support of Spark, but truly putting it in our distribution as one of the components. We actually now, in the last version, also allow flexibility. You know Spark, how often they change. Every six weeks they have a new version. And that kind of runs into the paradox of what enterprise-ready actually is. Within six weeks, they can't even roll out an entire process, right? If they have a workload, they probably can't even get everyone to adopt that yet, within six weeks. So what we did, actually, in the last version, and which we will continue to do, is to essentially support multiple versions of Spark. Right, we essentially talk about that. And the other bet we have made is about Hive. We truly made that a kind of initiative, behind the Stinger initiative, and also Hive now with LLAP. We made the effort to join in with all the other open-source developers to go behind this project to make sure that SQL is becoming truly available for our customers, right. Not only just affordable, but also having the most comprehensive coverage for SQL, and SQL:2011. But also now having that almost sub-second interactive query. So I think that's the kind of bet we made.
>> Yeah, I guess the compatibility of SQL, then you got the performance advantage going on, and this database is where it's in memory or it's SSD. That seems to be the action. >> Wei: Yeah. >> Oliver, you guys made some good bets. So, let's go down the list. >> So let's go down memory lane. I always kind of want to go back to our partnership with Hortonworks. We partnered with Hortonworks really early on, in the early days of Hortonworks' existence. And the reason we made that bet was because of Hortonworks' strategy of being completely open. Right, and so that was a key decision criteria for Microsoft. That we wanted to partner with someone whose entire philosophy was open-source, and committing everything back to the Apache ecosystem. And so that was a very strategic bet that we made. >> John: It was bold at the time, too. >> It was very bold, at the time, yeah. Because Hortonworks at that time was a much smaller company than they are today. But we kind of understood where the ecosystem was going, and we wanted to partner with people who were committing code back into the ecosystem. So that, I would argue, is definitely one really big bet that was a very successful one and continues to play out even today. Other bets that we've made, and like we've talked about prior, is our acquisition of Revolution Analytics a couple years ago and that's, >> R just keeps on rolling, it keeps on rolling, rolling, rolling. Awesome. >> Absolutely. Yeah. >> Alright, final words. Why don't we get updated on the data science experiences you guys have. Is there any update there? What's going on, what seems to be, the data science tools are accelerating fast. And, in fact, some are saying it looks like the software tools of years and years ago. A lot more work to do. So what's happening with the data science experience? >> Yeah, absolutely, and just tying back to that original comment around R, Revolution Analytics. That has become Microsoft R Server.
And we're offering that, available on-premises and in the Cloud. So on-premises, it's completely integrated with SQL Server. So all SQL Server customers will now be able to do in-database analytics with R built in to the core database. And that we see as a major win for us, and a differentiator in the marketplace. But in the Cloud, in conjunction with our partnership with Hortonworks, we're making Microsoft R Server available as part of our integration with Azure HD Insights. So we're kind of just tying back all that integration that we talked about. And so that's built in, and so any customer can take R, and parallelize that across any number of Hadoop and Spark nodes in a managed service within minutes. Clusters will spin up, and they can just run all their data science models and train them across any number of Hadoop and Spark nodes. And so that is, >> John: That takes the heavy lifting away on the cluster management side, so they can focus on their jobs. >> Oliver: Absolutely. >> Awesome. Well guys, thanks for coming on. We really appreciate it. Wei Wang with Hortonworks, and we have Oliver Chiu from Microsoft. Great to get the update, and tomorrow 10:30, the CloudFirst news hits. CloudFirst, Hortonworks with Azure, great news, congratulations, good Cloud play for Hortonworks. The CUBE, I'm John Furrier with George Gilbert. More coverage live in Silicon Valley after this short break.
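Oliver's point about taking R and parallelizing model scoring across any number of Hadoop and Spark nodes is, at heart, plain data parallelism: split the rows into partitions, apply the trained model to each partition independently, and gather the results. A toy sketch of that pattern in pure Python, with a thread pool standing in for Spark executors; the "model" here is a made-up stand-in for illustration, not Microsoft R Server's actual API:

```python
from concurrent.futures import ThreadPoolExecutor


def score_partition(rows):
    """Stand-in for applying a trained model (e.g. an R model) to one
    partition of rows; here the 'model' is just y = 2x + 1."""
    return [2 * x + 1 for x in rows]


def parallel_score(rows, workers=4):
    """Fan partitions out to workers and gather the scores, the way a
    Spark job distributes scoring across executor nodes.

    Returns the scores in sorted order, since partitioned execution
    does not by itself preserve the original row order."""
    # stride-split into one partition per worker
    parts = [rows[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        scored_parts = pool.map(score_partition, parts)
    return sorted(y for part in scored_parts for y in part)
```

On HD Insight the same shape shows up as Spark tasks scoring partitions on executor nodes, with the cluster spin-up and teardown handled by the managed service, which is the "heavy lifting" John refers to.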
Amit Walia | BigData SV 2017
>> Announcer: Live from San Jose, California, it's the Cube, covering Big Data Silicon Valley 2017. (upbeat music) >> Hello and welcome to the Cube's special coverage of Big Data SV, Big Data in Silicon Valley in conjunction with Strata + Hadoop. I'm John Furrier with George Gilbert, with Mickey Bonn and Peter Burns as well. We'll be doing interviews all day today and tomorrow, here in Silicon Valley in San Jose. Our next guest is Amit Walia who's the Executive Vice President and Chief Product Officer of Informatica. Kicking of the day one of our coverage. Great to see you. Thanks for joining us on our kick off. >> Good to be here with you, John. >> So obviously big data. this is like the eighth year of us covering, what was once Hadoop World, now it's Strata + Hadoop, Big Data SV. We also do Big Data NYC with the Cube and it's been an interesting transformation over the past eight years. This year has been really really hot with you're starting to see Big Data starting to get a clear line of sight of where it's going. So I want to get your thoughts, Amit, on where the view of the marketplace is from your standpoint. Obviously Informatica's got a big place in the enterprise. And the real trends on how the enterprises are taking analytics and specifically with the cloud. You got the AI looming, all buzzed up on AI. That really seized, people had to get their arms around that. And you see IoT. Intel announced an acquisition, $15 billion for autonomous vehicles, which is essentially data. What's your views? >> Amit: Well I think it's a great question. 10 years have happened since Hadoop started right? I think what has happened as we see is that today what enterprises are trying to encapsulate is what they call digital transformation. What does it mean? I mean think about it, digital transformation for enterprises, it means three unique things. 
They're transforming their business models to serve their customers better, they're transforming their operational models for their own execution internally, if I'm a manufacturing or an execution-oriented company. The third one is basically making sure that their offerings are also tailored to their customers. And in that context, if you think about it, it's all a data-driven world. Because it's data that helps customers be more insightful, be more actionable, and be a lot more prepared for the future. And that covers the things that you said. Look, that's where Hadoop came into play with big data. But today there are three things organizations care about around big data. First, it's just a lot of data, right? How do I bring actionable insights out of it? So in that context, ML and AI are going to play a meaningful role. Because to me, as you talk about IoT, IoT is the big game changer of big data becoming big or huge data, if I may, for a minute. So machine learning, AI, self-service analytics is a part of that, and the third one would be big data and Hadoop going to cloud. That's going to be very fast. >> John: And so the enterprises now are also transforming, so this digital transformation, as you point out, is absolutely real, it's happening. And you start to see a lot more focus on the business models of companies where it's not just analytics as an IT function, it's been talked about for a while, but now it's really more relevant because you're starting to see impactful applications. >> Exactly. >> So with cloud and (chuckles) the new IoT stuff you start to say okay, apps matter. And so the data becomes super important. How is that changing the enterprises' readiness in terms of how they're consuming cloud and data and whatnot? What's your view on that? Because you guys are deep in this. >> Amit: Yep. >> What's the enterprises' orientation these days? >> So slight nuance to that, as an answer.
I think what organizations have realized is that today two things happened that never happened in the last 20 years. Massive fragmentation of the persistence layer, you see Hadoop itself fragmented the whole database layer. And a massive fragmentation of the app layer. So there are 3,000 enterprise-size apps today. So just think about it, you're not restricted to one app. So what customers and enterprises are realizing is that the data layer is where you need to organize yourself. So you need to own the data layer, you cannot just be in the app layer and the database layer because you got to be understanding your data. Because you could be anywhere and everywhere. And the best example I give in the world of cloud is, you don't own anything, you rent it. So what do you own? You own the darn data. So in that context, enterprise readiness, as you called it, becomes very important. So understanding and owning your data is the critical secret sauce. And that's where companies are getting disrupted. So the new guys are leveraging data, which by the way the legacy companies had, but they couldn't figure it out. >> What is that? This is important. I want to just double-click on that. Because you mentioned the data layer, what's the playbook? Because that's like the number one question that I get. >> Mm-hmm. >> On Cube interviews or off camera it's, okay, I want to have a data strategy. Now that's empty in its statement, but what is the playbook? I mean, is it architecture? Because the data is the strategic advantage. >> Amit: Yes. >> What are they doing? What's the architecture? What are some of the things that enterprises do? Now obviously they care about service level agreements and having potentially multicloud, for instance, as a key thing. But what is that playbook for this data layer? >> That's a very good question, sir. Enterprise readiness has a couple of dimensions. One, as you said, is that there will be hybrid, and hybrid doesn't just mean on-premises to cloud or multicloud.
I mean you're going to be in multi-SaaS apps, multi-platform apps, multi databases in the cloud. So there is a hybrid world over there. Second is that organizations need to figure out a data platform of their own. Because ultimately what they care for is that, do I have a full view of my customer? Do I have a full view of the products that I'm selling and how they are servicing my customers? That can only happen if you have what I call a meta-data driven data platform. Third one is, boy oh boy, you talked about self-service analytics, you need to know answers today. Having analytics be more self-serving for the business user, not necessarily the IT user, and then leveraging AI to make all these things a lot more powerful. Otherwise, you're going to be spending, what? Hours and hours doing statistical analysis, and you won't be able to get to it given the scale and size of data models. And SLAs will play a big role in the world of cloud. >> Just to follow up on that, so it sounds like you've got the self-service analytics to help essentially explore and visualize. >> Amit: Mm-hmm. >> You've got the data governance and cataloging and lineage to make sure it is high quality and navigable, and then you want to operationalize it once you've built the models. But there's this tension between I want what made the data lake great, which was just dump it all in there so we have this one central place, but all the governance stuff on top of that is sort of just well, we got to organize it anyway. >> Yeah. >> How do you resolve that tension? >> That is a very good question. And that's what enterprises kind of woke up to. So a good example I'll give you, what everybody wanted to make a data lake. I mean if you remember, two years ago 80% of the data lakes fell apart, and the reason, for the fact that you just said, is that people made the data lake a data swamp, if I may. Just dump a lot of data into my Hadoop cluster, and life will be great.
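As an aside, the "full view of my customer" idea described here can be sketched in a few lines: records from fragmented apps share an identifier, and per-source field mappings (a toy stand-in for a metadata layer) merge them into one canonical view. All source, record, and field names below are hypothetical, not Informatica's implementation:

```python
# Hypothetical records from three fragmented sources: a CRM app,
# a billing database, and a support SaaS tool. Field names differ
# per source, which is exactly the fragmentation problem.
crm = {"cust_id": "C42", "full_name": "Jane Doe", "segment": "enterprise"}
billing = {"customer": "C42", "mrr": 1200}
support = {"id": "C42", "open_tickets": 3}

def unify_customer(cust_id, sources):
    """Merge per-source records into one customer view, keyed by a
    shared identifier. Each source declares which of its fields maps
    to the canonical schema -- a toy stand-in for a metadata layer."""
    view = {"customer_id": cust_id}
    for record, mapping in sources:
        for src_field, canon_field in mapping.items():
            view[canon_field] = record[src_field]
    return view

view = unify_customer("C42", [
    (crm, {"full_name": "name", "segment": "segment"}),
    (billing, {"mrr": "monthly_revenue"}),
    (support, {"open_tickets": "open_tickets"}),
])
print(view)
```

The point of the sketch is that the mappings, not the apps or databases, are what the enterprise has to own: swap any source and only its mapping changes.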
But the thing is that, and what customers of large enterprises realized is, they became system integrators of their own. I got to bring data, catalog it, prepare it, surface it. So the belief of customers now is that, I need a place to go where basically I can easily bring in all the data, with a meta-data driven catalog, so I can use AI and ML to surface that data. So it's very easy at the preparation layer for my analysts to go around and play with data and then I can visualize anything. But if it's not all integrated out of the box, if each layer, each component is separately self-integrated, then it falls apart very quickly when you want to, to your question, operationalize it at an enterprise level. Large enterprises care about two things. Is it operationalizable? And is it scalable? That's where this could fall apart. And that's what our belief is. And that's where governance happens behind the scenes. You're not doing anything. Security of your data, governance of their data is driven through the catalog. You don't even feel it. It's there. >> I never liked the data lakes term. Dave Vellante knows I've always been kind of against it, even from day one, 'cause data's more fluid, I call it a data ocean, but to your point, I want to get on that point because I think data lakes is one dimension, right? >> Yeah. >> And we talked about this at Informatica World, last year I think. And this year it's May 15th. >> Yes. >> I think your event is coming up, but you guys introduced meta-data intelligence. >> Yep. >> So there was, the old model was throw it centralized, do some data governance, data management, fence it out, call, make some queries, get some reports. I'm over simplifying but it was like a side function. What you're getting at now is making that data valuable. >> Amit: Yep. >> So if it's in a lake or it's stored, you never know when the data's going to be relevant, so you have to have it addressable. Could you just talk about where this meta-data intelligence is going?
Because you mentioned machine learning and AI. 'Cause this seems to be what everyone is talking about. In real time, how do I make the data really valuable when I need it? And what's the secret sauce that you guys have, specifically, to make that happen? >> So that, to contextualize that question, think about it. So if you. What you don't want to do is keep make everything manual. Our belief is that the intelligence around data has to be at the meta-data level, right? Across the enterprise, which is why, when we invested in the catalog, I used the word, "It's the google of data for the enterprise." No place in an enterprise you can go search for all your data, and given that the fast, rapid-changing sources of data, think about IoT, as you talked about, John. Or think about your customer data, for you and me may come from a new source tomorrow. Do you want the analyst to figure out where the data is coming from? Or the machine learning or AI to contextualize and tell you, you know what, I just discovered a great new source for where John is going to go shop. Do you want to put that as a part of analytics to give him an offer? That's where the organizing principle for data sits. The catalog and all the meta-data, which is where ML and AI will converge to give the analyst self-discovery of data sets, recommendations like in Amazon environment, recommendations like Facebook, find other people or other common data that's like a Facebook or a LinkedIn, that is where everything is going, and that's why we are putting all our efforts on AI. >> So you're saying, you want to abstract the way the complexity of where the data sits? So that the analyst or app can interface with that? >> That's exactly right. Because to me, those are the areas that are changing so rapidly, let that be. You can pick whatever data sets based on what you want, you can pick whichever app you want to use, wherever you want to go, or wherever your business wants to go. 
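The "google of data" with Amazon-style recommendations described here could, at its very simplest, be ranking catalog entries by metadata overlap: datasets that share tags with what the analyst is already using get surfaced first. A toy sketch with made-up dataset names and tags, not the actual product, and Jaccard similarity chosen here purely for illustration:

```python
def jaccard(a, b):
    """Similarity between two tag sets: |intersection| / |union|."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical catalog: dataset name -> metadata tags.
datasets = {
    "web_clicks":   {"customer", "behavior", "web"},
    "store_visits": {"customer", "behavior", "retail"},
    "hr_payroll":   {"employee", "finance"},
}

def recommend(seed, k=1):
    """Rank other datasets by tag overlap with the seed dataset --
    a crude stand-in for ML-driven 'analysts who used this also
    used' recommendations over catalog metadata."""
    seed_tags = datasets[seed]
    scored = [(jaccard(seed_tags, tags), name)
              for name, tags in datasets.items() if name != seed]
    return [name for score, name in sorted(scored, reverse=True)[:k]]

print(recommend("web_clicks"))  # store_visits shares the most tags
```

In a real catalog the "tags" would themselves be inferred (schema, lineage, usage), which is where the ML investment he describes would come in.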
You can pick whichever analytical tool you like, but you want to be able to take all of those tools but be able to figure out what data is there, and that should change all the time. >> I'm trying to ask you a lot while you're here. What's going to be the theme this year at Informatica World? How do you take it to the next level? Can you just give us a teaser of what we might expect this year? 'Cause this seems to be the hottest trend. >> This is, so first, at Informatica World this year, we will be unveiling our whole new strategy, branding, and messaging, there's a whole amount of push on that one. But the two things that will be focused a lot on is, one is around that intelligent data platform. Which is basically what I'm talking about. The organizing principle of every enterprise for the next decade, and within that, where AI is going to play a meaningful role for people to spring forward, discover things, self-service, and be able to create sense from this mountains of data that's going to sit around us. But we won't even know what to do. >> All right, so what do you guys have in the product, just want to drill into this dynamic you just mentioned, which is new data sources. With IoT, this is going to completely make it more complex. You never know what data's going to be coming off the cars, the wearables, the smart cities. You have all these new killer use-cases that are going to be transformational. How do you guys handle that, and what's the secret sauce of? 'Cause that seems to be the big challenge, okay, I'm used to dealing with data, its structure, whether it's schemas, now we got unstructured. So okay, now I got new data coming in very fast, I don't even know when or where it's going to come in, so I have to be ready for these new data. What is the Informatica solution there? 
>> So in terms of taking data from any source, that's never been a challenge for us, because Informatica, one of the bread-and-butter things for us is that we connect and bring data from any potential source on the planet, that's what we do. >> John: And you automate that? >> We automate that process, so any potential new source of data, whether it's IoT, unstructured, semi-structured, log, we connect to that. What I think the key is, where we are heavily invested, is once you've brought all that in. By the way, you can use Kafka queues for that, you can use back-streaming, all of that stuff you could do. Question is, how do you make sense out of it? I can get all the data, dump it in a Kafka queue, and then I take it to do some processing on Spark. But the intelligence is where all the Informatica secret sauce is, right? The meta-data, the transformations, that's what we are invested in, but in terms of connecting anything to everything? That we do for a living, we have done that for one quarter of a century, and we keep doing it. >> I mean, I love having a chat with you, Amit, you're a product guy, and we love product guys, 'cause they can give us a little teaser on the roadmap, but I got to ask you the question, with all this automation, you know, the big buzz out in the world is, "Oh machine learning and AI is replacing jobs." So where is the shift going to be, because you can almost connect the dots and say, "Okay, you're going to put some people out of work, "some developer, some automation, "maybe the systems management layer or wherever." Where are those jobs shifting to? Because you could almost say, "Okay, if you're going to abstract away and automate, "who loses their job?" Who gets shifted and what are those new opportunities, because you could almost say that if you automate in, that should create a new developer class. So one gets replaced, one gets created possibly. Your thoughts on this personnel transformation?
>> Yeah, I think, I think what we see is that value creation will change. So the jobs will go to the new value. New areas where value is created. A great example of that is, look at developers today, right. Absolutely, I think they did a terrific job in making sure that the Hadoop ecosystem got legitimized, right? But in my opinion, where enterprise scalability comes, enterprises don't want lots of different things to be integrated and just plumbed together. They want things to work out of the box, which is why, you know, software works for them. But what happens is that they want that development community to go work on what I call value-added areas of the stack. So think about it, in connected car, they're working with lots of customers on the connected car issue, right? They don't want developers to work on the plumbing. They want us to kind of give that out of the box, because SLA is operational scale, and enterprise scalability matters, but in terms of the top-layer analytics, to make sure we can make sense out of it, that's what they're, that's where they want innovation. So what you will see is that, I don't think the jobs will go in vapor, but I do think the jobs will get migrated to a different part of the stack, which today it has not been, but that's, you know, we live in Silicon Valley, that's a natural evolution we see, so I think that will happen. In general in the larger industry, again I'd say, look, driverless cars, I don't think they've driven away jobs. What they've done is created a new class of people who work. So I do think that will be a big change. >> Yeah there's a fallacy there. I mean with the ATM argument was ATM's are going to replace tellers, yet more branches opened up. >> That's exactly it. >> So therefore creating new jobs. 
I want to get to the quick question, I know George has a question, but I want to get on the cost of ownership, because one of the things that's been criticized in some of these emerging areas, like Hadoop and Open Stack, for instance, just to pick two random examples. It's great, looks good, you know, all peace and love. An industry's being created, legitimized, but the cost of ownership has been critical to get that done, it's been expensive, talent, to find talent and deploying it was hard. We heard that on the Cube many times. How does the cost of ownership equation change? As you go after these more value, as developers and businesses go after these more value-creating activities in the Stack? >> See look, I always say, there is no free lunch. Nothing is free. And customers realize that, that open source, if you completely wanted to, to your point, as enterprises wanted to completely scale out and create an end-to-end operational infrastructure, open source ends up being pretty expensive. For all the reasons, right, because you throw in a lot of developers, and it's not necessarily scalable, so what we're seeing right now is that enterprises, as they have figured that this works for me, but when they want to go scale it out, they want to go back to what I call a software provider, who has the scale, who has the supportability, who also has the ability to react to changes and also for them to make sure that they get the comfort that it will work. So to me, that's where they find it cheaper. Just building it, experimenting with that, it's cheaper here, but scaling it out is cheaper with a software provider, so we see a lot of our customers when we start a little bit experimenting to developers, downloading something, works great, but would I really want to take it across Nordstrom or a JP Morgan or a Morgan Stanley. I need security, I need scalability, I need somebody to call to, at that point on those equations become very important. 
And that's where the out of box experience comes in, where you have the automation, that kind of thing. >> Exactly. >> Does that ease up some of the cost of ownership? >> Exactly, and the talent is a big issue, right? See, we live in Silicon Valley, so we forget. By the way, even in Silicon Valley, hiring talent is hard. Just think about it, if you go to Kansas City, hiring a Scala developer, that's a rare breed. So just, when I go around the globe and talk to customers, they don't see that talent at all, the talent that we here just somehow take for granted. They don't, so it's hard for them to kind of put their energy behind it.
I'm a friend, but what about a family member that you've declared out there on social media? So they are doing all that stuff in the context of a data lake. How are they doing it? So in that context, think about that complexity of the job, pumping data into a lake won't solve it for them, but that's a necessary first step. The second step is where all of that meta-data through ML and AI, starts giving them that relationship graph. To say, you know what, John in itself has this white space opportunity for you, but John is related to me in one way, him and me are connected on Facebook. John's related to you a little bit more differently, he has a stronger bond with you, and within his family, he has different strong bonds. So that's John's relationship graph. Leverage him, if he has been a good customer of yours. All of that stuff is now at the meta-data level, not just the monolithic meta-data, relationship graph. His relationship graph of what he has bought from you, so that you can just see that discovery becomes a very important element. Do you want to do that in different places? You want to do that in one place. I may be in a cloud environment, I may be on prem, so that's where when I say that meta-data becomes the organized principle, that's where it becomes real. >> Just a quick follow-up on that, then. It doesn't seem obvious that every end customer of yours, not the consumer but the buyer of the software, would have enough data to start building that graph. >> I don't think, to me, what happened was, the word big data, I thought got massively abused. A lot of Hadoop customers are not necessarily big data customers. I know a lot of banking customers, enterprise banking, whose data volumes will surprise you, but they're using Hadoop. What they want is intelligence. That's why I keep saying that the meta-data part, they are more interested in a deeper understanding of the data. A great example is, if John. I had a customer, who basically had a big bank. 
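The Transamerica-style relationship graph and white-space analysis described here can be illustrated with a minimal sketch: declared relationships form an undirected graph, and prospects are the people one hop away from an existing customer. The names and the one-hop heuristic are hypothetical, a toy stand-in for what a metadata-level relationship graph computes:

```python
from collections import defaultdict

class RelationshipGraph:
    """Undirected graph of declared relationships (social, family),
    used for white-space analysis: find non-customers who are
    connected to an existing customer."""
    def __init__(self):
        self.edges = defaultdict(set)

    def relate(self, a, b):
        # Relationships are symmetric: friend-of, family-of.
        self.edges[a].add(b)
        self.edges[b].add(a)

    def white_space(self, customers):
        """People connected to a customer but not yet customers."""
        prospects = set()
        for c in customers:
            prospects |= self.edges[c] - set(customers)
        return prospects

g = RelationshipGraph()
g.relate("john", "amit")       # declared friends
g.relate("john", "john_sis")   # declared family
g.relate("amit", "colleague")
print(g.white_space(customers={"john"}))  # amit and john_sis are prospects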
A high-net-worth customer. In their will, the daughter was listed. When the daughter went to school and, by the way, went to the bank branch in that city, she had no idea; she walked up, she basically wanted to open an account. Three more friends in the line. The manager comes out because at that point, the teller said, "This is somebody you should take special care of." Boom, she goes in a special cabin, the other friends are standing in a line. Think of the customer service perception, you just won over a new millennial customer, right? That's important. >> Well this brings up the interesting comment. The whole graph thing, we love, but this brings back the neural network trend. Which is a concept that's been around for a long long time, but now it's front and center. I remember talking to Diane Green who runs Google Cloud, she was saying that neural network people couldn't get jobs 15 years ago. Now you can't hire enough of them. So that brings up the ML conversation. So, I want to take that to a question and ask about the data lake, 'cause you guys have announced a new cloud data lake. >> Yes. >> So it sounds like, from what you're saying, you're going beyond the data lake. So talk about what that is. Because data lake, people get it, you throw stuff into a lake. And hopefully it doesn't become a swamp. How are you guys going beyond just the basic concept of a data lake with your new cloud data lake?
First, they want the prebuilt integrated solution. Data can come in, but I want the intelligence of meta-data and I want data preparation baked in. I don't want to have three different tools that I will go around, so out of the box. But we also saw, as they become successful with our customers, they want to scale up, scale down. Cloud is just a great place to go. You can basically put a data lake out there, by the way in the context of data, a lot of new data sources are in the cloud, so it's easy for them to scale in and out in the cloud, experiment there and all that stuff. Also you know Amazon, we supported Amazon Kinesis, all of these new sources and technologies in the world of cloud are allowing experimentation in the data lake, so that allowed our customers to basically get ahead of the curve very quickly. So in some ways, cloud allowed customers to do things a lot faster, better, and cheaper. So that's what we basically put in the hands of our customers. Now that they are feeling comfortable, they can do a secured and governed data lake without feeling that it's still not self-served. They want to put it in the cloud and be a lot more faster and cheaper about it. >> John: And more analytics on it. >> More analytics. And now, because our ML, our AI, the meta-data part, connects cloud, ground, everything. So they have an organizing principle, whatever they put wherever, they can still get intelligence out of it. >> Amit, we got to break, but I want to get one final comment for you to kind of end the segment, and it's been fun watching you guys work over the past couple years. And I want to get your perspective because the product decisions always have kind of a time table to them, it's not like you made this up last night because it's trendy, but you guys have made some good product choices. It seems like the wind's at your back right now at Informatica. What, specifically, are bets that you guys made a couple years ago that are now bearing fruit? 
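The "data comes in, with meta-data baked in" pattern described here can be sketched as a small connector registry that reads any source type through one interface and stamps provenance metadata on every record at ingest time. All source types and field names below are invented for illustration; this is not Informatica's API:

```python
import json
from datetime import datetime, timezone

# Registry of source-type readers: the 'connect anything' interface.
CONNECTORS = {}

def connector(source_type):
    """Register a reader for one source type; the platform then
    pulls from any source through a single ingest() call."""
    def wrap(fn):
        CONNECTORS[source_type] = fn
        return fn
    return wrap

@connector("csv_line")
def read_csv_line(payload):
    return dict(zip(["id", "value"], payload.split(",")))

@connector("iot_json")
def read_iot_json(payload):
    return json.loads(payload)

def ingest(source_type, payload):
    record = CONNECTORS[source_type](payload)
    # Attach provenance metadata at ingest time -- the intelligence
    # travels with the data, not with any one downstream tool.
    record["_source"] = source_type
    record["_ingested_at"] = datetime.now(timezone.utc).isoformat()
    return record

print(ingest("csv_line", "42,hello"))
print(ingest("iot_json", '{"sensor": "temp", "reading": 21.5}'))
```

Adding a new source (IoT, log, SaaS feed) then means registering one more reader, while everything downstream keeps consuming uniformly tagged records.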
Can you just take a minute to end the segment, share some of those product bets. Because it's not always that obvious to make those product bets years earlier, seems to be a tail wind for you. You agree, and can you share some of those bets? >> I think you said it rightly, product bets are hard, right? Because you got to see three, four years ahead. The one big bet that we made is that we saw, as I said to you, the decoupling of the data layer. So we realized that, look, the app layer's getting fragmented. The cloud platforms are getting fragmented. Databases are getting fragmented. That that whole old monolithic architecture is getting fundamentally blown up, and the customers will be in a multi, multi, multi spread out hybrid world. Data is the organizing principle, so three years ago, we bet on the intelligent data platform. And we said that the intelligent data platform will be intelligent because of the meta-data driven layer, and at that point, AI was nowhere in sight. We put ML in that picture, and obviously, AI has moved, so the bet on the data platform. Second bet that, in that data platform, it'll all be AI, ML driven meta-data intelligence. And the third one is, we bet big on cloud. Big data we had already bet big on, by the way. >> John: You were already there. >> We knew the cloud. Big data will move to the cloud far more rapidly than the old technology moved to the cloud. So we saw that coming. We saw the (mumbles) wave coming. We worked so closely with AWS and the Azure team. With Google now, as well. So we saw three things, and that's what we bet. And you can see the rich offerings we have, the rich partnerships we have, and the rich customers that are live in those platforms. >> And the market's right on your doorstep. I mean, AI is hot, ML, you're seeing all this stuff converge with IoT. >> So those were, I think, forward-looking bets that paid out for us. 
(chuckles) But there's so much more to do, and so much more upside for all of us right now. A lot more work to do. Amit, thank you for coming on, sharing your insight. Again, you guys got in good pole position in the market, and again it's right on your doorstep, so congratulations. This is the Cube, I'm John Furrier with George Gilbert. With more coverage in Silicon Valley for Big Data SV and Strata + Hadoop after this short break.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
John | PERSON | 0.99+ |
Amit Walia | PERSON | 0.99+ |
Diane Green | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Mickey Bonn | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Peter Burns | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Transamerica | ORGANIZATION | 0.99+ |
George Gilbert | PERSON | 0.99+ |
Silicon Valley | LOCATION | 0.99+ |
George | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
$15 billion | QUANTITY | 0.99+ |
Amit | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Second | QUANTITY | 0.99+ |
Nordstrom | ORGANIZATION | 0.99+ |
80% | QUANTITY | 0.99+ |
May 15th | DATE | 0.99+ |
Kansas City | LOCATION | 0.99+ |
last year | DATE | 0.99+ |
second step | QUANTITY | 0.99+ |
Informatica | ORGANIZATION | 0.99+ |
JP Morgan | ORGANIZATION | 0.99+ |
Morgan Stanley | ORGANIZATION | 0.99+ |
first step | QUANTITY | 0.99+ |
third one | QUANTITY | 0.99+ |
John Furrier | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
each component | QUANTITY | 0.99+ |
two things | QUANTITY | 0.99+ |
San Jose, California | LOCATION | 0.99+ |
First | QUANTITY | 0.99+ |
each layer | QUANTITY | 0.99+ |
ORGANIZATION | 0.99+ | |
one | QUANTITY | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
10 years | QUANTITY | 0.99+ |
tomorrow | DATE | 0.99+ |
today | DATE | 0.99+ |
eighth year | QUANTITY | 0.99+ |
first | QUANTITY | 0.99+ |
GE | ORGANIZATION | 0.99+ |
three years ago | DATE | 0.99+ |
3,000 enterprise | QUANTITY | 0.98+ |
Big Data SV | ORGANIZATION | 0.98+ |
this year | DATE | 0.98+ |
next decade | DATE | 0.98+ |
two years ago | DATE | 0.98+ |
three | QUANTITY | 0.97+ |