Leo da Silva, Best Day Travel Group & Arnold Schiemann, Symphony Ventures | UiPath Forward 2018
(upbeat music) >> Live, from Miami Beach, Florida, it's theCUBE, covering UiPath Forward Americas. Brought to you by UiPath. >> Welcome back to the former home of LeBron James, I'm Dave Vellante, he's Stu Miniman, and we are here at South Beach at the hotel Fontainebleau. This is UiPath Forward Americas, and this is theCUBE, the leader in live tech coverage. Leo Da Silva is here, he is the process excellence leader for Best Day Travel, and Arnold Schiemann, who's Vice President of Latin America and Spain. You get to go to all the fun places for Symphony. Welcome to theCUBE. >> Thank you, thank you guys for your invitation. >> You're very welcome. Leo, let's start with you. Best Day Travel, travel site, specializing in Mexico and other parts of the region, tell us about the company. >> Well, we have the leadership in Mexico, and last year we had five point four million travelers, okay? And that's a lot of people, okay? We've been in the business for 35 years, 34 years actually, okay? So, we're pretty solid, okay? While 75% of all the transactions we have are online, okay? And 25% we have offline, and that's why all the transformation that we're doing is under this 25%, alright? Like, just to get the additional transformation and everything. >> So 35 years, so you started before the internet. (Leo laughing) >> So I guess you would have been 100% offline; you obviously successfully made that transition. >> That's correct, that's correct. >> Okay, and Arnold, Symphony is the solution provider, right? The implementation partner in this case, right? Tell us about Symphony and your role. >> Well, Symphony is particularly concentrated on RPA management and RPA design, and RPA process robotization. We were invited by Best Day Travel Group to look at the process, to look at the project, and we embarked on a very interesting transformation for them, so that they could move into the RPA arena with a clear road map. >> So you guys are both process experts I mean that's, >> Yes >> You've got process in your title, talk more about your role, if you would. >> Yeah, well, I'm a green belt, okay? In Lean Six Sigma, and we use this methodology, actually, and two years ago we implemented, like, a BPM department, you know, inside the company, just to lead this transformation, okay? So that's what we're seeking right now, to lead this transformation, and it's a very good challenge, you know? It's not easy, but we are trying to do our best. >> With your Six Sigma background, I think it would really tie right into what RPA is, 'cause you can really understand what has variance, and what is pretty standardized, and that would seem to be the direct correlation, that you can have the robot and the automation based on, really, the variance piece? >> Yes, totally, you know, well, right before we started all the implementation, we did a benchmark, and we were able to see which technology we wanted to use, and well, we found UiPath, alright? And we found Symphony, but it's not exactly, I think the technology is the last thing, right? So, the technology is the enabler, alright? To make all those things happen, but if you don't have, like, process management, you know, if you don't have that, it's kind of difficult to reach the target, okay? 
So, yeah, it's pretty much, I think the most challenging part is letting people know what they're doing wrong, you know, where they're doing repeating tasks, right? So, when you do, like, the process walk-through, people just get amazed, you know, like, what? Are you serious, we're doing that? >> When did you start? >> We started in February. >> This year? >> Yeah >> Okay, so, take us back to February or January, whatever, December, maybe even before that, when you were thinking about the business case. How did it come about, and how'd you guys meet? Take us through the sort of initiative. >> Yeah, well, right before, it was six months before, I think it was in July of last year, we started a conversation, right? And within like six months of benchmarking, we reached UiPath, and we started trying to get something different, you know? To do something different in the enterprise, and we had this need, okay? From inside, you know, from the back office, to transform, because the operation sometimes costs a lot, alright? The first step that we did was like a future of work accelerator, okay? Which is, it's this scan, it's a total scan of the area, okay? And to see how big the opportunities are, okay? To transform things, right, so that was the first step, and after we had the pilot, we have three or four projects ongoing. >> And you were involved from the beginning Arnold, last July? >> Yes, yes >> One thing which was really very interesting about the project is that the client was the C.E.O and the C.F.O, there was total C-suite involvement. So, and we believe that RPA is about the business, is about the process, it was ideal. So, I believe it was really not work but really a good time that we spent together, integrating very closely with the team from Best Day Travel Group, to the point that you couldn't tell who was from Best Day and who was from Symphony, and then we were able to present to the C-suite the result of the road map to move forward, with a very clear business case, the process that was going to be robotized. Simultaneously, Best Day wanted it proven inside, saying let's develop a robotized version of one of the processes, and we did one which has been quite successful. We were just talking that the amount of work that that robot is handling today, live, is such that if the robot didn't operate, you wouldn't know what to do, because there is so much work behind it, and today it is almost impossible to recreate that. >> Yeah, that's correct, singularity is here. >> One of the things that maybe you can help me understand, 'cause I'm a little bit new to this technology, how do you figure out, how do you size this, like how do you know how many things a robot can do? We heard one of the customers has a thousand robots, how does this scale, and how does this build out inside of a customer? >> Two things that we do: we look at the company, we identify those processes with heavy, let's say, head count, with lots of repetitive tasks that can be partially or totally robotized, and then we present it as a road map, because the first question they have is "how do we start?" I mean, this is a company, 3,000 people, 4 million passengers, where do we start? How do we get good advantage of the robots? And that's how we did it, and then it's going on, we just did the first part of the project, we continue now with the second part, which is going to be even more interesting. 
>> What'd the business case look like? I mean, was it saving money? Presumably some of this was cost reduction right off the bat, right? >> Yes, yes >> Let's talk about that business case, what's that framework look like? >> Well, the first action was the pilot, that we just did, we launched it already, alright? The business case was like, to reduce cost, alright? The operational cost is very high, okay? So, now, just to have an idea, the situation before would have, like, six people working, you know, like an eight hour shift, okay? And doing, like, issuing tickets, and you know, and right now we have, like, just one robot, and we built a capability of 126%, okay? On this, just with one robot, alright, and yeah, it's amazing, it's amazing, and 24/7, you know, right now it's working pretty fine. >> Specifically, where do the cost savings come from? >> Well, the cost savings is not exactly that easy, but it's the customer's experience, okay? And also the capability that you can build, alright? To get more sales, okay? And there's another project that, before that we had the first one, we have to reduce the cost of the operation, you know, for 65 people, alright? And the transactions cost a lot of money for us, okay? So we're trying to understand that, and we're trying to eliminate those costs or reduce them, you know, as much as we can. >> As part of that, you redeploy people, you put 'em on other tasks, is that what you're doing? >> Yes, yes, we free them up, you put them on other, added-value tasks, right? >> So the C.F.O is one of the stakeholders here, >> It was >> So many C.F.Os might say "okay, well, we're not going to cut head count, so where do I get my savings?" So the answer, if I'm hearing it, is we're going to increase revenue because these people are going to be on other tasks, and >> That's it, yes >> And, do you have visibility, a line of sight, as to how fast that can happen; is it already starting to happen? >> Yeah, it already started to happen, already started to happen, like, you know, with this project we had the payback in 15 days. >> I was going to ask you what the break-even was, it was inside of a month? >> You know, it's already paid, in all, 15 days, it's already paid, right? So, yeah, the C.F.O is pretty happy with that. >> The first project was relatively small right? >> Yeah, yeah yeah. >> You proved it out and now you're going to throw gasoline on the fire. >> That's it, that's it. >> That's great, so what's next for you guys? >> Well, next, we are going to the customer service, you know, there's a traceability project that we have to do, alright? Just to have the client in front of everything, you know? So that's our strategy right now, and we're going to do, well, Symphony is going to help us out with RPA and with implementation and the process, because it's going to be a new process, it doesn't exist, alright? So it's going to be a brand new one, we have to create it from scratch. >> Arnold, I wonder if you can go a little broader for us on this, it sounds like you've got a perfect partner inside the company with, you know, process in his title, you've got the C-suite engaged, is that a typical deployment, what are you finding? >> It's not typical, but it is something that we look for all the time. 
'Cause if the client is not engaged, we can do nothing; if the C-suite is not engaged, there is very little process people can do, and with the C-suite engaged, we're driving the cost reductions, but there is another point besides cost, consistency, and also we are eliminating silos that had existed for a long time, 'cause the company started with one organization, then another one, another one, and all of them touch the customer. What they probably will be doing, hopefully before the end of the year, early next year, is to be able to see the traverse of the customer, one and a half million passengers arriving in Cancún, and they are passengers. But you don't know how many people will come back, so you better know that these guys came here, they like to go scuba diving, so next time he's around, we can offer him scuba diving, we can pick him up from the airport, we can offer other services, and then, the company is structured to grow exponentially, so that you can grow from 4 million to 8 million passengers without adding head count. That is the future of Best Day Travel Group, and that's why we have engaged the management. >> Okay, so you're looking at the moon shot, double the number of passengers served with the same head count, that's a huge productivity boost, so I'm hearing 15 day break-even, some of that was hard cost reduction, there's revenue increase, it's proven, now you're going to invest more, consistency, better customer service, cross selling, hey they like to scuba dive, maybe we can make an offer here, and better data allows you to do that, that kind of summarizes the business case, and we're talking, I mean, I don't want to, you know, squeeze the N.P.V out of it, but we're talking millions? Hundreds of thousands? >> Millions >> Hundreds of millions? >> Millions right? >> Yeah, yeah, pretty much, it's a huge number, you know, it's a huge number, and we have a lot of opportunities, and I think it's going to be a success, you know? >> And presumably the employees want to be part of this ride, right? They want to get, whether it's re-trained, or become R.P.A experts, deploy this technology, drive their digital automation and service those 8 million customers with the same resources, you know, or invest in other resources. >> yes >> New growth areas. >> Yes, yes. >> Great story >> Yeah, it is, it is, >> we're working hard >> (laughs) figuring it out >> We're privileged to have been working with them because they are, I'd say, unique, but it was done for us from day one, everything was put in place, engagement, people, and then the company itself is very easy to manipulate and transform because of the way that it was structured 30 years ago. >> And why UiPath? I mean, you said you chose them last summer, why, why'd they win? >> Well, because, during the benchmarking, I could see a lot of difference between them, you know? And we concluded that, well, actually Symphony recommended it to us, alright? So, you want this, you want that, for this situation it's going to be the best solution, right? And after that, we're pretty sure that it's the best, it's the best choice, right? Because of the personalities, because of a lot of stuff that they have, that they can bring to us, you know? >> Do you worry about, do you worry about shadow R.P.A, like (laughter) >> The divisions going off and doing their own robots, or have you guys got a handle on that? >> Yeah, you know (laughing) no, not worried about that, you know, but yeah, it's going to happen. >> It's a good thing. 
>> Alright, gentlemen, thanks so much for coming on theCUBE, it was great to have you. >> Thank you for inviting us. >> Alright, keep it right there everybody, Stu and I will be back at UiPath Forward Americas right after this short break, you're watching theCUBE, we'll be right back. (closing music)
ENTITIES
Entity | Category | Confidence |
---|---|---|
Cancún | LOCATION | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Arnold | PERSON | 0.99+ |
Arnold Schiemann | PERSON | 0.99+ |
February | DATE | 0.99+ |
4 million | QUANTITY | 0.99+ |
three | QUANTITY | 0.99+ |
Mexico | LOCATION | 0.99+ |
January | DATE | 0.99+ |
Leo Da Silva | PERSON | 0.99+ |
December | DATE | 0.99+ |
126% | QUANTITY | 0.99+ |
Millions | QUANTITY | 0.99+ |
35 years | QUANTITY | 0.99+ |
millions | QUANTITY | 0.99+ |
34 years | QUANTITY | 0.99+ |
100% | QUANTITY | 0.99+ |
UiPath | ORGANIZATION | 0.99+ |
15 days | QUANTITY | 0.99+ |
July | DATE | 0.99+ |
Hundreds of millions | QUANTITY | 0.99+ |
Best Day Travel | ORGANIZATION | 0.99+ |
Hundreds of thousands | QUANTITY | 0.99+ |
last July | DATE | 0.99+ |
second part | QUANTITY | 0.99+ |
Symphony | ORGANIZATION | 0.99+ |
75% | QUANTITY | 0.99+ |
Best Day Travel Group | ORGANIZATION | 0.99+ |
Stu | PERSON | 0.99+ |
15 day | QUANTITY | 0.99+ |
first part | QUANTITY | 0.99+ |
last year | DATE | 0.99+ |
Leo da Silva | PERSON | 0.99+ |
Leo | PERSON | 0.99+ |
3000 people | QUANTITY | 0.99+ |
eight hour | QUANTITY | 0.99+ |
South Beach | LOCATION | 0.99+ |
65 people | QUANTITY | 0.99+ |
This year | DATE | 0.99+ |
Symphony Ventures | ORGANIZATION | 0.99+ |
Best Day | ORGANIZATION | 0.99+ |
25% | QUANTITY | 0.99+ |
first question | QUANTITY | 0.99+ |
six months | QUANTITY | 0.99+ |
one robot | QUANTITY | 0.99+ |
Lebron James | PERSON | 0.99+ |
last summer | DATE | 0.99+ |
both | QUANTITY | 0.99+ |
Miami Beach, Florida | LOCATION | 0.99+ |
early next year | DATE | 0.99+ |
one and a half million passengers | QUANTITY | 0.99+ |
30 years ago | DATE | 0.99+ |
first project | QUANTITY | 0.99+ |
one robot | QUANTITY | 0.99+ |
one | QUANTITY | 0.98+ |
two years ago | DATE | 0.98+ |
4 million passengers | QUANTITY | 0.98+ |
One | QUANTITY | 0.98+ |
UiPath Forward Americas | ORGANIZATION | 0.98+ |
four projects | QUANTITY | 0.98+ |
first step | QUANTITY | 0.97+ |
first one | QUANTITY | 0.97+ |
theCUBE | ORGANIZATION | 0.97+ |
six person | QUANTITY | 0.96+ |
four million travelers | QUANTITY | 0.96+ |
today | DATE | 0.95+ |
Spain | LOCATION | 0.95+ |
One thing | QUANTITY | 0.95+ |
8 million customers | QUANTITY | 0.94+ |
C.F.O | ORGANIZATION | 0.94+ |
Two thing | QUANTITY | 0.92+ |
five point | QUANTITY | 0.89+ |
2018 | DATE | 0.89+ |
double | QUANTITY | 0.89+ |
Day | ORGANIZATION | 0.88+ |
8 million passengers | QUANTITY | 0.88+ |
Austin Adams & Zach Arnold, Ygrene | KubeCon + CloudNativeCon EU 2018
>> Announcer: Live from Copenhagen, Denmark, it's theCUBE, covering KubeCon and CloudNativeCon Europe 2018. Brought to you by the Cloud Native Computing Foundation and its ecosystem partners. >> Welcome back everyone, live here at Copenhagen, Denmark, theCUBE's coverage of KubeCon 2018 in Europe, this is all about Kubernetes, the future of cloud native, CloudNativeCon, part of the CNCF, the Cloud Native Computing Foundation. I'm John Furrier and my co-host Lauren Cooney, founder of Spark Labs and industry expert in open source. So, we have two end user customers of Kubernetes and cloud native: Zach Arnold, software engineer at Ygrene Energy Fund, and Austin Adams, software development manager, same company. You guys are doing a really interesting business model around energy and equity in buildings and homes, but you're writing code, so you have to make all this stuff work, so I'm sure you're cloud native, why have a data center when you can have the cloud? >> Austin : We were born in the cloud. >> You were born in the cloud. So take us through, explain the business real quick, and then what's your back end, technical scaling situation look like in terms of infrastructure, software, and what's the make up of the systems. >> Zach: You know the business best. >> Yeah, so Ygrene operates under something called PACE, property assessed clean energy. We operate in a couple of different states. We work with local governments to create a PACE program that is accepted in different counties or jurisdictions within the state, and then we allow homeowners and contracting companies to provide financing for home improvements that are specifically within the domain of renewable energy or energy efficiency. >> So, you basically finance a solar panel that I put on my house or building if there's benefits there, and then you guys get the financing, and you tie in with the government, so the property taxes, the leverage, the security, is the building, right, or the asset. >> Yeah, and the way that we're chartered is basically we can put a tax on the property, which gives us some guarantees on repayment and things like that, and it's a great model so far. >> It's a new financial engineering around energy efficiency, so you've got to build systems, so you're working with government, so now we all know how government systems work, so you've got to be agile and nimble. Take us through how the back end works, what's it look like, what's the system look like, you're hosted in the cloud, is it Amazon, Google? >> So everything that we have is in a cloud provider that starts with an A, and ends with an S, it's AWS, I don't know if I can say that, I think I can say that, AWS all the way-- >> Yes, it's good. >> And we have tons of services, we have Kubernetes running most of our main services. Within our migration we actually started with our main service. A lot of people start with, you know, their smallest microservice, we just went whole-hog and just went in for it, so the system is mainly a loan-management system. Underwriting data aggregation and underwriting processing, so every application that comes in we have to underwrite it and make sure every little thing checks out, and our underwriting system has won awards for how accurate it is and how high quality it is as well. >> So, I'm doing a mental white board in my mind, just kind of graphing this, so just help me out here and take us through this. So, you guys are a cutting edge company, new progressive business model, real innovative, great stuff. 
Cloud native, so you're born in the cloud, no data center, cool, check, it's what everyone does, and now you're like, okay, now I've got to deal with these legacy systems. So, you're putting containers around things, so you have to interface, you build your own system, so that's cool, but you're dealing with other systems, and then how are you handling that, are you just containerizing it? So take us through some of those linkages. >> Yeah, so a lot of times when we have to integrate with another system, we'll create a small service that is code that we own, and we'll reach out to those integrations, those vendors, and we'll do aggregation within our system and provide an interface back to our systems. You know, like everyone, we're breaking up the monolith or whatever, maybe in 10 years we'll go back to a monolith, who knows, but you know, we're slicing out things, making microservices, it looks like a mess on the back end, just tons of microservices going everywhere, and that's why we're using all these cloud native tools to be able to manage that. So, in order to move quickly, we're wanting to containerize everything, everything runs in a container at this point. >> Lauren: Great. >> A lot of our services follow this kind of, we're kind of calling it the container adapter pattern, it follows the software adapter pattern where, just like Austin was saying, let's say for example we're interfacing with a credit vendor, we create a service where we talk to our own service that has a well defined interface, so that we know we will always get a credit report back with the following fields, but then where that information actually comes from, whether it's one of the big three credit vendors or someone else who has a well defined API, that's largely not the concern of the main loan management system, it's the concern of the microservice that's responsible for reaching out to that other entity. So, that's how we've kind of gotten around the legacy interfacing of all these other different financial services and tools that help to aggregate data. >> It's super clever, you can optimize on a service basis, but now you have to orchestrate and kind of conduct everything through-- >> And keep everything secure. >> That's really interesting, I mean I think what I'm looking at here is a huge ecosystem of partners and companies and end users coming together, and one of the questions, beyond why you are here, what are you looking at here, what is interesting to you, what do you want to learn about that you might bring into your, you know, architecture essentially? >> Austin and I were talking about this, we kind of tend to look at the CNCF list of projects as a dinner menu. (laughs) >> We're refreshing that page frequently, because they're adding projects at an alarming rate, but some projects we're using are Fluentd, Notary, Kubernetes, of course, Prometheus, things like that, and we want to start using those things more extensively. Ones that we're really excited about are SPIRE and SPIFFE, the identity projects, kind of a new take, not necessarily new, but a new-for-cloud-native take on identity of services and authentication, as well as the Open Policy Agent, to provide a single DSL to do all of your policy and authorization-- >> Lauren: That's a lot of workload management and identity, correct? >> Yeah, yes. 
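To make the container adapter pattern Zach describes a little more concrete, here is a rough sketch of the idea in Python. It is only an illustration, not Ygrene's actual code: the service names, report fields, credit-score threshold, and vendor API are all hypothetical stand-ins.

```python
from dataclasses import dataclass
from typing import Protocol

import requests  # assumes the vendor exposes a plain HTTPS/JSON API


@dataclass
class CreditReport:
    """The well-defined shape the loan-management system always receives."""
    applicant_id: str
    score: int
    open_accounts: int
    vendor: str


class CreditVendorAdapter(Protocol):
    """Each vendor-specific microservice exposes this same interface."""
    def fetch_report(self, applicant_id: str) -> CreditReport: ...


class ExampleBureauAdapter:
    """Hypothetical adapter for one credit bureau; others can be swapped in."""

    def __init__(self, base_url: str, api_key: str) -> None:
        self.base_url = base_url
        self.api_key = api_key

    def fetch_report(self, applicant_id: str) -> CreditReport:
        # Call the vendor, then translate its payload into our canonical shape,
        # so the core system never sees vendor-specific field names.
        resp = requests.get(
            f"{self.base_url}/reports/{applicant_id}",
            headers={"Authorization": f"Bearer {self.api_key}"},
            timeout=10,
        )
        resp.raise_for_status()
        raw = resp.json()
        return CreditReport(
            applicant_id=applicant_id,
            score=int(raw["creditScore"]),
            open_accounts=int(raw["tradelines"]),
            vendor="example-bureau",
        )


def underwrite(adapter: CreditVendorAdapter, applicant_id: str) -> bool:
    """The core service depends only on the interface, not on any vendor."""
    report = adapter.fetch_report(applicant_id)
    return report.score >= 640 and report.open_accounts < 20
```

The value of the pattern is the seam: the underwriting service only ever sees the CreditReport shape, so switching bureaus means deploying a different adapter container rather than touching the core loan-management system.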
Authorization and authentication are two of the most important things that happen in our system, and we have so many different ways that it happens right now, it can tend to look a little kludgy, just from the sense of the fact that we need a little more coordination or standardization around it. I mean, we have well written policies that are documented, but the way that those actually get enforced is individualized based on the service, you know, if it's a cloud based policy, then it's AWS IAM, if it's a Kubernetes based policy, it's RBAC, using Kubernetes RBAC, so it kind of looks like if we can abstract a lot of that functionality out of the services, the containers, the orchestration tool or the cloud, to make those decisions, that would really, really simplify things for us. >> So, you guys are end users, so are you part of like an end user group that gives feedback directly into the community, or how does that work, and do you contribute to that? >> Yes, so we're on the fringes of the contributor community as well, and we're definitely on GitHub on all these projects, posting issues and in some cases providing our own PRs or whatever. None of us are within the Kubernetes org, but that's definitely something we all are aspiring to, jumping into some of these projects, especially some of the smaller projects that we're using on a daily basis on our build servers, like Portieris or Notary, some of those things, we're actively contributing to those. >> So, you've traded on mastery of the product, but being active on the project is the key, the balance there. >> Yeah, I mean typically what you find in the finance industry is when they go for a solution, they lead with their wallet, as in what can we purchase, or what can we sponsor, but Ygrene has been, our managers and management have been incredibly empowering this way, they say, well, what can we give, we lead with our hands. >> Yeah, and this is interesting, if you have a good business model innovation, which you guys have, you can have a completely clean sheet of paper to build it. >> Right >> So, that's the best thing about the cloud. You can really move fast and go from, you know, point A to point B, move the needle. >> Yeah, but at the same time there's kind of a clean slate, there's even a clean slate in terms of best practices within our industry. Now if we were in mortgage, there's a lot of rules, there's a lot of clear guidelines on how to do security and auditing and things that you need, where in our industry that's all emerging, so we have a chance to also set the pace, set the tone for what security might look like, or what cloud usage might look like, within the PACE industry. But at the same time, we're getting increasing government regulations, so we're having to make these decisions around, what are the tools that are going to help us achieve maximum customer protection and auditability while maintaining our business model without totally-- >> And you're going to need flexibility because you don't know what's going to come next, you've got to be ready for anything, and that is what leads to my next question, two points, how do you guys prepare for what's next, what's the main ethos around technical architecture, around being prepared for that ready state that's coming to you, and then two, what have you learned over the, what's the scar tissue look like, what's the moments of joy and despair going on, because you're iterating, you're learning, you're always constantly getting knocked down, standing back up. 
So this is what innovation is, it can be fun and also grueling at the same time. >> Yeah, so how we deal with what's new: beyond our, like, software process, we have a well-defined process that everything gets churned into. Government is really good about giving us notice about when stuff's going into effect, so we always have target dates that we're going toward. But, in terms of what's next in terms of our software, we have this interesting culture within our organization, everyone wants to improve everything, I think it's called a Kaizen culture, people are just looking at stuff, they want to improve it, and so our process allows for anyone to throw something on the backlog. It will get prioritized, but we're allowing all of our engineers to say, hey, we want to do this, and you know, putting it into an open forum where, you know, we might not do it, but we have the discussion, and we have all the channels to have those discussions, and, like most technology companies or technology focused companies, we spend a lot of time talking about technologies, and making those decisions. >> You guys really have the cultural ethos, but the people to debate and then commit. >> And that's one of my, you know, recommendations for any company trying to move to cloud native or Kubernetes: always, you have to have your evangelists on your team, because you can't expect people who have been doing it one way forever to instantly be on board. You need some sort of technical evangelist, whether that's an outside company; it works best, I think, if it's someone you've hired, or someone in your organization who's preaching the gospel of Kubernetes or cloud native. >> Spark Labs, Lauren's company, is doing a lot of that work, but that really nails it, I mean, you got to just, it's not a technical issue, per se-- >> We're hearing that all through the show here. What's on your wish list, what do you want the holidays to bring for you? If you could throw your wish list out there, and you can, a magic wand, crystal ball >> EKS, if Amazon would respond to our request. >> Okay, we just had AG on yesterday, he said it's coming >> It's coming. >> He said, months, >> Did he say months? I thought it was a few months, so maybe >> We'll check the transcripts. >> Alright >> Yeah, it wasn't tomorrow. >> That's alright. >> And that's one of our, that's our scar tissue, right? We're doing this ourselves, you know, there's this huge control board and we've got people, you know, turning the knobs and things, and we're relatively small, you know, we're a small engineering organization, so we're doing a lot of this ourselves, where we can abstract a lot of that work out to a cloud provider that we are already on. >> Well, it's going to be good reps for you guys, as this thing gets abstracted away, you're going to have a great core competency in Kubernetes, I think that is a notable thing there. >> Austin: For sure. >> One of the things on my wish list, I was speaking to Jace and Josh Berkus and a lot of the core contributors in Kubernetes at the Contributors Summit, I kind of realized that I would love to see a coordinated cross-cutting effort, either on the part of the CNCF or on the part of the Kubernetes project proper, to have proactive security, I wouldn't call it a working group, I guess a SIG, a Special Interest Group. It would be, I know that we can deal with zero day issues really, really quickly. 
For example, the Azure host path mapping issue that was a few months ago. But right now it's kind of the responsibility of each SIG to implement whatever security looks like to them individually, which is great, it means there are people thinking about security, that makes me sleep better at night. But seeing some coordination around that and kind of driving towards, okay, we have this tool that seems to be changing the game, how are we going to change the game with security? Like, is there a way to look at that, and even, 'cause authentication and authorization have been around since more than one user used a terminal in the 1960's and 70's. But even with this new step of admission controllers, where we have more fine-grained control around how stuff gets into the cluster, I think it would be great to look at what a coordinated cloud native security effort would look like. >> I think that's great, I mean we've been talking to a lot of vendors here and a lot of folks that have projects, and we bring up security every single time, and they kind of have an answer, but they really don't. >> They body swerve you, we've got this, we've got that. >> Or you're the developer and you have to build it in yourself, so I totally agree with that recommendation, I think it's fabulous. >> Yeah, Kubernetes is making so many things simpler at certain levels. Now, if we can focus those efforts at making security simple for people, because there are security experts, they can put their two cents in >> Lauren: Let's build it in and not bolt it on. >> Build it in and not expect every developer to know. >> Zach: Don't bolt it on, build it in. >> Build it from the beginning, there are all kinds of new ways. The fact there is no perimeter with the cloud really kind of throws everyone for a loop, because you have to go from the chipset on down. I mean, what Google's got, I think, is a very interesting approach, they're trying to push forward this multilayer approach from chip to kernel to OS to app, interesting. They've got, managing all their security, they've got Android, I mean spear phishing is a huge problem right now, we're seeing, and a lot of enterprises we talk to are like, well, the firewalls and VPNs, that's old school, they need to modernize that, so this is going to get them thinking about that. So great, hey guys, thank you for coming on and sharing your feedback-- >> Thank you. >> And your data and your place and how you are architected on AWS and your work with Kubernetes. Congratulations. >> Thank you. >> Cube coverage here in Copenhagen. It's theCUBE's coverage at KubeCon 2018. We'll be back with more after this short break.
ENTITIES
Entity | Category | Confidence |
---|---|---|
Lauren Cooney | PERSON | 0.99+ |
Lauren | PERSON | 0.99+ |
Zach | PERSON | 0.99+ |
Josh Burkus | PERSON | 0.99+ |
Jace | PERSON | 0.99+ |
Cloud Native Computing Foundation | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Copenhagen | LOCATION | 0.99+ |
Zach Arnold | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
John Furrier | PERSON | 0.99+ |
Spark Labs | ORGANIZATION | 0.99+ |
two | QUANTITY | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Europe | LOCATION | 0.99+ |
yesterday | DATE | 0.99+ |
KubeCon | EVENT | 0.99+ |
two points | QUANTITY | 0.99+ |
Prometheus | TITLE | 0.99+ |
tomorrow | DATE | 0.99+ |
RBAC | TITLE | 0.99+ |
Kubernetes | TITLE | 0.98+ |
CNCF Cloud Native Foundation | ORGANIZATION | 0.98+ |
two cents | QUANTITY | 0.98+ |
Ygrene | PERSON | 0.98+ |
Copenhagen, Denmark | LOCATION | 0.98+ |
Ygenre energy fund | ORGANIZATION | 0.98+ |
more than one user | QUANTITY | 0.98+ |
Cloud Native | ORGANIZATION | 0.98+ |
android | TITLE | 0.97+ |
one | QUANTITY | 0.97+ |
Austin | PERSON | 0.97+ |
single | QUANTITY | 0.97+ |
CloudNativeCon | EVENT | 0.97+ |
Copenhagen Denmark | LOCATION | 0.96+ |
point B | OTHER | 0.96+ |
10 years | QUANTITY | 0.96+ |
Austin Adams | PERSON | 0.96+ |
CNCF | ORGANIZATION | 0.96+ |
zero day | QUANTITY | 0.96+ |
70's | DATE | 0.96+ |
One | QUANTITY | 0.95+ |
Kubecon 2018 | EVENT | 0.95+ |
Kubernetes | ORGANIZATION | 0.95+ |
Notary | TITLE | 0.94+ |
FluentD | TITLE | 0.94+ |
few months ago | DATE | 0.93+ |
1960's | DATE | 0.93+ |
CloudnativeCon Europe 2018 | EVENT | 0.92+ |
Azure | TITLE | 0.92+ |
Cube | ORGANIZATION | 0.92+ |
Contributors Summit | EVENT | 0.91+ |
Kubernetes RBAC | TITLE | 0.91+ |
each | QUANTITY | 0.89+ |
SIG | ORGANIZATION | 0.88+ |
tons of services | QUANTITY | 0.87+ |
The Kubernetes Project | TITLE | 0.85+ |
two end user | QUANTITY | 0.84+ |
three | QUANTITY | 0.83+ |
PACE | TITLE | 0.82+ |
IAM | TITLE | 0.82+ |
CloudNativeCon EU 2018 | EVENT | 0.79+ |
one project | QUANTITY | 0.76+ |
Kaizen | ORGANIZATION | 0.76+ |
one way | QUANTITY | 0.7+ |
GitHub | ORGANIZATION | 0.69+ |
single time | QUANTITY | 0.67+ |
things | QUANTITY | 0.66+ |
of people | QUANTITY | 0.64+ |
Kubecon | EVENT | 0.63+ |
Nick O'Keefe, Arnold & Porter | ACGSV GROW! Awards 2018
>> Narrator: From the computer museum in Mountain View, California, it's theCUBE. Covering the ACG Silicon Valley Grow Awards. Brought to you by ACG Silicon Valley. >> Hey, welcome back everybody, Jeff Frick here with theCUBE, we're in Mountain View, California at the ACGSV awards, the GROW! Awards, 14th annual. We've been coming for a couple of years; about 300 people celebrating, really, there's a lot of networking, it's an interesting organization. Check it out. We're excited to have our next guest, he's Nick O'Keefe, partner at Arnold & Porter. Nick, great to see you. >> Likewise, great seeing you, great to talk to you. >> So we were talking a little bit off camera, you came to Silicon Valley in 2000, and were saying you've seen a lot of changes in those 18 years. >> Yeah, it's phenomenal, it's epitomized by the great gathering that we have here today. As I was saying earlier, when I came, I worked in Silicon Alley. Silicon Valley was sort of a bigger version of Silicon Alley and it's just kept growing. You know, the practice between East Coast and West Coast has converged. I mean, some of the biggest, most successful companies in the world are based here now, and some of the biggest deals. It's just incredible, in a short period of time, how that's happened. As I was saying earlier, you know, one of the things that really opened my mind, opened my eyes to how successful Silicon Valley is, is that I opened up the Middle East offices for another law firm right around the time of the Great Recession. And it seems like every country is trying to emulate Silicon Valley. We advised on how they can replicate it, what kind of laws they'd have to put in place, what kind of ecosystem they'd have to build. And there's just something really unique here that's really difficult to emulate in different countries-- >> Right, because it's all industries. Right, all industries tend to aggregate and congregate around, usually, a specific location, or one or two. You think of financial services in New York and London. Because you get the people, and those people leave and start new companies. You have the schools that draw people in, and their associates. It's tough, it's tough to replicate a whole ecosystem if you don't have all those components, and then, as it gels for a while, I think the barriers to entry become even higher. So, you get different versions of it, but really not the same. >> Yeah, that's right, I mean, we have all the ingredients here, we have the great educational institutions, you know, Berkeley, Stanford. You have the financial institutions, or the venture money. Very sophisticated population, it's just wonderful living here. Just so many smart people around, you can't just lift them up and put them somewhere else, they all have ties in the community. It's just very tough. What's interesting about financial services, you mentioned, typically that's a New York-based practice, but with Fintech, you're seeing some of that migrate over here. Cryptocurrencies, a lot of that technology is being developed here, and that's really a convergence of financial services and tech, and Silicon Valley is the hub of that. >> Yeah, I really think that Stanford and Cal don't get enough credit. And Santa Clara and some of the other schools, but those two particularly, because they attract really great talent. They come, their weather's great, they've got a culture of innovation, they've got very nice connections with the local business community, so people don't leave. 
So you've got this constant influx of smart people, and they stay, where a lot of other places, even great academic institutions, don't necessarily have the business climate, the weather, or kind of the ecosystem to keep their brightest there locally. So I think that's just a huge driver. >> Yeah, absolutely, I completely agree. And even if they don't stay, they still maintain their ties here. You know, people all over the world come to study here, as you're indicating. You know, I'm doing a deal currently with some Chinese people who did graduate research locally, and they formed a very successful start-up in China, which we're currently doing a deal with. And they couldn't be where they are if they hadn't gone through Stanford, and they developed ties with the region, and with the companies in the region, so they're very much sort of a diaspora of Silicon Valley, the way they've operated. >> Right, what is your take on China? 'Cause to me, China's the big competitor. That's the one, I think, where there's the potential, because they've got a huge internal market, they're really good at fast following, and you look at Alibaba Cloud, and some of the big, big players over there. I think that's really where the biggest threat to the current US incumbents is going to come. >> It's very interesting, it's sort of two-faceted. On the one hand, obviously, a huge population, and as the country develops, I mean, ultimately within the fairly near future, the gross national product is expected to overtake the US. But you have sort of a different culture, and they have the same challenges as everyone else does in sort of replicating Silicon Valley. I don't think they'll ever take Silicon Valley, you know, take the crown away from it. And I think, what I'm seeing now in a couple of deals is, so the current administration is obviously trying to defend the US trade position, but it's having deleterious effects in that it's preventing Silicon Valley companies from growing and from doing deals. You know, a lot of the Chinese funds would like to invest in the US, but there's currently some regulations that are expected to be proposed next month that could inhibit Chinese investment in the US. Now that's not good for Silicon Valley, so the attempt is to, sort of, protect the US economy, but, you know, I can see certain effects that are happening that are not helpful. It's interesting, there's sort of a symbiotic relationship between development here in the US and development in other countries, and it's difficult to fight it, 'cause you're going to have weird effects. You know, I think the US, it's just a unique country. You know, I think it'll always be unique, and I personally, I don't have a fear that China is going to somehow usurp the position the US occupies, or India, or another huge country. I'm just very bullish on Silicon Valley, and the US generally. >> Yeah, it is amazing, 'cause I've been here a little longer than you, and it just, it just keeps reinventing, right? It's just wave after wave after wave, it was originally silicon and microprocessors, and then it's software, and then it's IoT. And now you see all the automotive people have innovation centers here. So wave after wave after wave just continues to come, and then we're going to have, you know, 5G, and it's this whole move to asymptotically approaching zero cost of storage, compute, and networking, and infinite, basically, amounts of those on tap. 
It really opens up a huge opportunity. >> It really does, yeah, and a lot of it's going to come from here. >> Alright, Nick, well thanks for taking a few minutes of your time, and stopping by. >> You bet, my pleasure. >> Alright, he's Nick O'Keefe, I'm Jeff Frick, you're watching theCUBE, from the ACGSV GROW! Awards in Mountain View, California. Thanks for watching. (digital music)
ENTITIES
Entity | Category | Confidence |
---|---|---|
Nick O'Keefe | PERSON | 0.99+ |
Jeff Frick | PERSON | 0.99+ |
China | LOCATION | 0.99+ |
New York | LOCATION | 0.99+ |
Nick O'Keefe | PERSON | 0.99+ |
London | LOCATION | 0.99+ |
New York | LOCATION | 0.99+ |
Nick | PERSON | 0.99+ |
US | LOCATION | 0.99+ |
2000 | DATE | 0.99+ |
Silicon Valley | LOCATION | 0.99+ |
18 years | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
Cal | ORGANIZATION | 0.99+ |
Stanford | ORGANIZATION | 0.99+ |
two | QUANTITY | 0.99+ |
next month | DATE | 0.99+ |
Mountain View, California | LOCATION | 0.99+ |
Mountain View, California | LOCATION | 0.99+ |
Alibaba Cloud | ORGANIZATION | 0.99+ |
Fintech | ORGANIZATION | 0.98+ |
Middle East | LOCATION | 0.98+ |
Chinese | OTHER | 0.98+ |
today | DATE | 0.98+ |
ACG Silicon Valley Grow Awards | EVENT | 0.97+ |
about 300 people | QUANTITY | 0.97+ |
ACG Silicon Valley | ORGANIZATION | 0.96+ |
ACGSV awards | EVENT | 0.95+ |
Arnold | PERSON | 0.95+ |
East Coast | LOCATION | 0.94+ |
Porter | PERSON | 0.94+ |
West Coast | LOCATION | 0.93+ |
Great Recession | EVENT | 0.92+ |
14th annual | QUANTITY | 0.92+ |
Silicon Alley | LOCATION | 0.91+ |
theCUBE | ORGANIZATION | 0.91+ |
Berkeley | ORGANIZATION | 0.91+ |
wave after | EVENT | 0.88+ |
ACGSV | EVENT | 0.87+ |
ACGSV GROW! Awards 2018 | EVENT | 0.86+ |
grow awards | EVENT | 0.83+ |
Santa Clara | ORGANIZATION | 0.78+ |
China | ORGANIZATION | 0.78+ |
5G | ORGANIZATION | 0.77+ |
India | LOCATION | 0.75+ |
Silicon | ORGANIZATION | 0.72+ |
couple of years | QUANTITY | 0.72+ |
couple | QUANTITY | 0.71+ |
US | ORGANIZATION | 0.69+ |
wave after wave | EVENT | 0.68+ |
Grow Awards | EVENT | 0.61+ |
Silicon Valley | ORGANIZATION | 0.56+ |
Valley | LOCATION | 0.46+ |
Thijs Ebbers & Arno Vonk, ING | KubeCon + CloudNativeCon NA 2022
>>Good morning, brilliant humans. Good afternoon or good evening, depending on your time zone. My name is Savannah Peterson and I'm here live with theCUBE. We are at KubeCon in Detroit, Michigan. And joining me is my beautiful co-host, Lisa, how are you feeling? Afternoon of day three. >>Afternoon of day three. We've had such great conversations. It's been fantastic. The momentum has just been going like this. I love it. >>Yes. You know, sometimes we feel a little low when we're at the end of a conference. Not today. Don't feel that way at all, which is very exciting. Just like the guests that we have up for you next. Kind of an unexpected player when we think about technology. However, one of the themes is every company is trying to be a software company. I love that we're talking to ING. Joining us today are Thijs Ebbers and Arno Vonk. Welcome to the show, gentlemen. Thank >>You very much. Glad to be here. Thank you. >>Yes, it's wonderful. All the way in from Amsterdam. Probably some of the farthest flying folks here for this adventure. Starting off, I forgot, what's going on with the shirts, guys? You match very well. Tell, tell everyone. >>Well, these are our VR code shirts. VR code is basically the way our company gets people interested, as an IT person, in banking. Right? Actually, people don't think banking is a good place to work as an IT professional, but actually it is, and we are using the event, with these nice logos, to get IT attention. >>I love that. So let's actually, let's just talk about that for a second. Why is it such an exciting role to be working in technology at a company like ING, or a traditional bank? >>ING is a challenging environment. That's how you make an engineer happy: basically, give them a problem to solve. So we have lots and lots of problems to solve. So that makes it challenging. But yeah, also rewarding. And you can say a lot of things about banks, but looking at it from the IT perspective, we are doing amazing things in IT, and that's what we talked about. Can >>You, can you tell us any of those amazing things, or are they secrets? >>I think we talked about them last Tuesday at a co-located conference. Yeah, so we had two, two presentations. I presented with my colleague on our journey over the last three years. So what has ING done? Basically, building a secure container hosting platform. Yeah. How do we run banking workloads with cloud native technology? And together with our colleague we actually showed it by demo, live, and >>Awesome >>In person. So we were not just presenting, >>It's not all smoke and mirrors. It's >>Not smoke and mirrors. We're not presenting marketing fluff; we are actually doing it today. And that's what we wanted to share here. >>Well, and as consumers we expect we can access our banking on any device, 24 by seven. I wanna be able to do all my transactions in a way that I know is secure. Obviously security's a huge thing there, but talk about ING. Banks have been around for a very long time. Talk about this financial institution as a software company. Really, obviously a lot of challenges to solve, a lot of opportunity. But talk about what it's like working for a historic bank that's really now a tech company. >>Yes. It has really been changing from a bank to a tech company. Yeah. We have a lot of developers and operators and we do deliver software. We run on-prem, we run in the public cloud. So we have a huge number of engineers and people around to make our software. Yes. 
And I am responsible for the ING container hosting platform, and we deliver the namespace as a service, and as a real, real secure environment. So our developers, all our developers in ING, can request it, but they only get a namespace. Yeah, that's very important there. They >>Have >>Resources and all sorts of things. Yeah. And they cannot access it directly. They can only access it in one way. So, >>So Lisa and I were chatting before we brought you up here. Namespace as a service. This is a newer term for us. Educate us. What does that mean? >>Basically it means we don't give a full cluster to our consumers, right? We only give them, basically, CPU, memory, networking. That's all they need to host an application. Everything else we abstract away. And especially in a banking context, where compliance is a big thing, you don't need to do compliance for an entire Kubernetes cluster as a developer. It really saves development time for the colleagues in the bank. It >>Decreases the complexity of projects, which is a huge theme here, especially at scale. I can imagine. I mean, my gosh, you're serving so many different people, it probably saves you time. Let's talk about regulation. What, how challenging is that for you as technologists, to balance all the regulations around banking and FinTech? It's, it's, it's, it's not like some of these kind of wild, wild west industries where we can just go out and play and prototype and do whatever we want. There's a lot of >>Rules. There's a lot of rules. And the problem is you have legislation and you have the real world. Right. And you have to find something in, they're >>Not the same thing. >>You have to find something in between that both parties understand and can adhere to. Yeah. So the challenge we had: basically we had to write our own container security standards, to prove that the things we were doing were the right things, to be in control as a bank, because there was no market standard for container security. So basically we took some input from the good work that was already out there, and we added some things on top to be valid for a bank in Europe. So yeah, that's what we did. And the nice thing is, today we tick all the boxes we defined back in 2019. >>Hey, so, I guess, I guess the rules are a little bit easier when you get to help define them. Yep. Yeah. That feels like a very good strategic call >>And they make sense. Yeah. Right. Because the hardest problem is trying to be compliant with something which doesn't make sense. Right, >>Right. Arno, talk about, let's double click on namespace as a service. You talked about what that is, but give us a little bit of information on why ING really believes this is the right approach for this company. >>It protects, for the security, so that developers cannot do things they shouldn't. Yeah. They cannot access their stuff anymore when it is running in production. And that is the most, most important thing. That is, it is immutable, running in our platform. >>Excellent. Talk about both of you. How long have you, have you both been at ING for a long time? >>I've been with ING since September 2001. So that's more than 20 years >>Now. Long time. Arno, what about you? >>Before 2000 already, before. >>So both of you, commenting on that, that's a long time. Yeah. Talk about the culture of innovation that's at ING, to be able to move at such speed and be groundbreaking in what you're, how you're using technology; what, what's the appetite like at the bank to embrace new and emerging technologies? 
>>So we are really looking, basically, the, the mantra of the bank is to help our customers get a step ahead in life and in business. And we do that by, one, superior customer service, and secondly, sustainability at the heart. So anything which contributes to those targets: you can go to your manager, and if you can make a good case for why it contributes, in most of the cases you get some time, or some budget, or even some additional colleagues to help you out and give it a try. From a culture perspective, we're open to trying things out before we reach production. Once you go to production, yeah, then we are back to being a bank, and you need to tick all the boxes to make really sure that we are confident with our customers' data, and basically we're still a bank, but a lot is possible. >>A lot. It is possible. And there's the customer on the other end who's expecting, like I said earlier, that they can access their data any time that they want, be able to do any transaction they want, making sure the content that's delivered to them is relevant, that it's secure. Obviously that's the biggest challenge, especially as we think about how many generations are alive today, and those that aren't tech savvy, yeah, have challenges with that. Talk about what the bank's dedication is to ensuring, from a security perspective, that its customers don't have anything to worry about. >>That's always a thin line between security and the user experience. So ING, like every other bank, needs to make choices. Yes. Do we want real ease for customers and take the risk that somebody abuses it, or do we make it really, really secure and alienate part of our customer base? And that's an ongoing, that's a, that's a hard, >>It's a trade off. That's >>A line. >>So it's really hard. The interesting part is, in the Netherlands we had some debates about banks closing down locations, but the moment we introduced our mobile app on iPads, basically the debates became a lot quieter, because a lot of elderly people couldn't work with an iPhone. It turned out they were perfectly fine with a well-designed iPad app to do their banking. Really? >>Okay. >>But that's already learning from like 15 years ago. >>What was the, what was the product roadmap on that? So how, I mean, I can imagine you released a mobile app, you're not really thinking about that. >>That's basically, I think that was a happy coincidence. We just, yeah, okay, went out to design a very good mobile app. Yeah. And then looking afterwards at the statistics, we say, hey, who was using it this way? We've got somebody who's signing on, and I dunno the exact age, but it was something like somebody of 90 plus who signed on to use that mobile app. >>Wow. Wow. I mean, you really have the five different generations living and working right now. Designing technology. Everybody has to go to the bank, whether we are fans of our bank or we're not. Although now I'm thinking about ING as a bank in general. Y'all have a very good attitude about it. What has kept you at the company for over 20 years? That is, we, we see people move around, especially in this technology industry. Yes. Yeah. You know, every two to three years. Sometimes obviously you're in positions of leadership, they're obviously taking good care of you. But I mean, multiple decades. Why have you stuck around? >>Well, first, I didn't have the same job in ING for two decades. Nice. So I went around the infrastructure domain. 
I did storage initially, I did security, I did solution design, and in the end I ended up in enterprise architecture. So yeah, it's not like I stuck 20 years in the same role. So every few years >>Go up the ladder but also grow your own skill sets. >>Explore. Yeah. >>Basically I think that's what everybody should be thinking about these days. If you're in a cloud-heavy industry and you're good at it, you can earn quite a nice salary. But it also means that you have some kind of obligation to society to make a difference. And I think, yeah, >>I wouldn't say that everybody feels that way. I >>Need to make a difference with ING: a difference by being more available to our consumers, being more secure for our consumers. I think that's what's driving me to stick with the company. >>What about you, Arno? >>Yes, for me it's very important that every two or three years I am doing new things. I can work with the latest technology, so I stay really, really innovative, and that makes it the place to be. >>Yeah. You sort of get that rotation every two to three years with the different tools that you're using. Speaking of, here we are at KubeCon, we're talking cloud native, we're talking Kubernetes. Coming back to the regulations, do you think it's possible to get to banking-grade security with cloud native tech? >>Initially I said we would be at least as secure as traditional IT, but last Tuesday we proved we can get more secure than traditional IT. So yeah, definitely. Yes. >>Awesome. I mean, it sounds like you proved it to yourself too, which is really saying something. >>Well, we actually have pentest results, and of course I cannot divulge those, but they look pretty good. >>Can you define, I want to kind of double-click on banking-grade security: define what that is, banking-grade security, and how other industries could aim to, yeah, >>Hit that, that >>Standard. I want security everywhere, especially at my bank. The >>Architecture is zero privilege. You hear a lot about least privilege in all the security talks. That's not what you should be aiming for; zero privilege is what you should be aiming for. Once you're at zero-privilege environments, okay, who can leak data? Because no natural person has access to it. Even if you have somebody invading your infrastructure, there are no privileges; they cannot do privilege escalations. So the answer for me is really clear: if you are handling customer data and customer funds, aim for a zero-privilege architecture. >>What are you most excited about next? What's next for you guys, what's next for ING? What are we going to be talking about when we're chatting with you right here next year, or in Amsterdam actually, since we're headed that way in the spring, which is fun. Yes. >>Happy to be your host in Amsterdam. The >>Other way around. We're holding you to that. You've talked about how fun the culture is. Now you're going to ask, she and I, we need, well, we need the tee-shirts. We obviously need a matching outfit. >>Definitely, we'll arrange some tee-shirts for you as well. Yeah, no, for me, two highlights from this conference. The first one was kcp. That can potentially be a paradigm change in how we deal with workloads on Kubernetes, so that's very interesting. I don't know if we'll see any implementations by next year, but it's definitely something to watch. Looks >>Like we had them on the show as well. Yeah. So it's very fun. I'm sure they'll be very flattered that you just said that.
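A brief aside on the zero-privilege architecture described above. One crude way to check the "no natural person has access" property on a Kubernetes cluster is to look for ClusterRoleBindings whose subjects are Users or Groups rather than ServiceAccounts. This is only a toy illustration of the idea, not ING's tooling; a real zero-privilege design involves far more (admission control, break-glass procedures, immutable deployments).

```python
# Toy audit: list ClusterRoleBindings and flag any that grant cluster-wide
# roles to human principals (User/Group subjects).
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside a pod
rbac = client.RbacAuthorizationV1Api()

findings = []
for crb in rbac.list_cluster_role_binding().items:
    for subject in (crb.subjects or []):
        # ServiceAccounts are workload identities; User/Group bindings usually
        # mean a natural person (or their SSO group) has access.
        if subject.kind in ("User", "Group"):
            findings.append((crb.metadata.name, crb.role_ref.name, subject.kind, subject.name))

if findings:
    print("Human principals with cluster-wide bindings:")
    for binding, role, kind, name in findings:
        print(f"  {binding}: {role} -> {kind}/{name}")
else:
    print("No User/Group subjects found in ClusterRoleBindings.")
```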
What about you, Arno, what got you most excited? >>The most important thing for me was talking to a lot of other people: what are they thinking, how do we go forward? So the community, talking to each other, and also finding, with those people, how we go forward. >>Yeah, that's been a big thing for us here on theCUBE, just the energy, the morale. I mean, the open source community is so collaborative; it creates an entirely different ethos. Arno, Thijs, thank you so much for being here. It's wonderful to have you and hear what ING is doing in the technology space. Lisa, always a pleasure to co-host with you. Of course. And thank you, CUBE fans, for hanging out with us here on day three of KubeCon + CloudNativeCon, live from Detroit, Michigan. My name is Savannah Peterson and we'll see you soon for another great chat.
SUMMARY :
Arno Vonk and Thijs Ebbers of ING join Savannah Peterson and Lisa on theCUBE at KubeCon + CloudNativeCon in Detroit. They describe ING's namespace-as-a-service model, in which developers get only CPU, memory, and networking rather than a full cluster, which simplifies compliance in a regulated banking environment, and they note that because no market standard existed, ING wrote its own container security standards in 2019 and now meets them. The pair also presented at the conference to show that running banking workloads on cloud native technology is not smoke and mirrors. Both guests have been with the bank for roughly two decades, rotating through roles, and they argue for zero-privilege architecture as the bar for banking-grade security: no natural person has access to production, so even an intruder finds no privileges to escalate. They close with highlights from the show, including kcp, and an invitation to Amsterdam in the spring.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Lisa | PERSON | 0.99+ |
Amsterdam | LOCATION | 0.99+ |
2019 | DATE | 0.99+ |
Ana | PERSON | 0.99+ |
Europe | LOCATION | 0.99+ |
Savannah Peterson | PERSON | 0.99+ |
Netherlands | LOCATION | 0.99+ |
Arnold | PERSON | 0.99+ |
two | QUANTITY | 0.99+ |
20 years | QUANTITY | 0.99+ |
September, 2001 | DATE | 0.99+ |
iPhone | COMMERCIAL_ITEM | 0.99+ |
both | QUANTITY | 0.99+ |
I N G | ORGANIZATION | 0.99+ |
iPads | COMMERCIAL_ITEM | 0.99+ |
two decades | QUANTITY | 0.99+ |
three years | QUANTITY | 0.99+ |
iPad | COMMERCIAL_ITEM | 0.99+ |
Detroit, Michigan | LOCATION | 0.99+ |
Detroit, Michigan | LOCATION | 0.99+ |
today | DATE | 0.99+ |
next year | DATE | 0.99+ |
KubeCon | EVENT | 0.99+ |
Arno Vonk | PERSON | 0.99+ |
both parties | QUANTITY | 0.99+ |
IG | ORGANIZATION | 0.99+ |
more than 20 years | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
last Tuesday | DATE | 0.99+ |
over 20 years | QUANTITY | 0.98+ |
I n G | ORGANIZATION | 0.98+ |
Thijs Ebbers | PERSON | 0.98+ |
15 years ago | DATE | 0.97+ |
CloudNativeCon | EVENT | 0.97+ |
seven | QUANTITY | 0.97+ |
Cuan | ORGANIZATION | 0.97+ |
first | QUANTITY | 0.97+ |
90 plus | QUANTITY | 0.96+ |
Cube | ORGANIZATION | 0.96+ |
Zero privilege | QUANTITY | 0.95+ |
Penta | ORGANIZATION | 0.94+ |
Arna | PERSON | 0.94+ |
first one | QUANTITY | 0.93+ |
zero privilege | QUANTITY | 0.93+ |
one wifi | QUANTITY | 0.92+ |
Kubernetes | ORGANIZATION | 0.92+ |
2000 | DATE | 0.92+ |
Arnoldo | PERSON | 0.92+ |
OnPrem | ORGANIZATION | 0.92+ |
two highlights | QUANTITY | 0.92+ |
day three | QUANTITY | 0.91+ |
five different generations | QUANTITY | 0.9+ |
ING | ORGANIZATION | 0.9+ |
24 | QUANTITY | 0.89+ |
CubeCon | ORGANIZATION | 0.88+ |
G Bank | ORGANIZATION | 0.87+ |
zero privilege architecture | QUANTITY | 0.86+ |
secondly | QUANTITY | 0.86+ |
Atan | LOCATION | 0.85+ |
two presentations | QUANTITY | 0.83+ |
S shift commons conference | EVENT | 0.82+ |
NA 2022 | EVENT | 0.82+ |
zero privileged | QUANTITY | 0.81+ |
every two | QUANTITY | 0.81+ |
last three years | DATE | 0.79+ |
double | QUANTITY | 0.77+ |
Ty Evers | ORGANIZATION | 0.76+ |
device | QUANTITY | 0.72+ |
Afternoon | DATE | 0.72+ |
Cuban Live | EVENT | 0.7+ |
a second | QUANTITY | 0.69+ |
Ty | PERSON | 0.68+ |
three | QUANTITY | 0.65+ |
every | QUANTITY | 0.57+ |
i Container Ocean | ORGANIZATION | 0.56+ |
Afternoon of day | DATE | 0.54+ |
Kubernetes | TITLE | 0.52+ |
Aaron Arnoldsen & Adi Zolotov, BCG GAMMA | AWS re:Invent 2021
>>Welcome back to theCUBE's continuous coverage of AWS re:Invent 2021. I'm Lisa Martin. We are running one of the industry's most important hybrid tech events this year with AWS and its enormous ecosystem of partners, with two live sets going on right now, a dueling set right across from me, two remote studios, and over 100 guests on the program. We'll be digging into really the next decade of cloud innovation. I'm pleased to welcome two guests that sit next to me here: Aaron Arnoldsen, a partner at BCG GAMMA, and Adi Zolotov, associate director of data science at BCG GAMMA. Guys, welcome to the program. Thanks for having us. Adi, let's go ahead and start with you. Give us the lowdown: what's going on at BCG GAMMA? >>We are focused on building responsible, sustainable, and efficient AI at scale to solve pressing business problems. >>Good, we're going to dig into that more. There was a lot of talk about AI this morning during the keynote, and yesterday as well. And you know, one of the things, Aaron, that we've talked about over the last day and a half is that every company these days has to be a data company, but the volume of data is so great that we've got to have AI to help all the humans process it and find the nuggets buried within these volumes of data for companies to be competitive. You talk about sustainable and efficient; let's talk about what you mean by efficient AI. It sounds great, but help unpack what that actually means, and how does an organization in any industry actually achieve it? >>Yeah. So when we talk about efficient AI, we're really talking about resilience, scale, and adoption. We all know that the environments in which AI tools and systems are deployed change and update very frequently, and those changes and updates can lead to errors and downtime, which erode user trust. So when you're designing your AI, it's really critical to build it right and ensure it's resilient to those types of changes in the operational environment. That really means designing it up front to adhere to company standards around documentation, testing for bias, as well as approved model architectures. So that piece is really critical. The other piece of efficient AI is really about using better code structure to ensure and enable that teams can search, learn, and really clone AI IP to bring AI at scale across company silos. So what efficient AI does is ensure that companies can go from proof of concept and exploration to deploying AI at scale. The final piece is really about solving the right business problems quickly, in a way that ensures that users and customers will adopt and actually use the tool and capability. >>That adoption there is absolutely critical. And >>You know, when we're talking about AI, most of the time we're talking about three components, and we call it the ten-twenty-seventy rule: 10% of the change is really about the better AI algorithms that are coming out; 20% is better architecture, the technology, all of those components; but 70% of it is really about how we are influencing our business partners to make better decisions, how we are making sure AI is built right into the operational decision flow. And that's really, when we start talking about better AI, we move it away from kind of a pet project, buzzword bingo, into operational decision flows, you know, and there's a journey there, there's a journey that we all are on.
You see the evolution of AI right now, and I liken it a lot to myself: I'm a big football fan, right, and fantasy football is my passion. When I look at the decisions I made 10 years ago versus now, now I actually have my own models I'm running against it. I'm very much into the details of what the data is telling me, but it's not until I bring that together with my decision-making process that I really get bragging rights on Sundays. >>I wouldn't want to compete against Aaron. I mean, I've got a lot of friends that do fantasy football, but I don't think they're actually taking data-driven approaches the way you are. I'm glad you talked about the 10-20-70 formula for dividing investments in AI. One of the things that really surprised me, and I'm looking at my notes here because I was writing this down, was that you said 10% AI and machine learning algorithms, 20% software and technology infrastructure, but 70% is change management. That is hard, especially at the speed with which every industry is operating today. In the last 22 months we've seen a massive acceleration to the cloud, every business pivoting many times. With your customers, in terms of understanding the challenges they can solve with AI, given that we're still in such a dynamic global environment, what are you seeing? >>So I think it's actually quite bi-modal. Some companies, including the public sector, are really leaning in and exploring all the different applications and all the different solutions. Unfortunately, if they're not emphasizing that 70% on change management, culture change, and user adoption, those investments are substantial but you don't get the return on the investment. On the other hand, the other part of that bi-modal distribution is the folks who are still really reluctant, because they have made investments and it hasn't brought about the change they were hoping for. And so I think it's really critical to bring a holistic approach to bringing AI and advanced analytics tools to really change the way a company attacks its problems and brings solutions to its users and customers. >>Yeah, I liken it a lot to what we as adults do when we teach our kids about math, right? Less of my time with my own kids is focused on teaching them the principles and all those things; it's more about teaching them to be comfortable. Why are they learning math? What are they doing? How is that going to prepare them to be more competitive later on in life? The same thing is happening in this evolution of AI, right? There are these big tech and AI transformations happening, but the question we need to ask ourselves is: are we taking the time to make sure our companies and our people are on the journey with us, and that they understand that this is going to be better for them and give them a competitive advantage? >>That's critical. We talk a lot, on every show, about people, process, and technology, and the people part of that. But I've definitely seen more of a focus over the last two and a half days on the people emphasis: we have to upskill our people, we have to train our people.
We have to make sure that they understand how this technology can partner with them and enable them rather than take things away. So it's nice to hear you talking about the big focus being on the people, because without that, to your point, a lot of those projects aren't successful. >>And not only that; I think the other piece, in terms of bringing the user along for the journey, is that you don't want them to feel like this is just another tool, right? Another addition to their workflow. You want to take the burden away. You want it to not add to their list of daily tasks, but subtract and make it easier. And I think that's really critical for a lot of companies as well. >>Along with what you're talking about, we have to teach people to be responsible. It's one thing to do the job better, but it's another thing to be responsible, because in today's world we have to think about our responsibilities back to our communities, to our consumers, to our shareholders, and ultimately to the environment itself. So as we are thinking about AI, we need to think differently too, because let's face it, data is fuel, and we can accidentally make the wrong decisions for the globe by making the right decisions for stakeholders. We have to do a better job of understanding why we're doing what we're doing, and not only the intended consequences of our decisions but also the unintended consequences. And then we need to be responsible in the ways that we're using AI, and transparent in our use thereof. >>Yeah, Aaron, I think that's incredibly critical. I think responsible AI has to be at the heart of AI transformation. One of the interesting things we have found is that organizations perceive their responsible AI maturity to be substantially higher than it actually is: fewer than 50% of organizations that have fully implemented AI at scale have a responsible AI capability. And so at BCG we've been working quite hard to integrate our GAMMA responsible AI program into the big AI transformations, because it's so critical, so absolutely important. There are a lot of facets to that, but one of the critical ones is ensuring the goals and the outcomes of the AI systems are fair, unbiased, and explainable, which is so critical. It also ensures that we follow best practices for data governance to protect user privacy, which I think is another critical piece here, as well as minimizing any negative social or environmental impact, which again just has to be at the forefront of AI development. What about, >>And I think that there's a tech part to that too. One thing that we're working on, called GAMMA FACET, addresses the fact that for the longest time in this AI transformation, AI was kind of a black box, kind of mystical: we optimize our results. The transformation, when we talk about better AI, is that the decision maker is in the center and knows the outcome. So we make it a clear box. We're working a lot on the most common Python packages to make them more clear too, so that the business user and the data scientist understand the decisions they're making and how those will impact the company and, longer term, society. >>What about the sustainability front?
I mean, it's clear, and I can understand why you have the 10-20-70 approach; that 70% is really important. There are companies that think they're much farther advanced in terms of responsible AI than they really are. But you know, we talk about sustainability all the time. It's a buzzword, but it's also something that's incredibly important to companies like AWS, and I imagine to companies like yourselves. What does sustainable AI look like, and how do organizations implement it along with responsible AI and efficient AI? >>Yeah, I think it's the question in some ways right now, given everything that's happening around the world. AI for sustainability is really critical; I think we all have a part to play in this fight to protect our global environment. So I think we need to use the same AI expertise, the same AI technology that we bring to maximizing revenue and minimizing cost, to minimizing a company's footprint. Long term, I think that's really critical. One of the things we've seen is that 85% of companies want to reduce their emissions, but less than 10% of them know how to accurately measure their footprint. And so we've been focusing on AI for sustainability across a couple of different pillars. The first is measuring the current impact from operations. The second is data mining for optimal decisions to reduce that footprint. And the third is scenario planning for better strategies to alter our impact. >>Excellent. Well, there's so much work to be done. Guys, thank you for joining me to talk about what BCG is doing for responsible, efficient, ethical, and sustainable AI; a lot of opportunities, I'm sure, for you with AWS and your list of clients. We thank you for taking the time to talk with us this morning. All right, for my guests, I'm Lisa Martin. You're watching theCUBE, the global leader in live tech coverage.
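As a footnote to Aaron's point about GAMMA FACET and making "the most common Python packages" more transparent: the snippet below is a generic clear-box illustration using scikit-learn's permutation importance, not the FACET library itself, and the dataset and model choices are arbitrary placeholders.

```python
# Generic "clear box" illustration with common Python packages: train a model,
# then rank features by how much held-out accuracy drops when each is shuffled.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Model-agnostic importance scores a business user can read as a ranked list.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: t[1], reverse=True)
for name, score in ranked[:10]:
    print(f"{name:30s} {score:.4f}")
```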
SUMMARY :
Lisa Martin talks with Aaron Arnoldsen and Adi Zolotov of BCG GAMMA at AWS re:Invent 2021 about building responsible, sustainable, and efficient AI at scale. Efficient AI rests on resilience, reusable code, and adoption, and BCG's ten-twenty-seventy rule puts 70% of the effort into change management so that AI lands in real operational decision flows rather than remaining a pet project. The guests stress responsible AI: most organizations overestimate their maturity, and tools such as GAMMA FACET aim to turn black-box models into a clear box for decision makers. They close on AI for sustainability, noting that 85% of companies want to cut emissions but fewer than 10% can accurately measure their footprint, and describing pillars for measuring impact, optimizing decisions, and planning scenarios.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Aaron | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
Alaska | LOCATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Adi Zolotov | PERSON | 0.99+ |
Aaron Arnoldsen | PERSON | 0.99+ |
85% | QUANTITY | 0.99+ |
20% | QUANTITY | 0.99+ |
70% | QUANTITY | 0.99+ |
two guests | QUANTITY | 0.99+ |
yesterday | DATE | 0.99+ |
10% | QUANTITY | 0.99+ |
Aaron Arnold Santa | PERSON | 0.99+ |
BCG | ORGANIZATION | 0.99+ |
less than 50% | QUANTITY | 0.99+ |
third | QUANTITY | 0.99+ |
less than 10% | QUANTITY | 0.99+ |
Python | TITLE | 0.99+ |
10 | QUANTITY | 0.99+ |
One | QUANTITY | 0.99+ |
BCG gamma | ORGANIZATION | 0.99+ |
second | QUANTITY | 0.99+ |
first | QUANTITY | 0.99+ |
two remote studios | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
10 years ago | DATE | 0.97+ |
2021 | DATE | 0.97+ |
70 | QUANTITY | 0.96+ |
this year | DATE | 0.96+ |
over 100 guests | QUANTITY | 0.95+ |
20 | QUANTITY | 0.95+ |
three components | QUANTITY | 0.93+ |
today | DATE | 0.91+ |
one thing | QUANTITY | 0.9+ |
next decade | DATE | 0.9+ |
last 22 months | DATE | 0.9+ |
this morning | DATE | 0.89+ |
Sundays | DATE | 0.87+ |
two and a half days | QUANTITY | 0.85+ |
Invent | EVENT | 0.83+ |
last day | DATE | 0.76+ |
ten twenty, seventy rule | QUANTITY | 0.73+ |
GAMMA | PERSON | 0.62+ |
Grant Johnson, Ancestry | Qualys Security Conference 2019
>> Narrator: From Las Vegas, it's theCUBE. Covering Qualys Security Conference 2019. Brought to you by Qualys. >> Hey, welcome back, everybody, Jeff Frick here with theCUBE. We are at the Qualys Security Conference in Las Vegas. This show's been going on, I think, 19 years; this is our first time here, and we're excited to be here. There are always these people that go between the vendor and the customer and back and forth. We've had it go one way, now we've got somebody who was at Qualys and now is out implementing the technology. We're excited to welcome Grant Johnson. He is the director of Risk and Compliance for Ancestry. Grant, great to see you. >> Thank you for having me, great to be here. >> Yeah, it is always interesting to me, and there are always a lot of people at these shows who go back and forth between creating and delivering the technology versus implementing the technology and executing on the customer side. So, you saw an opportunity at Ancestry. What opportunity did you see, and why did you make that move? >> Well, it's a good question. I was really happy where I was at; I worked here at Qualys for a long time. But a good colleague of mine from way back took over as the chief information security officer at Ancestry and said, "they've got an opportunity here, do you want it?" I said, "hey, sure." It was really kind of a green field, the ability to get in on the ground floor designing the processes, the environment, the people and everything. What I saw was a really cool opportunity: they were moving to the cloud, complete cloud infrastructure, which a few years ago was a little uncommon. So it was just an opportunity to learn a lot of different things and think through some different processes and the way to fix them. >> Right, right, so you've been there for a little while now, over three years. What was the current state, and then what was the opportunity to really make some of those changes, as kind of a new initiative with this new CISO? >> Yeah, we were traditional, you know, a server and data center kind of background and everything like that. But with the way the company was growing, really just at a crazy clip, we really couldn't sustain it. We wanted to go global, we wanted to move Ancestry out to Europe and to other environments and see the growth that was going to happen there, and there just wasn't a way we could do it with the traditional data center model, plugging data centers in all over the place. So the idea was, we're going to go to the cloud, and in going to the cloud we could really rethink the way we do security and vulnerability management. We went from a more traditional model, where you scan and tell people to patch and do things like that, to where we try to bake vulnerability management into the process and do a lot of different things. And you know, we've done some pretty cool things that way as a company, always evolving, always trying to be better every day, but it's been a lot of fun and really kind of a neat ride. >> Right, right, so was there a lot of app redesign, and a whole bunch of your core infrastructure, not boxes but really the software infrastructure, that had to be redone around a cloud focus so you can scale? >> Yeah, there absolutely was. We really couldn't lift and shift.
We really had to, because we were taking advantage of the cloud environment; if we just lifted and shifted our old infrastructure in there, it wasn't going to take advantage of that cloud elasticity like we needed it to. >> Right. >> We needed it to be able to handle the tide, high tide, low tide, for those traffic times when we're high and low. So it really took a rewrite, and it was a lot of really neat people coming together. At the onset of this, right when I started in 2016, our chief technology officer got up and said, "we're going to burn the ships. We have not signed the contract for our data center to renew in 18 months, so we have to go to the cloud." And it was really neat to see hundreds of people come together and really make that happen. I've been involved in the corporate world in IT for a long time, and a lot of those projects fail; it was really neat to see a big project like that actually get off the ground. >> Right, right. It's funny, the burning-the-ships analogy is always an interesting one. (Grant laughs) Which, you know, Arnold Schwarzenegger never had a plan B. (Grant laughs) Because if you have a plan B, you're going to fall back. So just commit and go forward. >> A lot of truth to that. Right, you're flying without a net, whatever kind of metaphor you want to use on that one. But you have to succeed, and there's a lot that will get it done, I think, if you just don't have that plan B, like you said. >> Right, so talk about kind of where Ancestry is now in terms of being able to roll out apps quicker, being able to scale much larger, and being able to take advantage of a lot more attack surface area, which in the old model was probably not good; now those are actually new touch points for customers. >> It's a brave new world in a lot of aspects. To the first part of that, we're just a few days away from Cyber Monday. Our normal clip of transactions is about 10 to 12 transactions a second. >> So is Cyber Monday still a bump? >> It's still huge for us. >> We have internet at home now. We don't have to go to work to get on the internet to shop. >> You know, crazy enough, it still is. Over the course of the week, kind of starting on Thanksgiving, we scale to about 250 transactions a second. So that was one of the good parts of the cloud: do you invest in the big iron and the big piping for your peak times of the year, and have it sit at 7-10% utilization during the rest of the year? Now you can handle those peaks well. So we're just getting into that time of year, and that's where a lot of the value of our cloud expansion has come. In terms of attack surface, yeah, absolutely. Five years ago I didn't even know what a container was, and we're taking advantage of a lot of that technology to be able to move nimbly. You can't spin up a server fast enough to meet the demands of users clicking things online; you really have to go with containers, and that also increases what you need to be able to secure, with people and process and technology and everything like that. >> Right. >> So it's been a challenge. It's been really revitalizing and really, really neat to get in there and learn some new things. >> That's great. So I want to ask you something. It may be a little sensitive, not too sensitive, but kind of sensitive, right.
With 23andMe and Ancestry and DNA registries, et cetera, it's opened up this whole new conversation around cold cases and privacy and so on. I don't want to get into that; that's a whole different conversation. But in terms of your world, in terms of risk and compliance, that's a whole different type of data set than probably existed in the early days of Ancestry.com >> Yeah >> where you're just trying to put your family tree together. So, how does that increased value, increased sensitivity, and increased potential for problems impact the way that you do your job and the way that you structure your compliance systems? >> Boy. Honestly, that is part of the reason why I joined the company: I really saw this opportunity to be a part of a new technology that's coming online, I'd have to say. >> Or is it no different than everyone else's personal information and those types of things? Maybe it's just higher profile in the news today. >> Not at all, no. It's kind of inherent within our company. We realized that our ability to grow and stay viable, or just alive, as a business pivots on security. Security for us, and privacy, is at the forefront. And I think one of the key differences from other companies I've been at is that people from our development teams, to our operations teams, to our security department, to our executives, we don't have to sell security to them. They really get it. It's our customers' privacy and their data; we're asking people to share their most personal data with us. We can give you a new credit card, or you can get a new credit card number issued; we can't give you a new DNA sequence. >> Right. >> So once that's out there, it's out there, and it is of the utmost importance to us. And like I said, we don't have to sell security internally, and with that we've gotten a lot of support internally to be able to implement the kinds of things we needed to implement to keep that data as secure as we can. >> Right, well, that's nice to hear, and probably really nice for you, to be able to execute your job without having to sell security. It is important, important stuff. >> Grant: Yes, that's absolutely true. >> All right, good. So we are jamming through digital transformation. If we talk a year from now, what's on your plate for the next year? >> We just continue to evolve. We're trying to keep building in some of those processes that make us better, stronger, faster as we go, to respond to threats, and to really handle the global expansion that our company's undergoing right now. We just want to keep the lights on and make sure that nobody even has to think about security while they're doing all this. I can't speak for them, but I think we really want to lead the world in terms of privacy and customer trust and things like that. So there are a lot of things coming up that I think we really want to lead the way on. >> Good, good. I think that is a great objective, and I think you guys are in a good position to be the shining light, kind of guiding in that direction, 'cause it's important stuff, really important stuff. >> Yeah, we hope so, we really do. >> Well, Grant, nothing but the best to you. Good luck, and keep all that stuff locked down. >> Thank you, thank you so much! Thanks for having me. >> He's Grant, I'm Jeff. You're watching theCUBE. We're at the Qualys Security Conference at the Bellagio in Las Vegas. Thanks for watching. We'll see you next time.
(upbeat music)
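An aside on the elastic scaling Grant describes for the Cyber Monday peaks: the sketch below shows one generic way to express that kind of burst capacity with a Kubernetes HorizontalPodAutoscaler. The interview only says Ancestry moved to containers, not which orchestrator or settings it uses, so every name and number here is hypothetical.

```python
# Hypothetical autoscaling sketch: let a web deployment grow from a small
# baseline to a seasonal-peak replica count based on CPU utilization.
from kubernetes import client, config

config.load_kube_config()
autoscaling = client.AutoscalingV1Api()

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="checkout-web", namespace="storefront"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="checkout-web"
        ),
        min_replicas=4,                       # quiet-season baseline
        max_replicas=200,                     # sized for a Cyber Monday style peak
        target_cpu_utilization_percent=60,    # scale out before saturation
    ),
)
# Assumes the "storefront" namespace and "checkout-web" Deployment already exist.
autoscaling.create_namespaced_horizontal_pod_autoscaler(namespace="storefront", body=hpa)
```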
SUMMARY :
Jeff Frick talks with Grant Johnson, director of Risk and Compliance at Ancestry, at the Qualys Security Conference 2019 in Las Vegas. Johnson left Qualys to join Ancestry just as the company committed to leaving its data centers for the cloud; the CTO "burned the ships" by not renewing the data center contract, which let the team rethink security and bake vulnerability management into the development process rather than scan and patch after the fact. Cloud elasticity now absorbs Cyber Monday peaks of roughly 250 transactions per second, far above the normal 10 to 12, and containers let the team move nimbly while expanding the surface that has to be secured. Because a DNA sequence can never be reissued the way a credit card number can, security and privacy are treated as existential, and Johnson notes he has never had to sell security internally.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Jeff Frick | PERSON | 0.99+ |
2016 | DATE | 0.99+ |
Grant Johnson | PERSON | 0.99+ |
Europe | LOCATION | 0.99+ |
Jeff | PERSON | 0.99+ |
Arnold Schwarzenegger | PERSON | 0.99+ |
Grant | PERSON | 0.99+ |
Qualys | ORGANIZATION | 0.99+ |
Ancestry | ORGANIZATION | 0.99+ |
La Vegas | LOCATION | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
hundreds of people | QUANTITY | 0.99+ |
18 months | QUANTITY | 0.99+ |
next year | DATE | 0.99+ |
first part | QUANTITY | 0.99+ |
first time | QUANTITY | 0.99+ |
19 years | QUANTITY | 0.98+ |
Qualys Security Conference | EVENT | 0.98+ |
Five years ago | DATE | 0.98+ |
today | DATE | 0.98+ |
one | QUANTITY | 0.97+ |
Thanksgiving | EVENT | 0.96+ |
Over three years | QUANTITY | 0.93+ |
Ancestry.com | ORGANIZATION | 0.93+ |
theCUBE | ORGANIZATION | 0.92+ |
few years ago | DATE | 0.92+ |
about 10 | QUANTITY | 0.92+ |
Qualys Security Conference 2019 | EVENT | 0.91+ |
Bellagio | LOCATION | 0.9+ |
7-10% | QUANTITY | 0.89+ |
about 250 transactions a second | QUANTITY | 0.88+ |
12 transactions a second | QUANTITY | 0.87+ |
DNA | ORGANIZATION | 0.66+ |
Risk and Compliance for Ancestry | ORGANIZATION | 0.65+ |
23 and Me and | ORGANIZATION | 0.64+ |
a year | QUANTITY | 0.49+ |
Monday | EVENT | 0.4+ |
cyber | DATE | 0.35+ |
Monday | ORGANIZATION | 0.29+ |
Alex Tabares, Carnival Corporation & Sheldon Whyte, Carnival Cruise Lines | Splunk .conf18
>> Narrator: Live from Orlando, Florida. It's theCUBE! Covering .conf18. Brought to you by Splunk. >> Welcome back to Orlando, everybody. Splunk .conf18. This is theCUBE, the leader in live tech coverage. I'm Dave Vellante with my co-host, Stu Miniman. Carnival Cruise Lines is back. We heard from them yesterday, we heard them on the main stage of .conf; their CEO was up there with Doug Merritt. Sheldon Whyte is here, he's an enterprise architect at Carnival Cruise Line, and Alex Tabares, who's the director of threat intelligence at Carnival. Gents, welcome to theCUBE. >> Thank you. >> We're doing a lot of talk on security today. They've lined us up, which is great. We love the conversation, so much to learn. Alex, I'll start with you. When you think about security and threat intelligence, what are the big changes that you've seen over the last, whatever, pick a time: half a decade? A decade? A couple of years even? >> Alex: So, it's just the amount of threats that are coming in now and how fast they're coming in, right? We can't seem to keep up with everything that's happening in the environment, everything that's happening outside trying to get into our environment and cause all that damage, right? So that's why Splunk is awesome, right? I get to see everything come in, in real time. I'm able to quickly pinpoint any action I need to take, send it to my team, and have them remediate right away. >> So, Sheldon, yesterday we had ship and shore from Carnival, and he was talking about really different problems. You know, the folks on the ship, they've got 250 thousand people on the ocean at any one point in time, collecting data, trying to make a better experience, keeping them connected. Folks on the shore, obviously, websites and things like that. Where do you fit into that mix of ship and shore? >> Sheldon: Right, so there's an entire value stream that we map out as enterprise architects. What we do there is analyze all the customer touch points, and then we aggregate all of that information into a pipeline that we then address to our audiences with those critical KPIs: operational and infrastructure, the entire stack. >> Dave: You guys obviously have a very strong relationship with Splunk. We heard from your CEO, Arnold Donald, right? >> Alex: Correct. >> Interesting name, I haven't messed that up yet. (laughing) So, where did that relationship start? Did it start in SecOps? Did it start in IT operations management? >> Alex: So, it really started in DevOps, right? They purchased Splunk, I think back in like 2007, 2008, and they started looking at it. I was talking to one of our other architects, and one gig is what we started at, right? Now we're upwards of 600 gigs, just for security. So it started there and it just kind of morphed into this huge relationship where we're partnering and touching all aspects of our business with Splunk, you know, and the Cloud and everything else. >> So, I don't know if you guys saw the keynotes today, but we saw some announcements building on yesterday's Splunk Next announcements. We heard some business workflow and some industrial IOT. I would think both of those are relevant for you guys; not industrial IOT exactly, but your IOT. Do you see Splunk permeating further into the organization? I guess the answer's yes, you kind of already said that. But I'm interested in what role you guys play in facilitating that. Are you kind of champions, evangelists, experts, consultants? How does that work?
How do you see that (mumbles)? >>Sheldon: So, we see ourselves as internal consultants. We have our internal customers that depend on our guidance and our end-to-end view of the business processes. And now, as we enter the second year of our Cloud journey, we're able to accelerate our time to value for our internal customers, to gain even greater insights into what's happening ship and shore. >> Dave: I wonder if you can talk about how enterprise architecture has changed over the last decade even. You know, it used to be that you were trying to harden the two-tier or three-tier architecture: harden it, don't touch it, it works. And then, of course, we all know it created a lot of different stovepipes, and a lot of data was locked into those stovepipes. That's changed, obviously: Cloud, now the Edge. Maybe because you guys were always sort of a distributed data company, you approached it differently. But I wondered if you could give us (mumbles)? >> Sheldon: No, that's an interesting question. Because the evolution is not so much enterprise architect as it is ecosystem architect, right? Now you have these massively distributed systems, so you're really managing an ecosystem of internal and third party, and then all the relevant touch points, right? Like Alex mentioned, the perimeter is constantly shifting now. So, yeah, our focus is always on aligning with the business processes and our internal customers. >> Yeah, I wonder if we could dig into the Cloud a little. Alex, can we start with you? How does Cloud fit into your world of security? >> Alex: So, for me, the Cloud, as far as Splunk goes, allows me to expand and contract as needed, right? Before, we used to have our on-premise hardware, with very finite RAM, memory, disk space, everything. Now, with the Cloud, I'm able to expand my environment as I move across all my North American brands and European brands, to be able to gather all that data, look at it, and take action on it, right? >> Stu: And Sheldon, you're using AWS. We see that every software provider lives in AWS; it's often in the marketplace. We've been seeing a lot this week that there's a deeper partnership, there's actually a lot of integration. Maybe give us your viewpoint on what you've seen on how Splunk and AWS work together to meet your requirements. >> Yeah. So, that's an interesting evolution as well, of that partnership, right? You're starting to see things like the S3 API integration, so that you're removing storage from the critical path, and that opens up a different scale of possibilities, right? And internal opportunities. But yes, as you can see, leveraging the machine learning toolkit; I saw that one coming. It's going to be interesting to see how that keeps evolving, right? And also, like I was speaking to Alex about, the natural language capability. That also brings in the dimension of how our senior leadership will interact with these operational platforms. >> Yeah, you've got to think, your customers' natural language has to get into some of those rooms; it's definitely the future. >> Sheldon: Oh, it's going to be a part of that value chain, yeah, for sure. >> Dave: How does the S3 API integration affect you guys? Obviously, you're going to put the indexes in an object store, which is going to scale. What does that mean for you guys? >> Sheldon: So, using the Splunk developer Cloud, we can develop all sorts of solutions to manage our storage intelligently, right?
In near real time. So we can completely automate that end-to-end integration with Splunk: how it ingests, how long that data stays relevant, and how we offload it into things like Glacier. >> Dave: And the enabler there is the S3 API. So you're taking advantage of all the AWS automation tooling. >> Sheldon: Correct. >> Is that right? >> Sheldon: Correct. >> Alright. >> Sheldon: That's another example of that tight integration. Not only with the S3 API: Lex, for the natural language; obviously TensorFlow and the machine learning toolkit. So I think you're going to see those types of capabilities expanding as Splunk evolves. Next year I'm sure they're going to have a ton more announcements around how this evolution continues, right? >> Dave: So, you know, I was interested in the TensorFlow and Spark integration. Stu and I were talking in an earlier segment: it's great, developers love that. We saw a lot of demos today that looked so simple, anybody could do it; even I might be able to do it. But as practitioners of Splunk, is it really going to be that easy? Are business users actually going to be able to pick this stuff up, and what are they going to have to do in order to take advantage of Splunk? Some training involved? >> Sheldon: Right, right. >> What's the learning curve going to be like? >> Sheldon: That's a great question, because there's a dual focus to this, right? First is offloading from the developer all that heavy lifting of creating the user interface and the dashboards, per se. Now it's all API-driven. So, as you saw in the keynote this morning, within the demo an API-driven dashboard came together in several minutes. So one part is offloading that, and the second part is enabling the business user with other capabilities, like natural language processing. They don't necessarily need to be on that screen; they can get exception reporting through emails and voice commands. So training is also part of it, obviously. It's a multifaceted approach to leveraging these new capabilities. >> Dave: Are you guys responsible for the physical infrastructure of your ships? I mean, is that part of your purview? Okay. So, really, there's an industrial IOT component, big time, for you guys. >> Absolutely. >> Alex: And there's a huge push now for maritime security, right? We saw what happened with Maersk and the NotPetya virus, right, how it took them out of operation for about three weeks. So this IOT piece is, I think, awesome, right? I was speaking to some of the Splunk guys yesterday about how we could leverage that on our ships to gather that data from our SCADA systems, and from our bridge and engine control systems, to be able to view any kind of threat, any kind of vulnerability that we might be seeing in the environment, how we can control that, and how we can predict and prevent anything from happening, right? So that's going to be very key for us. >> Dave: So, Splunk is going to take that data right off the machines, which, as Stu and I were saying, to us is a huge advantage. So many IT companies are coming and saying, "Hey! We're going to put a box at the edge." That's nice, but what about the data? So Splunk's starting with the data, but it's the standards of that data; they're really driven by engineers and operations technology folks. Is Splunk sort of standard-agnostic? Will they be able to ingest that data? What has to be done for you guys to take advantage of that?
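A small aside on the retention flow Sheldon sketches at the top of this exchange: ingest the data, keep it only while it is relevant, and offload it to something like Glacier. One generic way to express that aging policy is an S3 lifecycle rule; the bucket name, prefix, and day counts below are invented for illustration and are not Carnival's or Splunk's actual configuration.

```python
# Hypothetical lifecycle rule: after 90 days, transition archived data to
# Glacier; expire it after roughly seven years.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="example-splunk-archive",            # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "age-out-indexed-data",
                "Filter": {"Prefix": "frozen/"},   # hypothetical archive prefix
                "Status": "Enabled",
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 2555},      # placeholder retention period
            }
        ]
    },
)
```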
>> So, we'll have to ingest that data, and we'll have to, you know, look at it and see what we're seeing, right? This is all brand new to us as well. >> Dave: Right. >> Right. This whole maritime thing has risen up in the past year, year and a half. So we're going to have to look at the data and then kind of figure out what we want to see, normalize it; we'll probably get some PS services or something to assist us, some experts. And then we just go from there, right? We build our dashboards and our reports. >> Dave: And predictive maintenance is a huge use case for you guys. >> Alex: Absolutely. >> I mean, to me, it's as important as it is for the airlines. >> Alex: Absolutely, yes. >> So I would think, anytime you... Well, first of all, in real time during a journey, but anytime a journey is completed, you must bring in the inspectors, and, I'm sure, it's very time consuming and precise. >> So, I know that some of our senior leadership, especially in the maritime space, is now looking towards Splunk to do some of that predictive maintenance, to make sure that we have the right nuts and bolts, per se, on the ship, and can fix any issue that might arise at sea while we're out there. >> Dave: Now, the expectation is that the drive is going to be for human augmentation and driving efficiency. >> Alex: Correct. >> You're not just going to trust the machines right out of the box. No way, right? >> Alex: No. But it's empowering those engineers, right? As we saw with some of the dashboards they were showing at the keynote: empowering some of those engineers that are in the engine room, that are on the bridge, to be able to see those issues come up, right, and be able to track them. >> Dave: Plus, I would imagine this is the kind of thing like an airline pilot: you're double checking, you're triple checking, so you might catch misses earlier in the cycle. >> Alex: Yeah. I could see it having huge impact. >> Stu: Yeah. Sheldon, I was just thinking through the other Splunk Next announcements. I wonder if Splunk Business Flows sounds like something that might fit into your data pipeline: get insights, understand satisfaction. Seems like it might be a fit. Is that of interest to you? >> Sheldon: Yeah, it sure is, because we've evolved with kind of fragmented systems. We still have mainframes, we still have a whole call center environment, and we need to ensure they're part of the end-to-end guest experience. So, for sure, we're getting into the early adopter program on the process flow. >> Yeah. Can you give us a little insight: what kind of back and forth do you have with Splunk? What sort of things are you asking for that would help make your jobs easier going forward?
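An aside on getting that shipboard machine data into Splunk in the first place. A common, generic pattern is to post events to a Splunk HTTP Event Collector (HEC) endpoint; the sketch below does that for a made-up engine reading. The URL, token, index, sourcetype, and field names are all hypothetical, not Carnival's actual setup.

```python
# Toy HEC forwarder: push one shipboard telemetry reading into Splunk.
import json
import requests

HEC_URL = "https://splunk.example.com:8088/services/collector/event"  # hypothetical
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"                     # hypothetical

def send_engine_reading(ship, engine_id, rpm, exhaust_temp_c):
    event = {
        "event": {"ship": ship, "engine": engine_id, "rpm": rpm, "exhaust_temp_c": exhaust_temp_c},
        "sourcetype": "ship:engine:telemetry",   # illustrative sourcetype
        "index": "maritime_iot",                 # illustrative index
    }
    resp = requests.post(
        HEC_URL,
        headers={"Authorization": f"Splunk {HEC_TOKEN}"},
        data=json.dumps(event),
        timeout=5,
    )
    resp.raise_for_status()

send_engine_reading("example-ship", "engine-2", rpm=512, exhaust_temp_c=387.5)
```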
Well, it's the whole global air travel platform that we're currently migrating to a Serverless architecture, integrates with Sabre. So, we're looking at things like open trace for that. And I know that our friends at Splunk are enabling capabilities for that type of management. >> Dave: And what's the business impact of Serverless there? You're just better utilization of resources? Faster time to value? Maybe you could describe. >> Yeah. Near real time processing. Scaling up and scaling down seasonally. Our key aspects of that. Removing the constraints of CPU and storage and-- >> Dave: Alex, has it changed the security paradigm at all? Serverless? How does it change it? >> Alex: So, it does. It let's me not have to worry so much about on premise stuff, right? As I did before. So, that helps a lot, right? And being able to scale up and down quickly as much data as we're ingesting is very key for us. >> Dave: You guys are heavy into Cloud, it's obvious. I wonder if you could share with us how you decide, kind of, what goes? If you're not all in on Cloud, right? It's not 100 percent Cloud? >> Sheldon: No, we could never be all in. >> No. >> Dave: And we've put forth that notion for years. We call it "true private cloud". That what you want to do is bring the Cloud experience to your data, wherever that data lives. There's certain data and workloads that you're not just going to put into the Cloud. >> Sheldon: That's correct. >> So, you would confirm that. That's the case. Like, you just said it. >> Correct. >> Dave: You're never going to put some of these workloads on Cloud. >> Well, we have floating data centers. So, we'll always be in a hybrid model. But there is a decision framework around how we create those application, migration pipelines. And the complexity and interdependencies between these platforms, some are easier to move than others. So, yeah. No, we're quite aware of-- >> Dave: And so, my follow up question is are you trying to bring that Cloud experience to those... to the floating data centers, wherever possible? And how is the industry doing? If you had a grade them in terms of their success. I mean, you certainly hear this from the big tech suppliers. "Oh, yes! We've got private Cloud" and "It's just like the public Cloud". And we know it's not and it doesn't have to be. >> Sheldon: Right. >> But if it can substantially mimic that public Cloud experience, it's a win for you guys. So, how is the industry doing in your view? >> So, I think it's a crawl, walk, run type of thing. Obviously, you have these floating cities and satellite bandwidth is a precious resource that we have to use wisely, right? So, we definitely are Edge computing strategy is evolving rapidly. What do we act upon at the Edge? What do we send to the Cloud? When do we send it? There also some business drivers behind this. For example, one of our early Cloud forays was in replicating a guest activity aboard the ship. So, we know if somebody buys a margarita off the coast of Australia, we know it five seconds later. And then, we could act upon that data. Casino or whatever data it may be in near real time. >> So, a lot of data stays at the floating data center, obviously. >> Correct. >> Much of it comes back to the Cloud. When it comes back to the Cloud is a decision, 'cause of the expense of the bandwidth. What do you do? You part the ship at the data center and put a big fire hose in there? (laughing) >> Alex: I wish it was that easy. >> You got a bunch of disc drives that you just take and load up? 
That's got to be a challenge. >> So, there business requirements, right? So, we have to figure out what application is more important, right? So, usually like our ship property management system, right. Where we have all our guests data, as far as their names, birth dates, all that stuff. That takes priority over a lot of other things, right. So, we have to use, like Sheldon said, that bandwidth wisely. 'Cause we don't really own a lot of the ports that we go into. So, we can't, just like you say, plug in a cable and move on, right? We still rely heavily on our satellites. So, bandwidth is our number on constraint and we have to, you know, we share it with our revenue generating guests as well. So, obviously, they take priority and a lot of factors go into that. >> Dave: And data's not shrinking. So, I'll give you guys the last word, if you could just sort of summarize, in your view, some of the big challenges that you're going to try to apply Splunk towards solving in the next near to mid term. >> Alex: Well, I'm more security focused. So, for me, its just making sure that I can get that data as fast as possible. I know that I saw yesterday at the keynote, the mobile app. That for me is going to be like one of the things I'm going to go like, research right away, right? 'Cause for me, its' getting that alert right away when something's going on, so that I can mitigate quickly, move fast and stop those threats from hitting our environment. >> Dave: Sheldon? >> Yes, I think the challenges are, like you mentioned earlier, about the stove pipes and how organizations evolve. Now, with this massive influx of data, that just making sense of it from a people, technology and processes standpoint. So that we could manage the chaos, so to speak, right? And make sure that we have an orderly end-to-end view of all the activity on the ships. >> Dave: Well, thank you guys. Stu and I are like kids in a candy shop, 'cause we getting to talk to so many customers this week. So, we really appreciate your time and your insights and the inspiration for your peers. So, thank you. >> Oh, thank you very much. >> Alex: Thank you for having us. >> Dave: You're welcome. Alright, keep it right there everybody. Stu and I will be back right after this short break. You're watching theCUBE Live from .conf18. Be right back. (techno music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Sheldon | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Alex Taberras | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Doug Merritt | PERSON | 0.99+ |
Stu | PERSON | 0.99+ |
Sheldon White | PERSON | 0.99+ |
Carnival Cruise Lines | ORGANIZATION | 0.99+ |
Alex | PERSON | 0.99+ |
2008 | DATE | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Arnold Donald | PERSON | 0.99+ |
2007 | DATE | 0.99+ |
Carnival Cruise Line | ORGANIZATION | 0.99+ |
Alex Tabares | PERSON | 0.99+ |
one gig | QUANTITY | 0.99+ |
yesterday | DATE | 0.99+ |
Next year | DATE | 0.99+ |
United States | LOCATION | 0.99+ |
First | QUANTITY | 0.99+ |
Splunk | ORGANIZATION | 0.99+ |
two tier | QUANTITY | 0.99+ |
600 gigs | QUANTITY | 0.99+ |
Orlando, Florida | LOCATION | 0.99+ |
Australia | LOCATION | 0.99+ |
250 thousand people | QUANTITY | 0.99+ |
second part | QUANTITY | 0.99+ |
Orlando | LOCATION | 0.99+ |
today | DATE | 0.99+ |
Carnival Corporation | ORGANIZATION | 0.99+ |
second year | QUANTITY | 0.99+ |
TensorFlow | TITLE | 0.99+ |
both | QUANTITY | 0.99+ |
Susan St. Ledger, Splunk | Splunk .conf18
>> Live from Orlando, Florida, it's theCUBE, covering .conf18. Brought to you by Splunk. >> Welcome back to Orlando, everybody. I'm Dave Vellante with my co-host Stu Miniman, and you're watching theCUBE, the leader in live tech coverage. We're brought here by Splunk, this is Splunk .conf18, hashtag #splunkconf18. Susan St. Ledger is here, she's the president of worldwide field operations at Splunk. Susan, thanks for coming on theCUBE. >> Thanks so much for having me today. >> You're very welcome. So we've been reporting, actually this is our seventh year, we've been watching the evolution of Splunk, going from sort of hardcore IT ops and sec ops, now really evolving and doing some of the things that, when everybody talked about big data back in the day, Splunk really didn't. They talked about doing all these things that actually they're using Splunk for now, so it's really interesting to see that this has been a big tailwind for you guys. But anyway, big week for you guys, how do you feel? >> I feel incredible. We announced more innovations today, just today, than we have probably in the last three years combined, and we have another big set of innovations to announce tomorrow. And just as an indicator of that, I think you heard Tim, our CTO, say on stage today, we to date have 282 patents, and we are one of the world leaders in terms of the number of patents that we have, and we have 500 pending, right? So if you think about 282 since the inception of the company and 500 pending, it's a pretty exciting time for Splunk. >> People talk about that flywheel. Stu and I were talking earlier about some of the financial metrics, and you have a lot of large deals, seven-figure deals, which you guys pointed out on your call. That's the outcome of having happy customers. It's not like you can engineer that, you're just serving customers, and that's what they do. Talk about how Splunk Next is really bringing you into new areas.

>> Yeah, so Splunk Next is so exciting. There are really three major pillars, if you will, design principles to Splunk Next. One is to help our customers access data wherever it lives, another one is to get actionable outcomes from the data, and the third one is to unleash the power of Splunk to more users. So those are really the three pillars. And if you think about maybe how we got there, we have all of these people within IT and security that are the experts on Splunk, the Splunk ninjas, and they see the power of Splunk and how it can help all these other departments, and so they're being pulled in to help those other departments, and they're basically saying, Splunk, help us help our business partners, make it easier to get there, to help them unleash the power of Splunk for them so they don't necessarily need us for all of their needs. And so that's really what Splunk Next is all about. It's about making, again, access to data easier, actionable outcomes, and then more users, and so we're really excited about it. >> So talk about those new users. I mean, obviously the IT and ops folks, they're your peeps, so are they sort of advocating for you into the line of business, or are you being dragged into the line of business? What's that dynamic like? >> Yeah, we're definitely customer success first and we're listening to our customers, and they're asking us to go there with them, right? We're being pulled. What we say, what our deepest customers understand about us, is everybody needs Splunk, it's just not everyone knows it yet. And they're teaching their business why they need it, and so it's really a powerful thing. And so we're partnering with them to say, how do we help them create business applications, which you'll see tomorrow in our announcements, to help their business users.

>> You know, one of the things that strikes us, we were talking with the DevOps gentleman earlier, when you look at the companies that are successful with so-called digital transformation, they have data at the core, and they have, I don't want to say a single data model, but it's not a data model of stovepipes, and that's what he described. And essentially, if I understand the power of Splunk just in talking to some of your customers, it's really that singular data model that everybody can collaborate on and get advice from each other across the organization, so not this sort of stovepipe model. It seems like a fundamental linchpin of digital transformation, even though you guys haven't been overusing that term. That's sort of a sign of Splunk: you didn't use the big data term when big data was all hot, now you use it, same thing with digital transformation. You're fundamental, it would seem to me, to a lot of companies' digital transformation. >> That's exactly right. If you think about it, we started in IT and security, but the reason for that is they were the first ones to truly do digital transformation, right? Those are just the two organizations that started, but exactly the way that they did it, now all the other business units are trying to do it. And that same exact platform that we use, there's no reason we can't use it for those other areas, those other functions, but if we want to go there faster, we have to make it easier to use Splunk, and that's what you're seeing with Splunk Next. >> You know, I look at my career, and the last couple of decades we've been talking about, oh, we're going to leverage data, we want to be predictive with the models, but with the latest wave of AI, ML and deep learning, and what I heard in what you're talking about with Splunk Next, maybe you could talk a little bit about why it's real now, and why we're actually going to be able to do more with our data, to extract the value out of it and really enable businesses. >> Sure. So I think machine learning is at the heart of it, and we actually do two things from a machine learning perspective. Number one is, within each of our market groups, so IT, security, IT operations, we have data scientists that work to build models within our applications. So we build our own models, and then we're hugely transparent with our customers about what those models are, so they can tweak them if they like, but we pre-build those so that they have them in each of those applications. So that's number one, and that's part of the actionable outcomes, right? ML helps drive actionable outcomes so much faster. The second aspect is the ML Toolkit, right, which is, we give our customers an MLTK so they can build their own algorithms and leverage all of the models that are out there as well. So I think that two-fold approach really helps us accelerate the insights that we give to our customers. >> Susan, how are you evolving your go-to-market model as you think about Splunk Next, and just think about more line of business interactions? What are you doing on the go-to-market side?

>> Yeah, so the go-to-market, when you think about reaching all of those other verticals, if you will, right, it's very much going to be about the ecosystem. So it's going to be about the solution provider ecosystem, about the ISV ecosystem, about the SIs, both boutique and the global SIs, to help us really drive Splunk into all the verticals and meet their needs. And so that will be one of the big things that you see. We will obviously still have our horizontal focus across IT and security, but we are really understanding, what are the use cases within financial services, what are the use cases within healthcare that can be repeated thousands of times. And if you saw some of the announcements today, in particular the Data Stream Processor, which allows you to act on data in motion with millisecond response, that now puts you as close to real-time as anything we've ever seen in the data landscape, and that's going to open up a series of use cases that nobody ever thought of using Splunk for. >> I wonder what you're hearing from customers when they talk about how they manage that pace of change out there. I walked around the show floor, and I've been hearing lots of people talking about containers, and we had one of your customers talking about how Kubernetes fits into what they're doing. It seems like it really is a sweet spot for Splunk, that you can deal with all of these different types of information, and it makes it even more important for customers to come to you. >> Yeah, as you heard from Doug, our CEO, today in the keynote, it is a messy world, right? And part of the message is, it's a digital explosion, and it's not going to get any slower, it's just going to continue to get faster. And I know you met with some of our customers earlier today, NIF and Carnival. If you think about the landscape of NIF, right, their mission is to protect the arsenal of nuclear weapons for the country, to make them more efficient, to make them safer. And if you think about all of it, they not only have traditional IT operations and security they have to worry about, but they have this landscape of lasers and all these sensors everywhere, and when you look at that, that's the messy data landscape. And I think that's where Splunk is so uniquely positioned, because with our approach you can operate on data in motion or at rest, and because there is no structuring upfront. >> I want to come back to what you said about real-time, because I've said this now for a couple of years: you never used to use the term when big data was at the peak of, what does Gartner call it, the hype cycle. You guys didn't use that term. And so when you think about the use cases in the big data world, you've been hearing about real time forever, and now you're talking about it. Enterprise data warehouse, you know, cheaper EDW, fraud detection, better analytics for the line of business, obviously security and IT ops, these are some of the use cases that we used to hear about in big data. You're doing all of these now, and your platform can be used in all of these traditional big data use cases. Am I understanding that properly? >> 100%, you're understanding it properly. Splunk has again really evolved, and if you think about some of the announcements today, think about Data Fabric Search, right? Rather than saying you have to put everything into one instance or everything into one place, we're saying we will let you operate across your entire landscape and do your searches at scale. And Splunk was already the fastest at searching across your global enterprise to start with, when we were two to three times faster than anybody who competed with us, and now we improved that today by fourteen hundred percent. I don't even know, you just look at it, and again it ties back to the innovations and what's being done in our developer community and within our engineering team.

>> In those traditional use cases that I talked about in big data, it was kind of an open source mess, really complex. ZooKeeper is the big joke, right, and Hive and Pig and HBase and blah blah blah, and we're practitioners of a lot of that stuff, it's very complex. Essentially you've got a platform that now can be used, the same platform that you're using in your traditional base, that you're bringing to the line of business. >> Correct. >> Okay. >> Right, it's the same exact platform. We are definitely putting the power of Splunk in the users' hands, so by doing things like mobile, use on mobile and AR today, and again, I wish I could talk about what's coming tomorrow, but let's just say our business users are going to be pretty blown away by what they're going to see tomorrow in our announcements. >> Yeah, so I'm presuming these are modern, it's modern software, microservices, API-based, so if I want to bring in those open source tools, I can? >> In fact, what you'll actually see when you understand more about the architecture is that we're actually leveraging a lot of open source in what we do, so capabilities of Spark and Flink, but what we're doing is masking the complexity of those from the user. So instead of you having to do your own Spark environment, your own Flink environment, and having to figure out Kafka on your own and how you subscribe to it, we're giving you all that, we're masking all that for you and giving you the power of leveraging those tools. >> So this becomes increasingly important, in my opinion, especially as you start bringing in things like AI and machine learning and deep learning, because that's going to be adopted both within a platform like yours but outside as well. So you have to be able to bring in innovations from others, but at the same time, to simplify it and reduce that complexity, you've got to infuse AI into your own platform, and that's exactly what you're doing. >> It's exactly what we're doing. It's in our platform, it's in our applications, and then we provide the toolkit, the SDK if you will, so users can take it to another level. >> All right, so you've got 16,000 customers today. If I understand the vision of Splunk Next, you're looking to get an order of magnitude more customers that you view as the addressable market. Talk to us about the changes that need to happen in the field. Is it just that you're hitting an inflection point, you've got those evangelists out there, and I see the capes and the fezzes all over the show, so how does your field get ready to reach that broader audience? >> Yeah, I think that's a great question. Once again, I'll tell you what we're doing internally, but it's also about the ecosystem, right? In order to go broader, it has to be about this Splunk ecosystem, and on the technology side we're opening the aperture, right? It's microservices, it's APIs, it's cloud, there's so much available for that ecosystem. And then from a go-to-market perspective, it's really about understanding where the use cases are that can be repeated thousands of times, the big problems that each of those verticals are trying to solve, as opposed to the one corner use case that you could solve for one customer.

>> And that was actually one of the things we found when we did analysis, we used to do case studies on big data, and the number one use case that always came back was custom, because nothing was repeatable. And that's how we were seeing a little bit more industry-specific issues. I was at Microsoft Ignite last week, and Microsoft is going deep on verticals to get specific as to, for IoT and AI, how they can get specific in those environments. >> Agreed. I think, again, one of the things that's so unique about the Splunk platform is that, because it is the same platform at the underlying layer that serves all of those use cases, we have the ability, in my opinion, to do it in a way that's far less custom than anybody else. >> And so we've seen the ecosystem evolve as well. Six, seven years ago it was kind of a tiny technology ecosystem, and last year in DC we saw it really starting to expand. Now you walk around here, you see some big booths from some of the SI partners. That's critical, because that's global scale, deep industry expertise, but also board-level relationships. >> Absolutely, that's another part of the go-to-market. >> Splunk becomes more strategic. This is a massive TAM expansion that we're potentially witnessing with Splunk. How do you see those conversations changing? Are you personally involved in more of those boardroom discussions? >> Definitely personally involved, and you're spot on to say that that's what's happening. And I think a perfect example is, you talked to Carnival today, right? We didn't typically have a lot of CEOs at the Splunk conference. Now we have CEOs coming to the Splunk conference, because it is at that level of strategic to our customers. And so when you think about Carnival, yes, they're using it for the traditional IT ops and security use cases, but they're also using it for their customer experience. And who would ever think, ten years ago or even five years ago, of Splunk as a customer experience platform? But really, what's at the heart of customer experience? It's data. >> So speaking of the CEO of Carnival, Arnold Donald, it's kind of an interesting name, he stood up on stage today talking about diversity, doubling down on diversity. As an African-American, frankly, in our industry you don't see a lot of African-American CEOs, you don't see a ton of women CEOs, you don't see a ton of women with president in their title. So he made a really interesting statement, where he said something to the effect of, forty years ago when I started in the business, I didn't work with a lot of people like me, and I thought that was a very powerful statement. And he also said, essentially, look, if we're diverse, we're going to beat you every time. Your thoughts as an executive in tech and a woman in tech? >> So first of all, I 100% agree with him, and I can actually go back to my start. I was a computer scientist at NSA, so I didn't see a lot of people who looked like me, and from that perspective I know exactly where he's coming from. And I'll tell you, at Splunk we have a huge investment in diversity, and not because it's a checkbox, but because we believe in exactly what he says. It's a competitive edge when you get people who think differently, because you came from a different background, because you're a different ethnicity, because you were educated differently, whatever it is, whether it's gender, whether it's ethnicity, whether it's just a different approach to thinking, all differentiation puts a different lens on things, and that way you don't have stovepipe thinking.

And what I love about our culture at Splunk is that we call it a high growth mindset, and if you're not intellectually curious and you don't want to think beyond the boundaries, then it's probably not a good fit for you, and a big part of that is having a diverse environment. We do a lot at Splunk to drive that. We actually posted our gender diversity statistics last year, because we believe if you don't measure it, you're never going to improve it, and it was a big step, right, to say we want to publish it, we want to hold ourselves accountable. And we've done a really nice job of moving it a little over 1% in one year, which for our population is pretty big. But we're doing really unique things, like all job descriptions are now analyzed, there's actually a scientific analysis that can be done to make sure that the job description does not bias toward men or women, whether it leans one way or whether it's gender neutral. So that's exciting. Obviously we have a big women in technology program, and we have a high potential focus on our top women as well. >> What's interesting about your story, Susan, and we spend a lot of time on theCUBE talking about diversity generally and women in tech specifically, we support a lot of WIT, and frequently we're talking about women in engineering roles or computer science roles and how, oftentimes even when they graduate with that degree, they don't come into tech. And what strikes me about your path is that you're technical, and yet now you've become this business executive, and I would imagine that having that technical background only helped, especially in this industry. So there are paths beyond just the technical role. >> One hundred percent. First of all, it's a huge advantage. I believe it's the core reason why I am where I am today, because I have the technical aptitude, and while I enjoyed the business side of it as much, and I love the sales side and the marketing side and all of the above, the truth of the matter is, at my core, I think it's that intellectual curiosity that came out of my technical background that kept me going and really made me take risks, right? And if you look at my career, it's much more of a jungle gym than a ladder. I always give advice to young people, generally it's young women who ask, but sometimes it's the young men as well, which is, how did you get to where you are, how do I plan that? And the truth of the matter is, you can't. If you try and plan it, it's probably not going to work out exactly the way you plan. And so my advice is to make sure that every time you're going to make a move, you ask yourself, what am I going to learn, who am I going to learn from, and what is it going to add to my experience that I can materially say is going to help me on a path to where I ultimately want to be? But if you try and figure it out and plan a perfect ladder, I also think that when you try and do a ladder, you don't have what I call pivots, which is looking at things from different lenses, right? So me having been on the engineering side, on the sales side, on the services side of things, it gives me a different lens in understanding the entire experience of our customers as well as the internals of an organization. And I think that people who pivot generally are people who are intellectually curious and have the intellectual capacity to learn new things, and that's what I look for when I hire people. >> I love that you took a nonlinear progression to the path that you're on now. And speaking of the technical side, I think if you're in this business you'd better like tech, or what are you doing in this business? But the more you understand technology, the more you can connect the dots between how technology is impacting business and then how it can be applied in new ways. So, congratulations on your career, you've got a long way to go, and thanks so much for coming on theCUBE. >> Thank you so much, Dave, I really appreciate it. >> Thank you. Okay, keep it right there, everybody. Stu and I will be back with our next guest. We're live from Splunk .conf18, you're watching theCUBE. [Music]
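Susan's point about the ML Toolkit, Splunk shipping pre-built models in its applications while giving customers an MLTK to fit their own, can be made concrete with a small sketch. The example below is illustrative only: the host, credentials, index, field names, and model name are assumptions, it presumes the Machine Learning Toolkit app is installed, and the `fit` search is a generic example rather than anything tied to the announcements discussed here.

```python
import requests

# Assumed host and credentials -- illustrative only.
SPLUNK_HOST = "https://splunk.example.com:8089"
AUTH = ("admin", "changeme")

# An MLTK-style search: train a simple regression on indexed transaction data.
# The index, fields, and model name are hypothetical.
search = (
    "search index=transactions earliest=-30d "
    "| fit LinearRegression spend_next_30d from past_spend visits "
    "into example_spend_model"
)

resp = requests.post(
    f"{SPLUNK_HOST}/services/search/jobs/export",
    auth=AUTH,
    data={"search": search, "output_mode": "json"},
    verify=False,  # a lab sketch; production deployments should verify TLS
    timeout=60,
)
resp.raise_for_status()
print(resp.text[:500])  # streamed JSON results
```

The point of the sketch is the division of labor she describes: the platform handles indexing and search at scale, while the analyst only supplies the feature fields and the algorithm name.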
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Volante | PERSON | 0.99+ |
David | PERSON | 0.99+ |
Susan | PERSON | 0.99+ |
two | QUANTITY | 0.99+ |
Susan St. Ledger | PERSON | 0.99+ |
fourteen hundred percent | QUANTITY | 0.99+ |
282 patents | QUANTITY | 0.99+ |
100% | QUANTITY | 0.99+ |
Splunk | ORGANIZATION | 0.99+ |
last year | DATE | 0.99+ |
second aspect | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
tomorrow | DATE | 0.99+ |
Orlando Florida | LOCATION | 0.99+ |
Doug | PERSON | 0.99+ |
NSA | ORGANIZATION | 0.99+ |
last week | DATE | 0.99+ |
last year | DATE | 0.99+ |
seventh year | QUANTITY | 0.99+ |
Microsoft | ORGANIZATION | 0.98+ |
16,000 customers | QUANTITY | 0.98+ |
Tim | PERSON | 0.98+ |
thousands of times | QUANTITY | 0.98+ |
Carnival | ORGANIZATION | 0.98+ |
two organizations | QUANTITY | 0.98+ |
forty years ago | DATE | 0.98+ |
two-fold | QUANTITY | 0.97+ |
one year | QUANTITY | 0.97+ |
one | QUANTITY | 0.97+ |
DC | LOCATION | 0.97+ |
five years ago | DATE | 0.97+ |
african-americans | OTHER | 0.97+ |
one customer | QUANTITY | 0.97+ |
Susan st. Leger | PERSON | 0.97+ |
each | QUANTITY | 0.96+ |
third one | QUANTITY | 0.96+ |
three pillars | QUANTITY | 0.96+ |
african-american | OTHER | 0.96+ |
both | QUANTITY | 0.96+ |
ten years ago | DATE | 0.95+ |
stew | PERSON | 0.95+ |
two things | QUANTITY | 0.95+ |
six seven years ago | DATE | 0.94+ |
one hundred percent | QUANTITY | 0.94+ |
one corner | QUANTITY | 0.93+ |
first ones | QUANTITY | 0.93+ |
three times | QUANTITY | 0.93+ |
over 1% | QUANTITY | 0.92+ |
single | QUANTITY | 0.91+ |
seven-figure deals | QUANTITY | 0.9+ |
thousands of times | QUANTITY | 0.89+ |
earlier today | DATE | 0.89+ |
500 pending | QUANTITY | 0.88+ |
spunk | PERSON | 0.86+ |
last couple of decades | DATE | 0.84+ |
EDW | ORGANIZATION | 0.82+ |
one place | QUANTITY | 0.81+ |
lot | QUANTITY | 0.8+ |
three | QUANTITY | 0.8+ |
last three years | DATE | 0.79+ |
Kafka | TITLE | 0.79+ |
three major pillars | QUANTITY | 0.78+ |
a ton of women | QUANTITY | 0.77+ |
Curt Persaud, Carnival Cruise Lines & Ariel Molina, Carnival Cruise Lines | Splunk .conf18
>> Live from Orlando, Florida, it's theCUBE, covering .conf18. Brought to you by Splunk. >> Welcome back to Splunk .conf18, #splunkconf18. You're here watching theCUBE, the leader in live-tech coverage. My name is Dave Vellante, and I'm with my cohost, Stu Miniman, and we're going to take a cruise with the data. Curt Persaud is here. He's the director of IT for Guest Technology at Carnival Cruise Lines. So, he's the ship. And Ariel Molina is here. He's the Senior Director of web development and enterprise architecture at Carnival Cruise Line. He's the shore. Gents, welcome to theCUBE. Good to see you. >> Happy to be here. Very, very. >> Thanks for having us guys. >> Dave, I sea what you did there. (laughs) >> Yeah, Stu, it's pretty good, huh. Well, this is kind of, you know, Splunk is known for a little tongue in cheek. >> Alright, let's keep this interview on course. >> (laughs) Alright, you got it. So Arnold Donald, your CEO, was on stage today with Doug Merritt, a very inspirational individual. You guys have an amazing company. You see those ads and just go "wow." Just makes you want to go. But Ariel, let's start with you, your role, what you guys are doing here. Just kick it off for us. >> So, no, it's fantastic, great to be here. Great energy in the conference today. The keynote was fantastic. It was great to see our CEO up there and really represent our company, really talk about, sort of, where we're heading and how Splunk helps us along that journey when it comes to data. Things are changing, they're moving faster every day, right? We're pressured into delivering more value, delivering innovation at a faster pace, and Splunk is a key enabler of that, for us. >> And Curt, at any one point in time, you guys said you have like 250,000 guests on the seas around the world. Wow! And everybody wants to be connected these days. So that's kind of your purview, right? >> Yeah, absolutely. Five, 10 years ago, what sold cruises was the ability to be disconnected. Right now, people want to be connected more than ever. So what we try to do, beyond just the connectivity, and giving them better bandwidth, and stuff like that, was to try to develop products onboard that helps them be connected, be social, but not miss out on the product that we're actually selling, which is the ship, the people, the crew, and the actual entertainment and the staff onboard. So we're trying to make people social, but not anti-social with some of the technologies that we're bringing onboard, as well. >> Doug Merritt said today, "we're all data emitters." And I think the number was you guys will service 13 million guests in any given year? So a huge, huge number of data emitters. And of course, Ariel, you obviously are analyzing a lot of data, as well. So, how has the use of data changed over the years at Carnival? Maybe you could kind of take us through that. >> Well, ultimately I think it's about personalizing the experience. So, how do we use the data to better understand what folks are looking for in that guest journey? We call the guest journey everything from planning a voyage, purchasing a voyage, purchasing all the auxiliary items that are up for sale, and then ultimately making it into the ship. So, what we're doing these days, is looking at mining this data, and looking for opportunities. On the dot-com side of things, obviously it's about resiliency and personalization. How do we deliver innovation through multiple releases, and then do so in a resilient way? 
And a lot of those innovations, typically, are around personalization. And we see that move the needle. We're incentivized to have more folks book online. That's ultimately good for the bottom line. So, data's a big part of that. Personalization, resiliency. >> Yeah, it's one of those interesting things we look at. Most people probably think of cruise ships as you're vacation or transportation, everything like that. You're a technology company now. You're tied in, you've got multiple mobile apps, before and during. Maybe bring us a little bit inside what that's like. >> Over the past three years, we've seen a great transformation in terms of the technologies that we're bringing on board. You name it, whether it's very high end tools, like Splunk and other APM tools that we use, to cutting-edge technology like AI, chatbots, facial recognition. We're using the full breadth of all these innovations, in terms of technology, to try to enhance guest experience. And to Ariel's point, the focus is really on trying to be very personal, trying to personalize this information, trying to personalize the guest experience, and using all those data points that we're capturing to really target what a custom experience looks for you. It's really interesting, because one of the things that we try to do in that personalization is try to manage those micro-moments. We're trying to get you what you want, we're trying to get you the feedback that you need in that micro-moment, so that you can do your transaction and move on to enjoying your cruise. >> There's something that you mentioned. You want a balance. You want people to take advantage of what's there. You used to think of a vacation like this, you'd disconnect yourself. Help understand that balance. >> You'd be surprised. We were just recently on a cruise, my family and I, and we don't cruise as often as you would imagine. >> Because you work for the company. >> Even though, when you do, it feels good to be a customer, right? There's so much activity going on on a ship on a given day. It's very hard to understand where to be at a certain point in time, and some people find that overwhelming. What things like the app does is really allow you to curate your day. To say hey, you like music? Let's focus on events that are music-oriented and that's going to be in Location XYZ on the ship. And they're going to be sequenced. So, that's personalizing the experience. But it's also ensuring that folks are really taking advantage of the full product. >> From our perspective, the technology should be in the background. It's more complementary. The real product is really the ship, the crew members, the activities, the entertainment on board. That's the product we really want people to really connect to. The stuff that we do is auxiliary in terms of, let me help you maximize those experiences on board. And that's what we're really trying to do. If we can get that done and accomplished, than we have done our jobs. >> So the app is the digital conduit to the physical experience >> Exactly. >> If you have a good app, it makes all the difference in the world. If you're at Disney, and you're trying to figure out what's next, what do the lines look like? You get a lot of people on a ship, and you want to prioritize. You all call that curating your experience. It's all about the app, as they say. What's the state of the app? The 1.0 probably needed a little work. Where are you know in the evolution? >> We're in a 2.0 release version of it. 
The original version, we started with what we called the meat and potatoes. The very basic stuff, that hey, where can I get food? What is the entertainment lineup for the day? We started off with some innovation in terms of being able to generate, we did a chat, kind of like, communication, so people could chat with their families onboard without having to purchase a plan or have any bandwidth needs. And then, as we evolved that, then we started to go into things that are more transactional. So, you're able to purchase your photos digitally through the app. We leveraged facial recognition software, so that if a photographer on a ship takes a picture of you, it recognizes that as you and puts your photo in your photo stream and your photo album. So, very, very convenient. We do things like sell shore excursions in terms of transactional stuff. You can sit at the pool and say "oh, tomorrow's a port day, "I'm going to be in the Bahamas. "Let me see what shore excursion I want to do. And you can do it directly from the app without even moving. So now, as we evolve that now, as Ariel said, now we're trying to leverage all that data now, to go beyond the transactions, and make things even more personalized. So, I know that you favor the casino, maybe you're a spa person, you want a facial. We'll target you and say hey, on your previous cruise you did this. Let's target you because we might have something special waiting for you onboard. >> And then carry that across the journey, right. So now they leave our ships. And how do we get them to come back to our ships? How do you create that conversation that's ongoing, notifications about what's going on on our ships. People follow their favorite cruise director. People follow a lot of the unique experiences there. How do you bring that to the online, to the dot-com experience? So that when they're thinking about that next cruise, they can remember what that last cruise was about, and they can know what's happening on each one of our ships in real-time. It's a journey. And technology definitely is a huge enabler for us and the experience. >> So what's the data architecture look like on there? We always talk on theCUBE about the innovation sandwich of the future. It used to be Moore's Law, doubling every two years. Okay, great. Now, it's data, plus machine intelligence, and you scale with the cloud. What's your data architecture look like? >> Well, I think it's early days. I think it's, I mean, they're all over the place, right? I think there's silos within the enterprise that are really maximizing data. I think that that trend continues to happen. But I think there's got to be, and the enterprise architecture world is sort of about wrangling that, and figuring out how data from different dispersed touch points affect that. So, it's early days. I do think that you're starting to see that machine learning algorithms do play a part. I'm seeing it personally, more in the operations side of the world. So all these systems, at the end of the day, they need to be resilient and they need to have high service levels. So, what I'm seeing now is tools, and at Splunk, you saw that today, being able to be really predictive about where the anomalies are. Traditionally, you were having to log errors and then interpret errors, and then that would be the way you action some of these things. The predictive nature of some of these tools are such that you're being proactive. So when you talk about data there's so many different places you can go. 
If you think about our technology stack, and that guest experience point of view, it's all about really maintaining those SLAs, resolving issues as quickly as possible. And there's a ton of data in that space, right? I mean, it's everywhere, there's a ton of signals. >> Well, you guys know, we tend not to throw stuff away in technology. You sort of have to figure out how to integrate. >> A single view of the customer is probably one of those, as well. So at the end of the day, what more information are we collecting about our guest to ultimately personalize that experience? It's centered around that. >> And that's challenging, I mean, look at the airlines. And your app, which, you love the airline apps. I mean, you're not, like, tethered to them. But the phone experience, and even the laptop experience, are a little bit different. Because of the data, it's very, very challenging. Have you figured that out? Or are you sort of figuring that out? >> That's APIs, right? It's that experience API layer. Being able to activate that data which is sitting in distinct silos and then do so across those experience apps, the experience channels, which is dot-com, the app, the chatbot, there's so many interfaces out there. But, yeah, it's a solid, mature API strategy that's going to get us there. >> And I think one of the things that is our challenge, as technology partners, is the ability to build those platforms so that the next wave of convergence, as you mentioned, there's some disjointed experience across the desktop view versus the mobile view, is to try to bring those together. And in order to do that, like Ariel said, maybe making some API abstraction layers, figuring out how to mine the data better, figuring out how to leverage insights from different tools or machines and sensors, we have a ton of sensors on these ships as well. And bringing all those things together to be able to put us in a position that when we do finally get a seamless convergence, we're ready for it from a technology and a platform perspective. >> It's obvious why data is important for your business. You actually did a press release with Splunk. Maybe explain a little about how Splunk Cloud fits into this discussion that we've been having? >> Well, Cloud really removes the barriers of experimentation. How do you right-size a problem you don't understand very well? I think Cloud really helps with that. We're looking forward to being able to be flexible. Flexibility in architecture, flexibility in infrastructure. So that's absolutely the use-case. I think security's got a number of use-cases. You see it every day in the news. So yeah, more opportunities, I would say, and it's that flexibility that's taken us the cloud route. >> When you think Splunk, you think security. You got guys in the NOC. That's not where you guys are. You're kind of closer to the business. And so you're seeing Splunk, as I said before, permeate into other parts of the organization. You kind of expected somebody else to do that. I don't know, the Hadoop guys. And it's interesting, Splunk never used to talk about big data. Now that the big data era is, sort of, behind us, Splunk talks a lot about big data. It's kind of an interesting flip. >> I would say it's democratizing the data. That's the stuff I liked, that I heard today. How do you get these tools away from the IT operators that are writing these complex queries to get insights? And how do you elevate that up to the analysts, and the product managers? 
And how do they get access to those interfaces? You know, drag-and-drop, whatever you want to call it. But I think that where I see this happening more so than, machine learning, that's great and predictive. But just empowering others to really leverage that data. I would say Splunk is leading there and it's good to see some of that stuff today. >> Absolutely. It's putting the power where it really needs to be, where it's the end users, the guys making decisions, it's the product owners, the product managers, that are making those slight tweaks to that interface, or to that design, or to that experience, that makes a difference. And that's what we're trying to do, and leverage with tools like Splunk, as well. >> Even the simple visualization, right, the stuff that's out of the box is really important for the business user, right? >> The out of the box part's another thing that I saw today, which is more, sort of, curating for particular use-case, and saying hey, we're going to build that end-to-end and really turn it on and activate it a little sooner. So that infrastructure product we saw today, I think that's a big step forward. Where you're a platform, but at some point you're going to have to start being a little more vertical in the way that you bring to market, the way that they did with security. >> And Doug talked about, you know, Doug Merritt, that is, talked about data is messy, and the messiest landscape is the data. And then he talked about being able to organize that data in the moment. So, I think about, okay, just put it in the, we like to call data ocean, right, and just capture it. But then having the tools to be able to actually look at it in whatever schema you want, when you want it, is a challenge that people have. My question is, did he describe it accurately? I think yes. But then, can you actually do that with this messy data? >> I think it's a great concept. I'm interested to see how that plays out going forward. But I think in our world, we have several use-cases where that makes sense. We have a very captive audience for seven to 10 days. So we really have a very limited amount of time to make a really good impression. So, it's not only about attracting first-time cruisers; it's trying to get a repeat cruiser. So that limited time frame that we have to leave a really lasting impression is very limited. So things like recovery, in terms of getting metrics or data real-time, and being able to act on it immediately. Say you had a bad experience at the sushi bar. If we're able to grab that information, whatever data points that allow us to understand what happened, and then do a quick recovery, we may have a guest for a repeat cruise. Those are the things that we're trying to do. And, if what Doug is saying is something that they've kind of solved, or are able to try to solve in a good way, that is very powerful for us as well, and we definitely see leverage in that. >> Last question, Ariel, you're saying off-camera it's kind of early days. What's the future hold? I mean, that's going to blow our minds. Blow our minds! >> Oh, it's the predictive thing, right? It's bringing you your favorite drink before you're ready to have it, or something. I don't know. The cruise line business, the travel and hospitality space is a very fun space to work in. We get to really see our guests enjoy the product. And us, as technologists, we get to see how technology moves the needle. Continued innovation, right? 
If you're in the development side of the world, challenging yourself to deploy more often, to deliver more value more often. And if you're on the data side, how to aggregate and compile all this data, for ultimately what we're looking for, which is to enhance the guest experience. >> I mean, that real-time notion that you were talking about, Curt, you can see that coming together and completely transforming the guest experience. So guys, thanks so much for coming on theCUBE. It was great to have you. Congratulations on all your success and good luck. Alright, keep it right there everybody, we'll be back at Splunk .conf18. You're watching theCUBE. Dave Vellante with Stu Miniman. We'll be right back! (upbeat music)
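The service-recovery example above, spotting a bad experience at the sushi bar fast enough to make it right before the cruise ends, reduces to a simple rule over a feedback stream. The sketch below is purely illustrative: the event fields, rating scale, threshold, and follow-up action are assumptions, not Carnival's actual pipeline.

```python
from dataclasses import dataclass

@dataclass
class FeedbackEvent:
    guest_id: str
    venue: str
    rating: int              # assumed 1 (poor) to 5 (excellent)
    days_left_in_cruise: int

def needs_recovery(evt: FeedbackEvent, threshold: int = 2) -> bool:
    """Flag a low rating while the guest is still aboard so the crew can follow up."""
    return evt.rating <= threshold and evt.days_left_in_cruise > 0

def route(evt: FeedbackEvent) -> str:
    if needs_recovery(evt):
        # In a real system this might open a guest-services case
        # or notify the venue manager in near real time.
        return f"recovery: contact guest {evt.guest_id} about {evt.venue} today"
    return "log only"

print(route(FeedbackEvent("G-1234", "sushi bar", rating=1, days_left_in_cruise=4)))
```

The value, as the guests note, is not the rule itself but acting on it inside the narrow seven-to-ten-day window they have with each guest.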
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Vellante | PERSON | 0.99+ |
Doug | PERSON | 0.99+ |
Ariel Molina | PERSON | 0.99+ |
Doug Merritt | PERSON | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Curt Persaud | PERSON | 0.99+ |
Carnival Cruise Lines | ORGANIZATION | 0.99+ |
Arnold Donald | PERSON | 0.99+ |
Bahamas | LOCATION | 0.99+ |
Carnival Cruise Line | ORGANIZATION | 0.99+ |
Ariel | PERSON | 0.99+ |
Splunk | ORGANIZATION | 0.99+ |
seven | QUANTITY | 0.99+ |
250,000 guests | QUANTITY | 0.99+ |
Dave | PERSON | 0.99+ |
Curt | PERSON | 0.99+ |
today | DATE | 0.99+ |
Orlando, Florida | LOCATION | 0.99+ |
Stu | PERSON | 0.99+ |
13 million guests | QUANTITY | 0.99+ |
10 days | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
tomorrow | DATE | 0.98+ |
10 years ago | DATE | 0.97+ |
Splunk | PERSON | 0.97+ |
Five | DATE | 0.96+ |
Disney | ORGANIZATION | 0.95+ |
one point | QUANTITY | 0.93+ |
Splunk | EVENT | 0.92+ |
first-time | QUANTITY | 0.9+ |
Splunk .conf18 | EVENT | 0.9+ |
Cloud | TITLE | 0.84+ |
.conf18 | EVENT | 0.83+ |
every two years | QUANTITY | 0.82+ |
each one | QUANTITY | 0.81+ |
Hadoop | ORGANIZATION | 0.76+ |
a ton of data | QUANTITY | 0.76+ |
doubling | QUANTITY | 0.71+ |
theCUBE | TITLE | 0.7+ |
#splunkconf18 | EVENT | 0.68+ |
past three years | DATE | 0.65+ |
things | QUANTITY | 0.65+ |
cases | QUANTITY | 0.64+ |
dot | ORGANIZATION | 0.64+ |
Splunk | TITLE | 0.63+ |
Ariel | ORGANIZATION | 0.61+ |
ton | QUANTITY | 0.6+ |
signals | QUANTITY | 0.59+ |
theCUBE | ORGANIZATION | 0.59+ |
a ton of sensors | QUANTITY | 0.58+ |
2.0 | DATE | 0.58+ |
Moore | TITLE | 0.57+ |
1.0 | OTHER | 0.41+ |
Wikibon Research Meeting
>> Dave: The cloud. There you go. I presume that worked. >> David: Hi there. >> Dave: Hi David. We had agreed, Peter and I had talked and we said let's just pick three topics, allocate enough time. Maybe a half hour each, and then maybe a little bit longer if we have the time. Then try and structure it so we can gather some opinions on what it all means. Ultimately the goal is to have an outcome with some research that hits the network. The three topics today: Jim Kobielus is going to present on agile and data science, David Floyer on NVMe over fabric, of course keying off of the Micron news announcement. I think Nick is, is that Nick who just joined? He can contribute to that as well. Then George Gilbert has this concept of digital twin. We'll start with Jim. I guess what I'd suggest is maybe present this in the context of, present a premise or some kind of thesis that you have and maybe the key issues that you see, and then kind of guide the conversation and we'll all chime in. >> Jim: Sure, sure. >> Dave: Take it away, Jim. >> Agile development and team data science. Agile methodology obviously is well-established as a paradigm and as a set of practices in various schools of software development in general, and agile is practiced in data science in terms of development of the pipelines. The overall premise for my piece starts off with a core definition of what agile is as a methodology: self-organizing, cross-functional teams that sprint toward results in steps that are fast, iterative, incremental, adaptive and so forth. Specifically, the premise here is that agile has already come to data science and is coming even more deeply into the core practice of data science, where data science is done in a team environment. It's not just unicorns producing work on their own; more to the point, it's teams of specialists that come together, increasingly in co-located environments or co-located settings, to produce (banging) weekly check points and so forth. That's the basic premise that I've laid out for the piece. The themes. First of all, the themes, let me break it out. In terms of how I'm approaching agile in this context, I'm looking at the basic principles of agile. It's really practices that are minimal, modular, incremental, iterative, adaptive, and co-locational. I've laid out how all that maps into how data science is done in the real world right now, in terms of tight teams working in an iterative fashion. A couple of issues that I see as regards the adoption and sort of the ramifications of agile in a data science context. One of which is co-location. What we have increasingly are data science teams that are virtual and distributed, where a lot of the functions are handled by statistical modelers and data engineers and subject matter experts and visualization specialists that are working remotely from each other and are using collaborative tools like the tools from the company that I just left. How can co-location, the working premise of agile, stand up in a world where more of the development, deep learning and so forth, is being done on a distributed basis and needs to be done by teams of specialists that may be in different cities or different time zones, operating around the clock, to produce brilliant results?
Another one of which is that agile seems to be predicated on the notion that you improvise the process as you go, trial and error, which seems to fly in the face of documentation, or tidy documentation. Without tidy documentation about how you actually arrived at your results, those results cannot be easily reproduced by independent researchers, independent data scientists. If you don't have well defined processes for achieving results in a given data science initiative, it can't be reproduced, which means it's not terribly scientific. By definition it's not science if you can't reproduce it by independent teams. To the extent that it's all loosey-goosey and improvised and undocumented, it's not reproducible. If it's not reproducible, to what extent should you put credence in the results of a given data science initiative if it hasn't been documented? Agile seems to fly in the face of reproducibility of data science results. Those are sort of my core themes or core issues that I'm pondering, or will be. >> Dave: Jim, just a couple questions. You had mentioned, you rattled off a bunch of parameters. You went really fast. One of them was co-location. Can you just review those again? What were they? >> Sure. They are: minimal. The minimum viable product is the basis for agile, meaning a team puts together not a complete monolithic stack, but an initial deliverable that can stand alone, provide some value to your stakeholders or users, and then you iteratively build upon that minimum viable product going forward to roll out more complex applications as needed. So a minimum viable product is at the heart of agile the way it's often looked at. The big question is, what is the minimum viable product in a data science initiative? One way you might approach that is saying that what you're doing, say you're building a predictive model, is predicting a single scenario, for example whether one specific class of customers might accept one specific class of offers under certain constraining circumstances. That's an example of a minimum outcome to be achieved from a data science deliverable. A minimum product that addresses that requirement might be pulling the data from a single source, with a very simplified feature set of predictive variables, maybe two or three at the most, to predict customer behavior, using one very well understood algorithm like linear regression, and doing it with just a few lines of programming code in Python or R or whatever, and building some very crisp, simple rules. That's the notion, in a data science context, of a minimum viable product. That's the foundation of agile. Then there's the notion of modular, which I've implied with minimum viable product. The initial product is the foundation upon which you build modular add-ons. The add-ons might be building out more complex algorithms based on more data sets, using more predictive variables, throwing other algorithms into the initiative like logistic regression or decision trees to do more fine-grained customer segmentation. What I'm giving you is a sense for the modular add-ons that build on to the initial product and that you can generally add incrementally in the course of a data science initiative. Then there's this, and I've already used the word incremental, where each new module that gets built up, or each new feature or tweak on the core model, gets added on to the initial deliverable in a way that's incremental.
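As a concrete reference point for the "few lines of Python or R" minimum viable product Jim describes, the sketch below trains one well-understood algorithm, a logistic regression, on two predictive variables to score whether a customer class accepts an offer. The data is synthetic and the feature names are assumptions; it illustrates the MVP idea rather than anyone's production model.

```python
# A minimum-viable-product sketch: one data source, two features, one simple algorithm.
# All data below is synthetic and the feature names are assumptions for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features: [prior_purchases, days_since_last_visit]; label: accepted the offer (1) or not (0)
X = np.array([[5, 3], [0, 40], [2, 10], [7, 1], [1, 30], [4, 5], [0, 60], [6, 2]])
y = np.array([1, 0, 1, 1, 0, 1, 0, 1])

model = LogisticRegression().fit(X, y)

# Score a new customer and turn the probability into a crisp, simple rule.
new_customer = np.array([[3, 7]])
prob = model.predict_proba(new_customer)[0, 1]
print(f"P(accept offer) = {prob:.2f} -> {'target' if prob > 0.5 else 'skip'}")
```

Each later sprint can then add modules incrementally, more data sources, more features, a decision tree for segmentation, without discarding this baseline.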
Ideally it should all compose, ultimately, into a useful set of capabilities that deliver a wider range of value. For example, in a data science initiative where it's customer data, you're doing predictive analysis to identify whether customers are likely to accept a given offer. One way to add on incrementally to that core functionality is to embed that capability, for example, in a target marketing application, like an outbound marketing application that uses those predictive variables to drive responses in line with, say, an e-commerce front end. Then there's the notion of iterative, and iterative really comes down to check points: regular reviews at the stand-ups and check points where the team comes together to review the work, in the context of data science. Data science by its very nature is exploratory. It's visualization, it's model building and testing and training. It's iterative scoring and testing and refinement of the underlying model. Maybe on a daily basis, maybe on a weekly basis, maybe ad hoc, but iteration goes on all the time in data science initiatives. Adaptive. Adaptive is all about responding to circumstances. Trial and error; what works, what doesn't work, at the level of the analytical approach. It's also in terms of, do we have the right people on this team to deliver on the end results? A data science team might determine midway through that, well, we're trying to build a marketing application, but we don't have the right marketing expertise on our team; maybe we need to tap Joe over there, who seems to know a little bit about this particular application we're trying to build and this particular scenario, these particular customers we're trying to profile and reach. You might adapt by adding, like I said, new data sources, adding on new algorithms, totally changing your approach for feature engineering as you go along. In addition to supervised learning from ground truth, you might add some unsupervised learning algorithms to be able to find patterns in, say, unstructured data sets as you bring those into the picture. What I'm getting at is that there are 10 zillion variables that a data science team has to factor into its overall research plan going forward, based on what you're trying to derive from data science, which is insights that are actionable and ideally repeatable, that you can embed in applications. It's just a matter of figuring out what actually helps you, what set of variables and team members and data helps you to achieve the goals of your project. Finally, co-locational. It's all about the core team needing to be, usually, in the same physical location, according to the book, the way people normally think of agile. The company that I just left is basically doing a massive, ongoing social engineering exercise about making their marketing and R&D teams a little more agile by co-locating them in different cities like San Francisco and Austin and so forth. The whole notion is that people will collaborate far better if they're not virtual. That's highly controversial, but nonetheless, that's the foundation of agile as it's normally considered. One of my questions, really an open question, is: you might have a sprawling team that's doing data science, doing various aspects, but what solid core of that team needs to be physically co-located all or most of the time? Is it the statistical modeler and a data engineer alone? 
The one who stands up how to do cluster and the person who actually does the building and testing of the model? Do the visualization specialists need to be co-located as well? Are other specialties like subject matter experts who have the knowledge in marketing, whatever it is, do they also need to be in the physical location day in, day out, week in and week out to achieve results on these projects? Anyway, so there you go. That's how I sort of appealed the argument of (mumbling). >> Dave: Okay. I got a minimal modular, incremental, iterative, adaptive, co-locational. What was six again? I'm sorry. >> Jim: Co-locational. >> Dave: What was the one before that? >> Jim: I'm sorry. >> Dave: Adaptive. >> Minimal, modular, incremental, iterative, adaptive, and co-locational. >> Dave: Okay, there were only six. Sorry, I thought it was seven. Good. A couple of questions then we can get the discussion going here. Of course, you're talking specifically in the context of data science, but some of the questions that I've seen around agile generally are, it's not for everybody, when and where should it be used? Waterfalls still make sense sometimes. Some of the criticisms I've read, heard, seen, and sometimes experienced with agile are sort of quality issues, I'll call it lack of accountability. I don't know if that's the right terminology. We're going for speed so as long as we're fast, we checked that box, quality can sacrifice. Thoughts on that. Where does it fit and again understanding specifically you're talking about data science. Does it always fit in data science or because it's so new and hip and cool or like traditional programming environments, is it horses for courses? >> David: Can I add to that, Dave? It's a great, fundamental question. It seems to me there's two really important aspects of artificial intelligence. The first is the research part of it which is developing the algorithms, developing the potential data sources that might or might not matter. Then the second is taking that and putting it into production. That is that somewhere along the line, it's saving money, time, etc., and it's integrated with the rest of the organization. That second piece is, the first piece it seems to be like most research projects, the ROI is difficult to predict in a new sort of way. The second piece of actually implementing it is where you're going to make money. Is agile, if you can integrate that with your systems of record, for example and get automation of many of the aspects that you've researched, is agile the right way of doing it at that stage? How would you bridge the gap between the initial development and then the final instantiation? >> That's an important concern, David. Dev Ops, that's a closely related issue but it's not exactly the same scope. As data science and machine learning, let's just net it out. As machine learning and deep learning get embedded in applications, in operations I should say, like in your e-commerce site or whatever it might be, then data science itself becomes an operational function. The people who continue to iterate those models in line the operational applications. Really, where it comes down to an operational function, everything that these people do needs to be documented and version controlled and so forth. These people meaning data science professionals. You need documentation. You need accountability. The development of these assets, machine learning and so forth, needs to be, is compliance. 
When you look at compliance, algorithmic accountability comes into it where lawyers will, like e-discovery. They'll subpoena, theoretically all your algorithms and data and say explain how you arrived at this particular recommendation that you made to grant somebody or not grant somebody a loan or whatever it might be. The transparency of the entire development process is absolutely essential to the data science process downstream and when it's a production application. In many ways, agile by saying, speed's the most important thing. Screw documentation, you can sort of figure that out and that's not as important, that whole pathos, it goes by the wayside. Agile can not, should not skip on documentation. Documentation is even more important as data science becomes an operational function. That's one of my concerns. >> David: I think it seems to me that the whole rapid idea development is difficult to get a combination of that and operational, boring testing, regression testing, etc. The two worlds are very different. The interface between the two is difficult. >> Everybody does their e-commerce tweaks through AB testing of different layouts and so forth. AB testing is fundamentally data science and so it's an ongoing thing. (static) ... On AB testing in terms of tweaking. All these channels and all the service flow, systems of engagement and so forth. All this stuff has to be documented so agile sort of, in many ways flies in the face of that or potentially compromises the visibility of (garbled) access. >> David: Right. If you're thinking about IOT for example, you've got very expensive machines out there in the field which you're trying to optimize true put through and trying to minimize machine's breaking, etc. At the Micron event, it was interesting that Micron's use of different methodologies of putting systems together, they were focusing on the data analysis, etc., to drive greater efficiency through their manufacturing process. Having said that, they need really, really tested algorithms, etc. to make sure there isn't a major (mumbling) or loss of huge amounts of potential revenue if something goes wrong. I'm just interested in how you would create the final product that has to go into production in a very high value chain like an IOT. >> When you're running, say AI from learning algorithms all the way down to the end points, it gets even trickier than simply documenting the data and feature sets and the algorithms and so forth that were used to build up these models. It also comes down to having to document the entire life cycle in terms of how these algorithms were trained to make the predictors of whatever it is you're trying to do at the edge with a particular algorithm. The whole notion of how are all of these edge points applications being trained, with what data, at what interval? Are they being retrained on a daily basis, hourly basis, moment by moment basis? All of those are critical concerns to know whether they're making the best automated decisions or actions possible in all scenarios. That's like a black box in terms of the sheer complexity of what needs to be logged to figure out whether the application is doing its job as best a possible. You need a massive log, you need a massive event log from end to end of the IOT to do that right and to provide that visibility ongoing into the performance of these AI driven edge devices. I don't know anybody who's providing the tool to do it. 
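One way to picture the end-to-end event log Jim is calling for is sketched below, under the assumption of a simple append-only JSON-lines file; the field names and the example device are hypothetical, not part of any particular product.

```python
# Sketch of the end-to-end event log: every training or retraining of an edge
# model is recorded so its behavior can be audited and explained later.
import json
from datetime import datetime, timezone

def log_retraining_event(log_path, device_id, model_version, data_window,
                         interval, holdout_score):
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "device_id": device_id,              # which edge endpoint was retrained
        "model_version": model_version,      # which version of the model was produced
        "training_data_window": data_window, # with what data (time range)
        "retraining_interval": interval,     # at what interval (hourly, daily, ...)
        "holdout_score": holdout_score,      # evidence the model is still fit for purpose
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(event) + "\n")    # append-only, end-to-end audit trail

# Illustrative values only.
log_retraining_event("edge_model_events.jsonl", device_id="turbine-0042",
                     model_version="1.7.3", data_window="2017-06-01/2017-06-07",
                     interval="daily", holdout_score=0.88)
```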
>> David: If I think about how it's done at the moment, it's obviously far too slow at the moment. At the same time, you've got to have some testing and things like that. It seems to me that you've got a research model on one side and then you need to create a working model from that which is your production model. That's the one that goes through the testing and everything of that sort. It seems to me that the interface would be that transition from the research model to the working model that would be critical here and the working model is obviously a subset and it's going to be optimized for performance, etc. in real time, as opposed to the development model which can be a lot to do and take half a week to manage it necessary. It seems to me that you've got a different set of business pressures on the working model and a different set of skills as well. I think having one team here doesn't sound right to me. You've got to have a Dev Ops team who are going to take the working model from the developers and then make sure that it's sound and save. Especially in a high value IOT area that the level of iteration is not going to be nearly as high as in a lower cost marketing type application. Does that sound sensible? >> That sounds sensible. In fact in Dev Ops, the Dev Ops team would definitely be the ones that handle the continuous training and retraining of the working models on an ongoing basis. That's a core observation. >> David: Is that the right way of doing it, Jim? It seems to me that the research people would be continuing to adapt from data from a lot of different places whereas the operational model would be at a specific location with a specific IOT and they wouldn't have necessarily all the data there to do that. I'm not quite sure whether - >> Dave: Hey guys? Hey guys, hey guys? Can I jump in here? Interesting discussion, but highly nuanced and I'm struggling to figure out how this turns into a piece or sort of debating some certain specifics that are very kind of weedy. I wonder if we could just reset for a second and come back to sort of what I was trying to get to before which is really the business impact. Should this be applied broadly? Should this be applied specifically? What does it mean if I'm a practitioner? What should I take away from, Jim your premise and your sort of fixed parameters? Should I be implementing this? Why? Where? What's the value to my organization - the value I guess is obvious, but does it fit everywhere? Should it be across the board? Can you address that? >> Neil: Can I jump in here for a second? >> Dave: Please, that would be great. Is that Neil? >> Neil: Neil. I've never been a data scientist, but I was an actuary a long time ago. When the truth actuary came to me and said we need to develop a liability insurance coverage for floating oil rigs in the North Sea, I'm serious, it took a couple of months of research and modeling and so forth. If I had to go to all of those meetings and stand ups in an agile development environment, I probably would have gone postal on the place. I think that there's some confusion about what data science is. It's not a vector. It's not like a Dev Op situation where you start with something and you go (mumbling). When a data scientist or whatever you want to call them comes up with a model, that model has to be constantly revisited until it's put out of business. It's refined, it's evaluated. It doesn't have an end point like that. 
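As a rough sketch of the research-to-working-model hand-off David describes, the snippet below promotes a model to a deployable artifact only if it clears a quality gate; the AUC threshold, the synthetic data, and the use of scikit-learn and joblib are assumptions, not a prescribed pipeline.

```python
# Sketch: the research model becomes a production ("working") model only
# if it clears a regression-style quality gate agreed with the Dev Ops team.
import joblib
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 3))                        # stand-in research data set
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=400) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

research_model = LogisticRegression().fit(X_tr, y_tr)
auc = roc_auc_score(y_te, research_model.predict_proba(X_te)[:, 1])

MIN_AUC = 0.80                                       # illustrative quality gate
if auc >= MIN_AUC:
    joblib.dump(research_model, "working_model.joblib")   # the deployable asset
    print(f"promoted to working model, AUC={auc:.2f}")
else:
    print(f"rejected: AUC={auc:.2f} below {MIN_AUC}, stays in research")
```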
The other thing is that data scientist is typically going to be running multiple projects simultaneously so how in the world are you going to agilize that? I think if you look at the data science group, they're probably, I think Nick said this, there are probably groups in there that are doing fewer Dev Ops, software engineering and so forth and you can apply agile techniques to them. The whole data science thing is too squishy for that, in my opinion. >> Jim: Squishy? What do you mean by squishy, Neil? >> Neil: It's not one thing. I think if you try to represent data science as here's a project, we gather data, we work on a model, we test it, and then we put it into production, it doesn't end there. It never ends. It's constantly being revised. >> Yeah, of course. It's akin to application maintenance. The application meaning the model, the algorithm to be fit for purpose has to continually be evaluated, possibly tweaked, always retrained to determine its predictive fit for whatever task it's been assigned. You don't build it once and assume its strong predictive fit forever and ever. You can never assume that. >> Neil: James and I called that adaptive control mechanisms. You put a model out there and you monitor the return you're getting. You talk about AB testing, that's one method of doing it. I think that a data scientist, somebody who really is keyed into the machine learning and all that jazz. I just don't see them as being project oriented. I'll tell you one other thing, I have a son who's a software engineer and he said something to me the other day. He said, "Agile? Agile's dead." I haven't had a chance to find out what he meant by that. I'll get back to you. >> Oh, okay. If you look at - Go ahead. >> Dave: I'm sorry, Neil. Just to clarify, he said agile's dead? Was that what he said? >> Neil: I didn't say it, my son said it. >> Dave: Yeah, yeah, yeah right. >> Neil: No idea what he was talking about. >> Dave: Go ahead, Jim. Sorry. >> If you look at waterfall development in general, for larger projects it's absolutely essential to get requirements nailed down and the functional specifications and all that. Where you have some very extensive projects and many moving parts, obviously you need a master plan that it all fits into and waterfall, those checkpoints and so forth, those controls that are built into that methodology are critically important. Within the context of a broad project, some of the assets being build up might be machine loading models and analytics models and so forth so in the context of our broader waterfall oriented software development initiative, you might need to have multiple data science projects spun off within the sub-projects. Each of those would fit into, by itself might be indicated sort of like an exploration task where you have a team doing data visualization, exploration in more of an open-ended fashion because while they're trying to figure out the right set of predictors and the right set of data to be able to build out the right model to deliver the right result. What I'm getting at is that agile approaches might be embedded into broader waterfall oriented development initiatives, agile data science approaches. Fundamentally, data science began and still is predominantly very smart people, PhDs in statistics and math, doing open-ended exploration of complex data looking for non-obvious patterns that you wouldn't be able to find otherwise. Sort of a fishing expedition, a high priced fishing expedition. 
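Neil's adaptive-control idea of putting a model out and monitoring the return it generates can be sketched as a simple A/B comparison; the counts below are made up, and the two-proportion z-test is just one of many ways to run this check.

```python
# Sketch of the adaptive-control / AB-testing loop: the deployed model (variant B)
# is monitored against a control (A), and a two-proportion z-test decides
# whether the observed lift is real. Counts are illustrative.
from math import erf, sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal tail
    return p_b - p_a, z, p_value

lift, z, p = two_proportion_z(conv_a=120, n_a=5000,   # control offer
                              conv_b=165, n_b=5000)   # model-driven offer
print(f"lift={lift:.4f}, z={z:.2f}, p={p:.4f}")
if p < 0.05 and lift > 0:
    print("keep the model in production")
else:
    print("revisit / retrain the model")
```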
Kind of a mode of operation as how data science often is conducted in the real world. Looking for that eureka moment when the correlations just jump out at you. There's a lot of that that goes on. A lot of that is very important data science, it's more akin to pure science. What I'm getting at is there might be some role for more structure in waterfall development approaches in projects that have a data science, core data science capability to them. Those are my thoughts. >> Dave: Okay, we probably should move on to the next topic here, but just in closing can we get people to chime in on sort of the bottom line here? If you're writing to an audience of data scientists or data scientist want to be's, what's the one piece of advice or a couple of pieces of advice that you would give them? >> First of all, data science is a developer competency. The modern developers are, many of them need to be data scientists or have a strong grounding and understanding of data science, because much of that machine learning and all that is increasingly the core of what software developers are building so you can't not understand data science if you're a modern software developer. You can't understand data science as it (garbled) if you don't understand the need for agile iterative steps within the, because they're looking for the needle in the haystack quite often. The right combination of predictive variables and the right combination of algorithms and the right training regimen in order to get it all fit. It's a new world competency that need be mastered if you're a software development professional. >> Dave: Okay, anybody else want to chime in on the bottom line there? >> David: Just my two penny worth is that the key aspect of all the data scientists is to come up with the algorithm and then implement them in a way that is robust and it part of the system as a whole. The return on investment on the data science piece as an insight isn't worth anything until it's actually implemented and put into production of some sort. It seems that second stage of creating the working model is what is the output of your data scientists. >> Yeah, it's the repeatable deployable asset that incorporates the crux of data science which is algorithms that are data driven, statistical algorithms that are data driven. >> Dave: Okay. If there's nothing else, let's close this agenda item out. Is Nick on? Did Nick join us today? Nick, you there? >> Nick: Yeah. >> Dave: Sounds like you're on. Tough to hear you. >> Nick: How's that? >> Dave: Better, but still not great. Okay, we can at least hear you now. David, you wanted to present on NVMe over fabric pivoting off the Micron news. What is NVMe over fabric and who gives a fuck? (laughing) >> David: This is Micron, we talked about it last week. This is Micron announcement. What they announced is NVMe over fabric which, last time we talked about is the ability to create a whole number of nodes. They've tested 250, the architecture will take them to 1,000. 1,000 processor or 1,000 nodes, and be able to access the data on any single node at roughly the same speed. They are quoting 200 microseconds. It's 195 if it's local and it's 200 if it's remote. That is a very, very interesting architecture which is like nothing else that's been announced. >> Participant: David, can I ask a quick question? >> David: Sure. >> Participant: This latency and the node count sounds astonishing. Is Intel not replicating this or challenging in scope with their 3D Crosspoint? 
>> David: 3D Crosspoint, Intel would love to sell that as a key component of this. The 3D Crosspoint as a storage device is very, very, very expensive. You can replicate most of the function of 3D Crosspoint at a much lower price point by using a combination of DRAM and protected DRAM and flash. At the moment, 3D Crosspoint is a nice to have and there'll be circumstances where they will use it, but at the meeting yesterday, I don't think they, they might have brought it up once. They didn't emphasize it (mumbles) at all as being part of it. >> Participant: To be clear, this means rather than buying Intel servers rounded out with lots of 3D Crosspoint, you buy Intel servers just with the CPU and then all the Micron niceness for their NVMe and their interconnect? >> David: Correct. They are still Intel servers. The ones they were displaying yesterday were HPE ones, they also used SuperMicro. They want certain characteristics of the chip set that are used, but those are just standard pieces. The other parts of the architecture are the Mellanox, the 100 gigabit converged Ethernet, and using RoCE, which is RDMA over converged Ethernet. That is the secret sauce, and Mellanox themselves, their cards offload a lot of functionality. That's the secret sauce which allows you to go from any point to any point in 5 microseconds. Then the transport and other things, files, are on top of that. >> Participant: David, another quick question. The latency is incredibly short. >> David: Yep. >> Participant: What happens if, say, an MPP SQL database with 1,000 nodes has to shuffle a lot of data? What's the throughput? Is it limited by that 100 gig or is that so insanely large that it doesn't matter? >> David: The key is this, that it allows you to move the processing to wherever the data is very, very easily. The principle that will evolve from this architecture is that you know where the data is, so don't move the data around, that'll block things up. Move the processing to that particular node or some adjacent node and do the processing as close as possible. That, as an architecture, is a long term goal. Obviously in the short term, you've got to take things as they are. Clearly, a different type of architecture for databases will need to eventually evolve out of this. At the moment, what they're focusing on is big problems which need low latency solutions, using databases as they are and the whole end-to-end stack, which is a much faster way of doing it. Then over time, they'll adapt new databases, new architectures to really take advantage of it. What they're offering is a POC at the moment. It's in beta. They had their customers talking about it and they were very complimentary in general about it. They hope to get it into full production this year. There's going to be a host of other people that are doing this. I was trying to bottom line this in terms of really what the link is with digital enablement. For me, true digital enablement is enabling any relevant data to be available for processing at the point of business engagement in real time or near real time. That's the definition that this architecture enables. It's, in my view, a potential game changer in that this is an architecture which will allow any data to be available for processing. You don't have to move the data around, you move the processing to that data. >> Is Micron first to market with this capability, David? NV over Me? NVMe. >> David: Over fabric? Yes. >> Jim: Okay.
>> David: Having said that, there are a lot of start ups which have got a significant amount of money and who are coming to market with their own versions. You would expect Dell, HP to be following suit. >> Dave: David? Sorry. Finish your thought and then I have another quick question. >> David: No, no. >> Dave: The principle, and you've helped me understand this many times, going all the way back to Hadoop, bring the application to the data, but when you're using conventional relational databases and you've had it all normalized, you've got to join stuff that might not be co-located. >> David: Yep. That's the whole point about the five microseconds. Now that the impact of non co-location if you have to join stuff or whatever it is, is much, much lower. It's so you can do the logical draw in, whatever it is, very quickly and very easily across that whole fabric. In terms of processing against that data, then you would choose to move the application to that node because it's much less data to move, that's an optimization of the architecture as opposed to a fundamental design point. You can then optimize about where you run the thing. This is ideal architecture for where I personally see things going which is traditional systems of record which need to be exactly as they've ever been and then alongside it, the artificial intelligence, the systems of understanding, data warehouses, etc. Having that data available in the same space so that you can combine those two elements in real time or in near real time. The advantage of that in terms of business value, digital enablement, and business value is the biggest thing of all. That's a 50% improvement in overall productivity of a company, that's the thing that will drive, in my view, 99% of the business value. >> Dave: Going back just to the joint thing, 100 gigs with five microseconds, that's really, really fast, but if you've got petabytes of data on these thousand nodes and you have to do a join, you still got to go through that 100 gig pipe of stuff that's not co-located. >> David: Absolutely. The way you would design that is as you would design any query. You've got a process you would need, a process in front of that which is query optimization to be able to farm all of the independent jobs needed to do in each of the nodes and take the output of that and bring that together. Both the concepts are already there. >> Dave: Like a map. >> David: Yes. That's right. All of the data science is there. You're starting from an architecture which is fundamentally different from the traditional let's get it out architectures that have existed, by removing that huge overhead of going from one to another. >> Dave: Oh, because this goes, it's like a mesh not a ring? >> David: Yes, yes. >> Dave: It's like the high performance compute of this MPI type architecture? >> David: Absolutely. NVMe, by definition is a point to point architecture. Rocky, underneath it is a point to point architecture. Everything is point to point. Yes. >> Dave: Oh, got it. That really does call for a redesign. >> David: Yes, you can take it in steps. It'll work as it is and then over time you'll optimize it to take advantage of it more. Does that definition of (mumbling) make sense to you guys? The one I quoted to you? Enabling any relevant data to be available for processing at the point of business engagement, in real time or near real time? That's where you're trying to get to and this is a very powerful enabler of that design. 
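Here is a back-of-envelope sketch of the move-the-processing-to-the-data point, using the 5 microsecond hop and 100 gigabit link figures quoted above; the payload sizes are illustrative assumptions.

```python
# Back-of-envelope comparison: ship the data to the compute, or ship the
# processing to the data, using the figures quoted in the discussion.
HOP_US = 5.0                       # extra latency to reach a remote node, microseconds
LINK_GBPS = 100.0                  # converged Ethernet link speed

def transfer_us(num_bytes):
    """Wire time in microseconds to move num_bytes over the 100 Gb/s link."""
    return (num_bytes * 8) / (LINK_GBPS * 1e3)

scan_bytes   = 10 * 1024**3        # move 10 GB of table data to the compute node, or ...
result_bytes = 64 * 1024           # ... run the query remotely and bring back 64 KB

print(f"move the data:       {HOP_US + transfer_us(scan_bytes):,.0f} us")
print(f"move the processing: {HOP_US + transfer_us(result_bytes):,.1f} us")
```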
>> Nick: You're emphasizing the network topology, while I kind of thought the heart of the argument was performance. >> David: Could you repeat that? It's very - >> Dave: Let me repeat. Nick's a little light, but I could hear him fine. You're emphasizing the network topology, but Nick's saying his takeaway was the whole idea was the thrust was performance. >> Nick: Correct. >> David: Absolutely. Absolutely. The result of that network topology is a many times improvement in performance of the systems as a whole that you couldn't achieve in any previous architecture. I totally agree. That's what it's about is enabling low latency applications with much, much more data available by being able to break things up in parallel and delivering multiple streams to an end result. Yes. >> Participant: David, let me just ask, if I can play out how databases are designed now, how they can take advantage of it unmodified, but how things could be very, very different once they do take advantage of it which is that today, if you're doing transaction processing, you're pretty much bottle necked on a single node that sort of maintains the fresh cache of shared data and that cache, even if it's in memory, it's associated with shared storage. What you're talking about means because you've got memory speed access to that cache from anywhere, it no longer is tied to a node. That's what allows you to scale out to 1,000 nodes even for transaction processing. That's something we've never really been able to do. Then the fact that you have a large memory space means that you no longer optimize for mapping back and forth from disk and disk structures, but you have everything in a memory native structure and you don't go through this thing straw for IO to storage, you go through memory speed IO. That's a big, big - >> David: That's the end point. I agree. That's not here quite yet. It's still IO, so the IO has been improved dramatically, the protocol within the Me and the over fabric part of it. The elapsed time has been improved, but it's not yet the same as, for example, the HPV initiative. That's saying you change your architecture, you change your way of processing just in the memory. Everything is assumed to be memory. We're not there yet. 200 microseconds is still a lot, lot slower than the process that - one impact of this architecture is that the amount of data that you can pass through it is enormously higher and therefore, the memory sizes themselves within each node will need to be much, much bigger. There is a real opportunity for architectures which minimize the impact, which hold data coherently across multiple nodes and where there's minimal impact of, no tapping on the shoulder for every byte transferred so you can move large amounts of data into memory and then tell people that it's there and allow it to be shared, for example between the different calls and the GPUs and FPGAs that will be in these processes. There's more to come in terms of the architecture in the future. This is a step along the way, it's not the whole journey. >> Participant: Dave, another question. You just referenced 200 milliseconds or microseconds? >> David: Did I say milliseconds? I meant microseconds. >> Participant: You might have, I might have misheard. Relate that to the five microsecond thing again. >> David: If you have data directly attached to your processor, the access time is 195 microseconds. If you need to go to a remote, anywhere else in the thousand nodes, your access time is 200 microseconds. 
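For reference, the arithmetic behind those two numbers works out as follows; this is a trivial calculation shown only to make the remote-access penalty explicit.

```python
# Worked numbers for the latency David quotes: 195 microseconds to local data,
# 200 microseconds to data on any other node in the fabric.
local_us  = 195.0
remote_us = 200.0

penalty_us  = remote_us - local_us
penalty_pct = penalty_us / local_us * 100
print(f"remote penalty: {penalty_us:.0f} us ({penalty_pct:.1f}% over local access)")
```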
In other words, the additional overhead of getting to that data is five microseconds. >> Participant: That's incredible. >> David: Yes, yes. That is absolutely incredible. That's something that data scientists have been working on for years and years. Okay. That's the reason why you can now do what I talked about, which was that you can have access from any node to any data within that large number of nodes. You can have petabytes of data there and you can have access from any single node to any of that data. That, in terms of data enablement, digital enablement, is absolutely amazing. In other words, you don't have to pre-place the data that's local to one application in one place. You're allowing an enormous flexibility in how you design systems. That, coming back to artificial intelligence, etc., allows you a much, much larger amount of data that you can call on for improving applications. >> Participant: You can explore and train models, huge models, really quickly? >> David: Yes, yes. >> Participant: Apparently that process works better when you have an MPI-like mesh than a ring. >> David: If you compare this architecture to the DSSD architecture, which was the first entrant into this, that EMC bought for a billion dollars, that one stopped at 40 nodes. Its architecture was very, very proprietary all the way through. This one takes you to 1,000 nodes with much, much lower cost. They believe that the cost of the equivalent of a DSSD system will be between 10 and 20% of that cost. >> Dave: Can I ask a question about, you mentioned query optimizer. Who develops the query optimizer for the system? >> David: Nobody does yet. >> Jim: The DBMS vendor would have to re-write theirs, at a whole different expense. >> Dave: So we would have an optimizer per database system? >> David: Who's asking a question, I'm sorry. I don't recognize the voice. >> Dave: That was Neil. Hold on one second, David. Hold on one second. Go ahead Nick. You talk about translation. >> Nick: ... On a network. It's SAN. It happens to be very low latency and very high throughput, but it's just a storage sub-system. >> David: Yep. Yep. It's a storage sub-system. It's called a server SAN. That's what we've been talking about for a long time: you need the same characteristics, which is that you can get at all the data, but you need to be able to get at it in compute time as opposed to taking-a-stroll-down-the-road time. >> Dave: Architecturally it's a SAN without an array controller? >> David: Exactly. Yeah, the array controller is software from a company called Xcellate, what was the name of it? I can't remember now. Say it again. >> Nick: Xcelero or Xceleron? >> David: Xcelero. That's the company that has produced the software for the data services, etc. >> Dave: Let's, as we sort of wind down this segment, let's talk about the business impact again. We're talking about different ways potentially to develop applications. There's an ecosystem requirement here it sounds like, from the ISVs to support this, and other developers. It finally portends the elimination of the last electromechanical device in computing, which has implications for a lot of things. Performance value, application development, application capability. Maybe you could talk about that a little bit again, thinking in terms of how practitioners should look at this. What are the actions that they should be taking and what kinds of plans should they be making in their strategies?
>> David: I thought Neil's comment last week was very perceptive which is, you wouldn't start with people like me who have been imbued with the 100 database call limits for umpteen years. You'd start with people, millennials, or sub-millenials or whatever you want to call them, who can take a completely fresh view of how you would exploit this type of architecture. Fundamentally you will be able to get through 10 or 100 times more data in real time than you can with today's systems. There's two parts of that data as I said before. The traditional systems of record that need to be updated, and then a whole host of applications that will allow you to do processes which are either not possible, or very slow today. To give one simple example, if you want to do real time changing of pricing based on availability of your supply chain, based on what you've got in stock, based on the delivery capabilities, that's a very, very complex problem. The optimization of all these different things and there are many others that you could include in that. This will give you the ability to automate that process and optimize that process in real time as part of the systems of record and update everything together. That, in terms of business value is extracting a huge number of people who previously would be involved in that chain, reducing their involvement significantly and making the company itself far more agile, far more responsive to change in the marketplace. That's just one example, you can think of hundreds for every marketplace where the application now becomes the systems of record, augmented by AI and huge amounts more data can improve the productivity of an organization and the agility of an organization in the marketplace. >> This is a godsend for AI. AI, the draw of AI is all this training data. If you could just move that in memory speed to the application in real time, it makes the applications much sharper and more (mumbling). >> David: Absolutely. >> Participant: How long David, would it take for the cloud vendors to not just offer some instances of this, but essentially to retool their infrastructure. (laughing) >> David: This is, to me a disruption and a half. The people who can be first to market in this are the SaaS vendors who can take their applications or new SaaS vendors. ISV. Sorry, say that again, sorry. >> Participant: The SaaS vendors who have their own infrastructure? >> David: Yes, but it's not going to be long before the AWS' and Microsofts put this in their tool bag. The SaaS vendors have the greatest capability of making this change in the shortest possible time. To me, that's one area where we're going to see results. Make no mistake about it, this is a big change and at the Micron conference, I can't remember what the guys name was, he said it takes two Olympics for people to start adopting things for real. I think that's going to be shorter than two Olympics, but it's going to be quite a slow process for pushing this out. It's radically different and a lot of the traditional ways of doing things are going to be affected. My view is that SaaS is going to be the first and then there are going to be individual companies that solve the problems themselves. Large companies, even small companies that put in systems of this sort and then use it to outperform the marketplace in a significant way. Particularly in the finance area and particularly in other data intent areas. That's my two pennies worth. Anybody want to add anything else? Any other thoughts? 
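As a purely illustrative toy, the real-time pricing example David gives could look something like the rule-based sketch below; the thresholds and multipliers are invented for illustration and are not from the discussion.

```python
# Toy sketch of real-time repricing driven by live signals about stock,
# supply-chain replenishment, and delivery capacity. All numbers are invented.
def reprice(base_price, units_in_stock, days_to_restock, delivery_slots_free):
    price = base_price
    if units_in_stock < 10 and days_to_restock > 7:
        price *= 1.15          # scarce and slow to replace: nudge the price up
    elif units_in_stock > 500:
        price *= 0.92          # overstocked: discount to move inventory
    if delivery_slots_free < 5:
        price *= 1.05          # constrained delivery capacity adds a premium
    return round(price, 2)

print(reprice(base_price=100.0, units_in_stock=8,   days_to_restock=10, delivery_slots_free=3))
print(reprice(base_price=100.0, units_in_stock=800, days_to_restock=2,  delivery_slots_free=40))
```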
>> Dave: Let's wrap some final thoughts on this one. >> Participant: Big deal for big data. >> David: Like it, like it. >> Participant: It's actually more than that because there used to be a major trade off between big data and fast data. Latency and throughput and this starts to push some of those boundaries out so that you sort of can have both at once. >> Dave: Okay, good. Big deal for big data and fast data. >> David: Yeah, I like it. >> Dave: George, you want to talk about digital twins? I remember when you first sort of introduced this, I was like, "Huh? What's a digital twin? "That's an interesting name." I guess, I'm not sure you coined it, but why don't you tell us what digital twin is and why it's relevant. >> George: All right. GE coined it. I'm going to, at a high level talk about what it is, why it's important, and a little bit about as much as we can tell, how it's likely to start playing out and a little bit on the differences of the different vendors who are going after it. As far as sort of defining it, I'm cribbing a little bit from a report that's just in the edit process. It's data representation, this is important, or a model of a product, process, service, customer, supplier. It's not just an industrial device. It can be any entity involved in the business. This is a refinement sort of Peter helped with. The reason it's any entity is because there is, it can represent the structure and behavior, not just of a machine tool or a jet engine, but a business process like sales order process when you see it on a screen and its workflow. That's a digital twin of what used to be a physical process. It applied to both the devices and assets and processes because when you can model them, you can integrate them within a business process and improve that process. Going back to something that's more physical so I can do a more concrete definition, you might take a device like a robotic machine tool and the idea is that the twin captures the structure and the behavior across its lifecycle. As it's designed, as it's built, tested, deployed, operated, and serviced. I don't know if you all know the myth of, in the Greek Gods, one of the Goddesses sprang fully formed from the forehead of Zeus. I forgot who it was. The point of that is digital twin is not going to spring fully formed from any developers head. Getting to the level of fidelity I just described is a journey and a long one. Maybe a decade or more because it's difficult. You have to integrate a lot of data from different systems and you have to add structure and behavior for stuff that's not captured anywhere and may not be captured anywhere. Just for example, CAD data might have design information, manufacturing information might come from there or another system. CRM data might have support information. Maintenance repair and overhaul applications might have information on how it's serviced. Then you also connect the physical version with the digital version with essentially telemetry data that says how its been operating over time. That sort of helps define its behavior so you can manipulate that and predict things or simulate things that you couldn't do with just the physical version. >> You have to think about combined with say 3D printers, you could create a hot physical back up of some malfunctioning thing in the field because you have the entire design, you have the entire history of its behavior and its current state before it went kablooey. 
Conceivably, it can be fabricated on the fly and reconstituted as a physical object from the digital twin that was maintained. >> George: Yes, you know what, actually that raises a good point, which is that the behavior that was represented in the telemetry helps the designer simulate a better version for the next version. Just what you're saying. Then with 3D printing, you can either make a prototype or another instance. Some of the printers are getting sophisticated enough to punch out better versions or parts for better versions. That's a really good point. There's one thing that has to hold all this stuff together which is really kind of difficult, which is challenging technology. IBM calls it a knowledge graph. It's pretty much in anyone's version. They might not call it a knowledge graph. A graph is, instead of a tree where you have a parent and then children and then the children have more children, a structure where many things can relate to many things. The reason I point that out is that puts a holistic structure over all these disparate sources of data and behavior. You essentially talk to the graph, sort of like with Arnold, talk to the hand. That didn't, I got crickets. (laughing) Let me give you guys the, I put a definitions table in this doc. I had a couple things. Data models. These are some important terms. The data model represents the structure but not the behavior of the digital twin. The API represents the behavior of the digital twin and it should conform to the data model for maximum developer usability. Jim, jump in anywhere where you feel like you want to correct or refine. The object model is a combination of the data model and API. You were going to say something? >> Jim: No, I wasn't. >> George: Okay. The object model ultimately is the digital twin. Another way of looking at it, defining the structure and behavior. This sounds like one of these, say "T" words, the canonical model. It's a generic version of the digital twin, or really the one where you're going to have a representation that doesn't have customer specific extensions. This is important because the way these things are getting built today is mostly custom, bespoke, and so if you want to be able to reuse work, if someone's building this for you like a system integrator, you want to be able to, or they want to be able to, reuse this on the next engagement, and you want to be able to take the benefit of what they've learned on the next engagement back to you. There has to be this canonical model that doesn't break every time you essentially add new capabilities. It doesn't break your existing stuff. Knowledge graph, again, is this thing that holds together all the pieces and makes them look like one coherent whole. I'll get to, I talked briefly about network compatibility and I'll get to level of detail. Let me go back to, I'm sort of doing this from crib notes. We talked about telemetry which is sort of combining the physical and the twin. Again, telemetry's really important because this is like the time series database. It says, this is all the stuff that was going on over time. Then you can look at telemetry data that tells you, we got a dirty power spike and after three of those, this machine sort of started vibrating. That's part of how you're looking to learn about its behavior over time. In that process, models get better and better about predicting and enabling you to optimize their behavior and the business process with which it integrates. I'll give some examples of that.
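A minimal sketch of how those terms might fit together in code, assuming a simple Python class: the data model is the structure, the API is the behavior that conforms to it, telemetry ties the physical asset to the twin, and graph-style relations let many things relate to many things. All field and metric names are hypothetical.

```python
# Illustrative digital twin object model: data model (structure), API (behavior),
# telemetry from the physical asset, and graph-style relations to other entities.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class DigitalTwin:
    # Data model: the structure of the asset.
    asset_id: str
    asset_type: str
    attributes: Dict[str, float]
    # Knowledge-graph-style relations: (relation, other_entity_id).
    relations: List[Tuple[str, str]] = field(default_factory=list)
    # Telemetry: time-series observations as (timestamp, metric, value).
    telemetry: List[Tuple[str, str, float]] = field(default_factory=list)

    # API: the behavior of the twin, conforming to the data model above.
    def observe(self, ts: str, metric: str, value: float) -> None:
        self.telemetry.append((ts, metric, value))

    def vibration_trend(self) -> float:
        readings = [v for _, m, v in self.telemetry if m == "vibration_mm_s"]
        return 0.0 if not readings else readings[-1] - readings[0]

tool = DigitalTwin("mt-001", "robotic_machine_tool", {"spindle_power_kw": 15.0},
                   relations=[("part_of", "line-7")])
tool.observe("2017-06-01T10:00Z", "vibration_mm_s", 1.2)
tool.observe("2017-06-02T10:00Z", "vibration_mm_s", 2.9)   # after a dirty power spike
print("vibration drift:", tool.vibration_trend())
```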
Twins, these digital twins can themselves be composed in levels of detail. I think I used the example of a robotic machine tool. Then you might have a bunch of machine tools on an assembly line and then you might have a bunch of assembly lines in a factory. As you start modeling, not just the single instance, but the collections that higher up and higher levels of extractions, or levels of detail, you get a richer and richer way to model the behavior of your business. More and more of your business. Again, it's not just the assets, but it's some of the processes. Let me now talk a little bit about how the continual improvement works. As Jim was talking about, we have data feedback loops in our machine learning models. Once you have a good quality digital twin in place, you get the benefit of increasing returns from the data feedback loops. In other words, if you can get to a better starting point than your competitor and then you get on the increasing returns of the data feedback loops, that is improving the fidelity of the digital twins now faster than your competitor. For one twin, I'll talk about how you want to make the whole ecosystem of twins sort of self-reinforcing. I'll get to that in a sec. There's another point to make about these data feedback loops which is traditional apps, and this came up with Jim and Neil, traditional apps are static. You want upgrades, you get stuff from the vendor. With digital twins, they're always learning from the customer's data and that has implications when the partner or vendor who helped build it for a customer takes learnings from the customer and goes to a similar customer for another engagement. I'll talk about the implications from that. This is important because it's half packaged application and half bespoke. The fact that you don't have to take the customer's data, but your model learns from the data. Think of it as, I'm not going to take your coffee beans, your data, but I'm going to run or make coffee from your beans and I'm going to take that to the next engagement with another customer who could be your competitor. In other words, you're extracting all the value from the data and that helps modify the behavior of the model and the next guy gets the benefit of it. Dave, this is the stuff where IBM keeps saying, we don't take your data. You're right, but you're taking the juice you squeezed out of it. That's one of my next reports. >> Dave: It's interesting, George. Their contention is, they uniquely, unlike Amazon and Google, don't swap spit, your spit with their competitors. >> George: That's misleading. To say Amazon and Google, those guys aren't building digital twins. Parametric technology is. I've got this definitely from a parametric technical fellow at an AWS event last week, which is they, not only don't use the data, they don't use the structure of the twin either from engagement to engagement. That's a big difference from IBM. I have a quote, Chris O'Connor from IBM Munich saying, "We'll take the data model, "but we won't take the data." I'm like, so you take the coffee from the beans even if you don't take the beans? I'm going to be very specific about saying that saying you don't do what Google and FaceBook do, what they do, it's misleading. >> Dave: My only caution there is do some more vetting and checking. A lot of times what some guy says on a Cube interview, he or she doesn't even know, in my experience. Make sure you validate that. >> George: I'll send it to them for feedback, but it wasn't just him. 
I got it from the CTO of the IOT division as well. >> Dave: When you were in Munich? >> George: This wasn't on the Cube either. This was by the side of, at the coffee table during our break. >> Dave: I understand, and CTOs in theory should know. I can't tell you how many times I've gotten a definitive answer from a pretty senior level person and it turns out it was, either they weren't listening to me or they didn't know or they were just yessing me or whatever. Just be really careful and make sure you do your background checks. >> George: I will. I think the key is leave them room to provide a nuanced answer. It's more about being really, really, really concrete about really specific edge conditions and saying, do you or don't you. >> Dave: This is a pretty big one. If I'm a CIO, a chief digital officer, a chief data officer, COO, head of IT, head of data science, what should I be doing in this regard? What's the advice? >> George: Okay, can I go through a few more or are we out of time? >> Dave: No, we have time. >> George: Let me do a couple more points. I talked about training a single twin or an instance of a twin and I talked about the acceleration of the learning curve. There's edge analytics, David has educated us with the help of looking at GE Predix. David, you have been talking about this for a long time. You want edge analytics to inform or automate a low latency decision and so this is where you're going to have to run some amount of analytics. Right near the device. Although I've got to mention, hopefully this will elicit a chuckle, when you get some vendors telling you what their edge and cloud strategies are. MapR said, we'll have a Hadoop cluster that only needs four or five nodes as our edge device. And we'll need five admins to care for and feed it. He didn't say the last part, but that obviously isn't going to work. The edge analytics could be things like recalibrating the machine for a different tolerance if it's seeing that it's getting out of the tolerance window, or something like that. The cloud, and this is old news for anyone who's been around David, but you're going to have a lot of data, not all of it, going back to the cloud to train both the instances of each robotic machine tool and the master of that machine tool. The reason is, an instance would be, oh, I'm operating in a high humidity environment, something like that. Another one would be operating where there's a lot of sand or something that screws up the behavior. Then the master might be something that has behavior that's sort of common to all of them. The training will take place on the instances and the master, and will in all likelihood push down versions of each. Next to the physical device, process, whatever, you'll have the instance one and a class one, and between the two of them, they should give you the optimal view of behavior and the ability to simulate to improve things. It's worth mentioning, again as David found out, not by talking to GE, but by accidentally looking at their documentation, their whole positioning of edge versus cloud is a little bit hand waving, and in talking to the guys from ThingWorx, which is a division of what used to be called Parametric Technology, which is just PTC, it appears that they're negotiating with GE to give them the orchestration and distributed database technology that GE can't build itself.
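A small sketch of that edge-analytics pattern: a low-latency check that runs next to the device and flags recalibration when readings drift out of the tolerance window, while the heavier instance and master training stays in the cloud. The tolerance values and the trigger rule are assumptions for illustration.

```python
# Edge-side sketch: a cheap check made right next to the machine tool,
# not in the cloud, that triggers recalibration on tolerance drift.
TOLERANCE_MM = (9.95, 10.05)        # acceptable bore diameter for this machine tool

def edge_check(recent_measurements_mm):
    """Low-latency decision made near the device."""
    out_of_window = [m for m in recent_measurements_mm
                     if not (TOLERANCE_MM[0] <= m <= TOLERANCE_MM[1])]
    if len(out_of_window) >= 3:     # a few bad parts in a row: act now
        return "recalibrate"
    return "ok"

print(edge_check([10.01, 9.99, 10.02, 10.00]))   # ok
print(edge_check([10.06, 10.07, 9.93, 10.08]))   # recalibrate

# Cloud side (not shown): train the master model on behavior common to all machines,
# train instance models on each machine's own telemetry (humidity, sand, wear),
# then push a version of each down next to the device.
```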
>> I've heard also from two ISVs, one a major one and one a minor one, who are both in the IOT ecosystem, one of whom is part of the GE ecosystem, that Predix is a mess. It's analysis paralysis. It's not that they don't have talent, it's just that they're not getting shit done. Anyway, the key thing now is when you get all this - >> David: Just from what I learned when I went to the GE event recently, they're aware of that requirement. They've actually already got some subparts of Predix which they can put in the cloud, but there needs to be more of it and they're aware of that. >> George: As usual, just another reason I need a red phone hotline to David for any and all questions I have. >> David: Flattery will get you everywhere. >> George: All right. One of the key takeaways, not the action item, but the takeaway for a customer is when you get these data feedback loops reinforcing each other, the instances of, say, the robotic machine tools to the master, then the instance to the assembly line to the factory, when all that is being orchestrated and all the data is continually enhancing the models, as well as the manual process of adding contextual information or new levels of structure, this is when you're on an increasing returns sort of curve that really contributes to sustaining competitive advantage. Remember, think of how when Google started off on search, it wasn't just their algorithm, but it was collecting data about which links you picked, in which order and how long you were there, that helped them reinforce the search rankings. They got so far ahead of everyone else that even if others had those algorithms, they didn't have that data to help refine the rankings. You get this same process going when you essentially have your ecosystem of learning models across the enterprise sort of all orchestrating. This sounds like motherhood and apple pie and there's going to be a lot of challenges to getting there, and I haven't gotten all the warts from having gone through it, talked to a lot of customers who've gotten the arrows in the back, but that's the theoretical, really cool end point or position where the entire company becomes a learning organization from these feedback loops. I want to, now that we're in the edit process on the overall digital twin, I do want to do a follow up on IBM's approach. Hopefully we can do it both as a report and then as a version that's for SiliconANGLE, because that thing I wrote on Cloudera got the immediate attention of Cloudera and Amazon, and hopefully we can both provide client proprietary value add, but also the public impact stuff. That's my high level. >> This is fascinating. If you're the Chief of Data Science, for example, in a large industrial company, having the ability to compile digital twins of all your edge devices can be extraordinarily valuable because then you can use that data to do more fine-grained segmentation of the different types of edge devices based on their behavior and their state under various scenarios. Basically, your team of data scientists can then begin to identify the extent to which they need to write different machine learning models that are tuned to the specific requirements or status or behavior of different end points. What I'm getting at is ultimately, you're going to have 10 zillion different categories of edge devices performing in various scenarios. They're going to be driven by an equal variety of machine learning, deep learning, AI and all that.
All that has to be built up by your data science team in some coherent architecture where there might be a common canonical template that all devices will, all the algorithms and so forth on those devices are being built from. Each of those algorithms will then be tweaked to the specific digital twins profile of each device is what I'm getting at. >> George: That's a great point that I didn't bring up which is folks who remember object oriented programming, not that I ever was able to write a single line of code, but the idea, go into this robotic machine tool, you can inherit a couple of essentially component objects that can also be used in slightly different models, but let's say in this machine tool, there's a model for a spinning device, I forget what it's called. Like a drive shaft. That drive shaft can be in other things as well. Eventually you can compose these twins, even instances of a twin with essentially component models themselves. Thing Works does this. I don't know if GE does this. I don't think IBM does. The interesting thing about IBM is, their go to market really influences their approach to this which is they have this huge industry solutions group and then obviously the global business services group. These guys are all custom development and domain experts so they'll go into, they're literally working with Airbus and with the goal of building a model of a particular airliner. Right now I think they're doing the de-icing subsystem, I don't even remember on which model. In other words they're helping to create this bespoke thing and so that's what actually gets them into trouble with potentially channel conflict or maybe it's more competitor conflict because Airbus is not going to be happy if they take their learnings and go work with Boeing next. Whereas with PTC and Thing Works, at least their professional services arm, they treat this much more like the implementation of a packaged software product and all the learnings stay with the customer. >> Very good. >> Dave: I got a question, George. In terms of the industrial design and engineering aspect of building products, you mentioned PTC which has been in the CAD business and the engineering business for software for 50 years, and Ansis and folks like that who do the simulation of industrial products or any kind of a product that gets built. Is there a natural starting point for digital twin coming out of that area? That would be the vice president of engineering would be the guy that would be a key target for this kind of thinking. >> George: Great point. This is, I think PTC is closely aligned with Terradata and they're attitude is, hey if it's not captured in the CAD tool, then you're just hand waving because you won't have a high fidelity twin. >> Dave: Yeah, it's a logical starting point for any mechanical kind of device. What's a thing built to do and what's it built like? >> George: Yeah, but if it's something that was designed in a CAD tool, yes, but if it's something that was not, then you start having to build it up in a different way. I think, I'm trying to remember, but IBM did not look like they had something that was definitely oriented around CAD. Theirs looked like it was more where the knowledge graph was the core glue that pulled all the structure and behavior together. Again, that was a reflection of their product line which doesn't have a CAD tool and the fact that they're doing these really, really, really bespoke twins. 
>> Dave: I'm thinking that it strikes me that from the industrial design in engineering area, it's really the individual product is really the focus. That's one part of the map. The dynamic you're pointing at, there's lots of other elements of the map in terms of an operational, a business process. That might be the fleet of wind turbines or the fleet of trucks. How they behave collectively. There's lots of different entry points. I'm just trying to grapple with, isn't the CAD area, the engineering area at least for hard products, have an obvious starting point for users to begin to look at this. The BP of Engineering needs to be on top of this stuff. >> George: That's a great point that I didn't bring up which is, a guy at Microsoft who was their CTO in their IT organization gave me an example which was, you have a pipeline that's 1,000 miles long. It's got 10,000 valves in it, but you're not capturing the CAD design of the valve, you just put a really simple model that measures pressure, temperature, and leakage or something. You string 10,000 of those together into an overall model of the pipeline. That is a low fidelity thing, but that's all they need to start with. Then they can see when they're doing maintenance or when the flow through is higher or what the impact is on each of the different valves or flanges or whatever. It doesn't always have to start with super high fidelity. It depends on which optimizing for. >> Dave: It's funny. I had a conversation years ago with a guy, the engineering McNeil Schwendler if you remember those folks. He was telling us about 30 to 40 years ago when they were doing computational fluid dynamics, they were doing one dimensional computational fluid dynamics if you can imagine that. Then they were able, because of the compute power or whatever, to get the two dimensional computational fluid dynamics and finally they got to three dimensional and they're looking also at four and five dimensional as well. It's serviceable, I guess what I'm saying in that pipeline example, the way that they build that thing or the way that they manage that pipeline is that they did the one dimensional model of a valve is good enough, but over time, maybe a two or three dimensional is going to be better. >> George: That's why I say that this is a journey that's got to take a decade or more. >> Dave: Yeah, definitely. >> Take the example of airplane. The old joke is it's six million parts flying in close formation. It's going to be a while before you fit that in one model. >> Dave: Got it. Yes. Right on. When you have that model, that's pretty cool. All right guys, we're about out of time. I need a little time to prep for my next meeting which is in 15 minutes, but final thoughts. Do you guys feel like this was useful in terms of guiding things that you might be able to write about? >> George: Hugely. This is hugely more valuable than anything we've done as a team. >> Jim: This is great, I learned a lot. >> Dave: Good. Thanks you guys. This has been recorded. It's up on the cloud and I'll figure out how to get it to Peter and we'll go from there. Thanks everybody. (closing thank you's)
SUMMARY :
There you go. and maybe the key issues that you see and is coming even more deeply into the core practice You had mentioned, you rattled off a bunch of parameters. It's all about the core team needs to be, I got a minimal modular, incremental, iterative, iterative, adaptive, and co-locational. in the context of data science, and get automation of many of the aspects everything that these people do needs to be documented that the whole rapid idea development flies in the face of that create the final product that has to go into production and the algorithms and so forth that were used and the working model is obviously a subset that handle the continuous training and retraining David: Is that the right way of doing it, Jim? and come back to sort of what I was trying to get to before Dave: Please, that would be great. so how in the world are you going to agilize that? I think if you try to represent data science the algorithm to be fit for purpose and he said something to me the other day. If you look at - Just to clarify, he said agile's dead? Dave: Go ahead, Jim. and the functional specifications and all that. and all that is increasingly the core that the key aspect of all the data scientists that incorporates the crux of data science Nick, you there? Tough to hear you. pivoting off the Micron news. the ability to create a whole number of nodes. Participant: This latency and the node count At the moment, 3D Crosspoint is a nice to have That is the secret sauce which allows you The latency is incredibly short. Move the processing to that particular node Is Micron the first market with this capability, David? David: Over fabric? and who are coming to market with their own versions. Dave: David? bring the application to the data, Now that the impact of non co-location and you have to do a join, and take the output of that and bring that together. All of the data science is there. NVMe, by definition is a point to point architecture. Dave: Oh, got it. Does that definition of (mumbling) make sense to you guys? Nick: You're emphasizing the network topology, the whole idea was the thrust was performance. of the systems as a whole Then the fact that you have a large memory space is that the amount of data that you can pass through it You just referenced 200 milliseconds or microseconds? David: Did I say milliseconds? Relate that to the five microsecond thing again. anywhere else in the thousand nodes, That's the reason why you can now do what I talked about when you have an MPI like mesh than a ring. They believe that the cost of the equivalent DSSD system Who develops the query optimizer for the system? Jim: The DBMS vendor would have to re-write theirs I don't recognize the voice. Dave: That was Neil. It happens to be very low latency which is that you can get at all the data, Yeah, the array controller is software from a company called That's the company that has produced the software from the ISDs to support this and other developers. and the agility of an organization in the marketplace. AI, the draw of AI is all this training data. for the cloud vendors to not just offer are the SaaS vendors who can take their applications and then there are going to be individual companies Latency and throughput and this starts to push Dave: Okay, good. I guess, I'm not sure you coined it, and the idea is that the twin captures the structure Conceivably, it can be fabricated on the fly and it should conform to the data model and that helps modify the behavior Dave: It's interesting, George. 
saying, "We'll take the data model, Make sure you validate that. I got it from the CTO of the IOT division as well. This was by the side of, at the coffee table I can't tell you how many times and say do you or don't you. What's the advice? of behavior and the ability to simulate to improve things. of the predix which they can put in the cloud, I need a red phone hotline to David and all the data is continually enhancing the models having the ability to compile digital twins and all the learnings stay with the customer. and the engineering business for software hey if it's not captured in the CAD tool, What's a thing built to do and what's it built like? and the fact that they're doing these that from the industrial design in engineering area, but that's all they need to start with. and finally they got to three dimensional that this is a journey that's got to take It's going to be a while before you fit that I need a little time to prep for my next meeting This is hugely more valuable than anything we've done how to get it to Peter and we'll go from there.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
David | PERSON | 0.99+ |
Jim | PERSON | 0.99+ |
Chris O'Connor | PERSON | 0.99+ |
George | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Airbus | ORGANIZATION | 0.99+ |
Boeing | ORGANIZATION | 0.99+ |
Jim Kobeielus | PERSON | 0.99+ |
James | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
Neil | PERSON | 0.99+ |
Joe | PERSON | 0.99+ |
Nick | PERSON | 0.99+ |
David Floyer | PERSON | 0.99+ |
George Gilbert | PERSON | 0.99+ |
1,000 miles | QUANTITY | 0.99+ |
10 | QUANTITY | 0.99+ |
Peter | PERSON | 0.99+ |
195 microseconds | QUANTITY | 0.99+ |
Kevin Baillie, Atomic Fiction
>> Narrator: Live from Las Vegas. It's the CUBE. Covering NAB 2017, brought to you by HGST. >> Welcome back to the CUBE live in Las Vegas at the NAB show. We're having a great day so far. Very excited to introduce you to my next guest, Kevin Baillie, cofounder and VFX supervisor at Atomic Fiction and the CEO of Conductor Technologies. Never a boring day for you with those two titles, I can imagine. >> No, I like to joke that I like to make sure that I always have the most exciting job in the world so I had to pick three to make sure that I never have a down moment spoil that, that day >> Wow, I am impressed. So you just spoke at the virtual NAB conference last month on the visual effects in the cloud, power, and control. Something that I found very interesting was that six years ago, you were kind of on an island going "I have this hunch about cloud." Tell us about, what was that hunch, why did you have it, and what has it generated so far? >> Yeah, yeah, that's a great question. The hunch was less of like, "Hey cloud looks like a great opportunity." It was more of like knowing what wasn't working in the industry as it was at that time. There were all kinds of companies that were kind of like having financial troubles or having a hard time delivering projects, tons of bankruptcies and just really sad stories everywhere. And we looked at the market and said, "There's a ton of work here, this doesn't make sense." Some of the best entertainment is being made right now and it all relies on visual effects, what's wrong? And the further we broke down the problem, the more we realized that like fixed infrastructure within a market that naturally ebbs and flows, it just didn't, there wasn't a match there. So, through that problem, we looked for solutions and cloud was a very obvious one at that point. So we just made the jump. >> And tell us about Atomic Fiction versus Conductor Technologies. Chicken, egg, which one came first? And how are they collaborating together? >> Atomic Fiction came first. It was almost seven years ago at this point that we started Atomic. And we looked for any kind of a way to use cloud. We started using an AWS directly, we then used a tool called Zync. And as we grew, we found that the needs of the company were changing so radically that nothing that was out there could actually keep up with our pace of growth. We had all this customized pipeline that we couldn't find a way to like get it into the cloud. So we built our own and that was called Conductor. And after, I think we were working on like Game of Thrones and The Walk and had just started on Deadpool that we realized it was working so well that we decided to spin it off as it's own company and make a go for actually turning it into a product that could help everybody in the same way that the cloud had helped Atomic Fiction. >> Fantastic, one of my favorite movies is The Walk. I was looking at your website and you think as the viewer, "How did they film this?" You know, this day and age, so much is CGI. Talk to us about what realtime cloud rendering is. How does it enable a movie like The Walk or Deadpool to have that awe inspiring, jaw dropping reaction from the audience? >> Well I think a large portion of bringing that jaw dropping reaction to the audience and that level of realism is being able to run productions in the way that they want to be run. 
And what I mean by that is, let's take a movie like The Walk where you have to recreate 1974 New York and the Twin Towers, and all these different lighting scenarios. That means we have to build every building, every rain gutter, every hotdog stand in the street down to exacting detail, and that just takes a lot of time. So we spent a ton of time, probably the first three quarters of the schedule just building the city, building the city. And we couldn't render anything at that point And it wasn't only until the very end of the show that we were able to say, "alright, now we have New York is there, let's just put it on the screen." But that takes millions of hours of computing to get that done. The Walk for example, it used 9.1 million processor hours of rendering. That's over a thousand years on a single processor to get it done. So if we hadn't had the cloud, we would have had to been like, "Oh what can we render first "so we don't bottleneck at the end of the schedule?" And really kind of like trying to bend production into the box that we, of fixed infrastructure that we have. But with the cloud, we don't have to do that. We can say, we can go as big as we want to at the very end of the show and get it done if that's what makes sense for the show. Because that's what makes sense for the show, the creative just ends up being that much better. The same was true for Deadpool, the same is true for Star Trek. These movies, they just sort of, you want to craft love into the beginning part of it so the stuff you generate at the end is as beautiful as it can be. >> So is cloud really freeing production from being able to operate in the way that it needs to operate? >> Yeah, yeah, exactly. Because the traditional model is, a visual effects company builds a data center and stuffs it full of computers. In best case, with like three weeks lead time you can like rent a bunch of racks of computers and like shove them in a closet somewhere and get your project done. It ends up being expensive and painful. You need a big team to man all that stuff. Whereas with cloud, we can say, "Hey, I need a thousand computers three minutes from now." And boom, a thousand computers spin up out of nowhere. And the great thing that we've done with Conductor as well is we've gone and negotiated per minute software licensing with Autodesk and the Foundry and IsoTropic and Chaos Group. All these big software vendors in the industry. So not only can you get compute by the minute, you can also get all the software that you need by the minute, right. So you can have three thousand nodes running Autodesk to Arnold, and you, but you run it for 42 minutes and you only pay for 42 minutes of three thousand licenses of Arnold, right. So it's really transformative from a flexibility standpoint. >> And the cost model really flips it on it's head. >> And by the way, the artists get the result back faster. Because you can scale up so big and get the result back to them so quickly without any cost penalty, they see the fruits of their labor while the ideas are still fresh in their head, which is like a huge, like, intangible benefit which has real economic benefits. >> Absolutely, one of the things and themes that we've heard of today is that speed is key. Absolutely critical to whatever is going to happen or whether or not on a shoot, a vision changes direction. And without having the power of the cloud to facilitate something on a dime, there's delays, which all adds up to economic impact. 
>> Yeah, and you know, back on one of our earliest projects rendered in the cloud, Flight. The Robert Zemeckis movie with Denzel Washington. That exact thing happened, where it was like at the very end, he, Zemeckis realized that he needed this extra set of like a hundred visual effects shots. And if it hadn't have been for the cloud, we would have had to say, "No, sorry we can't do these." "We have to find somebody else to do them." But because the ability of the cloud to accommodate that last minute creative epiphany, we were able to actually do the work. So it really is truly transformative and allowed us to bring in, you know, hundreds of thousands of dollars of extra revenue that we wouldn't have been able to do otherwise. >> Absolutely. In terms of some of the public cloud providers, tell us who you're working with on that end. >> Yeah, so we're working with Google right now, using Google Compute Engine on the back end. And we're actually moving forward with Microsoft and Azure. Adding it as an option later in the year. So, hopefully at the end of the year, we'll be able to support all the large cloud providers. And be able to say, "Hey, Studio X. "We know you have an affinity for Google right now, "but on the next project maybe you need "a very specific GPU type." Or there's a company in China that needs to do some work and Google isn't there. Now Azure is your thing, right. So, I think that the world of cloud providers competing against one another is going to be really beneficial for everyone in our industry for sure. And we want to be there to facilitate a little bit of like, choose whoever's best, right. >> Right, giving you the ability to really be like agnostic on the back end. >> Yeah that's exactly right. >> So as we look at these massive resources that studios are generating, creating such interactive films, what are some of the precautions that you see and you can help them mitigate against leveraging the power of cloud. >> Well, one of the benefits of cloud is you only have to pay for what you use, just like electricity, right. One of the downsides of cloud is you have to pay for what you use, right. So, if you're not careful about the render you put in the cloud or the simulation you put in the cloud, or how long you keep data in the cloud, things can get really expensive really quickly. So, one of the things we did, and this is actually why we kind of spun Conductor off as it's own company. And we just raised our Series A round of funding back in December to build the team out, because a lot of this stuff is really complicated, is one of the big efforts, in kind of a post funding world for Conductor, is on analytics and being able to use data to help people drive production better. So you know, in the very beginning, we have cost limits where you can say, "On this shot, I don't want to spend "more than a thousand dollars." Or, "I never want this artist to be able to spend "more than fifteen hundred bucks a day." But in the future, I think that there is kind of like cloud buzz-wordy things that actually come into real play here where we can use machine learning to detect when things are taking too long and alert people. We can tell people how much a render is going to cost before they even submit it maybe. We can use computer vision to check for bad things happening in the middle of a render before a human ever has a chance to lay eyes on it. 
So there's all kinds of stuff we can do with data to help mitigate some of the downsides of cloud and hopefully only leave people with like great insights to help them run production better. >> That's fantastic. One of the things that really interests me is the machine learning and the artificial intelligence. To be able to look at whether it's a broadcast outlet or a film studio, to be able to take a look at and evaluate the value and additional revenue streams that can come. But also, in your case, maybe even leveraging AI and machine learning to make certain processes faster thereby lowering costs. >> Yeah, we can actually make proactive suggestions based on, like, you know, thousands or millions of data points and say like, "Hey if you tweak this value on your shading rate here, "you're going to end up with a great visual "and not spend any more time, or actually spend less." So things like that and then also working together with production management systems. Like the guys at Autodesk have a product called Shotgun that deals with schedules and artist assignments. And they can have all the schedule information. We have all the sort of infrastructure information. If we correlate those two data sets together, then we'll be able to actually proactively tell somebody when we think a shot is running behind schedule. Or a shot needs more optimization. And I mean, there's all kinds of things that we can use just purely using data and a trained machine learning model to actually help people run their entire business better, not just an individual shot. >> Right, well, six years ago, when you had this hunch, you said there were some skeptics around there. One, you must feel pretty validated by now, but are you kind of one of the go-to guys, go-to companies of this is how to do it properly? These are all of the advantages, economic advantages, etc, that we can provide? >> Yeah, I think that there were definitely people that told me I was absolutely crazy when I first got started. Some of them are actually using Conductor now, so that's kind of like good. >> That must feel good right? >> Yeah, it's a good validation point and they had a lot of reasons for thinking that we were insane, cause we kind of were. But we just sort of believed deep down that it was going to work. So, yeah, I mean now, I think we're in a great position to help people. And for me, and you know, this is always like a thing that I sometimes get a hard time for, but I'm so passionate about this industry moving into the cloud that I'm just as happy to talk to somebody about how to do it maybe on their own if they're trying to do it on a small scale. Or what our competitors might be doing. Really, through that, I've kind of, we've found a space where we don't really have any competitors yet and we're breaking new ground. Really servicing the sort of medium and enterprise scale customers, and that kind of flexibility and scale and security that they kind of need. So it's sort of interesting in this, in a way, this sort of like selfless, just being excited about cloud has helped us to find a market that we can really and truly add insane value to. >> Wow, that is fascinating. Well, your passion for it is evident. Thank you so much Kevin for joining us on the CUBE. >> Yeah, thank you so much. >> Have a great time at the rest of the show and we'll see you on the CUBE sometimes soon. >> I always do, thank you again. >> Excellent, we want to thank you for watching. Again, we are live at NAB Las Vegas. Stick around. 
We will be right back.
SUMMARY :
brought to you by HGST. Very excited to introduce you to my next guest, So you just spoke at the virtual NAB conference last month And the further we broke down the problem, And tell us about Atomic Fiction that could help everybody in the same way Talk to us about what realtime cloud rendering is. into the beginning part of it so the stuff you generate And the great thing that we've done with Conductor as well And by the way, the artists get the result back faster. Absolutely, one of the things and themes And if it hadn't have been for the cloud, In terms of some of the public cloud providers, "but on the next project maybe you need like agnostic on the back end. and you can help them mitigate One of the downsides of cloud is you have One of the things that really interests me And I mean, there's all kinds of things that we can use that we can provide? that told me I was absolutely crazy And for me, and you know, this is always like a thing Thank you so much Kevin for joining us on the CUBE. and we'll see you on the CUBE sometimes soon. Excellent, we want to thank you for watching.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Kevin Baillie | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
Kevin | PERSON | 0.99+ |
42 minutes | QUANTITY | 0.99+ |
IsoTropic | ORGANIZATION | 0.99+ |
China | LOCATION | 0.99+ |
Game of Thrones | TITLE | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Denzel Washington | PERSON | 0.99+ |
Autodesk | ORGANIZATION | 0.99+ |
December | DATE | 0.99+ |
Star Trek | TITLE | 0.99+ |
Zemeckis | PERSON | 0.99+ |
The Walk | TITLE | 0.99+ |
thousands | QUANTITY | 0.99+ |
three weeks | QUANTITY | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
millions | QUANTITY | 0.99+ |
two titles | QUANTITY | 0.99+ |
Atomic Fiction | ORGANIZATION | 0.99+ |
Chaos Group | ORGANIZATION | 0.99+ |
One | QUANTITY | 0.99+ |
Conductor Technologies | ORGANIZATION | 0.99+ |
Robert Zemeckis | PERSON | 0.99+ |
more than a thousand dollars | QUANTITY | 0.99+ |
New York | LOCATION | 0.99+ |
more than fifteen hundred bucks a day | QUANTITY | 0.99+ |
one | QUANTITY | 0.98+ |
six years ago | DATE | 0.98+ |
Arnold | ORGANIZATION | 0.98+ |
NAB 2017 | EVENT | 0.98+ |
last month | DATE | 0.98+ |
millions of hours | QUANTITY | 0.98+ |
Twin Towers | LOCATION | 0.97+ |
Deadpool | TITLE | 0.97+ |
first | QUANTITY | 0.97+ |
three minutes | QUANTITY | 0.97+ |
first three quarters | QUANTITY | 0.97+ |
hundreds of thousands of dollars | QUANTITY | 0.97+ |
CUBE | ORGANIZATION | 0.97+ |
VFX | ORGANIZATION | 0.96+ |
three thousand licenses | QUANTITY | 0.96+ |
single processor | QUANTITY | 0.96+ |
over a thousand years | QUANTITY | 0.96+ |
NAB show | EVENT | 0.95+ |
1974 | DATE | 0.95+ |
three | QUANTITY | 0.95+ |
today | DATE | 0.94+ |
seven years ago | DATE | 0.91+ |
Conductor | ORGANIZATION | 0.91+ |
9.1 million processor | QUANTITY | 0.9+ |
HGST | ORGANIZATION | 0.89+ |
three thousand nodes | QUANTITY | 0.89+ |
Azure | ORGANIZATION | 0.89+ |
Conductor | TITLE | 0.87+ |
two data sets | QUANTITY | 0.82+ |
hundred visual effects | QUANTITY | 0.81+ |
a thousand computers | QUANTITY | 0.76+ |
CGI | ORGANIZATION | 0.74+ |
Narrator: Live from | TITLE | 0.74+ |
things | QUANTITY | 0.73+ |
Atomic | TITLE | 0.72+ |
thousand computers | QUANTITY | 0.71+ |
end | DATE | 0.71+ |
Azure | TITLE | 0.7+ |
Atomic Fiction | TITLE | 0.69+ |
NAB | EVENT | 0.67+ |
Zync | ORGANIZATION | 0.66+ |
ton of work | QUANTITY | 0.65+ |
every hotdog | QUANTITY | 0.61+ |
Vijay Vijayasanker & Cortnie Abercrombie, IBM - IBM CDO Strategy Summit - #IBMCDO - #theCUBE
(lively music) >> To the world. Over 31 million people have viewed theCUBE and that is the result of great content, great conversations and I'm so proud to be part of theCUBE, of a great team. Hi, I'm John Furrier. Thanks for watching theCUBE. For more information, click here. >> Narrator: Live from Fisherman's Wharf in San Francisco, it's theCUBE. Covering IBM Chief Data Officer Strategy Summit Spring 2017. Brought to you by IBM. >> Hey, welcome back everybody. Jeff Frick here at theCUBE. It is lunchtime at the IBM CDO Summit. Packed house, you can see them back there getting their nutrition. But we're going to give you some mental nutrition. We're excited to be joined by a repeat performance of Cortnie Abercrombie. Coming on back with Vijay Vijayasankar. He's the GM Cognitive, IOT, and Analytics for IBM, welcome. >> Thanks for having me. >> So first off, did you eat before you came on? >> I did thank you. >> I want to make sure you don't pass out or anything. (group laughing) Cortnie and I both managed to grab a quick bite. >> Excellent. So let's jump into it. Cognitive, lot of buzz, IoT, lot of buzz. How do they fit? Where do they mesh? Why is it, why are they so important to one another? >> Excellent question. >> IoT has been around for a long time even though we never called it IoT. My favorite example is smart meters that utility companies use. So these things have been here for more than a decade. And if you think about IoT, there are two aspects to it. There's the instrumentation by putting the sensors in and getting the data. And the insides aspect where there's making sense of what the sensor is trying to tell us. Combining these two, is where the value is for the client. Just by putting outwardly sensors, it doesn't make much sense. So, look at the world around us now, right? The traditional utility, I will stick with the utilities to complete the story. Utilities all get dissected from both sides. On one hand you have your electric vehicles plugging into the grid to draw power. On the other hand, you have supply coming from solar roofs and so on. So optimizing this is where the cognitive and analytics kicks in. So that's the beauty of this world. All these things come together, that convergence is where the big value is. >> Right because the third element that you didn't have in your original one was what's going on, what should we do, and then actually doing something. >> Vijay: Exactly. >> You got to have the action to pull it all together. >> Yes, and learning as we go. The one thing that is available today with cognitive systems that we did not have in the past was this ability to learn as you go. So you don't need human intervention to keep changing the optimization algorithms. These things can learn by itself and improve over time which is huge. >> But do you still need a person to help kind of figure out what you're optimizing for? That's where, can you have a pure, machine-driven algorithm without knowing exactly what are you optimizing for? >> We are no where close to that today. Generally, where the system is super smart by itself is a far away concept. But there are lots of aspects of specific AI optimizing a given process that can still go into this unsupervised learning aspects. But it needs boundaries. The system can get smart within boundaries, the system cannot just replace human thought. Just augmenting our intelligence. >> Jeff: Cortnie, you're shaking you head over there. >> I'm completely in agreement. 
We are no where near, and my husband's actually looking forward to the robotic apocalypse by the way, so. (group laughing) >> He must be an Arnold Schwarzenegger fan. >> He's the opposite of me. I love people, he's like looking forward to that. He's like, the less people, the better. >> Jeff: He must have his Zoomba, or whatever those little vacuum cleaner things are called. >> Yeah, no. (group laughing) >> Peter: Tell him it's the fewer the people, the better. >> The fewer the people the better for him. He's a finance guy, he'd rather just sit with the money all day. What does that say about me? Anyway, (laughing) no, less with the gross. Yeah no, I think we're never going to really get to that point. Because we always as people always have to be training these systems to think like us. So we're never going to have systems that are just autonomically out there without having an intervention here and there to learn the next steps. That's just how it works. >> I always thought the autonomous vehicle, just example, cause it's just so clean. You know, if somebody jumps in front of the car, does the car hit the person, or run into the ditch? >> Where today a person can't make that judgment very fast. They're just going to react. But in computer time, that's like forever. So you can actually make rules. And then people go bananas, well what if it's a grandma on one side and kids on the other? Which do you go? Or what if it's a criminal that just robbed a bank? Do you take him out on purpose? >> Trade off. >> So, you get into a lot of, interesting parameters that have nothing to do necessarily with the mechanics of making that decision. >> And this changes the fundamentals of computing big time too, right? Because a car cannot wait to ping the Cloud to find out, you know, should I break, or should I just run over this person in front of me. So it needs to make that determination right away. And hopefully the right decision which is to break. But on the other hand, all the cars that have this algorithm, together have collective learning, which needs some kind of Cloud computing. So this whole idea of Edge computing will come and replace a lot of what exists today. So see this disruption even behind the scenes on how we architect these systems, it's a fascinating time. >> And then how much of the compute, the store is at the Edge? How much of the computed to store in the Cloud and then depending on the decision, how do you say it, can you do it locally or do you have to send it upstream or break it in pieces. >> I mean if you look at a car of the future, forget car of the future, car of the present like Tesla, that has more compute power than a small data center, at multiple CPU's, lots of RAM, a lot of hard disk. It's a little Cloud that runs on wheels. >> Well it's a little data center that runs on wheels. But, let me ask you a question. And here's the question, we talk about systems that learn, cognitive systems that are constantly learning, and we're training them. How do we ensure that Watson, for example is constantly operating in the interest of the customer, and not the interest of IBM? Now there's a reason I'm asking this question, because at some point in time, I can perceive some other company offering up a similar set of services. I can see those services competing for attention. As we move forward with increasingly complex decisions, with increasingly complex sources of information, what does that say about how these systems are going to interact with each other? 
>> He always with the loaded questions today. (group laughing) >> It's an excellent question, it's something that I worry about all the time as well. >> Something we worry about with our clients too. >> So, couple of approaches by which this will exist. And to begin with, while we have the big lead in cognitive computing now, there is no hesitation on my part to admit that the ecosystem around us is also fast developing and there will be hefty competition going forward, which is a good thing. 'Cause if you look at how this world is developing, it is developing as API. APIs will fight on their own merits. So it's a very pluggable architecture. If my API is not very good, then it will get replaced by somebody else's API. So that's one aspect. The second aspect is, there is a difference between the provider and the client in terms of who owns the data. We strongly believe from IBM that client owns the data. So we will not go in and do anything crazy with it. We won't even touch it. So we will provide a framework and a cartridge that is very industry specific. Like for example, if Watson has to act as a call center agent for a Telco, we will provide a set of instructions that are applicable to Telco. But, all the learning that Watson does is on top of that clients data. We are not going to take it from one Telco and put it in another Telco. That will stay very local to that Telco. And hopefully that is the way the rest of the industry develops too. That they don't take information from one and provide to another. Even on an anonymous basis, it's a really bad idea to take a clients data and then feed it elsewhere. It has all kinds of ethical and moral consequences, even if it's legal. >> Absolutely. >> And we would encourage clients to take a look at some of the others out there and make sure that that's the arrangement that they have. >> Absolutely, what a great job for an analyst firm, right? But I want to build upon this point, because I heard something very interesting in the keynote, the CDO of IBM, in the keynote this morning. >> He used a term that I've thought about, but never heard before, trust as a service. Are you guys familiar with his use of that term? >> Vijay: Yep. >> Okay, what does trust as a service mean, and how does it play out so that as a consumer of IMB cognitive services, I have a measurable difference in how I trust IBM's cognitive services versus somebody else? >> Some would call that Blockchain. In fact Blockchain has often been called trust as a service. >> Okay, and Blockchain is probably the most physical form of it that we can find at the moment, right? At the (mumbles) where it's open to everybody but then no one brand section can be tabbed by somebody else. But if we extend that concept philosophically, it also includes a lot of the concept about identity. Identity. I as a user today don't have an easy way to identify myself across systems. Like, if I'm behind the firewall I have one identity, if I am outside the firewall I have another identity. But, if you look at the world tomorrow where I have to deal with a zillion APIs, this concept of a consistent identity needs to pass through all of them. It's a very complicated a difficult concept to implement. So that trust as a service, essentially, the light blocking that needs to be an identity service that follows me around that is not restrictive to an IBM system, or a Nautical system or something. >> But at the end of the day, Blockchain's a mechanism. >> Yes. 
>> Trust in the service sounds like a-- >> It's a transparency is what it is, the more transparency, the more trust. >> It's a way of doing business. >> Yes. >> Sure. >> So is IBM going to be a leader in defining what that means? >> Well look, in all cases, IBM has, we have always strove, what's the right word? Striven, strove, whatever it. >> Strove. >> Strove (laughing)? >> I'll take that anyway. >> Strove, thank you. To be a leader in how we approach everything ethically. I mean, this is truly in our blood, I mean, we are here for our clients. And we aren't trying to just get them to give us all of their data and then go off and use it anywhere. You have to pay attention sometimes, that what you're paying for is exactly what you're getting, because people will try to do those things, and you just need to have a partner that you trust in this. And, I know it's self-serving to say, but we think about data ethics, we think about these things when we talk to our clients, and that's one of the things that we try to bring to the table is that moral, ethical, should you. Just because you can, and we have, just so you know walked away from deals that were very lucrative before, because we didn't feel it was the right thing to do. And we will always, I mean, I know it sounds self-serving, I don't know how to, you won't know until you deal with us, but pay attention, buyer beware. >> You're just Cortnie from IBM, we know what side you're on. (group laughing) It's not a mystery. >> Believe me, if I'm associated with it, it's yeah. >> But you know, it's a great point, because the other kind of ethical thing that comes up a lot with data, is do you have the ethical conversation before you collect that data, and how you're going to be using it. >> Exactly. >> But that's just today. You don't necessarily know what's going to, what and how that might be used tomorrow. >> Well, in other countries. >> That's what gets really tricky. >> Future-proofing is a very interesting concept. For example, vast majority of our analytics conversation today is around structure and security, those kinds of terms. But, where is the vast majority of data sitting today? It is in video and sound files, which okay. >> Cortnie: That's even more scary. >> It is significantly scary because the technology to get insights out of this is still developing. So all these things like cluster and identity and security and so on, and quantum computing for that matter. All these things need to think about the future. But some arbitrary form of data can come hit you and all these principles of ethics and legality and all should apply. It's a very non-trivial challenge. >> But I do see that some countries are starting to develop their own protections like the General Data Protection Regulation is going to be a huge driver of forced ethics. >> And some countries are not. >> And some countries are not. I mean, it's just like, cognitive is just like anything else. When the car was developed, I'm sure people said, hey everybody's going to go out killing people with their cars now, you know? But it's the same thing, you can use it as a mode of transportation, or you can do something evil with it. It really is going to be governed by the societal norms that you live in, as to how much you're going to get away with. And transparency is our friend, so the more transparent we can be, things like Blockchain, other enablers like that that allow you to see what's going on, and have multiple copies, the better. 
>> All right, well Cortnie, Vijay, great topics. And that's why gatherings like this are so important to be with your peer group, you know, to talk about these much deeper issues that are really kind of tangental to technology but really to the bigger picture. So, keep getting out on the fringe to help us figure this stuff out. >> I appreciate it, thanks for having us. >> Thanks. >> Pleasure. All right, I'm Jeff Frick with Peter Burris. We're at the Fisherman's Wharf in San Francisco at the IBM Chief Data Officer Strategy Summit 2017. Thanks for watching. (upbeat music) (dramatic music)
SUMMARY :
and that is the result of great content, Brought to you by IBM. It is lunchtime at the IBM CDO Summit. Cortnie and I both managed to grab a quick bite. So let's jump into it. On the other hand, you have supply Right because the third element that you didn't have in the past was this ability to learn as you go. the system cannot just replace human thought. forward to the robotic apocalypse by the way, so. He's like, the less people, the better. Jeff: He must have his Zoomba, or whatever those The fewer the people the better for him. does the car hit the person, or run into the ditch? a grandma on one side and kids on the other? interesting parameters that have nothing to do to find out, you know, should I break, How much of the computed to store in the Cloud forget car of the future, car of the present like Tesla, of the customer, and not the interest of IBM? He always with the loaded questions today. that I worry about all the time as well. And hopefully that is the way that that's the arrangement that they have. the CDO of IBM, in the keynote this morning. Are you guys familiar with his use of that term? In fact Blockchain has often been called trust as a service. Okay, and Blockchain is probably the most physical form the more transparency, the more trust. we have always strove, what's the right word? And, I know it's self-serving to say, but we think about You're just Cortnie from IBM, we know what side you're on. is do you have the ethical conversation before you what and how that might be used tomorrow. It is in video and sound files, which okay. It is significantly scary because the technology But I do see that some countries are starting But it's the same thing, you can use it as a mode that are really kind of tangental to technology We're at the Fisherman's Wharf in San Francisco
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Telco | ORGANIZATION | 0.99+ |
Jeff Frick | PERSON | 0.99+ |
Peter Burris | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Jeff | PERSON | 0.99+ |
Vijay Vijayasankar | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
General Data Protection Regulation | TITLE | 0.99+ |
Cortnie | PERSON | 0.99+ |
second aspect | QUANTITY | 0.99+ |
Vijay | PERSON | 0.99+ |
Peter | PERSON | 0.99+ |
Tesla | ORGANIZATION | 0.99+ |
Cortnie Abercrombie | PERSON | 0.99+ |
tomorrow | DATE | 0.99+ |
Vijay Vijayasanker | PERSON | 0.99+ |
both sides | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
two aspects | QUANTITY | 0.99+ |
third element | QUANTITY | 0.99+ |
one aspect | QUANTITY | 0.98+ |
Spring 2017 | DATE | 0.98+ |
San Francisco | LOCATION | 0.98+ |
two | QUANTITY | 0.98+ |
both | QUANTITY | 0.98+ |
Arnold Schwarzenegger | PERSON | 0.97+ |
one | QUANTITY | 0.97+ |
first | QUANTITY | 0.97+ |
Over 31 million people | QUANTITY | 0.96+ |
more than a decade | QUANTITY | 0.95+ |
IBM Chief Data Officer | EVENT | 0.95+ |
this morning | DATE | 0.94+ |
Watson | ORGANIZATION | 0.91+ |
one thing | QUANTITY | 0.9+ |
Strategy Summit 2017 | EVENT | 0.9+ |
IBM CDO Summit | EVENT | 0.89+ |
Fisherman's Wharf | LOCATION | 0.88+ |
IOT | ORGANIZATION | 0.88+ |
Fisherman's Wharf | TITLE | 0.88+ |
#IBMCDO | ORGANIZATION | 0.87+ |
couple | QUANTITY | 0.86+ |
theCUBE | TITLE | 0.83+ |
one hand | QUANTITY | 0.82+ |
Chief Data Officer | EVENT | 0.8+ |
IBM CDO Strategy Summit | EVENT | 0.8+ |
theCUBE | ORGANIZATION | 0.77+ |
Strategy Summit | EVENT | 0.74+ |
one side | QUANTITY | 0.73+ |
Cognitive | ORGANIZATION | 0.7+ |
zillion APIs | QUANTITY | 0.65+ |
Zoomba | ORGANIZATION | 0.61+ |
IMB | ORGANIZATION | 0.6+ |
GM Cognitive | ORGANIZATION | 0.6+ |
Analytics | ORGANIZATION | 0.54+ |
#theCUBE | ORGANIZATION | 0.46+ |
Val Bercovici, CNCF - Google Next 2017 - #GoogleNext17 - #theCUBE
>> Announcer: Live, from Silicon Valley, it's the Cube. Covering Google Cloud Next 17. (ambient music) >> Okay, welcome back everyone. We are here live in Palo Alto for a special two days of coverage of Google Next 2017 events in San Francisco. Sold out, 10,000 plus people. Yeah, really, an amazing turn of events. Amazon Web Services Reinvent had 36,000, Google's nipping at their heels, although different, we're going to break down the differences with Google versus Amazon because they're really two different things and again, this is Cube coverage here in Palo Alto studio, getting reaction. Sponsored by Intel, thanks, Intel, for allowing us to continue the wall-to-wall coverage of the key events in the tech industry. Our next guest is Val Bercovici who's the boardmember of the Cloud Native Compute Foundation, boardmember. >> That's right. >> Welcome back, you were here last week from Mobile World Congress, great to see you. Silicon contributor, what your reaction to the Google keynote, Google news? Not a lot of news, we saw the SAP, that was the biggest news and the rest were showcasing customers, most of the customers were G Suite customers. >> Yeah, exactly. So, I would say my first reaction is bit of a rough keynote, you know, there's definitely not as quit as much polish as Microsoft had in their heyday and of course, Amazon nowadays in the Cloud era. But what's interesting to me is there's the whole battle around empathy right now. So, the next gen developers and the Clouderati talk about user empathy and that means understanding the workflow of the user and getting the user to consume more of your stuff, you know, Snapchat gets user empathy for the millennial generation but anybody else. Facebook as well. So, you see Google, we emphasize, even the Google Twitter account, it emphasizes developer productivity and they have pretty strong developer empathy. But what AWS has, Amazon with AWS is enterprise empathy, right, they really understand how to package themselves and make themselves more consumable right now for a lot of mainstream enterprises, they've been doing this for three, four years at their Reinvent events now. Whereas Google is just catching up. They've got great developer empathy but they're just catching up on enterprise empathy. Those are the main differences I see. >> Yeah, I think that's an important point, Val, great, great point, I think Amazon certainly has, and I wrote this in my blog post this morning, getting a lot of reaction from that, actually, and some things I want to drill down on the network and security side. Some Google folks DMing me we're going to do that. But really, Amazon's lead is way out front on this. But the rest, you know, call 'em IBM, not in any particular, IBM, Oracle, Google, SAP, others, put Salesforces, we're talking Sass and Adobe, they're all in this kind of pack. It's like a NASCAR, you know, pack and you don't know who's going to slimshot around and get out there. But they all have their own unique use cases, they're using their own products to differentiate. We're hearing Google and again, this is a red flag for me because it kind of smells like they're hiding the ball. G Suite, I get the workplace productivity is a Cloud app, but that's not pure Cloud conversations, if you look at the Gartner, Gartner's recent, last report which I had a chance to get a peek at, there's no mention of Sassifications, Google G Suite's not in there, so the way Cloud is strictly defined doesn't even include Sass. >> Yeah. 
>> If you're going to include Sass, then you got to include Salesforce in that conversation or Adobe or others. >> Exactly. >> So, this is kind of an optical illusion in my mind. And I think that's something that points to Google's lack of traction on customers in the enterprise. >> This is where behind the scenes, Kubernetes, is so important and why I'm involved with the the CNCF. If anything, the first wave of Clouded option particularly by enterprise was centered around the VM model. And you know, infrastructure's a service based on VMs, Amazon, AWS is the king of that. What we're seeing right now is developers in particular that are developing the next generation of apps, most of them are already on our phones and our tablets and our houses and stuff, which is, you know, all these Echo-style devices. That is a container-based architecture that these next gen applications are based on. And so, Kubernetes, in my mind, is really nothing more than Google's attempt to create as much of a container-based ecosystem at scale so that the natural home for container-based apps will be GCP as opposed to AWS. That's the real long term play in why Google's investing so heavily in Kubernetes. >> Is that counterintuitive? Is that a good thing? I mean, it sounds like they're trying to change the goalpost, if you will, to change the game because we had Joe Arnold on, the founder of Swiftstack and you know, ultimately, you know, Clouds are Clouds and inter-Clouding and multi-Cloud is important. Does Kubernete actually help the industry? Or is that more Google specific in your mind? >> I think it will help the industry but the industry itself is moving so rapidly, we're seeing server-less right now and functions of service, and so, I think the landscape is shifting away from what we would think of as either VM or container-based infrastructure service towards having the right abstractions. What I'm seeing is that, really, even the most innovative enterprises today don't really care about their per minute or per hour cost for a cycle of computer, a byte of, you know, network transferred or stored. They care about big table, big quarry, the natural language processing, visual search, and a whole category of these AI based applications that they want to base their own new revenue-generating products and services based on. So, it's abstraction now as a new battlefield. AWS brings that cult of modularity to it, they're delivering a lot of cool services that are very high level Lambda centered based on really cool modularity, whereas Google's doing it, which is very, very elegant abstraction. It's at the developer level, at the technical level, that's what the landscape is at right now. >> Are you happy with Google's approach because I think Google actually doesn't want to be compared to AWS in a way. I mean, from what I can see from the keynote... >> Only by revenue. (laughs) >> Well, certainly, they're going to win that by throwing G Suite on it but, I mean, this is, again, a philosophy game, right? I mean, Andy Jassy is very customer focused, but they don't have their own Sass app, except for Amazon which they don't count on the Cloud. So, their success is all about customers, building on Amazon. Google actually has its own customer and they actually include that in, as does Microsoft with Office 365. >> Yeah, that's the irony, is if we go back to enterprise empathy I think it's Microsoft has that legacy of understanding the enterprise better than all the others. 
And they're beginning to leverage that, we're definitely seeing, as you're sliding comfortably to a number two position behind AWS, but it really does come back to, you know, are you going to lead with a propeller head lead in technology which Google clearly has, they've got some of the most superior technology, we were rattling off some the speeds and feeds that one of their product managers shared with you this morning. They've had amazing technology, that's unquestioned. But they do have also is this reputation of almost flying in rarefied air when it comes to enterprises. >> What do you mean by that? >> What I mean by that is that most enterprise IT organizations, even the progressive ones, have a hard time relating to Google technology. It's too far out there, it's too advanced, in some cases, they just can't understand it. They've never been trained in college courses on it or even post-grad courses on it. MBA is older than three years old, don't even reference the Cloud. So, there's a lot of training, a lot of knowledge that has to be, you know, conducted on the enterprise side. AWS is packaged, that technology there is the modularity in such a way that's more consumable. Not perfect, but more consumable than any other Cloud render and that's why, with an early head start, they've got the biggest enterprise traction today. >> Yeah, I mean, and I'm really bullish on Google, I love the company, I've been following them since '98, a lot of friends here at Palo Alto, a lot of Googlers living in my neighborhood, they're all around us. Larry Page, seen him around town. Great, great company and very, always been kind of like an academic, speed of academic. Very strong, technically, and that is, clearly, they're playing that card, "We have the technology." So, I would just say that, to counter that argument would be if Google, I'm Google, I'm on the team, the guy in green and you know, lookit, what I want to do is, we want to be the intel for the Cloud. So, the hard and top is we don't really care if people are trained, should be so easy to use, training doesn't matter. So, I mean, that's really more of an arrogant approach, but I don't think Google's being arrogant in the Cloud. I think that ship has sailed, I think Google has kind of been humbled in the sense, in recognizing that the enterprise is hard, they're checking the boxes. They have a partner program. >> Yeah, you're right, I mean, if you take a look at their customers today, you've got Spotify, and Snap, and Evernote, and you know, Pokemon Go and Niantic, all of the leading edge technology companies that have gone mainstream that are, you know, startup oriented Snap, of course. They're on Google Cloud. But that's not enough, you know, the enterprise, I did a seminar just last week promoting Container World with Jim Forge from ADP. The enterprise is not homogeneous, the enterprise is complicated. The L word legacy is all over, what they have to budget and plan for. So, the enterprise is just a lot more complicated than Google will acknowledge right now. And I believe if they were to humanize some of their advanced technology and package it and price it in such a way that AWS, you know, where they're seeing success, they'll accelerate their inevitable sort of leap to being one of those top three contenders. >> So, I'm just reading some of my, I'm putting together because for the Google folks, I'm going to interview them, just prepping for this, but just networking alone, isolating Cloud resources. That's hard, right? 
So, you know, virtual network in the Cloud, Google's got the virtual network. You get multiple IP addresses, for instance, ability to move network interfaces and IPs between instances, and AS networking support. Network traffic logging, virtual network peering, manage NAT gateways, subnet level filtering, IP V stick support, use any CIDR including RC 1918. Multiple network interface instances, I mean, this is complicated! (laughs) It's not easy so, you know, I think the strategy's going to be interesting to see how, does Google go into the point to point solution set, or they just say, "This is what we got, take it or leave it," and try to change the game? >> That's where they've been up until now and I don't think it's working because they have very formidable competitors that are not standing still. So, I think they're going to have to keep upping their game, again, not in terms of better technology but in terms of better packaging, better accessibility to their technology. Better trust, if you will, overseas. Cloud is a global game, it's not US only. And trust is so critical, there's a lot of skepticism in Europe today with the latest Wikileaks announcements, or Asia Today around. Any American based Cloud provider truly being able to isolate and protect my citizen's data, you know, within my borders. >> I think Google Cloud has one fatal flaw that I, looking at all the data, is that and the analysis that we've been looking at with Bookie Bontine and our research is that there's one thing that jumps out at me. I mean, the rest are all, I look at as, you know, Google's got such great technologies, they can move up fast, they can scale up to code. But the one thing that's interesting is their architecture, the way they handle their architecture is they can't let customers dictate data where data's stored. That is a huge issue for them. And if, to your point, if a user in Germany is using an app and it's got to stay in Germany. >> This is back to the empathy disconnect, right? As an abstraction layer for a developer, what I want is exactly what Google offers. I don't want to care as a developer where the bits and bytes are stored, I want this consistent, uniform API, I want to do cool stuff with the data. The operation side, particularly within legal parameters, regulatory parameters, you know, all sorts of other costs and quality assurance parameters, they really care about where that data is stored, and that's where having more enterprise empathy, and their thinking, and their offerings, and their pricing, and their packaging will leapfrog Google to where they want to be today. >> Val Bercovici, great analysis, I mean, I would totally agree just to lock that in, their developer empathy is so strong. And their operational one needs to be, they got a blind spot there where they got to work on that. And this is interesting because people who don't know Google are very strong operations, it's not like they don't have any ops chops. (Val laughs) They're absolutely in the five nines, they are awesome operations. But they've been operations for themselves. >> Exactly. >> So, that's the distinction you're getting at, right? >> Absolutely. >> Okay, so the next question I got to ask you is back to the developer empathy, 'cause I think it's a really big opportunity for Google. So, pointing out the fatal flaw in my opinions in the data locality thing. But I think the opportunity for Google to change the game, using the developer community opportunity because you mentioned the Kubernetes. 
There is a huge, open source, I don't want to say transformation but an evolution to the next generation, you're starting to see machine learning and AI start to tease out the leverage of not just data now. Data's become so massive now, you have data sets. That can be addressable and be treated like software programs. So, data as code becomes a new dynamic with AI. So, with AI, with open source, you're seeing a lot of activity, CNCF, the Cloud Native Compute Foundation, folks should check that out, that's an amazing group, analytics foundation. This is an awesome opportunity for Google to use Kubernetes as saying, "Hey, we will make orchestration of application workloads." >> Absolutely. >> This is something, Amazon's been great with open source, but they don't get a lot of love... >> Amazon has a blind spot on containers, let's not, you know, let's not call, you know, let's call it the speed of speed, let's not, you know, beat around the bush, they do have a blind spot around containers. It is something they strategically have to get a hold of, they've got some really interesting proprietary offerings. But it's not a natural home for a Docker workflow, it's not a natural home for a Kubernetes workflow yet. And it's something they have to work on and AI as a use case could not be more pertinent to business today because it's that quote, you know, "The future is here "but unevenly distributed." That's exactly where AI is today, the businesses that are figuring it out are really leaping ahead of their competitors. >> We're getting some great tweets, my phone's blowing up. Val, you've got great commentary. I want to bring up, so, I've been kind of over the top with the comment that I've been making. It's maybe mischaracterized but I'll say it again. There seems to be a Cold War going on inside the communities between, as Kubernetes have done, we've seen doc, or we've seen Docker Containers be so successful in this service list, server list vision, which is absolutely where Cloud Native needs to be in that notion of, you know, separating out fiscal gear and addressability, making it completely transparent, full dev ops, if you will. To who's going to own the orchestration and where does it sit on the stack? And with Kubernetes, to me, is interesting is that it tugs at some sacred cows in the container world. >> Yes. >> And it opens up the notion of multi-Cloud. I mean, assume latency can be solved at some point, but... >> It's actually core religion, what impressed me about he whole Kubernetes community, and community is its greatest strength, by the way, is the fact that they had a religion on multi-Cloud from day one. It wasn't about, "We'll add it later "'cause we know it's important," it's about portability and you know, even Docker lent that to the community. Portability is just a number one priority and now portability, at scale, across multiple Clouds, dynamically orchestrated, not through, you know, potential for human error, human interventions we saw last week. That the secret sauce there to stay. >> I think not only is, a Cold War is a negative connotation, but I think it's an opportunity to be sitting in the sun, if you will, on the beach with a pina colada because if you take the Kubernetes trend that's got developer empathy with portability, that speaks to what developers want, I want to have the ability to write code, ship it up to the network, and have it integrate in nicely and seamlessly so, you know, things can self-work and do all that. And AI can help in all those things. 
Connecting with operational challenges. So, what is, in your mind, that intersection? Because let's just say that Kubernetes is going to develop a nice trajectory which it has now and continues to be a nice way to galvanize a community around orchestration, portability, etc. Where does that intersect with some of the challenges and needs for operational effectiveness and efficiency? >> So, the dirtiest secret in that world is data gravity, right? It's all well and fine to have workload portability across, you know, multiple instances and a cluster across multiple Clouds, so to speak. But data has weight, data has mass and gravity, and it's very hard to move particularly at scale. Kubernetes only in the last few releases, at a furious pace of evolution, 1.4, 1.5, has a notion of provisioning persistent volumes, this thing they affectionately called pet sets that are now stateful sets, I love that name. >> Cattle. >> Exactly. (laughs) So, Google is waking up and Kubernetes, I should say, in particular is waking up to the whole notion that managing data is really that last mile problem of Cloud portability and operational maturity. And planning around data gravity, and overcoming data gravity where you can through meta-operational procedures, is where this thing is going to really take off. >> I think that's where Google, I like Google's messaging, I like their posture on machine learning AI, I think that's key. But Amazon has been doing AI, they've got machine learning as a service, they've had Kinesis for a while. In fact, Redshift and Kinesis were their fastest growing services before Aurora became the big thing that they had. So, I think, you know, they're interested in the ingest, with the trucks, and the Snowmobile stuff. So I think certainly, Amazon's been doing that data and then rolling in some sort of AI. >> And they've been humanizing it better, right? I can relate to some of Amazon's offering and sometimes I have it in the house. You know, so, the packaging and just the consumability of these Amazon services today is ahead of where Google is and Google arguably has the superior technology. >> Yeah, and I think, you know, I was laying out my analysis of Google versus Amazon but I think it's not fair to try to compare them too much because Google is just making their opening moves on the chessboard. Because they had Diane Green, got to give her credit, she's really starting behind. And that's been talked about but they are serious, they're going to get there. The question is what does an enterprise need to do? So, your advice to enterprise would be what? Stick with the use cases that are either Google specific apps or Cloud Native, where do you go, how do you...? >> I would say to remember the lock-in days of the Linux vendors and even Microsoft in their heyday and definitely think multi-Cloud, you know, Cloud first is fine. But I think we need data first in a Cloud before a particular Cloud first. Always keep your options open, seek the highest levels of abstraction, particularly as you're innovating early on and fast failing in the Cloud. Don't go low right away, go low later on when you're operationalizing and scaled and looking to squeeze efficiencies out of a new product or service. >> Don't go low, you mean don't go low in the stack? >> Don't go low in the stack, exactly. Start very high in the stack. >> What would be an example? 
>> Lambda, you know, taking advantage of, if we bring in Kinesis, IoT workflows, all sorts of sensor data coming in from the Edge. Don't code that for efficiency day one and switch to Kafka or something else that's more sophisticated, but keep it really high level as events triggering off, whether it's the IoT events from the sensor inputs or whether it's S3 events, DynamoDB events. Write your functions at a very, very high level. >> Yeah. >> Get the workflows right. Pay a bit more money up front, pay premium for the fast... >> Well, there's also Bootstraps and the Training Channel Digimation, so, with Google, pick some things that are known out there. But you mentioned IoT and one of the things I was kind of disappointed in the keynote today, there wasn't much talk about IoT. You're not seeing IoT in the Google story. >> That may come up in tomorrow's keynote, it may come up tomorrow in a more technical context. But you're right, it's an area both Azure and AWS have a monster of a lead right now, as they've had really good SDKs out there to be able to create workflows without even being an expert in some of the devices that you know, you might own and maintain. >> Google's got some differentiation, they've got something, I'll highlight one that I like that I think is really compelling. TensorFlow. TensorFlow has got a lot of great traction and then Intel is writing chips with their Skylake product that actually runs much faster silicon... >> What was that, Nvidia? You know, it's a GPU game as much as a CPU game when it comes to machine learning. And it's just... >> What does that mean for you? I mean, that's exciting, you smile on that, I get geeked out on that because if you think about that, if you can have a relationship between the silicon and software, what does it mean from an impact standpoint? Do you think that's going to be a good accelerant for the game? >> Massive accelerant, you know, and this is where we get into sort of more rarefied air with Elon Musk's quote around the fact we'll need universal income for society. There are a lot of static tasks that are automated today. There's more and more dynamic tasks now that these AI algorithms, through machine learning, can be trained to conduct in a very intelligent manner. So, more and more task based work all over the world, including in a robotic context but also call centers, stock brokerage, for example, it's been demonstrated that AI ML algorithms are superior to humans nine times out of ten in terms of recommending stocks. So, there's a lot of white collar, as well as blue collar, work that's just going to be augmented and then eliminated with these technologies and the fact that you have major players, economies of scale such as Intel and Nvidia and so forth accelerating that, making it affordable, fast, low power in certain edge contexts. That's, you know, really good for the industry. >> So, day one of two days of coverage here with Google, just thoughts real quick on what Google needs to do to really conquer the enterprise and really be credible, viable, successful, number two, or leader in the enterprise? >> I'm a big fan, you know, I've had personal experiences with fast following as opposed to leading and innovating sometimes in terms of getting market traction. I think they should unabashedly, unashamedly examine what Microsoft or what Amazon are doing right in the Cloud. 
Because you know, simple things like conducting a bit more of a smooth keynote, Google doesn't seem to have mastered it yet, right now in the Cloud space. And it's not rocket science, but shamelessly copying what works, shamelessly copying the packaging and the humanization of some of the advanced technologies that Amazon and Microsoft have done in particular. And then applying their technical superiority, you know, their uptime availability advantages, their faster networks, their strong consistency which is a big deal for developers across their regions. Emphasizing their strengths after they package and make their technology more consumable. As opposed to leading with the tech specs. >> And you have a lot of experience in the enterprise, table stakes out there that are pretty obvious that they need to check the boxes on, and would be what? >> A very good question, I would say, first and foremost, you really have to focus on more, you know, transparent pricing. There's a whole black art in terms of optimizing your AWS usage, and an industry that's formed around that. I think Google has, and their blogs advertise, a lot of advantages in the granularity, in the efficiency of their auto scaling up and down. But businesses don't really map that, they don't think of that first even though it can save them millions of dollars as they do move to Cloud first approaches. >> Yeah and I think Google has got to shake that academic arrogance, in a way, that they've had a reputation for. Not that that's a bad thing, I'll give you an example, I love the fact that Google leads a lot of price performance on many levels in the Cloud, yet their SLAs are kind of wonky here and there. So, it's like, okay, enterprises like SLAs. You got to nail that. And then maybe keep their price a little high here, they can make more money, but... So, you were saying, is that enterprise might not get the fact that it's such a good deal. >> It's like enterprise sales 101, you talk about, you know, the operational benefits but you also talk about financial benefits and business benefits. Casting their technical superiority into those three contexts would do them a world of good as they seek more and more enterprise opportunities. >> Alright, Val Bercovici, CTO, and also on the board of the Cloud Native Compute Foundation known as CNCF, a newly formed organization, part of the Linux Foundation. Really looking at the orchestration, looking at the containers, looking at Kubernetes, looking at a whole new world of app enablement. Val, thanks for the company, great to see you. Turning out to be a guest contributor here in the Cube studio, appreciate his time. This is the Cube, two days of live coverage. Hope to have someone from Google on the security and network side coming in and calling in, we're going to try to set that up, a lot of conversations happening around that. Lot of great stuff happening at Google Next, we've got all the wall-to-wall coverage, reporters on the ground in San Francisco as well as analysts. And of course, in studio reaction here in Palo Alto. We'll be right back. (ambient music)
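To make the "stay high in the stack" advice from this conversation concrete, here is a minimal sketch of the kind of event-triggered function described: an S3 object-created event driving a Go handler. It assumes the aws-lambda-go packages; the handler name and what it does with each record are illustrative, not something from the interview.

```go
// Minimal sketch of a high-level, event-triggered function in Go.
// Assumes the github.com/aws/aws-lambda-go packages; names are illustrative.
package main

import (
	"context"
	"log"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
)

// handleS3Event is invoked with a batch of S3 records; the platform owns
// scaling, retries, and wiring, so the body stays at the level of business events.
func handleS3Event(ctx context.Context, evt events.S3Event) error {
	for _, rec := range evt.Records {
		log.Printf("object %s/%s created (%d bytes)",
			rec.S3.Bucket.Name, rec.S3.Object.Key, rec.S3.Object.Size)
		// Kick off a downstream workflow here instead of hand-rolling ingestion plumbing.
	}
	return nil
}

func main() {
	lambda.Start(handleS3Event)
}
```

The point is the shape of the workflow: the function reacts to S3 or DynamoDB events at a high level of abstraction, and the efficiency tuning can come later once the workflow is right.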
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
IBM | ORGANIZATION | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
Larry Page | PERSON | 0.99+ |
CNCF | ORGANIZATION | 0.99+ |
Val Bercovici | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Google | ORGANIZATION | 0.99+ |
Nvidia | ORGANIZATION | 0.99+ |
Germany | LOCATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Cloud Native Compute Foundation | ORGANIZATION | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
San Francisco | LOCATION | 0.99+ |
Europe | LOCATION | 0.99+ |
Diane Green | PERSON | 0.99+ |
Andy Jassy | PERSON | 0.99+ |
Joe Arnold | PERSON | 0.99+ |
Gartner | ORGANIZATION | 0.99+ |
Linux Foundation | ORGANIZATION | 0.99+ |
Silicon Valley | LOCATION | 0.99+ |
Adobe | ORGANIZATION | 0.99+ |
Jim Forge | PERSON | 0.99+ |
two days | QUANTITY | 0.99+ |
last week | DATE | 0.99+ |
SAP | ORGANIZATION | 0.99+ |
ten | QUANTITY | 0.99+ |
nine times | QUANTITY | 0.99+ |
three | QUANTITY | 0.99+ |
Craig McLuckie, Google | Google Cloud Platform 2014
(upbeat music) >> Live from the Mission Bay Conference Center in San Francisco, California, it's theCUBE at Google Cloud Platform Live. Here are your hosts, John Furrier and Jeff Frick. >> Okay welcome back everyone, we are live. This is theCUBE in San Francisco, California for Google Platform Conference Live, their developer conference for the cloud. I'm John Furrier, the founder of SiliconANGLE, Jeff Frick, my cohost, and we're excited to have CUBE alumni but also man about town coming to talk about containers, Kubernetes. We have Craig McLuckie, product manager at Google. Named the product Kubernetes. Welcome back. >> Thank you. It's great to be back on theCUBE. >> As I said, you're the man about town. Containers are the hottest thing going on. Really enabling a lot of new change. A lot of solidarity in the developer community around bringing cloud together, right? You're seeing people go, wow, containers are not a new concept. Docker has brought together the concept and made a huge push, just the ball got moved down the field big time. And then Kubernetes kind of tying it all together and you guys are open sourcing it. I wanted to first talk about, from your perspective, what's changed since VMware where we had a great conversation around Kubernetes? Obviously that was front and center in VMware's show, which is a huge IT enterprise vote of confidence. So now, here at Google, core developers. Large scale, backend network interconnect stuff going on. You almost connect the dots, right? Native developers really cranking out the apps? Large scale interconnect? There's a lot in the middle there between those bookends. What's changed? >> So a couple things I think have changed since I last spoke to theCUBE at VMworld. The first is we've seen an amazing amount of velocity around the Kubernetes community. Not just what Google's been doing but also what our open source community members have been contributing. And we're seeing a very fast acceleration of the overall platform. Moving quickly towards operational maturity, you know getting closer to production readiness and introducing a lot of features that are really needed to both run real world applications and to go to new places, to go to a variety of new clouds. We're seeing the reality of a very highly portable and maturing way to build container based applications emerging. That's been very exciting. I think the other thing that's really interesting here is the way that we at Google have been introducing Kubernetes directly into the Google Cloud platform. Today we announced a new product called Google Container Engine which provides the quickest and easiest way to get a Kubernetes cluster up and running and managed for you on Google Cloud platform. And we're very excited about how easy it's making it for our customers to access this new way of building applications. >> Talk about this Container Engine because obviously App Engine's had huge success. Little bit of learning curve but you guys have some core front end developers that you're making that easier now but what is a Container Engine? Is it a Docker engine? Is it Docker compatible? Is it a whole new animal? What is it? >> That's great, I'm glad you asked that question. I would start by saying this, at Google we have Google Compute Engine which offers powerful, flexible, fast booting VMs and at the other end of the spectrum we've had App Engine which offers a highly managed, very efficient way to get web applications up and running. 
And what we've encountered with our customers is that there is no natural way to move from one world to the other world. There's no connective tissue that exists in the middle that lets our customers think about building applications that are running on a cloud computer rather than just running on a virtual machine. And so what Google Container Engine is is a technology that lets our customers program at the cluster level. So Docker has provided this amazingly productive way to package up an application and deploy it into a node. Docker has done a great job of taking a lot of technologies that existed and making them incredibly accessible to developers. But the reality, in our experience, is that at least 80% of our customers' cost of maintaining applications comes out of the operation space so Kubernetes and Google Container Engine are an operationally viable way to build these distributed applications. It really moves our customers from thinking about deploying things into individual virtual machines to instead saying, hey, I'm just going to drop this into this cluster and it will all be wired together so I can take these little Lego building blocks I've got called containers, piece them together in ways that are intuitive and then have a very smart and effective system to run those for me on my behalf. >> So basically a pool of VMs could be available to a developer, if I get this right? So you're saying, I'm a developer, I don't have to worry about the dependencies, by VMware versus another form factor? I just let the container deal with that? Is that-- >> What we've done, yes, that's exactly right, we've created this strong separation between infrastructure operations and application operations. Docker has created a portable framework to take basically a binary and run it anywhere which is an amazing capability. But that's not enough. You also need to be able to manage that with a framework that can run anywhere so the union of Docker and Kubernetes provides this framework where you're completely abstracted from the underlying infrastructure. You could use VMware, you could use a Red Hat OpenStack deployment, you could run on another major cloud provider like Rackspace or IBM and you could just build this application and deploy it there and experience this very powerful cluster first way of building and managing that app. >> Cluster first, I haven't heard that one. >> It's not a cluster you-know-what, it's a cluster first. (laughing) That trumps cloud first from Microsoft but let's go back to Kubernetes. You named the product, what does it mean? I mean it's kind of a, you don't look at a tech name, you say, it's not like alpha one, ya know? >> Kubernetes is the Greek word for the helmsman of a ship. I was looking to find a name and turns out, there's a lot of cluster management technologies and a lot of the obvious names were taken and so I had the inspiration of what is this doing? It's actually the thing that's overseeing the whole of your operation, and is planning what goes where and managing it. So Kubernetes is the helmsman of your cluster group, it's the thing that manages it. >> Did you design the algorithm to stay away from icebergs? (laughing) That's the key thing, you don't want to crash the system. But that's the challenge, you know, just joking aside, orchestration is really a hard thing. That's been a cloud phenomenon, automation. Everyone's been talking about, oh we have management software that automates and orchestrates cloud resources. 
But now in a cloud environment, it's more challenging. Talk about what Kubernetes does differently than older approaches to orchestration. >> I think this is a very, very important consideration. When I look at the way that orchestration's been done traditionally, you tend to think about your application as being deeply tied to the underlying piece of infrastructure, so your orchestration process is provision me a basic machine, go get the packages I need, deploy my application pieces, wire it in explicitly to all the other pieces of my system and so you have to kind of build this relatively fragile system where all the pieces are tied together and deeply coupled. What Kubernetes has done is provide a framework where you have a very principled, almost Lego building block that you can stick together and say, I want one of these things, I want it replicated six times, and I want it wired in to these other pieces without actually having to know about where those other pieces are deployed, how they relate to one another. It really is realizing this highly decoupled, very principled way of thinking about your environment as a cluster where you just drop your packages in and they're all wired together using virtualized networking and using this cluster centric paradigm and it radically, radically reduces the cost of operations. I could just give you an example of that. In the old days of Google, before we had these technologies inside the house, it was all we could do to keep the lights on. Like every day was an adventure, it was very hard, because our operations had our application pieces deeply tied into the physical infrastructure. When we introduced the system internally known as Borg, we changed the game. In less than a year-- >> Hold on, the name is Borg? >> What was it called? >> Borg? >> Borg. >> Borg. >> Internally known as Borg. (laughing) >> Like connected to everything, like the Microsoft Borg, that's at Microsoft but Microsoft used to be called-- >> I was thinking more Arnold Schwarzenegger, but that's alright. >> Continue. I just wanted to make sure we heard that right. >> We literally doubled the number of production services we were running within a year. It's just so much easier to run things at scale. >> So provisioning, managing, it just makes a smoother operation? Smooth sailing if you will? >> It's really trying to hide provisioning, managing, right? You're basically, I have an app and I want to build it easily and then I want to deploy it easily and then I want it to be able to scale easily. >> Yes. >> Without having to go back and reconnect it to more stuff. It's funny because I think most people think that that's what clouds have already always done, right? There's basically compute, networking and storage that's just in small units, virtually available to assemble however I want. But you say it, I used to have to still assemble it and disassemble it, now it's just-- >> Exactly. >> It's just plugging in. >> That's the challenge. The way we've seen cloud evolving has disappointed us a little bit because it really is just a re-manifestation of the same existing first generation way of thinking about application development, application provisioning. If you challenge a lot of the fundamental assumptions, if you really step back and think about is there a better way to do this? If I have all this incredibly fungible resource that can turn up and turn down, is there a better way to build applications? Kubernetes is our invitation to the community to participate in defining that thing. 
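The "Lego building block" idea above maps directly onto how a Kubernetes workload is declared. Below is a rough sketch of asking for one workload replicated six times, assuming the modern k8s.io/api and client-go packages (the 2014-era equivalent of this object was a replication controller); the names, image, and namespace are illustrative only.

```go
// Sketch: declare a workload, replicated six times, and let the cluster keep it that way.
// Assumes a recent client-go / k8s.io/api; names and image are made up for illustration.
package main

import (
	"context"
	"log"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	// Load the local kubeconfig; error handling kept minimal for brevity.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	labels := map[string]string{"app": "hello-web"}
	deploy := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "hello-web"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(6), // "I want it replicated six times"
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "web",
						Image: "gcr.io/example-project/hello-web:1.0", // illustrative image
						Ports: []corev1.ContainerPort{{ContainerPort: 8080}},
					}},
				},
			},
		},
	}

	// No machine is named anywhere; the scheduler decides where the replicas land.
	_, err = clientset.AppsV1().Deployments("default").
		Create(context.TODO(), deploy, metav1.CreateOptions{})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("declared hello-web with 6 replicas")
}
```

The declaration never mentions a machine; the cluster decides where the six replicas run and keeps that count in the face of failures, which is the reduction in the cost of operations being described here.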
We think it is a better way to build applications. We know it because we've been doing this for 10 years and it works really well for us. >> So talk about the open source angle because one, Kubernetes is open source, we've reported that live when we last chatted. Docker has huge success with their open source model. That's not well known in the mainstream world, the nuance of how developers really are engaged and motivated to play with Docker, which has its own flywheel effect which is very viral in its network effect. What's your strategy with Kubernetes? Is it standard open source blocking and tackling? Are there things you're doing to prime the pump? Is there a magical formula you guys are really nurturing and fostering? >> I am very happy with the way that the project's been run and it's been humbling to see the amount of adoption success we've had. I think that this manner of operating where we built Kubernetes as an open source project with the community, and then we take it and take exactly that and we turn it into a service and add a lot of high value capabilities to it, is a pattern that's working very well for us. It's massively increased our velocity because it's not just us that are actually developing the project, we have amazing contributions from people like Red Hat. They're putting a lot of time and effort into making this thing great. Our friends at CoreOS are putting a lot of effort into it. We're able to do more because it's just more people working on it, so the velocity is far higher. The second thing is that we were able to go straight to an open offer. Normally we do these early adopter programs hidden behind the curtain, try to figure stuff out and do a lot of iteration. We didn't have to do that because the community has built the API with us, our customers have been working directly with us to shape the API. We know it's going to work for them. >> And that's helped you guys, so your differentiation doesn't really conflict with the community? >> Absolutely not. We recognized this as we moved from a cloud that's worked mostly in the start up community and with internet facing companies to a cloud that's really engaging mainstream business. Our customers want multi cloud. It's critical to them. They want to be able to run in hybrid cloud. They want to have multi cloud provider relationships. They don't want to just rely on one provider and so our framework that works well everywhere but works especially well on Google, serves our business very well. >> Getting some great prompts on Crowd Chat so thanks for coming on theCUBE, always great to chat with you. You're in a hot area, we'd love to pick your brain but I want you to address three things I'm going to say to you, get your thoughts on. >> Okay. >> It can be your Google perspective, could be your own geeky perspective. Perimeter-less IT, multi cloud and mobile infrastructure. Three of the hottest areas on the planet right now in terms of people looking at investments, retooling, trying to figure things out, perimeter-less IT. Obviously perimeter IT, perimeter based security? >> Sure. >> Kind of goes away with the cloud right? >> Yeah. >> But you still need security, it's perimeter-less, so what does that mean? How do people understand and grasp that concept? >> I'm not sure I'm the right person to speak to perimeter-less IT but I can say that-- >> Just in general. >> When I think about it, I think there's a couple of things that are happening here that are really interesting. 
When I look at the idea of perimeter-less IT, when I look at the idea of what I consider the democratization of IT, if you will, we've lived in a world where most businesses have been beholden to a specific organization that's controlled their provisioning, the policies and the set of bits they can use, everything's been controlled and IT hasn't been well loved by and large. We're moving into a world where it's a much more open ecosystem. Departments are far more empowered, anyone with a corporate credit card can go and get a machine and that's creating amazing agility and velocity for businesses. But it's introducing-- >> Creativity, too. >> A lot of creativity, but it's introducing a lot of pain as well. The hard thing is going to be creating a smart framework that allows empowered decentralization. Going from this world of highly controlled to decentralized empowerment, and I think that's where we're going to see a lot of interest from folks that are operating in the airplay space. >> Okay, multi cloud, just in general. Will people move to multiple clouds? Do you see that? UberClouds, we had Bitnami in earlier like, ah, people aren't really going to multiple clouds. They're not interested in moving workloads. Is that a state of the current situation or will it evolve to workloads anywhere? >> Multi cloud is the reality of our world. There's no serious customer I've spoken to in the last six months that has not been interested in a multi cloud relationship. Sorry, that's not true, there's no enterprise customer I've spoken to in the last six months. >> That has not been interested? >> That has not been interested in multi cloud. >> And the reason is? >> In some ways. >> It's for what, resources? >> There's a couple of reasons. One is a lot of companies want to have just a multi provider relationship. They don't want to be beholden to a single cloud provider and frankly almost every customer I speak to has a massive investment in on premise infrastructure. They want to move away from a lot of the pain associated with managing that, but it's not going to happen overnight. Hybrid cloud is going to exist for quite a while. >> This is back to your empowered decentralization theme. >> And we have to provide them the tools to do that. We have to create positive pressure that moves them from those clouds to the public cloud. >> Final concept, and I've heard this a lot, kind of leads into the keynote, not necessarily the words but almost reeking of this concept of mobile infrastructure. I mean, mobile first, cluster first, kind of enables mobile first but mobile is obviously a form factor, whether it's an internet of things device or a human, doesn't matter, it's still an endpoint on the network. >> Yeah. >> It's a multitude of millions of devices so what is mobile infrastructure? Is it different? Is it the same? What's your take on it? >> It's an interesting question and the reality of our world is it's a mobile world. It's almost folly to do anything but think about mobile as the primary vehicle for customers, consumers and everyone else to interface with the internet, with the web. It certainly introduces an interesting set of challenges to application developers. 
I think one of the things that I am most sort of interested in cracking from a cloud provider's perspective is the world of multiple devices where you have a large set of devices in different form factors that are ultimately presenting a view of the same set of data, the same set of information and creating a set of experiences that work well in that multi device space. Moving away from a world where state is bound to a device to a world where state is based in your cloud and your device is simply providing a view or a way to interface with that data. We still have a way to go before that is fully materialized but I think that's going to be a big sort of anchor point of a lot of mobile development in the space. >> So Craig, where does the locus of competition move then? If the data center just becomes a resource that's on tap, basically, that I can just get? How do the cloud providers then differentiate? >> Basic infrastructure is relatively undifferentiated but when I look at the way that we run inside Google, we do some really, really scary smart things to make your application run for you. If you think about the way we run our infrastructure it's almost like the flight controller of a modern airplane. It's going from the old wire based control system where you move something to move a flap to a world where you have this controller that's taking in millions of signals a second and making incredibly informed decisions that is optimizing the heck out of everything you do and making very fine grain corrections and I think that's going to be a huge avenue of differentiation. When you take an application, you package it and you give it to us and you trust us to run it for you and it's running at a slightly higher level, we have a much higher abstraction level, we can do incredibly smart things with things like machine learning technologies. We can watch how your application's running. We know how it ran last time so we can tell if something's going wrong because we have the ability to actually watch it. This is how we run internally. >> Right, right. >> It's not just about the infrastructure. It's going to be about smart systems that run your application for you. And that's going to be hard to-- >> It's really abstracting above the management of the application. It's actually the management of the application and the optimization of the application as opposed to the infrastructure? >> There's so much more value in moving from static, dumb infrastructure to actively managed, sort of precision managed container based capabilities. It's quite jarring. This was clear to me very soon after we shipped Google Compute Engine. I was able to see, we never looked inside VMs so we were able to see what level of CPU utilization our customers were getting and we compared that to what we were able to run in our internal workloads and our customers were only getting, like, several integer multiples less utilization than what they were paying for. So we knew that something could be done. We could actually move up the abstraction layer and just do a better job by actively managing and making smart decisions. And that would be very disruptive-- >> So let's play a game, we played a game with our last guest, we'll play the game of you and I are going to go into business together and be venture capitalists. >> Okay. >> Okay. >> Sounds like fun. >> What's our investment thesis? Knowing what we know, I mean, there's a lot of entrepreneurs out there really looking at the enterprise right now. 
The enterprise is hard, cloud is kind of like a proxy for the enterprise but it's not like your classic enterprise. I'm a tech entrepreneur, I'm a coder, I'm an architect, I'm an OS guy, systems guy, could be a creative filmmaker, whatever but I want to come in and get some white space. Is there white space out there that you see that is an opportunity for developers that could really come in and stake a claim and build a really good business? It could be a lifestyle business, it could be a home run. Where would we invest? >> Yeah, I think there's so much white space in this domain. We are in the very early days of getting these technologies to market. Obviously there's just bolstering the basic, sort of the fundamentals of the platform. Overlay networking, everyone's talking SDN. Obviously there's a lot of hype around that but being able to create an abstraction that allows high levels of pluggability for different network fabrics as you move between clouds is interesting. Storage, and doing a better job of providing virtualized storage that is available to these containers is an area of opportunity. There's a lot of work to be done in the tooling environment, full on application lifecycle management, continuous integration, lots of opportunity in that space. And then frankly, as we start looking at taking these technologies to market and deploying them into real businesses that are running multi cloud, there's going to be a lot of the governance, risk management and compliance overlay capabilities that just don't exist. We have the ability to define policy and enforce it in a very effective way, whether it's security policy, data loss prevention policy-- >> But it has to be dynamic, right? >> And it has to be dynamically done and it has to be enforced at the node. >> That's software, that's hard software? >> And there's so much work to be done there. There's so many opportunities to either create niche, vertically oriented capabilities for a service specific protocol or unique, highly valuable, cross coding capabilities. I'm very excited about the future in this space. >> Where would we get started if I was an entrepreneur? Like, hey Craig, I saw your interview, where do I get started? Writing App Engine code? I want to put the boat in the water and start drifting into this area you just mentioned, how should I navigate in? How should I vector in? >> A lot of it depends on where you're going to be operating in the stack. I would suggest you go and learn Go. Go, GoLang if you want to talk about the sort of the development environment, is rapidly emerging as the language for the new cloud. We're seeing a lot of work in the Go community. Docker is written in Go, Kubernetes is written in Go. So I'd start there. It's a great platform for systems development. So I'd start looking at some of the existing technologies, Docker, Kubernetes, start just assessing where the gaps are. I'd probably approach it from a systems development perspective if I was doing it but there's also going to be a lot of value higher up the chain where you can actually-- >> You can dance on top of the stack and around the stack? >> Absolutely. >> Alright so final question, are we going back to the old OS days? I know you were joking before we came on, conversational even in a way, that was pretty relevant. I mean, we're seeing concepts of systems programming of the 80's kind of, but in a decentralized way. Comment on that because I think that's tying a lot of things together. 
>> I think that's an incredibly astute observation and I think we're moving away from a world where the operating system today is a node local thing, right? So I have an operating system and it's providing an environment that abstracts me from the physical details of one piece of hardware, one machine, you know one set of resources. What we're starting to see now is the emergence of some of these distributed concepts where you're programming not to a specific single piece of infrastructure, single piece of hardware but you're programming to a cluster and so I think it's very much like that. I think that's a very astute observation and we're going to see the buzz-- >> But no one vendor owns it. It's owned by the world. >> And nor should one. It needs to be a POSIX-like ubiquitous framework that lets us get more out of these cluster centric applications. >> Very organic, I mean I love what's happening, it's a very organic development but yet there's some, kind of group dynamics going on around cluster and Docker's a great example. Came out of the woodwork to become a de facto standard. Probably the fastest de facto standard that I've ever seen-- >> It's been breathtaking how quickly that technology's taken hold. >> And that's just the crowd. >> Yeah. >> Just saying, hey if we don't like decide on something? We like these guys the best, they didn't piss anyone off or whatever, whatever the dynamic is. It could be double source, flywheel, but-- >> It's interesting, certainly from Google's perspective, we've noticed Docker a lot sooner than most of the world did. We had technologies that we could have stood up as potentially competing capabilities but we chose not to, because the world is incredibly well served by a single standard for defining and packaging applications. Now we need to continue that and we need to build the POSIX-like distributed systems standard that people think about coding to when they're building these modern, next gen cloud V2 applications. >> Craig, I really appreciate you spending the time. Love the conversation, love kind of the long winding road we took there. We knocked out some Kubernetes. We talked about Docker containers. Talked about the future of the industry. Really appreciate it, you're awesome to have on theCUBE here, you're invited any time. CUBE alumni Craig McLuckie right on theCUBE. We'll be right back, here, live in San Francisco broadcasting exclusively from Google's developer conference here, the Cloud Platform Live Event from Google. We'll be right back after this short break. (light music)
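Craig's "go and learn Go" pointer is easy to act on. Below is a minimal sketch of the kind of stateless Go service you would package into a container image and hand to a cluster like the one discussed above; the port handling and greeting are illustrative only, not anything from the interview.

```go
// Minimal sketch of a stateless Go service meant to be containerized and run on a cluster.
// Port and response are illustrative; the point is that the binary carries no local state.
package main

import (
	"fmt"
	"log"
	"net/http"
	"os"
)

func main() {
	// Read the port from the environment so the same image runs unchanged anywhere.
	port := os.Getenv("PORT")
	if port == "" {
		port = "8080"
	}

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		host, _ := os.Hostname()
		fmt.Fprintf(w, "hello from %s\n", host)
	})

	log.Printf("listening on :%s", port)
	log.Fatal(http.ListenAndServe(":"+port, nil))
}
```

Because the binary keeps no state on the node and takes its configuration from the environment, the same image can be replicated, rescheduled, and moved between clouds, which is the portability theme running through both interviews.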
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Craig McLuckie | PERSON | 0.99+ |
Craig | PERSON | 0.99+ |
Jeff Frick | PERSON | 0.99+ |
Google | ORGANIZATION | 0.99+ |
San Francisco | LOCATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
10 years | QUANTITY | 0.99+ |
six times | QUANTITY | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Arnold Schwarzenegger | PERSON | 0.99+ |
Today | DATE | 0.99+ |
CUBE | ORGANIZATION | 0.99+ |
San Francisco, California | LOCATION | 0.99+ |
One | QUANTITY | 0.99+ |
one machine | QUANTITY | 0.99+ |
Go | TITLE | 0.98+ |
first | QUANTITY | 0.98+ |
second thing | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
both | QUANTITY | 0.98+ |
first generation | QUANTITY | 0.98+ |
SiliconANGLE | ORGANIZATION | 0.98+ |
Kubernetes | TITLE | 0.98+ |
Three | QUANTITY | 0.98+ |
Google Platform Conference Live | EVENT | 0.98+ |
less than a year | QUANTITY | 0.98+ |
Rack Space | ORGANIZATION | 0.97+ |
one piece | QUANTITY | 0.97+ |
Mission Bay Conference Center | LOCATION | 0.97+ |
Lego | ORGANIZATION | 0.97+ |
one provider | QUANTITY | 0.97+ |
UberClouds | ORGANIZATION | 0.96+ |
VMworld | ORGANIZATION | 0.96+ |
Greek | OTHER | 0.96+ |
VMware | ORGANIZATION | 0.96+ |
80's | DATE | 0.95+ |
one world | QUANTITY | 0.95+ |
today | DATE | 0.95+ |
Docker | ORGANIZATION | 0.94+ |
Google Container Engine | TITLE | 0.94+ |
Borg | TITLE | 0.93+ |
last six months | DATE | 0.93+ |
Google Cloud | TITLE | 0.93+ |
one set | QUANTITY | 0.93+ |
millions of devices | QUANTITY | 0.91+ |
Docker | TITLE | 0.91+ |
at least 80% | QUANTITY | 0.9+ |
Microsoft | ORGANIZATION | 0.9+ |
Google Compute Engine | ORGANIZATION | 0.89+ |
million of signals a second | QUANTITY | 0.89+ |
theCUBE | ORGANIZATION | 0.89+ |
a year | QUANTITY | 0.88+ |
three things | QUANTITY | 0.88+ |
Kubernetes | ORGANIZATION | 0.88+ |
Google Cloud Platform Live | EVENT | 0.87+ |