Keith Basil, SUSE | HPE Discover 2022
>> Announcer: theCUBE presents HPE Discover 2022, brought to you by HPE. >> Welcome back to HPE Discover 2022, theCUBE's continuous wall-to-wall coverage, Dave Vellante with John Furrier. Keith Basil is here as the General Manager for the Edge Business Unit at SUSE. Keith, welcome to theCUBE, man good to see you. >> Great to be here, it's my first time here and I've seen many shows and you guys are the best. >> Thank you. >> Thank you very much. >> Big fans of SUSE you know, we've had Melissa on several times. >> Yes. >> Let's start with kind of what you guys are doing here at Discover. >> Well, we're here to support our wonderful partner HPE, as you know SUSE's products and services are now being integrated into the GreenLake offering. So that's very exciting for us. >> Yeah. Now tell us about your background. It's quite interesting you've kind of been in the mix in some really cool places. Tell us a little bit about yourself. >> Probably the most relevant was I used to work at Red Hat, I was a Product Manager working in security for OpenStack and OpenShift, working with DOD customers in the intelligence community. Left Red Hat to go to Rancher, started out there as VP of Edge Solutions and then transitioned over to VP of Product for all of Rancher. And then obviously we know SUSE acquired Rancher, as of November 1st of 2020, I think it was. >> Dave: 2020. >> Yeah, yeah time is flying. I came over, I still remained VP of Product for Rancher for Cloud Native Infrastructure. And I was working on the edge strategy for SUSE and about four months ago we internally built three business units, one for the Linux business, one for enterprise container management, basically the Rancher business, and then the newly minted business unit was the Edge business. And I was offered the role to be GM for that business unit and I happily accepted it. >> Very cool. I mean the market dynamics since 2018 have changed dramatically, IBM bought Red Hat.
A lot of customers said, "Hmm let's see what other alternatives are out there." SUSE popped its head up. You know, Melissa's been quite, you know forthcoming about that. And then you acquire Rancher in 2020, IPO in 2021. That kind of gives you another tailwind. So there's a new market when you go from 2018 to 2022, it's a completely changed dynamic. >> Yes and I'm going to answer your question from the Rancher perspective first, because as we were at Rancher, we had experimented with different flavors of the underlying OS underneath Kubernetes or Kubernetes offerings. And we had, as I said, different flavors, we weren't really operating system people for example. And so post-acquisition, you know, one of my internal roles was to bring the two halves of the house together, the philosophies together where you had a cloud native side in the form of Rancher, very progressive leading innovative products with Rancher with K3s for example. And then you had, you know, really strong enterprise roots around compliance and security, secure supply chain with the enterprise grade Linux. And what we found out was SUSE had been building a version of Linux called SLE Micro, and it was perfectly designed for Edge. And so what we've done over that time period since the acquisition is that we've brought those two things together. And now we're using Kubernetes directives and philosophies to manage all the way down to the operating system. And it is a winning strategy for our customers. And we're really excited about that. >> And what does that product look like? Is that a managed service? How are customers consuming that? >> It could be a managed service, it's something that our managed service providers could embrace and offer to their customers. But we have some customers who are very sophisticated who want to do the whole thing themselves. And so they stand up Rancher, you know at a centralized location at cloud GreenLake for example which is why this is very relevant. 
And then that control plane if you will, manages thousands of downstream clusters that are running K3s at these Edge locations. And so that's what the complete stack looks like. And so when you add the Linux capability to that scenario we can now roll a new operating system, new kernel, CVE updates, build that as an OCI container image registry format, right? Put that into a registry and then have that thing cascade down through all the downstream clusters and up through a rolling window upgrade of the operating system underneath Kubernetes. And it is a tremendous amount of value when you talk to customers that have this massive scale. >> What's the impact of that, just take us through what happens next. Is it faster? Is it more performant? Is it more reliable? Is it processing data at the Edge? What's the impact of the customer? >> Yes, the answer is yes to that. So let's actually talk about one customer that we we highlighted in our keynote, which is Home Depot. So as we know, Kubernetes is on fire, right? It is the technology everybody's after. So by being in demand, the skills needed, the people shortage is real and people are commanding very high, you know, salaries. And so it's hard to attract talent is the bottom line. And so using our software and our solution and our approach it allows people to scale their existing teams to preserve those precious human resources and that human capital. So that now you can take a team of seven people and manage let's say 3000 downstream stores. >> Yeah it's like the old SRE model for DevOps. >> Correct. >> It's not servers they're managing one to many. >> Yes. >> One to many clusters. >> Correct so you've got the cluster, the life cycle of the cluster. You already have the application life cycle with the classic DevOps. And now what we've built and added to the stack is going down one step further, clicking down if you will to managing the life cycle of the operating system. 
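The rolling-window upgrade Keith describes can be pictured with a short sketch. This is an illustrative simulation, not SUSE or Rancher code; the store names, the registry URL, the `apply_os_image` function, and the window size are all invented for the example.

```python
# Illustrative sketch of a rolling-window OS upgrade across many
# downstream edge clusters. Not SUSE/Rancher code; all names are
# hypothetical.

def apply_os_image(cluster: str, image: str) -> str:
    """Pretend to point one cluster's OS layer at a new OCI image."""
    return f"{cluster} -> {image}"

def rolling_upgrade(clusters, image, window=100):
    """Upgrade clusters in fixed-size windows so only a bounded
    number of sites are mid-upgrade at any moment."""
    results = []
    for i in range(0, len(clusters), window):
        batch = clusters[i:i + window]
        # A real control plane would verify each batch is healthy
        # before starting the next window.
        results.extend(apply_os_image(c, image) for c in batch)
    return results

stores = [f"store-{n:04d}" for n in range(3000)]
done = rolling_upgrade(stores, "registry.example.com/os/sle-micro:5.2", window=250)
print(len(done))   # 3000
print(done[0])     # store-0000 -> registry.example.com/os/sle-micro:5.2
```

The point of the window parameter is the trade-off Keith implies: large enough to finish thousands of sites quickly, small enough that a bad image never takes down the whole fleet at once.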
So you have the SUSE enterprise build chain, all the value, the goodness, compliance, security. Again, all of that comes with that build process. And now we're hooking that into a cloud native flow that ends up downstream in our customers. >> So what I'm hearing is your Edge strategy is not some kind of bespoke, "Hey, I'm going after Edge." It connects to the entire value chain. >> Yes, yeah it's a great point. We want to reuse the existing philosophies that are being used today. We don't want to create something net new, 'cause that's really the point of leverage that we get by having these teams, you know, do these things at scale. Another point I'm going to make here is that we've defined the Edge into three segments. One is the near Edge, which is the realm of the-- >> I was going to ask about this, great. >> The telecommunications companies. So those use cases and profiles look very different. They're almost data center lite, right? So you've had regional locations, central offices where they're standing up gear, classic 2U machines, right? So things you find from HPE, for example. And then once you get on the other side of the access device, right? The cable modem, the router, whatever it is, you get into what we call the far Edge. And this is where the majority of the use cases reside. This is where the diversity of use cases presents itself as well. >> Also security challenges. >> Security challenges. Yes, and we can talk about that in a moment. And then finally, if you look at that far Edge as a box, right? Think of it as a layer two domain, a network. Inside that location, on that network you'll have industrial IOT devices. Those devices are too small to run a full blown operating system such as Linux and Kubernetes in the stack, but they do have software on them, right? So we need to be able to discover those devices and manage those devices and pull data from those devices and do it in a cloud native way. So that's what we called the tiny Edge.
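The tiny-Edge pattern, devices too small to host Linux and Kubernetes yet still discoverable and manageable in a cloud native way, can be pictured with a toy sketch. Everything here (the device list, field names, and broker naming) is invented for illustration; the real discovery mechanism is far more involved.

```python
# Toy sketch of tiny-edge discovery: find leaf devices that cannot
# host the stack themselves and register each one as a resource a
# local cluster could schedule work against. Device list and field
# names are invented for illustration.

devices = [
    {"id": "usb-cam-0", "protocol": "usb",   "runs_linux": False},
    {"id": "plc-7",     "protocol": "opcua", "runs_linux": False},
    {"id": "gateway-1", "protocol": "ip",    "runs_linux": True},
]

def discover(devices):
    """Leaf devices get brokered into the cluster; machines that can
    run the full stack themselves are left alone."""
    return {
        d["id"]: {"broker": f"{d['protocol']}-broker"}
        for d in devices
        if not d["runs_linux"]
    }

registry = discover(devices)
print(sorted(registry))   # ['plc-7', 'usb-cam-0']
```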
And I stole that name from the folks over at Microsoft. Kate and Edrick are leading a project upstream called Akri, A-K-R-I, and we are very much heavily involved in Akri because it will discover the industrial IOT devices and plug those into a local Kubernetes cluster running at that location. >> And Home Depot would fit into the near edge, is that correct? >> Yes. >> Yeah okay. >> So each Home Depot store, just to bring it home, is a far Edge location and they have over 2,600 of these locations. >> So far Edge? You would put far Edge? >> Keith: Far Edge, yes. >> Far edge, okay. >> John: Near edge is like Metro. Think of Metro. >> And Telco, communication service providers, MSOs, multi-service operators. Those guys are-- >> Near Edge. >> The near edge, yes. >> Don't you think, John's been asking all week about machine learning and AI, in that tiny Edge. We think there's going to be a lot of AI influencing. >> Keith: Oh absolutely. >> Real time. And it actually is going to need some kind of lighter weight, you know, platform. How do you fit into that? >> So going on this, like this model I just described, if you go back and look at the SUSECON 2022 demo keynote that I did, we actually on stage stood up that exact stack. So we had a single Intel NUC running SLE Micro as we mentioned earlier, running K3s, and we plugged into that device a USB camera, which was automatically detected, and it loaded Akri and gave us a driver to plug it into a container. Now, to answer your question, that is the point in time where we bring in the ML and the AI, the inference and the pattern recognition, because that camera, when you showed the SUSE plush doll, it actually recognized it and put a QR code up on the screen. So that's where it all comes together. So we tried to showcase that in a complete demo. >> Last week, I was here in Vegas for an event Amazon and AWS put on called re:Mars, machine learning, automation, robotics, and space. >> Okay.
>> Kind of, but basically for me it was an industrial edge show. 'Cause space is the ultimate, like, glam edge, you're doing stuff in space that's pretty edgy so to speak, pun intended. But the industrial side of the Edge is going to, we think, accelerate with machine learning. >> Keith: Absolutely. >> And with these kinds of new portable, I won't say flash compute, or just like connected power sources, software. The industrial is going to move really fast. We've been moving at kind of a snail's pace at the Edge, in my opinion. What's your reaction to that? Do you think we're going to see a mass acceleration of growth at the Edge industrial, basically physical, the physical world? >> Yes, first I agree with your assessment, okay, wholeheartedly, so much so that it's my strategy to go after the tiny Edge space and be a leader in the industrial IOT space from an open source perspective. So yes. So, a few things to answer your question: we do have K3s in space. We have a customer partner called Hypergiant where they've launched satellites with K3s running in space, same model, that's a far Edge location, probably the farthest Edge location we have. >> John: Deep Edge, deep space. >> Here at HPE Discover, we have a business unit called SUSE RGS, Rancher Government Services, which focuses on the US government and DOD and IC, right? So a little bit of the world that I used to work in, in my past career. Brandon Gulla, the CTO of that unit, gave a great presentation about what we call the tactical Edge. And so the same technology that we're using on the commercial and the manufacturing side. >> Like the JEDI contract, the tactical military Edge I think. >> Yes, so imagine some of these military grade industrial IOT devices in a disconnected environment. The same software stack and technology would apply to that use case as well. >> So basically the tactical Edge is life? We're humans, we're at the Edge? >> Or it's maintenance, right?
So maybe it's pulling sensors from aircraft, Humvees, submarines and doing predictive analysis on the maintenance for those items, those assets. >> All these different Edges, they underscore the diversity that you were just talking about, Keith, and we also see a new hardware architecture emerging, a lot of Arm-based stuff. Just take a look at what Tesla's doing at the tiny Edge. Keith Basil, thanks so much. >> Sure. >> For coming on theCUBE. >> John: Great to have you. >> Grateful to be here. >> Awesome story. Okay and thank you for watching. This is Dave Vellante for John Furrier. This is day three of HPE Discover 2022. You're watching theCUBE, the leader in enterprise and emerging tech coverage. We'll be right back. (upbeat music)
Basil Faruqui, BMC Software | BigData NYC 2017
>> Live from Midtown Manhattan, it's theCUBE. Covering BigData New York City 2017. Brought to you by SiliconANGLE Media and its ecosystem sponsors. (calm electronic music) >> Basil Faruqui, who's the Solutions Marketing Manager at BMC, welcome to theCUBE. >> Thank you, good to be back on theCUBE. >> So first of all, heard you guys had a tough time in Houston, so hope everything's gettin' better, and best wishes to everyone down in-- >> We're definitely in recovery mode now. >> Yeah and so hopefully that can get straightened out quick. What's going on with BMC? Give us a quick update in context to BigData NYC. What's happening, what is BMC doing in the big data space now, the AI space now, the IOT space now, the cloud space? >> So like you said that, you know, the data lake space, the IOT space, the AI space, there are four components of this entire picture that literally haven't changed since the beginning of computing. If you look at those four components of a data pipeline, it's ingestion, storage, processing, and analytics. What keeps changing around it is the infrastructure, the types of data, the volume of data, and the applications that surround it. And the rate of change has picked up immensely over the last few years with Hadoop coming into the picture, public cloud providers pushing it. It's obviously creating a number of challenges, but one of the biggest challenges that we are seeing in the market, and we're helping customers address, is a challenge of automating this, and, obviously, the benefit of automation is in scalability as well as reliability. So when you look at this rather simple data pipeline, which is now becoming more and more complex, how do you automate all of this from a single point of control? How do you continue to absorb new technologies, and not re-architect your automation strategy every time, whether it's Hadoop, whether it's bringing in machine learning from a cloud provider?
And that is the issue we've been solving for customers-- >> Alright let me jump into it. So, first of all, you mention some things that never change, ingestion, storage, and what's the third one? >> Ingestion, storage, processing and eventually analytics. >> And analytics. >> Okay so that's cool, totally buy that. Now if you move and say, hey okay, if you believe that standard, but now in the modern era that we live in, which is complex, you want breadth of data, but also you want the specialization when you get down to machine limits, highly bounded, that's where the automation is right now. We see the trend essentially making that automation broader as it goes into the customer environments. >> Correct. >> How do you architect that? If I'm a CXO, or I'm a CDO, what's in it for me? How do I architect this? 'Cause that's really the number one thing, as I know what the building blocks are, but they've changed in their dynamics in the marketplace. >> So the way I look at it is that what defines success and failure, particularly in big data projects, is your ability to scale. If you start a pilot, and you spend three months on it, and you deliver some results, but if you cannot roll it out worldwide, nationwide, whatever it is, essentially the project has failed. The analogy I often give is Walmart has been testing the pick-up tower, I don't know if you've seen. So this is basically a giant ATM for you to go pick up an order that you placed online. They're testing this at about a hundred stores today. Now if that's a success, and Walmart wants to roll this out nationwide, how much time do you think their IT department's going to have? Is this a five year project, a ten year project? No, the management's going to want this done in six months, ten months.
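The four components named here, ingestion, storage, processing, analytics, are easy to picture as a chain of dependent steps run from a single point of control. A minimal generic sketch in plain Python, not Control-M's actual API; the stage functions are invented stand-ins:

```python
# Minimal sketch of a four-stage data pipeline driven from a single
# point of control: each stage runs only after its predecessor
# succeeds, and one controller owns ordering and visibility.
# Generic illustration, not Control-M's actual API.

def ingest():    return ["raw-1", "raw-2", "raw-3"]
def store(x):    return {"table": x}              # stand-in for a data lake write
def process(x):  return [r.upper() for r in x["table"]]
def analyze(x):  return {"rows": len(x)}

def run_pipeline(stages):
    """Run stages in order, feeding each stage's output to the next."""
    data = None
    for stage in stages:
        data = stage(data) if data is not None else stage()
    return data

report = run_pipeline([ingest, store, process, analyze])
print(report)   # {'rows': 3}
```

Swapping one stage implementation (say, a different storage target) leaves the controller and the other stages untouched, which is the "absorb new technologies without re-architecting" point.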
So essentially, this is where automation becomes extremely crucial, because it is now allowing you to deliver speed to market, and without automation, you are not going to be able to get to an operational stage in a repeatable and reliable manner. >> But you're describing a very complex automation scenario. How can you automate in a hurry without sacrificing the details of what needs to be done? In other words, that would seem to call for repurposing or reusing prior automation scripts and rules, so forth. How can the Walmarts of the world do that fast, but also do it well? >> Yeah, so we go about it in two ways. One is that out of the box we provide a lot of pre-built integrations to some of the most commonly used systems in an enterprise. All the way from the Mainframes, Oracles, SAPs, Hadoop, Tableaus of the world, they're all available out of the box for you to quickly reuse these objects and build an automated data pipeline. The other challenge we saw, particularly when we entered the big data space four years ago, was that automation was something that was considered close to the project becoming operational. Okay, and that's where a lot of rework happened, because developers had been writing their own scripts using point solutions, so we said alright, it's time to shift automation left, and allow companies to build automation artifacts very early in the development life cycle. About a month ago, we released what we call Control-M Workbench, it's essentially a community edition of Control-M, targeted towards developers, so that instead of writing their own scripts, they can use Control-M in a completely offline manner, without having to connect to an enterprise system. As they build, and test, and iterate, they're using Control-M to do that. So as the application progresses through the development life cycle, all of that work can then translate easily into an enterprise edition of Control-M.
>> Just want to quickly define what shift left means for the folks that might not know software methodologies, so they don't think of left as political, left or right. >> Yeah, so we're not shifting Control-M-- >> Alt-left, alt-right, I mean, this is software development, so quickly take a minute and explain what shift left means, and the importance of it. >> Correct, so if you think of software development as a straight line continuum, you've got, you will start with building some code, you will do some testing, then unit testing, then user acceptance testing. As it moves along this chain, there was a point right before production where all of the automation used to happen. Developers would come in and deliver the application to Ops, and Ops would say, well hang on a second, all this Crontab, and these other point solutions we've been using for automation, that's not what we use in production, and we need you to now go right in-- >> So test early and often. >> Test early and often. So the challenge was the developers, the tools they used were not the tools that were being used on the production side. And there was good reason for it, because developers don't need something really heavy with all the bells and whistles early in the development lifecycle. Now Control-M Workbench is a very light version, which is targeted at developers and focuses on the needs that they have when they're building and developing it. So as the application progresses-- >> How much are you seeing waterfall-- >> But how much can they, go ahead. >> How much are you seeing waterfall, and then people shifting left becoming more prominent now? What percentage of your customers have moved to Agile, and shifting left, percentage wise?
>> So we survey our customers on a regular basis, and the last survey showed that eighty percent of the customers have either implemented a more continuous integration delivery type of framework, or are in the process of doing it. And that's the other-- >> And getting as close to 100 as possible, pretty much. >> Yeah, exactly. The tipping point is reached. >> And what is driving. >> What is driving it all is the need from the business. The days of the five year implementation timelines are gone. This is something that you need to deliver every week, two weeks, in iterations. >> Iteration, yeah, yeah. And we have also innovated in that space, with an approach we call jobs as code, where you can build entire complex data pipelines in code format, so that you can enable the automation in a continuous integration and delivery framework. >> I have one quick question, Jim, and I'll let you take the floor and get a word in soon, but I have one final question on this BMC methodology thing. You guys have a history, obviously BMC goes way back. Remember Max Watson, CEO, and Bob Beach, back in '97 we used to chat with him, dominated that landscape. But we're kind of going back to a systems mindset. The question for you is, how do you view the issue of this holy grail, the promised land of AI and machine learning, where end-to-end visibility is really the goal, right? At the same time, you want bounded experiences at root level so automation can kick in to enable more activity. So there's a trade-off between going for the end-to-end visibility out of the gate, but also having bounded visibility and data to automate. How do you guys look at that market? Because customers want the end-to-end promise, but they don't want to try to get there too fast. There are diseconomies of scale potentially. How do you talk about that? >> Correct.
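The "jobs as code" approach mentioned above treats the pipeline definition itself as a versionable artifact that a CI step can validate before anything reaches production. A hedged sketch of the idea in plain Python; Control-M's real jobs-as-code format is JSON-based and driven through its automation tooling, and the job names and commands below are invented:

```python
# Sketch of the jobs-as-code idea: the pipeline is declared as data,
# checked into version control, and validated in CI like any other
# code. Generic illustration, not Control-M's real job format.

pipeline = {
    "ingest":  {"command": "spark-submit ingest.py", "depends_on": []},
    "store":   {"command": "load_to_lake.sh",        "depends_on": ["ingest"]},
    "process": {"command": "spark-submit clean.py",  "depends_on": ["store"]},
    "report":  {"command": "run_reports.sh",         "depends_on": ["process"]},
}

def execution_order(jobs):
    """Topologically sort jobs so a CI check can verify the
    definition is runnable before it is ever deployed."""
    order, seen = [], set()
    def visit(name):
        if name in seen:
            return
        for dep in jobs[name]["depends_on"]:
            visit(dep)
        seen.add(name)
        order.append(name)
    for name in jobs:
        visit(name)
    return order

print(execution_order(pipeline))
# ['ingest', 'store', 'process', 'report']
```

Because the definition is plain data under version control, a pull request that breaks a dependency fails the CI check instead of failing at 2 a.m. in production.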
>> And that's exactly the approach we've taken with Control-M Workbench, the Community Edition, because earlier on you don't need capabilities like SLA management and forecasting and automated promotion between environments. Developers want to be able to quickly build and test and show value, okay, and they don't need something with all the bells and whistles. We're allowing you to handle that piece, in that manner, through Control-M Workbench. As things progress and the application progresses, the needs change as well. Well now I'm closer to delivering this to the business, I need to be able to manage this within an SLA, I need to be able to manage this end-to-end and connect this to other systems of record, and streaming data, and clickstream data, all of that. So we believe that it doesn't have to be a trade off, that you don't have to compromise speed and quality for end-to-end visibility and enterprise grade automation. >> You mentioned trade offs, so the Control-M Workbench, the developer can use it offline, so what amount of testing can they possibly do on a complex data pipeline automation when the tool's offline? I mean it seems like the more development they do offline, the greater the risk that it simply won't work when they go into production. Give us a sense for how they mitigate that risk in using Control-M Workbench. >> Sure, so we spend a lot of time observing how developers work, right? And very early in the development stage, all they're doing is working off of their Mac or their laptop, and they're not really connected to anything. And that is where they end up writing a lot of scripts, because whatever code, business logic, they've written, the way they're going to make it run is by writing scripts.
And that, essentially, becomes the problem, because then you have scripts managing more scripts, and as the application progresses, you have this complex web of scripts and Crontabs and maybe some opensource solutions, trying to simply make all of this run. And by doing this in an offline manner, that doesn't mean that they're losing all of the other Control-M capabilities. Simply, as the application progresses, whatever automation they've built in Control-M can seamlessly now flow into the next stage. So when you are ready to take an application into production, there's essentially no rework required from an automation perspective. All of that, that was built, can now be translated into the enterprise-grade Control-M, and that's where operations can then go in and add the other artifacts, such as SLA management and forecasting and other things that are important from an operational perspective. >> I'd like to get both your perspectives, 'cause, so you're like an analyst here, so Jim, I want you guys to comment. My question to both of you would be, lookin' at this time in history, obviously on the BMC side we mentioned some of the history, you guys are transforming on a new journey in extending that capability of this world. Jim, you're covering state-of-the-art AI machine learning. What's your take on this space now? Strata Data, which is now Hadoop World, which is, Cloudera went public, Hortonworks is now public, kind of the big, the Hadoop guys kind of grew up, but the world has changed around them, it's not just about Hadoop anymore. So I'd like to get your thoughts on this kind of perspective, that we're seeing a much broader picture in big data in NYC, versus the Strata Hadoop show, which seems to be losing steam, but I mean in terms of the focus. The bigger focus is much broader, horizontally scalable. And your thoughts on the ecosystem right now? >> Let Basil answer first, unless Basil wants me to go first.
>> I think that the reason the focus is changing is because of where the projects are in their lifecycle. Now what we're seeing is most companies are grappling with, how do I take this to the next level? How do I scale? How do I go from just proving out one or two use cases to making the entire organization data driven, and really inject data driven decision making in all facets of decision making? So that is, I believe, what's driving the change that we're seeing, that now you've gone from Strata Hadoop to being Strata Data, and focus on that element. And, like I said earlier, the difference between success and failure is your ability to scale and operationalize. Take machine learning for an example. >> Good, that's where there's no, it's not a hype market, it's show me the meat on the bone, show me scale, I got operational concerns of security and what not. >> And machine learning, that's one of the hottest topics. A recent survey I read, which polled a number of data scientists, revealed that they spend less than 3% of their time in training the data models, and about 80% of their time in data manipulation, data transformation and enrichment. That is obviously not the best use of a data scientist's time, and that is exactly one of the problems we're solving for our customers around the world.
The history, historical data, what's goin' on in terms of current streaming data, to drive optimal outcomes, using predictive models and so forth, inline to applications. So really, fundamentally then, what's goin' on is that automation is an artifact that needs to be driven into your application architecture as a repurposable resource for a variety of-- >> Do customers even know what to automate? I mean, that's the question, what do I-- >> You're automating human judgment. You're automating effort, like the judgments that a working data engineer makes to prepare data for modeling and whatever. More and more that can be automated, 'cause those are pattern structured activities that have been mastered by smart people over many years. >> I mean we just had a customer on with a Glass'Gim CSK, with that scale, and his attitude is, we see the results from the users, then we double down and pay for it and automate it. So the automation question, it's an option question, it's a rhetorical question, but it just begs the question, which is who's writing the algorithms as machines get smarter and start throwing off their own real-time data? What are you looking at? How do you determine? You're going to need machine learning for machine learning? Are you going to need AI for AI? Who writes the algorithms-- >> It's actually, that's-- >> for the algorithm? >> Automated machine learning is a hot, hot area, not only a research focus, but we're seeing more and more solution providers, like Microsoft and Google and others, going deep, doubling down on investments in exactly that area. That's a productivity play for data scientists. >> I think the data market's going to change radically in my opinion. I see you're startin' to do some things with blockchain and some other things that are interesting. Data sovereignty, data governance are huge issues. Basil, just give your final thoughts for this segment as we wrap this up.
Final thoughts on data and BMC, what should people know about BMC right now? Because people might have a historical view of BMC. What's the latest, what should they know? What's the new Instagram picture of BMC? What should they know about you guys?

>> So I think what people should know about BMC is that all the work that we've done over the last 25 years, in virtually every platform that came before Hadoop, we have now innovated to take into things like big data and cloud platforms. So when you are choosing Control-M as a platform for automation, you are choosing a very, very mature solution, an example of which is Navistar. Their CIO is actually speaking at the keynote tomorrow. They've had Control-M for 15, 20 years, and they've automated virtually every business function through Control-M. And then they started their predictive maintenance project, where they're ingesting data from about 300,000 vehicles today to figure out when a vehicle might break, and to predict maintenance on it. When they started their journey, they said that they always knew that they were going to use Control-M for it, because that was the enterprise standard, and they knew that they could simply extend that capability into this area. And when they started about three, four years ago, they were ingesting data from about 100,000 vehicles. That has now scaled to over 325,000 vehicles, and they have not had to re-architect their strategy as they grow and scale. So I would say that is one of the key messages that we are taking to market, that we are bringing innovation that spans over 25 years, and evolving it--

>> Modernizing it, basically.

>> Modernizing it, and bringing it to newer platforms.

>> Well, congratulations. I wouldn't call that a pivot, I'd call it extensibility, kind of modernizing the core things.

>> Absolutely.
>> Thanks for coming and sharing the BMC perspective inside theCUBE here at BigData NYC. This is theCUBE, I'm John Furrier, Jim Kobielus here in New York City. More live coverage, we'll be here for three days, today, tomorrow and Thursday, at BigData NYC, more coverage after this short break. (calm electronic music) (vibrant electronic music)
Basil Faruqui, BMC | theCUBE NYC 2018
(upbeat music)

>> Live from New York, it's theCUBE. Covering theCUBE New York City 2018. Brought to you by SiliconANGLE Media and its ecosystem partners.

>> Okay, welcome back everyone to theCUBE NYC. This is theCUBE's live coverage covering CubeNYC, the Strata Hadoop, Strata Data Conference. All things data happen here in New York this week. I'm John Furrier with Peter Burris. Our next guest is Basil Faruqui, lead solutions marketing manager, digital business automation, within BMC. He returns, he was here last year with us and also at Big Data SV, which has been renamed CubeNYC, Cube SV, because it's not just big data anymore. We're hearing words like multi-cloud, Istio, all those Kubernetes things. Data now is so important, it's up and down the stack, impacting everyone. We talked about this last year with Control-M, how you guys are automating in a hurry, the four pillars of pipelining data. The setup days are over; welcome to theCUBE.

>> Well, thank you, and it's great to be back on theCUBE. And yeah, what you said is exactly right, so you know, big data has really, I think, now been distilled down to data. Everybody understands data is big, and it's important, and it is really, you know, it's quite a cliche, but to a large degree, data is the new oil, as some people say. And I think what you said earlier is important, in that we've been very fortunate to be able to not only follow the journey of our customers but be a part of it. So about six years ago, some of the early adopters of Hadoop came to us and said that look, we use your products for traditional data warehousing, on the ERP side, for orchestration workloads. We're about to take some of these projects on Hadoop into production and really feel that the Hadoop ecosystem is lacking enterprise-grade workflow orchestration tools.
So we partnered with them, and some of the earliest goals they wanted to achieve were to build a data lake and provide richer and wider data sets to the end users to be able to do some dashboarding, customer 360, and things of that nature. Very quickly, in about five years' time, we have seen a lot of these projects mature from how do I build a data lake to now applying cutting-edge ML and AI, and cloud is a major enabler of that. You know, it's really, as we were talking about earlier, it's really taking away excuses for not being able to scale quickly from an infrastructure perspective. Now you're talking about, is it Hadoop or is it S3, or is it Azure Blob Storage, is it Snowflake? And from a Control-M perspective, we're very platform and technology agnostic, so some of our customers who had started with Hadoop as a platform, they are now looking at other technologies like Snowflake, so one of our customers describes it as kind of the spine or a power strip of orchestration where, regardless of what technology you have, you can just plug and play and not worry about how do I rewire the orchestration workflows, because Control-M is taking care of it.

>> Well, you probably always will have to worry about that to some degree. But I think where you're going, and this is where I'm going to test with you, is that as data is increasingly recognized as a strategic asset, as analytics is increasingly recognized as the way that you create value out of those data assets, and as a business becomes increasingly dependent upon the output of analytics to make decisions and, ultimately through AI, to act differently in markets, you are embedding these capabilities or these technologies deeper into the business. They have to become capabilities. They have to become dependable. They have to become reliable, predictable: cost, performance, all these other things.
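The "power strip of orchestration" idea, where the workflow plugs into whatever storage technology sits behind it, can be sketched as one common interface with swappable backends. This is an illustrative sketch only; the class and method names are hypothetical and are not Control-M's actual API:

```python
# Hypothetical sketch of the "power strip" pattern: the workflow talks
# to one interface, and the backing technology (Hadoop, Snowflake, ...)
# can be swapped without rewiring the orchestration itself.
class StorageBackend:
    def write(self, key, data):
        raise NotImplementedError

class HadoopBackend(StorageBackend):
    def write(self, key, data):
        return f"hdfs://{key}"          # stand-in for a real HDFS write

class SnowflakeBackend(StorageBackend):
    def write(self, key, data):
        return f"snowflake://{key}"     # stand-in for a real Snowflake load

def run_workflow(backend: StorageBackend):
    # The workflow logic never names a concrete technology.
    return backend.write("sales/2018-09", ["row1", "row2"])

print(run_workflow(HadoopBackend()))     # hdfs://sales/2018-09
print(run_workflow(SnowflakeBackend()))  # snowflake://sales/2018-09
```

Switching from Hadoop to Snowflake changes one constructor call, not the workflow, which is the point Basil is making about not rewiring orchestration.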
That suggests that ultimately the historical approach of focusing on the technology, and trying to apply it to a periodic series of data science problems, has to become a little bit more mature, so it actually becomes a strategic capability. So the business can say we're operating on this, but the technologies to take that underlying data science technology and turn it into business operations, that's where a lot of the net new work has to happen. Is that what you guys are focused on?

>> Yeah, absolutely, and I think one of the big differences that we're seeing in general in the industry is that this time around, the pull of how do you enable technology to drive the business is really coming from the line of business, versus starting on the technology side of the house and then coming to the business and saying hey, we've got some cool technologies that can probably help you. It's really the line of business now saying no, I need better analytics so I can drive new business models for my company, right? So the need for speed is greater than ever, because the pull is from the line of business side. And this is another area where we are unique, in that, you know, Control-M has been designed in a way where it's not just a set of solutions or tools for the technical guys. Now the line of business is getting closer and closer, you know, it's blending into the technical side as well. They have a very, very keen interest in understanding, are the dashboards going to be refreshed on time? Are we going to be able to get all the right promotional offers at the right time? I mean, we're here at NYC Strata, there's a lot of real-time promotion happening here. The line of business has a direct interest in the delivery and the timing of all of this, so we have always had multiple interfaces to Control-M. A business user who has an interest in understanding whether the promotional offers are going to happen at the right time and on schedule, they have a mobile app for that.
A developer who's building a complex, multi-application platform, they have an API and a programmatic interface to do that. Operations, which has to monitor all of this, has rich dashboards to be able to do that. That's one of the areas that has been key to our success over the last couple of decades, and we're seeing that translate very well into the big data space.

>> So I just want to go under the hood for a minute, because I love that answer. And I'd like to pivot off what Peter said, tying it back to the business, okay, that's awesome. And I want to learn a little bit more about this, because we talked about this last year and I kind of am seeing it now. Kubernetes and all this orchestration is about workloads. You guys nailed the workflow issue, complex workflows. Because if you look at it, if you're adding line of business into the equation, that's just complexity in and of itself. As more workflows exist within each line of business, whether it's recommendations and offers and workflow issues, more lines of business in there is complex for even IT to deal with, so you guys have nailed that. How does that work? Do you plug it in, and the lines of business have their own developers, so the people who work with the workflows engage how?

>> So that's a good question. With orchestration and automation now becoming very, very generic, it's kind of important to classify where we play. So there are a lot of tools that do release and build automation. There are a lot of tools that'll do infrastructure automation and orchestration. All of this infrastructure and release management process is done ultimately to run applications on top of it, and the workflows of the application need orchestration, and that's the layer that we play in. And if you think about it, how does the end user, the business, the consumer interact with all of this technology? It's through applications, okay?
So the orchestration of the workflows inside the applications, whether you start all the way from an ERP or a CRM and then you land into a data lake and then do an ML model, and then out come the recommendations and analytics, that's the layer we are automating today. Obviously, all of this--

>> By the way, the technical complexity for the user is in the app.

>> Correct, so the line of business obviously has a lot more control. You're seeing roles like chief digital officer emerge, you're seeing CTOs that have mandates like, okay, you're going to be responsible for all applications that are customer facing, where the CIO is going to take care of everything that's inward facing. There's no settled structure or science involved.

>> It's evolving fast.

>> It's evolving fast. But what's clear is that the line of business has a lot more interest and influence in driving these technology projects, and it's important that technologies evolve in a way where the line of business can not only understand but take advantage of that.

>> So I think it's a great question, John, and I want to build on that and then ask you something. So the way we look at the world is we say the first fifty years of computing were known process, unknown technology. The next fifty years are going to be unknown process, known technology. It's all going to look like a cloud. But think about what that means. Known process, unknown technology: Control-M and related types of technologies tended to focus on how you put in place predictable workflows in the technology layer. And now, unknown process, known technology, driven by the line of business, now we're talking about controlling process flows that are being created bespoke, strategic, differentiating how the business operates.

>> Well, dynamic, too, I mean, dynamic.

>> Highly dynamic, and those workflows in many respects, those technologies, piecing applications and services together, become the process that differentiates the business.
Again, you're still focused on the infrastructure a bit, but you've moved it up. Is that right?

>> Yeah, that's exactly right. We see our goal as abstracting the complexity of the underlying application, data and infrastructure. So, I mean, it's quite amazing--

>> So it could be easily reconfigured to a business's needs.

>> Exactly, so whether you're on Hadoop and now you're thinking about moving to Snowflake, or tomorrow something else comes up, the orchestration or the workflow, you know, as a business, as a product, our goal is to continue to evolve quickly and in a manner where we continue to abstract the complexity, so from--

>> So I've got to ask you, we've been having a lot of conversations around Hadoop versus Kubernetes and multi-cloud, so as cloud has certainly come in and changed the game, there's no debate on that. How it changes is debatable, but we know that multiple clouds are going to be the modus operandi for customers.

>> Correct.

>> So I've got a lot of data, and now I've got pipelining complexities, and workflows are going to get even more complex, potentially. How do you see the impact of the cloud, how are you guys looking at that, and what are some customer use cases that you see for you guys?

>> So what I mentioned earlier, being platform and technology agnostic, is actually one of the unique differentiating factors for us. So whether you are on AWS or Azure or Google or on-prem, or still on a mainframe, and a lot of, we're in New York, a lot of the banks and insurance companies here still do some of their most critical processing on the mainframe, the ability to abstract all of that, whether it's cloud or legacy solutions, is one of the key enablers for our customers, and I'll give you an example. So Malwarebytes is one of our customers, and they've been using Control-M for several years.
Primarily, their entire infrastructure is built on AWS, but they are now utilizing Google cloud for some of their recommendation analysis, on sentiment analysis, because their goal is to pick the best-of-breed technology for the problem they're looking to solve.

>> Service, the best-of-breed service is in the cloud.

>> The best-of-breed service is in the cloud to solve the business problem. So from Control-M's perspective, transcending from AWS to Google cloud is completely abstracted for them. It runs on Google today; tomorrow it's Azure, or they decide to build a private cloud, and they will be able to extend the same workflow orchestration.

>> But you can build these workflows across whatever set of services are available.

>> Correct, and you bring up an important point. It's not only being able to build the workflows across platforms but being able to define dependencies and track the dependencies across all of this, because none of this is happening in silos. If you want to use Google's API to do the recommendations, well, you've got to feed it the data, and the data pipeline, like we talked about last time, data ingestion, data storage, data processing and analytics, has very, very intricate dependencies, and these solutions should be able to manage not only the building of the workflow but the dependencies as well.

>> But you're defining those elements as fundamental building blocks through a Control-M model.

>> Correct.

>> That allows you to treat the higher level services as reliable, consistent capabilities.
>> Correct, and the other thing I would like to add here is that you can not only build complex multi-platform, multi-application workflows, but never lose focus of the business service or the business process, so you can tie all of this to a business service. And then, these things are complex, there are problems. Let's say there's an ETL job that fails somewhere upstream; Control-M will immediately be able to predict the impact and be able to tell you this means the recommendation engine will not be able to make the recommendations. Now, the staff working on remediation understands the business impact, versus looking at a screen where there are 500 jobs and one of them has failed. What does that really mean?

>> Set priorities and focal points and everything else.

>> Right.

>> So I just want to wrap up by asking you how your talk went at the Strata Hadoop Data Conference. What were you talking about, what was the core message? Was it Control-M, was it customer presentations? What was the focus?

>> So the focus of yesterday's talk was, actually, you know, one of the things is academic talk is great, but it's important to show how things work in real life. The session was focused on a real use case from a customer, Navistar. They have IoT data-driven pipelines where they are predicting failures of parts inside the trucks and buses that they manufacture, you know, reducing vehicle downtime. So we wanted to simulate a demo like that, and that's exactly what we did. In real time, we spun up an EMR environment in AWS, automatically provisioned the infrastructure there, we applied Spark and machine learning algorithms to the data, and out came the recommendation at the end, which was, you know, here are the vehicles that are--

>> Fix their brakes. (laughing)

>> Exactly, so it was very, very well received.

>> I mean, there's a real-world example, there's real money to be saved: maintenance, scheduling, potential liability, accidents.
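The impact prediction Basil describes, translating one upstream ETL failure into the business services it will affect, amounts to walking a dependency graph downstream. A minimal sketch, with an illustrative graph that is not from BMC:

```python
# Hedged sketch of impact prediction: given a job dependency graph and
# a failed job, walk downstream (breadth-first) to find everything that
# will be affected. Job names are illustrative only.
from collections import deque

downstream = {
    "etl_orders": ["feature_build"],
    "feature_build": ["recommendation_engine"],
    "recommendation_engine": [],
}

def impacted(graph, failed):
    seen, queue = set(), deque([failed])
    while queue:
        job = queue.popleft()
        for nxt in graph.get(job, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return sorted(seen)

print(impacted(downstream, "etl_orders"))
# ['feature_build', 'recommendation_engine']
```

This is the difference between "job 217 of 500 failed" and "the recommendation engine will miss its window": the same failure, reported in business terms.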
>> Liability is a huge issue for a lot of manufacturers.

>> And Navistar has been at the leading edge of how to apply technologies in that business.

>> They really have been a poster child for digital transformation.

>> They sure have.

>> Here's a company that's been around for 100 plus years, and when we talk to them they tell us that they have every technology under the sun that has come since the mainframe, and for them to be transforming and leading in this way, we're very fortunate to be part of their journey.

>> Well, we'd love to talk more about some of these customer use cases. That's what people love about theCUBE, we want to do more of them, share those examples. People love to see proof in real-world examples, not just talk, so appreciate you sharing.

>> Absolutely.

>> Thanks for sharing, thanks for the insights. We're here live with theCUBE in New York City, part of CubeNYC. We're getting all the data and sharing it with you. I'm John Furrier with Peter Burris. Stay with us for more day two coverage after this short break. (upbeat music)
Basil Faruqui, BMC Software | BigData NYC 2017
>> Announcer: Live from Midtown Manhattan, it's theCUBE. Covering BigData New York City 2017. Brought to you by SiliconANGLE Media and its ecosystem sponsors.

>> His name is Jim Kobielus.

>> Jim: That's right, John Furrier is actually how I pronounce his name, for the record. But he is Basil Faruqui.

>> Basil Faruqui, who's the solutions marketing manager at BMC, welcome to theCUBE.

>> Basil: Thank you, good to be back on theCUBE.

>> So, first of all, I heard you guys had a tough time in Houston, so hope everything's getting better, and best wishes.

>> Basil: Definitely in recovery mode now.

>> Hopefully that can get straightened out. What's going on at BMC? Give us a quick update, and in the context of BigData NYC, what's happening, what is BMC doing in the big data space now, the AI space now, the IoT space now, the cloud space?

>> Like you said, you know, the data space, the IoT space, the AI space. There are four components of this entire picture that literally haven't changed since the beginning of computing, the four components of a data pipeline: ingestion, storage, processing and analytics. What keeps changing around it is the infrastructure, the types of data, the volume of data, and the applications that surround it. The rate of change has picked up immensely over the last few years, with Hadoop coming into the picture and public cloud providers pushing it. It's obviously created a number of challenges, but one of the biggest challenges that we are seeing in the market, and that we're helping customers address, is the challenge of automating this. And obviously the benefit of automation is in scalability as well as reliability. So when you look at this rather simple data pipeline, which is now becoming more and more complex, how do you automate all of this from a single point of control? How do you continue to absorb new technologies and not re-architect your automation strategy every time?
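The four-stage pipeline Basil describes, driven from a single point of control, can be sketched generically. This is an illustrative sketch only; the stage names and functions are hypothetical and are not Control-M's actual API:

```python
# Hypothetical four-stage pipeline (ingestion, storage, processing,
# analytics) run from a single point of control. Each stage body is a
# stand-in for a real technology and can be swapped independently.
def ingest():
    return ["raw-record-1", "raw-record-2"]

def store(records):
    return {"datalake/batch-001": records}

def process(stored):
    return [r.upper() for batch in stored.values() for r in batch]

def analyze(processed):
    return {"record_count": len(processed)}

# The orchestrator owns ordering; swapping one stage's underlying
# technology does not rewire the others.
PIPELINE = [ingest, store, process, analyze]

def run_pipeline():
    result = None
    for stage in PIPELINE:
        result = stage() if result is None else stage(result)
    return result

print(run_pipeline())  # {'record_count': 2}
```

The single `PIPELINE` list is the "single point of control" in miniature: one place defines the order and dependencies, whatever runs inside each stage.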
Whether it's Hadoop, whether it's bringing in machine learning from a cloud provider, that is the issue we've been solving for customers.

>> All right, let me jump into it. So first of all, you mentioned some things that never change: ingestion, storage, and what was the third one?

>> Ingestion, storage, processing and, eventually, analytics.

>> So okay, that's cool, totally buy that. Now if you move and say, hey, okay, so you believe that's standard, but now in the modern era that we live in, which is complex, you want breadth of data, and you also want the specialization when you get down to machine learning. That's highly bound, and that's where the automation is right now. We see the trend essentially making that automation broader as it goes into customer environments.

>> Basil: Correct.

>> How do you architect that? If I'm a CXO or a CDO, what's in it for me? How do I architect this? Because that's really the number one thing: I know what the building blocks are, but they've changed in their dynamics in the marketplace.

>> So the way I look at it is that what defines success and failure, particularly in big data projects, is your ability to scale. If you start a pilot and you spend, you know, three months on it and you deliver some results, but you cannot roll it out worldwide, nationwide, whatever it is, essentially the project has failed. The analogy I often give is Walmart has been testing the pickup tower, I don't know if you've seen it, basically a giant ATM for you to go pick up an order that you placed online. They're testing this at about a hundred stores today. Now that's a success, and Walmart wants to roll this out nationwide. How much time do you think their IT department will get? Is this a five year project, a ten year project? No, the management's going to want this done in six months, ten months.
So essentially, this is where automation becomes extremely crucial, because it is now allowing you to deliver speed to market, and without automation you are not going to be able to get to an operational stage in a repeatable and reliable manner.

>> You're describing a very complex automation scenario. How can you automate in a hurry without sacrificing, you know, the details of what needs to be done? In other words, you seem to call for repurposing or reusing prior automation scripts and rules and so forth. How can the Walmarts of the world do that fast, but also do it well?

>> So we go about it in two ways. One is that out of the box we provide a lot of pre-built integrations to some of the most commonly used systems in an enterprise, all the way from the mainframes to the Oracles, SAPs, Hadoops and Tableaus of the world. They're all available out of the box for you to quickly reuse these objects and build an automated data pipeline. The other challenge we saw, particularly when we entered the big data space four years ago, was that automation was something that was considered close to the project becoming operational. And that's where a lot of rework happened, because developers had been writing their own scripts, using point solutions. So we said, all right, it's time to shift automation left and allow companies to build automation as an artifact very early in the development lifecycle. About a month ago we released what we call Control-M Workbench, which is essentially a Community Edition of Control-M targeted towards developers, so that instead of writing their own scripts they can use Control-M in a completely offline manner, without having to connect to an enterprise system. As they build and test and iterate, they're using Control-M to do that. So as the application progresses through the development lifecycle, all of that work can then translate easily into an Enterprise Edition of Control-M.
>> So quickly, just explain what shift-left means for the folks that might not know software methodologies. Not left political or alt-right, this is software development, so please take a minute to explain what shift-left means, and the importance of it.

>> Correct, so if you think of software development as a straight-line continuum, you start with building some code, you do some testing, then unit testing, then user acceptance testing. As it moves along this chain, there was a point right before production where all of the automation used to happen. You know, developers would come in and deliver the application to ops, and ops would say, well, hang on a second, all these cron tabs and all these other point solutions you've been using for automation, that's not what we use in production, and we need you to now--

>> Test early and often.

>> Test early and often. The challenge was that the tools developers used were not the tools being used on the production end of the cycle. And there was good reason for it, because developers don't need something really heavy, with all the bells and whistles, early in the development lifecycle. Control-M Workbench is a very light version which is targeted at developers and focuses on the needs that they have when they're building and developing, as the application progresses through its life cycle.

>> How much are you seeing Waterfall fade and people shifting left becoming more prominent now? What percentage of your customers have moved to Agile and shifting left, percentage-wise?

>> So we survey our customers on a regular basis, and the last survey showed that 80% of our customers have either implemented a more continuous integration and delivery type of framework, or are in the process of doing it. And that's the other--

>> And getting upfront costs down as much as possible, a tipping point is reached.
>> What is driving all of that is the need from the business. You know, the days of the five-year implementation timelines are gone. This is something that you need to deliver every week, every two weeks, in iterations. And we have also innovated in that space with an approach we call Jobs-as-Code, where you can build entire, complex data pipelines in code format, so that you can enable the automation in a continuous integration and delivery framework. >> I have one quick question, Jim, and then I'll let you take the floor, and you've got to get a word in soon. But I have one final question on this BMC methodology thing. You guys have a history, obviously; BMC goes way back. Remember Max Watson, the CEO, back in Palm Beach in '97, we used to chat with him. Dominated that landscape, but we're kind of going back to a systems mindset. So the question for you is, how do you view this holy grail, the promised land of AI and machine learning, where, you know, end-to-end visibility is really the goal, right? At the same time, you want bounded experiences at the root level so automation can kick in to enable more activity. So it's a trade-off between going for the end-to-end visibility out of the gate, but also having bounded visibility and data to automate. How do you guys look at that market? Because customers want the end-to-end promise, but they don't want to try to get there too fast and hit dis-economies of scale, potentially. How do you talk about that? >> And that's exactly the approach we've taken with Control-M Workbench, the Community Edition. Because early on you don't need capabilities like SLA management and forecasting and automated promotion between environments. Developers want to be able to quickly build and test and show value, OK, and they don't need something with all the bells and whistles. We're allowing you to handle that piece in that manner, through Control-M Workbench.
As things progress, and the application progresses, the needs change as well. Now I'm closer to delivering this to the business, I need to be able to manage this within an SLA, I need to be able to manage this end-to-end and connect to other systems of record, and streaming data and clickstream data, all of that. So we believe there doesn't have to be a trade-off; you don't have to compromise speed and quality and visibility and enterprise-grade automation. >> You mention trade-offs. So with Control-M Workbench, the developer can use it offline. What amount of testing can they possibly do on a complex data pipeline automation when the tool is offline? I mean, it simply seems like the more development they do offline, the greater the risk that it simply won't work when they go into production. Give us a sense for how they mitigate that risk. >> Sure. We spent a lot of time observing how developers work, and very early in the development stage, all they're doing is working off of their Mac or their laptop, and they're not really connecting to any enterprise system. And that is where they end up writing a lot of scripts, because whatever code, whatever business logic they've written, the way they're going to make it run is by writing scripts. And that essentially becomes a problem, because then you have scripts managing more scripts, and as the application progresses, you have this complex web of scripts and cron tabs and maybe some open-source solutions trying to simply make all of this run. And working in this offline manner doesn't mean that they're losing all of the other Control-M capabilities. Simply, as the application progresses, whatever automation they've built in Control-M can seamlessly now flow into the next stage. So when you are ready to take an application into production, there is essentially no rework required from an automation perspective.
All of that work that was built can now be translated into the enterprise-grade Control-M, and that's where operations can then go in and add the other artifacts, such as SLA management and forecasting, and other things that are important from an operational perspective. >> I'd like to get both your perspectives, because you're like an analyst here. So Jim, I want you guys to comment. My question to both of you would be, you know, looking at this time in history: obviously on the BMC side, you mentioned some of the history, you guys are transforming on a new journey and extending that capability into this world. Jim, you're covering state-of-the-art AI and machine learning. What's your take on the space now? Strata Data, which was Strata Hadoop and Hadoop World before that; Cloudera went public, Hortonworks is now public. Kind of the big Hadoop guys kind of grew up, but the world has changed around them. It's not just about Hadoop anymore. So I want to get your thoughts on this kind of perspective. We're seeing a much broader picture at BigData NYC versus Strata Hadoop, which seems to be losing steam. But, I mean, in terms of the focus, the bigger focus is much broader and horizontally scalable. Your thoughts on the ecosystem right now? >> Let Basil answer first, unless Basil wants me to go first. >> I think the reason the focus is changing is because of where the projects are in their lifecycle. You know, now what we're seeing is most companies are grappling with, how do I take this to the next level? How do I scale? How do I go from just proving out one or two use cases to making the entire organization data-driven, and really inject data-driven decision making into all facets of decision making? So that is, I believe, what's driving the change that we're seeing: that, you know, now you've gone from Strata Hadoop to being Strata Data, and the focus on that element. Like I said earlier, the difference between success and failure is your ability to scale and operationalize. Take machine learning, for example.
>> And really it's not a hype market. Show me the meat on the bone, show me scale; I've got operational concerns of security and whatnot. >> And machine learning, you know, that's one of the hottest topics. A recent survey I read, which polled a number of data scientists, revealed that they spend less than 3% of their time training the data models and about 80% of their time in data manipulation, data transformation and enrichment. That is obviously not the best use of a data scientist's time, and that is exactly one of the problems we're solving for our customers around the world. >> And it needs to be automated to the hilt to help them be more productive, delivering fast results. >> Ecosystem perspective, Jim, what are your thoughts? >> Yes, everything that Basil said, and I'll just point out that many of the core use cases for AI are automation of the data pipeline. You know, it's driving machine-learning-driven predictions, classifications, abstractions and so forth into the data pipeline, into the application pipeline, to drive results in a way that is contextually and environmentally aware of what's going on: the historical data, what's going on in terms of current streaming data, to drive optimal outcomes, you know, using predictive models and so forth, inline to applications. So really, fundamentally then, what's going on is that automation is an artifact that needs to be driven into your application architecture as a repurposable resource for a variety of jobs. >> How would you even know what to automate? I mean, that's the question. >> You're automating human judgment, you're automating effort, like the judgments that a working data engineer makes to prepare data for modeling and whatever. More and more, that need can be automated, because those are patterned, structured activities that have been mastered by smart people over many years.
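As a rough, stdlib-only illustration of the point just made, the patterned data-preparation work (cleaning, normalization, enrichment) that consumes most of a data scientist's time can be captured once as named steps and then run automatically on every load. The step names, fields, and threshold here are invented for the example.

```python
# Hedged sketch: repeatable data-prep steps captured as reusable,
# automatable functions instead of one-off manual effort.
def drop_incomplete(rows):
    # Cleaning: discard records missing the field we need.
    return [r for r in rows if r.get("amount") is not None]

def normalize_currency(rows, rate=1.0):
    # Normalization: convert amounts with a (hypothetical) exchange rate.
    return [{**r, "amount": round(r["amount"] * rate, 2)} for r in rows]

def enrich_with_flag(rows, threshold=1000.0):
    # Enrichment: flag unusually large transactions for downstream use.
    return [{**r, "large": r["amount"] >= threshold} for r in rows]

def prep_pipeline(rows):
    for step in (drop_incomplete, normalize_currency, enrich_with_flag):
        rows = step(rows)
    return rows

raw = [{"amount": 1200.0}, {"amount": None}, {"amount": 40.0}]
print(prep_pipeline(raw))
# -> [{'amount': 1200.0, 'large': True}, {'amount': 40.0, 'large': False}]
```

Once steps like these are codified, an orchestration tool can schedule and rerun them reliably, freeing the data scientist for model work.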
>> I mean, we just had a customer on, with GSK, at that scale, and his attitude is, we see the results from the users, then we double down and pay for it and automate it. So the automation question, it's a rhetorical question, but this begs the question, which is, you know, who's writing the algorithms as machines get smarter and start throwing off their own real-time data? What are you looking at? How do you determine you're going to need machine learning for machine learning? You're going to need AI for AI? Who writes the algorithms for the algorithms? >> Automated machine learning is a hot research focus, and not just research: we're seeing more and more solution providers, like Microsoft and Google and others, doubling down on investments in exactly that area. That's a productivity play for data scientists. >> I think the data market's going to change radically, in my opinion. So you're starting to see some things with blockchain and some other things that are interesting. Data sovereignty, data governance are huge issues. Basil, just give your final thoughts for this segment as we wrap this up. Final thoughts on data and BMC: what should people know about BMC right now? Because people might have a historical view of BMC. What's the latest, what should they know, what's the new Instagram picture of BMC? What should they know about you guys? >> I think what people should know about BMC is that, you know, all the work that we've done over the last 25 years, in virtually every platform that came before Hadoop, we have now innovated to take into things like big data and cloud platforms. So when you are choosing Control-M as a platform for automation, you are choosing a very, very mature solution. An example of which is Navistar, and their CIO is actually speaking at the keynote tomorrow. They've had Control-M for 15, 20 years and have automated virtually every business function through Control-M.
And when they started their predictive maintenance project, where they're ingesting data from about 300 thousand vehicles today to figure out when a vehicle might break and do predictive maintenance on it, they said they always knew they were going to use Control-M for it, because that was the enterprise standard, and they knew they could simply extend that capability into this area. And when they started, about three, four years ago, they were ingesting data from about a hundred thousand vehicles; that has now scaled to over 325 thousand vehicles, and they have not had to re-architect their strategy as they grow and scale. So I would say that is one of the key messages that we are taking to market: that we are bringing innovation that has spanned over 25 years and evolving it. >> Modernizing it. >> Modernizing it and bringing it to newer platforms. >> Congratulations. I wouldn't call that a pivot, I'd call it extensibility, kind of modernizing the core things. >> Absolutely. >> Thanks for coming on and sharing the BMC perspective inside theCUBE here. At BigData NYC, this is theCUBE, I'm John Furrier, with Jim Kobielus, here in New York City. More live coverage over the three days we will be here: today, tomorrow and Thursday at BigData NYC. More coverage after this short break.
Basil Faruqui, BMC Software - BigData SV 2017 - #BigDataSV - #theCUBE
(upbeat music) >> Announcer: Live from San Jose, California, it's theCUBE, covering Big Data Silicon Valley 2017. >> Welcome back, everyone. We are here live in Silicon Valley for theCUBE's Big Data coverage. Our event, Big Data Silicon Valley, also called Big Data SV, a companion event to our Big Data NYC event, where we have our unique program in conjunction with Strata Hadoop. I'm John Furrier with George Gilbert, our Wikibon big data analyst. And we have Basil Faruqui, who is the Solutions Marketing Manager at BMC Software. Welcome to theCUBE. >> Thank you, great to be here. >> We've been hearing a lot on theCUBE about schedulers and automation, and machine learning is the hottest trend happening in big data. We're thinking that this is going to help move the needle on some things. Your thoughts on this, on the world we're living in right now, and what BMC is doing at the show? >> Absolutely. So, scheduling and workflow automation is absolutely critical to the success of big data projects. This is not something new. Hadoop is only about 10 years old, but other technologies that have come before Hadoop have relied on this foundation for driving success. If we look at the Hadoop world, what gets all the press is all the real-time stuff, but what powers all of that underneath it is a very important layer of batch. If you think about some of the most common use cases for big data, if you think of a bank, they're talking about fraud detection and things like that. Let's just take the fraud detection example. Detecting an anomaly in how somebody is spending: if somebody's credit card is used in a way that doesn't match their spending habits, the bank detects that and will maybe close the card down or contact somebody. But everything else that has happened before that has happened in batch mode: for them to collect the history of how that card has been used, then match it with how all the other cardmembers use their cards.
When cards are stolen, what are those patterns? All that is being powered by what's today known as workload automation. In the past, it's been known by names such as job scheduling and batch processing. >> In the systems business, everyone knows what schedulers, compilers, all this computer science stuff is. But this is interesting. Now that the data lake has become so swampy, and people call it the data swamp, people are looking at moving data out of data lakes into real time, as you mention, but it requires management. So, there's a lot of coordination going on. This seems to be where most enterprises are now focusing their attention: making that data available. >> Absolutely. >> Hence the notion of scheduling and workloads, because their use cases are different. Am I getting it right? >> Yeah, absolutely. And if we look at what companies are doing, for every CEO and every boardroom there's a charter for digital transformation, and it's no longer about taking one or two use cases around big data and driving success. Data and intelligence is now at the center of everything a company does, whether it's building new customer engagement models, whether it's building new ecosystems with their partners and suppliers, or back-office optimization. So, when CIOs and data architects think about having to build a system like that, they are faced with a number of challenges. It has to be enterprise-ready. It has to take into account governance, security, and others. But if you peel the onion just a little bit, what architects and CIOs are faced with is, okay, you've got a web of complex technologies, legacy applications and modern applications, that hold a lot of the corporate data today. And then you have new sources of data like social media, devices, sensors, which have a tendency to produce a lot more data.
First things first, you've got an ecosystem like Hadoop, which is supposed to be kind of the nerve center of the new digital platform. You've got to start ingesting all this data into Hadoop, and this has to happen in an automated fashion for it to be scalable. >> But this is the combination of streaming and batch. >> Correct. >> Now this seems to be the management holy grail right now, nailing those two. Did I get that right? >> Absolutely. So, people talk about, in technical terms, the speed layer and the batch layer, and both have to converge for them to be able to deliver the intelligence and insight that the business users are looking for. >> Would it be fair to say it's not just the convergence of the speed layer and batch layer in Hadoop, but what BMC brings to town is the non-Hadoop parts of those workloads? Whether it's batch outside Hadoop, or streaming, which pre-Hadoop was more nichey. We need this overarching control, even if it's not a Hadoop-centric architecture. >> Absolutely. So, I've said this for a long time: Hadoop is never going to live on an island on its own in the enterprise. And with the maturation of the market, Hadoop has to now play with the other technologies in the stack. So just take data ingestion for an example: you've got ERPs, you've got CRMs, you've got middleware, you've got data warehouses, and you have to ingest a lot of that in. Where Control-M brings a lot of value and speeds up time to market is that we have out-of-the-box integrations with a lot of the systems that already exist in the enterprise, such as ERP solutions and others. Virtually any application that can expose itself through an API or a web service, Control-M has the ability to automate that ingestion piece. But this is only step one of the journey. So, you've brought all this data into Hadoop, and now you've got to process it. The number of tools available for processing this is growing at an unprecedented rate.
You've got, you know, MapReduce was a hot thing just two years ago, and now Spark has taken over. So with Control-M, about four years ago we started building very deep native capabilities in this new ecosystem. So you've got ingestion that's automated, then you can seamlessly automate the actual processing of the data using things like Spark, Hive, Pig, and others. And the last mile of the journey, the most important one, is making this refined data available to systems and users that can analyze it. Often Hadoop is not the repository that analytic systems sit on top of; it's another layer that all of this has to be moved to. So, if you zoom out and take a look at it, this is a monumental task. And if you use a siloed approach to automating this, it becomes unscalable. And that's where a lot of the Hadoop projects often... >> Crash and burn. >> Crash and burn, yes, sustainability. >> Let's just say it, they crash and burn. >> So, Control-M has been around for 30 years. >> By the way, just to add to the crash-and-burn piece, the data lake gets stalled there, that's why the swamp happens, because they're like, now how do I operationalize this and scale it out? >> Right. If you're storing a lot of data and not making it available for processing and analysis, then it's of no use. And that's exactly our value proposition. This is not a problem we're solving for the first time. We did this as we have seen these waves of automation come through: from the mainframe time, when it was called batch processing, then it evolved into distributed client-server, when it was known more as job scheduling. And now... >> So BMC has seen this movie before. >> Absolutely. >> Alright, so let's take a step back. Zoom out, step back, go hang out in the big trees, look down on the market. Data practitioners, big data practitioners out there right now are wrestling with this issue. You've got streaming, real-time stuff, you've got batch, it's all coming together.
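The ingest-process-deliver journey just described only pays off if each stage runs in a repeatable, reliable way, which is what an orchestration layer provides. This is a toy, hypothetical sketch of that idea: placeholder stage functions and simple retries stand in for real connectors and enterprise-grade scheduling.

```python
# Hedged sketch: a minimal orchestrator that runs pipeline stages in
# order, retries failures, and stops downstream work if a stage dies.
def run_pipeline(stages, retries=2):
    status = {}
    for name, fn in stages:
        for attempt in range(retries + 1):
            try:
                fn()
                status[name] = "ok"
                break
            except Exception:
                status[name] = "failed"
        if status[name] == "failed":
            break  # don't run downstream stages on a hard failure
    return status

calls = {"n": 0}
def flaky_ingest():
    # Simulates a source that is briefly unavailable, then recovers.
    calls["n"] += 1
    if calls["n"] < 2:
        raise RuntimeError("source unavailable")

stages = [("ingest", flaky_ingest),
          ("process", lambda: None),   # e.g. a Spark or Hive job
          ("publish", lambda: None)]   # e.g. load into the analytics layer
print(run_pipeline(stages))
# -> {'ingest': 'ok', 'process': 'ok', 'publish': 'ok'}
```

A siloed script has none of this recovery logic, which is why hand-rolled automation tends to crash and burn as the pipeline grows.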
What is Control-M doing great right now with practitioners? What are you guys solving? Because there are a zillion tools out there, but people are human; every hammer looks for a nail. So, you have a lot of change happening at the same time, and yet all these tools. What is Control-M doing to really win? Where are you guys winning? >> Sure. Where we are adding a lot of value for our customers is helping them speed up the time to market in delivering these big data projects, and delivering them at scale and quality. >> Give me an example of a project. >> Malwarebytes is a Silicon Valley-based company. They are using this to ingest and analyze data from thousands of endpoints from their end users. >> That's their Lambda architecture, right? >> It is a Lambda architecture; I won't steal their thunder, they're presenting tomorrow at eleven. >> Okay. >> Eleven-thirty tomorrow. Another example is a company called Navistar. Now here's a company that's been around for 200 years. They manufacture heavy-duty trucks, 18-wheelers, school buses. And they recently came up with a service called OnCommand. They have a fleet of 160,000 trucks that are fitted with sensors. They're sending telematics data back to their data centers, and in between it stops in the cloud. >> So they're going up to the cloud for upload and backhaul, basically, right? >> Correct. So, it goes to the cloud, and from there it is ingested inside their Hadoop systems. And they're looking for trends to make sure none of the trucks break down, because a truck that's carrying freight breaking down hits the bottom line right away. But that's not where they're stopping. In real time they can triangulate the position of the truck, figure out where the nearest dealership is. Do they have the parts? When to schedule the service? But, if you think about it, the warranty information and the parts information are not sitting in Hadoop. That's sitting in their mainframes, SAP systems, and others.
And Control-M is orchestrating this across the board, from mainframe to ERP and into Hadoop, for them to be able to marry all this data together. >> How do you get back into the legacy? Is that because you have the experience there? Is that part of the product portfolio? >> That is absolutely a part of the product portfolio. We started our journey back in the mainframe days, and as the world has evolved, to client-server, to web, and now to mobile and virtualized and software-defined infrastructures, we have kept pace with that. >> You guys have a nice end-to-end view right now going on, and certainly that example with the trucks highlights IoT right there. >> Exactly. >> You have a clear line of sight on IoT? >> Yup. >> The best measure of your maturity would be the breadth of your integrations. >> Absolutely. And we don't stop at what we provide just out of the box. We realize that we have 30 to 35 out-of-the-box integrations, but there are a lot more applications than that. We have architected Control-M in a way where it can automate data loads on any application and any database that can expose itself through an API. That is huge, because if you think about the open-source world, by the time this conference is over, there are going to be a dozen new tools and projects that come online. And that's a big challenge for companies too. How do you keep pace with this and how do you (drowned out) all this? >> Well, I think people are starting to squint past the fashion aspect of open source, which I love by the way, but it does create more diversity. But, you know, some things become fashionable and then get big-time trashed. Look at Spark. Spark was beautiful. That one came out of the woodwork. George, you're tracking all the fashion. What's the hottest thing right now in open source? >> It seems to me that we've spent five-plus years building data lakes, and now we're trying to take that data and apply the insights from it to applications.
And really, Control-M's value add, my understanding is, is that we have to go beyond Hadoop, because Hadoop was an island, you know, an island or a data lake, but now the insights have to be enacted on applications that go outside that ecosystem. And that's where Control-M comes in. >> Yeah, absolutely. We are that overarching layer that helps you connect your legacy systems and modern systems and bring it all into Hadoop. The story I tell when I'm explaining this to somebody is that you've installed Hadoop on day one, great; guess what, it has no data in it. You've got to ingest data, and you have to be able to take a strategic approach to that, because you can use some point solutions and do scripting for the first couple of use cases, but as soon as the business gives you the green light and says, you know what, we really like what we've seen, now let's scale up, that's where you really need to take a strategic approach, and that's where Control-M comes in. >> So, let me ask then: if the bleeding edge right now is trying to operationalize the machine learning models that people are beginning to experiment with, just the way they were experimenting with data lakes five years ago, what role can Control-M play today in helping people take a trained model and embed it in an application so it produces useful actions and recommendations, and how much custom integration does that take? >> If we take the example of machine learning, if you peel the onion of machine learning, you've got data that needs to be moved, that needs to be constantly evaluated, and then the algorithms have to be run against it to provide the insights. So this in itself is exactly what Control-M allows you to do: ingest the data, process the data, let the algorithms process it, and then of course move it to a layer where people and other systems can analyze it. It's not just about people anymore; other systems will analyze the data.
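Peeling the machine-learning onion as just described, the moving parts reduce to a few orchestrated steps: move the data, transform it, run the algorithm, and hand the results to a downstream layer. This is a hedged, stdlib-only sketch; the "model" is a stand-in threshold rule, not a real trained algorithm, and the data and field names are invented.

```python
# Hedged sketch of the ML flow: ingest -> transform -> score -> publish.
# In practice each step would be a scheduled job in an orchestration tool.
def ingest():
    # Stand-in for pulling transaction data from a source system.
    return [{"spend": 50.0}, {"spend": 5000.0}, {"spend": 75.0}]

def transform(rows):
    # Feature engineering: spend relative to the average.
    avg = sum(r["spend"] for r in rows) / len(rows)
    return [{**r, "ratio": r["spend"] / avg} for r in rows]

def score(rows, anomaly_ratio=2.0):
    # Stand-in for a trained model: flag spends far above average.
    return [{**r, "anomaly": r["ratio"] > anomaly_ratio} for r in rows]

def publish(rows):
    # Hand only the flagged records to the downstream analysis layer.
    return [r for r in rows if r["anomaly"]]

flagged = publish(score(transform(ingest())))
print([r["spend"] for r in flagged])
# -> [5000.0]
```

Each function boundary here is exactly the kind of hand-off point an orchestrator schedules, retries, and monitors end to end.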
And the important piece here is that we're allowing you to do this from a single pane of glass, and being able to see this picture end to end. All of this work is being done to drive business results, generating new revenue models, like in the case of Navistar. Allowing you to capture all of this and then tie it to business SLAs is one of the most highly rated capabilities of Control-M from our customers. >> This is the cloud equation we were talking about last week at Google Next: a combination of enterprise readiness across the board. The end-to-end is the picture, and you guys are in a good position. Congratulations, and thanks for coming on theCUBE. Really appreciate it. >> Absolutely, great to be here. >> It's theCUBE breaking it down here at Big Data World. This is the trend: it's an operating system world in the cloud. Big data with IoT, AI, machine learning. Big themes breaking out early on at Big Data SV in conjunction with Strata Hadoop. More right after this short break.
Kubernetes on Any Infrastructure Top to Bottom Tutorials for Docker Enterprise Container Cloud
>> All right, we're five minutes after the hour, so all aboard, who's coming aboard? Welcome everyone to the tutorial track for our Launchpad event. So for the next couple of hours, we've got a series of videos and experts on hand to answer questions about our new product, Docker Enterprise Container Cloud. Before we jump into the videos and the technology, I just want to introduce myself and my other emcee for the session. I'm Bill Milks. I run curriculum development for Mirantis. And >> I'm Bruce Basil Matthews. I'm the Western regional Solutions Architect for Mirantis, and welcome everyone to this lovely Launchpad event. >> We're lucky to have you with us, Bruce. At least somebody on the call knows something about Docker Enterprise Container Cloud. Um, speaking of people that know about Docker Enterprise Container Cloud, make sure that you've got a window open to the chat for this session. We've got a number of our engineers available and on hand to answer your questions live as we go through these videos and discuss the product. So that's us, I guess. Docker Enterprise Container Cloud is Mirantis's brand new product for bootstrapping Docker Enterprise Kubernetes clusters at scale. Anything to add, Bruce? >> No, just that I think that we're trying to, uh, let's see, hold on, I think that we're trying to give you a foundation against which to give this stuff a go yourself. And that's really the key to this thing: to provide some, you know, mini training and education in a very condensed period. So, >> yeah, that's exactly what you're going to see. In the series of videos we have today, we're going to focus on your first steps with Docker Enterprise Container Cloud, from installing it to bootstrapping your regional and child clusters, so that by the end of the tutorial content today, you're gonna be prepared to spin up your first Docker Enterprise clusters using Docker Enterprise Container Cloud. So, just a little bit of logistics for the session.
We're going to run through these tutorials twice. We're gonna do one run-through starting seven minutes ago, up until, I guess it will be, ten fifteen Pacific time. Then we're gonna run through the whole thing again. So if you've got other colleagues that weren't able to join right at the top of the hour and would like to jump in from the beginning: ten fifteen Pacific time, we're gonna do the whole thing over again. So if you want to see the videos twice, or you've got friends and colleagues that you wanna pull in for a second chance to see this stuff, we're gonna do it all twice in this session. Any logistics I should add, Bruce? >> No, I think that's pretty much what we had to nail down here. But let's zoom, dash into those, uh, feature films. >> Let's do it. And like I said, don't be shy. Feel free to ask questions in the chat; our engineers, and Bruce and myself, are standing by to answer your questions. So let me just tee up the first video here and walk us through it. And here we go. So our first video here is gonna be about installing the Docker Enterprise Container Cloud management cluster. I like to think of the management cluster as, like, your mothership, right? This is what you're gonna use to deploy all those little child clusters that you're gonna use as, you know, commodity clusters downstream. So the management cluster is always our first step. Let's jump in there. >> Now we have to give this brief little pause. >> Good day. The focus for this demo will be the initial bootstrap of the management cluster and the first regional clusters to support AWS deployments. The management cluster provides the core functionality, including identity management, authentication, inventory, and release versioning. The regional cluster provides the specific architecture provider, in this case AWS, and the LCM components. The UCP cluster, or child cluster, is the cluster or clusters being deployed and managed.
The deployment is broken up into five phases. The first phase is preparing the bootstrap node and its dependencies, and handling the download of the bootstrap tools. The second phase is obtaining a Mirantis license file. The third phase: prepare the AWS credentials and set up the AWS environment. The fourth: configuring the deployment, defining things like the machine types. And the fifth phase: run the bootstrap script and wait for the deployment to complete. Okay, so here we're setting up the bootstrap node, just checking that it's clean and clear and ready to go, with no credentials already set up on that particular node. Now we're just checking through AWS to make sure that the account we want to use has the correct credentials and the correct roles set up, and validating that there are no instances currently set up in EC2. Not completely necessary, but it just helps keep things clean and tidy from an IAM perspective. Right. So next step, we're just going to check that we can, from the bootstrap node, reach Mirantis and get to the repositories where the various components of the system are available. Good, no errors here. Right, now we're going to start setting up the bootstrap node itself. So we're downloading the KaaS release with the get-kaas script, and next we're going to run it and deploy it. Changing into that bootstrap folder, just having a look to see what's there. Right now we have no license file, so we're gonna get the license file. Okay. We get the license file through the Mirantis downloads site, signing up here, downloading that license file and putting it into the kaas-bootstrap folder. Okay. Once we've done that, we can now go ahead with the rest of the deployment. See that the file is there. That's again checking that we can now reach EC2, which is extremely important for the deployment. Just validation steps as we move through the process. All right, the next big step is validating all of our AWS credentials.
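Condensed into a command sketch, the five phases narrated so far look roughly like this. The script and file names below are assumptions reconstructed from the narration, not the literal product commands, so treat this as shell-style pseudocode and check the official docs for the exact steps:

```shell
# Phase 1: prepare the bootstrap node and download the bootstrap tooling
./get_container_cloud.sh                  # fetches the kaas-bootstrap folder

# Phase 2: obtain a Mirantis license file and put it in place
cp mirantis.lic kaas-bootstrap/

# Phase 3: prepare AWS credentials and the AWS environment
export AWS_ACCESS_KEY_ID=<access-key-id>
export AWS_SECRET_ACCESS_KEY=<secret-access-key>
export AWS_DEFAULT_REGION=<region>

# Phase 4: configure the deployment (machine types, the AMI for the region)
$EDITOR kaas-bootstrap/templates/aws/cluster.yaml.template

# Phase 5: run the bootstrap script and wait for the deployment to complete
kaas-bootstrap/bootstrap.sh all
```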
So the first thing is, we need those root credentials, which we're going to export on the command line. This is to create the necessary bootstrap user and AWS credentials for the completion of the deployment. We're now running an AWS policy create, and part of that is creating our bootstrap user and creating the necessary policy files on top of AWS, just generally preparing the environment using a CloudFormation script, as you'll see in a second. We give it the new policy confirmations, and we're just waiting for it to complete. And there, it's done. If we go have a look at the AWS console, you can see that the create completed. Now we can go and get the credentials that we created today. In the IAM console, go to that new user that's been created, go to the section on security credentials, and create new keys. Download that information: the access key ID and the secret access key. We then export them on the command line. Okay. A couple of things to note: ensure that you're using the correct AWS region, and ensure that in the config file you put in the correct AMI for that region. We'll pull it together in a second. Okay, that's the access key and the secret access key. Right, let's kick it off. So this process takes between thirty and forty-five minutes. It handles all the AWS dependencies for you, and as we go through, I'll show you how you can track it, and we'll start to see things like the running instances being created on the AWS side. The first phase of this whole process, happening in the background, is the creation of a local kind-based bootstrap cluster on the bootstrap node. That cluster is then used to deploy and manage all the various instances and configurations within AWS. At the end of the process, that cluster is copied into the new cluster on AWS, and then the local cluster is shut down, essentially moving itself over. Okay. The local cluster is built; we're just waiting for the various objects to get ready.
Standard Kubernetes objects here. Okay, so we speed up this process a little bit just for demonstration purposes. There we go. So the first node being built is the bastion host, just a jump box that will allow us access to the entire environment. In a few seconds, we'll see those instances here in the AWS console on the right. Um, the failures that you're seeing around "failed to get the IP for bastion" are just the wait state while we wait for AWS to create the instance. Okay. And there we go: the bastion host has been built, and the three instances for the management cluster have now been created. We're going through the process of preparing those nodes and now copying everything over. See that? The scaling up of controllers in the bootstrap cluster indicates that we're starting all of the controllers in the new cluster. Almost there. Just waiting for Keycloak to finish up. Now we're shutting down the controllers on the local bootstrap node and preparing our OIDC configuration. As soon as this is completed, the last phase will be to deploy StackLight, the monitoring tool set, into the new cluster. There we go, the StackLight deployment has started. We're coming to the end of the deployment now. Final phase of the deployment, and we are done. Okay, you'll see at the end they're providing us the details for the UI login, so there's a Keycloak login. You can modify that initial default password as part of the configuration; it's set out in the documentation. The console's up, and we can log in. Thank you very much for watching. >> Excellent. So in that video, our wonderful Field CTO Sean O'Mara bootstrapped up a management cluster for Docker Enterprise Container Cloud. Bruce, where exactly does that leave us? So now we've got this management cluster installed, like, what's next?
>> So primarily, the foundation for being able to deploy either regional clusters, which will then allow you to support child clusters. Where this comes into play in the next piece of what we're going to show, I think with Sean O'Mara doing this, is the child cluster capability, which allows you to then deploy your application services on the local cluster that's being managed by the, ah, management cluster that we just created with the bootstrap. >> Right, so this cluster isn't yet for workloads. This is just for bootstrapping up the downstream clusters; those are what we're gonna use for workloads. >> Exactly. Yeah. And I just wanted to point out, since Sean O'Mara isn't around to actually answer questions: I could listen to that guy read the phone book and it would be interesting, but anyway, you can tell him I said that. >> He's watching right now, Bruce. Good. Um, cool. So, just to make sure I understood what Sean was describing there: that bootstrapper node that you, like, ran Docker Enterprise Container Cloud from to begin with, that's actually creating a kind Kubernetes-in-Docker deployment locally. That then hits the AWS API, in this example, to make those EC2 instances, and it makes, like, a three-manager Kubernetes cluster there, and then it, like, copies itself over to those Kubernetes managers. >> Yeah, and that's sort of where the transition happens. You can actually see it in the output when it says "I'm pivoting": I'm pivoting from my local kind deployment of Cluster API to the cluster that's being created inside of AWS or, quite frankly, inside of OpenStack or inside of bare metal. The targeting is, uh, abstracted. Yeah, but >> those are the three environments that we're looking at right now, right? AWS, bare metal, and OpenStack environments. So does that kind cluster on the bootstrapper go away afterwards? You don't need that afterwards? Yeah, that is just temporary.
To get things bootstrapped, then you manage things from the management cluster on AWS in this example? >> Yeah, yeah. The seed, uh, cloud that hosts the bootstrap is not required anymore, and there's no, uh, interplay between them after that. So there are no dependencies on any of the clouds that get created thereafter. >> Yeah, that actually reminds me of how we bootstrapped Docker Enterprise back in the day, via a temporary container that would bootstrap all the other containers and then go away. It's, uh, a similar temporary, transient bootstrapping model. Cool. Excellent. What about config there? It looked like there wasn't a ton, right? It looked like you had to, like, set up some AWS parameters like credentials and region and stuff like that. But other than that, it looked heavily scriptable, like there wasn't a ton of point-and-click there. >> Yeah, very much so. It's pretty straightforward from a bootstrapping standpoint. The config file that's generated, the template, is fairly straightforward and targeted towards a small, medium, or large, um, deployment. And by editing that single file and then gathering the license file and all of the things that Sean went through, um, it makes it fairly easy to script this. >> And if I understood correctly as well, that three-manager footprint for your management cluster, that's the minimum, right? We always insist on high availability for this management cluster, because boy, you do not wanna lose that. >> Right, right. And you know, there's all kinds of persistent data that needs to be available regardless of whether one of the nodes goes down or not. So we're taking care of all of that for you behind the scenes, without you having to worry about it as a developer.
>> No, and I think that's a theme that will come back throughout the rest of this tutorial session today: there's a lot of expertise baked in to Docker Enterprise Container Cloud in terms of implementing best practices for you, like the defaults are just the best practices of how you should be managing these clusters. We'll see more examples of that as the day goes on. Any interesting questions you want to call out from the chat, Bruce? >> Well, there was, yeah. There was one that we had responded to earlier about the fact that it's a management cluster that can then deploy either a regional cluster or a local child cluster. The child clusters, in each case, host the application services. >> Right. So at this point, we've got, in some sense, the simplest architecture for our Docker Enterprise Container Cloud. We've got the management cluster, and we're gonna go straight to child clusters. In the next video, there's a more sophisticated architecture, which we'll also cover today, that inserts another layer between those two: regional clusters, for when you need to manage regions, like across AWS regions, availability zones, things like that. >> Yeah, that local support for the child cluster makes it a lot easier for you to manage the individual clusters themselves, and to take advantage of our observability support systems, StackLight and things like that, for each one of the clusters locally, as opposed to having to centralize them. >> So, a couple of good questions in the chat here. Someone was asking for the instructions to do this themselves. I strongly encourage you to do so. That should be in the docs, which, thank you, Dale, Dale helpfully provided links for; that's all publicly available right now. So just head on into the docs like Dale provided here. You can follow this example yourself. All you need is a Mirantis license for this and your AWS credentials.
There was a question from an attendee here about deploying this to Azure. Not at GA, not at this time. >> Yeah, although that is coming. That's going to be in a very near-term release. >> I didn't wanna make promises for product, but I'm not too surprised that Azure's gonna be targeted very rapidly. Cool. Okay. Any other thoughts on this one, Bruce? >> No, just that the fact that we're running through these individual pieces of the steps will, I'm sure, help you folks. If you go to the link that, uh, the gentleman had put into the chat, um, giving you the step-by-step, um, it makes it fairly straightforward to try this yourselves. >> I strongly encourage that, right? That's when you really start to internalize this stuff. OK, but before we move on to the next video, let's just make sure everyone has a clear picture in your mind of, like, where we are in the life cycle here. Creating this management cluster, stop me if I'm wrong, is something you do once, right? That's when you're first setting up your Docker Enterprise Container Cloud environment or system. What we're going to see next is creating child clusters, and this is what you're gonna be doing over and over and over again: when you need to create a cluster for this dev team or, you know, this other team, whoever it is that needs commodity Docker Enterprise clusters, you create these easily, on demand. So that was once, to set up Docker Enterprise Container Cloud; child clusters, which we're going to see next, we're gonna do over and over and over again. So let's go to that video and see just how straightforward it is to spin up a Docker Enterprise cluster for workloads as a child cluster in Docker Enterprise Container Cloud. >> Hello. In this demo, we will cover the deployment experience of creating a new child cluster, the scaling of the cluster, and how to update the cluster when a new version is available. We begin the process by logging onto the UI as a normal user called Mary.
Let's go through the navigation of the UI so you can see how to switch projects; Mary only has access to Development. We get a list of the available projects that you have access to; what clusters have been deployed at the moment (there are none yet); the SSH keys associated for Mary and her team; the cloud credentials that allow you to create and access the various clouds that you can deploy clusters to; and finally the different releases that are available to us. We can switch from dark mode to light mode, depending on your preferences. Right, let's now set up some SSH keys for Mary so she can access the nodes and machines. Again, very simple: add an SSH key, give it a name, and we copy and paste our public key into the upload key block, or we can upload the key if we have the file available on our local machine. A simple process. So, to create a new cluster, we define the cluster, add management nodes, and add worker nodes to the cluster. Again, very simply: you go to the clusters tab, we hit the create cluster button, and give the cluster a name. Then we select the provider; we only have access to AWS in this particular deployment, so we'll stick to AWS, and select the region, in this case US West 1. Release version five point seven is the current release, and we attach Mary's key as the SSH key. We can then check the rest of the settings, confirming the provider and any Kubernetes CIDR IP address information. We can change this should we wish to; we'll leave it default for now. And then, which StackLight components would I like to deploy into my cluster? For this, I'm enabling StackLight with logging, and I can set up the retention sizes and retention times and, even at this stage, add any custom alerts for the watchdogs. We can set up email alerting, for which I will need my smarthost details and authentication details, and Slack alerts. Now I'm defining the cluster. All that's happened is the cluster's been defined; I now need to add machines to that cluster.
I'll begin by clicking the create machine button within the cluster definition. I select manager, select the number of machines (three is the minimum), select the instance size that I'd like to use from AWS, and, very importantly, ensure I use the correct AMI for the region. I can then decide on the root device size. There we go; my three machines are obviously creating. I now need to add some workers to this cluster, so I go through the same process, this time just selecting worker. I'll just add two. Once again, the AMI is extremely important; it will fail if we don't pick the right AMI, for an Ubuntu machine in this case. And the deployment has started. We can go and check on the build status by going back to the clusters screen and clicking on the little three dots on the right. We get the cluster info and the events. In the basic cluster info you'll see "pending" listed; the cluster is still in the process of being built. If we click on the events, we get a list of actions that have been completed as part of the setup of the cluster. So you can see here we've created the VPC, we've created the subnets, and we've created the Internet gateway, as necessary, within AWS, and we have no warnings at this stage. This will then run for a while. One minute past, we can click through and check the status of the machine builds individually, so we can check the machine info, details of the machines that we've assigned, and see any events pertaining to the machines, like this one, all normal. Just watch: the Kubernetes components are waiting for the machines to start. Go back to clusters. Okay, right, we're moving ahead now. We can see it's in progress. Five minutes in, we have the NAT gateway at this stage; the machines have been built and assigned, and the IPs are picked up. There we go, a machine has been created; see the event detail and the AWS ID for that machine. Now, speeding things up a little bit.
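Everything Mary fills in through the form ends up as declarative objects behind the scenes. Purely as an illustration, the cluster and machine definitions might be pictured like this; the schema below is a hypothetical sketch in the style of Kubernetes Cluster API objects, and the field names are assumptions rather than the product's documented API:

```yaml
kind: Cluster
metadata:
  name: mary-demo
  namespace: development          # Mary's project
spec:
  provider: aws
  region: us-west-1
  release: "5.7"                  # the validated component stack
  stacklight:
    enabled: true
    logging: true
---
kind: Machine                     # three managers, then two workers
metadata:
  generateName: mary-demo-manager-
spec:
  role: manager                   # or "worker"
  instanceType: m5.large          # the AWS instance size chosen in the UI
  ami: <region-specific Ubuntu AMI>  # must match the selected region
```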
This whole process, end to end, takes about fifteen minutes. Running the clock forward, you'll notice, as the machines continue to build, they go from "in progress" to "ready". As soon as we get "ready" on all three managers and both workers, we can move on, and we can see that we've now reached the point where the cluster itself is being configured. And there we go: the cluster has been deployed. Once the cluster's deployed, we can navigate around our environment. By clicking into configure cluster, we can modify the cluster. We can get the endpoints for Alertmanager. See here, the Grafana UI and Prometheus are still building in the background, but the cluster is available and you would be able to put workloads on it. The next step is to download the kubeconfig so that I can put workloads on it. It's again the three little dots on the right for that particular cluster. I hit download kubeconfig, give it my password, and I now have the kubeconfig file necessary so that I can access that cluster. All right, now that the build is fully completed, we can check out the cluster info, and we can see that all the StackLight components have been built, all the storage is there, and we have access to the UCP UI. So if we click into the cluster, we can access the UCP dashboard. We click the sign-in button to use the SSO and give Mary's password, the same name once again. This is an unlicensed cluster; we could license it at this point, or just skip it. And there we have the UCP dashboard. You can see that it has been up for a little while and we have some data on the dashboard. Going back to the console, we can now go to Grafana, the data having been automatically pre-configured for us. We can switch among and utilize a number of different dashboards that have already been instrumented within the cluster: so, for example, Kubernetes cluster information, the namespaces, deployments, nodes. So, we look at nodes.
Here we get a view of the resource utilization of this cluster; there's very little running in it. A general dashboard of the Kubernetes cluster. All of this is configurable. You can modify these for your own needs, or add your own dashboards, and they're scoped to the cluster, so they're available to all users who have access to this specific cluster. All right, to scale the cluster and add a node is as simple as the process of adding a node to the cluster in the first place. So we go to the cluster, go into the details for the cluster, and we select create machine. Once again, we need to ensure that we put in the correct AMI and any other options we'd like. You can create different-sized machines, so it could be a larger node, could be bigger disks, and you'll see that the worker has been added; it moves on from the provisioning state, and shortly we'll see the detail of that worker as it completes. To remove a node from a cluster, once again we go to the cluster, we select the node we'd like to remove, and just hit delete on that node. Worker nodes will be removed from the cluster using a cordon-and-drain method to ensure that your workloads are not affected. Updating a cluster: when an update is available, in the menu for that particular cluster the update button will become available, and it's as simple as clicking the button and validating which release you would like to update to. In this case, the next available release is five point seven point one. Here I'm kicking off the update. In the background, we will cordon and drain each node, slowly go through the process of updating it, and the update will complete, depending on what the update is, as quickly as possible. There we go, the nodes are being rebuilt. In this case it impacted the manager nodes, so one of the manager nodes is in the process of being rebuilt; in fact, two in this case, and one has completed already. In a few minutes we'll see that the upgrade has been completed. There we go, upgrade done.
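One detail from the demo worth making concrete: the downloaded kubeconfig is all you need to point standard tooling at the new child cluster, and the cordon-and-drain behavior used on node removal is plain Kubernetes practice. Sketched as a shell session (the kubeconfig file name is an assumption, and the node name is a placeholder):

```shell
# Point kubectl at the child cluster using the kubeconfig from the UI
export KUBECONFIG="$PWD/kubeconfig-mary-demo.yaml"
kubectl get nodes                              # the three managers and two workers

# What node removal does under the hood, in standard kubectl terms:
kubectl cordon <node-name>                     # stop scheduling new pods there
kubectl drain <node-name> --ignore-daemonsets  # then evict workloads gracefully
```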
If your workloads are built using proper cloud native community standards, there will be no impact. >> Excellent. So at this point, we've got a cluster ready to start taking our Kubernetes workloads; we could start deploying our apps to that cluster. So, watching that video, the thing that jumped out to me at first was, like, the inputs that go into defining this workload cluster. Right, so we have to make sure we're using an appropriate AMI, and that kind of defines the substrate of what we're gonna be deploying our cluster on top of. But there are very few requirements, as far as I could tell, on top of that AMI, because Docker Enterprise Container Cloud is gonna bootstrap all the components that you need. So all we have is kind of a really simple base box that we're deploying these things on top of. So, one thing that didn't get dug into too much in the video, but is sort of implied, and Bruce, maybe you can comment on this, is that release that Sean had to choose for his, uh, for his cluster in creating it. And that release was also the thing we had to touch when we wanted to upgrade that cluster. If you have really sharp eyes, you could see at the end there that when you're doing the release upgrade, it listed out a stack of components: Docker Engine, Kubernetes, Calico, all the different bits and pieces that go into, uh, go into one of these commodity clusters that we deploy. And so, as far as I can tell, that's what we mean by a release in this sense, right? It's the validated stack of containerization and orchestration components that, you know, we've tested out and made sure work well in production environments.
>> Yeah, and that's really the focus of our effort: to ensure that any CVEs in any part of the stack are taken care of, that the fixes are documented and upstreamed to the open source community, um, and that, you know, we then test for the scaling ability and the reliability in high availability configurations for the clusters themselves, the hosts of your containers, right? And I think one of the key, uh, you know, benefits that we provide is that ability to let you know, online, "Hi, we've got an update for you, and it fixes something that maybe you had asked us to fix." Uh, that all comes to you online as you're managing your clusters, so you don't have to think about it. It just comes as part of the product. >> You just have to click on "Yes, please give me that update." Uh, and it's not just the individual components; again, it's that validated stack, right? Not just, you know, components X, Y, and Z work, but they all work together effectively, scalably, securely, reliably. Cool. Um, yeah. So at that point, once we started creating that workload child cluster, of course, we bootstrapped good old Universal Control Plane, Docker Enterprise, on top of that. Sean had the classic comment there, you know: yeah, you'll see a few warnings and errors or whatever when you're setting up UCP. Don't panic, right? Just let it do its job, and it will converge all its components, you know, after just a minute or two. Now, we saw in that video that we sped things up a little bit, just so we didn't have to wait for, you know, progress spinners to complete. But really, in real life, that whole process isn't that long to spin up one of those clusters, so it's quite quick.
>> Yeah, and I think the thoroughness with which it goes through its process, and retries and retries, as, you know, was evident when we went through the initial, ah, video of the bootstrapping as well: the processes themselves are self-healing as they are going through. So they will try and retry and wait for the event to complete properly, and once it's completed properly, then it will go to the next step. >> Absolutely. And the worst thing you could do is panic at the first warning and start tearing things down; don't do that. Just let it heal, let it take care of itself. And that's the beauty of these managed solutions: they bake in a lot of subject matter expertise, right? The decisions that are getting made by those containers as they're bootstrapping themselves reflect the expertise of the Mirantis crew that has been developing this content and these tools for years and years now. One cool thing there that I really appreciated, actually, that it adds on top of Docker Enterprise, is that automatic Grafana deployment as well. So Docker Enterprise, I think everyone knows, has had, like, some very high-level statistics baked into its dashboard for years and years now. But you know, our customers always wanted to double-click on that, right? To be able to go a little bit deeper. And Grafana really addresses that with its built-in dashboards. That's what's really nice to see. >> Yeah, uh, and all of the alerts and, uh, data are actually captured in a Prometheus database underlying that, which you have access to, so that you are allowed to add new alerts that then go out to touch Slack and say, hi, you need to watch your disk space on this machine, or those kinds of things. Um, and this is especially helpful for folks who, you know, want to manage the application service layer but don't necessarily want to manage the operations side of the house.
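That converge-then-proceed behavior, probe a step, wait, retry until it is ready, and only then move on, is easy to picture as a tiny shell function. This is a generic sketch of the pattern, not the product's actual logic:

```shell
# Retry a command until it succeeds, up to a maximum number of attempts.
wait_until() {
  local retries=$1; shift
  local attempt=1
  until "$@"; do
    if [ "$attempt" -ge "$retries" ]; then
      echo "step never became ready" >&2
      return 1
    fi
    attempt=$((attempt + 1))
    sleep 0.01
  done
  echo "$attempt"
}

# Toy "component" that only reports ready on its third probe.
count=0
component_ready() { count=$((count + 1)); [ "$count" -ge 3 ]; }

attempts=$(wait_until 10 component_ready)
echo "$attempts"   # 3
```

The real installer layers many such loops, one per controller or machine, which is why transient warnings during bootstrap are expected and harmless.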
So it gives them a tool set where they can easily say, "Here, can you watch these for us?" And Mirantis can actually help do that with you. So >> yeah, yeah, I mean, that's just another example of baking in that expert knowledge, right? So you can leverage that without a long runway of learning how to do that sort of thing; you just get it out of the box right away. There was the other thing, actually, that you could slip by really quickly if you weren't paying close attention, but Sean mentioned it in the video, and that was how, when you use Docker Enterprise Container Cloud to scale your cluster, particularly pulling a worker out, it doesn't just, like, terminate the worker and forget about it, right? It's using good Kubernetes best practices to cordon and drain the node. So you aren't gonna disrupt your workloads; you're not going to have a bunch of containers instantly crash. You can really carefully manage the migration of workloads off that node. It's baked right in to how Docker Enterprise Container Cloud is handling cluster scale. >> Right. And the Kubernetes, uh, scaling methodology is adhered to, with all of the proper techniques that ensure that it will tell you: wait, you've got a container that actually needs three, uh, instances of itself, and you don't want to take that node out, because it means you'll only be able to have two. And we can't do that; we can't allow that. >> Okay, very cool. Further thoughts on this video, or should we go to the questions? >> Let's go to the questions that people have. Uh, there's one good one here, down near the bottom, regarding whether an API is available to do this. So in all these demos, we're clicking through this web UI. Yes, this is all API-driven. You could do all of this, you know, automate all this away as part of a CI/CD chain. Absolutely. Um, that's kind of the point, right? We want you to be able to spin up.
I keep calling them commodity clusters. What I mean by that is clusters that you can create and throw away, easily and automatically. So everything you see in these demos is exposed via API. >> Yeah, and in addition, through the standard kubectl CLI as well. So if you're not a programmer but you still want to do some scripting to, you know, set things up and deploy your applications, you can use the standard tool sets that are available to accomplish that. >> There is a good question on scale here: just how many clusters, and what sort of scale of deployments, can this kind of thing support? Our engineers report that in practice we've done up to as many as two hundred clusters, and we've deployed on this with two hundred fifty nodes in a cluster. So, like I said, hundreds of nodes, hundreds of clusters managed by Docker Enterprise Container Cloud. And those downstream clusters are, of course, subject to the usual constraints for Kubernetes, right? Like default constraints of something like one hundred pods per node, or something like that. There are a few different limitations on how many pods you can run on a given cluster that come to us not from Docker Enterprise Container Cloud, but just from the underlying Kubernetes distribution. >> Yeah, I mean, I don't think we constrain any of the capabilities that are available in the infrastructure we deliver as a service within the Kubernetes framework. But we are adhering to the standards that we would want to set to make sure that we're not overloading a node, those kinds of things. >> Right, absolutely. Cool. All right, so at this point we've got kind of a two-layered architecture: we have our management cluster, which we deployed in the first video.
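The scale-down safeguards discussed a moment ago, cordoning and draining a node in order, and refusing a removal that would drop an app below its required replica count, can be sketched as a toy model. This is illustrative Python, deliberately simplified (it assumes evicted pods are not rescheduled elsewhere), not the product's or Kubernetes' actual logic:

```python
def scale_down_worker(node, pods_by_node, required_replicas):
    """Toy model of graceful worker removal: refuse if evicting this node's
    pods would drop any app below its required replica count; otherwise
    cordon, drain, then delete, in that order."""
    surviving = {}
    for other, pods in pods_by_node.items():
        if other != node:
            for app in pods:
                surviving[app] = surviving.get(app, 0) + 1
    for app, need in required_replicas.items():
        if surviving.get(app, 0) < need:
            return ["refused"]       # the three-replicas-but-only-two-nodes-left case
    steps = ["cordon", "drain", "delete"]  # stop new scheduling before evicting
    del pods_by_node[node]
    return steps

pods = {"n1": ["api"], "n2": ["api"], "n3": ["api", "batch"]}
print(scale_down_worker("n3", pods, {"api": 3}))  # → ['refused']
print(scale_down_worker("n2", pods, {"api": 2}))  # → ['cordon', 'drain', 'delete']
```

The ordering is the important part: cordon first so nothing new lands on the node, drain to migrate workloads, and only then remove the machine.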
Then we used that to deploy one child cluster to run workloads on. Now, for more sophisticated deployments where we might want to manage child clusters across multiple regions, we're going to add another layer into our architecture: regional cluster management. So the idea is you'll have the single management cluster that we started with in the first video, and in the next video we're going to learn how to spin up regional clusters, each one of which could manage, for example, a different AWS region. So let me just pull up the video for that, Bill, and we'll check it out. >> Hello. In this demo we will cover the deployment of an additional regional management cluster. We'll include a brief architectural overview, how to set up the management environment, preparation for the deployment, a deployment overview, and then, just to prove it, deploying a regional child cluster. So, looking at the overall architecture: the management cluster provides all the core functionality, including identity management, authentication, inventory, and release versioning. The regional cluster provides the specific architecture provider, in this case AWS, and the LCM components on the UCP cluster. The child cluster is the cluster or clusters being deployed and managed. Okay, so why do you need a regional cluster? To support different platform architectures, for example AWS, OpenStack, even bare metal; to simplify connectivity across multiple regions; to handle complexities like VPNs or one-way connectivity through firewalls; but also to help clarify availability zones. Here we have a view of the regional cluster and how it connects to the management cluster, along with their components, including items like the LCM Cluster Manager and the Machine Manager, where Helm bundles are managed, as well as the actual provider logic. Okay, we'll begin by logging in as the default administrative user, writer.
Okay, once we're in there, we'll have a look at the available clusters, making sure we switch to the default project, which contains the administration clusters. Here we can see the KaaS management cluster, which is the master controller, and you see it only has three nodes: three managers, no workers. Okay, if we look at another regional cluster, similar to what we're going to deploy now, it also only has three managers, once again no workers. But as a comparison, here's a child cluster: this one has three managers, but also has additional workers associated to the cluster. All right, we need to connect to the bootstrap node, preferably the same node that was used to create the original management cluster; in this case it's just a machine on AWS. All right, a few things we have to do to make sure the environment is ready. First thing, we go into root. We'll go into our releases folder, where we have the kaas-bootstrap folder; this was the original bootstrap folder used to build the original management cluster. We're going to double-check to make sure our kubeconfig is there, once again the one created after the original cluster was created, and just double-check that the kubeconfig is the correct one and does point to the management cluster. We're also checking to make sure that we can reach the image repositories and that everything is working and accessible. Next we're going to edit the machine definitions. What we're doing here is ensuring that for this cluster we have the right machine definitions, including items like the AMI; those are found under the templates/aws directory. We don't need to edit anything else here, but we could change items like the size of the machines, the instance types we want to use. The key item to ensure you change is the AMI reference, so that the Ubuntu image is the one for the region, in this case the AWS region, we're utilizing. If this were an OpenStack deployment,
we would have to make sure we're pointing at the correct OpenStack images. Okay, set the correct AMI and save the file. Now we need to set up credentials again. When we originally created the bootstrap cluster, we got credentials from AWS; if we hadn't done this, we would need to go through the AWS setup. So we're just exporting the AWS access key and ID. What's important is KAAS_AWS_ENABLED equals true. Now we're setting the region for the new regional cluster, in this case Frankfurt, and exporting the kubeconfig that we want to use for the management cluster, which we looked at earlier. Now we're exporting the name we want to call the cluster; the region is Frankfurt, so we use Frankfurt in the name, trying to use something descriptive that's easy to identify. And then after this we'll just run the bootstrap script, which will complete the deployment for us. Bootstrap of the regional cluster is quite a bit quicker than the initial management cluster's, as there are fewer components to be deployed, but to make it watchable we've sped it up. So we're preparing our bootstrap cluster on the local bootstrap node. Almost ready. We've started preparing the instances at AWS and are waiting for the bastion node to get started. There's the bastion node, and we're also starting to build the actual management machines; they're now provisioning. We've reached the point where they're actually starting to deploy Docker Enterprise; this is probably the longest phase. You'll see in a second that all the nodes go through their stages, from Deploy to Prepared, and you'll see their status change as it updates. There's the first node ready, the second applying, the second ready. Now we're waiting for the control plane to become ready, then moving the management cluster from the bootstrap instance into the new cluster running in AWS. Almost there. Now we're deploying StackLight. The switchover is done. And done.
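The exports the narrator performs before running the regional bootstrap script can be sketched as a small helper. KAAS_AWS_ENABLED comes straight from the demo; the other variable names here are illustrative stand-ins, not necessarily the exact ones the product reads:

```python
def regional_bootstrap_env(region, kubeconfig_path, cluster_name):
    """Gather the environment exported before running the regional bootstrap.
    Only KAAS_AWS_ENABLED is taken from the demo; other names are
    hypothetical placeholders for illustration."""
    env = {
        "KAAS_AWS_ENABLED": "true",     # enable the AWS provider
        "AWS_REGION": region,            # e.g. Frankfurt is eu-central-1
        "KUBECONFIG": kubeconfig_path,   # points at the management cluster
        "CLUSTER_NAME": cluster_name,    # something descriptive, easy to identify
    }
    missing = [k for k, v in env.items() if not v]
    if missing:
        raise ValueError(f"unset variables: {missing}")
    return env

env = regional_bootstrap_env("eu-central-1", "kaas-bootstrap/kubeconfig",
                             "region-frankfurt")
print(env["KAAS_AWS_ENABLED"])  # → true
```

The validation step mirrors the demo's habit of double-checking every prerequisite (kubeconfig, credentials, region) before kicking off the long-running script.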
Now I will build a child cluster in the new region, very quickly. To define the cluster, we pick our new credential, which has shown up; we'll just call the cluster Frankfurt for simplicity, attach a key, and define the machines for the cluster, starting with three managers. Set the correct AMI for the region, then do the same to add workers. There we go, the build has started. Total build time should be about fifteen minutes; you can see it's in progress, and we're going to speed this up a little bit. Check the events: we've created all the dependencies and machine instances, and the machines will be up shortly. We should soon have a working cluster in the Frankfurt region. Now almost done: one node is ready, two in progress. And we're done; the cluster's up and running. >> Excellent. So at this point we've now got that three-tier structure that we talked about before the video. We've got the management cluster that we bootstrapped in the first video, and now we have, in this example, two different regional clusters, one in Frankfurt and one where the management cluster lives, in two different AWS regions. And sitting on those, you can bootstrap up all the Docker Enterprise clusters that we want for our workloads. >> Yeah, that's the key to this: being able to have the management co-resident with your actual application-service-enabled clusters, so that you can quickly access the observability services, like Grafana and that sort of thing, for your particular region, as opposed to having to log back into the... what did you call it when we started? >> The mothership? >> The mothership, right. So we don't have to go back to the mothership; we can get >> it locally. Yeah, and to that point of aggregating things under a single pane of glass: that's one thing that again kind of sailed by in the demo really quickly, but you'll notice all your different clusters were in that same single pane in your Docker Enterprise Container Cloud management
console, right? So both your child clusters for running workloads and your regional clusters for bootstrapping those child clusters were all listed in the same place there. So it's just one pane of glass to go look at for all of your clusters. >> Right, and this is kind of an important point I was realizing as we were going through this: all of the mechanics are actually identical between the bootstrapped cluster of the original services and the bootstrapped cluster of the regional services. It's the management layer of everything, so you only have managers, you don't have workers; and at the child cluster layer, below the regional or the management cluster itself, that's where you have the worker nodes. Those are the ones that host the application services in the three-tiered architecture that we've now defined. >> And another detail for those that have sharp eyes: in that video, you'll notice when deploying a child cluster there's not only a minimum of three managers for high availability, you must also have at least two workers. That's just required for workload failover: if one of those goes down and is out of work, the other can potentially step in. So your minimum footprint for one of these child clusters is five nodes, and it's scalable, obviously, from there. >> That's right. >> Let's take a quick peek at the questions here, see if there's anything we want to call out, and then we'll move on to our last video. There's another question here about where these clusters can live. So again, I know these examples are very AWS-heavy; honestly, it's just easy to set up demos in AWS. We can do things on bare metal and OpenStack deployments on-prem, and all of this still works in exactly the same way. >> Yeah, the key to this, especially for the child clusters, is the provisioners, right?
You establish an AWS provisioner, or you establish a bare metal provisioner, or you establish an OpenStack provisioner, and eventually that list will include all of the other major players in the cloud arena. By selecting the provisioner within your management interface, that's where you decide where it's going to be hosted, where the child cluster is to be hosted. >> Speaking of all the flavors of child clusters, let's jump into our last video in the series, where we'll see how to spin up a child cluster on bare metal. >> Hello. This demo will cover the process of defining bare metal hosts and then review the steps of defining and deploying a bare-metal-based Docker Enterprise cluster. So why bare metal? Firstly, it eliminates hypervisor overhead, with performance boosts of up to thirty percent; it provides direct access to GPUs, prioritized for high-performance workloads like machine learning and AI; and it supports high-performance workloads like network functions virtualization. It also provides a focus on on-prem workloads, simplifying things and ensuring we don't need to create the complexity of adding another hypervisor layer in between. So, continuing on the theme of why Kubernetes on bare metal: again, no hypervisor overhead, no virtualization overhead; direct access to hardware items like FPGAs and GPUs; we can be much more specific about the resources required on the nodes, with no need to cater for additional overhead; we can handle utilization in the scheduling better; and we increase the performance and simplicity of the entire environment, as we don't need another virtualization layer. In this section we'll define the bare metal hosts: we'll create a new project, then add the bare metal hosts, including the host name, the IPMI credentials, the IPMI IP address, and the MAC address, and then provide a machine-type label to determine what type of machine it is for later use. Okay, let's get started. So, once again logged in as the operator,
we'll go and create a project for our machines to be a member of; this helps with scoping later on, for security. Then I begin the process of adding machines to that project. So the first thing is to add a bare metal host: give the machine a name, anything you want; provide the IPMI username and type the password; then the MAC address for the boot interface and the IPMI IP address. These machines will in time be storage, worker, or manager nodes; this one's a manager. We're going to add a number of other machines, and we'll speed this up just so you can see what the process looks like. In the future, better discovery will be added to the product. Okay, getting back, there we have it: our six machines have been added and are busy being inspected and added to the system. Let's have a look at the details of a single node. You can see information on the setup of the node and its capabilities, as well as the inventory information about that particular machine. Okay, let's go and create the cluster. So we're going to deploy a bare metal child cluster; the process we're going to go through is pretty much the same as for any other child cluster. We'll create the cluster, give it a name, select bare metal as the provider along with the region, and select the version we want to deploy. We're going to add the SSH keys, give the load balancer host IP that we'd like to use out of the address range, and update the address range that we want to use for the cluster. Check that the CIDR blocks for Kubernetes and the tunnels are what we want them to be. Enable or disable StackLight, and set the StackLight settings, to finish defining the cluster. And then, as for any other cluster, we need to add machines to it. Here we're focused on building Kubernetes clusters, so we're going to put in the count of machines we want as managers.
We're going to pick the label type Manager and create three machines as the managers for the Kubernetes cluster. Okay, then we add workers through the same process, just making sure to pick the worker label at the host level this time. Then we wait for the machines to deploy, going through the process of putting the operating system on the nodes, validating the operating system, deploying Docker Enterprise, and making sure the cluster is up and running and ready to go. Okay, let's review the build events. We can see the machine info now populated with more information about specifics, things like storage and, of course, details of the cluster, etcetera. We can now watch the machines go through the various stages, from Prepared to Deployed, as the cluster builds. And that brings us to the end of this particular demo. You can see the process is identical to that of building a normal child cluster, and our deployment is complete. >> All right, so there we have it: deploying a cluster to bare metal, much the same as how we did it for AWS. I guess maybe the biggest difference step-wise there is that registration phase first, right? So rather than just using AWS credentials to magically create VMs in the cloud, you've got to point all your bare metal servers out to Docker Enterprise Container Cloud, and they really come in, I guess, three profiles, right? You've got your manager profile, your worker profile, and your storage profile, which get labeled and allocated across the cluster as appropriate.
So you can, uh, ensure that the SSD configuration on the storage nodes is gonna be taken advantage of in the best way the GP use on the worker nodes and and that the management layer is going to have sufficient horsepower to, um, spin up to to scale up the the environments, as required. One of the things I wanted to mention, though, um, if I could get this out without the choking much better. Um, is that Ah, hey, mentioned the load balancer and I wanted to make sure in defining the load balancer and the load balancer ranges. Um, that is for the top of the the cluster itself. That's the operations of the management, uh, layer integrating with your systems internally to be able to access the the Cube Can figs. I I p address the, uh, in a centralized way. It's not the load balancer that's working within the kubernetes cluster that you are deploying. That's still cube proxy or service mesh, or however you're intending to do it. So, um, it's kind of an interesting step that your initial step in building this, um and we typically use things like metal L B or in gen X or that kind of thing is to establish that before we deploy this bear mental cluster so that it can ride on top of that for the tips and things. >>Very cool. So any other thoughts on what we've seen so far today? Bruce, we've gone through all the different layers. Doctor enterprise container clouds in these videos from our management are regional to our clusters on aws hand bear amount, Of course, with his dad is still available. Closing thoughts before we take just a very short break and run through these demos again. >>You know, I've been very exciting. Ah, doing the presentation with you. I'm really looking forward to doing it the second time, so that we because we've got a good rhythm going about this kind of thing. So I'm looking forward to doing that. 
But I think the key element of what we're trying to convey to the folks out there in the audience, and I hope you've gotten it out of this, is that this is an easy enough process that if you follow the steps, going through the documentation that's been put out in the chat, you'll be able to give it a go yourself. And you don't have to limit yourself to having physical hardware on-prem to try it; you can do it in AWS, as we've shown you today. And if you've got some fancy use cases, like you need Hadoop, and, you know, cloud-oriented AI stuff, then providing bare metal as a service helps you get there very fast. So, right. Thank you; it's been a pleasure. >> Yeah, thanks everyone for coming out. So like I said, we're going to take a very short, like, three-minute break here. Take the opportunity to let your colleagues know, if they were in another session or they didn't quite make it to the beginning of this session, or if you just want to see these demos again: we're going to kick off this demo series again in just three minutes, at ten twenty-five a.m. Pacific time, where we will see all this great stuff again. Let's take a three-minute break; I'll see you all back here shortly. Okay, folks, that's the end of our extremely short break. We'll give people maybe one more minute to trickle in, if folks are interested in coming on in and jumping into our demo series again. So, for those of you that are just joining us now: I'm Bill Mills, I head up curriculum development for the training team here at Mirantis, and joining me for this session of demos is Bruce. Why don't you go ahead and introduce yourself, Bruce... who is still on break. That's cool, we'll give Bruce a minute or two to get back while everyone else trickles back in. There he is. Hello, Bruce. >> How'd that go for you? Okay? >> Very well. So let's kick off our second session here. Let me just get the video feed set up for you.
We'll let it run over here. >> All right. Hi, Bruce Matthews here. I'm the Western Regional Solutions Architect for Mirantis USA. I'm the one with the gray hair and the glasses; the handsome one is Bill. So, Bill, take it away. >> Excellent. So over the next hour or so we've got a series of demos that's going to walk you through your first steps with Docker Enterprise Container Cloud. Docker Enterprise Container Cloud is, of course, Mirantis' brand new offering for bootstrapping Kubernetes clusters in AWS, bare metal, and OpenStack, with more providers in the very near future. So we've got just over an hour left together in this session; if you joined us at the top of the hour, back at nine a.m. Pacific, we went through these demos once already, but let's do them again for everyone that was only able to jump in right now. Let's go to our first video, where we're going to install Docker Enterprise Container Cloud for the very first time and use it to bootstrap a management cluster. The management cluster, as I like to describe it, is our mothership that's going to spin up all the other Kubernetes clusters, the Docker Enterprise clusters that we're going to run our workloads on. So here we go. >> I'm so excited, I can hardly wait. >> Let's do it. All right, let me share my video out here. Yeah, let's do it. >> Good day. The focus for this demo will be the initial bootstrap of the management cluster and the first regional cluster to support AWS deployments. The management cluster provides the core functionality, including identity management, authentication, inventory, and release versioning. The regional cluster provides the specific architecture provider, in this case AWS, and the LCM components on the UCP cluster. The child cluster is the cluster or clusters being deployed and managed. The deployment is broken up into five phases. The first phase is preparing a bootstrap node and its dependencies, and handling the download of the bootstrap tools.
The second phase is obtaining the Mirantis license file. The third phase is preparing the AWS credentials and setting up the AWS environment, the fourth is configuring the deployment, defining things like the machine types, and the fifth phase is running the bootstrap script and waiting for the deployment to complete. Okay, so here we're setting up the bootstrap node, just checking that it's clean and clear and ready to go: no credentials already set up on that particular node. Now we're just checking through AWS to make sure that for the account we want to use, we have the correct credentials and the correct roles set up, and validating that there are no instances currently running in EC2. Not completely necessary, but it just helps keep things clean and tidy from an IAM perspective. Right, so next step: we're just going to check that, from the bootstrap node, we can reach Mirantis and get to the repositories where the various components of the system are available. Good, no errors here. Right, now we're going to start setting up the bootstrap node itself. So we're downloading the KaaS release via its get script, and then we're going to run it. Once that's deployed, we change into the kaas-bootstrap folder, just having a look at what's there. Right now we have no license file, so we're going to get the license file. We get the license file through the Mirantis downloads site, signing in here, downloading that license file, and putting it into the kaas-bootstrap folder. Okay, since we've done that, we can now go ahead with the rest of the deployment. See what the files are there. Once again, checking that we can reach EC2, which is extremely important for the deployment; these are just validation steps as we move through the process. All right, the next big step is validating all of our AWS credentials. So the first thing is, we need those root credentials, which we're going to export on the command line.
This is to create the necessary bootstrap user and AWS credentials for the completion of the deployment. We're now running the AWS policy creation; part of that is our bootstrap script creating the policy files in AWS, generally preparing the environment using a CloudFormation script, as you'll see in a second. It gives us the new policy confirmations; we're just waiting for it to complete, and there, it's done. Let's have a look at the AWS console: you can see that the creation completed. Now we can go and get the credentials that we created. Go to the IAM console, go to the new user that's been created, go to the section on security credentials, and create new keys. Download that information, namely the access key ID and the secret access key, which will then be exported on the command line. Okay, a couple of things to note: ensure that you're using the correct AWS region, and ensure that in the config file you put in the correct AMI for that region. You'll see we have it together in a second. Okay, there's the access key and the secret access key. Right, let's kick it off. So this process takes between thirty and forty-five minutes and handles all the AWS dependencies for you. As we go through, the process will show you how you can track it, and you'll start to see things like the running instances being created on the AWS side. The first phase of this whole process, happening in the background, is the creation of a local kind-based bootstrap cluster on the bootstrap node. That cluster is then used to deploy and manage all the various instances and configurations within AWS. At the end of the process, that cluster is copied into the new cluster on AWS, and then the local cluster is shut down, essentially moving itself over. Okay, the local cluster's booted; we're just waiting for the various objects to get ready, standard Kubernetes objects here. We've sped up this process a little bit just for demonstration purposes.
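The narrator's warning that the region and the AMI in the config file must agree can be made concrete with a tiny check. The AMI IDs below are hypothetical placeholders (real IDs are region-specific and change over time); this is an illustration of the consistency rule, not real lookup data:

```python
# Hypothetical region-to-AMI table; real AMI IDs differ and change over time.
UBUNTU_AMI_BY_REGION = {
    "us-west-1": "ami-0aaa1111aaa1111aa",
    "eu-central-1": "ami-0bbb2222bbb2222bb",
}

def ami_matches_region(region, ami):
    """The config check the narrator stresses: an AMI ID is only valid in
    the region it was published for, so region and AMI must agree."""
    return UBUNTU_AMI_BY_REGION.get(region) == ami

print(ami_matches_region("us-west-1", "ami-0aaa1111aaa1111aa"))  # → True
print(ami_matches_region("us-west-1", "ami-0bbb2222bbb2222bb"))  # → False
```

Running a check like this before kicking off a thirty-to-forty-five-minute bootstrap saves a late failure when a machine can't be created from a foreign-region image.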
Okay, there we go. So the first node being built is the bastion host, just a jump box that will allow us access to the entire environment. In a few seconds we'll see those instances here in the AWS console on the right. The failures that you're seeing, around "failed to get the IP for bastion", are just the wait state while we wait for AWS to create the instance. Okay, there we go: the bastion host has been built, and the three instances for the management cluster have now been created. We're going through the process of preparing those nodes and are now copying everything over. See that scaling up of controllers in the bootstrap cluster? It's indicating that we're starting all of the controllers in the new cluster. Almost there. Okay, just waiting for Keycloak to finish up. Now we're shutting down the controllers on the local bootstrap node and preparing our OIDC configuration for authentication. Once this is completed, the last phase will be to deploy StackLight, the logging and monitoring tool set, into the new cluster. There we go, the StackLight deployment has started. Coming to the end of the deployment now; the final phase of the deployment, and we are done. You'll see at the end they provide us the details of the UI login. So there's the Keycloak login; you can modify that initial default password as part of the configuration setup, as covered in the documentation. There we go, the console's up and we can log in. Thank you very much for watching. >> All right, so at this point what we have is our management cluster spun up, ready to start creating work clusters. Just a couple of points to clarify there, to make sure everyone caught that: as advertised, that's the Docker Enterprise Container Cloud management cluster. That's not where your workloads are going to go, right?
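The five bootstrap phases the video narrates can be summarized as an ordered checklist. This is a toy harness for thinking about the sequence, not anything the product ships:

```python
# The five bootstrap phases from the demo, as a simple ordered checklist.
PHASES = [
    "prepare the bootstrap node and download the bootstrap tools",
    "obtain the Mirantis license file",
    "prepare AWS credentials and the AWS environment",
    "configure the deployment (machine types, AMI, region)",
    "run the bootstrap script and wait for completion",
]

def run_phases(execute):
    """Run each phase in order, stopping at the first failure so the earlier
    phases are known-good before the long-running final step."""
    completed = []
    for phase in PHASES:
        if not execute(phase):
            return completed, phase      # (done so far, phase that failed)
        completed.append(phase)
    return completed, None

done, failed = run_phases(lambda p: "license" not in p)
print(failed)  # → obtain the Mirantis license file
```

The fail-fast ordering reflects the demo itself: every cheap validation (credentials, connectivity, config) happens before the bootstrap script that takes half an hour or more.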
That is the tool you're going to use to start spinning up those downstream commodity Docker Enterprise clusters for your workloads to run on. >> And the seed host that we were talking about, the kind cluster there, actually doesn't have to exist after the bootstrap succeeds. So it sort of copies itself from the seed host to the targets in AWS, spins them up, then boots the actual cluster, and then it goes away, because it's no longer necessary. >> So for that bootstrapping node there aren't really any requirements, hardly, right? It just has to be able to reach AWS, hit that API to spin up those EC2 instances, because, as you just said, it's just a Kubernetes-in-Docker cluster, and that bootstrap node is just going to get torn down after the setup finishes, and you no longer need it. Everything you're going to do, you're going to drive from the single pane of glass provided to you by your management cluster, Docker Enterprise Container Cloud. Another thing that I think is sort of interesting there is that the config is fairly minimal. Really, you just need to provide it things like the AWS region and the AMI, and that's what it's going to use to spin up that management cluster.
That's the very first thing you're going to do to set up Dr Enterprise Container Cloud. You're going to do it. Hopefully exactly once. Right now, you've got your management cluster running, and they're gonna use that to spend up all your other work clusters Day today has has needed How do we just have a quick look at the questions and then lets take a look at spinning up some of those child clusters. >>Okay, e think they've actually been answered? >>Yeah, for the most part. One thing I'll point out that came up again in the Dail, helpfully pointed out earlier in surgery, pointed out again, is that if you want to try any of the stuff yourself, it's all of the dogs. And so have a look at the chat. There's a links to instructions, so step by step instructions to do each and every thing we're doing here today yourself. I really encourage you to do that. Taking this out for a drive on your own really helps internalizing communicate these ideas after the after launch pad today, Please give this stuff try on your machines. Okay, So at this point, like I said, we've got our management cluster. We're not gonna run workloads there that we're going to start creating child clusters. That's where all of our work and we're gonna go. That's what we're gonna learn how to do in our next video. Cue that up for us. >>I so love Shawn's voice. >>Wasn't that all day? >>Yeah, I watched him read the phone book. >>All right, here we go. Let's now that we have our management cluster set up, let's create a first child work cluster. >>Hello. In this demo, we will cover the deployment experience of creating a new child cluster the scaling of the cluster on how to update the cluster. When a new version is available, we begin the process by logging onto the you I as a normal user called Mary. Let's go through the navigation of the u I. So you can switch Project Mary only has access to development. Uh huh. Get a list of the available projects that you have access to. 
You can see what clusters have been deployed at the moment; the SSH keys associated with Mary and her team; the cloud credentials that allow you to create or access the various clouds that you can deploy clusters to; and finally the different releases that are available to us. We can switch from dark mode to light mode, depending on your preferences. Right, let's now set up some SSH keys for Mary so she can access the nodes and machines. Again, very simply: add an SSH key, give it a name, and copy and paste our public key into the upload key block, or we can upload the key if we have the file available on our machine. A very simple process. To create a new cluster, we define the cluster, add management nodes, and add worker nodes to the cluster. Again, very simply, we go to the clusters tab and hit the create cluster button. Give the cluster a name and select the provider. We only have access to AWS in this particular deployment, so we'll stick to AWS. We select the region, in this case US West 1. Release version 5.7 is the current release, and we attach Mary's key as the SSH key. We can then check the rest of the settings, confirming the provider and any Kubernetes CIDR or IP address information. We can change this should we wish to; we'll leave it default for now. And then, what components of StackLight would I like to deploy into my cluster? For this I'm enabling StackLight and logging, and I can set the retention sizes and retention times and, even at this stage, add any custom alerts for the watchdogs, and consider email alerting, for which I will need my smart host details and authentication details, and Slack alerts. Now I'm defining the cluster. All that's happened is the cluster has been defined; I now need to add machines to that cluster. I'll begin by clicking the create machine button within the cluster definition. I select manager and select the number of machines; three is the minimum.
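The flow just described — define the cluster, then attach machines, with three managers as the enforced floor — can be modeled in a few lines. The class and field names here are illustrative sketches, not the product's actual API:

```python
class ClusterSpec:
    """Toy model of the define-cluster-then-add-machines flow in the demo."""
    MIN_MANAGERS = 3  # the UI enforces three managers as the minimum

    def __init__(self, name, provider, region, release):
        self.name, self.provider = name, provider
        self.region, self.release = region, release
        self.machines = []

    def add_machines(self, role, count, ami):
        self.machines += [{"role": role, "ami": ami} for _ in range(count)]

    def validate(self):
        managers = sum(m["role"] == "manager" for m in self.machines)
        if managers < self.MIN_MANAGERS:
            raise ValueError("at least three manager machines are required")
        if any(not m["ami"] for m in self.machines):
            raise ValueError("every machine needs the right AMI for its region")
        return True

spec = ClusterSpec("demo", "aws", "us-west-1", "5.7")
spec.add_machines("manager", 3, ami="ami-0abcd1234")  # hypothetical AMI id
spec.add_machines("worker", 2, ami="ami-0abcd1234")
```

Defining the cluster and adding its machines are two separate steps, exactly as in the demo: the cluster record exists first, then machines are attached to it.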
I select the instance size that I'd like to use from AWS and, very importantly, ensure I use the correct AMI for the region. I can then decide on the root device size. There we go; my three machines are busy creating. I now need to add some workers to this cluster, so I go through the same process, this time just selecting worker. I'll just add two. Once again, the AMI is extremely important; the build will fail if we don't pick the right AMI for an Ubuntu machine, in this case. And the deployment has started. We can go and check on the build status by going back to the clusters screen and clicking on the little three dots on the right. We get the cluster info and the events. In the basic cluster info you'll see "pending" listed while the cluster is still in the process of being built. If we click on the events, we'll get a list of actions that have been completed as part of the setup of the cluster. So you can see here we've created the VPC, we've created the subnets, and we've created the Internet gateway and the necessary NAT gateways, and we have no warnings at this stage. Okay, this will then run for a while. We are one minute in; we can click through and check the status of the machine builds as individuals. We can check the machine info, details of the machines that we've assigned, and see any events pertaining to each machine; entries like this one are normal. At this stage the Kubernetes components are just waiting for the machines to start. Go back to clusters. Okay, right, we're moving ahead now; we can see we have it in progress. Five minutes in, a new NAT gateway, and at this stage the machines have been built and assigned, and they pick up their IPs from AWS. There we go, a machine has been created; we can see the event detail and the AWS ID for that machine. Now, speeding things up a little bit: this whole process, end to end, takes about fifteen minutes. As we run the clock forward, you'll notice the machines continue to build in progress.
We'll go from in progress to ready. As soon as we're ready on all three managers and both workers, we can carry on, and we can see that now we've reached the point where the cluster itself is being configured. And there we go: the cluster has been deployed. Once the cluster is deployed, we can navigate around our environment. Looking into the configured cluster, we can modify the cluster, and we can get the endpoints for Alertmanager. You can see here that Grafana and Prometheus are still building in the background, but the cluster is available and you would be able to put workloads on it at this stage. To download the kubeconfig so that I can put workloads on it, it's again the three little dots on the right for that particular cluster. I hit download kubeconfig, give it my password, and I now have the kubeconfig file necessary to access that cluster. All right, now that the build is fully completed, we can check out the cluster info, and we can see that all the StackLight components have been built, all the storage is there, and we have access to the UCP UI. So if we click into the cluster, we can access the UCP dashboard. Click the sign-in button to use the SSO, and we give Mary's password and user name once again. This is an unlicensed cluster; we could license it at this point, or just skip it. And there we have the UCP dashboard. You can see that it has been up for a little while, and we have some data on the dashboard. Going back to the console, we can now go to Grafana. It has just been automatically pre-configured for us, and we can switch between and utilize a number of different dashboards that have already been instrumented within the cluster: for example, Kubernetes cluster information, the namespaces, deployments, nodes. If we look at nodes, we can get a view of the resource utilization of this cluster; there's very little running in it. And here's a general dashboard of a Kubernetes cluster.
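The downloaded kubeconfig is a standard one, so any Kubernetes tooling can read it. A minimal sketch of pulling the API endpoint out of a parsed kubeconfig follows; the dictionary shape matches the standard kubeconfig format, but the names and endpoint value are made up for illustration:

```python
def cluster_endpoint(kubeconfig, context_name=None):
    """Return the API server URL for a context in a parsed kubeconfig."""
    context_name = context_name or kubeconfig["current-context"]
    ctx = next(c for c in kubeconfig["contexts"] if c["name"] == context_name)
    cluster_name = ctx["context"]["cluster"]
    cluster = next(c for c in kubeconfig["clusters"] if c["name"] == cluster_name)
    return cluster["cluster"]["server"]

# Shape follows the standard kubeconfig format; values are invented.
kubeconfig = {
    "current-context": "mary@demo",
    "contexts": [{"name": "mary@demo", "context": {"cluster": "demo"}}],
    "clusters": [{"name": "demo",
                  "cluster": {"server": "https://203.0.113.10:443"}}],
}
```

In practice you would simply point `kubectl --kubeconfig` at the downloaded file; the sketch just shows that it is ordinary, inspectable data.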
All of this is configurable: you can modify these for your own needs, or add your own dashboards, and they're scoped to the cluster, so they're available to all users who have access to this specific cluster. All right, to scale the cluster and add a node, it's as simple as the process of adding a node to the cluster in the first place. So we go to the cluster, go into the details for the cluster, and we select create machine. Once again, we need to ensure that we put the correct AMI in, plus any other options we like. You can create different sized machines, so it could be a larger node, or one with bigger root disks, and you'll see that the worker has been added in the provisioning state. Shortly, we will see the detail of that worker as it completes. To remove a node from a cluster, once again we go to the cluster, select the node we would like to remove, and just hit delete on that node. Worker nodes will be removed from the cluster using a cordon-and-drain method to ensure that your workloads are not affected. Updating a cluster: when an update is available, the update button will appear in the menu for that particular cluster, and it's as simple as clicking the button and validating which release you would like to update to. In this case, the available release is 5.7.1. We confirm, kicking off the update, and in the background it will cordon and drain each node, slowly go through the process of updating it, and the update will complete, depending on what the update is, as quickly as possible. There we go: the nodes are being rebuilt. In this case it impacted the manager nodes, so one of the manager nodes is in the process of being rebuilt; in fact two in this case, and one has completed already. And in a few minutes we'll see that the upgrade has been completed. There we go, upgrade done. If your workloads are built using proper cloud native Kubernetes standards, there will be no impact. >> All right, there we have it.
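The cordon-and-drain rolling update described in the demo can be sketched as a toy model: make a node unschedulable, evict its workloads onto the remaining schedulable nodes, rebuild it on the new release, then make it schedulable again. This is an illustration of the technique, not the actual upgrade controller:

```python
def reschedule(pods, candidates):
    """Spread evicted pods across the still-schedulable nodes."""
    for i, pod in enumerate(pods):
        if candidates:
            candidates[i % len(candidates)].setdefault("pods", []).append(pod)

def rolling_update(nodes, target_release):
    """Upgrade nodes one at a time: cordon, drain, rebuild, uncordon."""
    for node in nodes:
        node["schedulable"] = False           # cordon: no new pods land here
        evicted = node.pop("pods", [])        # drain: evict workloads gracefully
        reschedule(evicted, [n for n in nodes if n.get("schedulable", True)])
        node["release"] = target_release      # rebuild the node on the new release
        node["schedulable"] = True            # uncordon: ready for work again
    return nodes

nodes = [
    {"name": "mgr-0", "schedulable": True, "pods": ["web"], "release": "5.7"},
    {"name": "mgr-1", "schedulable": True, "pods": [], "release": "5.7"},
]
upgraded = rolling_update(nodes, "5.7.1")
```

Because only one node is ever cordoned at a time, properly replicated workloads keep running throughout, which is exactly the "no impact" behavior the demo calls out.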
We've got our first workload cluster spun up and managed by Docker Enterprise Container Cloud. I loved Sean's classic warning there: when you're spinning up an actual Docker Enterprise deployment and you see little errors and warnings popping up, just don't touch it. Leave it alone and let Docker Enterprise's self-healing properties take care of all those very transient, temporary glitches; they resolve themselves and leave you with a functioning workload cluster within minutes. >> And if you think about it, that video was not very long at all, and that's how long it would take you if someone came to you and said, hey, can you spin up a Kubernetes cluster for development environment A over here? It literally would take you a few minutes to accomplish that. And that was with AWS, obviously, which is sort of a transient resource in the cloud. But you could do exactly the same thing with resources on-prem, or physical resources, and we'll be going through that later in the process. >> Yeah, absolutely. One thing that is present in that demo, but that I'd like to highlight a little bit more because it just kind of glides by, is this notion of a cluster release. When Sean was creating that cluster, and also when he was upgrading that cluster, he had to choose a release. The demo didn't really explain what that means. Well, in Docker Enterprise Container Cloud, we have release numbers that capture the entire stack of containerization tools that we'll be deploying to that workload cluster. So that's your version of Kubernetes, etcd, CoreDNS, Calico, Docker Engine, all the different bits and pieces that not only work independently but are validated to work together as a stack appropriate for production Kubernetes in Docker Enterprise environments. >> Yep. From the bottom of the stack to the top, we actually test it for scale.
We test it for CVEs, test it for all of the various things that would, you know, result in issues with you running your application services. And I've got to tell you, from having managed Kubernetes deployments and things like that, if you're the one doing it yourself, it can get rather messy. So this makes it easy. >> Bruce, you were saying a second ago that it'll take you at least fifteen minutes to install your Kubernetes cluster. Well, sure, but what about all the other bits and pieces you need? It's not just about pressing the button to install it, right? It's making the right decisions about what components work well, and are best tested to be successful working together, as a stack. This release mechanism in Docker Enterprise Container Cloud lets us just kind of package up that expert knowledge and make it available in a really straightforward fashion via these pre-configured release numbers. And, Bruce, as you were pointing out earlier, these get delivered to us as updates kind of transparently. When Sean wanted to update that cluster, a little update cluster button appeared when an update was available. All you've got to do is click it; it tells you, here's your new stack of Kubernetes components, and it goes ahead and bootstraps those components for you. >> Yeah, it actually even displays a little header at the top of the screen that says you've got an update available, do you want me to apply it? >> Absolutely. Another couple of cool things I think are easy to miss in that demo: I really like the onboard Grafana that comes along with this stack. We've had Prometheus metrics in Docker Enterprise for years and years now, but they were very high level, maybe, in previous versions of Docker Enterprise. Having those detailed dashboards that Grafana provides, I think that's a great value add there.
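One way to picture the release mechanism being discussed: a release number pins a set of component versions that were validated together, and an update is just a pointer to a newer pinned set. The component version numbers below are made up for illustration; real releases are validated by the vendor:

```python
# Hypothetical component pins; the actual versions per release differ.
RELEASES = {
    "5.7":   {"kubernetes": "1.18.8", "etcd": "3.4.3",
              "coredns": "1.6.7", "calico": "3.13"},
    "5.7.1": {"kubernetes": "1.18.9", "etcd": "3.4.3",
              "coredns": "1.6.7", "calico": "3.13"},
}

def available_updates(current, releases=RELEASES):
    """Releases newer than the cluster's current pin (numeric version sort)."""
    key = lambda v: tuple(int(p) for p in v.split("."))
    return sorted((r for r in releases if key(r) > key(current)), key=key)
```

This is why the "update" button in the demo only appears when a newer release exists: the cluster's pinned release is compared against the catalog, not against individual component versions.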
People always wanted to be able to zoom in a little bit on those cluster metrics, and Grafana provides that out of the box for us. >> Yeah, that was really, you know, the joining of the Mirantis and Docker teams together that spurred us to take the best of what Mirantis had in the OpenStack environment for monitoring and logging and alerting, and to do that integration in a very short period of time, so that now we've got it straight across the board for both the Kubernetes world and the OpenStack world, using the same tool sets. >> One other thing I want to point out about that demo, which I think there were some questions about our last go-around, is that the demo was all about creating a managed workload cluster. So the Docker Enterprise Container Cloud manager was using those AWS credentials we provisioned it with to actually create new EC2 instances, install Docker Engine, install Docker Enterprise, all that stuff, on top of those fresh new VMs created and managed by Docker Enterprise Container Cloud. Nothing unique about AWS there; you can do that on OpenStack, do it on bare metal as well. But there's another flavor here, a way to do this for all of our long-time Docker Enterprise customers that have been running Docker Enterprise for years and years. If you've got existing UCP deployments, existing Docker Enterprise deployments, you can plug those in to Docker Enterprise Container Cloud and use Docker Enterprise Container Cloud to manage those pre-existing workload clusters. You don't always have to bootstrap them straight from Docker Enterprise Container Cloud; you can plug in external clusters instead. >> Yep, the kubeconfig elements of the UCP environment and the bundling capability actually give us a very straightforward methodology, and there are instructions on our website for exactly how to bring in and import a UCP cluster.
So it makes it very convenient for our existing customers to take advantage of this new release. >> Absolutely cool. Any more thoughts on this one before we jump on to the next video? >> I think we should press on. >> Time marches on here, so let's carry on. Just to recap where we are right now: in the first video we created a management cluster. That's what we're going to use to create all our downstream workload clusters, which is what we did in this video. This is maybe the simplest architecture, because it's doing everything in one region on AWS. A pretty common use case, though, is to want to spin up workload clusters across many regions. To do that, we're going to add a third layer in between the management and workload cluster layers: our regional cluster managers. So this is going to be a regional management cluster that exists per region, and those regional managers will then be the ones responsible for spinning up child clusters across all these different regions. Let's see it in action in our next video. >> Hello. In this demo, we will cover the deployment of an additional regional management cluster. We'll include a brief architectural overview, how to set up the management environment, how to prepare for the deployment, a deployment overview, and then, just to prove it, we'll deploy a regional child cluster. So, looking at the overall architecture: the management cluster provides all the core functionality, including identity management, authentication, inventory, and release versioning. The regional cluster provides the specific architecture provider, in this case AWS, and the LCM components. The UCP cluster, or child cluster, is the cluster or clusters being deployed and managed. Okay, so why do you need a regional cluster?
To support different platform architectures, for example AWS, OpenStack, even bare metal; to simplify connectivity across multiple regions; to handle complexities like VPNs or one-way connectivity through firewalls; but also to help clarify availability zones. Here we have a view of the regional cluster and how it connects to the management cluster, and its components, including items like the LCM cluster manager. The machine manager and Helm bundles are managed there as well, along with the actual provider logic. Okay, we'll begin by logging on as the default administrative user, writer. Once we're in, we'll have a look at the available clusters, making sure we switch to the default project, which contains the administration clusters. Here we can see the KaaS management cluster, which is the master controller. You'll see it only has three nodes: three managers, no workers. If we look at another regional cluster, similar to what we're going to deploy now, it also only has three managers, once again no workers. But as a comparison, here's a child cluster: this one has three managers, but also has additional workers associated with the cluster. All right, we need to connect to a bootstrap node, preferably the same node that was used to create the original management cluster. In this case it's just a machine on AWS. A few things we have to do to make sure the environment is ready. First, we're going to sudo into root. Then we'll go into our releases folder, where we have the KaaS bootstrap; this was the original bootstrap used to build the original management cluster. We're going to double check that our kubeconfig is there; it's the one created after the original cluster was created. Just double check that the kubeconfig is the correct one and does point to the management cluster. We're also checking to make sure that we can reach the images, that everything's working, and that we can download our images and access them as well.
Next, we're going to edit the machine definitions. What we're doing here is ensuring that for this cluster we have the right machine definitions, including items like the AMI. That's found under the templates AWS directory. We don't need to edit anything else here; we could change items like the size of the machine types we want to use, but the key item to ensure gets changed is the AMI reference, so that the Ubuntu image is the one for the region, in this case the AWS region we're utilizing. If this were an OpenStack deployment, we would have to make sure we're pointing at the correct OpenStack images. Okay, set the correct AMI and save the file. We need to set up credentials again. When we originally created the bootstrap cluster, we got credentials for AWS; if we hadn't done this, we would need to go through the AWS setup. So we're just exporting the AWS access key and ID. What's important is that the KaaS AWS enabled flag equals true. Now we're setting the region for the new regional cluster, in this case Frankfurt, and exporting the kubeconfig that we want to use for the management cluster we looked at earlier. Now we're exporting what we want to call the cluster. The region is Frankfurt, so we call it Frankfurt; try to use something descriptive that's easy to identify. And then after this, we'll just run the bootstrap script, which will complete the deployment for us. Bootstrap of the regional cluster is quite a bit quicker than the initial management cluster; there are fewer components to be deployed, but to make it watchable, we've sped it up. So we're preparing our bootstrap cluster on the local bootstrap node. Almost ready, and we've started preparing the instances at AWS and waiting for the bastion node to get started. There's the bastion node, and we're also starting to build the actual management machines. They're now provisioning, and we've reached the point where they're actually starting to deploy Docker Enterprise.
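The sequence of exports ahead of the bootstrap run amounts to assembling an environment for the script: provider credentials, a provider-enable flag, the target region, the cluster name, and a kubeconfig pointing at the management cluster. The variable names below are assumed for illustration and may not match the product's exact names:

```python
import os

def regional_bootstrap_env(region, cluster_name, kubeconfig_path):
    """Collect the settings a regional bootstrap run reads from its environment."""
    env = dict(os.environ)  # keeps AWS_ACCESS_KEY_ID etc. if already exported
    env.update({
        "KAAS_AWS_ENABLED": "true",    # flag name assumed for illustration
        "REGION": region,               # e.g. eu-central-1 for Frankfurt
        "CLUSTER_NAME": cluster_name,   # something descriptive, easy to identify
        "KUBECONFIG": kubeconfig_path,  # must point at the management cluster
    })
    return env

env = regional_bootstrap_env("eu-central-1", "frankfurt",
                             "/root/releases/kubeconfig")
# The script would then be launched with this environment, e.g.:
# subprocess.run(["./bootstrap.sh", "deploy_regional"], env=env, check=True)
```

The useful habit the demo models is doing all the exports first, then running one script, so a misconfigured region or kubeconfig fails fast rather than mid-deployment.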
This is probably the longest phase. We'll see in a second that all the nodes will go from prepare to deployed; we'll watch their status change as it updates. The first one's ready, the second is just applying, and before long the whole control plane has become ready. Then it pivots the management of the regional cluster from the bootstrap instance into the new cluster, which is now running it for us. Almost there, and now we're deploying StackLight. And done. Now we'll build a child cluster in the new region, very, very quickly. Define the cluster; our new credential will have shown up, and we'll just call it Frankfurt for simplicity. Add the key, and the cluster is defined. Next, the machines: the cluster starts with three managers, and we set the correct AMI for the region. Same to add workers. There we go, that's it building. Total build time should be about fifteen minutes. You can see it's in progress; we can speed this up a little bit. Check the events: we've created all the dependencies and machine instances, and the machines are built. Shortly we should have a working cluster in the Frankfurt region. Now, almost there: one node is ready, two in progress. And we're done; the cluster is up and running. >> Excellent, there we have it. We've got our three-layered Docker Enterprise Container Cloud structure in place now, with our management cluster, with which we bootstrap everything else; our regional clusters, which manage individual AWS regions; and child clusters sitting underneath them. >> Yeah, and you know, you can actually see in the hierarchy the advantages that presents for folks who have multiple geographic locations where they'd like to distribute their clusters, so that you can access them more readily, co-resident with your development teams. And one of the other things I think is really unique about it is that we provide that same operational support system capability throughout.
So you've got StackLight monitoring the StackLight that's monitoring the StackLight, down to the actual child clusters, >> all through that single pane of glass that shows you all your different clusters, whether they're workload clusters, like the child clusters, or regional clusters for managing different regions. Cool. All right, well, time marches on here, folks. We've only got a few minutes left, and I've got one more video, our last video for the session. We're going to walk through standing up a child cluster on bare metal. So far everything we've seen has been AWS-focused, just because it's kind of easy to make demos on AWS. We don't want to leave you with the impression that that's all we do; we're covering AWS, bare metal, and OpenStack deployments as well in Docker Enterprise Container Cloud. Let's see it in action with a bare metal child cluster. >> We are on the home stretch. >> Right. >> Hello. This demo will cover the process of defining bare metal hosts and then review the steps of defining and deploying a bare-metal-based Docker Enterprise cluster. So, why bare metal? Firstly, it eliminates hypervisor overhead, with performance boosts of up to thirty percent; it provides direct access to GPUs, prioritized for high-performance workloads like machine learning and AI; and it supports high-performance workloads like network functions virtualization. It also provides a focus on on-prem workloads, simplifying things and ensuring we don't need to create the complexity of adding another hypervisor layer in between. So, continuing on the theme, why Kubernetes on bare metal? Again, no hypervisor overhead, no virtualization overhead; direct access to hardware items like FPGAs and GPUs. We can be much more specific about the resources required on the nodes, with no need to cater for additional overhead, so we can handle utilization and scheduling better.
And we increase the performance and simplicity of the entire environment, as we don't need another virtualization layer. In this section we'll define the bare metal hosts: we'll create a new project and add the bare metal hosts, including the host name, IPMI credentials, IPMI address, and MAC address, and then provide a machine type label to determine what type of machine it is and its related use. Okay, let's get started. We log in as the operator. We'll go and create a project for our machines to be members of; this helps with scoping later on, for security. Then I begin the process of adding machines to that project. The first thing we do is give the machine a name, anything you want; in this case, bare metal zero one. Provide the IPMI user name, type the password, the MAC address for the boot interface, and then the IPMI IP address. For these machines we have to define the type: storage, worker, or manager. This is a manager. We're going to add a number of other machines, and we'll speed this up just so you can see what the process looks like. In the future, better discovery will be added to the product. Okay, getting back there, our six machines have been added and are busy being inspected and added to the system. Let's have a look at the details of a single node. We can see information on the setup of the node and its capabilities, as well as the inventory information about that particular machine. Okay, next we're going to create the cluster. So we're going to deploy a bare metal child cluster, and the process we're going to go through is pretty much the same as for any other child cluster. So, create cluster: we'll give it a name, selecting bare metal as the provider, and the region. We're going to select the version we want to apply, and we're going to add the SSH keys. Then we're going to give the load
balancer host IP that we'd like to use, out of the address range; update the address range that we want to use for the cluster; check that the CIDR blocks for Kubernetes and the tunnels are what we want them to be; and enable or disable StackLight and set the StackLight settings. That defines the cluster. And then, as for any other cluster, we need to add machines to it. Here we're focused on building Kubernetes clusters, so we put in the count of machines we want as managers: we pick the label type manager and create three machines as managers for the Kubernetes cluster. Then we do the same to add workers, just making sure that the worker label is used, and then we wait for the machines to deploy. It goes through the process of putting the operating system on the nodes, validating that operating system, deploying Docker Enterprise, and making sure that the cluster is up and running, ready to go. Okay, let's review the build events. We can see the machine info, now populated with more information about specifics like storage, and of course details of the cluster, et cetera. Now we watch the machines go through the various stages from prepared to deployed as the cluster builds, and that brings us to the end of this particular demo. As you can see, the process is identical to that of building a normal child cluster, and our deployment is complete. >> There we have it: a child cluster on bare metal, for folks that wanted to deploy this stuff on-prem.
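The per-host inventory entered in the demo — name, IPMI credentials, IPMI address, boot-interface MAC, machine-type label — amounts to a record like the following sketch. Field names and validation rules here are illustrative, not the product's actual schema:

```python
import re

VALID_TYPES = {"manager", "worker", "storage"}
MAC_RE = re.compile(r"^([0-9a-f]{2}:){5}[0-9a-f]{2}$", re.I)

def define_host(name, ipmi_user, ipmi_password, ipmi_ip, boot_mac, machine_type):
    """Validate and return a bare-metal host record for the inventory."""
    if machine_type not in VALID_TYPES:
        raise ValueError(f"machine type must be one of {sorted(VALID_TYPES)}")
    if not MAC_RE.match(boot_mac):
        raise ValueError("boot interface MAC address is malformed")
    return {
        "name": name,
        "ipmi": {"user": ipmi_user, "password": ipmi_password, "ip": ipmi_ip},
        "boot_mac": boot_mac.lower(),
        "type": machine_type,
        "state": "inspecting",  # hosts are inspected after being added
    }

# Hypothetical values, mirroring the "bare metal zero one" manager in the demo.
host = define_host("baremetal01", "admin", "secret", "10.0.0.5",
                   "AA:BB:CC:DD:EE:01", "manager")
```

Catching a bad MAC or an unknown machine-type label at definition time is cheap; the same mistake discovered during PXE boot of a physical server is a much slower feedback loop.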
>> It's been an interesting journey from the mothership, as we started out building a management cluster, then populating it with a child cluster, then creating a regional cluster to spread the management of our clusters geographically, and finally providing a platform for supporting, you know, AI needs and big data needs. Thank goodness we're now able to put things like Hadoop on bare metal, in containers; it's pretty exciting. >> Yeah, absolutely. So with this Docker Enterprise Container Cloud platform, hopefully this commoditizes spinning up clusters, Docker Enterprise clusters that can be spun up and used quickly, taking provisioning times from however many months to get new clusters spun up for your teams down to minutes. We saw those clusters get spun up in just a couple of minutes. Excellent. All right, well, thank you, everyone, for joining us for our demo session for Docker Enterprise Container Cloud. Of course, there are many, many more things to discuss about this and all of Mirantis's products. If you'd like to learn more, if you'd like to get your hands dirty with all of this content, please see us at training.mirantis.com, where we can offer you workshops in a number of different formats on our entire line of products, in a hands-on, interactive fashion. Thanks, everyone. Enjoy the rest of the LaunchPad event. >> Thank you all, enjoy.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Mary | PERSON | 0.99+ |
Sean | PERSON | 0.99+ |
Sean O'Mara | PERSON | 0.99+ |
Bruce | PERSON | 0.99+ |
Frankfurt | LOCATION | 0.99+ |
three machines | QUANTITY | 0.99+ |
Bill Milks | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
first video | QUANTITY | 0.99+ |
second phase | QUANTITY | 0.99+ |
Shawn | PERSON | 0.99+ |
first phase | QUANTITY | 0.99+ |
Three | QUANTITY | 0.99+ |
Two minutes | QUANTITY | 0.99+ |
three managers | QUANTITY | 0.99+ |
fifth phase | QUANTITY | 0.99+ |
Clark | PERSON | 0.99+ |
Bill Mills | PERSON | 0.99+ |
Dale | PERSON | 0.99+ |
Five minutes | QUANTITY | 0.99+ |
Nan | PERSON | 0.99+ |
second session | QUANTITY | 0.99+ |
Third phase | QUANTITY | 0.99+ |
Seymour | PERSON | 0.99+ |
Bruce Basil Matthews | PERSON | 0.99+ |
Moran Tous | PERSON | 0.99+ |
five minutes | QUANTITY | 0.99+ |
hundreds | QUANTITY | 0.99+ |
Melissa Besse & David Stone, HPE | Accenture Innovation Day
>> Hey, welcome back everybody, Jeff Frick here with theCube. We are high atop San Francisco in the Salesforce Tower at the brand new Accenture Innovation Hub, which opened up, I don't know, six months ago or so; we were here for the opening. It's a really spectacular space with a really cool Cinderella stair, so if you come, make sure you check that out. We're talking about cloud and the evolution of cloud and hybrid cloud, and clearly two players that are right in the middle of helping customers get through this journey and do these migrations are Accenture and HPE. So we're excited to have our next guests. Melissa Besse, she is the Managing Director, Intelligent Cloud and Infrastructure Strategic Partnerships at Accenture. Melissa, welcome. And joining us from HPE is David Stone, he's the VP of Ecosystem Sales. David, great to see you. >> Thanks for having me. >> So let's just jump into it. The cloud discussion has taken over for the last 10 years, but it's really continuing to evolve. There was kind of this new entrance with AWS coming on the scene. One of the great lines Jeff Bezos talks about is that they had no competition for seven years; nobody recognized that the bookseller out on the left-hand edge was coming in to take the infrastructure business. But as things have moved to public cloud, now there's hybrid cloud, and not all applications or workloads are right for a public cloud. So now, while the enterprises are trying to figure this out, they want to make their moves, but it's complicated. So first of all, let's talk about some of the vocabulary: hybrid cloud versus multi-cloud. What do those terms mean to you and your customers? Let's start with you, Melissa. >> Sure. So when you think of multi-cloud, right, we're seeing a big convergence of, I would say, a multi-cloud operating model that really has to integrate across all the clouds. So you have your public cloud providers.
You have your SaaS, like Salesforce or Workday, and you have your PaaS, right? And so when you think of multi-cloud, any customer is going to have a plethora of all of these types of clouds, and really being able to manage across those becomes critical. When you think of hybrid cloud, hybrid cloud is really thinking about the placement of it. We usually look at it from a data perspective, right? Are you going to put your data in the public or in the private space? And so you can look at it from that perspective, and it really enables that data movement across both of those clouds. >> So what do you see, David, with your customers? >> I'd say that a lot of the customers that we see today are confused, right? The people who have gone to the public cloud have scratched their heads and said, jeez, what do I do? It's not as cheap as I thought it was going to be. So the ones who were early adopters are confused, and the ones who haven't moved yet are really scratching their heads as well, right? Because if you don't have the right strategy, you'll end up getting boxed in. You'll pay a ton of money to get your data in, and you'll pay a ton of money to get your data out. And so all of our customers, you know, want the right hybrid strategy, and I think that's where the market, and I know Accenture and HPE clearly see this, the market is really becoming a hybrid world. >> It's interesting, Melissa, you said it's based on the data, and you just talked about moving data in and out, where we more often hear it talked about as workload, this kind of horses for courses, you know, a workload-specific decision that it should be deployed in this particular kind of infrastructure configuration. But you both mentioned data, and there's a lot of conversation, kind of pre-cloud, about data gravity and how expensive it is to move the data, and the age-old thing: do you move the compute to the data, or do you move the data to the compute?
There are a lot of advantages if you have that data in the cloud, but you're highlighting a couple of the real negatives in terms of potential cost implications. And we didn't even get into regulations and some of the other things that drive workloads to stay in the data center. So how should people start thinking about these variables when they're trying to figure out what to do next? >> Accenture's position, definitely, when we started off on our hybrid cloud journey, was to capture the workload, and once you have that workload you can really balance the public benefits of speed, innovation and consumption with the private benefits of regulation, data gravity and performance, right? And so our whole approach and big bet has basically been this: we had really good, leading public capabilities because we got into the market early, but we knew our customers were not going to be able to migrate their entire estate over to public. And so in doing that, we said, OK, if we create a hybrid capability that is highly automated, that is consumed like public, and that is standard, we'd be able to offer our customers a way to pick really the right workload in the right place at the right price. And that was really what our whole goal was.
>> Yeah, and we hear a lot of the data center providers like quinyx and stuff talking about features like Direct Connect and Noted Toe have this proximity between the public cloud and the and the stuff that's in your private cloud so that you do have no low latent see, and you can when you do have to move things or you do need to access that data. It's not so far away. Um, I'm curious about the impact of companies like Salesforce with Salesforce Tower here in San Francisco at the Centre Offices and Office 3 65 and Work Day on how kind of the adoption of the SAS applications have changed. The conversation about Cloud or what's important and not important, needs to be security. I don't trust eating outside my data center Now, one might argue that public clouds are more secure in some ways than in private cloud. You have disgruntled employees per se running around the data centers on plugging things. So how? How is the adoption of things like Officer 65? Clearly, Microsoft's leverage that in a big way to grow their own cloud presence changed the conversation about what's good about Cloud. What's not good about Chlo? Why should we move in this direction? But if you have thought >> no, look, it's a great question, and I think if you think about that, his Melissa said, the use cases right and Microsoft is have sex. Feli successfully pivoted their business to it as a service model, right? And so what I think it's done is it's opened up innovation, and a lot of the sales forces of the world have adapted their business models. And that's truly to your point, a sass based offer. And so when you could do a work day or a salesforce dot com implementation shirt, it's been built that it's tested and everything else I think, what then becomes the bigger question in the bigger challenges. Most companies air sitting on 1000 applications that have been built over time, and what do you do with those? And so in many cases, you need to be connected to those SAS space providers. 
But you need the right hybrid strategy, again, to be able to figure out how to connect those SaaS-based services to whatever you're going to do with those 1,000 workloads. And with those 1,000 workloads running on different things, you need the right strategy to figure out where to put the actual workloads. And as people are trying to go, I know one of the questions that comes up is: do you migrate or do you modernize? And so as people put that strategy together, I think how you tie to those SaaS-based services clearly ties into your hybrid strategy. >> I would agree. And so, as David mentioned, right, that's where, with cloud adjacency, you're seeing a lot of blur between public and private. I mean, Google's providing bare metal as a service, so that is actually dedicated hybrid cloud capability, right? So you're seeing it from everyone. And as David talked about, there are all of the surrounding applications around your SAP, around Oracle. When we created our Accenture Hybrid Cloud, we were going after the enterprise workload, but there is a lot of legacy and other applications that need that data, and/or the Salesforce data, whatever the data is, right, and need to really be able to utilize it when they need to, at really low latency. >> So I want to unpack the Accenture Hybrid Cloud. What is that exactly? Is that your guys' own cloud, or, you know, kind of a solution set? I've heard it mentioned a couple of times. So what is the Accenture Hybrid Cloud? >> So Accenture Hybrid Cloud was a big bet we made as we saw the convergence of multi-cloud. We really said, we know everything is not going to go public, and in some cases it's all coming back. And so customers really needed a way to look at all of their workloads, right? Because part of the issue with getting the cost and the benefits out of public is that the workload goes, but you really aren't able to get out of the data center.
We term it the wild animal park, because there are a lot of applications there, right? Are you going to modernize them? Are you going to let them go to end of life? So there are a lot of things you have to consider to truly exit a data center. And so Accenture Hybrid Cloud is actually a big bet we made. It is a highly automated, standard private cloud capability that really augments all of the leading capability we had in the cloud area. And it's differentiated: we made a big bet with HPE, and it's differentiated on its hardware. One of the reasons, when we were going after the enterprise, was that they have large compute and large storage requirements, and what we were able to do when we created this was use some of our automation differentiation. We have a client where we had an existing environment, and we were actually able to achieve some significant benefits just from the automation. We got 50% in the provisioning of applications, we got 40% in the provisioning of the VMs, and we were able to take a lot of what I'll call the manual tasks down; it was like a 62% reduction in the effort, as well as a 33% savings overall in getting things production-ready. So this capability is highly automated. It will actually repeat the provisioning at the application level, because we're going after the enterprise workloads. It's an asset that came from the government side, so it's highly secured. Um, and it really was able to preserve, I think, what our customer needed in being able to span that public-private capability they need out there in the hybrid world. >> Yeah, and I don't know that there's enough talk about the complexity of the management in these worlds. Nobody ever wants to talk about writing the sysadmin piece of the software, right? It's all about the core functionality. Let's shift gears a little bit and talk about HPC.
There's a lot of conversation about high performance computing, and a lot going on with AI and machine learning now, where, you know, most of those benefits are going to be realized in a specific application, right? It's machine learning or artificial intelligence applied to a specific application. So again, you guys do big iron and have been making big iron for a long time. What does this kind of hybrid cloud open up in terms of HPC, to have the big heavy metal inside it and still have kind of the agility and flexibility of a cloud type of infrastructure? >> Yeah, no, I think it's a great question. If you think about what HPE's strategy has been in this area of high performance compute, we bought the company SGI, and as you've seen in the announcements, we're hopefully going to close on the Cray acquisition as well. And so we see, in a world of data continuing to expand in huge volumes, the need to have incredible horsepower to drive it. Now all of this really requires asking: where's your data being created, and where's it actually being consumed? And so you need to have the right edge-to-cloud strategy and everything. In many cases, you need enough compute at the edge to be able to do stuff in real time, but in many cases you need to feed all that data back into a mother cloud, or some sort of mother HPC type of high performance compute environment, that can actually run the more advanced AI and machine learning types of applications to really get the insights, tune the algorithms, and then push some of those APIs and applications back to the edge. So it's an area of huge investment. It's an area where, because of the latency, with things like autonomous driving, you can't put all that stuff into the public cloud, but you need the public cloud, or you need cloud-type capability, to be able to compute and make the right decisions at the right time.
So it's about having the right compute technology at the right place at the right time, at the right cost and the right performance. >> A lot of rights. Yeah, good opportunity for Accenture. So I mean, it's funny, as we talk about hybrid cloud and that kind of new vocabulary around cloud and cloud-like things, we're going to see the same thing with the edge: the edge versus the data center comparison in terms of where the data is and where the processing is, because it's going to be this really dynamic situation. And how much can you push out to the edge? Because there's no air conditioning a lot of times, and the power might not be that great, and maybe connectivity is a little bit limited. So, you know, the edge offers a whole bunch of different challenges that you can control for in a data center, but it is going to be this crazy kind of hybrid world there, too, in terms of the allocation of those resources. Are you guys getting deeper into that model, Melissa? >> So we're definitely working with HPE again to create some of, I'll call it, our edge managed services. Again, going back to what we were saying about the data, right: we saw the centralization of data with the initial entrance into the cloud; now we're seeing the decentralization of that data back out to the edge. And with that, right, these hybrid cloud models require a lot of high performance compute, especially for certain industries. If you take a look at oil and gas exploration, if you look at media processing, right, all of these need to be able to do that. And depending on where it's located, if it's on the edge, how are you going to feed back the data, as we talked about? And so we're looking at: how do you take this foundation, right, this whole Accenture Hybrid Cloud architecture, and have it play that intermediate role? I'm going to call it an intermediary.
Right, because you really need a really good, you know, global data map. You need a good supply chain, right, really, to make sure that the data, no matter where it's coming from, is going to be available for that application at the right time, with the ability to do it at speed. And so all of these things are factors as we look at our whole Accenture Hybrid Cloud strategy, right, and being able to manage the edge, the core, and then back out to cloud. >> Exactly right. And I wonder if you could share some stories, because the value proposition around cloud, I think, has significantly shifted for those who are paying attention, right? It's not about cost, it's not about cost savings. I mean, there's a lot of that in there, and that's good, but really the opportunity is about speed, speed and innovation, and enabling more innovation across your enterprise, with more people having more access to more data to build more apps and really to react. Are people getting that, or are the customers still kind of encumbered by this transition phase, still trying to sort it out? Or do they get it, that really this opportunity is about speed, speed, speed? >> No, go ahead. >> I mean, we use a phrase for this here, right. So to your point, you know, how do you figure out the right strategy? But I think within that, you get to: what's the right application, and how do you fit it into the overall strategy of what you're trying to do? >> And I think the other thing that we're seeing is, um, you know, customers are trying to figure that out. When you start with that application map, you know, there could be 500 to 1,000 workloads, or applications. And how are you going to handle them? You're going to retain some, you're going to retire some, you're going to re-platform, and you're going to re-factor for the cloud or for your private cloud capability, whatever it is you're going to be looking at doing.
Um, I think, you know, we're seeing early adopters, even the hyperscalers themselves, right? They recognize the speed. So, you know, we're working with Google, for instance. They wanted to get into the bare metal as a service capability. For them, actually building it and getting it out to market would take so much longer. We already had this whole Accenture Hybrid Cloud architecture that was cloud adjacent, so we had sub-millisecond latency, and so they loved it, right? Everyone's figuring out that utilizing all of these, I'll call them, platforms and pre-built capabilities, and many of our partners have them as well, is really allowing them that innovation, to get products to market sooner and be able to respond to their customers. Because, as we talked about, this multi-cloud world has lots of things that you have to manage, and if you can get pieces from multiple places, you know, from a partner, right, that can provide more of the services that you need, it really enables the management of it all. >> Right. So let's wrap it up. I want to give you the last word in terms of: what's the most consistent blind spot that you see when you're first engaging with a customer who's relatively early on this journey, that they miss, that you see over and over and over? And you're like, you know, these are some of the things you really have got to think about that they haven't thought about. >> Yeah, so for me, I think it's that the cloud isn't about a destination, it's about an experience. And so, you talked about the operations, but how do you provide that overall experience? I like to use a simple analogy: if you and I need a car for five or 10 or 15 minutes, you go get an Uber, because it's easy, it's quick. If you need a car for a couple of days, you do a rental car. If you need a car for a year, you might do a lease. If you need a car for three or four years, you probably buy it, right?
And so if you use that analogy and think, hmm, I need a workload or an application for five or six years, putting a persistent workload that you know about on a public cloud may be the right answer, but it might be a lot more cost prohibitive. But if you need something that you can stand up in five minutes and shut right back down, the public cloud is absolutely the right way to go, as long as you can deal with the security requirements and stuff. So you think about what the actual requirements are: is it cost, is it performance? You've talked about speed and everything else. It's really trying to figure out the experience you need, and the only way that can really get you what you need to do today is having the right hybrid strategy. Every company, and Accenture was out way in front of the market on public cloud, has now come to the realization, and so have many other places, that the world is going to be hybrid, and it's going to be multi-cloud. And as long as you have an experience and a partner that can manage it and, you know, help you to find the right path, you'll be on the right journey. >> I think the blind spot we run into is that it does start off as a cost savings activity, when really it is so much more about how you're going to manage that enterprise workload. How are you going to worry about the data? Are you going to have access to it? Are you going to be able to make it fluid, right? The whole essence of cloud, right? What it disrupted was the thought that something had to stay in one place, right, and that the real-time decisions were being made where things needed to happen, now through all the different clouds, as well as the thought that you had to own it yourself, right? I mean, everyone always thought, OK, the IT department is very protective of everything it wants to keep. Now it's about saying, all right, how do I utilize the best of each of these multi-clouds to stand up?
I'll call it their core capability as a customer, right? Are they doing the next chip design, or, hey, you know, doing financial market models, right? That requires a high performance capability, right? So when you start to think about all of this stuff, that's the true power: having a strategy that looks at those outcomes. What am I trying to achieve in getting my products and services to market and touching the customers I need, versus, oh, I'm going to move this out to an infrastructure because that's what I thought would save me money, right? That's typically the downfall we see, because they're not looking at it from the workload or the application. >> Same old story, right? Focus on your core differentiator and outsource the heavy lifting on the stuff that's not your core. All right, well, Melissa, David, thanks for taking a minute; really enjoyed the conversation. She's Melissa, he's David, I'm Jeff Frick. You're watching theCube, high above the San Francisco skyline in the Salesforce Tower at the Accenture Innovation Hub. Thanks for watching. We'll see you next time.
SUMMARY :
So if you come, make sure you check that out. So first of all, let's talk about some of the vocabulary hybrid And so when you think of multi cloud, any customer is going to And so all of our customers, you know, want the right hybrid strategy, It's interesting, Melissa, you said it's based on the data, and you just talked about moving data in and out where we more and once you have that workload you can really balance. the APIs and whatever else you might need to get the full advantages of the public cloud. or you do need to access that data. And so as people put that strategy together, I think how you tie to those SaaS-based of the surrounding applications around your SAP, around Oracle, is that your guys' own cloud, or, you know, kind of a solution set. We term it the wild animal park because there are a lot of applications there, right? Are you going a lot going on with AI and machine learning now, where you know most of those benefits are going to be And so in many cases, you need enough compute at the edge to be able to do stuff in you know, the edge offers a whole bunch of different challenges that you can control for in a data center. And so we're looking at How do you take And I wonder if you could share some stories because the value proposition I think around cloud has significantly the right application and how do you fit it into the overall strategy of as we talked about, this multi cloud world has lots of things that you have to manage if you can get pieces blind spot that you see when you're first engaging with a customer who's relatively and shut right back down, the public cloud is absolutely the right way to go as long as you can deal with And that the real time decisions were being We'll see you next time.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
David | PERSON | 0.99+ |
Jeffrey | PERSON | 0.99+ |
Melissa Bessie | PERSON | 0.99+ |
Melissa | PERSON | 0.99+ |
David Stone | PERSON | 0.99+ |
Jeff | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
five | QUANTITY | 0.99+ |
500 | QUANTITY | 0.99+ |
ORGANIZATION | 0.99+ | |
Tyra | PERSON | 0.99+ |
Jeff Basil | PERSON | 0.99+ |
Jason | PERSON | 0.99+ |
10 | QUANTITY | 0.99+ |
five minutes | QUANTITY | 0.99+ |
San Francisco | LOCATION | 0.99+ |
Rick | PERSON | 0.99+ |
HP | ORGANIZATION | 0.99+ |
seven years | QUANTITY | 0.99+ |
15 minutes | QUANTITY | 0.99+ |
40% | QUANTITY | 0.99+ |
56 years | QUANTITY | 0.99+ |
50% | QUANTITY | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
62% | QUANTITY | 0.99+ |
34 years | QUANTITY | 0.99+ |
Melissa Besse | PERSON | 0.99+ |
two players | QUANTITY | 0.99+ |
33% | QUANTITY | 0.99+ |
Six months ago | DATE | 0.99+ |
both | QUANTITY | 0.99+ |
Melissa David | PERSON | 0.99+ |
1000 applications | QUANTITY | 0.99+ |
One | QUANTITY | 0.99+ |
Feli | PERSON | 0.99+ |
Salesforce | ORGANIZATION | 0.99+ |
1000 workloads | QUANTITY | 0.99+ |
HPD | ORGANIZATION | 0.98+ |
Salesforce Tower | ORGANIZATION | 0.98+ |
a year | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
SG | ORGANIZATION | 0.98+ |
Adama Melissa | PERSON | 0.98+ |
quinyx | ORGANIZATION | 0.98+ |
each | QUANTITY | 0.97+ |
Cinderella | PERSON | 0.97+ |
I T. Department | ORGANIZATION | 0.96+ |
one place | QUANTITY | 0.95+ |
Salesforce Tower | LOCATION | 0.95+ |
HBC | ORGANIZATION | 0.92+ |
first offices | QUANTITY | 0.88+ |
EJ | ORGANIZATION | 0.85+ |
first | QUANTITY | 0.85+ |
1000 workloads | QUANTITY | 0.84+ |
SAS | ORGANIZATION | 0.83+ |
couple days | QUANTITY | 0.81+ |
HPE | ORGANIZATION | 0.81+ |
Cloud | TITLE | 0.81+ |
Office 3 | LOCATION | 0.8+ |
one | QUANTITY | 0.79+ |
last 10 years | DATE | 0.76+ |
Centre Offices | LOCATION | 0.74+ |
one of | QUANTITY | 0.71+ |
a century | QUANTITY | 0.69+ |
Accenture Innovation Day | EVENT | 0.66+ |
a ton of money | QUANTITY | 0.64+ |
Day | EVENT | 0.63+ |
Cube | ORGANIZATION | 0.61+ |
Noted | TITLE | 0.59+ |
Kree | ORGANIZATION | 0.58+ |
Officer | ORGANIZATION | 0.58+ |
ah | ORGANIZATION | 0.57+ |
E | PERSON | 0.56+ |
Chlo | ORGANIZATION | 0.55+ |
Direct Connect | TITLE | 0.54+ |
65 | QUANTITY | 0.51+ |
Daniel Bernard, SentinelOne & Bassil Habib, Tri City | Fortinet Accelerate 2018
(techno music) [Announcer] Live from Las Vegas, it's theCube! Covering Fortinet Accelerate 18. Brought to you by Fortinet. >> Welcome back to theCube's continuing coverage of Fortinet Accelerate 2018. I'm Lisa Martin, joined by my cohost Peter Burris, and we have a very cozy set. Right now I'd like to introduce you to our next guests: Daniel Bernard, the vice-president of business development for SentinelOne, and Bassil Habib, you are the IT director at Tri City Foods. Gentlemen, welcome to theCube. >> Great to be here, thanks. >> We're excited to have you guys here. So, Daniel, first question to you: tell us about SentinelOne, what's your role there, and how does SentinelOne partner with Fortinet? >> Sure, I run technology integrations and alliances. SentinelOne is a next generation endpoint protection platform company, where we converge EPP and EDR into one agent that operates autonomously. So whether it's connected to the internet or not, we don't rely on a cloud-delivered solution; it works just as well online and offline. And we're there to disrupt the legacy AV players that have been in this market for 25 years, with technology driven by artificial intelligence to map every part of the threat life cycle to specific AI capabilities, so we can stop attacks before they even occur. >> And your partnership with Fortinet, this is your first Accelerate, so talk to us about the duration of that partnership and what is differentiating-- >> Yeah. >> Lisa: For you. >> It's great to be here at Accelerate and also to work with Fortinet. We've been working with them for about a year and a half, and we're proud members of the Fortinet Security Fabric. What it means to us is that for enterprises, like Tri City Foods that we'll talk about, a defense-in-depth approach is really the way to go. Fortinet has leading edge network security solutions. We have a very meaningful and exciting opportunity to work with Fortinet, given the breadth of our APIs.
We have over 250 APIs, the most of any endpoint solution out there on the market. So the things we can enable within Fortinet's broad stack are really powerful. Fortinet has a lot of customers, and a lot of endpoints in their environments to protect. So we're proud to partner with Fortinet to help go after those accounts together: to not only go into those accounts ourselves, but also strengthen the security that Fortinet is able to offer their customers as well. >> If we can pivot on that for just a second, how does SentinelOne help strengthen, for example, some of the announcements that came out from Fortinet this morning about the Security Fabric? How do you give an advantage to Fortinet? >> Sure. So where we come in is, we sit at the endpoint level, and we're able to bring a lot of different pieces of intelligence to core and critical Fortinet assets. For example, with the Fortinet connector that we are going to be releasing tomorrow, so a little sneak peek on that right here on theCube, the endpoint intelligence is actually, through API-to-API connections, able to go immediately into FortiSandbox and then be pushed to FortiGate. And that's in real time. So whether an endpoint is inside of a network or running around somewhere in the world, whether it's online or offline, a detection and a conviction we make through the SentinelOne client, the agent that actually sits on the endpoint, all of a sudden is able to enrich every single endpoint inside of a Fortinet network and make it much smarter, and also immune from attacks before they even occur. >> So as you think about that, how does it translate into a company like Tri City, which has a large number of franchises, typically without a lot of expertise in those franchises to do complex IT security, but still very crucial data that has to be maintained and propagated? >> Well, from Tri City's perspective, we look into the security environment.
And when you look into the Security Fabric between Fortinet and SentinelOne, that really helps us out a great deal, by looking into automating some of these processes and mitigating some of these threats. That integration, and the zero-day attacks that can be prevented, really helps us out from day one. >> So tell us a little bit about Tri City. >> Well, Tri City Foods is basically the second largest franchisee for Burger King. We currently have approximately 500 locations. Everybody thinks about Burger King as just the, you know, you go purchase a Whopper, but nobody knows about all of the technology that goes on in the back in order to support that environment. You look into it: you've got the point of sale taking your credit card transactions, you've got your digital menu board, you've got all of the items in the back end, the drive-through. We support all of those devices, and we ensure that all of these are working properly and operating efficiently, because if one of these devices is not functioning, that all goes down. The other thing we do, basically, is we need to ensure that the security is up; that's most important for us. We're processing credit card transactions; we cannot afford to have any kind of issue in the environment. And this, again, is where SentinelOne comes into the picture, where all of our devices down there are protected with the solution, as well as protecting the assets with Fortinet security.
We ran into a couple of ransomwares. We were not willing to take any chances with the environment. That evolution came through as, no, we cannot afford to have these types of systems be taken down or be compromised. And we do like to assure the security of our clients. So this is, again, this is where we decided to go into the next gen for protection. Ensuring the uptime and the security of the environment. >> But very importantly, you also don't have the opportunity to hire really, really expensive talent in the store to make sure that the store is digitally secure. Talk a little bit about what Daniel was talking about, relative to AI, automation, and some of the other features that you're looking for as you ensure security in those locations. >> The process to go down there is basically, we cannot expect everybody to understand security. So in order-- >> That's a good bet! (laughing) >> So in order to make-- >> While we're all here! >> That's right! >> So in order to make it easy for everybody to process the solutions, it's best if we simplify as much as possible. We need to make sure it's zero touch, we need to make sure that it works all the time, regardless of whether you are on the network or off the network. We needed to make sure that it's reliable and it works without any compromise. >> And very importantly, it's multimodal, right? It can be online, offline, you can have a variety of different operator characteristics, centralized, more regional. Is that all accurate? >> Multi-tenant, on-prem. >> Definitely. With every location, you got your local users, you have your managers, the district managers, they are mobile. These are mobile users that we have to protect. And in order to protect them we need to make sure that they are protected offline as well as online. And again, the SentinelOne client basically provided that security for us. It is always on, it's available offline, and it's preventing a lot of malware from coming in.
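The API-to-API integration Daniel describes earlier, where a conviction made by one endpoint agent enriches every other endpoint in the fabric whether it is online or offline, can be modeled very roughly as shared-blocklist propagation. The class and method names below are illustrative assumptions for the sketch, not the actual SentinelOne or Fortinet APIs.

```python
# Offline sketch: a conviction made on one endpoint is published to a
# shared fabric, whose blocklist every enrolled endpoint syncs into a
# local copy -- so an endpoint stays protected even while offline, and
# the next sync immunizes it against convictions made elsewhere.

class SecurityFabric:
    def __init__(self):
        self.blocklist = set()          # hashes convicted anywhere

    def publish_conviction(self, file_hash):
        """API-to-API push: one endpoint's conviction updates all."""
        self.blocklist.add(file_hash)

class Endpoint:
    def __init__(self, fabric):
        self.fabric = fabric
        self.local_blocklist = set()    # consulted even when offline

    def convict(self, file_hash):
        """Local detection becomes a fabric-wide conviction."""
        self.local_blocklist.add(file_hash)
        self.fabric.publish_conviction(file_hash)

    def sync(self):
        self.local_blocklist |= self.fabric.blocklist

    def allow(self, file_hash):
        return file_hash not in self.local_blocklist

fabric = SecurityFabric()
a, b = Endpoint(fabric), Endpoint(fabric)
a.convict("deadbeef")      # detection made on endpoint a
b.sync()                   # endpoint b is now immune too
print(b.allow("deadbeef"))  # -> False
print(b.allow("cafef00d"))  # -> True
```

The point of the design is that the conviction travels once, through the fabric, rather than each endpoint having to rediscover the threat on its own.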
>> Talk to us about, kind of, the reduction in complexity and visibility. 'Cause I'm hearing that visibility is probably a key capability that you now have achieved across a pretty big environment. >> Correct. So, before, with the traditional antivirus, you got an on-prem solution. On-prem solution, in order to see that visibility, you have to be logged in, you have to be able to access that solution, you have to be pushing application updates, signature updates, it's very static. Moving into SentinelOne, it's a successful solution. I don't have to touch anything, basically everything works in the background. We update the backend and the clients just get pushed, the updates get pushed, and it's protected. I only have one engineer basically looking after the solution. Which is great in this environment. Because again, everywhere you go, up access is a big problem. So in order to reduce the cost, we need to make sure that we have that automation in place. We need to make sure that everything works with minimal intervention. That issues are mitigated dynamically without having any physical intervention. And this is where the solution came in handy. >> So I'm hearing some really strong positive business outcomes. If we can kind of shift, Daniel, back to you. This is a great testimonial for how a business is continuing to evolve and grow at the speed and scale that consumers are demanding. Tell us a little bit on the SentinelOne side about some of the announcements that Fortinet has made today. For example, the Security Fabric, as well as what they announced with AI. How is that going to help your partnership and help companies like Tri City Foods and others achieve the visibility and the security that they need, at that scale and speed that they demand? >> Yeah, I think Fortinet has a very progressive approach when it comes to every part of their stack.
What we see with the Fortinet Security Fabric is a real desire to work with best-of-breed vendors and bring in their capabilities so that customers can still utilize all the different pieces of what Fortinet offers, whether it be FortiGate, FortiSandbox, FortiMail, all these different fantastic products, but complement those products and enrich them with all these other great vendors here on the floor. And what we heard from Basil is what we hear from our other 2000 customers, these themes of: we need something that's simple. With two people on the team, you can easily spend all your time just logging into every single console. Fortinet brings together so seamlessly in their stack 20, 30 products that are able to be easily managed. But if you don't partner with a vendor like Fortinet or SentinelOne, and you're going into all these different products all day long, there's no time to actually do anything with that data. I think the problem in cyber security today is really one of data overload. What do you do with all this data? You need something that's going to be autonomous and work online and offline but also bring in this level of automation to connect all these different pieces of a security ecosystem together to make what Fortinet has very nicely labeled a Security Fabric. And that's what I believe is what's going on inside Basil's environment, that's what we see in our 2000 customers, and hopefully that's something that all of Fortinet's customers can benefit from. >> Basil, one of the many things that people think about is they associate digital transformation with larger businesses. Now, Tri City Foods is not a small business; 500 Burger King franchises is a pretty sizable business, when you come right down to it. But how is SentinelOne, Fortinet facilitating changes in the in-store experience? Digital changes in the in-store experience?
Are there things that you can now think about doing, as a consequence of bringing this endpoint security into the store in an automated, facile, simple way, that you couldn't think about before? >> Actually yes, by using the Fortinet platform we deployed the FortiAPs. We have the FortiManager; we're looking into, basically, trying to manage and push all of the guest services, to provide guest services. Before, we had to touch a lot of different devices; right now it's just two clicks of a button and I'm able to provide that SSID to all of my stores. We're able to change the security settings with basically a couple clicks. We don't have to go and manage 500 locations. I'm only managing a single platform, in FortiManager, for instance, or FortiCloud. So this is very progressive for us. Again, when you're working with a small staff, the more automation and the more management you can do on the backend to simplify the environment, as well as providing the required security, is a big plus for us. >> There are some key features that we've brought to market to help teams like Basil's. A couple that come to mind: our deep visibility capability, where you can actually see into encrypted traffic directly from the endpoint, without any changes in network topology. That's something that's pretty groundbreaking. We're the only endpoint technology to actually do that, where you can actually threat hunt for IOCs and look around. 70 percent of traffic is encrypted today and that number is rising. You can actually see into all that traffic and look for specific data points. That's a really good example, where you can take what used to have to go to a very high-level SOC analyst and have anybody actually benefit from a tool like that.
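The deep visibility idea described above, hunting for indicators of compromise from endpoint-side telemetry rather than from the (increasingly encrypted) wire, reduces in sketch form to a simple match of host events against known indicators. The event fields and indicator values below are illustrative assumptions, not SentinelOne's actual telemetry schema.

```python
# Because the agent sits on the host, it observes connections before
# they are encrypted on the wire; hunting then becomes matching that
# telemetry against a set of indicators of compromise (IOCs).

IOCS = {"evil.example.net", "198.51.100.7"}

def hunt(events, iocs=IOCS):
    """Return the telemetry events whose destination is a known IOC."""
    return [e for e in events if e["dest"] in iocs]

telemetry = [
    {"process": "pos.exe",     "dest": "payments.example.com"},
    {"process": "updater.exe", "dest": "evil.example.net"},
]
print(hunt(telemetry))
# -> [{'process': 'updater.exe', 'dest': 'evil.example.net'}]
```

A real hunt would match on many more data points (hashes, registry keys, parent processes), but the endpoint-side vantage point is what makes the encrypted-traffic case tractable at all.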
The other one that comes to mind is our rollback capability, where, if something does get through or we're just operating in EDR mode by customer choice, you can actually completely roll back a system to the previously noninfected, nonencrypted state directly from that central location. So whether that person is on an island or in Bermuda, or sitting in a store somewhere, if a system is compromised you don't need to re-image it anymore. You can just click rollback and within 90 seconds it's back to where it was before. So, the time savings we can drive is really the key value proposition from a business outcome standpoint, because you need all these different check boxes, and more than check boxes, but frankly there's just not the people and the hours in the day to do it all. >> So, you said time savings affects maybe resource allocation. I'm wondering, in terms of leveraging what you've established from a security standpoint as differentiation, as Tri City is looking to grow and expand, tell us a little bit about how this is a differentiator for your business, compared to your competition. >> I cannot speak to the competition. (all laugh) What I can speak to is, again, the differentiator for us, as Daniel mentioned, is basically, again, the automation pieces, the rollback features. Minimizing the threat analysis in the environment. All these features basically are going to make us more available for our customers, the environment is going to be secure, and customers will be more than welcome to come to us, and they know that when they're coming in, their information is secure and they're not going to be compromised. >> Well, are you able to set up stores faster? Are you able to, as you've said, roll out changes faster? So you do get that common kind of view of things. >> We're at zero zero breach. >> We're at zero zero breach, yes. So, basically, in order to roll out a lot faster, we do launch the stores faster.
We basically, with the zero-touch deployment that Fortinet is offering, basically send the device to the store, bring it online, and it's functional. We just push it out the door and it's operational. With the SentinelOne platform, push the client to the store and set it and forget it. That is basically the best solution that we ever deployed. >> Set it and forget it. >> I like that. >> Set it and forget it. >> That's why you look so relaxed. (laughs) >> I can sleep at night. (all laugh) >> That's what we want to hear. >> Exactly. So Daniel, last question to you: this is your first Accelerate? >> It is our first Accelerate. >> Tell us about what excites you about being here. What are some of the things that you've heard, and what are you excited about going forward in 2018 with this partnership? >> Yeah, well, as we launch our Fortinet connector tomorrow, what really excites me about being here is the huge partner and customer base that Fortinet has built over the last 20 years. Customers and partners that have not only bought the first time, but they're in it to win it with Fortinet. And that's what we are too. I'm excited about the year ahead and enabling people like Basil to be able to sleep on the weekends, because they can stitch their security solutions together in a meaningful way with best-of-breed technologies, and we're honored to be part of that Fortinet Security Fabric for that very reason. >> Well, gentlemen, thank you both so much for taking the time to chat with us today and share your story at Accelerate 2018. >> Thanks a lot. >> Thank you. >> For this cozy panel up here, I'm Lisa Martin; my cohost with the Cube is Peter Burris. You're watching us live at Fortinet Accelerate 2018. Stick around, we will be right back. (techno music)
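The rollback capability Daniel described earlier can be sketched as snapshot selection: keep point-in-time copies of protected state and restore the newest one taken before the infection. Real products do this at the filesystem or volume-snapshot layer; the dict-based model below is an illustrative assumption, not how any particular agent implements it.

```python
# Minimal rollback sketch: snapshot protected state over time, then on
# compromise restore the last snapshot older than the infection
# timestamp -- no re-imaging of the machine required.

import copy

class ProtectedHost:
    def __init__(self, state):
        self.state = state
        self.snapshots = []             # list of (time, state) pairs

    def snapshot(self, t):
        self.snapshots.append((t, copy.deepcopy(self.state)))

    def rollback(self, infected_at):
        """Restore the newest snapshot taken before the infection."""
        clean = [s for t, s in self.snapshots if t < infected_at]
        if not clean:
            raise RuntimeError("no pre-infection snapshot available")
        self.state = copy.deepcopy(clean[-1])

host = ProtectedHost({"C:/menu.cfg": "whopper-v1"})
host.snapshot(t=100)
host.state["C:/menu.cfg"] = "ENCRYPTED-BY-RANSOMWARE"  # infection at t=150
host.rollback(infected_at=150)
print(host.state)  # -> {'C:/menu.cfg': 'whopper-v1'}
```

The deepcopy on both capture and restore matters: without it the "snapshot" would alias the live state and be corrupted along with it.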
Peter Smails, Datos | AWS re:Invent
>> Announcer: Live from Las Vegas, it's the CUBE. Covering AWS re:Invent 2017. Presented by AWS, Intel, and our ecosystem of partners. >> Well, welcome back to the Sands Expo. Here we are in Las Vegas at re:Invent with just about 50,000 of our closest friends. Big AWS community gathering here all week long, and it's a pleasure to be here with you on the CUBE, along with Keith Townsend. I'm John Walls and we're now joined by Peter Smails, who is the vice president of marketing and business development at Datos IO. Peter, good to see ya. >> Thanks for having me, and glad to be back. I love being on the CUBE. >> You were just here last week, right? >> Keith: Yeah. >> CUBE conversations with John Furrier, so we're going to have to start charging you rent. (laughs) >> I only have two numbers in my head right now: 18 billion, 40% CAGR. Those are the only two numbers I have in my head right now. For those of you not in the know, those are the numbers that AWS was talking about in terms of revenue and growth. Crazy times, crazy show, good stuff. >> This show really does embody that. It certainly illustrates that. We've only been here for... the doors have been open for about a half hour or so. Already wall-to-wall traffic. >> People were queuing up to get into the expo floor, which I don't think I've seen before. >> I swung by our booth, 2825. I swung by there at 11:20 and it was standing room only. It's great. I mean, the buzz, you can feel it. If you're not down on the floor, come down to the floor, 'cause you can just feel the energy. >> And even still, just walking up here, if you've been here to the Sands, you've got these giant hallways. I was here probably two hours ago and it was already wall-to-wall people and it was just packed. I was really impressed. >> The conference started in full tilt at seven o'clock this morning. People were just out and just engaging. >> So you guys, you're here, your relationship obviously at AWS, we're gonna get into that >> Yeah.
You got the booth here, 2825? >> 2825. Yes sir. >> So let's talk about, first off, about your presence here. >> Peter: Yeah >> What brings you into this community? You've been here for a while now. >> Peter: Yeah. >> And maybe the evolution of that from the three or four years-- >> Sure. >> ...back to where you are now. >> Yeah, so our view of the world aligns incredibly well with AWS. The whole notion of the world's moving to the cloud. We've been in business since 2014. We are a cloud data management company with primary use cases around backup and recovery. There's all those things like data mobility, and essentially our view of the world and our strategy is that, as the world moves to the cloud, organizations are building net new applications. They're building modern applications that they're running on hybrid cloud environments. Those applications need a fundamentally new approach to data management. That's what we do. About 50% of our customers run natively on AWS. So this is a very logical show for us. We've got customers building these new modern applications. They're hosting them natively in AWS. They need backup and recovery. They need data mobility. That's what we do. It's just a perfect fit for us. >> So Peter, let's talk a little bit about data mobility. You guys are unapologetically cloud first. We've had this conversation in the past, just offline. Talk to me about that conversation with customers. How that's evolved from three, four years ago to now. >> (chuckles) I'll use another quote from Andy, from earlier this week, or I guess this is from Jeff Bezos, so theoretically: it's the whole thing about they're willing to be misunderstood for a while. You go back four years, early days, yeah, we were doing cloud first, backup and recovery for modern applications built on the MongoDBs, the Cassandras, the non relational databases. It's going to a non relational world. In the early days people would laugh and they'd be like, "Why you doing that?"
We were steadfastly believing then, as we do now, that the world is moving to the cloud. The world is moving largely to a non relational world and so there's going to be a huge opportunity to provide data management solutions. Data aware, data management solutions for that. So we've stuck to that. We've been steadfast in that. But your point about maturity, what's been really exciting for us as an organization is that, I go back even a year, and you talk about, so what do you do? And you give 'em the pitch and there was a fair amount of nuance to it and they'd be like (garbles). They'd sort of give you the "hmm". They'd kind of ask questions or whatever and then once you talk through it, maybe it was a 10 minute elevator pitch, if you will. You had to go like 20 floors. They got it but it was a little bit more nuanced. Now it's, okay great, are you moving to the cloud? No brainer. Are you building modern applications? Are you porting your old applications, building these new modern applications in a non relational world. Absolutely. Are they running in production? Yes. How are you protecting those applications? We have no idea, kinda thing, or we're using native tools, or we're scripting, or we're not doing anything. So it varied, to your point. The conversation has become much less, it's not even nuanced anymore. The qualifying questions are incredibly simple and our value proposition is incredibly easy. If you're running applications, if you've built net new modern applications running in the cloud, or on-prem that you want to back up to the cloud, you need modern data protection. That's what we do. >> Let's talk about this hybrid IT scenario. I was at dinner last night with a couple Fortune 500 AWS customers and I was talking to them about the excitement of this whole category, data protection. They were like, backup? How is that sexy at all?
Then we got into this use case of data mobility, of: I've built something really big on-prem and I need, John Hastings' term, "I need a multi-cloud strategy." >> Yeah, John's not a huge multi... He pressed me last week on the whole multi-cloud. >> Keith: Furrier is-- Yeah, oh yeah, sorry (laughs) >> John: I don't want you to reach over and back slap me here. >> Peter: So you're all in on multi-cloud. It's Furrier we gotta worry about. >> John: My whole life. >> Talk to us about the importance of using what we would have traditionally called backup as a data mobility strategy. >> Cool. Absolutely. It all kinda comes down to, for us, being data aware. If you think about it, we're a cloud data management company. Our number one use case is backup and recovery, because the first thing you have to do is you gotta capture the data, you've gotta. >> Backup recovery of my VMs, right? >> Good question. We are unlike traditional backup and recovery. We're not infrastructure-centric. We're application-centric. We're actually agnostic to the underlying infrastructure. So if you're running bare metal on-prem, if you're running on EC2, if you're leveraging S3, wherever you're running, we're fine, because we integrate at the application level, the database level. Hence our focus on non relational. Our number one use case is protecting that data. Because we are application aware, because we're data aware and we integrate at the database level, we understand the underlying schema. We are aware of the data structures within the databases that people are protecting first and foremost. But in the context of data mobility, to your point, the number two use case for us is that organizations want to protect their data, but then they want to do things like: I wanna spin up copies or sub-copies of my data, of my backup copies, for test/dev, for QA, for performance testing, for cloud instantiation, for archiving, for BI, for whatever I want to do. The key is, we're not a migration company.
AWS has migration services. If you need to move two petabytes of data from on-prem and you're now gonna host it in the cloud, that's not us. But if you built these new applications and you want to basically intelligently use subsets of your data for those workloads I was talking about, we enable you to be incredibly intelligent about only recovering, if you will, or only moving the data that you need. For example, simple things like, with our RecoverX 2.5 that we just announced, we do something called queryable recovery. What that means is, I can do everything from star-dot-Peter-star, or I can pick individual rows and columns. >> John: Just pick and choose? >> I can pick and choose based upon my database schema. I can mask columns of data if I have to do GDPR compliance or PII. So from a use case standpoint, it's all about being aware of the data that you're actually backing up in the first place, but then what data you wanna move, so that you can be incredibly intelligent and efficient about the data that you're moving. >> So in traditional systems, I can encrypt data at rest. I can back it up. My tapes can be encrypted. My discs that are holding that backup data can be encrypted. When I think about that, when it comes to backing up object storage into the cloud, how do I do that with...? >> Great question. Again, because we're not infrastructure based, we're not LUN based, we're not block based, we integrate at the database level. We're completely transparent to encryption. We work perfectly fine with encrypted data. We work perfectly fine with compressed data. We invented something called semantic de-duplication. If you're familiar with traditional de-duplication. >> Keith: Right. >> It works at a block level, fixed or varying length blocks. In a clustered database environment, or in a compressed or encrypted data environment, it kinda throws the capabilities of traditional de-dup out the window.
Semantic de-duplication understands the schema of the underlying database. We do highly efficient de-duplication for encrypted data, for compressed data. We're transparent to that, if you will. So again, back to our cloud first model, we built that in from day one. It's fundamental. Our underlying architecture, the platform that we've built, is fundamentally unlike anything else from a traditional backup and recovery or data management platform. >> So make sure I get it right before we say good-bye. Datos IO, 2825? >> 2825, correct. www.DatosIO If you are running applications in the cloud and need to protect those apps, please talk to us. We'd love to help you out. If you're looking for data mobility solutions, come talk to us. >> John: There's the pitch. >> Love to chat. >> Peter, thanks for being with us. Next week you're off, all right? >> We'll have to cancel that one because I'm back next week. >> John: Back-to-back CUBErs, but maybe we'll give you a week off. >> Thanks for having me, always like being here. Appreciate it. >> Thanks for being with us. Back for more here at re:Invent. We're in Las Vegas live here on the CUBE. Back with more right after this.
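The queryable recovery and column-masking capability Peter describes above can be sketched roughly as follows. The record fields, the `***` masking token, and the glob-style pattern matching are illustrative assumptions for the sketch, not RecoverX's actual query syntax.

```python
# Sketch of "queryable recovery": rather than restoring a whole backup,
# select only the rows and columns you need, masking sensitive columns
# (e.g. for GDPR/PII) on the way out.

import fnmatch

def recover(rows, name_pattern, columns, mask=()):
    """Restore chosen columns of the rows whose name matches the pattern."""
    out = []
    for row in rows:
        if fnmatch.fnmatch(row["name"], name_pattern):
            out.append({c: ("***" if c in mask else row[c])
                        for c in columns})
    return out

backup = [
    {"name": "peter", "email": "p@example.com", "store": 17},
    {"name": "basil", "email": "b@example.com", "store": 42},
]
# "star dot Peter star" style selection, with the email column masked:
print(recover(backup, "*peter*", ["name", "email", "store"], mask={"email"}))
# -> [{'name': 'peter', 'email': '***', 'store': 17}]
```

Selecting and masking at the logical-record level is what makes this schema-aware: the same query works no matter how the backup is laid out on storage.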
Day Two Kickoff | Big Data NYC
(quiet music) >> I'll open that while he does that. >> Co-Host: Good, perfect. >> Man: All right, rock and roll. >> This is Robin Matlock, the CMO of VMware, and you're watching theCUBE. >> This is John Siegel of VP of Product Marketing at Dell EMC. You're watching theCUBE. >> This is Matthew Morgan, I'm the chief marketing officer at Druva and you are watching theCUBE. >> Announcer: Live from midtown Manhattan, it's theCUBE. Covering BigData New York City 2017. Brought to you by SiliconANGLE Media and its ecosystem sponsors. (rippling music) >> Hello, everyone, welcome to a special CUBE live presentation here in New York City for theCUBE's coverage of BigData NYC. This is where all the action's happening in the big data world, machine learning, AI, the cloud, all kind of coming together. This is our fifth year doing BigData NYC. We've been covering the Hadoop ecosystem, Hadoop World, since 2010, it's our eighth year really at ground zero for the Hadoop, now the BigData, now the Data Market. We're doing this also in conjunction with Strata Data, which was Strata Hadoop. That's a separate event with O'Reilly Media, we are not part of that, we do our own event, our fifth year doing our own event, we bring in all the thought leaders. We bring all the influencers, meaning the entrepreneurs, the CEOs, to get the real story about what's happening in the ecosystem. And of course, we do it with our analyst at Wikibon.com. I'm John Furrier with my cohost, Jim Kobielus, who's the chief analyst for our data piece. Lead analyst Jim, you know the data world's changed. We had commentary yesterday, all up on YouTube.com/SiliconAngle. Day one really set the table. And we kind of get the whiff of what's happening, we can kind of feel the trend, we got a finger on the pulse.
Two things going on, two big notable stories: one is the world's continuing to expand around community and hybrid data and all these cool new data architectures, and the second kind of substory is the O'Reilly show has become basically a marketing machine. They're making millions of dollars over there. A lot of people were, last night, kind of not happy about that, and what's being given back to the community. So, again, the community theme is still resonating strong. You're starting to see that move into the corporate enterprise, which you're covering. What are you finding out, what did you hear last night, what are you hearing in the hallways? What are kind of the tea leaves that you're reading? What are some of the things you're seeing here? >> Well, all things hybrid. I mean, first of all it's building hybrid applications for hybrid cloud environments, and there's various layers to that. So yesterday on theCUBE we had, for example, one layer is hybrid semantic virtualization layers, which are critically important for bridging workloads and microservices and data across public and private clouds. We had, from AtScale, we had Bruno Aziza and one of his customers discussing what they're doing. I'm hearing a fair amount of this venerable topic of semantic data virtualization become even more important now in the era of hybrid clouds. That's a fair amount of the scuttlebutt in the hallway and atrium talks that I participated in. Also yesterday from BMC we had Basil Faruqi talking about basically automating data pipelines. There are data pipelines in hybrid environments. Very, very important for DevOps, productionizing these hybrid applications for these new multi-cloud environments. That's quite important. Hybrid data platforms of all sorts. Yesterday we had from Actian Jeff Veis discussing their portfolio for on-prem, public cloud, putting the data in various places, and speeding up the queries and so forth. So hybrid data platforms are going increasingly streaming in real time.
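The semantic virtualization layer Jim describes above can be pictured, in toy form, as one logical schema mapped onto differently named fields in different clouds, so a single query runs unchanged wherever the data lives. The store names, field mappings, and sample records below are illustrative assumptions, not any vendor's actual implementation.

```python
# Toy semantic virtualization layer: queries are written against
# logical field names; a per-store mapping translates them to the
# physical field names of each backing store (public cloud, on-prem).

LOGICAL_TO_PHYSICAL = {
    "public_cloud": {"customer": "cust_id", "spend": "total_usd"},
    "on_prem":      {"customer": "CUSTOMER_KEY", "spend": "SPEND"},
}

STORES = {
    "public_cloud": [{"cust_id": "a1", "total_usd": 120}],
    "on_prem":      [{"CUSTOMER_KEY": "b7", "SPEND": 340}],
}

def query(fields):
    """Run one logical projection across every physical store."""
    rows = []
    for store, mapping in LOGICAL_TO_PHYSICAL.items():
        for rec in STORES[store]:
            rows.append({f: rec[mapping[f]] for f in fields})
    return rows

print(query(["customer", "spend"]))
# -> [{'customer': 'a1', 'spend': 120}, {'customer': 'b7', 'spend': 340}]
```

The point is the decoupling: adding a third cloud means adding one mapping entry, not rewriting every query, which is the "plug and play without changing code" property discussed here.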
What I'm getting is that what I'm hearing is more and more of a layering of these hybrid environments is a critical concern for enterprises trying to put all this stuff together, and future-proof it so they can add on all the new stuff. That's coming along like cirrus clouds, without breaking interoperability, and without having to change code. Just plug and play in a massively multi-cloud environment. >> You know, and also I'm critical of a lot of things that are going on. 'Cause to your point, the reason why I'm kind of critical on the O'Reilly show, and particularly the hype factor going on in some areas, is two kinds of trends I'm seeing with respect to the owners of some of the companies. You have one camp that are kind of groping for solutions, and you'll see that with their whitewashing new announcements, this is going on here. It's really kind of-- >> Jim: I think it's AI now, by the way. >> And they're AI-washing it, but you can, the tell sign is they're always kind of doing a magic trick of some type of new announcement, something's happening, you got to look underneath that, and say where is the deal for the customers? And you brought this up yesterday with Peter Burris, which is the business side of it is really the conversation now. It's not about the speeds and feeds and the cluster management, it's certainly important, and those solutions are maturing. That came up yesterday. The other thing that you brought up yesterday I thought was notable was the real emphasis on the data science side of it. And it's that it's still not easy for data scientists to do their job. And this is where you're seeing productivity conversations come up with data science. So, really the emphasis at the end of the day boils down to this. If you don't have any meat on the bone, you don't have a solution where the rubber hits the road, where you can come in and provide a tangible benefit to a company, an enterprise, then it's probably not going to work out.
And we kind of had that tool conversation, you know, as people start to grow. And so as buyers out there, they've got to look, and kind of squint through it, saying where's the real deal? So that kind of brings up what's next. Who's winning? How do you as an analyst look at the playing field and say, that's good, that's got traction, that's winning, mm, not too sure? What's your analysis, how do you tell the winners from the losers, and what's your take on this from the data science lens? >> Well, first of all, you can tell the winners when they have an ample number of reference customers who are doing interesting things. Interesting enough to get a jaded analyst to pay attention. Doing something that changes the fabric of work or life, whatever, clearly. Solution providers who can provide that have all the hallmarks of a winner, meaning they're making money, and they're likely to grow and so forth. But the hallmarks of a winner are also those, in many ways, who have a vision and catalyze an ecosystem around that vision of something that possibly could be done before, but not nearly as efficiently. So you know, for example, what we're seeing now in the whole AI space, deep learning, is, you know, AI means many things. The core right now, in terms of the buzzy stuff, is deep learning for being able to process real-time streams of video, images and so forth. And so, what we're seeing now is that the vendors who appear to be on the verge of being winners are those who use deep learning inside some new innovation that appeals to a potential mass market. It's something like an app you put on your smart phone, or it's something you buy at Walmart and install in your house. You know, clearly the whole notion of Alexa, and all that stuff.
Anything that takes chatbot technology, really deep learning powers chatbots, and is able to drive a conversational UI into things that you wouldn't normally expect to talk to you, and does it well in a way that people have to have it. Those are the vendors that I'm looking for, in terms of those are the ones that are going to make a ton of money selling to a mass market, and possibly, and very much once they go there, they're building out a revenue stream and a business model that they can conceivably take into other markets, especially business markets. You know, like Amazon, 20-something years ago when they got started in the consumer space as the exemplar of web retailing, who expected them 20 years later to be a powerhouse provider of business cloud services? You know, so we're looking for the Amazons of the world that can take something as silly as a conversational UI, driven by DL, inside of a consumer appliance, and 20 years from now, maybe even sooner, become a business powerhouse. So that's what's new. >> Yeah, the thing that comes up that I want to get your thoughts on is that we've seen data integration become a continuing theme. The other thing about the community play here is you start to see customers align with syndicates or partnerships, and I think it's always been great to have customer traction, as you pointed out, as a benchmark. But now you're starting to see the partner equation, because this isn't the open, decentralized, distributed internet of the early days. And it is looking like it's going to form differently than the way it was in the web days, with mobile and connected devices, IoT and AI. A whole new infrastructure's developing, so you're starting to see people align with partnerships. So I think that's something that's signaling to me that the partnership angle is amping up. I think people are partnering more.
We've had Hortonworks on with IBM, people are partnering; some people take a Switzerland approach where they partner with everyone. You had WANdisco partnering with all the cloud guys; I mean, they have unique IP. So you have this model where you've got to go out and do something, but you can't do it alone. Open source is a key part of this, so obviously that's part of the collaboration. This is a key thing. And then they're going to check off the boxes. Data integration, deep learning is a new way to kind of dig deeper. So the question I have for you is, the impact on developers. 'Cause if you can connect the dots between open source, where 90% of the software written will be already open source and 10% differentiated, and then how people go to market with the enterprise via a partnership, you can almost connect the dots and say it's kind of a community approach. So that leaves the question, what is the impact to developers? >> Well, the impact to developers, first of all, is that when you go to a community approach, some big players are going more community- and partnership-oriented in hot new areas. Like if you look at some of the recent announcements in chatbots and those technologies, we have sort of a rapprochement between Microsoft and Facebook and so forth, or Microsoft and AWS. The impact for developers is that there's convergence among the companies that might have competed to the death in particular hot new areas, like, you know, like I said, chatbot-enabled apps for mobile scenarios. And so it cuts short the platform wars fairly quickly, harmonizes around a common set of APIs for accessing a variety of competing offerings that really overlap functionally in many ways.
For developers, it's simplification around a broader ecosystem where it's not so much competition on the underlying open source technologies, it's now competition to see who penetrates the mass market with actually valuable solutions that leverage one or more of those erstwhile competitors into some broader synthesis. You know, for example, the whole ramp-up to the future of self-driving vehicles, where it's not clear who's going to dominate. Will it be the vehicle manufacturers that are equipping their cars with all manner of computerized everything? Or will it be the up-and-comers? Will it be the computer companies like Apple and Microsoft and others who get real deep and invest fairly heavily in self-driving vehicle technology, and become themselves the new generation of automakers in the future? So, what we're getting is that going forward, developers want to see these big industry segments converge fairly rapidly around broader ecosystems, where it's not clear who will be the dominant player in 10 years. The developers don't really care, as long as there is consolidation around a common framework to which they can develop fairly soon. >> And open source obviously plays a key role in this. How is deep learning impacting the contributions that are being made? Because we're starting to see that the competitive advantage in collaboration on the community side comes from the contributions from companies. For example, you mentioned TensorFlow multiple times yesterday, from Google. I mean, that's a great contribution. If you're a young kid coming into the developer community, I mean, this is not normal. It wasn't like this before. People just weren't donating massive libraries of great stuff, already pre-packaged. So, all new dynamics are emerging. Is that putting pressure on Amazon, is that putting pressure on AWS and others? >> It is.
First of all, there is a fair amount of, I wouldn't call it first-mover advantage for TensorFlow; there have been a number of DL toolkits on the market, open source, for the last several years. But they achieved the deepest and broadest adoption most rapidly, and now TensorFlow is essentially a de facto standard, in the way that, going back, betraying my age, 30, 40 years ago, you had two companies called SAS and SPSS that quickly established themselves as the go-to statistical modeling tools. And then they got a generation, our generation, of developers, or at least of data scientists, what became known as data scientists, to standardize: you're either going to go with SAS or SPSS if you're going to do data mining. Cut ahead to the 2010s now. For the new generation of statistical modelers, it's all things DL and machine learning. And so SAS versus SPSS is ages ago; those companies, those products still exist. But now, what are you going to get hooked on in school? What are you going to get hooked on in high school, for that matter, when you're just hobby-shopping DL? You'll probably get hooked on TensorFlow, 'cause they have the deepest and the broadest open source community where you learn this stuff. You learn the tools of the trade, you adopt that tool, and everybody else in your environment is using that tool, and you've got to get up to speed. So the fact is, that broad adoption early on in a hot new area like DL means tons. It means that essentially TensorFlow is the new Spark, where Spark, you know, once again, just in the past five years came out real fast. And it's been eclipsed, as it were, on the stack of cool by TensorFlow. But it's a deepening stack of open source offerings. So the new generation of developers with data science workbenches, they just assume that there's Spark, and they're going to increasingly assume that there's TensorFlow in there.
They're going to increasingly assume that there are the libraries and algorithms and models and so forth floating around in the open source space that they can use to bootstrap themselves fairly quickly. >> This is a real issue in the open source community, which we talked about when we were in LA for the Open Source Summit. Exactly that: there are some projects that become fashionable, so for example the Cloud Native Computing Foundation, very relevant but also hot, really hot right now. A lot of people are jumping on board the cloud-native bandwagon, and rightfully so. A lot of work to be done there, and a lot of things to harvest from that growth. However, the boring blocking-and-tackling projects don't get all the fanfare but are still super relevant, so there's a real challenge of how do you nurture these awesome projects that we don't want to become like a nightclub where nobody goes anymore because it's not fashionable. Some of these open source projects are super important and have massive traction, but they're not as sexy or as flashy as some of the others. >> DL is not as sexy, or machine learning, for that matter, not as sexy as you would think if you're actually doing it, because the grunt work, John, as we know, for any statistical modeling exercise, is data ingestion and preparation and so forth. That's 75% of the challenge for deep learning as well. But also for deep learning and machine learning, training the models that you build is where the rubber meets the road. You can't have a really strongly predictive DL model in terms of face recognition unless you train it against a fair amount of actual face data, whatever it is. And it takes a long time to train these models. That's what you hear constantly. I heard this constantly in the atrium talking-- >> Well, that's a data challenge. You need models that are adapting, and you need real time, and I think-- >> Oh, here-- >> This points to the real new way of doing things, it's not yesterday's model.
It's constantly evolving. >> Yeah, and that relates to something I read this morning, or maybe it was last night, that Microsoft has made a huge investment in AI and deep learning machinery. They're doing amazing things. And one of the strategic advantages they have as a large, established solution provider with a search engine, Bing, is that from what I've read, and I haven't talked to Microsoft in the last few hours to confirm this, Bing is a source of training data that they're using for machine learning, and I guess deep learning modeling, for their own solutions or within their ecosystem. That actually makes a lot of sense. I mean, Google uses YouTube videos heavily in its deep learning for training data. So there's the whole issue of, if you're a pipsqueak developer, some, you know, I'm sorry, this sounds patronizing, some pimply-faced kid in high school who wants to get real deep on TensorFlow and start building and tuning these awesome kickass models to do face recognition, or whatever it might be, where are you going to get your training data from? Well, there are plenty of open source training databases out there you can use, but it's what everybody's using. So, there's sourcing the training data, and there's labeling the training data; that's human-intensive, you need human beings to label it. There was a funny recent episode, or maybe it was a last-season episode, of Silicon Valley that was all about machine learning and building and training models. It was the hot dog, not hot dog episode; it was so funny. On the show, fictionally, they bamboozle a class of college students to provide the training data and to label it for this AI algorithm. It was hilarious. But where are you going to get the data? How are you going to label it? >> Lot more work to do, that's basically what you're getting at. >> Jim: It's DevOps, you know, but it's grunt work.
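The "grunt work" point is easy to see in even a toy example. In this plain-Python sketch, most of the lines are parsing, cleaning, and labeling made-up data; the actual model training, a single logistic unit fit by gradient descent, is only a handful of lines. The data, labels, and "hot dog" framing are illustrative assumptions, a nod to the Silicon Valley episode, not a real vision workload.

```python
# Most of this sketch is data preparation; the training loop is tiny.
# All data and labels are invented for illustration.
import math

# --- data prep and labeling: the bulk of the effort ----------------
raw = [(" 0.9", "hotdog"), ("0.1", "not"), ("0.8", "hotdog"),
       ("bad", "hotdog"), ("0.2", "not"), ("0.95", "hotdog")]

def clean_and_label(rows):
    out = []
    for feat, tag in rows:
        try:
            x = float(feat)          # parse, dropping malformed rows
        except ValueError:
            continue
        out.append((x, 1.0 if tag == "hotdog" else 0.0))
    return out

data = clean_and_label(raw)          # 5 usable labeled examples

# --- model training: comparatively small ---------------------------
w, b, lr = 0.0, 0.0, 0.5
for _ in range(2000):
    for x, y in data:
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # sigmoid output
        w += lr * (y - p) * x                      # gradient step
        b += lr * (y - p)

def predict(x):
    return 1.0 / (1.0 + math.exp(-(w * x + b))) > 0.5

print(predict(0.9), predict(0.1))
```

At production scale the ratio only gets worse: sourcing and labeling real face or image data dwarfs the modeling code, which is exactly the point being made in the conversation above.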
>> Well, we're going to kick off day two here. This is SiliconANGLE Media's theCUBE, our fifth year doing our own event separate from O'Reilly Media but in conjunction with their event in New York City. It's gotten much bigger here in New York City. We call it BigData NYC, that's the hashtag. Follow us on Twitter. I'm John Furrier, with Jim Kobielus. We're here all day; we've got Peter Burris joining us later, head of research for Wikibon, and we've got great guests coming up. Stay with us, we'll be back with more after this short break. (rippling music)