Robin Goldstone, Lawrence Livermore National Laboratory | Red Hat Summit 2019
>> Live from Boston, Massachusetts, it's theCUBE, covering Red Hat Summit 2019, brought to you by Red Hat. >> Welcome back to theCUBE and our coverage of Red Hat Summit 2019 here at the convention center. I'm John Walls, along with Stu Miniman, and we're now joined by Robin Goldstone, who's an HPC solution architect at the Lawrence Livermore National Laboratory. Hello, Robin. >> Hi there. Good to see you. >> I saw you on the keynote stage this morning. Fascinating presentation, I thought. First off, for the viewers at home who might not be too familiar with the laboratory, could you please give that thirty-thousand-foot view of just what kind of national security work you're involved with? >> Sure. So yes, indeed, we are a national security lab. And, you know, first and foremost our mission is assuring the safety, security and reliability of our nuclear weapons stockpile. And there's a lot to that mission, but we also have a broader national security mission. We work on counterterrorism and nonproliferation, a lot of cybersecurity kinds of things, and even just general science. We're doing things with precision medicine and just all sorts of interesting technology. >> Fascinating. Yeah, so, Robin, in IT the buzzword the past several years has been scale, and we talk about what public cloud people are doing, but labs like yours have been challenged with scale in many other ways, especially performance, which is usually at the forefront of where things are. You talked about it in the keynote this morning: Sierra is the latest-generation supercomputer, the number two supercomputer. I don't know how many people understand what a petaflop is, or 125 petaflops and the like, but tell us a little bit about, you know, kind of the why and the what of that. >> Right. So Sierra's a supercomputer, and what's unique about these systems is the problems we're solving. There are lots of systems that are networked together, maybe with a bigger number of servers than ours, but we're doing scientific simulation, and that kind of computing requires a level of parallelism and is very tightly coupled. All the servers are running a piece of the problem, and they all have to sort of operate together. If any one of them is running slow, it makes the whole thing go slow. So it's really this tightly coupled nature of supercomputers that makes things really challenging. You know, we talked about performance: if one server is just running slow for some reason, everything else is going to be affected by that. So we really do care about performance, and we really do care about just every little piece of the hardware performing as it should. >> I think with national security, nuclear stockpiles, I mean, there is nothing more important, obviously, than the safety and security of the American people, and you're at the center of that, right? And you're open source, right? You know, how does that work? Because, as much trust and faith and confidence as we have in the open source community, this is an extremely important responsibility that's being consigned, more or less, to this open source community. >> Sure. You know, at first, people do have that feeling that we should be running some secret sauce. I mean, our applications themselves are secret. But when it comes to the system software and all the software around the applications, open source makes perfect sense.
I mean, we started out running really closed-source solutions. In some cases the hardware itself was really proprietary, and of course the vendors who made the proprietary hardware wanted their software to be proprietary too. But I think most people can relate to this: you buy a piece of software and the vendor tells you it's great, it's going to do everything you need it to do, and trust us, right? Okay, but at our scale it often doesn't work the way it's supposed to work. They've never tested it at our scale. And when it breaks, now they have to fix it, and they're the only ones who can fix it. And in some cases we found it wasn't even that; the vendor decided, you know what, no one else has one quite like yours, and it's a lot of work to make it work for you, so we're just not going to fix it. And you can't wait, right? And so open source is just the opposite of that. I mean, we have all that visibility into the software. If it doesn't work for our needs, we can make it work for our needs, and then we can give it back to the community. Because even though few people are doing things at the scale that we are today, a lot of the things that we're doing really do trickle down and can be used by a lot of other people. >> It's something really interesting because, as you said, it used to be, okay, the Cray supercomputer is what we know; let's use proprietary interfaces because I need the highest speed, and therefore it's not the general-purpose stuff. You moved to x86. Linux is something that's been in the supercomputers for a while, but it's a finely tuned version, you know, duct tape and baling wire, and don't breathe on it once you get it running. You're running RHEL today, so talk a little bit about the journey with RHEL, you know, now on the supercomputers. >> Right. So again, there's always been this sort of proprietary, really high-end supercomputing. But in the late nineteen-nineties, early two-thousands, that's when we started building these commodity clusters. You know, at the time I think Beowulf was the terminology for that. But basically we were looking at how we could take these basic off-the-shelf servers and make them work for our applications, and trying to take advantage of as much commodity technology as we can, because we didn't want to reinvent anything; we wanted to reuse as much as possible. And so we've really ridden that curve. And initially it was just Red Hat Linux; there was no RHEL at the time. But then when we started getting into the newer architectures, going from x86 to x86-64 and Itanium, the support just wasn't there in basic Red Hat. And again, even though it's open source and we could do everything ourselves, we don't want to do everything ourselves. I mean, having an organization, having this enterprise edition of Red Hat, having a company stand behind it; the software is still open source, we can look at the source code, we can modify it if we want. But you know what, at the end of the day, we're happy to hand over some of our challenges to Red Hat and let them do what they do best. They have great reach into the kernel community. They can get things done that we can't necessarily get done. So it's a great relationship. >> Yes. So that last mile, getting it onto Sierra there: is that the first time it's been on one of the big showcase supercomputers? >> Sure.
And part of the reason for that is that those big computers themselves are basically now mostly commodity. I mean, again, you talked about a Cray, some really exotic architecture; Sierra is a collection of Linux servers. Now, in this case they're running the POWER architecture instead of x86, so Red Hat did a lot of work with IBM to make sure that POWER was fully supported in the RHEL stack. But again, the servers themselves are somewhat commodity. We're running NVIDIA GPUs; those are widely used everywhere, obviously a big deal for machine learning and such. The biggest proprietary component we're still dealing with is the interconnect. You know, I mentioned these clusters have to be really tightly coupled. The performance has to be really superior, and most importantly the latency, right: they have to be super low latency, and Ethernet just doesn't cut it. >> So you run InfiniBand today, I'm assuming? >> We're running Mellanox InfiniBand on Sierra. On some of our commodity clusters we run Mellanox as well; on other ones we run Intel Omni-Path, which is just another flavor of InfiniBand. You know, if we could use Ethernet, we would, because again we would get all the benefit and the leverage of what everybody else is doing, but it just hasn't quite been able to meet our needs in that area. >> Now, if I recall the history lesson we got a bit of this morning, the laboratory has been around since the early fifties, born of the Cold War. And so obviously open source was, you know, yeah, right, not exactly the natural fit. What about your evolution to open source as this has taken hold? There had to be a tipping point at some point that converted and made the laboratory believers. If you can, can you go back to that process? Was it a big moment for you, a big turn, or was it just kind of a steady migration? >> Well, it's interesting. If you go way back, we actually wrote the operating systems for those early Cray computers. We wrote those operating systems in house because there really was no operating system that would work for us. So we've been software developers for a long time; we've been system software developers. But at that time it was all proprietary and closed source. So we knew how to do that stuff. What really happened, I think, was when these commodity clusters came along, when we showed that we could build a cluster that could perform well for our applications on that commodity hardware. We started with Red Hat, but we had to add some things on top. We had to add the software that made a bunch of individual servers function as a cluster: all the system management stuff, the resource manager, the thing that lets us schedule batch jobs, and the parallel file system. We wrote that software. Those things did not exist in open source, and we helped to write them, and those things took on lives of their own. So Lustre is a parallel file system that we helped develop. Slurm, which anyone outside of HPC probably hasn't heard of, is a resource manager that again is very widely used. So the lab really saw that we got a lot of visibility by contributing this stuff to the community, and I think everybody has embraced it. And we develop open source software at all different layers. >> Robin, I'm curious how you look at public cloud.
So, you know, when I look at the public cloud, they do a lot with government agencies; they've got GovCloud. I've talked to companies that said, I could have built a supercomputer, here's how long it would take and what it would do, but I can spin one up in minutes and get what I need. Is that a possibility for something of yours? I understand maybe not the super high performance, but where does it fit in? >> Sure, yeah. I mean, certainly for a company that has no experience or no infrastructure. But we have invested a huge amount in our data center, and we have a ton of power and cooling and floor space. We have already made that investment, so trying to outsource that to the cloud doesn't make sense. There are definitely things cloud is great for. We are using GovCloud for things like prototyping, or when someone wants a server of some architecture that we don't have, the ability to just spin it up. You know, if we had to go and buy it, it would take six months, because we are the government. So being able to just spin that stuff up is really great for what we do. We use it for open source build and test. We use it at conferences, when we want to run a tutorial and spin up a bunch of instances of, you know, Linux, and run the tutorial. But the biggest thing is, at the end of the day, our most important workloads run in a classified environment, and we don't have the ability to run those workloads in the cloud. And so to do it on the open side and not be able to leverage it on the closed side really takes away some of the value, because we really want to make the two environments look as similar as possible, to leverage our staff and everything like that. So that's where cloud just doesn't quite fit in for us. >> You were talking about the speed of Sierra, and then also mentioning El Capitan, which is the next generation, your next, you know, unbelievably fast computer, to the extent of ten times the current speed, within the next four to five years. >> Right, that's the goal. >> I mean, what do those numbers mean? Because you put a pretty impressive array up there. >> Right. So Sierra is about 125 petaflops, and the big holy grail for high-performance computing is exascale, an exaflop of performance. And so El Capitan is targeted to be, you know, 1.2, maybe 1.5 exaflops or even more. Again, that's peak performance. It doesn't necessarily translate into what our applications can get out of the platform. But sometimes people ask, isn't it enough? Isn't 125 petaflops enough? It's never enough, because any time we get another platform, people figure out how to do things with it that they've never done before. Either they're solving problems faster than they could before, and so now they're able to explore a solution space much faster. Or they want to look at these simulations of three-dimensional space at a more fine-grained level. So again, with every computer we get, we can either push a workload through ten times faster, or we can look at a simulation that's ten times more resolved than the one we could do before. >> So do this for me and for folks at home: take the work that you do and translate why that exponential increase in speed will make you better at what you do, in terms of decision making and processing of information. >> Right.
So, yeah, the thing is, these nuclear weapons systems are very complicated. There's multi-physics, there are lots of different interactions going on, and we need to really understand them at the lowest level. One of the reasons that's so important now is that we're maintaining a stockpile that is well beyond the lifespan it was designed for. You know, these nuclear weapons, some of them were built in the fifties, the sixties and seventies. They weren't designed to last this long, right? And so now they're sort of out of their design regime, and we really have to understand their behavior and their properties as they age. So it opens up a whole other area that we have to be able to explore, and some of that physics has never been explored before. So the problems get more challenging the farther we get away from the design basis of these weapons. But also we're really starting to do new things like AI and machine learning, things that weren't part of our workflow before. We're starting to incorporate machine learning in with simulation, again to help explore a very large problem space and be able to find interesting areas within a simulation to focus in on. And so that's a really exciting area. And that is also an area where GPUs and such have just exploded the performance levels that people are seeing on these machines. >> Well, we thank you for your work. It is critically important, as we all realize, and wonderfully fascinating at the same time. So thanks for the insights here and for your time. We appreciate that. >> All right, thank you. >> Robin Goldstone joining us. Back with more here on theCUBE. You're watching our coverage live from Boston of Red Hat Summit 2019.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Sue Mittleman | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Robin Goldstone | PERSON | 0.99+ |
Robin | PERSON | 0.99+ |
John Walls | PERSON | 0.99+ |
ten times | QUANTITY | 0.99+ |
Cold War | EVENT | 0.99+ |
six months | QUANTITY | 0.99+ |
Boston, Massachusetts | LOCATION | 0.99+ |
HBC | ORGANIZATION | 0.99+ |
One | QUANTITY | 0.99+ |
Lennox | ORGANIZATION | 0.99+ |
El Capitan | TITLE | 0.99+ |
thirty thousand foot | QUANTITY | 0.98+ |
two environments | QUANTITY | 0.98+ |
one point | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
late nineteen nineties | DATE | 0.98+ |
Mexico | LOCATION | 0.98+ |
one hundred | QUANTITY | 0.98+ |
Harrier | PERSON | 0.98+ |
five years | QUANTITY | 0.98+ |
today | DATE | 0.97+ |
four | QUANTITY | 0.97+ |
first time | QUANTITY | 0.97+ |
Cray | ORGANIZATION | 0.97+ |
Red Hat | TITLE | 0.97+ |
Boston | LOCATION | 0.96+ |
early fifties | DATE | 0.96+ |
red hat | TITLE | 0.96+ |
twenty nineteen | QUANTITY | 0.96+ |
Sierra | LOCATION | 0.96+ |
first | QUANTITY | 0.95+ |
this morning | DATE | 0.93+ |
ten | QUANTITY | 0.93+ |
six | QUANTITY | 0.92+ |
one hundred twenty five flops | QUANTITY | 0.9+ |
sixties | DATE | 0.89+ |
one servers | QUANTITY | 0.88+ |
Itanium | ORGANIZATION | 0.87+ |
intel | ORGANIZATION | 0.86+ |
Of of Sierra | ORGANIZATION | 0.86+ |
First | QUANTITY | 0.83+ |
five | QUANTITY | 0.82+ |
Sierra | ORGANIZATION | 0.8+ |
Red Hat | ORGANIZATION | 0.8+ |
Red Hat Summit 2019 | EVENT | 0.79+ |
Roland | ORGANIZATION | 0.79+ |
Lawrence Livermore National Laboratory | ORGANIZATION | 0.79+ |
Red Hat Summit twenty | EVENT | 0.79+ |
two | QUANTITY | 0.78+ |
Keystone States | LOCATION | 0.78+ |
seventies | DATE | 0.78+ |
Red | ORGANIZATION | 0.76+ |
twenty five five | QUANTITY | 0.73+ |
early two thousand | DATE | 0.71+ |
Lawrence Livermore | LOCATION | 0.71+ |
Sierra | COMMERCIAL_ITEM | 0.69+ |
Erm | PERSON | 0.66+ |
Mohr | PERSON | 0.65+ |
supercomputer | QUANTITY | 0.64+ |
one hundred twenty five | QUANTITY | 0.62+ |
Path | OTHER | 0.59+ |
Band | OTHER | 0.58+ |
National Laboratory | ORGANIZATION | 0.55+ |
band | OTHER | 0.55+ |
Gove Cloud | TITLE | 0.54+ |
nineteen | QUANTITY | 0.53+ |
fifties | DATE | 0.52+ |
number | QUANTITY | 0.52+ |
Beta Wolf | OTHER | 0.52+ |
dimensional | QUANTITY | 0.49+ |
sixty | ORGANIZATION | 0.47+ |
six | COMMERCIAL_ITEM | 0.45+ |
American | PERSON | 0.43+ |
Sierra | TITLE | 0.42+ |
Ildiko Vancsa, OpenStack Foundation | OpenStack Summit 2018
>> Announcer: Live from Vancouver, Canada, it's theCUBE, covering OpenStack North America 2018. Brought to you by Red Hat, the OpenStack Foundation, and its ecosystem partners. >> Welcome back to theCUBE's coverage of OpenStack Summit 2018 in Vancouver. I'm Stu Miniman with my cohost for the week, John Troyer. Happy to welcome to the program first-time guest Ildiko Vancsa, coming off the edge keynote presentation this morning. She is the ecosystem technical lead with the Edge Computing Group as part of the OpenStack Foundation. Thanks so much for joining us. >> Thank you. >> Coming into this show, edge is one of those things that was actually pretty exciting to talk about, because edge is not only super hot, but when I thought back to previous shows (this is the sixth year we've had theCUBE here and my fifth year doing it), it's like, wait, I've been talking to all the telcos for years here. NFV was one of those use cases, and when you connect the dots, it's like, oh, edge, of course. I said this conference is actually hipster when it comes to edge: we were totally covering it well before we called it that. So, explain to us your role in the foundation and what led to the formation of this track. >> Yeah, so I'm the ecosystem technical lead within the foundation, which is basically a role that belongs under the business development team. So I'm basically building connections with our ecosystem members. I'm trying to help them succeed with OpenStack, both as a software package and as a community. We are embracing open source, of course, so I'm also trying to advocate for involvement in open source, because I think that's key. Like, you know, picking up an open source software component and using it, that's a great start, but if you really want to be successful with it, and you want to be able to successfully build it into your business model, then getting involved in the community, both enhancing and maintaining the software, that's really key. So my role is also onboarding companies to be active members of the community, and my focus is shifting toward edge computing. The history of edge computing in OpenStack basically started last May, when Beth Cohen from Verizon described their use case, which is OpenStack in a tiny box, in production. Wow. So that was also a little bit of an eye-opener for us as well: yes, it's telecom, it's 5G, but this is the thing that's called edge, and maybe this is something that we should also look deeper into. So we went to San Francisco last September for OpenDev: 200 people, architects, software developers trying to figure out what edge computing is. I think we had the question at every single session; someone asked, okay, yeah, so what did you mean exactly when you said edge? Because from the nature of the architecture, like, you have the central cloud and then the sites on the different-- >> John: There are several edges depending on how far you want to go. >> Exactly. >> For you and OpenStack, what does edge mean, or all of the above? >> With OpenStack, so after OpenDev, when we realized that it's not really a well-defined term, we wrote up a white paper. It's up on the OpenStack website. It's a short one, really just to set the ground for what edge computing is. And what we came up with is, don't imagine like a two-sentence definition for edge computing, because I still strongly believe that doesn't exist, and anyone who claims it, that's not true.
What we did with the white paper is basically we set characteristics and criteria that define cloud edge computing per se, like what people are talking about when you're moving the compute out and working closer to the edge: what that means from the bandwidth perspective, from how you will manage it, what that means for security, and all these sorts of things. And you can basically characterize what edge means. So we rather described these layers and how far we go, and as far as, like, the very end edge device and the IoT sensors, that's not a target of OpenStack. So OpenStack itself is infrastructure as a service, so our Edge Computing Group is still staying on that layer. The Edge Computing Group itself is focusing on the angles, what edge brings to the table, all these requirements, you know, collecting the use cases and trying to figure out what's missing, what we need to implement. >> If I can repeat it, and maybe I'll get it right or wrong: the idea is at a cell tower or at a remote office or branch office or some closet somewhere, there is a full set of OpenStack running, maybe a minimal set of OpenStack, but it's live, it's updatable. You can update services on it. You can update the actual OpenStack itself, and it doesn't need bespoke hardware necessarily, but it's now updatable and part of a bigger multi-cloud infrastructure from some sort of service entity or enterprise. >> Yeah. >> Is that fair? >> I think that's fair. So, there's OpenStack itself that people know very well, a lot of projects. So when we talk about edge, obviously we don't want to say, okay, pick the whole thing and install all the 60 projects, because that's really not suitable for edge. So what the group is looking into, for example, is which OpenStack components are essential for edge. And the group is also defining small edge and medium edge, what that means from a hardware footprint perspective, just to figure out what the opportunities are there, what will fit, what will not fit. OpenStack itself is very modular today, so you can pick up the services that you need. So what we discussed this week, for example, is Keystone: identity, you need it of course. So how much does that fit into the edge scenarios? And I think the main conclusion of the forum session yesterday was that, yeah, Keystone supports federation. We talked through the cases, and it seems like it's kind of there. So we now need a few people who will sit down, put together the environment, and start testing it, because that's when it comes out that, you know, it's almost there, but there are a few things to tweak. But basically the idea is what you described: pick up the component, put it there, and work with it. We also have another project called Cyborg, which is fairly new. That's for hardware acceleration, so it is providing a framework to plug in GPUs, FPGAs, and these sorts of a bit more specialized hardware, which will be really useful for edge use cases, into OpenStack. So that's for example something that China Mobile and the OPNFV Edge Cloud Group are looking into using, so I really hope that we will get there this year to test it in the OPNFV Pharos Labs in action. So we also have pretty great cross-community collaboration on trying to figure this whole thing out. >> Yeah, it often helps if we have examples to talk about to really explain this. Beth Cohen, we spoke with her last year and it absolutely caught our attention; got a lot of feedback from the community on it.
We had Kontron on earlier this week talking about, as John was saying, a small device there with a little blade that's running pieces of OpenStack. Anything from the keynote, or, boy, I think there are 40 sessions that you've got here. If you can, give us a couple of examples of some of the use cases that we're seeing, to kind of bring this edge to reality. >> Example use cases: we just heard this morning, for example, from someone in the textile industry, like how to detect issues with the fabric. So this is like one new manufacturing use case. I also heard another one, which is not checking the fabric itself, but basically the company who manufactures the machines that are used to create the fabric. They would like to have a central cloud and have it connected to the factories, so being able to monitor how the machines are doing, how they can improve those machines, and also, within the factory, to monitor all the conditions. Because for all the chemical processes it's really important that the temperature and everything else is just right, because otherwise all your fabric will have to go to the trash. So that's manufacturing. A lot of telecom and 5G, obviously; that is really, really heavy because that's the part of the industry which is there today, so with 5G, all those strict requirements. This is really what we are mainly focusing on today. We are not specializing anything for telecom and 5G use cases, but we want to make sure that all our components fit into that environment as well. In the white paper, for example, you could also see the retail use case. I'm not sure whether that will be exactly on stage this week, but that is also a great example, like Walmart with a lot of stores around, and how you manage those stores, because they're also not wanting to do everything centrally. So they would like to move the functionality out. What if the network connectivity is cut? They still have to be able to operate the store as if nothing happened. So there are a lot of segments of the industry who already have kind of really well-defined use cases, and what we see is that there's a lot of overlap between the requirements from the different segments that we're going to address. >> Are we seeing things like AI and ML coming up in these conversations also? >> Yes, I think it was the manufacturing use case where I heard that they are planning to use that, and it's popping up. I think as far as our group is concerned, we are more looking into, let's say, lower-level requirements, like how you maintain and operate the hundreds and thousands of edge sites, what happens with security, what happens with monitoring, what happens with all these sorts of things. Like, we have a new project rolling in under the foundation umbrella called Airship, which is basically deployment and lifecycle management, which is supposed to address one of the aspects that you were talking about: okay, so how you manage this, how you upgrade this. And upgrade is, again, a really interesting question, because I talked to someone yesterday, like the Kontron guys, and they were saying that, yeah, upgrade, it's really ambitious. Let's say that maybe every 18 or 24 months or so an operator will decide to upgrade something out at the edge, because it's out there, it's working, let's not touch it. So when we talk about upgrade, even that, I think, will depend on which bits of the industry we're talking about and what pace they decide to take.
>> Are there any particular surprises or learnings that you've had this year, after talking with this community for a week now? You said, well, last year I was very impressed when they got up on stage and talked about that; it kind of expanded my mind a little bit. You've been working with this now for a year, this whole track and the forum sessions. Anything you're excited about taking into the future, or learnings or surprises, like, oh, this is really going to work, or anything like that? Any parts of it that are really interesting? You talked about security, upgrades; we've talked about a lot of the technical components, but it seems like it's working. >> I think at this point, at least on my end, I'm over the surprise phase. So what surprises me the most is how many groups there are out there who are trying to figure out what this whole edge thing is. And what we really need to focus on, among the technical requirements, is how we work together with all these groups, just to make sure that the integration between the different things that we are all developing and working on is smooth. So, like, we've been working together with the OPNFV community for a while now. It's a really fruitful relationship between us. Like, seeing OpenStack being deployed in a full-stack environment and being tested, that's really priceless. And we are planning to do the same thing with edge as well, and we are also looking into ONAP, Akraino, ETSI MEC, so looking into the open source groups, looking into the standardization, and really just trying to ensure that when we talk about open infrastructure, it really is designed and developed in a way that integrates well with the other components and is synchronized with the standardization activities. Because I think, especially in the case of edge, when we say interoperability, that's a level higher than what we call interoperability at the telecom level. Like, when you just imagine one operator network, and applications from other providers popping up in that network, and the components that realize the network coming from different vendors. And this whole thing has to work together. So I think OpenStack and open infrastructure have a really big advantage there compared to any proprietary solution, because we have to address this really big, and really important, challenge. >> Ildiko, really appreciate you giving us all the updates here on the edge track and the keynote, definitely one of the areas that is capturing our attention and that of lots of people out there. So, thanks so much for joining us. >> Thank you for the opportunity. >> All right, for John Troyer, I'm Stu Miniman. Lots more coverage here from the OpenStack Summit 2018 in Vancouver. Thanks for watching theCUBE.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave | PERSON | 0.99+ |
David | PERSON | 0.99+ |
Michael | PERSON | 0.99+ |
Marc Lemire | PERSON | 0.99+ |
Chris O'Brien | PERSON | 0.99+ |
Verizon | ORGANIZATION | 0.99+ |
Hilary | PERSON | 0.99+ |
Mark | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Ildiko Vancsa | PERSON | 0.99+ |
John | PERSON | 0.99+ |
Alan Cohen | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
John Troyer | PERSON | 0.99+ |
Rajiv | PERSON | 0.99+ |
Europe | LOCATION | 0.99+ |
Stefan Renner | PERSON | 0.99+ |
Ildiko | PERSON | 0.99+ |
Mark Lohmeyer | PERSON | 0.99+ |
JJ Davis | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Beth | PERSON | 0.99+ |
Jon Bakke | PERSON | 0.99+ |
John Farrier | PERSON | 0.99+ |
Boeing | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Dave Nicholson | PERSON | 0.99+ |
Cassandra Garber | PERSON | 0.99+ |
Peter McKay | PERSON | 0.99+ |
Cisco | ORGANIZATION | 0.99+ |
Dave Brown | PERSON | 0.99+ |
Beth Cohen | PERSON | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
John Walls | PERSON | 0.99+ |
Seth Dobrin | PERSON | 0.99+ |
Seattle | LOCATION | 0.99+ |
5 | QUANTITY | 0.99+ |
Hal Varian | PERSON | 0.99+ |
JJ | PERSON | 0.99+ |
Jen Saavedra | PERSON | 0.99+ |
Michael Loomis | PERSON | 0.99+ |
Lisa | PERSON | 0.99+ |
Jon | PERSON | 0.99+ |
Rajiv Ramaswami | PERSON | 0.99+ |
Stefan | PERSON | 0.99+ |
Roland Cabana, Vault Systems | OpenStack Summit 2018
>> Announcer: Live from Vancouver, Canada, it's theCUBE, covering OpenStack Summit North America 2018. Brought to you by Red Hat, the OpenStack Foundation, and its ecosystem partners. >> Welcome back, I'm Stu Miniman with my cohost John Troyer, and you're watching theCUBE's coverage of OpenStack Summit 2018 here in Vancouver. Happy to welcome first-time guest Roland Cabana, who is a DevOps Manager at Vault Systems out of Australia, but you come from a little bit more local. Thanks for joining us, Roland. >> Thank you, thanks for having me. Yes, I was actually born and raised in Vancouver; I moved to Australia a couple of years ago. I realized the potential in Australian cloud providers, and I've been there ever since. >> Alright, so one of the big things we talk about here at OpenStack of course is, you know, do people really build clouds with this stuff, where does it fit, how is it doing; so a nice lead-in to what does Vault Systems do, for the people who aren't aware. >> Definitely. So yes, we do build a cloud, or many clouds, actually. Vault Systems provides cloud services, infrastructure as a service, to the Australian Government. We do that because we are a certified cloud. We are certified to handle unclassified DLM data and protected data. And what that means is that the sensitive information gathered about Australian citizens, and anything to do with big user-space data, is actually secured with certain controls set up by the Australian Government. The Australian Government body around this is called the ASD, the Australian Signals Directorate, and they release a document called the ISM. This document outlines 1,088-plus controls that dictate how a cloud should operate and how data should be handled inside of Australia. >> Just to step back for a second, I took a quick look at your website; it's not like you're listed as the government OpenStack cloud there. (Roland laughs) Could you give us, where does OpenStack fit into the overall discussion of the identity of the company, what your ultimate end-users think about how they're doing; help us kind of understand where this fits. >> Yeah, for sure. I mean, the journey started long ago when our CEO, Rupert Taylor-Price, set out to handle a lot of government information and tried to find a cloud provider that could handle it in the prescribed way that the Australian Signals Directorate needed it handled. So he went to different vendors, different cloud platforms, and found out that you couldn't actually meet all the controls in this document using a proprietary cloud, or a proprietary platform on top of your bare-metal hardware. So eventually he found OpenStack and saw that there was a great opportunity to massage the code and change it so that it would comply 100% with the Australian Signals Directorate. >> Alright, so the keynote this morning was talking about people that build and people that operate. You've got DevOps in your title; tell us a little about your role in working with OpenStack specifically, and the broader scope of your-- >> For sure, for sure. So in Vault Systems I'm the DevOps Manager, and what I do is we run through a lot of tests in terms of our infrastructure. So complying with those controls I mentioned earlier, going through the rigmarole of making sure that all the different services provided on our platform comply with those specific standards, the specific use cases. So as a DevOps Manager, I handle a lot of the pipelining in terms of where the code goes.
I handle a lot of the logistics and operations. And it actually extends beyond just operations and development; it extends into our policies. So marrying all that stuff together is pretty much my role day-to-day. I have a leg in the infrastructure team with the engineering, and I also have a leg in with sort of the solutions architects and how they get feedback from different customers, in terms of what we need and how we would architect it so it's safe and secure for government. >> Roland, since part of your remit is compliance, would you say that you're DevSecOps? Do you like that one or not? >> Well, I guess there are a few more buzzwords and a few more roles I can throw in there, but yeah, I guess yes. DevSecOps: there's a strong security posture that Vault holds, and we hold it to a higher standard than a lot of the other incumbents or a lot of platform providers, because we are actually very sensitive about how we handle this information for government. So security is a big portion of it, and I think the company culture internally is actually centered around how we handle security. A good example of this is, you know, internally we actually have controls about printing. Most modern companies today, they print pages, and, you know, it's an eco thing. It's an eco thing for us too, but at the same time there are controls around printed documents and how sensitive those things are. And so our position in the company is, if that control exists because the Australian Government decides that's a sensitive matter, let's adopt it in our entire internal ecosystem. >> There was a lot of talk this morning at the keynote both about upgrades, and I'm blanking on the name of the new feature, but also about Zuul and about upgrading OpenStack. You guys are a full upstream, OpenStack expert cloud provider. How do you deal with upgrades, and what do you think the state of the OpenStack community is in terms of kind of upgrades, and maintenance, and day two kind of stuff? >> Well, I'll tell you the truth, the upgrade path for OpenStack is actually quite difficult. I mean, there are a lot of moving parts, a lot of components where you have to be very specific in terms of how you upgrade to the next level. If you're not keeping in step with the next releases, you may fall behind and you can't upgrade, you know, Keystone from Liberty all the way up to Ocata, right? You're basically stuck there. And so what we do is we try to figure out what the government needs, what the required features are. And, you know, it's also a conversation piece with government, because if we don't have certain features in this particular release of OpenStack, it doesn't mean we're not going to support them. We're not going to move to the next version just because it's available, right? There's a lot of security involved in infusing our controls into our distribution of OpenStack, I guess you can call it a distribution, or our build of OpenStack. But it's all based on a conversation that we start with the government. So, you know, if they need vGPUs for some reason, right, with the Queens release that's coming out, that's a conversation we're starting, and we will build in that functionality as we need it. >> So does that mean that you have different entities with different versions, and if so, how do you manage all of that? >> Well, okay, so yes, that's true.
We do have different versions: we have a Liberty release and we have an Ocata release, which is predominant in our infrastructure. And that's only because we started with the inception of the Liberty release before our certification process. A lot of what we work with government on is how they progress through this cloud maturity model. And, you know, the forklift-and-shift is actually a problem when you're talking about releases. But when you're talking about containerization, you're talking about agile methodologies and things like that, there's less of a reliance on the version, because you now have the ability to respawn that same application, migrate the data, and keep everything live as you progress through different cloud platforms. And so, as OpenStack matures, there's this whole idea of fast-forwarding to the next release, because now they have an integration step, or a path to the next version even though you're two or three versions behind. Because, let's face it, most operators will not go to the latest and greatest, because there are a lot of issues you're going to face there. I mean, not that the software is bad, it's just that early adopters will come with early adopter problems. And, you know, you need that user base. You need those forum conversations to be able to be safe and secure about, you know, whether or not you can handle those kinds of things. And there's no need for our particular users' user space to have the latest and greatest unless there is an actual request. >> Roland, you are an IaaS provider. How are you handling containers, or requests for containers from your customers? >> Yes, containers is a big topic. There's a lot of maturity happening right now with government, in terms of what a container is, for example, what orchestration with containers is, how does my legacy application forklift-and-shift to a container? And so we're handling it in stages, right, because we're working with government on their maturity. We don't do container services on the platform, but what we do is we open-source a lot of code that allows people to deploy, let's say, a Terraform file that creates a Docker host, you know, and we give them examples. A good segue into that: what we just launched last week was our Vault Academy, through which we are now training 3,000 government public servants on new cloud technologies. We're not talking about how an OS works; we're talking about infrastructure as code, we're talking about Kubernetes. We're talking about all these cool, fun things, all the way up to functions as a service, right? And those kinds of capabilities are what's going to propel government in Australia moving forward into the future. >> You hit on one of my hot buttons here. So functions as a service: do you have serverless deployed in your environment, or is it an education at this point? >> It's an education at this point. Right now we have customers who would like to have that available as a native service in our cloud, but what we do is we concentrate on the controls and the infrastructure-as-a-service platform first and foremost, just to make sure that it's secure and compliant. Everyone has the ability to deploy functions as a service on their platform, or on their accounts, or on their tenancies, and have that available to them through a different set of APIs.
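(Purely as an illustration of the kind of self-service example Roland describes handing to agencies, such as a Terraform file that stands up a Docker host, here is a minimal sketch of the same idea expressed with the community OpenStack4j Java SDK instead of Terraform: authenticate to Keystone, then ask Nova to boot an instance. This is not Vault Systems' published code; the endpoint, credentials, flavor, image, and network IDs are all made-up placeholders.)

```java
// A minimal, hypothetical sketch of booting an instance on an OpenStack cloud
// from code, using the community OpenStack4j SDK. Every endpoint, credential,
// and ID below is a placeholder; this is not Vault Systems' published tooling.
import java.util.Arrays;

import org.openstack4j.api.Builders;
import org.openstack4j.api.OSClient.OSClientV3;
import org.openstack4j.model.common.Identifier;
import org.openstack4j.model.compute.Server;
import org.openstack4j.model.compute.ServerCreate;
import org.openstack4j.openstack.OSFactory;

public class BootInstanceSketch {
    public static void main(String[] args) {
        // Authenticate against Keystone v3 with a project-scoped token.
        OSClientV3 os = OSFactory.builderV3()
                .endpoint("https://keystone.example.gov.au:5000/v3")
                .credentials("demo-user", "demo-password", Identifier.ofName("Default"))
                .scopeToProject(Identifier.ofName("agency-project"), Identifier.ofName("Default"))
                .authenticate();

        // Describe the server to create; flavor, image, and network are placeholder IDs.
        ServerCreate request = Builders.server()
                .name("docker-host-01")
                .flavor("placeholder-flavor-id")
                .image("placeholder-image-id")
                .networks(Arrays.asList("placeholder-network-id"))
                .build();

        // Ask Nova to boot the instance.
        Server server = os.compute().servers().boot(request);
        System.out.println("Requested server with id: " + server.getId());
    }
}
```

(In practice, a job like this would sit behind the gated, code-reviewed pipeline Roland describes rather than being run by hand.)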
>> Great. There are a whole bunch of open-source versions out there. Is that what they're looking at? Do you have any preference toward OpenWhisk, or Fn, or, you know, Fission, all the different versions that are out there? >> I guess, you know, you can sort of pick your racehorse in that regard, because it's still early days, and I think it's still open for us. It's pretty much what I've been looking at recently, and it's just at a discovery stage at this point. There are more mature customers who are coming in, and some partners who are championing different technologies, so the great thing is that we can make sure our platform is secure and they can build on top of it. >> So you brought up security again. One of the areas I wanted to poke at a little bit is your network. It being an IaaS provider, networking's critical. What are you doing from a networking standpoint; is micro-segmentation part of your environment? >> Definitely. So natively, the functions that we build in our cloud are all around security, obviously. Micro-segmentation is a big part of that, and training people in terms of how micro-segmentation works from a forklift-and-shift perspective. And the network connectivity we have with the government is also a part of this whole model, right? And so we use technologies like Mellanox, a 400G fabric. We're BGP internally, so we're routing through the host, or routing to the host, and we have this... Well, in Australia there's a service from the Department of Finance; they created this idea of the ICON network. And what it is, is actually direct fiber from the department directly to us. And that means directly to the edge of our cloud, and it pipes right through into their tenancy. So essentially what happens is, this is true, true hybrid cloud. I'm not talking about going through gateways and stuff; I'm talking about I spin up an instance in the Vault cloud, and I can ping it from my desktop in my agency. Low latency, sub-millisecond, direct fiber link, up to 100G. >> Do you have certain programmability you're doing in your network? I know lots of service providers want to play and get in there; they're using, you know, new operating models. >> Yes, I mean, we're using the... I draw a blank. There are a lot of technologies we're using for the network, and the Cumulus networking OS is what we're using. That allows us to bring it in to our automation team, and actually use more of a DevOps tool to create the deployment from a code perspective, instead of having a lot of engineers hardcoding things right on the actual production systems. Which allows us to gate a lot of the changes, which is part of the security posture as well. So we do a lot of network offloading on the ConnectX-5 cards in the data center, we're using Cumulus for bridging, and we're working with Neutron to make sure that we have Neutron routers and that that's secure and code reviewed. And, you know, there are a lot of moving parts there as well, but I think from a security standpoint and from a network functionality standpoint, we've come to a happy place in terms of providing the fastest network possible, and also as secure and safe a network as possible.
>> Yeah, so I mean we have a lot of talented people in our company who actually OpenStack as a passion, right? This is what they do, this is what they love. They've come from different companies who worked in OpenStack and have contributed a lot actually, to the community. And actually that segues into how we operate inside culturally in our company. Because if we do work with Upstream code, and it doesn't have anything to do with the security compliance of the Australian Signals Directorate in general, we'd like to Upstream that as much as possible and contribute back the code where it seems fit. Obviously, there's vendor mixes and things we have internally, and that's with the Mellanox and Cumulus stuff, but anything else beyond that is usually contributed up. Our team's actually very supportive of each other, we have network specialists, we have storage specialists. And it's a culture of learning, so there's a lot of synchronizations, a lot of synergies inside the company. And I think that's part to do with the people who make up Vault Systems, and that whole camaraderie is actually propagated through our technology as well. >> One of the big themes of the show this year has been broadening out of what's happening. We talked a little bit about containers already, Edge Computing is a big topic here. Either Edge, or some other areas, what are you looking for next from this ecosystem, or new areas that Vault is looking at poking at? >> Well, I mean, a lot of the exciting things for me personally, I guess, I can't talk to Vault in general, but, 'cause there's a lot of engineers who have their own opinions of what they like to see, but with the Queens release with the VGPUs, something I'd like, that all's great, a long-term release cycle with the OpenStack foundation would be great, or the OpenStack platform would be great. And that's just to keep in step with the next releases to make sure that we have the continuity, even though we're missing one release, there's a jump point. >> Can you actually put a point on that, what that means for you. We talked to Mark Collier a little bit about it this morning but what you're looking and why that's important. >> Well, it comes down to user acceptance, right? So, I mean, let's say you have a new feature or a new project that's integrated through OpenStack. And, you know, some people find out that there's these new functions that are available. There's a lot of testing behind-the-scenes that has to happen before that can be vetted and exposed as part of our infrastructure as a service platform. And so, by the time that you get to the point where you have all the checks and balances, and marrying that next to the Australian controls that we have it's one year, two years, or you know, however it might be. And you know by that time we're at the night of the release and so, you know, you do all that work, you want to make sure that you're not doing that work and refactoring it for the next release when you're ready to go live. And so, having that long-term release is actually what I'm really keen about. Having that point of, that jump point to the latest and greatest. >> Well Roland, I think that's a great point. You know, it used to be we were on the 18 month cycle, OpenStack was more like a six month cycle, so I absolutely understand why this is important that I don't want to be tied to a release when I want to get a new function. >> John: That's right. 
>> Roland Cabana, thank you for the insight into Vault Systems, and congrats on all the progress you have made. So for John Troyer, I'm Stu Miniman. Back here with lots more coverage from the OpenStack Summit 2018 in Vancouver. Thanks for watching theCUBE. (upbeat music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Australia | LOCATION | 0.99+ |
Vancouver | LOCATION | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
John Troyer | PERSON | 0.99+ |
OpenStack | ORGANIZATION | 0.99+ |
one year | QUANTITY | 0.99+ |
Roland Cabana | PERSON | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
Mark Collier | PERSON | 0.99+ |
100% | QUANTITY | 0.99+ |
Roland | PERSON | 0.99+ |
John | PERSON | 0.99+ |
Vault Systems | ORGANIZATION | 0.99+ |
Alcatel | ORGANIZATION | 0.99+ |
Australian Signals Directorate | ORGANIZATION | 0.99+ |
Rupert Taylor-Price | PERSON | 0.99+ |
Department of Finance | ORGANIZATION | 0.99+ |
18 month | QUANTITY | 0.99+ |
six month | QUANTITY | 0.99+ |
ASD | ORGANIZATION | 0.99+ |
two years | QUANTITY | 0.99+ |
Neutron | ORGANIZATION | 0.99+ |
last week | DATE | 0.99+ |
Mellanox | ORGANIZATION | 0.99+ |
two | QUANTITY | 0.99+ |
Australian Government | ORGANIZATION | 0.99+ |
OpenStack | TITLE | 0.99+ |
Vancouver, Canada | LOCATION | 0.99+ |
Cumulus | ORGANIZATION | 0.99+ |
1,088 plus controls | QUANTITY | 0.99+ |
OpenStack Summit 2018 | EVENT | 0.99+ |
first-time | QUANTITY | 0.98+ |
Vault Academy | ORGANIZATION | 0.98+ |
one | QUANTITY | 0.97+ |
this year | DATE | 0.97+ |
Vault | ORGANIZATION | 0.97+ |
both | QUANTITY | 0.96+ |
One | QUANTITY | 0.96+ |
Liberty | TITLE | 0.96+ |
three versions | QUANTITY | 0.96+ |
Kubernetes | TITLE | 0.96+ |
theCUBE | ORGANIZATION | 0.95+ |
Zuul | ORGANIZATION | 0.95+ |
one release | QUANTITY | 0.95+ |
DevSecOps | TITLE | 0.93+ |
up to 100g | QUANTITY | 0.93+ |
today | DATE | 0.93+ |
OpenStack Summit North America 2018 | EVENT | 0.91+ |
ConnectX-5 cards | COMMERCIAL_ITEM | 0.9+ |
3,000 government public servants | QUANTITY | 0.9+ |
ISM | ORGANIZATION | 0.9+ |
Upstream | ORGANIZATION | 0.9+ |
this morning | DATE | 0.89+ |
Agile Methodologies | TITLE | 0.88+ |
a second | QUANTITY | 0.87+ |
Queens | ORGANIZATION | 0.87+ |
couple years ago | DATE | 0.87+ |
DevOps | TITLE | 0.86+ |
day two | QUANTITY | 0.86+ |
Liberty | ORGANIZATION | 0.85+ |
Steven Wu, Netflix | Flink Forward 2018
>> Narrator: Live from San Francisco, it's theCube, covering Flink Forward, brought to you by Data Artisans. >> Hi, this is George Gilbert. We're back at Flink Forward, the Flink conference sponsored by Data Artisans, the company that commercializes Apache Flink and provides additional application management platforms that make it easy to take stream processing at scale for commercial organizations. We have Steven Wu from Netflix, always a company that is pushing the edge of what's possible, and one of the early Flink users. Steven, welcome. >> Thank you. >> And tell us a little about the use case that was first, you know, applied to Flink. >> Sure, our first-use case is a routing job for Keystone data pipeline. Keystone data pipeline process over three trillion events per day, so we have a thousand routing jobs that we do some simple filter projection, but the Solr routing job is a challenge for us and we recently migrated our routing job to Apache Flink. >> And so is the function of a routing job, is it like an ETL pipeline? >> Not exactly ETL pipeline, but more like it's a data pipeline to deliver data from the producers to the data syncs where people can consume those data like array search, Kafka or higher. >> Oh, so almost like the source and sync with a hub in the middle? >> Yes, that is exactly- >> Okay. >> That's the one with our big use case. And the other thing is our data engineer, they also need some stream processing today to do data analytics, so their job can be stateless or it can be stateful if it's a stateful job it can be as big as a terabyte of base state for a single job. >> So tell me what these stateful jobs, what are some of the things that you use state for? >> So, for example like a session of user activity, like if you have clicked the video on the online URI all those activity, they would need to be sessionalized window, for the windows, sessionalized, yeah those are the states, typical. >> OK, and what sort of calculations might you be doing? And which of the Flink APIs are you using? >> So, right now we're using the data stream API, so a little bit low level, we haven't used the Flink SQL yet but it's in our road map, yeah. >> OK, so what is the data stream, you know, down closer to the metal, what does that give you control over, right now, that is attractive? And will you have as much control with the SQL API? >> OK, yes, so the low level data stream API can give you the full feature set of everything. High level SQL is much easier to use, but obviously you have, the feature set is more limited. Yeah, so that's a trade-off there. >> So, tell me about, for a stateful application, is there sort of scaffolding about managing this distributed cluster that you had to build that you see coming down the pipe from Flink and Data Artisans that might make it easier, either for you or for mainstream customers? >> Sure, I think internal state management, I think that is where Flink really shines compared to other stream processing engine. So they do a lot with work underneath already. I think the main thing we need from Flink for the future, near future is regarding the job recovery performance. But like a state management API is very mature. Flink is, I think it's more mature than most of the other stream processing engines. >> Meaning like Kafka, Spark. 
So, on the state management, can a business user or business analyst issue a SQL query across the cluster, and Flink figures out how to manage the distribution of the query and the filtering and presentation of the results transparently across the cluster?
>> I'm not an expert on Flink SQL, but I think yes, essentially Flink SQL will convert to a Flink job which will be using the DataStream API, so they will manage the state, yes, but,
>> So, when you're using the lower-level DataStream API, you have to manage the distributed state and sort of the retrieving and filtering, but that's something that, at a higher level of abstraction, hopefully that'll be,
>> No, I think that in either case the state management is handled by Flink.
>> Okay.
>> Yeah.
>> Distributed.
>> All the state management, yes.
>> Even if it's querying at the DataStream level?
>> Yeah, but if you query at the SQL level, you won't be able to deal with those state APIs directly. You can still do windowing; let's say you have a SQL app doing windows with sessions split by idle time. That gets translated into a Flink job, and Flink will manage those windows and that session state, so either way you do not need to worry about state management. Apache Flink takes care of it.
>> So tell me, for some of the other products you might have looked at, is the issue that if they have a clean separation from the storage layer for large-scale state management, you know, as opposed to in memory, is it that the large scale is almost treated like a second tier, and therefore you almost have a separate set or a restricted set of operations at the distributed state level versus at the compute level? Would that be a limitation of other streaming processors?
>> No, I don't see that. I think different stream engines have taken different approaches. You find, like, Google Cloud Dataflow, they are thinking about using Bigtable, for example. But those are external state management. Flink decided to take the approach of embedded state management inside of Flink.
>> And when it's external, what's the trade-off?
>> That's a good question. I think if it's external, the latency may be higher and your throughput might be a little lower, because you're going over the network. But the benefit of that external state management is that your job becomes stateless. Your job makes recovery much faster after a job failure, so there are trade-offs either way.
>> OK.
>> Yes.
>> OK, got it. Alright, Steven, we're going to have to end it on that, but that was most enlightening, and thanks for joining.
>> Sure, thank you.
>> This is George Gilbert, for Wikibon and theCube, we're again at Flink Forward in San Francisco with Data Artisans, we'll be back after a short break. (techno music)
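To ground the sessionization and embedded state management discussed above, here is a minimal DataStream sketch that groups click events per user into session windows closed after 30 minutes of inactivity and counts them. The window contents are keyed state that Flink itself checkpoints and restores; the job code never manages that state directly. The event shape and names are hypothetical, and the bounded toy source is only there to keep the example self-contained; a real job would read an unbounded stream such as Kafka.

```java
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.ProcessingTimeSessionWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

public class SessionizeUserActivity {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Toy bounded source of (userId, 1) click records; a production job would
        // consume an unbounded click stream from Kafka instead.
        DataStream<Tuple2<String, Long>> clicks = env.fromElements(
                Tuple2.of("user-1", 1L),
                Tuple2.of("user-2", 1L),
                Tuple2.of("user-1", 1L));

        // Group clicks per user and close each session after 30 minutes of inactivity.
        // The per-user window contents are keyed state managed entirely by Flink:
        // it is checkpointed and restored on failure without any code here touching it.
        DataStream<Tuple2<String, Long>> clicksPerSession = clicks
                .keyBy(click -> click.f0)
                .window(ProcessingTimeSessionWindows.withGap(Time.minutes(30)))
                .sum(1);

        clicksPerSession.print();

        // Note: because the toy source is bounded, the job may finish before the
        // session gap elapses; with a real unbounded source the windows fire as
        // sessions go idle.
        env.execute("sessionize-user-activity");
    }
}
```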
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
George Gilbert | PERSON | 0.99+ |
Steven | PERSON | 0.99+ |
Steven Wu | PERSON | 0.99+ |
SQL | TITLE | 0.99+ |
Data Artisans | ORGANIZATION | 0.99+ |
San Francisco | LOCATION | 0.99+ |
Netflix | ORGANIZATION | 0.99+ |
Flink | ORGANIZATION | 0.99+ |
first | QUANTITY | 0.99+ |
Flint | ORGANIZATION | 0.98+ |
Kafka | TITLE | 0.98+ |
Flink Forward | ORGANIZATION | 0.98+ |
Spark | TITLE | 0.97+ |
second tier | QUANTITY | 0.97+ |
Wikibon | ORGANIZATION | 0.97+ |
today | DATE | 0.95+ |
over three trillion events per day | QUANTITY | 0.93+ |
Keystone | ORGANIZATION | 0.92+ |
single job | QUANTITY | 0.92+ |
Flint | PERSON | 0.91+ |
Flink SQL | TITLE | 0.91+ |
first-use case | QUANTITY | 0.86+ |
one | QUANTITY | 0.86+ |
Apache Flink | ORGANIZATION | 0.84+ |
theCube | ORGANIZATION | 0.82+ |
2018 | DATE | 0.81+ |
Forward | TITLE | 0.8+ |
SQL API | TITLE | 0.8+ |
Flink | TITLE | 0.79+ |
a thousand routing jobs | QUANTITY | 0.77+ |
Flink | EVENT | 0.77+ |
Flink Forward | EVENT | 0.73+ |
terabyte | QUANTITY | 0.71+ |
ORGANIZATION | 0.65+ | |
Cloud | TITLE | 0.48+ |
Forward | EVENT | 0.39+ |
Randy Bias, Juniper - OpenStack Summit 2017 - #OpenStackSummit - #theCUBE
>> Voiceover: Live from Boston, Massachusetts, it's the Cube, covering OpenStack Summit 2017. Brought to you by the OpenStack Foundation, Red Hat, and additional ecosystem support.
>> Welcome back, I'm Stu Miniman joined by John Troyer. This is SiliconANGLE Media's production of the Cube at OpenStack Summit. We're the worldwide leader in tech coverage, live tech coverage. Happy to welcome back to the program someone we've had on so many times we can't keep track. He is the creator of the term Pets versus Cattle, he is one of the OGs of The Cloud Group, Randy, you know, wrote about everything before most of it was done. So good to see you, thank you for joining us.
>> Thanks for having me.
>> Alright, so Randy, coming into this show we felt that it was a bit of resetting expectations, people not understanding, you know, where infrastructure's going, a whole hybrid multi-cloud world, so, I mean you've told us all how it's going to go, so where are we today, what have people been getting wrong, what's your take coming into this week and what you've seen?
>> Well, I've said it before, which is that the public clouds have done more than just deliver compute, storage, and networking on demand. What they've really done is they've built these massive development organizations. They're very sophisticated, they really come from that webscale background, and they move at a velocity that's really different than anything we've seen before. I think the hope in the early days of OpenStack was that we would achieve a similar kind of velocity and momentum, but I think the reality is that it just hasn't really materialized; while there are a lot of projects and there are a lot of contributors, the coordination between them is very poor, and the architectural oversight that we really needed just isn't there. A couple years ago at OpenStack Silicon Valley I gave a presentation called The Lie of the Benevolent Dictator, and I charted a course for how we could actually have more of a technical architecture oversight, and that really fell on deaf ears. And so we continue to do the same thing and expect different results, and that's a little disappointing for me.
>> Yeah. So what is your view of hybrid cloud? You know, no disagreement, you look at what the public cloud companies, especially the big three, the development that they can do, Amazon, a thousand new features a year, Google, what they can do with data, Microsoft has a whole lot of applications and communities around them. We're mostly talking about private cloud here, it was a term that you fought against for many years, we've had great debates on it, so how does that hybrid play out? Cause customers, they're keeping on premises. Edge fits into a lot of this too, so there's not one winner, it's not a zero-sum game, but how does that hybrid cloud work?
>> Yeah so, I didn't fight against private cloud, I qualified it. I said if it's going to be a private cloud it's got to be built and look and smell the way that the public cloud was. Alright? If it's just VMware with VMs on demand, that's not a private cloud. That was my position. And then in terms of hybrid cloud, you know, I don't think we're there yet. I've presented on this at many different OpenStacks, you can see it in the past, and I sort of laid out what needs to happen, and that didn't happen. But I think there's hope, and I think the hope comes in the form of Kubernetes, and to a certain degree, Helm.
And the reason that Kubernetes with Helm is very powerful is that Kubernetes gives us a compute abstraction, so that you don't care if you're on the public cloud, or you know OpenStack or VMware or whatever, and then what Helm gives us is charts, so ways to deploy services, not just software. And so what we could think about doing in the future is building hybrid cloud based off of Kubernetes and Helm.
>> Yeah, so Randy, since last time we talked you've got a new role, you're now with Juniper. Juniper had done a Contrail acquisition. You know, quite a few years back you wrote a good blueprint on one of the Juniper forums about the OpenContrail community. So tell us a little bit about your role, your goals, in that community.
>> So OpenContrail has been primarily a Juniper initiative, and we're going to press the reset button on the OpenContrail community. I'm going to do it tonight and call for people to sort of get involved in doing that reset, and when I say reset I mean wipe the operating system, reload it from scratch, and do it really as a community, not just as a Juniper-run initiative. People inside Juniper are very excited about this, and what we're trying to do is, we believe that the path forward for OpenContrail is ubiquitous adoption. So rather than playing for just the pieces that we have, which we've done a great job of, we want to take the world's best SDN controller and we want to make sure everybody uses it, because we think in aggregate that's good for not only the entire community but also Juniper.
>> So, love the idea of kind of rebooting the community in the open, right, because you have to be transparent about these sorts of things.
>> Randy: Yeah, that's right.
>> What are the community segments that you would like to see join you here in OpenContrail? What kind of users, what kind of companies would you like to see come into the tent?
>> Well, anybody's welcome, but we want to start with all of our key stakeholders that exist today. So the first one, and arguably one of the most important, is our competitors, right, so we're hoping to have Mirantis at the table, maybe Ericsson, Huawei, anybody. Cisco, hey, come join the party. Second is that we have done really well in SaaS and in gaming, and we'd like to see all of those companies come to the table as well, Workday, Symantec, and so on. The third segment is enterprises; we've done well in financial services, we think that's a really important segment because they're typically the leading edge of enterprises. And the fourth is the carriers, obviously incredibly important for Juniper, folks like AT&T, Deutsche Telekom, all those companies we'd love to see come to the table. And then that's really the primary focus, and then anybody else who wants to show up, anybody who wants to develop in Contrail in the future, we'd love to have there.
>> Well, with open source communities, right, there's always a balance of the contributors and developers versus operators, and we can use the word contributors in a lot of roles. Some open source communities, much more developer focused,
>> Randy: That's right.
>> Others more operator focused, where do you see this OpenContrail community starting out?
>> So where it's been historically is more of our end users and operators.
>> I think that's interesting and an interesting twist because I think sometimes open source communities get stuck with just the people who can contribute code, and I'm from an operator community myself,
>> Randy: Right.
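As a concrete illustration of that compute abstraction, the sketch below creates one and the same Deployment on whatever cluster the active kubeconfig context points at, whether that is a managed Kubernetes service in a public cloud or a cluster running on OpenStack or VMware on premises. It is a minimal sketch that assumes the Fabric8 Kubernetes Java client (the exact builder and create calls vary by client version); the names, labels, and image are hypothetical, and a Helm chart would package the same definition as a versioned, reusable release.

```java
import io.fabric8.kubernetes.api.model.apps.Deployment;
import io.fabric8.kubernetes.api.model.apps.DeploymentBuilder;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;

public class DeployAnywhereSketch {

    public static void main(String[] args) {
        // The client picks up the active kubeconfig context; pointing that context at a
        // public cloud cluster or an on-premises one requires no change to this code.
        try (KubernetesClient client = new KubernetesClientBuilder().build()) {

            Deployment deployment = new DeploymentBuilder()
                    .withNewMetadata().withName("demo-web").endMetadata()
                    .withNewSpec()
                        .withReplicas(3)
                        .withNewSelector().addToMatchLabels("app", "demo-web").endSelector()
                        .withNewTemplate()
                            .withNewMetadata().addToLabels("app", "demo-web").endMetadata()
                            .withNewSpec()
                                .addNewContainer()
                                    .withName("web")
                                    .withImage("nginx:1.25") // hypothetical image
                                    .addNewPort().withContainerPort(80).endPort()
                                .endContainer()
                            .endSpec()
                        .endTemplate()
                    .endSpec()
                    .build();

            // Apply the same declarative definition regardless of which cloud hosts the cluster.
            client.apps().deployments().inNamespace("default").createOrReplace(deployment);
        }
    }
}
```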
>> So I think that's really interesting.
>> We still want all those people, but I think what has happened is that when people have come in and they wanted to be more sort of on the developer side, the community hasn't been friendly to them.
>> John: Okay.
>> Randy: And so we want, that's a key thing that we want to change. You know, when we were talking to certain carriers, they came and they said look, it's great you're going to do this, we want to be a part of it, and one of the things we'd like to contribute is more advanced testing around VNFs. And I just look at that and I'm just like, that's what we need, right? Juniper can't carry all the water on having, you know, sophisticated test suites for VNFs and more advanced networking use cases, but the carriers are deep into this and we'd love to have them come and bring that. So not just developers, but also QA, people who want to increase the code quality, the architectural quality, and the aggregate value of OpenContrail.
>> Okay, Randy, can you help place OpenContrail where it fits in this kind of networking spectrum? Especially, there's open source things, we've talked about VPP a couple times on theCube here. The joke for many years was SDN still does nothing, NFV solutions have grown, have been a huge use case, and really where the early money for big deployments has been for OpenStack. Where does OpenContrail fit, where does it kind of compare and contrast against some of the other options out there?
>> I'm going to answer that slightly differently. I've been skeptical about SDN overlays for a long time, and now I am helping with one of the world's best SDN overlays, and what's changed for me is that in the last year I've seen key customers of Contrail's, of Juniper's, actually do something very interesting, right. You've got an SDN overlay, it's complex, it's hard, and you've got to wonder, why should I do this? Well, I thought the same thing about virtualization, right, until I figured out sort of what was the killer app. And what we've seen is a company, one of our customers, and several others, but one in particular I can talk about publicly, Riot Games, take containers and OpenContrail and marry them so that you have an abstraction around compute and an abstraction around networking, so that their developers can write to that, and they don't care whether that's running on top of public cloud, private cloud, or in some partner's data center globally. And in fact they're going to talk about that today at OpenContrail Days at 3:30, and are going to present a lot more details, and that's amazing to me, because by abstracting away and disintermediating the public clouds, you actually have more power, right. You can build your own framework. And if you're using Kubernetes as a baseline you can do a lot more on top of that compute and network abstraction.
>> You talked about OpenContrail Days, again my first summit, I've actually been impressed by the foundation, acknowledging there's a huge landscape of open source and other technologies around there, OpenStack itself doesn't invent everything. Can you talk a little bit about that kind of attitude of bringing, I mean we talk about Kubernetes and that sort of thing, but all the other CNCF projects, monitoring, even components like etcd, right, we're talking about here at this conference. So, can you talk a little bit about how OpenStack can interact with the rest of the open source and cloud native at-large community?
>> That's sort of a tough question, John.
>> John: Okay.
>> I mean, the reason I say that is, like, the origins of OpenStack are very much NIH, and there has been a very disturbing tendency to sort of reinvent the wheel. A great example is Keystone; still to this day I don't know why Keystone exists and why we created a whole new authentication standard when there were dozens and dozens of battle-tested, battle-hardened protocols and bits of code that existed prior. It's great that we're getting a little bit better at that, but I still sense that the origins of the community and some of the technical leadership have resistance to organizing and working with outside components and playing nice. So, it's better, but it's not great, it's not where it should be. Really OpenStack needs to be broken down into a lot of different projects that can compete with each other and all run in parallel without having to be so tightly wound together. It's still disappointing to me that we aren't doing that today.
>> Randy, wonder if you could give us a little bit of a personal reflection. You've been involved in cloud many years, we've talked about some of the state of it, where do you think enterprises are when they think about their IT, how IT relates to business, some of the big challenges they're facing, and kind of this rapid pace of change that's happening in our industry right now?
>> Yeah, well, the pressures just increase. The need to pick up speed and to move faster and to have a greater velocity, that's not going away; that seems to be an incredible macro-trend that's just going to keep driving people towards the next event. But what I see is that the tension between the infrastructure IT teams and the line of business hasn't really started to get resolved. You see a lot of enterprises backing into using DevOps as a way to try to fix the culture change problems, but it's just not happening fast enough. I have a lot of concerns that basically private cloud, or private infrastructure for enterprises, will just not materialize in the way it needs to for the next generation, and that the line of business will continue to just keep moving to public cloud. All the while, all the money that's being reinvested in the public cloud is increasing their capabilities in terms of feature sets and security capabilities and so on. I just don't see the materialization of private cloud happening very well at this point in time, and I don't see any trendlines that tell me it's going to change.
>> Yeah, what recommendations do you give today to the OpenStack Foundation? I know that you haven't been shy in the past about giving guidance as to the direction. What do you think needs to happen to be able to help customers along that journey that they need?
>> I don't give any guidance to the OpenStack Foundation anymore, I'm not on the Board of Directors, and frankly I gave a lot of advice in the past that fell on deaf ears, and people were unwilling to make the changes that were necessary, I think, to create success. And even though I was eventually proven right, there doesn't seem to be an appetite for change. I would say that the hard partition between the Board of Directors and the Technical Committee that was created at the outset with the founding of the Foundation has led to a big problem, which is that there are business concerns that are technical concerns, and there are technical concerns which are business concerns, and the actual structure of the Foundation does not allow that to be addressed because of that hard partition between them.
So people on the Board of Directors can't actually tell the TC that they'd like to see certain technical changes because they're business concerns, and the Technical Committee can't tell the Board of Directors they'd like to see business changes made because there are technical concerns around them. And I think it's fundamentally broken until the bylaws are fixed.
>> So Randy, beyond what we've talked about already, what's exciting you these days? You look at, like, the serverless trend, is that something that you find intriguing, or maybe you have a contrary view on it? What's exciting you these days?
>> Serverless is really interesting. In fact I'd like to see serverless at the edge. I think it would be fascinating if Amazon Web Services could sell a serverless capability that was actually running at the mobile carriers' edge, so like on the mobile towers or in central offices. But you could do distributed computation for IoT literally at the very edge of the network; that would be incredibly powerful. So I am very interested in serverless in that regard. With Kubernetes, I think that this is the future; I think I've seen most of the other initiatives start to fail at this point. Docker Incorporated just hasn't made the progress they need to, and hopefully a change in leadership will fix that. But it does mean that more and more people are gravitating towards Kubernetes, and that's a good thing, because whereas OpenStack historically has no opinion, Kubernetes is a much more prescriptive model, and I think that actually leads to faster innovation and a greater pace of change. Combined with Helm charts, I think that we're going to see an ecosystem develop around Kubernetes that actually could be a counterweight to the public clouds and really be sort of cloud agnostic. Private, public, at the edge, who cares?
>> Randy Bias, always appreciated your very opinionated viewpoints on everything that's happening here. Pleasure to catch up with you as always. John and I will be back with lots more coverage here from OpenStack Summit in Boston, thanks for watching the Cube.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Randy | PERSON | 0.99+ |
John | PERSON | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
John Troyer | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
AT&T | ORGANIZATION | 0.99+ |
Huawei | ORGANIZATION | 0.99+ |
Juniper | ORGANIZATION | 0.99+ |
Direction Telecom | ORGANIZATION | 0.99+ |
OpenStack Foundation | ORGANIZATION | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
OpenStack Foundation | ORGANIZATION | 0.99+ |
Randy Bias | PERSON | 0.99+ |
Ericcson | ORGANIZATION | 0.99+ |
Symantech | ORGANIZATION | 0.99+ |
Boston | LOCATION | 0.99+ |
Cisco | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
NIH | ORGANIZATION | 0.99+ |
The Lie of the Benevolent Dictator | TITLE | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Docker Incorporated | ORGANIZATION | 0.99+ |
Second | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
last year | DATE | 0.99+ |
Boston, Massachusetts | LOCATION | 0.99+ |
OpenStack Summit | EVENT | 0.99+ |
fourth | QUANTITY | 0.99+ |
Kubernetes | TITLE | 0.98+ |
third segment | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
Silken Angle Media | ORGANIZATION | 0.98+ |
OpenContrail | ORGANIZATION | 0.98+ |
Keystone | ORGANIZATION | 0.98+ |
one winner | QUANTITY | 0.98+ |
OpenStack Summit 2017 | EVENT | 0.98+ |
tonight | DATE | 0.97+ |
#OpenStackSummit | EVENT | 0.97+ |
this week | DATE | 0.97+ |
first one | QUANTITY | 0.97+ |
Pets versus Cattle | TITLE | 0.96+ |
OpenContrail | TITLE | 0.96+ |
Openstack | ORGANIZATION | 0.96+ |
first summit | QUANTITY | 0.94+ |
Workday | ORGANIZATION | 0.93+ |
Contrail | ORGANIZATION | 0.93+ |
Mirantis | ORGANIZATION | 0.93+ |
3:30 | DATE | 0.9+ |
The Cloud Group | ORGANIZATION | 0.89+ |
of | ORGANIZATION | 0.89+ |
Helm | ORGANIZATION | 0.89+ |
OpenStack | TITLE | 0.88+ |
OpenStack foundation | ORGANIZATION | 0.87+ |
Juniper | PERSON | 0.87+ |
OpenStack | ORGANIZATION | 0.86+ |
Patrick Stonelake & Marc Talluto, Fruition Partners, A DXC Technology Company - #Know17
>> Announcer: Live from Orlando, Florida, it's the Cube covering Servicenow Knowledge 17. Brought to you by Servicenow. (electronic music)
>> Welcome back to Orlando, everybody. This is the Cube, the leader in live tech coverage. I'm Dave Vellante with my cohost Jeff Frick. Marc Talluto is here with Patrick Stonelake, cofounders of Fruition Partners, now a DXC company. Welcome to the Cube. Marc, you were one of the first SIs that we ever met in the Servicenow ecosystem, acquired by CSC and now the spin merge with HPE, explain it all, how'd you get here?
>> Yeah, well, that's great. So we really grew up in the Servicenow ecosystem, right. That's where Fruition really became what it was and is. CSC came in 2015, so they came, acquired us, we became Fruition Partners under the CSC brand. CSC then did an acquisition of UXC, a very large SI out of Australia, and with that was Keystone, probably now the largest Servicenow SI in greater Australia, so they came into our practice as the Fruition Partners Australia brand. We then went out under CSC and did another acquisition in mainland Europe, Aspediens. They covered Switzerland, France, Germany, and Spain. And so now they're the Fruition Europe brand. So we still have this Fruition practice inside of CSC at the time, and then HP Enterprise Services, so that's only the EDS group, the services group, not the hardware or software group. So then they chose to spin merge with CSC and form DXC. So we're still the Servicenow practice, Fruition Partners, a DXC Technology company, so all the Servicenow, everything you're seeing, that's what we're enabling for customers.
>> Now Patrick, how did that all affect the go to market?
>> It enables us to be more global, right. Part of the reason why we acquired these companies and continue to look to do so is our customers are demanding from us a very consistent, boots-on-the-ground experience, multiple languages, but all running the same methodologies, running the same accelerators, and getting them to the finish line at the same time. So DXC and the kind of checkbook and influence of DXC has really helped us do our part in consolidating that market. But what I think we've really just started to scratch the surface of is how we can empower DXC, as you know, kind of become the engine that runs the nine major offerings of DXC and start to get Servicenow into support of those offerings, modernize them, make them more efficient, and make them more attractive to customers.
>> You guys were early on, you know, we've talked about this in the past, kind of placed your bets, paid off. Is this sort of workflow automation the next big thing? It seems now that everybody's glomming onto it.
>> Yessir.
>> Is it, and why now? And where do you see it going?
>> So we see this, as Patrick mentioned, DXC has nine service offering families, right, and that includes big data, cyber, vertical applications; certainly the outsourcing business is still significant. But what we're seeing is Servicenow is this workflow backbone middleware that kind of connects us all. So we have the DXC offering family leads coming to us and saying listen, we understand that Servicenow can do ITOM for business process orchestration, we understand it has a SecOps component, so now we have an ISECOPS offering. So they're seeing that Servicenow is kind of the glue to bring together these various offerings, and it helps us go from our traditional relationship with the IT department to now branching out into HR, into security, into that CSM space.
Even in the business process automation space, that can be claims processing. The total business functions that are automated by this workflow, it's not just the workflow itself, it's that the workflow ties into the other silos so that it's not just email, it's actually intelligent email, intelligent routing. So we see it as the glue to keep all these offerings together.
>> And then you guys are starting to build solutions on top of the Servicenow platform and go to market with the solution, versus, you already have Servicenow, we're going to be a kind of typical consultant and help you do best practices, et cetera.
>> Exactly, you know, it's kind of a combination of the two. But I think the best way to think about it is that Servicenow is doing its best to be as horizontal across the enterprise as possible, right? Security is a really excellent example of a place where Servicenow is a natural fit, you connect the cycle with security and IT. But one of the things that we're looking to do is to bring the industry expertise of DXC to some of these Servicenow-enabled solutions. Marc talked about our ISECOP solution, which is horizontal managed security services. But we debuted yesterday that we're going to be working with Servicenow and their catalyst program around a healthcare splinter of ISECOPs, because there are all kinds of uniquely healthcare-provider-oriented security concerns where the actual thought leadership and the knowledge of the cyber consultants at DXC really bring a lot to the table. So we could build a solution in conjunction with Servicenow. They rely on us for the industry expertise, and they just keep that security piece humming and up to date and locked in with the rest of the platform.
>> You know, we have another offering, just to add to that, out of Europe; one of the consulting groups does environmental health and employee health and safety in manufacturing plants. They said listen, there's a product out there in the marketplace, can you do something better or different using the Servicenow platform? So we actually took that subject matter expertise from DXC consulting experience, we've married that with our Servicenow expertise, and we actually have another product that we're going to market with. It's an employee health and safety product, for manufacturing plants, for slip and fall, for any environmental concerns, any of the safety issues that they have. But that's really combining industry and vertical expertise with Servicenow.
>> And that shows somebody might not even know they're buying Servicenow, right. (crosstalk)
>> You're essentially OEMing the platform.
>> That's what we would like to get to.
>> You're not there yet.
>> I think there's a lot of, we have a lot of, we sell standalone on top of the Servicenow platform and it gets built. Tony Beller, who's the new GVP of Alliances, is coming in with a lot of force and ecosystem experience, and I think he's really charging with some of the bigger partners like us to really lock down that OEM, because I think that's where we get a lot of leverage for Servicenow, and our customers essentially want to consume as they need it, and that makes a lot of sense.
>> And are you reselling Servicenow in that solution offering so that they don't have a separate relationship with Servicenow, it's all integrated into that?
>> Exactly, yup.
>> Correct.
>> And do you guys use Servicenow internally?
>> We do, yeah. Ourselves, we've been big drinkers of the champagne, as they say, for a really long time. We have a number of systems we use to run our professional services organization. But DXC, particularly in the area of asset management, some of the real ROI-driven pieces of IT, is taking a very hard look at the successes they've had there and trying to figure out how we can enable that success in the rest of the organization. Purchasing, project management, you know, these are things that I think we're going to do internally and then start to share results with our customers.
>> Well, we also have something called My Order Style, so that actually is how we do manage service provider outsourcing relationships; that's built on Servicenow. And we do that internally as well, so basically when we get support or when we need support for our equipment, whatever, worldwide, that's being logged and tracked in Servicenow.
>> And in Servicenow you clearly have very strong messaging around, we start with IT, IT service management, and then ITOM, and then moving into the lines of business. How rapidly are you seeing that in your customer base? And maybe add a little color to that.
>> I think we're trying to accelerate that.
>> Yeah.
>> I think what we're seeing is a shift: as infrastructure goes to the cloud, as the IT department moves away from being the T of technology and more toward the information side, they're starting to realize this role as more of a service management organization, because oftentimes the applications that they're supporting are coming from a third party, if it's Servicenow, if it's Workday, if it's Salesforce, but they can be the glue that holds it together. They can worry about the releases, the data hierarchy; it's IT reinventing themselves. They see themselves going out towards those other departments, towards HR, towards CSM, towards field service, and saying we actually have a solution we want to bring to you.
>> I got to ask you guys, as a consultancy, complexity is your friend. You know, when things are chaotic it's like, call you guys and solve the problem, but at the same time, you hear from a lot of Servicenow customers, we're trying to minimize the customization, the custom modifications.
>> Patrick: Yes.
>> Marc: Right.
>> Is that antithetical to the way you guys typically do things?
>> It shouldn't be, I don't think. I mean, we don't want to do as much work as possible in one project, we want to deliver value over the course of many, many transactions that are shorter in duration. And so the more we can stick to the configurable aspects of Servicenow, the better off we're going to be and the better off our customers are going to be. They'll take releases more smoothly and so forth. And what you can do with configuration and app scoping is really a whole other level than what it was five years ago, so we're actually starting to fulfill that promise.
>> And so if you can build value on top of the platform using the platform,
>> That's the point, yeah.
>> Those functions beget the advantage of the upgrade.
>> Yeah, I would look at this and say when Fruition really got going is when we really embraced Servicenow, not just the technology, but the methodology. Because we knew a lot of other service providers, they want a two-year project, they want that SAP three-year, whatever it was. But we embraced the methodology and said that if we can't show results in four to five months using this technology, we're not going to be invited back. But look at today, we have 400 customers worldwide, and about 70 percent of those make up our annual bookings again for the next project and the next project, because they see value in these increments and we're delivering that. So I would rather not elongate projects; they need to see things very fast.
>> Awesome, guys, congratulations, I love your story, and Marc, you got to present to the financial analyst group yesterday, so well done. Thanks for coming on the Cube.
>> Thank you very much.
>> Thank you for having us.
>> Keep right there, buddy, we'll be back with our next guest right after this.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Patrick | PERSON | 0.99+ |
Patrick Stonelake | PERSON | 0.99+ |
Mark Toludo | PERSON | 0.99+ |
Tony Beller | PERSON | 0.99+ |
Dave Alante | PERSON | 0.99+ |
2015 | DATE | 0.99+ |
Fruition Partners | ORGANIZATION | 0.99+ |
Fruition | ORGANIZATION | 0.99+ |
CSC | ORGANIZATION | 0.99+ |
Australia | LOCATION | 0.99+ |
Mark | PERSON | 0.99+ |
DXC | ORGANIZATION | 0.99+ |
Marc Talluto | PERSON | 0.99+ |
Jeff Frick | PERSON | 0.99+ |
UXC | ORGANIZATION | 0.99+ |
Europe | LOCATION | 0.99+ |
yesterday | DATE | 0.99+ |
400 customers | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
four | QUANTITY | 0.99+ |
Spain | LOCATION | 0.99+ |
Orlando, Florida | LOCATION | 0.99+ |
two year | QUANTITY | 0.99+ |
Germany | LOCATION | 0.99+ |
two | QUANTITY | 0.99+ |
ISECOPS | ORGANIZATION | 0.99+ |
ISECOPs | ORGANIZATION | 0.99+ |
Switzerland | LOCATION | 0.99+ |
Servicenow | ORGANIZATION | 0.99+ |
France | LOCATION | 0.99+ |
ISECOP | ORGANIZATION | 0.99+ |
five months | QUANTITY | 0.99+ |
Orlando | LOCATION | 0.99+ |
HBE | ORGANIZATION | 0.99+ |
three year | QUANTITY | 0.98+ |
HP | ORGANIZATION | 0.98+ |
one project | QUANTITY | 0.98+ |
five years ago | DATE | 0.97+ |
Fruition Europe | ORGANIZATION | 0.97+ |
today | DATE | 0.96+ |
about 70 percent | QUANTITY | 0.95+ |
SECOPS | ORGANIZATION | 0.94+ |
SAP | ORGANIZATION | 0.93+ |
Europe Aspediens | LOCATION | 0.91+ |
nine major offerings | QUANTITY | 0.91+ |