Winning the Roadmap Race: How to Work with Tech Vendors to Get the Features That You Need
>> Well, thank you, everyone, and welcome to Winning the Roadmap Race: how to work with tech vendors to get the features that you need. We're here today with representatives of RBC Capital Markets, who will share some of their best practices for collaborating with technology vendors. I am Ada Mancini, solution architect here at Mirantis, and we're joined by Tina Bustamante, senior production manager at RBC Capital Markets, and Manoj Agarwal, head of capital markets compute and data fabric. RBC has been using Docker since about 2016, and you've been closely involved with that effort. What moved you to begin containerizing applications?

>> Hi, Ada. Thank you for having us. Back in 2016, when we started our journey, one of our major focus areas was maturing our DevOps capabilities, and what we found was that it was challenging to adopt DevOps across applications of different shapes and sizes, with different tech stacks, and, as a financial industry firm, we have a very large application estate, so making that work was challenging. This is where containers were appealing to us. In those early days we started looking at containers as a possible way to create standardization across different applications, to have a consistent format. Beyond that, we also saw containers as a technology that could be adopted across the enterprise, not just for a small subset of applications, so that was very interesting to us. In addition, containers came with schedulers like Kubernetes or Swarm, which do a lot more than the traditional schedulers: resource management, failover management, and scaling up and down depending on application or business requirements. So all those things were very appealing. It looked like a solution to a number of the challenges we were facing. That's when we got started with containers.

>> So what subsequently motivated you to start utilizing Swarm and then Kubernetes?

>> Yeah, beyond resource management, there is failover management. As you can imagine, managing failover and DR is never easy, and with the container schedulers we saw that it kind of becomes a managed service for us. Another aspect: we are a heavily regulated industry, in capital markets especially, so creating an audit trail of events, who did what and when, is important, and containers seemed to provide all of those aspects to us out of the box. The other thing we saw with containers and the schedulers was that we could simplify our risk management: we could control which application in which container gets deployed where, how it runs, and when it runs. So all those aspects of the schedulers seemed, at the time, to simplify a lot of the traditional challenges, and that was very appealing to us.
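As a rough illustration of the scheduler capabilities Manoj describes, resource management, failover, and scaling up and down, here is a minimal sketch using the official Kubernetes Python client. The application name, image, namespace, and sizing are hypothetical, not RBC's actual configuration.

```python
from kubernetes import client, config

config.load_kube_config()  # local kubeconfig; in-cluster config would also work

# Resource requests tell the scheduler how much CPU/memory to reserve per container;
# limits cap what it may consume.
container = client.V1Container(
    name="pricing-engine",
    image="registry.example.com/pricing-engine:1.0",  # hypothetical image
    resources=client.V1ResourceRequirements(
        requests={"cpu": "500m", "memory": "1Gi"},
        limits={"cpu": "2", "memory": "4Gi"},
    ),
)

# Three replicas give failover: if a node dies, the scheduler recreates pods elsewhere.
deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="pricing-engine", labels={"app": "pricing-engine"}),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "pricing-engine"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "pricing-engine"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)
client.AppsV1Api().create_namespaced_deployment(namespace="markets", body=deployment)

# A horizontal pod autoscaler scales the deployment up and down with CPU load,
# instead of someone resizing it by hand.
hpa = client.V1HorizontalPodAutoscaler(
    api_version="autoscaling/v1",
    kind="HorizontalPodAutoscaler",
    metadata=client.V1ObjectMeta(name="pricing-engine"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="pricing-engine"),
        min_replicas=3,
        max_replicas=20,
        target_cpu_utilization_percentage=70,
    ),
)
client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="markets", body=hpa)
```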
>> So what kind of changes were required in the development culture and in operations in order to enable this new platform and this new delivery method?

>> Yeah, that's a good question. Any change obviously requires a lot of education, and this was not just a change for our developers or operations; it was a change throughout the organization, starting with project managers, business analysts, developers, QA, and our support personnel. In addition, I talked about risk and security management, so it really is a change across the organization; it's a cultural change. So beyond education, collaboration was extremely important. Across those two, we started first with internal education, using things like internal lunch-and-learns, and we did some external and hands-on workshops, and a lot of those exercises were done in collaboration across all those groups. The next item we focused on was how to give our senior developers awareness of this technology and make sure they could identify the use cases that would benefit from it. So we picked senior developers and candidate applications in a kind of try-before-you-buy scenario, and we ran through some applications to make sure they got their hands dirty and felt comfortable with it, and then they could broadcast that message to the broader organization. The next thing we did was get management buy-in. Obviously any change is going to require investment, so making sure there was a value proposition that was clear to our management, as well as our business, was critical very early on in the container adoption phase. That was another item we focused heavily on. And the last thing I would say is clearly defining the strategic benefits: defining a roadmap for how we would proceed, how we go from low-risk to medium-risk to high-risk applications, and what the strategic benefits are. Are they purely operational? Are they purely cost benefits? Or is it a modernization of the underlying technology stack? Containers do check all those three boxes. So that was the fourth item on the list that, I would say, changed in our container adoption journey.

>> So as people are getting onto the containerization process, and as this is starting to gain traction, what did your developers embrace as the real, tangible benefits of moving to container platforms?

>> It's interesting. The benefits are not just for developers, and the way I will answer this question is not from development to operations, but from operations to developers. Operationally, the moment developers saw that an application could be deployed with containers relatively quickly, without having them on call and without them writing long release notes, they started seeing the benefit right away: I don't need to be there late in the evening, I don't need to be on call to create the environment or to deploy QA versus production versus DR. They could do it right once and then repeat that success across different environments. That was a big eye-opener for them, and they started realizing: look, I can free up my time now, I can focus on my core development, and I don't need to deal with the traditional operational issues. That was quite eye-opening for all of us, not just for developers, and we started seeing those benefits very early on. Another thing the developers talked about was: hey, I can validate this application on my laptop. I don't need all these servers, I don't need to share servers, and I don't need to depend on infrastructure teams or other teams to get their checks done before I can start my work; I can validate it on my laptop. That was another very powerful feature that empowered them.
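A hedged sketch of that validate-on-my-laptop workflow, using the Docker SDK for Python; the image name, port, and environment variable are made up for illustration, and a Dockerfile is assumed to live in the current directory.

```python
import docker

client = docker.from_env()

# Build locally the same image that would later go to QA or production.
image, build_logs = client.images.build(path=".", tag="risk-report:dev")

# Run it on the laptop: no shared servers, no infrastructure ticket.
container = client.containers.run(
    "risk-report:dev",
    detach=True,
    ports={"8080/tcp": 8080},          # reach the app on localhost:8080
    environment={"APP_ENV": "local"},  # hypothetical configuration knob
)

print(container.logs().decode())       # check startup output
container.stop()
container.remove()
```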
The last thing I would say is the software-defined aspect of the technology, network or storage for example. A lot of those were traditionally things developers had to call someone for, wait on, and deal with tickets for. Now they can do a lot of those things themselves, they can define them themselves, and that's very empowering. So from their perspective we have moved towards shift-left, and the more control developers have, the better the product is, the better the quality of the product, the better the time to market, and the overall experience and the business benefits all start to improve as well. One extra point I would like to make here: the success of this was so interesting to the development community that even our developers from the business came along and showed interest in adopting containers, whether it's the developers on the quant side or the data science developers. They all started realizing the value proposition of containers. It was quite eye-opening, I would have to say.

>> And so while this process is happening, while you're moving to container platforms, you started looking for new ways to deliver some of the benefits of containers and distributed systems orchestration more widely across the organization. And I think you identified a couple of areas where the Docker Enterprise Kubernetes service wasn't meeting the features that you anticipated, or hadn't planned on integrating the features that you required. Can you tell us about that situation?

>> Certainly. Hi, Ada. Thanks for having us again. From the product management perspective, I would say products are always evolving, and capabilities can be at different stages of maturity. When we reviewed what our application teams and our businesses were looking to do, one area that stood out was definitely the data science space. Our quants and data scientists really wanted to expand our risk analysis models. They were looking for larger scale, a lot more computing power, and we tried to come up with a way to be able to facilitate their needs. One thing that really came from an early concept was the idea of being able to leverage GPUs. We stood up a small R&D team to see if there was something that would be feasible on our end, but based on different factors and considerations, and the technical thinking involved, we realized that the complexity it would bring to our overall technology stack was not something we would be best suited, I would say, to take on on our own. So we reached out to Mirantis and brought forth the concept of being able to scale Kubernetes pods on GPUs. We relied on their expertise and their engineers to think about expanding their Kubernetes offering to be able to scale and potentially support running pods on GPUs. It definitely was not something that came from one day to the next; it did involve a number of conversations. But, you know, I'm happy to say that in recent months it has become part of the Kubernetes product offering.
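For context, the generic Kubernetes mechanism behind this capability is to request GPUs as an extended resource (for example, nvidia.com/gpu exposed by a device plugin), which the scheduler then uses to place pods onto GPU nodes. The sketch below uses the Kubernetes Python client with hypothetical names; it illustrates the general approach, not the specific Docker Enterprise implementation that Mirantis shipped.

```python
from kubernetes import client, config

config.load_kube_config()

pod = client.V1Pod(
    api_version="v1",
    kind="Pod",
    metadata=client.V1ObjectMeta(name="risk-model-training"),  # hypothetical job name
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="trainer",
                image="registry.example.com/risk-model:latest",  # hypothetical image
                resources=client.V1ResourceRequirements(
                    # The device plugin advertises GPUs as an extended resource;
                    # requesting it steers the pod onto a GPU node.
                    limits={"nvidia.com/gpu": "2"},
                ),
            )
        ],
    ),
)
client.CoreV1Api().create_namespaced_pod(namespace="data-science", body=pod)
```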
>> Yeah, I believe that effort did take a while and a lot of engineering effort. And I think initially you had done some internal R&D to try to work on those features, but ultimately you decided to go with a different strategy and rely on the vendor to produce those as part of the vendor's product. Can you elaborate on what you found in that internal R&D?

>> Well, we definitely saw the potential there. But the longevity of actually maintaining that GPU scaling with Kubernetes on our own was just not 100% within our expertise, and it was something where we wanted to collaborate more closely with the vendor. Technology is always evolving, so the longevity of keeping up with the latest features and capabilities, and the testing and QA involved, was just not something that we thought we should be taking on on our own.

>> Okay, so spending the time and engineering effort focusing on the data science, the quantitative analysis parts, I see. And then ultimately working with the vendor produced a release where these features are now available. What did that engagement look like, with RBC's involvement?

>> I would say the engagement started off with discussing it and bringing it forth, being very open and having transparency, so that delivery was always the focus. It definitely started off with discussing the business case, why we would require the feature. The representatives and engineers engaged from the Mirantis side had their own thoughts and opinions, and being able to run the workloads on GPUs would be something that they would ultimately, as I mentioned, have to support on their end. So we did work with them very closely. There was very much a willingness to collaborate; we held a number of meetings and discussed how the GPU support would actually evolve. It wasn't something that came about within one sprint, and that was never our expectation. It did take a couple of weeks to be able to see a beta product, opine on it, see a demo, review it, and discuss it further. As you know, sometimes there might be a release where a capability may be offered but there are delays; that's just part of our industry in a sense. We're very much risk-averse, as Manoj mentioned, when you are a financial institution. So we just wanted to make sure it was a viable product, that it was definitely available off the shelf, and then we would be able to leverage it. But yeah, the key point, I would say, in terms of being able to bring the feature forward was definitely constant communication with Mirantis.

>> That's excellent. I'm glad that we were able to help bring that feature forward. I think it's something that a lot of people have been asking for, and like you said, it enables a whole new class of problem solving. Okay. Manoj, Tina, thank you for your time today. It's been wonderful talking to you again. That is our session on working with your vendors. I want to thank everyone who's watching for taking the time to contribute to our conference. Awesome. Thank you.
Dr. Hákon Guðbjartsson, WuxiNextcode & Jonsi Stefansson, NetApp | AWS re:Invent 2018
Live from Las Vegas, it's the Cube, covering AWS re:Invent 2018. Brought to you by Amazon Web Services, Intel, and their ecosystem partners. And welcome to Las Vegas! We're at AWS re:Invent, day one of three days of coverage here on the Cube. Along with Justin Warren, I'm John Walls. Glad to have you with us here for our live coverage. We're joined now by Jonsi Stefansson, who's the vice-president of Cloud Services at NetApp, and Hákon Guðbjartsson, who's the CIO of WuxiNextcode. Gentlemen, thanks for joining us, good to have you here. >> Thank you. >> Yeah, thank you for having, >> Having us. >> And I think, not only your first time on the Cube, but I believe the first time we have had natives of Iceland. (laughs) So, a first for us as well. But glad to have you. First off, Hákon, if you will, tell us a little bit about WuxiNextcode, what you do and why you're here. >> Yeah, so we are a company that specializes in analysis of genomic data, all the way from gathering cohorts for our pharma customers to providing sequencing services, data analytics, and AI. So we basically cover the full end-to-end solution space for genomic analysis. >> Okay, and now let's talk about the partnership, or at least the work that's going on between you. If you would, Jonsi, tell us a little bit about when you have a client like this, in genomics, what exactly are you trying to peel back for them? What's the challenge that you're trying to address? >> So we launched Cloud Volumes Service on AWS roughly six months ago, and we've been running it with a very select customer base that is focusing on very specific workloads, like genome sequencing, rendering, and database workloads, workloads that have traditionally had a hard time finding their way into the cloud. We've had a very deep partnership with WuxiNextcode in customizing our offering so that it fits their needs. We've been working very closely with them for the past, I would say, four to five months, and now we've moved their entire production data sets into AWS. That's been something these research companies have been struggling with, and Cloud Volumes addresses it, with the data management capabilities and the performance tiers that we offer. >> Could you give us a bit more detail on what it is about Cloud Volumes that's special and different compared to what you would generically get from AWS? Because people have been able to put storage into the cloud >> for some time, >> Of course. >> so what is it about Cloud Volumes that's unique? >> So I think we're very complementary to the storage offerings that AWS has currently. For their traditional databases, WuxiNextcode is using 53 EC2 instances that all have EBS volumes, but the analytic data actually gets pushed to NFS. So we basically have a more performant, shared-everything solution. If you compare that to EFS, for example, EFS is a great offering that AWS already has, but it doesn't reach that scale when it comes to the performance tiers that we are offering. We also offer a differentiator for customers in being able to clone and snapshot data with only the delta, not a full copy. So, for example, it's really important for data scientists like those at WuxiNextcode to always be working on production datasets. For them to be able to replicate the data across all the different environments, testing, staging, development, and production, they basically only have a small delta across all those volumes. Which is really important: instead of always having to copy 40-terabyte chunks, they're basically just taking the difference between all of them, using the ONTAP cloning technology. So that's a very unique value proposition. Another unique value proposition of Cloud Volumes is that you can automatically or dynamically change the performance tier of a volume. So you can go from standard to premium to extreme dynamically, based on when you actually need that extra level of performance. So you don't need to be continuously running at extreme, but only when you actually need to.
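A hedged sketch of the two capabilities Jonsi describes: cloning a production volume from a snapshot so that each environment carries only a delta, and dynamically changing a volume's service level. The endpoint paths, field names, and base URL below are assumptions for illustration, not the documented Cloud Volumes Service API.

```python
import requests

API = "https://cloudvolumes.example.com/v2"           # hypothetical API base URL
HEADERS = {"api-key": "KEY", "secret-key": "SECRET"}  # placeholder credentials

# 1. Snapshot the production volume and clone it for the test environment;
#    the clone shares blocks with production, so it stores only the delta.
snap = requests.post(f"{API}/Volumes/prod-genomics/Snapshots",
                     json={"name": "nightly"}, headers=HEADERS).json()
requests.post(f"{API}/Volumes",
              json={"name": "test-genomics",
                    "snapshotId": snap.get("snapshotId"),  # assumed field name
                    "serviceLevel": "standard"},
              headers=HEADERS)

# 2. Raise the production volume to the "extreme" tier only while a heavy
#    query runs, then drop it back to "standard".
requests.put(f"{API}/Volumes/prod-genomics",
             json={"serviceLevel": "extreme"}, headers=HEADERS)
# ... run the IO-intensive analysis here ...
requests.put(f"{API}/Volumes/prod-genomics",
             json={"serviceLevel": "standard"}, headers=HEADERS)
```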
>> So Hákon, what was it about Cloud Volumes that got your attention initially, that said, "actually, this is something that we should probably look at"? >> So, a little bit of background: we grew out of an environment where we were evolving our architecture around an HPC cluster with highly scalable storage, and actually we were using NetApp storage in our early days when we were developing. Then as we moved into the cloud, we were somewhat struggling with the NFS scalability that was available in the cloud. So I like to say that we are kind of reborn now in the cloud, because we have lots of interactive analytics that are user-driven, so high-speed IO is fundamental in our analysis, and we were in a way struggling to self-manage NFS storage in the cloud. Cloud Volumes was, in a way, sort of a dream come true. It's a lot of simplification for us in terms of deployment and management, to have a scalable service providing NFS to our applications. So it was a perfect marriage in that regard. It fitted very well with our architecture; even though some of our storage relies on object storage, all the interactive analytics perform way better using NFS storage. >> Yeah, Hákon, were there reservations about making this move? Capabilities where you thought, maybe it sounds good, but I don't know if you can deliver on that, and things on which you've been pleasantly surprised? >> To a certain extent, because we had actually tried several experiments with other solutions, trying to solve the NFS bottleneck for us, and when we tried this it actually went extremely smoothly. We onboarded 50 terabytes of data in less than a weekend, and when we ran our first test cases to see whether this was working as expected, we actually found it worked over three times better than with our conventional storage. And not only that, there were certain use cases that we had never really completed to the full end, and we were finishing them in times that we were very pleased with. >> I mean, our goal for the workshop that we did, and we've been doing this with a lot of customers, one of the challenges Hákon came up with was a genome query that he had created and had never been able to complete. He wanted to see if, by switching this out, he could actually complete that query. It used to time out after like three or four hours in his previous setup. >> It was essentially a query touching on the order of 20 trillion data points, so we were using lots of cores.
We have a database solution that we've developed, sort of a proprietary database for genomic analytics, and it was spinning up over 500 cores, essentially. So it was a very IO-intensive query. But as I said, we were able to run it to completion in a time that we were very satisfied with. >> That's pretty amazing. >> Yeah. >> Absolutely. >> So Hákon, what's your impression of NetApp's data fabric vision? They've been talking about that for a little while, and I'm just curious to hear what your take on it is. >> Yeah, I think it makes a lot of sense. I mean, we work with many pharma customers that have lots of data locally but are also looking at the cloud as a solution for growth and for new endeavors. And having a data fabric infrastructure that allows you to bridge the two is, I think, something that makes a lot of sense with where people want to go in the future. >> Yeah, what are you hoping to hear from Amazon and the show around that idea of being able to live outside of the cloud? Traditionally, Amazon's been very keen on saying, "no, no, everything must be here in the cloud." They're not so keen on this idea of a data fabric that could move things around in different locations. What are you expecting to hear from them this week? >> I mean, I wouldn't say so much that I'm expecting to hear something, but it's clear to me that customers are more willing now to go into the cloud. Regardless of that, there are still certain reasons to keep certain infrastructure where it is; moving legacy infrastructure into the cloud may not necessarily be the best way forward, rather than being able to integrate it more seamlessly with the cloud and evolve new functionality, new features, in the cloud. Also, I wouldn't call it privacy, but there are lots of data sets that people are still reluctant to move into the cloud because of the way they are managed, et cetera. Being able to bridge those two things is something that I think is valuable for our customers. >> I actually don't think that the decision to move into the cloud has ever been a cost decision, in my opinion. It is for companies to be able to compete with other companies within their sector and to take advantage of the rapid innovation that is happening in the cloud. I mean, if you take autonomous vehicles for example, for the companies that are actually in the cloud and taking advantage of things like SageMaker and these deep learning and machine learning algorithms, it's really hard to compete with AWS, it's really hard to compete with Google or Azure. These are really big companies that are pouring a lot of money into innovation. So I think it's driven by necessity, to stay competitive, to go into the cloud and be able to tap into that innovation. This actually brings in the question of what it means to be cloud native. If you're cloud native, it means that your solution, even though it's being serviced through a marketplace, needs to be able to tap into that innovation. You need to connect to the ecosystem that AWS has. To me, that's a much stronger driving force for moving those legacy applications into the cloud. But with the data fabric, we want to really bridge the gap. It should be relatively easy for your application or your workload to find the best home at any given time, whether that's on premises or in the public cloud; you should have an intelligent way of deciding where each one of your workloads should go.
And that's the whole point of the data fabric: make that really, really easy. >> Well, you said the partnership's been about four months, so you're still in the honeymoon phase, but here's to continued success, and thanks for being with us here on the Cube. We appreciate it. >> Thank you so much for having us. >> We are happy to be here. >> Have a great show. Back with more, we are live here on the Cube at AWS re:Invent, and we'll be back with more in just a moment. (energetic music)