Satish Iyer, Dell Technologies | SuperComputing 22
>>We're back at SuperComputing 22 in Dallas, winding down the final day here. A big show floor behind me. Lots of excitement out there, wouldn't you say, Dave? >>Oh, it's crazy. I mean, any, any time you have NASA presentations going on, and steampunk iterations of cooling systems, you know, it's, it's >>The greatest. I've been to hundreds of trade shows. I don't think I've ever seen NASA exhibiting at one like they are here. Dave Nicholson, my co-host. I'm Paul Gillin, and with us is Satish Iyer. He is the vice president of emerging services at Dell Technologies, and Satish, thanks for joining us on theCUBE. >>Thank you, Paul. >>What are emerging services? >>Emerging services are actually the growth areas for Dell. So it's telecom, it's cloud, it's edge. So we, we especially focus on all the growth vectors for, for the company. >>And, and one of the key areas that comes under your jurisdiction is called Apex. Now I'm sure there are people who don't know what Apex is. Can you just give us a quick definition? >>Absolutely. So Apex is actually Dell's foray into cloud, and I manage the Apex services business. So this is our way of actually bringing cloud experience to our customers, on-prem and in colo. >>But, but it's not a cloud. I mean, you don't, you don't have a Dell cloud, right? It's, it's infrastructure as >>A service. It's infrastructure and platform and solutions as a service. Yes, we don't have our own version of a public cloud, but you know, this is a multi-cloud world, so technically customers want to consume where they want to consume. So this is Dell's way of actually, you know, supporting a multi-cloud strategy for our customers. >>You, you mentioned something just ahead of us going on air, a great way to describe Apex, to contrast Apex with CapEx. There's no C, there's no cash up front necessary. Yeah, I thought that was great. Explain that, explain that a little more. 
Well, >>I mean, you know, one, one of the main things about cloud is the consumption model, right? So customers would like to pay for what they consume, they would like to pay in a subscription. They would like to not prepay CapEx ahead of time. They want that economic option, right? So I think that's one of the key tenets for anything in cloud. So I think it's important for us to recognize that, and I think Apex is basically a way by which customers pay for what they consume, right? So that's absolutely a key tenet for how, how we want to design Apex. So it's absolutely right. >>And, and among those services are high performance computing services. Now I was not familiar with that as an offering in the Apex line. What constitutes a high performance computing Apex service? >>Yeah, I mean, you know, I mean, this conference is great, like you said, you know, there's so many HPC and high performance computing folks here, but one of the things is, you know, fundamentally, if you look at the high performance computing ecosystem, it is quite complex, right? And when you call it an Apex HPC offer, it brings a lot of the cloud economics and cloud, you know, experience to the HPC offer. So fundamentally, it's about our ability for customers to pay for what they consume. It's where Dell takes a lot of the day to day management of the infrastructure on our own, so that customers don't need to do the grunt work of managing it, and they can really focus on the actual workload, which they actually run on the HPC ecosystem. So it, it is, it is a high performance computing offer, but instead of them buying the infrastructure, running all of that by themselves, we make it super easy for customers to consume and manage it across, you know, proven designs, which Dell always implements across these verticals. >>So what, what makes it a high performance computing offering as opposed to, to a rack of PowerEdge servers? What do you add in to make it >>HPC? 
Ah, that's a great question. So, I mean, you know, so this is a platform, right? So we are not just selling infrastructure by the drink. Fundamentally, it's based on, you know, we, we launched two validated designs, one for life sciences, one for manufacturing. So we actually know how these pieces work together, how they actually form a validated design, a tested solution. And also, it's a platform. So we actually integrate the software on the top. So it's not just the infrastructure. We actually integrate a cluster manager, we integrate a job scheduler, we integrate a container orchestration layer. A lot of these things, customers have to do by themselves, right, if they buy the infrastructure. So basically we are actually giving a platform or an ecosystem for our customers to run their workloads, and we make it easy for them to actually consume those. >>Now is this, is this available on premises for customers? >>Yeah, so we, we make it available to customers both ways. So we make it available on-prem for customers who want to, you know, kind of take that economics. We also make it available in a colo environment if the customers want to actually, you know, extend colo as that on-prem environment. So we do both. >>What are, what are the requirements for a customer before you roll that equipment in? How do they sort of have to set the groundwork? >>Well, I think, you know, fundamentally it starts off with what the actual use case is, right? So, so if you really look at, you know, the two validated designs we talked about, you know, one for, you know, healthcare life sciences, and the other one for manufacturing, they do have fundamentally different requirements in terms of what you need from those infrastructure systems. 
So, you know, the customers initially figure out, okay, do they actually require something which is going to require a lot of memory-intensive loads, or do they actually require something which has got a lot of compute power. So, you know, it all depends on what they would require in terms of the workloads, and then we do have t-shirt sizing. So we do have small, medium, large, we have, you know, multiple infrastructure options, CPU core options. Sometimes the customer would also wanna say, you know what, along with the regular CPUs, I also want some GPU power on top of that. So those are determinations typically a customer makes as part of the ecosystem, right? And so those are things which they would talk to us about, to say, okay, what is my best option in terms of, you know, kind of workloads I wanna run? And then they can make a determination in terms of how they would actually go. >>So this, this is probably a particularly interesting time to be looking at something like HPC via Apex, with, with this season of rolling thunder from various partners that you have, you know? Yep. We're, we're all expecting that Intel is gonna be rolling out new CPU sets. From a PowerEdge perspective, you have your 16th generation of PowerEdge servers coming out, PCIe Gen 5, and all of the components from partners like Nvidia and Broadcom, et cetera, plugging into them. Yep. What does that look like from your, from your perch in terms of talking to customers who maybe, maybe they're doing things traditionally, and they're likely to be not 15G, not generation 15 servers, but probably more like 14? Yeah, you're offering a pretty huge uplift. Yep. What, what do those conversations look >>Like? I mean, customers, so talking about partners, right? 
I mean, of course Dell, you know, we, we don't bring any solutions to the market without really working with all of our partners, whether that's at the infrastructure level, like you talked about, you know, Intel, AMD, Broadcom, right? All the chip vendors, all the way to the software layer, right? So we have cluster managers, we have Kubernetes orchestrators. So what we usually do is we bring the best in class, whether it's a software player or a hardware player, right? And we bring it together as a solution. So we do give the customers a choice, and the customers always want to pick what they know actually is awesome, right? So we actually do that. And, you know, one of the main aspects, especially when you talk about bringing these things as a service, right? >>We take a lot of guesswork away from our customer, right? You know, one good example in HPC is capacity, right? So customers, these are very, you know, I would say very intensive systems, very complex systems, right? So customers would like to buy a certain amount of capacity, they would like to grow and, you know, come back, right? So giving them the flexibility to actually consume more if they want, giving them the buffer, and coming down. All of those things are very important as we actually design these things, right? And, you know, customers are given a choice, but they actually don't need to worry about, oh, you know, what happens if I actually have a spike, right? There's already buffer capacity built in. So those are awesome things when we talk about things as a service. >>When customers are doing their ROI analysis, buying CapEx on-prem versus, versus using Apex, is there a point, is there a crossover point typically at which it's probably a better deal for them to, to go on-prem? >>Yeah, I mean, specifically talking about HPC, right? 
I mean, you know, we do have a lot of customers who consume high performance compute in the public cloud, right? That's not gonna go away, right? But there are certain reasons why they would look at on-prem, or they would look at, for example, a colo environment, right? One of the main reasons they would like to do that purely has to do with cost, right? These are pretty expensive systems, right? There is a lot of ingress, egress, there is a lot of data going back and forth, right? Public cloud, you know, it costs money to put data in or actually pull data back, right? And the second one is data residency and security requirements, right? A lot of these things are probably proprietary sets of information. We talked about life sciences, there's a lot of research, right? >>Manufacturing, a lot of these things are just-in-time decision making, right? You are on a factory floor, you gotta be able to do that. Now there is a latency requirement. So I mean, I think a lot of things, you know, play into this outside of just cost, but data residency requirements, ingress, egress are big things. And when you're talking about massive amounts of data you wanna put in and pull back, they would like to kind of keep it close, keep it local, and you know, get a, get a price >>Point. Nevertheless, I mean, we were just talking to Ian Colle from AWS, and he was talking about how customers have the need to sort of move workloads back and forth between the cloud and on-prem. That's something that they're addressing with Outposts. You are very much in the, in the on-prem world. Do you have, or will you have, facilities for customers to move workloads back and forth? Yeah, >>I wouldn't, I wouldn't necessarily say that. You know, Dell's cloud strategy is multi-cloud, right? So it kind of falls into three. I mean, some customers, some workloads are suited always for public cloud. It's easier to consume, right? 
There are, you know, customers also consuming on-prem, and customers also consuming colo. And we also have, like, Dell's amazing pieces of software, like storage software. You know, we make some of these things available for customers to consume as software IP on the public cloud, right? So, you know, so this is our multi-cloud strategy. So we announced Project Alpine in that fold. So you know, if you look at those, basically customers are saying, I love your Dell IP on this, on this product, on the storage, can you make it available in this public environment, whether, you know, it's any of the hyperscale players? So if we do all of that, right, I think it shows that, you know, it's not always tied to an infrastructure, right? Customers want to consume the best of them, and if it needs to be consumed in hyperscale, we can make it available. >>Do you support containers? >>Yeah, we do support containers on HPC. We have, we have two container orchestrators we support, so we do have container options for customers. Both options. >>What kind of customers are you signing up for the, for the HPC offerings? Are they university research centers, or does it tend to be smaller >>Companies? It, it's, it's, you know, the last three days, this conference has been great. We probably had, like, you know, many, many customers talking to us, but HPC-wise, somewhere in the range of 40, 50 customers. I would probably say a lot of interest from educational institutions, universities, research, to your point, a lot of interest from manufacturing, factory floor automation. A lot of customers want to do dynamic simulations on the factory floor. There is also quite a bit of interest from life sciences, pharma, because you know, like I said, we have two designs, one on life sciences, one on manufacturing, both with different dynamics on the infrastructure. 
So yeah, quite a, quite a bit of interest definitely from academics, from life sciences, manufacturing. We also have a lot of financials, big banks, you know, who want to simulate a lot of the, you know, brokerage, a lot of, lot of financial data, because we have some, you know, really optimized hardware we announced at Dell, especially for financial services. So there's quite a bit of interest from financial services as well. >>That, that was great. We often think of Dell as, as the organization that democratizes all things in IT eventually. And, and in that context, you know, this is SuperComputing 22. HPC is like the little sibling trailing around, trailing behind the supercomputing trend. But we definitely have seen this move out of just purely academia into the business world. Dell is clearly a leader in that space. How has Apex overall been doing since you rolled out that strategy, what, a couple? It's been, it's been a couple years now, hasn't it? >>Yeah, it's been less than two years. >>How are, how are mainstream Dell customers embracing Apex versus the traditional, you know, maybe 18 months to three year upgrade cycle, CapEx? Yeah, >>I mean, look, I, I think there is absolutely strong momentum for Apex, and like Paul pointed out earlier, we started with, you know, making the infrastructure and the platforms available to customers to consume as a service, right? We have options for customers, you know, where Dell can fully manage everything end to end, take a lot of the pain points away, like we talked about, because you know, we're managing a cloud-scale, you know, environment for the customers. We also have options where customers would say, you know what, I actually have a pretty sophisticated IT organization. I want Dell to manage the infrastructure, but only up to this level in the layer, up to the guest operating system, and I'll take care of the rest, right? 
So we are seeing customers who are coming to us with various requirements, in terms of saying, I can do up to here, but you take all of this pain away from me, or you do everything for me. >>It all depends on the customer. So we do have wide interest. So our, I would say our products and the portfolio set in Apex is expanding, and we are also learning, right? We are getting a lot of feedback from customers in terms of what they would like to see in some of these offers, like the example we just talked about, in terms of making some of the software IP available on a public cloud, where they'll look at Dell as a software player, right? That also is absolutely critical. So I think we are giving customers a lot of choices. Our, I would say the choice factor, and you know, we are democratizing, like you said, expanding in terms of the customer choices. And I >>Think it's, we're almost outta our time, but I do wanna be sure we get to Dell validated designs, which you've mentioned a couple of times. How specific are the, well, what's the purpose of these designs? How specific are they? >>They, they are, I mean, you know, so most of these validated, I mean, again, we look at these industries, right? And we look at understanding exactly how they would, I mean, we have a huge embedded base of customers utilizing HPC across our ecosystem at Dell, right? So a lot of them are CapEx customers. We actually do have an active customer profile. So these validated designs take into account a lot of customer feedback, a lot of partner feedback, in terms of how they utilize this. And when you build these solutions, which are kind of end to end and integrated, you need to start anchoring on something, right? And a lot of these things have different characteristics. So these validated designs basically prove to us that, you know, it gives a very good jump-off point for customers. That's the way I look at it, right? 
So a lot of them will come to the table with, they don't come with a blank sheet of paper. When they say, oh, you know what, this, this is my characteristic of what I want, I think this is a great point for me to start from, right? So I think that gives that, and plus it's the power of validation, really, right? We test, validate, integrate, so they know it works, right? So all of those are hypercritical when you talk to customers. >>And you mentioned healthcare, you, you mentioned manufacturing. Other designs? >>We just announced a validated design for financial services as well, I think a couple of days ago at the event. So yep, we are expanding all those validated designs so that we, we can, we can give our customers a choice. >>We're out of time. Satish Iyer, thank you so much for joining us. Thank you. You're at the center of the move to subscription, to everything as a service, everything on a subscription basis. You really are on the leading edge of where, where your industry is going. Thanks for joining us. >>Thank you, Paul. Thank you, Dave. >>Paul Gillin with Dave Nicholson here from SuperComputing 22 in Dallas, wrapping up the show this afternoon. Stay with us, there'll be more coverage soon.
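The consumption model Satish describes above — pay for what you consume, with buffer capacity built in to absorb spikes — can be sketched as a simple billing calculation. This is a hypothetical illustration only: the function, rates, and tiers are invented for the sketch and are not Dell's actual Apex pricing logic.

```python
# Hypothetical sketch of a consumption-based billing model with buffer
# capacity, in the spirit of the Apex discussion above. All numbers and
# names here are invented for illustration.

def monthly_charge(used_units, committed_units, base_rate, overage_rate):
    """Charge for the committed baseline, plus any buffer usage above it.

    used_units: capacity actually consumed this month
    committed_units: the baseline the customer subscribed to
    base_rate: price per unit for committed capacity
    overage_rate: price per unit for buffer (on-demand) capacity
    """
    committed_cost = committed_units * base_rate
    overage = max(0, used_units - committed_units)
    return committed_cost + overage * overage_rate

# A quiet month: usage below commitment, pay only the baseline.
print(monthly_charge(80, 100, 10.0, 12.0))   # 1000.0
# A spike month: the built-in buffer absorbs the extra demand.
print(monthly_charge(130, 100, 10.0, 12.0))  # 1360.0
```

The point of the model is the one Satish makes: the customer never has to guess capacity up front, because demand above the commitment draws on the buffer and is billed as consumed.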
Ian Colle, AWS | SuperComputing 22
(lively music) >> Good morning. Welcome back to theCUBE's coverage at Supercomputing Conference 2022, live here in Dallas. I'm Dave Nicholson with my co-host Paul Gillin. So far so good, Paul? >> It's been a fascinating morning. Three days in, and a fascinating guest, Ian from AWS. Welcome. >> Thanks, Dave. >> What are we going to talk about? Batch computing, HPC. >> We've got a lot, let's get started. Let's dive right in. >> Yeah, we've got a lot to talk about. I mean, first thing is we recently announced our batch support for EKS. EKS is our managed Kubernetes offering at AWS. And so batch computing is still a large portion of HPC workloads. While the interactive component is growing, the vast majority of systems are just kind of fire and forget, and we want to run thousands and thousands of nodes in parallel. We want to scale out those workloads. And what's unique about our AWS Batch offering is that we can dynamically scale based upon the queue depth. And so customers can go from seemingly nothing up to thousands of nodes, and while they're executing their work, they're only paying for the instances while they're working. And then as the queue depth starts to drop and the number of jobs waiting in the queue starts to drop, then we start to dynamically scale down those resources. And so it's extremely powerful. We see lots of distributed machine learning, autonomous vehicle simulation, and traditional HPC workloads taking advantage of AWS Batch. >> So when you have a Kubernetes cluster, does it have to be located in the same region as the HPC cluster that's going to be doing the batch processing? Or does the nature of batch processing mean, in theory, you can move something from here to somewhere relatively far away to do the batch processing? How does that work? 'Cause look, we're walking around here and people are talking about lengths of cables in order to improve performance. 
So what does that look like when you peel back the cover and you look at it physically, not just logically? AWS is everywhere, but physically, what does that look like? >> Oh, physically, for us, it depends on what the customer's looking for. We have workflows that are entirely within a single region, where they could have a portion of, say, the traditional HPC workflow within that region as well as the batch, and they're saving off the results, say, to a shared storage file system like our Amazon FSx for Lustre, or maybe aging that back to S3 object storage for a little lower cost storage solution. Or you can have customers that have kind of a multi-region orchestration layer, where they say, "You know what? I've got a portion of my workflow that occurs over on the other side of the country, and I replicate my data between the East Coast and the West Coast just based upon business needs, and I want to have that available to customers over there. And so I'll do a portion of it in the East Coast, a portion of it in the West Coast." Or you can think of that even globally. It really depends upon the customer's architecture. >> So is the intersection of Kubernetes with HPC relatively new? I know you're saying you're, you're announcing it. >> It really is. I think we've seen a growing perspective. I mean, Kubernetes has been a long time kind of eating everything, right, in the enterprise space? And now a lot of CIOs in the industrial space are saying, "Why am I using one orchestration layer to manage my HPC infrastructure and another one to manage my enterprise infrastructure?" And so there's a growing appreciation that, you know what, why don't we just consolidate on one? And so that's where we've seen a growth of Kubernetes infrastructure and our own managed Kubernetes, EKS, on AWS. >> Last month you announced general availability of Trainium, a chip that's optimized for AI training. 
Talk about what's special about that chip, or how it is customized to the training workloads. >> Yeah, what's unique about Trainium is you'll see 40% price performance over any other GPU available in the AWS cloud. And so we've really geared it to be the most price-performant of options for our customers. And that's what we like about the silicon team, that we're part of that Annapurna acquisition, is because it really has enabled us to have this differentiation and to not just be innovating at the software level but the entire stack. That Annapurna Labs team develops our network cards, they develop our Arm chips, they developed this Trainium chip. And so that silicon innovation has become a core part of our differentiator from other vendors. And what Trainium allows you to do is perform similar workloads, just at a lower price point. >> And you also have a chip several years older, called Inferentia- >> Um-hmm. >> Which is for inferencing. What is the difference between, I mean, when would a customer use one versus the other? How would you move the workload? >> What we've seen is customers traditionally have looked for a certain class of machine, more of a compute type that is not as accelerated or as heavy as you would need Trainium for, for the inference portion of their workload. So when they do that training, they want the really beefy machines that can grind through a lot of data. But when you're doing the inference, it's a little lighter weight. And so it's a different class of machine. And so that's why we've got those two different product lines, with the Inferentia being there to support those inference portions of their workflow and the Trainium to be that kind of heavy duty training work. >> And then you advise them on how to migrate their workloads from one to the other? And once the model is trained, would they switch to an Inferentia-based instance? >> Definitely, definitely. 
We help them work through what does that design of that workflow look like? And some customers are very comfortable doing self-service and just kind of building it on their own. Other customers look for a more professional services engagement to say like, "Hey, can you come in and help me work "through how I might modify my workflow to "take full advantage of these resources?" >> The HPC world has been somewhat slower than commercial computing to migrate to the cloud because- >> You're very polite. (panelists all laughing) >> Latency issues, they want to control the workload, they want to, I mean there are even issues with moving large amounts of data back and forth. What do you say to them? I mean what's the argument for ditching the on-prem supercomputer and going all-in on AWS? >> Well, I mean, to be fair, I started at AWS five years ago. And I can tell you when I showed up at Supercomputing, even though I'd been part of this community for many years, they said, "What is AWS doing at Supercomputing?" I know you care, wait, it's Amazon Web Services. You care about the web, can you actually handle supercomputing workloads? Now the thing that very few people appreciated is that yes, we could. Even at that time in 2017, we had customers that were performing HPC workloads. Now that being said, there were some real limitations on what we could perform. And over those past five years, as we've grown as a company, we've started to really eliminate those frictions for customers to migrate their HPC workloads to the AWS cloud. When I started in 2017, we didn't have our elastic fabric adapter, our low-latency interconnect. So customers were stuck with standard TCP/IP. So for their highly demanding open MPI workloads, we just didn't have the latencies to support them. So the jobs didn't run as efficiently as they could. 
We didn't have Amazon FSx for Lustre, our managed Lustre offering for a high-performance, POSIX-compliant file system, which is kind of the key — for a large portion of HPC workloads, you have to have a high-performance file system. We didn't even, I mean, we had about 25 gigs of networking when I started. Now you look at, with our accelerated instances, we've got 400 gigs of networking. So we've really continued to grow across that spectrum and to eliminate a lot of those frictions to adoption. I mean, one of the key ones, we had an open source toolkit that was jointly developed by Intel and AWS called CfnCluster that customers were using to even instantiate their clusters. And now we've migrated that all the way to a fully functional supported service at AWS called AWS ParallelCluster. And so you've seen, over those past five years, we have had to develop, we've had to grow, we've had to earn the trust of these customers and say, come run your workloads on us and we will demonstrate that we can meet your demanding requirements. And at the same time, there's been, I'd say, more of a cultural acceptance. People have gone away from, again, five years ago, "what are you doing walking around the show," to say, "Okay, I'm not sure I get it. I need to look at it. Okay, now, oh, it needs to be a part of my architecture." But the standard questions: "Is it secure? Is it price performant? How does it compare to my on-prem?" And really, culturally, a lot of it is just getting IT administrators used to it — we're not eliminating a whole field, right? We're just upskilling the people that used to rack and stack actual hardware, to now learning AWS services and how to operate within that environment. And it's still key to have those people that are really supporting these infrastructures. 
And so I'd say it's a little bit of a combination of cultural shift over the past five years, to see that cloud is a super important part of HPC workloads, and part of it's been us meeting the market segment where we needed to, innovating both at the hardware level and at the software level, which we're going to continue to do. >> You do have an on-prem story though. I mean, you have Outposts. We don't hear a lot of talk about Outposts lately, but these innovations, like Inferentia, like Trainium, like the networking innovation you're talking about, are these going to make their way into Outposts as well? Will that essentially become the supercomputing solution for customers who want to stay on-prem? >> Well, we'll see what the future holds, but we believe that we've got the, as you noted, we've got the hardware, we've got the network, we've got the storage. All those put together give you a high-performance computer, right? And whether you want it to be resident in your local data center or you want it to be accessible via APIs from the AWS cloud, we want to provide that service to you. >> So to be clear, that's not available now, but that is something that could be made available? >> Outposts are available right now that have the services that you need. >> All these capabilities? >> Often a move to cloud, an impetus behind it, comes from the highest levels in an organization. They're looking at the difference between OpEx versus CapEx. CapEx for a large HPC environment can be very, very high. Are these HPC clusters consumed as an operational expense? Are you essentially renting time, and then a fundamental question, are these multi-tenant environments? Or when you're referring to batches being run in HPC, are these dedicated HPC environments for customers who are running batches against them? When you think about batches, you think of, there are times when batches are being run and there are times when they're not being run.
So that would sort of conjure, in the imagination, multi-tenancy. What does that look like? >> Definitely, and let me start with your second part first- >> Yeah. >> That's been a core area within AWS: we do not say, okay, we're going to carve out this supercomputer and then we're going to allocate that to you. We are going to dynamically allocate multi-tenant resources to you to perform the workloads you need. And especially with the batch environment, we're going to spin up containers on those, and then as the workloads complete we're going to turn those resources over to where they can be utilized by other customers. And so that's where the batch computing component really is powerful, because as you say, you're releasing resources from workloads that you're done with. I can use those for another portion of the workflow, for other work. >> Okay, so it makes a huge difference, yeah. >> You mentioned that five years ago, people couldn't quite believe that AWS was at this conference. Now you've got a booth right out in the center of the action. What kind of questions are you getting? What are people telling you? >> Well, I love being on the show floor. This is like my favorite part, talking to customers and hearing, one, what do they love, what do they want more of? Two, what do they wish we were doing that we're not currently doing? And three, what are the friction points that still exist, like, how can I make their lives easier? And what we're hearing is, "Can you help me migrate my workloads to the cloud? Can you give me the information that I need, both for a price-for-performance and an operational support model, and really help me be an internal advocate within my environment to explain how my resources can be operated proficiently within the AWS cloud." And a lot of times it's, let's just take a subset of your applications and let's benchmark 'em.
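The dynamically allocated, multi-tenant batch model described above can be sketched in a few lines of code. This is a toy model under invented names, not AWS Batch's actual API: a shared pool of nodes is allocated to a job, the work runs, and the capacity is released back for other tenants the moment the job completes.

```python
# Toy sketch of multi-tenant batch scheduling: capacity is allocated per job
# from a shared pool and returned on completion, rather than being carved
# out per customer. All names and numbers are illustrative only.

class NodePool:
    def __init__(self, total_nodes):
        self.free = total_nodes

    def allocate(self, n):
        if n > self.free:
            raise RuntimeError("insufficient capacity")
        self.free -= n
        return n

    def release(self, n):
        self.free += n

def run_batch_job(pool, nodes_needed, work):
    """Allocate nodes, run the workload, then release the nodes."""
    n = pool.allocate(nodes_needed)
    try:
        return work()       # the actual HPC workload runs here
    finally:
        pool.release(n)     # freed capacity is immediately reusable by others

pool = NodePool(total_nodes=100)
result = run_batch_job(pool, 40, lambda: "job-a done")
# After the job finishes, all 100 nodes are back in the shared pool.
```

The point of the sketch is the `finally` clause: in a batch model you pay only while the job holds the nodes, which is what makes the multi-tenant, OpEx-style consumption possible.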
And really, at AWS, one of the key things is we are a data-driven environment. And so when you take that data and you can help a customer say like, "Let's just not look at hypothetical, synthetic benchmarks. Let's take the actual LS-DYNA code that you're running, perhaps, or the OpenFOAM code that you're running currently in your on-premises workloads, and let's run it on the AWS cloud and let's see how it performs." And then we can take that back to the decision makers and say, okay, here's the price for performance on AWS, here's what we're currently doing on-premises, how do we think about that? And then that also ties into your earlier question about CapEx versus OpEx. We have models where, actually, you can capitalize a longer-term purchase at AWS. So it doesn't have to be, I mean, depending upon the accounting models you want to use, we do have a majority of customers that will stay with that OpEx model, and they like that flexibility of saying, "Okay, spend as you go." We need to have true-ups, and make sure that they have insight into what they're doing. I think one of the boogeymen is that, oh, I'm going to spend all my money and I'm not going to know what's available. And so we want to provide the cost visibility and the cost controls, to where you feel like, as an HPC administrator, you have insight into what your customers are doing and that you have control over that. And so once you kind of take away some of those fears and give them the information that they need, what you start to see too is, you know what, we really didn't have a lot of that cost visibility and control with our on-premises hardware. And we've had some customers tell us we had one portion of the workload where this work center was spending thousands of dollars a day. And we went back to them and said, "Hey, we started to show this, what you were spending on-premises." They went, "Oh, I didn't realize that."
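The price-for-performance comparison described here, running the same real workload in both environments and comparing cost per completed run, reduces to simple arithmetic. The runtimes and hourly rates below are invented for illustration, not measured LS-DYNA or OpenFOAM results:

```python
# Cost to complete one benchmark run in each environment; lower is better.
# All numbers are hypothetical.

def cost_per_run(runtime_hours, cost_per_hour):
    return runtime_hours * cost_per_hour

onprem = cost_per_run(runtime_hours=10.0, cost_per_hour=12.0)  # 10 * 12 = 120.0
cloud = cost_per_run(runtime_hours=6.0, cost_per_hour=16.0)    # 6 * 16 = 96.0

# A faster environment can win on price per run even at a higher hourly rate.
winner = "cloud" if cloud < onprem else "on-prem"
```

This is why benchmarking the actual code matters: the comparison hinges on the measured runtime, not on the sticker price per hour.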
And so I think that's part of a cultural thing that, in HPC, the question was, well, on-premises is free. How do you compete with free? And so we need to really change that culturally, to where people see there is no free lunch. You're paying for the resources whether it's on-premises or in the cloud. >> Data scientists don't worry about budgets. >> Wait, on-premises is free? Paul mentioned something that reminded me, you said you were here in 2017, people said AWS, web, what are you even doing here? Now in 2022, you're talking in terms of migrating to cloud. Paul mentioned Outposts, so let's say that a customer says, "Hey, I'd like you to put in a thousand-node cluster in this data center that I happen to own, but from my perspective, I want to interact with it just like it's in your data center." In other words, the location doesn't matter. My experience is identical to interacting with AWS in an AWS data center, in a CoLo that works with AWS, but instead it's my physical data center. When we're tracking the percentage of IT that is on-prem versus off-prem, what is that? What I just described, is that cloud? And in five years are you no longer going to be talking about migrating to cloud, because people go, "What do you mean migrating to cloud? What are you even talking about? What difference does it make?" It's either something that AWS is offering or it's something that someone else is offering. Do you think we'll be at that point in five years, where in this world of virtualization and abstraction, you talked about Kubernetes, we should be there already, thinking in terms of it doesn't matter as long as it meets latency and sovereignty requirements? So that's your prediction, we're all about insights and supercomputing- >> My prediction- >> In five years, will you still be talking about migrating to cloud or will that be something from the past? >> In five years, I still think there will be a component.
I think the majority assumption will be that things are cloud-native and you start in the cloud, and that there is perhaps an aspect of that that will be interacting with some sort of an edge device or some sort of an on-premises device. And we hear more and more customers that are saying, "Okay, I can see the future. I can see that I'm shrinking my footprint." And you can see them still saying, "I'm not sure how small that beachhead will be, but right now I want to at least say that I'm going to operate in that hybrid environment." And so I'd say, again, given the pace of this community, in five years we're still going to be talking about migrations, but I'd say the vast majority will be a cloud-native, cloud-first environment. And how do you classify that Outpost sitting in someone's data center? I'll leave that up to the analysts, but I think it would probably come down as cloud spend. >> Great place to end. Ian, you and I now officially have a bet. In five years we're going to come back. My contention is, no, we're not going to be talking about it anymore. >> Okay. >> And kids in college are going to be like, "What do you mean cloud? It's all IT, it's all IT." And they won't remember this whole phase of moving to cloud and back and forth. With that, join us in five years to see the result of this mega-bet between Ian and Dave. I'm Dave Nicholson with theCUBE, here at Supercomputing Conference 2022, day three of our coverage with my co-host Paul Gillin. Thanks again for joining us. Stay tuned, after this short break, we'll be back with more action. (lively music)
Ian Smith, Chronosphere | KubeCon + CloudNativeCon NA 2022
(upbeat music) >> Good Friday morning everyone from Motor City, Lisa Martin here with John Furrier. This is our third day, theCUBE's third day of coverage of KubeCon + CloudNativeCon '22 North America. John, we've had some amazing conversations the last three days. We've had some good conversations about observability. We're going to take that one step further and look beyond its three pillars. >> Yeah, this is going to be a great segment. Looking forward to this. This is an in-depth conversation on observability. The guest is technical and he's on the front lines with customers. Looking forward to this segment. Should be great. >> Yeah. Ian Smith is here, the field CTO at Chronosphere. Ian, welcome to theCUBE. Great to have you. >> Thank you so much. It's great to be here. >> All right. Talk about the traditional three-pillars approach in observability. What are some of the challenges with that, and how does Chronosphere solve those? >> Sure. So, hopefully everyone knows, people think of the three pillars as logs, metrics and traces. What do you do with that? There's no action there. It's just data, right? You collect this data, you go put it somewhere, but it's not actually talking about any sort of outcomes. And I think that's really the heart of the issue: you're not achieving anything. You're just collecting a whole bunch of data. Where do you put it? What are you... What can you do with it? Those are the fundamental questions. And so one of the things that we're focused on at Chronosphere is, well, what are those outcomes? What is the real value of that? And for example, thinking about phases of observability. When you have an incident or you're trying to investigate something through observability, you probably want to know what's going on. You want to triage any problems you detect. And then finally, you want to understand the cause of those and be able to take longer-term steps to address them.
What do customers do when they start thinking about it? Because observability has that promise: hey, you know, get the data, we'll throw AI at it. >> Ian: Yeah. >> And that'll solve the problem. When they get over their skis, when do they realize that they're really not tackling it properly, or the ones that are taking the right approach? What's the revelation? What's your take on that? You're in the front lines. What's going on with the customer? The good and the bad. What's the scene look like? >> Yeah, so I think the bad is, you know, you end up buying a lot of things or implementing even in open source or self-building, and it's very disconnected. You're not... You don't have a workflow, you don't have a path to success. If you ask different teams, like, how do you address these particular problems, they're going to give you a bunch of different answers. And then if you ask about what their success rate is, it's probably very uneven. Another key indicator of problems is, well, do you always need particular senior engineers on your incidents, or to help answer particular performance problems? And it's a massive anti-pattern, right? You have your senior engineers who probably need to be focused on innovation and competitive differentiation, but then they become the bottleneck. And you have this massive sort of wedge of maybe less experienced engineers, but no less valuable from the overall company perspective, who aren't effective at being able to address these problems because the tooling isn't right, the workflows are incorrect. >> So the senior engineers are getting pulled in to kind of fix and troubleshoot or observe what the observability data did or didn't deliver. >> Correct. Yeah. And you know, the promise of observability, a lot of people talk about unknown unknowns, and there's a lot of, you know, crafting complex queries and all these other things. It's a very romantic sort of deep-dive approach.
But realistically, you need to make it very accessible. If you're relying on complex query languages and the required knowledge about the architecture and everything every other team is doing, that knowledge is going to be super concentrated in just a couple of heads. And those heads shouldn't be woken up every time at 3:00 AM. They shouldn't be on every incident call. But oftentimes they are the sort of linchpin to addressing, oh, as a business we need to be up 99.99% of the time. So how do we accomplish that? Well, we're going to end up burning those people. >> Lisa: Yeah. >> But also it leads to great dissatisfaction among the bulk of the engineers who are, you know, just trying to build and operate the services. >> So talk... You mentioned that some of the problems with the traditional three pillars are that it's not outcome-based and it leads to siloed approaches. What is Chronosphere's definition, and can you walk us through those three phases and how that really gives you that competitive edge in the market? >> Yeah, so the three phases being know, triage and understand. So just knowing about a problem, and you can relate this very specifically to capabilities, but it's not capabilities first, not feature-function first. So know: I need to be able to alert on things. So I do need to collect data that gives me those signals. But particularly as, you know, the industry starts moving towards SLOs, you start getting more business-relevant data. Everyone knows about alert storms. And as you mentioned, you know, there's this great white hope of AI and machine learning, but AI and machine learning is putting trust in sort of a black box, or the more likely reality is really a statistical model. And you have to go and spend a very significant amount of time programming it for sort of not-great outcomes. So know: okay, I want to know that I have a problem, I want to maybe understand the symptoms of that particular problem.
And then triage: okay, maybe I have a lot of things going wrong at the same time, but I need to be very precise about my resources. I need to be able to understand the scope and importance. Maybe I have five major SLOs being violated right now. Which one has the greatest business impact? Which symptoms are impacting my most valuable customers? And then from there, not getting into the situation, which is very common, where, okay, well, every customer-facing engineering team has to be on the call. So we have 15 customer-facing web services; they all have to be on that call. Triage is that really important aspect of really mitigating the cost to the organization, because everyone goes, oh, well, I achieved my MTTR, and my experience from a variety of vendors is that most organizations, unless you're essentially failing as a business, achieve their SLA, you know, three nines, four nines, whatever it is. But the cost of doing that becomes incredibly extreme. >> This is a huge point. I want to dig into that if you don't mind, 'cause you know, we've all been hearing about the cost of ownership models in IT, the cost of doing business, the cost of the shark fin, the iceberg, what's under the water, all those metaphors. >> Ian: Yeah. >> When you look at what you're talking about here, there are actually real hardcore costs that might be under the water, so to speak, like labor, senior engineering time, 'cause Cloud Native engineers are coding in the pipelines. A lot of impact. Can you quantify and just share an example or illustrate where the costs are? 'Cause this is something that's kind of not obvious. >> Ian: Yeah. >> On the hard costs. It's not like a dollar amount, but time, resources, wrong triage, gaps in the data. What are some of the costs? >> Yeah, and I think they're actually far more important than the hard costs of infrastructure and licensing.
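The triage step Ian describes, deciding which of several simultaneous SLO violations matters most to the business, can be sketched as a simple scoring function. The fields and weights below are invented for illustration; this is the general idea, not Chronosphere's actual model:

```python
# Toy triage sketch: rank simultaneous SLO violations by business impact so
# only the relevant teams get paged first. All data is invented.

violations = [
    {"slo": "checkout-latency", "error_budget_burn": 4.0, "revenue_weight": 0.9},
    {"slo": "search-latency",   "error_budget_burn": 2.0, "revenue_weight": 0.5},
    {"slo": "admin-uptime",     "error_budget_burn": 6.0, "revenue_weight": 0.1},
]

def impact(v):
    # Weight how fast the error budget is burning by business importance.
    return v["error_budget_burn"] * v["revenue_weight"]

ranked = sorted(violations, key=impact, reverse=True)
top = ranked[0]["slo"]  # page the owning team for this one first
```

Note that the fastest-burning SLO (the admin one) is not the top priority once business weight is applied, which is exactly the "scope and importance" distinction being made above.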
And of course there are many organizations out there using open source observability components together. And they go, oh, it's free. No licensing costs. But you think again about those outcomes. Okay, I have these 15 teams, and okay, I have X number of incidents a month. I pull a representative from every single one of those teams on. And it turns out that, you know, as we get down into further phases, we need to be able to understand and remediate the issue, but actually only two teams were required for that. There are 13 individuals who did not need to be on the call. Okay, yes, I met my SLA and MTTR, but from a competitive standpoint, if I'm comparing myself to a very similar organization that only needed to impact those two engineers versus the 15 that I had over here, who is going to be the most competitive? Who's going to be most differentiated? And it's not just in terms of number of lines of code, but leading to burnout of your engineers and the churn that follows. For VPs of engineering, particularly in today's economy, the hardest thing to do is acquire engineers and retain them. So why do you want to burn them unnecessarily when you can say, okay, well, I can achieve the same or better result if I think more clearly about my observability, but reduce the number of people involved, reduce the number of, you know, senior engineers involved, and ultimately have those resources more focused on innovation. >> You know, one thing I at least want to get in there, one thing that's come up a lot this year, more than I've ever seen before, we've heard about the skill gaps, obviously, but burnout is huge. >> Ian: Yes. >> That's coming up more and more. This is a real... This actually doesn't help the skills gap either. >> Ian: Correct. >> Because you got skills gap, that's a cost potentially. >> Ian: Yeah. >> And then you got burnout. >> Ian: Yeah. >> People just kind of sitting on their hands or just walking away. >> Yeah.
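The 15-teams-versus-two-teams math above is easy to make concrete. A rough sketch of the hidden labor cost of over-paging, with entirely made-up headcounts, durations, and rates:

```python
# Hypothetical cost of pulling engineers onto an incident call.
# Rates, headcounts, and durations are invented for illustration.

def incident_labor_cost(teams_paged, engineers_per_team, hours, hourly_rate):
    return teams_paged * engineers_per_team * hours * hourly_rate

# Every customer-facing team on the call vs. only the two teams needed.
everyone = incident_labor_cost(teams_paged=15, engineers_per_team=1,
                               hours=2, hourly_rate=100)  # 15 * 2 * 100 = 3000
targeted = incident_labor_cost(teams_paged=2, engineers_per_team=1,
                               hours=2, hourly_rate=100)  # 2 * 2 * 100 = 400

wasted = everyone - targeted  # the 13 people who didn't need to be there
```

Multiply that waste by X incidents a month and the "free" open-source stack stops looking free, before even counting the burnout cost the conversation turns to next.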
So one of the things that we're doing with Chronosphere is, you know, while we do deal with the, you know, the pillar data, we're thinking about it more as, what can you achieve with that, right? So, aligning with the know, triage and understand. And so you think about things like alerts, you know, dashboards, being able to start triaging your symptoms. But really importantly, how do we bring in the capabilities of things like distributed tracing where they can actually impact this? And it's not just in the context of, well, what can we do in this one incident? So there may be scenarios where you absolutely do need those power users or those really sophisticated engineers. But from a product challenge perspective, what I'm personally really excited about is, how do you capture that insight and those capabilities and then feed that back in from a product perspective so it's accessible? So you know, everyone talks about unknown unknowns in observability and then everyone sort of is a little dismissive of monitoring, but monitoring is the thing that democratizes access and decision-making capacity. I once worked at an organization where there were three engineers in the whole company who could generate the list of customers who were impacted by a particular incident. And I was in post-sales at the time, so anytime there was a major incident, I'd need to go generate that list. Those three engineers were on every single incident until one of them got frustrated and built a tool. But he built it entirely on his own. So think about it from an observability perspective: can you build a thing that makes all those kinds of capabilities accessible, to the point where you take that alert and you know which customers are affected, or whatever other context was useful last time but took an hour or two hours to achieve?
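The "impacted customers" tool Ian recalls, once a job for three senior engineers, is conceptually just a small join over error telemetry. A hypothetical sketch (the event schema and field names are invented):

```python
# Sketch of an "impacted customers" helper: join error events from the
# failing service against customer identifiers, deduplicated. The data
# structures here are hypothetical, not any vendor's actual schema.

error_events = [
    {"customer_id": "c1", "service": "payments"},
    {"customer_id": "c2", "service": "payments"},
    {"customer_id": "c1", "service": "payments"},  # duplicate hit
    {"customer_id": "c3", "service": "search"},
]

def impacted_customers(events, service):
    """Deduplicated, sorted list of customers who hit errors on a service."""
    return sorted({e["customer_id"] for e in events if e["service"] == service})

affected = impacted_customers(error_events, "payments")
```

The logic is trivial; the point of the anecdote is that when this capability lives only in a few heads rather than in the product, those heads get pulled into every incident.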
And so that's what really makes a dramatic difference over time: it's not about the day-one experience, but how does the product evolve with the requirements and the workflow- >> And Cloud Native engineers, they're coding, so they can actually be reactive. That's interesting, a platform and a tool. >> Ian: Yes. >> And platform engineering is the hottest topic at this event. And this year, I would say, with Cloud Native we're hearing a lot more. I mean, I think that comes from the fact that SRE's not really SRE, I think it's more a platform engineer. >> Ian: Yes. >> Not everyone's an... Not every company has an SRE or SRE environment. But platform engineering is becoming that new layer that enables the developers. >> Ian: Correct. >> This is what you're talking about. >> Yeah. And there's lots of different labels for it, but I think organizations that really think about it well, they're thinking about things like those teams, that developer efficiency, developer productivity. Because again, it's about the outcomes. It's not, oh, we just need to keep the site reliable. Yes, you can do that, but as we talked about, there are many different ways that you can burn unnecessary resources. But if you focus on developer efficiency and productivity, there's retention, there's that competitive differentiation. >> Let's uplevel those business outcomes. Obviously you talked about the three phases: know, triage and understand. You've got great alignment with the Cloud Native engineers, the end users. Imagine that you're facilitating companies' ability to reduce churn, attract more talent, retain talent. But what are some of the business outcomes? Like the customer experience, the brand? >> Ian: Sure. >> Talk about it in some of those contexts. >> Yeah. One of the things that not a lot of organizations think about is, what is the reliability of my observability solution? It's like, well, that's not what I'm focused on. I'm focused on the reliability of my own website.
Okay, let's take the common open-source pattern. I'm going to deploy my observability solution next to my core site infrastructure. Okay, I now have a platform problem, because DNS stopped working in the cloud provider of my choice. It's also affecting my observability solution. So at the moment that I need- >> And the tool chain and everything else. >> Yeah. At the moment that I need it the most, to understand what's going on and to be able to know, triage and understand, it fails me at the same time. So reliability has this very big impact. So being able to make sure that my solution is reliable, so that it's there when I need it the most and can support the reliability of my own solution, my own SLA, that's a really key aspect of it. One of the things though that we look at is, it's not just about the outcomes and the value, it's ROI, right? It's, what are you investing to put into that? So we've talked a little bit about the engineering cost, there's the infrastructure cost, but there's also a massive data explosion, particularly with Cloud Native. >> Yes. Give us... All right, put that into real-world examples. A customer that you think really articulates the value of what Chronosphere is delivering and why you're different in the market. >> Yeah, so DoorDash is a great customer example. They're here at KubeCon talking about their experience with Chronosphere and, you know, the Cloud Native technologies, Prometheus and those other components that align with Chronosphere. But being able to undergo, you know, a transformation, they're a Cloud Native organization, but going through a transformation from StatsD to very heavy microservices, very heavy Kubernetes and orchestration. And doing that with a massive explosion, particularly during the last couple of years, obviously that's had a very positive impact on their business. But being able to do that in a cost-effective way, right?
One of the dirty little secrets about observability in particular is, your business growth might be, let's say, 50%, 60%; your infrastructure spend in the cloud providers is maybe going to be another 10, 15% on top of that. But then you have the intersection of, well, my engineers need more data to diagnose things, the business needs more data to understand what's going on, plus we've had this massive explosion of containers and everything like that. So oftentimes your business growth is going to be more than doubled by your observability data growth, in SaaS solutions and even your on-premises solutions. What's the main cost driver? It's the volume of data that you're processing and storing. And so at Chronosphere, one of the key things that we do, because we're focused on organizational pain for larger-scale organizations, is, well, how do we extract the maximum value from the data you're generating without having to store all of that data, and then present it not just from a cost perspective, but also from a performance perspective. >> Yes. >> John: Yeah. >> And so feeding all into developer productivity, and also lowering that investment so that your return can stand out more clearly and more valuably when you are assessing that TCO. >> Better insights and outcomes drive developer productivity for sure. That's also a top theme here at KubeCon this year. It always is, but this is more than ever 'cause of the velocity. My question for you, given that you're the field chief technology officer for Chronosphere and you have a unique position, you've got great experience in the industry, been involved in some really big companies and cutting edge. What's the competitive landscape? 'Cause the customers sometimes are confused by all the pitches they're getting from other vendors. Some are bolting on observability. Some have created, I would say, a shim layer or a horizontally scalable platform or platform engineering approach. It's a data problem. Okay.
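Since the main cost driver is the volume of data processed and stored, one standard way to cut it is to aggregate raw samples into coarser rollups before storage. The sketch below illustrates that general technique with invented numbers; it is not Chronosphere's actual pipeline:

```python
# Reduce stored observability data by rolling per-second samples up into
# per-minute averages before storage: same broad signal, far fewer points.
# Generic illustration only.

def rollup(samples, window):
    """Aggregate fixed-size windows of samples into their averages."""
    out = []
    for i in range(0, len(samples), window):
        chunk = samples[i:i + window]
        out.append(sum(chunk) / len(chunk))
    return out

raw = [float(i % 10) for i in range(600)]  # 10 minutes of 1-second samples
stored = rollup(raw, window=60)            # one stored point per minute

reduction = len(raw) / len(stored)         # 60x fewer stored points
```

The trade-off is resolution: rollups lose per-second spikes, so real systems typically keep raw data briefly for triage and retain only aggregates long-term.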
This is a data architecture challenge. You mentioned that many times. What's the difference between a pretender and a player in this space? What's the winning architecture look like? What's a, I won't say phony or fake, solution, but one that customers should be aware of? Because in my opinion, if you have a gap in the data or you configure it wrong, like a bolt-on, and say DNS crashes, you're dead in the water. >> Ian: Yeah. >> What's the right approach from a customer standpoint? How do they squint through all the noise to figure out what's the right approach? >> Yeah, so I mean, I think one of the ways, and I've worked with customers in a pre-sales capacity for a very long time, so I know all the tricks of guiding you through. I think it needs to be very clear that customers should not be guided by the vendor. You don't talk to one vendor and let them decide, oh, I'm going to evaluate based off this. We need to particularly get away from feature-based evaluations. Features are very important, but they all have to be aligned around outcomes. And then you have to clearly understand: where am I today? What do I do today? And what is going to be the transformation that I have to go through to take advantage of these features? It can get very entrancing to say, oh, there's a list of 25 features that this solution has that no one else has, but how am I going to get value out of that? >> I mean, distributed tracing is a distributed word. Distributed is the key word. This is a system architecture. The holistic big picture comes in. How do they figure that out? Knowing what they're transforming into? How does it fit in? >> Ian: Yeah. >> What's the right approach? >> Too often, I'd say, distributed tracing particularly is, you know, bought because, again, you look at the shiny features, the premise and the MTTR expectations, all these other things. And then it's off to the side.
We go through the traditional usage of metrics, very often very log-heavy approaches, maybe even some legacy APM. And then it's sort of a last resort. And out of all the tools, I think distributed tracing is the worst in the problem we talked about earlier, where the most sophisticated engineers, the ones who are longest tenured, are the only ones who end up using it. So adoption is really, really poor. So again, what do we do today? Well, we alert, we probably want to understand our symptoms, but then what is the key problem? Oh, we spend a lot of time digging into where the problem exists in my architecture. We talked about, you know, getting every engineer in at the same time, but how do we reduce the number of engineers involved? How do we make it so that, well, this looks like a great day-one experience, but what is my day-30 experience like? Day 90? How does the product get more valuable? How do I get my most senior engineers out of this, not just on day one, but as we progress through it? >> You got to operationalize it. That's the key. >> Yeah, correct. >> Summarize this as we wrap here. When you're in customer conversations, what is the key factor behind Chronosphere's success? If you can boil it down to that key nugget, what is it? >> I think the key nugget is that we're not just fixated on sort of, like, technical features and functions, and frankly gimmicks, of like, oh, what could you possibly do with these three pillars of data? It's more about, what can we do to solve organizational pain at the high level? You know, things like, what is the cost of these solutions? But then also on the individual level, it's like, what exactly is an engineer trying to do? And how is their quality of life affected by this kind of tooling? And it's something I'm very passionate about. >> Sounds like it. Well, the quality of life's important, right? For everybody, for the business, and ultimately it ends up affecting the overall customer experience.
So great job, Ian, thank you so much for joining John and me talking about what you guys are doing beyond the three pillars of observability at Chronosphere. We appreciate your insights. >> Thank you so much. >> John: All right. >> All right. For John Furrier and our guest, I'm Lisa Martin. You're watching theCUBE live Friday morning from KubeCon + CloudNativeCon '22 from Detroit. Our next guest joins theCUBE momentarily, so stick around. (upbeat music)
Ian Massingham, MongoDB and Robbie Belson, Verizon | MongoDB World 2022
>>Welcome back to NYC, theCUBE's coverage of MongoDB World 2022, a few thousand people here, at least, bigger than many people perhaps expected, and a lot of buzz going on, and we're gonna talk devs. I'm really excited to welcome back Robbie Belson, who's the developer relations lead at Verizon, and Ian Massingham, who's the vice president of developer relations at MongoDB. Gents, good to see you. >>Great to be here. >>Thanks for being with us. So Robbie, we just met a few weeks ago at the Red Hat Summit in Boston, and I was blown away by what Verizon is doing in developer land. And of course, Ian, you know, Mongo's raison d'être is developers. Start there: why is Mongo so developer friendly, from your perspective? >>Well, it's been the ethos of MongoDB since day one. You know, back when we launched the first version of MongoDB back in 2009, we've always been about making developers' lives easier. And then in 2016, we announced and released MongoDB Atlas, which is our cloud managed service for MongoDB, you know, starting with a small number of regions built on top of AWS, and about 2,500 adoption events per week for MongoDB Atlas after the first year. Today, MongoDB Atlas provides a managed service for MongoDB developers around the world. We're present in almost a hundred cloud regions across AWS, GCP and Azure. And that adoption number is now running at about 25,000 developers a week. So, you know, the proof is really in the metrics. MongoDB is an incredibly popular platform for developers that wanna build data-centric applications. You just can't argue with the metrics, really. >>You know, Robbie, sometimes there's an analyst who comes up with these theories, and one of the theories I've been spouting for a long time is that developers are gonna win the edge. And now to see you at Verizon building out this developer community was really exciting to me. So explain how you got started on this journey. >>Absolutely. 
As you think about the Verizon 5G Edge, or mobile edge computing, portfolio, we knew from the start that developers would play a central role, not only in consuming the service, but in shaping the roadmap for what it means to build a 5G future. And so we started this journey back in late 2019, and fast forward to about a year ago with Mongo, we realized, well, wait a minute, you look at the core service offerings available at the edge, we didn't really know what to do with data. We wanted to figure it out. We wanted the vote of confidence from developers. So there I was in an apartment in Colorado, racing open source Mongo at the edge versus in the region: what would you see? And we saw tremendous performance improvements. It was so much faster, more than 40% faster for thousands and thousands of writes. And we said, well, wait a minute, there's something here. So what often starts as an organic, developer-led intuition or hypothesis can really expand to a much broader go-to-market motion that brings in the enterprise. And that's been our strategy from day one. >>Well, it's interesting. You talk about the performance. I just got off of a session talking about benchmarks in the financial services industry, you know, amazing numbers. And that's one of the hallmarks of Mongo: it can play in a lot of different places. So you guys both have developer relations in your title. Is that how you met, through some formal developer relations program? >>Yeah, I would say that Verizon is one of the few customers that we also collaborate with on a developer relations effort. You know, it's in our mutual best interest to try to drive MongoDB consumption amongst developers using Verizon's 5G Edge network and their platform. So of course we work together to help increase awareness of MongoDB amongst mobile developers that want to use that kind of technology. >>So what's your story on this? >>I mean, as I mentioned, everything starts with an organic developer discovery. It all started when I just cold messaged a developer advocate on Twitter, and here we are at MongoDB World. It's amazing how things turn out. But one of the things that's really resonated with me: as I was speaking with one of your leads within your organization, they were mentioning that as MongoDB developed over the years, the mantra really became, we wanna make software development easy. Yep. And that really stuck with me, because from a network perspective, we wanna make networking easy. Developers are not gonna care about the internals of a 5G network. In fact, they want us to abstract away those complexities so that they can focus on building their apps. So what better co-innovation opportunity than taking MongoDB, making software easy, and we make the network easy. >>So how do you think about the edge? I mean, to me, you know, there's a lot of edge use cases. Think about the Home Depot or Lowe's: okay, great, I can put a little mini data center in there. That's cool, that's edge. But when I think of Verizon, I mean, you've got cell towers, you've got the far edge. How do you think about edge, Robbie? >>Well, the edge is, I believe, a very ambiguous term by design. The edge is the device, the mobile device, an IoT device, right? It could be the radio towers that you mentioned. It could be the Metro edge, the CDN. No one edge is better than the other; they're all just serving different use cases. So when we talk about the edge, we're focused on the mobile edge, which we believe is most conducive to B2B applications: a fleet of IoT devices that you can control, a manufacturing plant, a fleet of ground and aerial robotics. And in doing so you can create a powerful compute mesh, where you could have a private network and private mobile edge computing by way of, say, an AWS Outpost, and then public mobile edge computing by way of AWS Wavelength. And why keep them separate? You could have a single compute mesh even with MongoDB, and this is something that we've been exploring. You can extend Atlas: take a cluster, leave it in the region, and then use Realm, the mobile portfolio, and spread it all across the edge. So you're creating that unified compute and data mesh together. >>So you're describing what we've been expecting: a new architecture emerging, and that's gonna probably bring new economics and new use cases, right? Where are we today in that? First of all, is that a reasonable premise, that this is a sort of new architecture being built out, and where are we in that build-out? How do you think about the future of that? >>Absolutely. It's definitely early days. I think we're still trying to figure it out, but the architecture is definitely changing: the idea to rip out a mobile database that was initially built and envisioned for the device, and only for the device, and say, well, wait a minute, why can't it live at the edge and ultimately become multi-tenant, given the data volume that may be produced in each of those edge zones? That was a hypothesis validated by developers that we continue to build out, but we recognize that we can't stay static. We gotta keep evolving. So one of our newest ideas, as we think about, well, wait a minute, how can Mongo play in the 5G future, is that we started to get really clever with our 5G network APIs. And I think we talked about this briefly last time: 5G programmability and network APIs have been talked about for a while, but developers haven't had a chance to really use them. And our edge discovery service, answering in this case the question of which database is the closest database, doesn't have to be invoked by the device anymore. You can take a thin client model and invoke it from the cloud using Atlas Functions. So we're constantly permuting across the entire portfolio, edge or otherwise, for what it means to build at the edge. We've seen such tremendous results. >>So how does Mongo think about the edge? We've been wondering, okay, which database is actually gonna be positioned best for the edge? >>Well, I think if you've got an ultra low latency access network, using data technology that adds latency is probably not a great idea. So MongoDB, since the very formative years of the company and product, has been built with performance and scalability in mind, including things like in-memory storage for the storage engine that we run as well. So really trying to match the performance characteristics of the data infrastructure with the evolution in the mobile network, I think, is really fundamentally important. And that first-principles build of MongoDB with performance and scalability in mind is actually really important here. >>So is that a lighter weight instance of Mongo, or not necessarily? >>No, not necessarily. We do have edge caching with Realm, the mobile database Robbie's already mentioned, but the core database is designed from day one with those performance and scalability characteristics in mind. >>I've been playing around with this, this is kind of a, I get a lot of heat for this term, but super cloud. So super cloud: you might have data on prem, you might have data in various clouds, you're gonna have data out at the edge. 
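The edge discovery flow Robbie describes above, answering "which database is the closest?" from the cloud rather than from the device, can be sketched as a simple latency-ranked lookup. To be clear, this is an illustrative stand-in, not Verizon's actual Edge Discovery API or Atlas Functions code; every endpoint name, zone name and latency figure below is invented for the example.

```python
def closest_endpoint(client_zone, endpoints):
    """Pick the database endpoint with the lowest measured latency to the
    client's zone. `endpoints` maps an endpoint URI to a dict of per-zone
    latency measurements in milliseconds."""
    reachable = {
        uri: zones[client_zone]
        for uri, zones in endpoints.items()
        if client_zone in zones
    }
    if not reachable:
        raise LookupError(f"no endpoint serves zone {client_zone!r}")
    # min() over the dict keys, ranked by their measured latency
    return min(reachable, key=reachable.get)

# Hypothetical measurements: two edge-zone replicas plus the parent region.
endpoints = {
    "mongodb://edge-bos.example.net": {"us-east-bos": 8, "us-east-nyc": 14},
    "mongodb://edge-nyc.example.net": {"us-east-nyc": 7},
    "mongodb://region-us-east.example.net": {"us-east-bos": 32, "us-east-nyc": 29},
}

print(closest_endpoint("us-east-nyc", endpoints))  # mongodb://edge-nyc.example.net
```

In the thin-client model described above, a lookup like this would run server-side (for instance in a cloud function), so the mobile device never has to carry the discovery logic itself.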
And you've got an abstraction that allows a developer to tap services without necessarily, well, if he or she wants to go deep into the infrastructure, great, but then there's a higher level of services that they can actually build for their customers. So is that a technical reality from a developer standpoint, in your view? >>We support that with the MongoDB multi-cloud deployment model. So you can place MongoDB Atlas nodes in any one of the three hyperscalers that we mentioned, AWS, GCP or Azure, and you can distribute your data across nodes within a cluster that is spread across different cloud providers. So that kind of answers the question about how you do data placement inside the MongoDB clustered environment that you run across the different providers. And then for the abstraction layer, when you say that, I hear, you know, drivers, ODMs, the other intermediary software components that we provide to make developers more productive in manipulating data in MongoDB. This is one of the most interesting things about the technology. We're not forcing developers to learn a different dialect or language in order to interact with MongoDB. We meet them where they are by providing idiomatic interfaces to MongoDB in JavaScript, in C sharp, in Python, in Rust; in fact, in 12 different programming languages that we support as a first party, plus additional programming languages that the community have created drivers and ODMs for. So there's really that model that you've described: the hypothesis exists in reality, using those different... >>It's not just a series of siloed instances. >>In different clouds; it's the fabric, essentially. Yeah. >>What does the Verizon developer look like? Where does that individual come from? We talked about this a little bit a few weeks ago, but I wonder if you could describe it. >>Absolutely. 
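To make the "idiomatic interfaces" point above concrete, here is a small sketch of the document model from Python. The nested customer document and the `matches` helper are illustrative only: the helper emulates a Mongo-style dotted-path equality filter in plain Python so the example runs without a server. With the real PyMongo driver, the equivalent query would be something like `db.customers.find({"orders.sku": "ABC-123"})`; the collection and field names here are made up.

```python
# A single document can nest related data that a relational schema
# would split across several joined tables.
customer = {
    "_id": 1,
    "name": "Ada Lovelace",
    "addresses": [{"type": "home", "city": "London"}],
    "orders": [
        {"sku": "ABC-123", "qty": 2},
        {"sku": "XYZ-999", "qty": 1},
    ],
}

def matches(doc, path, value):
    """Tiny emulation of a Mongo dotted-path equality filter: descend
    through nested dicts, and match if any element of an intermediate
    list satisfies the remainder of the path."""
    head, _, rest = path.partition(".")
    node = doc.get(head) if isinstance(doc, dict) else None
    if not rest:
        if isinstance(node, list):
            return value in node
        return node == value
    if isinstance(node, list):
        return any(matches(item, rest, value) for item in node)
    if isinstance(node, dict):
        return matches(node, rest, value)
    return False

print(matches(customer, "orders.sku", "ABC-123"))   # True
print(matches(customer, "addresses.city", "Paris"))  # False
```

Because the query shape mirrors the data shape, a developer reads and writes the same structures their application already uses, which is the heart of the "meet them where they are" argument.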
My view is that the Verizon, or just mobile edge, ecosystem of developers is present at this very conference. They're everywhere. They're building apps. And as Ian mentioned with those idiomatic interfaces, we need to take our network APIs, take the infrastructure that's being exposed, and make sure it's leveraging the languages, frameworks and automation tools developers already use, the likes of Terraform and beyond. We wanna meet developers where they are and build tools that are easy for them to use. And so you had talked about the super cloud; I often call it the cloud continuum. So we took it abstraction by abstraction. We started with: will it work in one edge? Will it work in multiple edges, public and private? Will it work in all of the edges for a given region, public or private? Will it work in multiple regions? Could it work in multiple clouds? We've taken it piece by piece by piece, and in doing so, abstracting away the complexity of the network, meeting developers where they are, providing those idiomatic interfaces to interact with our APIs. So think the edge discovery service, but not in a silo: within Atlas Functions. So the way that we're able to converge portfolios, using tools that developers already use, know and love, just makes it that much easier. >>Do you feel like, and I like the cloud continuum, 'cause that's really what it is, the super cloud, how does the security model evolve with that? >>At least in the context of the mobile edge, the attack surface is a lot smaller because it's only for mobile traffic. That's not to say that there couldn't be configuration or human errors introduced by a given application experience, but it is a much more secure and also reliable environment. From a failure domain perspective, there are more edge zones, so it's less conducive to a region-wide failure because there are so many more availability zones. And that goes hand in hand with security. Mm. 
>>Thoughts on security from your perspective? I mean, you've made some announcements this week, the encryption component that you guys announced. >>Yeah, we issued a press release this morning about a capability called queryable encryption, which, as we record this, Mark Porter, our CTO, is actually talking about in his keynote, and this is really the next generation of security for data stored within databases. The trade-off with field-level encryption in databases has always been very hard, very rigid. Either you have keys stored within your database, which means that your data is decrypted while it's resident in memory on your database engine; this, of course, allows you to perform query operations on that data. Or you have keys that are managed and stored in the client, which means the data is permanently obfuscated from the engine, and therefore you can't offload query capabilities to your data platform; you've gotta do everything in the client. So if you want 10 records but you've got a million encrypted records, you have to pull a million encrypted records to the client and decrypt them all, and you see a big performance hit in there. What we've got with queryable encryption, which we announced today, is the ability to keep data encrypted in memory in the engine, in the database, in the data platform; issue queries from the client; but use a technology called structured encryption to allow the database engine to make decisions, operate queries and find data without ever being able to see it, without it ever being decrypted in the memory of the engine. So it's groundbreaking technology, based on research in the field of structured encryption, and we're the first commercial database provider to bring this to market. >>So how does the mobile edge developer think about that? I mean, you hear a lot about shifting left and not bolting on security. Is this an example of that? 
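The core idea Ian outlines, a server matching data it can never read, can be illustrated with a deliberately simplified toy. To be explicit: this is not MongoDB's actual queryable encryption scheme; the real structured-encryption design is far more sophisticated, and naive deterministic tokens like the ones below leak equality patterns. The sketch only shows the shape of the idea: the client derives a keyed token per value, and the server indexes and compares tokens without ever seeing plaintext or keys.

```python
import hashlib
import hmac

CLIENT_KEY = b"client-side key that never leaves the client"

def token(value: str) -> str:
    # Deterministic keyed token used for equality matching. The server
    # only ever sees tokens, never plaintext values or the key.
    return hmac.new(CLIENT_KEY, value.encode(), hashlib.sha256).hexdigest()

# "Server" side: stores opaque ciphertext placeholders plus equality tokens.
server_store = []

def server_insert(record_id, encrypted_blob, ssn_token):
    server_store.append({"_id": record_id, "blob": encrypted_blob, "tok": ssn_token})

def server_find_by_token(ssn_token):
    # The engine can answer the equality query without decrypting anything.
    return [r["_id"] for r in server_store if r["tok"] == ssn_token]

# "Client" side: encrypts the payload (elided) and tokenizes the queryable field.
server_insert(1, b"<ciphertext-1>", token("123-45-6789"))
server_insert(2, b"<ciphertext-2>", token("987-65-4321"))

# Equality query: the client sends only a token; the server matches blindly.
print(server_find_by_token(token("123-45-6789")))  # [1]
```

The point of the contrast with client-side-only encryption is visible here: the server returns just the matching record, instead of the client having to pull every encrypted record and decrypt all of them locally.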
>>It certainly could be, but I think the mobile edge developer is still stuck with: how does this stuff even work? And I think we need to be mindful of that as we build out learning journeys. So one of my favorite moments with Mongo was an immersion day we hosted earlier last year where, although from an enterprise perspective we're focused on B2B, there was nothing stopping us building a B2C app based on the theme of the Winter Olympics. At the time, you could take a picture of Shaun White or of Nathan Chen and see that it was in fact that athlete, and then overlaid on that web app was the number of medals they'd accrued, with a little trumpeter congratulating you for selecting that athlete. So I think it's important to build trust and drive education with developers with a simpler experience, and then rapidly evolve, overlaying the features that Ian just mentioned over time. >>I think one of the keys with cryptography is, back to the familiar cloud messaging, offloading the heavy lifting. You actually need to make it difficult to impossible for developers to get this wrong, and you wanna make it as easy as possible for developers to deal with cryptography. And that of course is what we're trying to do with our driver technology combined with structured encryption, with queryable encryption. >>But Robbie, your point is there's lots of opportunity for education. I mean, I have to say, with the developers that I work with, I'm in awe of how they solve problems; if they don't know the answer, they figure out how to go get it. So how are your two communities, and other communities, coming together to solve such problems and share, whether it's best practices or how do I do this? >>Well, I'm not gonna lie, in-person events are a bunch of fun. And they're one of the easiest domain knowledge exchange opportunities: when you're all in person, you can ideate, you can whiteboard, you can brainstorm. And often those conversations are what lead to the infrastructure module that an immersion day features. It's just amazing what in-person events can do, but also community groups of interest, whether it's a Twitch stream or a particular code sample; we rely heavily on digital means today to upskill the developer community, but also, by means of a simple pull request, to introduce new features that maybe you weren't even thinking of before. >>Yeah. You know, that's a really important point, because when you meet people face to face, you build a connection. And so if you ask a question, you're more likely perhaps to get an answer, or if one doesn't exist in a search, you know: oh, hey, we met at the conference, let's collaborate on this. Guys, congratulations on this brave new world. You're in a really interesting spot. You know, "developers, developers, developers," as Steve Ballmer famously screamed. And I was glad to see Dave was not screaming and jumping up and down on the stage like that, but the message still resonates. So thank you, definitely appreciate it. All right, keep it right there. This is Dave Vellante for theCUBE's coverage of MongoDB World 2022 from New York City. We'll be right back.
Ian Massingham, MongoDB | AWS Summit SF 2022
>>Okay, welcome back everyone. Cube's coverage here. Live on the floor at AWS summit, 2022, an in person event in San Francisco. Of course, AWS summit, 2022 in New York city is coming up this summer. The cube will be there as well. Make sure you check us out then too, but we day two of coverage had a great guest here. I Han VP of developer relations, Mongo DB, formally of AWS. We've been known each other for a long time doing, uh, developer relations at Mongo DB. Welcome to the queue. Good to see >>You. Thank to be here. Thanks for inviting me, John. It's great >>To, so Mongo DB is, um, first of all, stocks' doing really well right now. Businesswise is good, but I still think it's undervalue. A lot of people think is, is a lot more going huge success with Atlas. So congratulations to the team over there. Um, what's the update? What's the relationship withs, you know, guys have been great partners for years. What's the new thing. Yeah. >>So MongoDB Atlas obviously runs on several different major cloud providers, but AWS is the largest partner that we work with in the public cloud. So the majority of our Atlas workloads for our customers are running on the AWS platform. And just earlier this year, we announced a new strategic collaboration agreement with AWS. That's gonna further strengthen and deepen that partnership that we have with them. >>What's the main product value right now on the scale on, on Atlas, what's the drive in the revenue momentum. >>So, I mean, you know, there's a huge trend in the industry towards cloud managed databases, right? You look back 10, 15 years ago when we first met, most customers were only and operating their own data infrastructure, either running it in their own data centers, or maybe if they were really early using the primitives that cloud providers like AWS offered to run their databases in the cloud when Amazon launched RDS back in 2009, I think it was, we started to see this trend towards cloud managed databases. 
We followed that with our own Atlas offering back in 2016. And as Andy jazzy from AWS would say very often it's offloading that UND differentiated, heavy lifting, allowing developers to focus on building applications. They don't have to win and operate the data infrastructure. We do it for them, and that has proven incredibly popular amongst our customers. You know, Atlas route right now is growing at 50, sorry, 85% car year on year growth. >>You know, um, I've been following MongoDB for a long, long time. I mean, going back to the lamp stack days, you know, and you think about what Mongo has done as a product because of the developer traction, you know, Mongo can't do this, just keeps getting better every year. And, and the, I think the stickiness with developers is a real big part of that. Can you your view there cuz you're in VE relations. I mean, developers all love Mongo. They're teaching in school. People are picking up a side hustles, they're coding on it, using it all everywhere. I mean it's well known. >>There's a few different reasons for that. I think the main one is the, the document orientated model that we use, the document data models that are used by Mongo DB, just a net way for developers to work with data. And then, uh, we've invested in creating 16 first party drivers that allow developers using various different programming languages, whether that's JavaScript or Python or rust to integrate MongoDB, natively and idiomatic with their software. So it's very, very easy for a developer to pick up MongoDB, grab one of these drivers from their package manager of their choice and then build applications that natively manipulate data inside MongoDB, whether that's MongoDB Atlas or our enterprise edition on their own premises. They get a very consistent and very easy to, I easy to use developer experience with our, with our platform. >>Talk about the go to market with AWS. You guys also have a tightly coupled relationships. 
There's been announcements there recently. Uh, what's changing most right now that people should pay attention to. Well, >>The first thing is there's a huge amount of technical integration between MongoDB and AWS services. And that's the basis for many of our customers choosing to run Mon Mongo DB on AWS. We're active in 23 AWS regions around the world. And there's many other integration points as well, like cryptographic protection of Mongo MongoDB, stored data using Amazon cryptographic services, for example, or building serverless applications with AWS Lambda and MongoDB servers. So there's a ton of technical integration. Yeah, but what we started to work on now is go to market integration with AWS as well. So you can buy Mongo DB Atlas through AWS's marketplace. You can use the payer, you go offering to pay for it with your AWS bill. And then we're collaborating with AWS on migrations and other joint go to market activities as well. That >>Means get incentives, the sales people at AWS. >>Of course our moreover I mean, it's just really easy for customers, really easy for developers to consume. Yeah, they don't need to contract with MongoDB. They can use their existing AWS contracting, their existing discounting relationships and pre purchasing arrangements with AWS to consume Atlas. >>It's the classic meet the customers where they >>Are exactly right. Meet the developer where they are and meet the customers where they are now with this new model as well. >>Yeah. I love marketplace. I think it's been great. You know, even with its kind of catalog and vibe, I think it's gonna get better and better, uh, over there teams doing good work. Um, and it's easy to consume. That's key. >>Yeah. Super easy. Reduce that friction and make it real easy for developers to adopt this. Right. >>Talk about some of the top customers that you guys share with AWS. 
What are some of the customers you guys have together and what the benefits of the >>Relationship joint references that we talk about? A lot, one of them is Shutterfly. So in the photographic products area, they built a eCommerce offering with MongoDB and AWS. The second is seven 11 with seven 11. We're doing a lot in the mobile space. So edge applications, we've got a feature in MongoDB Atlas that allows you to synchronize data with databases on mobile devices. Those can be phones point of sale devices or handheld devices that might be used in the parcel industry, for example. So seven 11 using us in that way. And then lastly with Pitney Bowes, we've got a big digital transformation project with Pitney Bowes where they've reimagined their, uh, postage and packaging services, delivering those to their customers, using MongoDB as a data store as well. >>I wanna get in some of the trends, you've got a great per you know, you know, Mongo from Amazon side and now you're there. Um, Mongo's, as you pointed out has, has been around for a long time. What are some of the stats? I mean, how many customers, how many countries? Well, it's pretty massive >>Mind. We've got almost quarter of a billion downloads today, 240 million MongoDB downloads since we launched the first product <laugh>, we've got 33,000 active customers that are using MongoDB Atlas today and we're running well over a million free tier clusters on MongoDB Atlas across all of the different providers where we operate the service as well. So these numbers are, you know, mind blowing in terms of scale. Uh, but of course at the core of that is operational excellence. Customers love Mongo DBS because they don't have to operate it themselves. They don't have to deal with fairly conditions. They don't have to deal with scaling. They don't have to deal with deployment. 
We do all of those things as part of the service offering, and customers get an endpoint that they can use with their applications to store and retrieve data reliably, and with consistently high performance. >>You know, in the media, something always has to be dead: the death of the iPhone, the death of this. Nothing really dies. MongoDB has always been kind of talked about that way. Well, it doesn't scale on the high end. Of course Oracle was saying that; I mean, all the big database vendors were kind of throwing darts at MongoDB. But it kept scaling. Atlas is a whole nother level. Could you just unpack that a little bit more? Why is it so important? Because scale is just, I mean, it's horizontal, but it's also performant. >>Exactly right. So with MongoDB's document access model that I've described already, you break some of the limitations that exist inside traditional relational databases. You know, they don't scale well if you've got high concurrency of data access, and they're typically difficult and expensive to scale because you need to shard data once you grow beyond individual cluster nodes; and you'll know that all relational databases suffer from these same kinds of issues. With non-relational systems, NoSQL systems like MongoDB, you have to think a little bit more about design at the beginning, designing the database to cater for the different access patterns that you have. But in return for that upfront preparation, that design work, you get near limitless scalability, and performance will scale nearly linearly with that scalability as well. So very much more high performance, very much more simplicity for the developer as their database gets larger and their cluster gets larger to support it. >>Yeah. You know, Amazon Web Services, Andy Jassy:
We talk to them all the time; every interview I've done with Swami and Matt Wood, or whoever on the team at the executive level, they've always said the same thing: there's not one database to rule the world, right? Obviously you're talking about Oracle, but even within AWS, customers are mixing and matching databases based on use cases. So in a distributed environment, they're all working together. So you guys fit nicely into that. How does that work? >>I think our strategy slightly counterbalances that. So, you know, they would say use the specific tool for the specific task that you have in hand. What we try to focus on is creating the simplest and most effective developer experience that we can, and then supporting different facets of the product in order to allow developers to address different use cases. A really good example is something like MongoDB Atlas Search. We integrated Apache Lucene into MongoDB Atlas. Customers can very simply apply Apache Lucene search indexes to the data that they've got in MongoDB, and then they can interact with that search data using the same drivers and API that they use for regular queries. So if you want to run search on your application data, you don't need a separate OpenSearch or Elasticsearch cluster; just turn on MongoDB Atlas Search and use that search facet. And we have other capabilities like that as well. >>So it's vertically integrating inside, within Mongo? >>Correct, yes, that's right. With the goal, always, of creating a really simple and effective developer experience, boosting developer productivity and helping developers get more done in less time. >>You mentioned serverless earlier. What's the serverless angle with AWS and Mongo? Is there one? >>Yeah. So we have MongoDB serverless, currently in preview. It has the characteristics that you would expect from a serverless database.
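A concrete sketch of that search point, that queries ride on the same drivers and aggregation interface as regular reads. This is a minimal illustration: the collection, index name, and field names are invented, though the `$search` stage follows Atlas Search's documented aggregation syntax.

```python
# Build an aggregation pipeline whose first stage is an Atlas Search
# full-text query. With a live Atlas cluster this would be passed to the
# same driver call used for ordinary aggregations; here we only construct
# and inspect the pipeline document.

def build_search_pipeline(index_name, query_text, path, limit=10):
    """Return an aggregation pipeline: a $search stage, then limit/project."""
    return [
        {"$search": {
            "index": index_name,
            "text": {"query": query_text, "path": path},
        }},
        {"$limit": limit},
        {"$project": {"_id": 0, path: 1}},
    ]

pipeline = build_search_pipeline("default", "open source", "title", limit=5)

# With a real cluster (not executed here), the call site is the ordinary
# aggregate() used for any other query:
#   results = db.articles.aggregate(pipeline)
```

No separate search cluster is involved: the search stage travels over the same connection and query surface as everything else, which is the vertical-integration point made above.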
So it's a consumption-based model: you provision an endpoint, it scales elastically in accordance with your usage, and you get billed by consumption units, much like the serverless paradigm that we've seen delivered by AWS. The same kind of model for MongoDB Atlas serverless. >>What attracted you to MongoDB? So you knew them before you moved over there. What's going on there? What's the culture like right now? >>Oh, the culture's great. I mean, it's a much smaller company than AWS, where I was before; you know, that's a very large organization. And one of the things that I really like about MongoDB is, as I've said earlier, we can serve the different use cases that a developer might have with a single product, with different aspects to it, different facets to it. And it's a really great conversation to have with a developer customer, to be able to offer one thing that helps them solve five or six different problems that have traditionally been quite hard for them to wrestle with, quite difficult for them to deal with. And then we've got this focus on developer experience through these driver packages that we have as well. So it's really great, as a developer relations pro, to have that kind of tooling in my kit bag that can help developers become more effective. >>Talk about tooling, 'cause you know, I always have kind of moments where I waffle. I love platforms; tools are being overused, too many tools, a tool for the tool, you know, the expressions. But we're seeing from developers, the ones that don't want to go under the hood, serverless plays beautifully. >>Yep. >>They want tools, they do. And the new engineering developers that are coming out of college and universities, they love tools. >>Yeah. And we actually have quite a few of those built into MongoDB Atlas.
So inside MongoDB Atlas, we've got things like an index optimizer, which will suggest the best way that you might index your data for better performance inside MongoDB running on Atlas. We've got a Data Explorer, much like another product that we've got called MongoDB Compass, that allows you to see and manipulate the data that you have stored within your database, natively within the Atlas interface. And then we also have a whole slew of different metrics and monitoring capabilities built into the platform. So these are aspects of Atlas that developers can take advantage of. And then over on the client side: Visual Studio Code plugins. So you can manipulate and operate with data directly inside Visual Studio Code, which is obviously the most common and popular IDE out there today, as well as integration with things like infrastructure-as-code tools. We support CloudFormation for provisioning. We have CDK constructs inside the CDK construct library. We also have a lot of customers using Terraform to provision MongoDB across both AWS and other providers. So having that developer tooling, of course, is a super important aspect of the developer experience. >>Building out deployments, observability is a big one. How does that fit in? 'Cause you need to not only measure everything here, but talk to other systems. >>Yeah. So we recently announced a provider for Prometheus and Grafana, so we can emit metrics into those providers: obviously CNCF projects, very common and popular with customers that are running on Kubernetes. We've got a Kubernetes operator for MongoDB Atlas as well, so you can provision MongoDB Atlas from within Kubernetes, as well as having our own native metrics directly within Atlas. >>Ian, you're crushing it. You've got all the data at your fingertips. Are you gonna be at KubeCon this year?
>>Yeah, we'll be at, uh, EU. The cube will be there. Great. Thanks for coming on. Appreciate the insight final world. I'll give you the last word. Tell the audience what's going on. What's at Mongo DB. What should they pay attention to? If they've used Mongo and are aware of it? What's the update. What's >>The so you should come to MongoDB world actually in New York at the beginning of June, June 7th, the ninth in the Javit center in New York. Gonna have our own show there. And of course we'd love to see you there. >>Okay. Cube comes here day two of eight, us summit, 2020, this Cub I'm John for your host. Stay with us more. Our coverage as day two winds down. Great coverage.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
AWS | ORGANIZATION | 0.99+ |
five | QUANTITY | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
2016 | DATE | 0.99+ |
John | PERSON | 0.99+ |
Ian Massingham | PERSON | 0.99+ |
Mongo | ORGANIZATION | 0.99+ |
2009 | DATE | 0.99+ |
San Francisco | LOCATION | 0.99+ |
Andy Jassy | PERSON | 0.99+ |
New York | LOCATION | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
Ian | PERSON | 0.99+ |
85% | QUANTITY | 0.99+ |
Atlas | ORGANIZATION | 0.99+ |
240 million | QUANTITY | 0.99+ |
Atlas | TITLE | 0.99+ |
33,000 active customers | QUANTITY | 0.99+ |
iPhone | COMMERCIAL_ITEM | 0.99+ |
Python | TITLE | 0.99+ |
MongoDB | TITLE | 0.99+ |
50 | QUANTITY | 0.99+ |
MongoDB Atlas | TITLE | 0.99+ |
MongoDB Atlas | TITLE | 0.99+ |
first | QUANTITY | 0.99+ |
2020 | DATE | 0.99+ |
one | QUANTITY | 0.99+ |
JavaScript | TITLE | 0.99+ |
today | DATE | 0.99+ |
23 | QUANTITY | 0.98+ |
MongoDB | ORGANIZATION | 0.98+ |
first product | QUANTITY | 0.98+ |
second | QUANTITY | 0.98+ |
Swami | PERSON | 0.98+ |
Grafana | ORGANIZATION | 0.98+ |
eight | QUANTITY | 0.98+ |
10, 15 years ago | DATE | 0.97+ |
Kubernetes | TITLE | 0.97+ |
Matt Wood | PERSON | 0.97+ |
this year | DATE | 0.97+ |
single product | QUANTITY | 0.97+ |
six different problems | QUANTITY | 0.96+ |
Andrew Backes, Armory & Ian Delahorne, Patreon | AWS Startup Showcase S2 E1 | Open Cloud Innovations
(upbeat music) >> Welcome to the AWS Startup Showcase, theCUBE's premier platform and show. This is our second season, episode one of this program. I'm Lisa Martin, your host, here with two guests to talk about open source. Please welcome Andrew Backes, the VP of engineering at Armory, and one of our alumni, Ian Delahorne, staff site reliability engineer at Patreon. Guys, it's great to have you on the program. >> Thank you. >> Good to be back. >> We're going to dig into a whole bunch of stuff here in the next fast-paced 15 minutes. But Andrew, let's go ahead and start with you. Give the audience an overview of Armory: who you guys are, what you do. >> I'd love to. So Armory was founded in 2016 with the vision to help companies unlock innovation through software. And what we're focusing on right now is helping those companies make software delivery continuous, collaborative, scalable, and safe. >> Got it; those are all very important things. Ian, help the audience: if anyone isn't familiar with Patreon, it's a very cool platform. Talk to us a little bit about that, Ian. >> Absolutely. Patreon is a membership platform for creators to be able to connect with their fans, and for fans to be able to subscribe to their favorite creators, helping creators get paid and earn a living just by being connected straight to their audience. >> Very cool; creators like podcasters, even journalists, video content creators. >> Absolutely. There's so many; there's everything from, like you said, journalists, YouTubers, photographers, 3D modelers. We have a nightclub that's on there; there's several theater groups on there. There's a lot of different creators. I keep discovering new ones every day. >> I like that; I've got to check that out, very cool. So Andrew, let's go to you. We talk about "enterprise scale," and I'm using air quotes here, 'cause it's a phrase that we use in every conversation in the tech industry, right? Scalability is key.
Talk to us about what enterprise scale actually means from Armory's perspective. Why is it so critical? And how do you help enterprises actually achieve it? >> Yeah, so I think a lot of the time when companies think about enterprise scale, they think about the volume of infrastructure, or the volume of software, that's running at any given time. There are also a few more things that go into that, beyond just how many EC2 instances or containers you're running. Also velocity: how much time does it take you to get features out to your customers? And then stability and reliability. Then of course, in enterprises, it isn't as simple as everyone deploying to the same targets. It isn't always just EC2; a lot of the time it's going to be multiple targets: EC2, ECS, Lambda. All of these workloads are out there running. And how does a central platform team or a tooling team at a site enable that for users, enable deployment capabilities to those targets? Then, on top of that, there are going to be site-specific technologies. How does your deployment tooling integrate with those site-specific technologies? >> Andrew, is enterprise scale now even more important, given the very transformative events we've seen the last two years? We've seen such acceleration: cloud adoption, digital transformation really becoming a necessity for businesses to stay alive. Do you think that scale is now even more important? >> Definitely, definitely. We went through the first wave of digital transformations, where companies were moving to the cloud, and we know that's accelerating quite a bit. So that scale is all moving to the cloud, and the number of targets being deployed to at any given moment just keeps increasing. So that is a concern that companies need to address. >> Let's talk about the value; we're going to get into Spinnaker here and the deployment.
Let's start, Andrew, with the value that Armory delivers on top of Spinnaker. What makes this a best-of-breed solution? >> Yeah, so on top of open-source Spinnaker, there are a lot of other building blocks that you're going to need to deploy at scale. You're going to need to be able to provide modules, or some way of giving your users reusable building blocks catered to your site. So that is one of the big areas that Armory focuses on: how can we provide building blocks on top of open-source Spinnaker that sites can use to tailor the solution to their needs. >> Got it; tailor it to their needs. Ian, let's bring you back into the conversation. Now, talk to us about the business needs, the compelling event, that led Patreon to choose Spinnaker on top of Armory. >> Absolutely. Almost three years ago, we had an outage which resulted in our payment processing slowing down. And that's something we definitely don't want to have happen, because this would hinder creators' ability to get paid on time, for them to be able to pay their employees, pay their rent, and support everything and everyone that depends on them. And there were many factors that went into this outage, and one of them we identified was that it was very hard for us, with our custom-built deploy tooling, to easily deploy fast and to roll back if things went wrong. So I had used Spinnaker before at a previous employer, early on, and I knew that it would be a tool that we could use to solve our problem. The problem was that the SRE team at Patreon at that time was only two people. Spinnaker is a very complex product; I didn't have the engineering bandwidth to set it up, deploy it, and manage it on my own. And I happened to have heard of Armory just the week before, and was like, "This is the company that could probably help me solve my problems." So I engaged early on with Andrew and the team.
And we migrated our custom deploys into Spinnaker, and that helped stabilize our deploys and speed them up. >> So you were saying that the deployments were taking way too long before. And of course, as you mentioned, from a payment processing perspective that's people's livelihoods, so that's a pretty serious issue. You found Armory a week into searching; it seems like things went pretty quickly. >> And the week before the incident, one of the co-founders had randomly reached out to me and was like, "You might be interested in this; we're doing this thing with Spinnaker, it's called Armory." And I kind of filed it away. And then it was fortuitous that we were able to use them, to just reach out to them, like, a week later. >> That is fortuitous; my goodness, what good outreach and good timing there on Armory's part. And sticking with you a little bit: talk to us about the business challenges that Armory helps you resolve. What is it about it that just makes you know this is the exact right solution for us? Obviously you talked about not going direct with Spinnaker as a very lean IT team, but what are some of the key business needs that it's solving? >> Yeah, there are several business things that we've been able to leverage Armory for. One of them, as I mentioned: having a deployment platform that we know will give us stable deploys has been very important. They have a policy engine module that we use for making sure that certain environments can only be deployed to by certain individuals, for compliance reasons. And we use their pipelines-as-code module to build reusable deploy pipelines, so that software engineers can easily integrate Spinnaker into their builds without having to know a lot about Spinnaker.
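A plain-Python sketch of what those two modules boil down to: stamping a standard pipeline out of a few variables, and gating environments behind an allow-list. Stage names, fields, and team names here are invented for illustration; Armory's actual pipelines-as-code and policy engine use their own formats.

```python
# Reusable "pipelines as code": a service team supplies a few variables and
# gets back a standard deploy pipeline, without knowing the deploy tool's
# internals.
def build_deploy_pipeline(service, image, environment):
    return {
        "name": f"deploy-{service}-{environment}",
        "stages": [
            {"type": "bake", "image": image},               # build the artifact
            {"type": "deploy", "target": environment},      # roll it out
            {"type": "verify", "rollbackOnFailure": True},  # check, undo if bad
        ],
    }

# Policy-engine idea: only certain groups may deploy to certain environments.
ALLOWED_DEPLOYERS = {
    "production": {"sre-team"},
    "staging": {"sre-team", "app-team"},
}

def policy_allows(environment, deployer_group):
    return deployer_group in ALLOWED_DEPLOYERS.get(environment, set())

pipeline = build_deploy_pipeline("payments", "payments:1.4.2", "production")
assert pipeline["name"] == "deploy-payments-production"
assert policy_allows("production", "sre-team")
assert not policy_allows("production", "app-team")  # compliance gate holds
```

The design point is that the template and the policy live centrally, so each service team only ever touches its own variables.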
It's like: here, take this pipeline module, add your variables into it, and you'll be off to the races deploying. So those are some of the value-adds that Armory has been able to add on top of Spinnaker. On top of that, we use their managed products. They have a team that's managing our Spinnaker installation, helping us with upgrades, helping with issues, all that stuff, which unlocks us to focus on building for our creators instead of focusing on operating Spinnaker. >> Andrew, back to you. Talk to me a little bit, as the VP of engineering, about the partnership, the relationship that Armory has with Patreon. How symbiotic is it? How much are they helping you to develop the product that Armory is delivering to its customers? >> Yeah, one of the main things we want to make sure we do is help Patreon be successful. There are going to be some site-specific needs there that we want to make sure we are in tune with and helping with, but really we view it as a partnership. Patreon has worked with us for, well, I can't believe it's been three years, or a little bit more now. We have had a lot of feedback sessions, a lot of going back and forth on how we can improve our product to meet the needs of Patreon better, and then of course the wider market. One thing that is neat about a smaller SRE team like the one Ian is on is that they can depend on us more; they have less bandwidth themselves to invest into their tooling. So that's the opportunity for us: to provide those more mature building blocks to them, so that they can combine them in a way that meets their needs and their business needs. >> And Ian, back to you. Talk to me about how the partnership with Armory has been. You said it's been almost three years now; how has that helped you do your job better as an SRE? What are some of the advantages to that role? >> Yeah, absolutely.
Armory has been a great partner to work with. We've used their expertise to help bring new features into open-source Spinnaker, especially when we decided that we wanted to not only deploy to EC2 instances, but to deploy to Elastic Container Service and Lambdas, shifting from our normal instance-based deploys into containerization. There were several warts around the existing Elastic Container Service and Lambda deploys, and we were able to work with Armory and have them champion some changes inside open source, as well as in their custom modules, to help us shift our deploys to those targets. >> Got it. Andrew, back over to you. I want to walk through what you talked about from an enterprise scale perspective, some of the absolutely critical components there. But I want to talk about what Armory has done to help customers like Patreon address things like speed to market and customer satisfaction. As Ian was saying, the compelling event was payment processing; a lot of content creators could have been in trouble there. Walk me through how you're actually solving those key challenges, which not just Patreon is facing, but enterprises across industries. >> Yeah, of course. So speaking specifically to what brought Ian in: there was a problem they needed to fix inside of their system. When you are rolling out a change like that, you want it to be fast; you want to get that change out very quickly. But you also want to make sure that the deployment system itself is stable and reliable. The last thing you're going to want is any sort of hiccup with the tool that you're using to fix your product, to roll out changes to your customers. So that is a key focus area for us in everything that we do: we make sure that whenever we're building features that are going to expand deployment capabilities,
we are focusing, firstly, on the stability and reliability of the deployment system itself. So those are a few focus areas that we continually build into the product. And I'm sure a lot of enterprises know that as soon as you start doing things at massive scale, sometimes stability and reliability can be jeopardized a little bit, or you start hitting against those limits: what walls do you encounter? So one of the key things we're doing is building ahead of that, making sure that our features are enabling users to hit deployment scales they've never seen or imagined before. That's a big part of what Armory is. >> Ian, can you add a number to that, in terms of the before Armory and the after, in terms of that velocity? >> Absolutely. Before Armory, our deploys would take somewhere around 45 minutes. We cut that in half, if not more, down to the 16-to-20-minute range, and we are currently deploying to a few hundred hosts. The previous deployment strategy would take even longer if we scaled up the number of instances for big events, like the payment processing we do on the first of the month. So being able to know that our deploys will take about the same amount of time each time, and will be faster, helps us bring features to creators and fans a lot faster. And the stability aspect has also been very important: knowing that we have a secure way to roll back if needed, which we didn't have previously, in case something goes wrong has been extremely useful. >> And I can imagine, Ian, that velocity is critical, because more and more these days there are content creators everywhere, in so many different categories that we've talked about, even nightclubs. To be able to deliver that velocity through a technology like Armory is table stakes for the business. >> Absolutely, yeah. >> Andrew, back over to you.
I want to kind of finish out here. In the last couple of years, where things have been dynamic, have you seen any leading indicators? I know you guys work with enterprises across organizations and the Fortune 500. Have you seen any industries in particular that are really leaning on Armory to help them achieve that velocity we've been talking about? >> We have a pretty good spread across the market, but since we are focused on deploying to cloud technologies, that's one of the main value props for Armory: enabling deployments to AWS and similar clouds. So the companies that we work with are really ones that have either already gone through that transformation or are on their journey. Then of course, now Kubernetes is a force; it's kind of taken over, so we're getting pulled into even more companies that are embracing Kubernetes. I wouldn't say that there's one overall trend; we have customers all across the market, from mid-market to the Fortune 500, and it depends on the complexity of the corporation or enterprise itself. I think Ian mentioned our policy engine, and there are a few other features that are really tailored to companies that have restricted environments and are moving into the cloud. >> Got it; and that's absolutely critical these days, to help organizations pivot multiple times and get that speed to market. 'Cause of course as consumers, whether we're on the business side or the commercial side, we have an expectation that we're going to be able to get whatever we want ASAP. And especially if that's payments processing, that's pretty critical. Guys, thank you for joining me today, talking about Armory, built on Spinnaker, and what it's doing for customers like Patreon. We appreciate your time and your insights. >> Thank you so much. >> Thank you. >> For my guests, I'm Lisa Martin. You're watching theCUBE's AWS Startup Showcase, season two, episode one.
(upbeat music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Ian Delahorne | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
Andrew | PERSON | 0.99+ |
Armory | ORGANIZATION | 0.99+ |
2016 | DATE | 0.99+ |
Ian | PERSON | 0.99+ |
Andrew Backes | PERSON | 0.99+ |
16 | QUANTITY | 0.99+ |
Patreon | ORGANIZATION | 0.99+ |
Spinnaker | ORGANIZATION | 0.99+ |
two guests | QUANTITY | 0.99+ |
second season | QUANTITY | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
One | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
three years | QUANTITY | 0.99+ |
two people | QUANTITY | 0.99+ |
20 minute | QUANTITY | 0.99+ |
EC2 | TITLE | 0.99+ |
a week later | DATE | 0.99+ |
15 minutes | QUANTITY | 0.99+ |
SRE | ORGANIZATION | 0.98+ |
Lambdas | TITLE | 0.98+ |
today | DATE | 0.98+ |
each time | QUANTITY | 0.97+ |
around 45 minutes | QUANTITY | 0.96+ |
Lambda | TITLE | 0.96+ |
a week | QUANTITY | 0.95+ |
ECS | TITLE | 0.94+ |
first set | QUANTITY | 0.93+ |
one thing | QUANTITY | 0.93+ |
Ben Mappen, Armory & Ian Delahorne, Patreon | CUBE Conversation
>>Welcome to this CUBE Conversation. I'm John Furrier with theCUBE in Palo Alto, California. We've got two great guests here featuring Armory, who has with them Patreon, talking open source and the enterprise. I'm your host, John Furrier, with theCUBE. Thanks for watching, guys; thanks for coming on, really appreciate it. I've got two great guests: Ben Mappen, SVP of strategic partnerships at Armory, and Ian Delahorne, staff SRE at Patreon. Gentlemen, open source in the enterprise is what we want to talk about. Thanks for coming; I appreciate it. >>Yeah. Thank you, John. Really happy to be here. Thank you to theCUBE and your whole crew. I'll start with a quick intro. My name is Ben Mappen; I'm one of Armory's founders, and I lead strategic partnerships. As John mentioned, it all really starts with a premise: that traditional businesses such as hotels, banks, and car manufacturers are now acting and behaving much more like software companies than they did in the past. And if you believe that that's true, what does it mean? It means that these businesses need to get great at delivering their software, and specifically to the cloud, like AWS. And that's exactly what Armory aims to do for our customers. We're based on open-source Spinnaker, which is a continuous delivery platform. And I'm very happy that Ian from Patreon is here to talk about our journey together. >>And introduce yourself: what you do at Patreon, what Patreon does, and then why you guys are together here. What's the story? >>Absolutely. Hi, John and Ben. Thanks for having me. So I am Ian. I am a site reliability engineer at Patreon, and Patreon is a membership platform for creators. Our mission is to get creators paid, changing the way that art is valued, so that creators can make money by having a membership relationship with fans.
And we're built on top of AWS, and we are using Spinnaker with Armory to deploy our applications that, you know, help creators get paid, basically. >>Talk about the origin story, Ben. How did you guys come together? Obviously Patreon is well known in the creator circles; congratulations, by the way, on all your success. You've done a great service for the industry and changed the game: you were doing creators before it was fashionable. And you've also got some cutting-edge decentralization business models as well, so we'll come back to that in a minute. But Ben, talk about how this all comes together. >>Yeah. So Ian's got a great kind of origin story on our relationship together. I'll give him a lead-in, which is: you know, what we've learned over the years from our large customers is that in order to get great at deploying software, it really comes down to three things, or at least three things. The first being velocity: you have to ship your software with velocity. If you're deploying your software once a quarter, or even once a year, that does no good to your customers or to your business; code sitting in a feature branch on a shelf is more or less not creating any business value. So you have to ship with speed. Second, you have to ship with reliability. Invariably there will be bugs, there will be some outages, but one of the things that Armory provides with open-source Spinnaker is the ability to create hardened deployment pipelines, so that you're testing the right things at the right times, with the right folks involved to do reviews.
And so preventing outages is incredibly important. The last thing is being able to deploy multi-target and multi-cloud; in the AWS ecosystem, we're talking about ECS, EKS, and Lambda. I think these pieces of value, or the pain points that enterprises face, resonate with a lot of companies out there, including Ian and Patreon. So I'll let you tell the story. >>Absolutely, thanks for the intro, Ben. The background of our partnership with Armory: back in February of 2019, we had a slowdown in payments processing, and we were risking not getting creators paid on time. That's not great for creators, because they rely on us for income to pay themselves, pay their rent or mortgage, but also to pay staff: they have video editors, website admins, people of that nature working with them. There were very many root causes to this incident, all culminating at once. One of the things we saw was that deploying point fixes to remediate it took too long; it was taking at least 45 minutes to deploy a new version of the application. We had continuous delivery before, using a custom home-built rolling deploy. >>We needed to get that time down. We also needed to be confident that a deploy was stable, and to be able to place a break in the middle, due to various factors that can happen during a deploy. Previously, I had used Spinnaker at other employers; I had set it up myself and introduced it, and I knew it would be great for us. But the Patreon SRE team at that time was two people, so I didn't have the ability to manage Spinnaker on my own. It's a complex open-source product. It can do a lot of things.
There are a lot of knobs to tweak, a lot of settings and things you need to know about. Tangentially, one of the co-founders of Armory had hit me up earlier: hey, have you heard of Armory? We're doing this thing with open-source Spinnaker; we're packaging it and managing it, check us out if you want. I kind of filed it away, like, okay, that might be something we can use later. Then, two weeks later, I was like, oh wait, this company that does Spinnaker, I know of them. We should probably have a conversation and engage with them. >>And so you hit him up and said, hey, too many knobs and buttons to push, what's the deal? >>Yeah, exactly. I was like, hey, by the way, about that thing: how soon can you get someone over here? >>So Ben, take us through the progression, because that really is how things work in open source. Open source is one of those things with a lot of community outreach; a lot of people are literally one or two degrees of separation from someone who either wrote the project or is involved in it. Here's a great example: he saw the need for Spinnaker, and the business model was there for you to solve. Fixes, rolling deployments, homegrown tooling, pick your use case; he wanted to make it easier. This is kind of a pattern. What did you guys do? What's the next step? How did this go from here? >>Yeah. Spinnaker being open source is critical to Armory's success. Many companies, not just Patreon, use open-source software; I think that's not really debatable anymore in terms of applicability to enterprise companies. But the thing with selling open-source software to large companies is that they need a backstop. They need not just enterprise support, but features and functionality that enable them to use that software at scale and safely.
And so those are really the things we focus on, and open source is a great community for that: we contribute fixes that other companies can use, and other companies contribute fixes and functionality that we then use. It's also a great place to get feedback, and to find new customers that perhaps need that enhanced level of functionality and support. I'm very happy that Patreon was one of those companies. >>Okay, so let's talk about Patreon. Obviously scaling is a big part of it. You're an SRE, a site reliability engineer. For folks who don't know what that is, your job is essentially managing scale. Some would say you're the DevOps manager, but that's not really the right answer. What is the SRE role at Patreon? Share with the folks out there who either have an SRE and don't know it yet, or need SREs, because this is a huge transition and a new, emerging, must-have role in companies. >>Right. The SRE team at Patreon covers a wide swath of things that we consider our purview. Not only do we work on our AWS environment, we're also involved in making the site more reliable and performant, so that creators and fans have a good experience. We work with our content delivery networks on strategies for caching assets. We work inside the application itself on performance. We're also improving observability, with distributed tracing and metrics. And we work on the build and deploy side: if we can get that deploy time faster, we give engineers faster feedback on the features or bug fixes they're working on, while staying secure and knowing that the code they're working on gets delivered reliably. >>Yeah.
I think continuous delivery is always the killer workflow. A Spinnaker question here for both of you: how does Spinnaker being an open-source project help you? Obviously open-source code is great, but how has that been significant and beneficial for both Armory and Patreon? >>Yeah, I'll take the first stab at this one. It starts at the beginning: Spinnaker was created by Netflix, and since Netflix open-sourced it four or five years ago, there have been countless significant contributions from many other companies, including Armory and AWS. Those contributions collectively push the industry forward and let the companies that use open-source Spinnaker, or Armory, benefit from all of that collective effort. Just that community aspect of working together is huge. Absolutely huge. And open source, on the go-to-market side, is a big driver for us. There are many, many companies using open-source Spinnaker in production that are not our customers yet. We survey them; we want to know how they're using open-source Spinnaker so that we can improve it, but also build features that are critical for large companies to run at scale and deploy at scale, with velocity and reliability. >>Yeah. What's your take on the benefits of Spinnaker being open source? >>A lot of what Ben said. It's been really beneficial to be able to go in and look at the source code for components when I've been wondering something, like: why is this thing working like this? Or, how did they solve this? It's also been useful to be able to ask the community for advice. If Armory doesn't have the time or bandwidth to work on something, I've been able to ask the special interest groups in the open-source community,
like, can we help improve this, or something like that. I've also been able to commit simple bug fixes for features I've needed. I was like, well, I don't need to go engage Armory on this; I can just write up a simple patch and have it out for review. >>You know, that's the beautiful thing about open source: you get the source code. And some people just think it's so easy, Ben: hey, just give me the open source, I'll code it, I've got an unlimited resource team. Not always the case. So I've got to ask you guys. Patreon, why use a company like Armory if you have the open-source code? And Armory, why did you build a business on an open-source project like Spinnaker? >>Absolutely. Like I said earlier, the Patreon SRE team was, and is, fairly small: it was two people, and now we're six. Sure, we could run Spinnaker on our own if we wanted to, but then we'd have no time to do anything else, basically. And that's not the best use of our creators' money. Fans pay the creators, the artists, and we take a percentage on top of that, so we need to spend that money well. Having Armory, who are dedicated to and involved in the Spinnaker open-source project, and who are experts on it, means that something that would take me a week of stumbling around trying to find documentation on how to set up is something they've done 15 or 20 times, and they can just go, oh yeah, this is what we do for this, and let me go fix it for you. >>Of course. You know, you've got a teammate; I think that's what you're getting at. I've got to ask you, what other things does that free you up for? Because this is the classic business model question: you have a partner, you're moving fast, and it slows you down to get into everything yourself.
Sure, you can do it yourself, but it's faster to go together with a partner, a wingman as it were. What does that free you up to work on as an SRE? >>That's freed me up to work on bigger parts of our build and deploy pipeline. It's freed me up to work on moving from our usage-based deploys to a containerization strategy, and on broader observability issues, instead of being laser-focused on running and operating Spinnaker. >>Yeah, and that really highlights what's going on; I'm glad you said that. You've got a lot of speed and velocity, you've got scale, you've got security, and you've got new challenges to fix while moving fast. It's a whole new world. Again, this is why I love cloud native: you've got open source, you've got scale, and you're applying it directly to the infrastructure of the business. So Ben, I've got to ask you as Armory co-founder: why did you build your business on an open-source project like Spinnaker? What was the mindset? How did you attack this? Take us through that piece, because this is truly a great entrepreneurial story about open source. >>Yeah, I'll give you the abridged version, which is that my co-founders and I solved the same problem, CD, at a previous company, but we did it the old-fashioned way: we home-rolled it. We handled it ourselves, we built it on top of Jenkins, and it was great for that company. That was the inspiration for us to then ask: hey, is this bigger? At the time, we found that Spinnaker had just been dogfooded inside of Netflix and they were open-sourcing it, and we thought it was a great opportunity for us to partner.
But the bigger reason is that Spinnaker is a platform that deploys to other platforms, like AWS and Kubernetes, and the sheer amount of surface area required to build a great product is enormous. I actually believe that the only way to be successful in this space is to be open source: to have a community of large companies and passionate developers that contribute the roads, if you will, to deploy into the various targets.
So I got to ask you guys as, and you mentioned it Ben, the old way you hand rolled something, Netflix, open source, something, you got to look at Lyft with Envoy. I mean, large-scale comes, are donating their stuff into open source and people getting on top of it and building it. So the world's changed. So we've got to ask you, what's the difference between standing up a SAS application today versus say five to eight years ago, because we all see salesforce.com. You know, they're out there, they built their own data center. Cloud skills changed the dynamics of how software is being built. And with open-source accelerating every quarter, you're seeing more growth in software. How has building a platform for applications changed and how has that changed? How people build SAS applications, Ben, what's your take on this? It's kind of a thought exercise here. >>Yeah. I mean, I wouldn't even call it a thought exercise. We're seeing it firsthand from our customers. And then I'll, you know, I'll, I'll give my answer and you can weigh in on like practical, like what you're actually doing at Patrion with SAS, but the, the costs and the kind of entry fee, if you will, for building a SAS application has tremendously dropped. You don't need to buy servers and put them inside data centers anymore. You just spin up a VM or Kubernetes cluster with AWS. AWS has led the way in public cloud to make this incredible easy. And the tool sets being built around cloud native, like armory and like many other companies in the space are making it even easier. So we're just seeing the proliferation of, of software being developed and, and hopefully, you know, armory is playing a role in, in making it easier and better. >>So before we get to Unum for a second, I just want to just double down on it because there's great conversation that implies that there's going to be a new migration of apps everywhere, right. As tsunami of clutter good or bad, is that good or bad or is it all open source? 
Is it all good then? >>Absolutely good, for sure. There will be good stuff developed and not-so-good stuff developed, but survival of the fittest will hopefully promote the best apps, with the highest value to the end user and society at large, and push us all forward. >>And what's your take? Obviously with Kubernetes you're seeing things like observability, managing stateful services that are deployed and torn down in real time, automated; all new things are developing. How does building a truly scalable SaaS application change today versus, say, five or eight years ago? >>Like you said, there's a lot of new stuff, both open-source and SaaS products, available that you can use to build at scale. If you need secure authentication, instead of having to roll that yourself, you can go with something like Okta or Auth0 and just pull that off the shelf. Managing push notifications used to be something really hard to do; then Firebase came on the scene. The same for managing state in an application, and stuff like that. And also for being able to deliver: before,
Thank you so much, Ben, congratulations on armory and great to have you on from Patrion well-known success. So we'll accompany you congratulate. If we don't know patriarch, check it out, they have changed the game on creators and leading the industry. Ben. Great, great shot with armory and Spinnaker. Thanks for coming on. Thank you >>So much. Thank you >>So much. Okay. I'm Sean Ferrer here with the cube conversation with Palo Alto. Thanks for watching.
Sandy Carter, AWS & Fred Swaniker, The Room | AWS re:Invent 2021
>>Welcome back to theCUBE's coverage of AWS re:Invent 2021. I'm John Furrier, your host. We're on the ground with two sets on the floor, a real event; of course, it's hybrid, it's online as well, and you can check it out there: all the on-demand replays are there. We're here with Sandy Carter, worldwide vice president, public sector partners and programs, and we've got Fred Swaniker, founder and chief curator of The Room. We're talking about getting the best talent programming in the cloud, doing great things, innovation all happening. Sandy, great to see you. Fred, thanks for coming on theCUBE; appreciate it. Okay, so tell us about The Room. What is The Room? What's going on? >>Well, the mission of The Room is to help the world's most extraordinary doers fulfill their potential. It's a community of exceptional talent that we're building throughout the world, connecting this talent to each other, and connecting them to the organizations that are looking for people who can really move the needle. >>So what kind of results are you seeing right now? Give us some stats. >>Well, it's a relatively new concept. We're at about 5,000 members so far, from 77 different countries, and we're talking about sort of the top two to three percent of talent in different fields. As we go forward, we see this as an opportunity to curate exceptional talent in fields like software engineering, data science, UX/UI design, and cloud computing, and it really helps to identify diverse talent as well, from pockets that have typically been untapped for technology. >>Okay.
I mean, we need more people anyway, coding everywhere, globally. What's the AWS connection. >>So Fred and I met and, uh, he had this, I mean the brilliant concept of the room. And so, uh, obviously you need to run that on the cloud. And so he's got organizations he's working at connecting them through the room and kind of that piece that he was needing was the technology. So we stepped in to help him with the technology piece because he's got all the subject matter expertise to train 3 million Africans, um, coming up on tech, we also were able to provide him some of the classwork as well for the cloud computing models. So some of those certs and things that we want to get out into the marketplace as well, we're also helping Fred with that as well. So >>I mean, want to, just to add onto that, you know, one of the things that's unique about the room is that we're trying to really build a long-term relationship with talent. So imagine joining the room as a 20 year old and being part of it until you're 60. So you're going to have a lot of that. You collect on someone as they progress through different stages of their career and the ability for us to leverage that data, um, and continuously learn about someone's, you know, skills and values and use, um, predictive algorithms to be able to match them to the right opportunities at the right time of their lives. And this is where the machine learning comes in and the, you know, the data lake that we're building to build to really store this massive data that we're going to be building on the top talent to the world. >>You know, that's a really good point. It's a list that's like big trend in tech where it's, it's still it's over the life's life of the horizon of the person. And it's also blends community, exactly nurturing, identifying, and assisting. But at the same day, not just giving people the answer, they got to grow on their own, but some people grow differently. 
So again, progressions are nonlinear sometimes, and creativity can come out of nowhere, which brings me to my number one question, because this was always on my mind: how do you spot talent? What's the secret sauce? >>Well, there is no real secret sauce, because every person is unique. What we look for are people who have an extra dose of five things: courage, passion, resilience, imagination, and good values, someone who is unusually driven to achieve great things. You look at it from a combination of their training, what they've learned, but also what they've actually done in the workplace, feedback from previous employers, and data we collect through our own interactions with the person. With the talent we identify, we take them through a really rigorous selection process: people go through online assessments, then an in-person interview, and then a one-to-three-month bootcamp to really identify the people who are exceptional, and of course we get data from different sources about the person as well.
I think that that's really essential, but also as looking after my partners, I had Fred today on the keynote explaining to all my partners around the world, 55,000 streaming folks, how they can also leverage the room to fill some of their roles as well. Because if you think about it, you know, we heard from Presidio there's 3 million open cyber security roles. Um, you know, we're training 20 of mine million cloud folks because we have a gap. We see a gap around the world. And part of my responsibility with partners is making sure that they can get access to the right skills. And we're counting on the room and what Fred has produced to produce some of those great skills. You have AI, AML and dev ops. Tell us some of the areas you haven't. >>You know, we're looking at, uh, business intelligence, data science, um, full-stack software engineering, cybersecurity, um, you know, IOT talent. So fields that, um, the world needs a lot more talented. And I think today, a lot of technology, um, talent is moving from one place to another and what we need is new supply. And so what the room is doing is not only a community of top 10, but we're actually producing and training a lot more new talent. And that was going to hopefully, uh, remove a key bottleneck that a lot of companies are facing today as they try to undergo the digital trends. >>Well, maybe you can add some hosts on there. We need some cube hosts, come on, always looking for more talent on the set. You could be there. >>Yeah. The other interesting thing, John, Fred and I on stage today, he was talking about how easy to the first narrative written for easy to was written by a gentleman out of South Africa. So think about that right. ECE to talent. And he was talking about Ian Musk is based, you know, south African, right? So think about all the great talent that exists. There. There you go. There you go. So how do you get access to that talent? And that's why we're so excited to partner with Fred. 
Not only is he wicked impressive, one of Time's most influential people, but his mission, his life's purpose, has really been to develop this great talent. And for us, that gets us really excited. >>I think there's plenty of opportunity around new business models, too. In the US, for instance, my friend started Upstart, where they were betting on people almost like a stock market: we'll fund you, and you pay us back. There are all kinds of gamification techniques you can start to weave into the system as you get the flywheel going, and you can look at it holistically and say, hey, how do we get more people in and harvest the value of knowledge? >>Exactly. One of the elements of the technology platform we developed with Amazon, with AWS, is the Room Intelligence Platform, and in there is something called legacy points. Every time you, as a member of The Room, give someone else an opportunity, invest in their venture, hire them, or mentor them, you get points, and you can leverage those points for some really cool experiences. So you want to gamify this community, which is essentially crowdsourcing opportunities: you're not only getting things from The Room, you're also giving to others, to enable everyone to grow. >>Yeah, what's the coolest thing you've seen? This is a great initiative and, I think, the future, because I'm a big believer that communities and groups, as we get into this hybrid world, are going to open up virtualization. What the virtual world has shown us is virtualization, which is a cloud technology; Amazon started with Xen, which is virtualization technology. But virtualization, conceptually, is replicating things, so in a hybrid world you can blend and connect people together.
So now you have this social construct, this connective tissue between relationships, and it's always evolving. You know this; you've been involved in community from the early days. When you have that social evolution, it's not software as a mechanism, it's a human thing: it's an organism, it evolves. And if you can get the software to think like that, and the group to drive the behavior, it's not just community software. >>Exactly. We say that The Room is not an online community; it's really an offline community powered by technology. Our vision is to actually have physical rooms in different cities around the world where talent gathers. Imagine showing up at a Room space, and we've got the technology to know what your interests are; we know you're working on a new venture, and there's a venture capitalist in that area investing in that kind of venture, and we can connect you right there in that space, powered by the technology. >>And then you can have watch parties. For instance, there's an event going on in the US; you can do watch parties, time-shifted, then replicate it online and create localization, but still have that connection in person. >>Exactly, exactly. >>So what are the learnings? What's your big learning to share with the audience? Because this is really on the front edge of the new kind of innovation we're seeing enabled by software. >>One thing we're learning is that talent is truly evenly distributed around the world, but opportunity is not. There's some truly exceptional talent that is hidden and untapped today, and with the COVID pandemic, companies around the world are a lot more open to hiring more talent, so there's a huge opportunity to access new talent from sources that haven't been tapped before. We're also learning the power of blending the online and offline worlds.
So, um, you know, the room, as I mentioned, brings people together, normally online, but also offline. And so when you're able to meet talent and actually see someone's personality and get a sense of the culture fit, the 360-degree view of them, some of that you can't just get on LinkedIn. Yes. The ability to make a decision to hire someone is much better. And finally, we're also learning about the importance of long-term relationships. One of our mottoes in the room is relationships, not transactions, where, um, you actually get to meet someone in an environment where they're not pretending in an interview, and you get to really see who they are and build relationships with them before you need to hire them. And these are some really unique ways that we think we can redefine how talent finds opportunity in the 21st century. So >>You can put theCUBE in every room, we pick >>You up because, >>And theCUBE, what we do here is that when people collaborate, whether they're doing an interview together, riffing and sharing content, it's creating knowledge. But that shared experience creates a bonding. So when you have that kind of mindset, and this room concept where it's not just resume, get a job, see you later, it's learning, having peers and colleagues and people around you, and then seeing them on a journey, multiple laps around the track of humans >>And going through a career, not just a job. >>Yes, exactly. And then, and then celebrating the ups and downs and learning. It's not always roses, as you know; it's always pain before you accelerate. >>Exactly. And you never quite arrive at your destination. You're always growing, and this is where technology can really play. >>Okay. So super exciting. Where's this go next, Sandy? We've got a couple of minutes left.
>>So, um, one of the things that we've envisioned, and this is not done yet, but, um, Fred and I imagined: what if you could have an Alexa setup and you could say, hey, you know, Alexa, what should be my next job? Or how should I go train? Or, I'm really interested in being on a TED talk, what could I do? Having an Alexa skill might be a really cool thing to do. And with the great funding that Fred's got, you should talk about the $400 million, he's already raised $400 million. I mean, I think the sky's the limit on platforms like this. >>That's a nice chunk of change. There it is. We've got some fat financing, as they say. >>Well, it's a big mission, so it requires significant resources. >>Who's backing you guys? What's the, who's the, where's the money coming from? >>It's coming from, um, the Mastercard Foundation. They're our biggest funder, um, as well as, um, some philanthropists, um, and essentially these are people who truly see the potential, uh, to unlock, um, opportunity for millions of people globally. >>A global scale. The vision is global. >>We're starting in Africa, but it's truly global. Our vision is eventually to have a community of about 10 to 20 million of the most extraordinary doers in the world in this community, and to connect them to opportunity. >>And it's diverse, John. I mean, this is the other thing that gets me excited, because innovation comes from diversity of thought, and given the community will have so many diverse individuals in it that are going to get trained and mentored to create something that is amazing for their careers as well. That really gets me excited too, as well as Amazon, obviously. >>Smart people, and identifying the fresh voices and the fresh minds that come with it, all that comes together, >>The social capital that they need to really accelerate their impact. >>Then you read the room and then you get wherever you need. Thanks so much. Congratulations on your great mission.
Love the room. Um, you need theCUBE in every room; you gotta get those fresh voices out there. Sandy, congratulations on a great project, super exciting. And SageMaker, AI, it's all part of it, it's a cool wave. It's fun. Can I join? Can I play? I tell you, I need a room. >>I think he's top talent. >>Thanks so much for coming on. I really appreciate your insight. Great stuff here, bringing you all the action and knowledge and insight here at re:Invent with theCUBE, two sets on the floor. It's a hybrid event. We're in person in Las Vegas for a real event. I'm John Furrier with theCUBE, the leader in global tech coverage. Thanks for watching.
SUMMARY :
Sandy Carter and Fred Swaniker discuss The Room, a talent community built with AWS that connects exceptional, often untapped talent with opportunity. The platform gamifies giving: members earn legacy points for investing in, hiring, and mentoring one another. Key learnings: talent is evenly distributed around the world while opportunity is not; blending online and offline interaction is powerful; and long-term relationships beat transactions. Backed by the Mastercard Foundation and philanthropists, and starting in Africa, the vision is a global community of 10 to 20 million extraordinary doers connected to opportunity.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Amazon | ORGANIZATION | 0.99+ |
Fred Swanick | PERSON | 0.99+ |
Fred | PERSON | 0.99+ |
Ian Musk | PERSON | 0.99+ |
Fred Swaniker | PERSON | 0.99+ |
Africa | LOCATION | 0.99+ |
20 | QUANTITY | 0.99+ |
20 year | QUANTITY | 0.99+ |
John | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Sandy Carter | PERSON | 0.99+ |
Sandy | PERSON | 0.99+ |
South Africa | LOCATION | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
Fred Scott | PERSON | 0.99+ |
$400 million | QUANTITY | 0.99+ |
60 | QUANTITY | 0.99+ |
two sets | QUANTITY | 0.99+ |
3 million | QUANTITY | 0.99+ |
360 degree | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
ORGANIZATION | 0.99+ | |
U S | LOCATION | 0.99+ |
Angela | PERSON | 0.99+ |
77 different countries | QUANTITY | 0.99+ |
one | QUANTITY | 0.98+ |
first | QUANTITY | 0.98+ |
Glen | PERSON | 0.98+ |
3% | QUANTITY | 0.98+ |
John ferry | PERSON | 0.98+ |
five things | QUANTITY | 0.97+ |
One | QUANTITY | 0.97+ |
first narrative | QUANTITY | 0.96+ |
three month | QUANTITY | 0.96+ |
about 10 | QUANTITY | 0.95+ |
55,000 streaming folks | QUANTITY | 0.94+ |
about 5,000 members | QUANTITY | 0.93+ |
20 million | QUANTITY | 0.92+ |
First | QUANTITY | 0.92+ |
million | QUANTITY | 0.92+ |
Alexa | TITLE | 0.91+ |
MasterCard foundation | ORGANIZATION | 0.87+ |
south African | OTHER | 0.87+ |
3 million open cyber | QUANTITY | 0.87+ |
millions of people | QUANTITY | 0.87+ |
Presidio | ORGANIZATION | 0.84+ |
21st | QUANTITY | 0.82+ |
Cuban | LOCATION | 0.81+ |
Ted talk | TITLE | 0.77+ |
top 10 | QUANTITY | 0.74+ |
COVID pandemic | EVENT | 0.72+ |
number one question | QUANTITY | 0.72+ |
one place | QUANTITY | 0.68+ |
top two | QUANTITY | 0.64+ |
re:Invent | EVENT | 0.62+ |
SageMaker | ORGANIZATION | 0.59+ |
ADA | TITLE | 0.56+ |
The Room | ORGANIZATION | 0.52+ |
Africans | PERSON | 0.5+ |
2021 | DATE | 0.49+ |
2021 | TITLE | 0.48+ |
Zen | COMMERCIAL_ITEM | 0.4+ |
lexa | TITLE | 0.38+ |
Ian Buck, NVIDIA | AWS re:Invent 2021
>>Well, welcome back to theCUBE's coverage of AWS re:Invent 2021. We're here joined by Ian Buck, general manager and vice president of Accelerated Computing at NVIDIA. I'm John Furrier, your host of theCUBE. And thanks for coming on. So NVIDIA, obviously, great brand, congratulations on all your continued success. Everyone who does anything in graphics knows the GPUs are hot, and you guys have a great brand, great success in the company. But AI and machine learning, we're seeing the trend significantly being powered by the GPUs and other systems. So it's a key part of everything. So what are the trends that you're seeing, uh, in ML and AI, that are accelerating computing to the cloud? >>Yeah, I mean, AI is driving breakthroughs and innovations across so many segments, so many different use cases. We see it showing up with things like credit card fraud prevention and product and content recommendations. Really, it's the new engine behind search engines, is AI. Uh, people are applying AI to things like, um, meeting transcriptions, uh, virtual calls like this, using AI to actually capture what was said. Um, and that gets applied in person-to-person interactions. We also see it in intelligent assistants for contact center automation, or chat bots, uh, medical imaging, um, and intelligent stores and warehouses and everywhere. It's really, it's really amazing what AI has demonstrated, what it can do, and, uh, its new use cases are showing up all the time. >>Yeah. I'd love to get your thoughts on, on how the world's evolved just in the past few years, along with cloud, and certainly the pandemic's proven it. You had this whole kind of full-stack mindset initially, and now you're seeing more of a horizontal scale, but yet enabling this vertical specialization in applications. I mean, you mentioned some of those apps, the new enablers, this kind of the horizontal play with enablement for specialization, with data, this is a huge shift that's going on.
It's been happening. What's your reaction to that? >>Yeah, the innovation's on two fronts. There's a horizontal front, which is basically the different kinds of neural networks or AIs, as well as machine learning techniques, that are, um, just being invented by researchers, uh, and the community at large, including Amazon. Um, you know, it started with these convolutional neural networks, which are great for image processing, but it has expanded more recently into, uh, recurrent neural networks, transformer models, which are great for language and language understanding, and then the new hot topic, graph neural networks, where the actual graph now is trained as a, as a neural network. You have this underpinning of great AI technologies that are being invented around the world. NVIDIA's role is to try to productize that and provide a platform for people to do that innovation, and then take the next step and innovate vertically. Um, take it, take it and apply it to a particular field, um, like medical, like healthcare and medical imaging, applying AI so that radiologists can have an AI assistant with them and highlight different parts of the scan that may be troublesome or worrying, or require more investigation. Um, using it for robotics, building virtual worlds, where robots can be trained in a virtual environment, their AI being constantly trained, reinforced, and learning how to do certain activities and techniques, so that the first time it's ever downloaded into a real robot, it works right out of the box. Um, to activate that, we are creating different vertical solutions, vertical stacks, vertical products, that talk the languages of those businesses, of those users. Uh, in medical imaging, it's processing medical data, which is obviously very complicated, large-format data, often three-dimensional voxels. In robotics,
it's building, combining both our graphics and simulation technologies, along with, you know, the AI training and inference capabilities, in order to run in real time. Those are just two simple examples. >>Yeah. I mean, it's just so cutting-edge, it's so relevant. I mean, I think one of the things you mentioned about the neural networks, specifically the graph neural networks, I mean, we saw, I mean, just go back to the late 2000s, you know, how unstructured data or object storage created, a lot of people realized a lot of value out of that. Now you've got graph value, you got graph network effect, you've got all kinds of new patterns. You guys have this notion of graph neural networks. Um, that's, that's out there. What is a graph neural network, and what does it actually mean from a deep learning and an AI perspective? >>Yeah. I mean, a graph is exactly what it sounds like. You have points that are connected to each other, that establish relationships. In the example of Amazon.com, you might have buyers, distributors, sellers, um, and all of them are buying or recommending or selling different products, and they're represented in a graph. If I buy something from you and from you, I'm connected to those endpoints, and likewise more deeply across a supply chain or warehouse, or other buyers and sellers across the network. What's new right now is that those connections now can be treated and trained like a neural network, understanding the relationship: how strong is that connection between that buyer and seller, or that distributor and supplier, and then build up a network to figure out and understand patterns across them. For example, what products I may like,
'cause I have this connection in my graph, what other products may meet those requirements? Or also identifying things like fraud, when buying patterns don't match what a graph neural network would say is the typical kind of graph connectivity, the different kinds of weights and connections between the two, captured by the frequency of how I buy things, or how I rate them or give them stars. These use cases, uh, this application of graph neural networks, which is basically capturing the connections of all things with all people, especially in the world of e-commerce, is very exciting. It's a new application, but applying AI to optimizing business, to reducing fraud, and letting us, you know, get access to the products that we want, have our recommendations be things that excite us and that we want to buy.
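Buck's buyer-seller example can be made concrete with a toy message-passing layer, the core operation of a graph neural network: each node averages its neighbors' features and passes them through a learned transform. This is an illustrative sketch; the adjacency matrix, feature vectors, and random weights below are invented for the example and are not any production model:

```python
import numpy as np

# Toy graph of 4 nodes (two buyers, a seller, a distributor).
# adj[i][j] = 1 means node j is a neighbor of node i (a purchase/recommendation edge).
adj = np.array([
    [0, 0, 1, 0],
    [0, 0, 1, 1],
    [1, 1, 0, 0],
    [0, 1, 0, 0],
], dtype=float)

# Each node starts with a 3-dim feature vector (e.g. purchase-history stats).
features = np.array([
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
    [1.0, 1.0, 0.0],
])

def gnn_layer(adj, feats, weight):
    """One message-passing step: mean-aggregate neighbor features, then transform."""
    deg = adj.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0                        # avoid divide-by-zero for isolated nodes
    messages = adj @ feats / deg               # mean aggregation over neighbors
    return np.maximum(0.0, messages @ weight)  # linear transform + ReLU

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 3))                    # would be learned by training in practice
embeddings = gnn_layer(adj, features, W)
print(embeddings.shape)  # (4, 3): one embedding per node
```

Stacking several such layers lets information flow across multi-hop relationships, which is what enables the "what products might this buyer like" and fraud-pattern predictions described above.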
And of course runs everything else as well, including our AI has access to our AI technology runs all of our AI stacks. We also announced with AWS, the G 5g instance, this is exciting because it's the first, uh, graviton or ARM-based processor connected to a GPU and successful in the cloud. Um, this makes, uh, the focus here is Android gaming and machine learning and France. And we're excited to see the advancements that Amazon is making and AWS is making with arm and the cloud. And we're glad to be part of that journey. >>Well, congratulations. I remember I was just watching my interview with James Hamilton from AWS 2013 and 2014. He was getting, he was teasing this out, that they're going to build their own, get in there and build their own connections, take that latency down and do other things. This is kind of the harvest of all that. As you start looking at these new new interfaces and the new servers, new technology that you guys are doing, you're enabling applications. What does, what do you see this enabling as this, as this new capability comes out, new speed, more, more performance, but also now it's enabling more capabilities so that new workloads can be realized. What would you say to folks who want to ask that question? >>Well, so first off I think arm is here to stay and you can see the growth and explosion of my arm, uh, led of course, by grab a tiny to be. I spend many others, uh, and by bringing all of NVIDIA's rendering graphics, machine learning and AI technologies to arm, we can help bring that innovation. That arm allows that open innovation because there's an open architecture to the entire ecosystem. Uh, we can help bring it forward, uh, to the state of the art in AI machine learning, the graphics. Um, we all have our software that we released is both supportive, both on x86 and an army equally, um, and including all of our AI stacks. So most notably for inference the deployment of AI models. We have our, the Nvidia Triton inference server. 
Uh, this is our inference serving software, where after you've trained a model, you want to deploy it at scale on any CPU or GPU instance, um, for that matter. So we support both CPUs and GPUs with Triton. Um, it's natively integrated with SageMaker and provides the benefit of all those performance optimizations all the time. Uh, it has features like dynamic batching, and it supports all the different AI frameworks, from PyTorch to TensorFlow, even generalized Python code. Um, we're activating the Arm ecosystem as well as bringing all those new AI use cases and all those different performance levels, uh, with our partnership with AWS and all the different clouds. >>And you've got to make it really easy for people to use, use the technology. That brings up the next kind of question I want to ask you. I mean, a lot of people are really jumping in big time into this. They're adopting AI. Either they're moving from prototype to production, there's always some gaps, whether it's knowledge gaps, skills gaps, or whatever, but people are accelerating into the AI and leaning into it hard. What advancements has NVIDIA made to make it more accessible, um, for people to move faster through the system, through the process? >>Yeah, it's one of the biggest challenges. The promise of AI, all the publications that are coming out of research now: how can you make it more accessible or easier to use by more people, rather than just being an AI researcher, which is, uh, obviously a very challenging and interesting field, but not one that's directly in the business? NVIDIA is taking a full-stack approach to AI. So as we discover or see these AI technologies become available, we produce SDKs to help activate them or connect them with developers around the world. Uh, we have over 150 different SDKs at this point, serving industries from gaming to design, to life sciences, to earth sciences.
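The dynamic batching Buck credits to Triton above can be sketched in a few lines: the server briefly holds arriving requests so it can run them through the model as one batch, trading a little latency for much better GPU utilization. This is a simplified toy model of the idea, not Triton's actual scheduler; the queue size and request names are made up for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class DynamicBatcher:
    """Toy scheduler that coalesces individual inference requests into batches."""
    max_batch: int = 8
    queue: list = field(default_factory=list)
    batches: list = field(default_factory=list)

    def submit(self, request):
        self.queue.append(request)
        if len(self.queue) >= self.max_batch:  # batch is full: run it
            self.flush()

    def flush(self):
        if self.queue:                         # run whatever has accumulated
            self.batches.append(list(self.queue))
            self.queue.clear()

batcher = DynamicBatcher(max_batch=4)
for i in range(10):                            # ten requests arrive one by one
    batcher.submit(f"req-{i}")
batcher.flush()                                # a timeout would trigger this in a real server
print([len(b) for b in batcher.batches])       # → [4, 4, 2]
```

A real inference server adds a maximum queue delay, per-model batch-size limits, and priority handling, but the core trade-off is exactly this: larger batches amortize GPU launch overhead across requests.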
We even have stuff to help simulate quantum computing. Um, and of course all the work we're doing with AI, 5G, and robotics. So, uh, we actually just introduced about 65 new updates just this past month on all those SDKs. Uh, some of the newer stuff that's really exciting is the large language models. Uh, people are building some amazing AI that's capable of understanding the corpus of, like, human understanding, these language models that are trained on literally the content of the internet to provide general-purpose or open-domain chatbots. So the customer is going to have a new kind of experience with a computer or the cloud. Uh, we're offering those large language models, as well as AI frameworks, to help companies take advantage of this new kind of technology. >>You know, each and every time I do an interview with NVIDIA or talk about NVIDIA, my kids and their friends, the first thing they say is, get me a good graphics card. Hey, I want the best thing in their rig. Obviously the gaming market's hot and known for that, but there's a huge software team behind NVIDIA. This is well known; your CEO is always talking about it on his keynotes: you're in the software business. And then you do have hardware, and you were integrating with Graviton and other things. But it's software practices, software. This is all about software. Could you share kind of more about the NVIDIA culture, the cloud culture, and specifically around the scale? I mean, you hit every use case. So what's the software culture there at NVIDIA? >>And it is actually; we have more software people than hardware people. People don't often realize this.
Uh, and in fact, it's because of what we create. It just starts with the chip; obviously building great silicon is necessary to provide that level of innovation, but it has expanded dramatically from there, uh, not just the silicon and the GPU, but the server designs themselves. We actually do entire server designs ourselves to help build out this infrastructure. We consume it and use it ourselves, and build our own supercomputers to use AI to improve our products. And then all that software that we build on top, we make it available, as I mentioned before, uh, as containers on our, uh, NGC container registry, which is accessible to developers, um, to connect to those vertical markets. Instead of just opening up the hardware and having the ecosystem develop on it, which they can with the low-level and programmatic stacks that we provide with CUDA, we believe that those vertical stacks are the way we can help accelerate and advance AI. And that's why we make them as well. >>Yeah, and software is so much easier. I want to get that plug in, because I think it's worth noting that you guys are heavy hardcore, especially on the AI side, and it's worth calling out. Uh, getting back to the customers who are bridging that gap and getting out there: what are the metrics they should consider as they're deploying AI? What are success metrics? What does success look like? Can you share any insight into what they should be thinking about, and looking at how they're doing? >>Yeah. Um, for training, it's all about time to solution. Um, it's not the hardware that's the cost, it's the opportunity that AI can provide your business, and the productivity of those data scientists who are developing it, who are not easy to come by.
So, uh, what we hear from customers is they need a fast time to solution, to allow people to prototype very quickly, to train a model to convergence, to get into production quickly, and of course, move on to the next model or continue to refine it often. So in training, it's time to solution. For inference, it's about your ability to deploy at scale. Often people have real-time requirements: they want to run within a certain amount of latency, a certain amount of time. And typically most companies don't have a single AI model; they have a collection of them that they want to run for a single service or across multiple services. That's where you can aggregate some of your infrastructure: leveraging the Triton Inference Server I mentioned before, you can actually run multiple models on a single GPU, saving costs, optimizing for efficiency, yet still meeting the requirements for latency and the real-time experience, so that your customers have a good interaction with the AI. >>Awesome. Great. Let's get into, uh, the customer examples. You guys have obviously great customers. Can you share some of the use cases, examples with customers, notable customers? >>Yeah. One great part about working at NVIDIA as a technology company is you get to engage with such amazing customers across many verticals. Uh, some of the ones that are pretty exciting right now: Netflix is using the G4 instances, um, to do video effects and animation content, and, you know, from anywhere in the world, in the cloud, uh, as a cloud content creation platform. Uh, we work in the energy field: Siemens Energy is actually using AI combined with, um, uh, simulation to do predictive maintenance on their energy plants, um, preventing downtime and optimizing onsite inspection activities, which is saving a lot of money for the energy industry.
Uh, we have worked with Oxford University, uh, which actually has over 20 million artifacts and specimens and collections across its gardens and museums and libraries. They're actually using NVIDIA GPUs on Amazon to do enhanced image recognition, to classify all these things, which would take literally years going through each of these artifacts manually. Using AI, we can quickly catalog all of them and connect them with their users. Um, great stories across graphics, across industries, across research. Uh, it's just so exciting to see what people are doing with our technology together with AWS. >>Thank you so much for coming on theCUBE. I really appreciate it; a lot of great content there. We could probably go another hour with all the great stuff going on at NVIDIA. Any closing remarks you want to share as we wrap this last minute up? >>Really, what NVIDIA is about is accelerating cloud computing, whether it be AI, machine learning, graphics, or high-performance computing and simulation. And AWS was one of the first with this in the beginning, and they continue to bring out great instances to help connect, uh, the cloud and accelerated computing with all the different opportunities: integrations with SageMaker, EKS, and ECS, uh, the new instances with G5 and G5g. Very excited to see all the work that we're doing together. >>Ian Buck, general manager and vice president of Accelerated Computing. I mean, how can you not love that title? We want more power, faster, come on, more computing. No one's going to complain about more computing. Thanks for coming on. >>Thank you. Appreciate it. >>I'm John Furrier, host of theCUBE. You're watching Amazon coverage of re:Invent 2021. Thanks for watching.
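The inference trade-off Buck described, meeting a real-time latency target while colocating several models on one GPU, comes down to arithmetic you can sanity-check before deploying. The per-model latencies and the 100 ms budget below are hypothetical numbers for illustration, not measurements from any real service:

```python
# Hypothetical per-request GPU time for three colocated models, in milliseconds.
model_latency_ms = {"recommender": 12.0, "fraud-detector": 8.0, "chatbot": 35.0}

LATENCY_BUDGET_MS = 100.0   # assumed real-time requirement for the end-to-end service

def fits_budget(models, budget_ms):
    """Worst case: one request needs every model back-to-back on the shared GPU."""
    total = sum(models.values())
    return total, total <= budget_ms

total, ok = fits_budget(model_latency_ms, LATENCY_BUDGET_MS)
print(f"{total:.0f} ms of {LATENCY_BUDGET_MS:.0f} ms budget -> {'OK' if ok else 'over budget'}")
# → 55 ms of 100 ms budget -> OK
```

A real capacity plan would also account for batching delay, queueing under load, and concurrent execution streams, but this back-of-the-envelope check is the first gate: if the serial sum already blows the budget, no amount of scheduling will save it.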
SUMMARY :
Ian Buck, GM and VP of Accelerated Computing at NVIDIA, discusses how AI is driving breakthroughs across segments, from fraud prevention and recommendations to medical imaging and robotics. NVIDIA productizes horizontal AI research, including graph neural networks, and builds vertical stacks on top of it. With AWS, NVIDIA announced the G5 instance (A10G GPU) for high-performance graphics and the G5g instance pairing Graviton with GPUs for Android gaming and ML inference. The Triton Inference Server, integrated with SageMaker, deploys models at scale on CPUs or GPUs, with dynamic batching and support for frameworks from PyTorch to TensorFlow. Success metrics are time to solution for training, and scalable, latency-bounded deployment for inference. Customer examples include Netflix, Siemens Energy, and Oxford University.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Ian buck | PERSON | 0.99+ |
John Farrell | PERSON | 0.99+ |
Nvidia | ORGANIZATION | 0.99+ |
Ian Buck | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Ian buck | PERSON | 0.99+ |
Greg | PERSON | 0.99+ |
2014 | DATE | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
John Ford | PERSON | 0.99+ |
James Hamilton | PERSON | 0.99+ |
Netflix | ORGANIZATION | 0.99+ |
G five | COMMERCIAL_ITEM | 0.99+ |
NVIDIA | ORGANIZATION | 0.99+ |
Python | TITLE | 0.99+ |
both | QUANTITY | 0.99+ |
G 5g | COMMERCIAL_ITEM | 0.99+ |
first | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
Android | TITLE | 0.99+ |
Oxford university | ORGANIZATION | 0.99+ |
2013 | DATE | 0.98+ |
amazon.com | ORGANIZATION | 0.98+ |
over two | QUANTITY | 0.98+ |
two | QUANTITY | 0.98+ |
first time | QUANTITY | 0.97+ |
single service | QUANTITY | 0.97+ |
2021 | DATE | 0.97+ |
two fronts | QUANTITY | 0.96+ |
single | QUANTITY | 0.96+ |
over 20 million artifacts | QUANTITY | 0.96+ |
each | QUANTITY | 0.95+ |
about 65 new updates | QUANTITY | 0.93+ |
Siemens energy | ORGANIZATION | 0.92+ |
over 150 different STKs | QUANTITY | 0.92+ |
single GPU | QUANTITY | 0.91+ |
two new instances | QUANTITY | 0.91+ |
first thing | QUANTITY | 0.9+ |
France | LOCATION | 0.87+ |
two particular field | QUANTITY | 0.85+ |
SageMaker | TITLE | 0.85+ |
Triton | TITLE | 0.82+ |
first cloud providers | QUANTITY | 0.81+ |
NGC | ORGANIZATION | 0.77+ |
80 of | QUANTITY | 0.74+ |
past month | DATE | 0.68+ |
x86 | COMMERCIAL_ITEM | 0.67+ |
late | DATE | 0.67+ |
two thousands | QUANTITY | 0.64+ |
pandemics | EVENT | 0.64+ |
past few years | DATE | 0.61+ |
G4 | ORGANIZATION | 0.6+ |
RA | COMMERCIAL_ITEM | 0.6+ |
Kuda | ORGANIZATION | 0.59+ |
ECS | ORGANIZATION | 0.55+ |
10 G | OTHER | 0.54+ |
SageMaker | ORGANIZATION | 0.49+ |
TensorFlow | OTHER | 0.48+ |
Ks | ORGANIZATION | 0.36+ |
PA3 Ian Buck
(bright music) >> Well, welcome back to theCUBE's coverage of AWS re:Invent 2021. We're here joined by Ian Buck, general manager and vice president of Accelerated Computing at NVIDIA. I'm John Furrier, host of theCUBE. Ian, thanks for coming on. >> Oh, thanks for having me. >> So NVIDIA, obviously, great brand. Congratulations on all your continued success. Everyone who does anything in graphics knows that GPUs are hot, and you guys have a great brand, great success in the company. But AI and machine learning, we're seeing the trend significantly being powered by the GPUs and other systems. So it's a key part of everything. So what's the trends that you're seeing in ML and AI that's accelerating computing to the cloud? >> Yeah. I mean, AI is kind of driving breakthroughs and innovations across so many segments, so many different use cases. We see it showing up with things like credit card fraud prevention, and product and content recommendations. Really, it's the new engine behind search engines, is AI. People are applying AI to things like meeting transcriptions, virtual calls like this, using AI to actually capture what was said. And that gets applied in person-to-person interactions. We also see it in intelligent assistants for contact center automation, or chat bots, medical imaging, and intelligent stores, and warehouses, and everywhere. It's really amazing what AI has been demonstrating, what it can do, and its new use cases are showing up all the time. >> You know, Ian, I'd love to get your thoughts on how the world's evolved, just in the past few years alone, with cloud. And certainly, the pandemic's proven it. You had this whole kind of fullstack mindset, initially, and now you're seeing more of a horizontal scale, but yet, enabling this vertical specialization in applications. I mean, you mentioned some of those apps.
The new enablers, this kind of, the horizontal play with enablement for, you know, specialization with data, this is a huge shift that's going on. It's been happening. What's your reaction to that? >> Yeah. The innovation's on two fronts. There's a horizontal front, which is basically the different kinds of neural networks or AIs, as well as machine learning techniques, that are just being invented by researchers and the community at large, including Amazon. You know, it started with these convolutional neural networks, which are great for image processing, but has expanded more recently into recurrent neural networks, transformer models, which are great for language and language understanding, and then the new hot topic, graph neural networks, where the actual graph now is trained as a neural network. You have this underpinning of great AI technologies that are being invented around the world. NVIDIA's role is to try to productize that and provide a platform for people to do that innovation. And then, take the next step and innovate vertically. Take it and apply it to a particular field, like medical, like healthcare and medical imaging, applying AI so that radiologists can have an AI assistant with them and highlight different parts of the scan that may be troublesome or worrying, or require some more investigation. Using it for robotics, building virtual worlds where robots can be trained in a virtual environment, their AI being constantly trained and reinforced, and learn how to do certain activities and techniques. So that the first time it's ever downloaded into a real robot, it works right out of the box. To activate that, we are creating different vertical solutions, vertical stacks, vertical products, that talk the languages of those businesses, of those users. In medical imaging, it's processing medical data, which is obviously a very complicated, large format data, often three-dimensional voxels.
In robotics, it's building, combining both our graphics and simulation technologies, along with the AI training and inference capabilities, in order to run in real time. Those are just two simple- >> Yeah, no. I mean, it's just so cutting-edge, it's so relevant. I mean, I think one of the things you mentioned about the neural networks, specifically, the graph neural networks, I mean, we saw, I mean, just go back to the late 2000s, how unstructured data and object storage were created; a lot of people realized a lot of value out of that. Now you got graph value, you got network effect, you got all kinds of new patterns. You guys have this notion of graph neural networks that's out there. What is a graph neural network, and what does it actually mean from a deep learning and an AI perspective? >> Yeah. I mean, a graph is exactly what it sounds like. You have points that are connected to each other, that establish relationships. In the example of Amazon.com, you might have buyers, distributors, sellers, and all of them are buying, or recommending, or selling different products. And they're represented in a graph. If I buy something from you and from you, I'm connected to those endpoints, and likewise, more deeply across a supply chain, or warehouse, or other buyers and sellers across the network. What's new right now is that those connections now can be treated and trained like a neural network, understanding the relationship, how strong is that connection between that buyer and seller, or the distributor and supplier, and then build up a network to figure out and understand patterns across them. For example, what products I may like, 'cause I have this connection in my graph, what other products may meet those requirements?
Or, also, identifying things like fraud, when patterns and buying patterns don't match what a graph neural network would say would be the typical kind of graph connectivity, the different kinds of weights and connections between the two, captured by the frequency of how often I buy things, or how I rate them or give them stars, or other such use cases. This application, graph neural networks, which is basically capturing the connections of all things with all people, especially in the world of e-commerce, is a very exciting new application of applying AI to optimizing business, to reducing fraud, and letting us, you know, get access to the products that we want. They have our recommendations be things that excite us and want us to buy things, and buy more. >> That's a great setup for the real conversation that's going on here at re:Invent, which is new kinds of workloads are changing the game, people are refactoring their business with, not just re-platforming, but actually using this to identify value. And also, your cloud scale allows you to have the compute power to, you know, look at a node and an arc and actually code that. It's all science, it's all computer science, all at scale. So with that, that brings up the whole AWS relationship. Can you tell us how you're working with AWS, specifically? >> Yeah, AWS has been a great partner, and one of the first cloud providers to ever provide GPUs in the cloud. More recently, we've announced two new instances, the G5 instance, which is based on our A10G GPU, which supports the NVIDIA RTX technology, our rendering technology, for real-time ray tracing in graphics and game streaming. This is our highest-performance graphics-enhanced instance, and it allows for those high-performance graphics applications to be directly hosted in the cloud. And, of course, it runs everything else as well. It has access to our AI technology and runs all of our AI stacks. We also announced, with AWS, the G5g instance.
This is exciting because it's the first Graviton or Arm-based processor connected to a GPU and successful in the cloud. The focus here is Android gaming and machine learning inference. And we're excited to see the advancements that Amazon is making and AWS is making, with Arm in the cloud. And we're glad to be part of that journey. >> Well, congratulations. I remember, I was just watching my interview with James Hamilton from AWS 2013 and 2014. He was teasing this out, that they're going to build their own, get in there, and build their own connections to take that latency down and do other things. This is kind of the harvest of all that. As you start looking at these new interfaces, and the new servers, new technology that you guys are doing, you're enabling applications. What do you see this enabling? As this new capability comes out, new speed, more performance, but also, now it's enabling more capabilities so that new workloads can be realized. What would you say to folks who want to ask that question? >> Well, so first off, I think Arm is here to stay. We can see the growth and explosion of Arm, led of course, by Graviton and AWS, but many others. And by bringing all of NVIDIA's rendering graphics, machine learning and AI technologies to Arm, we can help bring that innovation that Arm allows, that open innovation, because there's an open architecture, to the entire ecosystem. We can help bring it forward to the state of the art in AI machine learning and graphics. All of our software that we release is both supportive, both on x86 and on Arm equally, and including all of our AI stacks. So most notably, for inference, the deployment of AI models, we have the NVIDIA Triton inference server. This is our inference serving software, where after you've trained a model, you want to deploy it at scale on any CPU, or GPU instance, for that matter. So we support both CPUs and GPUs with Triton. 
It's natively integrated with SageMaker and provides the benefit of all those performance optimizations, features like dynamic batching, and it supports all the different AI frameworks, from PyTorch to TensorFlow, even generalized Python code. We're activating, and helping activate, the Arm ecosystem, as well as bringing all those new AI use cases, and all those different performance levels, with our partnership with AWS and all the different cloud instances. >> And you guys are making it really easy for people to use the technology. That brings up the next, kind of, question I wanted to ask you. I mean, a lot of people are really going in, jumping in big-time into this. They're adopting AI, or they're moving it from prototype to production. There's always some gaps, whether it's, you know, knowledge, skills gaps, or whatever. But people are accelerating into the AI and leaning into it hard. What advancements has NVIDIA made to make it more accessible for people to move faster through the system, through the process? >> Yeah. It's one of the biggest challenges. You know, the promise of AI, all the publications that are coming out, all the great research, you know, how can you make it more accessible or easier to use by more people? Rather than just being an AI researcher, which is obviously a very challenging and interesting field, but not one that's directly connected to the business. NVIDIA is trying to provide a fullstack approach to AI. So as we discover or see these AI technologies become available, we produce SDKs to help activate them or connect them with developers around the world. We have over 150 different SDKs at this point, serving industries from gaming, to design, to life sciences, to earth sciences. We even have stuff to help simulate quantum computing. And of course, all the work we're doing with AI, 5G, and robotics. So we actually just introduced about 65 new updates, just this past month, on all those SDKs.
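Dynamic batching, mentioned above as a Triton feature, is worth a concrete illustration. The sketch below is not Triton's implementation, just a toy batcher that drains a queue of pending requests into groups of at most `max_batch_size`; real inference servers also bound how long they wait for a batch to fill:

```python
from collections import deque

def dynamic_batch(request_queue, max_batch_size):
    # Drain the queue into batches of at most max_batch_size requests.
    # Real servers also wait a bounded time for a batch to fill; this
    # toy version batches whatever is already queued.
    batches = []
    while request_queue:
        batch = []
        while request_queue and len(batch) < max_batch_size:
            batch.append(request_queue.popleft())
        batches.append(batch)
    return batches

# Eight queued requests, grouped at most four at a time.
queue = deque(range(8))
print(dynamic_batch(queue, 4))  # [[0, 1, 2, 3], [4, 5, 6, 7]]
```

The payoff is that one model launch now serves several requests, which is where the throughput gains described in the interview come from.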
Some of the newer stuff that's really exciting is the large language models. People are building some amazing AI that's capable of understanding the corpus of, like, human understanding. These language models are trained on literally the content of the internet to provide general purpose or open-domain chatbots, so the customer is going to have a new kind of experience with the computer or the cloud. We're offering those large language models, as well as AI frameworks, to help companies take advantage of this new kind of technology. >> You know, Ian, every time I do an interview with NVIDIA or talk about NVIDIA, my kids and friends, first thing they say is, "Can you get me a good graphics card?" They all want the best thing in their rig. Obviously the gaming market's hot and known for that. But there's a huge software team behind NVIDIA. This is well-known. Your CEO is always talking about it on his keynotes. You're in the software business. And you do have hardware, you are integrating with Graviton and other things. But it's a software practice. This is software. This is all about software. >> Right. >> Can you share, kind of, more about how NVIDIA culture and their cloud culture, and specifically around the scale, I mean, you hit every use case. So what's the software culture there at NVIDIA? >> Yeah, NVIDIA actually has more software people than hardware people, but people don't often realize this. And in fact, it just starts with the chip, and obviously, building great silicon is necessary to provide that level of innovation. But it's expanded dramatically from there. Not just the silicon and the GPU, but the server designs themselves. We actually do entire server designs ourselves, to help build out this infrastructure. We consume it and use it ourselves, and build our own supercomputers to use AI to improve our products.
And then, all that software that we build on top, we make it available, as I mentioned before, as containers on our NGC container store, container registry, which is accessible from AWS, to connect to those vertical markets. Instead of just opening up the hardware and letting the ecosystem develop on it, they can, with the low-level and programmatic stacks that we provide with CUDA. We believe that those vertical stacks are the ways we can help accelerate and advance AI. And that's why we make them so available. >> And programmable software is so much easier. I want to get that plug in for, I think it's worth noting that you guys are heavy hardcore, especially on the AI side, and it's worth calling out. Getting back to the customers who are bridging that gap and getting out there, what are the metrics they should consider as they're deploying AI? What are success metrics? What does success look like? Can you share any insight into what they should be thinking about, and looking at how they're doing? >> Yeah. For training, it's all about time-to-solution. It's not the hardware that's the cost, it's the opportunity that AI can provide to your business, and the productivity of those data scientists which are developing them, which are not easy to come by. So what we hear from customers is they need a fast time-to-solution to allow people to prototype very quickly, to train a model to convergence, to get into production quickly, and of course, move on to the next or continue to refine it. >> John Furrier: Often. >> So in training, it's time-to-solution. For inference, it's about your ability to deploy at scale. Often people need to have real-time requirements. They want to run in a certain amount of latency, in a certain amount of time. And typically, most companies don't have a single AI model. They have a collection of them they want to run for a single service or across multiple services. That's where you can aggregate some of your infrastructure. 
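The trade-off just described, real-time latency targets versus deploying at scale, comes down to simple arithmetic once you model a fixed per-launch overhead. The numbers below are illustrative, not measured on any NVIDIA hardware:

```python
def throughput_and_latency(batch_size, fixed_overhead_ms, per_item_ms):
    # Larger batches amortize the fixed per-launch overhead, raising
    # throughput, but every request waits for the whole batch.
    batch_latency_ms = fixed_overhead_ms + batch_size * per_item_ms
    throughput = batch_size / (batch_latency_ms / 1000.0)  # items/sec
    return throughput, batch_latency_ms

for bs in (1, 8, 32):
    tput, lat = throughput_and_latency(bs, fixed_overhead_ms=5.0,
                                       per_item_ms=1.0)
    print(f"batch={bs:2d}  latency={lat:5.1f} ms  throughput={tput:7.1f}/s")
```

With these made-up constants, batch size 32 delivers several times the throughput of batch size 1 while latency grows from 6 ms to 37 ms; picking the point on that curve that still meets the service's latency budget is exactly the tuning an inference server does.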
Leveraging the Triton inference server, I mentioned before, can actually run multiple models on a single GPU saving costs, optimizing for efficiency, yet still meeting the requirements for latency and the real-time experience, so that our customers have a good interaction with the AI. >> Awesome. Great. Let's get into the customer examples. You guys have, obviously, great customers. Can you share some of the use cases examples with customers, notable customers? >> Yeah. One great part about working at NVIDIA is, as technology company, you get to engage with such amazing customers across many verticals. Some of the ones that are pretty exciting right now, Netflix is using the G4 instances to do a video effects and animation content from anywhere in the world, in the cloud, as a cloud creation content platform. We work in the energy field. Siemens energy is actually using AI combined with simulation to do predictive maintenance on their energy plants, preventing, or optimizing, onsite inspection activities and eliminating downtime, which is saving a lot of money for the energy industry. We have worked with Oxford University. Oxford University actually has over 20 million artifacts and specimens and collections, across its gardens and museums and libraries. They're actually using NVIDIA GPU's and Amazon to do enhanced image recognition to classify all these things, which would take literally years going through manually, each of these artifacts. Using AI, we can quickly catalog all of them and connect them with their users. Great stories across graphics, across industries, across research, that it's just so exciting to see what people are doing with our technology, together with Amazon. >> Ian, thank you so much for coming on theCUBE. I really appreciate it. A lot of great content there. We probably could go another hour. All the great stuff going on at NVIDIA. Any closing remarks you want to share, as we wrap this last minute up? 
>> You know, really what NVIDIA's about, is accelerating cloud computing. Whether it be AI, machine learning, graphics, or high-performance computing and simulation. And AWS was one of the first with this, in the beginning, and they continue to bring out great instances to help connect the cloud and accelerated computing with all the different opportunities. The integrations with EC2, with SageMaker, with EKS, and ECS. The new instances with G5 and G5 G. Very excited to see all the work that we're doing together. >> Ian Buck, general manager and vice president of Accelerated Computing. I mean, how can you not love that title? We want more power, more faster, come on. More computing. No one's going to complain with more computing. Ian, thanks for coming on. >> Thank you. >> Appreciate it. I'm John Furrier, host of theCUBE. You're watching Amazon coverage re:Invent 2021. Thanks for watching. (bright music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
John Furrier | PERSON | 0.99+ |
Ian Buck | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Ian | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
NVIDIA | ORGANIZATION | 0.99+ |
Oxford University | ORGANIZATION | 0.99+ |
James Hamilton | PERSON | 0.99+ |
2014 | DATE | 0.99+ |
Netflix | ORGANIZATION | 0.99+ |
Amazon.com | ORGANIZATION | 0.99+ |
G5 G | COMMERCIAL_ITEM | 0.99+ |
Python | TITLE | 0.99+ |
late 2000s | DATE | 0.99+ |
Graviton | ORGANIZATION | 0.99+ |
Android | TITLE | 0.99+ |
One | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
Accelerated Computing | ORGANIZATION | 0.99+ |
first | QUANTITY | 0.99+ |
first time | QUANTITY | 0.99+ |
two | QUANTITY | 0.98+ |
2013 | DATE | 0.98+ |
A10G | COMMERCIAL_ITEM | 0.98+ |
both | QUANTITY | 0.98+ |
two fronts | QUANTITY | 0.98+ |
each | QUANTITY | 0.98+ |
single service | QUANTITY | 0.98+ |
PyTorch | TITLE | 0.98+ |
over 20 million artifacts | QUANTITY | 0.97+ |
single | QUANTITY | 0.97+ |
TensorFlow | TITLE | 0.95+ |
EC2 | TITLE | 0.94+ |
G5 instance | COMMERCIAL_ITEM | 0.94+ |
over 150 different SDKs | QUANTITY | 0.93+ |
SageMaker | TITLE | 0.93+ |
G5 | COMMERCIAL_ITEM | 0.93+ |
Arm | ORGANIZATION | 0.91+ |
first thing | QUANTITY | 0.91+ |
single GPU | QUANTITY | 0.9+ |
theCUBE | ORGANIZATION | 0.9+ |
about 65 new updates | QUANTITY | 0.89+ |
two new instances | QUANTITY | 0.89+ |
pandemic | EVENT | 0.88+ |
Triton | ORGANIZATION | 0.87+ |
PA3 | ORGANIZATION | 0.87+ |
Triton | TITLE | 0.84+ |
Invent | EVENT | 0.83+ |
G5 G. | COMMERCIAL_ITEM | 0.82+ |
two simple | QUANTITY | 0.8+ |
Derek Manky, Chief, Security Insights & Global Threat Alliances at Fortinet's FortiGuard Labs
>>As we've been reporting, the pandemic has caused CISOs to really shift their spending priorities towards securing remote workers, almost overnight. Zero trust has gone from buzzword to mandate. What's more, as we wrote in our recent cybersecurity breaking analysis, not only must CISOs secure an increasingly distributed workforce, but now they have to be wary of software updates in the digital supply chain, including the very patches designed to protect them against cyber attacks. Hello everyone, and welcome to this CUBE conversation. My name is Dave Vellante, and I'm pleased to welcome Derek Manky, who's chief of security insights and global threat alliances for FortiGuard Labs, with fresh data from its global threat landscape report. Derek, welcome. Great to see you. >>Thanks so much for the invitation to speak. It's always a pleasure. >>You're welcome. So first I wonder if you could explain for the audience, what is FortiGuard Labs, and what's its relationship to Fortinet? >>Right. So FortiGuard Labs is our global SOC, our global threat intelligence operation center. It never sleeps, and this is the heartbeat. You know, it's been here since inception at Fortinet, so it's 21 years in the making since Fortinet was founded. We have built this in-house, so we don't buy in technology. We built everything from the ground up, including creating our own training programs for our analysts. We're following malware, following exploits. We even have a unique program that I created back in 2006, an ethical hacking program, and its zero-day research. So we try to meet the hackers, the bad guys, at their game, and we of course do that responsibly, working with vendors to close holes and create virtual patches. So it's everything from customer protection first and foremost, to following the threat landscape and cyber crime.
It's very important to understand who they are, what they're doing, what they're targeting, and what tools they're using. >>Yeah, that's great. Some serious DNA and skills in that group. And it's critical because, like you said, you can minimize the spread of that malware very, very quickly. So now you have the global threat landscape report. We're going to talk about that, but what exactly is that? >>Right. So this global threat landscape report is a summary of all the data that we collect over a period of time. We release it biannually, two times a year. Cyber crime is changing very fast, as you can imagine. So while we do release security blogs, and what we call threat signals for breaking security events, we have a lot of other vehicles to release threat intelligence, but this threat landscape report is truly global. It looks at all of our global data. We have over 5 million sensors worldwide in FortiGuard Labs, and we're processing, I know it seems like a very large amount, north of a hundred billion threat events in just one day. And we have the task of taking all of that data, putting it into scale for half a year, and compiling it into something that's digestible. That's a very tough task, as you can imagine, so we have to work with a huge technology stack of machine learning and artificial intelligence automation, and of course our analysts' review, to do that. >>Yeah. So every year is a battle, but this year was an extra battle. Can you explain what you saw in terms of the hacker dynamics over the past, let's say, 12 months? I know you do this twice a year, but what trends did you see evolving throughout the year, and what have you seen with the way that attackers have exploited this expanded attack surface outside of the corporate network?
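The compilation task described above, boiling billions of daily events down to something digestible, is at heart an aggregation problem. A toy version of that rollup is below; the event fields are hypothetical, not FortiGuard's actual schema, and a real pipeline layers machine learning on top of counts like these:

```python
from collections import Counter

def summarize_events(events):
    # Roll raw sensor events up into per-family counts; a real
    # pipeline does this over billions of events, at scale.
    return Counter(event["family"] for event in events)

events = [
    {"family": "ransomware", "sensor": "eu-1"},
    {"family": "botnet", "sensor": "us-3"},
    {"family": "ransomware", "sensor": "apac-2"},
]
print(summarize_events(events).most_common(1))  # [('ransomware', 2)]
```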
>>Yeah, it was quite interesting last year. It certainly was not normal, like we all say, and that was no exception for cybersecurity. If we look at cyber criminals and how they pivoted and adapted to this changed threat landscape, cyber criminals are always trying to take advantage of the weakest link in the chain. They're always trying to prey off fear and ride waves of global trends and themes. We've seen this before in natural disasters, as an example, trying to do charity kinds of scams and campaigns. Those are usually limited to the region where that incident happened, and they usually live about two to three weeks, maybe a month at the most, and then they'll move on to the next thing that's breaking. Of course, because COVID is so global and dominant, we saw attacks coming in in well over 40 different languages, as an example, in regions all across the world, and that wasn't lasting two to three weeks; it lasted for the better part of a year. >>And of course they're using this as a vehicle, right? Preying on the fear. They were doing everything from initial lockdown phishing lures, as COVID-19 moved, to layoff notices, then to phase one reopenings, all the way up to where we are today with vaccine rollout and development. So there was always that new flavor and theme that they were rolling out, but because it was so successful for them, they didn't have to innovate too much, right? They didn't have to expand and shift to new trends and themes, or really develop new RAT families, as an example, or new sophisticated malware. That was the first half of the year. In the second half of the year, of course, people started to experience COVID fatigue, right? We did a lot of education around this.
And so, um, cyber criminals have started to, um, as we expected, started to become more sophisticated with their attacks. We saw an expansion in different ransomware families. We saw more of a shift of focus on, on, um, uh, you know, targeting the digital supply chain as an example. And so that, that was, that was really towards Q4. Uh, so it, it was a long lived lead year with success on the Google themes, um, targeting healthcare as an example, a lot of, um, a lot of the organizations that were, you know, really in a vulnerable position, I would say >>So, okay. I want to clarify something because my assumption was that they actually did really increase the sophistication, but it sounds like that was kind of a first half trends. Not only did they have to adapt and not have to, but they adapt it to these new vulnerabilities. Uh, my sense was that when you talk about the digital supply chain, that that was a fairly sophisticated attack. Am I, am I getting that right? That they did their sort of their, their, their increased sophistication in the first half, and then they sort of deployed it, did it, uh, w what actually happened there from your data? >>Well, if we look at, so generally there's two types of attacks that we look at, we look at the, uh, the premeditated sophisticated attacks that can have, um, you know, a lot of ramp up work on their end, a lot of time developing the, the, the, the weaponization phase. So developing, uh, the exploits of the sophisticated malware that they're gonna use for the campaign reconnaissance, understanding the targets, where platforms are developed, um, the blueprinting that DNA of, of, of the supply chain, those take time. Um, in fact years, even if we look back to, um, uh, 10 plus years ago with the Stuxnet attacks, as an example that was on, uh, nuclear centrifuges, um, and that, that had four different zero-day weapons at the time. That was very sophisticated, that took over two years to develop as an example. 
So some of these can take years of time to develop, but they're very specific in terms of the targets they're going to go after, and obviously the ROI from their end. >>The other type of attack that we see is ongoing: these broad, wide-sweeping attacks, and the reality for those ones is that they unfortunately don't need to be too sophisticated. Those were the ones I was talking about that were really just playing on the COVID theme, and they still do today with the vaccine rollout and development. But it's really because they're just playing on social engineering, using topical themes. And in fact, the weapons they're using, these vulnerabilities, from our research data, and this was highlighted actually in the threat landscape report before last, were on average two to three years old. So we're not talking about fresh vulnerabilities you've got to patch right away. I mean, these are things that should have been patched two years ago, but they're still unfortunately having success with that. >>So you mentioned Stuxnet as the former sort of example of one of the types of attacks that you see, and I always felt like that was a watershed moment, one of the most sophisticated, if not the most sophisticated, attacks that we'd ever seen. When I talk to CISOs about the recent government hack, they suggest, or I infer, maybe they don't suggest it, I infer, that it was of similar sophistication. It was maybe thousands of people working on this for years and years and years. Is that accurate, or not necessarily? >>Yeah, there are definitely some comparisons there. You know, one of the largest things is that both attacks used digital certificate impersonation, so they're digitally signed.
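The digital signing being discussed here can be made concrete with textbook RSA and toy numbers. This is purely illustrative and insecure, not the scheme any real code-signing system uses, but it shows why a stolen signing key collapses the trust chain:

```python
# Textbook RSA signatures with tiny numbers: insecure, purely to
# illustrate why a compromised signing key breaks the trust chain.
# (n, e, d) below are toy values; real keys are thousands of bits.

def sign(message_hash, d, n):
    # Signer raises the hash to the private exponent d.
    return pow(message_hash, d, n)

def verify(message_hash, signature, e, n):
    # Anyone can check with the public exponent e.
    return pow(signature, e, n) == message_hash

n, e, d = 3233, 17, 2753  # n = 61 * 53, a classic toy key pair
h = 1234                  # stand-in for a hash of the software

sig = sign(h, d, n)
print(verify(h, sig, e, n))      # True: untampered software verifies
print(verify(h + 1, sig, e, n))  # False: tampered software fails
# If an attacker steals d, their malware verifies as genuine too:
print(verify(300, sign(300, d, n), e, n))  # True
```

If the private exponent `d` leaks, an attacker can sign anything and verification still passes, which is exactly the certificate-compromise scenario described in the interview.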
So, you know, of course that whole technology using cryptography is there by design to say that this piece of software installed in your system has this certificate; it's coming from the source, it's legitimate. Of course, if that's compromised, that's all out the window. And yeah, this is what we saw in both attacks. In fact, with Stuxnet, they also had digitally signed certificates that were compromised. So when it gets to that level of sophistication, that means definitely that there's a target, and that there have usually been months of homework done by cyber criminals, of reconnaissance, to be able to weaponize that. >>What did you see with respect to ransomware? What were the trends there over the past 12 months? I've heard some data and it's pretty scary, but what did you see? >>Yeah, so ransomware is always the thorn in our side, and it's going to continue to be so. In fact, ransomware is not new itself. It was actually first created in 1989, and they demanded ransom payments through snail mail, to a P.O. box. Obviously that didn't take off; it wasn't successful, because the internet wasn't born at the time. But if you look at it now, of course, over the last 10 years, really, that's where the ransomware model has been lucrative, right? I mean, it's worked by force-encrypting data on systems, so that users were forced to pay the ransom because they wanted access to their data back. Data was the target currency for ransomware. That's shifted now, and that's actually been a big pivot over the last year or so, because before it was this: let's cast a wide net, infect as many people as we can at random, and try to see if we can hold some of their data for ransom. >>For some people that data may be valuable; for others it may not be valuable. And that model still exists.
And we see that, but really the big shift that we saw last year, and in the threat landscape report before it, was a shift to targeted ransoms. So again, the sophistication is starting to rise, because they're not just going after random data. They're going after data that they know is valuable to large organizations, and they're taking that a step further now. So there are various ransomware families we saw that have now resorted to extortion and blackmail, right? They're taking that data, encrypting it, and saying, unless you pay us a large sum of money, we're going to release this to the public or sell it to a buyer on the dark web. And of course you can imagine the amount of damage that can happen from that. The other thing we're seeing is a targeting of revenue streams, right? If they can cripple networks, it's essentially a denial of service. They know that the company is going to be bleeding X millions of dollars a day, so they can demand Y million dollars of ransom payment, and that's effectively what's happening. So it's, again, becoming more targeted and more sophisticated, and unfortunately the ransom is going up. >>So they go to where the money is, and of course your job is to lower the ROI for them, a constant challenge. We talked about some of the attack vectors that you saw this year that cyber criminals are targeting. I wonder, given work from home, about things like IoT devices and cameras and thermostats; with 75% of the workforce at home, is this infrastructure more vulnerable? I guess, of course it is. But what did you see there in terms of attacks on those devices? >>Yeah, so, unfortunately, the attack surface, as we call it, the amount of target points, is expanding. It's not shifting; it's expanding.
We still see, as I mentioned earlier, vulnerabilities from two years ago being used. In some cases over the holidays, we saw e-commerce heavily under attack; e-commerce has spiked since last summer, right? There's been a huge traffic increase — everybody's shopping from home. And those vulnerabilities going after shopping cart plugins, as an example, are five to six years old. So we still have this theme of old vulnerabilities being, in a sense, new again and still under attack. But we're also now seeing this complication of, yeah, as you said, IoT being rolled out everywhere, and the really quick shift to work from home. We really have to treat this as the distributed branch model for the enterprise, right? >>And it's really now the secure branch. How do we take any of these devices on those networks and secure them? Because if you look at what we highlighted in our landscape report and the top 10 attacks that we're seeing — hacking attempts, which is what our IPS triggers on — we're seeing attempts to go after IoT devices. Right now they're mostly favoring, in terms of targets, consumer-grade routers, but they're also looking at DVR devices, as an example, for home entertainment systems, network-attached storage as well, and IP security cameras — some of the newer devices, the quote-unquote smart devices that are now on home networks, and virtual assistants. We actually released a predictions piece at the end of last year as well. So this is what we call the new intelligent edge, and that's what I think we're really going to see this year in terms of what's ahead, because we always have to look ahead and prepare for that. But yeah, right now, unfortunately, the story is that all of this is still happening. IoT is being targeted.
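The "secure branch" treatment Derek describes — keeping consumer routers, DVRs, NAS boxes, and cameras away from corporate assets — comes down to segmentation with a default-deny policy. A minimal sketch; the subnets, segment names, and allow-list here are purely illustrative, not any vendor's configuration:

```python
import ipaddress

# Hypothetical branch layout: IoT gear on its own subnet,
# corporate devices on another. Addresses are illustrative only.
SEGMENTS = {
    "iot": ipaddress.ip_network("10.20.0.0/24"),
    "corp": ipaddress.ip_network("10.10.0.0/24"),
}

# Explicit allow-list of (source segment, destination) flows.
ALLOWED_FLOWS = {("iot", "updates"), ("corp", "updates"), ("corp", "corp")}

def segment_of(ip: str) -> str:
    """Map a device address to its segment, or 'unknown'."""
    addr = ipaddress.ip_address(ip)
    for name, net in SEGMENTS.items():
        if addr in net:
            return name
    return "unknown"

def flow_allowed(src_ip: str, dst_segment: str) -> bool:
    """Default-deny: a flow passes only if explicitly allowed."""
    return (segment_of(src_ip), dst_segment) in ALLOWED_FLOWS

assert flow_allowed("10.20.0.15", "updates")   # camera fetching firmware: allowed
assert not flow_allowed("10.20.0.15", "corp")  # camera reaching the corp LAN: denied
```

The value of a default-deny posture is exactly the "shooting fish in a barrel" problem in reverse: a compromised camera can still be a foothold, but it can no longer pivot into the corporate segment.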
Of course they're being targeted, because they're easy targets. For cybercriminals, it's like shooting fish in a barrel. There's not just one but multiple vulnerabilities and security holes associated with these devices — easy entry points into networks. >>I mean, attackers are highly capable. They're organized, they're well-funded, they move fast, they're agile, and they follow the money. As we were saying — you mentioned COVID vaccines and big pharma, healthcare — where did you see advanced persistent threat groups really targeting? Were there any patterns that emerged in terms of other industry types or organizations being targeted? >>Yeah. So just to be clear, when we talk about APT — advanced persistent threat — groups, these are usually the more sophisticated groups, of course. So going back to that theme, these are usually the premeditated, targeted attacks, which usually points to nation-state. Sometimes, of course, there's overlap: they can be affiliated with cybercrime groups, which are typically looking at other targets for ROI, but there's a blend, right? So as an example, if we're looking at the APT groups last year, absolutely number one I would say would be healthcare. Healthcare was one of those, and it's very unfortunate, but obviously with the shift that was happening — pop-up medical facilities, a big rush to change networks, for a good cause of course — with that came security holes, concerns, and targets. And that's what we saw APT groups targeting, going after those, and ransomware and the cybercrime side followed as well, right?
Because if you can go after those critical networks and cripple them, from the cybercriminals' point of view you can expect the victims to pay the ransom, because they feel they need to in order to get those systems back online. In fact, just last year, unfortunately, we saw the first death caused by a denial-of-service attack in healthcare: facilities weren't available because of the cyber attack, patients had to be diverted, and one didn't make it on the way. >>All right, Derek, sufficiently bummed out. So maybe in the time remaining we can talk about remediation strategies. We know there's no silver bullet in security, but what approaches are you recommending for organizations? How are you consulting with folks? >>Sure. Yeah. So a couple of things — the good news is there's a lot we can do about this, right? And basic measures go a long way. A couple of things just to get out of the way — I call it housekeeping, cyber hygiene — but it's always worth reminding. When we talk about keeping security patches up to date, we always have to talk about that, because that is the reality: these vulnerabilities that are still being successful are five to six years old in some cases, the majority two years old. So being able to manage that from an organization's point of view matters. And really treat the new work from home — I don't like to call it work from home; the reality is it's work from anywhere a lot of the time for some people — treat that with a secure-branch methodology: doing things like segmentation on the network, secure wifi access. Multi-factor authentication is a huge must, right? >>So using multi-factor authentication, because passwords are dead, and using things like XDR. XDR is a combination of detection and response for endpoints, with more centralized management, right?
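The multi-factor authentication point above is worth making concrete. Here is a minimal time-based one-time password (TOTP, RFC 6238) generator using only the standard library — a sketch of how authenticator apps derive codes, not any vendor's implementation:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 over a 30-second time counter."""
    key = base64.b32decode(secret_b32)
    now = time.time() if for_time is None else for_time
    counter = struct.pack(">Q", int(now // step))
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at T=59
secret = base64.b32encode(b"12345678901234567890").decode()
assert totp(secret, for_time=59, digits=8) == "94287082"
```

Because the code is derived from a shared secret plus the clock, a phished password alone isn't enough — the attacker also needs a valid code inside the current 30-second window.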
So endpoint detection and response, as an example — those are all good security things. And of course having security inspection — that's what we do. Good threat intelligence baked into your security solution, supported by the labs: antivirus, intrusion prevention, web filtering, sandboxing, and so forth. But that's the security stack. Beyond that, it gets into the end user, right? Everybody has a responsibility. This is that supply chain we talked about. The supply chain is a target for attackers; attackers have their own supply chain as well, and we're also part of that supply chain, right? End users are constantly phished for social engineering. So running phishing campaigns against employees to do better training and awareness is always recommended too. That's what we can do — that's what's recommended to secure the endpoints and the secure branch — and there are things we're also doing in the industry to fight back against cybercrime as well. >>Well, I want to actually talk about that, and about ecosystems and collaboration, because while you have competitors, you all want the same thing. SecOps teams are like superheroes in my book — they're trying to save the world from the bad guys. And I remember I was talking to Robert Gates on theCUBE a couple of years ago, a former defense secretary, and I said, yeah, but don't we have the best security people, and can't we go on the offensive and weaponize that ourselves? Of course, there are examples of that — the US government's pretty good at it, even though they won't admit it. But his answer to me was, yeah, we've got to be careful, because we have a lot more to lose than many countries. So I thought that was pretty interesting. But how do you collaborate — whether it's with the US government or other governments, or even competitors, or your ecosystem?
Maybe you could talk about that a little bit. >>Yeah, this is what makes me tick. I love working with industry — I've actually built programs over 15 years of collaboration in the industry. So, you know, I always say we can't win this war alone. You actually hit on this point earlier: you talked about trying to disrupt the ROI of cybercriminals. Absolutely, that is our target, right? We're always looking at how we can disrupt their business model, and there are obviously a lot of different ways to do that. A couple of things we do: resiliency — that's what we just talked about, increasing the security stack so that they go knocking on someone else's door. But beyond that, it comes down to private-sector collaborations. We co-founded the Cyber Threat Alliance in 2014, as an example. That was our fierce competitors coming in to work with us to share intelligence, because, like you said, we're competitors in the space, but we need to work together to fight the bigger fight. >>And so this is a Venn diagram: let's compare notes, let's team up when there's a breaking attack, and make sure that we have the intelligence — so that we can still remain competitive on the technology stack and the solutions themselves, but let's level the playing field here, because cybercriminals know no borders and they move with great agility. So that's one thing we do in the private sector. There are also public-private sector relationships, right? We're working with Interpol as an example — Interpol's Project Gateway — and that's where we find attribution. So it's not just the what — what are these people doing, what infrastructure — but the who: who are they, where are they operating, what are the tools they're creating?
We've actually worked on cases that have led to warrants and arrests — in one case, a $60 million business email compromise fraud scam. The great news is, if you look at the industry as a whole over the last three to four months, there have been four takedowns: Emotet, NetWalker, and also Egregor recently as well. >>And with Egregor, they're actually going in and arresting the affiliates — so not just the CEOs or kingpins of these organizations, but the people who are distributing the ransomware themselves. And that was an unprecedented step, really important. So you really start to paint a picture of this, again, supply chain — this ecosystem of cybercriminals and how we can hit them where it hurts, from all angles. Most recently, I've been heavily involved with the World Economic Forum; I'm co-author of a report from last year on the Partnership against Cybercrime. And this is really not just private-private sector, but the private and public sector working together. We know a lot about cybercriminals; we can't arrest them, and we can't take servers offline from the data centers, but working together we can have that whole, you know, holistic effect. >>Great. Thank you for that, Derek. What if people want to go deeper? I know you guys mentioned that you do blogs, but are there other resources that they can tap? >>Yeah, absolutely. Everything you can see is on our threat research blog — the Fortinet blog, under threat research. We also put out playbooks — this is more for the heroes, as you called them, the security operations centers. We're doing playbooks on the aggressors, so these are playbooks on the offense: what are they up to, how are they doing that? That's on fortiguard.com. We also release threat signals there.
So we typically release about 50 of those a year, and those are all our insights and views into specific attacks that are happening now. >>Well, Derek Manky, thanks so much for joining us today, and thanks for the work that you and your teams do. Very important. >>Thanks. Yeah, it's a pleasure. And rest assured, we will still be there 24/7, 365. >>Good to know. Good to know. And thank you for watching, everybody. This is Dave Vellante for theCUBE. We'll see you next time.
Derek Manky, Chief, Security Insights & Global Threat Alliances, FortiGuard Labs
>>As we've been reporting, the pandemic has forced CISOs to really shift their spending priorities toward securing remote workers, almost overnight. Zero trust has gone from buzzword to mandate. What's more, as we wrote in our recent cybersecurity breaking analysis, not only must security pros secure an increasingly distributed workforce, but now they have to be wary of software updates in the digital supply chain, including the very patches designed to protect them against cyber attacks. Hello everyone, and welcome to this CUBE conversation. My name is Dave Vellante, and I'm pleased to welcome Derek Manky, who's Chief of Security Insights and Global Threat Alliances for FortiGuard Labs, with fresh data from its global threat landscape report. Derek, welcome. Great to see you. >>Thanks so much for the invitation to speak. It's always a pleasure. >>You're welcome. So first, I wonder if you could explain for the audience: what is FortiGuard Labs, and what's its relationship to Fortinet? >>Right. So FortiGuard Labs is our global SOC, our global threat intelligence operations center. It never sleeps; this is the beat. It's been here since inception at Fortinet, so it's 21 years in the making, since Fortinet was founded. We have built this in-house — we don't OEM technology; we built everything from the ground up, including creating our own training programs for our analysts. We're following malware, following exploits. We even have a unique program that I created back in 2006 — an ethical hacking program — and it's zero-day research. So we try to meet the hackers, the bad guys, at their game, and we of course do that responsibly: we work with vendors to close holes and create virtual patches. So it's everything from customer protection, first and foremost, to following the threat landscape and cybercrime.
It's very important to understand who they are, what they're doing, who they're targeting, and what tools they're using. >>Yeah, that's great. Some serious DNA and skills in that group. And it's critical, because like you said, you can minimize the spread of that malware very, very quickly. So now you have the global threat landscape report — we're going to talk about that — but what exactly is it? >>Right. So the global threat landscape report is a summary of all the data that we collect over a period of time. We release it biannually, two times a year. Cybercrime is changing very fast, as you can imagine, so while we do release security blogs and what we call threat signals for breaking security events, we have a lot of other vehicles to release threat intelligence. But this threat landscape report is truly global: it looks at all of our global data. We have over 5 million sensors worldwide at FortiGuard Labs, and we're processing — I know it seems like a very large amount — north of a hundred billion threat events in just one day. And we have to take on the task of taking all of that data, putting it on the scale of half a year, and compiling it into something that's digestible. That's a very tough task, as you can imagine, so we have to work with huge technology backends — machine learning, artificial intelligence, automation — and of course our analysts' review to do that. >>Yeah. So this year, of course — every year is a battle, but this year was an extra battle. Can you explain what you saw in terms of the hacker dynamics over the past, let's say, 12 months? I know you do this twice a year, but what trends did you see evolving throughout the year, and what have you seen in the way that attackers have exploited this expanded attack surface outside of the corporate network?
>>Yeah, it was quite interesting. Last year certainly was not normal, like we all say, and that was no exception for cybersecurity. If we look at cybercriminals and how they pivoted and adapted to this changed threat landscape — cybercriminals are always trying to take advantage of the weakest link in the chain. They're always trying to prey on fear and ride the waves of global trends and themes. We've seen this before with natural disasters, as an example — you know, charity-style scams and campaigns — and those are usually limited to the region where the incident happened, and they usually live about two to three weeks, maybe a month at the most, and then they'll move on to the next thing that's breaking. Of course, because COVID is so global and dominant, we saw attacks coming in, in well over 40 different languages as an example, in regions all across the world. It wasn't lasting two to three weeks; it lasted for the better part of a year. >>And of course they're using this as a vehicle, right — preying on the fear. They're doing everything from the initial lockdown phishing lures, as COVID-19 moved on to layoff notices, then to phase-one reopenings, all the way up to — fast-forward to where we are today — vaccine rollout and development. So there was always that new flavor and theme that they were rolling out, but because it was so successful for them, they didn't have to innovate too much, right? They didn't have to shift to new trends and themes, or really develop new RAT families, as an example, or new sophisticated malware. That was the first half of the year. In the second half of the year, of course, people started to experience COVID fatigue, and — we did a lot of education around this — people started to become more aware of this threat.
And so, as we expected, cybercriminals started to become more sophisticated with their attacks. We saw an expansion in different ransomware families. We saw more of a shift of focus onto targeting the digital supply chain, as an example, and that was really toward Q4. So it was a long-lived year, with success on the COVID themes, targeting healthcare as an example — a lot of the organizations that were really in a vulnerable position, I would say. >>So, okay, I want to clarify something, because my assumption was that they actually did really increase the sophistication, but it sounds like that was kind of a first-half trend — not only did they adapt, they adapted to these new vulnerabilities. My sense was that when you talk about the digital supply chain, that was a fairly sophisticated attack. Am I getting that right — that they increased sophistication in the first half, and then they sort of deployed it? What actually happened there, from your data? >>Well, generally there are two types of attacks that we look at. We look at the premeditated, sophisticated attacks, which can involve a lot of ramp-up work on their end — a lot of time developing the weaponization phase: developing the exploits and the sophisticated malware they're going to use for the campaign, reconnaissance, understanding the targets, what platforms they're on, the blueprinting — that DNA of the supply chain. Those take time, in fact years. Even if we look back to the Stuxnet attacks 10-plus years ago, as an example — that was on nuclear centrifuges, and it had four different zero-day weapons at the time. That was very sophisticated; it took over two years to develop, as an example.
So some of these can take years of time to develop, but they're very specific in terms of the targets they're going to go after — obviously, the ROI from their end. >>The other type of attack that we see is ongoing: these broad, wide-sweeping attacks. And the reality for those is that they don't, unfortunately, need to be too sophisticated. Those were the ones I was talking about that were really just playing on the COVID theme, and they still do today with the vaccine rollout and development. But it's really because they're just relying on social engineering, using topical themes. And in fact, the weapons they're using — these vulnerabilities — from our research data, and this was highlighted in our threat landscape report, were on average two to three years old. So we're not talking about fresh vulnerabilities you've got to patch right away. These are things that should have been patched two years ago, but the attackers are still, unfortunately, having success with them. >>So you mentioned Stuxnet as the former sort of example, one of the types of attacks that you see, and I always felt like that was a watershed moment — one of the most sophisticated, if not the most sophisticated, attacks we'd ever seen. When I talk to CISOs about the recent government hack, they suggest — maybe they don't suggest it; I infer — that it was of similar sophistication, maybe thousands of people working on it for years and years. Is that accurate, or not necessarily? >>Yeah, there are definitely some comparisons there. You know, one of the largest is that both attacks used digital certificate impersonation — they were digitally signed.
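The code-signing idea Derek describes — a signature that proves an update came from the vendor and wasn't altered in transit — can be sketched in a few lines. This is a toy illustration using a shared-secret HMAC in place of real X.509 certificate chains and asymmetric keys; the key material and payload names are hypothetical:

```python
import hashlib
import hmac

# Hypothetical signing key. Real vendors hold an asymmetric private key,
# and verification uses the public key from the certificate instead.
SIGNING_KEY = b"vendor-private-key"

def sign_artifact(payload: bytes) -> str:
    """Produce a tamper-evident signature over the update bytes."""
    return hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()

def verify_artifact(payload: bytes, signature: str) -> bool:
    """Recompute the signature and compare in constant time."""
    return hmac.compare_digest(sign_artifact(payload), signature)

update = b"software update v1.2"
sig = sign_artifact(update)
assert verify_artifact(update, sig)             # untouched update verifies
assert not verify_artifact(update + b"!", sig)  # any tampering breaks it
```

The point of the attacks discussed above is that once the signing key or certificate chain itself is compromised, `verify_artifact` happily blesses a malicious payload: the math still checks out, but the trust anchor doesn't.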
So, you know, of course that whole technology using cryptography is designed by design, uh, to say that, you know, this piece of software installed in your system, hassles certificate is coming from the source. It's legitimate. Of course, if that's compromised, that's all out of the window. And, um, yeah, this is what we saw in both attacks. In fact, you know, stocks in that they also had digitally designed, uh, certificates that were compromised. So when it gets to that level of students or, uh, sophistication, that means definitely that there's a target that there has been usually months of, of, uh, homework done by cyber criminals, for reconnaissance to be able to weaponize that. >>W w what did you see with respect to ransomware? What were the trends there over the past 12 months? I've heard some data and it's pretty scary, but what did you see? >>Yeah, so we're actually, ransomware is always the thorn in our side, and it's going to continue to be so, um, you know, in fact, uh, ransomware is not a new itself. It was actually first created in 1989, and they demanded ransom payments through snail mail. This was to appeal a box, obviously that, that, that didn't take off. Wasn't a successful on the internet was porn at the time. But if you look at it now, of course, over the last 10 years, really, that's where it ran. The ransomware model has been, uh, you know, lucrative, right? I mean, it's been, um, using, uh, by force encrypting data on systems, so that users had to, if they were forced to pay the ransom because they wanted access to their data back data was the target currency for ransomware. That's shifted now. And that's actually been a big pivotal over the last year or so, because again, before it was this let's cast a wide net, in fact, as many people as we can random, um, and try to see if we can hold some of their data for ransom. >>Some people that data may be valuable, it may not be valuable. Um, and that model still exists. 
Uh, and we see that, but really the big shift that we saw last year and the threat landscape before it was a shift to targeted rats. So again, the sophistication is starting to rise because they're not just going out to random data. They're going out to data that they know is valuable to large organizations, and they're taking that a step further now. So there's various ransomware families. We saw that have now reverted to extortion and blackmail, right? So they're taking that data, encrypting it and saying, unless you pay us as large sum of money, we're going to release this to the public or sell it to a buyer on the dark web. And of course you can imagine the amount of, um, you know, damages that can happen from that. The other thing we're seeing is, is a target of going to revenue services, right? So if they can cripple networks, it's essentially a denial of service. They know that the company is going to be bleeding, you know, X, millions of dollars a day, so they can demand Y million dollars of ransom payments, and that's effectively what's happening. So it's, again, becoming more targeted, uh, and more sophisticated. And unfortunately the ransom is going up. >>So they go to where the money is. And of course your job is to, it's a lower the ROI for them, a constant challenge. Um, we talked about some of the attack vectors, uh, that you saw this year that, that cyber criminals are targeting. I wonder if, if, you know, given the work from home, if things like IOT devices and cameras and, you know, thermostats, uh, with 75% of the work force at home, is this infrastructure more vulnerable? I guess, of course it is. But what did you see there in terms of attacks on those devices? >>Yeah, so, uh, um, uh, you know, unfortunately the attack surface as we call it, uh, so the amount of target points is expanding. It's not shifting, it's expanding. 
We still see, um, I saw, I mentioned earlier vulnerabilities from two years ago that are being used in some cases, you know, over the holidays where e-commerce means we saw e-commerce heavily under attack in e-commerce has spikes since last summer, right. It's been a huge amount of traffic increase everybody's shopping from home. And, uh, those vulnerabilities going after a shopping cart, plugins, as an example, are five to six years old. So we still have this theme of old vulnerabilities are still new in a sense being attacked, but we're also now seeing this complication of, yeah, as you said, IOT, uh, B roll out everywhere, the really quick shift to work from home. Uh, we really have to treat this as if you guys, as the, uh, distributed branch model for enterprise, right. >>And it's really now the secure branch. How do we take, um, um, you know, any of these devices on, on those networks and secure them, uh, because yeah, if you look at the, what we highlighted in our landscape report and the top 10 attacks that we're seeing, so hacking attacks hacking in tabs, this is who our IPS triggers. You know, we're seeing attempts to go after IOT devices. Uh, right now they're mostly, uh, favoring, uh, well in terms of targets, um, consumer grade routers. Uh, but they're also looking at, um, uh, DVR devices as an example for, uh, you know, home entertainment systems, uh, network attached storage as well, and IP security cameras, um, some of the newer devices, uh, what, the quote unquote smart devices that are now on, you know, virtual assistance and home networks. Uh, we actually released a predictions piece at the end of last year as well. So this is what we call the new intelligent edge. And that's what I think is we're really going to see this year in terms of what's ahead. Um, cause we always have to look ahead and prepare for that. But yeah, right now, unfortunately, the story is, all of this is still happening. IOT is being targeted. 
Of course they're being targeted, because they're easy targets. For cybercriminals, it's like shooting fish in a barrel. There's not just one but multiple vulnerabilities, security holes, associated with these devices, easy entry points into networks. >>I mean, attackers are highly capable. They're organized, they're well-funded, they move fast, they're agile, and they follow the money. As we were saying, you mentioned COVID vaccines and big pharma, healthcare. Where did you see advanced persistent threat groups really targeting? Were there any patterns that emerged in terms of other industry types or organizations being targeted? >>Yeah, so just to be clear, when we talk about APT groups, advanced persistent threat groups, these are usually the more sophisticated groups, of course. So going back to that theme, these are usually the premeditated, targeted attacks, which usually points to nation-state. Sometimes, of course, there's overlap; they can be affiliated with cybercrime groups, which are typically looking at other targets for ROI, so there's a blend, right? So as an example, if we're looking at the APT groups we tracked last year, number one, I would say, would be healthcare. Healthcare was one of those, and it's very unfortunate, but obviously with the shift that was happening to pop-up medical facilities, there was a big rush to change networks, for a good cause of course, but with that came security holes and concerns. That's what we saw APT groups targeting, and ransomware and the cybercrime side followed as well, right?
Because if you can target those critical networks and cripple them, from the cybercriminals' point of view you can expect the victims to pay the ransom, because they think they need to in order to get those systems back online. In fact, in the last year or two, unfortunately, we saw the first death caused by a denial of service attack in healthcare, right? Facilities weren't available because of the cyber attack, patients had to be diverted, and one didn't make it on the way. >>All right, Derek, sufficiently bummed out. So maybe in the time remaining we can talk about remediation strategies. We know there's no silver bullet in security, but what approaches are you recommending for organizations? How are you consulting with folks? >>Sure, yeah. So a couple of things. The good news is there's a lot that we can do about this, right? And basic measures go a long way. So a couple of things just to get out of the way, I call it housekeeping, cyber hygiene, but it's always worth reminding. We always have to talk about keeping security patches up to date, because the reality is, as I said, the vulnerabilities that are still being exploited successfully are five to six years old in some cases, and the majority are two years old. So being able to manage that from an organization's point of view, and really treating the new work from home, I don't like to call it work from home, the reality is it's work from anywhere a lot of the time for some people, really treating that as a secure branch methodology: doing things like segmentation on the network, secure wifi access, and multi-factor authentication, which is a huge muscle, right? So using multi-factor authentication, because passwords are dead, and using things like XDR. XDR is a combination of detection and response for endpoints, with mass centralized management, right?
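The multi-factor authentication piece mentioned above, the one-time codes generated by authenticator apps, comes down to a small amount of math defined in RFC 4226 (HOTP) and RFC 6238 (TOTP). Here is a minimal sketch in Python's standard library, purely as an illustration of the mechanism, not anything specific to Fortinet's products:

```python
import base64
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over a big-endian counter, dynamically truncated."""
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # low nibble of last byte picks the 4-byte window
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HOTP with the counter derived from the current time."""
    key = base64.b32decode(secret_b32, casefold=True)
    return hotp(key, int(time.time()) // interval, digits)
```

The server and the authenticator share only the secret; each side derives the same six-digit code independently, which is why a stolen static password alone is no longer enough to get in.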
So endpoint detection and response, as an example, those are all good security measures. And of course having security inspection, that's what we do: good threat intelligence baked into your security solution, supported by the labs angle. So that's antivirus, intrusion prevention, web filtering, sandboxing, and so forth. But that's the security stack; beyond that, it gets into the end user, right? Everybody has a responsibility. This is that supply chain we talked about. The supply chain is a target for attackers, and attackers have their own supply chain as well, and we're also part of their supply chain, right? The end users are constantly phished for social engineering. So running phishing campaigns against employees to do better training and awareness is always recommended. So that's what we can do, obviously, and that's what's recommended to secure the endpoints and the secure branch. There are also things we're doing in the industry to fight back against that crime as well. >>Well, I want to actually talk about that, and talk about ecosystems and collaboration, because while you have competitors, you all want the same thing. SecOps teams are like superheroes in my book. I mean, they're trying to save the world from the bad guys. And I remember I was talking to Robert Gates, the former defense secretary, on theCUBE a couple of years ago, and I said, yeah, but don't we have the best security people, and can't we go on the offensive and weaponize that ourselves? Of course, there are examples of that; the US government's pretty good at it, even though they won't admit it. But his answer to me was, yeah, we've got to be careful, because we have a lot more to lose than many countries. So I thought that was pretty interesting. But how do you collaborate, whether it's with the US government, or other governments, or even competitors, or your ecosystem?
Maybe you could talk about that a little bit. >>Yeah, this is what makes me tick. I love working with industry. I've actually built collaboration programs in the industry for 15 years. So, you know, I always say we can't win this war alone. You actually hit on this point earlier, when you talked about trying to disrupt the ROI of cybercriminals. Absolutely, that is our target, right? We're always looking at how we can disrupt their business model, and there are obviously a lot of different ways to do that. So a couple of things we do. Resiliency, that's what we just talked about, increasing the security stack so that they go knocking on someone else's door. But beyond that, it comes down to private sector collaborations. So we were a co-founder of the Cyber Threat Alliance in 2014, as an example. This was our fierce competitors coming in to work with us to share intelligence, because, like you said, we're competitors in the space, but we need to work together to fight the better fight. >>And so this is a Venn diagram. Let's compare notes, let's team up when there's a breaking attack and make sure that we have the intelligence, so that we can still remain competitive on the technology stack and the solutions themselves, but let's level the playing field here, because cybercriminals move without borders and with great agility. So that's one thing we do in the private sector. There are also public-private sector relationships, right? So we're working with Interpol as an example, Interpol's Project Gateway, and that's where we find attribution. So it's not just the what, the infrastructure these people are using, but the who. Who are they? Where are they operating? What tools are they creating?
We've actually worked on cases that have led to warrants and arrests, in one case a $60 million business email compromise fraud scam. The great news is, if you look at the industry as a whole over the last three to four months, there have been four takedowns, including Emotet, NetWalker, and also Egregor recently as well. >>And with Egregor, they're actually going in and arresting the affiliates. So not just the kingpins of these organizations, but the people who are distributing the ransomware themselves. And that was an unprecedented step, really important. So you really start to paint a picture of this, again, supply chain, this ecosystem of cybercriminals, and how we can hit them where it hurts from all angles. Most recently, I've been heavily involved with the World Economic Forum. I'm a co-author of a report from last year from the Partnership Against Cybercrime. And this is really not just the private sector, but the private and public sector working together. We know a lot about cybercriminals; we can't arrest them, we can't take servers offline from the data centers, but working together we can have that holistic effect. >>Great, thank you for that, Derek. What if people want to go deeper? I know you guys mentioned that you do blogs, but are there other resources that they can tap? >>Yeah, absolutely. Everything you can see is on our threat research blog, on the Fortinet blog under threat research. We also put out playbooks; this is more for the heroes, as you called them, the security operations centers. We're doing playbooks on the aggressors, and so this is a playbook on the offense. What are they up to? How are they doing it? That's on fortiguard.com. We also release threat signals there.
So we typically release about 50 of those a year, and those are all our insights and views into specific attacks that are happening now. >>Well, Derek Manky, thanks so much for joining us today. And thanks for the work that you and your teams do. Very important. >>Thanks, yeah, it's a pleasure. And rest assured, we will still be there 24/7, 365. >>Good to know. Good to know. And thank you for watching, everybody. This is Dave Vellante for theCUBE. We'll see you next time.
Matt Burr, Scott Sinclair, Garrett Belschner | The Convergence of File and Object
>>From around the globe, presenting the convergence of file and object, brought to you by Pure Storage. >>Okay, we're back with the convergence of file and object and a power panel. This is a special content program made possible by Pure Storage and co-created with theCUBE. Now in this series, what we're doing is exploring the coming together of file and object storage, trying to understand the trends that are driving this convergence, the architectural considerations that users should be aware of, and which use cases make the most sense for so-called unified fast file and object storage. And with me are three great guests to unpack these issues. Garrett Belschner is a data center solutions architect with CDW. Scott Sinclair is a senior analyst at Enterprise Strategy Group; he's got deep experience on enterprise storage and brings that independent analyst perspective. And Matt Burr is back with us. Gentlemen, welcome to the program. >>Thank you. >>Hey Scott, let me start with you and get your perspective on what's going on in the market with object, with cloud, the huge amount of unstructured data out there. It lives in files. Give us your independent view of the trends that you're seeing out there. >>Well, Dave, you know where to start. I mean, surprise, surprise, data's growing. But one of the big things that we've seen is that we've been talking about data growth for, what, decades now, but what's really fascinating, what's changed, is that because of the digital economy, digital business, digital transformation, whatever you call it, people are now not just storing data, they actually have to use it. And so we see this in trends like analytics and artificial intelligence. And what that does is it's just increasing the demand not only for consolidation of massive amounts of storage, which we've seen for a while, but also the demand for incredibly low-latency access to that storage.
And I think that's one of the things that we're seeing that's driving this need for convergence, as you put it: having multiple protocols consolidated onto one platform, but also the need for high-performance access to that data. >>Thank you for that, a great setup. I wrote down three topics that we're going to unpack as a result of that. So Garrett, let me go to you. Maybe you can give us the perspective of what you see with customers. Is this a push, where customers are saying, hey, listen, I need to converge my file and object? Or is it more a story where they're saying, Garrett, I have this problem, and then you see unified file and object as a solution? >>Yeah, I think for us it's, you know, taking that consultative approach with our customers and really hearing the pain around some of the pipelines, the way that they're going to market with data today, and what problems they're seeing. We're also seeing a lot of the change driven by the software vendors as well. So being able to support a disaggregated design, where you're not having to upgrade and maintain everything as a single block, has been a place where we've seen a lot of customers pivot, to where they have more flexibility as they need to maintain larger volumes of data and higher-performance data. Having the ability to do that separately from compute and cache and some of those other layers is really critical. >>So, Matt, I wonder if you could follow up on that. So Garrett was talking about this disaggregated design, so I like it, you know, distributed cloud, et cetera. But then we're talking about bringing things together in one place, right? So square that circle. How does this fit in with this hyper-distributed cloud edge that's getting built out? >>Yeah.
You know, I could give you the easy answer on that, but I can also pass it back to Garrett in the sense that, you know, Garrett, maybe it's important to talk about Elastic and Splunk and some of the things that you're seeing in that world, and how that fits. I think you can give a pretty qualified answer relative to what your customers are seeing. >>Oh, that'd be great, please. >>Yeah, absolutely, no problem at all. So, you know, I think with Splunk moving from its traditional design, its classic design, whatever you want to call it, up into SmartStore, that was one of the first places we saw that kind of move toward separating object out. And I think a lot of that comes from their own move to the cloud and updating their code to take advantage of object in the cloud. But we're starting to see, with Vertica Eon, for example, Elastic, and other folks, that same type of approach, where in the past we were building out lots of 2U servers and jamming them full of SSDs and NVMe drives. That was great, but it doesn't really scale. >>And it kind of gets into that same problem that we see with hyperconvergence a little bit, where you're always adding something maybe that you didn't want to add. So I think, again, being driven by software is really where we're seeing the world open up there. But that whole idea of having that as a hub and a central place, where you can then leverage that out to other applications, whether that's out to the edge for machine learning or AI applications to take advantage of it, I think that's where that convergence really comes back in.
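The SmartStore shift Garrett describes is exactly this separation: indexers keep a local cache while warm buckets live in an object store, configured in `indexes.conf`. As a rough, hedged sketch (the bucket name and endpoint are placeholders, not anything from this discussion), the wiring looks roughly like:

```ini
# indexes.conf (sketch): point an index at a remote S3-compatible object volume
[volume:remote_store]
storageType = remote
path = s3://example-bucket/smartstore
remote.s3.endpoint = https://s3.example.com

[example_index]
# remotePath sends warm/cold buckets to the object store;
# local storage acts as a cache in front of it
remotePath = volume:remote_store/$_index_name
homePath   = $SPLUNK_DB/example_index/db
coldPath   = $SPLUNK_DB/example_index/colddb
thawedPath = $SPLUNK_DB/example_index/thaweddb
```

The design choice is the point: once buckets live in the object tier, indexer (compute) capacity and storage capacity can be grown independently, which is the disaggregation the panel keeps returning to.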
Um, but I think, like Scott mentioned earlier, it's really that folks are now doing things with the data, where before they were really just storing it and trying to figure out what they were actually going to do with it when they needed to. So this is making it possible. >>Yeah, and Dave, if I could just tack onto the end of Garrett's answer there: in particular, Vertica with Eon mode and the ability to leverage sharded subclusters give you sort of an advantage in terms of being able to isolate performance hotspots, and an advantage in being able to do that on a FlashBlade, for example. So sharded subclusters allow you to sort of say, I am going to give prioritization to this particular element of my application and my dataset, but I can still share that data across those subclusters. So, you know, as you see Vertica with Eon mode, >>as you see Splunk advance with SmartStore, these are all sorts of advancements that, you know, it's a chicken-and-egg thing. They need faster storage, they need sort of a consolidated data store, and that's what sort of allows these things to drive forward. >>Yes, the Vertica Eon mode, it's the ability to separate compute and storage and scale independently. I think Vertica, if they're not the only one, they're one of the only ones, I think they might even be the only one, that does that in the cloud and on-prem, and that sort of plays into this distributed nature of this hyper-distributed cloud, as I sometimes call it. And I'm interested in the data pipeline, and I wonder, Scott, if we can talk a little bit about that, maybe where unified object and file fit. I mean, I'm envisioning this distributed mesh, and then UFFO is sort of a node on that that I can tap when I need it.
But, Scott, what are you seeing as the state of infrastructure as it relates to the data pipeline, and the trends there? >>Yeah, absolutely, Dave. So when I think data pipeline, I immediately gravitate to analytics or machine learning initiatives, right? And so one of the big things we see, and this is an interesting trend: we continue to see increased investment in AI, increased interest, and as companies get started, they think, okay, well, what does that mean? Well, I've got to go hire a data scientist. Okay, well, that data scientist probably needs some infrastructure. And what often happens in these environments is it ends up being a bespoke environment, a one-off environment, and then over time organizations run into challenges. And one of the big challenges is that the data science team, people whose jobs are outside of IT, spend way too much time trying to get the infrastructure to keep up with their demands, predominantly around data performance. So one of the ways organizations, especially those with artificial intelligence workloads in production, have started mitigating that, and we found this in our research, is by deploying flash all across the data pipeline. >>We have data on this. Sorry to interrupt, but Pat, if you could bring up that chart, that would be great. So take us through this, Scott, and share with us what we're looking at here. >>Yeah, absolutely. So Dave, I'm glad you brought this up. We did this study, I want to say late last year, and one of the things we looked at was across artificial intelligence environments. Now, one thing that you're not seeing on this slide is that we went through and asked all around the data pipeline, and we saw flash everywhere. But I thought this was really telling, because this is around data lakes.
And when many people think about the idea of a data lake, they think about it as a repository, a place where you keep maybe cold data. And what we see here is, especially within production environments, a pervasive use of flash storage. So I think 69% of organizations are saying their data lake is mostly flash or all flash, and I think we had 0% that don't have any flash in that environment. So organizations are finding that flash is an essential technology to allow them to harness the value of their data. >>So Garrett, and then Matt, I wonder if you could chime in as well. We talk about digital transformation, and I sometimes call it the COVID-forced march to digital transformation. And I'm curious as to your perspective on things like machine learning and its adoption, and Scott, you may have a perspective on this as well. You know, we had to pivot: we had to get laptops, we had to secure the endpoints, you know, VDI, those became super high priorities. What happened to injecting AI into my applications, and machine learning? Did that go on the back burner? Was that accelerated along with the need to digitally transform? Garrett, I wonder if you could share with us what you saw with customers last year. >>Yeah, I mean, I think we definitely saw an acceleration. I think folks in my market are still figuring out how they inject that into more of a widely distributed business use case. But again, this data hub is allowing folks to now take advantage of the data that they've had in these data lakes for a long time. I agree with Scott.
I mean, many of the data lakes that we have were somewhat flash-accelerated, but they were typically made up of large-capacity, slower-spinning nearline drives, accelerated with some flash. But I'm really starting to see folks now look at some of those older Hadoop implementations and really leverage new ways to look at how they consume data. And many of those redesign customers are coming to us wanting to look at all-flash solutions. So we're definitely seeing it, and we're seeing an acceleration toward folks trying to figure out how to actually use it in more of a business sense now. Before, I feel it was a little bit more skunkworks, people dealing with it in a much smaller situation, maybe in the executive offices, trying to do some testing and things. >>Scott, you're nodding away. Anything you can add in here? >>Yeah. So, well, first off, it's great to get confirmation that the stuff we're seeing in our research, Garrett's seeing out in the field, in the real world. But as it relates to the past year, it's been really fascinating. So one of the things we study at ESG is IT buying intentions: what are the initiatives that companies plan to invest in? And at the beginning of 2020, we saw heavy interest in machine learning initiatives. Then you transition to the middle of 2020, in the midst of COVID; some organizations continued on that path, but a lot of them had to pivot, right? How do we get laptops to everyone? How do we continue business in this new world? Well, now, as we enter into 2021, and hopefully we're coming out of this pandemic era, we're getting into a world where organizations are pivoting back toward these strategic investments around how do I maximize the usage of data, and they're actually accelerating those, because they've seen the importance of digital business initiatives over the past year.
Yeah, Matt, I mean, when we exited 2019, we saw a narrowing of experimentation, and our premise was that organizations were going to start operationalizing all their digital transformation experiments. And then we had a ten-month Petri dish on digital. So what are you seeing in this regard? >>A ten-month Petri dish is an interesting way to describe it. You know, we saw another candidate for pivot in there around ransomware as well, right? Security entered into the mix, which took people's attention away from some of this as well. I mean, look, I'd like to bring this up just a level or two, because what we're actually talking about here is progress, right? And progress is an inevitability. Whether you believe it's by 2025, or you think it's 2035 or 2050, it doesn't matter: we're on a forced march to the eradication of disk. And that is happening in many ways due to some of the things that Garrett was referring to, and what Scott was referring to in terms of what our customers' demands are for how they're going to actually leverage the data that they have. >>And that brings me to kind of my final point on this, which is that we see customers in three phases. There's the first phase, where they say, hey, I have this large data store, and I know there's value in there, but I don't know how to get to it. Or: I have this large data store, and I started a project to get value out of it, and we failed. Those could be customers that marched down the Hadoop path early on. They got some value out of it, but they realized that HDFS wasn't going to be a modern protocol going forward, for any number of reasons.
You know, the first being: hey, if I have gold.master, how do I know that gold.4 is consistent with my gold.master? So data consistency matters.
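That gold.master question is, at bottom, a fingerprinting problem: two copies are consistent when their contents hash to the same digest. A minimal, hedged sketch in Python (the chunked interface is an illustration, not any particular product's API):

```python
import hashlib
from typing import Iterable

def fingerprint(chunks: Iterable[bytes]) -> str:
    """Hash a stream of byte chunks into one stable content fingerprint."""
    h = hashlib.sha256()
    for chunk in chunks:
        h.update(chunk)
    return h.hexdigest()

def consistent(master: Iterable[bytes], copy: Iterable[bytes]) -> bool:
    """True when a copy's content matches the master, regardless of chunking."""
    return fingerprint(master) == fingerprint(copy)
```

Because the digest covers the concatenated content, the same bytes split into different chunk sizes still compare equal, which is the property you want when a replica has been re-chunked by a different storage tier.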
I mean, I think part of it we're solving here obviously with Pure bringing, you know, flash to a market that traditionally was utilizing a much slower media. You know, the other thing that I see that's very nice with FlashBlade, for example, is the ability to kind of do things, you know, once you get it set up, a blade at a time. I mean, a lot of the things that we see come from just kind of a more simplistic approach to this: a lot of these teams don't have big budgets, and being able to kind of break things down into almost a blade-type chunk, I think, has really kind of allowed folks to get more projects and things off the ground, because they don't have to buy a full expensive system to run these projects. So that's helped a lot. >>I think the wider use cases have helped a lot. So, Matt mentioned ransomware; you know, using SafeMode as a place to help with ransomware has been a really big growth spot for us. We've got a lot of customers very interested and excited about that. And the other thing that I would say is bringing DevOps into data is another thing that we're seeing. So kind of that push towards DataOps, and really kind of using automation and infrastructure as code as a way to now drive things through the system, the way that we've seen with automation through DevOps, is really an area we're seeing a ton of growth with from a services perspective. >>Guys, any other thoughts on that? I mean, I'll tee it up there. We are seeing some bleeding edge, which is somewhat counterintuitive, especially from a cost standpoint: organizational changes at some companies; think of some of the internet companies that do music, for instance, and are adding podcasts, et cetera. And those are different data products.
We're seeing them actually reorganize their data architectures to make them more distributed, and actually put the domain heads, the business heads, in charge of the data and the data pipeline. And that is maybe less efficient, but, again, some of these are bleeding edge. What else are you guys seeing out there that might be some harbinger of the next decade? >>I'll go first. You know, I think, specific to the construct that you threw out, Dave, one of the things that we're seeing is, you know, the application owner, maybe it's the DevOps person, but, you know, maybe it's the application owner through the DevOps person, they're becoming more technical in their understanding of how infrastructure interfaces with their application. I think, you know, what we're seeing on the FlashBlade side is we're having a lot more conversations with application people than just IT people. It doesn't mean that the IT people aren't there; the IT people are still there for sure, they have to deliver the service, et cetera. But, you know, the days of IT, you know, building up a catalog of services and a business owner subscribing to one of those services, you know, picking whatever sort of fits their need, I don't think that construct holds. I think that's the construct that changes going forward. The application owner is becoming much more prescriptive about what they want the infrastructure to fit, how they want the infrastructure to fit into their application. And that's a big change. And for, you know, certainly folks like Garrett and CDW, you know, they do a good job with this, being able to sort of get to the application owner and bring those two sides together. There's a tremendous amount of value there. For us, it's been a little bit of a retooling; we've traditionally sold to the IT side of the house.
And, you know, we've had to teach ourselves how to go talk the language of applications. So, you know, I think you pointed out a good construct there, and, you know, that application owner camp playing a much bigger role in what they're expecting from the performance of IT infrastructure I think is a key change. >>Interesting. I mean, that definitely is a trend that puts you guys closer to the business, where the infrastructure team is serving the business, as opposed to, sometimes I talk to data experts and they're frustrated, especially data owners or data product builders, who are frustrated that they feel like they have to beg the data pipeline team to get, you know, new data sources or get data out. How about the edge? You know, maybe Scott, you can kick us off. I mean, we're seeing, you know, the emergence of edge use cases, AI inferencing at the edge, a lot of data at the edge. What are you seeing there, and how does this unified file and object, which I'll bring us back to, fit in? >>Wow, Dave, how much time do we have? Um, tell me, first of all, Scott, why don't you just tell everybody what the edge is? Yeah, you got it all figured out. How much time do you have? End of the day. And that's a great question, right? If you take a step back, I think it comes back to, Dave, something you mentioned: it's about extracting value from data. And what that means is, when you extract value from data, what it does is, as Matt pointed out, the influencers or the users of data, the application owners, they have more power, because they're driving revenue now. And so what that means is, from an IT standpoint, it's not just, hey, here are the services you get, use them or lose them, or, you know, don't throw a fit. It is, no, I have to adapt, I have to follow what my application owners tell me.
Now, when you bring that back to the edge, what it means is that data is not localized to the data center. I mean, we just went through a nearly 12 month period where the entire workforce for most of the companies in this country went distributed, and business continued. So if business is distributed, data is distributed. And that means in the data center, that means at the edge, that means in the cloud, and that means in tons of other places. And what it also means is you have to be able to extract and utilize data anywhere it may be. And I think that's something that we're going to continue to see. And I think it comes back to, you know, if you think about key characteristics, we've talked about things like performance and scale for years, but we need to start rethinking them, because on one hand, we need to get performance everywhere. But also in terms of scale, and this ties back to some of the other initiatives and getting value from data, there's something I call the massive success problem. One of the things we see, especially with workloads like machine learning, is businesses find success with them, and as soon as they do, they say, well, I need about 20 of these projects now. Well, all of a sudden that overburdens IT organizations, especially across core and edge and cloud environments. And so an environment's ability to meet performance and scale demands, wherever it needs to be, is something that's really important. You know, >>Dave, I'd like to just sort of tie together two things that I think I heard from Scott and Garrett that I think are important, and it's around this concept of scale. You know, some of us are old enough to remember the day when kind of a 10 terabyte blast radius was too big of a blast radius for people to take on, or a terabyte of storage was considered to be, you know, an exemplary budget environment. Right.
Now we sort of think of terabytes kind of like we used to think of gigabytes, in some ways. Petabyte, like, you don't have to explain to anybody what a petabyte is anymore. And, you know, what's on the horizon, and it's not far off, are exabyte-type dataset workloads. And you start to think about what could be in that exabyte of data. >>We've talked about how you extract that value, and we've talked about sort of how you start. But if the scale is big, not everybody's going to start at a petabyte or an exabyte. To Garrett's point, the ability to start small and grow into these products, or excuse me, these projects, I think is a really fundamental concept here, because you're not going to just go say, I'm going to kick off a five petabyte project; whether you do that on disk or flash, it's going to be expensive, right? But if you could start at a couple of hundred terabytes, not just as a proof of concept, but as something that, you know, you could get predictable value out of, then you could say, hey, this either scales linearly or non-linearly in a way that I can then map my investments to how I can go dig deeper into this. That's how these successful projects are going to start, because for the people that are starting with these very large, sort of expansive, you know, greenfield projects at multi-petabyte scale, it's going to be hard to realize near-term value. Excellent. >>We've got to wrap, but Garrett, I wonder if you could close it out. When you look forward, you talk to customers, do you see this unification of file and object? Is this an evolutionary trend? Is it something that is going to be a lever that customers use? How do you see it evolving over the next two, three years and beyond?
>>Yeah, I mean, I think from our perspective, just from what we're seeing from the numbers within the market, the amount of growth that's happening with unstructured data is really just starting to finally hit this data deluge, or whatever you want to call it, that we've been talking about for so many years. It really does seem to now be becoming true, as we start to see things scale out and folks really settle into, okay, I'm going to use the cloud to start and maybe train my models, but now I'm going to get it back on prem because of latency or security or whatever the decision points are there. This is something that is not going to slow down. And I think, you know, folks like Pure having the tools that they give us to use and bring to market with our customers are really key and critical for us. So I see it as a huge growth area and a big focus for us moving forward. >>Guys, great job unpacking a topic that, you know, has been covered a little bit, but I think we covered some ground that is new. And so thank you so much for those insights and that data; really appreciate your time. >>Thanks, Dave. Thanks. Yeah. Thanks, Dave. >>Okay. And thank you for watching the convergence of file and object. Keep it right there. We're right back after the short break.
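Matt's point about starting a project at a couple of hundred terabytes and checking how cost and value scale before committing to a multi-petabyte build-out comes down to simple arithmetic. Here is a rough, illustrative sketch; the capacities, the $/TB rate, and the volume-discount curve are all hypothetical assumptions for illustration, not vendor pricing:

```python
import math

# Illustrative sketch of "start small, then map investment to scale".
# All numbers are hypothetical; real storage pricing is negotiated and tiered.

def projected_cost(capacity_tb: float, base_cost_per_tb: float,
                   discount_per_doubling: float = 0.10) -> float:
    """Estimate cost assuming a volume discount for each doubling beyond a 200 TB pilot."""
    doublings = max(0.0, math.log2(capacity_tb / 200))
    effective_rate = base_cost_per_tb * (1 - discount_per_doubling) ** doublings
    return capacity_tb * effective_rate

pilot = projected_cost(200, base_cost_per_tb=100)    # the ~200 TB pilot Matt describes
full = projected_cost(5000, base_cost_per_tb=100)    # a hypothetical 5 PB build-out

print(f"pilot:  ${pilot:,.0f}")
print(f"5 PB:   ${full:,.0f}")
print(f"cost grows {full / pilot:.1f}x for a {5000 / 200:.0f}x capacity jump")
```

Under these assumed numbers, a 25x capacity jump costs well under 25x the pilot, which is exactly the kind of sub-linear curve that lets someone map investment to demonstrated value rather than betting on a greenfield multi-petabyte project up front.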
Doc D'Errico, Infinidat | CUBE Conversation, December 2020
>>From theCUBE studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation. >>The external storage array business as we know it has changed forever. You know, you can see that in the survey data that we do and the financial information from the largest public storage companies. And it's not just because of COVID, although that's clearly a factor which has accelerated the shifts that we see in the market; specifically, CIOs are rationalizing their infrastructure portfolios by consolidating workloads to simplify, reduce costs, and minimize vendor sprawl, so they can shift resources toward digital initiatives that include cloud, containers, machine intelligence, and automation, all while reducing their risks. Hello everyone, this is Dave Vellante, and welcome to this CUBE Conversation, where we're going to discuss strategies related to workload consolidation at petabyte scale. And with me is Doc D'Errico. He's the vice president, office of the CTO, at Infinidat. Welcome back to theCUBE, Doc, always a pleasure to see you. >>And great to be here. Always a pleasure to work with you, Dave. >>So Doc, I just published a piece over the weekend, and I pointed out that of the largest storage companies, only one showed revenue growth last quarter, and that was on a significantly reduced compare with last year. So my first question to you is, is Infinidat growing its business? >>Oh, absolutely. It's been a very interesting year all across, as you can quite imagine. But, you know, our footprint is such that with our elastic pricing models, and the fact that we've got excess capacity in almost every single system that's out there, we were really able to give our customers an opportunity to take advantage of that, to increase their capacity levels while maintaining the same levels of performance and availability, but not have to have anybody on premises during this crazy, you know, COVID-struck era. >>Yeah.
So you're bringing that cloud model to the data center, which has obviously been a challenge. I mean, you mentioned the subscription, sort of, like, pricing; we're going to get into the cloud more. But I wonder if we could step back a little bit and look at some of the macro trends that you're seeing in the market, and specifically as it relates to the on-prem storage strategies that CIOs are taking. >>Yeah, you know, it's been interesting. We've seen over the course of the past five years or so, certainly, a big uptick in people looking at next generation, or what they believe and perceive to be next generation, storage platforms, which are really just evolutions of media. They're not really taking advantage of any new innovations in storage, and, you know, notwithstanding our own products, which are all software driven; we've talked about that before. But what's really happened in this past year, as you said, is that CIOs and CTOs, they're always looking for that next point of leverage, of advantage. And they're looking for more agility in application deployment; they're looking for a way to rapidly respond to business requirements. So they're looking very much at those cloud-like requirements. They're looking at those capabilities to containerize applications. They're looking at how they can, you know, shift out of virtual machines if they're not directly in a container, and how the storage, by the way, can have the same advantage. And in order to do so, they really need to look at storage consolidation. You know, I think, Dave, to sum it up from the storage perspective, you know, I love that Ken Steinhardt was recently on a video, and, you know, he was challenged that, you know, people aren't looking at spinning rust, you know, a derogatory way of referring to disk, and Ken so rightly and accurately responded, yeah, but people weren't really looking for QLC either.
You know, what they're looking for is performance, scale, availability, and certainly cost effectiveness and price. >>Yeah, it's like I said up front, Doc: if you're a C-level executive today, you don't want to worry about your storage infrastructure. You've got bigger problems to worry about. You just want it to work. And so when you talk about consolidating workloads, people often talk about the so-called blast radius. In other words, people who run data centers understand that things fail, and sometimes something as simple as a power supply can have a catastrophic downstream effect on application availability. So my question is, how do you think about architecting systems so as to minimize the effects of component failures on the business? >>Yeah, you know, it's a very interesting term, Dave, blast radius, right? We've heard this referred to storage over the last several decades, in fact, when it really should refer to the data center and the application infrastructure. But, you know, if we're talking about just the storage footprint itself, one of the things that we really need to look at is the resilience and the reliability of the architecture. And when you look at something that is maybe dual controller, single or double power supply, there are issues and concerns that come into play. And what we've done is we've designed something that's really triple redundant, which has typically only been applied to the very high end of the market before. And we do it in a very active-active-active manner. And naturally we have suggestions for best practices for deployment within a data center as well, you know, multiple sources of power coming into the array and things of that nature.
But everything needs to be this active-active-active type of architecture in order to bring those reliability levels up to the point where, as long as it's a component failure within the array, it's not going to cause an outage or a data unavailability event. >>Yeah. So imagine a heat map; when people talk about the blast radius, imagine the heat map is green, there's a yellow area, and there's a red area. And what you're saying is, as far as the array goes, you're essentially eliminating the red area. Now, if you take it to the broader installation, you know, that red area, you have to deal with in different ways: remote replication, which you can do synchronously and asynchronously. But essentially what I'm hearing you say, Doc, is you're squeezing that red area out, so your customers can sleep at night. >>Absolutely; sleep at night is so appropriate. And in fact, a large portion of our customer base is running mission critical businesses. You know, we have some of the most mission critical companies in the world in our logo portfolio. We also have, by the way, some very significant service provider businesses who are providing, you know, mission critical capabilities to their customers in turn, and they need to sleep at night. And, you know, availability is only one factor. Certainly manageability is another, because, you know, not meeting a service level is just like data unavailability in some respects. So making manageability as automatic as it can be, making sure that the system is not only self-healing but can respond to variations in workload appropriately, is very, very critically important as well. >>Yeah. So you mentioned mission critical workloads, and those are the workloads that, let's face it,
they're not moving into the cloud, certainly not in any big way. You know, why would they? Generally, CIOs and CTOs are putting a brick wall around them, saying, hey, it works, we don't want to migrate that piece. But I want to talk more about how your customers are thinking about workload consolidation and rationalizing their storage portfolios. What are those conversations like? Where do they start, and what are some of the outcomes that you're seeing with your customers? >>Yeah, I think the funny thing about that point, Dave, is that customers are really starting to think about cloud in an entirely different way. You know, at one point cloud meant public cloud, and meant this entity outside the walls of the data center, and people were starting to use services without realizing that that was another type of cloud. And then they were starting to build their own versions of cloud. You know, we were referring to them as private clouds, but they were, you know, really spread beyond the walls of a single data center. So now it's a very hybrid world, and there's lots of different ways to look at it: hybrid cloud, multi-cloud, whatever moniker you want to put on it. It really comes down to a consistency in how you manage that infrastructure, how you interface with that infrastructure, and then understanding what the practicality is of putting workloads in different places.
And that's one of the reasons why some of these larger mission critical data centers are really, you know, repatriating their, their mission, critical workloads, at least the highest, highest levels of them and others are looking at other models, for example, AWS outposts, um, which, you know, talked about quite a bit recently in AWS reinvent. >>Yeah. I just wrote, again, this weekend that you guys were one of the, uh, partners that was qualified now, uh, to run on AWS outpost, it's interesting as Amazon moves, it's, you know, it's, it's it's model to the edge, which includes the data center to them. They need partners that can, that really understand how to operate in an on-premise world, how to service those customers. And so that's great to see you guys as part of that. >>Yeah. Thank you. And, you know, it was actually a very seamless integration because of the power and capability of all of the different interface models that we have is they all are fully and tightly integrated and work seamlessly. So if you want to use a, you know, a CSI type model, uh, you know, do you interface with your storage again, uh, with, with INFINIDAT and, you know, we work with all of the different flavors so that the qualification process, the certification process and the documentation process was actually quite easy. And now we're able to provide, you know, people who have particularly larger workloads that capability in the AWS on premises type environment. >>Yeah. Now I implied upfront that that cloud computing was the main factor, if not the primary factor, really driving some of the changes that we're seeing in the marketplace. Now, of course, it's all, not all pink roses with the cloud. We've seen numerous public cloud outages this year, certainly from Microsoft. We saw the AWS Kinesis outage in November. Google just had a major outage this month. Gmail was down G suite was down for an extended period of time. 
And that disrupted businesses, we rely on that schools, for example. So it's always caveat emptor as we know, but, but talk to INFINIDAT cloud strategy, you mentioned hybrid, uh, particularly interested in, in how you're dealing with things like orchestration and containers and Kubernetes. >>Yeah, well, of course we have a very feature rich set of interfaces for containers, Kubernetes interfaces, you know, downloadable through native, uh, native. So they're, they're very easy to integrate with, you know, but our cloud strategy is that, you know, we are a software centric model and we, you know, all of the, all of the value and feature function that we provide is through the software. The hardware of infiniboxes really a reference architecture that we, uh, we deliver to make it easier for customers to enjoy say 100% availability model. But if, if you want to run something in a traditional on premises data center, you know, straighten InfiniBox is fine, but we also give you the flexibility of cloud-like consumption through our pricing models, our, our elastic pricing models. So you don't need to consume an entire InfiniBox day one. You can grow and shrink that environment with, uh, with an OPEX model, or you can, um, buy it as you consume it in a, in a cap ex model. >>And you can switch, uh, from OPEX over to CapEx if it becomes more cost effective for you in time, which I think is, is what a lot of people are looking for. If you're looking for that public cloud, we, you know, we have our new tricks cloud offering, which is now being delivered more through partners, but you know, some businesses and especially the, the mid tier, um, you know, the SMB all the way through the mid enterprise are also now looking to cloud service providers, many of which use InfiniBox as, as their backend. And now with AWS outposts, of course, you know, we can give you that on premises, uh, uh, experience of the public cloud, >>You guys were early on. 
And obviously in that subscription-based model, and now everyone's doing it. I noticed in the latest Gartner Magic Quadrant on storage arrays, in which you guys were named a leader, I think they had a stat in there, I forget what the exact timeframe was, that 50% of customers would be using that type of model. And again, I guarantee you, by whatever timeframe that was, a hundred percent of the vendor community is going to be delivering that type of model. So congratulations on being named a leader. I will say this: there's consolidation happening in the market. So to me this bodes well; to the extent that you can guarantee high availability and consistent performance at scale, that bodes well for you guys in a consolidating market. And I know IDC just released a paper, I've got a copy here; it's called a checklist for storage workload consolidation at petabyte scale. It was written by Eric Burgener, who I've known for a number of years. He's a VP of infrastructure research; he knows his stuff, and the paper is very detailed. So I'm not going to go through the checklist items, but I think, if you don't mind, Doc, it's worth reading an excerpt from this, if I can, as part of his conclusions: when considering workload consolidation, IT organizations should carefully consider their performance, availability, functionality, and affordability requirements. Of course, few storage systems in the market will be able to cost effectively consolidate different types of workloads with different IO profiles onto a single system, but that is Infinidat's forte; they're very good at it. So that's quite a testimonial. You know, why is that? Your thoughts on what Eric wrote?
>>Well, you know, first of all, thank you for the kudos on the Gartner MQ, you know, being a leader for the second year in a row for primary storage, only because that document's only existed for two years; but, you know, we were also a leader in hybrid storage arrays before that. And, you know, we love Gartner. We think they're, you know, a really credible, reliable source for a lot of large companies. And IDC, you know, Eric, of course, is a name in the industry. So we very much appreciate when he writes something, you know, that positive about us. But to answer your question, Dave, you know, there's a lot that goes on inside InfiniBox: it's the Neural Cache capabilities, the deep learning engine that is able to understand the different types of workloads, how they operate, how to provide, you know, predictable performance. >>And that, I think, is ultimately key to an application. It's not just high performance; it's predictable performance, making sure the application knows what to expect. And of course it has to be performant; it can't just be slow but predictable, it has to be fast and predictable. Providing a multi-tenant infrastructure that is native to the architecture, so that these workloads can coexist, whether they're truly just workloads from multiple applications, or workloads from different business units, or potentially, as we mentioned with cloud service providers, workloads from different customers. You know, they need to be segmented in such a way that they can be managed, operated, and provide that performance and availability, you know, at scale, because that's where data centers go. That's where data centers are. >>Great. Well, so we'll bring that graphic back up just to show you; obviously, this is available on Infinidat's website. You can go download this paper from Eric, from IDC, www infinidat.com/ian/resource.
I would definitely recommend you check it out. Uh, as I say, Eric's, you know, been in the business a long, long time, so that's great. Doc, we'll give you the last word. Anything we didn't cover, any big takeaways you want to share with the audience? >>Yeah. You know, I think I'll go back to that point. You know, consolidation is absolutely key for, uh, not just simplicity of management, but the capability for you to respond quickly to changing business requirements and/or new business requirements, and also to do it in a way that is cost-effective. You know, just buying the new shiny object, it's expensive and it's very limited in shelf life. You're just going to be looking for the next one the next year. You want something that is going to provide you that predictable capability over time, because frankly, I have never met a CXO of anything that wasn't trying to increase their profit. >>You know, that's a great point. And I would add, I mean, the shiny new object thing. Look, if you're in an experimental mode and playing around with, you know, artificial intelligence or automation, you know, areas that you really don't know a lot about, you know what, check out the shiny new objects. But I would argue your on-prem storage, you don't want to be messing around with that. It's not a shiny new objects business. It's really about, you know, making sure that that base is stable and, as you say, predictable and reliable. So, Doc D'Errico, thanks so much for coming back on theCUBE. Great to see you. >>Great to see you, David, and look forward to next time. >>And thank you for watching everybody. This is Dave Volante and we'll see you next time on theCUBE.
SUMMARY :
From the cubes studios in Palo Alto, in Boston, connecting with thought leaders all around the world. You know, you can see that in the survey And great to be here. So my first question to you is, is INFINIDAT growing Um, but you know, our footprint is such that I mean, you mentioned the subscription sort of like pricing, we're going to get into the cloud more, you know, he was, he was challenged that, you know, people aren't looking at spinning And so when you talk about Uh, but you know, if we're talking about you know, that red area, you have to deal with it in different ways, remote replication, And it it's, you know, availability is only one factor. They're not moving into the cloud, certainly not in any, any big way, you know, clouds, but they were, you know, really spread beyond the walls of a single data center. And practicality means not only the, you know, the latency of access of the And so that's great to see you guys as part And now we're able to provide, you know, people who have particularly larger you mentioned hybrid, uh, particularly interested in, in how you're dealing with things like orchestration you know, but our cloud strategy is that, you know, we are a software centric the, the mid tier, um, you know, the SMB all the way through the mid enterprise are also to the extent that you can guarantee high availability and consistent performance, you know, why is that your thoughts on what Eric wrote? We think they're, they're, you know, um, uh, real critical, you know, providers, workloads from different customers, you know, they, they need to be segmented in such Uh, as I say, Ericsson, you know, that is cost-effective, you know, just buying the new shiny object is thinking, you know, areas that you really don't know a lot about, you know, what, check out the shiny new objects, And thank you for watching everybody.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Vellante | PERSON | 0.99+ |
Eric Bergner | PERSON | 0.99+ |
David | PERSON | 0.99+ |
Ken Steinhardt | PERSON | 0.99+ |
Dave Volante | PERSON | 0.99+ |
Eric | PERSON | 0.99+ |
December 2020 | DATE | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
INFINIDAT | ORGANIZATION | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
Dave | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
November | DATE | 0.99+ |
50% | QUANTITY | 0.99+ |
Ken | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
two years | QUANTITY | 0.99+ |
second year | QUANTITY | 0.99+ |
100% | QUANTITY | 0.99+ |
Erik | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
Boston | LOCATION | 0.99+ |
Rico | PERSON | 0.99+ |
IDC | ORGANIZATION | 0.99+ |
Ericsson | ORGANIZATION | 0.99+ |
first question | QUANTITY | 0.99+ |
Gmail | TITLE | 0.99+ |
last quarter | DATE | 0.99+ |
last year | DATE | 0.99+ |
today | DATE | 0.98+ |
one factor | QUANTITY | 0.98+ |
hundred percent | QUANTITY | 0.98+ |
next year | DATE | 0.98+ |
this year | DATE | 0.97+ |
OPEX | ORGANIZATION | 0.97+ |
single | QUANTITY | 0.97+ |
CapEx | ORGANIZATION | 0.97+ |
this month | DATE | 0.96+ |
double | QUANTITY | 0.96+ |
single system | QUANTITY | 0.95+ |
one | QUANTITY | 0.95+ |
Gardner | PERSON | 0.95+ |
G suite | TITLE | 0.95+ |
one point | QUANTITY | 0.94+ |
Doc D'Errico | PERSON | 0.93+ |
Gartner MQ | ORGANIZATION | 0.89+ |
doc | PERSON | 0.89+ |
CTO | ORGANIZATION | 0.89+ |
InfiniBox | COMMERCIAL_ITEM | 0.85+ |
single data center | QUANTITY | 0.84+ |
Gartner | ORGANIZATION | 0.84+ |
Kubernetes | TITLE | 0.8+ |
Terico | PERSON | 0.79+ |
first | QUANTITY | 0.77+ |
COVID | OTHER | 0.77+ |
past five years | DATE | 0.75+ |
last several decades | DATE | 0.73+ |
www infinidat.com/ian/resource | OTHER | 0.72+ |
Infinidat | PERSON | 0.7+ |
InfiniBox | ORGANIZATION | 0.7+ |
past year | DATE | 0.69+ |
Dr. | PERSON | 0.69+ |
this weekend | DATE | 0.67+ |
cap ex | ORGANIZATION | 0.66+ |
day one | QUANTITY | 0.62+ |
infiniboxes | ORGANIZATION | 0.6+ |
InfiniBox | TITLE | 0.58+ |
COVID | TITLE | 0.58+ |
dual | QUANTITY | 0.55+ |
Kinesis | COMMERCIAL_ITEM | 0.42+ |
3 3 Administering Analytics v4 TRT 20m 23s
>>Yeah. >>All right. Welcome back to our third session, which is all about administering analytics at global scale. We're gonna be discussing how you can implement security, data compliance, and governance across the globe for large numbers of users, to ensure ThoughtSpot is open for everyone across your organization. So coming right up is Cheryl Zang, who is a senior director of product management at ThoughtSpot, and Kendrick, ThoughtSpot's director of Systems Engineering. So, Cheryl and Kendrick, the floor is yours. >>Thank you, Tina, for the introduction. So let's talk about analytics at scale and let's understand what that is. It's really three components: it's the access to not only data but also technology, and when we start looking at the intersection of that, it's the value that you get as an organization. When you start thinking about analytics at scale, a lot of times we look at the cloud as the avenue for it, and that's an accurate statement, because people are moving towards the cloud for a variety of reasons. And if you think about what's been driving it, it has been the applications like Salesforce and MongoDB, among others. And it's actually part of where we're seeing our market go, where 64% of companies are planning to move their analytics to the cloud. And if you think of ThoughtSpot specifically, we see that the vast majority of our customers are already in the cloud with one of the big four cloud data warehouses, or they're evaluating one. And what we found, though, is that even though companies are moving their analytics to the cloud, we have not solved the problem of accessing the data. As a matter of fact, our customers are telling us that only 10 to 25% of the data in the data warehouse they're leveraging is actually being utilized.
And in general, Forrester says that 60 to 73% of the data that you have is not being leveraged. And if we think about why, you go through this process of taking enterprise data, moving it into these cubes and aggregates, and building these reports and dashboards. And there's this bottleneck, typically of that BI team, and at the end of the day, the people that are getting that data on the right-hand side are only anywhere from 20 to 30% of the population. When companies want to be data driven, is 20 to 30% of the population really what you're looking for? No, it's something north of that. And if you think of the cloud data warehouse as being part of the same process, you bring in a cloud data warehouse and it's still within the same framework. You know? Why invest? Why invest and truly not fix the problem? And if you take that bottleneck out, you could go directly against the warehouse, but you're still not solving the reports and dashboards. Why invest and truly not scale? It's the three pillars: it's technology, it's data, and it's accessibility. So if we look at analytics at scale, it truly is being able to get to that north of the 20 to 30%, have that BI team become enablers of an organization, have them be able to work with the data in the cloud data warehouse, and allow sales, marketing, finance, supply chain, and then HR to get direct access to that and ask their own questions. To be able to do that, you really have to look at your modern data architecture and figure out where you are in this maturity, and then you'll be able to build that out. So you look at this from left to right: the sources, the ingestion and transformation, and the storage, that's the technology piece; the data, from a historical and predictive perspective; and then the accessibility. So it's technology, it's data, it's accessibility. And how do you build that?
Well, if you look at it from a ThoughtSpot perspective, it truly is taking and driving and leveraging the cloud data warehouse architectures, with the integration behind it. And then the accessibility is the search, answers, pinboards, and embedded analytics. If you take that and extend it where you want to augment it, it's adding our partners from an ETL or ELT perspective, like Alteryx, Talend, and Matillion, streaming data from Databricks, or, if you want, leveraging your cloud data warehouse as a data lake and then leveraging the modern capability of your cloud data warehouse, the augmentation leveraging Databricks and DataRobot. And that's where your data side of that pillar gets stronger; the technologies are enabling it. And then the accessibility from the output, that's ThoughtSpot. Now, if you look at ThoughtSpot, why and how do we make this technology accessible? What's the user experience? We allow an organization to go from a 20 to 30% population having access to data to what it means to be truly data driven by our users. That user experience is enabled by our ability to lead a person through the search process, our search index and rankings. This is built for search of corporate data on top of the cloud data warehouse, on top of the data that you need, to be able to allow a person who doesn't understand analytics to get access to the data and the questions they need to answer. Our query engine makes it simple for customers to ask those questions, and what you might think are not complex business questions turn into complex queries on the back end that typically that power user needs to know. Our query engine isolates that from the end user and allows them to ask that question and drive that query. And it's built on an architecture that allows us to change and adapt to the types of things.
It's a microservices architecture, with which we've not only gone from an on-prem system to our cloud offering in a matter of really two to three years, and it's amazing. The reason why we can do that, and in a sense future-proof your investment, is because of the way we've developed this: it's cloud-first, it's microservices, and it's able to drive. So this is the architecture that we've talked about. We've seen in different conversations, beyond it, ThoughtSpot Everywhere, which allows us to take ThoughtSpot One, our ability for search, for search data, for auto-analyze, for Monitor, with that governed security in the background, and being able to leverage that not only internally but externally. And then being able to take the ThoughtSpot Modeling Language, for that analyst, that person who's just really good at creating, and let them create these models that can be deployed anywhere very, very quickly, and then taking advantage of the cloud data warehouse or the technology that you have. That really gives you the accessibility, the technology that you need, as well as the data that you need. That's what you need to be able to administer analytics at scale. So what I'm gonna do now is I'm gonna turn it over to Cheryl, and she's gonna talk about administration in ThoughtSpot. Cheryl, >>thank you very much, Kendrick. Today I'm going to show you how you can administer and manage ThoughtSpot for your organization, >>covering >>three main topics: user management, >>data management, and >>also user adoption and performance monitoring. Let's jump into the demo. >>In the ThoughtSpot application, the Admin Console provides all the core functions needed for system-level administration. Let's start with user management and authentication. With the Users tab, you can add or delete a user, or you can modify the settings for an existing user, for example, user name, password, email. Or you can add the user to a different group with the Groups tab.
You can add or delete a group, or you can manage the group settings, for example, the privileges associated with all the group members, such as Can administer ThoughtSpot, Can share data with all users, or Can manage data. This Can manage data privilege is very important. It grants a user the privileges to add data sources, add tables and worksheets, and manage data for different organizations or use cases without being an admin. There is also a field called Default Pinboards. You can select a set of pinboards that will be shown to all of the users in that group on their homepage. In terms of authentication, currently we support three different methods: local, Active Directory, and SAML. By default, local authentication is enabled, and you can also choose to have SAML integration with an external identity provider. Currently, we support Okta, Ping Identity, SiteMinder, or ADFS. The third method is integration with Active Directory. You can configure integration with LDAP through Active Directory, allowing you to authenticate users against an LDAP server. Once the users and groups are added to the system, we can share pinboards with them, or they can search to ask and answer their own questions. To create searchable data, we first need to connect to our data warehouses with Embrace. You can directly query the data as it exists in the data warehouse without having to move or transfer the data. On this page, you can add a connection to any of the six supported data warehouses. Today we will be focusing on the administrative aspect of the data management, so I will close the tab here and we will be using the connections that are already set up. Under the Data Objects tab, we can see all of the tables from the connections. Sometimes there are a lot of tables, and it may be overwhelming for the administrator to manage the data. As a best practice, we recommend using stickers to organize your data sets. Here, we're going to select the Salesforce sticker.
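The group-privilege model described in the demo, where a user's capabilities come from the privileges of the groups they belong to, can be sketched in a few lines. This is a hypothetical illustration of the concept only, not ThoughtSpot's actual implementation; the group names and privilege strings are invented for the example.

```python
# Hypothetical sketch of group-based privileges: a user's effective
# privileges are the union of the privileges of the groups they belong to.
# Group names and privilege strings are invented for illustration.

GROUP_PRIVILEGES = {
    "admins": {"CAN_ADMINISTER", "CAN_SHARE_WITH_ALL", "CAN_MANAGE_DATA"},
    "analysts": {"CAN_MANAGE_DATA"},  # may add sources, tables, worksheets
    "viewers": set(),                 # can only search and view shared content
}

def effective_privileges(user_groups):
    """Union of privileges across all groups the user belongs to."""
    privs = set()
    for group in user_groups:
        privs |= GROUP_PRIVILEGES.get(group, set())
    return privs

def can_manage_data(user_groups):
    """True if any of the user's groups grants the manage-data privilege."""
    return "CAN_MANAGE_DATA" in effective_privileges(user_groups)

print(can_manage_data(["analysts", "viewers"]))  # True
print(can_manage_data(["viewers"]))              # False
```

The point of the union semantics is that adding a user to one more group can only grant capabilities, never revoke them, which matches the "privileges associated with all the group members" behavior described above.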
This will refine the list to tables coming from Salesforce only. This helps with data lineage and traceability, because worksheets are curated data based on those tables. Let's take a look at this worksheet. Here we can see the joins between tables that create a schema. Once the data analyst has created the tables and worksheet, the data is searchable by end users. Let's go to search. First, let's select the data source. Here we can see all of the data that we have been granted access to see. Let's choose the Salesforce sticker, and we will see all of the tables and worksheets available to us as a data source. Let's choose this worksheet as a data source. Now we're ready to search. The search insight can be saved either into a pinboard or an answer. Okay, it's important to know that the sticker actually persists with pinboards and answers, so when the user logs in, they will be able to see all of the content that's available to them. Let's go to the Admin Console and check out the User Adoption pinboard. The User Adoption pinboard contains essential information about your ThoughtSpot users and their adoption of the platform. Here, you can see the daily active user, weekly active user, and monthly active user counts in the last 30 days. You can also see the total count of the pinboards and answers saved in the system. Here, you can see the unique count of users. You can also find out the top 10 users in the last 30 days, the top 10 pinboard consumers, and the top 10 ad hoc searchers. Here, you can see the trending of weekly active users, daily active users, and hourly active users over time. You can also get information about popular pinboards and user actions in the last one month. Now let's zoom in on this chart. With this chart, you can see weekly active users and how they're using ThoughtSpot. In this example, you can see 60% of the time people are doing ad hoc search.
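The active-user counts on that adoption pinboard boil down to counting distinct users with activity inside a trailing window. A minimal sketch of that calculation follows; the event data is made up for illustration, and ThoughtSpot of course computes this internally from its own usage logs.

```python
from datetime import date, timedelta

# Hypothetical usage events: (user, day of activity).
events = [
    ("alice", date(2020, 11, 30)),
    ("bob",   date(2020, 11, 25)),
    ("alice", date(2020, 11, 10)),
    ("carol", date(2020, 11, 5)),
]

def active_users(events, as_of, window_days):
    """Distinct users with at least one event in the trailing window."""
    cutoff = as_of - timedelta(days=window_days)
    return {user for user, day in events if cutoff < day <= as_of}

as_of = date(2020, 11, 30)
dau = len(active_users(events, as_of, 1))    # daily active users
wau = len(active_users(events, as_of, 7))    # weekly active users
mau = len(active_users(events, as_of, 30))   # monthly active users
print(dau, wau, mau)  # 1 2 3
```

Because the three metrics differ only in window length, DAU is always less than or equal to WAU, which is less than or equal to MAU, which is why the pinboard can show them side by side on one trend chart.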
If you would like to see what people are searching, you can do a simple drill-down on query text. Here we can find out that the most popular query text being used is "number of opportunities". At last, I would like to show you the system performance tracking pinboard that's available to the admins. This pinboard contains essential information about your ThoughtSpot instance performance. Use this pinboard to understand the query latency, user traffic, how users are interacting with ThoughtSpot, the most frequently loaded tables, and so on. The last component to scaling to hundreds of users is a great onboarding experience. A new feature we call Search Assist helps automate onboarding while ensuring new users have the foundation they need to be successful on day one. When new users log in for the first time, they're presented with personalized sample searches that are specific to their data set. In this example, someone in a sales organization would see questions like "What were sales by product type in 2020?" From there, a guided step-by-step process helps introduce new users to search, ensuring a successful onboarding experience. The Search Assist coach is a customized in-product walkthrough that uses your own data and your own business vocabulary to take your business users from unfamiliar to near fluent in minutes. Instead of showing the entire end-user experience today, I will focus on the setup and administration side of Search Assist. Search Assist is easy to set up at the worksheet level, with flexible options for multiple guided lessons. Using a preview template, we help you create multiple learning paths based on department or based on your business users' needs. To set up a learning path, you simply fill in the template with relevant search examples while previewing what the end user will see, and then increase the complexity with each additional question to help your users progress. >>In summary:
It is easy to administer user management, data management, and user adoption at scale using the ThoughtSpot Admin Console. Back to you, Kendrick. >>Thank you, Cheryl. That was great, appreciate the demo there. It's awesome; it's real-life data, real-life software. You know, in closing here, I want to talk a little bit about what we've seen out in the marketplace when we're talking with prospects and customers, what they talk a little bit about: well, I'm not quite ready either, my data is not ready, or I don't have a cloud data warehouse. That's this process and this thinking, and we have three different examples. We have a company that actually had never even thought about analytics at scale. We come in, we talk to them, and in less than a week they're able to move their data to ThoughtSpot and ask questions of a billion rows. We've also had customers that are early in adoption. They're sticking their toes in the water around the technology, so they have a cloud data warehouse and they put some data at it, and within 11 minutes we were able to search on a billion rows of their data. Now they're adding more data to combine, to be able to work with. And then we have customers that are more mature in their process. Uh, they put large volumes of data in, and within nine minutes we're asking questions of their data, and their business users are understanding what's going on. A second question we get sometimes is, my data is not clean. Well, ThoughtSpot is very, very good at finding that type of data. As you start moving, it becomes an iterative process, and we can help with that. Again, within a week we could take data, get it into your system, start asking business questions of that, and be ready to go. You know, I'm gonna turn it back to you, and thank you for your time.
Kendrick and Cheryl, thank you for joining us today and bringing all of that amazing insight for our audience at home. Let's do a couple of stretches, and then join us in a few minutes for our last session of the track, Insights for All, about how Canadian Tire is delivering decision-making business outcomes with search and AI. See you there.
SUMMARY :
We're gonna be discussing how you can implement security data compliance and governance across the globe Forrester says that 60 to 73% of data that you have is not I'm going to show you how you Let's jump into the demo. and it may be overwhelming for the administrator to manage the data as data management, management and the user adoption at scale Using soft spot Admin and thank you for your time. Kendrick and Carol thank you for joining us today and bringing all of that amazing inside for our audience at home.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Cheryl | PERSON | 0.99+ |
Tina | PERSON | 0.99+ |
Kendrick | PERSON | 0.99+ |
Cheryl Zang | PERSON | 0.99+ |
10 | QUANTITY | 0.99+ |
60 | QUANTITY | 0.99+ |
20 | QUANTITY | 0.99+ |
60% | QUANTITY | 0.99+ |
Forrester | ORGANIZATION | 0.99+ |
third session | QUANTITY | 0.99+ |
64% | QUANTITY | 0.99+ |
11 minute | QUANTITY | 0.99+ |
Today | DATE | 0.99+ |
First | QUANTITY | 0.99+ |
30% | QUANTITY | 0.99+ |
nine minutes | QUANTITY | 0.99+ |
third method | QUANTITY | 0.99+ |
second question | QUANTITY | 0.99+ |
Global Scale | ORGANIZATION | 0.99+ |
first time | QUANTITY | 0.99+ |
South Spot | ORGANIZATION | 0.99+ |
less than a week | QUANTITY | 0.99+ |
23 years | QUANTITY | 0.99+ |
2020 | DATE | 0.99+ |
Carol | PERSON | 0.99+ |
Leighton | ORGANIZATION | 0.98+ |
today | DATE | 0.98+ |
Michael Services | ORGANIZATION | 0.98+ |
25% | QUANTITY | 0.97+ |
73% | QUANTITY | 0.97+ |
hundreds of users | QUANTITY | 0.97+ |
11 minutes | QUANTITY | 0.97+ |
Matile Ian | PERSON | 0.97+ |
first | QUANTITY | 0.96+ |
three pillars | QUANTITY | 0.96+ |
three components | QUANTITY | 0.96+ |
one | QUANTITY | 0.95+ |
three different methods | QUANTITY | 0.95+ |
10 users | QUANTITY | 0.95+ |
Day one | QUANTITY | 0.95+ |
six supported data warehouses | QUANTITY | 0.94+ |
Systems Engineering | ORGANIZATION | 0.94+ |
Thought spot | ORGANIZATION | 0.93+ |
Data Lake | ORGANIZATION | 0.91+ |
Arcuri Engine | ORGANIZATION | 0.9+ |
10 ad hoc searchers | QUANTITY | 0.9+ |
Warehouse | TITLE | 0.89+ |
billion rows | QUANTITY | 0.88+ |
Cloud Data warehouse | TITLE | 0.87+ |
billion | QUANTITY | 0.86+ |
three different examples | QUANTITY | 0.86+ |
last one month | DATE | 0.86+ |
Salesforce | ORGANIZATION | 0.86+ |
a week | QUANTITY | 0.85+ |
Canadian | OTHER | 0.84+ |
each additional question | QUANTITY | 0.83+ |
v4 | OTHER | 0.83+ |
last 30 days | DATE | 0.78+ |
Salesforce | TITLE | 0.77+ |
last 30 days | DATE | 0.77+ |
Korean | OTHER | 0.75+ |
One | QUANTITY | 0.74+ |
Search | TITLE | 0.73+ |
Big Four | QUANTITY | 0.73+ |
Martin | PERSON | 0.72+ |
DB | TITLE | 0.72+ |
10 PIN | QUANTITY | 0.71+ |
Southport | TITLE | 0.66+ |
Lee | PERSON | 0.66+ |
Hawk | ORGANIZATION | 0.66+ |
Adminstering Analytics | TITLE | 0.66+ |
Mongo | TITLE | 0.64+ |
Forcados | TITLE | 0.64+ |
Seaside Minor | ORGANIZATION | 0.62+ |
gress | ORGANIZATION | 0.6+ |
Cloud | TITLE | 0.57+ |
Ping | TITLE | 0.53+ |
seven | QUANTITY | 0.49+ |
User Adoption | ORGANIZATION | 0.39+ |
20m | OTHER | 0.36+ |
User | ORGANIZATION | 0.35+ |
Adoption | COMMERCIAL_ITEM | 0.35+ |
Craig Wicks & Tod Golding, AWS | AWS re:Invent 2020 Partner Network Day
>>From around the globe, it's theCUBE, with digital coverage of AWS re:Invent 2020, special coverage sponsored by AWS Global Partner Network. Welcome back to theCUBE's coverage, CUBE Virtual coverage of AWS re:Invent 2020. We're not in person this year; we have to do all the Cube interviews remote. But we've got two great guests from the Amazon Web Services Partner Network, AWS APN: Craig Wicks, senior manager of AWS SaaS Factory, and Tod Golding, Principal Cloud Architect, Global SaaS Tech Lead. Gentlemen, thanks for joining the Cube. Appreciate it. >>Thanks, John. >>Um, first of all, I want to begin with you, Craig; just take a minute to explain what is the SaaS Factory, because this is a unique and growing team within AWS. Um, we've been saying it for years, but the move to the cloud has been obvious; it's mainstream. But your team, your role, is doing some interesting things. Explain: what is the SaaS Factory? What do you guys do? >>Yeah, thanks, John. Really delighted to be here today. Yeah, the SaaS Factory, maybe for some that may be somewhat disappointing: there's no factory, no sort of easy button for SaaS. There are no templates, there's no machinery; we wish we had it. But we're really a global team of subject matter experts in SaaS that really help AWS partners transform their business, right, both business and technical, to the SaaS model, and help them do that faster, with greater confidence and all the best practices that our team has learned over the years.
>>Yeah, so I'm very much, in fact, connected to Craig. We're all part of the same organization, and we're sort of very much deeply involved with these organizations. We get very much, um, embedded with these these partners that we work with and really helped them through sort of the nuts and bolts of what it means to transform an application thio multi tenant sort of SAS models. That means helping them figure out how to map that two different AWS services. It means helping them figure out how to realize the sort of the business objective objectives of transforming to sass. But really, our goal is to sort of just get into the weeds with them, figure out their specific domain because there's no one size fits all. Versace figure out how that really connects toe, where they're at in their trajectory, in terms of where they're trying to get to end of the journey is a business and then find that alignment with a W S services. So there's sort of that trifecta of lining all those bits up and sort of formulating, Ah, technical strategy that really brings all those pieces together for them. >>Craig, I want to get your thoughts on the trends, and Todd, you can weigh in to want to get your reaction. Over the weekend, I was picking some folks on on the Internet, linked in and whatnot from eight years ago when that we did our first cube at reinvent with second year of reinvent, and nobody was there in the industry press, wasn't there were the first I think press to be there. Um and a lot of people have either moved on to big positions or companies have gone public. I bought me. Major things have happened in 2013 clouds certainly rose there. SAS became the business model. Everyone kind of knows that. But the dynamics today are different when you think about the on premises and you got the edge. A big part of the themes this week in the next couple weeks as we unfold here reinvent. This >>is >>different, but the same Can you share? What is the trend that people are riding on? 
What's the wind of innovation? >>Yeah, and certainly I would say, first of all, just personally, I've been in SaaS for some time. I was involved early on in, sort of, a model we called the application service provider model, which was sort of a predecessor to SaaS; you know, the gray hairs out there remember that one. But, uh, you know, first of all I would say SaaS is everywhere and people want it to be everywhere, and so we just see insatiable demand for SaaS from customers out there, right? And I think the challenge, the problem we see, is that organizations that we work with just can't transition fast enough, right? There are real technical challenges in front of them in terms of how they build and architect a SaaS solution, but most importantly, the business model that sort of underpins that is a huge transformation for companies that they're going through. And that's one of the things we just see. You know, just in my time in SaaS Factory at AWS, the range of organizations we've worked with has just changed. So, you know, early on we were working with companies in infrastructure around security and storage and those areas, and in the last few years it's just expanded to all sorts of industries, from public sector, oil and gas, um, sort of financial services. You know, everyone really wants to build this model, and that's really, you know, born around the customer demand they're seeing for SaaS. >>That's interesting. You mention challenge; I wanna get your thoughts. You mentioned ASP, application service provider. You remember those days, you know, vividly. It was mainly a tech thing, but it's really a consumption model around delivery of software and services. And, you know, Web services came on in 2000, the rest is history; we've got Amazon Web Services. But now you get more vertically expanded, oil and gas, and go mainstream. But what >>are some >>of the challenges?
Because as people get smarter, it's not just about self-service or buy-as-you-go. It's a business model, you mentioned. Is it a managed service? Is self-service embedded into the application? Can you share some of the new things that are emerging on the business model side that people should pay attention to, some of those challenges? Yeah, I think one of the first things is just that you are fundamentally operating a service, right? So that changes the dynamics of everything, in terms of how you engage with customers to how you deliver. You know, the kind of simple thing I often tell people is, you know who's answering the pager now? If something goes wrong, it's not your customer, that's you, right? And you have to manage and sustain that service and really continue to provide innovation and value to customers, right? That's one of the challenges we see: organizations are now on a treadmill in terms of innovation, where customers expect something from the SaaS model and you really have to deliver on that. And then one of the final points I would say is it really transforms how you think about going to market, right? Sales and marketing are fundamentally transformed, and, um, you know, traditional ways of really selling software and technology largely go away, and in some good ways. In SaaS, you can really put customers in an experience, right, and have them evaluate your technology in a manner where they can have a trial experience, in a way to really introduce them to the technology very slowly. And then, um, they grow over time, right, as they see value in that software, which is very aligned with how we think about, you know, AWS, our own technology. >>Okay, Tod, I gotta ask you. So you want to drive that car, the SaaS car. What's under the hood? What are the right tires? What are the conditions? And the technical issues here: if I'm a customer, I'm an APN partner, okay, I'm in there.
I've got a traditional business, the pandemic hits, or my business model is just forcing my hand. What's your advice? What have I got to do? What's the playbook on the technical side? How do I go to the next level? >>Well, you know, we're obviously going to ask a lot of questions, and probably the answer, sadly, as most technical people will tell you, is "it depends" — which is never the answer anybody wants to hear. So we're definitely going to ask a lot of questions about where you're at. What are the immediate pressures on your business? This is where the technical people on our team tend to wear a little bit of a business hat: we want to know, before we guide you down any one particular technical path, what are the key dimensions of getting you to a SaaS delivery model. But the theme, generally, is that we say to people: let's look at how we can get you there incrementally. Let's get you into a SaaS model as fast as we possibly can. So we have a lot of different patterns and strategies we'll use that are about incremental adoption of SaaS: how can I lift my existing environment, move it into a SaaS model, present a SaaS offering to the business, let me operate and run it, get the metrics and analytics, get the operational efficiency and the DevOps goodness of SaaS — and then, after that, move into the insides of that SaaS application and think about how to move it to more modern constructs. How can I move it into containers, potentially? How can I begin to adopt serverless technologies? How can I apply IAM and other constructs to achieve tenant isolation? So we're really just trying to put them in a position where they can incrementally modernize their applications while still realizing the benefits of getting to market on a SaaS model.
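One way to picture the first incremental step described here — presenting an existing monolith as a SaaS offering before touching its internals — is a thin layer that resolves a tenant from each request and routes to that tenant's still-monolithic environment. The sketch below is illustrative only: the subdomain convention, names, and in-memory mapping are assumptions, not anything prescribed in the interview (a real system would pull the mapping from a control plane).

```python
# Sketch: resolve a tenant from a request's Host header and route to that
# tenant's (still-monolithic) environment. Illustrative assumptions: tenants
# are identified by the left-most DNS label, and the mapping lives in a dict.
from typing import Optional

TENANT_ENVIRONMENTS = {
    "acme": "https://acme-stack.internal.example.com",
    "globex": "https://globex-stack.internal.example.com",
}

def resolve_tenant(host: str) -> Optional[str]:
    """Treat the left-most DNS label as the tenant id, e.g. acme.app.example.com."""
    label = host.split(":")[0].split(".")[0].lower()
    return label if label in TENANT_ENVIRONMENTS else None

def route_request(host: str, path: str) -> str:
    tenant = resolve_tenant(host)
    if tenant is None:
        raise LookupError(f"unknown tenant for host {host!r}")
    # The monolith itself is untouched; only this routing layer is new.
    return f"{TENANT_ENVIRONMENTS[tenant]}{path}"
```

For example, `route_request("acme.app.example.com", "/invoices")` routes to the Acme stack; metrics, billing, and onboarding hooks can then be attached at this layer before any internal modernization begins.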
>>So you're saying the playbook is: come in with the low-hanging fruit — use the existing core building blocks, your EC2, S3, DynamoDB — and then hit the higher-level services as you get more experience? Or is there a certain recipe you see working for customers? >>So it's probably less about that — it's not necessarily about where you're at in the service continuum and which services you're using. We're going to move you to a set of services that are a good fit for moving your monolith most effectively into a SaaS model as a beginning point; that could land you in containers. The more important thing we're going to do is surround that experience with all the other moving parts you have to have: billing, metrics. We're going to build in onboarding so that you get frictionless onboarding — those are all net-new things you have to build. We're probably going to change your identity model and connect that up with Cognito or one of our partners' solutions. So for us it's grabbing your existing environment — can we move it over effectively, maybe modernize it a little along the way — but more importantly, building in all those horizontal concepts, leveraging the right AWS services for you, to bring that to life. >>That's actually smart, the way you described it. It's almost the core tenet of what Amazon has stood for: stand it up fast and you get value, right? So what you're saying is, whatever it takes — a variety of tools — to stand it up. This is interesting, Craig, and Tod, if you can comment on this, because one of the things we've been reporting on — I've done probably a dozen interviews specifically around companies that moved to the cloud early, proactively, in this way, not in a major radical way.
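The identity piece mentioned above — moving the identity model to something like Cognito — usually boils down to mapping the claims in an authenticated user's token to a tenant context that the rest of the system consumes. A hedged sketch: the `custom:` claim names below follow a common custom-attribute convention for Cognito ID tokens, but the exact names are an assumption for illustration, not from the interview.

```python
# Sketch: derive a tenant context from decoded JWT claims, as a SaaS identity
# layer (e.g. one backed by Amazon Cognito) might supply them. The
# "custom:tenant_id" / "custom:role" claim names are illustrative conventions.
from dataclasses import dataclass

@dataclass(frozen=True)
class TenantContext:
    tenant_id: str
    user_id: str
    role: str

def tenant_context_from_claims(claims: dict) -> TenantContext:
    try:
        return TenantContext(
            tenant_id=claims["custom:tenant_id"],
            user_id=claims["sub"],  # "sub" is the standard subject claim
            role=claims.get("custom:role", "member"),
        )
    except KeyError as missing:
        raise ValueError(f"token is missing required claim: {missing}") from missing
```

Every request handler then receives a `TenantContext` instead of re-deriving tenancy, which is what makes billing and metrics attributable per tenant.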
But, you know, operationally they have been transforming piece by piece, the way Tod laid it out — and then the pandemic hit, and they had successfully positioned themselves to take advantage of the forcing function: the necessity of dealing with remote work and all these things that just clobbered them. And again, they were on the wave at the right time — kind of because they had to, because they'd done the right work. This is a factor. This is going to be a telltale sign. Can you guys share your reaction to what you've seen with SaaS Factory? Because this is the benefit of moving to the cloud: being positioned, whether it's the pandemic today, tomorrow it's edge, and what's after that — space, right? I mean, there's a lot of things. This is kind of the playbook. What's your reaction to that, Craig? >>Yeah, I certainly see the organizations we work with that are really delivering in the SaaS model being more agile, right? The ability to flex resources, change the way they sell and work with customers, and find ways to deliver to them — without some of the things that may be holding them back in traditional software, in terms of how fast they deliver new features and services and change with market and world dynamics very quickly — that's a big part of it. And one of the things we talk about in the SaaS model is really not just getting to SaaS, but being able to deliver in that model, right, and drive innovations to customers very quickly, so that you're really securing them as loyal customers — hopefully lifetime customers. That's a big part of SaaS. >>Yeah. And there are two types of organizations you guys have been successful with. The startups, obviously — category creators or disruptors will come in with an app.
Born in the cloud, kick some ass — you've seen that movie, it happens all the time, it's still going on. And then you've got the existing organizations that have to stay on that innovation wave and not get crushed by the change. Can you share how the factory is working — the SaaS Factory — from a mix of clients? Is it more established companies, startups, in between? Give us a taste of the makeup. >>Yeah, just to give you a range of some of the companies we've worked with: from legacy technology companies, or companies that have been around for some time — like BMC, F5, Alfresco, who we've worked with over the past few years and who have launched products with our team on AWS — to startups like Matillion, CloudZero, and Cohesity, which just launched a data management service announced here at re:Invent, to very specific industry players. I think this is a trend we've seen most recently, where we work with organizations like NASDAQ, iBASEt in aerospace, and Emerson in oil and gas. We've seen a number of oil and gas companies really come to us based on the dynamics of their industry and the constraints their customers are under, in terms of how they can deliver the value they provide. >>Is there a key thing popping out of all these deals that's a telltale sign or pattern — a specific thing that's obvious when you look at the data, when you zoom out?
>>Yeah, I think one thing I would say is that people underestimate the transformation they have to go through — continually. We still have organizations that come to us, or maybe to Tod and others, envisioning this as a technical transformation, right? They want to talk all about the application and the new architecture they want to move to. But we really see the importance of aligning business and technology around SaaS as a model — that's really fundamental to getting it right. So, you know, often we see organizations with really unrealistic launch dates — which is pretty common in software and services these days, but particularly in a SaaS model. We just see that they underestimate the work in front of them and what they need to bring to it. >>Tod, real quick before we get to the announcements, which are cool: on the technical things that pop out of these organizations — the cream kind of rises to the top — when you look at the value proposition, what do they focus on technically? >>Um, you know, it's interesting, because to me a lot of the focus tends to be on the things that would surprise you. A lot of people want to think about how to design the insides of the application and its business logic, and take advantage of the scale and sizing of AWS — and those things are still all true. But really successful SaaS organizations shift a lot more toward agility and operational efficiency, right? So really good organizations will say: we're going to invest in all the metrics, all the analytics, all the tooling that lets us really have our finger on the pulse of what our customers are doing. And then they'll derive all their tech and business strategy from that really data-driven experience. I see that as the trend, and the thing we certainly advocate a ton inside the SaaS Factory: don't under-invest in that data, because that data — especially in a multi-tenant environment, where everybody's running in this shared environment — is essential to understanding how to morph your business, how to innovate, and how your cost profile is really evolving.
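The kind of tenant-aware instrumentation being advocated here — knowing per tenant what is being consumed, and therefore what it costs, in a pooled environment — starts with stamping every usage event with the tenant it belongs to. A minimal sketch, where the event shape and metric names are assumptions for illustration:

```python
# Sketch: aggregate tenant-stamped usage events so consumption (and therefore
# cost) can be attributed per tenant in a pooled, multi-tenant environment.
from collections import defaultdict

def record_event(events: list, tenant_id: str, metric: str, value: float) -> None:
    """Append one usage event, always carrying its tenant id."""
    events.append({"tenant_id": tenant_id, "metric": metric, "value": value})

def usage_by_tenant(events: list, metric: str) -> dict:
    """Sum a given metric per tenant across all recorded events."""
    totals: dict = defaultdict(float)
    for event in events:
        if event["metric"] == metric:
            totals[event["tenant_id"]] += event["value"]
    return dict(totals)
```

With events recorded this way, per-tenant cost profiles fall out of a simple aggregation — e.g. summing a hypothetical `"storage_mb"` metric per tenant — rather than being reverse-engineered after the fact.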
And so I see the really strong organizations building a lot of these foundational bits, sometimes even ahead of building features and functions into their own products. >>So it's not only moving fast on deploying tech — it's moving fast on the business model innovation as well. You're basically saying: don't overplay your hand and try to lock in the business model logic, because it's going to change with the data. Is that what you're saying? >>Yeah, they're playing for the innovation. They're playing for the agility; they're playing for new markets, new segments that may evolve. So they're really trying to put themselves in a position to pivot and move, and they take pride in the fact that their technology lets them do that. >>You know, that's a business model that's not for the faint of heart, when you have a market with a lot of competitiveness to it — and we're certainly seeing the sea change happening this year and over the past few years, with cloud completely changing the playing field and winners and losers emerging. And I think that's the key. As I said, quoting The Godfather, you need a wartime consigliere for these kinds of times, and this is kind of what we're seeing. Great point, Tod — good stuff there. Okay, so, announcements. You guys had some things on stage. Craig, you guys are launching some new stuff? New programs? >>Yeah, absolutely. John, I guess our model is really to learn from the range of partners and experiences we have and then build tools and approaches to help everyone go faster, right? Because we certainly can't work with thousands of organizations directly. And one of the things our team has had the opportunity to do over the last few years is publish a ton of articles, blogs, and white papers — very specific approaches to building SaaS solutions.
If you search Tod Golding out there on YouTube or anywhere, you'll find a bunch of things. But we wanted to bring it all together, and so we've created a central directory called the SaaS Factory Insights Hub. There are right now over 70 unique pieces of content our team has produced and curated — whether you're starting on your SaaS journey and need SaaS 101 and business planning, or you're at level 400 with tenant isolation from Tod Golding. It's all there and available to you on the SaaS Factory program page. >>What are some of the interesting things that came out of that data — from the insights — that you can share? >>Yeah, a couple of the things we've published most recently that I'd point to are really interesting. We just published an F5 case study where we go deeper on their transformation, to really understand what was behind the scenes. We also published a white paper called the SaaS Journey Framework, where for the first time our team really broke down the journey: what are the steps required, and what are some of the key questions you need to ask. And the final piece I'd point to, for the people Tod talks to: we have a white paper on SaaS tenant isolation strategies, where we go really deep on that particular challenge. That's also published and available on our SaaS Factory Insights Hub. >>Could you just define — what does that mean, tenant isolation strategies? >>I'll go to Tod with that, for sure. >>Let's get that on the record. What is the definition of SaaS tenant isolation? >>Sure, sure. So, you know, I've been in the room with a lot of people at re:Invent, in chalk talks, and asked, "What's tenant isolation to you?" And a lot of people will say, "Oh, that's authentication — essentially, somebody got into the system."
So now I know my system is isolated. But in a multi-tenant environment, where we're running all these resources and all this data co-mingled across different tenants, it would be a huge blow to the business if one tenant somehow inadvertently exposed the resources of another tenant. So, fundamentally, tenant isolation is all the techniques, strategies, and architectural patterns you use to ensure that one tenant cannot inadvertently get access to the resources of another tenant. It's a layer of protection and security that goes beyond the authentication and authorization schemes you'll typically see in SaaS architectures. >>So that's basically like having your own room, lock and key — not just getting in the doorway, but no one can access your stuff. >>Yeah, it's a whole set of measures. You can imagine identity and access management and other policies defining tenant boundaries, so that as each tenant tries to access a resource or interact with the system in some way, you've put these extra walls up to ensure they can't cross those boundaries. >>Tod, I want to get your thoughts on this Well-Architected SaaS Lens piece. What is this all about? >>Well, AWS has had, for a long time, the Well-Architected Framework, which has been a really great set of guiding principles and best practices for how to design and architect solutions on top of AWS. And certainly SaaS providers have been using it all along to ask foundational questions of their architecture. But there has always been a layer of additional SaaS considerations that sit on top of that — SaaS-specific architectural patterns.
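The "extra walls" described above — identity and access management policies defining tenant boundaries — are often realized as dynamically scoped IAM policies: credentials constrained so they can only touch the current tenant's slice of a shared resource. The sketch below builds an IAM-style policy document that confines access to one tenant's items in a pooled DynamoDB table using the `dynamodb:LeadingKeys` condition key; the table ARN, actions, and partition-key scheme are illustrative assumptions, not anything specified in the interview.

```python
# Sketch: build an IAM-style session policy that confines access to one
# tenant's items in a pooled DynamoDB table. Uses the dynamodb:LeadingKeys
# condition key (fine-grained access control on the partition key).
import json

def tenant_scoped_policy(table_arn: str, tenant_id: str) -> str:
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["dynamodb:GetItem", "dynamodb:Query", "dynamodb:PutItem"],
                "Resource": table_arn,
                "Condition": {
                    # Only items whose partition key belongs to this tenant.
                    "ForAllValues:StringEquals": {
                        "dynamodb:LeadingKeys": [tenant_id]
                    }
                },
            }
        ],
    }
    return json.dumps(policy)
```

In practice a document like this would be passed as a session policy when assuming a role per request, so even a buggy query in the application code cannot cross the tenant boundary — the isolation holds regardless of what the authenticated code tries to do.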
And so what we've done is use this mechanism called a Well-Architected lens, which lets us take our SaaS architectural principles, extend the Well-Architected Framework, and introduce these concepts into the pillars so that they ask the hard SaaS architecture questions. Security, operations, reliability — all the classic pillars of the Well-Architected Framework — now have a SaaS-specific context added to them, to really go after the areas that are unique to SaaS providers. This gives developers, architects, and consultants the ability to sit down, look at a SaaS application, and evaluate its alignment with these best practices. And so far we've had a really positive response to the content. >>Great job, guys — great work. Finally, there's something new you guys are announcing today to make life easier building SaaS on AWS. What's that about? >>Sure. So, you know, you can imagine we've been working with these SaaS providers for a number of years now, and as we've worked with them we've seen a number of themes emerge. We've run into this pattern that's pretty common, where we'll see customers with a classic installed-software model — they're installing it on premises or in the cloud, but basically each customer has their own version of the product. They have one-off versions; they potentially have customizations that are different. And while this works for these businesses for some time, what they find is they run into an operational-efficiency and cost wall: as they try to grow, they just can't keep up with the way they're running their current systems. And that's a natural draw to move them to SaaS.
But the other pattern we've seen is that these organizations are sometimes not in a position where they have the luxury of going away and just rewriting or modernizing the whole system. There can be any number of factors — competitive pressures, market realities, cost — that make it too difficult a process to just take the application and rewrite it. So what we tried to do is acknowledge that and ask: what could we give you as a more prescriptive solution — a sort of turnkey easy button, if you will — to take my existing monolithic application, delivered in this classic way, and plug it into a pre-built framework and environment that includes all these foundational bits of a SaaS environment? Let me just take my monolith, move it into that environment, and begin to offer a SaaS product to the world. And so we've introduced this thing called AWS SaaS Boost. Now, AWS SaaS Boost is not an AWS service — it is an open-source reference environment. You essentially download it and install it into your own AWS account, and it installs all these building blocks of SaaS that we've talked about. It gives you a prescriptive way to take your existing monolithic environment, lift it into this experience, and begin to offer it to the market as a SaaS product. So it has billing, it has metrics and analytics — all the things we've been talking about here are baked into it from the ground up. And we've also offered it in an open-source model. Our hope here is that this is really just the starting point of the solution, which will solve one business case.
But our hope is that the open-source community will lean in with us and help us figure out how to evolve this into something that addresses a broader set of needs. >>Well, I love SaaS Boost. First, I want to take the energy-drink business there — it sounds like an energy drink. Give me some of that SaaS Boost; buy it at 7-Eleven. Craig, I want to give you the final word. You've been in the SaaS business for over 20 years; you've seen this movie before. A lot of people know the SaaS business, and some people are learning it — you guys are helping people get there. It's different now, though. What's different today? Because it's not just your grandfather's SaaS, as the expression goes — it's new dynamics. What is the most important thing people should pay attention to, whether they have a SaaS-legacy mindset or they're new to the game? Take us home. >>Yeah, I think, certainly, getting to SaaS is not the end of the journey. We see really successful SaaS providers just continue to differentiate, right? And one of the things we've seen successful SaaS providers do is really take advantage of AWS services to go faster. That's really key in this model: find a way to accelerate your business, deliver value faster, and keep that differentiation and innovation going. And I would just say there's more information out there and available than ever — not only from our team, but from a host of people who really are SaaS experts and follow the space. So there are lots of resources available to everyone. >>All right, gentlemen, thanks for coming on. Great insight — great segment on getting to SaaS, SaaS Boost, and just the landscape. You guys are helping customers get there, and that's really the top priority. Necessity is the mother of all invention during this pandemic.
More than ever, it's about keeping business models going and establishing new ones. So thanks for coming on. >>Thanks for having us, John. >>Okay, this is theCUBE's virtual coverage — we're a SaaS business now too, virtual, bringing you the remote SaaS Cube — with more re:Invent coverage over the next few weeks. Thanks for watching.
Ian McCrae, Orion Health | AWS Public Sector Summit Online
>> Announcer: From around the globe, it's theCUBE, with digital coverage of AWS Public Sector Online, brought to you by Amazon Web Services. >> Everyone, welcome back to theCUBE's virtual coverage of the AWS — Amazon Web Services — Public Sector Summit Online. Normally we're face to face in Bahrain or Asia Pacific, or even down in New Zealand and Australia, but we have to do it remotely. I'm John Furrier, host of theCUBE, and we've got a great segment here with a great guest: Ian McCrae, Founder and CEO of Orion Health, talking about the global healthcare industry and cloud technology — because now more than ever, we all know what healthcare looks like before COVID and after COVID. It has upended the healthcare business, we're seeing it play out in real time, and there are a lot of great benefits from technology. Ian, thank you for coming on remotely from New Zealand — we're here in Palo Alto, California — thank you for joining me. >> Thank you for the invitation. >> You're the Founder and CEO of Orion Health, a global, award-winning provider of health information technology that supports the delivery of optimized healthcare throughout New Zealand, and now more than ever around the world — congratulations. But now COVID has hit. What is the impact of COVID? Because this is changing healthcare for the better, in speed and agility. Are the services up to snuff, up to par? What is the current COVID situation, and what will post-COVID look like for healthcare? What's your opinion? >> So I've never seen such a dramatic change in such a short time as has happened over the last nine to ten months.
And what we're seeing is that before COVID there was a lot of focus on automating hospitals, and probably primary care, et cetera. Now all the focus is on putting medical records together — digital front doors, giving patients access to their medical records in much the same way you have access to your bank records. When you travel — well, we don't travel now, actually — but when you go into the lounges, the airline apps are very, very user-friendly, and the healthcare sector has been a laggard in this area. That's all about to change. When patients are feeling ill, they don't want to go down to their local physician practice, because, well, there are other sick people there. They want to get the right care, at the right time, in the right place. And usually when they're not feeling well, they want to go online — probably symptom checking first — and if they need a consult, they'd like to do it there and then, not two or three days later, and they'd like to do it virtually. There are definitely things that can be done remotely, and that's what people want. >>One of the things that comes up in all my interviews around innovation, and certainly around AWS and cloud, is the speed of innovation. We were talking before we came on camera: I'm in Palo Alto, California, you're in Auckland, New Zealand. I don't have to fly there — although it would be 14 days of quarantine in New Zealand, and summer is coming — but we can get remote services; we're talking and sharing knowledge right now. And you were also saying, before we went on, that healthcare is taking a trajectory similar to the financial industry. You saw ATM machines — what an innovation, self-service — then you got apps, and the rest is history. Just connect the dots.
The same kind of thing is happening in healthcare. Can you share your vision of how you see this playing out — why it's so successful, what still needs to be worked on, and how cloud brings it all together? >>Just on the banking front: I hadn't been to the bank for years, because I do it all online — I had to go to the bank the other day, and it was a novel experience. But you know, when I discuss requirements with our developers, I say: hold on, you're a patient, you know what you want. You want your medical record pulled together, right? You want everything there, with easy access to it. Perhaps you'd like the computer to make some suggestions to you, and maybe give you warnings and alerts. And we're also getting a lot more data. Historically, a medical record was your labs, your radiology, your pharmacy, a few procedures, maybe. What we're getting now is genomic data being added to it, social determinants — where do you live, where do you work — behavioral data, and lots of other things, including device data, all being entered into the medical record. And it is going to get big. Now, within that vast amount of data there will be signals that can be picked up — not by humans, but by machine learning — and we need to surface the right suggestions back to the patients themselves, or to their circle of care: their doctors and physicians, or maybe their family. So the picture I'm trying to paint here is that health has historically been centered around physicians and hospitals, and it's all about to change.
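The aggregation being described — labs, radiology, and pharmacy now joined by genomic, social, behavioral, and device data — can be pictured as a single longitudinal record assembled from many feeds. A toy sketch, with source labels and field names invented purely for illustration (this is not Orion Health's data model):

```python
# Sketch: merge observations from multiple feeds into one longitudinal
# patient record, ordered by time. Source and field names are illustrative.
from dataclasses import dataclass, field

@dataclass
class Observation:
    source: str      # e.g. "lab", "pharmacy", "genomics", "device"
    timestamp: str   # ISO 8601, so lexicographic order == chronological order
    payload: dict

@dataclass
class PatientRecord:
    patient_id: str
    observations: list = field(default_factory=list)

    def add(self, obs: Observation) -> None:
        """Insert an observation and keep the record chronologically ordered."""
        self.observations.append(obs)
        self.observations.sort(key=lambda o: o.timestamp)

    def from_source(self, source: str) -> list:
        """All observations contributed by one feed, e.g. just the genomics."""
        return [o for o in self.observations if o.source == source]
```

The point of the sketch is the shape of the problem: once every feed lands in one time-ordered record per patient, downstream machine learning can look across sources — for instance, relating a genomic observation to a later prescription — rather than at each silo in isolation.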
And it's going to happen quickly. Normally health is very slow — it's a laggard; change takes forever and forever — but what we're seeing right across the world, from Europe, the Middle East, and Asia to North America, is the big health systems looking to provide far richer services to their populations. >>A big joke in Silicon Valley, about a decade ago when big data was hitting the scene, was that we had the smartest data engineers working on how to place an ad next to you on a page — which in concept is actually a technical challenge: getting the right contextual, relevant piece of information in front of you. But take that construct to, say, medicine: you have precision needs, and you also have contextual needs. If I need a physician, why not do it virtually, if that gives me faster care with a knowledge-based system behind it? And if I need precision, I can then come in, which is much more efficient. Can you share how the data translates into value? Because machine learning is a big part of it, and machine learning is a consumer of data too, not just users — but the results are still the same. How are you seeing that translate into value? >>I think the first thing is that if you can treat patients earlier and more accurately, you can ultimately keep them healthier, using fewer health resources. And you notice around the world that different health systems take different approaches. The most interesting approach we see is when a payer also happens to own the hospitals — their approach changes dramatically, and they start pouring money into primary care so they need fewer hospital beds. But with data and information, you can be more precise in the way you treat the patient.
So I've had my genome done, probably quite a few times actually, I just wanted to compare the different providers. So I have a variant called CYP2C19, I'm pretty sure I've got that right, and that means I hypermetabolize certain drugs, so if you give them to me they won't work. And so there's information in our medical records; with machine learning, if you can keep a Tesla on the road, we must be able to use the same techniques. In fact, we have a very big machine-learning project here at this company, to not only get the information out of the medical records but serve it back up, and this is the hard part, serve it back up to the providers and to the patients in a meaningful, useful, actionable way, not too much, not too little, and that's usually the challenge, actually. >> You're a customer in your business, and you guys are in New Zealand, but it's global, you have a global footprint. How are you leveraging cloud technology to address your customers? >> It's hugely useful because we end up with one target platform, so when we come to deploy in any part of the world, it's the same platform. And you know, from a security point of view, if we're trying to secure all these on-prem installations, it's very, very hard, so we have a lot of security features that are provided for us; there's lots of infrastructure tooling, deployment, and monitoring, all this stuff is just inherent within the cloud, and I guess what's most important, we have a standard platform that we can target right across the world. >> And you're using Amazon Web Services. I'd imagine that as you go outside and look at the edge, you have to have these secure edge points where you're serving clients, that's important. How are you securing that edge?
>> Well, fortunately for us, Amazon is increasingly getting right across the world; there are still some regions which they're still working on, but over time we would be expecting effectively every country in the world to have all sorts of services available. >> Where do you see the future of healthcare going from your standpoint? I mean, if you had to project into the future, say, you know, five years from now, where are we on the progress and innovation wave, how do you see that playing out, Ian? >> So, certainly over the last 30 years we've had various waves of innovation in healthcare, but I think this pandemic is going to transform healthcare in such a major way in such a short time, and we'll see it totally transform within two to four years. And the transformation will be just like your bank, your airline, or lots of other things, buying stuff via Amazon, actually; we'll see that sort of transformation of healthcare. We have talked a lot about healthcare historically being patient-centric; it is really not true. Our healthcare today, in most parts of the world, has been geared around the various healthcare facilities, so the change we're going to see now is that it'll be geared around the patients themselves, which is really intriguing and exciting. >> Ian, I want to get my genome done, you've reminded me, I've got to get that done. >> Finding that out, you know, you know--- >> I want to know, (laughs) I want to kind of know in advance, so I can either go down in flames and have a good time, or play the long game. >> I found out I had the positivity gene, you know, I kind of knew that, and you know, I'm a fairly positive individual, so (laughs). >> Yeah, well, like you, I'm going to have to go through that process.
But you know, again, fundamentally, I agree this industry is ripe for change. I remember the old debates on HIPAA and having silos, and so data protection was a big part of that business, and privacy is huge, and I'll get to that in a second, but the one area I want to touch on first is a really important one for everyone around the world: how does technology help people everywhere get access to healthcare? How do you see that? Unless there's one approach where the government does it all, which some people like and some people don't, generally speaking technology should help you. What's your view on how technology helps us get accessible healthcare? >> Well, it means no matter where you live or what you do, most people have access to the internet, either via a phone or a computer. And so what you want to be able to do, what we need to do as a society, is give everybody access, just like they have access to their banking records, give them similar access to their medical records. And again, you know, the standard features: symptom checking, advice and help for patients who have chronic conditions, medication charts are really important, the ability to go online and do an internet consult for the conditions that don't require a physical examination, being able to message your circle of care. It's basically the automation of healthcare, which, you know, has sadly lagged other industries. >> That is a critical point, you mentioned that earlier; I want to get back on the data, and we'll get to privacy right after. You mentioned AI and machine learning; obviously it's a huge part of it, having data models that are intelligent. I know, I've covered Amazon SageMaker and a bunch of other stuff they're working on, so they're getting smarter, and they're doing it by industry, which I think is smart.
But I want to ask you about data. I was just having a conversation this morning with a colleague, and we hear about AI and machine learning; they're consumers too, (chuckles) so if machines are going to automate humans, which they are, the machines are consuming data, so machine learning is now a consumer, not just a technology. So when you're consuming data, you've got to have a good approach. You guys are doing a lot with data. How should people think about machine learning and data? Because if you believe that machine learning will assist humans, then machines are going to talk to other machines, consume data, create insights, et cetera, and spawn all sorts of systemic effects. How should people who are in healthcare think about data? What's your insight there? >> Well, the tricky thing with machine learning in healthcare is not so much the algorithms; the algorithms are readily available on Amazon and elsewhere. The big problem that we have found, and we've been working on this for some time and have a lot of people working on it, the big problem we have is first of all marshaling, getting all the data together, wrangling the data; then there's the fun part where we run the algorithms; and then the next big problem is getting the results back into the clinical workflow. So we spend all our time upstream and downstream, and the bit in the middle, which is the fun bit, takes a very small amount of time. And so probably the hardest part is getting it back into the clinical workflow; that's the hardest part, really, it's really difficult. >> You know, I really appreciate what you do. I think this is going to be the beginning of a big wave of innovation. I was talking with Max Peterson about some areas where they saw, you know, thousands and thousands of people being cared for virtually, who never would have been cared for, with these systems and the cloud.
Again, it's just the beginning, and I think this is a reconfiguration of the healthcare value chain and--- >> Reconfiguration, I mean, pre-COVID we as a company spent so much time on planes, traveling all over the world; I've hardly traveled this year, with Zoom and all the other technologies, and I've quite enjoyed it, to be fair. So I think there's a reconfiguration of how business is done, and it's started to happen in healthcare and--- >> If I tell my wife I'm coming to New Zealand, I get quarantined for 14 days. >> That's right. >> Yeah, I'd be stuck down under in summertime. >> You get one of those hotels with the view of the harbor, very nice. >> And a final question to just close out the segment, and I think this is super important: you mentioned at the top, COVID has upended the healthcare industry. Remote health is what people want, whether it's for, you know, not being around other sick people, or for convenience, or for just access. This is a game changer; you've got Apple Watches now, I was just watching Apple discuss some of the new technologies and processes that they have in these things for heartbeat, so, you know, all these signals. This is absolutely going to be a game changer; software needs to be written, it has to be software-defined, and cloud is going to be at the center of it. What's your final assessment? Share your parting thoughts. >> We are definitely in a major reconfiguration of healthcare that's going to happen very quickly; I would have thought within 24 months, maybe no more than 36, and what we're going to end up with is a health system just like your bank. And the big challenge for our sector is, first of all, the large amounts of data: how do you store it, where do you store it, and the cloud is the ideal place to do it; then how do you make sense of it, you know, how do you give just the right advice to an elderly patient versus a millennial who is very technology-aware?
So there's lots of innovation and problems to be solved, and lots of opportunities, I believe, for startups and new innovative companies, and so it's interesting times. >> I think time's short; there's just so much to do, a great recruitment opportunity at Orion Health. Thank you for spending the time. Ian McCrae, Founder and CEO of Orion Health, an award-winning provider of health information systems, based out of New Zealand, thank you for taking the time to come on, appreciate it. >> Thank you. >> Okay, I'm John Furrier with theCUBE's coverage of AWS Public Sector Summit Online. We're not face to face; normally we'd be in person, but we're doing it remotely due to the pandemic. Thank you for watching theCUBE. (soft upbeat music)
4-video test
>> Okay, this is my presentation on coherent nonlinear dynamics and combinatorial optimization. This is going to be a talk to introduce an approach we're taking to the analysis of the performance of coherent Ising machines. So let me start with a brief introduction to Ising optimization. The Ising model represents a set of interacting magnetic moments or spins, with the total energy given by the expression shown at the bottom left of this slide. Here, the sigma variables take binary values. The matrix element J_ij represents the interaction strength and sign between any pair of spins i, j, and h_i represents a possible local magnetic field acting on each spin. The Ising ground-state problem is to find an assignment of binary spin values that achieves the lowest possible value of total energy, and an instance of the Ising problem is specified by giving numerical values for the matrix J and vector h. Although the Ising model originates in physics, we understand the ground-state problem to correspond to what would be called quadratic binary optimization in the field of operations research, and in fact, in terms of computational complexity theory, it can be established that the Ising ground-state problem is NP-complete. Qualitatively speaking, this makes the Ising problem a representative sort of hard optimization problem, for which it is expected that the runtime required by any computational algorithm to find exact solutions should asymptotically scale exponentially with the number of spins n for worst-case instances at each n. Of course, there's no reason to believe that the problem instances that actually arise in practical optimization scenarios are going to be worst-case instances. And it's also not generally the case in practical optimization scenarios that we demand absolute optimum solutions.
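To make the definitions in this passage concrete, here is a minimal sketch of the Ising energy and the brute-force ground-state search whose exponential cost motivates everything that follows. The 3-spin matrix J, the zero field, and the sign/normalization conventions are illustrative assumptions, not taken from the talk.

```python
import itertools
import numpy as np

def ising_energy(sigma, J, h):
    """E(sigma) = -1/2 * sigma^T J sigma - h . sigma, sigma_i in {-1, +1}.
    (Sign and 1/2 conventions vary across the literature.)"""
    sigma = np.asarray(sigma, dtype=float)
    return float(-0.5 * sigma @ J @ sigma - h @ sigma)

def brute_force_ground_state(J, h):
    """Exhaustively search all 2^n spin assignments; only feasible for tiny n,
    which is exactly why heuristics and Ising machines are of interest."""
    n = len(h)
    best_spins, best_energy = None, float("inf")
    for spins in itertools.product([-1, 1], repeat=n):
        e = ising_energy(spins, J, h)
        if e < best_energy:
            best_spins, best_energy = spins, e
    return best_spins, best_energy

# Hypothetical 3-spin ferromagnetic instance (all J_ij = +1, no field):
# the ground states are the two fully aligned spin configurations.
J = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])
h = np.zeros(3)
spins, energy = brute_force_ground_state(J, h)
```

The exhaustive loop visits 2^n configurations, which is exactly the worst-case exponential scaling the talk refers to.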
Usually we're more interested in just getting the best solution we can within an affordable cost, where cost may be measured in terms of time, service fees, and/or energy required for a computation. This focuses great interest on so-called heuristic algorithms for the Ising problem and other NP-complete problems, which generally get very good but not guaranteed-optimum solutions and run much faster than algorithms that are designed to find absolute optima. To get some feeling for present-day numbers, we can consider the famous traveling salesman problem, for which extensive compilations of benchmarking data may be found online. A recent study found that the best known TSP solver required median run times, across a library of problem instances, that scaled as a very steep root-exponential for n up to approximately 4,500. This gives some indication of the change in runtime scaling for generic as opposed to worst-case problem instances. Some of the instances considered in this study were taken from a public library of TSPs derived from real-world VLSI design data. This VLSI TSP library includes instances with n ranging from 131 to 744,710. Instances from this library with n between 6,880 and 13,584 were first solved just a few years ago, in 2017, requiring days of run time on a 48-core 2-GHz cluster, while instances with n greater than or equal to 14,233 remain unsolved exactly by any means. Approximate solutions, however, have been found by heuristic methods for all instances in the VLSI TSP library, with, for example, a solution within 0.14% of a known lower bound having been discovered for an instance with n equal to 19,289, requiring approximately two days of run time on a single core at 2.4 GHz.
Now, if we simple-mindedly extrapolate the root-exponential scaling from that study up to n equals 4,500, we might expect that an exact solver would require something more like a year of run time on the 48-core cluster used for the n-equals-13,584 instance, which shows how much a very small concession on the quality of the solution makes it possible to tackle much larger instances at much lower cost. At the extreme end, the largest TSP ever solved exactly has n equal to 85,900. This is an instance derived from a 1980s VLSI design, and it required 136 CPU-years of computation, normalized to a single core at 2.4 GHz. But the much larger so-called World TSP benchmark instance, with n equals 1,904,711, has been solved approximately, with an optimality gap bounded below 0.474%. Coming back to the general practical concerns of applied optimization, we may note that a recent meta-study analyzed the performance of no fewer than 37 heuristic algorithms for Max-Cut and quadratic binary optimization problems, and found that different heuristics work best for different problem instances selected from a large-scale heterogeneous test bed, with some evident but cryptic structure in terms of what types of problem instances were best solved by any given heuristic. Indeed, there are reasons to believe that these results from Max-Cut and quadratic binary optimization reflect a general principle of performance complementarity among heuristic optimization algorithms. In the practice of solving hard optimization problems, there thus arises a critical pre-processing issue of trying to guess which of a number of available good heuristic algorithms should be chosen to tackle a given problem instance. Assuming that any one of them would incur high costs to run on a large problem instance, making an astute choice of heuristic is a crucial part of maximizing overall performance.
Unfortunately, we still have very little conceptual insight about what makes a specific problem instance good or bad for any given heuristic optimization algorithm. This has certainly been pinpointed by researchers in the field as a circumstance that must be addressed. So adding this all up, we see that a critical frontier for cutting-edge academic research involves both the development of novel heuristic algorithms that deliver better performance with lower cost on classes of problem instances that are underserved by existing approaches, as well as fundamental research to provide deep conceptual insight into what makes a given problem instance easy or hard for such algorithms. In fact, these days, as we talk about the end of Moore's law and speculate about a so-called second quantum revolution, it's natural to talk not only about novel algorithms for conventional CPUs but also about highly customized special-purpose hardware architectures on which we may run entirely unconventional algorithms for combinatorial optimization, such as the Ising problem. So against that backdrop, I'd like to use my remaining time to introduce our work on analysis of coherent Ising machine architectures and associated optimization algorithms. These machines, in general, are a novel class of information-processing architectures for solving combinatorial optimization problems by embedding them in the dynamics of analog, physical, or cyber-physical systems, in contrast to both more traditional engineering approaches that build Ising machines using conventional electronics and more radical proposals that would require large-scale quantum entanglement. The emerging paradigm of coherent Ising machines leverages coherent nonlinear dynamics in photonic or optoelectronic platforms to enable near-term construction of large-scale prototypes that leverage post-CMOS information dynamics. The general structure of current CIM systems is shown in the figure on the right.
The role of the Ising spins is played by a train of optical pulses circulating around a fiber-optic storage ring. A beam splitter inserted in the ring is used to periodically sample the amplitude of every optical pulse, and the measurement results are continually read into an FPGA, which uses them to compute perturbations to be applied to each pulse by synchronized optical injections. These perturbations are engineered to implement the spin-spin coupling and local magnetic field terms of the Ising Hamiltonian, corresponding to a linear part of the CIM dynamics. A synchronously pumped parametric amplifier, denoted here as a PPLN waveguide, adds a crucial nonlinear component to the CIM dynamics as well. In the basic CIM algorithm, the pump power starts very low and is gradually increased. At low pump powers, the amplitudes of the Ising spin pulses behave as continuous complex variables, whose real parts, which can be positive or negative, play the role of soft or perhaps mean-field spins. Once the pump power crosses the threshold for parametric self-oscillation in the optical fiber ring, however, the amplitudes of the Ising spin pulses become effectively quantized into binary values. While the pump power is being ramped up, the FPGA subsystem continuously applies its measurement-based feedback implementation of the Ising Hamiltonian terms. The interplay of the linearized Ising dynamics implemented by the FPGA and the threshold quantization dynamics provided by the synchronously pumped parametric amplifier results in a final state of the optical pulse amplitudes, at the end of the pump ramp, that can be read out as a binary string giving a proposed solution of the Ising ground-state problem.
This method of solving the Ising problem seems quite different from a conventional algorithm that runs entirely on a digital computer, as a crucial aspect of the computation is performed physically by the analog, continuous, coherent, nonlinear dynamics of the optical degrees of freedom. In our efforts to analyze CIM performance, we have therefore turned to the tools of dynamical systems theory, namely a study of bifurcations, the evolution of critical points, and topologies of heteroclinic orbits and basins of attraction. We conjecture that such analysis can provide fundamental insight into what makes certain optimization instances hard or easy for coherent Ising machines, and hope that our approach can lead both to improvements of the core CIM algorithm and to a pre-processing rubric for rapidly assessing the CIM-suitability of new instances. Okay, to provide a bit of intuition about how this all works, it may help to consider the threshold dynamics of just one or two optical parametric oscillators in the CIM architecture just described. We can think of each of the pulse time slots circulating around the fiber ring as representing an independent OPO. We can think of a single OPO degree of freedom as a single resonant optical mode that experiences linear dissipation, due to out-coupling loss, and gain in a pumped nonlinear crystal, as shown in the diagram on the upper left of this slide. As the pump power is increased from zero, as in the CIM algorithm, the nonlinear gain is initially too low to overcome linear dissipation, and the OPO field remains in a near-vacuum state. At a critical threshold value, gain equals dissipation, and the OPO undergoes a sort of lasing transition, and the steady states of the OPO above this threshold are essentially coherent states.
There are actually two possible values of the OPO coherent amplitude at any given above-threshold pump power, which are equal in magnitude but opposite in phase. When the OPO crosses this threshold, it essentially chooses one of the two possible phases randomly, resulting in the generation of a single bit of information. If we consider two uncoupled OPOs, as shown in the upper right diagram, pumped at exactly the same power at all times, then, as the pump power is increased through threshold, each OPO will independently choose a phase, and thus two random bits are generated. For any number of uncoupled OPOs, the threshold power per OPO is unchanged from the single-OPO case. Now, however, consider a scenario in which the two OPOs are coupled to each other by a mutual injection of their out-coupled fields, as shown in the diagram on the lower right. One can imagine that, depending on the sign of the coupling parameter alpha, when one OPO is lasing, it will inject a perturbation into the other that may interfere either constructively or destructively with the field that it is trying to generate by its own lasing process. As a result, one can easily show that for alpha positive there is an effective ferromagnetic coupling between the two OPO fields, and their collective oscillation threshold is lowered from that of the independent-OPO case, but only for the two collective oscillation modes in which the two OPO phases are the same. For alpha negative, the collective oscillation threshold is lowered only for the configurations in which the OPO phases are opposite. So then, looking at how alpha is related to the J_ij matrix of the Ising spin-coupling Hamiltonian, it follows that we could use this simplistic two-OPO CIM to solve the ground-state problem of a ferromagnetic or antiferromagnetic n-equals-2 Ising model, simply by increasing the pump power from zero and observing what phase relation occurs as the two OPOs first start to lase.
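A toy way to see this sign-of-alpha behavior numerically is a classical mean-field (c-number) model of two coupled degenerate OPOs. The equation of motion below, the pump value, the coupling strength, and the Euler integration are illustrative assumptions for a sketch, not the full CIM equations from the talk.

```python
import numpy as np

def simulate_two_opos(alpha, pump=1.5, dt=0.01, steps=4000, seed=0):
    """Toy c-number model of two mutually injected OPOs:
    dx_i/dt = (p - 1) x_i - x_i^3 + alpha * x_j,
    where x_i is the real (in-phase) amplitude of OPO i, p is the normalized
    pump, and alpha is the mutual-injection coupling. Starting from tiny
    random amplitudes (near-vacuum), the faster-growing collective mode
    wins as the system crosses threshold."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1e-3, size=2)  # near-vacuum initial amplitudes
    for _ in range(steps):
        dx = (pump - 1.0) * x - x**3 + alpha * x[::-1]  # x[::-1] swaps 1<->2
        x = x + dt * dx
    return x

x_ferro = simulate_two_opos(alpha=+0.2)  # "ferromagnetic" coupling
x_anti = simulate_two_opos(alpha=-0.2)   # "antiferromagnetic" coupling
```

For positive alpha the two amplitudes settle with the same sign (same phase), and for negative alpha with opposite signs, mirroring the lowered-threshold argument in the talk: the collective mode whose effective gain is raised by the coupling reaches oscillation first.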
Clearly, we can imagine generalizing this story to larger n; however, the story doesn't stay as clean and simple for all larger problem instances. To find a more complicated example, we only need to go to n equals 4. For some choices of J at n equals 4, the story remains simple, like the n-equals-2 case. The figure on the upper left of this slide shows the energy of various critical points for a non-frustrated n-equals-4 instance, in which the first bifurcated critical point, that is, the one that bifurcates at the lowest pump value a, flows asymptotically into the lowest-energy Ising solution. In the figure on the upper right, however, the first bifurcated critical point flows to a very good but sub-optimal minimum at large pump power. The global minimum is actually given by a distinct critical point that first appears at a higher pump power and is not adiabatically connected to the origin. The basic CIM algorithm is thus not able to find this global minimum. Such non-ideal behaviors seem to become more common at larger n, as for the n-equals-20 instance shown in the lower plots, where the lower right plot is just a zoom into a region of the lower left plot. It can be seen that the global minimum corresponds to a critical point that first appears at a pump parameter a around 0.16, at some distance from the adiabatic trajectory of the origin. It's curious to note that in both of these small-n examples, however, the critical point corresponding to the global minimum appears relatively close to the adiabatic trajectory of the origin, as compared to most of the other local minima that appear. We're currently working to characterize the phase-portrait topology between the global minimum and the adiabatic trajectory of the origin, taking clues as to how the basic CIM algorithm could be generalized to search for non-adiabatic trajectories that jump to the global minimum during the pump ramp.
Of course, n equals 20 is still too small to be of interest for practical optimization applications, but the advantage of beginning with the study of small instances is that we're able reliably to determine their global minima and to see how they relate to the adiabatic trajectory of the origin in the basic CIM algorithm. In the small-n limit, we can also analyze fully quantum-mechanical models of CIM dynamics, but that's a topic for future talks. Existing large-scale prototypes are pushing into the range of n equals 10^4 to 10^5 to 10^6, so our ultimate objective in theoretical analysis really has to be to try to say something about CIM dynamics in the regime of much larger n. Our initial approach to characterizing CIM behavior in the large-n regime relies on the use of random matrix theory, and this connects to prior research on spin glasses, SK models, the TAP equations, etcetera. At present, we're focusing on statistical characterization of the CIM gradient-descent landscape, including the evolution of critical points and their eigenvalue spectra as the pump power is gradually increased. We're investigating, for example, whether there could be some way to exploit differences in the relative stability of the global minimum versus other local minima. We're also working to understand the deleterious or potentially beneficial effects of non-idealities, such as asymmetry in the implemented Ising couplings. Looking one step ahead, we plan to move next in the direction of considering more realistic classes of problem instances, such as quadratic binary optimization with constraints. So in closing, I should acknowledge the people who did the hard work on the things that I've shown.
My group, including graduate students Edwin Ng, Daniel Wennberg, Tatsuya Nagamoto, and Atsushi Yamamura, has been working in close collaboration with Surya Ganguli, Marty Fejer, and Amir Safavi-Naeini, all of us within the Department of Applied Physics at Stanford University, and also in collaboration with the Yamamoto group over at NTT PHI research labs. And I should acknowledge funding support from the NSF via the Coherent Ising Machines Expedition in Computing, and also from NTT PHI research labs, the Army Research Office, and ExxonMobil. That's it. Thanks very much. >> I want to thank NTT Research and Yoshi for putting together this program, and also for the opportunity to speak here. My name is Alireza Marandi and I'm from Caltech, and today I'm going to tell you about the work that we have been doing on networks of optical parametric oscillators, how we have been using them for Ising machines, and how we're pushing them toward quantum photonics. I want to acknowledge my team at Caltech, which is now eight graduate students and five researchers and postdocs, as well as collaborators from all over the world, including NTT Research, and also the funding from different places, including NTT. So this talk is primarily about networks of resonators, and these networks are everywhere, from nature, for instance the brain, which is a network of oscillators, all the way to optics and photonics; some of the biggest examples are metamaterials, which are arrays of small resonators, and more recently the field of topological photonics, which is trying to implement a lot of the topological behaviors of condensed-matter physics models in photonics. And if you want to extend it even further, some of the implementations of quantum computing are technically networks of quantum oscillators.
So we started thinking about these things in the context of Ising machines, which are based on the Ising problem, which is based on the Ising model, which is the simple summation over the spins, where spins can be either up or down and the couplings are given by the J_ij. And the Ising problem is: if you know J_ij, what is the spin configuration that gives you the ground state? This problem is shown to be an NP-hard problem, so it's computationally important because it's a representative of the NP problems. And NP problems are important because, first, they're hard on standard computers if you use a brute-force algorithm, and they're everywhere on the application side. That's why there is this demand for making a machine that can target these problems, and hopefully it can provide some meaningful computational benefit compared to standard digital computers. So I've been building these Ising machines based on this building block, which is a degenerate optical parametric oscillator. What it is is a resonator with nonlinearity in it; we pump these resonators and we generate the signal at half the frequency of the pump. One photon of pump splits into two identical photons of signal, and they have some very interesting phase- and frequency-locking behaviors. And if you look at the phase-locking behavior, you realize that you can actually have two possible phase states as the oscillation result of these OPOs, which are off by pi, and that's one of the important characteristics of them. So I want to emphasize a little more on that, and I have this mechanical analogy, which is basically two simple pendulums. But they are parametric oscillators, because I'm going to modulate a parameter of them in this video, which is the length of the string, and that modulation will act as a pump; I'm going to make them oscillate, and that'll make a signal which is half the frequency of the pump.
And I have two of them, to show you that they can acquire these phase states: they're still phase and frequency locked to the pump, but they can settle in either the zero or the pi phase state. The idea is to use this binary phase to represent the binary Ising spin, so each OPO is going to represent a spin, which can be either zero or pi, up or down. To implement the network of these resonators, we use the time-multiplexing scheme. The idea is that we put pulses in the cavity; these pulses are separated by the repetition period that you put in, or T_R, and you can think about these pulses in one resonator as temporally separated synthetic resonators. If you want to couple these resonators to each other, you can introduce delays, each of which is a multiple of T_R. If you look at the shortest delay, it couples resonator one to two, two to three, and so on. If you look at the second delay, which is two times the repetition period, it couples one to three, and so on. And if you have N minus one delay lines, then you can have any potential couplings among these synthetic resonators. If I can introduce modulators in those delay lines, so that I can control the strength and the phase of these couplings at the right time, then I can have a programmable, all-to-all connected network in this time-multiplexed scheme, and the whole physical size of the system scales linearly with the number of pulses. So the idea of the OPO-based Ising machine is to have these OPOs, each of which can be either zero or pi, and I can arbitrarily connect them to each other. Then I start by programming this machine to a given Ising problem, by just setting the couplings with the controllers in each of those delay lines. So now I have a network which represents an Ising problem, and the Ising problem maps to finding the phase state that satisfies the maximum number of coupling constraints.
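The delay-line counting argument above can be checked in a few lines. This is a sketch of the combinatorics only, not hardware code: each delay of k repetition periods couples pulse i to pulse i+k, and N minus one distinct delays reach every pair of the N time-multiplexed pulses.

```python
def pairs_coupled_by_delays(delays, n_pulses):
    """Return the set of (i, j) pulse pairs connected when a delay line of
    length k * T_R interferes pulse i with pulse i + k, for each k in delays."""
    pairs = set()
    for k in delays:
        for i in range(n_pulses - k):
            pairs.add((i, i + k))
    return pairs

n = 5
all_to_all = pairs_coupled_by_delays(range(1, n), n)  # delays T_R .. (N-1)*T_R
# N*(N-1)/2 = 10 distinct pairs: the synthetic network is fully connected
```

Dropping any one delay length removes a whole diagonal of couplings, which is why the measurement-feedback scheme discussed later, which needs only one delay, is such a simplification.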
And the way it happens is that the Ising Hamiltonian maps to the linear loss of the network, and if I start adding gain by putting pump into the network, then the OPOs are expected to oscillate in the lowest-loss state. We have been doing this for the past six or seven years, and I'm just going to quickly show you the transitions: what happened in the first implementation, which was using a free-space optical system, then the guided-wave implementation in 2016, and the measurement-feedback idea, which led to increasing the size and doing actual computation with these machines. I just want to make the distinction here that the first implementation was an all-optical interaction; we also had an N-equals-16 implementation. And then we transitioned to this measurement-feedback idea, which I'll tell you quickly what it is. There's still a lot of ongoing work, especially on the NTT side, to make larger machines using the measurement feedback, but I'm going to mostly focus on the all-optical networks: how we're using them to go beyond simulation of the Ising Hamiltonian, both on the linear and nonlinear side, and also how we're working on miniaturization of these OPO networks. So the first experiment, which was the four-OPO machine, was a free-space implementation, and this is the actual picture of the machine. We implemented a small N-equals-4 MAX-CUT problem on the machine, so one problem for one experiment, and we ran the machine 1000 times. We looked at the state, and we always saw it oscillate in one of the ground states of the Ising Hamiltonian. Then the measurement-feedback idea was to replace those couplings and the controller with a simulator: we basically simulated all those coherent interactions on an FPGA, we replicated the coherent pulse with respect to all those measurements, and then we injected it back into the cavity, and the nonlinearity still remains.
So it is still a nonlinear dynamical system, but the linear side is all simulated. There are lots of questions about whether this system preserves the important information or not, or whether it behaves better computation-wise, and that is still a lot of ongoing study. But nevertheless, the reason this implementation is very interesting is that you don't need the N-minus-one delay lines; you can just use one. Then you can implement a large machine, you can run several thousands of problems on the machine, and you can compare the performance from the computational perspective. So I'm going to split this idea of the OPO-based Ising machine into two parts. One is the linear part: if you take the nonlinearity out of the resonator and just think about the connections, you can think about this as a simple matrix multiplication scheme, and that's basically what gives you the Ising Hamiltonian modeling. The optical loss of this network corresponds to the Ising Hamiltonian. If I just show you the example of the N-equals-4 experiment, with all those phase states and the histogram that we saw, you can actually calculate the loss of each of those states, because all those interferences in the beam splitters and the delay lines are going to give you different losses, and then you will see that the ground states correspond to the lowest loss of the actual optical network. If you add the nonlinearity, the simple way of thinking about what the nonlinearity does is that it provides the gain. Then you start bringing up the gain so that it hits the loss, and you go through the gain saturation, or the threshold, which is going to give you this phase bifurcation: you go either to the zero or the pi phase state. And the expectation is that the network oscillates in the lowest possible loss state.
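A common way to picture the gain-versus-loss computation just described is the mean-field amplitude equation dx_i/dt = (p - 1) x_i - x_i^3 + sum_j J_ij x_j, where the cubic term is gain saturation and the coupling term encodes the programmable loss. The toy integration below is my own sketch with invented pump and step-size values, not the talk's actual model; it shows two ferromagnetically coupled OPOs bifurcating into the same phase state, the minimum-loss configuration.

```python
import numpy as np

def cim_mean_field(J, pump, steps=2000, dt=0.01, seed=0):
    """Euler-integrate saturable-gain amplitude dynamics and read out the
    Ising spins as the signs of the steady-state amplitudes."""
    rng = np.random.default_rng(seed)
    x = 0.01 * rng.standard_normal(len(J))   # small noise seed, like vacuum
    for _ in range(steps):
        x = x + dt * ((pump - 1.0) * x - x**3 + J @ x)
    return np.sign(x)

J = np.array([[0.0, 1.0],
              [1.0, 0.0]])                   # ferromagnetic pair
spins = cim_mean_field(J, pump=1.2)
# the aligned mode sees the most net gain (lowest loss),
# so both OPOs settle into the same phase state
```

Above threshold the aligned mode grows fastest and saturates first, which is the dynamical picture behind "the network oscillates in the lowest-loss state".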
There are some challenges associated with this intensity-driven phase transition, which I'm going to briefly talk about, and I'm also going to tell you about other types of nonlinear dynamics that we're looking at on the nonlinear side of these networks. So if you just think about the linear network, we're actually interested in looking at some topological behaviors in these networks. The difference between looking at topological behaviors and at the Ising machine is that now, first of all, we're looking at types of Hamiltonians that are a little different from the Ising Hamiltonian. One of the biggest differences is that most of these topological Hamiltonians require breaking time-reversal symmetry, meaning that if you go from one spin on one side to the other side you get one phase, and if you go back you get a different phase. The other thing is that we're not just interested in finding the ground state; we're actually now interested in looking at all sorts of states, and at the dynamics and behaviors of all these states in the network. So we started with the simplest implementation, of course, which is a one-dimensional chain of these resonators, which corresponds to the so-called SSH model in the topological world. We get the similar energy-to-loss mapping, and now we can actually look at the band structure. This is an actual measurement that we get with this SSH model, and you see how well it actually follows the prediction and the theory. One of the interesting things about the time-multiplexing implementation is that now you have the flexibility of changing the network as you are running the machine, and that's something unique about this time-multiplexed implementation, so that we can actually look at the dynamics. And one example that we have looked at is that we can actually go through the transition from the topological to the trivial behavior of the network.
You can then look at the edge states, and you can see both the trivial states and the topological edge states actually showing up in this network. We have just recently implemented a two-dimensional network with the Harper-Hofstadter model; we don't have the results here yet. One of the other important characteristics of time multiplexing is that you can go to higher and higher dimensions while keeping that flexibility and those dynamics, and we can also think about adding nonlinearity, both in the classical and quantum regimes, which is going to give us a lot of exotic non-classical and quantum nonlinear behaviors in these networks. So I told you mostly about the linear side; let me just switch gears and talk about the nonlinear side of the network. The biggest thing that I talked about so far in the Ising machine is this phase transition at threshold. Below threshold we have squeezed states in these OPOs; if you increase the pump, we go through this intensity-driven phase transition, and then we get the phase states above threshold. This is basically the mechanism of the computation in these OPOs: this phase transition from below to above threshold. One of the characteristics of this phase transition is that below threshold you expect to see quantum states, and above threshold you expect to see more classical states, or coherent states, and that basically corresponds to the intensity of the driving pump. So it's really hard to imagine that you can have this phase transition happen all in the quantum regime. There are also some challenges associated with the intensity homogeneity of the network: for example, if one OPO starts oscillating and its intensity goes really high, then it's going to ruin the collective decision-making of the network, because of the intensity-driven nature of the phase transition.
So the question is, can we look at other phase transitions? Can we utilize them for computing, and can we bring them to the quantum regime? I'm going to specifically talk about the phase transition in the spectral domain, which is the transition from the so-called degenerate regime, which is what I mostly talked about, to the non-degenerate regime, which happens by just tuning the phase of the cavity. What is interesting is that this phase transition corresponds to a distinct phase-noise behavior. In the degenerate regime, which we call the ordered state, the phase is locked to the phase of the pump, as I talked about. In the non-degenerate regime, however, the phase is mostly dominated by the quantum diffusion of the phase, which is limited by the so-called Schawlow-Townes limit, and you can see that transition from the degenerate to the non-degenerate regime, which also has distinct symmetry differences. This transition corresponds to a symmetry breaking: in the non-degenerate case, the signal can acquire any of the phases on the circle, so it has a U(1) symmetry, and if you go to the degenerate case, then that symmetry is broken and you only have the zero and pi phase states. So now the question is, can we utilize this phase transition, which is a phase-driven phase transition, and can we use it for a similar computational scheme? That's one of the questions that we are also thinking about. And this phase transition is not just important for computing; it's also interesting from the sensing perspective, and you can easily bring it below threshold and operate in the quantum regime, either Gaussian or non-Gaussian. If you make a network of OPOs, now we can see all sorts of more complicated and more interesting phase transitions in the spectral domain.
One of them is a first-order phase transition, which you get by just coupling two OPOs, and that's a very abrupt phase transition compared to the single-OPO phase transition. And if you do the couplings right, you can actually get a lot of non-Hermitian dynamics and exceptional points, which are very interesting to explore both in the classical and quantum regimes. I should also mention that you can think about the couplings being nonlinear couplings as well, and that's another behavior that you can see, especially in the non-degenerate regime. So with that, I have basically told you about these OPO networks: how we can think about the linear scheme and the linear behaviors, and how we can think about the rich nonlinear dynamics and nonlinear behaviors, both in the classical and quantum regimes. I want to switch gears and tell you a little bit about the miniaturization of these OPO networks. And of course the motivation is: if you look at electronics and what we had 60 or 70 years ago with vacuum tubes, and how we transitioned from relatively small-scale computers on the order of thousands of nonlinear elements to the billions of nonlinear elements where we are now, then where we are with optics is probably very similar to 70 years ago, which is a tabletop implementation. And the question is, how can we utilize nanophotonics? I'm going to just briefly show you the two directions we're working on. One is based on lithium niobate, and the other is based on even smaller resonators. The work on nanophotonic lithium niobate was started in collaboration with Marko Loncar at Harvard, and also Marty Fejer at Stanford, and we could show that you can do periodic poling in thin-film lithium niobate and get all sorts of very highly nonlinear processes happening in this nanophotonic periodically poled lithium niobate. And now we're working on building OPOs based on that kind of thin-film lithium niobate photonics.
And these are some examples of the devices that we have been building in the past few months, which I'm not going to tell you more about, but the OPOs and the OPO networks are in the works. That's not the only way of making large networks, but I also want to point out that the reason these nanophotonic platforms are actually exciting is not just because you can make large networks and make them compact in a small footprint; they also provide some opportunities in terms of the operation regime. One of them is about making cat states in an OPO: can we have the quantum superposition of the zero and pi states that I talked about? The nanophotonic lithium niobate provides some opportunities to actually get closer to that regime, because of the spatio-temporal confinement that you can get in these waveguides. So we're doing some theory on that, and we're confident that the nonlinearity-to-loss ratios that you can get with these platforms are actually much higher than what you can get with the existing platforms. And to go even smaller, we have been asking the question of what is the smallest possible OPO that you can make. You can think about really wavelength-scale resonators, adding the chi-two nonlinearity, and seeing how and when you can get the OPO to operate. And recently, in collaboration with USC and CREOL, we have demonstrated that you can use nanolasers and get some spin-Hamiltonian implementations on those networks. So if you can build the OPOs, we know that there is a path for implementing OPO networks on such a nanoscale. We have looked at these calculations and tried to estimate the threshold of such OPOs, say for a wavelength-scale resonator, and it turns out that it can actually be even lower than the type of bulk PPLN OPOs that we have been building in the past 50 years or so.
So we're working on the experiments, and we're hoping that we can actually make larger and larger-scale OPO networks. So let me summarize the talk: I told you about the OPO networks and our work that has been going on with Ising machines and the measurement feedback; I told you about the ongoing work on the all-optical implementations, both on the linear side and on the nonlinear behaviors; and I also told you a little bit about the efforts on miniaturization and going to the nanoscale. So with that, I would like to thank you. >>I am from the University of Tokyo. Before I start, I would like to thank Yoshi and all the staff of NTT for the invitation and the organization of this online meeting, and I would also like to say that it has been very exciting to see the growth of this new PHI lab. I'm happy to share with you today some of the recent works that have been done either by me or by colleagues in our group. As indicated by the title of my talk, it is about a neuromorphic in silico simulator for the coherent Ising machine. And here is the outline: I would like to make the case that simulation in digital electronics of the CIM can be useful for better understanding or improving its function principles, by introducing some ideas from neural networks. This is what I will discuss in the first part; then I will show some proof of concept of the gain in performance that can be obtained using this simulation in the second part, and a projection of the performance that can be achieved using a very large-scale simulator in the third part, and finally I will talk about future plans. So first, let me start by comparing recently proposed Ising machines using this table, which is adapted from a recent Nature Electronics paper, and this comparison shows that there is always a trade-off between energy efficiency, speed and scalability that depends on the physical implementation.
So in red here are the limitations of each of these hardware platforms. Interestingly, the FPGA-based systems, such as the Fujitsu Digital Annealer, the Toshiba bifurcation machine, or a recently proposed restricted Boltzmann machine FPGA by a group in Berkeley, offer a good compromise between speed and scalability. And this is why, despite the unique advantages that some of the other hardware have, such as the coherent superposition in optical CIMs or the energy efficiency of memristors, FPGAs are still an attractive platform for building large Ising machines in the near future. The reason for the good performance of FPGAs is not so much that they operate at high frequency, nor that they are particularly energy efficient, but rather that the physical wiring of their elements can be reconfigured in a way that limits the von Neumann bottleneck, large fan-in and fan-outs, and the long-distance propagation of information within the system. In this respect, FPGAs are interesting from the perspective of the physics of complex systems, rather than the physics of electrons and photons. So to put the performance of these various hardware platforms in perspective, we can look at the computation done by the brain: the brain computes using billions of neurons, using only 20 watts of power, and it operates at a relatively slow frequency. This impressive characteristic motivates us to investigate what kind of neuro-inspired principles could be useful for designing better Ising machines. The idea of this research project, and the future collaboration, is to try to alleviate the limitations that are intrinsic to the realization of an optical coherent Ising machine, shown in the top panel here,
by designing a large-scale simulator in silico, in the bottom here, that can be used for investigating better organization principles for the CIM. In this talk, I will talk about three neuro-inspired principles: the asymmetry of connections, neural dynamics that are often chaotic because of this asymmetry, and hierarchical connectivity. Neural networks are not composed of repetitions of always the same types of neurons; there is a local structure that is repeated, and here is a schematic of the micro-column in the cortex. And lastly, there is the hierarchical organization of connectivity: connectivity is organized in a tree structure in the brain, and here you see a representation of the hierarchical organization of the monkey cerebral cortex. So how can these principles be used to improve the performance of Ising machines, and their in silico simulation? So, first about the two principles of asymmetry and microscopic structure. We know that the classical approximation of the coherent Ising machine, which is analogous to rate-based neural networks, can be obtained using the truncated Wigner approximation. The dynamics of the system can then be described by the following ordinary differential equations, in which, in the case of the CIM, the x_i represent the in-phase component of one DOPO, the function f represents the nonlinear optical part, the degenerate optical parametric amplification, and the sum of J_ij x_j terms represents the coupling, which is done, in the case of the measurement-feedback CIM, using homodyne detection and an FPGA, and then injection of the computed coupling term. And these dynamics, in both the case of the CIM and of neural networks, can be written as gradient descent on a potential function V, written here, and this potential function includes the Ising Hamiltonian.
So this is why it's natural to use this type of dynamics to solve the Ising problem, in which the omega_ij are the Ising couplings and the h term is the external field of the Ising Hamiltonian. Note that this potential function can only be defined if the omega_ij are symmetric. The well-known problem of this approach is that the potential function V that we obtain is very non-convex at low temperature, and one strategy is to gradually deform this landscape using an annealing process. But there is, unfortunately, no theorem that guarantees convergence to the global minimum of the Ising Hamiltonian using this approach. And so this is why we propose to introduce a microscopic structure in the system, where one analog spin, or one DOPO, is replaced by a pair of one analog spin and one error-correction variable. The addition of this microscopic structure introduces an asymmetry in the system, which in turn induces chaotic dynamics: a chaotic search, rather than a gradient descent, for searching for the ground state of the Ising Hamiltonian. Within this microscopic structure, the role of the error variables is to control the amplitude of the analog spins, to force the amplitude of the spins to become equal to a certain target amplitude a. And this is done by modulating the strength of the Ising couplings: the error variable e_i multiplies the Ising coupling term in the dynamics of each DOPO. The whole dynamics is then described by these coupled equations, and because the e_i do not necessarily take the same value for the different i, this introduces an asymmetry in the system, which in turn creates chaotic dynamics, which I show here for solving a certain size of SK problem, in which the x_i are shown here, the e_i are shown here, and the value of the Ising energy is shown in the bottom plot.
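The spin-plus-error-variable pairing can be sketched as the coupled ODEs dx_i/dt = (p - 1) x_i - x_i^3 + e_i * sum_j J_ij x_j and de_i/dt = -beta * e_i * (x_i^2 - a), so each e_i rescales the coupling felt by its spin until every amplitude reaches the target a. This is a simplified illustration with invented parameter values, not the exact FPGA model from the talk; on an easy ferromagnetic instance it settles to homogeneous amplitudes rather than a chaotic search.

```python
import numpy as np

def cim_error_correction(J, pump=0.9, target=1.0, beta=0.5,
                         steps=20000, dt=0.005, seed=1):
    """Analog spins x_i paired with error variables e_i that modulate the
    Ising coupling strength until each x_i**2 reaches the target amplitude."""
    rng = np.random.default_rng(seed)
    x = 0.01 * rng.standard_normal(len(J))   # analog spin amplitudes
    e = np.ones(len(J))                      # error-correction variables
    for _ in range(steps):
        dx = (pump - 1.0) * x - x**3 + e * (J @ x)
        de = -beta * e * (x**2 - target)
        x, e = x + dt * dx, e + dt * de
    return np.sign(x), x

J = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])   # ferromagnetic triangle, ground state all-aligned
spins, x = cim_error_correction(J)
```

On frustrated instances the e_i keep moving, and it is exactly that ongoing modulation that destabilizes local minima and produces the chaotic search shown in the speaker's SK plots.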
You see this chaotic search that visits various local minima of the Ising Hamiltonian and eventually finds the global minimum. It can be shown that this modulation of the target amplitude can be used to destabilize all the local minima of the Ising Hamiltonian, so that the dynamics do not get stuck in any of them. Moreover, the other types of attractors that can eventually appear, such as limit-cycle attractors or chaotic attractors, can also be destabilized using the modulation of the target amplitude. And so we have proposed in the past two different modulations of the target amplitude. The first one is a modulation that ensures the entropy production rate of the system becomes positive, and this forbids the creation of any nontrivial attractors. But in this work, I will talk about another, simplified modulation, which is given here; it works as well as the first modulation, but is easier to implement on an FPGA. So these coupled equations, which represent the simulation of the coherent Ising machine with some error correction, can be implemented especially efficiently on an FPGA. And here I show the time that it takes to simulate the system: in red, you see the time that it takes to simulate the x_i term, the e_i term, the dot product and the Ising Hamiltonian, for a system with 500 spins and error variables, equivalent to 500 DOPOs. So on the FPGA, the nonlinear dynamics, which corresponds to the degenerate optical parametric amplification, the OPA of the CIM, can be computed in only 13 clock cycles at 300 MHz, so about 0.1 microseconds. And this is to be compared to what can be achieved in the measurement-feedback CIM, in which, if we want to get 500 time-multiplexed DOPOs with a one-gigahertz repetition rate through the optical cavity, we would require 0.5 microseconds to do this. So the simulation on the FPGA can be at least as fast as a one-gigahertz repetition-rate pulsed-laser CIM.
Then the dot product that appears in this differential equation can be computed in 43 clock cycles. So for problem sizes larger than 500 spins, the dot product clearly becomes the bottleneck, and this can be seen by looking at the scaling of the number of clock cycles it takes to compute either the nonlinear optical part or the dot product, with respect to the problem size. If we had an infinite amount of resources on the FPGA to simulate the dynamics, then the nonlinear optical part could be done in O(1), and the matrix-vector product could be done in O(log N), because computing the dot product involves summing all the terms in the product, which is done on the FPGA by an adder tree, whose height scales logarithmically with the size of the system. But this is the case only if we had an infinite amount of resources on the FPGA. For larger problems, of more than 100 spins or so, we usually need to decompose the matrix into smaller blocks, with a block size that we denote n_u here, and then the scaling becomes, for the nonlinear part, linear in N over n_u, and for the dot product, quadratic in N over n_u. Typically, for a low-end FPGA, the block size of this matrix is about 100. So clearly we want to make n_u as large as possible, in order to maintain this scaling in log N for the number of clock cycles needed to compute the dot product, rather than the quadratic scaling that occurs if we decompose the matrix into smaller blocks. But the difficulty in having these larger blocks is that having a very large adder tree introduces large fan-in and fan-outs and long-distance data paths within the FPGA.
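The clock-cycle bookkeeping behind the adder-tree argument can be made concrete with a toy cost model. The counts below are idealized (one adder level per cycle, no pipelining, no routing delay), so they reproduce only the scaling argument, not real FPGA timing.

```python
import math

def reduction_cycles(n_terms, block_size=None):
    """Idealized cycles to sum n_terms products.
    Full adder tree: depth ceil(log2(n_terms)).  With blocks of size n_u,
    each block uses its own small tree, and the n_terms / n_u partial sums
    are then accumulated sequentially."""
    if block_size is None or block_size >= n_terms:
        return math.ceil(math.log2(n_terms))
    n_blocks = math.ceil(n_terms / block_size)
    return math.ceil(math.log2(block_size)) + n_blocks

full = reduction_cycles(1024)          # log2(1024) = 10 cycles
blocked = reduction_cycles(1024, 128)  # log2(128) + 8 blocks = 15 cycles
```

As N grows with a fixed block size, the sequential n_blocks term dominates, which is the quadratic-in-N/n_u penalty the speaker describes, and why a taller tree (larger n_u) is worth the fan-in cost.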
So the solution to get higher performance for a simulator of the coherent Ising machine is to get rid of this bottleneck for the dot product by increasing the size of this adder tree. And this can be done by organizing hierarchically the electrical components within the FPGA, which is shown here in this right panel, in order to minimize the fan-in and fan-outs of the system and to minimize the long-distance data paths in the FPGA. I'm not going into the details of how this is implemented on the FPGA; I just want to give you an idea of why the hierarchical organization of the system becomes extremely important to get good performance for this Ising machine simulator. So instead of getting into the details of the FPGA implementation, I would like to give some benchmark results for this simulator, which was used as a proof of concept for this idea, and which can be found in this arXiv paper. Here I show results for solving SK problems: fully connected, random plus-or-minus-one spin-glass problems. As a metric we use the number of matrix-vector products, since it is the bottleneck of the computation, needed to get the optimal solution of this SK problem with 99% success probability, plotted against the problem size. And in red here is the proposed FPGA implementation; in blue is the number of matrix-vector products that are necessary for the CIM without error correction to solve these SK problems; and in green here is noisy mean-field annealing, which has behavior similar to the coherent Ising machine. And so clearly you see that the number of matrix-vector products necessary to solve this problem scales with a better exponent than these other approaches.
So that's an interesting feature of the system, and next we can look at the real time to solution to solve these SK instances. So here is the time to solution in seconds to find the ground state of SK instances with 99% success probability, for different state-of-the-art hardware. In red is the FPGA implementation proposed in this paper, and the other curves represent breakout local search in orange and simulated annealing in purple, for example. And so you see that the scaling of this proposed simulator is rather good, and that for larger problem sizes we can get orders of magnitude faster than the state-of-the-art approaches. Moreover, the relatively good scaling of the time to solution with respect to problem size indicates that the FPGA implementation would be faster than other recently proposed Ising machines, such as the Hopfield neural network implemented on memristors, shown in blue here, which is very fast for small problem sizes but whose scaling is not good, and the same thing for the restricted Boltzmann machine implemented on an FPGA proposed by a group in Berkeley recently: again, very fast for small problem sizes, but its scaling is bad, so that it is worse than the proposed approach. So we can expect that for problem sizes larger than 1000 spins, the proposed approach would be the faster one. Let me jump to this other slide, and another confirmation that the scheme scales well: we can find maximum-cut values for the benchmark sets, the G-sets, better than those that have been previously found by any other algorithms, so they are the best known cut values to the best of our knowledge.
And this is shown in this table here; in particular, for instances 14 and 15 of the G-set, we can find better cut values than previously known, and we can find these cut values 100 times faster than the state-of-the-art algorithm used to do this. Note that getting these good results on the G-sets does not require any particularly hard tuning of the parameters; the tuning used here is very simple, as it just depends on the degree of connectivity within each graph. And so these good results on the G-sets indicate that the proposed approach would be good not only at solving SK problems, but all types of graph Ising problems, such as MAX-CUT problems. So given that the performance of the design depends on the height of this adder tree, we can try to maximize the height of this adder tree on a large FPGA by carefully routing the components within the FPGA, and we can draw some projections of what type of performance we can achieve in the near future based on the implementation that we are currently working on. So here you see projections for the time to solution, with 99% success probability, for solving these SK problems with respect to the problem size, compared to different state-of-the-art Ising machines, particularly the Digital Annealer, shown by the green line here. And we show two different hypotheses for these projections: either that the time to solution scales as an exponential of N, or that the time to solution scales as an exponential of the square root of N.
So it seems, according to the data, that the time to solution scales more like an exponential of the square root of N, and these projections show that we can probably solve SK problems of size 2000 spins, finding the real ground state with 99 percent success probability, in about 10 seconds, which is much faster than all the other proposed approaches. So, some of the future plans for this coherent Ising machine simulator. The first is that we would like to make the simulation closer to the real DOPO optical system, in particular, as a first step, to get closer to the measurement-feedback CIM. To do this, what is simulatable on the FPGA is the quantum Gaussian model that is described in this paper and proposed by people in the NTT group. The idea of this model is that, instead of the very simple ODEs I have shown previously, it includes pairs of ODEs that take into account not only the mean of the in-phase component but also its variance, so that we can take into account more quantum effects of the DOPO, such as squeezing. And then we plan to make the simulator open access, for the members to run their instances on the system. There will be a first version in September that will just be based on simple command-line access to the simulator, and which will have just a classical approximation of the system, with binary weights. But then we will propose a second version that will extend the current Ising machine to a rack of FPGAs, in which we will add the more refined models, such as the quantum Gaussian model I just talked about, and in which we will support real-valued weights for the Ising problems as well as measurement feedback. So we will announce later when this is available.
So we will announce later when this is available and and far right is working >>hard comes from Universal down today in physics department, and I'd like to thank the organizers for their kind invitation to participate in this very interesting and promising workshop. Also like to say that I look forward to collaborations with with a file lab and Yoshi and collaborators on the topics of this world. So today I'll briefly talk about our attempt to understand the fundamental limits off another continues time computing, at least from the point off you off bullion satisfy ability, problem solving, using ordinary differential equations. But I think the issues that we raise, um, during this occasion actually apply to other other approaches on a log approaches as well and into other problems as well. I think everyone here knows what Dorien satisfy ability. Problems are, um, you have boolean variables. You have em clauses. Each of disjunction of collaterals literally is a variable, or it's, uh, negation. And the goal is to find an assignment to the variable, such that order clauses are true. This is a decision type problem from the MP class, which means you can checking polynomial time for satisfy ability off any assignment. And the three set is empty, complete with K three a larger, which means an efficient trees. That's over, uh, implies an efficient source for all the problems in the empty class, because all the problems in the empty class can be reduced in Polian on real time to reset. As a matter of fact, you can reduce the NP complete problems into each other. You can go from three set to set backing or two maximum dependent set, which is a set packing in graph theoretic notions or terms toe the icing graphs. A problem decision version. This is useful, and you're comparing different approaches, working on different kinds of problems when not all the closest can be satisfied. You're looking at the accusation version offset, uh called Max Set. 
And the goal there is to find the assignment that satisfies the maximum number of clauses; this is from the NP-hard class. In terms of applications: if we had an efficient SAT solver, or NP-complete problem solver, it would literally, positively influence thousands of problems and applications in industry and in science. I'm not going to read this, but it of course gives a strong motivation to work on this kind of problem. Now, our approach to SAT solving involves embedding the problem in a continuous space, and we use ODEs to do that. So instead of working with zeros and ones, we work with minus one and plus one, and we allow the corresponding variables to change continuously between the two bounds. We formulate the problem with the help of a clause matrix: if a clause does not contain a variable or its negation, the corresponding matrix element is zero; if it contains the variable in positive form, it's plus one; if it contains the variable in negated form, it's minus one. And then we use this to formulate these products, called clause violation functions, one for every clause, which vary continuously between zero and one, and which are zero if and only if the clause itself is true. Then, to define the dynamics, we define a dynamics in this N-dimensional hypercube where the search happens, and if solutions exist, they're sitting at some of the corners of this hypercube. So we define this energy potential, or landscape function, shown here, in such a way that it is zero if and only if all the clause violation functions K_m are zero, that is, all the clauses are satisfied, keeping these auxiliary variables a_m always positive. And therefore what you do here is a dynamics that is essentially a gradient descent on this potential energy landscape. If you were to keep all the a_m constant, it would get stuck in some local minimum.
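In the published formulation of this solver, the clause violation function is usually written K_m(s) = 2^(-k_m) * prod_i (1 - c_mi * s_i), with c_mi the clause-matrix entries in {-1, 0, +1} and k_m the number of literals in clause m, and the landscape is V(s) = sum_m a_m * K_m(s)^2. A sketch of those two definitions (the tiny two-clause formula below is an invented example):

```python
import numpy as np

def clause_violation(C, s):
    """K_m(s) = 2^{-k_m} * prod_i (1 - c_{mi} s_i), one value per clause.
    C: (M, N) clause matrix with entries in {-1, 0, +1}; s: spins in [-1, 1].
    K_m is 0 exactly when clause m is satisfied at the corner s."""
    k = np.count_nonzero(C, axis=1)                  # literals per clause
    prods = np.prod(np.where(C != 0, 1.0 - C * s, 1.0), axis=1)
    return prods / 2.0 ** k

def energy(C, s, a):
    """V(s) = sum_m a_m * K_m(s)^2: zero iff every clause is satisfied."""
    K = clause_violation(C, s)
    return float(np.dot(a, K ** 2))

# (x1 or not x2) and (x2 or x3): rows are clauses, columns are variables.
C = np.array([[1, -1, 0], [0, 1, 1]])
s = np.array([1.0, 1.0, -1.0])                       # x1=T, x2=T, x3=F
print(energy(C, s, np.ones(2)))                      # satisfying corner -> 0.0
```

The factor 2^(-k_m) just normalizes K_m into [0, 1]; the essential property is that V vanishes exactly on the satisfying corners of the hypercube.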
However, what we do here is couple it with a dynamics for the a_m: we couple them to the clause violation functions, as shown here. If you didn't have this a_m here, just the K_m, for example, you would have essentially positive feedback, an increasing variable, but in that case you would still get stuck; it behaves better than the constant version, but it still gets stuck. Only when you put in this a_m, which makes the dynamics in this variable exponential-like, only then does it keep searching until it finds a solution. And there is a reason for that, which I'm not going to talk about here, but it essentially boils down to performing a gradient descent on a globally time-varying landscape. And this is what works. Now I'm going to talk about the good, the bad, and maybe the ugly. What's good is that it's a hyperbolic dynamical system, which means that if you take any domain in the search space that doesn't have a solution in it, then the number of trajectories in it decays exponentially quickly, and the decay rate is a characteristic invariant of the dynamics itself, called, in dynamical systems, the escape rate. The inverse of that is the time scale on which you find solutions with this dynamical system. And you can see here some sample trajectories that are chaotic, because it's nonlinear, but transiently chaotic, because eventually they converge to the solution. Now, in terms of performance: here is what we show for a bunch of constraint densities, defined by M over N, the ratio between clauses and variables, for random 3-SAT problems, as a function of N. We monitor the wall-clock time, and it behaves quite well, polynomially, until you actually reach the SAT-UNSAT transition, where the hardest problems are found.
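Putting the two ingredients together, gradient descent on V in the spin variables plus exponential growth of the auxiliary variables, gives a small integrator one can actually run. This is a plain-Euler sketch of the published continuous-time dynamics; the step size, time horizon, clipping to the hypercube, and the three-variable instance are all arbitrary choices made here for illustration:

```python
import numpy as np

def ctds_solve(C, t_max=200.0, dt=0.05, seed=1):
    """Euler sketch of the continuous-time SAT dynamics:
        K_m(s)  = 2^{-k_m} * prod_i (1 - c_{mi} s_i)
        ds_i/dt = sum_m 2 a_m c_{mi} K_{m,i}(s) K_m(s)   # descent on V
        da_m/dt = a_m K_m(s)                             # exponential growth
    where K_{m,i} is K_m with the factor for variable i left out.
    Returns a satisfying boolean assignment, or None if none was found."""
    M, N = C.shape
    rng = np.random.default_rng(seed)
    s = rng.uniform(-0.5, 0.5, N)            # start inside the hypercube
    a = np.ones(M)
    pow2k = 2.0 ** np.count_nonzero(C, axis=1)
    for _ in range(int(t_max / dt)):
        factors = np.where(C != 0, 1.0 - C * s, 1.0)     # (M, N)
        K = np.prod(factors, axis=1) / pow2k             # (M,)
        # Read out the nearest corner; stop if it satisfies every clause.
        corner = np.where(C != 0, 1.0 - C * np.sign(s), 1.0)
        if all(np.prod(corner, axis=1) == 0):
            return s > 0
        ds = np.empty(N)
        for i in range(N):
            K_mi = np.prod(np.delete(factors, i, axis=1), axis=1) / pow2k
            ds[i] = np.sum(2.0 * a * C[:, i] * K_mi * K)
        s = np.clip(s + dt * ds, -1.0, 1.0)  # clipping is a choice made here
        a = a + dt * a * K
    return None

# Tiny satisfiable instance: (x1 or x2), (not x1 or x3), (not x2 or not x3).
C = np.array([[1, 1, 0], [-1, 0, 1], [0, -1, -1]])
sol = ctds_solve(C)
```

A production version would use an adaptive or implicit integrator rather than fixed-step Euler, which is exactly the stiffness issue discussed later in this talk.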
But what's more interesting is if you monitor the continuous time t, the performance in terms of the analog continuous time t, because that seems to be polynomial. The way we show that is we consider random 3-SAT for a fixed constraint density, right at the threshold, where it's really hard, and we monitor the fraction of problems that we have not been able to solve: we select thousands of problems at that constraint ratio, we solve them with our algorithm, and we monitor the fraction of problems that have not yet been solved by continuous time t. And this, as you see, decays exponentially, with different decay rates for different system sizes, and this plot shows that the decay rate behaves polynomially, actually as a power law. So if you combine these two, you find that the time needed to solve all problems, except maybe a vanishing fraction of them, scales polynomially, almost linearly, with the problem size. So you have polynomial continuous-time complexity. And this is also true for other types of very hard constraint satisfaction problems, such as exact cover, because you can always transform them into 3-SAT, as we discussed before, or Ramsey coloring, and on these problems even algorithms like survey propagation will fail. But this doesn't mean that P equals NP, because of the following: if you were to implement these equations in a device whose behavior is described by these ODEs, then of course t, the continuous-time variable, becomes a physical wall-clock time, and that would have polynomial scaling; but you have these other variables, the auxiliary variables, which grow in an exponential manner. So if they represent currents or voltages in your realization, there would be an exponential cost. So this is some kind of trade-off between time and energy, and I don't know how to generate time.
But I know how to generate energy, so I could use energy for it. But there are other issues as well, especially if you're trying to do this on a digital machine, and some problems appear in physical devices too, as we'll discuss later. If you implement this on a GPU, you can get an order of magnitude or two of speedup. And you can also modify this to solve MAX-SAT problems quite efficiently; you are competitive with the best heuristic solvers, the winners of the 2016 MAX-SAT competition. So this definitely seems like a good approach, but there are of course interesting limitations. I would say interesting, because they make you think about what this means, and about how you can exploit these observations to better understand analog continuous-time complexity. If you monitor the number of discrete steps done by the Runge-Kutta integrator when you solve this on a digital machine (you're using some kind of integrator), using the same approach, but now measuring the number of problems you haven't solved within a given number of discrete steps taken by the integrator, you find that you have exponential discrete-time complexity, and of course this is a problem. And if you look closely at what happens: even though the integrator follows the analog mathematical trajectory, the red curve here, very closely (this is a third- or fourth-order-precision integrator), if you monitor what happens in discrete time, the step size fluctuates like crazy, so the integrator essentially freezes out. And this is because of the phenomenon of stiffness, which I'll talk a little bit more about later. It might look like an integration issue on digital machines, something you could improve, and you could definitely improve it. But actually, the issue is bigger than that.
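Stiffness and the freezing-out of explicit integrators can be seen on the simplest possible example, y' = -λy: forward Euler is stable only for Δt ≤ 2/λ, while backward Euler is unconditionally stable. A minimal illustration (λ and the step sizes are arbitrary values chosen here):

```python
def forward_euler(lam, dt, steps, y0=1.0):
    """Explicit Euler on y' = -lam * y; each step multiplies y by
    (1 - lam*dt), so it diverges when dt > 2 / lam."""
    y = y0
    for _ in range(steps):
        y += dt * (-lam * y)
    return y

def backward_euler(lam, dt, steps, y0=1.0):
    """Implicit Euler on the same problem: solve y_{n+1} = y_n - dt*lam*y_{n+1},
    giving a factor 1 / (1 + lam*dt), stable for any dt > 0."""
    y = y0
    for _ in range(steps):
        y = y / (1.0 + lam * dt)
    return y

lam = 1000.0                                  # a "stiff" fast eigenvalue
print(abs(forward_euler(lam, 0.003, 100)))    # dt > 2/lam: blows up
print(abs(backward_euler(lam, 0.003, 100)))   # same dt: decays toward 0
```

The stiff SAT dynamics behave like a whole spectrum of such λ's at once, which is why the explicit integrator is forced into tiny steps while the true analog trajectory is perfectly smooth.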
It's deeper than that, because on a digital machine there is no time-to-energy conversion: the auxiliary variables are efficiently represented on a digital machine, so there's no exponentially fluctuating current or voltage in your computer when you do this. So if P is not equal to NP, then the exponential time complexity, or exponential cost complexity, has to hit you somewhere, and this is how. One would be tempted to think that maybe this wouldn't be an issue in an analog device, and to some extent that's true; analog devices can be orders of magnitude faster, but they also suffer from their own problems, because they're not going to be perfect, and that affects those solvers as well. Indeed, if you look at other systems, like the measurement-feedback coherent Ising machine, polariton graphs, or oscillator networks, they all hinge on some kind of ability to control your variables with arbitrarily high precision. In oscillator networks you want to read out phases with high precision; in the case of CIMs you require identical pulses, which are hard to keep identical, and they fluctuate away from one another, shift away from one another, and if you could control that, of course, you could control the performance. So one can actually ask whether or not this is a universal bottleneck, and it seems so, as I will argue next. We can recall a fundamental result by Schönhage from 1978, a purely computer-science proof that if you are able to compute the addition, multiplication, and division of real variables with infinite precision, then you can solve NP-complete problems in polynomial time. He doesn't actually propose a solver; he just shows mathematically that this would be the case. Now, of course, in the real world you have limited precision. So the next question is: how does that affect the computation of these problems? This is what we're after. Loss of precision means information loss, or entropy production.
So what you're really looking at is the relationship between the hardness of a problem and the cost of computing it. According to Schönhage, there is this left branch, which in principle could be polynomial time, but the question is whether or not that is achievable. It is not achievable; what happens is the right-hand side: there's always going to be some information loss, some entropy generation, that can keep you away from polynomial time. So this is what we would like to understand, and this information loss, I will argue, is not just noise: it arises in any physical system, but it is also of algorithmic nature, a property of the algorithm or approach itself. But Schönhage's result is purely theoretical; no actual solver is proposed. So we can ask, just theoretically, out of curiosity: could there in principle be such a solver, since he is not proposing one with such properties? If you look mathematically and precisely at what a solver would have to do to have the right properties, I argue: yes. I don't have a mathematical proof, but I have some arguments that this would be the case, and this is the case for our CTDS solver: if you could calculate its trajectory losslessly, then it would solve NP-complete problems in polynomial continuous time. Now, as a matter of fact, this is a bit more difficult a question, because time in ODEs can be rescaled however you want. What Bournez and others have pointed out is that you actually have to measure the length of the trajectory, which is an invariant of the dynamical system, a property of the dynamics and not of its parameterization. And we did that: my student did that, first improving on the stiffness of the integration, using implicit solvers and some smart tricks so that you actually stay closer to the actual trajectory, and then using the same approach, monitoring what fraction of problems you can solve.
Within a given length of the trajectory, you find that it scales polynomially, nearly linearly, with the problem size: we have polynomial length complexity. That means that our solver is both polynomial-length and, as it is defined, also polynomial-time as an analog solver. But if you look at it as a discrete algorithm, if you measure the discrete steps on a digital machine, it is an exponential solver. And the reason is, because of this stiffness, every integrator has to truncate; digitizing truncates the equations, and what the integrator has to do is keep the integration within the so-called stability region for that scheme: you have to keep the product of the eigenvalues of the Jacobian and the step size Δt within this region. If you use explicit methods, you want to stay within this region, but what happens is that some of the eigenvalues grow fast for stiff problems, and then you're forced to reduce Δt so that the product stays in this bounded domain, which means you're forced to take smaller and smaller time steps, so you're freezing out the integration, and what I showed you is that that's the case. Now, you can move to implicit solvers, which is a trick; in this case the domain to avoid is actually on the outside. But what happens here is that some of the eigenvalues of the Jacobian, for stiff systems, start to move toward zero, and as they move toward zero they enter this instability region, so your solver is going to try to keep them out, so it's going to increase Δt. But if you increase Δt, you increase the truncation errors, so you get randomized in the large search space, so it's really not going to work out either. Now, one can sort of introduce a theory, or a language, to discuss analog computational complexity using the language of dynamical systems theory. Basically, I don't have time to go into this, but for hard problems you have this object, the chaotic saddle,
in the middle of the search space somewhere, and that dictates how the dynamics happens; the invariant properties of that saddle are what dictate the performance, among many things. So a new, important measure that we find helpful in describing this analog complexity is the so-called Kolmogorov entropy, or metric entropy, and basically what this does, in an intuitive way, is describe the rate at which the uncertainty contained in the insignificant digits of a trajectory flows towards the significant ones, as you lose information because errors grow at an exponential rate, because you have positive Lyapunov exponents. But this is an invariant property: it's a property of the set of all trajectories, not of how you compute them, and it's really the interesting rate of accuracy loss of a dynamical system. As I said, in such a high-dimensional system you have both positive and negative Lyapunov exponents, as many in total as the dimension of the space, with u the number of unstable-manifold directions and s the number of stable-manifold directions. And there's an interesting and, I think, important equality, called the Pesin equality, that connects the information-theoretic aspect, the rate of information loss, with the geometric rate at which trajectories separate: the metric entropy equals the sum of the positive Lyapunov exponents minus kappa, the escape rate that I already talked about. Now one can actually prove simple theorems, back-of-the-envelope calculations. The idea here is that you know the rate at which closely started trajectories separate from one another, so you can say that everything is fine as long as my trajectory finds the solution before the trajectories separate too quickly.
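The positive Lyapunov exponents that drive this loss of significant digits can be estimated numerically for any concrete dynamics. As a self-contained stand-in for the SAT system (which would require the full ODEs), here is the standard estimate on the logistic map at r = 4, whose exponent is known analytically to be ln 2:

```python
import math

def lyapunov_logistic(r=4.0, x0=0.2, n=100000, burn=1000):
    """Largest Lyapunov exponent of the map x -> r x (1 - x), estimated as
    the trajectory average of log |f'(x)| = log |r (1 - 2x)|."""
    x = x0
    for _ in range(burn):                     # discard the transient
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return total / n

print(lyapunov_logistic())                    # should be close to ln 2
```

The same trajectory-averaging recipe, applied to the linearized SAT flow, is what yields the exponents entering the Pesin equality quoted above.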
In that case, I can have the hope that if I start from some region of the phase space with several closely started trajectories, they all go into the same solution, and that's this upper bound, this limit, and it really shows that it has to be an exponentially small number. What it depends on is the N-dependence of the exponent right here, which combines the information-loss rate and the solution-time performance. So if this exponent has a large N-dependence, or even a linear N-dependence, then you really have to start trajectories exponentially close to one another in order to end up in the same solution. So this is sort of the direction this is going in, and this formulation is applicable to all deterministic dynamical systems. And I think we can expand this further, because there is a way of getting the expression for the escape rate in terms of N, the number of variables, from cycle expansions, which I don't have time to talk about; it's kind of a program that one can try to pursue, and this is it. So the conclusions, I think, are self-explanatory. I think there is a lot of future in analog continuous-time computing. It can be more efficient, by orders of magnitude, than digital computing in solving NP-hard problems because, first of all, many of these systems avoid the von Neumann bottleneck, there is parallelism involved, and you also have a much larger spectrum of continuous-time dynamical algorithms than of discrete ones. But we also have to be mindful of what the possibilities and the limits are, and one very important open question is: what are these limits? Is there some kind of no-go theorem that tells you that you can never perform better than this limit or that limit? And I think that's the exciting part: to derive these limits.
Networks of Optical Parametric Oscillators
>>Good morning. Good afternoon. Good evening, everyone. I should thank NTT Research and Yoshi for putting together this program, and also for the opportunity to speak here. My name is Alireza Marandi, or Andy, and I'm from Caltech. Today I'm going to tell you about the work that we have been doing on networks of optical parametric oscillators: how we have been using them for Ising machines, and how we're pushing them toward quantum photonics. I should acknowledge my team at Caltech, which is now eight graduate students and five researchers and postdocs, as well as collaborators from all over the world, including NTT Research, and also the funding from different places, including NTT. So this talk is primarily about networks of resonators, and these networks are everywhere, from nature, for instance the brain, which is a network of oscillators, all the way to optics and photonics. Some of the biggest examples are metamaterials, which are arrays of small resonators, and, more recently, the field of topological photonics, which is trying to implement a lot of the topological behaviors of condensed-matter physics models in photonics. And if you want to extend it even further, some of the implementations of quantum computing are technically networks of quantum oscillators. So we started thinking about these things in the context of Ising machines, which are based on the Ising problem, which is based on the Ising model: the simple summation over the spins, where spins can be either up or down, and the couplings are given by the J_ij. And the Ising problem is: if you know J_ij, what is the spin configuration that gives you the ground state? This problem is shown to be an NP-hard problem, so it's computationally important, because it's a representative of the NP problems, and NP problems are important because, first, they're hard on standard computers if you use a brute-force algorithm, and second, they're everywhere on the application side.
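The brute-force baseline mentioned here is worth seeing explicitly, since it shows where the exponential cost comes from: the Ising energy is cheap to evaluate, but there are 2^N configurations to try. A sketch (the three-spin J matrix is a made-up frustrated triangle, not data from the talk):

```python
from itertools import product

def ising_energy(J, spins):
    """H = -sum_{i<j} J_ij s_i s_j for spins s_i in {-1, +1}."""
    n = len(spins)
    return -sum(J[i][j] * spins[i] * spins[j]
                for i in range(n) for j in range(i + 1, n))

def ground_state(J):
    """Exhaustive search over all 2^n configurations: exponential in n."""
    n = len(J)
    return min(product([-1, 1], repeat=n), key=lambda s: ising_energy(J, s))

# Toy antiferromagnetic triangle: frustrated, so not all bonds can be happy.
J = [[0, -1, -1], [-1, 0, -1], [-1, -1, 0]]
s = ground_state(J)
print(ising_energy(J, s))   # one bond must stay unsatisfied
```

At N = 3 this is trivial; at N = 100 the same loop would need 2^100 evaluations, which is the scaling an Ising machine is trying to beat.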
That's why there is this demand for making a machine that can target these problems, and hopefully provide some meaningful computational benefit compared to standard digital computers. So I've been building these Ising machines based on this building block, which is a degenerate optical parametric oscillator. What it is is a resonator with nonlinearity in it; we pump these resonators, and we generate a signal at half the frequency of the pump: one photon of the pump splits into two identical photons of signal, and they have some very interesting phase- and frequency-locking behaviors. If you look at the phase-locking behavior, you realize that you can actually have two possible phase states as the oscillation result of these OPOs, which are off by pi, and that's one of their important characteristics. So I want to emphasize that a little more, and I have this mechanical analogy, which is basically two simple pendulums. But they are parametric oscillators, because I'm going to modulate a parameter of them in this video, namely the length of the string, and that modulation acts as the pump; I'm going to make them oscillate, and that will make a signal at half the frequency of the pump. And I have two of them, to show you that they can acquire these phase states: they're still phase- and frequency-locked to the pump, but each can end up in either the zero or the pi phase state. The idea is to use this binary phase to represent the binary Ising spin, so each OPO is going to represent a spin, which can be either zero or pi, up or down. And to implement the network of these resonators, we use the time-multiplexing scheme: the idea is that we put pulses in the cavity, and these pulses are separated by the repetition period T_R.
And you can think about these pulses in one resonator as temporally separated synthetic resonators. If you want to couple these resonators to each other, you can introduce delays, each of which is a multiple of T_R. If you look at the shortest delay, it couples resonator one to two, two to three, and so on. If you look at the second delay, which is two times the repetition period, it couples one to three, and so on. If you have N minus one delay lines, then you can have any potential coupling among these N synthetic resonators. And if I can introduce modulators in those delay lines, so that I can control the strength and the phase of these couplings at the right times, then I can have a programmable, all-to-all connected network in this time-multiplexed scheme. And the whole physical size of the system scales linearly with the number of pulses. So the idea of the OPO-based Ising machine is this: having these OPOs, each of which can be either zero or pi, I can arbitrarily connect them to each other, and then I start by programming this machine for a given Ising problem, just by setting the couplings through the controllers in each of those delay lines. So now I have a network which represents an Ising problem, and the Ising problem maps to finding the phase state that satisfies the maximum number of coupling constraints. The way this happens is that the Ising Hamiltonian maps to the linear loss of the network, and if I start adding gain, by putting pump into the network, then the OPOs are expected to oscillate in the lowest-loss state. We have been doing this for the past six or seven years, and I'm just going to quickly show you the transitions, especially what happened in the first implementation, which was using a free-space optical system, then the guided-wave implementation in 2016, and then the measurement-feedback idea, which led to increasing the size and doing actual computation with these machines.
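The bookkeeping behind the delay-line picture is simple enough to sketch: a delay of k round trips couples pulse i to pulse i+k, so N-1 delay lines with programmable modulators can realize an arbitrary symmetric coupling matrix. The encoding below is invented for illustration, not the actual control interface of the machine:

```python
import numpy as np

def coupling_from_delays(n, delay_settings):
    """Assemble an n-pulse coupling matrix from per-delay modulator settings.
    delay_settings[k] is a length-(n-k) sequence: the modulator amplitude
    applied when the k-round-trip delay line couples pulse i to pulse i+k."""
    J = np.zeros((n, n))
    for k, amps in delay_settings.items():   # k = delay in units of T_R
        for i, amp in enumerate(amps):       # couples pulse i <-> pulse i+k
            J[i, i + k] = J[i + k, i] = amp
    return J

n = 4
settings = {1: [1.0, 0.5, 0.0],   # nearest-neighbour couplings
            2: [0.0, -1.0],       # next-nearest couplings
            3: [0.7]}             # the single longest-range coupling
J = coupling_from_delays(n, settings)
print(J)
```

Counting entries makes the scaling claim visible: the k-th delay line carries n-k couplings, and summing over k = 1..n-1 gives all n(n-1)/2 pairs.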
I just want to make this distinction here: the first implementation was all-optical interaction; we also had an N-equals-16 implementation; and then we transitioned to this measurement-feedback idea, which I'll quickly tell you about. There's still a lot of ongoing work, especially on the NTT side, to make larger machines using measurement feedback, but I'm going to focus mostly on the all-optical networks, on how we're using all-optical networks to go beyond simulation of Ising Hamiltonians, both on the linear and the nonlinear side, and also on how we're working on miniaturization of these OPO networks. So the first experiment, which was the four-OPO machine, was a free-space implementation, and this is the actual picture of the machine; we implemented a small N-equals-four max-cut problem on the machine. So, one problem for one experiment: we ran the machine 1000 times, we looked at the states, and we always saw it oscillate in one of the ground states of the Ising Hamiltonian. Then the measurement-feedback idea was to replace those couplings and the controller with a simulator: we basically simulate all those coherent interactions on an FPGA, we prepare the coherent pulse according to all those measurements and inject it back into the cavity, and only the nonlinearity still remains. So it still is a nonlinear dynamical system, but the linear side is all simulated. So there are lots of questions about whether this system preserves the important information or not, or whether it behaves better computation-wise, and that's still a lot of ongoing study. But nevertheless, the reason this implementation was very interesting is that you don't need the N-minus-one delay lines, you can just use one, so you can implement a large machine, and then you can run several thousands of problems on the machine and compare the performance from the computational perspective.
So I'm going to split this idea of the OPO-based Ising machine into two parts. One is the linear part, which is, if you take the nonlinearity out of the resonator and just think about the connections, you can think about this as a simple matrix multiplication scheme, and that's basically what gives you the Ising Hamiltonian. So the optical loss of this network corresponds to the Ising Hamiltonian. And if I just want to show you the example of the N equals four experiment, on all those phase states and the histogram that we saw: you can actually calculate the loss of each of those states, because all those interferences in the beam splitters and the delay lines are going to give you different losses, and then you will see that the ground states correspond to the lowest loss of the actual optical network. If you add the nonlinearity, the simple way of thinking about what the nonlinearity does is that it provides the gain, and then you start bringing up the gain so that it hits the loss. Then you go through the gain saturation, or the threshold, which is going to give you this phase bifurcation. So you go either to the zero or the pi phase state, and the expectation is that the network oscillates in the lowest possible loss state. There are some challenges associated with this intensity-driven phase transition, which I'm going to briefly talk about. I'm also going to tell you about other types of nonlinear dynamics that we're looking at on the nonlinear side of these networks. So if you just think about the linear network, we're actually interested in looking at some topological behaviors in these networks. And the difference between looking at the topological behaviors and the Ising machine is that now, first of all, we're looking at types of Hamiltonians that are a little different than the Ising Hamiltonian.
And one of the biggest differences is that most of these topological Hamiltonians require breaking time-reversal symmetry, meaning that you go from one spin on one side to another side and you get one phase, and if you go back, you get a different phase. And the other thing is that we're not just interested in finding the ground state; we're actually now interested in looking at all sorts of states, and looking at the dynamics and the behaviors of all these states in the network. So we started with the simplest implementation, of course, which is a one-D chain of these resonators, which corresponds to the so-called SSH model. In the topological world, we get a similar energy-to-loss mapping, and now we can actually look at the band structure. This is an actual measurement that we get with this SSH model, and you see how well it actually follows the prediction and the theory. One of the interesting things about the time-multiplexing implementation is that now you have the flexibility of changing the network as we are running the machine, and that's something unique about this time-multiplexed implementation, so that we can actually look at the dynamics. And one example that we have looked at is that we can actually go through the transition from the topological to, I'm sorry, to the trivial behavior of the network. You can then look at the edge states, and you can also see the trivial end states and the topological edge states actually showing up in this network. We have just recently implemented a two-D network with the Harper-Hofstadter model. We don't have the results here, but one of the other important characteristics of time multiplexing is that you can go to higher and higher dimensions while keeping that flexibility and dynamics.
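For the SSH chain mentioned above, the band structure is analytic: E(k) = +/- |v + w exp(ik)|, where v and w are the intra- and inter-cell couplings. A minimal sketch (the hopping values are chosen arbitrarily for illustration; w > v is the topologically nontrivial choice) shows the gap of 2|v - w| that closes at the transition v = w.

```python
import numpy as np

# SSH band structure E(k) = +/- |v + w * exp(i k)|.
# v, w: intra- and inter-cell coupling strengths (illustrative values).
v, w = 0.5, 1.0
k = np.linspace(-np.pi, np.pi, 201)
upper = np.abs(v + w * np.exp(1j * k))   # upper band
lower = -upper                           # lower band (chiral symmetry)

gap = 2 * abs(v - w)                     # band gap, closes when v == w
print(f"band gap = {gap:.3f}, minimum of upper band at k = +/- pi")
```

Sweeping v through w in this expression reproduces the topological-to-trivial transition that the time-multiplexed network can traverse while running.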
And we can also think about adding nonlinearity, both in the classical and quantum regimes, which is going to give us a lot of exotic classical and quantum nonlinear behaviors in these networks. So I told you about the linear side mostly; let me just switch gears and talk about the nonlinear side of the network. And the biggest thing that I talked about so far in the Ising machine is this phase transition at threshold. So below threshold, we have squeezed states in these OPOs; if you increase the pump, we go through this intensity-driven phase transition, and then we get the phase states above threshold. And this is basically the mechanism of the computation in these OPOs, which is through this phase transition from below to above threshold. So one of the characteristics of this phase transition is that below threshold you expect to see quantum states, and above threshold you expect to see more classical states, or coherent states, and that's basically corresponding to the intensity of the driving pump. So it's really hard to imagine that you can go above threshold, or have this phase transition happen, all in the quantum regime. And there are also some challenges associated with the intensity homogeneity of the network: for example, if one OPO starts oscillating and then its intensity goes really high, it's going to ruin this collective decision-making of the network, because of the intensity-driven nature of the phase transition. So the question is, can we look at other phase transitions? Can we utilize them for computing? And also, can we bring them to the quantum regime? I'm going to specifically talk about the phase transition in the spectral domain, which is the transition from the so-called degenerate regime, which is what I mostly talked about, to the non-degenerate regime, which happens by just tuning the phase of the cavity.
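The threshold bifurcation described above can be caricatured with a classical mean-field equation for the normalized signal amplitude. This is a toy model under stated assumptions: the normalization and the cubic gain-saturation term are simplifications, and it ignores the squeezed-state quantum physics below threshold. It only shows the pitchfork from a near-zero amplitude to the two phase states.

```python
import math

# Toy mean-field model of a degenerate OPO amplitude a (normalized):
#   da/dt = (p - 1) * a - a**3
# p is the pump power relative to threshold. Below threshold (p < 1) the
# amplitude decays to ~0; above threshold it bifurcates to +/- sqrt(p - 1),
# the two (0 / pi phase) states used for computing.
def steady_amplitude(p, a0=1e-3, dt=1e-2, steps=50_000):
    a = a0
    for _ in range(steps):            # forward-Euler integration
        a += dt * ((p - 1.0) * a - a**3)
    return a

below = steady_amplitude(p=0.5)       # decays toward 0
above = steady_amplitude(p=2.0)       # settles near +sqrt(p - 1) = 1.0
print(below, above, math.sqrt(2.0 - 1.0))
```

The sign the amplitude settles to depends on the (here positive) initial fluctuation, which is the classical stand-in for the phase-state selection at threshold.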
And what is interesting is that this phase transition corresponds to a distinct phase-noise behavior. So in the degenerate regime, which we call the ordered state, you're going to have the phase being locked to the phase of the pump, as I talked about. In the non-degenerate regime, however, the phase is mostly dominated by the quantum diffusion of the phase, which is limited by the so-called Schawlow-Townes limit, and you can see that transition from the degenerate to the non-degenerate regime, which also has distinct symmetry differences. And this transition corresponds to a symmetry breaking: in the non-degenerate case, the signal can acquire any of those phases on the circle, so it has a U(1) symmetry, and if you go to the degenerate case, then that symmetry is broken and you only have the zero and pi phase states, as we looked at. So now the question is, can we utilize this phase transition, which is a phase-driven phase transition, and can we use it for a similar computational scheme? That's one of the questions that we're also thinking about. And this phase transition is not just important for computing; it's also interesting from the sensing perspective, and you can easily bring it below threshold and just operate it in the quantum regime, either Gaussian or non-Gaussian. If you make a network of OPOs, now we can see all sorts of more complicated and more interesting phase transitions in the spectral domain. One of them is a first-order phase transition, which you get by just coupling two OPOs, and that's a very abrupt phase transition compared to the single-OPO phase transition. And if you do the couplings right, you can actually get a lot of non-Hermitian dynamics and exceptional points, which are actually very interesting to explore, both in the classical and quantum regimes. And I should also mention that you can think about the couplings being nonlinear couplings as well.
And that's another behavior that you can see, especially in the nonlinear, non-degenerate regime. So with that, I basically told you about these OPO networks: how we can think about the linear scheme and the linear behaviors, and how we can think about the rich nonlinear dynamics and nonlinear behaviors, both in the classical and quantum regimes. I want to switch gears and tell you a little bit about the miniaturization of these OPO networks. And of course, the motivation is, if you look at electronics, and what we had 60 or 70 years ago with vacuum tubes, and how we transitioned from relatively small-scale computers on the order of thousands of nonlinear elements to the billions of nonlinear elements where we are now: with optics, we are probably very similar to 70 years ago, which is a tabletop implementation. And the question is, how can we utilize nanophotonics? I'm going to just briefly show you the two directions on that which we're working on. One is based on lithium niobate, and the other is based on even smaller resonators. So the work on nanophotonic lithium niobate was started in collaboration with Marko Loncar at Harvard and also Marty Fejer at Stanford, and we could show that you can do the periodic poling in the thin film of it and get all sorts of very highly nonlinear processes happening in this nanophotonic periodically poled lithium niobate. And now we're working on building OPOs based on that kind of photonic lithium niobate, and these are some examples of the devices that we have been building in the past few months, which I'm not going to tell you more about. But the OPOs and the OPO networks are in the works, and that's not the only way of making large networks.
But also, I want to point out that the reason these nanophotonic platforms are actually exciting is not just because you can make large networks and make them compact in a small footprint; they also provide some opportunities in terms of the operation regime. One of them is about making cat states in OPOs, which is: can we have the quantum superposition of the zero and pi states that I talked about? And the nanophotonic lithium niobate would provide some opportunities to actually get closer to that regime, because of the spatio-temporal confinement that you can get in these waveguides. So we're doing some theory on that, and we're confident that the ratio of nonlinearity to losses that you can get with these platforms is actually much higher than what you can get with other existing platforms. And to go even smaller, we have been asking the question of what is the smallest possible OPO that you can make. Then you can think about really wavelength-scale resonators, adding the chi(2) nonlinearity, and seeing how and when you can get the OPO to operate. And recently, in collaboration with USC and CREOL, we have demonstrated that you can use nanolasers and get some spin-Hamiltonian implementations on those networks. So if you can build OPOs, we know that there is a path for implementing OPO networks on such a nanoscale. So we have looked at these calculations, and we tried to estimate the threshold of OPOs, let's say for such a resonator, and it turns out that it can actually be even lower than the type of bulk PPLN OPOs that we have been building in the past 50 years or so. So we're working on the experiments, and we're hoping that we can actually make even larger and larger scale OPO networks.
So let me summarize the talk. I told you about the OPO networks and our work that has been going on on Ising machines and the measurement feedback. I told you about the ongoing work on the all-optical implementations, both on the linear side and also on the nonlinear behaviors. And I also told you a little bit about the efforts on miniaturization and going to the nanoscale. So with that, I would like to stop here, and thank you for your attention.
CI/CD: Getting Started, No Matter Where You Are
>>Hello, everyone. My name is John Jainschigg. I work for Mirantis, and I am here this afternoon, very gratefully, with Anders Wallgren, who is VP of technology strategy for CloudBees, a Mirantis partner and a well-known company in the space that we're going to be discussing. Anders is also a well-known entity in this space, which is continuous integration and continuous delivery. You've seen already today some sessions that focus on specific implementations of continuous integration and delivery, particularly around security, and we think this is a critically important topic for anyone in the cloud space, particularly in this increasingly complicated Kubernetes space, to understand. Mirantis thinks, if I can recapitulate our own strategy and language, that with complexity and uncertainty consistently increasing with the depth of the technology stacks that we have to deal with, navigating this requires, first, the implementation of automation to increase speed, which is what CI and CD do, and that this speed be leveraged to let us ship and iterate code faster, since that's ultimately the business that all of us are in, one way or another. I would like, I guess, to open this conversation by asking Anders what he thinks of that core strategy. >>You know, I think, hitting the security thing right off the bat: security doesn't happen by accident. Security is not something that, like a server in a restaurant, you know, sprinkles a little bit of Parmesan cheese on right before they serve you the food. It's not something you sprinkle on at the end. It's something that has to be baked in from the beginning, not just in the kitchen, but in the supply chain, from the very beginning.
So, you know, it's a feature, and if you don't build it in, you're not going to get an outcome that you're going to be happy with. And it's obviously increasingly important and increasingly visible. The kinds of security problems that we see these days can be life-altering for the people that are subject to them, and can be life or death for a company that's exposed to them. So it's very, very important to pay attention to it, and to work to achieve that as an explicit outcome of the software delivery process. And I think CI and CD, as process, as tooling, as culture, play a big part in that, because a lot of it has to do with: set things up right, run them the same way over and over, get the machine going, turn the crank. Now, you want to make improvements over time. It's not just set it and forget it, you know, we got that set up, we don't have to worry about it anymore. But it really is a question of getting the human out of the loop a lot of the time, because if you're dealing with configuring complex systems, you want to make sure that you get them set up, configured, documented, ideally as code, whether it's a domain-specific language or something like that. And then that's something that you can test against, that you can verify against, that you can diff. And then that becomes the basis for your pipelines, for your automation around, you know, kind of the software factory floor. So I think automation is a key aspect of that, because it takes a lot of the drudgery out of it, for one thing. So now the humans have more time to spend on the creative things, on the things that we're good at as
humans, and it also makes sure that, you know, one of the things that computers are really good at is doing the same thing over and over and over. So that kind of puts that responsibility into the hands of the entity that knows how to do that well, which is the machine. So I think it's a deep topic, obviously, but automation plays into it, small batch sizes play into it, being able to test very frequently, whether that's testing in your CI pipeline, where you're mostly doing unit testing, maybe some integration testing, but also layering in the more serious kinds of testing in terms of security scanning, penetration testing, vulnerability scanning, you know, those sorts of things. Which maybe you do on every single CI build, but most people don't, because those things tend to take a little bit longer, and you want your CI cycle to be as fast as possible, because that's really in service of the developer who has committed code and wants to kind of see the thumbs-up from the system. And so most organizations are focusing on making sure that there's a follow-on pipeline, a follow-on set of tests, that happens after the CI passes successfully, and that's where a lot of the security scanning and those sorts of things happen. >>It's an interesting problem.
I mean, you mentioned what almost sounds like a Lawrence Lessig-ian kind of idea, that, you know, code is law. In enterprises today, code, particularly CI code, ends up being policy. But at the same time, it seems to me there's an alternative peril, which is, as you increase speed, particularly when you become more and more dependent on things like containers and layering technology to provide components and capabilities that you don't have to build yourself into your build pipeline, there are new vulnerabilities, potentially, that can creep in, and can creep in despite automation's, or at least first-order automation's, attempts to prevent them from creeping in. You don't want to freeze people on a six-month-old version of a key container image, but on the other hand, if the latest version has vulnerabilities, that could be a problem.
Um, now you know the famous saying, You know, move fast and break things Well, there's certain things you don't want to break. You know you don't want to break a radiation machine that's going to deliver radio radiotherapy to someone because that will endanger their health. So So those sorts of systems, you know, naturally or subject a little bit more kind of caution and scrutiny and rigor and process those sorts of things. The micro service that I run that shows my little avatar when I log in, that one probably gets a little less group. You know, Andre rightfully so. So I think a lot of it has to do. And somebody once said in a I think it was, Ah, panel. I was on a PR say conference, which was, which was kind of a wise thing to say it was Don't spend a million dollars protecting a $5 assets. You know, you wanna be smart and you wanna you wanna figure out where your vulnerabilities they're going to come from and in my experience, and and you know, what I hear from a lot of the security professionals is pay attention to your supply chain. You're you want to make sure that you're up to date with the latest patches of, of all of your third party, you know, open source or close source. It doesn't really matter. I mean, if anything, you know, open source is is more open. Eso You could inspect things a little bit better than the close source, but with both kinds of streams of code that you consume and and use. You wanna make sure that you're you're more up to date as opposed to a less up to date? Um, that generally will be better. Now, can a new version of the library cause problems? You know, introduce bugs? You know, those sorts of things? Yes. That's why we have tests. That's what we have automated tests, regression, sweets, You know, those sorts of things. 
And so you want to live in a world where you feel the confidence, as a developer, that if I update this library from, you know, 1.0.3 to 1.0.10, to pick up a bunch of bug fixes and patches and those sorts of things, that's not going to break something, and the test suites that will run against that ought to cover that sort of functionality. And I'd rather be in the world of, oh yeah, we tried to update to that, but it broke the tests, and then have to go spend time on that, than say, oh, it broke the tests, so let's not update, and then six months later you find out, oh geez, there was a problem in 1.0.3, and it was fixed in 1.0.4. If only we had updated. You look at some of the highest-profile security breaches out there that you can trace to third-party libraries: it's almost always going to be that the library was out of date and hadn't been patched. So that's my, you know, opinionated take on that. >>Sure. What are the parts of modern CI/CD, as opposed to what one would have encountered, say, five or six years ago, before the microservices and containers revolution really took off? >>You know, I think you're absolutely right that the whole world is not doing CI yet, and certainly the whole world is not doing CD yet. Um, as you say, we kind of live in a little bit of an ivory tower. We live in an echo chamber, in a little bit of a bubble, as vendors in this space. The truth is that I would say less than 50% of the software organizations out there do real CI, do real CD. The number's probably less than that.
Um, you know, I don't have anything to back that up other than that I talk to a lot of folks and work with a lot of organizations, and it's like, yeah, that team does CI, that team does weekly builds, you know, those sorts of things. It's really all over the place. And a lot of times there's definitely, in my experience, a high correlation between the amount of time that a team or a code base has been around and the amount of modern technologies and processes and so on that are brought to it. And that sort of makes sense. I mean, if you're starting with a green field, with a blank sheet of paper, you're going to adopt the technologies and the processes and the cultures of today, not of 5, 10, 15 years ago. But most organizations are moving in that direction, right? And I think what's really changed in the last few years is the level of integration between the various tools, between the various pieces, and the amount of automation that you can bring to bear. I mean, I remember, five or 10 years ago, having all kinds of conversations with customers and prospects and people at conferences and so on, and they'd say, oh yeah, we'd like to automate our software development life cycle, but we can't. We have a manual thing here, we have a manual thing there, we do this kind of testing that we can't automate, and then we have this system, but it doesn't have any API, so somebody has to sit and click on the screen. And I used to say, I don't accept no for an answer to "can you automate this," right? Anything can be automated. Even if you just get the little drinking bird, you know, that just pokes the mouse every once in a while.
You can automate it. And actually, I had one customer who was like, okay, and we had a discussion, and they said, well, we have this old Windows tool. It's an obscure tool, it's no longer updated, but it's used in a critical part of the life cycle, and it can't be automated. And I said, well, just install one of those Windows tools that allows you to peek and poke at the screen, you know, with the mouse, so I don't accept your answer. And they said, well, unfortunately, security won't allow us to install those tools. So I had to accept no at that point. But I think one of the biggest changes that's happened in the last few years is that the systems now all have APIs, and they all talk to each other. So if you've got a scanning tool, if you've got a deployment tool, if you have deployment infrastructure, you know, Kubernetes-based, or kind of sitting in front of or around Kubernetes, these things all talk to each other and are all automated. So one of the things that's happened is we've taken out a lot of the wait states, a lot of the pauses, right? So if you do something like a value stream mapping, where you sit down and, I'll date myself here and probably lose some of the audience with this analogy, but if you remember Schoolhouse Rock cartoons in the late seventies, early eighties, there was one which was one of my favorites, and the guy who did the music for it passed away last year, sadly, but it was called "How a Bill Becomes a Law," and they personified the bill.
So the bill, you know, becomes a little person, and first he gets passed by the House and then the Senate, and then the president either signs him or vetoes him. And what I always talk about, with respect to value stream mapping and talking about your processes, is: put a GoPro camera on your source code's head, and then follow that source code all the way through to your customer. Understand all of the stuff that happens to it, including nothing, right? Because a lot of the time, in that elapsed time, nothing keeps happening. If we built cars the way we build software, we would install the radio in a car, and then we would park it in a corner of the factory for three weeks, and then we might remember to test the radio before we ship the car out to the customer. Right? Because that's how a lot of us still develop software. And I think one thing that's changed in the last few years is that we don't have these kinds of, well, we did the build, so now we're waiting for somebody to create an environment and rack up some hardware and install an operating system and install this, that, and the other. That went from manual, to we use Chef or Puppet to do it, which then went to we use containers to do it, which then went to we use containers and Kubernetes to do it. So whole swaths of elapsed time in our software development life cycles basically went to nothing, right? And went to the point where we can configure them way to the left and follow them all the way through. And the artifact that we're delivering isn't necessarily an executable. It could be a container, right? So now that starts to get interesting for us in terms of being able to test against that container, scan against that container, diff
Against that container, Um, you know, and it, you know, it does bring complexity to in terms of now you've got a layered file system in there. Well, what all is in there, you know, And so there's tools for scanning those kinds of things, But But I think that one of the biggest things that's happened is a lot of the natural pause. Points are no longer natural. Pause points their unnatural pause points, and they're now just delays in yourself for delivery. And so what? What a lot of organizations are working on is kind of getting to the point where those sorts of things get get automated and connected, and that's now possible. And it wasn't 55 or 10 years ago. >>So It sounds like a great deal of the speed benefit, which has been quantified many different ways. But is once you get one of these systems working, as we've all experienced enormous, um, is actually done by collapsing out what would have been unused time in a prior process or non paralyze herbal stuff has been made parallel. >>I remember doing a, uh, spent some time with a customer, and they did a value stream mapping, and they they found out at the end that of the 30 days of elapsed time they were spending three days on task. Everything else was waiting, waiting for a build waiting foran install, waiting for an environment, waiting for an approval, having meetings, you know, those sorts of things. And I thought to myself, Oh, my goodness, you know, 90% of the elapsed time is doing nothing. And I was talking to someone Gene Kim, actually, and I said, Oh my God, it was terrible that these you know, these people are screwed and he says, 0 90%. That's actually pretty good, you know? 
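The arithmetic behind that anecdote is simple flow efficiency: task (touch) time divided by elapsed time. A minimal sketch, using the illustrative numbers from the story above (the function name is ours, not from any particular value-stream tool):

```python
# Flow efficiency from a value stream map: the share of elapsed time
# actually spent working on the change (illustrative numbers: 3 days
# of task time out of 30 elapsed days).

def flow_efficiency(touch_time_days: float, elapsed_days: float) -> float:
    """Fraction of total elapsed time spent on value-adding work."""
    if elapsed_days <= 0:
        raise ValueError("elapsed time must be positive")
    return touch_time_days / elapsed_days

efficiency = flow_efficiency(touch_time_days=3, elapsed_days=30)
print(f"flow efficiency: {efficiency:.0%}")      # -> flow efficiency: 10%
print(f"time spent waiting: {1 - efficiency:.0%}")  # -> time spent waiting: 90%
```

Ninety percent waiting is the waste that automation removes; collapsing the wait states raises the ratio without anyone typing faster.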
So I think, you know, if you look at the teams that are doing really pure continuous delivery: you write some code, commit it, it gets picked up by the CI system and passes through CI, it goes through whatever post-CI processing you need to do, security scanning and so on, it gets staged, and it gets pushed into production. That stuff can happen in minutes, right? That's new. That's different. Now, if you do that without having the right automated gates in place around security and those sorts of things, then you're living a little bit dangerously, although I would argue not necessarily any more dangerously than just letting that insecure code sit around for a week before you ship it, right? It's not like that problem is going to fix itself if you just let it sit there. But, you know, you definitely operate at a higher velocity. Now, that's a lot of the benefit that you're trying to get out of it, right? You can get stuff out to the market faster, or, if you take a little bit more time, you get more out to the market in the same amount of time. You can turn around and fix problems faster: if you have a vulnerability, you can get it fixed and pushed out much more quickly; if you have a competitive threat that you need to address, you can move that much faster; if you have a critical bug (all security issues are bugs, sort of by definition, but say a functionality bug), you can get that pushed out faster. So I think all parts of the business benefit from this increase in speed. And I think developers do too, because for any human, when you context switch and step away from something for longer than a few minutes, you're going to have to load it all back up again.
And so that's productivity loss. Now, that's a soft cost, but man, is it expensive, and is it painful. So you see a lot of benefit there. >>If you have an organization that is just starting this journey, what would you ask that organization to consider in order to move them down this path? >>It's by far the most frequent, and almost always the first, question I get at the end of a talk or a presentation: where do we start? How do I know where to start? And there are a couple of answers to that. One is: don't boil the ocean. Don't try to fix everything all at once, because that's not agile, right? Be agile about your transformation. Pick a set of problems that you have, basically make a burn-down list, and do them in order. So find a pain point that you have, go address that, and try to make it small and actionable, especially early on, when you're trying to effect change and you're trying to convince teams that this is the way to go. You may have some naysayers, or you may have people who are skeptical, or who have been through these processes before when they were failures, or at least not the successes they were supposed to be. It's important to have some wins. So what I always say is, look: if you have a pebble in your shoe, you've got a pain point, and you know how to address it. You're not going to address it by changing out your wardrobe or by buying a new pair of shoes. You're going to address it by taking your shoe off, shaking it until the pebble falls out, and putting the shoe back on. So look for those kinds of use cases, right? If your engineers are complaining that whenever they check in, the build is broken, and you're not doing CI, well, then let's look at doing CI. Let's do CI, right, if you're not doing that.
And for most organizations, setting up CI is a very manageable, very doable thing. There's lots of open source tooling out there, and lots of commercial tooling, to do it for small teams, for large teams, and everything in between. If the problem is, gosh, every time we push a change we break something, or every time something works in staging it doesn't work in production, then you've got to look at how those systems are being configured. If you're configuring them manually, stop: automate the configuration of them. If you're fixing systems manually, don't. As a friend of mine says: don't fix, repave. You know, there's a story about how Google operates its data centers. They don't go look for a broken disk drive and swap it out when it breaks. They just have a team of people who, once a month or something (I don't know what the interval is), walk through the data center, pull out all the dead stuff, and throw it out. What they did was assume that, at the scale they operate, physical things are always going to break, and you have to build the software to assume that breakage. Any system that assumes we're going to step in when a disk drive is broken and fix it so we can get back to running just isn't going to work at scale. There's a parallel to that in software, which is that any time you have these kinds of complex systems, you have to assume that they're going to break, and you have to put the things in place to catch those things.
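One concrete way to start putting those catches in place is a first automated test: small, specific, and runnable on every change. A minimal sketch (the function under test and its behavior are invented purely for illustration; the point is the shape of the test, not the function):

```python
# A hypothetical first test: the smallest useful safety net you can
# wire into a build.

def parse_price(text: str) -> float:
    """Parse a price string such as '$5' or '5.00' into a float."""
    return float(text.strip().lstrip("$"))

def test_parse_price_plain() -> None:
    assert parse_price("5.00") == 5.0

def test_parse_price_with_symbol() -> None:
    assert parse_price("$5") == 5.0

if __name__ == "__main__":
    # A runner like pytest would discover these automatically;
    # without one, just call them directly.
    test_parse_price_plain()
    test_parse_price_with_symbol()
    print("2 tests passed")
```

Run on every commit, a failing assertion like this breaks the build immediately instead of surfacing as a production incident weeks later.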
The automated testing, whether you have 10,000 tests that you've already written or you have no tests and you just need to go write your first test: that journey has to start somewhere. But my answer to that question is generally: just start small. Pick a very specific problem, build a plan around it, build a burn-down list of things that you want to address, and just start working your way down it, the same way that you would for any kind of agile project. For the transformation of your own processes and your own internal systems, you should use agile processes as well, because if you go off for six months and build something, by the time you come back, it's going to be relevant, probably, to the problems you were facing six months ago. >>Then let's consider the situation of a company that's using CI, and maybe CI and CD together, and they want to reach what you might call the next level. They've seen obvious benefits, and they're interested in increasing their investment and the cycles devoted to this technology. You don't have to sell them anymore, but they're looking for a next direction. What would you say that direction should be? >>I think oftentimes what organizations start to do is look at feedback loops, and that starts to go into the area of metrics and analytics and those sorts of things. We're always affected by things like mean time to recovery and mean time to detection. What are our cycle times from ideation to code commit? What's the cycle time from code commit to production? Those sorts of things. And you can't change what you don't measure. So a lot of times, the next step, after getting the rudiments of CI or CD or some combination of both in place, is to start to measure.
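Those cycle-time numbers can be computed directly from pipeline timestamps. A minimal sketch (the event records here are hypothetical; real data would come from whatever CI/CD system you use):

```python
from datetime import datetime
from statistics import median

# Hypothetical deploy events: when a change was committed and when it
# reached production. Real data would come from a CI/CD system's API.
deploys = [
    {"commit": datetime(2020, 6, 1, 9, 0),   "deploy": datetime(2020, 6, 1, 9, 42)},
    {"commit": datetime(2020, 6, 1, 11, 5),  "deploy": datetime(2020, 6, 1, 12, 1)},
    {"commit": datetime(2020, 6, 2, 14, 30), "deploy": datetime(2020, 6, 2, 14, 55)},
]

# Commit-to-production lead time for each change, in minutes.
lead_minutes = [(d["deploy"] - d["commit"]).total_seconds() / 60 for d in deploys]

median_lead = median(lead_minutes)
print(f"median commit-to-deploy: {median_lead:.0f} min")  # -> 42 min for this data

# Once the number meets the team's target, demote the chart to an alarm:
# stay quiet unless the metric regresses past the threshold.
TARGET_MINUTES = 60
if median_lead > TARGET_MINUTES:
    raise SystemExit("cycle-time alarm: find out where the waiting crept back in")
```

The same pattern applies to any of the metrics he names: measure, improve until the outcome is met, then convert the dashboard graph into a threshold trigger.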
Start to measure. But there, I think you've got to be smart about it, because what you don't want to do is just pull out all the metrics that exist, barf them up on a dashboard and giant television screens, and say, boom, metrics, mic drop, go home. That's the wrong way to do it. You want to use metrics very specifically to achieve outcomes. So if you have an outcome that you want to achieve, and you can tie it to a metric, start looking at that metric and start working that problem. Once you've solved that problem, you can take that metric, and if it's the one you're showing on the big-screen TV, you can pop it off, pick the next one, and put that up there. It's a little different when you're in a NOC or something like that, looking at the network stuff, but I'm always leery when I walk into a software development organization and there are a bazillion different metrics all over the place, because they're not all relevant, and they're not all relevant at the same time. Some of them you want to look at often; some of them you just want to set an alarm on. I mean, you don't go down to your basement every day to check that the sump pump is working. What you do is put a little water detector in there and have an alarm go off if the water level ever rises above a certain amount. You want to do the same thing with metrics, right? Once you've got the water out of your basement, you don't have to go down there and look at it all the time. You put the little detector in, and then you move on and worry about something else. And so organizations, as they start to get a little bit more sophisticated and start to look at the analytics and the metrics, start to say: hey, look, if our cycle time from commit to deploy is this much, and we want it to be this much.
What happens during that time, and where can we take slices out of it, without affecting the outcomes in terms of quality and so on? Or, if it's from ideation to code commit, what can we do there? You start to do that, and as you get those virtuous cycles of feedback loops happening, you get better and better. But you want to be careful with metrics. Like I said, you don't want to barf up a bunch of metrics just to say, look, we've got metrics. Metrics are there to serve a particular outcome, and once you've achieved that outcome, and you know you can continue to achieve it, you turn it into an alarm or a trigger and put it out of sight. You don't need to have, say, a code coverage metric prominently displayed. You pick a code coverage number that you're happy with, you work to achieve it, and once you achieve it, you just worry about not dropping below that threshold again. So you can take that graph down and just put a trigger on it: if we ever get below this, raise an alarm, or fail a build, or fail a pipeline, and then start to focus on improving another metric, or another outcome using another metric. >>That makes enormous sense. So, I'm afraid we are out of time. I want to thank you very much for joining us today. This has been certainly informative for me, and I hope for the audience. Thank you very, very much for sharing your insight.
Paul Speciale, Scality | HPE Discover 2020
>>From around the globe, it's theCUBE, covering HPE Discover Virtual Experience. Brought to you by HPE. >>Hi, welcome to theCUBE's coverage of HPE Discover 2020, the virtual experience. I'm Lisa Martin, and I'm pleased to welcome from Scality one of our longtime Cube alumni. We have Paul Speciale, the chief product officer at Scality. Hey, Paul, welcome back to theCUBE. >>Hi, Lisa. It's been a long time, and it's just wonderful to be back. Thank you. >>This is our new virtual Cube, where everybody is very socially distant but socially connected. So, since it's been a while since we've had you on, tell us a little bit about Scality, and then we'll dive into what you're doing with HPE. >>Okay, absolutely. Let me give you a quick recap of where we're at. Interestingly, we're now a 10-year-old company; we actually celebrated our tenth anniversary last year. We still have our flagship product, the Ring, which we launched originally in 2010. That is distributed file and object storage software. But about three years ago we added a second product called Zenko, which is for multi-cloud data management. We continue to invest in the Ring a lot, both on the file side and the object side; the current release is Ring 8. The target market is pretty broad, but we really focus on financial services institutions. That's a big base for us: we have something like half of the world's banks, about 60% of the world's service providers, and a lot of government institutions. But what's been fastest growing for us now is healthcare; we have a lot of growth there in medical imaging and genomics research. And then I guess the last thing I'll add is that partners are just super important to us. We continue to certify and test with ISV solutions; I think we have 80 of them now deployed and ready to go. But there's a real focus now on partners like WekaIO, Splunk, Veeam, and HPE StoreOnce. Those partners are critical to our business, and we just love to partner with them. >>You've been partners with HPE for quite a while. Tell me about the evolution of the partnership as you've evolved your technology. >>Yeah, absolutely. It's interesting, because I just noted this a couple of weeks ago: the company is 10 years old, and we've been partners with HPE for over half of that, about five and a half years. The way to think about it is that we have a worldwide OEM relationship with HPE for the Apollo 4000 server line. The official name for our product is HPE Apollo 4000 systems with Scality Ring scalable storage. Quite a mouthful, but very descriptive. And then we work very closely with the HPE storage and big data teams. I'm very tied into the product side, talking to the product managers, but also the marketing side, and very much so on the sales side. We've had super success with them in Europe, also here in the US, and there's growing business in APJ, in Japan specifically. >>You mentioned that one of the areas surging for you right now is healthcare. Given that we are three months into a global pandemic, is there anything interesting you want to share in terms of how Scality is helping some of your healthcare customers rapidly pivot in this very unprecedented time? >>Yeah, I would say there are a couple of very notable trends here. The first one started a few years ago. We honestly didn't focus much on healthcare until about 2017 or 2018, but since that time we now have something like 40 hospital systems globally using our product, notably on HPE servers, and that's to retain medical images for long-term retention. These are things like digital diagnostic images: MRIs, CAT scans, CT scans. Hospitals are mandated to keep them for the long term, sometimes for five years, 10 years, or even the patient's lifetime. I would say the newer thing that we're seeing, just in the last year or so, is genomics research. There's so much concentration now in pharmaceutical and biotechnology research around genomics. That data tends to be very voluminous; it can go from hundreds of terabytes to petabytes. Moreover, they need to run simulations on it to do fast iteration on drug research. We've now been applied to that problem, and a lot of times we do it with a partner, something like a fast tier-one file system with us as the archive tier. But we're seeing that the popularity of that has grown tremendously within hospitals, hospital groups, and dedicated biotechnology research. >>You talked about volumes there, and the volumes are growing each year, as are retention periods, depending on the type of data and the type of imagery, for example. But from a use case perspective, what is it that you're helping your healthcare customers achieve? Is it backup targets? Is it disaster recovery? Is it speed of access? All of the above? >>Yeah, so where we focus in healthcare is really on the unstructured data. This is all the file content that they deal with in a hospital. Think about all the different medical image studies that they have, things like digital files for CAT scans and MRIs. These are becoming huge files: one multi-slice X-ray or digital scan, for example, can be gigabytes in size, and that's per patient. Now think about the number of patients and the retention of all of that. It's a perfect use case for what we do, which is capacity-optimized storage for long-term retention. But we can also be used for other things. For example, backups of the electronic patient records: those are typically stored in databases, but they need to be backed up, and what we've found is that we're an ideal long-term backup target. So the way hospitals look at us is that they can consolidate multiple use cases onto our Ring system on HPE. They can grow it over time; they can just keep adding servers. Typically they start with a single use case, what they think of as a single modality, perhaps in imaging, and then they grow over time to encompass more and more, eventually thinking about a comprehensive image management system within the hospital. Those are popular today. Hospitals are also starting to look at other use cases: obviously we mentioned genomics, but hybrid cloud is coming at them as well. >>Talk to me about that. As we see growing volumes of data, different types of modalities, and lots of urgent need to, as you said, back up data, data protection is critical. But as healthcare organizations move to multi-cloud, how can Scality and HPE help facilitate that migration? >>Yeah, so what we've noticed is that there's both a feeling that they're fast and that they're slow to embrace the public clouds. But one thing that's obvious is that from a SaaS perspective, software as a service, they've really embraced it. Most of the big EMR systems, the electronic medical records, are already SaaS-based, so they are there, and in fact they're probably already multi-cloud. But the data management side is where we focus, and we hear a lot of use cases that would involve taking older data from on-prem and archiving it long term in a HIPAA-compliant cloud in the US, for example. There are other things too: they may want to push some data that they've generated on-prem to a public cloud like Amazon or Azure and do some kind of computing against it, perhaps an analytics service, some kind of image recognition or image pattern detection. The third one that we see now in hybrid cloud is their interest in having second copies of the data so that they can continue operations. I think we all know that hospitals have an absolute uptime need; they need to be running 24 by seven. One of the things that's starting to happen is that, rather than a second physical data center, they establish a second site in a public cloud and stage their applications there, and we can help, with HPE, move the data from on-prem to the public cloud to build this sort of cloud disaster recovery solution. >>So, cloud, interesting topic. Do you see that in healthcare in particular, hospitals and healthcare organizations are getting less concerned about cloud from a security perspective and more open to it as an enabler of scale? >>I think what they've seen is that the cloud vendors have really matured in terms of providing all of the hardening you want for data privacy and data security. Ten years ago, if you looked at the cloud, you would have been extremely nervous about putting your data up there, but now all of the right principles are in place: multi-tenancy, secure authentication based on very strong keys, encryption of the data. One of the first healthcare customers we worked with was completely ready to do this, but then, of course, they said the images that we store in the cloud must be encrypted. So we were able to work in collaboration with them to develop encryption and actually use their own key management service for encrypting those images, so that our system and the HPE servers don't store the encryption keys. So I would say yes: it's a combination of the clouds becoming super mature, some of them now certified and compliant for this use case, and the customers having passed the first step of trying it, so they're really ready to go into these use cases a little more broadly. >>And so, with that maturity of the technologies and more willingness on the part of the customer to try them, tell me how HPE and Scality go to market together. >>Yeah, so we've really focused on specific market verticals, healthcare being one of them, but there are others; financial services is where we've had other success with them. The way we do it is that we start by building very specific swim lanes, in HPE parlance, that help aim the sales force at where we can provide a great solution, not only with Ring but with complementary software, like I mentioned HPE StoreOnce for data protection and backup. They have other partner solutions that we just love to work with: WekaIO has a wonderful fast file system that is now useful in biotech, and they use a system like the Ring for storing the data from their file system and its snapshots. But the way it's been organized is really by vertical, with specialized teams that understand how to sell that message. We jointly sell with them, so their teams and our teams go to calls together. It's obviously been very virtual lately, but we've usually collaborated very extensively in the field, with air cover at the marketing level. And now, one of the newer things with the new way of working is lots of virtual events: we're not only doing a Discover virtual experience, but we've started doing more and more webinars, especially with HPE and these other joint partners. >>And, Paul, in this new virtual era where, like you said, this is how we're communicating now, and thankfully we have the technology, a couple of questions related to sales and engagement. First question: what are some of the things the joint sales teams are hearing now from customers, in terms of requirements that might be changing given the COVID situation? >>Yeah, I think one of the things we've certainly seen is that almost nothing has slowed down in these industries. We're focused on industries that seem to think long term, right? Obviously healthcare: they're dealing with the current crisis as much as they can, but what we've seen is that they're still planning. They want to build out their IT infrastructure, and they're certainly thinking about how to leverage hybrid cloud. I think it's become very clear that they see that not only as a way to offer new services in the future but also as a way to save money today. They're very interested in that: how can they save on capital expenses and human talent, for example? Those have been the themes for us. We do have some exposure to industries that might have a little more sensitivity to the current climate, things like travel-related services, but honestly it's been minor, and what we're finding is that even those companies are still investing in this kind of technology, really thinking about the two-to-three-year horizon and beyond. >>Have you done any messaging or positioning changes? I know you've worked in product marketing and corporate marketing. Everybody prepares for different types of disruptions or natural disasters, but now we have this invisible disruptor. Any change in your messaging or positioning, either at Scality or with the HPE partnership, that will help customers understand, if they're not on this journey yet, why they need to be? >>Yes, we have looked at how we message the technology and the solution, especially in light of the pandemic. We stayed true to a top-level hybrid cloud data management message, but underneath the covers, what do customers care about? They care about the solution you provide, but they also care about what they pay for it; let's be honest. One of the things we've done historically is to have a very simplified pricing model. It's based on usable protected capacity: the user says, I have a petabyte of data, and that's the license fee. It's not based on how much disk they have, or how many copies they want to create, or how many sites they want to spread it across. So one of the things we want to do is make that clearer, and that's come out more in our messaging in recent months. The second is that we feel customers really want to know us as a company. They want to feel assured that we're here, that we'll support them in all cases, and that we're available at all times. What that's translated into is more of a customer community focus. We care very much about our customers; we see them invest in our systems today, and they continue to expand. So we're doing things like new community portals where they can engage us in discourse; they can ask questions live, we're online, and we have a lot of tips and knowledge available for them. So I would say those are the two changes we've put into our messaging: one on pricing and one on community involvement. >>And where community involvement is concerned, it's even more critical now, because we can't get together face to face and have conversations or meetings or conferences. As chief product officer, I imagine that was a lot of what you were doing before. Tell me, from your perspective, what it takes to engage with the community, with sales, and with your partners during this TBD timeframe, when we don't know when we're going to get back together. What do you find works really well for continuing that engagement? >>Yeah, I think the keyword for me has just been transparency. Customers have always wanted to know what's really going on behind the scenes: how does the tech work, what's the architecture? And I think now we're seeing a ramp-up on that. For example, what's very important for the community is for people to know what's coming: they want to know the roadmaps. They want to be alerted to new things, not only for the next quarter but for the next year. So I think that's our focus here: to make this community a place where people can learn absolutely everything, so that they can plan not only for the next year but, like we said, for three years and beyond. We're going to do our best to be totally transparent and as expressive as we can possibly be. >>Transparent and trusted. Paul, those are two great words to end on. Thank you so much for joining us on theCUBE and sharing what's new at Scality and with the HPE partnership. >>It's been a pleasure, Lisa. Thank you for your time. >>Likewise. For my guest, Paul Speciale, I'm Lisa Martin. You're watching theCUBE's coverage of HPE Discover 2020, the virtual experience.
SUMMARY :
Discover Virtual experience Brought to you by HP We have, all specially the chief product officer at agility. Thank you. So since it's been a while since we've had you on and your peers are critical to our business and we just love to partner with them. Tell me about the evolution of the partnership as you've evolved On the sales side, we've had super success with them in Europe, also here in the US, and given the fact that we are three months into a global pandemic, anything that's interesting We've now been applied to that problem, and a lot of times we do it with a partner or something like a fast tier Recovery is speed of access All the above. Think about all the different medical image studies that they have, Talk to me about that as we see growing volumes of data, different types of modalities, One of the things that's starting to happen is cloud from a security perspective and more open to it as an enabler of scale? One of the first healthcare customers we worked with was And so with that maturity of the technologies and the more the willingness on the part of the customer to at the marketing level, and now one of the newer things with obviously the new way of working is lots of virtual now from customers that might be changing requirements given the Koven situation? You know, we do have some exposure to industries that might have a little bit more, But now we have this invisible disruptor. So one of the things we want to do is make that a little bit more clear. to engage with sales and your partners during this TBD timeframe of we don't know when we're going to get back So I think that's our focus here is to make this community the Cube, sharing what's new at stability and with the HP partnership. It's been a pleasure. The virtual experience.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Europe | LOCATION | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
HP | ORGANIZATION | 0.99+ |
Japan | LOCATION | 0.99+ |
Paul Scott | PERSON | 0.99+ |
Paul | PERSON | 0.99+ |
Lisa | PERSON | 0.99+ |
five years | QUANTITY | 0.99+ |
Sally | PERSON | 0.99+ |
US | LOCATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
80 | QUANTITY | 0.99+ |
10 years | QUANTITY | 0.99+ |
10 year | QUANTITY | 0.99+ |
2000 | DATE | 0.99+ |
Ian HP | PERSON | 0.99+ |
First | QUANTITY | 0.99+ |
last year | DATE | 0.99+ |
24 | QUANTITY | 0.99+ |
next year | DATE | 0.99+ |
Zenko | ORGANIZATION | 0.99+ |
next year | DATE | 0.99+ |
azure | ORGANIZATION | 0.99+ |
three months | QUANTITY | 0.99+ |
second product | QUANTITY | 0.99+ |
second site | QUANTITY | 0.99+ |
Wicca | ORGANIZATION | 0.99+ |
first | QUANTITY | 0.99+ |
each year | QUANTITY | 0.99+ |
three years | QUANTITY | 0.99+ |
second | QUANTITY | 0.99+ |
two changes | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
next quarter | DATE | 0.98+ |
one | QUANTITY | 0.98+ |
HP East | ORGANIZATION | 0.98+ |
HIPAA | TITLE | 0.98+ |
One | QUANTITY | 0.98+ |
10 years ago | DATE | 0.98+ |
about 60% | QUANTITY | 0.98+ |
Apollo 4000 | COMMERCIAL_ITEM | 0.98+ |
seven | QUANTITY | 0.98+ |
single modality | QUANTITY | 0.98+ |
second copies | QUANTITY | 0.98+ |
first step | QUANTITY | 0.97+ |
Paul Speciale | PERSON | 0.97+ |
Couple questions | QUANTITY | 0.96+ |
40 hospital hospital systems | QUANTITY | 0.96+ |
about three years ago | DATE | 0.95+ |
single use case | QUANTITY | 0.95+ |
today | DATE | 0.95+ |
10 years old | QUANTITY | 0.94+ |
few years ago | DATE | 0.93+ |
Iot | ORGANIZATION | 0.92+ |
Prem | ORGANIZATION | 0.92+ |
two great words | QUANTITY | 0.92+ |
half | QUANTITY | 0.89+ |
Ring | ORGANIZATION | 0.89+ |
about 5.5 years | QUANTITY | 0.89+ |
second physical data center | QUANTITY | 0.89+ |
10 | DATE | 0.89+ |
Said Era | ORGANIZATION | 0.87+ |
3rd 1 | QUANTITY | 0.85+ |
Scality | PERSON | 0.85+ |
Eso | ORGANIZATION | 0.85+ |
Ashesh Badani, Red Hat | Red Hat Summit 2020
>>From around the globe, it's the Cube, with digital coverage of Red Hat Summit 2020. Brought to you by Red Hat. >>Hi, and welcome back to the Cube's coverage of Red Hat Summit 2020. I'm Stu Miniman. This year's event, of course, happened globally, which means we're talking to Red Hat executives, customers, and partners where they are around the globe, and happy to welcome back to the program one of our Cube alumni, Ashesh Badani, who is the senior vice president of Cloud Platforms at Red Hat. It's great to see you. >>Yeah, thanks a lot for having me back on. >>Yeah, absolutely. So, you know, the usual wall-to-wall coverage that we do in San Francisco? Well, it's now a global digital event, a little bit of a dispersed architecture to do these environments, which reminds me a little bit of your world. So, you know, on the main keynote stage, Paul's up there as the, you know, new CEO, talking about open hybrid cloud, and of course, a big piece of that is, you know, OpenShift and the various products, you know, in the portfolio there. So, obviously, we know there's not, you know, big announcements of, you know, launches and the like, but your team and the product portfolio have been going through a lot of changes, a lot of growth, since the last time we connected. So bring us up to speed as to what we should know about. >>Sure. Thanks, Stu. Oh, yes, not a huge focus around announcements this summit, especially given everything going on in the world around us today. Ah, but, you know, that being said, we continue our OpenShift journey. We started that, well, you know, many years ago, but in 2015 we had our first release based on Kubernetes, a container-focused platform. Ever since then, you know, we've continued to grow and to evolve. At last count, now over 2000 customers globally have trusted the platform, in literally every industry and also obviously every geography around the globe. So that's been great to see.
And at last summit, we actually announced a fairly significant enhancement of the platform with the launch of version 4, with a big focus around greater manageability: the ability to use operators, which is, you know, a Kubernetes concept to make applications much more manageable, um, you know, when they're being run natively within the platform. We continue to invest there, so there's a new release of the platform, OpenShift 4.4, based on Kubernetes 1.17, being made available to our customers globally. And then really, sort of this notion of over-the-air updates, right: to create a platform that is almost autonomous in nature, you know, that acts more like your mobile phone in the way you can manage and update and upgrade. I think that's a key value proposition that, you know, we're providing to our customers. So we're excited to see that and then be able to share that with you. >>Yeah, so Ashesh, I want to dig into that a little bit. So one of the discussions we've had in the industry for many years is how much consistency there needs to be across my various environments. We know, you know, Kubernetes is great, but it is not a silver bullet. You know, customers will have clusters; they will have different environments. I have what I do in my data centers or colos; I'm using things in the public clouds and might be using different Kubernetes offerings. So, you know, as you said, there's things that Red Hat is doing, but give us a little insight into your customers: how should they be thinking about it? How do they manage it? One of the new pieces that you're building into it, of course, from a management standpoint, is ACM, which manages OpenShift today but is going to support some of the other Kubernetes options, you know, down the road. So how should customers be thinking about this? How does Red Hat think about managing
in this ever more complex world? >>Yes. So, Stu, we've been talking about this for several years now, right, with regard to just the kinds of things customers are doing. And let's start with customers, for us, because it's all about, you know, the value for them. So at this year's summit we're announcing some innovation award winners, right? So a couple of interesting ones: BMW and Ford. Um, you know, BMW, you know, building its next-generation autonomous driving platform using containers, and then, you know, Ford's massive data platform on OpenShift, doing a lot of interesting work with regard to, uh, bringing together its development teams, taking advantage of existing investments in hardware, and so on, you know, with the platform. But also, increasingly, companies that are, you know, for example, in other sectors. All right, so we've got the Argentine Ministry of Health; we've got a large electricity distribution company adopting containers, adopting middleware technology, for example, on OpenShift, to get great value, right? So network alerts when there's an electricity outage going from three minutes to 10 seconds. And so, as you now see more and more customers doing, you know, more and more, if you will, mission-critical activities on these platforms, to your point, your question is a really good one: they've got clusters running in multiple markets, right? Perhaps in their own data center, across multiple clouds, and managing these clusters at scale becomes, you know, more and more critical. And so, you know, we've been doing a bunch of work with regard to this. The team that actually joined us from IBM has been working on this cluster management technology for a while, and now it's part of Red Hat. We're now releasing, in technology preview, Advanced Cluster Management, trying to address questions around: what does it mean to manage the lifecycle of applications across clusters? How do I monitor and ensure cluster health,
you know, regardless of, you know, where they run? How do I have consistent security and compliance for my policies across the different clusters? So really excited, right? It is a really interesting technology, probably the most advanced cluster management that's on the market. IBM was working on it, you know, well before, you know, the team from there, you know, joined us, and now we're making it much more widely available. >>Yeah, actually, one of the things that really impressed me is some of those customers. First off, congratulations on 2000, you know, great milestone there. And we've had, and we're gonna have, some of the opportunity to talk on the Cube to some of those essential services you talk about. The Ministry of Health, obviously, with a global pandemic on, a critical environment; energy companies need to keep up and running. I've got Vodafone Idea also from India, talking about how communication services are so essential. Pieces, and definitely OpenShift, you know, a big piece of this story as to how they're working and managing and scaling. Um, you know, everybody talks about scale; for years, but in the current situation around the globe, scale is something that, you know, is definitely being stressed and strained, and we're understanding what's really important. Um, another piece, really interesting, I'd like to dig into a little bit here: talk about OpenShift. You know, we talk Kubernetes and we're talking containers, but there's still a lot of virtualization out there. And then, from an application development standpoint, there's, you know, what, let's throw everything away and go all serverless out there. So I understand OpenShift is embracing the full world and all of the options out there. So help us walk through how Red Hat maybe is doing things a little bit differently. And of course, we know anything Red Hat does is based on open source. So let's talk about those pieces. >>Yes, two super interesting areas for us.
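The cluster-management questions above (manage lifecycle, monitor health, enforce consistent policy across clusters, regardless of where they run) reduce to evaluating one declared policy against many clusters and reporting drift. Here is a purely illustrative Python sketch of that idea; the field names are invented for illustration, not Advanced Cluster Management's actual API:

```python
# Illustrative sketch of the multicluster policy idea behind a tool
# like Advanced Cluster Management: declare one policy, evaluate it
# against every cluster regardless of where it runs, and report drift.
# Field names here are made up for illustration, not ACM's actual API.

POLICY = {"encryption": True, "min_version": (1, 17)}

def violations(cluster: dict) -> list:
    """Return the policy checks a single cluster fails."""
    found = []
    if not cluster.get("encryption", False):
        found.append("encryption")
    if cluster.get("version", (0, 0)) < POLICY["min_version"]:
        found.append("min_version")
    return found

clusters = {
    "on-prem": {"version": (1, 17), "encryption": True},
    "aws":     {"version": (1, 17), "encryption": False},
    "gcp":     {"version": (1, 16), "encryption": True},
}

report = {name: violations(c) for name, c in clusters.items()}
assert report["on-prem"] == []            # compliant everywhere
assert report["aws"] == ["encryption"]
assert report["gcp"] == ["min_version"]
```

The real system layers remediation and reporting on top, but the core loop is exactly this: one policy, many clusters, one consolidated view of drift.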
Um, one is the work we're doing based on an open source project called KubeVirt, and that's part of the CNCF incubating projects. And that is the notion of bringing virtualization into containers. And what does that mean? Obviously, there are huge numbers of workloads running in virtual machines globally, and more and more customers want, you know, one control plane, one environment, one abstraction to manage workloads, whether they're running in containers or in VMs. And we sort of say: can we take workloads that are running in these, uh, KVM-based virtual machines, or, uh, VMs running in a VM-based environment, and bring them natively on, run them as containers, managed by Kubernetes, orchestrated across this distributed cluster that we've talked about? It's extremely powerful, and it's a very modern approach to modernizing existing applications as well as thinking about building new services. And so that's a technology that we're introducing into the platform, and we're starting to see some early customer interest around it. >>So, you know, I know I'm gonna have a breakout with Joe Fernandez to talk about this a little bit, but, you know, one note as you're working on that: you're bringing a VM into the container world. And what Red Hat does well, because, you know, your background and what Red Hat does is, you know, from an operating system, you're really close to the application. So one of my concerns, you know, from the early days of virtualization was: well, let's shove things in a VM and leave them there and not make any changes, as opposed to what you're describing, which is: let's help modernize things. You know, I saw one of the announcements talking about how do I take Java workloads and bring them into the cloud; there's a project called Quarkus. So once again, do I hear you right?
You're bringing VMs into the container world with help to move toward that journey, to modernize everything, so that we're doing a modern platform, not just saying, hey, I can manage it with the tool that I was using before. But that application, that's the important piece of it. >>Yeah, and it's a really good point, you know. We've, you know, got so much to cover, probably too little time to do it, right? Because the one that you touched on is really interesting: the project called Quarkus, right? Again, as you rightly pointed out, everything we do is open source, and so that's a way for us to say: look, if we were to think about Java and be able to run it in a cloud-native way, right, and be able to run, um, that natively within a container, orchestrated again by Kubernetes, what would that look like? Right? How much could we improve density? How much could we improve performance around those existing Java applications, taking advantage of all the investments that companies have made, but make that available in a Kubernetes and cloud-native world, right? And so that's what the Quarkus project is about. We're seeing a lot of interest, you know, and again, because of the open source model, right, you already have companies that are adopting this, right? So there's, I think, a telecom company based out of Europe that's talking about the work that they're already doing with this, and they've already blogged about it, talking about, you know, the value from a performance and usability perspective that they're getting with it. And so you couple this idea of: how do I take VMs, bring them into containers, right, existing workloads, move that in, run that natively? Check, right? Uh, the next one: how do I take existing Java workloads and bring them into this modern, cloud-native, Kubernetes-based world? You know, making progress with Quarkus: check. And then the third area is this notion of serverless, right?
Which is, you know: I've got new applications, new services; I want to make sure that they're taking advantage of the appropriate resources, but only the exact amount of resources they require, and to do that in a way that's native to Kubernetes, right? So we've been working on implementing Knative-based technologies as the foundation, as the building blocks, um, of the work we're doing around serving and eventing, toward delivering, ah, a more portable serverless solution, regardless of where you run it, across any of your platforms. And that will also bring the ability to have functions that are made available by really any provider on that same platform. So, to put all the pieces together, right: the way we're thinking about this is that the center of gravity is a Kubernetes-based platform that we make fully automated, that we make very operational, that we make it easy for different, you know, third-party pieces to plug into, right, to sort of make sure that it's pluggable and modular, and at the same time we start layering on additional capabilities. >>Yeah, there are a lot of topics, as you said, to touch on. I'm glad on the serverless piece we're teasing it out, because it is complicated. You know, there are some that are just like: well, from my application developer standpoint, I don't need to think about all that Kubernetes and containers piece, because that's why I love serverless: I just develop to it, and the platform takes care of it. And we would look at this a year or two ago and say: well, underneath that, what is it? Is it containers? And the answer was: well, it could be containers; it depends what the platform is doing. So, you know, from Red Hat's standpoint, you're saying OpenShift Serverless: you know, yes, it's Kubernetes underneath there. But then I heard you talk about it, and, um, I saw there's, you know, a partner of Red Hat.
It's in the open source community: TriggerMesh, which was answering one of the questions I had. You know, when I talk to people about serverless, most of the time it's AWS-based stuff, not just Lambda, lots of other services. You know, I did an interview with Andy Jassy a few years ago, and he said if he was to rebuild AWS today, everything would be built on serverless. So might some of those have containers and Kubernetes under them? Maybe, but Amazon might do their own thing, so there's really a connection to be made there. So how does that plug in with what you're doing with OpenShift? How do all these various open source pieces go together? >>Yes, I would expect for us to have partnerships with several startups, right? You know, you named, you know, one in our ecosystem. You know, you can imagine Azure Functions, you know, running on our serverless platform, as well as functions provided by any third party, including those that are built by Red Hat itself, uh, you know, being portable within this platform. Because ultimately, you know, we're building the platform to be operational, to be managed at scale, to create greater productivity for developers, right? So, for example, one of the things we've been working on is in the area of developer tools, to give customers that ability. You know, the product that we have is called CodeReady Workspaces, but essentially it's this notion of, you know: how can we take containers and give workspaces that are easy for remote developers to work with? A great example of a customer, actually, in India, that's been able to rapidly cut down the time to go from dev to production, you know, reduced because they're using, you know, things like these remote workspaces running in containers. You know, this is based on the Eclipse Che project. So this notion that, you know, we're building a platform that can be used by ops teams?
Absolutely true, but at the same time the idea is: how can we now start thinking about making sure these abstractions we're providing are extremely productive for development teams? >>Yeah, it's such an important piece. Last year I got the chance to go to AnsibleFest for the first time, and it was that kind of discussion that was really important, you know: can tools actually help me bridge between what were traditionally some of those silos? You know, the product developers, the infrastructure and ops team, and the app dev teams all get things in their terminology and where they need it, but common platforms cut between them. So it sounds like a similar methodology we're seeing in other pieces of the platform. Any other, you know, guidance? You talked about all your customers there. How are they working through, you know, all of these modernizations, adopting so many new technologies? Boy, you talk about, like, DevOps tooling; it still makes my head spin when I look at some of these charts of all the various tools and pieces that organizations are supposed to choose and pick, ah, out of there. So how is your team helping customers on kind of the organizational side? >>Yes. So let's look at this in two parts. One is: how do you make sure that the platform is working to help these teams? You know, by that, what I mean is, you know, we are introducing this idea and working very closely with our partners globally on this notion of operators, right? Which is: every time I want to run databases, and, you know, there are so many different databases out there, right, SQL, NoSQL, and a variety of different ones for different use cases, how can we make sure that we make it easy for customers to trial them and then be able to deploy them and manage them, right? So this notion of an operator lifecycle makes applications much more manageable when they run with data.
So you make it easier for folks to be able to use them. And then the question is: well, what other, if you will, advice is there to help me get that right? So of late, you've probably heard, we hired a bunch of industry experts and brought them into Red Hat around this notion of global transformation, to be able to bring that expertise, you know, whether it's, you know, folks who are deep in DevOps and the DevOps Handbook, you know, some of the things that the industry follows a lot, like the Phoenix Project, and, you know, various different ways to look at your business, and be able to start sharing ideas with you on those tools, and couple that with things like Open Innovation Labs that come from Red Hat, as well as, you know, similar kinds of offerings from our various partners around the world, to help, you know, ease their transition. >>All right. So final question I have for you, let's go a little bit high level. You know, as you've mentioned, you and I have been having this conversation for a number of years. In the last year or so, I've been hearing some of the really big players out there, ones that are, of course, partners of Red Hat, but they say similar things. So, you know, whether it's, you know, Microsoft Azure releasing Arc, or it's, you know, VMware, which much of your OpenShift customer base sits on top of, but now they have, you know, the Project Pacific piece, and so many of them talk about this, you know, heterogeneous, multi-cloud environment. So how should customers be thinking about Red Hat? Of course, you partner with everyone, but, you know, you do tend to do things a little bit differently than everybody else. >>Uh, yeah. I hope we do things differently than everyone else, you know, to deliver value to customers, right. So, for example, all the things that we talk about with OpenShift really are about industry-leading innovation.
And I think there's a bit of a transformation that's going on as well within the way Red Hat approaches things. So some customers have known Red Hat in the past, in many ways, for saying: look, they're giving me an operating system that's, you know, democratizing, if you will, what the proprietary providers had been giving me for all these years; they provided me an application server, right, that, you know, gives me better value than the proprietary prices. Increasingly, what we're doing with, you know, the work around, let's say, whether it's OpenShift or, you know, the next-generation virtualization that we talked about, and so on, is about: how can we help customers fundamentally transform how it is that they build and deploy applications, both the new cloud-native ones and the existing ones? And what I really want to point to is: now we've got at least a five-year history on the OpenShift platform to look back at, if you will, to point out and say: here are customers that are running directly on bare metal; here's why they find, you know, this virtualization solution that, you know, we're providing so interesting. Here we have customers running in multiple different environments, running on OpenStack, running in multiple private clouds, er, sorry, public clouds, and why they want distributed cluster management across all of them. You know, here are the examples that, you know, we can provide, right? You know, here's the work we've done with, you know, whether it's these, you know, government agencies or private enterprises that we've talked about, right, you know, receiving innovation awards for the work we've been doing together. And so I think our approach really has been more about, you know, we want to work on innovation that is fundamentally impacting customers, transforming them, meeting them where they are, moving them forward into the world we're going into.
But they're also ensuring that we're taking advantage of all the existing investments that they've made in their skills, right? So taking advantage of, for example, the years of Linux expertise that they have, and saying: how can we use that to move you forward? >>Well, Ashesh, thank you so much. Absolutely, I know the customers I've talked to at Red Hat are talking about not only how they're ready for today, but feel confident that they're ready to tackle the challenges of tomorrow. So thanks so much. Congratulations on all the progress, and I definitely look forward to seeing you again in the future. >>Likewise. Thanks, Stu. >>All right, I'm Stu Miniman, and there's much more coverage from Red Hat Summit 2020. As always, thanks for watching the Cube.
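The operator notion that came up twice in this conversation (making databases and other stateful applications "much more manageable" on the platform) boils down to a reconcile loop: continuously compare the state a resource declares against what is actually running, and act on the difference. The sketch below is a minimal, hypothetical illustration in Python; the function names and state fields are invented, not the real Operator SDK API:

```python
# Minimal sketch of the reconcile loop at the heart of a Kubernetes
# operator: compare the desired state declared in a custom resource
# against the observed state of the cluster, and emit the actions
# needed to converge. Names and fields here are illustrative only.

def reconcile(desired: dict, observed: dict) -> list:
    """Return the actions needed to drive observed state toward desired."""
    actions = []
    if observed.get("replicas", 0) != desired["replicas"]:
        actions.append(("scale", desired["replicas"]))
    if observed.get("version") != desired["version"]:
        actions.append(("upgrade", desired["version"]))
    return actions

def apply_actions(observed: dict, actions: list) -> dict:
    """Pretend-apply each action, as a controller would against the API."""
    for verb, value in actions:
        observed["replicas" if verb == "scale" else "version"] = value
    return observed

desired = {"replicas": 3, "version": "4.4"}
observed = {"replicas": 1, "version": "4.3"}
observed = apply_actions(observed, reconcile(desired, observed))
assert reconcile(desired, observed) == []  # converged: nothing left to do
```

An operator packages exactly this kind of domain logic (scaling, upgrades, backups) alongside the application, which is also what makes the "over-the-air" platform updates mentioned earlier tractable.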
SUMMARY :
Summit 2020 Brought to you by Red Hat. Cloud platforms at Red Hat is great to see you. And of course, the big piece of that is, you know, I think that's a key value proposition that, you know, we're providing to our customers. So you know, as you said, the in place, you know, with the platform. Talk about open shift is you know, we talk kubernetes and we're talking container. you know, one control plane, one environment, one abstraction to manage workloads, So one of my concerns, you know, from early days of virtualization was well, let's shut things in a VM Yeah, and it's a really good point, you know, We've you know, so much to govern, probably too little time to do As you said, it's Siachin. um, I saw there's, you know, a partner of Red Hat. So this this notion that you know, and it was that kind of discussion that was really important, you know, can tools actually help it's the So you know, Our Deep in Dev Ops and the Dev Ops Handbook are you So you know, whether it's, you know, Microsoft Azure releasing arc. You know, here's the work we've done with, you know, whether it's these, you know, government agencies you again in the future. And much more coverage from Red Hat Summit 2020 as Yeah, Yeah, yeah,
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Ford | ORGANIZATION | 0.99+ |
BMW | ORGANIZATION | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Ashesh Badani | PERSON | 0.99+ |
2015 | DATE | 0.99+ |
Vodafone | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Europe | LOCATION | 0.99+ |
Ian Stewart | PERSON | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
Andy Jassy | PERSON | 0.99+ |
San Francisco | LOCATION | 0.99+ |
Joe Fernandez | PERSON | 0.99+ |
India | LOCATION | 0.99+ |
Last year | DATE | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
BMC | ORGANIZATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Badani | PERSON | 0.99+ |
tomorrow | DATE | 0.99+ |
Apache | ORGANIZATION | 0.99+ |
today | DATE | 0.99+ |
Java | TITLE | 0.99+ |
three minutes | QUANTITY | 0.99+ |
last year | DATE | 0.98+ |
five year | QUANTITY | 0.98+ |
Red Hat Summit 2020 | EVENT | 0.98+ |
Sam | PERSON | 0.98+ |
Argentine Ministry of Health | ORGANIZATION | 0.98+ |
First | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
Summit 2020 | EVENT | 0.98+ |
10 seconds | QUANTITY | 0.97+ |
One | QUANTITY | 0.97+ |
four | QUANTITY | 0.97+ |
both | QUANTITY | 0.97+ |
first time | QUANTITY | 0.96+ |
over 2000 customers | QUANTITY | 0.96+ |
Ministry of Health | ORGANIZATION | 0.95+ |
one environment | QUANTITY | 0.95+ |
ACM | ORGANIZATION | 0.94+ |
first release | QUANTITY | 0.94+ |
Atlassian | ORGANIZATION | 0.92+ |
Answerable Fest | EVENT | 0.92+ |
few years ago | DATE | 0.92+ |
Paul | PERSON | 0.91+ |
third area | QUANTITY | 0.91+ |
Dev Ops Handbook | TITLE | 0.91+ |
Cube | ORGANIZATION | 0.89+ |
this year | DATE | 0.89+ |
many years ago | DATE | 0.88+ |
Kubernetes | TITLE | 0.87+ |
Kim | PERSON | 0.86+ |
CI Project | ORGANIZATION | 0.84+ |
red hat | TITLE | 0.83+ |
Ops | ORGANIZATION | 0.81+ |
kubernetes | OTHER | 0.8+ |
java | TITLE | 0.79+ |
Red Hat | TITLE | 0.79+ |
1.17 | TITLE | 0.72+ |
2000 | QUANTITY | 0.71+ |
Dev Ops | TITLE | 0.66+ |
Joy King, Vertica | Virtual Vertica BDC 2020
>>Yeah, it's the Cube, covering the Virtual Vertica Big Data Conference 2020. Brought to you by Vertica.
And matters had we have been for a while, but that are particularly relevant right now. The first is a combination of, I guess, a disappointment in what Hadoop was able to deliver. I always feel a little guilty because she's a very reasonably capable elephant. She was designed to be HD fs highly distributed file store, but she cant be an entire zoo, so there's a lot of disappointment in the market, but a lot of data. In HD FM, you combine that with some of the well, not some the explosion of cloud object storage. You're talking about even more data, but even more data silos. So data growth and and data silos is Trend one. Then what I would say Trend, too, is the cloud Reality Cloud brings so many events. There are so many opportunities that public cloud computing delivers. But I think we've learned enough now to know that there's also some reality. The cloud providers themselves. Dave. Don't talk about it well, because not, is it more agile? Can you do things without having to manage your own data center? Of course you can. That the reality is it's a little more pricey than we expected. There are some security and privacy concerns. There's some workloads that can go to the cloud, so hybrid and also multi cloud deployments are the next trend that are mandatory. And then maybe the one that is the most exciting in terms of changing the world we could use. A little change right now is operationalize in machine learning. There's so much potential in the technology, but it's somehow has been stuck for the most part in science projects and data science lab, and the time is now to operationalize it. Those are the three big trends that vertical is focusing on right now. >>That's great. I wonder if I could ask you a couple questions about that. 
I mean, I, like you, have a soft spot in my heart for Hadoop, and the thing about Hadoop that was, I think, profound was it got people thinking about bringing compute to the data and leaving data in place, and it really got people thinking about data-driven cultures. It didn't solve all the problems, but it collected a lot of data that we can now — to take your third trend — apply machine intelligence on top of. And then the cloud is really the ability to scale, and it gives you that agility. And it's not really just the cloud itself, it's bringing the cloud experience to wherever the data lives. And I think that's what I'm hearing from you. Those are the three big superpowers of innovation today. >>That's exactly right. So, you know, I have to say, I think we all know that data analytics, machine learning — none of that delivers real value unless the volume of data is there to be able to truly predict and influence the future. So the last seven to ten years have been, correctly, about collecting the data and getting the data into a common location, and HDFS was well designed for that. But we live in a capitalist world, and some companies stepped in and tried to make HDFS and the broader Hadoop ecosystem be the single solution to big data. It's not true. So now the key is: how do we take advantage of all of that data? And that's exactly what Vertica is focusing on. As you know, we began our journey with Vertica back in the day, in 2007, with our first release, and we saw the growth of Hadoop. So we announced, many years ago, Vertica SQL on Hadoop — the idea being to deploy Vertica on Hadoop nodes and query the data in Hadoop. We wanted to help. Now, with Vertica 10, we are also introducing Vertica in Eon Mode, and we can talk more about that. But Vertica in Eon Mode for HDFS — this is a way to apply an ANSI SQL database management platform to HDFS infrastructure and data in HDFS file storage. And that is a great way to leverage the investment that so many companies have made in HDFS. And I think it's fair to the elephant to treat her well. >>Okay, let's get into the hard news on 10.0. You've got a mature stack, but what are the highlights of the announcement? And then we can drill into some of the technologies. >>Absolutely. So in 2018, Vertica announced Vertica in Eon Mode, which is the separation of compute from storage. Now, this is a great example of Vertica embracing innovation. Vertica was designed for on-premises data centers and bare-metal servers with tightly coupled storage — DL380s from Hewlett Packard Enterprise, Dell, et cetera. But we saw that cloud computing was changing fundamental data center architectures, and it made sense to separate compute from storage: you add compute when you need compute, and you add storage when you need storage. That's exactly what the cloud introduced, but it was only available in the cloud. So the first thing we did was architect Vertica in Eon Mode, which is not a new product — and this is really important — it's a deployment option. In 2018, our customers had the opportunity to deploy their Vertica licenses in Eon Mode on AWS. In September of 2019, we then broke an important record: we brought cloud architecture down to earth, and we announced Vertica in Eon Mode with communal, or shared, storage leveraging Pure Storage FlashBlade. That gave us all the advantages of separating compute from storage — the workload isolation, the scale-up and scale-down, the ability to manage clusters — and we did that with an on-premise data center. And now, with Vertica 10, we are announcing Vertica in Eon Mode on HDFS and Vertica in Eon Mode on Google Cloud.
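The separation of compute from storage that Joy describes can be sketched with a toy model: one communal store that owns the data, plus stateless compute nodes with disposable caches that can be added or removed without moving anything. This is an illustration of the architectural idea only, not Vertica's actual implementation; all names here are made up.

```python
class CommunalStorage:
    """Stands in for S3, HDFS, or an on-prem object store."""
    def __init__(self):
        self._objects = {}

    def put(self, key, value):
        self._objects[key] = value

    def get(self, key):
        return self._objects[key]

    def keys(self):
        return list(self._objects)


class ComputeNode:
    """A stateless worker: holds only a cache, never the authoritative data."""
    def __init__(self, name, storage):
        self.name = name
        self.storage = storage
        self.cache = {}  # safe to throw away at any time

    def read(self, key):
        if key not in self.cache:                 # cache miss -> fetch
            self.cache[key] = self.storage.get(key)
        return self.cache[key]


class Cluster:
    def __init__(self, storage):
        self.storage = storage
        self.nodes = []

    def scale_to(self, n):
        # Add or drop nodes freely: no data rebalancing is needed,
        # because the data lives in communal storage.
        while len(self.nodes) < n:
            self.nodes.append(ComputeNode(f"node-{len(self.nodes)}", self.storage))
        del self.nodes[n:]

    def query_sum(self, keys):
        # Spread the shards round-robin across whatever nodes exist now.
        return sum(self.nodes[i % len(self.nodes)].read(k)
                   for i, k in enumerate(keys))


storage = CommunalStorage()
for i in range(10):
    storage.put(f"shard-{i}", i)

cluster = Cluster(storage)
cluster.scale_to(3)
total_3 = cluster.query_sum(storage.keys())  # 0+1+...+9 = 45
cluster.scale_to(1)                          # scale down: no data moves
total_1 = cluster.query_sum(storage.keys())  # same answer, fewer nodes
```

The point of the sketch is the last four lines: scaling the compute tier up or down changes only which workers answer the query, not where the data lives.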
So what we've got here, in summary, is Vertica in Eon Mode across multiple clouds and multiple on-premise data center storage options, and that gives us the opportunity to help our customers both with the hybrid and multi-cloud strategies they have and with unifying their data silos. But Vertica 10 goes farther. >>Well, let me stop you there, because I want to mention — we talked to Joe Gonzalez at Mass Mutual, who essentially was brought in, and one of his tasks was to lead the move into Eon Mode. Why? Because they still had three separate data silos and they wanted to bring those together. They're investing heavily in technology, Joe is an expert, they really put data at their core, and Eon Mode was a key part of that because they're using S3. So that was a very important step for those guys. Carry on — what else do we need to know? >>So one of the reasons, for example, that Mass Mutual is so excited about Eon Mode is the operational advantages. Think about exactly what Joe told you: multiple clusters serving multiple use cases and maybe multiple divisions. And look, let's be clear: marketing doesn't always get along with finance, finance doesn't necessarily get along with ops, and IT is often caught in the middle. Vertica in Eon Mode allows workload isolation, meaning allocating the compute resources that different use cases need without allowing them to interfere with other use cases, while still allowing everybody to access the data. So it's a great way to bring the corporate world together but still protect them from each other. And that's one of the things that Mass Mutual is going to benefit from, as well as so many of our other customers. >>I also want to mention — when I saw you last year at the Pure Storage Accelerate conference, you said, "Today we are the only company that separates compute from storage that runs on-prem and in the cloud." And I had to think about it. I've researched, and I still can't find anybody else who does that. I want to mention you actually beat a number of the cloud players to that capability. So good job, and I think it's a differentiator — assuming that you're giving me that cloud experience in the licensing and the pricing capability. So I want to talk about that a little bit. >>Well, you're absolutely right. So let's be clear: there is no question that the public clouds introduced the separation of compute and storage and the advantages that delivers, but they do not have the ability, or the interest, to replicate that on-premise. Vertica was born to be software-only. We make no money on underlying infrastructure; we don't charge for the hardware underneath as a package. So we are totally motivated to be independent of that, and also to continuously optimize the software to be as efficient as possible. And we do the exact same thing with licensing, to your question. Cloud providers charge per node or per instance — that's how they charge for their underlying infrastructure. Well, in some cases, if you're talking about a use case where you have a whole lot of data but you don't necessarily have a lot of compute for that workload, it may make sense to pay per node, because then it's unlimited data. But what if you have a huge compute need on a relatively small data set? That's not so good. Vertica offers per-node and per-terabyte licensing for our customers, depending on their use case. We also offer perpetual licenses for customers who want CapEx, but we also offer subscriptions for companies that say, "Nope, I have to have OpEx." And while this can certainly cause some complexity for our field organization, we know that it's all about choice — everybody in today's world wants it personalized, just for me. And that's exactly what we're doing with our pricing and licensing. >>So just to clarify, you're saying I can pay by the drink if I want to.
You're not going to force me necessarily into a term, or I can choose to have, you know, more predictable pricing. Is that correct? >>Well, it's partially correct. First, Vertica's subscription licensing is a fixed amount for the period of the subscription. We do that because so many of our customers — and I'm one of them, by the way — cannot tell finance that the budget forecast for the quarter is going to be different after they've already said what it's going to be. So our subscription pricing is a fixed amount for a period of time. However, we do respect the fact that some companies want usage-based pricing. So on AWS, you can use Vertica by the hour and pay by the hour, and we are about to launch the very same thing on Google Cloud. So for us, it's about: what do you need? And we make it happen, natively, directly with us or through AWS and Google Cloud. >>So I want to understand — the fixed price isn't some floor where, if you want to surge above it, you can move to usage pricing if you're on the cloud, correct? >>Well, you can actually license your Vertica cluster by the hour on AWS and run your cluster there. Or you can buy a license from Vertica for a fixed capacity or a fixed number of nodes and deploy it on the cloud. And then, if you want to add more nodes or more capacity, you can. It's not usage-based for a license that you bring to the cloud, but if you purchase through the cloud provider, it is usage-based. >>Yeah, okay. And you guys are in the marketplace, is that right? So, again, if I want OpEx, I can choose to do that. >>Exactly — usage through the AWS Marketplace or, yeah, directly from Vertica. >>Because every small business that signs up for a salesforce management system knows this: "Okay, great, I can pay by the month." Well, yeah — well, not really. Here's our three-year term, sign it, right? And it's very frustrating.
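To make the per-node versus per-terabyte trade-off Joy describes concrete, here's a toy cost comparison. The prices below are purely hypothetical, chosen only to illustrate the shape of the decision — they are not Vertica's actual list prices.

```python
# Hypothetical list prices -- illustration only, not real Vertica pricing.
PRICE_PER_NODE_PER_YEAR = 10_000  # data volume unlimited under this plan
PRICE_PER_TB_PER_YEAR = 2_000     # node count unlimited under this plan

def annual_cost(nodes, terabytes):
    """Compare the two licensing shapes for a given workload profile."""
    per_node = nodes * PRICE_PER_NODE_PER_YEAR
    per_tb = terabytes * PRICE_PER_TB_PER_YEAR
    return {"per_node": per_node, "per_tb": per_tb,
            "cheaper": "per_node" if per_node < per_tb else "per_tb"}

# Lots of data, modest compute: per-node licensing wins.
data_heavy = annual_cost(nodes=4, terabytes=100)    # 40,000 vs 200,000

# Heavy compute on a small data set: per-terabyte licensing wins.
compute_heavy = annual_cost(nodes=40, terabytes=10)  # 400,000 vs 20,000
```

The crossover depends entirely on the ratio of nodes to terabytes in your workload, which is why offering both shapes (plus hourly through the cloud marketplaces) amounts to the "optionality" discussed in this exchange.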
>>Well, and even in the public cloud you can pay by the hour, by the minute, or whatever, but it becomes pretty obvious that you're better off with reserved instance types or committed amounts. And to that point, Vertica offers a subscription that says: "Hey, you want 100 terabytes for the next year? Here's what it will cost you." We do interval billing — monthly, quarterly, biannually, we'll do that. But we won't charge you for usage that you didn't even know you were using until after you get the bill. And frankly, that's something my finance team does not like. >>Yeah. You know, I know this is kind of a wonky discussion, but so many people gloss over the licensing and the pricing, and I think my takeaway here is optionality — you know, pricing your way. That's great; thank you for that clarification. Okay, so you've got Google Cloud. I want to talk about storage optionality. If I add them up, I've got HDFS, I've got, I'm presuming, Google now, and you're on Pure — >>which is S3-compatible object storage, yes. >>And Google object store? >>Right: Google object store, Amazon S3 object store, HDFS, and Pure Storage FlashBlade, which is an object store on-prem. And we are continuing on this path, because ultimately we know that our customers need the option of a next-generation data center architecture, which is shared, or communal, storage — all the data is in one place, and workloads can be managed independently on that data. That's exactly what we're doing, and we already have two public clouds and two on-premise deployment options today. And as you said — you did challenge me back when we saw each other at the conference — today, Vertica is the only analytic data warehouse platform that offers that option on-premise and in multiple public clouds. >>Okay, let's go back through the innovation cocktail, I'll call it. So it's the data, applying machine intelligence to that data.
And we've talked about scaling in the cloud and some of the other advantages. Let's talk about the machine intelligence, the machine learning piece of it. What's your story there? Give us any updates on your embrace of tooling and the like. >>Well, quite a few years ago we began building native in-database machine learning algorithms into Vertica, and the reason we did that was we knew that the architecture of MPP columnar execution would dramatically improve performance. We also knew that a lot of people speak SQL, but at the time not so many people spoke R or even Python. So what if we could give access to machine learning in the database via SQL and deliver that kind of performance? That's the journey we started on. And then we realized that machine learning is a lot more, as everybody knows, than just algorithms. So we built in the full end-to-end machine learning functions, from data preparation to model training, model scoring, and evaluation, all the way through to deployment — and all of it, again, SQL-accessible. You speak SQL, you speak to the data. And the other advantage of this approach was we realized that accuracy was compromised if you down-sampled — if you moved a portion of the data from the database to a specialty machine learning platform, you were challenged on accuracy and also on what the industry is calling replicability. That means if a model makes a decision — let's say credit scoring — and that decision is in any way challenged, you have to be able to replicate it to prove that you made the decision correctly. There was a bit of a blow-up in the media not too long ago about a credit-scoring decision that appeared to be gender-biased. But unfortunately, because the model could not be replicated, there was no way to disprove that, and that was not a good thing. So all of this is built into Vertica, and with Vertica 10 we've taken the next step. Just like with Hadoop, we know that innovation happens within Vertica but also outside of Vertica. We saw that data scientists really love their preferred languages, like Python, and they love their tools and platforms, like TensorFlow. With Vertica 10, we now integrate even more with Python — which we have for a while — and we also add TensorFlow integration and PMML support. What does that mean? It means that if you build and train a model external to Vertica, using the machine learning platform that you like, you can import that model into Vertica and run it through the full end-to-end process — but run it on all the data. No more accuracy challenges, and MPP columnar execution, so it's blazing fast. And if somebody wants to know why a model made a decision, you can replicate that model and you can explain why. Those are very powerful. And it's also another cultural unification, Dave: it unifies the business analyst community, who speak SQL, with the data scientist community, who love their tools like TensorFlow and Python. >>Well, I think, Joy, that's important, because in so much of machine intelligence and AI there's a black-box problem: you can't replicate the model, and then you run into potential bias. In the example you're talking about — and there are many — let's say an individual is very wealthy. He goes for a mortgage, and his wife goes for some credit; she gets rejected, he gets accepted. It's the same household, but the bias in the model may be gender bias, or it could be race bias. So being able to replicate that, and open up and make the machine intelligence transparent, is very, very important. >>It really is. And that replicability, as well as accuracy, is critical, because if you're down-sampling and running models on different sets of data, things can get confusing. And yet you don't really have a choice, because if you're talking about petabytes of data, and you need to export that data to a machine learning platform and then try to put the model back and get to the next iteration, you're looking at way too much time. Doing it in the database — or training the model externally and then importing it into the database for production — that's what Vertica allows. And our customers are all over it. They are, of course, the ones that have always been the trailblazers, and this is the next step in blazing the ML trail. >>Joy, customers want analytics — full-function analytics. What are they pushing you for now? What are you delivering? What's your thought on that? >>Well, I would say the number one thing our customers are demanding right now is deployment flexibility. Whatever the CEO or the CFO mandated six months ago — "thou shalt" — that "thou shalt" is now different. And what I tell them is: it is impossible to know what you're going to be commanded to do, or what options you might have, in the future. The key is not having to choose, and they are very, very committed to that. We have a large telco customer whose commitment is multi-cloud. Why multi-cloud? Because they see innovation available in different public clouds and they want to take advantage of all of them. They also admit that there's the risk of lock-in with any vendor, and they don't want that either, so they want multi-cloud. We have other customers who say, "We have some workloads that make sense for the cloud and some that we absolutely cannot put in the cloud, but we want a unified analytics strategy." So they are adamant in focusing on deployment flexibility. That's what I'd say is first. Second, I would say the interest in operationalizing machine learning — but without forcing the analytics team to hammer the data science team about which tools are the best tools. That's probably number two.
And then I'd say number three: when you look at companies like Uber or The Trade Desk or AT&T or Cerner, it's performance at scale. When they say milliseconds, they think that's slow; when they say petabytes, they're like, "Yeah, that was yesterday." So performance at scale — good enough, for Vertica, is never good enough, and it's why we're constantly building at the core: the next-generation execution engine, the database designer, the optimization engine, all that stuff. >>I also want to ask you — when I first started following Vertica, when theCUBE covered the BDC, one of the things I noticed in talking to customers and people in the community is that you have a community edition, a free edition, and it's not neutered. Have you maintained that ethos through the transitions into Micro Focus? Can you talk about that a little bit? >>Absolutely. Vertica Community Edition is Vertica. All of the Vertica functionality — geospatial, time series, pattern matching, machine learning, Vertica in Eon Mode, Vertica in Enterprise Mode — all of Vertica is in the Community Edition. The only limitation is one terabyte of data and three nodes, and it's free. Now, if you want commercial support, where you can file a support ticket and things like that, you do have to buy a license. But it's free, and people say, "Well, free for how long?" Our field has asked that, and I say, "Forever." "What do you mean, forever?" Because we want people to use Vertica for use cases that are small, when they want to learn, when they want to try, and we see no reason to limit that. What we look for is, when they're ready to grow — when they need the next set of data that goes beyond a terabyte, or they need more compute than three nodes — then we're here for them. And that brings up an important thing that I should remind you of, or tell you about, Dave, if you haven't heard it: the Vertica Academy, at academy.vertica.com. What is that? That is self-paced, on-demand training, as well as Vertica Essentials certification. Certification means you get seven days with your hands on a Vertica cluster, hosted in the cloud, to go through all the certification work. And guess what — all of that is free. Why would you give it away for free? Because, for us, empowering the market — giving the market the expertise and the learning they need to take advantage of Vertica — is, just like with Community Edition, fundamental to our mission. We see the advantage that Vertica can bring, and we want to make it possible for every company, all around the world, to take advantage of it. >>I love that ethos of Vertica. I mean, obviously a great product, but it's not just the product — it's the business practices, the really progressive pricing, and the embracing of all these trends — not running away from the waves but really leaning in. Joy, thanks so much. Great interview; really appreciate it. I wish we could have been face to face in Boston, but I think this was the prudent thing to do. >>I promise you, Dave, we will, because the Vertica BDC in 2021 is already booked. So I will see you there. >>Joy King, thanks so much for coming on theCUBE. And thank you for watching. Remember, theCUBE is running this program in conjunction with the Virtual Vertica BDC — go to vertica.com for all the coverage — and keep it right there. This is Dave Vellante with theCUBE. We'll be right back.
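One technical point from the conversation above is worth making concrete: Joy's argument that down-sampling — training on a small extract rather than the full dataset — costs accuracy and replicability. Here is a toy illustration with synthetic data; nothing below uses Vertica, and the data, cutoffs, and one-parameter "model" are all made up for the sketch.

```python
import random

random.seed(42)
TRUE_CUTOFF = 0.8  # the rule that actually generates the labels

def make_rows(n):
    """Label is 1 when a noisy copy of the feature exceeds TRUE_CUTOFF."""
    rows = []
    for _ in range(n):
        x = random.gauss(0.0, 1.0)
        label = 1 if x + random.gauss(0.0, 0.5) > TRUE_CUTOFF else 0
        rows.append((x, label))
    return rows

def fit_cutoff(rows):
    """'Train' a one-parameter model: pick the cutoff on x that best
    separates the labels in the given rows."""
    candidates = [i / 50 for i in range(-100, 101)]  # -2.00 .. 2.00
    def accuracy(t):
        return sum((x > t) == bool(y) for x, y in rows) / len(rows)
    return max(candidates, key=accuracy)

full = make_rows(10_000)
full_fit = fit_cutoff(full)                        # fit on all the data
sample_fit = fit_cutoff(random.sample(full, 100))  # fit on a 1% extract

print(f"true cutoff {TRUE_CUTOFF}; fit on full data {full_fit:.2f}; "
      f"fit on 1% sample {sample_fit:.2f}")
```

With the full dataset, the fitted cutoff lands close to the true one; a tiny extract typically recovers it much more noisily, and because the extract itself varies from run to run, a decision made from it is also harder to replicate after the fact — which is the replicability problem described in the interview.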
Ian Tien, Mattermost | GitLab Commit 2020
>>From San Francisco, it's theCUBE, covering GitLab Commit 2020, brought to you by GitLab. >>Welcome back, I'm Stu Miniman, and this is GitLab Commit 2020 here in San Francisco. Happy to welcome to the program a first-time guest, Ian Tien, who is the co-founder and CEO of Mattermost. Ian, nice to meet you. >>Thanks, thanks for having me. >>Alright, so I always love, when we get the founders, to go back a little bit to the "why." And from our little bit of conversation, there's a connection with GitLab — you have a relationship with Sid, who's the co-founder and CEO of GitLab. So bring us back and tell us a little bit about that. >>Yeah, thanks. So, you know, I'm ex-Microsoft — I came from collaboration, many years there. And what I did after Microsoft is I started a video game company that was backed by Y Combinator. We had an HTML5 game engine — it was very, very fun — and we ran the entire company off of a messaging product. This was a little while ago, and it happens that the messaging product got bought by a big company and got kind of neglected. It started crashing and losing data. We were super unhappy. We tried to export, and they wouldn't let us export — we had 26 gigs of our information — and when we stopped paying our subscription, they would paywall us from our own information. So, you know, very unhappy, and we were like, holy cats, what are we going to do? And rather than go to another platform — we realized there were about 10 million hours of people running messaging in video games — why don't we build this ourselves? So we built a little prototype and started using it ourselves internally. And because this was 2015, and Sid was out of Y Combinator and we were Y Combinator, we started talking. I was showing him what we built, and Sid's like, "You should open-source that." And he had this really compelling reason. He's like, "Well, if you open-source it and people like it, you can always close-source it again, because it's a prototype. But if you open-source it and no one cares, you should stop doing what you're doing." And he was great — he sent me this email with all the things you need to do to run an open-source business, and it was just wonderful. And it just started taking off. We started getting these wonderful, amazing enterprise customers that really saw what Mattermost was at the very beginning. Some people call us "open-source Slack," but what it really is, is a collaboration platform for real-time DevOps, and it really is, for people who are regulated, something that offers flexibility, on-prem deployment, and a lot of security and customization. So that's how we started. And with GitLab, we kind of followed in GitLab's footsteps, and what you'll find today with GitLab is that we're bundled with the Omnibus. So all you have to do is put in what URL you'd like Mattermost on, run a GitLab reconfigure, and you're up and running. >>Yeah, I love that. That story — I'd love you to tease out a little bit: when you hear, you know, "open source," "communications," and "secure," those might not be things that people would necessarily put together. So help us understand a little bit of the underlying architecture. This isn't just messaging — how is it different from things people would be familiar with? >>Yeah, that's a great question: how do you get more secure with open-source products? I'll just give you one example: mobility. In mobile today, if you're sending a push notification to an iOS or an Android device, it has to route through Apple or Google, right? And whatever app you're using to send those notifications, they're going to see your notifications. They have to, right?
So you just get encryption all that stuff in order to send to Google and Andrew, you have to send it on encrypted. And you know these applications are not there, not yours. They're owned by another organization. So how do you make that private how to make it secure? So with open source communication, you get the source code. It's an extreme case like we have you know, perhaps you can views, and it's really simple in turnkey. But in the if you want to go in the full privacy, most security you have the full source code. APS. You have the full source code to the system, including what pushes the messages to your APS, and you can compiling with your own certificates. And you can set up a system where you actually have complete privacy and no third party can actually get your information. And why enterprises in many cases want that extreme privacy is because when you're doing incident response and you have information about a vulnerability or breach that could really upset many, many critical systems. If that information leaked out, you really can't. Many people don't want ever to touch 1/3 party. So that's one example of how open source lets you have that privacy and security, because you because you control everything >>all right, what we threw a little bit the speeds and feeds. How many employees do you have? How many did you share? How many customers you have, where you are with funding? >>So where we are funding is, you know, last year we announced a 20 million Siri's A and A 50 million Siri's be who went from about 40 folks the beginning the aired about 100 a t end of the year. We got over 1000 people that contribute to matter most, and what you'll find is what you'll find is every sort of get lab on the bus installations. Gonna have a matter most is gonna have the ability to sort of turn on matter most so very broad reach. It's sort of like one step away. There's lots of customers. You can see it. Get lab commit that are running matter. 
Most get lab together, so customers are going to include Hey, there's the I T K and Agriculture that's got six times faster deployments running. Get lab in Madame's together, you've got world line. It's got 3000 people in the system, so you've got a lot of so we're growing really quickly. And there's a lot of opportunity working with Get lab to bring get lab into mobile into sort of real times. Dev up scenarios. >>Definitely One of the themes we hear the at the show is that get labs really enabling the remote workforce, especially when you talk about the developers. It sounds like that's very much in line with what matters most is doing. >>Absolutely. Madam Mrs Moat. First, I don't actually know. We're probably in 20 plus countries, and it's it's a remote team. So we use use matter most to collaborate, and we use videoconferencing and issue tracking across a bunch of different systems. And, yeah, it's just it's remote. First, it's how it's how we work. It's very natural. >>Yeah, it just give us a little bit of the inside. How do you make sure, as a CEO that you, you know, have the culture and getting everyone on the same page when many of them, you know, you're not seeing them regularly? Some of them you've probably never met in person, so >>that's a great question. So how do you sort of maintain that culture 11? The concert that get lips pioneered is a continent boring solutions, and it's something that we've taken on as well. What's the most boring solution to preserve culture and to scale? And it's really do what get labs doing right? So get love's hand, looked up. Get lab dot com. We've got handbook that matter most dot com. It's really writing down all the things that how we operate, what our culture is and what are values are so that every person that onboard is gonna get the same experience, right? And then what happens is people think that if you're building, you're gonna have stronger culture because, you know, sort of like, you know, absorbing things. 
What actually happens is it's this little broken-telephone game that starts echoing out, and as opposed to going to one source of truth, it's everyone's interpretation. We have a handbook, and you're forced to write things down. It's a very unnatural act, and when you force people to write things down, then you get that consistency, and everyone can go to a source of truth and say, like, this is the way we operate. >>2019 was an interesting year for open source. There were certain companies that were changing their models as to how they do things. You started out open source to be able to get, you know, direct feedback. But how do you position and talk to people about, you know, the role of open source and still being able to have a business around that? >>So with open source, I think there's a generation of open source companies; there are three ways you can really make money from open source, right? You can host software, you can provide support and services, or you can do licensing, which is an open core model. You see categories of companies like Elastic, like HashiCorp with Terraform, along with GitLab, that have chosen the open core model. And this is really becoming sort of a standard, and what we do is we follow that standard, and we know that it supports public companies and supports companies with hypergrowth like GitLab. So it's becoming a model that's actually quite familiar to the market, and what we see is this sort of generation, this sort of movement of: okay, there were operating systems, Windows, and now there are more servers running Linux than Windows Server on Azure. You've seen virtualization technology, you've seen databases, all sort of go the open source way, and we see that it's a natural progression of collaboration.
So it's really like: we believe collaboration will go the open source way, and we believe leading the way to do that is through open core, because you can generate a sustainable, scalable business that's going to give enterprises the confidence to invest in the right platform. >>All right, and what's on deck for Mattermost in 2020? >>We definitely want to work with GitLab a lot more. We really want to go from this concept of concurrent DevOps, that GitLab has really championed, to, say, real-time DevOps. So we've got DevOps in the world that's taking months and weeks of cycle time, and we want to bring that down to minutes. We want to take, you know, all your processes that take hours and take them down to seconds. What developers are really sort of clamoring for a lot is, like, well, how do we get these things if I'm regulated, if I have a lot of customization needs, if I'm on premise, if I'm in a private network? How do I get to mobile? How do I get quicker interactions? We really want to support that with incident response, with DevSecOps use cases, and with really having a complete solution that can go from all your infrastructure in your data center to, you know, that really important person walking through the airport. And that's how you speed cycle times and make DevSecOps available anywhere, and you do it securely and do it privately. >>All right, thanks so much for meeting with us, and great to hear about Mattermost. >>Well, thank you, Stu. >>All right. Be sure to check out thecube.net for all the coverage that we will have throughout 2020. I'm Stu Miniman, and thanks for watching the Cube.
Sandra Hamilton, Commvault | Commvault GO 2019
>>Live from Denver, Colorado, it's the Cube, covering Commvault GO 2019. Brought to you by Commvault. >>Welcome back to the Cube's day two of our coverage of Commvault GO '19. Lisa Martin here with Stu Miniman. We are in Colorado. Please welcome to the Cube Sandy Hamilton, the VP of customer success. She's been at Commvault four and a half months. So, Sandy, welcome to the Cube. >>Thank you very much for having me. I really appreciate the opportunity to sit here with you this morning and share a little bit about what's going on at Commvault. And it's been great that you guys are here. >>It's been fantastic. We had a great day yesterday. We got to speak with Sanjay, with Rob, Don Foster, Mercer, a whole bunch of your customers. Exactly the vibe, the positivity, from the channel to the customer to the core. Even the OG Commvault guys that I worked with a couple of times 10 years ago that are still here. It does really feel like a new Commvault, and you're part of that. Sanjay brought you in in the spring of 2019, and we've seen a lot of progress and a lot of momentum from Commvault in terms of leadership changes, sales structure, new programs for the channel. Exciting stuff. You kicked off this morning's keynote, and you had the opportunity to introduce Jimmy Chin, who, if you haven't seen Free Solo... I haven't seen it; I'm watching it as soon as I get home. Amazing. But what a great way to introduce failure and why it's important to be prepared, because it is going to happen. I just thought that was a great tone, especially coming from you, who leads customer success. >>Absolutely. Thank you, Lisa, very much, and good morning, Stu. Appreciate it. You know, it's interesting, because when I think about customer success here at Commvault, there are so many different facets to it. It really is all about engaging with our customers across everything that they do, and we want to make sure our customers are prepared for something that will likely happen to them someday.
>>Right. We have one of our customers talking about a cyber attack they had on their environment and how we were actually able to help them recover. So it's also that preparedness that Jimmy talked about, right? And making sure that you are training as much as you can, being prepared for what may come, and knowing how to recover from it, as he talked about. I also think one of the things that we do really well is we listen to our customers when they give us feedback. So it's about: how did those customers use what we did differently, or how did they try it and it wasn't exactly what they thought? And so how do we continue to innovate with the feedback from our customers? >>Sandy, one of the things we're hearing loud and clear from your customers is they're not alone; they're ready. We have Matthew coming on a little bit later, and he's like, I'm here, and my other person that does disaster recovery, he's here too. So, you know, I'm not doing my own free solo. We've been talking about how, in tech, it's the technology and the people working together. You talked a little bit in your keynote about automated workflows, machine learning. Talk about some of those pieces, as to how the innovation that Commvault is bringing out is going to enable and simplify the lives of customers. >>Yeah, I mean, I think it does come down to how we're really taking care of the back end, if you will, from a technology perspective, and what we can make more automated, you know, more secure. You think about things like, I was even talking about, new automated workflows around scheduling, even your backup windows, right? And if you think about, you know, the complexity that goes into scheduling all of that across all of your environments, we have the ability to actually have you just set what your windows should be, and we'll manage all the complexities in the background, which allows customers to go do things like this.
>>So Sandy, I tell you, for some of us there's that little bit of nervousness around automation, and even customers talking about, oh, well, I can just do it over text. And I'm just thinking back to how many times I've responded to the wrong text thread, and oh my gosh, what if that was my, you know, data that I did the wrong thing with? >>Yeah. I mean, you know, one of the things that I love about this company, and again, I've been here for a short period of time, but our worldwide customer support organization is just, you know, one of the hallmarks, I think, of this company, right? And how we're actually there for those customers at any point in time, whenever they need any type of, um, you know, help and support. And it isn't just when something goes wrong; we also proactively have professional services people, you know, we have all kinds of folks in between. Our partners play a huge role in making sure that our customers are successful with what they have going on. >>Let's dig into and dissect the customer life cycle. Help us understand what that's like for an existing Commvault customer, because we talked to a couple of them yesterday who've been Commvault customers for, you know, a decade. So walk us through a customer life cycle for an incumbent customer, as well as for a new customer, who, like Sanjay said yesterday, one of the things that surprised him is that a lot of customers don't know Commvault. So what's the life cycle like for the existing customers and those new ones? >>Yeah, so, you know, with the fantastic install base of customers that we have today, one of the things that we are striving to continue to do is to make sure we're engaged with them from the beginning to the end. And the end isn't when they end; it's when, you know, we're then fully deployed, helping them do what they need to do in their environment.
I think one of the great things about where we are with Commvault right now is we actually have new products, new technologies, right, that you guys have been exposed to. How are we making sure that the customers that we've had for a while truly understand what those new capabilities are? So if you think about it, for us it's how we're helping them to actually do more with their existing Commvault investment and potentially leverage us in other ways across their environment. Um, so we have, you know, our team of great, uh, sales reps, as well as our fantastic sales engineers, all the way through, again, you know, PS and support. Those people are always in contact with our customers, helping them to understand what we can really do across that life cycle, and if they need to make changes along the way, we're here to help them, you know, do that as well. For a newer customer, one of the things that we're really focused on right now is that initial sort of onboarding for them and what that experience is like for those customers. So having more of a programmatic touch with those customers, to make sure that we're more consistent in what we're doing, so they are actually receiving a lot of the same information at the same time, and we're able to actually help them, frankly, in a more accelerated fashion, which is, I think, really important for them to get up and running as well. >>When we talked about Metallic yesterday with Rob and some other folks, and I think a gentleman from Sirius, one of your launch partners (yes, Michael Gump), you know, the fact that that technology has the ability for partners to evaluate exactly what is going on with their customers, so that they can potentially even be predictive for customers in terms of whether they're backing up endpoints or O365, I thought that was a really interesting capability that Commvault now has.
It's giving that insight and the intelligence even to the partners, to be able to help those customers make better decisions before they even know what to do. >>Exactly. And our partners are such a key part here of everything that we're really trying to do, and especially with Metallic, it's all through partners, right? And so we're really trying to drive that behavior, and that means we really have to ensure that we are bringing all of those partners into the same fold. They should have the same, you know, capabilities that we do. One of the things that I'm trying to work on right now is how we're making sure our partners are better enabled around the things that we have in the capability. So we're working on, as part of those partner programs that you mentioned: do they have the right tools, if you will, and knowledge to go do what they need to go do to help our customers as well? Because it really is a partnership. >>Yeah. So Sandy, we've been looking at various different aspects of the change required to deliver Metallic, which is now a SaaS offering, from a services and from a support standpoint. I think of a different experience with SaaS as opposed to enterprise software. So bring us your perspective. >>Yeah. This
But when you also then think about that type of a model, you start to think about consumption matters, right? And how much they're using and are they using everything that they purchased. >>And so we actually have a small team of customer success managers right now in the organization that are working with all of the new customers that we have in the SAS world to say, how are you doing? How's that going? You know, how's your touch? Is there anything that's presenting a challenge for you? Making sure they really do fully understand the capabilities end to end of that technology so that we can really get them onboarded super quick. As you probably know from talking to those guys, we're not having any services really around metallic cause it's not designed to need those services, which is huge. You know, I think in not only the SAS space but for Convolt as well. I think it's a new era and it also provides, frankly an opportunity for our partners to continue to engage with those customers going forward as well. >>One of the first things that I reacted to when I saw metallic, a Combalt venture was venture. I wanted to understand that. And so as we were talking yesterday with some of the gentlemen I mentioned, it's a startup within Combalt. Yeah. So coming from puppet but shoot dead in which Sonjay Mirchandani ran very successfully. Got puppet global. Your take on going from a startup like puppet to an incumbent like convo and now having this venture within it. Yeah. You know, I think it's one of the brilliant things that Sanjay and the team did very early on to recognize what Rob Calu, Ian and the rest of the folks were doing around this idea of what is now metallic. And they had been noodling it and Sanjay's like, that's got a really good opportunity. However we got to go capitalize on that now and bring that to market for our customers now. 
>>And if we had continued on in the way that we were, which is where it was night jobs and we didn't necessarily have all the dedicated people to go do it, you know, we may not have metallic right now. And so it was, it was really a great thing within the company to really go pull those resources out of what they were doing and say, you guys are a little startup, you know, here you go do it. And we actually had a little celebratory toast the other night with that team because of what, just a fantastic job that they've done. And one of the common threads in something everybody said was the collaboration that it really brought, not only within that team but across Combalt because there's a singular goal in bringing this to market for our customers. So it's been a great experience. I think we're going to leverage it and do more. So Sandy, >>before we let you go, need to talk a little bit about the. >>Fabulous. If I had one here I would, but I don't. So, um, a couple of months ago at VMworld, I don't know if you guys were there, you guys were probably there. Um, we actually started this thing called the D data therapy dog park. And there we had a number of puppies and they were outside. Folks came by, you know, visited. They stopped, they distressed, they got to pet a puppy. I mean, the social media was just out of this world, right? And we had San Francisco policemen there. It was, it was, it was great. Even competitors, I will say even competitors were there. It was, it was pretty funny. But, um, by the end of it, over 50% of the dogs that were there actually got adopted out, um, you know, into homes where they otherwise wouldn't have. Um, since then there've been a couple of people that have actually copied this little idea and you know, P places are springing up. >>So we have a, what we call it, data therapy dog park here where you can go in and get your puppy fix, you know, sit with the dogs and relax for a bit. 
But you know, we're super excited about it as well because, you know, it's sort of a fun play on what we do, but, but it's also, I think, you know, a great thing for the community and something that is near and dear to my heart. I have four dogs. Um, and so I'm not planning on taking another one home, but I'm doing my best to get some of these adopted. So if anybody out there is interested, just let me know. >>Oh, that was adoptable. All of them cheese. I'm picking up a new puppy and about eight days. So other ones of friends. I've got to have dogs enough for you. Do you need a third? We'll have a friend that has two puppies at the same time and said it's not that much more. I have had one before. You're good to go. We can, we can hook you up. Oh no. But one of the great things is it also, first of all, imitation is the highest form of flattery or for other competitors that are doing something similar, but you also just speak to the fact that we're all people, right? We are. We're traveling, especially for people that go to a lot of conferences and it's just one of those nice human elements that similar with the stories that customers share about, Hey, this is a failure that we had and this is how it helped us to recover from that. It's the same thing with, you can't be in a bad mood with, I think puppies, cupcakes and balloons. So if there were, I know that I could finish a show today >>that's like I took one of the little puppies when I was rehearsing yesterday on main stage. I took one of them with me out there and I was just holding it the whole time, you know? It was really, >>this was great. I'm afraid to venture back into the data therapy document. You're proud taking another one home OU was. Andy. It's been a pleasure to have very much. I appreciate it. Appreciate the time. Thank you and hope you have a great rest of the event. If you need anything, let us know. 
I'm sure we will and I can't wait to talk to you next year when you've been a comm vault for a whole like 16 months and hearing some great stories we do as well. All right. Take care. First two men, a man, Sandy Hamilton, the puppies, and I'm Lisa Martin. You're watching the cue from Convault go and 19 thanks for watching.