Image Title

Search Results for 20 node:

Chris Jones QA Session **DO NOT PUBLISH**


 

(upbeat music) >> Okay, welcome back everyone. I'm John Furrier here in theCUBE, in Palo Alto for "CUBE Conversation" with Chris Jones, Director of Product Management at Platform9. I've got a series of questions, had a great conversation earlier. Chris, I have a couple questions for you, what do you think? >> Let's do it, John. >> Okay, how does Platform9 Solution, you- can it be used on any infrastructure anywhere, cloud, edge, on-premise? >> It can, that's the beauty of our control plane, right? It was born in the cloud, and we primarily deliver that SaaS, which allows it to work in your data center, on bare metal, on VMs, or with public cloud infrastructure. We now give you the ability to take that control plane, install it in your data center, and then use it with anything, or even in air gap. And that includes capabilities with bare metal orchestration as well. >> Second question. How does Platform9 ensure maximum uptime, and proactive issue resolution? >> Oh, that's a good question. So if you come to Platform nine we're going to talk about always on assurance. What is driving that is a system of three components around self-healing, monitoring, and proactive assistance. So our software will heal broken things on nodes, right? If something stops running that should be running, it will attempt to restart that. We also have monitoring that's deployed with everything. So you build a cluster in AWS, well, we put open source monitoring agents, that are actually Prometheus, on every single node. That means it's resilient, right? So if you lose a node, you don't lose monitoring. But that data importantly comes back to our control plane, and that's the control plane that you can put in your data center as well. That data is what alerts us, and you as a user, anytime of the day that something's going wrong. Let's say etcd latency, good example, etcd is going slow. We'll find out, we might not be able to take restorative action immediately, but we're definitely going to reach out and say,, "You have a problem, let's get ahead of this and let's prevent that from becoming a bigger problem." And that's what we're delivering. When we say always on assurance, we're talking about self-healing, we're talking about remote monitoring, we're talking about being proactive with our customers, not waiting for the phone call or the support desk ticket saying, "Oh we think something's not working." Or worse, the customer has an outage. >> Awesome. Thanks for sharing. Can you explain the process for implementing Platform9 within a company's existing infrastructure. >> Are we doing air gap, or on-prem or SaaS approached? SaaS approach I think is by far the easiest, right? We can build a dedicated Platform9 control plane instance in a manner of minutes, for any customer. So when we do a proof of concept or onboarding, we just literally put in an email address, put in the name you want for your fully qualified domain name, and your instance is up. From that point onwards, the user can just log in, and using our CLI, talk to any number of, say, virtual machines, or physical servers in their environment for, you know, doing this in a data center or colo, and say, "I want these to be my Kubernetes control plane nodes. Here's the five of them. Here's the VIP for the load balancing, the API server and here are all of my compute nodes." And that CLI will work with the SaaS control plane, and go and build the cluster. That's as simple as it, CentOS, Ubuntu, just plain old operating system. Our software takes care of all the prerequisites, installing all the pieces, putting down MetalLB, CoreDNS, Metrics Server, Kubernetes dashboard, etcd backups. You built some servers. That's essentially what you've done, and the rest is being handled by Platform9. It's as simple as that. >> Great, thanks for that. What are the two traditional paths for companies considering the cloud native journey? The two paths. >> The traditional paths. I think that's your engineering team running so fast that before you even realize that you've got, you know, 10 EKS clusters. Or, hey, we can do this. You know, I've got the I can build it mentality. Let's go DIY completely open source Kubernetes on our infrastructure, and we're going to piecemeal build it all up together. They're, I think the pathways that people traditionally look at this journey, as opposed to having that third alternative saying can I just consume it on my infrastructure, be it cloud or on-premise or at the edge. >> Third is the new way, you guys do that. >> That's been our focus since the company was, you know, brought together back in the open OpenStack days. >> Awesome, what's the makeup of your customer base? Is there a certain pattern to the size or environments that you guys work with? Is there a pattern or consistency to your customer base? >> It's a spread, right? We've got large enterprises like Juniper, and we go all the way down to people with 20, 30, 50 nodes in total. We've got people in banking and finance, we've got things all the way through to telecommunications and storage infrastructure. >> What's your favorite feature of Platform9? >> My favorite feature? You know, if I ask, should I say this as a pre-sales engineer, let me show you a favorite thing. My immediate response is, I should never do this. (John laughs) To me it's just being able to define my cluster and say, go. And in five minutes I have that environment, I can see everything that's running, right? It's all unified, it's one spot, right? I'm a cluster admin. I said I wanted three control plane, 25 workers. Here's the infrastructure, it creates it, and once it's built, I can see everything that's running, right? All the applications that are there. One UI, I don't have to go click around. I'm not trying to solve things or download things. It's the fact that it's unified and just delivered in one hit. >> What is the one thing that people should know about Platform9 that they might not know about it? >> I think it's that we help developers and engineers as much as we can help our operations teams. I think, for a long time we've sort of targeted that user and said, hey, we, we really help you. It's like, but why are they doing this? Why are they building any infrastructure or any cloud platform? Well, it's to run applications and services, to help their customers, but how do they get there? There's people building and writing those things, and we're helping them, right? For the last two years, we've been really focused on making it simple, and I think that's an important thing to know. >> Chris, thanks so much, appreciate it. >> Yeah, thank you, John. >> Okay, that's theCUBE Q&A session here with Platform9. I'm John Furrier, thanks for watching. (light music)

Published Date : Feb 17 2023

SUMMARY :

Chris, I have a couple questions It can, that's the beauty and proactive issue resolution? and that's the control Can you explain the process and go and build the cluster. What are the two traditional paths be it cloud or on-premise or at the edge. the company was, you know, and we go all the way down It's the fact that it's unified For the last two years, Okay, that's theCUBE Q&A

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
ChrisPERSON

0.99+

Chris JonesPERSON

0.99+

John FurrierPERSON

0.99+

JohnPERSON

0.99+

25 workersQUANTITY

0.99+

Palo AltoLOCATION

0.99+

five minutesQUANTITY

0.99+

fiveQUANTITY

0.99+

AWSORGANIZATION

0.99+

Platform9ORGANIZATION

0.99+

Platform9TITLE

0.99+

JuniperORGANIZATION

0.99+

ThirdQUANTITY

0.99+

CentOSTITLE

0.99+

Second questionQUANTITY

0.99+

one spotQUANTITY

0.99+

two pathsQUANTITY

0.98+

UbuntuTITLE

0.97+

one hitQUANTITY

0.97+

20QUANTITY

0.97+

10 EKSQUANTITY

0.96+

One UIQUANTITY

0.96+

third alternativeQUANTITY

0.95+

PrometheusTITLE

0.94+

couple questionsQUANTITY

0.93+

50QUANTITY

0.92+

two traditional pathsQUANTITY

0.9+

one thingQUANTITY

0.89+

30QUANTITY

0.86+

single nodeQUANTITY

0.85+

KubernetesTITLE

0.85+

Platform nineTITLE

0.82+

last two yearsDATE

0.8+

CoreDNSTITLE

0.78+

OpenStackTITLE

0.74+

three componentsQUANTITY

0.71+

three control planeQUANTITY

0.7+

theCUBEORGANIZATION

0.5+

CLITITLE

0.48+

CUBEEVENT

0.32+

Google's PoV on Confidential Computing NO PUB


 

>> Welcome Nelly and Patricia, great to have you. >> Great to be here. >> Thank you so much for having us. >> You're very welcome. Nelly, why don't you start, and then Patricia you can weigh in. Just tell the audience a little bit about each of your roles at Google Cloud. >> So I'll start, I'm honing a lot of interesting activities in Google and again, security or infrastructure securities that I usually hone, and we're talking about encryption, Antware encryption, and confidential computing is a part of portfolio. In additional areas that I contribute to get with my team to Google and our customers is secure software supply chain. Because you need to trust your software. Is it operating your confidential environment to have end to end story about if you believe that your software and your environment doing what you expect, it's my role. >> Got it, okay. Patricia? >> Well I am a technical director in the office of the CTO, OCTO for short, in Google Cloud. And we are a global team. We include former CTOs like myself and senior technologies from large corporations, institutions, and a lot of success for startups as well. And we have two main goals. First, we work side by side with some of our largest, more strategic or most strategic customers and we help them solve complex engineering technical problems. And second, we are device Google and Google Cloud engineering and product management on emerging trends in technologies to guide the trajectory of our business. We are unique group, I think, because we have created this collaborative culture with our customers. And within OCTO I spend a lot of time collaborating with customers in the industry at large on technologies that can address privacy, security, and sovereignty of data in general. >> Excellent, thank you for that both of you. Let's get into it. So Nelly, what is confidential computing from Google's perspective? How do you define it? >> Confidential computing is a tool. And it's one of the tools in our toolbox. And confidential computing is a way how would help our customers to complete this very interesting end to end lifecycle of their data. And when customers bring in the data to Cloud and want to protect it, as they ingest it to the Cloud, they protect it address when they store data in the Cloud. But what was missing for many, many years is ability for us to continue protecting data and workloads of our customers when they running them. And again, because data is not brought to Cloud to have huge graveyard, we need to ensure that this data is actually indexed. Again there is some insights driven and drawn from this data. You have to process this data and confidential computing here to help. Now we have end to end protection of our customer's data when they bring the workloads and data to Cloud, thanks to confidential computing. >> Thank you for that. Okay, we're going to get into the architecture a bit but before we do Patricia, why do you think this topic of confidential computing is such an important technology? Can you explain, do you think it's transformative for customers and if so, why? >> Yeah, I would maybe like to use one thought, one way, one intuition behind why confidential matters. Because at the end of the day it reduces more and more the customers thrush boundaries and the attack surface, that's about reducing that periphery, the boundary, in which the customer needs to mind about trust and safety. And in a way is a natural progression that you're using encryption to secure and protect data in the same way that we are encrypting data in transit and at rest. Now we are also encrypting data while in use. And among other beneficial I would say one of the most transformative ones is that organizations will be able to collaborate with each other and retain the confidentiality of the data. And that is across industry. Even though it's highly focused on, I wouldn't say highly focused, but very beneficial for highly regulated industries. It applies to all of industries. And if you look at financing for example, where bankers are trying to detect fraud and specifically double finance where you are a customer is actually trying to get a finance on an asset, let's say a boat or a house and then it goes to another bank and gets another finance on that asset. Now bankers would be able to collaborate and detect fraud while preserving confidentiality and privacy of the of the data. >> Interesting, and I want to understand that a little bit more but I'm going to push you a little bit on this, Nelly, if I can, because there's a narrative out there that says confidential computing is a marketing ploy. I talked about this upfront, by Cloud providers that are just trying to placate people that are scared of the Cloud. And I'm presuming you don't agree with that but I'd like you to weigh in here. The argument is confidential computing is just memory encryption, it doesn't address many other problems, it is overhyped by Cloud providers. What do you say to that line of thinking? >> I absolutely disagree as you can imagine, it's a crazy statement. But the most importantly is we mixing multiple concepts I guess. And exactly as Patricia said, we need to look at the end-to-end story not again the mechanism of how confidential computing trying to again execute and protect customer's data, and why it's so critically important. Because what confidential computing was able to do it's in addition to isolate our tenants in multi-tenant environments the Cloud over. To offer additional stronger isolation, we called it cryptographic isolation. It's why customers will have more trust to customers and to other customers, the tenants that's running on the same host but also us, because they don't need to worry about against threats and more malicious attempts to penetrate the environment. So what confidential computing is helping us to offer our customers, stronger isolation between tenants in this multi-tenant environment but also incredibly important, stronger isolation of our customers. So tenants from us, we also writing code, we also software providers will also make mistakes or have some zero days sometimes again us introduced, sometimes introduced by our adversaries. But what I'm trying to say by creating this cryptographic layer of isolation between us and our tenants, and amongst those tenants, they're really providing meaningful security to our customers and eliminate some of the worries that they have running on multi-tenant spaces or even collaborating together this very sensitive data, knowing that this particular protection is available to them. >> Okay, thank you, appreciate that. And I, you know, I think malicious code is often a threat model missed in these narratives. You know, operator access, yeah, could maybe I trust my Clouds provider, but if I can fence off your access even better I'll sleep better at night. Separating a code from the data, everybody's arm Intel, AM, Invidia, others, they're all doing it. I wonder if Nell, if we could stay with you and bring up the slide on the architecture. What's architecturally different with confidential computing versus how operating systems and VMs have worked traditionally? We're showing a slide here with some VMs, maybe you could take us through that. >> Absolutely, and Dave, the whole idea for Google and industry way of dealing with confidential computing is to ensure as it's three main property is actually preserved. Customers don't need to change the code. They can operate in those VMs exactly as they would with normal non-confidential VMs. But to give them this opportunity of lift and shift or no changing their apps and performing and having very, very, very low latency and scale as any Cloud can, something that Google actually pioneered in confidential computing. I think we need to open and explain how this magic was actually done. And as I said, it's again the whole entire system have to change to be able to provide this magic. And I would start with we have this concept of root of trust and root of trust where we will ensure that this machine, the whole entire post has integrity guarantee, means nobody changing my code on the most low level of system. And we introduce this in 2017 code Titan. Those our specific ASIC specific, again inch by inch system on every single motherboard that we have, that ensures that your low level former, your actually system code, your kernel, the most powerful system, is actually proper configured and not changed, not tempered. We do it for everybody, confidential computing concluded. But for confidential computing what we have to change we bring in a MD again, future silicon vendors, and we have to trust their former, their way to deal with our confidential environments. And that's why we have obligation to validate integrity not only our software and our firmware but also firmware and software of our vendors, silicon vendors. So we actually, when we booting this machine as you can see, we validate that integrity of all of this system is in place. It means nobody touching, nobody changing, nobody modifying it. But then we have this concept of the secure processor. It's special Asics best, specific things that generate a key for every single VM that our customers will run or every single node in Kubernetes, or every single worker thread in our Spark capability. We offer all of that, and those keys are not available to us. It's the best keys ever in encryption space. Because when we are talking about encryption the first question that I'm receiving all the time, where's the key, who will have access to the key? Because if you have access to the key then it doesn't matter if you encrypt it enough. But the case in confidential computing quite so revolutionary technology, ask Cloud providers who don't have access to the keys. They're sitting in the hardware and they fed to memory controller. And it means when Hypervisors that also know about these wonderful things, saying I need to get access to the memories that this particular VM I'm trying to get access to. They do not encrypt the data, they don't have access to the key. Because those keys are random, ephemeral and VM, but the most importantly in hardware not exportable. And it means now you will be able to have this very interesting role that customers all Cloud providers, will not be able to get access to your memory. And what we do, again, as you can see our customers don't need to change their applications. Their VMs are running exactly as it should run. And what you're running in VM you actually see your memory in clear, it's not encrypted. But God forbid is trying somebody to do it outside of my confidential box. No, no, no, no, no, you will not be able to do it. Now you'll see cybernet. And it's exactly what combination of these multiple hardware pieces and software pieces have to do. So OS is also modified, and OS is modified such way to provide integrity. It means even OS that you're running in UVM bucks is not modifiable and you as customer can verify. But the most interesting thing I guess how to ensure the super performance of this environment because you can imagine, Dave, that's increasing it's additional performance, additional time, additional latency. So we're able to mitigate all of that by providing incredibly interesting capability in the OS itself. So our customers will get no changes needed, fantastic performance, and scales as they would expect from Cloud providers like Google. >> Okay, thank you. Excellent, appreciate that explanation. So you know again, the narrative on this is, well you know you've already given me guarantees as a Cloud provider that you don't have access to my data but this gives another level of assurance. Key management as they say is key. Now you're not, humans aren't managing the keys the machines are managing them. So Patricia, my question to you is in addition to, you know, let's go pre-confidential computing days what are the sort of new guarantees that these hardware-based technologies are going to provide to customers? >> So if I am a customer, I am saying I now have full guarantee of confidentiality and integrity of the data and of the code. So if you look at code and data confidentiality the customer cares then they want to know whether their systems are protected from outside or unauthorized access. And that we covered with Nelly that it is. Confidential computing actually ensures that the applications and data antennas remain secret, right? The code is actually looking at the data only the memory is decrypting the data with a key that is ephemeral, and per VM, and generated on demand. Then you have the second point where you have code and data integrity and now customers want to know whether their data was corrupted, tempered, with or impacted by outside actors. And what confidential computing insures is that application internals are not tampered with. So the application, the workload as we call it, that is processing the data it's also it has not been tempered and preserves integrity. I would also say that this is all verifiable. So you have attestation, and this attestation actually generates a log trail and the log trail guarantees that provides a proof that it was preserved. And I think that the offers also a guarantee of what we call ceiling, this idea that the secrets have been preserved and not tempered with. Confidentiality and integrity of code and data. >> Got it, okay, thank you. You know, Nelly, you mentioned, I think I heard you say that the applications, it's transparent,you don't have to change the application it just comes for free essentially. And I'm, we showed some various parts of the stack before. I'm curious as to what's affected but really more importantly what is specifically Google's value add? You know, how do partners, you know, participate in this? The ecosystem or maybe said another way how does Google ensure the compatibility of confidential computing with existing systems and applications? >> And a fantastic question by the way. And it's very difficult and definitely complicated world because to be able to provide these guarantees actually a lot of works was done by community. Google is very much operate and open. So again, our operating system we working in this operating system repository OS vendors to ensure that all capabilities that we need is part of their kernels, are part of their releases, and it's available for customers to understand and even explore if they have fun to explore a lot of code. We have also modified together with our silicon vendors, kernel, host kernel, to support this capability and it means working this community to ensure that all of those patches are there. We also worked with every single silicon vendor as you've seen, and that's what I probably feel that Google contributed quite a bit in this role. We moved our industry, our community, our vendors to understand the value of easy to use confidential computing or removing barriers. And now I don't know if you noticed Intel is pulling the lead and also announcing the trusted domain extension very similar architecture and no surprise, it's again a lot of work done with our partners to again, convince, work with them, and make this capability available. The same with ARM this year, actually last year, ARM unknowns are future design for confidential computing. It's called confidential computing architecture. And it's also influenced very heavily with similar ideas by Google and industry overall. So it's a lot of work in confidential computing consortiums that we are doing. For example, simply to mention to ensure interop, as you mentioned, between different confidential environments of Cloud providers. We want to ensure that they can attest to each other. Because when you're communicating with different environments, you need to trust them. And if it's running on different Cloud providers you need to ensure that you can trust your receiver when you are sharing your sensitive data workloads or secret with them. So we coming as a community and we have this at the station, the community based systems that we want to build and influence and work with ARM and every other Cloud providers to ensure that they can interrupt. And it means it doesn't matter where confidential workloads will be hosted but they can exchange the data in secure, verifiable, and controlled by customers way. And to do it, we need to continue what we are doing. Working open again and contribute with our ideas and ideas of our partners to this role to become what we see confidential computing has to become, it has to become utility. It doesn't need to be so special but it's what what we've wanted to become. >> Let's talk about, thank you for that explanation. Let talk about data sovereignty, because when you think about data sharing you think about data sharing across, you know, the ecosystem and different regions and then of course data sovereignty comes up. Typically public policy lags, you know, the technology industry and sometimes is problematic. I know, you know, there's a lot of discussions about exceptions, but Patricia, we have a graphic on data sovereignty. I'm interested in how confidential computing ensures that data sovereignty and privacy edicts are adhered to even if they're out of alignment maybe with the pace of technology. One of the frequent examples is when you you know, when you delete data, can you actually prove the data is deleted with a hundred percent certainty? You got to prove that and a lot of other issues. So looking at this slide, maybe you could take us through your thinking on data sovereignty. >> Perfect, so for us, data sovereignty is only one of the three pillars of digital sovereignty. And I don't want to give the impression that confidential computing addresses at all. That's why we want to step back and say, hey, digital sovereignty includes data sovereignty where we are giving you full control and ownership of the location, encryption, and access to your data. Operational sovereignty where the goal is to give our Google Cloud customers full visibility and control over the provider operations, right? So if there are any updates on hardware, software, stack, any operations, that is full transparency, full visibility. And then the third pillar is around software sovereignty where the customer wants to ensure that they can run their workloads without dependency on the provider's software. So they have sometimes is often referred as survivability that you can actually survive if you are untethered to the Cloud and that you can use open source. Now let's take a deep dive on data sovereignty, which by the way is one of my favorite topics. And we typically focus on saying, hey, we need to care about data residency. We care where the data resides because where the data is at rest or in processing it typically abides to the jurisdiction, the regulations of the jurisdiction where the data resides. And others say, hey, let's focus on data protection. We want to ensure the confidentiality and integrity and availability of the data which confidential computing is at the heart of that data protection. But it is yet another element that people typically don't talk about when talking about data sovereignty, which is the element of user control. And here Dave, is about what happens to the data when I give you access to my data. And this reminds me of security two decades ago, even a decade ago, where we started the security movement by putting firewall protections and login accesses. But once you were in, you were able to do everything you wanted with the data, an insider had access to all the infrastructure, the data, and the code. And that's similar because with data sovereignty we care about whether it resides, who is operating on the data. But the moment that the data is being processed, I need to trust that the processing of the data will abide by user control, by the policies that I put in place of how my data is going to be used. And if you look at a lot of the regulation today and a lot of the initiatives around the International Data Space Association, IDSA, and Gaia X, there is a movement of saying the two parties, the provider of the data and the receiver of the data going to agree on a contract that describes what my data can be used for. The challenge is to ensure that once the data crosses boundaries, that the data will be used for the purposes that it was intended and specified in the contract. And if you actually bring together, and this is the exciting part, confidential computing together with policy enforcement. Now the policy enforcement can guarantee that the data is only processed within the confines of a confidential computing environment. That the workload is cryptographically verified that there is the workload that was meant to process the data and that the data will be only used when abiding to the confidentiality and integrity, safety of the confidential computing environment. And that's why we believe confidential computing is one, necessary and essential technology that will allow us to ensure data sovereignty especially when it comes to user control. >> Thank you for that. I mean it was a deep dive, I mean brief, but really detailed, so I appreciate that, especially the verification of the enforcement. Last question, I met you two because as part of my year end prediction post you guys sent in some predictions, and I wasn't able to get to them in the predictions post. So I'm thrilled that you were able to make the time to come on the program. How widespread do you think the adoption of confidential computing will be in '23 and what's the maturity curve look like, you know, this decade in, in your opinion? Maybe each of you could give us a brief answer. >> So my prediction in five, seven years as I started, it'll become utility. It'll become TLS. As of, again, 10 years ago we couldn't believe that websites will have certificates and we will support encrypted traffic. Now we do, and it's become ubiquity. It's exactly where our confidential computing is heading and heading, I don't know if we are there yet yet. It'll take a few years of maturity for us, but we'll do that. >> Thank you, and Patricia, what's your prediction? >> I would double that and say, hey, in the future, in the very near future you will not be able to afford not having it. I believe as digital sovereignty becomes ever more top of mind with sovereign states and also for multinational organizations and for organizations that want to collaborate with each other, confidential computing will become the norm. It'll become the default, If I say mode of operation, I like to compare that, today is inconceivable if we talk to the young technologists. It's inconceivable to think that at some point in history and I happen to be alive that we had data at address that was not encrypted. Data in transit, that was not encrypted. And I think that we will be inconceivable at some point in the near future that to have unencrypted data while we use. >> You know, and plus, I think the beauty of the this industry is because there's so much competition this essentially comes for free. I want to thank you both for spending some time on Breaking Analysis. There's so much more we could cover. I hope you'll come back to share the progress that you're making in this area and we can double click on some of these topics. Really appreciate your time. >> Anytime. >> Thank you so much.

Published Date : Feb 10 2023

SUMMARY :

Patricia, great to have you. and then Patricia you can weigh in. In additional areas that I contribute to Got it, okay. of the CTO, OCTO for Excellent, thank you in the data to Cloud into the architecture a bit and privacy of the of the data. but I'm going to push you a is available to them. we could stay with you and they fed to memory controller. So Patricia, my question to you is and integrity of the data and of the code. that the applications, and ideas of our partners to this role is when you you know, and that the data will be only used of the enforcement. and we will support encrypted traffic. and I happen to be alive and we can double click

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
NellyPERSON

0.99+

PatriciaPERSON

0.99+

International Data Space AssociationORGANIZATION

0.99+

DavePERSON

0.99+

GoogleORGANIZATION

0.99+

IDSAORGANIZATION

0.99+

last yearDATE

0.99+

2017DATE

0.99+

two partiesQUANTITY

0.99+

oneQUANTITY

0.99+

twoQUANTITY

0.99+

second pointQUANTITY

0.99+

FirstQUANTITY

0.99+

ARMORGANIZATION

0.99+

first questionQUANTITY

0.99+

fiveQUANTITY

0.99+

bothQUANTITY

0.99+

IntelORGANIZATION

0.99+

two decades agoDATE

0.99+

AsicsORGANIZATION

0.99+

secondQUANTITY

0.99+

Gaia XORGANIZATION

0.99+

OneQUANTITY

0.99+

eachQUANTITY

0.98+

seven yearsQUANTITY

0.98+

OCTOORGANIZATION

0.98+

one thoughtQUANTITY

0.98+

a decade agoDATE

0.98+

this yearDATE

0.98+

10 years agoDATE

0.98+

InvidiaORGANIZATION

0.98+

'23DATE

0.98+

todayDATE

0.98+

CloudTITLE

0.98+

three pillarsQUANTITY

0.97+

one wayQUANTITY

0.97+

hundred percentQUANTITY

0.97+

zero daysQUANTITY

0.97+

three main propertyQUANTITY

0.95+

third pillarQUANTITY

0.95+

two main goalsQUANTITY

0.95+

CTOORGANIZATION

0.93+

NellPERSON

0.9+

KubernetesTITLE

0.89+

every single VMQUANTITY

0.86+

NellyORGANIZATION

0.83+

Google CloudTITLE

0.82+

every single workerQUANTITY

0.77+

every single nodeQUANTITY

0.74+

AMORGANIZATION

0.73+

doubleQUANTITY

0.71+

single motherboardQUANTITY

0.68+

single siliconQUANTITY

0.57+

SparkTITLE

0.53+

kernelTITLE

0.53+

inchQUANTITY

0.48+

Breaking Analysis: Google's PoV on Confidential Computing


 

>> From theCUBE Studios in Palo Alto in Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> Confidential computing is a technology that aims to enhance data privacy and security, by providing encrypted computation on sensitive data and isolating data, and apps that are fenced off enclave during processing. The concept of, I got to start over. I fucked that up, I'm sorry. That's not right, what I said was not right. On Dave in five, four, three. Confidential computing is a technology that aims to enhance data privacy and security by providing encrypted computation on sensitive data, isolating data from apps and a fenced off enclave during processing. The concept of confidential computing is gaining popularity, especially in the cloud computing space, where sensitive data is often stored and of course processed. However, there are some who view confidential computing as an unnecessary technology in a marketing ploy by cloud providers aimed at calming customers who are cloud phobic. Hello and welcome to this week's Wikibon Cube Insights powered by ETR. In this Breaking Analysis, we revisit the notion of confidential computing, and to do so, we'll invite two Google experts to the show. But before we get there, let's summarize briefly. There's not a ton of ETR data on the topic of confidential computing, I mean, it's a technology that's deeply embedded into silicon and computing architectures. But at the highest level, security remains the number one priority being addressed by IT decision makers in the coming year as shown here. And this data is pretty much across the board by industry, by region, by size of company. I mean we dug into it and the only slight deviation from the mean is in financial services. The second and third most cited priorities, cloud migration and analytics are noticeably closer to cybersecurity in financial services than in other sectors, likely because financial services has always been hyper security conscious, but security is still a clear number one priority in that sector. The idea behind confidential computing is to better address threat models for data in execution. Protecting data at rest and data in transit have long been a focus of security approaches, but more recently, silicon manufacturers have introduced architectures that separate data and applications from the host system, ARM, Intel, AMD, Nvidia and other suppliers are all on board, as are the big cloud players. Now, the argument against confidential computing is that it narrowly focuses on memory encryption and it doesn't solve the biggest problems in security. Multiple system images, updates, different services and the entire code flow aren't directly addressed by memory encryption. Rather to truly attack these problems, many believe that OSs need to be re-engineered with the attacker and hacker in mind. There are so many variables and at the end of the day, critics say the emphasis on confidential computing made by cloud providers is overstated and largely hype. This tweet from security researcher Rodrigo Bronco, sums up the sentiment of many skeptics. He says, "Confidential computing is mostly a marketing campaign from memory encryption. It's not driving the industry towards the hard open problems. It is selling an illusion." Okay. Nonetheless, encrypting data in use and fencing off key components of the system isn't a bad thing, especially if it comes with the package essentially for free. There has been a lack of standardization and interoperability between different confidential computing approaches. But the confidential computing consortium was established in 2019 ostensibly to accelerate the market and influence standards. Notably, AWS is not part of the consortium, likely because the politics of the consortium were probably a conundrum for AWS because the base technology defined by the consortium is seen as limiting by AWS. This is my guess, not AWS' words. But I think joining the consortium would validate a definition which AWS isn't aligned with. And two, it's got to lead with this Annapurna acquisition. It was way ahead with ARM integration, and so it's probably doesn't feel the need to validate its competitors. Anyway, one of the premier members of the confidential computing consortium is Google, along with many high profile names, including Aem, Intel, Meta, Red Hat, Microsoft, and others. And we're pleased to welcome two experts on confidential computing from Google to unpack the topic. Nelly Porter is Head of Product for GCP Confidential Computing and Encryption and Dr. Patricia Florissi is the Technical Director for the Office of the CTO at Google Cloud. Welcome Nelly and Patricia, great to have you. >> Great to be here. >> Thank you so much for having us. >> You're very welcome. Nelly, why don't you start and then Patricia, you can weigh in. Just tell the audience a little bit about each of your roles at Google Cloud. >> So I'll start, I'm owning a lot of interesting activities in Google and again, security or infrastructure securities that I usually own. And we are talking about encryption, end-to-end encryption, and confidential computing is a part of portfolio. Additional areas that I contribute to get with my team to Google and our customers is secure software supply chain because you need to trust your software. Is it operate in your confidential environment to have end-to-end security, about if you believe that your software and your environment doing what you expect, it's my role. >> Got it. Okay, Patricia? >> Well, I am a Technical Director in the Office of the CTO, OCTO for short in Google Cloud. And we are a global team, we include former CTOs like myself and senior technologies from large corporations, institutions and a lot of success for startups as well. And we have two main goals, first, we walk side by side with some of our largest, more strategic or most strategical customers and we help them solve complex engineering technical problems. And second, we advice Google and Google Cloud Engineering, product management on emerging trends and technologies to guide the trajectory of our business. We are unique group, I think, because we have created this collaborative culture with our customers. And within OCTO I spend a lot of time collaborating with customers in the industry at large on technologies that can address privacy, security, and sovereignty of data in general. >> Excellent. Thank you for that both of you. Let's get into it. So Nelly, what is confidential computing from Google's perspective? How do you define it? >> Confidential computing is a tool and one of the tools in our toolbox. And confidential computing is a way how we would help our customers to complete this very interesting end-to-end lifecycle of the data. And when customers bring in the data to cloud and want to protect it as they ingest it to the cloud, they protect it at rest when they store data in the cloud. But what was missing for many, many years is ability for us to continue protecting data and workloads of our customers when they run them. And again, because data is not brought to cloud to have huge graveyard, we need to ensure that this data is actually indexed. Again, there is some insights driven and drawn from this data. You have to process this data and confidential computing here to help. Now we have end-to-end protection of our customer's data when they bring the workloads and data to cloud thanks to confidential computing. >> Thank you for that. Okay, we're going to get into the architecture a bit, but before we do Patricia, why do you think this topic of confidential computing is such an important technology? Can you explain? Do you think it's transformative for customers and if so, why? >> Yeah, I would maybe like to use one thought, one way, one intuition behind why confidential computing matters because at the end of the day, it reduces more and more the customer's thrush boundaries and the attack surface. That's about reducing that periphery, the boundary in which the customer needs to mind about trust and safety. And in a way is a natural progression that you're using encryption to secure and protect data in the same way that we are encrypting data in transit and at rest. Now, we are also encrypting data while in the use. And among other beneficials, I would say one of the most transformative ones is that organizations will be able to collaborate with each other and retain the confidentiality of the data. And that is across industry, even though it's highly focused on, I wouldn't say highly focused but very beneficial for highly regulated industries, it applies to all of industries. And if you look at financing for example, where bankers are trying to detect fraud and specifically double finance where a customer is actually trying to get a finance on an asset, let's say a boat or a house, and then it goes to another bank and gets another finance on that asset. Now bankers would be able to collaborate and detect fraud while preserving confidentiality and privacy of the data. >> Interesting and I want to understand that a little bit more but I got to push you a little bit on this, Nellie if I can, because there's a narrative out there that says confidential computing is a marketing ploy I talked about this up front, by cloud providers that are just trying to placate people that are scared of the cloud. And I'm presuming you don't agree with that, but I'd like you to weigh in here. The argument is confidential computing is just memory encryption, it doesn't address many other problems. It is over hyped by cloud providers. What do you say to that line of thinking? >> I absolutely disagree as you can imagine Dave, with this statement. But the most importantly is we mixing a multiple concepts I guess, and exactly as Patricia said, we need to look at the end-to-end story, not again, is a mechanism. How confidential computing trying to execute and protect customer's data and why it's so critically important. Because what confidential computing was able to do, it's in addition to isolate our tenants in multi-tenant environments the cloud offering to offer additional stronger isolation, they called it cryptographic isolation. It's why customers will have more trust to customers and to other customers, the tenants running on the same host but also us because they don't need to worry about against rats and more malicious attempts to penetrate the environment. So what confidential computing is helping us to offer our customers stronger isolation between tenants in this multi-tenant environment, but also incredibly important, stronger isolation of our customers to tenants from us. We also writing code, we also software providers, we also make mistakes or have some zero days. Sometimes again us introduce, sometimes introduced by our adversaries. But what I'm trying to say by creating this cryptographic layer of isolation between us and our tenants and among those tenants, we really providing meaningful security to our customers and eliminate some of the worries that they have running on multi-tenant spaces or even collaborating together with very sensitive data knowing that this particular protection is available to them. >> Okay, thank you. Appreciate that. And I think malicious code is often a threat model missed in these narratives. You know, operator access. Yeah, maybe I trust my cloud's provider, but if I can fence off your access even better, I'll sleep better at night separating a code from the data. Everybody's ARM, Intel, AMD, Nvidia and others, they're all doing it. I wonder if Nell, if we could stay with you and bring up the slide on the architecture. What's architecturally different with confidential computing versus how operating systems and VMs have worked traditionally? We're showing a slide here with some VMs, maybe you could take us through that. >> Absolutely, and Dave, the whole idea for Google and now industry way of dealing with confidential computing is to ensure that three main property is actually preserved. Customers don't need to change the code. They can operate in those VMs exactly as they would with normal non-confidential VMs. But to give them this opportunity of lift and shift though, no changing the apps and performing and having very, very, very low latency and scale as any cloud can, some things that Google actually pioneer in confidential computing. I think we need to open and explain how this magic was actually done, and as I said, it's again the whole entire system have to change to be able to provide this magic. And I would start with we have this concept of root of trust and root of trust where we will ensure that this machine within the whole entire host has integrity guarantee, means nobody changing my code on the most low level of system, and we introduce this in 2017 called Titan. So our specific ASIC, specific inch by inch system on every single motherboard that we have that ensures that your low level former, your actually system code, your kernel, the most powerful system is actually proper configured and not changed, not tempered. We do it for everybody, confidential computing included, but for confidential computing is what we have to change, we bring in AMD or future silicon vendors and we have to trust their former, their way to deal with our confidential environments. And that's why we have obligation to validate intelligent not only our software and our former but also former and software of our vendors, silicon vendors. So we actually, when we booting this machine as you can see, we validate that integrity of all of this system is in place. It means nobody touching, nobody changing, nobody modifying it. But then we have this concept of AMD Secure Processor, it's special ASIC best specific things that generate a key for every single VM that our customers will run or every single node in Kubernetes or every single worker thread in our Hadoop spark capability. We offer all of that and those keys are not available to us. It's the best case ever in encryption space because when we are talking about encryption, the first question that I'm receiving all the time, "Where's the key? Who will have access to the key?" because if you have access to the key then it doesn't matter if you encrypted or not. So, but the case in confidential computing why it's so revolutionary technology, us cloud providers who don't have access to the keys, they're sitting in the hardware and they fed to memory controller. And it means when hypervisors that also know about this wonderful things saying I need to get access to the memories, that this particular VM I'm trying to get access to. They do not decrypt the data, they don't have access to the key because those keys are random, ephemeral and per VM, but most importantly in hardware not exportable. And it means now you will be able to have this very interesting world that customers or cloud providers will not be able to get access to your memory. And what we do, again as you can see, our customers don't need to change their applications. Their VMs are running exactly as it should run. And what you've running in VM, you actually see your memory clear, it's not encrypted. But God forbid is trying somebody to do it outside of my confidential box, no, no, no, no, no, you will now be able to do it. Now, you'll see cyber test and it's exactly what combination of these multiple hardware pieces and software pieces have to do. So OS is also modified and OS is modified such way to provide integrity. It means even OS that you're running in your VM box is not modifiable and you as customer can verify. But the most interesting thing I guess how to ensure the super performance of this environment because you can imagine Dave, that's increasing and it's additional performance, additional time, additional latency. So we're able to mitigate all of that by providing incredibly interesting capability in the OS itself. So our customers will get no changes needed, fantastic performance and scales as they would expect from cloud providers like Google. >> Okay, thank you. Excellent, appreciate that explanation. So you know again, the narrative on this is, well, you've already given me guarantees as a cloud provider that you don't have access to my data, but this gives another level of assurance, key management as they say is key. Now humans aren't managing the keys, the machines are managing them. So Patricia, my question to you is in addition to, let's go pre-confidential computing days, what are the sort of new guarantees that these hardware based technologies are going to provide to customers? >> So if I am a customer, I am saying I now have full guarantee of confidentiality and integrity of the data and of the code. So if you look at code and data confidentiality, the customer cares and they want to know whether their systems are protected from outside or unauthorized access, and that we covered with Nelly that it is. Confidential computing actually ensures that the applications and data antennas remain secret. The code is actually looking at the data, only the memory is decrypting the data with a key that is ephemeral, and per VM, and generated on demand. Then you have the second point where you have code and data integrity and now customers want to know whether their data was corrupted, tempered with or impacted by outside actors. And what confidential computing ensures is that application internals are not tempered with. So the application, the workload as we call it, that is processing the data is also has not been tempered and preserves integrity. I would also say that this is all verifiable, so you have attestation and this attestation actually generates a log trail and the log trail guarantees that provides a proof that it was preserved. And I think that the offers also a guarantee of what we call sealing, this idea that the secrets have been preserved and not tempered with, confidentiality and integrity of code and data. >> Got it. Okay, thank you. Nelly, you mentioned, I think I heard you say that the applications is transparent, you don't have to change the application, it just comes for free essentially. And we showed some various parts of the stack before, I'm curious as to what's affected, but really more importantly, what is specifically Google's value add? How do partners participate in this, the ecosystem or maybe said another way, how does Google ensure the compatibility of confidential computing with existing systems and applications? >> And a fantastic question by the way, and it's very difficult and definitely complicated world because to be able to provide these guarantees, actually a lot of work was done by community. Google is very much operate and open. So again our operating system, we working this operating system repository OS is OS vendors to ensure that all capabilities that we need is part of the kernels are part of the releases and it's available for customers to understand and even explore if they have fun to explore a lot of code. We have also modified together with our silicon vendors kernel, host kernel to support this capability and it means working this community to ensure that all of those pages are there. We also worked with every single silicon vendor as you've seen, and it's what I probably feel that Google contributed quite a bit in this world. We moved our industry, our community, our vendors to understand the value of easy to use confidential computing or removing barriers. And now I don't know if you noticed Intel is following the lead and also announcing a trusted domain extension, very similar architecture and no surprise, it's a lot of work done with our partners to convince work with them and make this capability available. The same with ARM this year, actually last year, ARM announced future design for confidential computing, it's called confidential computing architecture. And it's also influenced very heavily with similar ideas by Google and industry overall. So it's a lot of work in confidential computing consortiums that we are doing, for example, simply to mention, to ensure interop as you mentioned, between different confidential environments of cloud providers. They want to ensure that they can attest to each other because when you're communicating with different environments, you need to trust them. And if it's running on different cloud providers, you need to ensure that you can trust your receiver when you sharing your sensitive data workloads or secret with them. So we coming as a community and we have this at Station Sig, the community-based systems that we want to build, and influence, and work with ARM and every other cloud providers to ensure that they can interop. And it means it doesn't matter where confidential workloads will be hosted, but they can exchange the data in secure, verifiable and controlled by customers really. And to do it, we need to continue what we are doing, working open and contribute with our ideas and ideas of our partners to this role to become what we see confidential computing has to become, it has to become utility. It doesn't need to be so special, but it's what what we've wanted to become. >> Let's talk about, thank you for that explanation. Let's talk about data sovereignty because when you think about data sharing, you think about data sharing across the ecosystem in different regions and then of course data sovereignty comes up, typically public policy, lags, the technology industry and sometimes it's problematic. I know there's a lot of discussions about exceptions but Patricia, we have a graphic on data sovereignty. I'm interested in how confidential computing ensures that data sovereignty and privacy edicts are adhered to, even if they're out of alignment maybe with the pace of technology. One of the frequent examples is when you delete data, can you actually prove the data is deleted with a hundred percent certainty, you got to prove that and a lot of other issues. So looking at this slide, maybe you could take us through your thinking on data sovereignty. >> Perfect. So for us, data sovereignty is only one of the three pillars of digital sovereignty. And I don't want to give the impression that confidential computing addresses it at all, that's why we want to step back and say, hey, digital sovereignty includes data sovereignty where we are giving you full control and ownership of the location, encryption and access to your data. Operational sovereignty where the goal is to give our Google Cloud customers full visibility and control over the provider operations, right? So if there are any updates on hardware, software stack, any operations, there is full transparency, full visibility. And then the third pillar is around software sovereignty, where the customer wants to ensure that they can run their workloads without dependency on the provider's software. So they have sometimes is often referred as survivability that you can actually survive if you are untethered to the cloud and that you can use open source. Now, let's take a deep dive on data sovereignty, which by the way is one of my favorite topics. And we typically focus on saying, hey, we need to care about data residency. We care where the data resides because where the data is at rest or in processing need to typically abides to the jurisdiction, the regulations of the jurisdiction where the data resides. And others say, hey, let's focus on data protection, we want to ensure the confidentiality, and integrity, and availability of the data, which confidential computing is at the heart of that data protection. But it is yet another element that people typically don't talk about when talking about data sovereignty, which is the element of user control. And here Dave, is about what happens to the data when I give you access to my data, and this reminds me of security two decades ago, even a decade ago, where we started the security movement by putting firewall protections and logging accesses. But once you were in, you were able to do everything you wanted with the data. An insider had access to all the infrastructure, the data, and the code. And that's similar because with data sovereignty, we care about whether it resides, who is operating on the data, but the moment that the data is being processed, I need to trust that the processing of the data we abide by user's control, by the policies that I put in place of how my data is going to be used. And if you look at a lot of the regulation today and a lot of the initiatives around the International Data Space Association, IDSA and Gaia-X, there is a movement of saying the two parties, the provider of the data and the receiver of the data going to agree on a contract that describes what my data can be used for. The challenge is to ensure that once the data crosses boundaries, that the data will be used for the purposes that it was intended and specified in the contract. And if you actually bring together, and this is the exciting part, confidential computing together with policy enforcement. Now, the policy enforcement can guarantee that the data is only processed within the confines of a confidential computing environment, that the workload is in cryptographically verified that there is the workload that was meant to process the data and that the data will be only used when abiding to the confidentiality and integrity safety of the confidential computing environment. And that's why we believe confidential computing is one necessary and essential technology that will allow us to ensure data sovereignty, especially when it comes to user's control. >> Thank you for that. I mean it was a deep dive, I mean brief, but really detailed. So I appreciate that, especially the verification of the enforcement. Last question, I met you two because as part of my year-end prediction post, you guys sent in some predictions and I wasn't able to get to them in the predictions post, so I'm thrilled that you were able to make the time to come on the program. How widespread do you think the adoption of confidential computing will be in '23 and what's the maturity curve look like this decade in your opinion? Maybe each of you could give us a brief answer. >> So my prediction in five, seven years as I started, it will become utility, it will become TLS. As of freakin' 10 years ago, we couldn't believe that websites will have certificates and we will support encrypted traffic. Now we do, and it's become ubiquity. It's exactly where our confidential computing is heeding and heading, I don't know we deserve yet. It'll take a few years of maturity for us, but we'll do that. >> Thank you. And Patricia, what's your prediction? >> I would double that and say, hey, in the very near future, you will not be able to afford not having it. I believe as digital sovereignty becomes ever more top of mind with sovereign states and also for multinational organizations, and for organizations that want to collaborate with each other, confidential computing will become the norm, it will become the default, if I say mode of operation. I like to compare that today is inconceivable if we talk to the young technologists, it's inconceivable to think that at some point in history and I happen to be alive, that we had data at rest that was non-encrypted, data in transit that was not encrypted. And I think that we'll be inconceivable at some point in the near future that to have unencrypted data while we use. >> You know, and plus I think the beauty of the this industry is because there's so much competition, this essentially comes for free. I want to thank you both for spending some time on Breaking Analysis, there's so much more we could cover. I hope you'll come back to share the progress that you're making in this area and we can double click on some of these topics. Really appreciate your time. >> Anytime. >> Thank you so much, yeah. >> In summary, while confidential computing is being touted by the cloud players as a promising technology for enhancing data privacy and security, there are also those as we said, who remain skeptical. The truth probably lies somewhere in between and it will depend on the specific implementation and the use case as to how effective confidential computing will be. Look as with any new tech, it's important to carefully evaluate the potential benefits, the drawbacks, and make informed decisions based on the specific requirements in the situation and the constraints of each individual customer. But the bottom line is silicon manufacturers are working with cloud providers and other system companies to include confidential computing into their architectures. Competition in our view will moderate price hikes and at the end of the day, this is under-the-covers technology that essentially will come for free, so we'll take it. I want to thank our guests today, Nelly and Patricia from Google. And thanks to Alex Myerson who's on production and manages the podcast. Ken Schiffman as well out of our Boston studio. Kristin Martin and Cheryl Knight help get the word out on social media and in our newsletters, and Rob Hoof is our editor-in-chief over at siliconangle.com, does some great editing for us. Thank you all. Remember all these episodes are available as podcasts. Wherever you listen, just search Breaking Analysis podcast. I publish each week on wikibon.com and siliconangle.com where you can get all the news. If you want to get in touch, you can email me at david.vellante@siliconangle.com or DM me at D Vellante, and you can also comment on my LinkedIn post. Definitely you want to check out etr.ai for the best survey data in the enterprise tech business. I know we didn't hit on a lot today, but there's some amazing data and it's always being updated, so check that out. This is Dave Vellante for theCUBE Insights powered by ETR. Thanks for watching and we'll see you next time on Breaking Analysis. (subtle music)

Published Date : Feb 10 2023

SUMMARY :

bringing you data-driven and at the end of the day, and then Patricia, you can weigh in. contribute to get with my team Okay, Patricia? Director in the Office of the CTO, for that both of you. in the data to cloud into the architecture a bit, and privacy of the data. that are scared of the cloud. and eliminate some of the we could stay with you and they fed to memory controller. to you is in addition to, and integrity of the data and of the code. that the applications is transparent, and ideas of our partners to this role One of the frequent examples and a lot of the initiatives of the enforcement. and we will support encrypted traffic. And Patricia, and I happen to be alive, the beauty of the this industry and at the end of the day,

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
NellyPERSON

0.99+

PatriciaPERSON

0.99+

Alex MyersonPERSON

0.99+

AWSORGANIZATION

0.99+

International Data Space AssociationORGANIZATION

0.99+

DavePERSON

0.99+

AWS'ORGANIZATION

0.99+

MicrosoftORGANIZATION

0.99+

Dave VellantePERSON

0.99+

Rob HoofPERSON

0.99+

Cheryl KnightPERSON

0.99+

Nelly PorterPERSON

0.99+

GoogleORGANIZATION

0.99+

NvidiaORGANIZATION

0.99+

IDSAORGANIZATION

0.99+

Rodrigo BroncoPERSON

0.99+

2019DATE

0.99+

Ken SchiffmanPERSON

0.99+

IntelORGANIZATION

0.99+

AMDORGANIZATION

0.99+

2017DATE

0.99+

ARMORGANIZATION

0.99+

AemORGANIZATION

0.99+

NelliePERSON

0.99+

Kristin MartinPERSON

0.99+

Red HatORGANIZATION

0.99+

two partiesQUANTITY

0.99+

Palo AltoLOCATION

0.99+

last yearDATE

0.99+

Patricia FlorissiPERSON

0.99+

oneQUANTITY

0.99+

MetaORGANIZATION

0.99+

twoQUANTITY

0.99+

thirdQUANTITY

0.99+

Gaia-XORGANIZATION

0.99+

second pointQUANTITY

0.99+

two expertsQUANTITY

0.99+

david.vellante@siliconangle.comOTHER

0.99+

secondQUANTITY

0.99+

bothQUANTITY

0.99+

first questionQUANTITY

0.99+

fiveQUANTITY

0.99+

OneQUANTITY

0.99+

theCUBE StudiosORGANIZATION

0.99+

two decades agoDATE

0.99+

'23DATE

0.99+

eachQUANTITY

0.99+

a decade agoDATE

0.99+

threeQUANTITY

0.99+

zero daysQUANTITY

0.98+

fourQUANTITY

0.98+

OCTOORGANIZATION

0.98+

todayDATE

0.98+

Subbu Iyer, Aerospike | AWS re:Invent 2022


 

>>Hey everyone, welcome to the Cube's coverage of AWS Reinvent 2022. Lisa Martin here with you with Subaru ier, one of our alumni who's now the CEO of Aerospike. Sabu. Great to have you on the program. Thank you for joining us. >>Great as always, to be on the cube. Luisa, good to meet you. >>So, you know, every company these days has got to be a data company, whether it's a retailer, a manufacturer, a grocer, a automotive company. But for a lot of companies, data is underutilized, yet a huge asset that is value added. Why do you think companies are struggling so much to make data a value added asset? >>Well, you know, we, we see this across the board when I talk to customers and prospects. There's a desire from the business and from it actually to leverage data to really fuel newer applications, newer services, newer business lines, if you will, for companies. I think the struggle is one, I think one the, you know, the plethora of data that is created, you know, surveys say that over the next three years data is gonna be, you know, by 2025, around 175 zetabytes, right? A hundred and zetabytes of data is gonna be created. And that's really a, a, a growth of north of 30% year over year. But the more important, and the interesting thing is the real time component of that data is actually growing at, you know, 35% cagr. And what enterprises desire is decisions that are made in real time or near real time. >>And a lot of the challenges that do exist today is that either the infrastructure that enterprises have in place was never built to actually manipulate data in real time. The second is really the ability to actually put something in place which can handle spikes yet be cost efficient if you'll, so you can build for really peak loads, but then it's very expensive to operate that particular service at normal loads. So how do you build something which actually works for you, for both you, both users, so to speak? And the last point that we see out there is even if you're able to, you know, bring all that data, you don't have the processing capability to run through that data. So as a result, most enterprises struggle with one, capturing the data, you know, making decisions from it in real time and really operating it at the cost point that they need to operate it at. >>You know, you bring up a great point with respect to real time data access. And I think one of the things that we've learned the last couple of years is that access to real time data, it's not a nice to have anymore. It's business critical for organizations in any industry. Talk about that as one of the challenges that organizations are facing. >>Yeah. When, when, when we started Aerospike, right when the company started, it started with the premise that data is gonna grow, number one, exponentially. Two, when applications open up to the internet, there's gonna be a flood of users and demands on those applications. And that was true primarily when we started the company in the ad tech vertical. So ad tech was the first vertical where there was a lot of data both on the supply side and the demand side from an inventory of ads that were available. And on the other hand, they had like microseconds or milliseconds in which they could make a decision on which ad to put in front of you and I so that we would click or engage with that particular ad. But over the last three to five years, what we've seen is as digitization has actually permeated every industry out there, the need to harness data in real time is pretty much present in every industry. >>Whether that's retail, whether that's financial services, telecommunications, e-commerce, gaming and entertainment. Every industry has a desire. One, the innovative companies, the small companies rather, are innovating at a pace and standing up new businesses to compete with the larger companies in each of these verticals. And the larger companies don't wanna be left behind. So they're standing up their own competing services or getting into new lines of business that really harness and are driven by real time data. So this compelling pressures, one, the customer exp you know, customer experience is paramount and we as customers expect answers in, you know, an instant in real time. And on the other hand, the way they make decisions is based on a large data set because you know, larger data sets actually propel better decisions. So there's competing pressures here, which essentially drive the need. One from a business perspective, two from a customer perspective to harness all of this data in real time. So that's what's driving an inces need to actually make decisions in real or near real time. >>You know, I think one of the things that's been in short supply over the last couple of years is patients we do expect as consumers, whether we're in our business lives, our personal lives that we're going to be getting, be given information and data that's relevant, it's personal to help us make those real time decisions. So having access to real time data is really business critical for organizations across any industries. Talk about some of the main capabilities that modern data applications and data platforms need to have. What are some of the key capabilities of a modern data platform that need to be delivered to meet demanding customer expectations? >>So, you know, going back to your initial question Lisa, around why is data really a high value but underutilized or underleveraged asset? One of the reasons we see is a lot of the data platforms that, you know, some of these applications were built on have been then around for a decade plus and they were never built for the needs of today, which is really driving a lot of data and driving insight in real time from a lot of data. So there are four major capabilities that we see that are essential ingredients of any modern data platform. One is really the ability to, you know, operate at unlimited scale. So what we mean by that is really the ability to scale from gigabytes to even petabytes without any degradation in performance or latency or throughput. The second is really, you know, predictable performance. So can you actually deliver predictable performance as your data size grows or your throughput grows or your concurrent user on that application of service grows? >>It's really easy to build an application that operates at low scale or low throughput or low concurrency, but performance usually starts degrading as you start scaling one of these attributes. The third thing is the ability to operate and always on globally resilient application. And that requires a, a really robust data platform that can be up on a five, nine basis globally, can support global distribution because a lot of these applications have global users. And the last point is, goes back to my first answer, which is, can you operate all of this at a cost point? Which is not prohibitive, but it makes sense from a TCO perspective. Cuz a lot of times what we see is people make choices of data platforms and as ironically their service or applications become more successful and more users join their journey, the revenue starts going up, the user base starts going up, but the cost basis starts crossing over the revenue and they're losing money on the service, ironically, as the service becomes more popular. So really unlimited scale, predictable performance always on, on a globally resilient basis and low tco. These are the four essential capabilities of any modern data platform. >>So then talk to me with those as the four main core functionalities of a modern data platform. How does aerospace deliver that? >>So we were built, as I said, from the from day one to operate at unlimited scale and deliver predictable performance. And then over the years as we work with customers, we build this incredible high availability capability which helps us deliver the always on, you know, operations. So we have customers who are, who have been on the platform 10 years with no downtime for example, right? So we are talking about an amazing continuum of high availability that we provide for customers who operate these, you know, globally resilient services. The key to our innovation here is what we call the hybrid memory architecture. So, you know, going a little bit technically deep here, essentially what we built out in our architecture is the ability on each node or each server to treat a bank of SSDs or solid state devices as essentially extended memory. So you're getting memory performance, but you're accessing these SSDs, you're not paying memory prices, but you're getting memory performance as a result of that. >>You can attach a lot more data to each node or each server in your distributed cluster. And when you kind of scale that across basically a distributed cluster you can do with aerospike, the same things at 60 to 80% lower server count and as a result 60 to 80% lower TCO compared to some of the other options that are available in the market. Then basically, as I said, that's the key kind of starting point to the innovation. We layer around capabilities like, you know, replication change, data notification, you know, synchronous and asynchronous replication. The ability to actually stretch a single cluster across multiple regions. So for example, if you're operating a global service, you can have a single aerospace cluster with one node in San Francisco, one northern New York, another one in London. And this would be basically seamlessly operating. So that, you know, this is strongly consistent. >>Very few no SQL data platforms are strongly consistent or if they are strongly consistent, they will actually suffer performance degradation. And what strongly consistent means is, you know, all your data is always available, it's guaranteed to be available, there is no data lost anytime. So in this configuration that I talked about, if the node in London goes down, your application still continues to operate, right? Your users see no kind of downtime and you know, when London comes up, it rejoins the cluster and everything is back to kind of the way it was before, you know, London left the cluster so to speak. So the op, the ability to do this globally resilient, highly available kind of model is really, really powerful. A lot of our customers actually use that kind of a scenario and we offer other deployment scenarios from a higher availability perspective. So everything starts with HMA or hybrid memory architecture and then we start building out a lot of these other capabilities around the platform. >>And then over the years, what our customers have guided us to do is as they're putting together a modern kind of data infrastructure, we don't live in a silo. So aerospace gets deployed with other technologies like streaming technologies or analytics technologies. So we built connectors into Kafka, pulsar, so that as you're ingesting data from a variety of data sources, you can ingest them at very high ingest speeds and store them persistently into Aerospike. Once the data is in Aerospike, you can actually run spark jobs across that data in a, in a multithreaded parallel fashion to get really insight from that data at really high, high throughput and high speed, >>High throughput, high speed, incredibly important, especially as today's landscape is increasingly distributed. Data centers, multiple public clouds, edge IOT devices, the workforce embracing more and more hybrid these days. How are you ex helping customers to extract more value from data while also lowering costs? Go into some customer examples cause I know you have some great ones. >>Yeah, you know, I think we have, we have built an amazing set of customers and customers actually use us for some really mission critical applications. So, you know, before I get into specific customer examples, let me talk to you about some of kind of the use cases which we see out there. We see a lot of aerospace being used in fraud detection. We see us being used in recommendations and since we use get used in customer data profiles or customer profiles, customer 360 stores, you know, multiplayer gaming and entertainment, these are kind of the repeated use case digital payments. We power most of the digital payment systems across the globe. Specific example from a, from a specific example perspective, the first one I would love to talk about is PayPal. So if you use PayPal today, then you know when you actually paying somebody your transaction is, you know, being sent through aero spike to really decide whether this is a fraudulent transaction or not. >>And when you do that, you know, you and I as a customer not gonna wait around for 10 seconds for PayPal to say yay or me, we expect, you know, the decision to be made in an instant. So we are powering that fraud detection engine at PayPal for every transaction that goes through PayPal before us, you know, PayPal was missing out on about 2% of their SLAs, which was essentially millions of dollars, which they were losing because, you know, they were letting transactions go through and taking the risk that it, it's not a fraudulent transaction with the aerospace. They can now actually get a much better sla and the data set on which they compute the fraud score has gone up by, you know, several factors. So by 30 x if you will. So not only has the data size that is powering the fraud engine actually grown up 30 x with Aerospike. Yeah. But they're actually making decisions in an instant for, you know, 99.95% of their transactions. So that's, >>And that's what we expect as consumers, right? We want to know that there's fraud detection on the swipe regardless of who we're interacting with. >>Yes. And so that's a, that's a really powerful use case and you know, it's, it's a great customer, great customer success story. The other one I would talk about is really Wayfair, right? From retail and you know, from e-commerce. So everybody knows Wayfair global leader in really, you know, online home furnishings and they use us to power their recommendations engine and you know, it's basically if you're purchasing this, people who bought this but also bought these five other things, so on and so forth, they have actually seen the card size at checkout go by up to 30% as a result of actually powering their recommendations in G by through Aerospike. And they, they were able to do this by reducing the server count by nine x. So on one ninth of the servers that were there before aerospace, they're now powering their recommendation engine and seeing card size checkout go up by 30%. Really, really powerful in terms of the business outcome and what we are able to, you know, drive at Wayfair >>Hugely powerful as a business outcome. And that's also what the consumer wants. The consumer is expecting these days to have a very personalized, relevant experience that's gonna show me if I bought this, show me something else that's related to that. We have this expectation that needs to be really fueled by technology. >>Exactly. And you know, another great example you asked about, you know, customer stories, Adobe, who doesn't know Adobe, you know, they, they're on a, they're on a mission to deliver the best customer experience that they can and they're talking about, you know, great customer 360 experience at scale and they're modernizing their entire edge compute infrastructure to support this. With Aerospike going to Aerospike, basically what they have seen is their throughput go up by 70%, their cost has been reduced by three x. So essentially doing it at one third of the cost while their annual data growth continues at, you know, about north of 30%. So not only is their data growing, they're able to actually reduce their cost to actually deliver this great customer experience by one third to one third and continue to deliver great customer 360 experience at scale. Really, really powerful example of how you deliver Customer 360 in a world which is dynamic and you know, on a dataset which is constantly growing at north, north of 30% in this case. >>Those are three great examples, PayPal, Wayfair, Adobe talking about, especially with Wayfair when you talk about increasing their cart checkout sizes, but also with Adobe increasing throughput by over 70%. I'm looking at my notes here. While data is growing at 32%, that's something that every organization has to contend with data growth is continuing to scale and scale and scale. >>Yep. I, I'll give you a fun one here. So, you know, you may not have heard about this company, it's called Dream 11 and it's a company based out of India, but it's a very, you know, it's a fun story because it's the world's largest fantasy sports platform and you know, India is a nation which is cricket crazy. So you know, when, when they have their premier league going on, you know, there's millions of users logged onto the dream alone platform building their fantasy lead teams and you know, playing on that particular platform, it has a hundred million users, a hundred million plus users on the platform, 5.5 million concurrent users and they have been growing at 30%. So they are considered a, an amazing success story in, in terms of what they have accomplished and the way they have architected their platform to operate at scale. And all of that is really powered by aerospace where think about that they are able to deliver all of this and support a hundred million users, 5.5 million concurrent users all with you know, 99 plus percent of their transactions completing in less than one millisecond. Just incredible success story. Not a brand that is you know, world renowned but at least you know from a what we see out there, it's an amazing success story of operating at scale. >>Amazing success story, huge business outcomes. Last question for you as we're almost out of time is talk a little bit about Aerospike aws, the partnership GRAVITON two better together. What are you guys doing together there? >>Great partnership. AWS has multiple layers in terms of partnerships. So you know, we engage with AWS at the executive level. They plan out, really roll out of new instances in partnership with us, making sure that, you know, those instance types work well for us. And then we just released support for Aerospike on the graviton platform and we just announced a benchmark of Aerospike running on graviton on aws. And what we see out there is with the benchmark, a 1.6 x improvement in price performance and you know, about 18% increase in throughput while maintaining a 27% reduction in cost, you know, on graviton. So this is an amazing story from a price performance perspective, performance per wat for greater energy efficiencies, which basically a lot of our customers are starting to kind of talk to us about leveraging this to further meet their sustainability target. So great story from Aero Aerospike and aws, not just from a partnership perspective on a technology and an executive level, but also in terms of what joint outcomes we are able to deliver for our customers. >>And it sounds like a great sustainability story. I wish we had more time so we would talk about this, but thank you so much for talking about the main capabilities of a modern data platform, what's needed, why, and how you guys are delivering that. We appreciate your insights and appreciate your time. >>Thank you very much. I mean, if, if folks are at reinvent next week or this week, come on and see us at our booth. We are in the data analytics pavilion. You can find us pretty easily. Would love to talk to you. >>Perfect. We'll send them there. So Ira, thank you so much for joining me on the program today. We appreciate your insights. >>Thank you Lisa. >>I'm Lisa Martin. You're watching The Cubes coverage of AWS Reinvent 2022. Thanks for watching.

Published Date : Dec 7 2022

SUMMARY :

Great to have you on the program. Great as always, to be on the cube. So, you know, every company these days has got to be a data company, the, you know, the plethora of data that is created, you know, surveys say that over the next three years you know, making decisions from it in real time and really operating it You know, you bring up a great point with respect to real time data access. on which ad to put in front of you and I so that we would click or engage with that particular the way they make decisions is based on a large data set because you know, larger data sets actually capabilities of a modern data platform that need to be delivered to meet demanding lot of the data platforms that, you know, some of these applications were built on have goes back to my first answer, which is, can you operate all of this at a cost So then talk to me with those as the four main core functionalities of deliver the always on, you know, operations. So that, you know, this is strongly consistent. the way it was before, you know, London left the cluster so to speak. Once the data is in Aerospike, you can actually run you ex helping customers to extract more value from data while also lowering So, you know, before I get into specific customer examples, let me talk to you about some 10 seconds for PayPal to say yay or me, we expect, you know, the decision to be made in an And that's what we expect as consumers, right? really powerful in terms of the business outcome and what we are able to, you know, We have this expectation that needs to be really fueled by technology. And you know, another great example you asked about, you know, especially with Wayfair when you talk about increasing their cart onto the dream alone platform building their fantasy lead teams and you know, What are you guys doing together there? So you know, we engage with AWS at the executive level. but thank you so much for talking about the main capabilities of a modern data platform, Thank you very much. So Ira, thank you so much for joining me on the program today. Thanks for watching.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Lisa MartinPERSON

0.99+

AWSORGANIZATION

0.99+

LondonLOCATION

0.99+

IraPERSON

0.99+

LisaPERSON

0.99+

60QUANTITY

0.99+

LuisaPERSON

0.99+

AdobeORGANIZATION

0.99+

San FranciscoLOCATION

0.99+

PayPalORGANIZATION

0.99+

30%QUANTITY

0.99+

70%QUANTITY

0.99+

10 secondsQUANTITY

0.99+

WayfairORGANIZATION

0.99+

35%QUANTITY

0.99+

AerospikeORGANIZATION

0.99+

each serverQUANTITY

0.99+

OneQUANTITY

0.99+

IndiaLOCATION

0.99+

27%QUANTITY

0.99+

nineQUANTITY

0.99+

10 yearsQUANTITY

0.99+

30 xQUANTITY

0.99+

32%QUANTITY

0.99+

99.95%QUANTITY

0.99+

twoQUANTITY

0.99+

oneQUANTITY

0.99+

awsORGANIZATION

0.99+

each nodeQUANTITY

0.99+

next weekDATE

0.99+

2025DATE

0.99+

fiveQUANTITY

0.99+

less than one millisecondQUANTITY

0.99+

millions of usersQUANTITY

0.99+

SubaruORGANIZATION

0.99+

bothQUANTITY

0.99+

secondQUANTITY

0.99+

first answerQUANTITY

0.99+

one thirdQUANTITY

0.99+

this weekDATE

0.99+

millions of dollarsQUANTITY

0.99+

over 70%QUANTITY

0.99+

SabuPERSON

0.99+

both usersQUANTITY

0.99+

threeQUANTITY

0.98+

todayDATE

0.98+

80%QUANTITY

0.98+

KafkaTITLE

0.98+

1.6 xQUANTITY

0.98+

northern New YorkLOCATION

0.98+

5.5 million concurrent usersQUANTITY

0.98+

GRAVITONORGANIZATION

0.98+

hundred million usersQUANTITY

0.97+

Dream 11ORGANIZATION

0.97+

TwoQUANTITY

0.97+

eachQUANTITY

0.97+

AerospikeTITLE

0.97+

third thingQUANTITY

0.96+

hundred million usersQUANTITY

0.96+

The CubesTITLE

0.95+

around 175 zetabytesQUANTITY

0.95+

Chuck Svoboda, Red Hat & Ted Stanton, AWS | AWS re:Invent 2022


 

>>Hey everyone, it's Vegas. Welcome back. We know you've been watching all day. We appreciate that. We always love being able to bring you some great content on the Cube Live from AWS Reinvented 22. Lisa Martin here with Paul Gill. And Paul, we've had such a great event. We've, I think we've done nearly 70 interviews since we started on the Cube on >>Monday night. I believe we just hit 70. Yeah, we just hit 70. You must feel like you've done half of >>Them. I really do. But we've been having great conversations. There's so much innovation going on at aws. Nothing slowed them down during the pandemic. We love also talking about the innovation, the flywheel that is their partner ecosystem. We're gonna have a great conversation about that >>Next. And as we've said, going back to day one, the energy of the show is remarkable. And here we are, we're getting late in the afternoon on day two, and there's just as much activity, just as much energy out there as, as the beginning of the first day. I have no doubt day three will be the >>Same. I agree. There's been no slowdown. We've got two guests here. We're gonna have a great conversation. Chuck Kubota joins us, senior Director of Cloud Services, GTM at Red Hat. Great to have you on the program. And Ted Stanton, global head of Sales, red Hat at IBM at aws. Welcome. >>Thanks for having us. >>How's the show going so far for you guys? >>It's a blur. Is it? Oh my gosh. >>Don't they all >>Blur? Well, yes, yes. I actually like last year a bit better. It was half the size. Yeah. And a lot easier to get around, but this is back to normal, so >>It is back to normal. Yeah. And and Ted, we're hearing north of 50,000 in-person attendees. I heard a, something I think was published. I heard the second hand over like 300,000 online attendees. This is maybe the biggest one we ever had. >>Yeah, yeah, I would agree. And frankly, it's my first time here, so I am massively impressed with the overall show, the meeting with partners, the meeting with customers, the announcements that were made, just fantastic. And >>If you remember back to two years ago, there were a lot of questions about whether in-person conferences would ever return and the volume that we used to see them. And that appears to be >>The case. I think we, I think we've answered, I think AWS has answered that for us, which I'm very pleased to see. Talk about some of those announcements. Ted. There's been so much that that's always one of the things we know and love about re men is there's slew of announcements. You were saying this morning, Paul, and then keynote, you lost, you stopped counting after I >>Lost 15, I lost count for 15. I think it was over 30 announcements this morning alone >>Where IBM and Red Hat are concern. What are some of the things that you are excited about in terms of some of the news, the innovation, and where the partnership is going? >>Well, definitely where the partnership is going, and I think even as we're speaking right now, is a keynote going on with Aruba, talking about some of the partners and the way in which we support partners and the new technologies and the new abilities for partners to take advantage of these technologies to frankly delight our customers is really what most excites me. >>Chuck, what about you? What's going on with Red Hat? You've been there a long time. Sales, everything, picking up customers, massively transforming. What are some of the things that you're seeing and that you're excited >>About? Yeah, I mean, first of all, you know, as customers have, you know, years ago discovered it's not competitively advantageous to manage their own data centers in most cases. So they would like to, you know, give that responsibility to Amazon. We're seeing them move further up the stack, right? So that would be more beyond the operating system, the application platforms like OpenShift. And now we have a managed application platform built on OpenShift called Red Out OpenShift service on AWS or Rosa. And then we're even further going up the stack with that with, we just announced this week that red out OpenShift data science is available in the AWS marketplace, runs on Rosa, helps break the land speed record to getting those data models out there that are so important to make, you know, help organizations become more, much more data driven to remain competitive themselves. >>So talk about Rosa and how it differs from previous iterations of, of OpenShift. I mean, you had, you had an online version of OpenShift several years ago. What's different about Rosa? >>Yeah, so the old OpenShift online that was several years old, right? For one thing, wasn't a joint partnership between Amazon and Red Hat. So we work together, right? Very closely on this, which is great. Also, the awesome thing about Rosa, you know, if you think about like OpenShift for, for, as a matter of fact, Amazon is the number one cloud that OpenShift runs on, right? So a lot of those customers want to take advantage of their committed spins, their EDPs, they want one bill. And so Rosa comes through the one bill comes through the marketplace, right? Which is, which is totally awesome. Not only that or financially backing OpenShift with a 99.95% financially backed sla, right? We didn't have that before either, right? >>When you say financially backed sla, >>What do you mean? That means that if we drop below 99.95% of availability, we're gonna give you some money back, right? So we're really, you know, for lack of better words, putting our money where our mouth is. Absolutely right. >>And, and some of the key reasons that we even work together to build Rosa was frankly we've had a mirror of customers and virtually every single region, every single industry been using OpenShift on AWS for years, right? And we listened to them, they wanted a more managed version of it and we worked very closely together. And what's really great about Rosa too is we built some really fantastic integrations with some of the AWS native services like API gateway, Amazon rds, private link, right? To make it very simple and easy for customers to get started. We talked a little bit about the marketplace, but it's also available just on the AWS console, right? So customers can get started in a pay as you go fashion start to use it. And if they wanna move into a more commitment, more of a set schedule of payments, they can move into a marketplace private offer. >>Chuck, talk about, how about Rosen? How is unlocking the power of technology like containers Kubernetes for customers while dialing down some of the complexity that's >>There? Yeah, I mean if you think about, you know, kind of what we did, you know, earlier on, right? If you think about like virtualization, how it dialed down the complexity of having to get something rack, get a blade rack, stack cable and cooled every time you wanted to deploy new application, right? So what we do is we, our message is this, we want developers to focus on what matters most. And that's build, deploy, and running applications. Most of our customers are not in the business of building app platforms. They're not in the business of building platforms like banks, I, you know, financials, right? Government, et cetera. Right? So what we do is we allow those developers that are, enable those developers that know Java and Node and springing and what have you, just to keep writing what they know. And then, you know, I don't wanna get too technical here, but get pushed through way and, and OpenShift takes care of the rest, builds it for them, runs it through a pipeline, a CICD pipeline, goes through all the testing and quality gates and things like that, deploys it, auto wires it up, you know, to monitoring which is what you need. >>And we have all kinds of other, you know, higher order services and an ecosystem around that. And oh, by the way, also plugging into and taking advantage of the services like rds, right? If you're gonna write an application, a tradition, a cloud native application on Amazon, you're probably going to wanna run it in Rosa and consuming one of those databases, right? Like RDS or Aurora, what have you. >>And I, and I would say it's not even just the customers. We have a variety of ecosystem partners, both of our partners leveraging it as well. We have solos built their executive management system that they go ahead and turn and sell to their customers, streamlines data and collects data from a variety of different sources. They decided, you know, it's better to run that on top of Rosa than manage OpenShift themselves. We've seen IBM restack a lot of their software, you know, to run on top of Rose, take advantage of that capabilities. So lots of partners as well as customers are taking advantage of fully managed stack of that OpenShift that that turnkey capabilities that it provides >>For, for OpenShift customers who wanna move to Rose, is that gonna be a one button migration? Is that gonna be, can they run both environments simultaneously and migrate over time? What kind of tools are you giving them? >>We have quite, we have quite a few migration tools such as conveyor, right? That's one of our projects, part of our migration application toolkit, right? And you know, with those, there's also partners like Trilio, right? Who can help move, you know, applications back 'em up. In fact, we're working on a pretty cool joint go to market with that right now. But generally speaking, the OpenShift experience that the customers that we have know and love and those who have never used OpenShift either are coming to it as well via Rosa, right? The experience is primarily the same. You don't have to really retrain your people, right? If anything, there's a reduction in operational cost. We increase developer productivity cuz we manage so much of the stack for you. We have SRE site reliability engineers that are backing the platform that proactively get ahead of anything that may go wrong. So maybe you don't even notice if something went wrong, wrong. And then also reactively fixing it if it comes to that, right? So, you know, all those kind of things that your customers are having to do on their own or hire a contractor, a consultant, what have to do Now we benefit from a managed offering in the cloud, right? In Amazon, right? And your developers still have that great experience too, like to say, you know, again, break the land speed record to prod. >>I >>Like that. And, and I would actually say migrations from OpenShift are on premise. OpenShift to Rosa maybe only represents about a third of the customers we have. About another third of the customers is frankly existing AWS customers. Maybe they're doing Kubernetes, do it, the, you know, do it themselves. We're struggling with some of the management of that. And so actually started to lean on top of using Rosa as a better platform to actually build upon their applications. And another third, we have quite a few customers that were frankly new OpenShift customers, new Red Hat customers and new AWS customers that were looking to build that next cloud native application. Lots of in the startup space that I've actually chosen to go with Rosa. >>It's funny you mention that because the largest Rosa consumer is new to OpenShift. Oh wow. Right. That's pretty, that's pretty powerful, right? It's not just for existing OpenShift customers, existing OpenShift. If you're running OpenShift, you know, on EC two, right. Self-managed, there's really no better way to run it than Rosa. You know, I think about whether this is the 10th year, 10 year anniversary of re right? Right. Yep. This is also the 10 year anniversary of OpenShift. Yeah, right. I think it one oh came out about sometime around a week, 10 years ago, right? When I came over to Red Hat in 2015. You know, if you, if you know your Kubernetes history was at July 25th, I think was when Kubernetes ga, July 25th, 2015 is when it g you have >>A good >>Memory. Well I remember those days back then, right? Those were fun, right? The, we had a, a large customer roll out on OpenShift three, which is our OpenShift RE based on Kubernetes. And where do you think they ran Amazon, right? Naturally. So, you know, as you move forward and, and, and OpenShift V four came out, the, reduces the operational complexity and becomes even more powerful through our operator framework and things like that. Now they revolved up to Rosa, right? And again, to help those customers focus on what matters most. And that's the applications, not the containers, not those underlying implementation and technical details while critically important, are not necessarily core to the business to most of our customers. >>Tremendous amount of innovation in OpenShift in a decade, >>Pardon me? >>Tremendous amount of innovation in OpenShift in the >>Last decade. Oh absolutely. And, and and tons more to come like every day. Right. I think what you're gonna see more of is, you know, as Kubernetes becomes more, more and more of the plumbing, you know, I call 'em productive abstractions on top of it, as you mentioned earlier, unlocking the power of these technologies while minimizing, even hiding the complexity of them so that you can just move fast Yeah. And safely move fast. >>I wanna be sure we get to, to marketplaces because you have been, red Hat has made, has really stepped up as commitment to the AWS marketplace. Why are you doing that now and how are, how are the marketplaces evolving as a channel for you? >>Well, cuz our customers want us to be there, right? I mean we, we, we are customer centric, customer first approach. Our customers want to buy through the marketplace. If you're an Amazon, if you're an Amazon customer, it's really easy for you to go procure software through the marketplace and have, instead of having to call up Red Hat and get on paper and write a second check, right? One stop shop one bill. Right? That is very, very attractive to our customers. Not only that, it opens up other ways to buy, you know, Ted mentioned earlier, you know, pay as you go buy the drink pricing using exactly what you need right now. Right? You know, AWS pioneered that, right? That provides that elasticity, you know, one of the core tenants at aws, AWS cloud, right? And we weren't able to get that with the traditional self-managed on Red Hat paper subscriptions. >>Talk a little bit about the go to market, what's, you talked about Ted, the kind of the three tenants of, of customer types. But talk a little bit about the gtm, the joint go to market, the joint engineering, so we get an understanding of how customers engage multiple options. >>Yeah, I mean, so if you think about go to market, you know, and the way I think of it is it's the intersection of a few areas, right? So the product and the product experience that we work together has to be so good that a customer or user, actually many start talk, talking about users now cuz it's self-service has a more than likely chance of getting their application to prod without ever talking to a person. Which is historically not what a lot of enterprise software companies are able to do, right? So that's one of those biggest things we do. We want customers to just be successful, turn it on, get going, be productive, right? At the same time we wanna to position the product in such a way that's differentiating that you can't get that experience anywhere else. And then part of that is ensuring that the education and enablement of our customers and our partners as such that they use the platform the right way to get as much value out of as possible. >>All backed by, you know, a very smart field that ensures that the customer get is making the right decision. A customer success org, this is attached to my org now that we can go on site and team with our customers to make sure that they get their first workloads up as quickly as possible, by the way, on our date, our, our dime. And then SRE and CEA backing that up with support and operational integrity to ensure that the service is always up and available so you can sleep, sleep, sleep well at night. Right? Right. One of our PMs of, of of Rosa, he says, what does he say? He says, Rosa allows organizations, enables organizations to go from 24 7 operations to nine to five innovation. Right? And that's powerful. That's how our customers remain more competitive running on Rosa with aws, >>When you're in customer conversations and you have 30 seconds, what are the key differentiators of the solution that you go boom, boom, boom, and they just go, I get it. >>Well, I mean, my 32nd elevator pitch, I think I've already said, I'll say it again. And that is OpenShift allows you to focus on your applications, build, deploy, and run applications while unlocking the power of the technologies like containers and Kubernetes and hiding or minimizing those complexities. So you can do as fast as possible. >>Mic drop Ted, question for you? Sure. Here we are at the, this is the, I leave the 11th reinvent, 10th anniversary, 11th event. You've been in the industry a long time. What is your biggest takeaway from what's been announced and discussed so far at Reinvent 22, where the AWS and and its partner ecosystem is concerned? If you had 30 seconds or if you had a bumper sticker to put on your DeLorean, what would you say? >>I would say we're continuing to innovate on behalf of our customers, but making sure we bring all of our partners and ecosystems along in that innovation. >>Yeah. I love the customer obsession on both sides there. Great work guides. Congrats on the 10th anniversary of OpenShift and so much evolution, the customer obsession is really clear for both of you guys. We appreciate your time. You're gonna have to come back now. Absolutely. Absolutely. Thank you. All right. Thank you so much for joining us. For our guests and for Paul Gillin. I'm Lisa Martin. You're watching The Cube, the leader in live enterprise and emerging tech coverage.

Published Date : Dec 1 2022

SUMMARY :

We always love being able to bring you some great content on the Cube Live from AWS Reinvented I believe we just hit 70. We love also talking about the innovation, And here we are, we're getting late in the afternoon on day two, and there's just as much activity, Great to have you on the program. It's a blur. And a lot easier to get around, I heard the second hand over overall show, the meeting with partners, the meeting with customers, the announcements And that appears to be of the things we know and love about re men is there's slew of announcements. I think it was over 30 announcements this morning alone What are some of the things that you are excited about in terms of some and the new abilities for partners to take advantage of these technologies to frankly delight our What are some of the things that you're seeing and Yeah, I mean, first of all, you know, as customers have, you know, years ago discovered I mean, you had, you had an online version of OpenShift several years ago. you know, if you think about like OpenShift for, for, as a matter of fact, So we're really, you know, for lack of better words, putting our money where our mouth is. And, and some of the key reasons that we even work together to build Rosa was frankly we've had a They're not in the business of building platforms like banks, I, you know, financials, And we have all kinds of other, you know, higher order services and an ecosystem around that. They decided, you know, it's better to run that on top of Rosa than manage OpenShift have that great experience too, like to say, you know, again, break the land speed record to prod. Lots of in the startup space that I've actually chosen to go with Rosa. It's funny you mention that because the largest Rosa consumer is new to OpenShift. And where do you think they ran Amazon, minimizing, even hiding the complexity of them so that you can just move fast Yeah. I wanna be sure we get to, to marketplaces because you have been, red That provides that elasticity, you know, Talk a little bit about the go to market, what's, you talked about Ted, the kind of the three tenants of, Yeah, I mean, so if you think about go to market, you know, and the way I think of it is it's the intersection of a few areas, and operational integrity to ensure that the service is always up and available so you can sleep, of the solution that you go boom, boom, boom, and they just go, I get it. And that is OpenShift allows you to focus on your applications, build, deploy, and run applications while If you had 30 seconds or if you had a bumper sticker to put on your of our partners and ecosystems along in that innovation. OpenShift and so much evolution, the customer obsession is really clear for both of you guys.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Ted StantonPERSON

0.99+

Lisa MartinPERSON

0.99+

AWSORGANIZATION

0.99+

Paul GillPERSON

0.99+

AmazonORGANIZATION

0.99+

Paul GillinPERSON

0.99+

IBMORGANIZATION

0.99+

PaulPERSON

0.99+

Chuck KubotaPERSON

0.99+

Red HatORGANIZATION

0.99+

2015DATE

0.99+

TedPERSON

0.99+

Chuck SvobodaPERSON

0.99+

July 25thDATE

0.99+

30 secondsQUANTITY

0.99+

red HatORGANIZATION

0.99+

two guestsQUANTITY

0.99+

99.95%QUANTITY

0.99+

July 25th, 2015DATE

0.99+

nineQUANTITY

0.99+

ChuckPERSON

0.99+

SREORGANIZATION

0.99+

two years agoDATE

0.99+

OpenShiftTITLE

0.99+

Monday nightDATE

0.99+

15QUANTITY

0.99+

bothQUANTITY

0.99+

JavaTITLE

0.99+

last yearDATE

0.98+

Red HatTITLE

0.98+

one billQUANTITY

0.98+

both sidesQUANTITY

0.98+

10th yearQUANTITY

0.98+

VegasLOCATION

0.98+

OneQUANTITY

0.98+

three tenantsQUANTITY

0.98+

CEAORGANIZATION

0.98+

The CubeTITLE

0.98+

RosaORGANIZATION

0.98+

oneQUANTITY

0.98+

NodeTITLE

0.98+

first timeQUANTITY

0.98+

one buttonQUANTITY

0.97+

first dayQUANTITY

0.97+

10th anniversaryQUANTITY

0.97+

second checkQUANTITY

0.97+

pandemicEVENT

0.97+

10 years agoDATE

0.97+

Reinvent 22EVENT

0.97+

this weekDATE

0.96+

Haseeb Budhani & Anant Verma | AWS re:Invent 2022 - Global Startup Program


 

>> Well, welcome back here to the Venetian. We're in Las Vegas. It is Wednesday, Day 2 of our coverage here of AWS re:Invent, 22. I'm your host, John Walls on theCUBE and it's a pleasure to welcome in two more guests as part of our AWS startup showcase, which is again part of the startup program globally at AWS. I've got Anant Verma, who is the Vice President of Engineering at Elation. Anant, good to see you, sir. >> Good to see you too. >> Good to be with us. And Haseeb Budhani, who is the CEO and co-founder of Rafay Systems. Good to see you, sir. >> Good to see you again. >> Thanks for having, yeah. A cuber, right? You've been on theCUBE? >> Once or twice. >> Many occasions. But a first timer here, as a matter of fact, glad to have you aboard. All right, tell us about Elation. First for those whom who might not be familiar with what you're up to these days, just give it a little 30,000 foot level. >> Sure, sure. So, yeah, Elation is a startup and a leader in the enterprise data intelligence space. That really includes a lot of different things including data search, data discovery, metadata management, data cataloging, data governance, data policy management, a lot of different things that companies want to do with the hoards of data that they have and Elation, our product is the answer to solve some of those problems. We've been doing pretty good. Elation is in running for about 10 years now. We are a series A startup now, we just raised around a few, a couple of months ago. We are already a hundred million plus in revenue. So. >> John: Not shabby. >> Yeah, it's a big benchmark for companies to, startup companies, to cross that milestone. So, yeah. >> And what's the relationship? I know Rafay and you have worked together, in fact, the two of you have, which I find interesting, you have a chance, you've been meeting on Zoom for a number of months, as many of us have it meeting here for the first time. But talk about that relationship with Rafay. >> Yeah, so I actually joined Elation in January and this is part of the move of Elation to a more cloud native solution. So, we have been running on AWS since last year and as part of making our solution more cloud native, we have been looking to containerize our services and run them on Kubernetes. So, that's the reason why I joined Elation in the first place to kind of make sure that this migration or move to a cloud native actually works out really well for us. This is a big move for the companies. A lot of companies that have done in the past, including, you know, Confluent or MongoDB, when they did that, they actually really reap great benefits out of that. So to do that, of course, you know, as we were looking at Kubernetes as a solution, I was personally more looking for a way to speed up things and get things out in production as fast as possible. And that's where I think, Janeb introduced us... >> That's right. >> Two of us. I think we share the same investor actually, so that's how we found each other. And yeah, it was a pretty simple decision in terms of, you know, getting the solution, figuring it out if it's useful for us and then of course, putting it out there. >> So you've hit the keyword, Kubernetes, right? And, so if you would to honestly jump in here, there are challenges, right? That you're trying to help them solve and you're working on the Kubernetes platform. So, you know, just talk about that and how that's influenced the work that the two of you are doing together. >> Absolutely. So, the business we're in is to help companies who adopt Kubernetes as an orchestration platform do it easier, faster. It's a simple story, right? Everybody is using Kubernetes, but it turns out that Kubernetes is actually not that easy to to operationalize, playing in a sandbox is one thing. Operationalizing this at a certain level of scale is not easy. Now, we have a lot of enterprise customers who are deploying their own applications on Kubernetes, and we've had many, many of them. But when it comes to a company like Elation, it's a more complicated problem set because they're taking a very complex application, their application, but then they're providing that as a service to their customers. So then we have a chain of customers we have to make happy. Anant's team, the platform organization, his internal customers who are the developers who are deploying applications, and then, the company has customers, we have to make sure that they get a good experience as they consume this application that happens to be running on Kubernetes. So that presented a really interesting challenge, right? How do we make this partnership successful? So I will say that, we've learned a lot from each other, right? And, end of the day, the goal is, my customer, Anant's specifically, right? He has to feel that, this investment, 'cause he has to pay us money, we would like to get paid. >> John: Sure. (John laughs) >> It reduces his internal expenditure because otherwise he'd have to do it himself. And most importantly, it's not the money part, it's that he can get to a certain goalpost significantly faster because the invention time for Kubernetes management, the platform that you have to build to run Kubernetes is a very complex exercise. It took us four and a half years to get here. You want to do that again, as a company, right? Why? Why do you want to do that? We, as Rafay, the way I think about what we deliver, yes, we sell a product, but to what end? The product is the what, the why, is that every enterprise, every ISV is building a Kubernetes platform in house. They shouldn't, they shouldn't need to. They should be able to consume that as a service. They consume the Kubernetes engine the EKS is Amazon's Kubernetes, they consume that as an engine. But the management layer was a gap in the market. How do I operationalize Kubernetes? And what we are doing is we're going to, you know, the Anant said. So the warden saying, "Hey you, your team is technical, you understand the problem set. Would you like to build it or would you rather consume this as a service so you can go faster?" And, resoundingly the answer is, I don't want to do this anymore. I wouldn't allow to buy. >> Well, you know, as Haseeb is saying, speed is again, when we started talking, it only took us like a couple of months to figure out if Rafay is the right solution for us. And so we ended up purchasing Rafay in April. We launched our product based on Rafay in Kubernetes, in EKS in August. >> August. >> So that's about four months. I've done some things like this before. It takes a couple of years just to sort of figure out, how do you really work with Kubernetes, right? In a production at a large scale. Right now, we are running about a 600 node cluster on Rafay and that's serving our customers. Like, one of the biggest thing that's actually happening on December 8th is we are running what we call a virtual hands on lab. >> A virtual? >> Hands on lab. >> Okay. >> For Elation. And they're probably going to be about 500 people is going to be attending it. It's like a webinar style. But what we do in that hands on lab is we will spin up an Elation instance for each attendee, right on the spot. Okay? Now, think about this enterprise software running and people just sign up for it and it's there for you, right on the spot. And that's the beauty of the software that we have been building. There's the beauty of the work that Rafay has helped us to do over the last few months. >> Okay. >> I think we need to charge them more money, I'm getting from this congregation. I'm going to go work on that. >> I'm going to let the two of you work that out later. All right. I don't want to get in the way of a big deal. But you mentioned that, we heard about it earlier that, it's you that would offer to your cert, to your clients, these services. I assume they have their different levels of tolerance and their different challenges, right? They've got their own complexities and their own organizational barriers. So how are you juggling that end of it? Because you're kind of learning as, well, not learning, but you're experiencing some of the thing. >> Right. Same things. And yet you've got this other client base that has a multitude of experiences that they're going through. >> Right. So I think, you know a lot of our customers, they are large enterprise companies. They got a whole bunch of data that they want work with us. So one of the thing that we have learned over the past few years is that we used to actually ship our software to the customers and then they would manage it for their privacy security reasons. But now, since we're running in the cloud, they're really happy about that because they don't need to juggle with the infrastructure and the software management and upgrades and things like that, we do it for them, right? And, that's the speed for them because now they are only interested in solving the problems with the data that they're working with. They don't need to deal with all these software management issues, right? So that frees our customers up to do the thing that they want to do. Of course it makes our job harder and I'm sure in turn it makes his job harder. >> We get a short end of the stick, for sure. >> That's why he is going to get more money. >> Exactly. >> Yeah, this is a great conversation. >> No, no, no. We'll talk about that. >> So, let's talk about the cloud then. How, in terms of being the platform where all this is happening and AWS, about your relationship with them as part of the startup program and what kind of value that brings to you, what does that do for you when you go out and are looking for work and what kind of cache that brings to you >> Talk about the AWS? >> Yes, sir. >> Okay. Well, so, the thing is really like of course AWS, a lot of programs in terms of making sure that as we move our customers into AWS, they can give us some, I wouldn't call it discount, but there's some credits that you can get as you move your workloads onto AWS. So that's a really great program. Our customers love it. They want us to do more things with AWS. It's a pretty seamless way for us to, as we were talking about or thinking about moving into the cloud, AWS was our number one choice and that's the only cloud that we are in, today. We're not going to go to any other place. >> That's it. >> Yeah. >> How would you characterize? I mean, we've already heard, from one side of the fence here, but. >> Absolutely. So for us, AWS is a make or break partner, frankly. As the EKS team knows very well, we support Azure's Kubernetes and Google's Kubernetes and the community Kubernetes as well. But the number of customers on our platform who are AWS native, either a hundred percent or a large percentage is, you know, that's the majority of our customer base. >> John: Yeah. >> And AWS has made it very easy for us in a variety of ways to make us successful and our customers successful. So Anant mentioned the credit program they have which is very useful 'cause we can, you know, readily kind of bring a customer to try things out and they can do that at no cost, right? So they can spin up infrastructure, play with things and AWS will cover the cost, as one example. So that's a really good thing. Beyond that, there are multiple programs at AWS, ISV accelerate, et cetera. That, you know, you got to over time, you kind of keep getting taller and taller. And you keep getting on bigger and bigger. And as you make progress, what I'm finding is that there's a great ecosystem of support that they provide us. They introduce us to customers, they help us, you know, think through architecture issues. We get access to their roadmap. We work very, very closely with the guest team, for example. Like the, the GM for Kubernetes at AWS is a gentleman named Barry Cooks who was my sponsor, right? So, we spend a lot of time together. In fact, right after this, I'm going to be spending time with him because look, they take us seriously as a partner. They spend time with us because end of the day, they understand that if they make their partners, in this case, Rafay successful, at the end of the day helps the customer, right? Anant's customer, my customer, their AWS customers, also. So they benefit because we are collectively helping them solve a problem faster. The goal of the cloud is to help people modernize, right? Reduce operational costs because data centers are expensive, right? But then if these complex solutions this is an enterprise product, Kubernetes, at the enterprise level is a complex problem. If we don't collectively work together to save the customer effort, essentially, right? Reduce their TCO for whatever it is they're doing, right? Then the cost of the cloud is too high. And AWS clearly understands and appreciates that and that's why they are going out of their air, frankly, to make us successful and make other companies successful in the startup program. >> Well. >> I would just add a couple of things there. Yeah, so, you know, cloud is not new. It's been there for a while. You know, people used to build things on their own. And so what AWS has really done is they have advanced technology enough where everything is really simple as just turning on a switch and using it, right? So, just a recent example, and I, by the way, I love managed services, right? So the reason is really because I don't need to put my own people to build and manage those things, right? So, if you want to use a search, they got the open search, if you want to use caching, they got elastic caching and stuff like that. So it's really simple and easy to just pick and choose which services you want to use and they're ready to be consumed right away. And that's the beautiful, and that that's how we can move really fast and get things done. >> Ease of use, right? Efficiency, saving money. It's a winning combination. Thanks for sharing this story, appreciate. Anant, Haseeb thanks for being with us. >> Yeah, thank you so much having us. >> We appreciate it. >> Thank you so much. >> You have been a part of the global startup program at AWS and startup showcase. Proud to feature this great collaboration. I'm John Walls. You're watching theCUBE, which is of course the leader in high tech coverage.

Published Date : Nov 30 2022

SUMMARY :

and it's a pleasure to Good to be with us. Thanks for having, yeah. glad to have you aboard. and Elation, our product is the answer startup companies, to the two of you have, So, that's the reason why I joined Elation you know, getting the solution, that the two of you are doing together. And, end of the day, the goal is, John: Sure. the platform that you have to build the right solution for us. Like, one of the biggest thing And that's the beauty of the software I'm going to go work on that. of you work that out later. that they're going through. So one of the thing that we have learned of the stick, for sure. going to get more money. We'll talk about that. and what kind of cache that brings to you and that's the only cloud from one side of the fence here, but. and the community Kubernetes as well. The goal of the cloud is to and that that's how we Ease of use, right? the global startup program

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
AWSORGANIZATION

0.99+

AmazonORGANIZATION

0.99+

Haseeb BudhaniPERSON

0.99+

JohnPERSON

0.99+

John WallsPERSON

0.99+

Barry CooksPERSON

0.99+

AprilDATE

0.99+

RafayPERSON

0.99+

December 8thDATE

0.99+

Anant VermaPERSON

0.99+

JanuaryDATE

0.99+

Las VegasLOCATION

0.99+

ElationORGANIZATION

0.99+

AnantPERSON

0.99+

twoQUANTITY

0.99+

AugustDATE

0.99+

Rafay SystemsORGANIZATION

0.99+

TwoQUANTITY

0.99+

last yearDATE

0.99+

FirstQUANTITY

0.99+

twiceQUANTITY

0.99+

four and a half yearsQUANTITY

0.99+

JanebPERSON

0.99+

firstQUANTITY

0.99+

RafayORGANIZATION

0.99+

HaseebPERSON

0.99+

OnceQUANTITY

0.99+

one exampleQUANTITY

0.99+

EKSORGANIZATION

0.98+

oneQUANTITY

0.98+

first timeQUANTITY

0.98+

GoogleORGANIZATION

0.98+

VenetianLOCATION

0.97+

ConfluentORGANIZATION

0.97+

one sideQUANTITY

0.97+

30,000 footQUANTITY

0.97+

AnantORGANIZATION

0.97+

about four monthsQUANTITY

0.97+

KubernetesORGANIZATION

0.96+

each attendeeQUANTITY

0.96+

one thingQUANTITY

0.96+

two more guestsQUANTITY

0.95+

KubernetesTITLE

0.95+

about 10 yearsQUANTITY

0.93+

Wednesday, Day 2DATE

0.92+

about 500 peopleQUANTITY

0.91+

todayDATE

0.91+

ZoomORGANIZATION

0.9+

Subbu Iyer


 

>> And it'll be the fastest 15 minutes of your day from there. >> In three- >> We go Lisa. >> Wait. >> Yes >> Wait, wait, wait. I'm sorry I didn't pin the right speed. >> Yap, no, no rush. >> There we go. >> The beauty of not being live. >> I think, in the background. >> Fantastic, you all ready to go there, Lisa? >> Yeah. >> We are speeding around the horn and we are coming to you in five, four, three, two. >> Hey everyone, welcome to theCUBE's coverage of AWS re:Invent 2022. Lisa Martin here with you with Subbu Iyer one of our alumni who's now the CEO of Aerospike. Subbu, great to have you on the program. Thank you for joining us. >> Great as always to be on theCUBE Lisa, good to meet you. >> So, you know, every company these days has got to be a data company, whether it's a retailer, a manufacturer, a grocer, a automotive company. But for a lot of companies, data is underutilized yet a huge asset that is value added. Why do you think companies are struggling so much to make data a value added asset? >> Well, you know, we see this across the board. When I talk to customers and prospects there is a desire from the business and from IT actually to leverage data to really fuel newer applications, newer services newer business lines if you will, for companies. I think the struggle is one, I think one the, the plethora of data that is created. Surveys say that over the next three years data is going to be you know by 2025 around 175 zettabytes, right? A hundred and zettabytes of data is going to be created. And that's really a growth of north of 30% year over year. But the more important and the interesting thing is the real time component of that data is actually growing at, you know 35% CAGR. And what enterprises desire is decisions that are made in real time or near real time. And a lot of the challenges that do exist today is that either the infrastructure that enterprises have in place was never built to actually manipulate data in real time. The second is really the ability to actually put something in place which can handle spikes yet be cost efficient to fuel. So you can build for really peak loads, but then it's very expensive to operate that particular service at normal loads. So how do you build something which actually works for you for both users, so to speak. And the last point that we see out there is even if you're able to, you know bring all that data you don't have the processing capability to run through that data. So as a result, most enterprises struggle with one capturing the data, making decisions from it in real time and really operating it at the cost point that they need to operate it at. >> You know, you bring up a great point with respect to real time data access. And I think one of the things that we've learned the last couple of years is that access to real time data it's not a nice to have anymore. It's business critical for organizations in any industry. Talk about that as one of the challenges that organizations are facing. >> Yeah, when we started Aerospike, right? When the company started, it started with the premise that data is going to grow, number one exponentially. Two, when applications open up to the internet there's going to be a flood of users and demands on those applications. And that was true primarily when we started the company in the ad tech vertical. So ad tech was the first vertical where there was a lot of data both on the supply set and the demand side from an inventory of ads that were available. And on the other hand, they had like microseconds or milliseconds in which they could make a decision on which ad to put in front of you and I so that we would click or engage with that particular ad. But over the last three to five years what we've seen is as digitization has actually permeated every industry out there the need to harness data in real time is pretty much present in every industry. Whether that's retail, whether that's financial services telecommunications, e-commerce, gaming and entertainment. Every industry has a desire. One, the innovative companies, the small companies rather are innovating at a pace and standing up new businesses to compete with the larger companies in each of these verticals. And the larger companies don't want to be left behind. So they're standing up their own competing services or getting into new lines of business that really harness and are driven by real time data. So this compelling pressures, one, you know customer experience is paramount and we as customers expect answers in you know an instant, in real time. And on the other hand, the way they make decisions is based on a large data set because you know larger data sets actually propel better decisions. So there's competing pressures here which essentially drive the need one from a business perspective, two from a customer perspective to harness all of this data in real time. So that's what's driving an incessant need to actually make decisions in real or near real time. >> You know, I think one of the things that's been in short supply over the last couple of years is patience. We do expect as consumers whether we're in our business lives our personal lives that we're going to be getting be given information and data that's relevant it's personal to help us make those real time decisions. So having access to real time data is really business critical for organizations across any industries. Talk about some of the main capabilities that modern data applications and data platforms need to have. What are some of the key capabilities of a modern data platform that need to be delivered to meet demanding customer expectations? >> So, you know, going back to your initial question Lisa around why is data really a high value but underutilized or under-leveraged asset? One of the reasons we see is a lot of the data platforms that, you know, some of these applications were built on have been then around for a decade plus. And they were never built for the needs of today, which is really driving a lot of data and driving insight in real time from a lot of data. So there are four major capabilities that we see that are essential ingredients of any modern data platform. One is really the ability to, you know, operate at unlimited scale. So what we mean by that is really the ability to scale from gigabytes to even petabytes without any degradation in performance or latency or throughput. The second is really, you know, predictable performance. So can you actually deliver predictable performance as your data size grows or your throughput grows or your concurrent user on that application of service grows? It's really easy to build an application that operates at low scale or low throughput or low concurrency but performance usually starts degrading as you start scaling one of these attributes. The third thing is the ability to operate and always on globally resilient application. And that requires a really robust data platform that can be up on a five nine basis globally, can support global distribution because a lot of these applications have global users. And the last point is, goes back to my first answer which is, can you operate all of this at a cost point which is not prohibitive but it makes sense from a TCO perspective. 'Cause a lot of times what we see is people make choices of data platforms and as ironically their service or applications become more successful and more users join their journey the revenue starts going up, the user base starts going up but the cost basis starts crossing over the revenue and they're losing money on the service, ironically as the service becomes more popular. So really unlimited scale predictable performance always on a globally resilient basis and low TCO. These are the four essential capabilities of any modern data platform. >> So then talk to me with those as the four main core functionalities of a modern data platform, how does Aerospike deliver that? >> So we were built, as I said from day one to operate at unlimited scale and deliver predictable performance. And then over the years as we work with customers we build this incredible high availability capability which helps us deliver the always on, you know, operations. So we have customers who are who have been on the platform 10 years with no downtime for example, right? So we are talking about an amazing continuum of high availability that we provide for customers who operate these, you know globally resilient services. The key to our innovation here is what we call the hybrid memory architecture. So, you know, going a little bit technically deep here essentially what we built out in our architecture is the ability on each node or each server to treat a bank of SSDs or solid-state devices as essentially extended memory. So you're getting memory performance but you're accessing these SSDs. You're not paying memory prices but you're getting memory performance. As a result of that you can attach a lot more data to each node or each server in a distributed cluster. And when you kind of scale that across basically a distributed cluster you can do with Aerospike the same things at 60 to 80% lower server count. And as a result 60 to 80% lower TCO compared to some of the other options that are available in the market. Then basically, as I said that's the key kind of starting point to the innovation. We lay around capabilities like, you know replication, change data notification, you know synchronous and asynchronous replication. The ability to actually stretch a single cluster across multiple regions. So for example, if you're operating a global service you can have a single Aerospike cluster with one node in San Francisco one node in New York, another one in London and this would be basically seamlessly operating. So that, you know, this is strongly consistent, very few no SQL data platforms are strongly consistent or if they are strongly consistent they will actually suffer performance degradation. And what strongly consistent means is, you know all your data is always available it's guaranteed to be available there is no data lost any time. So in this configuration that I talked about if the node in London goes down your application still continues to operate, right? Your users see no kind of downtime and you know, when London comes up it rejoins the cluster and everything is back to kind of the way it was before, you know London left the cluster so to speak. So the ability to do this globally resilient highly available kind of model is really, really powerful. A lot of our customers actually use that kind of a scenario and we offer other deployment scenarios from a higher availability perspective. So everything starts with HMA or Hybrid Memory Architecture and then we start building a lot of these other capabilities around the platform. And then over the years what our customers have guided us to do is as they're putting together a modern kind of data infrastructure, we don't live in the silo. So Aerospike gets deployed with other technologies like streaming technologies or analytics technologies. So we built connectors into Kafka, Pulsar, so that as you're ingesting data from a variety of data sources you can ingest them at very high ingest speeds and store them persistently into Aerospike. Once the data is in Aerospike you can actually run Spark jobs across that data in a multi-threaded parallel fashion to get really insight from that data at really high throughput and high speed. >> High throughput, high speed, incredibly important especially as today's landscape is increasingly distributed. Data centers, multiple public clouds, Edge, IoT devices, the workforce embracing more and more hybrid these days. How are you helping customers to extract more value from data while also lowering costs? Go into some customer examples 'cause I know you have some great ones. >> Yeah, you know, I think, we have built an amazing set of customers and customers actually use us for some really mission critical applications. So, you know, before I get into specific customer examples let me talk to you about some of kind of the use cases which we see out there. We see a lot of Aerospike being used in fraud detection. We see us being used in recommendations engines we get used in customer data profiles, or customer profiles, Customer 360 stores, you know multiplayer gaming and entertainment. These are kind of the repeated use case, digital payments. We power most of the digital payment systems across the globe. Specific example from a specific example perspective the first one I would love to talk about is PayPal. So if you use PayPal today, then you know when you're actually paying somebody your transaction is, you know being sent through Aerospike to really decide whether this is a fraudulent transaction or not. And when you do that, you know, you and I as a customer are not going to wait around for 10 seconds for PayPal to say yay or nay. We expect, you know, the decision to be made in an instant. So we are powering that fraud detection engine at PayPal. For every transaction that goes through PayPal. Before us, you know, PayPal was missing out on about 2% of their SLAs which was essentially millions of dollars which they were losing because, you know, they were letting transactions go through and taking the risk that it's not a fraudulent transaction. With Aerospike they can now actually get a much better SLA and the data set on which they compute the fraud score has gone up by you know, several factors. So by 30X if you will. So not only has the data size that is powering the fraud engine actually gone up 30X with Aerospike but they're actually making decisions in an instant for, you know, 99.95% of their transactions. So that's- >> And that's what we expect as consumers, right? We want to know that there's fraud detection on the swipe regardless of who we're interacting with. >> Yes, and so that's a really powerful use case and you know, it's a great customer success story. The other one I would talk about is really Wayfair, right, from retail and you know from e-commerce. So everybody knows Wayfair global leader in really in online home furnishings and they use us to power their recommendations engine. And you know it's basically if you're purchasing this, people who bought this also bought these five other things, so on and so forth. They have actually seen their cart size at checkout go up by up to 30%, as a result of actually powering their recommendations engine through Aerospike. And they were able to do this by reducing the server count by 9X. So on one ninth of the servers that were there before Aerospike, they're now powering their recommendations engine and seeing cart size checkout go up by 30%. Really, really powerful in terms of the business outcome and what we are able to, you know, drive at Wayfair. >> Hugely powerful as a business outcome. And that's also what the consumer wants. The consumer is expecting these days to have a very personalized relevant experience that's going to show me if I bought this show me something else that's related to that. We have this expectation that needs to be really fueled by technology. >> Exactly, and you know, another great example you asked about you know, customer stories, Adobe. Who doesn't know Adobe, you know. They're on a mission to deliver the best customer experience that they can. And they're talking about, you know great Customer 360 experience at scale and they're modernizing their entire edge compute infrastructure to support this with Aerospike. Going to Aerospike basically what they have seen is their throughput go up by 70%, their cost has been reduced by 3X. So essentially doing it at one third of the cost while their annual data growth continues at, you know about north of 30%. So not only is their data growing they're able to actually reduce their cost to actually deliver this great customer experience by one third to one third and continue to deliver great Customer 360 experience at scale. Really, really powerful example of how you deliver Customer 360 in a world which is dynamic and you know on a data set which is constantly growing at north of 30% in this case. >> Those are three great examples, PayPal, Wayfair, Adobe, talking about, especially with Wayfair when you talk about increasing their cart checkout sizes but also with Adobe increasing throughput by over 70%. I'm looking at my notes here. While data is growing at 32%, that's something that every organization has to contend with data growth is continuing to scale and scale and scale. >> Yap, I'll give you a fun one here. So, you know, you may not have heard about this company it's called Dream11 and it's a company based out of India but it's a very, you know, it's a fun story because it's the world's largest fantasy sports platform. And you know, India is a nation which is cricket crazy. So you know, when they have their premier league going on and there's millions of users logged onto the Dream11 platform building their fantasy league teams and you know, playing on that particular platform, it has a hundred million users a hundred million plus users on the platform, 5.5 million concurrent users and they have been growing at 30%. So they are considered an amazing success story in terms of what they have accomplished and the way they have architected their platform to operate at scale. And all of that is really powered by Aerospike. Think about that they're able to deliver all of this and support a hundred million users 5.5 million concurrent users all with, you know 99 plus percent of their transactions completing in less than one millisecond. Just incredible success story. Not a brand that is, you know, world renowned but at least you know from what we see out there it's an amazing success story of operating at scale. >> Amazing success story, huge business outcomes. Last question for you as we're almost out of time is talk a little bit about Aerospike AWS the partnership Graviton2 better together. What are you guys doing together there? >> Great partnership. AWS has multiple layers in terms of partnerships. So, you know, we engage with AWS at the executive level. They plan out, really roll out of new instances in partnership with us, making sure that, you know those instance types work well for us. And then we just released support for Aerospike on the Graviton platform and we just announced a benchmark of Aerospike running on Graviton on AWS. And what we see out there is with the benchmark a 1.6X improvement in price performance. And you know about 18% increase in throughput while maintaining a 27% reduction in cost, you know, on Graviton. So this is an amazing story from a price performance perspective, performance per watt for greater energy efficiencies, which basically a lot of our customers are starting to kind of talk to us about leveraging this to further meet their sustainability target. So great story from Aerospike and AWS not just from a partnership perspective on a technology and an executive level, but also in terms of what joint outcomes we are able to deliver for our customers. >> And it sounds like a great sustainability story. I wish we had more time so we would talk about this but thank you so much for talking about the main capabilities of a modern data platform, what's needed, why, and how you guys are delivering that. We appreciate your insights and appreciate your time. >> Thank you very much. I mean, if folks are at re:Invent next week or this week come on and see us at our booth and we are in the data analytics pavilion and you can find us pretty easily. Would love to talk to you. >> Perfect, we'll send them there. Subbu Iyer, thank you so much for joining me on the program today. We appreciate your insights. >> Thank you Lisa. >> I'm Lisa Martin, you're watching theCUBE's coverage of AWS re:Invent 2022. Thanks for watching. >> Clear- >> Clear cutting. >> Nice job, very nice job.

Published Date : Nov 25 2022

SUMMARY :

the fastest 15 minutes I'm sorry I didn't pin the right speed. and we are coming to you in Subbu, great to have you on the program. Great as always to be on So, you know, every company these days And a lot of the challenges that access to real time data to put in front of you and I and data platforms need to have. One of the reasons we see is So the ability to do How are you helping customers let me talk to you about fraud detection on the swipe and you know, it's a great We have this expectation that needs to be Exactly, and you know, with Wayfair when you talk So you know, when they have What are you guys doing together there? And you know about 18% and how you guys are delivering that. and you can find us pretty easily. for joining me on the program today. of AWS re:Invent 2022.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
AWSORGANIZATION

0.99+

Lisa MartinPERSON

0.99+

60QUANTITY

0.99+

LondonLOCATION

0.99+

LisaPERSON

0.99+

PayPalORGANIZATION

0.99+

New YorkLOCATION

0.99+

15 minutesQUANTITY

0.99+

3XQUANTITY

0.99+

2025DATE

0.99+

WayfairORGANIZATION

0.99+

35%QUANTITY

0.99+

AdobeORGANIZATION

0.99+

30%QUANTITY

0.99+

99.95%QUANTITY

0.99+

10 secondsQUANTITY

0.99+

San FranciscoLOCATION

0.99+

30XQUANTITY

0.99+

70%QUANTITY

0.99+

32%QUANTITY

0.99+

27%QUANTITY

0.99+

1.6XQUANTITY

0.99+

each serverQUANTITY

0.99+

twoQUANTITY

0.99+

oneQUANTITY

0.99+

OneQUANTITY

0.99+

AerospikeORGANIZATION

0.99+

millions of dollarsQUANTITY

0.99+

IndiaLOCATION

0.99+

SubbuPERSON

0.99+

9XQUANTITY

0.99+

fiveQUANTITY

0.99+

99 plus percentQUANTITY

0.99+

first answerQUANTITY

0.99+

third thingQUANTITY

0.99+

less than one millisecondQUANTITY

0.99+

10 yearsQUANTITY

0.99+

this weekDATE

0.99+

Subbu IyerPERSON

0.99+

one thirdQUANTITY

0.99+

millions of usersQUANTITY

0.99+

over 70%QUANTITY

0.98+

both usersQUANTITY

0.98+

Dream11ORGANIZATION

0.98+

80%QUANTITY

0.98+

todayDATE

0.98+

GravitonTITLE

0.98+

each nodeQUANTITY

0.98+

secondQUANTITY

0.98+

bothQUANTITY

0.98+

threeQUANTITY

0.98+

fourQUANTITY

0.98+

TwoQUANTITY

0.98+

one nodeQUANTITY

0.98+

hundred million usersQUANTITY

0.98+

first verticalQUANTITY

0.97+

about 2%QUANTITY

0.97+

AerospikeTITLE

0.97+

single clusterQUANTITY

0.96+

Ian Colle, AWS | SuperComputing 22


 

(lively music) >> Good morning. Welcome back to theCUBE's coverage at Supercomputing Conference 2022, live here in Dallas. I'm Dave Nicholson with my co-host Paul Gillin. So far so good, Paul? It's been a fascinating morning Three days in, and a fascinating guest, Ian from AWS. Welcome. >> Thanks, Dave. >> What are we going to talk about? Batch computing, HPC. >> We've got a lot, let's get started. Let's dive right in. >> Yeah, we've got a lot to talk about. I mean, first thing is we recently announced our batch support for EKS. EKS is our Kubernetes, managed Kubernetes offering at AWS. And so batch computing is still a large portion of HPC workloads. While the interactive component is growing, the vast majority of systems are just kind of fire and forget, and we want to run thousands and thousands of nodes in parallel. We want to scale out those workloads. And what's unique about our AWS batch offering, is that we can dynamically scale, based upon the queue depth. And so customers can go from seemingly nothing up to thousands of nodes, and while they're executing their work they're only paying for the instances while they're working. And then as the queue depth starts to drop and the number of jobs waiting in the queue starts to drop, then we start to dynamically scale down those resources. And so it's extremely powerful. We see lots of distributed machine learning, autonomous vehicle simulation, and traditional HPC workloads taking advantage of AWS Batch. >> So when you have a Kubernetes cluster does it have to be located in the same region as the HPC cluster that's going to be doing the batch processing, or does the nature of batch processing mean, in theory, you can move something from here to somewhere relatively far away to do the batch processing? How does that work? 'Cause look, we're walking around here and people are talking about lengths of cables in order to improve performance. So what does that look like when you peel back the cover and you look at it physically, not just logically, AWS is everywhere, but physically, what does that look like? >> Oh, physically, for us, it depends on what the customer's looking for. We have workflows that are all entirely within a single region. And so where they could have a portion of say the traditional HPC workflow, is within that region as well as the batch, and they're saving off the results, say to a shared storage file system like our Amazon FSx for Lustre, or maybe aging that back to an S3 object storage for a little lower cost storage solution. Or you can have customers that have a kind of a multi-region orchestration layer to where they say, "You know what? "I've got a portion of my workflow that occurs "over on the other side of the country "and I replicate my data between the East Coast "and the West Coast just based upon business needs. "And I want to have that available to customers over there. "And so I'll do a portion of it in the East Coast "a portion of it in the West Coast." Or you can think of that even globally. It really depends upon the customer's architecture. >> So is the intersection of Kubernetes with HPC, is this relatively new? I know you're saying you're, you're announcing it. >> It really is. I think we've seen a growing perspective. I mean, Kubernetes has been a long time kind of eating everything, right, in the enterprise space? And now a lot of CIOs in the industrial space are saying, "Why am I using one orchestration layer "to manage my HPC infrastructure and another one "to manage my enterprise infrastructure?" And so there's a growing appreciation that, you know what, why don't we just consolidate on one? And so that's where we've seen a growth of Kubernetes infrastructure and our own managed Kubernetes EKS on AWS. >> Last month you announced a general availability of Trainium, of a chip that's optimized for AI training. Talk about what's special about that chip or what is is customized to the training workloads. >> Yeah, what's unique about the Trainium, is you'll you'll see 40% price performance over any other GPU available in the AWS cloud. And so we've really geared it to be that most price performance of options for our customers. And that's what we like about the silicon team, that we're part of that Annaperna acquisition, is because it really has enabled us to have this differentiation and to not just be innovating at the software level but the entire stack. That Annaperna Labs team develops our network cards, they develop our ARM cards, they developed this Trainium chip. And so that silicon innovation has become a core part of our differentiator from other vendors. And what Trainium allows you to do is perform similar workloads, just at a lower price performance. >> And you also have a chip several years older, called Inferentia- >> Um-hmm. >> Which is for inferencing. What is the difference between, I mean, when would a customer use one versus the other? How would you move the workload? >> What we've seen is customers traditionally have looked for a certain class of machine, more of a compute type that is not as accelerated or as heavy as you would need for Trainium for their inference portion of their workload. So when they do that training they want the really beefy machines that can grind through a lot of data. But when you're doing the inference, it's a little lighter weight. And so it's a different class of machine. And so that's why we've got those two different product lines with the Inferentia being there to support those inference portions of their workflow and the Trainium to be that kind of heavy duty training work. >> And then you advise them on how to migrate their workloads from one to the other? And once the model is trained would they switch to an Inferentia-based instance? >> Definitely, definitely. We help them work through what does that design of that workflow look like? And some customers are very comfortable doing self-service and just kind of building it on their own. Other customers look for a more professional services engagement to say like, "Hey, can you come in and help me work "through how I might modify my workflow to "take full advantage of these resources?" >> The HPC world has been somewhat slower than commercial computing to migrate to the cloud because- >> You're very polite. (panelists all laughing) >> Latency issues, they want to control the workload, they want to, I mean there are even issues with moving large amounts of data back and forth. What do you say to them? I mean what's the argument for ditching the on-prem supercomputer and going all-in on AWS? >> Well, I mean, to be fair, I started at AWS five years ago. And I can tell you when I showed up at Supercomputing, even though I'd been part of this community for many years, they said, "What is AWS doing at Supercomputing?" I know you care, wait, it's Amazon Web Services. You care about the web, can you actually handle supercomputing workloads? Now the thing that very few people appreciated is that yes, we could. Even at that time in 2017, we had customers that were performing HPC workloads. Now that being said, there were some real limitations on what we could perform. And over those past five years, as we've grown as a company, we've started to really eliminate those frictions for customers to migrate their HPC workloads to the AWS cloud. When I started in 2017, we didn't have our elastic fabric adapter, our low-latency interconnect. So customers were stuck with standard TCP/IP. So for their highly demanding open MPI workloads, we just didn't have the latencies to support them. So the jobs didn't run as efficiently as they could. We didn't have Amazon FSx for Lustre, our managed lustre offering for high performant, POSIX-compliant file system, which is kind of the key to a large portion of HPC workloads is you have to have a high-performance file system. We didn't even, I mean, we had about 25 gigs of networking when I started. Now you look at, with our accelerated instances, we've got 400 gigs of networking. So we've really continued to grow across that spectrum and to eliminate a lot of those really, frictions to adoption. I mean, one of the key ones, we had a open source toolkit that was jointly developed by Intel and AWS called CFN Cluster that customers were using to even instantiate their clusters. So, and now we've migrated that all the way to a fully functional supported service at AWS called AWS Parallel Cluster. And so you've seen over those past five years we have had to develop, we've had to grow, we've had to earn the trust of these customers and say come run your workloads on us and we will demonstrate that we can meet your demanding requirements. And at the same time, there's been, I'd say, more of a cultural acceptance. People have gone away from the, again, five years ago, to what are you doing walking around the show, to say, "Okay, I'm not sure I get it. "I need to look at it. "I, okay, I, now, oh, it needs to be a part "of my architecture but the standard questions, "is it secure? "Is it price performant? "How does it compare to my on-prem?" And really culturally, a lot of it is, just getting IT administrators used to, we're not eliminating a whole field, right? We're just upskilling the people that used to rack and stack actual hardware, to now you're learning AWS services and how to operate within that environment. And it's still key to have those people that are really supporting these infrastructures. And so I'd say it's a little bit of a combination of cultural shift over the past five years, to see that cloud is a super important part of HPC workloads, and part of it's been us meeting the the market segment of where we needed to with innovating both at the hardware level and at the software level, which we're going to continue to do. >> You do have an on-prem story though. I mean, you have outposts. We don't hear a lot of talk about outposts lately, but these innovations, like Inferentia, like Trainium, like the networking innovation you're talking about, are these going to make their way into outposts as well? Will that essentially become this supercomputing solution for customers who want to stay on-prem? >> Well, we'll see what the future lies, but we believe that we've got the, as you noted, we've got the hardware, we've got the network, we've got the storage. All those put together gives you a a high-performance computer, right? And whether you want it to be redundant in your local data center or you want it to be accessible via APIs from the AWS cloud, we want to provide that service to you. >> So to be clear, that's not that's not available now, but that is something that could be made available? >> Outposts are available right now, that have this the services that you need. >> All these capabilities? >> Often a move to cloud, an impetus behind it comes from the highest levels in an organization. They're looking at the difference between OpEx versus CapEx. CapEx for a large HPC environment, can be very, very, very high. Are these HPC clusters consumed as an operational expense? Are you essentially renting time, and then a fundamental question, are these multi-tenant environments? Or when you're referring to batches being run in HPC, are these dedicated HPC environments for customers who are running batches against them? When you think about batches, you think of, there are times when batches are being run and there are times when they're not being run. So that would sort of conjure, in the imagination, multi-tenancy, what does that look like? >> Definitely, and that's been, let me start with your second part first is- >> Yeah. That's been a a core area within AWS is we do not see as, okay we're going to, we're going to carve out this super computer and then we're going to allocate that to you. We are going to dynamically allocate multi-tenant resources to you to perform the workloads you need. And especially with the batch environment, we're going to spin up containers on those, and then as the workloads complete we're going to turn those resources over to where they can be utilized by other customers. And so that's where the batch computing component really is powerful, because as you say, you're releasing resources from workloads that you're done with. I can use those for another portion of the workflow for other work. >> Okay, so it makes a huge difference, yeah. >> You mentioned, that five years ago, people couldn't quite believe that AWS was at this conference. Now you've got a booth right out in the center of the action. What kind of questions are you getting? What are people telling you? >> Well, I love being on the show floor. This is like my favorite part is talking to customers and hearing one, what do they love, what do they want more of? Two, what do they wish we were doing that we're not currently doing? And three, what are the friction points that are still exist that, like, how can I make their lives easier? And what we're hearing is, "Can you help me migrate my workloads to the cloud? "Can you give me the information that I need, "both from a price for performance, "for an operational support model, "and really help me be an internal advocate "within my environment to explain "how my resources can be operated proficiently "within the AWS cloud." And a lot of times it's, let's just take your application a subset of your applications and let's benchmark 'em. And really that, AWS, one of the key things is we are a data-driven environment. And so when you take that data and you can help a customer say like, "Let's just not look at hypothetical, "at synthetic benchmarks, let's take "actually the LS-DYNA code that you're running, perhaps. "Let's take the OpenFOAM code that you're running, "that you're running currently "in your on-premises workloads, "and let's run it on AWS cloud "and let's see how it performs." And then we can take that back to your to the decision makers and say, okay, here's the price for performance on AWS, here's what we're currently doing on-premises, how do we think about that? And then that also ties into your earlier question about CapEx versus OpEx. We have models where actual, you can capitalize a longer-term purchase at AWS. So it doesn't have to be, I mean, depending upon the accounting models you want to use, we do have a majority of customers that will stay with that OpEx model, and they like that flexibility of saying, "Okay, spend as you go." We need to have true ups, and make sure that they have insight into what they're doing. I think one of the boogeyman is that, oh, I'm going to spend all my money and I'm not going to know what's available. And so we want to provide the, the cost visibility, the cost controls, to where you feel like, as an HPC administrator you have insight into what your customers are doing and that you have control over that. And so once you kind of take away some of those fears and and give them the information that they need, what you start to see too is, you know what, we really didn't have a lot of those cost visibility and controls with our on-premises hardware. And we've had some customers tell us we had one portion of the workload where this work center was spending thousands of dollars a day. And we went back to them and said, "Hey, we started to show this, "what you were spending on-premises." They went, "Oh, I didn't realize that." And so I think that's part of a cultural thing that, at an HPC, the question was, well on-premises is free. How do you compete with free? And so we need to really change that culturally, to where people see there is no free lunch. You're paying for the resources whether it's on-premises or in the cloud. >> Data scientists don't worry about budgets. >> Wait, on-premises is free? Paul mentioned something that reminded me, you said you were here in 2017, people said AWS, web, what are you even doing here? Now in 2022, you're talking in terms of migrating to cloud. Paul mentioned outposts, let's say that a customer says, "Hey, I'd like you to put "in a thousand-node cluster in this data center "that I happen to own, but from my perspective, "I want to interact with it just like it's "in your data center." In other words, the location doesn't matter. My experience is identical to interacting with AWS in an AWS data center, in a CoLo that works with AWS, but instead it's my physical data center. When we're tracking the percentage of IT that's that is on-prem versus off-prem. What is that? Is that, what I just described, is that cloud? And in five years are you no longer going to be talking about migrating to cloud because people go, "What do you mean migrating to cloud? "What do you even talking about? "What difference does it make?" It's either something that AWS is offering or it's something that someone else is offering. Do you think we'll be at that point in five years, where in this world of virtualization and abstraction, you talked about Kubernetes, we should be there already, thinking in terms of it doesn't matter as long as it meets latency and sovereignty requirements. So that, your prediction, we're all about insights and supercomputing- >> My prediction- >> In five years, will you still be talking about migrating to cloud or will that be something from the past? >> In five years, I still think there will be a component. I think the majority of the assumption will be that things are cloud-native and you start in the cloud and that there are perhaps, an aspect of that, that will be interacting with some sort of an edge device or some sort of an on-premises device. And we hear more and more customers that are saying, "Okay, I can see the future, "I can see that I'm shrinking my footprint." And, you can see them still saying, "I'm not sure how small that beachhead will be, "but right now I want to at least say "that I'm going to operate in that hybrid environment." And so I'd say, again, the pace of this community, I'd say five years we're still going to be talking about migrations, but I'd say the vast majority will be a cloud-native, cloud-first environment. And how do you classify that? That outpost sitting in someone's data center? I'd say we'd still, at least I'll leave that up to the analysts, but I think it would probably come down as cloud spend. >> Great place to end. Ian, you and I now officially have a bet. In five years we're going to come back. My contention is, no we're not going to be talking about it anymore. >> Okay. >> And kids in college are going to be like, "What do you mean cloud, it's all IT, it's all IT." And they won't remember this whole phase of moving to cloud and back and forth. With that, join us in five years to see the result of this mega-bet between Ian and Dave. I'm Dave Nicholson with theCUBE, here at Supercomputing Conference 2022, day three of our coverage with my co-host Paul Gillin. Thanks again for joining us. Stay tuned, after this short break, we'll be back with more action. (lively music)

Published Date : Nov 17 2022

SUMMARY :

Welcome back to theCUBE's coverage What are we going to talk about? Let's dive right in. in the queue starts to drop, does it have to be of say the traditional HPC workflow, So is the intersection of Kubernetes And now a lot of CIOs in the to the training workloads. And what Trainium allows you What is the difference between, to be that kind of heavy to say like, "Hey, can you You're very polite. to control the workload, to what are you doing I mean, you have outposts. And whether you want it to be redundant that have this the services that you need. Often a move to cloud, to you to perform the workloads you need. Okay, so it makes a What kind of questions are you getting? the cost controls, to where you feel like, And in five years are you no And so I'd say, again, the not going to be talking of moving to cloud and back and forth.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
IanPERSON

0.99+

PaulPERSON

0.99+

Dave NicholsonPERSON

0.99+

Paul GillinPERSON

0.99+

DavePERSON

0.99+

AWSORGANIZATION

0.99+

400 gigsQUANTITY

0.99+

2017DATE

0.99+

Ian CollePERSON

0.99+

thousandsQUANTITY

0.99+

DallasLOCATION

0.99+

40%QUANTITY

0.99+

Amazon Web ServicesORGANIZATION

0.99+

AmazonORGANIZATION

0.99+

2022DATE

0.99+

AnnapernaORGANIZATION

0.99+

second partQUANTITY

0.99+

five yearsQUANTITY

0.99+

Last monthDATE

0.99+

IntelORGANIZATION

0.99+

five years agoDATE

0.98+

fiveQUANTITY

0.98+

TwoQUANTITY

0.98+

SupercomputingORGANIZATION

0.98+

LustreORGANIZATION

0.97+

Annaperna LabsORGANIZATION

0.97+

TrainiumORGANIZATION

0.97+

five yearsQUANTITY

0.96+

oneQUANTITY

0.96+

OpExTITLE

0.96+

bothQUANTITY

0.96+

first thingQUANTITY

0.96+

Supercomputing ConferenceEVENT

0.96+

firstQUANTITY

0.96+

West CoastLOCATION

0.96+

thousands of dollars a dayQUANTITY

0.96+

Supercomputing Conference 2022EVENT

0.95+

CapExTITLE

0.94+

threeQUANTITY

0.94+

theCUBEORGANIZATION

0.92+

East CoastLOCATION

0.91+

single regionQUANTITY

0.91+

yearsQUANTITY

0.91+

thousands of nodesQUANTITY

0.88+

Parallel ClusterTITLE

0.87+

about 25 gigsQUANTITY

0.87+

Dhabaleswar “DK” Panda, Ohio State State University | SuperComputing 22


 

>>Welcome back to The Cube's coverage of Supercomputing Conference 2022, otherwise known as SC 22 here in Dallas, Texas. This is day three of our coverage, the final day of coverage here on the exhibition floor. I'm Dave Nicholson, and I'm here with my co-host, tech journalist extraordinaire, Paul Gillum. How's it going, >>Paul? Hi, Dave. It's going good. >>And we have a wonderful guest with us this morning, Dr. Panda from the Ohio State University. Welcome Dr. Panda to the Cube. >>Thanks a lot. Thanks a lot to >>Paul. I know you're, you're chopping at >>The bit, you have incredible credentials, over 500 papers published. The, the impact that you've had on HPC is truly remarkable. But I wanted to talk to you specifically about a product project you've been working on for over 20 years now called mva, high Performance Computing platform that's used by more than 32 organ, 3,200 organizations across 90 countries. You've shepherded this from, its, its infancy. What is the vision for what MVA will be and and how is it a proof of concept that others can learn from? >>Yeah, Paul, that's a great question to start with. I mean, I, I started with this conference in 2001. That was the first time I came. It's very coincidental. If you remember the Finman Networking Technology, it was introduced in October of 2000. Okay. So in my group, we were working on NPI for Marinette Quadrics. Those are the old technology, if you can recollect when Finman was there, we were the very first one in the world to really jump in. Nobody knew how to use Infin van in an HPC system. So that's how the Happy Project was born. And in fact, in super computing 2002 on this exhibition floor in Baltimore, we had the first demonstration, the open source happy, actually is running on an eight node infinite van clusters, eight no zeros. And that was a big challenge. But now over the years, I means we have continuously worked with all infinite van vendors, MPI Forum. >>We are a member of the MPI Forum and also all other network interconnect. So we have steadily evolved this project over the last 21 years. I'm very proud of my team members working nonstop, continuously bringing not only performance, but scalability. If you see now INFIN event are being deployed in 8,000, 10,000 node clusters, and many of these clusters actually use our software, stack them rapid. So, so we have done a lot of, like our focuses, like we first do research because we are in academia. We come up with good designs, we publish, and in six to nine months, we actually bring it to the open source version and people can just download and then use it. And that's how currently it's been used by more than 3000 orange in 90 countries. And, but the interesting thing is happening, your second part of the question. Now, as you know, the field is moving into not just hvc, but ai, big data, and we have those support. This is where like we look at the vision for the next 20 years, we want to design this MPI library so that not only HPC but also all other workloads can take advantage of it. >>Oh, we have seen libraries that become a critical develop platform supporting ai, TensorFlow, and, and the pie torch and, and the emergence of, of, of some sort of default languages that are, that are driving the community. How, how important are these frameworks to the, the development of the progress making progress in the HPC world? >>Yeah, no, those are great. I mean, spite our stencil flow, I mean, those are the, the now the bread and butter of deep learning machine learning. Am I right? But the challenge is that people use these frameworks, but continuously models are becoming larger. You need very first turnaround time. So how do you train faster? How do you do influencing faster? So this is where HPC comes in and what exactly what we have done is actually we have linked floor fighters to our happy page because now you see the MPI library is running on a million core system. Now your fighters and tenor four clan also be scaled to to, to those number of, large number of course and gps. So we have actually done that kind of a tight coupling and that helps the research to really take advantage of hpc. >>So if, if a high school student is thinking in terms of interesting computer science, looking for a place, looking for a university, Ohio State University, bruns, world renowned, widely known, but talk about what that looks like from a day on a day to day basis in terms of the opportunity for undergrad and graduate students to participate in, in the kind of work that you do. What is, what does that look like? And is, and is that, and is that a good pitch to for, for people to consider the university? >>Yes. I mean, we continuously, from a university perspective, by the way, the Ohio State University is one of the largest single campus in, in us, one of the top three, top four. We have 65,000 students. Wow. It's one of the very largest campus. And especially within computer science where I am located, high performance computing is a very big focus. And we are one of the, again, the top schools all over the world for high performance computing. And we also have very strength in ai. So we always encourage, like the new students who like to really work on top of the art solutions, get exposed to the concepts, principles, and also practice. Okay. So, so we encourage those people that wish you can really bring you those kind of experience. And many of my past students, staff, they're all in top companies now, have become all big managers. >>How, how long, how long did you say you've been >>At 31 >>Years? 31 years. 31 years. So, so you, you've had people who weren't alive when you were already doing this stuff? That's correct. They then were born. Yes. They then grew up, yes. Went to university graduate school, and now they're on, >>Now they're in many top companies, national labs, all over the universities, all over the world. So they have been trained very well. Well, >>You've, you've touched a lot of lives, sir. >>Yes, thank you. Thank >>You. We've seen really a, a burgeoning of AI specific hardware emerge over the last five years or so. And, and architectures going beyond just CPUs and GPUs, but to Asics and f PGAs and, and accelerators, does this excite you? I mean, are there innovations that you're seeing in this area that you think have, have great promise? >>Yeah, there is a lot of promise. I think every time you see now supercomputing technology, you see there is sometime a big barrier comes barrier jump. Rather I'll say, new technology comes some disruptive technology, then you move to the next level. So that's what we are seeing now. A lot of these AI chips and AI systems are coming up, which takes you to the next level. But the bigger challenge is whether it is cost effective or not, can that be sustained longer? And this is where commodity technology comes in, which commodity technology tries to take you far longer. So we might see like all these likes, Gaudi, a lot of new chips are coming up, can they really bring down the cost? If that cost can be reduced, you will see a much more bigger push for AI solutions, which are cost effective. >>What, what about on the interconnect side of things, obvi, you, you, your, your start sort of coincided with the initial standards for Infin band, you know, Intel was very, very, was really big in that, in that architecture originally. Do you see interconnects like RDMA over converged ethernet playing a part in that sort of democratization or commoditization of things? Yes. Yes. What, what are your thoughts >>There for internet? No, this is a great thing. So, so we saw the infinite man coming. Of course, infinite Man is, commod is available. But then over the years people have been trying to see how those RDMA mechanisms can be used for ethernet. And then Rocky has been born. So Rocky has been also being deployed. But besides these, I mean now you talk about Slingshot, the gray slingshot, it is also an ethernet based systems. And a lot of those RMA principles are actually being used under the hood. Okay. So any modern networks you see, whether it is a Infin and Rocky Links art network, rock board network, you name any of these networks, they are using all the very latest principles. And of course everybody wants to make it commodity. And this is what you see on the, on the slow floor. Everybody's trying to compete against each other to give you the best performance with the lowest cost, and we'll see whoever wins over the years. >>Sort of a macroeconomic question, Japan, the US and China have been leapfrogging each other for a number of years in terms of the fastest supercomputer performance. How important do you think it is for the US to maintain leadership in this area? >>Big, big thing, significantly, right? We are saying that I think for the last five to seven years, I think we lost that lead. But now with the frontier being the number one, starting from the June ranking, I think we are getting that leadership back. And I think it is very critical not only for fundamental research, but for national security trying to really move the US to the leading edge. So I hope us will continue to lead the trend for the next few years until another new system comes out. >>And one of the gating factors, there is a shortage of people with data science skills. Obviously you're doing what you can at the university level. What do you think can change at the secondary school level to prepare students better to, for data science careers? >>Yeah, I mean that is also very important. I mean, we, we always call like a pipeline, you know, that means when PhD levels we are expecting like this even we want to students to get exposed to, to, to many of these concerts from the high school level. And, and things are actually changing. I mean, these days I see a lot of high school students, they, they know Python, how to program in Python, how to program in sea object oriented things. Even they're being exposed to AI at that level. So I think that is a very healthy sign. And in fact we, even from Ohio State side, we are always engaged with all this K to 12 in many different programs and then gradually trying to take them to the next level. And I think we need to accelerate also that in a very significant manner because we need those kind of a workforce. It is not just like a building a system number one, but how do we really utilize it? How do we utilize that science? How do we propagate that to the community? Then we need all these trained personal. So in fact in my group, we are also involved in a lot of cyber training activities for HPC professionals. So in fact, today there is a bar at 1 1 15 I, yeah, I think 1215 to one 15. We'll be talking more about that. >>About education. >>Yeah. Cyber training, how do we do for professionals? So we had a funding together with my co-pi, Dr. Karen Tom Cook from Ohio Super Center. We have a grant from NASA Science Foundation to really educate HPT professionals about cyber infrastructure and ai. Even though they work on some of these things, they don't have the complete knowledge. They don't get the time to, to learn. And the field is moving so fast. So this is how it has been. We got the initial funding, and in fact, the first time we advertised in 24 hours, we got 120 application, 24 hours. We couldn't even take all of them. So, so we are trying to offer that in multiple phases. So, so there is a big need for those kind of training sessions to take place. I also offer a lot of tutorials at all. Different conference. We had a high performance networking tutorial. Here we have a high performance deep learning tutorial, high performance, big data tutorial. So I've been offering tutorials at, even at this conference since 2001. Good. So, >>So in the last 31 years, the Ohio State University, as my friends remind me, it is properly >>Called, >>You've seen the world get a lot smaller. Yes. Because 31 years ago, Ohio, in this, you know, of roughly in the, in the middle of North America and the United States was not as connected as it was to everywhere else in the globe. So that's, that's pro that's, I i it kind of boggles the mind when you think of that progression over 31 years, but globally, and we talk about the world getting smaller, we're sort of in the thick of, of the celebratory seasons where, where many, many groups of people exchange gifts for varieties of reasons. If I were to offer you a holiday gift, that is the result of what AI can deliver the world. Yes. What would that be? What would, what would, what would the first thing be? This is, this is, this is like, it's, it's like the genie, but you only get one wish. >>I know, I know. >>So what would the first one be? >>Yeah, it's very hard to answer one way, but let me bring a little bit different context and I can answer this. I, I talked about the happy project and all, but recently last year actually we got awarded an S f I institute award. It's a 20 million award. I am the overall pi, but there are 14 universities involved. >>And who is that in that institute? >>What does that Oh, the I ici. C e. Okay. I cycle. You can just do I cycle.ai. Okay. And that lies with what exactly what you are trying to do, how to bring lot of AI for masses, democratizing ai. That's what is the overall goal of this, this institute, think of like a, we have three verticals we are working think of like one is digital agriculture. So I'll be, that will be my like the first ways. How do you take HPC and AI to agriculture the world as though we just crossed 8 billion people. Yeah, that's right. We need continuous food and food security. How do we grow food with the lowest cost and with the highest yield? >>Water >>Consumption. Water consumption. Can we minimize or minimize the water consumption or the fertilization? Don't do blindly. Technologies are out there. Like, let's say there is a weak field, A traditional farmer see that, yeah, there is some disease, they will just go and spray pesticides. It is not good for the environment. Now I can fly it drone, get images of the field in the real time, check it against the models, and then it'll tell that, okay, this part of the field has disease. One, this part of the field has disease. Two, I indicate to the, to the tractor or the sprayer saying, okay, spray only pesticide one, you have pesticide two here. That has a big impact. So this is what we are developing in that NSF A I institute I cycle ai. We also have, we have chosen two additional verticals. One is animal ecology, because that is very much related to wildlife conservation, climate change, how do you understand how the animals move? Can we learn from them? And then see how human beings need to act in future. And the third one is the food insecurity and logistics. Smart food distribution. So these are our three broad goals in that institute. How do we develop cyber infrastructure from below? Combining HP c AI security? We have, we have a large team, like as I said, there are 40 PIs there, 60 students. We are a hundred members team. We are working together. So, so that will be my wish. How do we really democratize ai? >>Fantastic. I think that's a great place to wrap the conversation here On day three at Supercomputing conference 2022 on the cube, it was an honor, Dr. Panda working tirelessly at the Ohio State University with his team for 31 years toiling in the field of computer science and the end result, improving the lives of everyone on Earth. That's not a stretch. If you're in high school thinking about a career in computer science, keep that in mind. It isn't just about the bits and the bobs and the speeds and the feeds. It's about serving humanity. Maybe, maybe a little, little, little too profound a statement, I would argue not even close. I'm Dave Nicholson with the Queue, with my cohost Paul Gillin. Thank you again, Dr. Panda. Stay tuned for more coverage from the Cube at Super Compute 2022 coming up shortly. >>Thanks a lot.

Published Date : Nov 17 2022

SUMMARY :

Welcome back to The Cube's coverage of Supercomputing Conference 2022, And we have a wonderful guest with us this morning, Dr. Thanks a lot to But I wanted to talk to you specifically about a product project you've So in my group, we were working on NPI for So we have steadily evolved this project over the last 21 years. that are driving the community. So we have actually done that kind of a tight coupling and that helps the research And is, and is that, and is that a good pitch to for, So, so we encourage those people that wish you can really bring you those kind of experience. you were already doing this stuff? all over the world. Thank this area that you think have, have great promise? I think every time you see now supercomputing technology, with the initial standards for Infin band, you know, Intel was very, very, was really big in that, And this is what you see on the, Sort of a macroeconomic question, Japan, the US and China have been leapfrogging each other for a number the number one, starting from the June ranking, I think we are getting that leadership back. And one of the gating factors, there is a shortage of people with data science skills. And I think we need to accelerate also that in a very significant and in fact, the first time we advertised in 24 hours, we got 120 application, that's pro that's, I i it kind of boggles the mind when you think of that progression over 31 years, I am the overall pi, And that lies with what exactly what you are trying to do, to the tractor or the sprayer saying, okay, spray only pesticide one, you have pesticide two here. I think that's a great place to wrap the conversation here On

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Dave NicholsonPERSON

0.99+

Paul GillumPERSON

0.99+

DavePERSON

0.99+

Paul GillinPERSON

0.99+

October of 2000DATE

0.99+

PaulPERSON

0.99+

NASA Science FoundationORGANIZATION

0.99+

2001DATE

0.99+

BaltimoreLOCATION

0.99+

8,000QUANTITY

0.99+

14 universitiesQUANTITY

0.99+

31 yearsQUANTITY

0.99+

20 millionQUANTITY

0.99+

24 hoursQUANTITY

0.99+

last yearDATE

0.99+

Karen Tom CookPERSON

0.99+

60 studentsQUANTITY

0.99+

Ohio State UniversityORGANIZATION

0.99+

90 countriesQUANTITY

0.99+

sixQUANTITY

0.99+

EarthLOCATION

0.99+

PandaPERSON

0.99+

todayDATE

0.99+

65,000 studentsQUANTITY

0.99+

3,200 organizationsQUANTITY

0.99+

North AmericaLOCATION

0.99+

PythonTITLE

0.99+

United StatesLOCATION

0.99+

Dallas, TexasLOCATION

0.99+

over 500 papersQUANTITY

0.99+

JuneDATE

0.99+

OneQUANTITY

0.99+

more than 32 organQUANTITY

0.99+

120 applicationQUANTITY

0.99+

OhioLOCATION

0.99+

more than 3000 orangeQUANTITY

0.99+

first waysQUANTITY

0.99+

oneQUANTITY

0.99+

nine monthsQUANTITY

0.99+

40 PIsQUANTITY

0.99+

AsicsORGANIZATION

0.99+

MPI ForumORGANIZATION

0.98+

ChinaORGANIZATION

0.98+

TwoQUANTITY

0.98+

Ohio State State UniversityORGANIZATION

0.98+

8 billion peopleQUANTITY

0.98+

IntelORGANIZATION

0.98+

HPORGANIZATION

0.97+

Dr.PERSON

0.97+

over 20 yearsQUANTITY

0.97+

USORGANIZATION

0.97+

FinmanORGANIZATION

0.97+

RockyPERSON

0.97+

JapanORGANIZATION

0.97+

first timeQUANTITY

0.97+

first demonstrationQUANTITY

0.96+

31 years agoDATE

0.96+

Ohio Super CenterORGANIZATION

0.96+

three broad goalsQUANTITY

0.96+

one wishQUANTITY

0.96+

second partQUANTITY

0.96+

31QUANTITY

0.96+

CubeORGANIZATION

0.95+

eightQUANTITY

0.95+

over 31 yearsQUANTITY

0.95+

10,000 node clustersQUANTITY

0.95+

day threeQUANTITY

0.95+

firstQUANTITY

0.95+

INFINEVENT

0.94+

seven yearsQUANTITY

0.94+

Dhabaleswar “DK” PandaPERSON

0.94+

threeQUANTITY

0.93+

S f I instituteTITLE

0.93+

first thingQUANTITY

0.93+

Daniel Rethmeier & Samir Kadoo | Accelerating Business Transformation


 

(upbeat music) >> Hi everyone. Welcome to theCUBE special presentation here in Palo Alto, California. I'm John Furrier, host of theCUBE. We got two great guests, one for calling in from Germany, or videoing in from Germany, one from Maryland. We've got VMware and AWS. This is the customer successes with VMware Cloud on AWS Showcase: Accelerating Business Transformation. Here in the Showcase at Samir Kadoo, worldwide VMware strategic alliance solution architect leader with AWS. Samir, great to have you. And Daniel Rethmeier, principal architect global AWS synergy at VMware. Guys, you guys are working together, you're the key players in this relationship as it rolls out and continues to grow. So welcome to theCUBE. >> Thank you, greatly appreciate it. >> Great to have you guys both on. As you know, we've been covering this since 2016 when Pat Gelsinger, then CEO, and then then CEO AWS at Andy Jassy did this. It kind of got people by surprise, but it really kind of cleaned out the positioning in the enterprise for the success of VM workloads in the cloud. VMware's had great success with it since and you guys have the great partnerships. So this has been like a really strategic, successful partnership. Where are we right now? You know, years later, we got this whole inflection point coming, you're starting to see this idea of higher level services, more performance are coming in at the infrastructure side, more automation, more serverless, I mean and AI. I mean, it's just getting better and better every year in the cloud. Kind of a whole 'nother level. Where are we? Samir, let's start with you on the relationship. >> Yeah, totally. So I mean, there's several things to keep in mind, right? So in 2016, right, that's when the partnership between AWS and VMware was announced. And then less than a year later, that's when we officially launched VMware Cloud on AWS. Years later, we've been driving innovation, working with our customers, jointly engineering this between AWS and VMware. You know, one of the key things... Together, day in, day out, as far as advancing VMware Cloud on AWS. You know, even if you look at the innovation that takes place with the solution, things have modernized, things have changed, there's been advancements. You know, whether it's security focus, whether it's platform focus, whether it's networking focus, there's been modifications along the way, even storage, right, more recently. One of the things to keep in mind is we're looking to deliver value to our customers together. These are our joint customers. So there's hundreds of VMware and AWS engineers working together on this solution. And then factor in even our sales teams, right? We have VMware and AWS sales teams interacting with each other on a constant daily basis. We're working together with our customers at the end of the day too. Then we're looking to even offer and develop jointly engineered solutions specific to VMware Cloud on AWS. And even with VMware to other platforms as well. Then the other thing comes down to is where we have dedicated teams around this at both AWS and VMware. So even from solutions architects, even to our sales specialists, even to our account teams, even to specific engineering teams within the organizations, they all come together to drive this innovation forward with VMware Cloud on AWS and the jointly engineered solution partnership as well. And then I think one of the key things to keep in mind comes down to we have nearly 600 channel partners that have achieved VMware Cloud on AWS service competency. So think about it from the standpoint, there's 300 certified or validated technology solutions, they're now available to our customers. So that's even innovation right off the top as well. >> Great stuff. Daniel, I want to get to you in a second upon this principal architect position you have. In your title, you're the global AWS synergy person. Synergy means bringing things together, making it work. Take us through the architecture, because we heard a lot of folks at VMware explore this year, formerly VMworld, talking about how the workloads on IT has been completely transforming into cloud and hybrid, right? This is where the action is. Where are you? Is your customers taking advantage of that new shift? You got AIOps, you got ITOps changing a lot, you got a lot more automation, edges right around the corner. This is like a complete transformation from where we were just five years ago. What's your thoughts on the relationship? >> So at first, I would like to emphasize that our collaboration is not just that we have dedicated teams to help our customers get the most and the best benefits out of VMware Cloud and AWS, we are also enabling us mutually. So AWS learns from us about the VMware technology, where VMware people learn about the AWS technology. We are also enabling our channel partners and we are working together on customer projects. So we have regular assembles globally and also virtually on Slack and the usual suspect tools working together and listening to customers. That's very important. Asking our customers where are their needs? And we are driving the solution into the direction that our customers get the best benefits out of VMware Cloud on AWS. And over the time, we really have involved the solution. As Samir mentioned, we just added additional storage solutions to VMware Cloud on AWS. We now have three different instance types that cover a broad range of workloads. So for example, we just edited the I4i host, which is ideally for workloads that require a lot of CPU power, such as, you mentioned it, AI workloads. >> Yeah, so I want to get us just specifically on the customer journey and their transformation, you know, we've been reporting on Silicon angle in theCUBE in the past couple weeks in a big way that the ops teams are now the new devs, right? I mean that sounds a little bit weird, but IT operations is now part of a lot more DataOps, security, writing code, composing. You know, with open source, a lot of great things are changing. Can you share specifically what customers are looking for when you say, as you guys come in and assess their needs, what are they doing, what are some of the things that they're doing with VMware on AWS specifically that's a little bit different? Can you share some of and highlights there? >> That's a great point, because originally, VMware and AWS came from very different directions when it comes to speaking people and customers. So for example, AWS, very developer focused, whereas VMware has a very great footprint in the ITOps area. And usually these are very different teams, groups, different cultures, but it's getting together. However, we always try to address the customer needs, right? There are customers that want to build up a new application from the scratch and build resiliency, availability, recoverability, scalability into the application. But there are still a lot of customers that say, "Well, we don't have all of the skills to redevelop everything to refactor an application to make it highly available. So we want to have all of that as a service. Recoverability as a service, scalability as a service. We want to have this from the infrastructure." That was one of the unique selling points for VMware on-premise and now we are bringing this into the cloud. >> Samir, talk about your perspective. I want to get your thoughts, and not to take a tangent, but we had covered the AWS re:MARS, actually it was Amazon re:MARS, machine learning automation, robotics and space was really kind of the confluence of industrial IoT, software, physical. And so when you look at like the IT operations piece becoming more software, you're seeing things about automation, but the skill gap is huge. So you're seeing low code, no code, automation, you know, "Hey Alexa, deploy a Kubernetes cluster." Yeah, I mean that's coming, right? So we're seeing this kind of operating automation meets higher level services, meets workloads. Can you unpack that and share your opinion on what you see there from an Amazon perspective and how it relates to this? >> Yeah. Yeah, totally, right? And you know, look at it from the point of view where we said this is a jointly engineered solution, but it's not migrating to one option or the other option, right? It's more or less together. So even with VMware Cloud on AWS, yes it is utilizing AWS infrastructure, but your environment is connected to that AWS VPC in your AWS account. So if you want to leverage any of the native AWS services, so any of the 200 plus AWS services, you have that option to do so. So that's going to give you that power to do certain things, such as, for example, like how you mentioned with IoT, even with utilizing Alexa, or if there's any other service that you want to utilize, that's the joining point between both of the offerings right off the top. Though with digital transformation, right, you have to think about where it's not just about the technology, right? There's also where you want to drive growth in the underlying technology even in your business. Leaders are looking to reinvent their business, they're looking to take different steps as far as pursuing a new strategy, maybe it's a process, maybe it's with the people, the culture, like how you said before, where people are coming in from a different background, right? They may not be used to the cloud, they may not be used to AWS services, but now you have that capability to mesh them together. >> Okay. >> Then also- >> Oh, go ahead, finish your thought. >> No, no, no, I was going to say what it also comes down to is you need to think about the operating model too, where it is a shift, right? Especially for that vStor admin that's used to their on-premises environment. Now with VMware Cloud on AWS, you have that ability to leverage a cloud, but the investment that you made and certain things as far as automation, even with monitoring, even with logging, you still have that methodology where you can utilize that in VMware Cloud on AWS too. >> Daniel, I want to get your thoughts on this because at Explore and after the event, as we prep for CubeCon and re:Invent coming up, the big AWS show, I had a couple conversations with a lot of the VMware customers and operators, and it's like hundreds of thousands of users and millions of people talking about and peaked on VMware, interested in VMware. The common thread was one person said, "I'm trying to figure out where I'm going to put my career in the next 10 to 15 years." And they've been very comfortable with VMware in the past, very loyal, and they're kind of talking about, I'm going to be the next cloud, but there's no like role yet. Architects, is it solution architect, SRE? So you're starting to see the psychology of the operators who now are going to try to make these career decisions. Like what am I going to work on? And then it's kind of fuzzy, but I want to get your thoughts, how would you talk to that persona about the future of VMware on, say, cloud for instance? What should they be thinking about? What's the opportunity? And what's going to happen? >> So digital transformation definitely is a huge change for many organizations and leaders are perfectly aware of what that means. And that also means to some extent, concerns with your existing employees. Concerns about do I have to relearn everything? Do I have to acquire new skills and trainings? Is everything worthless I learned over the last 15 years of my career? And the answer is to make digital transformation a success, we need not just to talk about technology, but also about process, people, and culture. And this is where VMware really can help because if you are applying VMware Cloud on AWS to your infrastructure, to your existing on-premise infrastructure, you do not need to change many things. You can use the same tools and skills, you can manage your virtual machines as you did in your on-premise environment, you can use the same managing and monitoring tools, if you have written, and many customers did this, if you have developed hundreds of scripts that automate tasks and if you know how to troubleshoot things, then you can use all of that in VMware Cloud on AWS. And that gives not just leaders, but also the architects at customers, the operators at customers, the confidence in such a complex project. >> The consistency, very key point, gives them the confidence to go. And then now that once they're confident, they can start committing themselves to new things. Samir, you're reacting to this because on your side, you've got higher level services, you've got more performance at the hardware level. I mean, a lot improvements. So, okay, nothing's changed, I can still run my job, now I got goodness on the other side. What's the upside? What's in it for the customer there? >> Yeah, so I think what it comes down to is they've already been so used to or entrenched with that VMware admin mentality, right? But now extending that to the cloud, that's where now you have that bridge between VMware Cloud on AWS to bridge that VMware knowledge with that AWS knowledge. So I will look at it from the point of view where now one has that capability and that ability to just learn about the cloud. But if they're comfortable with certain aspects, no one's saying you have to change anything. You can still leverage that, right? But now if you want to utilize any other AWS service in conjunction with that VM that resides maybe on-premises or even in VMware Cloud on AWS, you have that option to do so. So think about it where you have that ability to be someone who's curious and wants to learn. And then if you want to expand on the skills, you certainly have that capability to do so. >> Great stuff, I love that. Now that we're peeking behind the curtain here, I'd love to have you guys explain, 'cause people want to know what's goes on behind the scenes. How does innovation get happen? How does it happen with the relationships? Can you take us through a day in the life of kind of what goes on to make innovation happen with the joint partnership? Do you guys just have a Zoom meeting, do you guys fly out, you write code, go do you ship things? I mean, I'm making it up, but you get the idea. How does it work? What's going on behind the scenes? >> So we hope to get more frequently together in-person, but of course we had some difficulties over the last two to three years. So we are very used to Zoom conferences and Slack meetings. You always have to have the time difference in mind if you are working globally together. But what we try, for example, we have regular assembles now also in-person, geo-based, so for AMEA, for the Americas, for APJ. And we are bringing up interesting customer situations, architectural bits and pieces together. We are discussing it always to share and to contribute to our community. >> What's interesting, you know, as events are coming back, Samir, before you weigh in this, I'll comment as theCUBE's been going back out to events, we're hearing comments like, "What pandemic? We were more productive in the pandemic." I mean, developers know how to work remotely and they've been on all the tools there, but then they get in-person, they're happy to see people, but no one's really missed the beat. I mean, it seems to be very productive, you know, workflow, not a lot of disruption. More, if anything, productivity gains. >> Agreed, right? I think one of the key things to keep in mind is even if you look at AWS's, and even Amazon's leadership principles, right? Customer obsession, that's key. VMware is carrying that forward as well. Where we are working with our customers, like how Daniel said and meant earlier, right? We might have meetings at different time zones, maybe it's in-person, maybe it's virtual, but together we're working to listen to our customers. You know, we're taking and capturing that feedback to drive innovation in VMware Cloud on AWS as well. But one of the key things to keep in mind is yes, there has been the pandemic, we might have been disconnected to a certain extent, but together through technology, we've been able to still communicate, work with our customers, even with VMware in between, with AWS and whatnot, we had that flexibility to innovate and continue that innovation. So even if you look at it from the point of view, right? VMware Cloud on AWS Outposts, that was something that customers have been asking for. We've been able to leverage the feedback and then continue to drive innovation even around VMware Cloud on AWS Outposts. So even with the on-premises environment, if you're looking to handle maybe data sovereignty or compliance needs, maybe you have low latency requirements, that's where certain advancements come into play, right? So the key thing is always to maintain that communication track. >> In our last segment we did here on this Showcase, we listed the accomplishments and they were pretty significant. I mean geo, you got the global rollouts of the relationship. It's just really been interesting and people can reference that, we won't get into it here. But I will ask you guys to comment on, as you guys continue to evolve the relationship, what's in it for the customer? What can they expect next? Because again, I think right now, we're at an inflection point more than ever. What can people expect from the relationship and what's coming up with re:Invent? Can you share a little bit of kind of what's coming down the pike? >> So one of the most important things we have announced this year, and we will continue to evolve into that direction, is independent scale of storage. That absolutely was one of the most important items customer asked for over the last years. Whenever you are requiring additional storage to host your virtual machines, you usually in VMware Cloud on AWS, you have to add additional nodes. Now we have three different node types with different ratios of compute, storage, and memory. But if you only require additional storage, you always have to get also additional compute and memory and you have to pay for it. And now with two solutions which offer choice for the customers, like FS6 wanted a ONTAP and VMware Cloud Flex Storage, you now have two cost effective opportunities to add storage to your virtual machines. And that offers opportunities for other instance types maybe that don't have local storage. We are also very, very keen looking forward to announcements, exciting announcements, at the upcoming events. >> Samir, what's your reaction take on what's coming down on your side? >> Yeah, I think one of the key things to keep in mind is we're looking to help our customers be agile and even scaled with their needs, right? So with VMware Cloud on AWS, that's one of the key things that comes to mind, right? There are going to be announcements, innovations, and whatnot with upcoming events. But together, we're able to leverage that to advance VMware cloud on AWS. To Daniel's point, storage for example, even with host offerings. And then even with decoupling storage from compute and memory, right? Now you have the flexibility where you can do all of that. So to look at it from the standpoint where now with 21 regions where we have VMware Cloud on AWS available as well, where customers can utilize that as needed when needed, right? So it comes down to, you know, transformation will be there. Yes, there's going to be maybe where workloads have to be adapted where they're utilizing certain AWS services, but you have that flexibility and option to do so. And I think with the continuing events, that's going to give us the options to even advance our own services together. >> Well you guys are in the middle of it, you're in the trenches, you're making things happen, you've got a team of people working together. My final question is really more of a kind of a current situation, kind of future evolutionary thing that you haven't seen this before. I want to get both of your reaction to it. And we've been bringing this up in the open conversations on theCUBE is in the old days, let's go back this generation, you had ecosystems, you had VMware had an ecosystem, AWS had an ecosystem. You know, we have a product, you have a product, biz dev deals happen, people sign relationships, and they do business together and they sell each other's products or do some stuff. Now it's more about architecture, 'cause we're now in a distributed large scale environment where the role of ecosystems are intertwining and you guys are in the middle of two big ecosystems. You mentioned channel partners, you both have a lot of partners on both sides, they come together. So you have this now almost a three dimensional or multidimensional ecosystem interplay. What's your thoughts on this? Because it's about the architecture, integration is a value, not so much innovations only. You got to do innovation, but when you do innovation, you got to integrate it, you got to connect it. So how do you guys see this as an architectural thing, start to see more technical business deals? >> So we are removing dependencies from individual ecosystems and from individual vendors. So a customer no longer has to decide for one vendor and then it is a very expensive and high effort project to move away from that vendor, which ties customers even closer to specific vendors. We are removing these obstacles. So with VMware Cloud on AWS, moving to the cloud, firstly it's not a dead end. If you decide at one point in time because of latency requirements or maybe some compliance requirements, you need to move back into on-premise, you can do this. If you decide you want to stay with some of your services on-premise and just run a couple of dedicated services in the cloud, you can do this and you can man manage it through a single pane of glass. That's quite important. So cloud is no longer a dead end, it's no longer a binary decision, whether it's on-premise or the cloud, it is the cloud. And the second thing is you can choose the best of both worlds, right? If you are migrating virtual machines that have been running in your on-premise environment to VMware Cloud on AWS either way in a very, very fast cost effective and safe way, then you can enrich, later on enrich these virtual machines with services that are offered by AWS, more than 200 different services ranging from object-based storage, load balancing, and so on. So it's an endless, endless possibility. >> We call that super cloud in the way that we generically defining it where everyone's innovating, but yet there's some common services. But the differentiation comes from innovation where the lock in is the value, not some spec, right? Samir, this is kind of where cloud is right now. You guys are not commodity, amazon's completely differentiating, but there's some commodity things happen. You got storage, you got compute, but then you got now advances in all areas. But partners innovate with you on their terms. >> Absolutely. >> And everybody wins. >> Yeah, I 100% agree with you. I think one of the key things, you know, as Daniel mentioned before, is where it's a cross education where there might be someone who's more proficient on the cloud side with AWS, maybe more proficient with the VMware's technology. But then for partners, right? They bridge that gap as well where they come in and they might have a specific niche or expertise where their background, where they can help our customers go through that transformation. So then that comes down to, hey, maybe I don't know how to connect to the cloud, maybe I don't know what the networking constructs are, maybe I can leverage that partner. That's one aspect to go about it. Now maybe you migrated that workload to VMware Cloud on AWS. Maybe you want to leverage any of the native AWS services or even just off the top, 200 plus AWS services, right? But it comes down to that skillset, right? So again, solutions architecture at the back of the day, end of the day, what it comes down to is being able to utilize the best of both worlds. That's what we're giving our customers at the end of the day. >> I mean, I just think it's a refactoring and innovation opportunity at all levels. I think now more than ever, you can take advantage of each other's ecosystems and partners and technologies and change how things get done with keeping the consistency. I mean, Daniel, you nailed that, right? I mean you don't have to do anything. You still run it. Just spear the way you're working on it and now do new things. This is kind of a cultural shift. >> Yeah, absolutely. And if you look, not every customer, not every organization has the resources to refactor and re-platform everything. And we give them a very simple and easy way to move workloads to the cloud. Simply run them and at the same time, they can free up resources to develop new innovations and grow their business. >> Awesome. Samir, thank you for coming on. Daniel, thank you for coming to Germany. >> Thank you. Oktoberfest, I know it's evening over there, weekend's here. And thank you for spending the time. Samir, give you the final word. AWS re:Invent's coming up. We're preparing, we're going to have an exclusive with Adam, with Fryer, we'd do a curtain raise, and do a little preview. What's coming down on your side with the relationship and what can we expect to hear about what you got going on at re:Invent this year? The big show? >> Yeah, so I think Daniel hit upon some of the key points, but what I will say is we do have, for example, specific sessions, both that VMware's driving and then also that AWS is driving. We do have even where we have what are called chalk talks. So I would say, and then even with workshops, right? So even with the customers, the attendees who are there, whatnot, if they're looking to sit and listen to a session, yes that's there, but if they want to be hands-on, that is also there too. So personally for me as an IT background, been in sysadmin world and whatnot, being hands-on, that's one of the key things that I personally am looking forward. But I think that's one of the key ways just to learn and get familiar with the technology. >> Yeah, and re:Invent's an amazing show for the in-person. You guys nail it every year. We'll have three sets this year at theCUBE and it's becoming popular. We have more and more content. You guys got live streams going on, a lot of content, a lot of media. So thanks for sharing that. Samir, Daniel, thank you for coming on on this part of the Showcase episode of really the customer successes with VMware Cloud on AWS, really accelerating business transformation with AWS and VMware. I'm John Furrier with theCUBE, thanks for watching. (upbeat music)

Published Date : Oct 21 2022

SUMMARY :

This is the customer successes Great to have you guys both on. One of the things to keep in mind Daniel, I want to get to you in a second And over the time, we really that the ops teams are in the ITOps area. And so when you look at So that's going to give you even with logging, you in the next 10 to 15 years." And the answer is to make What's in it for the customer there? and that ability to just I'd love to have you guys explain, and to contribute to our community. but no one's really missed the beat. So the key thing is always to maintain But I will ask you guys to comment on, and memory and you have to pay for it. So it comes down to, you know, and you guys are in the is you can choose the best with you on their terms. on the cloud side with AWS, I mean you don't have to do anything. has the resources to refactor Samir, thank you for coming on. And thank you for spending the time. that's one of the key things of really the customer successes

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
AmazonORGANIZATION

0.99+

Daniel RethmeierPERSON

0.99+

DanielPERSON

0.99+

AWSORGANIZATION

0.99+

SamirPERSON

0.99+

MarylandLOCATION

0.99+

Pat GelsingerPERSON

0.99+

amazonORGANIZATION

0.99+

GermanyLOCATION

0.99+

John FurrierPERSON

0.99+

2016DATE

0.99+

100%QUANTITY

0.99+

AdamPERSON

0.99+

Samir KadooPERSON

0.99+

more than 200 different servicesQUANTITY

0.99+

Palo Alto, CaliforniaLOCATION

0.99+

VMwareORGANIZATION

0.99+

twoQUANTITY

0.99+

two solutionsQUANTITY

0.99+

both sidesQUANTITY

0.99+

this yearDATE

0.99+

CubeConEVENT

0.99+

Madhura Maskasky, Platform9 | Cloud Native at Scale


 

(uplifting music) >> Hello and welcome to The Cube, here in Palo Alto, California for a special program on cloud-native at scale, enabling next generation cloud or SuperCloud for modern application cloud-native developers. I'm John Furrier, host of The Cube. My pleasure to have here Madhura Maskasky, co-founder and VP of Product at Platform9. Thanks for coming in today for this cloud-native at scale conversation. >> Thank you for having me. >> So, cloud-native at scale, something that we're talking about because we're seeing the next level of mainstream success of containers, Kubernetes and cloud-native developers, basically DevOps in the CICD pipeline. It's changing the landscape of infrastructure as code, it's accelerating the value proposition and the SuperCloud as we call it, has been getting a lot of traction because this next generation cloud is looking a lot different, but kind of the same as the first generation. What's your view on SuperCloud as it fits to cloud-native as scales up? >> Yeah, you know, I think what's interesting, and I think the reason why SuperCloud is a really good and a really fit term for this, and I think, I know my CEO was chatting with you as well, and he was mentioning this as well, but I think there needs to be a different term than just multi-cloud or cloud. And the reason is because as cloud-native and cloud deployments have scaled, I think we've reached a point now where, instead of having the traditional data center style model where you have a few large distributors of infrastructure and workload at a few locations, I think the model is kind of flipped around, right, where you have a large number of micro sites. These micro sites could be your public cloud deployment, your private, on-prem infrastructure deployments, or it could be your edge environment, right? And every single enterprise, every single industry is moving that direction. And so you got to refer that with a terminology that indicates the scale and complexity of it. And so I think SuperCloud is an appropriate term for that. >> So, you brought a couple things I want to dig into. You mentioned edge nodes. We're seeing not only edge nodes being the next kind of area of innovation, mainly because it's just popping up everywhere. And that's just the beginning. What even know what's around the corner. You got buildings, you got IOT, OT and IT kind of coming together, but you also got this idea of regions, global infrastructure is a big part of it. I just saw some news around CloudFlare shutting down a site here. There's policies being made at scale. These new challenges there. Can you share, because you got to have edge. So, hybrid cloud is a winning formula. Everybody knows that it's a steady state. >> Madhura: Yeah. >> But across multiple clouds brings in this new un-engineered area, yet it hasn't been done yet. Spanning clouds. People say they're doing it, but you start to see the toe in the water, it's happening, it's going to happen. It's only going to get accelerated with the edge and beyond globally. So I have to ask you, what is the technical challenges in doing this? Because it's something business consequences as well, but there are technical challenges. Can you share your view on what the technical challenges are for the SuperCloud or across multiple edges and regions? >> Yeah, absolutely. So, I think, you know, in the context of this, this term of SuperCloud, I think, it's sometimes easier to visualize things in terms of two axes, right? I think on one end you can think of the scale in terms of just pure number of nodes that you have, deploy number of clusters in the Kubernetes space. And then, on the other access you would have your distribution factor, right? Which is, do you have these tens of thousands of nodes in one site or do you have them distributed across tens of thousands of sites with one node at each site? Right? And if you have just one flavor of this, there is enough complexity but potentially manageable. But when you are expanding on both these axes you really get to a point where that scale really needs some well thought out, well structured solutions to address it. Right? A combination of homegrown tooling along with your, you know, favorite distribution of Kubernetes is not a strategy that can help you in this environment. It may help you when you have one of this or when your scale is not at the level. >> Can you scope the complexity? Because I mean, I hear a lot of moving parts going on there, the technology's also getting better. We're seeing cloud-native becomes successful. There's a lot to configure, there's a lot to install. Can you scope the scale of the problem? Because about at scale, >> Madhura: Yeah. >> Challenges here. >> Yeah. Absolutely. And I think, you know, I like to call it, you know, the problem that the scale creates, you know, there's various problems, but I think one problem, one way to think about it is you know, it works on my cluster problem, right? So, you know, I come from engineering background and there's a, you know, there's a famous saying between engineers and QA and the support folks, right. Which is, it works on my laptop, which is I tested this change, everything was fantastic, it worked flawlessly on my machine, on production, it's not working. And the exact same problem now happens in these distributed environments, but at massive scale, right. Which is that, you know, developers test their applications, et cetera within the sanctity of their sandbox environments. But once you expose that change in the wild world of your production deployment, right. And the production deployment could be going at the radio cell tower at the edge location where a cluster is running there, or it could be sending, you know, these applications and having them run at my customer site where they might not have configured that cluster exactly the same way as I configured it, or they configured the cluster right. But maybe they didn't deploy the security policies or they didn't deploy the other infrastructure plugins that my app relies on. All of these various factors add their own layer of complexity. And there really isn't a simple way to solve that today. And that is just, you know, one example of an issue that happens. I think another, you know, whole new ballgame of issues come in the context of security, right? Because when you are deploying applications at scale in a distributed manner, you got to make sure someone's job is on the line to ensure that the right security policies are enforced regardless of that scale factor. So, I think that's another example of problems that occur. >> Okay. So, I have to ask about scale because there are a lot of multiple steps involved when you see the success of cloud native. You know, you see some, you know, some experimentation. They set up a cluster, say, it's containers and Kubernetes, and then you say, okay, we got this, we configure it. And then, they do it again and again, they call it day two. Some people call it day one, day two operation, whatever you call it. Once you get past the first initial thing, then you got to scale it. Then you're seeing security breaches, you're seeing configuration errors. This seems to be where the hotspot is. And when companies transition from, I got this to, oh no, it's harder than I thought at scale. Can you share your reaction to that and how you see this playing out? >> Yeah, so, you know, I think it's interesting. There's multiple problems that occur when, you know, the two factors of scale, as we talked about start expanding. I think, one of them is what I like to call the, you know, it works fine on my cluster problem, which is back in, when I was a developer, we used to call this, it works on my laptop problem, which is, you know, you have your perfectly written code that is operating just fine on your machine, your sandbox environment. But the moment it runs production, it comes back with P zeros and P ones from support teams, et cetera. And those issues can be really difficult to triage. Right. And so, in the Kubernetes environment, this problem kind of multi-folds, it goes, you know, escalates to a higher degree because you have your sandbox developer environments, they have their clusters and things work perfectly fine in those clusters because these clusters are typically handcrafted or a combination of some scripting and handcrafting. And so, as you give that change to then run at your production edge location, like say your radio cell tower site or you hand it over to a customer to run it on their cluster, they might not have configured that cluster exactly how you did, or they might not have configured some of the infrastructure plugins. And so the things don't work. And when things don't work, triaging them becomes like (indistinct) hard, right? It's just one of the examples of the problem. Another whole bucket of issues is security, which is you have these distributed clusters at scale, you got to ensure someone's job is on the line to make sure that the security policies are configured properly. >> So, this is a huge problem. I love that comment. That's not happening on my system. It's the classic, you know, debugging mentality. >> Madhura: Yeah. >> But at scale it's hard to do that with error prone. I can see that being a problem. And you guys have a solution you're launching. Can you share what Arlon is this new product? What is it all about? Talk about this new introduction. >> Yeah, absolutely. I'm very, very excited. You know, it's one of the projects that we've been working on for some time now because we are very passionate about this problem and just solving problems at scale in on-prem or at in the cloud or at edge environments. And what Arlon is, it's an open source project and it is a tool, it's a Kubernetes native tool for a complete end-to-end management of not just your clusters, but your clusters, all of the infrastructure that goes within and along the sites of those clusters, security policies, your middleware plugins, and finally your applications. So, what Arlon lets you do in a nutshell is in a declarative way, it lets you handle the configuration and management of all of these components in at scale. >> So, what's the elevator pitch simply put for what dissolves in terms of the chaos you guys are reigning in, what's the bumper sticker? >> Yeah. >> What would it do? >> There's a perfect analogy that I love to reference in this context, which is think of your assembly line, you know, in a traditional, let's say, you know, an auto manufacturing factory or et cetera, and the level of efficiency at scale that assembly line brings, right? Arlon, and if you look at the logo we've designed, it's this funny little robot, and it's because when we think of Arlon, we think of these enterprise large scale environments, you know, sprawling at scale creating chaos because there isn't necessarily a well thought through, well-structured solution that's similar to an assembly line, which is taking each component, you know, addressing them, manufacturing, processing them in a standardized way, then handing to the next stage where again, it gets, you know, processed in a standardized way. And that's what Arlon really does. That's like deliver the pitch. If you have problems of scale of managing your infrastructure, you know, that is distributed. Arlon brings the assembly line level of efficiency and consistency for those. >> So keeping it smooth, the assembly line, things are flowing, CICD, pipelining. >> Madhura: Exactly. >> So, that's what you're trying to simplify that OPS piece for the developer. I mean, it's not really OPS, it's their OPS, it's coding. >> Yeah. Not just developer, the OPS, the operations folks as well, right? Because developers, you know, there is, developers are responsible for one picture of that layer, which is my apps, and then maybe that middle layer of applications that they interface with, but then they hand it over to someone else who's then responsible to ensure that these apps are secured properly, that they are logging, logs are being collected properly, monitoring and observability is integrated. And so, it solves problems for both those teams. >> Yeah, it's DevOps. So, the DevOps is the cloud-needed developer. The option teams have to kind of set policies. Is that where the declarative piece comes in? Is that why that's important? >> Absolutely. Yeah. And, you know, Kubernetes really introduced or elevated this declarative management, right? Because you know, Kubernetes clusters are, or your, yeah, you know, specifications of components that go in Kubernetes are defined in declarative way, and Kubernetes always keeps that state consistent with your defined state. But when you go outside of that world of a single cluster, and when you actually talk about defining the clusters or defining everything that's around it, there really isn't a solution that does that today. And so Arlon addresses that problem at the heart of it, and it does that using existing open source, well-known solutions. >> And, I want get into the benefits, what's in it for me as the customer, developer, but I want to finish this out real quick and get your thoughts. You mentioned open source. Why open source? What's the current state of the product? You run the product group over there, Platform9, is it open source? And you guys have a product that's commercial. Can you explain the open-source dynamic? And first of all, why open source? >> Madhura: Yeah. >> And what is the consumption? I mean, open source is great, people want open source, they can download it, look up the code, but you know, maybe want to buy the commercial. So, I'm assuming you have that thought through, can you share? >> Madhura: Yeah. >> Open source and commercial relationship. >> Yeah. I think, you know, starting with why open source, I think, it's, you know, we as a company, we have, you know, one of the things that's absolutely critical to us is that we take mainstream open-source technologies components, and then we, you know, make them available to our customers at scale through either a SaaS model or on-prem model, right? But, so as we are a company or startup or a company that benefits, you know, in a massive way by this open-source economy, it's only right, I think in my mind that, we do our part of the duty, right? And contribute back to the community that feeds us. And so, you know, we have always held that strongly as one of our principles. And we have, you know, created and built independent products starting all the way with Fission, which was a serverless product, you know, that we had built to various other, you know, examples that I can give. But that's one of the main reasons why open source and also open source because we want the community to really firsthand engage with us on this problem, which is very difficult to achieve if your product is behind a wall, you know, behind a block box. >> Well, and that's what the developers want too. I mean, what we're seeing in reporting with SuperCloud is the new model of consumption is I want to look at the code and see what's in there. >> Madhura: That's right. >> And then also, if I want to use it, I'll do it. Great. That's open source, that's the value. But then at the end of the day, if I want to move fast, that's when people buy in. So it's a new kind of freemium, I guess, business model. I guess that's the way is, well, but that's the benefit of open source. This is why standards and open source growing so fast, you have that confluence of, you know, a way for us to try before they buy, but also actually kind of date the application, if you will. We, you know, Adrian (indistinct) uses the dating metaphor, you know, hey, you know, I want to check it out first before I get married. >> Madhura: Right. >> And that's what open source. So, this is the new, this is how people are selling. This is not just open source, this is how companies are selling. >> Absolutely. Yeah. Yeah. You know, I think in, you know, two things, I think one is just, you know, this cloud-native space is so vast that if you're building a close flow solution, sometimes there's also a risk that it may not apply to every single enterprise's use cases. And so having it open source gives them an opportunity to extend it, expand it, to make it proper to their use case if they choose to do so. Right? But at the same time, what's also critical to us is we are able to provide a supported version of it with an SLA that we, you know, that's backed by us, a Saas-hosted version of it as well, for those customers who choose to go that route, you know, once they have used the open-source version and loved it and want to take it at scale and in production and need a partner to collaborate with, who can, you know, support them for that production environment. >> I have to ask you. Now, let's get into what's in it for the customer. I'm a customer, why should I be enthused about Arlon? What's in it for me? You know. 'Cause if I'm not enthused about it, I'm not going to be confident and it's going to be hard for me to get behind this. Can you share your enthusiastic view of, you know, why I should be enthused about Arlon? I'm a customer. >> Yeah, absolutely. And so, and there's multiple, you know, enterprises that we talk to, many of them, you know, our customers, where this is a very kind of typical story that you hear, which is we have, you know, a Kubernetes distribution. It could be on premise, it could be public cloud-native Kubernetes, and then, we have our CICD pipelines that are automating the deployment of applications, et cetera. And then, there's this gray zone. And the gray zone is well before you can, your CICD pipelines can deploy the apps, somebody needs to do all of that groundwork of, you know, defining those clusters and yeah, you know, properly configuring them. And as these things start by being done hand grown. And then, as you scale, what typically enterprises would do today is they will have their homegrown DIY solutions for this. I mean, a number of folks that I talk to that have built Terraform automation, and then, you know, some of those key developers leave. So, it's a typical open source or typical, you know, DIY challenge. And the reason that they're writing it themselves is not because they want to. I mean, of course, technology is always interesting to everybody, but it's because they can't find a solution that's out there that perfectly fits the problem. And so that's that pitch. I think, (indistinct) would be delighted. The folks that we've talk, you know, spoken with, have been absolutely excited and have, you know, shared that this is a major challenge we have today because we have, you know, few hundreds of clusters on EKS Amazon, and we want to scale them to few thousands, but we don't think we are ready to do that. And this will give us the ability to, >> Yeah, I think, people are scared. I won't say scare, that's a bad word. Maybe I should say that they feel nervous because, you know, at scale, small mistakes can become large mistakes. This is something that is concerning to enterprises. And I think, this is going to come up at (indistinct) this year where enterprises are going to say, okay, I need to see SLAs. I want to see track record, I want to see other companies that have used it. >> Madhura: Yeah. >> How would you answer that question to, or challenge, you know, hey, I love this, but is there any guarantees? Is there any, what's the SLA, I'm an enterprise, I got tight, you know, I love the open source trying to free fast and loose, but I need hardened code. >> Yeah, absolutely. So, two parts to that, right? One is Arlon leverages existing open-source components, products that are extremely popular. Two specifically. One is Arlon uses ArgoCD, which is probably one of the highest rated and used CD open-source tools that's out there, right? It's created by folks that are as part of into team now, you know, really brilliant team. And it's used at scale across enterprises. That's one. Second is Arlon also makes use of cluster API (indistinct), which is a Kubernetes' sub-component, right? For life cycle management of clusters. So, there is enough of, you know, community users, et cetera, around these two products, right? Or open-source projects that will find Arlon to be right up in their alley because they're already comfortable, familiar with ArgoCD. Now, Arlon just extends the scope of what ArgoCD can do. And so, that's one. And then, the second part is going back to your point of the comfort. And that's where, you know, Platform9 has a role to play, which is when you are ready to deploy Arlon at scale, because you've been, you know, playing with it in your (indistinct) test environments, you're happy with what you get with it, then Platform9 will stand behind it and provide that SLA. >> And what's been the reaction from customers you've talked to Platform9 customers with, that are familiar with Argo and then Arlon? What's been some of the feedback? >> Yeah, I think, the feedback's been fantastic. I mean, I can give examples of customers where, you know, initially, you know, when you are telling them about your entire portfolio of solutions, it might not strike a card right away. But then we start talking about Arlon, and we talk about the fact that it uses ArgoCD they start opening up, they say, we have standardized on Argo and we have built these components, homegrown, we would be very interested. Can we co-develop? Does it support these use cases? So, we've had that kind of validation. We've had validation all the way at the beginning of Arlon before we even wrote a single line of code saying, this is something we plan on doing. And the customer said, if you had it today, I would've purchased it. So, it's been really great validation. >> All right. So, next question is, what is the solution to the customer? If I asked you, look at, I have, I'm so busy, my team's overworked. I got a skills gap, I don't need another project that's so I'm so tied up right now, and I'm just chasing my tail. How does Platform9 help me? >> Yeah, absolutely. So I think, you know, one of the core tenants of Platform9 has always been that, we try to bring that public cloud like simplicity by hosting, you know, this in a lot of such similar tools in a SaaS-hosted manner for our customers, right? So, our goal behind doing that is taking away or trying to take away all of that complexity from customer's hands and offloading it to our hands, right? And giving them that full white glove treatment as we call it. And so, from a customer's perspective, one, something like Arlon will integrate with what they have, so, they don't have to rip and replace anything. In fact, it will, even in the next versions, it may even discover your clusters that you have today, and, you know, give you an inventory. And then, >> So, customers have clusters that are growing, that's a sign, >> Correct. >> Call you guys. >> Absolutely. Either they have massive large clusters. Right. That they want to split into smaller clusters, but they're not comfortable doing that today, or they've done that already on say, public cloud or otherwise. And now, they have management challenges. >> So, especially, operationalizing the clusters, whether they want to kind of reset everything and remove things around and reconfigure >> Madhura: Yeah. >> And or scale out. >> That's right. Exactly. >> And you provide that layer of policy. >> Absolutely. Yes. >> That's the key value here. >> That's right. >> So, policy-based configuration for cluster scale up. >> Profile and policy-based, declarative configuration and life cycle management for clusters. >> If I asked you how this enables SuperCloud, what would you say to that? >> I think, this is one of the key ingredients to SuperCloud, right? If you think about a SuperCloud environment, there is at least few key ingredients that come to my mind that are really critical. Like they are, you know, life-saving ingredients at that scale. One is having a really good strategy for managing that scale. You know, in a, going back to assembly line in a very consistent, predictable way. So, that Arlon solves, then you need to compliment that with the right kind of observability and monitoring tools at scale, right? Because ultimately issues are going to happen and you're going to have to figure out, you know, how to solve them fast. And Arlon by the way, also helps in that direction, but you also need observability tools. And then, especially if you're running at on the public cloud, you need some cost management tools. In my mind, these three things are like the most necessary ingredients to make SuperCloud successful. And you know, Arlon flows in one, >> Okay, so now, the next level is, okay, that makes sense. It's under the covers kind of speak under the hood. >> Madhura: Yeah. >> How does that impact the app developers of the cloud-native modern application workflows? Because the impact to me seems the apps are going to be impacted. Are they going to be faster, stronger? I mean, what's the impact, if you do all those things as you mentioned, what's the impact of the apps? >> Yeah, the impact is that your apps are more likely to operate in production the way you expect them to, because the right checks and balances have gone through, and any discrepancies have been identified prior to those apps, prior to your customer running into them, right? Because developers run into this challenge today where there's a split responsibility, right? I'm responsible for my code, I'm responsible for some of these other plugins, but I don't own the stack end to end. I have to rely on my OPS counterpart to do their part, right? And so, this really gives them, you know, the right tooling for that. >> So, this is actually a great kind of relevant point, you know, as cloud becomes more scalable, you're starting to see this fragmentation gone of the days of the full-stack developer to the more specialized role. But this is a key point, and I have to ask you because if this Arlon solution takes place, as you say, and the apps are going to be (indistinct), they're designed to do, the question is, what does the current pain look like? Are the apps breaking? What is the signals to the customer, >> Madhura: Yeah. >> That they should be calling you guys up into implementing Arlon, Argo, and on all the other goodness to automate, what does some of the signals, is it downtime? Is it failed apps, is it latency? What are some of the things that, >> Madhura: Yeah, absolutely. >> Would be indications of things are F'ed up a little bit. >> Yeah. More frequent down times, down times that are, that take longer to triage. And so your, you know, your mean times on resolution, et cetera, are escalating or growing larger, right? Like we have environments of customers where they have a number of folks on in the field that have to take these apps and run them at customer sites. And that's one of our partners, and they're extremely interested in this because the rate of failures they're encountering for this, you know, the field when they're running these apps on site, because the field is automating their clusters that are running on sites using their own scripts. So, these are the kinds of challenges, and those are the pain points, which is, you know, if you're looking to reduce your mean time to resolution, if you're looking to reduce the number of failures that occur on your production site, that's one. And second, if you're looking to manage these at scale environments with a relatively small, focused, nimble OPS team, which has an immediate impact on your budget. So, those are the signals. >> This is the cloud-native at scale situation, the innovation going on. Final thought is your reaction to the idea that, if the world goes digital, which it is, and the confluence of physical and digital coming together, and cloud continues to do its thing, the company becomes the application, not where IT used to be supporting the business, you know, the back office and the (indistinct) terminals and some PCs and handhelds. Now, if technology's running, the business is the business. >> Yeah. >> Company is the application. >> Yeah. >> So, it can't be down. So, there's a lot of pressure on CSOs and CIOs now and boards is saying, how is technology driving the top-line revenue? That's the number one conversation. >> Yeah. >> Do you see the same thing? >> Yeah, it's interesting. I think there's multiple pressures at the CXO, CIO level, right? One is that there needs to be that visibility and clarity and guarantee almost that, you know, the technology that's, you know, that's going to drive your top line is going to drive that in a consistent, reliable, predictable manner. And then second, there is the constant pressure to do that while always lowering your costs of doing it, right? Especially, when you're talking about, let's say, retailers or those kinds of large-scale vendors, they many times make money by lowering the amount that they spend on, you know, providing those goods to their end customers. So, I think those, both those factors kind of come into play and the solution to all of them is usually in a very structured strategy around automation. >> Final question. What does cloud-native at scale look like to you? If all the things happen the way we want them to happen, the magic wand, the magic dust, what does it look like? >> What that looks like to me is a CIO sipping at his desk on coffee, production is running absolutely smooth. And he's running that at a nimble, nimble team size of at the most, a handful of folks that are just looking after things, but things are just taking care of themselves. >> John: And the CIO doesn't exist and there's no CISO, there at the beach. >> (laughs) Yeah. >> Thank you for coming on, sharing the cloud-native at scale here on The Cube. Thank you for your time. >> Fantastic. Thanks for having me. >> Okay. I'm John Furrier here, for special program presentation, special programming cloud-native at scale, enabling SuperCloud modern applications with Platform9. Thanks for watching. (gentle music)

Published Date : Oct 20 2022

SUMMARY :

My pleasure to have here Madhura Maskasky, and the SuperCloud as we call it, Yeah, you know, I And that's just the beginning. Can you share your view on what So, I think, you know, Can you scope the And that is just, you know, Kubernetes, and then you say, I like to call the, you know, you know, debugging mentality. And you guys have a and along the sites of those in a traditional, let's say, you know, the assembly line, piece for the developer. Because developers, you know, there is, So, the DevOps is the Because you know, Kubernetes clusters are, And you guys have a look up the code, but you know, Open source and And we have, you know, created and built the developers want too. the application, if you will. And that's what open to go that route, you know, enthusiastic view of, you know, And so, and there's multiple, you know, And I think, this is going to I'm an enterprise, I got tight, you know, And that's where, you know, of customers where, you know, and I'm just chasing my tail. clusters that you have today, And now, they have management challenges. That's right. Absolutely. So, policy-based configuration and life cycle management for clusters. at on the public cloud, you Okay, so now, the next level is, Because the impact to me seems the way you expect them to, and I have to ask you Would be indications of points, which is, you know, supporting the business, you know, That's the number one conversation. the technology that's, you know, If all the things happen the What that looks like to me John: And the CIO doesn't Thank you for your time. Thanks for having me. for special program presentation,

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Madhura MaskaskyPERSON

0.99+

JohnPERSON

0.99+

John FurrierPERSON

0.99+

MadhuraPERSON

0.99+

second partQUANTITY

0.99+

ArlonORGANIZATION

0.99+

Palo Alto, CaliforniaLOCATION

0.99+

oneQUANTITY

0.99+

one siteQUANTITY

0.99+

TwoQUANTITY

0.99+

first generationQUANTITY

0.99+

two factorsQUANTITY

0.99+

bothQUANTITY

0.99+

two thingsQUANTITY

0.99+

each siteQUANTITY

0.99+

each componentQUANTITY

0.99+

firstQUANTITY

0.99+

Platform9ORGANIZATION

0.99+

one flavorQUANTITY

0.99+

ArgoORGANIZATION

0.98+

two partsQUANTITY

0.98+

secondQUANTITY

0.98+

SecondQUANTITY

0.98+

todayDATE

0.98+

SuperCloudTITLE

0.98+

AdrianPERSON

0.98+

tens of thousands of nodesQUANTITY

0.98+

one problemQUANTITY

0.98+

OneQUANTITY

0.98+

one nodeQUANTITY

0.98+

two productsQUANTITY

0.97+

tens of thousands of sitesQUANTITY

0.97+

one pictureQUANTITY

0.97+

The CubeORGANIZATION

0.96+

one endQUANTITY

0.96+

CloudFlareTITLE

0.96+

Platform9TITLE

0.95+

this yearDATE

0.95+

CXOORGANIZATION

0.95+

two axesQUANTITY

0.94+

three thingsQUANTITY

0.94+

EKSORGANIZATION

0.93+

single lineQUANTITY

0.92+

one exampleQUANTITY

0.91+

single clusterQUANTITY

0.91+

Platform9, Cloud Native at Scale


 

>>Everyone, welcome to the cube here in Palo Alto, California for a special presentation on Cloud native at scale, enabling super cloud modern applications with Platform nine. I'm John Furry, your host of The Cube. We've got a great lineup of three interviews we're streaming today. Mattor Makki, who's the co-founder and VP of Product of Platform nine. She's gonna go into detail around Arlon, the open source products, and also the value of what this means for infrastructure as code and for cloud native at scale. Bickley the chief architect of Platform nine Cube alumni. Going back to the OpenStack days. He's gonna go into why Arlon, why this infrastructure as code implication, what it means for customers and the implications in the open source community and where that value is. Really great wide ranging conversation there. And of course, Vascar, Gort, the CEO of Platform nine, is gonna talk with me about his views on Super Cloud and why Platform nine has a scalable solutions to bring cloud native at scale. So enjoy the program, see you soon. Hello and welcome to the cube here in Palo Alto, California for a special program on cloud native at scale, enabling next generation cloud or super cloud for modern application cloud native developers. I'm John Forry, host of the Cube. Pleasure to have here me Makowski, co-founder and VP of product at Platform nine. Thanks for coming in today for this Cloudnative at scale conversation. >>Thank you for having >>Me. So Cloudnative at scale, something that we're talking about because we're seeing the, the next level of mainstream success of containers Kubernetes and cloud native develop, basically DevOps in the C I C D pipeline. It's changing the landscape of infrastructure as code, it's accelerating the value proposition and the super cloud as we call it, has been getting a lot of traction because this next generation cloud is looking a lot different, but kind of the same as the first generation. What's your view on Super cloud as it fits to cloud native as scales up? >>Yeah, you know, I think what's interesting, and I think the reason why Super Cloud is a really good and a really fit term for this, and I think, I know my CEO was chatting with you as well, and he was mentioning this as well, but I think there needs to be a different term than just multi-cloud or cloud. And the reason is because as cloud native and cloud deployments have scaled, I think we've reached a point now where instead of having the traditional data center style model, where you have a few large distributors of infrastructure and workload at a few locations, I think the model is kind of flipped around, right? Where you have a large number of micro sites. These micro sites could be your public cloud deployment, your private on-prem infrastructure deployments, or it could be your edge environment, right? And every single enterprise, every single industry is moving in that direction. And so you gotta rougher that with a terminology that, that, that indicates the scale and complexity of it. And so I think super cloud is a, is an appropriate term for >>That. So you brought a couple things I want to dig into. You mentioned Edge Notes. We're seeing not only edge nodes being the next kind of area of innovation, mainly because it's just popping up everywhere. And that's just the beginning. Wouldn't even know what's around the corner. You got buildings, you got iot, o ot, and it kind of coming together, but you also got this idea of regions, global infrastructures, big part of it. I just saw some news around cloud flare shutting down a site here, there's policies being made at scale. These new challenges there. Can you share because you can have edge. So hybrid cloud is a winning formula. Everybody knows that it's a steady state. Yeah. But across multiple clouds brings in this new un engineered area, yet it hasn't been done yet. Spanning clouds. People say they're doing it, but you start to see the toe in the water, it's happening, it's gonna happen. It's only gonna get accelerated with the edge and beyond globally. So I have to ask you, what is the technical challenges in doing this? Because it's something business consequences as well, but there are technical challenge. Can you share your view on what the technical challenges are for the super cloud across multiple edges and >>Regions? Yeah, absolutely. So I think, you know, in in the context of this, the, this, this term of super cloud, I think it's sometimes easier to visualize things in terms of two access, right? I think on one end you can think of the scale in terms of just pure number of nodes that you have, deploy number of clusters in the Kubernetes space. And then on the other access you would have your distribution factor, right? Which is, do you have these tens of thousands of nodes in one site or do you have them distributed across tens of thousands of sites with one node at each site? Right? And if you have just one flavor of this, there is enough complexity, but potentially manageable. But when you are expanding on both these access, you really get to a point where that skill really needs some well thought out, well-structured solutions to address it, right? A combination of homegrown tooling along with your, you know, favorite distribution of Kubernetes is not a strategy that can help you in this environment. It may help you when you have one of this or when you, when you scale, is not at the level. >>Can you scope the complexity? Because I mean, I hear a lot of moving parts going on there, the technology's also getting better. We we're seeing cloud native become successful. There's a lot to configure, there's a lot to install. Can you scope the scale of the problem? Because we're talking about at scale Yep. Challenges here. >>Yeah, absolutely. And I think, you know, I I like to call it, you know, the, the, the problem that the scale creates, you know, there's various problems, but I think one, one problem, one way to think about it is, is, you know, it works on my cluster problem, right? So, you know, I come from engineering background and there's a, you know, there's a famous saying between engineers and QA and the support folks, right? Which is, it works on my laptop, which is I tested this change, everything was fantastic, it worked flawlessly on my machine, on production, It's not working. The exact same problem now happens and these distributed environments, but at massive scale, right? Which is that, you know, developers test their applications, et cetera within the sanctity of their sandbox environments. But once you expose that change in the wild world of your production deployment, right? >>And the production deployment could be going at the radio cell tower at the edge location where a cluster is running there, or it could be sending, you know, these applications and having them run at my customer's site where they might not have configured that cluster exactly the same way as I configured it, or they configured the cluster, right? But maybe they didn't deploy the security policies or they didn't deploy the other infrastructure plugins that my app relies on all of these various factors at their own layer of complexity. And there really isn't a simple way to solve that today. And that is just, you know, one example of an issue that happens. I think another, you know, whole new ball game of issues come in the context of security, right? Because when you are deploying applications at scale in a distributed manner, you gotta make sure someone's job is on the line to ensure that the right security policies are enforced regardless of that scale factor. So I think that's another example of problems that occur. >>Okay. So I have to ask about scale because there are a lot of multiple steps involved when you see the success cloud native, you know, you see some, you know, some experimentation. They set up a cluster, say it's containers and Kubernetes, and then you say, Okay, we got this, we can configure it. And then they do it again and again, they call it day two. Some people call it day one, day two operation, whatever you call it. Once you get past the first initial thing, then you gotta scale it. Then you're seeing security breaches, you're seeing configuration errors. This seems to be where the hotpot is. And when companies transition from, I got this to, Oh no, it's harder than I thought at scale. Can you share your reaction to that and how you see this playing out? >>Yeah, so, you know, I think it's interesting. There's multiple problems that occur when, you know, the, the two factors of scale is we talked about start expanding. I think one of them is what I like to call the, you know, it, it works fine on my cluster problem, which is back in, when I was a developer, we used to call this, it works on my laptop problem, which is, you know, you have your perfectly written code that is operating just fine on your machine, your sandbox environment. But the moment it runs production, it comes back with p zeros and POS from support teams, et cetera. And those issues can be really difficult to try us, right? And so in the Kubernetes environment, this problem kind of multi folds, it goes, you know, escalates to a higher degree because yeah, you have your sandbox developer environments, they have their clusters and things work perfectly fine in those clusters because these clusters are typically handcrafted or a combination of some scripting and handcrafting. >>And so as you give that change to then run at your production edge location, like say you radio sell tower site, or you hand it over to a customer to run it on their cluster, they might not have not have configured that cluster exactly how you did it, or they might not have configured some of the infrastructure plugins. And so the things don't work. And when things don't work, triaging them becomes like ishly hard, right? It's just one of the examples of the problem. Another whole bucket of issues is security, which is, is you have these distributed clusters at scale, you gotta ensure someone's job is on the line to make sure that these security policies are configured properly. >>So this is a huge problem. I love that comment. That's not not happening on my system. It's the classic, you know, debugging mentality. Yeah. But at scale it's hard to do that with error prone. I can see that being a problem. And you guys have a solution you're launching, Can you share what our lawn is, this new product, What is it all about? Talk about this new introduction. >>Yeah, absolutely. I'm very, very excited. You know, it's one of the projects that we've been working on for some time now because we are very passionate about this problem and just solving problems at scale in on-prem or at in the cloud or at edge environments. And what arwan is, it's an open source project and it is a tool, it's a Kubernetes native tool for complete end to end management of not just your clusters, but your clusters. All of the infrastructure that goes within and along the sites of those clusters, security policies, your middleware plugins, and finally your applications. So what alarm lets you do in a nutshell is in a declarative way, it lets you handle the configuration and management of all of these components in at scale. >>So what's the elevator pitch simply put for what this solves in, in terms of the chaos you guys are reigning in. What's the, what's the bumper sticker? Yeah, >>What would it do? There's a perfect analogy that I love to reference in this context, which is think of your assembly line, you know, in a traditional, let's say, you know, an auto manufacturing factory or et cetera, and the level of efficiency at scale that that assembly line brings, right online. And if you look at the logo we've designed, it's this funny little robot. And it's because when we think of online, we, we think of these enterprise large scale environments, you know, sprawling at scale creating chaos because there isn't necessarily a well thought through, well structured solution that's similar to an assembly line, which is taking each components, you know, addressing them, manufacturing, processing them in a standardized way, then handing to the next stage. But again, it gets, you know, processed in a standardized way. And that's what Arlon really does. That's like the I pitch. If you have problems of scale of managing your infrastructure, you know, that is distributed. Arlon brings the assembly line level of efficiency and consistency >>For those. So keeping it smooth, the assembly on things are flowing. C C I CD pipelining. Exactly. So that's what you're trying to simplify that ops piece for the developer. I mean, it's not really ops, it's their ops, it's coding. >>Yeah. Not just developer, the ops, the operations folks as well, right? Because developers, you know, there is, the developers are responsible for one picture of that layer, which is my apps, and then maybe that middleware of application that they interface with, but then they hand it over to someone else who's then responsible to ensure that these apps are secure properly, that they are logging, logs are being collected properly, monitoring and observability integrated. And so it solves problems for both those >>Teams. Yeah. It's DevOps. So the DevOps is the cloud native developer. The OP teams have to kind of set policies. Is that where the declarative piece comes in? Is that why that's important? >>Absolutely. Yeah. And, and, and, and you know, Kubernetes really in introduced or elevated this declarative management, right? Because, you know, c communities clusters are Yeah. Or your, yeah, you know, specifications of components that go in Kubernetes are defined in a declarative way. And Kubernetes always keeps that state consistent with your defined state. But when you go outside of that world of a single cluster, and when you actually talk about defining the clusters or defining everything that's around it, there really isn't a solution that does that today. And so online addresses that problem at the heart of it, and it does that using existing open source well known solutions. >>Ed, do I wanna get into the benefits? What's in it for me as the customer developer? But I want to finish this out real quick and get your thoughts. You mentioned open source. Why open source? What's the, what's the current state of the product? You run the product group over at platform nine, is it open source? And you guys have a product that's commercial? Can you explain the open source dynamic? And first of all, why open source? Yeah. And what is the consumption? I mean, open source is great, People want open source, they can download it, look up the code, but maybe wanna buy the commercial. So I'm assuming you have that thought through, can you share open source and commercial relationship? >>Yeah, I think, you know, starting with why open source? I think it's, you know, we as a company, we have, you know, one of the things that's absolutely critical to us is that we take mainstream open source technologies components and then we, you know, make them available to our customers at scale through either a SaaS model on from model, right? But, so as we are a company or startup or a company that benefits, you know, in a massive way by this open source economy, it's only right, I think in my mind that we do our part of the duty, right? And contribute back to the community that feeds us. And so, you know, we have always held that strongly as one of our principles. And we have, you know, created and built independent products starting all the way with fi, which was a serverless product, you know, that we had built to various other, you know, examples that I can give. But that's one of the main reasons why opensource and also opensource because we want the community to really firsthand engage with us on this problem, which is very difficult to achieve if your product is behind a wall, you know, behind, behind a block box. >>Well, and that's, that's what the developers want too. I mean, what we're seeing in reporting with Super Cloud is the new model of consumption is I wanna look at the code and see what's in there. That's right. And then also, if I want to use it, I, I'll do it. Great. That's open source, that's the value. But then at the end of the day, if I wanna move fast, that's when people buy in. So it's a new kind of freemium, I guess, business model. I guess that's the way that, Well, but that's, that's the benefit. Open source. This is why standards and open source is growing so fast. You have that confluence of, you know, a way for helpers to try before they buy, but also actually kind of date the application, if you will. We, you know, Adrian Karo uses the dating me metaphor, you know, Hey, you know, I wanna check it out first before I get married. Right? And that's what open source, So this is the new, this is how people are selling. This is not just open source, this is how companies are selling. >>Absolutely. Yeah. Yeah. You know, I think, and you know, two things. I think one is just, you know, this, this, this cloud native space is so vast that if you, if you're building a close flow solution, sometimes there's also a risk that it may not apply to every single enterprises use cases. And so having it open source gives them an opportunity to extend it, expand it, to make it proper to their use case if they choose to do so, right? But at the same time, what's also critical to us is we are able to provide a supported version of it with an SLA that we, you know, that's backed by us, a SAS hosted version of it as well, for those customers who choose to go that route, you know, once they have used the open source version and loved it and want to take it at scale and in production and need, need, need a partner to collaborate with, who can, you know, support them for that production >>Environment. I have to ask you now, let's get into what's in it for the customer. I'm a customer, why should I be enthused about Arlo? What's in it for me? You know? Cause if I'm not enthused about it, I'm not gonna be confident and it's gonna be hard for me to get behind this. Can you share your enthusiastic view of, you know, why I should be enthused about Arlo customer? >>Yeah, absolutely. And so, and there's multiple, you know, enterprises that we talk to, many of them, you know, our customers, where this is a very kind of typical story that you hear, which is we have, you know, a Kubernetes distribution. It could be on premise, it could be public clouds, native es, and then we have our C I CD pipelines that are automating the deployment of applications, et cetera. And then there's this gray zone. And the gray zone is well before you can you, your CS CD pipelines can deploy the apps. Somebody needs to do all of their groundwork of, you know, defining those clusters and yeah. You know, properly configuring them. And as these things, these things start by being done hand grown. And then as the, as you scale, what typically enterprises would do today is they will have their home homegrown DIY solutions for this. >>I mean, the number of folks that I talk to that have built Terra from automation, and then, you know, some of those key developers leave. So it's a typical open source or typical, you know, DIY challenge. And the reason that they're writing it themselves is not because they want to. I mean, of course technology is always interesting to everybody, but it's because they can't find a solution that's out there that perfectly fits the problem. And so that's that pitch. I think Spico would be delighted. The folks that we've talked, you know, spoken with, have been absolutely excited and have, you know, shared that this is a major challenge we have today because we have, you know, few hundreds of clusters on s Amazon and we wanna scale them to few thousands, but we don't think we are ready to do that. And this will give us >>Stability. Yeah, I think people are scared, not sc I won't say scare, that's a bad word. Maybe I should say that they feel nervous because, you know, at scale small mistakes can become large mistakes. This is something that is concerning to enterprises. And, and I think this is gonna come up at co con this year where enterprises are gonna say, Okay, I need to see SLAs. I wanna see track record, I wanna see other companies that have used it. Yeah. How would you answer that question to, or, or challenge, you know, Hey, I love this, but is there any guarantees? Is there any, what's the SLAs? I'm an enterprise, I got tight, you know, I love the open source trying to free fast and loose, but I need hardened code. >>Yeah, absolutely. So, so two parts to that, right? One is Arlan leverages existing open source components, products that are extremely popular. Two specifically. One is Lon uses Argo cd, which is probably one of the highest rated and used CD open source tools that's out there, right? It's created by folks that are as part of Intuit team now, you know, really brilliant team. And it's used at scale across enterprises. That's one. Second is arlon also makes use of cluster api capi, which is a ES sub-component, right? For lifecycle management of clusters. So there is enough of, you know, community users, et cetera, around these two products, right? Or, or, or open source projects that will find Arlan to be right up in their alley because they're already comfortable, familiar with algo cd. Now Arlan just extends the scope of what Algo CD can do. And so that's one. And then the second part is going back to a point of the comfort. And that's where, you know, Platform nine has a role to play, which is when you are ready to deploy Alon at scale, because you've been, you know, playing with it in your DEF test environments, you're happy with what you get with it, then Platform nine will stand behind it and provide that sla. >>And what's been the reaction from customers you've talked to Platform nine customers with, with, that are familiar with, with Argo and then Arlo? What's been some of the feedback? >>Yeah, I, I, I think the feedback's been fantastic. I mean, I can give you examples of customers where, you know, initially, you know, when you are, when you're telling them about your entire portfolio of solutions, it might not strike a card right away. But then we start talking about Arlan and, and we talk about the fact that it uses Argo CD and they start opening up, they say, We have standardized on Argo and we have built these components, homegrown, we would be very interested. Can we co-develop? Does it support these use cases? So we've had that kind of validation. We've had validation all the way at the beginning of our line before we even wrote a single line of code saying this is something we plan on doing. And the customer said, If you had it today, I would've purchased it. So it's been really great validation. >>All right. So next question is, what is the solution to the customer? If I asked you, Look it, I have, I'm so busy, my team's overworked. I got a skills gap. I don't need another project that's, I'm so tied up right now and I'm just chasing my tail. How does Platform nine help me? >>Yeah, absolutely. So I think, you know, one of the core tenets of Platform nine has always been that we try to bring that public cloud like simplicity by hosting, you know, this in a lot of such similar tools in a SaaS hosted manner for our customers, right? So our goal behind doing that is taking away or trying to take away all of that complexity from customer's hands and offloading it to our hands, right? And giving them that full white glove treatment as we call it. And so from a customer's perspective, one, something like arlon will integrate with what they have so they don't have to rip and replace anything. In fact, it will, even in the next versions, it may even discover your clusters that you have today and, you know, give you an inventory and that, >>So customers have clusters that are growing, that's a sign correct call you guys. >>Absolutely. Either they're, they have massive large clusters, right? That they wanna split into smaller clusters, but they're not comfortable doing that today, or they've done that already on say, public cloud or otherwise. And now they have management challenges. So >>Especially operationalizing the clusters, whether they want to kind of reset everything and remove things around and reconfigure Yeah. And or scale out. >>That's right. Exactly. >>And you provide that layer of policy. >>Absolutely. >>Yes. That's the key value >>Here. That's right. >>So policy based configuration for cluster scale up >>Profile and policy based declarative configuration and life cycle management for clusters. >>If I asked you how this enables Super club, what would you say to that? >>I think this is one of the key ingredients to super cloud, right? If you think about a super cloud environment, there's at least few key ingredients that that come to my mind that are really critical. Like they are, you know, life saving ingredients at that scale. One is having a really good strategy for managing that scale, you know, in a, going back to assembly line in a very consistent, predictable way so that our lot solves then you, you need to compliment that with the right kind of observability and monitoring tools at scale, right? Because ultimately issues are gonna happen and you're gonna have to figure out, you know, how to solve them fast. And alon by the way, also helps in that direction, but you also need observability tools. And then especially if you're running it on the public cloud, you need some cost management tools. In my mind, these three things are like the most necessary ingredients to make Super Cloud successful. And, you know, alarm flows >>In one. Okay, so now the next level is, Okay, that makes sense. There's under the covers kind of speak under the hood. Yeah. How does that impact the app developers and the cloud native modern application workflows? Because the impact to me, seems the apps are gonna be impacted. Are they gonna be faster, stronger? I mean, what's the impact if you do all those things, as you mentioned, what's the impact of the apps? >>Yeah, the impact is that your apps are more likely to operate in production the way you expect them to, because the right checks and balances have gone through, and any discrepancies have been identified prior to those apps, prior to your customer running into them, right? Because developers run into this challenge to their, where there's a split responsibility, right? I'm responsible for my code, I'm responsible for some of these other plugins, but I don't own the stack end to end. I have to rely on my ops counterpart to do their part, right? And so this really gives them, you know, the right tooling for >>That. So this is actually a great kind of relevant point, you know, as cloud becomes more scalable, you're starting to see this fragmentation gone of the days of the full stack developer to the more specialized role. But this is a key point, and I have to ask you because if this Arlo solution takes place, as you say, and the apps are gonna be stupid, there's designed to do, the question is, what did, does the current pain look like of the apps breaking? What does the signals to the customer Yeah. That they should be calling you guys up into implementing Arlo, Argo, and, and, and on all the other goodness to automate, What are some of the signals? Is it downtime? Is it, is it failed apps, Is it latency? What are some of the things that Yeah, absolutely would be in indications of things are effed up a little bit. >>Yeah. More frequent down times, down times that are, that take longer to triage. And so you are, you know, the, you know, your mean times on resolution, et cetera, are escalating or growing larger, right? Like we have environments of customers where they, they have a number of folks on in the field that have to take these apps and run them at customer sites. And that's one of our partners. And they're extremely interested in this because the, the rate of failures they're encountering for this, you know, the field when they're running these apps on site, because the field is automating their clusters that are running on sites using their own script. So these are the kinds of challenges, and those are the pain points, which is, you know, if you're looking to reduce your, your meantime to resolution, if you're looking to reduce the number of failures that occur on your production site, that's one. And second, if you are looking to manage these at scale environments with a relatively small, focused, nimble ops team, which has an immediate impact on your, So those are, those are the >>Signals. This is the cloud native at scale situation, the innovation going on. Final thought is your reaction to the idea that if the world goes digital, which it is, and the confluence of physical and digital coming together, and cloud continues to do its thing, the company becomes the application, not where it used to be supporting the business, you know, the back office and the IIA terminals and some PCs and handhelds. Now if technology's running, the business is the business. Yeah. The company's the application. Yeah. So it can't be down. So there's a lot of pressure on, on CSOs and CIOs now and see, and boards is saying, how is technology driving the top line revenue? That's the number one conversation. Yeah. Do you see that same thing? >>Yeah. It's interesting. I think there's multiple pressures at the CXO CIO level, right? One is that there needs to be that visibility and clarity and guarantee almost that, you know, that the, the technology that's, you know, that's gonna drive your top line is gonna drive that in a consistent, reliable, predictable manner. And then second, there is the constant pressure to do that while always lowering your costs of doing it, right? Especially when you're talking about, let's say retailers or those kinds of large scale vendors, they many times make money by lowering the amount that they spend on, you know, providing those goods to their end customers. So I think those, both those factors kind of come into play and the solution to all of them is usually in a very structured strategy around automation. >>Final question. What does cloudnative at scale look like to you? If all the things happen the way we want 'em to happen, The magic wand, the magic dust, what does it look like? >>What that looks like to me is a CIO sipping at his desk on coffee production is running absolutely smooth. And his, he's running that at a nimble, nimble team size of at the most, a handful of folks that are just looking after things with things. So just >>Taking care of, and the CIO doesn't exist. There's no CSO there at the beach. >>Yeah. >>Thank you for coming on, sharing the cloud native at scale here on the cube. Thank you for your time. >>Fantastic. Thanks for having >>Me. Okay. I'm John Fur here for special program presentation, special programming cloud native at scale, enabling super cloud modern applications with Platform nine. Thanks for watching. Welcome back everyone to the special presentation of cloud native at scale, the cube and platform nine special presentation going in and digging into the next generation super cloud infrastructure as code and the future of application development. We're here at Bickley, who's the chief architect and co-founder of Platform nine b. Great to see you Cube alumni. We, we met at an OpenStack event in about eight years ago, or well later, earlier when opens Stack was going. Great to see you and great to see congratulations on the success of platform nine. >>Thank you very much. >>Yeah. You guys have been at this for a while and this is really the, the, the year we're seeing the, the crossover of Kubernetes because of what happens with containers. Everyone now was realized, and you've seen what Docker's doing with the new docker, the open source Docker now just a success Exactly. Of containerization, right? And now the Kubernetes layer that we've been working on for years is coming, bearing fruit. This is huge. >>Exactly. Yes. >>And so as infrastructure's code comes in, we talked to Bacar talking about Super Cloud, I met her about, you know, the new Arlon, our R lawn you guys just launched, the infrastructure's code is going to another level. And then it's always been DevOps infrastructure is code. That's been the ethos that's been like from day one, developers just code. Then you saw the rise of serverless and you see now multi-cloud or on the horizon, connect the dots for us. What is the state of infrastructures code today? >>So I think, I think I'm, I'm glad you mentioned it, everybody or most people know about infrastructures code. But with Kubernetes, I think that project has evolved at the concept even further. And these dates, it's infrastructure as configuration, right? So, which is an evolution of infrastructure as code. So instead of telling the system, here's how I want my infrastructure by telling it, you know, do step A, B, C, and D instead with Kubernetes, you can describe your desired state declaratively using things called manifest resources. And then the system kind of magically figures it out and tries to converge the state towards the one that you specify. So I think it's, it's a even better version of infrastructures code. >>Yeah, yeah. And, and that really means it's developer just accessing resources. Okay. Not declaring, Okay, give me some compute, stand me up some, turn the lights on, turn 'em off, turn 'em on. That's kind of where we see this going. And I like the configuration piece. Some people say composability, I mean now with open source, so popular, you don't have to have to write a lot of code. It's code being developed. And so it's into integration, it's configuration. These are areas that we're starting to see computer science principles around automation, machine learning, assisting open source. Cuz you got a lot of code that's right in hearing software, supply chain issues. So infrastructure as code has to factor in these new, new dynamics. Can you share your opinion on these new dynamics of, as open source grows, the glue layers, the configurations, the integration, what are the core issues? >>I think one of the major core issues is with all that power comes complexity, right? So, you know, despite its expressive power systems like Kubernetes and declarative APIs let you express a lot of complicated and complex stacks, right? But you're dealing with hundreds if not thousands of these yamo files or resources. And so I think, you know, the emergence of systems and layers to help you manage that complexity is becoming a key challenge and opportunity in, in this space that, >>That's, I wrote a LinkedIn post today was comments about, you know, hey, enterprise is the new breed, the trend of SaaS companies moving our consumer comp consumer-like thinking into the enterprise has been happening for a long time, but now more than ever, you're seeing it the old way used to be solve complexity with more complexity and then lock the customer in. Now with open source, it's speed, simplification and integration, right? These are the new dynamic power dynamics for developers. Yeah. So as companies are starting to now deploy and look at Kubernetes, what are the things that need to be in place? Because you have some, I won't say technical debt, but maybe some shortcuts, some scripts here that make it look like infrastructure is code. People have done some things to simulate or or make infrastructure as code happen. Yes. But to do it at scale Yes. Is harder. What's your take on this? What's your >>View? It's hard because there's a per proliferation of methods, tools, technologies. So for example, today it's very common for DevOps and platform engineering tools, I mean, sorry, teams to have to deploy a large number of Kubernetes clusters, but then apply the applications and configurations on top of those clusters. And they're using a wide range of tools to do this, right? For example, maybe Ansible or Terraform or bash scripts to bring up the infrastructure and then the clusters. And then they may use a different set of tools such as Argo CD or other tools to apply configurations and applications on top of the clusters. So you have this sprawl of tools. You, you also have this sprawl of configurations and files because the more objects you're dealing with, the more resources you have to manage. And there's a risk of drift that people call that where, you know, you think you have things under control, but some people from various teams will make changes here and there and then before the end of the day systems break and you have no idea of tracking them. So I think there's real need to kind of unify, simplify, and try to solve these problems using a smaller, more unified set of tools and methodologies. And that's something that we try to do with this new project. Arlon. >>Yeah. So, so we're gonna get into Arlan in a second. I wanna get into the why Arlon. You guys announced that at our GoCon, which was put on here in Silicon Valley at the, at the by intu. They had their own little day over there at their headquarters. But before we get there, Vascar, your CEO came on and he talked about Super Cloud at our inaugural event. What's your definition of super cloud? If you had to kind of explain that to someone at a cocktail party or someone in the industry technical, how would you look at the super cloud trend that's emerging? It's become a thing. What's your, what would be your contribution to that definition or the narrative? >>Well, it's, it's, it's funny because I've actually heard of the term for the first time today, speaking to you earlier today. But I think based on what you said, I I already get kind of some of the, the gist and the, the main concepts. It seems like super cloud, the way I interpret that is, you know, clouds and infrastructure, programmable infrastructure, all of those things are becoming commodity in a way. And everyone's got their own flavor, but there's a real opportunity for people to solve real business problems by perhaps trying to abstract away, you know, all of those various implementations and then building better abstractions that are perhaps business or application specific to help companies and businesses solve real business problems. >>Yeah, I remember that's a great, great definition. I remember, not to date myself, but back in the old days, you know, IBM had a proprietary network operating system, so to deck for the mini computer vendors, deck net and SNA respectively. But T C P I P came out of the osi, the open systems interconnect and remember, ethernet beat token ring out. So not to get all nerdy for all the young kids out there, look, just look up token ring, you'll see, you've probably never heard of it. It's IBM's, you know, connection for the internet at the, the layer too is Amazon, the ethernet, right? So if T C P I P could be the Kubernetes and the container abstraction that made the industry completely change at that point in history. So at every major inflection point where there's been serious industry change and wealth creation and business value, there's been an abstraction Yes. Somewhere. Yes. What's your reaction to that? >>I think this is, I think a saying that's been heard many times in this industry and, and I forgot who originated it, but I think the saying goes like, there's no problem that can't be solved with another layer of indirection, right? And we've seen this over and over and over again where Amazon and its peers have inserted this layer that has simplified, you know, computing and, and infrastructure management. And I believe this trend is going to continue, right? The next set of problems are going to be solved with these insertions of additional abstraction layers. I think that that's really a, yeah, it's gonna continue. >>It's interesting. I just really wrote another post today on LinkedIn called the Silicon Wars AMD Stock is down arm has been on rise, we've remember pointing for many years now, that arm's gonna be hugely, it has become true. If you look at the success of the infrastructure as a service layer across the clouds, Azure, aws, Amazon's clearly way ahead of everybody. The stuff that they're doing with the silicon and the physics and the, the atoms, the pro, you know, this is where the innovation, they're going so deep and so strong at ISAs, the more that they get that gets come on, they have more performance. So if you're an app developer, wouldn't you want the best performance and you'd wanna have the best abstraction layer that gives you the most ability to do infrastructures, code or infrastructure for configuration, for provisioning, for managing services. And you're seeing that today with service MeSHs, a lot of action going on in the service mesh area in, in this community of co con, which will be a covering. So that brings up the whole what's next? You guys just announced our lawn at ar GoCon, which came out of Intuit. We've had Maria Teel at our super cloud event, She's a cto, you know, they're all in the cloud. So they contributed that project. Where did Arlon come from? What was the origination? What's the purpose? Why our lawn, why this announcement? Yeah, >>So the, the inception of the project, this was the result of us realizing that problem that we spoke about earlier, which is complexity, right? With all of this, these clouds, these infrastructure, all the variations around and you know, compute storage networks and the proliferation of tools we talked about the Ansibles and Terraforms and Kubernetes itself, you can think of that as another tool, right? We saw a need to solve that complexity problem, and especially for people and users who use Kubernetes at scale. So when you have, you know, hundreds of clusters, thousands of applications, thousands of users spread out over many, many locations, there, there needs to be a system that helps simplify that management, right? So that means fewer tools, more expressive ways of describing the state that you want and more consistency. And, and that's why, you know, we built AR lawn and we built it recognizing that many of these problems or sub problems have already been solved. So Arlon doesn't try to reinvent the wheel, it instead rests on the shoulders of several giants, right? So for example, Kubernetes is one building block, GI ops, and Argo CD is another one, which provides a very structured way of applying configuration. And then we have projects like cluster API and cross plane, which provide APIs for describing infrastructure. So arlon takes all of those building blocks and builds a thin layer, which gives users a very expressive way of defining configuration and desired state. So that's, that's kind of the inception of, And >>What's the benefit of that? What does that give the, what does that give the developer, the user, in this case, >>The developers, the, the platform engineer, team members, the DevOps engineers, they get a a ways to provision not just infrastructure and clusters, but also applications and configurations. They get a way, a system for provisioning, configuring, deploying, and doing life cycle management in a, in a much simpler way. Okay. Especially as I said, if you're dealing with a large number of applications. >>So it's like an operating fabric, if you will. Yes. For them. Okay, so let's get into what that means for up above and below the, the, this abstraction or thin layer below the infrastructure. We talked a lot about what's going on below that. Yeah. Above our workloads at the end of the day, and I talk to CXOs and IT folks that, that are now DevOps engineers. They care about the workloads and they want the infrastructure's code to work. They wanna spend their time getting in the weeds, figuring out what happened when someone made a push that that happened or something happened. They need observability and they need to, to know that it's working. That's right. And here's my workloads running effectively. So how do you guys look at the workload side of it? Cuz now you have multiple workloads on these fabric, right? >>So workloads, so Kubernetes has defined kind of a standard way to describe workloads and you can, you know, tell Kubernetes, I want to run this container this particular way, or you can use other projects that are in the Kubernetes cloud native ecosystem, like K native, where you can express your application in more at a higher level, right? But what's also happening is in addition to the workloads, DevOps and platform engineering teams, they need to very often deploy the applications with the clusters themselves. Clusters are becoming this commodity. It's, it's becoming this host for the application and it kind of comes bundled with it. In many cases it is like an appliance, right? So DevOps teams have to provision clusters at a really incredible rate and they need to tear them down. Clusters are becoming more, >>It's coming like an EC two instance, spin up a cluster. We've heard people used words like that. That's >>Right. And before arlon you kind of had to do all of that using a different set of tools as, as I explained. So with AR loan you can kind of express everything together. You can say I want a cluster with a health monitoring stack and a logging stack and this ingress controller and I want these applications and these security policies. You can describe all of that using something we call the profile. And then you can stamp out your app, your applications and your clusters and manage them in a very, So >>It's essentially standard, like creates a mechanism. Exactly. Standardized, declarative kind of configurations. And it's like a playbook, just deploy it. Now what there is between say a script like I'm, I have scripts, I can just automate scripts >>Or yes, this is where that declarative API and infrastructure as configuration comes in, right? Because scripts, yes you can automate scripts, but the order in which they run matters, right? They can break, things can break in the middle and, and sometimes you need to debug them. Whereas the declarative way is much more expressive and powerful. You just tell the system what you want and then the system kind of figures it out. And there are these things are controllers which will in the background reconcile all the state to converge towards your desire. It's a much more powerful, expressive and reliable way of getting things done. >>So infrastructure as configuration is built kind of on, it's a super set of infrastructures code because it's >>An evolution. >>You need edge's code, but then you can configure the code by just saying do it. You basically declaring saying Go, go do that. That's right. Okay, so, alright, so cloud native at scale, take me through your vision of what that means. Someone says, Hey, what does cloud native at scale mean? What's success look like? How does it roll out in the future as you, not future next couple years. I mean people are now starting to figure out, okay, it's not as easy as it sounds. Kubernetes has value. We're gonna hear this year at CubeCon a lot of this, what does cloud native at scale >>Mean? Yeah, there are different interpretations, but if you ask me, when people think of scale, they think of a large number of deployments, right? Geographies, many, you know, supporting thousands or tens or millions of, of users there, there's that aspect to scale. There's also an equally important a aspect of scale, which is also something that we try to address with Arran. And that is just complexity for the people operating this or configuring this, right? So in order to describe that desired state, and in order to perform things like maybe upgrades or updates on a very large scale, you want the humans behind that to be able to express and direct the system to do that in, in relatively simple terms, right? And so we want the tools and the abstractions and the mechanisms available to the user to be as powerful but as simple as possible. So there's, I think there's gonna be a number and there have been a number of CNCF and cloud native projects that are trying to attack that complexity problem as well. And Arlon kind of falls in in that >>Category. Okay, so I'll put you on the spot rogue, that CubeCon coming up and now this'll be shipping this segment series out before. What do you expect to see at this year? It's the big story this year. What's the, what's the most important thing happening? Is it in the open source community and also within a lot of the, the people jockeying for leadership. I know there's a lot of projects and still there's some white space in the overall systems map about the different areas get run time and there's ability in all these different areas. What's the, where's the action? Where, where's the smoke? Where's the fire? Where's the piece? Where's the tension? >>Yeah, so I think one thing that has been happening over the past couple of coupon and I expect to continue and, and that is the, the word on the street is Kubernetes is getting boring, right? Which is good, right? >>Boring means simple. >>Well, well >>Maybe, >>Yeah, >>Invisible, >>No drama, right? So, so the, the rate of change of the Kubernetes features and, and all that has slowed but in, in a, in a positive way. But there's still a general sentiment and feeling that there's just too much stuff. If you look at a stack necessary for hosting applications based on Kubernetes, there are just still too many moving parts, too many components, right? Too much complexity. I go, I keep going back to the complexity problem. So I expect Cube Con and all the vendors and the players and the startups and the people there to continue to focus on that complexity problem and introduce further simplifications to, to the stack. >>Yeah. Vic, you've had an storied career VMware over decades with them within 12 years with 14 years or something like that. Big number co-founder here a platform. I you's been around for a while at this game, man. We talked about OpenStack, that project we interviewed at one of their events. So OpenStack was the beginning of that, this new revolution. I remember the early days it was, it wasn't supposed to be an alternative to Amazon, but it was a way to do more cloud cloud native. I think we had a Cloud Aati team at that time. We would joke we, you know, about, about the dream. It's happening now, now at Platform nine. You guys have been doing this for a while. What's the, what are you most excited about as the chief architect? What did you guys double down on? What did you guys pivot from or two, did you do any pivots? Did you extend out certain areas? Cuz you guys are in a good position right now, a lot of DNA in Cloud native. What are you most excited about and what does Platform Nine bring to the table for customers and for people in the industry watching this? >>Yeah, so I think our mission really hasn't changed over the years, right? It's been always about taking complex open source software because open source software, it's powerful. It solves new problems, you know, every year and you have new things coming out all the time, right? Opens Stack was an example and then Kubernetes took the world by storm. But there's always that complexity of, you know, just configuring it, deploying it, running it, operating it. And our mission has always been that we will take all that complexity and just make it, you know, easy for users to consume regardless of the technology, right? So the successor to Kubernetes, you know, I don't have a crystal ball, but you know, you have some indications that people are coming up of new and simpler ways of running applications. There are many projects around there who knows what's coming next year or the year after that. But platform will a, platform nine will be there and we will, you know, take the innovations from the the community. We will contribute our own innovations and make all of those things very consumable to customers. >>Simpler, faster, cheaper. Exactly. Always a good business model technically to make that happen. Yes. Yeah, I think the, the reigning in the chaos is key, you know, Now we have now visibility into the scale. Final question before we depart this segment. What is at scale, how many clusters do you see that would be a watermark for an at scale conversation around an enterprise? Is it workloads we're looking at or, or clusters? How would you, Yeah, how would you describe that? When people try to squint through and evaluate what's a scale, what's the at scale kind of threshold? >>Yeah. And, and the number of clusters doesn't tell the whole story because clusters can be small in terms of the number of nodes or they can be large. But roughly speaking when we say, you know, large scale cluster deployments, we're talking about maybe hundreds, two thousands. >>Yeah. And final final question, what's the role of the hyperscalers? You got AWS continuing to do well, but they got their core ias, they got a PAs, they're not too too much putting a SaaS out there. They have some SaaS apps, but mostly it's the ecosystem. They have marketplaces doing, doing over $2 billion billions of transactions a year and, and it's just like, just sitting there. It hasn't really, they're now innovating on it, but that's gonna change ecosystems. What's the role the cloud play in the cloud need of its scale? >>The, the hyper squares? >>Yeah, yeah. A's Azure Google, >>You mean from a business perspective, they're, they have their own interests that, you know, that they're, they will keep catering to, they, they will continue to find ways to lock their users into their ecosystem of services and, and APIs. So I don't think that's gonna change, right? They're just gonna keep well, >>They got great performance. I mean, from a, from a hardware standpoint, yes. That's gonna be key, >>Right? Yes. I think the, the move from X 86 being the dominant way and platform to run workloads is changing, right? That, that, that, that, and I think the, the hyper skaters really want to be in the game in terms of, you know, the, the new risk and arm ecosystems, the platforms. >>Yeah. Not joking aside, Paul Morritz, when he was the CEO of VMware, when he took over once said, I remember our first year doing the cube. Oh the cloud is one big distributed computer. It's, it's hardware and you got software and you got middleware and he kinda over, well he's kind of tongue in cheek, but really you're talking about large compute and sets of services that is essentially a distributed computer. Yes, >>Exactly. >>It's, we're back in the same game. Thank you for coming on the segment. Appreciate your time. This is cloud native at scale special presentation with Platform nine. Really unpacking super cloud Arlon open source and how to run large scale applications on the cloud, cloud native develop for developers. And John Furrier with the cube. Thanks for Washington. We'll stay tuned for another great segment coming right up. Hey, welcome back everyone to Super Cloud 22. I'm John Fur, host of the Cuba here all day talking about the future of cloud. Where's it all going? Making it super multi-cloud is around the corner and public cloud is winning. Got the private cloud on premise and Edge. Got a great guest here, Vascar Gorde, CEO of Platform nine, just on the panel on Kubernetes. An enabler blocker. Welcome back. Great to have you on. >>Good to see you >>Again. So Kubernetes is a blocker enabler by, with a question mark I put on on there. Panel was really to discuss the role of Kubernetes. Now great conversation operations is impacted. What's just thing about what you guys are doing at Platform nine? Is your role there as CEO and the company's position, kind of like the world spun into the direction of Platform nine while you're at the helm, right? >>Absolutely. In fact, things are moving very well and since they came to us, it was an insight to call ourselves the platform company eight years ago, right? So absolutely whether you are doing it in public clouds or private clouds, you know, the application world is moving very fast in trying to become digital and cloud native. There are many options for you to run the infrastructure. The biggest blocking factor now is having a unified platform. And that's what where we come into >>Patrick, we were talking before we came on stage here about your background and we were kind of talking about the glory days in 2000, 2001 when the first ASPs application service providers came out. Kind of a SaaS vibe, but that was kind of all kind of cloud-like >>It wasn't, >>And web services started then too. So you saw that whole growth. Now, fast forward 20 years later, 22 years later, where we are now, when you look back then to here and all the different cycles, >>In fact, you know, as we were talking offline, I was in one of those ASPs in the year 2000 where it was a novel concept of saying we are providing a software and a capability as a service, right? You sign up and start using it. I think a lot has changed since then. The tooling, the tools, the technology has really skyrocketed. The app development environment has really taken off exceptionally well. There are many, many choices of infrastructure now, right? So I think things are in a way the same but also extremely different. But more importantly now for any company, regardless of size, to be a digital native, to become a digital company is extremely mission critical. It's no longer a nice to have everybody's in the journey somewhere. >>Everyone is going digital transformation here. Even on a so-called downturn recession that's upcoming inflations sea year. It's interesting. This is the first downturn, the history of the world where the hyperscale clouds have been pumping on all cylinders as an economic input. And if you look at the tech trends, GDPs down, but not tech. Nope. Cause pandemic showed everyone digital transformation is here and more spend and more growth is coming even in, in tech. So this is a unique factor which proves that that digital transformation's happening and company, every company will need a super cloud. >>Everyone, every company, regardless of size, regardless of location, has to become modernize their infrastructure. And modernizing infrastructure is not just some, you know, new servers and new application tools. It's your approach, how you're serving your customers, how you're bringing agility in your organization. I think that is becoming a necessity for every enterprise to survive. >>I wanna get your thoughts on Super Cloud because one of the things Dave Alon and I want to do with Super Cloud and calling it that was we, I, I personally, and I know Dave as well, he can, I'll speak from, he can speak for himself. We didn't like multi-cloud. I mean not because Amazon said don't call things multi-cloud, it just didn't feel right. I mean everyone has multiple clouds by default. If you're running productivity software, you have Azure and Office 365. But it wasn't truly distributed. It wasn't truly decentralized, it wasn't truly cloud enabled. It didn't, it felt like they're not ready for a market yet. Yet public clouds booming on premise. Private cloud and Edge is much more on, you know, more, More dynamic, more unreal. >>Yeah. I think the reason why we think Super cloud is a better term than multi-cloud. Multi-cloud are more than one cloud, but they're disconnected. Okay, you have a productivity cloud, you have a Salesforce cloud, you may have, everyone has an internal cloud, right? So, but they're not connected. So you can say, okay, it's more than one cloud. So it's, you know, multi-cloud. But super cloud is where you are actually trying to look at this holistically. Whether it is on-prem, whether it is public, whether it's at the edge, it's a store at the branch. You are looking at this as one unit. And that's where we see the term super cloud is more applicable because what are the qualities that you require if you're in a super cloud, right? You need choice of infrastructure, you need, but at the same time you need a single pan or a single platform for you to build your innovations on, regardless of which cloud you're doing it on, right? So I think Super Cloud is actually a more tightly integrated orchestrated management philosophy we think. >>So let's get into some of the super cloud type trends that we've been reporting on. Again, the purpose of this event is as a pilot to get the conversations flowing with, with the influencers like yourselves who are running companies and building products and the builders, Amazon and Azure are doing extremely well. Google's coming up in third Cloudworks in public cloud. We see the use cases on premises use cases. Kubernetes has been an interesting phenomenon because it's become from the developer side a little bit, but a lot of ops people love Kubernetes. It's really more of an ops thing. You mentioned OpenStack earlier. Kubernetes kind of came out of that open stack. We need an orchestration. And then containers had a good shot with, with Docker. They re pivoted the company. Now they're all in an open source. So you got containers booming and Kubernetes as a new layer there. >>What's, >>What's the take on that? What does that really mean? Is that a new defacto enabler? It >>Is here. It's for here for sure. Every enterprise somewhere in the journey is going on. And you know, most companies are, 70 plus percent of them have 1, 2, 3 container based, Kubernetes based applications now being rolled out. So it's very much here. It is in production at scale by many customers. And it, the beauty of it is yes, open source, but the biggest gating factor is the skill set. And that's where we have a phenomenal engineering team, right? So it's, it's one thing to buy a tool and >>Just be clear, you're a managed service for Kubernetes. >>We provide, provide a software platform for cloud acceleration as a service and it can run anywhere. It can run in public private. We have customers who do it in truly multi-cloud environments. It runs on the edge, it runs at this in stores about thousands of stores in a retailer. So we provide that and also for specific segments where data sovereignty and data residency are key regulatory reasons. We also un on-prem as an air gap version. Can >>You give an example on how you guys are deploying your platform to enable a super cloud experience for your customer? Right. >>So I'll give you two different examples. One is a very large networking company, public networking company. They have hundreds of products, hundreds of r and d teams that are building different, different products. And if you look at few years back, each one was doing it on a different platforms, but they really needed to bring the agility. And they worked with us now over three years where we are their build test dev pro platform where all their products are built on, right? And it has dramatically increased their agility to release new products. Number two, it actually is a light out operation. In fact, the customer says like, like the Maytag service person, cuz we provide it as a service and it barely takes one or two people to maintain it for them. >>So it's kinda like an SRE vibe. One person managing a >>Large 4,000 engineers building infrastructure >>On their tools, >>Whatever they want on their tools. They're using whatever app development tools they use, but they use our platform. What >>Benefits are they seeing? Are they seeing speed? >>Speed, definitely. Okay. Definitely they're speeding. Speed uniformity because now they're building able to build, so their customers who are using product A and product B are seeing a similar set of tools that are being used. >>So a big problem that's coming outta this super cloud event that we're, we're seeing and we heard it all here, ops and security teams. Cause they're kind of part of one thing, but option security specifically need to catch up speed wise. Are you delivering that value to ops and security? Right? >>So we, we work with ops and security teams and infrastructure teams and we layer on top of that. We have like a platform team. If you think about it, depending on where you have data centers, where you have infrastructure, you have multiple teams, okay, but you need a unified platform. Who's your buyer? Our buyer is usually, you know, the product divisions of companies that are looking at or the CTO would be a buyer for us functionally cio definitely. So it it's, it's somewhere in the DevOps to infrastructure. But the ideal one we are beginning to see now many large corporations are really looking at it as a platform and saying we have a platform group on which any app can be developed and it is run on any infrastructure. So the platform engineering teams. So >>You working two sides to that coin. You've got the dev side and then >>And then infrastructure >>Side. >>Okay. Another customer that I give an example, which I would say is kind of the edge of the store. So they have thousands of stores. Retail, retail, you know food retailer, right? They have thousands of stores that are on the globe, 50,000, 60,000. And they really want to enhance the customer experience that happens when you either order the product or go into the store and pick up your product or buy or browse or sit there. They have applications that were written in the nineties and then they have very modern AIML applications today. They want something that will not have to send an IT person to install a rack in the store or they can't move everything to the cloud because the store operations has to be local. The menu changes based on it's classic edge. It's classic edge, yeah. Right? They can't send it people to go install rack access servers then they can't sell software people to go install the software and any change you wanna put through that, you know, truck roll. So they've been working with us where all they do is they ship, depending on the size of the store, one or two or three little servers with instructions that >>You, you say little servers like how big one like a box, like a small little box, >>Right? And all the person in the store has to do like what you and I do at home and we get a, you know, a router is connect the power, connect the internet and turn the switch on. And from there we pick it up. >>Yep. >>We provide the operating system, everything and then the applications are put on it. And so that dramatically brings the velocity for them. They manage thousands of >>Them. True plug and play >>Two, plug and play thousands of stores. They manage it centrally. We do it for them, right? So, so that's another example where on the edge then we have some customers who have both a large private presence and one of the public clouds. Okay. But they want to have the same platform layer of orchestration and management that they can use regardless of the locations. >>So you guys got some success. Congratulations. Got some traction there. It's awesome. The question I want to ask you is that's come up is what is truly cloud native? Cuz there's lift and shift of the cloud >>That's not cloud native. >>Then there's cloud native. Cloud native seems to be the driver for the super cloud. How do you talk to customers? How do you explain when someone says what's cloud native, what isn't cloud native? >>Right. Look, I think first of all, the best place to look at what is the definition and what are the attributes and characteristics of what is truly a cloud native, is CNC foundation. And I think it's very well documented, very well. >>Tucan, of course Detroit's >>Coming so, so it's already there, right? So we follow that very closely, right? I think just lifting and shifting your 20 year old application onto a data center somewhere is not cloud native. Okay? You can't put to cloud, not you have to rewrite and redevelop your application in business logic using modern tools. Hopefully more open source and, and I think that's what Cloudnative is and we are seeing a lot of our customers in that journey. Now everybody wants to be cloudnative, but it's not that easy, okay? Because it's, I think it's first of all, skill set is very important. Uniformity of tools that there's so many tools there. Thousands and thousands of tools you could spend your time figuring out which tool to use. Okay? So I think the complexity is there, but the business benefits of agility and uniformity and customer experience are truly being done. >>And I'll give you an example, I don't know how clear native they are, right? And they're not a customer of ours, but you order pizzas, you do, right? If you just watch the pizza industry, how dominoes actually increase their share and mind share and wallet share was not because they were making better pizzas or not, I don't know anything about that, but the whole experience of how you order, how you watch what's happening, how it's delivered. There were a pioneer in it. To me, those are the kinds of customer experiences that cloud native can provide. >>Being agility and having that flow to the application changes what the expectations >>Are >>For the customer. Customer, >>The customer's expectations change, right? Once you get used to a better customer experience, you learn. >>That's to wrap it up. I wanna just get your perspective again. One of the benefits of chatting with you here and having you part of the Super Cloud 22 is you've seen many cycles, you have a lot of insights. I want to ask you, given your career where you've been and what you've done and now let's CEO platform nine, how would you compare what's happening now with other inflection points in the industry? And you've been, again, you've been an entrepreneur, you sold your company to Oracle, you've been seeing the big companies, you've seen the different waves. What's going on right now put into context this moment in time around Super Cloud. >>Sure. I think as you said, a lot of battles. CARSs being been in an asb, being in a real time software company, being in large enterprise software houses and a transformation. I've been on the app side, I did the infrastructure right and then tried to build our own platforms. I've gone through all of this myself with lot of lessons learned in there. I think this is an event which is happening now for companies to go through to become cloud native and digitalize. If I were to look back and look at some parallels of the tsunami that's going on is a couple of paddles come to me. One is, think of it, which was forced to honors like y2k. Everybody around the world had to have a plan, a strategy, and an execution for y2k. I would say the next big thing was e-commerce. I think e-commerce has been pervasive right across all industries. >>And disruptive. >>And disruptive, extremely disruptive. If you did not adapt and adapt and accelerate your e-commerce initiative, you were, it was an existence question. Yeah. I think we are at that pivotal moment now in companies trying to become digital and cloudnative. You know, that is what I see >>Happening there. I think that that e-commerce is interesting and I think just to riff with you on that is that it's disrupting and refactoring the business models. I think that is something that's coming out of this is that it's not just completely changing the gain, it's just changing how you operate, >>How you think and how you operate. See, if you think about the early days of e-commerce, just putting up a shopping cart that made you an e-commerce or e retailer or an e e e customer, right? Or so. I think it's the same thing now is I think this is a fundamental shift on how you're thinking about your business. How are you gonna operate? How are you gonna service your customers? I think it requires that just lift and shift is not gonna work. >>Nascar, thank you for coming on, spending the time to come in and share with our community and being part of Super Cloud 22. We really appreciate, we're gonna keep this open. We're gonna keep this conversation going even after the event, to open up and look at the structural changes happening now and continue to look at it in the open in the community. And we're gonna keep this going for, for a long, long time as we get answers to the problems that customers are looking for with cloud cloud computing. I'm Sean Fur with Super Cloud 22 in the Cube. Thanks for watching. >>Thank you. Thank you. >>Hello and welcome back. This is the end of our program, our special presentation with Platform nine on cloud native at scale, enabling the super cloud. We're continuing the theme here. You heard the interviews Super Cloud and its challenges, new opportunities around solutions around like Platform nine and others with Arlon. This is really about the edge situations on the internet and managing the edge multiple regions, avoiding vendor lock in. This is what this new super cloud is all about. The business consequences we heard and and the wide ranging conversations around what it means for open source and the complexity problem all being solved. I hope you enjoyed this program. There's a lot of moving pieces and things to configure with cloud native install, all making it easier for you here with Super Cloud and of course Platform nine contributing to that. Thank you for watching.

Published Date : Oct 19 2022

SUMMARY :

So enjoy the program, see you soon. a lot different, but kind of the same as the first generation. And so you gotta rougher and it kind of coming together, but you also got this idea of regions, So I think, you know, in in the context of this, the, Can you scope the scale of the problem? And I think, you know, I I like to call it, you know, And that is just, you know, one example of an issue that happens. you know, you see some, you know, some experimentation. which is, you know, you have your perfectly written code that is operating just fine on your And so as you give that change to then run at your production edge location, And you guys have a solution you're launching, Can you share what So what alarm lets you do in a in terms of the chaos you guys are reigning in. And if you look at the logo we've designed, So keeping it smooth, the assembly on things are flowing. Because developers, you know, there is, the developers are responsible for one picture of So the DevOps is the cloud native developer. And so online addresses that problem at the heart of it, and it does that using So I'm assuming you have that thought through, can you share open source and commercial relationship? products starting all the way with fi, which was a serverless product, you know, that we had built to buy, but also actually kind of date the application, if you will. I think one is just, you know, this, this, this cloud native space is so vast I have to ask you now, let's get into what's in it for the customer. And so, and there's multiple, you know, enterprises that we talk to, shared that this is a major challenge we have today because we have, you know, I'm an enterprise, I got tight, you know, I love the open source trying to It's created by folks that are as part of Intuit team now, you know, And the customer said, If you had it today, I would've purchased it. So next question is, what is the solution to the customer? So I think, you know, one of the core tenets of Platform nine has always been that And now they have management challenges. Especially operationalizing the clusters, whether they want to kind of reset everything and remove things around and reconfigure That's right. And alon by the way, also helps in that direction, but you also need I mean, what's the impact if you do all those things, as you mentioned, what's the impact of the apps? And so this really gives them, you know, the right tooling for But this is a key point, and I have to ask you because if this Arlo solution of challenges, and those are the pain points, which is, you know, if you're looking to reduce your, not where it used to be supporting the business, you know, that, you know, that the, the technology that's, you know, that's gonna drive your top line is If all the things happen the way we want 'em to happen, The magic wand, the magic dust, he's running that at a nimble, nimble team size of at the most, Taking care of, and the CIO doesn't exist. Thank you for your time. Thanks for having of Platform nine b. Great to see you Cube alumni. And now the Kubernetes layer that we've been working on for years is Exactly. you know, the new Arlon, our R lawn you guys just launched, you know, do step A, B, C, and D instead with Kubernetes, I mean now with open source, so popular, you don't have to have to write a lot of code. you know, the emergence of systems and layers to help you manage that complexity is becoming That's, I wrote a LinkedIn post today was comments about, you know, hey, enterprise is the new breed, the trend of SaaS you know, you think you have things under control, but some people from various teams will make changes here in the industry technical, how would you look at the super cloud trend that's emerging? the way I interpret that is, you know, clouds and infrastructure, It's IBM's, you know, connection for the internet at the, this layer that has simplified, you know, computing and, the physics and the, the atoms, the pro, you know, this is where the innovation, all the variations around and you know, compute storage networks the DevOps engineers, they get a a ways to So how do you guys look at the workload side of it? like K native, where you can express your application in more at a higher level, It's coming like an EC two instance, spin up a cluster. And then you can stamp out your app, your applications and your clusters and manage them And it's like a playbook, just deploy it. You just tell the system what you want and then You need edge's code, but then you can configure the code by just saying do it. And that is just complexity for the people operating this or configuring this, What do you expect to see at this year? If you look at a stack necessary for hosting We would joke we, you know, about, about the dream. So the successor to Kubernetes, you know, I don't Yeah, I think the, the reigning in the chaos is key, you know, Now we have now visibility into But roughly speaking when we say, you know, They have some SaaS apps, but mostly it's the ecosystem. you know, that they're, they will keep catering to, they, they will continue to find I mean, from a, from a hardware standpoint, yes. terms of, you know, the, the new risk and arm ecosystems, It's, it's hardware and you got software and you got middleware and he kinda over, Great to have you on. What's just thing about what you guys are doing at Platform nine? clouds, you know, the application world is moving very fast in trying to Patrick, we were talking before we came on stage here about your background and we were kind of talking about the glory days So you saw that whole growth. In fact, you know, as we were talking offline, I was in one of those And if you look at the tech trends, GDPs down, but not tech. some, you know, new servers and new application tools. you know, more, More dynamic, more unreal. So it's, you know, multi-cloud. the purpose of this event is as a pilot to get the conversations flowing with, with the influencers like yourselves And you know, most companies are, 70 plus percent of them have 1, 2, 3 container It runs on the edge, You give an example on how you guys are deploying your platform to enable a super And if you look at few years back, each one was doing So it's kinda like an SRE vibe. Whatever they want on their tools. to build, so their customers who are using product A and product B are seeing a similar set Are you delivering that value to ops and security? Our buyer is usually, you know, the product divisions of companies You've got the dev side and then enhance the customer experience that happens when you either order the product or go into And all the person in the store has to do like And so that dramatically brings the velocity for them. of the public clouds. So you guys got some success. How do you explain when someone says what's cloud native, what isn't cloud native? is the definition and what are the attributes and characteristics of what is truly a cloud native, Thousands and thousands of tools you could spend your time figuring I don't know anything about that, but the whole experience of how you order, For the customer. Once you get used to a better customer experience, One of the benefits of chatting with you here and been on the app side, I did the infrastructure right and then tried to build our If you did not adapt and adapt and accelerate I think that that e-commerce is interesting and I think just to riff with you on that is that it's disrupting How are you gonna service your Nascar, thank you for coming on, spending the time to come in and share with our community and being part of Thank you. I hope you enjoyed this program.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
VascarPERSON

0.99+

Mattor MakkiPERSON

0.99+

DavePERSON

0.99+

AmazonORGANIZATION

0.99+

Paul MorritzPERSON

0.99+

Sean FurPERSON

0.99+

IBMORGANIZATION

0.99+

PatrickPERSON

0.99+

Vascar GordePERSON

0.99+

Adrian KaroPERSON

0.99+

John ForryPERSON

0.99+

John FurryPERSON

0.99+

John FurPERSON

0.99+

oneQUANTITY

0.99+

Silicon ValleyLOCATION

0.99+

50,000QUANTITY

0.99+

Dave AlonPERSON

0.99+

2000DATE

0.99+

Maria TeelPERSON

0.99+

14 yearsQUANTITY

0.99+

thousandsQUANTITY

0.99+

OracleORGANIZATION

0.99+

tensQUANTITY

0.99+

millionsQUANTITY

0.99+

GortPERSON

0.99+

AWSORGANIZATION

0.99+

twoQUANTITY

0.99+

NascarPERSON

0.99+

2001DATE

0.99+

Palo Alto, CaliforniaLOCATION

0.99+

OneQUANTITY

0.99+

4,000 engineersQUANTITY

0.99+

one siteQUANTITY

0.99+

TwoQUANTITY

0.99+

second partQUANTITY

0.99+

VMwareORGANIZATION

0.99+

two peopleQUANTITY

0.99+

ArlonORGANIZATION

0.99+

hundredsQUANTITY

0.99+

Office 365TITLE

0.99+

MakowskiPERSON

0.99+

GoogleORGANIZATION

0.99+

todayDATE

0.99+

ArloORGANIZATION

0.99+

two sidesQUANTITY

0.99+

John FurrierPERSON

0.99+

two partsQUANTITY

0.99+

LinkedInORGANIZATION

0.99+

bothQUANTITY

0.99+

next yearDATE

0.99+

first generationQUANTITY

0.99+

22 years laterDATE

0.99+

1QUANTITY

0.99+

first downturnQUANTITY

0.99+

Platform nineORGANIZATION

0.99+

one unitQUANTITY

0.99+

two thingsQUANTITY

0.99+

firstQUANTITY

0.98+

one flavorQUANTITY

0.98+

more than one cloudQUANTITY

0.98+

two thousandsQUANTITY

0.98+

One personQUANTITY

0.98+

BickleyPERSON

0.98+

BacarPERSON

0.98+

12 yearsQUANTITY

0.98+

first timeQUANTITY

0.98+

GoConEVENT

0.98+

each siteQUANTITY

0.98+

thousands of storesQUANTITY

0.98+

AzureTITLE

0.98+

20 years laterDATE

0.98+

Madhura Maskasky, Platform9 Cloudnative at Scale


 

>>Hello everyone. Welcome to the cube here in Palo Alto, California for a special program on cloud native at scale, enabling next generation cloud or super cloud for modern application cloud native developers. I'm John Forer, host of the Cube. My pleasure to have here me Makoski, co-founder and VP of product at Platform nine. Thanks for coming in today for this Cloudnative at scale conversation. Thank >>You for having >>Me. So Cloudnative at scale, something that we're talking about because we're seeing the, the next level of mainstream success of containers Kubernetes and cloud native develop, basically DevOps in the C I C D pipeline. It's changing the landscape of infrastructure as code, it's accelerating the value proposition and the super cloud as we call it, has been getting a lot of traction because this next generation cloud is looking a lot different, but kind of the same as the first generation. What's your view on super cloud as it fits to cloud native as scales up? >>Yeah. You know, I think what's interesting, and I think the reason why Super Cloud is a really good and a really fit term for this, and I think, I know my CEO was chatting with you as well, and he was mentioning this as well, but I think there needs to be a different term than just multi-cloud or cloud. And the reason is because as cloud native and cloud deployments have scaled, I think we've reached a point now where instead of having the traditional data center style model where you have a few large distributors of infrastructure and workload at a few locations, I think the model is kind of flipped around, right? Where you have a large number of micro sites. These micro sites could be your public cloud deployment, your private on-prem infrastructure deployments, or it could be your edge environment, right? And every single enterprise, every single industry is moving that direction. And so you gotta rougher that with a terminology that, that, that indicates the scale and complexity of it. And so I think super cloud is a, is an appropriate term >>For that. So you brought a couple things I want to dig into. You mentioned edge nodes. We're seeing not only edge nodes being the next kind of area of innovation, mainly because it's just popping up everywhere. And that's just the beginning. We even know what's around the corner. You got buildings, you got I O D OT and IT kind of coming together. But you also got this idea of regions, global infrastructure is big part of it. I just saw some news around CloudFlare shutting down a site here. There's policies being made at scale. These new challenges there, can you share because you gotta have edge. So hybrid cloud is a winning formula. Everybody knows that it's a steady state. Yeah. But across multiple clouds brings in this new un engineered area, yet it hasn't been done yet. Spanning clouds. People say they're doing it, but you start to see the toe in the water, it's happening, it's gonna happen. It's only gonna get accelerated with the edge and beyond globally. So I have to ask you, what is the technical challenges in doing this? Because there's some business consequences as well, but there are technical challenges. Can you share your view on what the technical challenges are for the super cloud or across multiple edges and regions? >>Yeah, absolutely. So I think, you know, in in the context of this, the, this, this term of super cloud, I think it's sometimes easier to visualize things in terms of two access, right? I think on one end you can think of the scale in terms of just pure number of nodes that you have, deploy number of clusters in the Kubernetes space. And then on the other access you would have your distribution factor, right? Which is, do you have these tens of thousands of notes in one site or do you have them distributed across tens of thousands of sites with one node at each site? Right? And if you have just one flavor of this, there is enough complexity, but potentially manageable. But when you are expanding on both these access, you really get to a point where that scale really needs some well thought out, well structured solutions to address it, right? A combination of homegrown tooling along with your, you know, favorite distribution of Kubernetes is not a strategy that can help you in this environment. It may help you when you have one of this or when you, when your scale is not at the level, >>Can you scope the complexity? Because I mean, I hear a lot of moving parts going on there, the technology's also getting better. We we're seeing cloud data become successful. There's a lot to configure, there's a lot to install. Can you scope the scale of the problem? Because we're about at scale Yep. Challenges here. Yeah, >>Absolutely. And I think, you know, I I like to call it, you know, the, the problem that the scale creates, you know, there's various problems, but I think one, one problem, one way to think about it is, is you know, it works on my cluster problem, right? So, you know, I come from engineering background and there's a, you know, there's a famous saying between engineers and QA and the support folks, right? Which is, it works on my laptop, which is I tested this change, everything was fantastic, it worked flawlessly on my machine, on production, it's not working. And the exact same problem now happens in these distributed environments, but at massive scale, right? Which is that, you know, developers test their applications, et cetera within the sanctity of their sandbox environments. But once you expose that change in the wild world of your production deployment, right? >>And the production deployment could be going at the radio cell tower at the edge location where a cluster is running there, or it could be sending, you know, these applications and having them run at my customer site where they might not have configured that cluster exactly the same way as I configured it, or they configured the cluster, right? But maybe they didn't deploy the security policies or they didn't deploy the other infrastructure plugins that my app relies on. All of these various factors add their own layer of complexity. And there really isn't a simple way to solve that today. And that is just, you know, one example of an issue that happens. I think another, you know, whole new ball game of issues come in the context of security, right? Because when you are deploying applications at scale in a distributed manner, you gotta make sure someone's job is on the line to ensure that the right security policies are enforced regardless of that scale factor. So I think that's another example of problems that >>Occur. Okay. So I have to ask about scale because there are a lot of multiple steps involved when you see the success cloud native, you know, you see some, you know, some experimentation. They set up a cluster, say it's containers and Kubernetes, and then you say, Okay, we got this, we can figure it. And then they do it again and again, they call it day two. Some people call it day one, day two operation, whatever you call it. Once you get past the first initial thing, then you gotta scale it. Then you're seeing security breaches, you're seeing configuration errors. This seems to be where the hotspot is. And when companies transition from, I got this to, Oh no, it's harder than I thought at scale. Can you share your reaction to that and how you see this playing out? >>Yeah, so, you know, I think it's interesting. There's multiple problems that occur when, you know, the, the two factors of scale is we talked about start expanding. I think one of them is what I like to call the, you know, it, it works fine on my cluster problem, which is back in, when I was a developer, we used to call this, it works on my laptop problem, which is, you know, you have your perfectly written code that is operating just fine on your machine, your sandbox environment. But the moment it runs production, it comes back with p zeros and POS from support teams, et cetera. And those issues can be really difficult to triage us, right? And so in the Kubernetes environment, this problem kind of multi folds, it goes, you know, escalate to a higher degree because you have your sandbox developer environments, they have their clusters and things work perfectly fine in those clusters because these clusters are typically handcrafted or a combination of some scripting and handcrafting. >>And so as you give that change to then run at your production edge location, like say your radio cell tower site or you hand it over to a customer to run it on their cluster, they might not have not have configured that cluster exactly how you did, or they might not have configured some of the infrastructure plugins. And so the things don't work. And when things don't work, triaging them becomes like ishly hard, right? It's just one of the examples of the problem. Another whole bucket of issues is security, which is, is you have these distributed clusters at scale, you gotta ensure someone's job is on the line to make sure that the security policies are configured >>Properly. So this is a huge problem. I love that comment. That's not not happening on my system. It's the classic, you know, debugging mentality. Yeah. But at scale it's hard to do that with error prone. I can see that being a problem. And you guys have a solution you're launching. Can you share what Arlon is this new product? What is it all about? Talk about this new introduction. >>Yeah, absolutely. I'm very, very excited. You know, it's one of the projects that we've been working on for some time now because we are very passionate about this problem and just solving problems at scale in on-prem or at in the cloud or at edge environments. And what arwan is, it's an open source project and it is a tool, it's a Kubernetes native tool for complete end-to-end management of not just your clusters, but your clusters. All of the infrastructure that goes within and along the sites of those clusters, security policies, your middleware plugins, and finally your applications. So what Arlan lets you do in a nutshell is in a declarative way, it lets you handle the configuration and management of all of these components in at scale. >>So what's the elevator pitch simply put for what dissolves in, in terms of the chaos you guys are reigning in, what's the, what's the bumper sticker? Yeah, >>What would it do? There's a perfect analogy that I love to reference in this context, which is think of your assembly line, you know, in a traditional, let's say, you know, an auto manufacturing factory or et cetera, and the level of efficiency at scale that that assembly line brings, right? Lon. And if you look at the logo we've designed, it's this funny little robot, and it's because when we think of lon, we think of these enterprise large scale environments, you know, sprawling at scale creating chaos because there isn't necessarily a well thought through, well-structured solution that's similar to an assembly line, which is taking each component, you know, addressing them, manufacturing, processing them in a standardized way, then handing to the next stage where again, it gets, you know, processed in a standardized way. And that's what Alon really does. That's like the deliver pitch. If you have problems of scale of managing your infrastructure, you know, that is distributed. Arlon brings the assembly line level of efficiency and consistency for those. >>So keeping it smooth, the assembly line, things are flowing. See c i CD pipelining. Exactly. So that's what you're trying to simplify that ops piece for the developer. I mean, it's not really ops, it's their ops is coding. >>Yeah. Not just developer, the ops, the operations folks as well, right? Because developers, you know, there is, developers are responsible for one picture of that layer, which is my apps, and then maybe that middleware of applications that they interface with, but then they hand it over to someone else who's then responsible to ensure that these apps are secured properly, that they are logging, logs are being collected properly, monitoring and observability is integrated. And so it solves problems for both those teams. >>Yeah, it's dev op, So the DevOps is the cloud needed developer, The kins have to kind of set policies. Is that where the declarative piece comes in? Is that why that's important? >>Absolutely. Yeah. And, and, and, and you know, es really in introduced or elevated this declarative management, right? Because you know, Kubernetes clusters are Yeah. Or your, yeah, you know, specifications of components that go in Kubernetes are defined in a declarative way. And Kubernetes always keeps that state consistent with your defined state. But when you go outside of that world of a single cluster, and when you actually talk about defining the clusters or defining everything that's around it, there really isn't a solution that does that today. And so Arlan addresses that problem at the heart of it, and it does that using existing open source, well known solutions. >>Medo, I want to get into the benefits, what's in it for me as the customer developer, but I want to finish this out real quick and get your thoughts. You mentioned open source. Why open source? What's the, what's the current state of the product? You run the product group over there, Platform nine, is it open source? And you guys have a product that's commercial. Can you explain the open source dynamic? And first of all, why open source? Yeah. And what is the consumption? I mean, open source is great, People want open source, they can download it, look up the code, but maybe wanna buy the commercial. So I'm assuming you have that thought through, can you share that open source and commercial relationship? >>Yeah, I think, you know, starting with why open source? I think it's, you know, we as a company, we have, you know, one of the things that's absolutely critical to us is that we take mainstream open source technologies components and then we, you know, make them available to our customers at scale through either a SAS model or onpro model, right? But, so as we are a company or startup or a company that benefits, you know, in a massive way by this open source economy, it's only right, I think in my mind that we do our part of the duty, right? And contribute back to the community that feeds us. And so, you know, we have always held that strongly as one of our principles. And we have, you know, created and built independent products starting all the way with fi, which was a serverless product, you know, that we had built to various other, you know, examples that I can give. But that's one of the main reasons why open source and also open source because we want the community to really firsthand engage with us on this problem, which is very difficult to achieve if your product is behind a wall, you know, behind, behind a blog box. >>Well, and that's, that's what the developers want too. And what we're seeing in reporting with Super Cloud is the new model of consumption is I wanna look at the code and see what's in there. That's right. And then also, if I want to use it, I, I'll do it. Great. That's open source, that's the value. But then at the end of the day, if I wanna move fast, that's when people buy in. So it's a new kind of freemium, I guess, business model. I guess that's the way it is. Well, but that's, that's the benefit. Open source. This is why standards and open source growing so fast, you have that confluence of, you know, a way fors to try before they buy, but also actually kind of date the application, if you will. We, you know, Adrian Karo uses the dating metaphor, you know, Hey, you know, I wanna check it out first before I get married. Right? And that's what open source, So this is the new, this is how people are selling. This is not just open source, this is how companies are selling. >>Absolutely. Yeah. Yeah. You know, I think in, you know, two things, I think one is just, you know, this, this, this cloud native space is so vast that if you, if you're building a close flow solution, sometimes there's also a risk that it may not apply to every single enterprise's use cases. And so having it open source gives them an opportunity to extend it, expand it, to make it proper to their use case if they choose to do so, right? But at the same time, what's also critical to us is we are able to provide a supported version of it with an SLA that we, you know, that's backed by us, a sa hosted version of it as well, for those customers who choose to go that route, you know, once they have used the open source version and loved it and want to take it at scale and in production and need, need, need a partner to collaborate with, who can, you know, support them for that production environment. I >>Have to ask you now, let's get into what's in it for the customer. I'm a customer, why should I be enthused about Arlo? What's in it for me? You know? Cause if I'm not enthused about it, I'm not gonna be confident and it's gonna be hard for me to get behind this. Can you share your enthusiastic view of, you know, why I should be enthused about Arlo if I'm a >>Customer? Yeah, absolutely. And so, and there's multiple, you know, enterprises that we talk to, many of them, you know, our customers, where this is a very kind of typical story that you will hear, which is we have, you know, a Kubernetes distribution. It could be on premise, it could be public clouds, native Kubernetes, and then we have our C I C D pipelines that are automating the deployment of applications, et cetera. And then there's this gray zone. And the gray zone is well before you can you, your CS CD pipelines can deploy the apps. Somebody needs to do all of that groundwork of, you know, defining those clusters and yeah. You know, properly configuring them. And as these things, these things start by being done hand grown. And then as the, as you scale, what typically enterprises would do today is they will have their home homegrown DIY solutions for this. >>I mean, the number of folks that I talk to that have built Terra from automation, and then, you know, some of those key developers leave. So it's a typical open source or typical, you know, DIY challenge. And the reason that they're writing it themselves is not because they want to. I mean, of course technology is always interesting to everybody, but it's because they can't find a solution that's out there that perfectly fits the problem. And so that's that pitch. I think spic would be delighted. The folks that we've spoken, you know, spoken with, have been absolutely excited and have, you know, shared that this is a major challenge we have today because we have, you know, few hundreds of clusters on s Amazon and we wanna scale them to few thousands, but we don't think we are ready to do that. And this will give us the ability. >>Yeah, I think people are scared. Not, I won't say scare, that's a a bad word. Maybe I should say that they feel nervous because, you know, at scale small mistakes can become large mistakes. This is something that is concerning to enterprises and, and I think this is gonna come up at Cuban this year where enterprises are gonna say, Okay, I need to see SLAs. I wanna see track record, I wanna see other companies that have used it. Yeah. How would you answer that question to, or, or challenge, you know, Hey, I love this, but is there any guarantees? Is there any, what's the sla I'm an enterprise, I got tight, you know, I love the open source kind of free, fast and loose, but I need hardened code. >>Yeah, absolutely. So, so two parts to that, right? One is Arlan leverages existing open source components, products that are extremely popular. Two specifically. One is Arlan uses Argo cd, which is probably one of the highest rated and used CD open source tools that's out there, right? It's created by folks that are as part of into team now, you know, really brilliant team. And it's used at scale across enterprises. That's one. Second is arlon also makes use of cluster api capi, which is a sub-component, right? For lifecycle management of clusters. So there is enough of, you know, community users, et cetera, around these two products, right? Or, or, or open source projects that will find Arlan to be right up in their alley because they're already comfortable, familiar with algo cd. Now Arlan just extends the scope of what Algo CD can do. And so that's one. And then the second part is going back to your point of the comfort. And that's where, you know, Platform nine has a role to play, which is when you are ready to deploy arlon at scale, because you've been, you know, playing with it in your dev tested environments, you're happy with what you get with it, then Platform nine will stand behind it and provide that sla. >>And what's been the reaction from customers you've talked to Platform nine customers with, with, that are familiar with, with Argo and then Arlo? What's been some of the feedback? >>Yeah, I, I, I think the feedback's been fantastic. I mean, I can give you examples of customers where, you know, initially, you know, when you are, when you're telling them about your entire portfolio of solutions, it might not strike a card right away. But then we start talking about Arlan and, and we talk about the fact that it uses Argo cdn, they start opening up, they say, We have standardized on Argo and we have built these components, homegrown, we would be very interested. Can we co-develop? Does it support these use cases? So we've had that kind of validation. We've had validation all the way at the beginning of our lawn before we even wrote a single line of code saying this is something we plan on doing. And the customer said, If you had it today, I would've purchased it. So it's been really great validation. >>All right. So next question is, what is the solution to the customer? If I asked you, Look it, I have, I'm so busy, my team's overworked. I got a skills gap. I don't need another project that's, I'm so tied up right now and I'm just chasing my tail. How does Platform nine help me? >>Yeah, absolutely. So I think, you know, one of the core tenets of Platform nine has always been that we try to bring that public cloud like simplicity by hosting, you know, this in a lot of such similar tools in a SAS hosted manner for our customers, right? So our goal behind doing that is taking away or trying to take away all of that complexity from customer's hands and offloading it to our hands, right? And giving them that full white glove treatment as we call it. And so from a customer's perspective, one, something like arlon will integrate with what they have so they don't have to rip and replace anything. In fact, it will, even in the next versions, it may even discover your clusters that you have today and, you know, give you an inventory. And so >>Customers have clusters that are growing, that's a sign correct call you guys. >>Absolutely. Either they're, they have massive large clusters, right? That they wanna split into smaller clusters, but they're not comfortable doing that today, or they've done that already on say, public cloud or otherwise. And now they have management challenges. >>So especially operationalizing the clusters, whether they want to kind of reset everything and remove things around and reconfigure Yep. And or scale out. >>That's right. Exactly. And >>You provide that layer of policy. >>Absolutely. Yes. >>That's the key value >>Here. That's right. >>So policy based configuration for cluster scale >>Up, well profile and policy based declarative configuration and lifecycle management for >>Clusters. If I asked you how this enables Super Cloud, what would you say to that? >>I think this is one of the key ingredients to super cloud, right? If you think about a super cloud environment, there is at least few key ingredients that that come to my mind that are really critical. Like they are, you know, life saving ingredients at that scale. One is having a really good strategy for managing that scale. You know, in a, going back to assembly line in a very consistent, predictable way. So that are land solves, then you, you need to compliment that with the right kind of observability and monitoring tools at scale, right? Because ultimately issues are gonna happen and you're gonna have to figure out, you know, how to solve them fast. And arlon by the way, also helps in that direction, but you also need observability tools. And then especially if you're running at, on the public cloud, you need some cost management tools. In my mind, these three things are like the most necessary ingredients to make Super Cloud successful. And, you know, our long flows >>In one. Okay, so now the next level is, Okay, that makes sense. Is under the covers kind of speak under the hood. Yeah. How does that impact the app developers of the cloud native modern application workflows? Because the impact to me seems the apps are gonna be impacted. Are they gonna be faster, stronger? I mean, what's the impact if you do all those things as you mentioned, what's the impact of the apps? >>Yeah, the impact is that your apps are more likely to operate in production the way you expect them to, because the right checks and balances have gone through, and any discrepancies have been identified prior to those apps, prior to your customer running into them, right? Because developers run into this challenge today where there's a split responsibility, right? I'm responsible for my code, I'm responsible for some of these other plugins, but I don't own these stack end to end. I have to rely on my ops counterpart to do their part, right? And so this really gives them, you know, the right tooling for >>That. So this is actually a great kind of relevant point, you know, as cloud becomes more scalable, you're starting to see this fragmentation gone of the days of the fulls stack developer to the more specialized role. But this is a key point, and I have to ask you because if this, our low solution takes place, as you say, and the apps are gonna be stupid, they designed to do, the question is, what did, does the current pain look like? Are the apps breaking? What is the signals to the customer Yeah. That they should be calling you guys up into implementing Arlo, Argo and, and all the other goodness to automate? What does some of the signals, is it downtime? Is it, is it failed apps, is it latency? What are some of the things that Yeah, absolutely. That would be indications of things are effed up a little bit. >>Yeah. More frequent down times, down times that are, that take longer to triage. And so your, you know, the, you know, your mean times on resolution, et cetera, are escalating or growing larger, right? Like we have environments of customers where they're, they have a number of folks on in the field that have to take these apps and run them at customer sites. And that's one of our partners, and they're extremely interested in this because the, the, the rate of failures they're encountering for this, you know, the field when they're running these apps on site, because the field is automating their clusters that are running on sites using their own scripts. So these are the kinds of challenges, and those are the pain points, which is, you know, if you're looking to reduce your meantime to resolution, if you're looking to reduce the number of failures that occur on your production site, that's one. And second, if you're looking to manage these at scale environments with a relatively small, focused, nimble ops team, which has an immediate impact on your budget. So those are, those are the signals. >>This is the cloud native at scale situation, the innovation going on. Final thought is your reaction to the idea that if the world goes digital, which it is, and the confluence of physical and digital coming together, and cloud continues to do its thing, the company becomes the application not where it used to be supporting the business, you know, the back office and the immediate terminals and some PCs and handhelds. Now if technology's running, the business is the business. Yeah. Company's the application. Yeah. So it can't be down. So there's a lot of pressure on, on CSOs and CIOs now and boards are saying, How is technology driving the top line revenue? That's the number one conversation. Yep. Do you see the same thing? >>Yeah, it's interesting. I think there's multiple pressures at the cx, OCI O level, right? One is that there needs to be that visibility and clarity and guarantee almost that, you know, the, the, the technology that's, you know, that's gonna drive your top line is gonna drive that in a consistent, reliable, predictable manner. And then second, there is the constant pressure to do that while always lowering your costs of doing it right? Especially when you're talking about, let's say retailers or those kinds of large scale vendors, they many times make money by lowering the amount that they spend on, you know, providing those goods to their end customers. So I think those, both those factors kind of come into play and the solution to all of them is usually in a very structured strategy around automation. >>Final question. What does cloud native at scale look like to you? If all the things happen the way we want 'em to happen, The magic wand, the magic dust, what does it look like? >>What that looks like to me is a CIO sipping at his desk on coffee production is running absolutely smooth. And his, he's running that at a nimble, nimble team size of at the most, a handful of folks that are just looking after things, but things are just taking >>Care and the CIO doesn't exist. There's no seeso there at the beach. >>Yep. >>Thank you for coming on, sharing the cloud native at scale here on the cube. Thank you for your time. >>Fantastic. Thanks for >>Having me. Okay. I'm John Fur here for special program presentation, special programming cloud native at scale, enabling super cloud modern applications with Platform nine. Thanks for watching.

Published Date : Oct 18 2022

SUMMARY :

I'm John Forer, host of the Cube. a lot different, but kind of the same as the first generation. And so you gotta rougher that with a terminology that, Can you share your view on what the technical challenges So I think, you know, in in the context of this, the, this, Can you scope the scale of the problem? the problem that the scale creates, you know, there's various problems, but I think one, And that is just, you know, one example of an issue that happens. cloud native, you know, you see some, you know, some experimentation. you know, you have your perfectly written code that is operating just fine on your machine, And so as you give that change to then run at your production edge location, And you guys have a solution you're launching. So what Arlan lets you do in a then handing to the next stage where again, it gets, you know, processed in a standardized way. So keeping it smooth, the assembly line, things are flowing. Because developers, you know, there is, developers are responsible for one picture of Yeah, it's dev op, So the DevOps is the cloud needed developer, The kins have to kind of set policies. of that world of a single cluster, and when you actually talk about defining the clusters or defining And you guys have a product that's commercial. products starting all the way with fi, which was a serverless product, you know, that we had built to of date the application, if you will. choose to go that route, you know, once they have used the open source enthusiastic view of, you know, why I should be enthused about Arlo if I'm a And so, and there's multiple, you know, enterprises that we talk to, The folks that we've spoken, you know, spoken with, have been absolutely excited Is there any, what's the sla I'm an enterprise, I got tight, you know, I love the open source kind of free, It's created by folks that are as part of into team now, you know, you know, initially, you know, when you are, when you're telling them about your entire So next question is, what is the solution to the customer? So I think, you know, one of the core tenets of Platform nine has always been that And now they have management challenges. So especially operationalizing the clusters, whether they want to kind of reset everything and remove things around and reconfigure And Absolutely. And arlon by the way, also helps in that direction, but you also need I mean, what's the impact if you do all those things as you mentioned, And so this really gives them, you know, the right tooling for But this is a key point, and I have to ask you because if this, our low solution So these are the kinds of challenges, and those are the pain points, which is, you know, to be supporting the business, you know, the back office and the immediate terminals and some that, you know, the, the, the technology that's, you know, that's gonna drive your top line is gonna If all the things happen the way we want 'em to happen, The magic wand, the magic dust, he's running that at a nimble, nimble team size of at the most, Care and the CIO doesn't exist. Thank you for your time. Thanks for at scale, enabling super cloud modern applications with Platform nine.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Madhura MaskaskyPERSON

0.99+

Adrian KaroPERSON

0.99+

John ForerPERSON

0.99+

John FurPERSON

0.99+

second partQUANTITY

0.99+

AmazonORGANIZATION

0.99+

TwoQUANTITY

0.99+

one siteQUANTITY

0.99+

Palo Alto, CaliforniaLOCATION

0.99+

two thingsQUANTITY

0.99+

two partsQUANTITY

0.99+

two factorsQUANTITY

0.99+

one flavorQUANTITY

0.99+

bothQUANTITY

0.99+

tens of thousands of notesQUANTITY

0.99+

oneQUANTITY

0.99+

first generationQUANTITY

0.99+

each componentQUANTITY

0.99+

one pictureQUANTITY

0.99+

firstQUANTITY

0.98+

each siteQUANTITY

0.98+

todayDATE

0.98+

MedoPERSON

0.98+

SecondQUANTITY

0.98+

OneQUANTITY

0.98+

ArlanORGANIZATION

0.98+

secondQUANTITY

0.98+

tens of thousands of sitesQUANTITY

0.98+

three thingsQUANTITY

0.98+

ArgoORGANIZATION

0.98+

MakoskiPERSON

0.97+

two productsQUANTITY

0.97+

Platform nineTITLE

0.96+

one problemQUANTITY

0.96+

single lineQUANTITY

0.96+

ArlonORGANIZATION

0.95+

this yearDATE

0.95+

CloudFlareTITLE

0.95+

one nodeQUANTITY

0.95+

algo cdTITLE

0.94+

customersQUANTITY

0.93+

hundredsQUANTITY

0.92+

lonORGANIZATION

0.92+

ArlanPERSON

0.92+

arlonORGANIZATION

0.91+

one exampleQUANTITY

0.91+

KubernetesTITLE

0.9+

single clusterQUANTITY

0.89+

ArloORGANIZATION

0.89+

Platform nineORGANIZATION

0.87+

one wayQUANTITY

0.85+

day twoQUANTITY

0.85+

day oneQUANTITY

0.82+

CloudnativeORGANIZATION

0.8+

two accessQUANTITY

0.79+

one endQUANTITY

0.78+

CubanLOCATION

0.78+

Platform9ORGANIZATION

0.78+

AlonORGANIZATION

0.77+

thousandsQUANTITY

0.77+

Platform9, Cloud Native at Scale


 

>>Hello, welcome to the Cube here in Palo Alto, California for a special presentation on Cloud native at scale, enabling super cloud modern applications with Platform nine. I'm John Furr, your host of The Cube. We had a great lineup of three interviews we're streaming today. Meor Ma Makowski, who's the co-founder and VP of Product of Platform nine. She's gonna go into detail around Arlon, the open source products, and also the value of what this means for infrastructure as code and for cloud native at scale. Bickley the chief architect of Platform nine Cube alumni. Going back to the OpenStack days. He's gonna go into why Arlon, why this infrastructure as code implication, what it means for customers and the implications in the open source community and where that value is. Really great wide ranging conversation there. And of course, Vascar, Gort, the CEO of Platform nine, is gonna talk with me about his views on Super Cloud and why Platform nine has a scalable solutions to bring cloudnative at scale. So enjoy the program. See you soon. Hello everyone. Welcome to the cube here in Palo Alto, California for special program on cloud native at scale, enabling next generation cloud or super cloud for modern application cloud native developers. I'm John Furry, host of the Cube. A pleasure to have here, me Makoski, co-founder and VP of product at Platform nine. Thanks for coming in today for this Cloudnative at scale conversation. Thank >>You for having me. >>So Cloudnative at scale, something that we're talking about because we're seeing the, the next level of mainstream success of containers Kubernetes and cloud native develop, basically DevOps in the C I C D pipeline. It's changing the landscape of infrastructure as code, it's accelerating the value proposition and the super cloud as we call it, has been getting a lot of traction because this next generation cloud is looking a lot different, but kind of the same as the first generation. What's your view on super cloud as it fits to cloud native as scales up? >>Yeah, you know, I think what's interesting, and I think the reason why Super Cloud is a really good, in a really fit term for this, and I think, I know my CEO was chatting with you as well, and he was mentioning this as well, but I think there needs to be a different term than just multi-cloud or cloud. And the reason is because as cloud native and cloud deployments have scaled, I think we've reached a point now where instead of having the traditional data center style model where you have a few large distributions of infrastructure and workload at a few locations, I think the model is kind of flipped around, right? Where you have a large number of microsites, these microsites could be your public cloud deployment, your private on-prem infrastructure deployments, or it could be your edge environment, right? And every single enterprise, every single industry is moving in that direction. And so you gotta rougher that with a terminology that, that, that indicates the scale and complexity of it. And so I think supercloud is a, is an appropriate term for that. >>So you brought a couple of things I want to dig into. You mentioned edge nodes. We're seeing not only edge nodes being the next kind of area of innovation, mainly because it's just popping up everywhere. And that's just the beginning. Wouldn't even know what's around the corner. You got buildings, you got iot, ot, and IT kind of coming together, but you also got this idea of regions, global infras infrastructures, big part of it. I just saw some news around CloudFlare shutting down a site here. There's policies being made at scale, These new challenges there. Can you share because you can have edge. So hybrid cloud is a winning formula. Everybody knows that it's a steady state. Yeah. But across multiple clouds brings in this new un engineered area, yet it hasn't been done yet. Spanning clouds. People say they're doing it, but you start to see the toe in the water, it's happening, it's gonna happen. It's only gonna get accelerated with the edge and beyond globally. So I have to ask you, what is the technical challenges in doing this? Because there's something business consequences as well, but there are technical challenges. Can you share your view on what the technical challenges are for the super cloud or across multiple edges and regions? >>Yeah, absolutely. So I think, you know, in in the context of this, the, this, this term of super cloud, I think it's sometimes easier to visualize things in terms of two access, right? I think on one end you can think of the scale in terms of just pure number of nodes that you have deploy a number of clusters in the Kubernetes space. And then on the other axis you would have your distribution factor, right? Which is, do you have these tens of thousands of nodes in one site or do you have them distributed across tens of thousands of sites with one node at each site? Right? And if you have just one flavor of this, there is enough complexity, but potentially manageable. But when you are expanding on both these access, you really get to a point where that scale really needs some well thought out, well structured solutions to address it, right? A combination of homegrown tooling along with your, you know, favorite distribution of Kubernetes is not a strategy that can help you in this environment. It may help you when you have one of this or when you, when you scale, is not at the level. >>Can you scope the complexity? Because I mean, I hear a lot of moving parts going on there, the technology's also getting better. We we're seeing cloud native become successful. There's a lot to configure, there's a lot to install. Can you scope the scale of the problem? Because we're talking about at scale Yep. Challenges here. Yeah, >>Absolutely. And I think, you know, I I like to call it, you know, the, the, the problem that the scale creates, you know, there's various problems, but I think one, one problem, one way to think about it is, is, you know, it works on my cluster problem, right? So I, you know, I come from engineering background and there's a, you know, there's a famous saying between engineers and QA and the support folks, right? Which is, it works on my laptop, which is I tested this chain, everything was fantastic, it worked flawlessly on my machine, on production, It's not working. The exact same problem now happens and these distributed environments, but at massive scale, right? Which is that, you know, developers test their applications, et cetera within the sanctity of their sandbox environments. But once you expose that change in the wild world of your production deployment, right? >>And the production deployment could be going at the radio cell tower at the edge location where a cluster is running there, or it could be sending, you know, these applications and having them run at my customer site where they might not have configured that cluster exactly the same way as I configured it, or they configured the cluster, right? But maybe they didn't deploy the security policies, or they didn't deploy the other infrastructure plugins that my app relies on. All of these various factors are their own layer of complexity. And there really isn't a simple way to solve that today. And that is just, you know, one example of an issue that happens. I think another, you know, whole new ball game of issues come in the context of security, right? Because when you are deploying applications at scale in a distributed manner, you gotta make sure someone's job is on the line to ensure that the right security policies are enforced regardless of that scale factor. So I think that's another example of problems that occur. >>Okay. So I have to ask about scale, because there are a lot of multiple steps involved when you see the success of cloud native. You know, you see some, you know, some experimentation. They set up a cluster, say it's containers and Kubernetes, and then you say, Okay, we got this, we can figure it. And then they do it again and again, they call it day two. Some people call it day one, day two operation, whatever you call it. Once you get past the first initial thing, then you gotta scale it. Then you're seeing security breaches, you're seeing configuration errors. This seems to be where the hotspot is in when companies transition from, I got this to, Oh no, it's harder than I thought at scale. Can you share your reaction to that and how you see this playing out? >>Yeah, so, you know, I think it's interesting. There's multiple problems that occur when, you know, the two factors of scale, as we talked about, start expanding. I think one of them is what I like to call the, you know, it, it works fine on my cluster problem, which is back in, when I was a developer, we used to call this, it works on my laptop problem, which is, you know, you have your perfectly written code that is operating just fine on your machine, your sandbox environment. But the moment it runs production, it comes back with p zeros and pos from support teams, et cetera. And those issues can be really difficult to triage us, right? And so in the Kubernetes environment, this problem kind of multi folds, it goes, you know, escalates to a higher degree because you have your sandbox developer environments, they have their clusters and things work perfectly fine in those clusters because these clusters are typically handcrafted or a combination of some scripting and handcrafting. >>And so as you give that change to then run at your production edge location, like say your radio cell tower site, or you hand it over to a customer to run it on their cluster, they might not have not have configured that cluster exactly how you did, or they might not have configured some of the infrastructure plugins. And so the things don't work. And when things don't work, triaging them becomes nightmarishly hard, right? It's just one of the examples of the problem, another whole bucket of issues is security, which is, is you have these distributed clusters at scale, you gotta ensure someone's job is on the line to make sure that these security policies are configured properly. >>So this is a huge problem. I love that comment. That's not not happening on my system. It's the classic, you know, debugging mentality. Yeah. But at scale it's hard to do that with error prone. I can see that being a problem. And you guys have a solution you're launching. Can you share what Arlon is this new product? What is it all about? Talk about this new introduction. >>Yeah, absolutely. Very, very excited. You know, it's one of the projects that we've been working on for some time now because we are very passionate about this problem and just solving problems at scale in on-prem or at in the cloud or at edge environments. And what arlon is, it's an open source project, and it is a tool, it's a Kubernetes native tool for complete end to end management of not just your clusters, but your clusters. All of the infrastructure that goes within and along the site of those clusters, security policies, your middleware, plug-ins, and finally your applications. So what our LA you do in a nutshell is in a declarative way, it lets you handle the configuration and management of all of these components in at scale. >>So what's the elevator pitch simply put for what dissolves in, in terms of the chaos you guys are reigning in, what's the, what's the bumper sticker? Yeah, what >>Would it do? There's a perfect analogy that I love to reference in this context, which is think of your assembly line, you know, in a traditional, let's say, you know, an auto manufacturing factory or et cetera, and the level of efficiency at scale that that assembly line brings, right? Our line, and if you look at the logo we've designed, it's this funny little robot. And it's because when we think of online, we think of these enterprise large scale environments, you know, sprawling at scale, creating chaos because there isn't necessarily a well thought through, well structured solution that's similar to an assembly line, which is taking each component, you know, addressing them, manufacturing, processing them in a standardized way, then handing to the next stage. But again, it gets, you know, processed in a standardized way. And that's what arlon really does. That's like the deliver pitch. If you have problems of scale of managing your infrastructure, you know, that is distributed. Arlon brings the assembly line level of efficiency and consistency for >>Those. So keeping it smooth, the assembly on things are flowing. See c i CD pipe pipelining. Exactly. So that's what you're trying to simplify that ops piece for the developer. I mean, it's not really ops, it's their ops, it's coding. >>Yeah. Not just developer, the ops, the operations folks as well, right? Because developers, you know, there is, developers are responsible for one picture of that layer, which is my apps, and then maybe that middleware of applications that they interface with, but then they hand it over to someone else who's then responsible to ensure that these apps are secure properly, that they are logging, logs are being collected properly, monitoring and observability integrated. And so it solves problems for both >>Those teams. Yeah. It's DevOps. So the DevOps is the cloud needed developer's. That's right. The option teams have to kind of set policies. Is that where the declarative piece comes in? Is that why that's important? >>Absolutely. Yeah. And, and, and, and you know, ES really in introduced or elevated this declarative management, right? Because, you know, s clusters are Yeah. Or your, yeah, you know, specifications of components that go in Kubernetes are defined a declarative way, and Kubernetes always keeps that state consistent with your defined state. But when you go outside of that world of a single cluster, and when you actually talk about defining the clusters or defining everything that's around it, there really isn't a solution that does that today. And so Arlon addresses that problem at the heart of it, and it does that using existing open source well known solutions. >>And do I want to get into the benefits? What's in it for me as the customer developer? But I want to finish this out real quick and get your thoughts. You mentioned open source. Why open source? What's the, what's the current state of the product? You run the product group over at Platform nine, is it open source? And you guys have a product that's commercial? Can you explain the open source dynamic? And first of all, why open source? Yeah. And what is the consumption? I mean, open source is great, People want open source, they can download it, look up the code, but maybe wanna buy the commercial. So I'm assuming you have that thought through, can you share open source and commercial relationship? >>Yeah, I think, you know, starting with why open source? I think it's, you know, we as a company, we have, you know, one of the things that's absolutely critical to us is that we take mainstream open source technologies components and then we, you know, make them available to our customers at scale through either a SaaS model or on-prem model, right? But, so as we are a company or startup or a company that benefits, you know, in a massive way by this open source economy, it's only right, I think in my mind that we do our part of the duty, right? And contribute back to the community that feeds us. And so, you know, we have always held that strongly as one of our principles. And we have, you know, created and built independent products starting all the way with fision, which was a serverless product, you know, that we had built to various other, you know, examples that I can give. But that's one of the main reasons why opensource and also open source, because we want the community to really firsthand engage with us on this problem, which is very difficult to achieve if your product is behind a wall, you know, behind, behind a block box. >>Well, and that's, that's what the developers want too. And what we're seeing in reporting with Super Cloud is the new model of consumption is I wanna look at the code and see what's in there. That's right. And then also, if I want to use it, I'll do it. Great. That's open source, that's the value. But then at the end of the day, if I wanna move fast, that's when people buy in. So it's a new kind of freemium, I guess, business model. I guess that's the way that long. But that's, that's the benefit. Open source. This is why standards and open source is growing so fast. You have that confluence of, you know, a way for developers to try before they buy, but also actually kind of date the application, if you will. We, you know, Adrian Karo uses the dating met metaphor, you know, Hey, you know, I wanna check it out first before I get married. Right? And that's what open source, So this is the new, this is how people are selling. This is not just open source, this is how companies are selling. >>Absolutely. Yeah. Yeah. You know, I think, and you know, two things. I think one is just, you know, this, this, this cloud native space is so vast that if you, if you're building a close flow solution, sometimes there's also a risk that it may not apply to every single enterprises use cases. And so having it open source gives them an opportunity to extend it, expand it, to make it proper to their use case if they choose to do so, right? But at the same time, what's also critical to us is we are able to provide a supported version of it with an SLA that we, you know, that's backed by us, a SAS hosted version of it as well, for those customers who choose to go that route, you know, once they have used the open source version and loved it and want to take it at scale and in production and need, need, need a partner to collaborate with, who can, you know, support them for that production >>Environment. I have to ask you now, let's get into what's in it for the customer. I'm a customer. Yep. Why should I be enthused about Arla? What's in it for me? You know? Cause if I'm not enthused about it, I'm not gonna be confident and it's gonna be hard for me to get behind this. Can you share your enthusiastic view of, you know, why I should be enthused about Arlo? I'm a >>Customer. Yeah, absolutely. And so, and there's multiple, you know, enterprises that we talk to, many of them, you know, our customers, where this is a very kind of typical story that you hear, which is we have, you know, a Kubernetes distribution. It could be on premise, it could be public clouds, native Kubernetes, and then we have our C I C D pipelines that are automating the deployment of applications, et cetera. And then there's this gray zone. And the gray zone is well before you can you, your CS c D pipelines can deploy the apps. Somebody needs to do all of that groundwork of, you know, defining those clusters and yeah. You know, properly configuring them. And as these things, these things start by being done hand grown. And then as the, as you scale, what typically enterprises would do today is they will have their home homegrown DIY solutions for this. >>I mean, the number of folks that I talk to that have built Terra from automation, and then, you know, some of those key developers leave. So it's a typical open source or typical, you know, DIY challenge. And the reason that they're writing it themselves is not because they want to. I mean, of course technology is always interesting to everybody, but it's because they can't find a solution that's out there that perfectly fits the problem. And so that's that pitch. I think Ops FICO would be delighted. The folks that we've talk, you know, spoken with, have been absolutely excited and have, you know, shared that this is a major challenge we have today because we have, you know, few hundreds of clusters on ecos Amazon, and we wanna scale them to few thousands, but we don't think we are ready to do that. And this will give us the >>Ability to, Yeah, I think people are scared. Not sc I won't say scare, that's a bad word. Maybe I should say that they feel nervous because, you know, at scale small mistakes can become large mistakes. This is something that is concerning to enterprises. And, and I think this is gonna come up at co con this year where enterprises are gonna say, Okay, I need to see SLAs. I wanna see track record, I wanna see other companies that have used it. Yeah. How would you answer that question to, or, or challenge, you know, Hey, I love this, but is there any guarantees? Is there any, what's the SLAs? I'm an enterprise, I got tight, you know, I love the open source trying to free fast and loose, but I need hardened code. >>Yeah, absolutely. So, so two parts to that, right? One is Arlan leverages existing open source components, products that are extremely popular. Two specifically. One is Arlan uses Argo cd, which is probably one of the highest and used CD open source tools that's out there. Right's created by folks that are as part of into team now, you know, really brilliant team. And it's used at scale across enterprises. That's one. Second is Alon also makes use of Cluster api cappi, which is a Kubernetes sub-component, right? For lifecycle management of clusters. So there is enough of, you know, community users, et cetera, around these two products, right? Or, or, or open source projects that will find Arlan to be right up in their alley because they're already comfortable, familiar with Argo cd. Now Arlan just extends the scope of what City can do. And so that's one. And then the second part is going back to a point of the comfort. And that's where, you know, platform line has a role to play, which is when you are ready to deploy online at scale, because you've been, you know, playing with it in your DEF test environments, you're happy with what you get with it, then Platform nine will stand behind it and provide that >>Sla. And what's been the reaction from customers you've talked to Platform nine customers with, with that are familiar with, with Argo and then rlo? What's been some of the feedback? >>Yeah, I, I think the feedback's been fantastic. I mean, I can give you examples of customers where, you know, initially, you know, when you are, when you're telling them about your entire portfolio of solutions, it might not strike a card right away. But then we start talking about Arlan and, and we talk about the fact that it uses Argo adn, they start opening up, they say, We have standardized on Argo and we have built these components, homegrown, we would be very interested. Can we co-develop? Does it support these use cases? So we've had that kind of validation. We've had validation all the way at the beginning of our land before we even wrote a single line of code saying this is something we plan on doing. And the customer said, If you had it today, I would've purchased it. So it's been really great validation. >>All right. So next question is, what is the solution to the customer? If I asked you, Look it, I have, I'm so busy, my team's overworked. I got a skills gap. I don't need another project that's, I'm so tied up right now and I'm just chasing my tail. How does Platform nine help me? >>Yeah, absolutely. So I think, you know, one of the core tenets of Platform nine has always been been that we try to bring that public cloud like simplicity by hosting, you know, this in a lot of such similar tools in a SaaS hosted manner for our customers, right? So our goal behind doing that is taking away or trying to take away all of that complexity from customers' hands and offloading it to our hands, right? And giving them that full white glove treatment, as we call it. And so from a customer's perspective, one, something like arlon will integrate with what they have so they don't have to rip and replace anything. In fact, it will, even in the next versions, it may even discover your clusters that you have today and you know, give you an inventory. And that will, >>So if customers have clusters that are growing, that's a sign correct call you guys. >>Absolutely. Either they're, they have massive large clusters, right? That they wanna split into smaller clusters, but they're not comfortable doing that today, or they've done that already on say, public cloud or otherwise. And now they have management challenges. So >>Especially operationalizing the clusters, whether they want to kind of reset everything and remove things around and reconfigure Yep. And or scale out. >>That's right. Exactly. And >>You provide that layer of policy. >>Absolutely. >>Yes. That's the key value here. >>That's right. >>So policy based configuration for cluster scale up, >>Well profile and policy based declarative configuration and lifecycle management for clusters. >>If I asked you how this enables supercloud, what would you say to that? >>I think this is one of the key ingredients to super cloud, right? If you think about a super cloud environment, there's at least few key ingredients that that come to my mind that are really critical. Like they are, you know, life saving ingredients at that scale. One is having a really good strategy for managing that scale, you know, in a, going back to assembly line in a very consistent, predictable way so that our lot solves then you, you need to compliment that with the right kind of observability and monitoring tools at scale, right? Because ultimately issues are gonna happen and you're gonna have to figure out, you know, how to solve them fast. And arlon by the way, also helps in that direction, but you also need observability tools. And then especially if you're running it on the public cloud, you need some cost management tools. In my mind, these three things are like the most necessary ingredients to make Super Cloud successful. And you know, our alarm fills in >>One. Okay. So now the next level is, Okay, that makes sense. Is under the covers kind of speak under the hood. Yeah. How does that impact the app developers and the cloud native modern application workflows? Because the impact to me, seems the apps are gonna be impacted. Are they gonna be faster, stronger? I mean, what's the impact if you do all those things, as you mentioned, what's the impact of the apps? >>Yeah, the impact is that your apps are more likely to operate in production the way you expect them to, because the right checks and balances have gone through, and any discrepancies have been identified prior to those apps, prior to your customer running into them, right? Because developers run into this challenge to their, where there's a split responsibility, right? I'm responsible for my code, I'm responsible for some of these other plugins, but I don't own the stack end to end. I have to rely on my ops counterpart to do their part, right? And so this really gives them, you know, the right tooling for that. >>So this is actually a great kind of relevant point, you know, as cloud becomes more scalable, you're starting to see this fragmentation gone of the days of the full stack developer to the more specialized role. But this is a key point, and I have to ask you because if this RLO solution takes place, as you say, and the apps are gonna be stupid, they're designed to do, the question is, what did does the current pain look like of the apps breaking? What does the signals to the customer Yeah. That they should be calling you guys up into implementing Arlo, Argo and, and all the other goodness to automate? What are some of the signals? Is it downtime? Is it, is it failed apps, Is it latency? What are some of the things that Yeah, absolutely would be indications of things are effed up a little bit. Yeah. >>More frequent down times, down times that are, that take longer to triage. And so you are, you know, the, you know, your mean times on resolution, et cetera, are escalating or growing larger, right? Like we have environments of customers where they're, they have a number of folks on in the field that have to take these apps and run them at customer sites. And that's one of our partners. And they're extremely interested in this because they're the, the rate of failures they're encountering for this, you know, the field when they're running these apps on site, because the field is automating their clusters that are running on sites using their own script. So these are the kinds of challenges, and those are the pain points, which is, you know, if you're looking to reduce your meantime to resolution, if you're looking to reduce the number of failures that occur on your production site, that's one. And second, if you are looking to manage these at scale environments with a relatively small, focused, nimble ops team, which has an immediate impact on your budget. So those are, those are the signals. >>This is the cloud native at scale situation, the innovation going on. Final thought is your reaction to the idea that if the world goes digital, which it is, and the confluence of physical and digital coming together, and cloud continues to do its thing, the company becomes the application, not where it used to be supporting the business, you know, the back office and the maybe terminals and some PCs and handhelds. Now if technology's running, the business is the business. Yeah. Company's the application. Yeah. So it can't be down. So there's a lot of pressure on, on CSOs and CIOs now and boards is saying, How is technology driving the top line revenue? That's the number one conversation. Yep. Do you see that same thing? >>Yeah. It's interesting. I think there's multiple pressures at the CXO CIO level, right? One is that there needs to be that visibility and clarity and guarantee almost that, you know, that the, the technology that's, you know, that's gonna drive your top line is gonna drive that in a consistent, reliable, predictable manner. And then second, there is the constant pressure to do that while always lowering your costs of doing it, right? Especially when you're talking about, let's say retailers or those kinds of large scale vendors, they many times make money by lowering the amount that they spend on, you know, providing those goods to their end customers. So I think those, both those factors kind of come into play and the solution to all of them is usually in a very structured strategy around automation. >>Final question. What does cloudnative at scale look like to you? If all the things happen the way we want 'em to happen, The magic wand, the magic dust, what does it look like? >>What that looks like to me is a CIO sipping at his desk on coffee production is running absolutely smooth. And his, he's running that at a nimble, nimble team size of at the most, a handful of folks that are just looking after things, but things are >>Just taking care of the CIO doesn't exist. There's no ciso, they're at the beach. >>Yep. >>Thank you for coming on, sharing the cloud native at scale here on the cube. Thank you for your time. >>Fantastic. Thanks for >>Having me. Okay. I'm John Fur here for special program presentation, special programming cloud native at scale, enabling super cloud modern applications with Platform nine. Thanks for watching. Welcome back everyone to the special presentation of cloud native at scale, the cube and platform nine special presentation going in and digging into the next generation super cloud infrastructure as code and the future of application development. We're here with Bickley, who's the chief architect and co-founder of Platform nine Pick. Great to see you Cube alumni. We, we met at an OpenStack event in about eight years ago, or later, earlier when OpenStack was going. Great to see you and great to see congratulations on the success of platform nine. >>Thank you very much. >>Yeah. You guys have been at this for a while and this is really the, the, the year we're seeing the, the crossover of Kubernetes because of what happens with containers. Everyone now has realized, and you've seen what Docker's doing with the new docker, the open source Docker now just the success Exactly. Of containerization, right? And now the Kubernetes layer that we've been working on for years is coming, bearing fruit. This is huge. >>Exactly. Yes. >>And so as infrastructures code comes in, we talked to Bacar talking about Super Cloud, I met her about, you know, the new Arlon, our, our lawn, and you guys just launched the infrastructures code is going to another level, and then it's always been DevOps infrastructures code. That's been the ethos that's been like from day one, developers just code. Then you saw the rise of serverless and you see now multi-cloud or on the horizon, connect the dots for us. What is the state of infrastructure as code today? >>So I think, I think I'm, I'm glad you mentioned it, everybody or most people know about infrastructures code. But with Kubernetes, I think that project has evolved at the concept even further. And these dates, it's infrastructure is configuration, right? So, which is an evolution of infrastructure as code. So instead of telling the system, here's how I want my infrastructure by telling it, you know, do step A, B, C, and D instead with Kubernetes, you can describe your desired state declaratively using things called manifest resources. And then the system kind of magically figures it out and tries to converge the state towards the one that you specified. So I think it's, it's a even better version of infrastructures code. >>Yeah. And that really means it's developer just accessing resources. Okay. That declare, Okay, give me some compute, stand me up some, turn the lights on, turn 'em off, turn 'em on. That's kind of where we see this going. And I like the configuration piece. Some people say composability, I mean now with open source so popular, you don't have to have to write a lot of code, this code being developed. And so it's into integration, it's configuration. These are areas that we're starting to see computer science principles around automation, machine learning, assisting open source. Cuz you got a lot of code that's right in hearing software, supply chain issues. So infrastructure as code has to factor in these new dynamics. Can you share your opinion on these new dynamics of, as open source grows, the glue layers, the configurations, the integration, what are the core issues? >>I think one of the major core issues is with all that power comes complexity, right? So, you know, despite its expressive power systems like Kubernetes and declarative APIs let you express a lot of complicated and complex stacks, right? But you're dealing with hundreds if not thousands of these yamo files or resources. And so I think, you know, the emergence of systems and layers to help you manage that complexity is becoming a key challenge and opportunity in, in this space. >>That's, I wrote a LinkedIn post today was comments about, you know, hey, enterprise is a new breed. The trend of SaaS companies moving our consumer comp consumer-like thinking into the enterprise has been happening for a long time, but now more than ever, you're seeing it the old way used to be solve complexity with more complexity and then lock the customer in. Now with open source, it's speed, simplification and integration, right? These are the new dynamic power dynamics for developers. Yeah. So as companies are starting to now deploy and look at Kubernetes, what are the things that need to be in place? Because you have some, I won't say technical debt, but maybe some shortcuts, some scripts here that make it look like infrastructure is code. People have done some things to simulate or or make infrastructure as code happen. Yes. But to do it at scale Yes. Is harder. What's your take on this? What's your view? >>It's hard because there's a per proliferation of methods, tools, technologies. So for example, today it's very common for DevOps and platform engineering tools, I mean, sorry, teams to have to deploy a large number of Kubernetes clusters, but then apply the applications and configurations on top of those clusters. And they're using a wide range of tools to do this, right? For example, maybe Ansible or Terraform or bash scripts to bring up the infrastructure and then the clusters. And then they may use a different set of tools such as Argo CD or other tools to apply configurations and applications on top of the clusters. So you have this sprawl of tools. You, you also have this sprawl of configurations and files because the more objects you're dealing with, the more resources you have to manage. And there's a risk of drift that people call that where, you know, you think you have things under control, but some people from various teams will make changes here and there and then before the end of the day systems break and you have no idea of tracking them. So I think there's real need to kind of unify, simplify, and try to solve these problems using a smaller, more unified set of tools and methodologies. And that's something that we try to do with this new project. Arlon. >>Yeah. So, so we're gonna get into Arlan in a second. I wanna get into the why Arlon. You guys announced that at AR GoCon, which was put on here in Silicon Valley at the, at the community meeting by in two, they had their own little day over there at their headquarters. But before we get there, vascar, your CEO came on and he talked about Super Cloud at our in AAL event. What's your definition of super cloud? If you had to kind of explain that to someone at a cocktail party or someone in the industry technical, how would you look at the super cloud trend that's emerging? It's become a thing. What's your, what would be your contribution to that definition or the narrative? >>Well, it's, it's, it's funny because I've actually heard of the term for the first time today, speaking to you earlier today. But I think based on what you said, I I already get kind of some of the, the gist and the, the main concepts. It seems like super cloud, the way I interpret that is, you know, clouds and infrastructure, programmable infrastructure, all of those things are becoming commodity in a way. And everyone's got their own flavor, but there's a real opportunity for people to solve real business problems by perhaps trying to abstract away, you know, all of those various implementations and then building better abstractions that are perhaps business or applications specific to help companies and businesses solve real business problems. >>Yeah, I remember that's a great, great definition. I remember, not to date myself, but back in the old days, you know, IBM had a proprietary network operating system, so of deck for the mini computer vendors, deck net and SNA respectively. But T C P I P came out of the osi, the open systems interconnect and remember, ethernet beat token ring out. So not to get all nerdy for all the young kids out there, look, just look up token ring, you'll see, you've probably never heard of it. It's IBM's, you know, connection for the internet at the, the layer two is Amazon, the ethernet, right? So if T C P I P could be the Kubernetes and the container abstraction that made the industry completely change at that point in history. So at every major inflection point where there's been serious industry change and wealth creation and business value, there's been an abstraction Yes. Somewhere. Yes. What's your reaction to that? >>I think this is, I think a saying that's been heard many times in this industry and, and I forgot who originated it, but I think that the saying goes like, there's no problem that can't be solved with another layer of indirection, right? And we've seen this over and over and over again where Amazon and its peers have inserted this layer that has simplified, you know, computing and, and infrastructure management. And I believe this trend is going to continue, right? The next set of problems are going to be solved with these insertions of additional abstraction layers. I think that that's really a, yeah, it's gonna >>Continue. It's interesting. I just, when I wrote another post today on LinkedIn called the Silicon Wars AMD stock is down arm has been on a rise. We remember pointing for many years now that arm's gonna be hugely, it has become true. If you look at the success of the infrastructure as a service layer across the clouds, Azure, aws, Amazon's clearly way ahead of everybody. The stuff that they're doing with the silicon and the physics and the, the atoms, the pro, you know, this is where the innovation, they're going so deep and so strong at ISAs, the more that they get that gets come on, they have more performance. So if you're an app developer, wouldn't you want the best performance and you'd wanna have the best abstraction layer that gives you the most ability to do infrastructures, code or infrastructure for configuration, for provisioning, for managing services. And you're seeing that today with service MeSHs, a lot of action going on in the service mesh area in in this community of, of co con, which will be a covering. So that brings up the whole what's next? You guys just announced our lawn at Argo Con, which came out of Intuit. We've had Mariana Tessel at our super cloud event. She's the cto, you know, they're all in the cloud. So they contributed that project. Where did Arlon come from? What was the origination? What's the purpose? Why our lawn, why this announcement? >>Yeah, so the, the inception of the project, this was the result of us realizing that problem that we spoke about earlier, which is complexity, right? With all of this, these clouds, these infrastructure, all the variations around and, you know, compute storage networks and the proliferation of tools we talked about the Ansibles and Terraforms and Kubernetes itself. You can, you can think of that as another tool, right? We saw a need to solve that complexity problem, and especially for people and users who use Kubernetes at scale. So when you have, you know, hundreds of clusters, thousands of applications, thousands of users spread out over many, many locations, there, there needs to be a system that helps simplify that management, right? So that means fewer tools, more expressive ways of describing the state that you want and more consistency. And, and that's why, you know, we built our lawn and we built it recognizing that many of these problems or sub problems have already been solved. So Arlon doesn't try to reinvent the wheel, it instead rests on the shoulders of several giants, right? So for example, Kubernetes is one building block, GI ops, and Argo CD is another one, which provides a very structured way of applying configuration. And then we have projects like cluster API and cross plane, which provide APIs for describing infrastructure. So arlon takes all of those building blocks and builds a thin layer, which gives users a very expressive way of defining configuration and desired state. So that's, that's kind of the inception of, And >>What's the benefit of that? What does that give the, what does that give the developer, the user, in this case, >>The developers, the, the platform engineer, team members, the DevOps engineers, they get a a ways to provision not just infrastructure and clusters, but also applications and configurations. They get a way, a system for provisioning, configuring, deploying, and doing life cycle management in a, in a much simpler way. Okay. Especially as I said, if you're dealing with a large number of applications. >>So it's like an operating fabric, if you will. Yes. For them. Okay, so let's get into what that means for up above and below the the, this abstraction or thin layer below as the infrastructure. We talked a lot about what's going on below that. Yeah. Above our workloads. At the end of the day, you know, I talk to CXOs and IT folks that are now DevOps engineers. They care about the workloads and they want the infrastructures code to work. They wanna spend their time getting in the weeds, figuring out what happened when someone made a push that that happened or something happened. They need observability and they need to, to know that it's working. That's right. And is my workloads running effectively? So how do you guys look at the workload side of it? Cuz now you have multiple workloads on these fabric, >>Right? So workloads, so Kubernetes has defined kind of a standard way to describe workloads and you can, you know, tell Kubernetes, I want to run this container this particular way, or you can use other projects that are in the Kubernetes cloud native ecosystem like K native, where you can express your application in more at a higher level, right? But what's also happening is in addition to the workloads, DevOps and platform engineering teams, they need to very often deploy the applications with the clusters themselves. Clusters are becoming this commodity. It's, it's becoming this host for the application and it kind of comes bundled with it. In many cases it is like an appliance, right? So DevOps teams have to provision clusters at a really incredible rate and they need to tear them down. Clusters are becoming more, >>It's kinda like an EC two instance, spin up a cluster. We very, people used words like that. That's >>Right. And before arlon you kind of had to do all of that using a different set of tools as, as I explained. So with Armon you can kind of express everything together. You can say I want a cluster with a health monitoring stack and a logging stack and this ingress controller and I want these applications and these security policies. You can describe all of that using something we call a profile. And then you can stamp out your app, your applications and your clusters and manage them in a very, so >>Essentially standard creates a mechanism. Exactly. Standardized, declarative kind of configurations. And it's like a playbook. You deploy it. Now what's there is between say a script like I'm, I have scripts, I could just automate scripts >>Or yes, this is where that declarative API and infrastructures configuration comes in, right? Because scripts, yes you can automate scripts, but the order in which they run matters, right? They can break, things can break in the middle and, and sometimes you need to debug them. Whereas the declarative way is much more expressive and powerful. You just tell the system what you want and then the system kind of figures it out. And there are these things about controllers which will in the background reconcile all the state to converge towards your desire. It's a much more powerful, expressive and reliable way of getting things done. >>So infrastructure has configuration is built kind of on, it's as super set of infrastructures code because it's >>An evolution. >>You need edge's code, but then you can configure the code by just saying do it. You basically declaring and saying Go, go do that. That's right. Okay, so, alright, so cloud native at scale, take me through your vision of what that means. Someone says, Hey, what does cloud native at scale mean? What's success look like? How does it roll out in the future as you, not future next couple years? I mean people are now starting to figure out, okay, it's not as easy as it sounds. Could be nice, it has value. We're gonna hear this year coan a lot of this. What does cloud native at scale >>Mean? Yeah, there are different interpretations, but if you ask me, when people think of scale, they think of a large number of deployments, right? Geographies, many, you know, supporting thousands or tens or millions of, of users there, there's that aspect to scale. There's also an equally important a aspect of scale, which is also something that we try to address with Arran. And that is just complexity for the people operating this or configuring this, right? So in order to describe that desired state and in order to perform things like maybe upgrades or updates on a very large scale, you want the humans behind that to be able to express and direct the system to do that in, in relatively simple terms, right? And so we want the tools and the abstractions and the mechanisms available to the user to be as powerful but as simple as possible. So there's, I think there's gonna be a number and there have been a number of CNCF and cloud native projects that are trying to attack that complexity problem as well. And Arlon kind of falls in in that >>Category. Okay, so I'll put you on the spot road that CubeCon coming up and obviously this will be shipping this segment series out before. What do you expect to see at Coan this year? What's the big story this year? What's the, what's the most important thing happening? Is it in the open source community and also within a lot of the, the people jogging for leadership. I know there's a lot of projects and still there's some white space in the overall systems map about the different areas get run time and there's ability in all these different areas. What's the, where's the action? Where, where's the smoke? Where's the fire? Where's the piece? Where's the tension? >>Yeah, so I think one thing that has been happening over the past couple of cons and I expect to continue and, and that is the, the word on the street is Kubernetes is getting boring, right? Which is good, right? >>Boring means simple. >>Well, well >>Maybe, >>Yeah, >>Invisible, >>No drama, right? So, so the, the rate of change of the Kubernetes features and, and all that has slowed but in, in a, in a positive way. But there's still a general sentiment and feeling that there's just too much stuff. If you look at a stack necessary for hosting applications based on Kubernetes, there are just still too many moving parts, too many components, right? Too much complexity. I go, I keep going back to the complexity problem. So I expect Cube Con and all the vendors and the players and the startups and the people there to continue to focus on that complexity problem and introduce further simplifications to, to the stack. >>Yeah. Vic, you've had an storied career, VMware over decades with them obviously in 12 years with 14 years or something like that. Big number co-founder here at Platform. Now you guys have been around for a while at this game. We, man, we talked about OpenStack, that project you, we interviewed at one of their events. So OpenStack was the beginning of that, this new revolution. And I remember the early days it was, it wasn't supposed to be an alternative to Amazon, but it was a way to do more cloud cloud native. I think we had a cloud ERO team at that time. We would to joke we, you know, about, about the dream. It's happening now, now at Platform nine. You guys have been doing this for a while. What's the, what are you most excited about as the chief architect? What did you guys double down on? What did you guys tr pivot from or two, did you do any pivots? Did you extend out certain areas? Cuz you guys are in a good position right now, a lot of DNA in Cloud native. What are you most excited about and what does Platform nine bring to the table for customers and for people in the industry watching this? >>Yeah, so I think our mission really hasn't changed over the years, right? It's been always about taking complex open source software because open source software, it's powerful. It solves new problems, you know, every year and you have new things coming out all the time, right? OpenStack was an example when the Kubernetes took the world by storm. But there's always that complexity of, you know, just configuring it, deploying it, running it, operating it. And our mission has always been that we will take all that complexity and just make it, you know, easy for users to consume regardless of the technology, right? So the successor to Kubernetes, you know, I don't have a crystal ball, but you know, you have some indications that people are coming up of new and simpler ways of running applications. There are many projects around there who knows what's coming next year or the year after that. But platform will a, platform nine will be there and we will, you know, take the innovations from the the community. We will contribute our own innovations and make all of those things very consumable to customers. >>Simpler, faster, cheaper. Exactly. Always a good business model technically to make that happen. Yes. Yeah, I think the, the reigning in the chaos is key, you know, Now we have now visibility into the scale. Final question before we depart this segment. What is at scale, how many clusters do you see that would be a watermark for an at scale conversation around an enterprise? Is it workloads we're looking at or, or clusters? How would you, Yeah, how would you describe that? When people try to squint through and evaluate what's a scale, what's the at scale kind of threshold? >>Yeah. And, and the number of clusters doesn't tell the whole story because clusters can be small in terms of the number of nodes or they can be large. But roughly speaking when we say, you know, large scale cluster deployments, we're talking about maybe hundreds, two thousands. >>Yeah. And final final question, what's the role of the hyperscalers? You got AWS continuing to do well, but they got their core ias, they got a PAs, they're not too too much putting a SaaS out there. They have some SaaS apps, but mostly it's the ecosystem. They have marketplaces doing over $2 billion billions of transactions a year and, and it's just like, just sitting there. It hasn't really, they're now innovating on it, but that's gonna change ecosystems. What's the role the cloud play in the cloud native of its scale? >>The, the hyperscalers, >>Yeahs Azure, Google. >>You mean from a business perspective? Yeah, they're, they have their own interests that, you know, that they're, they will keep catering to, they, they will continue to find ways to lock their users into their ecosystem of services and, and APIs. So I don't think that's gonna change, right? They're just gonna keep, >>Well they got great I performance, I mean from a, from a hardware standpoint, yes, that's gonna be key, right? >>Yes. I think the, the move from X 86 being the dominant way and platform to run workloads is changing, right? That, that, that, that, and I think the, the hyperscalers really want to be in the game in terms of, you know, the the new risk and arm ecosystems and the platforms. >>Yeah, not joking aside, Paul Morritz, when he was the CEO of VMware, when he took over once said, I remember our first year doing the cube. Oh the cloud is one big distributed computer, it's, it's hardware and he got software and you got middleware and he kind over, well he's kind of tongue in cheek, but really you're talking about large compute and sets of services that is essentially a distributed computer. >>Yes, >>Exactly. It's, we're back on the same game. Vic, thank you for coming on the segment. Appreciate your time. This is cloud native at scale special presentation with Platform nine. Really unpacking super cloud Arlon open source and how to run large scale applications on the cloud Cloud Native Phil for developers and John Furrier with the cube. Thanks for Washington. We'll stay tuned for another great segment coming right up. Hey, welcome back everyone to Super Cloud 22. I'm John Fur, host of the Cuba here all day talking about the future of cloud. Where's it all going? Making it super multi-cloud clouds around the corner and public cloud is winning. Got the private cloud on premise and edge. Got a great guest here, Vascar Gorde, CEO of Platform nine, just on the panel on Kubernetes. An enabler blocker. Welcome back. Great to have you on. >>Good to see you >>Again. So Kubernetes is a blocker enabler by, with a question mark. I put on on that panel was really to discuss the role of Kubernetes. Now great conversation operations is impacted. What's interest thing about what you guys are doing at Platform nine? Is your role there as CEO and the company's position, kind of like the world spun into the direction of Platform nine while you're at the helm? Yeah, right. >>Absolutely. In fact, things are moving very well and since they came to us, it was an insight to call ourselves the platform company eight years ago, right? So absolutely whether you are doing it in public clouds or private clouds, you know, the application world is moving very fast in trying to become digital and cloud native. There are many options for you do on the infrastructure. The biggest blocking factor now is having a unified platform. And that's what we, we come into, >>Patrick, we were talking before we came on stage here about your background and we were gonna talk about the glory days in 2000, 2001, when the first as piece application service providers came out, kind of a SaaS vibe, but that was kind of all kind of cloudlike. >>It wasn't, >>And and web services started then too. So you saw that whole growth. Now, fast forward 20 years later, 22 years later, where we are now, when you look back then to here and all the different cycles, >>I, in fact you, you know, as we were talking offline, I was in one of those ASPs in the year 2000 where it was a novel concept of saying we are providing a software and a capability as a service, right? You sign up and start using it. I think a lot has changed since then. The tooling, the tools, the technology has really skyrocketed. The app development environment has really taken off exceptionally well. There are many, many choices of infrastructure now, right? So I think things are in a way the same but also extremely different. But more importantly now for any company, regardless of size, to be a digital native, to become a digital company is extremely mission critical. It's no longer a nice to have everybody's in the journey somewhere. >>Everyone is going digital transformation here. Even on a so-called downturn recession that's upcoming inflation's here. It's interesting. This is the first downturn in the history of the world where the hyperscale clouds have been pumping on all cylinders as an economic input. And if you look at the tech trends, GDPs down, but not tech. >>Nope. >>Cuz the pandemic showed everyone digital transformation is here and more spend and more growth is coming even in, in tech. So this is a unique factor which proves that that digital transformation's happening and company, every company will need a super cloud. >>Everyone, every company, regardless of size, regardless of location, has to become modernize their infrastructure. And modernizing Infras infrastructure is not just some new servers and new application tools, It's your approach, how you're serving your customers, how you're bringing agility in your organization. I think that is becoming a necessity for every enterprise to survive. >>I wanna get your thoughts on Super Cloud because one of the things Dave Ante and I want to do with Super Cloud and calling it that was we, I, I personally, and I know Dave as well, he can, I'll speak from, he can speak for himself. We didn't like multi-cloud. I mean not because Amazon said don't call things multi-cloud, it just didn't feel right. I mean everyone has multiple clouds by default. If you're running productivity software, you have Azure and Office 365. But it wasn't truly distributed. It wasn't truly decentralized, it wasn't truly cloud enabled. It didn't, it felt like they're not ready for a market yet. Yet public clouds booming on premise. Private cloud and Edge is much more on, you know, more, more dynamic, more real. >>Yeah. I think the reason why we think super cloud is a better term than multi-cloud. Multi-cloud are more than one cloud, but they're disconnected. Okay, you have a productivity cloud, you have a Salesforce cloud, you may have, everyone has an internal cloud, right? So, but they're not connected. So you can say okay, it's more than one cloud. So it's you know, multi-cloud. But super cloud is where you are actually trying to look at this holistically. Whether it is on-prem, whether it is public, whether it's at the edge, it's a store at the branch. You are looking at this as one unit. And that's where we see the term super cloud is more applicable because what are the qualities that you require if you're in a super cloud, right? You need choice of infrastructure, you need, but at the same time you need a single pain, a single platform for you to build your innovations on regardless of which cloud you're doing it on, right? So I think Super Cloud is actually a more tightly integrated orchestrated management philosophy we think. >>So let's get into some of the super cloud type trends that we've been reporting on. Again, the purpose of this event is to, as a pilots, to get the conversations flowing with with the influencers like yourselves who are running companies and building products and the builders, Amazon and Azure are doing extremely well. Google's coming up in third cloudworks in public cloud. We see the use cases on premises use cases. Kubernetes has been an interesting phenomenon because it's become from the developer side a little bit, but a lot of ops people love Kubernetes. It's really more of an ops thing. You mentioned OpenStack earlier. Kubernetes kind of came out of that open stack. We need an orchestration and then containers had a good shot with, with Docker. They re pivoted the company. Now they're all in an open source. So you got containers booming and Kubernetes as a new layer there. What's the, what's the take on that? What does that really mean? Is that a new defacto enabler? It >>Is here. It's for here for sure. Every enterprise somewhere else in the journey is going on. And you know, most companies are, 70 plus percent of them have won two, three container based, Kubernetes based applications now being rolled out. So it's very much here, it is in production at scale by many customers. And the beauty of it is, yes, open source, but the biggest gating factor is the skill set. And that's where we have a phenomenal engineering team, right? So it's, it's one thing to buy a tool >>And just be clear, you're a managed service for Kubernetes. >>We provide, provide a software platform for cloud acceleration as a service and it can run anywhere. It can run in public private. We have customers who do it in truly multi-cloud environments. It runs on the edge, it runs at this in stores are thousands of stores in a retailer. So we provide that and also for specific segments where data sovereignty and data residency are key regulatory reasons. We also un OnPrem as an air gap version. >>Can you give an example on how you guys are deploying your platform to enable a super cloud experience for your >>Customer? Right. So I'll give you two different examples. One is a very large networking company, public networking company. They have, I dunno, hundreds of products, hundreds of r and d teams that are building different, different products. And if you look at few years back, each one was doing it on a different platforms but they really needed to bring the agility and they worked with us now over three years where we are their build test dev pro platform where all their products are built on, right? And it has dramatically increased their agility to release new products. Number two, it actually is a light out operation. In fact the customer says like, like the Maytag service person cuz we provide it as a service and it barely takes one or two people to maintain it for them. >>So it's kinda like an SRE vibe. One person managing a >>Large 4,000 engineers building infrastructure >>On their tools, >>Whatever they want on their tools. They're using whatever app development tools they use, but they use our platform. >>What benefits are they seeing? Are they seeing speed? >>Speed, definitely. Okay. Definitely they're speeding. Speed uniformity because now they're building able to build, so their customers who are using product A and product B are seeing a similar set of tools that are being used. >>So a big problem that's coming outta this super cloud event that we're, we're seeing and we've heard it all here, ops and security teams cuz they're kind of too part of one theme, but ops and security specifically need to catch up speed wise. Are you delivering that value to ops and security? Right. >>So we, we work with ops and security teams and infrastructure teams and we layer on top of that. We have like a platform team. If you think about it, depending on where you have data centers, where you have infrastructure, you have multiple teams, okay, but you need a unified platform. Who's your buyer? Our buyer is usually, you know, the product divisions of companies that are looking at or the CTO would be a buyer for us functionally cio definitely. So it it's, it's somewhere in the DevOps to infrastructure. But the ideal one we are beginning to see now many large corporations are really looking at it as a platform and saying we have a platform group on which any app can be developed and it is run on any infrastructure. So the platform engineering teams, >>You working two sides of that coin. You've got the dev side and then >>And then infrastructure >>Side side, okay. >>Another customer like give you an example, which I would say is kind of the edge of the store. So they have thousands of stores. Retail, retail, you know food retailer, right? They have thousands of stores that are on the globe, 50,000, 60,000. And they really want to enhance the customer experience that happens when you either order the product or go into the store and pick up your product or buy or browse or sit there. They have applications that were written in the nineties and then they have very modern AIML applications today. They want something that will not have to send an IT person to install a rack in the store or they can't move everything to the cloud because the store operations has to be local. The menu changes based on, It's a classic edge. It's classic edge. Yeah. Right. They can't send it people to go install rack access servers then they can't sell software people to go install the software and any change you wanna put through that, you know, truck roll. So they've been working with us where all they do is they ship, depending on the size of the store, one or two or three little servers with instructions that >>You, you say little servers like how big one like a net box box, like a small little >>Box and all the person in the store has to do like what you and I do at home and we get a, you know, a router is connect the power, connect the internet and turn the switch on. And from there we pick it up. >>Yep. >>We provide the operating system, everything and then the applications are put on it. And so that dramatically brings the velocity for them. They manage >>Thousands of them. True plug and play >>Two, plug and play thousands of stores. They manage it centrally. We do it for them, right? So, so that's another example where on the edge then we have some customers who have both a large private presence and one of the public clouds. Okay. But they want to have the same platform layer of orchestration and management that they can use regardless of the location. So >>You guys got some success. Congratulations. Got some traction there. It's awesome. The question I want to ask you is that's come up is what is truly cloud native? Cuz there's lift and shift of the cloud >>That's not cloud native. >>Then there's cloud native. Cloud native seems to be the driver for the super cloud. How do you talk to customers? How do you explain when someone says what's cloud native, what isn't cloud native? >>Right. Look, I think first of all, the best place to look at what is the definition and what are the attributes and characteristics of what is truly a cloud native, is CNC foundation. And I think it's very well documented where you, well >>Con of course Detroit's >>Coming here, so, so it's already there, right? So, so we follow that very closely, right? I think just lifting and shifting your 20 year old application onto a data center somewhere is not cloud native. Okay? You can't put to cloud native, you have to rewrite and redevelop your application and business logic using modern tools. Hopefully more open source and, and I think that's what Cloudnative is and we are seeing a lot of our customers in that journey. Now everybody wants to be cloudnative, but it's not that easy, okay? Because it's, I think it's first of all, skill set is very important. Uniformity of tools that there's so many tools there. Thousands and thousands of tools you could spend your time figuring out which tool to use. Okay? So I think the complexities there, but the business benefits of agility and uniformity and customer experience are truly them. >>And I'll give you an example. I don't know how clear native they are, right? And they're not a customer of ours, but you order pizzas, you do, right? If you just watch the pizza industry, how dominoes actually increase their share and mind share and wallet share was not because they were making better pizzas or not, I don't know anything about that, but the whole experience of how you order, how you watch what's happening, how it's delivered. There were a pioneer in it. To me, those are the kinds of customer experiences that cloud native can provide. >>Being agility and having that flow to the application changes what the expectations of the, for the customer. >>Customer, the customer's expectations change, right? Once you get used to a better customer experience, you learn >>Best car. To wrap it up, I wanna just get your perspective again. One of the benefits of chatting with you here and having you part of the Super Cloud 22 is you've seen many cycles, you have a lot of insights. I want to ask you, given your career where you've been and what you've done and now the CEO platform nine, how would you compare what's happening now with other inflection points in the industry? And you've been, again, you've been an entrepreneur, you sold your company to Oracle, you've been seeing the big companies, you've seen the different waves. What's going on right now put into context this moment in time around Super >>Cloud. Sure. I think as you said, a lot of battles. Cars being been, been in an asp, been in a realtime software company, being in large enterprise software houses and a transformation. I've been on the app side, I did the infrastructure right and then tried to build our own platforms. I've gone through all of this myself with a lot of lessons learned in there. I think this is an event which is happening now for companies to go through to become cloud native and digitalize. If I were to look back and look at some parallels of the tsunami that's going on is a couple of paddles come to me. One is, think of it, which was forced to honors like y2k. Everybody around the world had to have a plan, a strategy, and an execution for y2k. I would say the next big thing was e-commerce. I think e-commerce has been pervasive right across all industries. >>And disruptive. >>And disruptive, extremely disruptive. If you did not adapt and adapt and accelerate your e-commerce initiative, you were, it was an existence question. Yeah. I think we are at that pivotal moment now in companies trying to become digital and cloudnative that know that is what I see >>Happening there. I think that that e-commerce was interesting and I think just to riff with you on that is that it's disrupting and refactoring the business models. I think that is something that's coming out of this is that it's not just completely changing the game, it's just changing how you operate, >>How you think, and how you operate. See, if you think about the early days of eCommerce, just putting up a shopping cart didn't made you an eCommerce or an E retailer or an e e customer, right? Or so. I think it's the same thing now is I think this is a fundamental shift on how you're thinking about your business. How are you gonna operate? How are you gonna service your customers? I think it requires that just lift and shift is not gonna work. >>Mascar, thank you for coming on, spending the time to come in and share with our community and being part of Super Cloud 22. We really appreciate, we're gonna keep this open. We're gonna keep this conversation going even after the event, to open up and look at the structural changes happening now and continue to look at it in the open in the community. And we're gonna keep this going for, for a long, long time as we get answers to the problems that customers are looking for with cloud cloud computing. I'm Sean Feer with Super Cloud 22 in the Cube. Thanks for watching. >>Thank you. Thank you, John. >>Hello. Welcome back. This is the end of our program, our special presentation with Platform nine on cloud native at scale, enabling the super cloud. We're continuing the theme here. You heard the interviews Super Cloud and its challenges, new opportunities around the solutions around like Platform nine and others with Arlon. This is really about the edge situations on the internet and managing the edge multiple regions, avoiding vendor lock in. This is what this new super cloud is all about. The business consequences we heard and and the wide ranging conversations around what it means for open source and the complexity problem all being solved. I hope you enjoyed this program. There's a lot of moving pieces and things to configure with cloud native install, all making it easier for you here with Super Cloud and of course Platform nine contributing to that. Thank you for watching.

Published Date : Oct 18 2022

SUMMARY :

See you soon. but kind of the same as the first generation. And so you gotta rougher and IT kind of coming together, but you also got this idea of regions, So I think, you know, in in the context of this, the, this, Can you scope the scale of the problem? the problem that the scale creates, you know, there's various problems, but I think one, And that is just, you know, one example of an issue that happens. Can you share your reaction to that and how you see this playing out? which is, you know, you have your perfectly written code that is operating just fine on your And so as you give that change to then run at your production edge location, And you guys have a solution you're launching. So what our LA you do in a But again, it gets, you know, processed in a standardized way. So keeping it smooth, the assembly on things are flowing. Because developers, you know, there is, developers are responsible for one picture of So the DevOps is the cloud needed developer's. And so Arlon addresses that problem at the heart of it, and it does that using existing So I'm assuming you have that thought through, can you share open source and commercial relationship? products starting all the way with fision, which was a serverless product, you know, that we had built to buy, but also actually kind of date the application, if you will. I think one is just, you know, this, this, this cloud native space is so vast I have to ask you now, let's get into what's in it for the customer. And so, and there's multiple, you know, enterprises that we talk to, shared that this is a major challenge we have today because we have, you know, I'm an enterprise, I got tight, you know, I love the open source trying And that's where, you know, platform line has a role to play, which is when been some of the feedback? And the customer said, If you had it today, I would've purchased it. So next question is, what is the solution to the customer? So I think, you know, one of the core tenets of Platform nine has always been been that And now they have management challenges. Especially operationalizing the clusters, whether they want to kind of reset everything and remove things around and And And arlon by the way, also helps in that direction, but you also need I mean, what's the impact if you do all those things, as you mentioned, what's the impact of the apps? And so this really gives them, you know, the right tooling for that. So this is actually a great kind of relevant point, you know, as cloud becomes more scalable, So these are the kinds of challenges, and those are the pain points, which is, you know, if you're looking to to be supporting the business, you know, the back office and the maybe terminals and that, you know, that the, the technology that's, you know, that's gonna drive your top line is If all the things happen the way we want 'em to happen, The magic wand, the magic dust, he's running that at a nimble, nimble team size of at the most, Just taking care of the CIO doesn't exist. Thank you for your time. Thanks for Great to see you and great to see congratulations on the success And now the Kubernetes layer that we've been working on for years is Exactly. you know, the new Arlon, our, our lawn, and you guys just launched the So I think, I think I'm, I'm glad you mentioned it, everybody or most people know about infrastructures I mean now with open source so popular, you don't have to have to write a lot of code, you know, the emergence of systems and layers to help you manage that complexity is becoming That's, I wrote a LinkedIn post today was comments about, you know, hey, enterprise is a new breed. you know, you think you have things under control, but some people from various teams will make changes here in the industry technical, how would you look at the super cloud trend that's emerging? the way I interpret that is, you know, clouds and infrastructure, It's IBM's, you know, connection for the internet at the, this layer that has simplified, you know, computing and, the physics and the, the atoms, the pro, you know, this is where the innovation, the state that you want and more consistency. the DevOps engineers, they get a a ways to So how do you guys look at the workload native ecosystem like K native, where you can express your application in more at It's kinda like an EC two instance, spin up a cluster. And then you can stamp out your app, your applications and your clusters and manage them And it's like a playbook. You just tell the system what you want and then You need edge's code, but then you can configure the code by just saying do it. And that is just complexity for the people operating this or configuring this, What do you expect to see at Coan this year? If you look at a stack necessary for hosting We would to joke we, you know, about, about the dream. So the successor to Kubernetes, you know, I don't Yeah, I think the, the reigning in the chaos is key, you know, Now we have now visibility into But roughly speaking when we say, you know, They have some SaaS apps, but mostly it's the ecosystem. you know, that they're, they will keep catering to, they, they will continue to find terms of, you know, the the new risk and arm ecosystems it's, it's hardware and he got software and you got middleware and he kind over, Great to have you on. What's interest thing about what you guys are doing at Platform nine? clouds, you know, the application world is moving very fast in trying to Patrick, we were talking before we came on stage here about your background and we were gonna talk about the glory days in So you saw that whole growth. So I think things are in And if you look at the tech trends, GDPs down, but not tech. Cuz the pandemic showed everyone digital transformation is here and more And modernizing Infras infrastructure is not you know, more, more dynamic, more real. So it's you know, multi-cloud. So you got containers And you know, most companies are, 70 plus percent of them have won two, It runs on the edge, And if you look at few years back, each one was doing So it's kinda like an SRE vibe. Whatever they want on their tools. to build, so their customers who are using product A and product B are seeing a similar set Are you delivering that value to ops and security? Our buyer is usually, you know, the product divisions of companies You've got the dev side and then that happens when you either order the product or go into the store and pick up your product or like what you and I do at home and we get a, you know, a router is And so that dramatically brings the velocity for them. Thousands of them. of the public clouds. The question I want to ask you is that's How do you explain when someone says what's cloud native, what isn't cloud native? is the definition and what are the attributes and characteristics of what is truly a cloud native, Thousands and thousands of tools you could spend your time figuring out which I don't know anything about that, but the whole experience of how you order, Being agility and having that flow to the application changes what the expectations of One of the benefits of chatting with you here and been on the app side, I did the infrastructure right and then tried to build our own If you did not adapt and adapt and accelerate I think that that e-commerce was interesting and I think just to riff with you on that is that it's disrupting How are you gonna service your Mascar, thank you for coming on, spending the time to come in and share with our community and being part of Thank you, John. I hope you enjoyed this program.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
DavePERSON

0.99+

AmazonORGANIZATION

0.99+

PatrickPERSON

0.99+

Paul MorritzPERSON

0.99+

VascarPERSON

0.99+

Adrian KaroPERSON

0.99+

Sean FeerPERSON

0.99+

2000DATE

0.99+

John FurryPERSON

0.99+

oneQUANTITY

0.99+

IBMORGANIZATION

0.99+

50,000QUANTITY

0.99+

JohnPERSON

0.99+

twoQUANTITY

0.99+

John FurrPERSON

0.99+

Vascar GordePERSON

0.99+

John FurPERSON

0.99+

Meor Ma MakowskiPERSON

0.99+

Silicon ValleyLOCATION

0.99+

MakoskiPERSON

0.99+

thousandsQUANTITY

0.99+

14 yearsQUANTITY

0.99+

OracleORGANIZATION

0.99+

12 yearsQUANTITY

0.99+

2001DATE

0.99+

GortPERSON

0.99+

MascarPERSON

0.99+

AWSORGANIZATION

0.99+

Mariana TesselPERSON

0.99+

GoogleORGANIZATION

0.99+

hundredsQUANTITY

0.99+

Palo Alto, CaliforniaLOCATION

0.99+

TwoQUANTITY

0.99+

OneQUANTITY

0.99+

millionsQUANTITY

0.99+

two partsQUANTITY

0.99+

tensQUANTITY

0.99+

LinkedInORGANIZATION

0.99+

next yearDATE

0.99+

ArlonORGANIZATION

0.99+

todayDATE

0.99+

KubernetesTITLE

0.99+

eight years agoDATE

0.99+

one siteQUANTITY

0.99+

ThousandsQUANTITY

0.99+

second partQUANTITY

0.99+

bothQUANTITY

0.99+

each componentQUANTITY

0.99+

AMDORGANIZATION

0.99+

Office 365TITLE

0.99+

one unitQUANTITY

0.99+

one flavorQUANTITY

0.99+

4,000 engineersQUANTITY

0.99+

first generationQUANTITY

0.99+

Super CloudTITLE

0.99+

Dave AntePERSON

0.99+

firstQUANTITY

0.99+

VicPERSON

0.99+

two sidesQUANTITY

0.99+

VMwareORGANIZATION

0.99+

two thousandsQUANTITY

0.99+

BickleyPERSON

0.98+

tens of thousands of nodesQUANTITY

0.98+

AzureTITLE

0.98+

two peopleQUANTITY

0.98+

each siteQUANTITY

0.98+

KubernetesPERSON

0.98+

super cloudTITLE

0.98+

One personQUANTITY

0.98+

two factorsQUANTITY

0.98+

ArlanORGANIZATION

0.98+

Horizon3.ai Signal | Horizon3.ai Partner Program Expands Internationally


 

hello I'm John Furrier with thecube and welcome to this special presentation of the cube and Horizon 3.ai they're announcing a global partner first approach expanding their successful pen testing product Net Zero you're going to hear from leading experts in their staff their CEO positioning themselves for a successful Channel distribution expansion internationally in Europe Middle East Africa and Asia Pacific in this Cube special presentation you'll hear about the expansion the expanse partner program giving Partners a unique opportunity to offer Net Zero to their customers Innovation and Pen testing is going International with Horizon 3.ai enjoy the program [Music] welcome back everyone to the cube and Horizon 3.ai special presentation I'm John Furrier host of thecube we're here with Jennifer Lee head of Channel sales at Horizon 3.ai Jennifer welcome to the cube thanks for coming on great well thank you for having me so big news around Horizon 3.aa driving Channel first commitment you guys are expanding the channel partner program to include all kinds of new rewards incentives training programs help educate you know Partners really drive more recurring Revenue certainly cloud and Cloud scale has done that you got a great product that fits into that kind of Channel model great Services you can wrap around it good stuff so let's get into it what are you guys doing what are what are you guys doing with this news why is this so important yeah for sure so um yeah we like you said we recently expanded our Channel partner program um the driving force behind it was really just um to align our like you said our Channel first commitment um and creating awareness around the importance of our partner ecosystems um so that's it's really how we go to market is is through the channel and a great International Focus I've talked with the CEO so you know about the solution and he broke down all the action on why it's important on the product side but why now on the go to market change what's the what's the why behind this big this news on the channel yeah for sure so um we are doing this now really to align our business strategy which is built on the concept of enabling our partners to create a high value high margin business on top of our platform and so um we offer a solution called node zero it provides autonomous pen testing as a service and it allows organizations to continuously verify their security posture um so we our company vision we have this tagline that states that our pen testing enables organizations to see themselves Through The Eyes of an attacker and um we use the like the attacker's perspective to identify exploitable weaknesses and vulnerabilities so we created this partner program from a perspective of the partner so the partner's perspective and we've built It Through The Eyes of our partner right so we're prioritizing really what the partner is looking for and uh will ensure like Mutual success for us yeah the partners always want to get in front of the customers and bring new stuff to them pen tests have traditionally been really expensive uh and so bringing it down in one to a service level that's one affordable and has flexibility to it allows a lot of capability so I imagine people getting excited by it so I have to ask you about the program What specifically are you guys doing can you share any details around what it means for the partners what they get what's in it for them can you just break down some of the mechanics and mechanisms or or details yeah yep um you know we're really looking to create business alignment um and like I said establish Mutual success with our partners so we've got two um two key elements that we were really focused on um that we bring to the partners so the opportunity the profit margin expansion is one of them and um a way for our partners to really differentiate themselves and stay relevant in the market so um we've restructured our discount model really um you know highlighting profitability and maximizing profitability and uh this includes our deal registration we've we've created deal registration program we've increased discount for partners who take part in our partner certification uh trainings and we've we have some other partner incentives uh that we we've created that that's going to help out there we've we put this all so we've recently Gone live with our partner portal um it's a Consolidated experience for our partners where they can access our our sales tools and we really view our partners as an extension of our sales and Technical teams and so we've extended all of our our training material that we use internally we've made it available to our partners through our partner portal um we've um I'm trying I'm thinking now back what else is in that partner portal here we've got our partner certification information so all the content that's delivered during that training can be found in the portal we've got deal registration uh um co-branded marketing materials pipeline management and so um this this portal gives our partners a One-Stop place to to go to find all that information um and then just really quickly on the second part of that that I mentioned is our technology really is um really disruptive to the market so you know like you said autonomous pen testing it's um it's still it's well it's still still relatively new topic uh for security practitioners and um it's proven to be really disruptive so um that on top of um just well recently we found an article that um that mentioned by markets and markets that reports that the global pen testing markets really expanding and so it's expected to grow to like 2.7 billion um by 2027. so the Market's there right the Market's expanding it's growing and so for our partners it's just really allows them to grow their revenue um across their customer base expand their customer base and offering this High profit margin while you know getting in early to Market on this just disruptive technology big Market a lot of opportunities to make some money people love to put more margin on on those deals especially when you can bring a great solution that everyone knows is hard to do so I think that's going to provide a lot of value is there is there a type of partner that you guys see emerging or you aligning with you mentioned the alignment with the partners I can see how that the training and the incentives are all there sounds like it's all going well is there a type of partner that's resonating the most or is there categories of partners that can take advantage of this yeah absolutely so we work with all different kinds of Partners we work with our traditional resale Partners um we've worked we're working with systems integrators we have a really strong MSP mssp program um we've got Consulting partners and the Consulting Partners especially with the ones that offer pen test services so we they use us as a as we act as a force multiplier just really offering them profit margin expansion um opportunity there we've got some technology partner partners that we really work with for co-cell opportunities and then we've got our Cloud Partners um you'd mentioned that earlier and so we are in AWS Marketplace so our ccpo partners we're part of the ISP accelerate program um so we we're doing a lot there with our Cloud partners and um of course we uh we go to market with uh distribution Partners as well gotta love the opportunity for more margin expansion every kind of partner wants to put more gross profit on their deals is there a certification involved I have to ask is there like do you get do people get certified or is it just you get trained is it self-paced training is it in person how are you guys doing the whole training certification thing because is that is that a requirement yeah absolutely so we do offer a certification program and um it's been very popular this includes a a seller's portion and an operator portion and and so um this is at no cost to our partners and um we operate both virtually it's it's law it's virtually but live it's not self-paced and we also have in person um you know sessions as well and we also can customize these to any partners that have a large group of people and we can just we can do one in person or virtual just specifically for that partner well any kind of incentive opportunities and marketing opportunities everyone loves to get the uh get the deals just kind of rolling in leads from what we can see if our early reporting this looks like a hot product price wise service level wise what incentive do you guys thinking about and and Joint marketing you mentioned co-sell earlier in pipeline so I was kind of kind of honing in on that piece sure and yes and then to follow along with our partner certification program we do incentivize our partners there if they have a certain number certified their discount increases so that's part of it we have our deal registration program that increases discount as well um and then we do have some um some partner incentives that are wrapped around meeting setting and um moving moving opportunities along to uh proof of value gotta love the education driving value I have to ask you so you've been around the industry you've seen the channel relationships out there you're seeing companies old school new school you know uh Horizon 3.ai is kind of like that new school very cloud specific a lot of Leverage with we mentioned AWS and all the clouds um why is the company so hot right now why did you join them and what's why are people attracted to this company what's the what's the attraction what's the vibe what do you what do you see and what what do you use what did you see in in this company well this is just you know like I said it's very disruptive um it's really in high demand right now and um and and just because because it's new to Market and uh a newer technology so we are we can collaborate with a manual pen tester um we can you know we can allow our customers to run their pen test um with with no specialty teams and um and and then so we and like you know like I said we can allow our partners can actually build businesses profitable businesses so we can they can use our product to increase their services revenue and um and build their business model you know around around our services what's interesting about the pen test thing is that it's very expensive and time consuming the people who do them are very talented people that could be working on really bigger things in the in absolutely customers so bringing this into the channel allows them if you look at the price Delta between a pen test and then what you guys are offering I mean that's a huge margin Gap between street price of say today's pen test and what you guys offer when you show people that they follow do they say too good to be true I mean what are some of the things that people say when you kind of show them that are they like scratch their head like come on what's the what's the catch here right so the cost savings is a huge is huge for us um and then also you know like I said working as a force multiplier with a pen testing company that offers the services and so they can they can do their their annual manual pen tests that may be required around compliance regulations and then we can we can act as the continuous verification of their security um um you know that that they can run um weekly and so it's just um you know it's just an addition to to what they're offering already and an expansion so Jennifer thanks for coming on thecube really appreciate you uh coming on sharing the insights on the channel uh what's next what can we expect from the channel group what are you thinking what's going on right so we're really looking to expand our our Channel um footprint and um very strategically uh we've got um we've got some big plans um for for Horizon 3.ai awesome well thanks for coming on really appreciate it you're watching thecube the leader in high tech Enterprise coverage [Music] [Music] hello and welcome to the Cube's special presentation with Horizon 3.ai with Raina Richter vice president of emea Europe Middle East and Africa and Asia Pacific APAC for Horizon 3 today welcome to this special Cube presentation thanks for joining us thank you for the invitation so Horizon 3 a guy driving Global expansion big international news with a partner first approach you guys are expanding internationally let's get into it you guys are driving this new expanse partner program to new heights tell us about it what are you seeing in the momentum why the expansion what's all the news about well I would say uh yeah in in international we have I would say a similar similar situation like in the US um there is a global shortage of well-educated penetration testers on the one hand side on the other side um we have a raising demand of uh network and infrastructure security and with our approach of an uh autonomous penetration testing I I believe we are totally on top of the game um especially as we have also now uh starting with an international instance that means for example if a customer in Europe is using uh our service node zero he will be connected to a node zero instance which is located inside the European Union and therefore he has doesn't have to worry about the conflict between the European the gdpr regulations versus the US Cloud act and I would say there we have a total good package for our partners that they can provide differentiators to their customers you know we've had great conversations here on thecube with the CEO and the founder of the company around the leverage of the cloud and how successful that's been for the company and honestly I can just Connect the Dots here but I'd like you to weigh in more on how that translates into the go to market here because you got great Cloud scale with with the security product you guys are having success with great leverage there I've seen a lot of success there what's the momentum on the channel partner program internationally why is it so important to you is it just the regional segmentation is it the economics why the momentum well there are it's there are multiple issues first of all there is a raising demand in penetration testing um and don't forget that uh in international we have a much higher level in number a number or percentage in SMB and mid-market customers so these customers typically most of them even didn't have a pen test done once a year so for them pen testing was just too expensive now with our offering together with our partners we can provide different uh ways how customers could get an autonomous pen testing done more than once a year with even lower costs than they had with with a traditional manual paint test so and that is because we have our uh Consulting plus package which is for typically pain testers they can go out and can do a much faster much quicker and their pain test at many customers once in after each other so they can do more pain tests on a lower more attractive price on the other side there are others what even the same ones who are providing um node zero as an mssp service so they can go after s p customers saying okay well you only have a couple of hundred uh IP addresses no worries we have the perfect package for you and then you have let's say the mid Market let's say the thousands and more employees then they might even have an annual subscription very traditional but for all of them it's all the same the customer or the service provider doesn't need a piece of Hardware they only need to install a small piece of a Docker container and that's it and that makes it so so smooth to go in and say okay Mr customer we just put in this this virtual attacker into your network and that's it and and all the rest is done and within within three clicks they are they can act like a pen tester with 20 years of experience and that's going to be very Channel friendly and partner friendly I can almost imagine so I have to ask you and thank you for calling the break calling out that breakdown and and segmentation that was good that was very helpful for me to understand but I want to follow up if you don't mind um what type of partners are you seeing the most traction with and why well I would say at the beginning typically you have the the innovators the early adapters typically Boutique size of Partners they start because they they are always looking for Innovation and those are the ones you they start in the beginning so we have a wide range of Partners having mostly even um managed by the owner of the company so uh they immediately understand okay there is the value and they can change their offering they're changing their offering in terms of penetration testing because they can do more pen tests and they can then add other ones or we have those ones who offer 10 tests services but they did not have their own pen testers so they had to go out on the open market and Source paint testing experts um to get the pen test at a particular customer done and now with node zero they're totally independent they can't go out and say okay Mr customer here's the here's the service that's it we turn it on and within an hour you're up and running totally yeah and those pen tests are usually expensive and hard to do now it's right in line with the sales delivery pretty interesting for a partner absolutely but on the other hand side we are not killing the pain testers business we do something we're providing with no tiers I would call something like the foundation work the foundational work of having an an ongoing penetration testing of the infrastructure the operating system and the pen testers by themselves they can concentrate in the future on things like application pen testing for example so those Services which we we're not touching so we're not killing the paint tester Market we're just taking away the ongoing um let's say foundation work call it that way yeah yeah that was one of my questions I was going to ask is there's a lot of interest in this autonomous pen testing one because it's expensive to do because those skills are required are in need and they're expensive so you kind of cover the entry level and the blockers that are in there I've seen people say to me this pen test becomes a blocker for getting things done so there's been a lot of interest in the autonomous pen testing and for organizations to have that posture and it's an overseas issue too because now you have that that ongoing thing so can you explain that particular benefit for an organization to have that continuously verifying an organization's posture yep certainly so I would say um typically you are you you have to do your patches you have to bring in new versions of operating systems of different Services of uh um operating systems of some components and and they are always bringing new vulnerabilities the difference here is that with node zero we are telling the customer or the partner package we're telling them which are the executable vulnerabilities because previously they might have had um a vulnerability scanner so this vulnerability scanner brought up hundreds or even thousands of cves but didn't say anything about which of them are vulnerable really executable and then you need an expert digging in one cve after the other finding out is it is it really executable yes or no and that is where you need highly paid experts which we have a shortage so with notes here now we can say okay we tell you exactly which ones are the ones you should work on because those are the ones which are executable we rank them accordingly to the risk level how easily they can be used and by a sudden and then the good thing is convert it or indifference to the traditional penetration test they don't have to wait for a year for the next pain test to find out if the fixing was effective they weren't just the next scan and say Yes closed vulnerability is gone the time is really valuable and if you're doing any devops Cloud native you're always pushing new things so pen test ongoing pen testing is actually a benefit just in general as a kind of hygiene so really really interesting solution really bring that global scale is going to be a new new coverage area for us for sure I have to ask you if you don't mind answering what particular region are you focused on or plan to Target for this next phase of growth well at this moment we are concentrating on the countries inside the European Union Plus the United Kingdom um but we are and they are of course logically I'm based into Frankfurt area that means we cover more or less the countries just around so it's like the total dark region Germany Switzerland Austria plus the Netherlands but we also already have Partners in the nordics like in Finland or in Sweden um so it's it's it it's rapidly we have Partners already in the UK and it's rapidly growing so I'm for example we are now starting with some activities in Singapore um um and also in the in the Middle East area um very important we uh depending on let's say the the way how to do business currently we try to concentrate on those countries where we can have um let's say um at least English as an accepted business language great is there any particular region you're having the most success with right now is it sounds like European Union's um kind of first wave what's them yes that's the first definitely that's the first wave and now we're also getting the uh the European instance up and running it's clearly our commitment also to the market saying okay we know there are certain dedicated uh requirements and we take care of this and and we're just launching it we're building up this one uh the instance um in the AWS uh service center here in Frankfurt also with some dedicated Hardware internet in a data center in Frankfurt where we have with the date six by the way uh the highest internet interconnection bandwidth on the planet so we have very short latency to wherever you are on on the globe that's a great that's a great call outfit benefit too I was going to ask that what are some of the benefits your partners are seeing in emea and Asia Pacific well I would say um the the benefits is for them it's clearly they can they can uh talk with customers and can offer customers penetration testing which they before and even didn't think about because it penetrates penetration testing in a traditional way was simply too expensive for them too complex the preparation time was too long um they didn't have even have the capacity uh to um to support a pain an external pain tester now with this service you can go in and say even if they Mr customer we can do a test with you in a couple of minutes within we have installed the docker container within 10 minutes we have the pen test started that's it and then we just wait and and I would say that is we'll we are we are seeing so many aha moments then now because on the partner side when they see node zero the first time working it's like this wow that is great and then they work out to customers and and show it to their typically at the beginning mostly the friendly customers like wow that's great I need that and and I would say um the feedback from the partners is that is a service where I do not have to evangelize the customer everybody understands penetration testing I don't have to say describe what it is they understand the customer understanding immediately yes penetration testing good about that I know I should do it but uh too complex too expensive now with the name is for example as an mssp service provided from one of our partners but it's getting easy yeah it's great and it's great great benefit there I mean I gotta say I'm a huge fan of what you guys are doing I like this continuous automation that's a major benefit to anyone doing devops or any kind of modern application development this is just a godsend for them this is really good and like you said the pen testers that are doing it they were kind of coming down from their expertise to kind of do things that should have been automated they get to focus on the bigger ticket items that's a really big point so we free them we free the pain testers for the higher level elements of the penetration testing segment and that is typically the application testing which is currently far away from being automated yeah and that's where the most critical workloads are and I think this is the nice balance congratulations on the international expansion of the program and thanks for coming on this special presentation really I really appreciate it thank you you're welcome okay this is thecube special presentation you know check out pen test automation International expansion Horizon 3 dot AI uh really Innovative solution in our next segment Chris Hill sector head for strategic accounts will discuss the power of Horizon 3.ai and Splunk in action you're watching the cube the leader in high tech Enterprise coverage foreign [Music] [Music] welcome back everyone to the cube and Horizon 3.ai special presentation I'm John Furrier host of thecube we're with Chris Hill sector head for strategic accounts and federal at Horizon 3.ai a great Innovative company Chris great to see you thanks for coming on thecube yeah like I said uh you know great to meet you John long time listener first time caller so excited to be here with you guys yeah we were talking before camera you had Splunk back in 2013 and I think 2012 was our first splunk.com and boy man you know talk about being in the right place at the right time now we're at another inflection point and Splunk continues to be relevant um and continuing to have that data driving Security in that interplay and your CEO former CTO of his plug as well at Horizon who's been on before really Innovative product you guys have but you know yeah don't wait for a breach to find out if you're logging the right data this is the topic of this thread Splunk is very much part of this new international expansion announcement uh with you guys tell us what are some of the challenges that you see where this is relevant for the Splunk and Horizon AI as you guys expand uh node zero out internationally yeah well so across so you know my role uh within Splunk it was uh working with our most strategic accounts and so I looked back to 2013 and I think about the sales process like working with with our small customers you know it was um it was still very siled back then like I was selling to an I.T team that was either using this for it operations um we generally would always even say yeah although we do security we weren't really designed for it we're a log management tool and we I'm sure you remember back then John we were like sort of stepping into the security space and and the public sector domain that I was in you know security was 70 of what we did when I look back to sort of uh the transformation that I was witnessing in that digital transformation um you know when I look at like 2019 to today you look at how uh the IT team and the security teams are being have been forced to break down those barriers that they used to sort of be silent away would not commute communicate one you know the security guys would be like oh this is my box I.T you're not allowed in today you can't get away with that and I think that the value that we bring to you know and of course Splunk has been a huge leader in that space and continues to do Innovation across the board but I think what we've we're seeing in the space and I was talking with Patrick Coughlin the SVP of uh security markets about this is that you know what we've been able to do with Splunk is build a purpose-built solution that allows Splunk to eat more data so Splunk itself is ulk know it's an ingest engine right the great reason people bought it was you could build these really fast dashboards and grab intelligence out of it but without data it doesn't do anything right so how do you drive and how do you bring more data in and most importantly from a customer perspective how do you bring the right data in and so if you think about what node zero and what we're doing in a horizon 3 is that sure we do pen testing but because we're an autonomous pen testing tool we do it continuously so this whole thought I'd be like oh crud like my customers oh yeah we got a pen test coming up it's gonna be six weeks the week oh yeah you know and everyone's gonna sit on their hands call me back in two months Chris we'll talk to you then right not not a real efficient way to test your environment and shoot we saw that with Uber this week right um you know and that's a case where we could have helped oh just right we could explain the Uber thing because it was a contractor just give a quick highlight of what happened so you can connect the doctor yeah no problem so um it was uh I got I think it was yeah one of those uh you know games where they would try and test an environment um and with the uh pen tester did was he kept on calling them MFA guys being like I need to reset my password we need to set my right password and eventually the um the customer service guy said okay I'm resetting it once he had reset and bypassed the multi-factor authentication he then was able to get in and get access to the building area that he was in or I think not the domain but he was able to gain access to a partial part of that Network he then paralleled over to what I would assume is like a VA VMware or some virtual machine that had notes that had all of the credentials for logging into various domains and So within minutes they had access and that's the sort of stuff that we do you know a lot of these tools like um you know you think about the cacophony of tools that are out there in a GTA architect architecture right I'm gonna get like a z-scale or I'm going to have uh octum and I have a Splunk I've been into the solar system I mean I don't mean to name names we have crowdstriker or Sentinel one in there it's just it's a cacophony of things that don't work together they weren't designed work together and so we have seen so many times in our business through our customer support and just working with customers when we do their pen tests that there will be 5 000 servers out there three are misconfigured those three misconfigurations will create the open door because remember the hacker only needs to be right once the defender needs to be right all the time and that's the challenge and so that's what I'm really passionate about what we're doing uh here at Horizon three I see this my digital transformation migration and security going on which uh we're at the tip of the spear it's why I joined sey Hall coming on this journey uh and just super excited about where the path's going and super excited about the relationship with Splunk I get into more details on some of the specifics of that but um you know well you're nailing I mean we've been doing a lot of things on super cloud and this next gen environment we're calling it next gen you're really seeing devops obviously devsecops has already won the it role has moved to the developer shift left is an indicator of that it's one of the many examples higher velocity code software supply chain you hear these things that means that it is now in the developer hands it is replaced by the new Ops data Ops teams and security where there's a lot of horizontal thinking to your point about access there's no more perimeter huge 100 right is really right on things one time you know to get in there once you're in then you can hang out move around move laterally big problem okay so we get that now the challenges for these teams as they are transitioning organizationally how do they figure out what to do okay this is the next step they already have Splunk so now they're kind of in transition while protecting for a hundred percent ratio of success so how would you look at that and describe the challenge is what do they do what is it what are the teams facing with their data and what's next what are they what are they what action do they take so let's use some vernacular that folks will know so if I think about devsecops right we both know what that means that I'm going to build security into the app it normally talks about sec devops right how am I building security around the perimeter of what's going inside my ecosystem and what are they doing and so if you think about what we're able to do with somebody like Splunk is we can pen test the entire environment from Soup To Nuts right so I'm going to test the end points through to its I'm going to look for misconfigurations I'm going to I'm going to look for um uh credential exposed credentials you know I'm going to look for anything I can in the environment again I'm going to do it at light speed and and what what we're doing for that SEC devops space is to you know did you detect that we were in your environment so did we alert Splunk or the Sim that there's someone in the environment laterally moving around did they more importantly did they log us into their environment and when do they detect that log to trigger that log did they alert on us and then finally most importantly for every CSO out there is going to be did they stop us and so that's how we we do this and I think you when speaking with um stay Hall before you know we've come up with this um boils but we call it fine fix verifying so what we do is we go in is we act as the attacker right we act in a production environment so we're not going to be we're a passive attacker but we will go in on credentialed on agents but we have to assume to have an assumed breach model which means we're going to put a Docker container in your environment and then we're going to fingerprint the environment so we're going to go out and do an asset survey now that's something that's not something that Splunk does super well you know so can Splunk see all the assets do the same assets marry up we're going to log all that data and think and then put load that into this long Sim or the smoke logging tools just to have it in Enterprise right that's an immediate future ad that they've got um and then we've got the fix so once we've completed our pen test um we are then going to generate a report and we can talk about these in a little bit later but the reports will show an executive summary the assets that we found which would be your asset Discovery aspect of that a fix report and the fixed report I think is probably the most important one it will go down and identify what we did how we did it and then how to fix that and then from that the pen tester or the organization should fix those then they go back and run another test and then they validate like a change detection environment to see hey did those fixes taste play take place and you know snehaw when he was the CTO of jsoc he shared with me a number of times about it's like man there would be 15 more items on next week's punch sheet that we didn't know about and it's and it has to do with how we you know how they were uh prioritizing the cves and whatnot because they would take all CBDs it was critical or non-critical and it's like we are able to create context in that environment that feeds better information into Splunk and whatnot that brings that brings up the efficiency for Splunk specifically the teams out there by the way the burnout thing is real I mean this whole I just finished my list and I got 15 more or whatever the list just can keeps growing how did node zero specifically help Splunk teams be more efficient like that's the question I want to get at because this seems like a very scale way for Splunk customers and teams service teams to be more so the question is how does node zero help make Splunk specifically their service teams be more efficient so so today in our early interactions we're building customers we've seen are five things um and I'll start with sort of identifying the blind spots right so kind of what I just talked about with you did we detect did we log did we alert did they stop node zero right and so I would I put that you know a more Layman's third grade term and if I was going to beat a fifth grader at this game would be we can be the sparring partner for a Splunk Enterprise customer a Splunk Essentials customer someone using Splunk soar or even just an Enterprise Splunk customer that may be a small shop with three people and just wants to know where am I exposed so by creating and generating these reports and then having um the API that actually generates the dashboard they can take all of these events that we've logged and log them in and then where that then comes in is number two is how do we prioritize those logs right so how do we create visibility to logs that that um are have critical impacts and again as I mentioned earlier not all cves are high impact regard and also not all or low right so if you daisy chain a bunch of low cves together boom I've got a mission critical AP uh CPE that needs to be fixed now such as a credential moving to an NT box that's got a text file with a bunch of passwords on it that would be very bad um and then third would be uh verifying that you have all of the hosts so one of the things that splunk's not particularly great at and they'll literate themselves they don't do asset Discovery so dude what assets do we see and what are they logging from that um and then for from um for every event that they are able to identify one of the cool things that we can do is actually create this low code no code environment so they could let you know Splunk customers can use Splunk sword to actually triage events and prioritize that event so where they're being routed within it to optimize the Sox team time to Market or time to triage any given event obviously reducing MTR and then finally I think one of the neatest things that we'll be seeing us develop is um our ability to build glass cables so behind me you'll see one of our triage events and how we build uh a Lockheed Martin kill chain on that with a glass table which is very familiar to the community we're going to have the ability and not too distant future to allow people to search observe on those iocs and if people aren't familiar with it ioc it's an instant of a compromise so that's a vector that we want to drill into and of course who's better at Drilling in the data and smoke yeah this is a critter this is an awesome Synergy there I mean I can see a Splunk customer going man this just gives me so much more capability action actionability and also real understanding and I think this is what I want to dig into if you don't mind understanding that critical impact okay is kind of where I see this coming got the data data ingest now data's data but the question is what not to log you know where are things misconfigured these are critical questions so can you talk about what it means to understand critical impact yeah so I think you know going back to the things that I just spoke about a lot of those cves where you'll see um uh low low low and then you daisy chain together and they're suddenly like oh this is high now but then your other impact of like if you're if you're a Splunk customer you know and I had it I had several of them I had one customer that you know terabytes of McAfee data being brought in and it was like all right there's a lot of other data that you probably also want to bring but they could only afford wanted to do certain data sets because that's and they didn't know how to prioritize or filter those data sets and so we provide that opportunity to say hey these are the critical ones to bring in but there's also the ones that you don't necessarily need to bring in because low cve in this case really does mean low cve like an ILO server would be one that um that's the print server uh where the uh your admin credentials are on on like a printer and so there will be credentials on that that's something that a hacker might go in to look at so although the cve on it is low is if you daisy chain with somebody that's able to get into that you might say Ah that's high and we would then potentially rank it giving our AI logic to say that's a moderate so put it on the scale and we prioritize those versus uh of all of these scanners just going to give you a bunch of CDs and good luck and translating that if I if I can and tell me if I'm wrong that kind of speaks to that whole lateral movement that's it challenge right print serve a great example looks stupid low end who's going to want to deal with the print server oh but it's connected into a critical system there's a path is that kind of what you're getting at yeah I use Daisy Chain I think that's from the community they came from uh but it's just a lateral movement it's exactly what they're doing in those low level low critical lateral movements is where the hackers are getting in right so that's the beauty thing about the uh the Uber example is that who would have thought you know I've got my monthly Factor authentication going in a human made a mistake we can't we can't not expect humans to make mistakes we're fallible right the reality is is once they were in the environment they could have protected themselves by running enough pen tests to know that they had certain uh exposed credentials that would have stopped the breach and they did not had not done that in their environment and I'm not poking yeah but it's an interesting Trend though I mean it's obvious if sometimes those low end items are also not protected well so it's easy to get at from a hacker standpoint but also the people in charge of them can be fished easily or spearfished because they're not paying attention because they don't have to no one ever told them hey be careful yeah for the community that I came from John that's exactly how they they would uh meet you at a uh an International Event um introduce themselves as a graduate student these are National actor States uh would you mind reviewing my thesis on such and such and I was at Adobe at the time that I was working on this instead of having to get the PDF they opened the PDF and whoever that customer was launches and I don't know if you remember back in like 2008 time frame there was a lot of issues around IP being by a nation state being stolen from the United States and that's exactly how they did it and John that's or LinkedIn hey I want to get a joke we want to hire you double the salary oh I'm gonna click on that for sure you know yeah right exactly yeah the one thing I would say to you is like uh when we look at like sort of you know because I think we did 10 000 pen tests last year is it's probably over that now you know we have these sort of top 10 ways that we think and find people coming into the environment the funniest thing is that only one of them is a cve related vulnerability like uh you know you guys know what they are right so it's it but it's it's like two percent of the attacks are occurring through the cves but yeah there's all that attention spent to that and very little attention spent to this pen testing side which is sort of this continuous threat you know monitoring space and and this vulnerability space where I think we play a such an important role and I'm so excited to be a part of the tip of the spear on this one yeah I'm old enough to know the movie sneakers which I loved as a you know watching that movie you know professional hackers are testing testing always testing the environment I love this I got to ask you as we kind of wrap up here Chris if you don't mind the the benefits to Professional Services from this Alliance big news Splunk and you guys work well together we see that clearly what are what other benefits do Professional Services teams see from the Splunk and Horizon 3.ai Alliance so if you're I think for from our our from both of our uh Partners uh as we bring these guys together and many of them already are the same partner right uh is that uh first off the licensing model is probably one of the key areas that we really excel at so if you're an end user you can buy uh for the Enterprise by the number of IP addresses you're using um but uh if you're a partner working with this there's solution ways that you can go in and we'll license as to msps and what that business model on msps looks like but the unique thing that we do here is this C plus license and so the Consulting plus license allows like a uh somebody a small to mid-sized to some very large uh you know Fortune 100 uh consulting firms use this uh by buying into a license called um Consulting plus where they can have unlimited uh access to as many IPS as they want but you can only run one test at a time and as you can imagine when we're going and hacking passwords and um checking hashes and decrypting hashes that can take a while so but for the right customer it's it's a perfect tool and so I I'm so excited about our ability to go to market with uh our partners so that we understand ourselves understand how not to just sell to or not tell just to sell through but we know how to sell with them as a good vendor partner I think that that's one thing that we've done a really good job building bring it into the market yeah I think also the Splunk has had great success how they've enabled uh partners and Professional Services absolutely you know the services that layer on top of Splunk are multi-fold tons of great benefits so you guys Vector right into that ride that way with friction and and the cool thing is that in you know in one of our reports which could be totally customized uh with someone else's logo we're going to generate you know so I I used to work in another organization it wasn't Splunk but we we did uh you know pen testing as for for customers and my pen testers would come on site they'd do the engagement and they would leave and then another release someone would be oh shoot we got another sector that was breached and they'd call you back you know four weeks later and so by August our entire pen testings teams would be sold out and it would be like well even in March maybe and they're like no no I gotta breach now and and and then when they do go in they go through do the pen test and they hand over a PDF and they pack on the back and say there's where your problems are you need to fix it and the reality is that what we're going to generate completely autonomously with no human interaction is we're going to go and find all the permutations of anything we found and the fix for those permutations and then once you've fixed everything you just go back and run another pen test it's you know for what people pay for one pen test they can have a tool that does that every every Pat patch on Tuesday and that's on Wednesday you know triage throughout the week green yellow red I wanted to see the colors show me green green is good right not red and one CIO doesn't want who doesn't want that dashboard right it's it's exactly it and we can help bring I think that you know I'm really excited about helping drive this with the Splunk team because they get that they understand that it's the green yellow red dashboard and and how do we help them find more green uh so that the other guys are in red yeah and get in the data and do the right thing and be efficient with how you use the data know what to look at so many things to pay attention to you know the combination of both and then go to market strategy real brilliant congratulations Chris thanks for coming on and sharing um this news with the detail around the Splunk in action around the alliance thanks for sharing John my pleasure thanks look forward to seeing you soon all right great we'll follow up and do another segment on devops and I.T and security teams as the new new Ops but and super cloud a bunch of other stuff so thanks for coming on and our next segment the CEO of horizon 3.aa will break down all the new news for us here on thecube you're watching thecube the leader in high tech Enterprise coverage [Music] yeah the partner program for us has been fantastic you know I think prior to that you know as most organizations most uh uh most Farmers most mssps might not necessarily have a a bench at all for penetration testing uh maybe they subcontract this work out or maybe they do it themselves but trying to staff that kind of position can be incredibly difficult for us this was a differentiator a a new a new partner a new partnership that allowed us to uh not only perform services for our customers but be able to provide a product by which that they can do it themselves so we work with our customers in a variety of ways some of them want more routine testing and perform this themselves but we're also a certified service provider of horizon 3 being able to perform uh penetration tests uh help review the the data provide color provide analysis for our customers in a broader sense right not necessarily the the black and white elements of you know what was uh what's critical what's high what's medium what's low what you need to fix but are there systemic issues this has allowed us to onboard new customers this has allowed us to migrate some penetration testing services to us from from competitors in the marketplace But ultimately this is occurring because the the product and the outcome are special they're unique and they're effective our customers like what they're seeing they like the routineness of it many of them you know again like doing this themselves you know being able to kind of pen test themselves parts of their networks um and the the new use cases right I'm a large organization I have eight to ten Acquisitions per year wouldn't it be great to have a tool to be able to perform a penetration test both internal and external of that acquisition before we integrate the two companies and maybe bringing on some risk it's a very effective partnership uh one that really is uh kind of taken our our Engineers our account Executives by storm um you know this this is a a partnership that's been very valuable to us [Music] a key part of the value and business model at Horizon 3 is enabling Partners to leverage node zero to make more revenue for themselves our goal is that for sixty percent of our Revenue this year will be originated by partners and that 95 of our Revenue next year will be originated by partners and so a key to that strategy is making us an integral part of your business models as a partner a key quote from one of our partners is that we enable every one of their business units to generate Revenue so let's talk about that in a little bit more detail first is that if you have a pen test Consulting business take Deloitte as an example what was six weeks of human labor at Deloitte per pen test has been cut down to four days of Labor using node zero to conduct reconnaissance find all the juicy interesting areas of the of the Enterprise that are exploitable and being able to go assess the entire organization and then all of those details get served up to the human to be able to look at understand and determine where to probe deeper so what you see in that pen test Consulting business is that node zero becomes a force multiplier where those Consulting teams were able to cover way more accounts and way more IPS within those accounts with the same or fewer consultants and so that directly leads to profit margin expansion for the Penn testing business itself because node 0 is a force multiplier the second business model here is if you're an mssp as an mssp you're already making money providing defensive cyber security operations for a large volume of customers and so what they do is they'll license node zero and use us as an upsell to their mssb business to start to deliver either continuous red teaming continuous verification or purple teaming as a service and so in that particular business model they've got an additional line of Revenue where they can increase the spend of their existing customers by bolting on node 0 as a purple team as a service offering the third business model or customer type is if you're an I.T services provider so as an I.T services provider you make money installing and configuring security products like Splunk or crowdstrike or hemio you also make money reselling those products and you also make money generating follow-on services to continue to harden your customer environments and so for them what what those it service providers will do is use us to verify that they've installed Splunk correctly improved to their customer that Splunk was installed correctly or crowdstrike was installed correctly using our results and then use our results to drive follow-on services and revenue and then finally we've got the value-added reseller which is just a straight up reseller because of how fast our sales Cycles are these vars are able to typically go from cold email to deal close in six to eight weeks at Horizon 3 at least a single sales engineer is able to run 30 to 50 pocs concurrently because our pocs are very lightweight and don't require any on-prem customization or heavy pre-sales post sales activity so as a result we're able to have a few amount of sellers driving a lot of Revenue and volume for us well the same thing applies to bars there isn't a lot of effort to sell the product or prove its value so vars are able to sell a lot more Horizon 3 node zero product without having to build up a huge specialist sales organization so what I'm going to do is talk through uh scenario three here as an I.T service provider and just how powerful node zero can be in driving additional Revenue so in here think of for every one dollar of node zero license purchased by the IT service provider to do their business it'll generate ten dollars of additional revenue for that partner so in this example kidney group uses node 0 to verify that they have installed and deployed Splunk correctly so Kitty group is a Splunk partner they they sell it services to install configure deploy and maintain Splunk and as they deploy Splunk they're going to use node 0 to attack the environment and make sure that the right logs and alerts and monitoring are being handled within the Splunk deployment so it's a way of doing QA or verifying that Splunk has been configured correctly and that's going to be internally used by kidney group to prove the quality of their services that they've just delivered then what they're going to do is they're going to show and leave behind that node zero Report with their client and that creates a resell opportunity for for kidney group to resell node 0 to their client because their client is seeing the reports and the results and saying wow this is pretty amazing and those reports can be co-branded where it's a pen testing report branded with kidney group but it says powered by Horizon three under it from there kidney group is able to take the fixed actions report that's automatically generated with every pen test through node zero and they're able to use that as the starting point for a statement of work to sell follow-on services to fix all of the problems that node zero identified fixing l11r misconfigurations fixing or patching VMware or updating credentials policies and so on so what happens is node 0 has found a bunch of problems the client often lacks the capacity to fix and so kidney group can use that lack of capacity by the client as a follow-on sales opportunity for follow-on services and finally based on the findings from node zero kidney group can look at that report and say to the customer you know customer if you bought crowdstrike you'd be able to uh prevent node Zero from attacking and succeeding in the way that it did for if you bought humano or if you bought Palo Alto networks or if you bought uh some privileged access management solution because of what node 0 was able to do with credential harvesting and attacks and so as a result kidney group is able to resell other security products within their portfolio crowdstrike Falcon humano Polito networks demisto Phantom and so on based on the gaps that were identified by node zero and that pen test and what that creates is another feedback loop where kidney group will then go use node 0 to verify that crowdstrike product has actually been installed and configured correctly and then this becomes the cycle of using node 0 to verify a deployment using that verification to drive a bunch of follow-on services and resell opportunities which then further drives more usage of the product now the way that we licensed is that it's a usage-based license licensing model so that the partner will grow their node zero Consulting plus license as they grow their business so for example if you're a kidney group then week one you've got you're going to use node zero to verify your Splunk install in week two if you have a pen testing business you're going to go off and use node zero to be a force multiplier for your pen testing uh client opportunity and then if you have an mssp business then in week three you're going to use node zero to go execute a purple team mssp offering for your clients so not necessarily a kidney group but if you're a Deloitte or ATT these larger companies and you've got multiple lines of business if you're Optive for instance you all you have to do is buy one Consulting plus license and you're going to be able to run as many pen tests as you want sequentially so now you can buy a single license and use that one license to meet your week one client commitments and then meet your week two and then meet your week three and as you grow your business you start to run multiple pen tests concurrently so in week one you've got to do a Splunk verify uh verify Splunk install and you've got to run a pen test and you've got to do a purple team opportunity you just simply expand the number of Consulting plus licenses from one license to three licenses and so now as you systematically grow your business you're able to grow your node zero capacity with you giving you predictable cogs predictable margins and once again 10x additional Revenue opportunity for that investment in the node zero Consulting plus license my name is Saint I'm the co-founder and CEO here at Horizon 3. I'm going to talk to you today about why it's important to look at your Enterprise Through The Eyes of an attacker the challenge I had when I was a CIO in banking the CTO at Splunk and serving within the Department of Defense is that I had no idea I was Secure until the bad guys had showed up am I logging the right data am I fixing the right vulnerabilities are my security tools that I've paid millions of dollars for actually working together to defend me and the answer is I don't know does my team actually know how to respond to a breach in the middle of an incident I don't know I've got to wait for the bad guys to show up and so the challenge I had was how do we proactively verify our security posture I tried a variety of techniques the first was the use of vulnerability scanners and the challenge with vulnerability scanners is being vulnerable doesn't mean you're exploitable I might have a hundred thousand findings from my scanner of which maybe five or ten can actually be exploited in my environment the other big problem with scanners is that they can't chain weaknesses together from machine to machine so if you've got a thousand machines in your environment or more what a vulnerability scanner will do is tell you you have a problem on machine one and separately a problem on machine two but what they can tell you is that an attacker could use a load from machine one plus a low from machine two to equal to critical in your environment and what attackers do in their tactics is they chain together misconfigurations dangerous product defaults harvested credentials and exploitable vulnerabilities into attack paths across different machines so to address the attack pads across different machines I tried layering in consulting-based pen testing and the issue is when you've got thousands of hosts or hundreds of thousands of hosts in your environment human-based pen testing simply doesn't scale to test an infrastructure of that size moreover when they actually do execute a pen test and you get the report oftentimes you lack the expertise within your team to quickly retest to verify that you've actually fixed the problem and so what happens is you end up with these pen test reports that are incomplete snapshots and quickly going stale and then to mitigate that problem I tried using breach and attack simulation tools and the struggle with these tools is one I had to install credentialed agents everywhere two I had to write my own custom attack scripts that I didn't have much talent for but also I had to maintain as my environment changed and then three these types of tools were not safe to run against production systems which was the the majority of my attack surface so that's why we went off to start Horizon 3. so Tony and I met when we were in Special Operations together and the challenge we wanted to solve was how do we do infrastructure security testing at scale by giving the the power of a 20-year pen testing veteran into the hands of an I.T admin a network engineer in just three clicks and the whole idea is we enable these fixers The Blue Team to be able to run node Zero Hour pen testing product to quickly find problems in their environment that blue team will then then go off and fix the issues that were found and then they can quickly rerun the attack to verify that they fixed the problem and the whole idea is delivering this without requiring custom scripts be developed without requiring credential agents be installed and without requiring the use of external third-party consulting services or Professional Services self-service pen testing to quickly Drive find fix verify there are three primary use cases that our customers use us for the first is the sock manager that uses us to verify that their security tools are actually effective to verify that they're logging the right data in Splunk or in their Sim to verify that their managed security services provider is able to quickly detect and respond to an attack and hold them accountable for their slas or that the sock understands how to quickly detect and respond and measuring and verifying that or that the variety of tools that you have in your stack most organizations have 130 plus cyber security tools none of which are designed to work together are actually working together the second primary use case is proactively hardening and verifying your systems this is when the I that it admin that network engineer they're able to run self-service pen tests to verify that their Cisco environment is installed in hardened and configured correctly or that their credential policies are set up right or that their vcenter or web sphere or kubernetes environments are actually designed to be secure and what this allows the it admins and network Engineers to do is shift from running one or two pen tests a year to 30 40 or more pen tests a month and you can actually wire those pen tests into your devops process or into your detection engineering and the change management processes to automatically trigger pen tests every time there's a change in your environment the third primary use case is for those organizations lucky enough to have their own internal red team they'll use node zero to do reconnaissance and exploitation at scale and then use the output as a starting point for the humans to step in and focus on the really hard juicy stuff that gets them on stage at Defcon and so these are the three primary use cases and what we'll do is zoom into the find fix verify Loop because what I've found in my experience is find fix verify is the future operating model for cyber security organizations and what I mean here is in the find using continuous pen testing what you want to enable is on-demand self-service pen tests you want those pen tests to find attack pads at scale spanning your on-prem infrastructure your Cloud infrastructure and your perimeter because attackers don't only state in one place they will find ways to chain together a perimeter breach a credential from your on-prem to gain access to your cloud or some other permutation and then the third part in continuous pen testing is attackers don't focus on critical vulnerabilities anymore they know we've built vulnerability Management Programs to reduce those vulnerabilities so attackers have adapted and what they do is chain together misconfigurations in your infrastructure and software and applications with dangerous product defaults with exploitable vulnerabilities and through the collection of credentials through a mix of techniques at scale once you've found those problems the next question is what do you do about it well you want to be able to prioritize fixing problems that are actually exploitable in your environment that truly matter meaning they're going to lead to domain compromise or domain user compromise or access your sensitive data the second thing you want to fix is making sure you understand what risk your crown jewels data is exposed to where is your crown jewels data is in the cloud is it on-prem has it been copied to a share drive that you weren't aware of if a domain user was compromised could they access that crown jewels data you want to be able to use the attacker's perspective to secure the critical data you have in your infrastructure and then finally as you fix these problems you want to quickly remediate and retest that you've actually fixed the issue and this fine fix verify cycle becomes that accelerator that drives purple team culture the third part here is verify and what you want to be able to do in the verify step is verify that your security tools and processes in people can effectively detect and respond to a breach you want to be able to integrate that into your detection engineering processes so that you know you're catching the right security rules or that you've deployed the right configurations you also want to make sure that your environment is adhering to the best practices around systems hardening in cyber resilience and finally you want to be able to prove your security posture over a time to your board to your leadership into your regulators so what I'll do now is zoom into each of these three steps so when we zoom in to find here's the first example using node 0 and autonomous pen testing and what an attacker will do is find a way to break through the perimeter in this example it's very easy to misconfigure kubernetes to allow an attacker to gain remote code execution into your on-prem kubernetes environment and break through the perimeter and from there what the attacker is going to do is conduct Network reconnaissance and then find ways to gain code execution on other machines in the environment and as they get code execution they start to dump credentials collect a bunch of ntlm hashes crack those hashes using open source and dark web available data as part of those attacks and then reuse those credentials to log in and laterally maneuver throughout the environment and then as they loudly maneuver they can reuse those credentials and use credential spraying techniques and so on to compromise your business email to log in as admin into your cloud and this is a very common attack and rarely is a CV actually needed to execute this attack often it's just a misconfiguration in kubernetes with a bad credential policy or password policy combined with bad practices of credential reuse across the organization here's another example of an internal pen test and this is from an actual customer they had 5 000 hosts within their environment they had EDR and uba tools installed and they initiated in an internal pen test on a single machine from that single initial access point node zero enumerated the network conducted reconnaissance and found five thousand hosts were accessible what node 0 will do under the covers is organize all of that reconnaissance data into a knowledge graph that we call the Cyber terrain map and that cyber Terrain map becomes the key data structure that we use to efficiently maneuver and attack and compromise your environment so what node zero will do is they'll try to find ways to get code execution reuse credentials and so on in this customer example they had Fortinet installed as their EDR but node 0 was still able to get code execution on a Windows machine from there it was able to successfully dump credentials including sensitive credentials from the lsas process on the Windows box and then reuse those credentials to log in as domain admin in the network and once an attacker becomes domain admin they have the keys to the kingdom they can do anything they want so what happened here well it turns out Fortinet was misconfigured on three out of 5000 machines bad automation the customer had no idea this had happened they would have had to wait for an attacker to show up to realize that it was misconfigured the second thing is well why didn't Fortinet stop the credential pivot in the lateral movement and it turned out the customer didn't buy the right modules or turn on the right services within that particular product and we see this not only with Ford in it but we see this with Trend Micro and all the other defensive tools where it's very easy to miss a checkbox in the configuration that will do things like prevent credential dumping the next story I'll tell you is attackers don't have to hack in they log in so another infrastructure pen test a typical technique attackers will take is man in the middle uh attacks that will collect hashes so in this case what an attacker will do is leverage a tool or technique called responder to collect ntlm hashes that are being passed around the network and there's a variety of reasons why these hashes are passed around and it's a pretty common misconfiguration but as an attacker collects those hashes then they start to apply techniques to crack those hashes so they'll pass the hash and from there they will use open source intelligence common password structures and patterns and other types of techniques to try to crack those hashes into clear text passwords so here node 0 automatically collected hashes it automatically passed the hashes to crack those credentials and then from there it starts to take the domain user user ID passwords that it's collected and tries to access different services and systems in your Enterprise in this case node 0 is able to successfully gain access to the Office 365 email environment because three employees didn't have MFA configured so now what happens is node 0 has a placement and access in the business email system which sets up the conditions for fraud lateral phishing and other techniques but what's especially insightful here is that 80 of the hashes that were collected in this pen test were cracked in 15 minutes or less 80 percent 26 of the user accounts had a password that followed a pretty obvious pattern first initial last initial and four random digits the other thing that was interesting is 10 percent of service accounts had their user ID the same as their password so VMware admin VMware admin web sphere admin web Square admin so on and so forth and so attackers don't have to hack in they just log in with credentials that they've collected the next story here is becoming WS AWS admin so in this example once again internal pen test node zero gets initial access it discovers 2 000 hosts are network reachable from that environment if fingerprints and organizes all of that data into a cyber Terrain map from there it it fingerprints that hpilo the integrated lights out service was running on a subset of hosts hpilo is a service that is often not instrumented or observed by security teams nor is it easy to patch as a result attackers know this and immediately go after those types of services so in this case that ILO service was exploitable and were able to get code execution on it ILO stores all the user IDs and passwords in clear text in a particular set of processes so once we gain code execution we were able to dump all of the credentials and then from there laterally maneuver to log in to the windows box next door as admin and then on that admin box we're able to gain access to the share drives and we found a credentials file saved on a share Drive from there it turned out that credentials file was the AWS admin credentials file giving us full admin authority to their AWS accounts not a single security alert was triggered in this attack because the customer wasn't observing the ILO service and every step thereafter was a valid login in the environment and so what do you do step one patch the server step two delete the credentials file from the share drive and then step three is get better instrumentation on privileged access users and login the final story I'll tell is a typical pattern that we see across the board with that combines the various techniques I've described together where an attacker is going to go off and use open source intelligence to find all of the employees that work at your company from there they're going to look up those employees on dark web breach databases and other forms of information and then use that as a starting point to password spray to compromise a domain user all it takes is one employee to reuse a breached password for their Corporate email or all it takes is a single employee to have a weak password that's easily guessable all it takes is one and once the attacker is able to gain domain user access in most shops domain user is also the local admin on their laptop and once your local admin you can dump Sam and get local admin until M hashes you can use that to reuse credentials again local admin on neighboring machines and attackers will start to rinse and repeat then eventually they're able to get to a point where they can dump lsas or by unhooking the anti-virus defeating the EDR or finding a misconfigured EDR as we've talked about earlier to compromise the domain and what's consistent is that the fundamentals are broken at these shops they have poor password policies they don't have least access privilege implemented active directory groups are too permissive where domain admin or domain user is also the local admin uh AV or EDR Solutions are misconfigured or easily unhooked and so on and what we found in 10 000 pen tests is that user Behavior analytics tools never caught us in that lateral movement in part because those tools require pristine logging data in order to work and also it becomes very difficult to find that Baseline of normal usage versus abnormal usage of credential login another interesting Insight is there were several Marquee brand name mssps that were defending our customers environment and for them it took seven hours to detect and respond to the pen test seven hours the pen test was over in less than two hours and so what you had was an egregious violation of the service level agreements that that mssp had in place and the customer was able to use us to get service credit and drive accountability of their sock and of their provider the third interesting thing is in one case it took us seven minutes to become domain admin in a bank that bank had every Gucci security tool you could buy yet in 7 minutes and 19 seconds node zero started as an unauthenticated member of the network and was able to escalate privileges through chaining and misconfigurations in lateral movement and so on to become domain admin if it's seven minutes today we should assume it'll be less than a minute a year or two from now making it very difficult for humans to be able to detect and respond to that type of Blitzkrieg attack so that's in the find it's not just about finding problems though the bulk of the effort should be what to do about it the fix and the verify so as you find those problems back to kubernetes as an example we will show you the path here is the kill chain we took to compromise that environment we'll show you the impact here is the impact or here's the the proof of exploitation that we were able to use to be able to compromise it and there's the actual command that we executed so you could copy and paste that command and compromise that cubelet yourself if you want and then the impact is we got code execution and we'll actually show you here is the impact this is a critical here's why it enabled perimeter breach affected applications will tell you the specific IPS where you've got the problem how it maps to the miter attack framework and then we'll tell you exactly how to fix it we'll also show you what this problem enabled so you can accurately prioritize why this is important or why it's not important the next part is accurate prioritization the hardest part of my job as a CIO was deciding what not to fix so if you take SMB signing not required as an example by default that CVSs score is a one out of 10. but this misconfiguration is not a cve it's a misconfig enable an attacker to gain access to 19 credentials including one domain admin two local admins and access to a ton of data because of that context this is really a 10 out of 10. you better fix this as soon as possible however of the seven occurrences that we found it's only a critical in three out of the seven and these are the three specific machines and we'll tell you the exact way to fix it and you better fix these as soon as possible for these four machines over here these didn't allow us to do anything of consequence so that because the hardest part is deciding what not to fix you can justifiably choose not to fix these four issues right now and just add them to your backlog and surge your team to fix these three as quickly as possible and then once you fix these three you don't have to re-run the entire pen test you can select these three and then one click verify and run a very narrowly scoped pen test that is only testing this specific issue and what that creates is a much faster cycle of finding and fixing problems the other part of fixing is verifying that you don't have sensitive data at risk so once we become a domain user we're able to use those domain user credentials and try to gain access to databases file shares S3 buckets git repos and so on and help you understand what sensitive data you have at risk so in this example a green checkbox means we logged in as a valid domain user we're able to get read write access on the database this is how many records we could have accessed and we don't actually look at the values in the database but we'll show you the schema so you can quickly characterize that pii data was at risk here and we'll do that for your file shares and other sources of data so now you can accurately articulate the data you have at risk and prioritize cleaning that data up especially data that will lead to a fine or a big news issue so that's the find that's the fix now we're going to talk about the verify the key part in verify is embracing and integrating with detection engineering practices so when you think about your layers of security tools you've got lots of tools in place on average 130 tools at any given customer but these tools were not designed to work together so when you run a pen test what you want to do is say did you detect us did you log us did you alert on us did you stop us and from there what you want to see is okay what are the techniques that are commonly used to defeat an environment to actually compromise if you look at the top 10 techniques we use and there's far more than just these 10 but these are the most often executed nine out of ten have nothing to do with cves it has to do with misconfigurations dangerous product defaults bad credential policies and it's how we chain those together to become a domain admin or compromise a host so what what customers will do is every single attacker command we executed is provided to you as an attackivity log so you can actually see every single attacker command we ran the time stamp it was executed the hosts it executed on and how it Maps the minor attack tactics so our customers will have are these attacker logs on one screen and then they'll go look into Splunk or exabeam or Sentinel one or crowdstrike and say did you detect us did you log us did you alert on us or not and to make that even easier if you take this example hey Splunk what logs did you see at this time on the VMware host because that's when node 0 is able to dump credentials and that allows you to identify and fix your logging blind spots to make that easier we've got app integration so this is an actual Splunk app in the Splunk App Store and what you can come is inside the Splunk console itself you can fire up the Horizon 3 node 0 app all of the pen test results are here so that you can see all of the results in one place and you don't have to jump out of the tool and what you'll show you as I skip forward is hey there's a pen test here are the critical issues that we've identified for that weaker default issue here are the exact commands we executed and then we will automatically query into Splunk all all terms on between these times on that endpoint that relate to this attack so you can now quickly within the Splunk environment itself figure out that you're missing logs or that you're appropriately catching this issue and that becomes incredibly important in that detection engineering cycle that I mentioned earlier so how do our customers end up using us they shift from running one pen test a year to 30 40 pen tests a month oftentimes wiring us into their deployment automation to automatically run pen tests the other part that they'll do is as they run more pen tests they find more issues but eventually they hit this inflection point where they're able to rapidly clean up their environment and that inflection point is because the red and the blue teams start working together in a purple team culture and now they're working together to proactively harden their environment the other thing our customers will do is run us from different perspectives they'll first start running an RFC 1918 scope to see once the attacker gained initial access in a part of the network that had wide access what could they do and then from there they'll run us within a specific Network segment okay from within that segment could the attacker break out and gain access to another segment then they'll run us from their work from home environment could they Traverse the VPN and do something damaging and once they're in could they Traverse the VPN and get into my cloud then they'll break in from the outside all of these perspectives are available to you in Horizon 3 and node zero as a single SKU and you can run as many pen tests as you want if you run a phishing campaign and find that an intern in the finance department had the worst phishing behavior you can then inject their credentials and actually show the end-to-end story of how an attacker fished gained credentials of an intern and use that to gain access to sensitive financial data so what our customers end up doing is running multiple attacks from multiple perspectives and looking at those results over time I'll leave you two things one is what is the AI in Horizon 3 AI those knowledge graphs are the heart and soul of everything that we do and we use machine learning reinforcement techniques reinforcement learning techniques Markov decision models and so on to be able to efficiently maneuver and analyze the paths in those really large graphs we also use context-based scoring to prioritize weaknesses and we're also able to drive collective intelligence across all of the operations so the more pen tests we run the smarter we get and all of that is based on our knowledge graph analytics infrastructure that we have finally I'll leave you with this was my decision criteria when I was a buyer for my security testing strategy what I cared about was coverage I wanted to be able to assess my on-prem cloud perimeter and work from home and be safe to run in production I want to be able to do that as often as I wanted I want to be able to run pen tests in hours or days not weeks or months so I could accelerate that fine fix verify loop I wanted my it admins and network Engineers with limited offensive experience to be able to run a pen test in a few clicks through a self-service experience and not have to install agent and not have to write custom scripts and finally I didn't want to get nickeled and dimed on having to buy different types of attack modules or different types of attacks I wanted a single annual subscription that allowed me to run any type of attack as often as I wanted so I could look at my Trends in directions over time so I hope you found this talk valuable uh we're easy to find and I look forward to seeing seeing you use a product and letting our results do the talking when you look at uh you know kind of the way no our pen testing algorithms work is we dynamically select uh how to compromise an environment based on what we've discovered and the goal is to become a domain admin compromise a host compromise domain users find ways to encrypt data steal sensitive data and so on but when you look at the the top 10 techniques that we ended up uh using to compromise environments the first nine have nothing to do with cves and that's the reality cves are yes a vector but less than two percent of cves are actually used in a compromise oftentimes it's some sort of credential collection credential cracking uh credential pivoting and using that to become an admin and then uh compromising environments from that point on so I'll leave this up for you to kind of read through and you'll have the slides available for you but I found it very insightful that organizations and ourselves when I was a GE included invested heavily in just standard vulnerability Management Programs when I was at DOD that's all disa cared about asking us about was our our kind of our cve posture but the attackers have adapted to not rely on cves to get in because they know that organizations are actively looking at and patching those cves and instead they're chaining together credentials from one place with misconfigurations and dangerous product defaults in another to take over an environment a concrete example is by default vcenter backups are not encrypted and so as if an attacker finds vcenter what they'll do is find the backup location and there are specific V sender MTD files where the admin credentials are parsippled in the binaries so you can actually as an attacker find the right MTD file parse out the binary and now you've got the admin credentials for the vcenter environment and now start to log in as admin there's a bad habit by signal officers and Signal practitioners in the in the Army and elsewhere where the the VM notes section of a virtual image has the password for the VM well those VM notes are not stored encrypted and attackers know this and they're able to go off and find the VMS that are unencrypted find the note section and pull out the passwords for those images and then reuse those credentials across the board so I'll pause here and uh you know Patrick love you get some some commentary on on these techniques and other things that you've seen and what we'll do in the last say 10 to 15 minutes is uh is rolled through a little bit more on what do you do about it yeah yeah no I love it I think um I think this is pretty exhaustive what I like about what you've done here is uh you know we've seen we've seen double-digit increases in the number of organizations that are reporting actual breaches year over year for the last um for the last three years and it's often we kind of in the Zeitgeist we pegged that on ransomware which of course is like incredibly important and very top of mind um but what I like about what you have here is you know we're reminding the audience that the the attack surface area the vectors the matter um you know has to be more comprehensive than just thinking about ransomware scenarios yeah right on um so let's build on this when you think about your defense in depth you've got multiple security controls that you've purchased and integrated and you've got that redundancy if a control fails but the reality is that these security tools aren't designed to work together so when you run a pen test what you want to ask yourself is did you detect node zero did you log node zero did you alert on node zero and did you stop node zero and when you think about how to do that every single attacker command executed by node zero is available in an attacker log so you can now see you know at the bottom here vcenter um exploit at that time on that IP how it aligns to minor attack what you want to be able to do is go figure out did your security tools catch this or not and that becomes very important in using the attacker's perspective to improve your defensive security controls and so the way we've tried to make this easier back to like my my my the you know I bleed Green in many ways still from my smoke background is you want to be able to and what our customers do is hey we'll look at the attacker logs on one screen and they'll look at what did Splunk see or Miss in another screen and then they'll use that to figure out what their logging blind spots are and what that where that becomes really interesting is we've actually built out an integration into Splunk where there's a Splunk app you can download off of Splunk base and you'll get all of the pen test results right there in the Splunk console and from that Splunk console you're gonna be able to see these are all the pen tests that were run these are the issues that were found um so you can look at that particular pen test here are all of the weaknesses that were identified for that particular pen test and how they categorize out for each of those weaknesses you can click on any one of them that are critical in this case and then we'll tell you for that weakness and this is where where the the punch line comes in so I'll pause the video here for that weakness these are the commands that were executed on these endpoints at this time and then we'll actually query Splunk for that um for that IP address or containing that IP and these are the source types that surface any sort of activity so what we try to do is help you as quickly and efficiently as possible identify the logging blind spots in your Splunk environment based on the attacker's perspective so as this video kind of plays through you can see it Patrick I'd love to get your thoughts um just seeing so many Splunk deployments and the effectiveness of those deployments and and how this is going to help really Elevate the effectiveness of all of your Splunk customers yeah I'm super excited about this I mean I think this these kinds of purpose-built integration snail really move the needle for our customers I mean at the end of the day when I think about the power of Splunk I think about a product I was first introduced to 12 years ago that was an on-prem piece of software you know and at the time it sold on sort of Perpetual and term licenses but one made it special was that it could it could it could eat data at a speed that nothing else that I'd have ever seen you can ingest massively scalable amounts of data uh did cool things like schema on read which facilitated that there was this language called SPL that you could nerd out about uh and you went to a conference once a year and you talked about all the cool things you were splunking right but now as we think about the next phase of our growth um we live in a heterogeneous environment where our customers have so many different tools and data sources that are ever expanding and as you look at the as you look at the role of the ciso it's mind-blowing to me the amount of sources Services apps that are coming into the ciso span of let's just call it a span of influence in the last three years uh you know we're seeing things like infrastructure service level visibility application performance monitoring stuff that just never made sense for the security team to have visibility into you um at least not at the size and scale which we're demanding today um and and that's different and this isn't this is why it's so important that we have these joint purpose-built Integrations that um really provide more prescription to our customers about how do they walk on that Journey towards maturity what does zero to one look like what does one to two look like whereas you know 10 years ago customers were happy with platforms today they want integration they want Solutions and they want to drive outcomes and I think this is a great example of how together we are stepping to the evolving nature of the market and also the ever-evolving nature of the threat landscape and what I would say is the maturing needs of the customer in that environment yeah for sure I think especially if if we all anticipate budget pressure over the next 18 months due to the economy and elsewhere while the security budgets are not going to ever I don't think they're going to get cut they're not going to grow as fast and there's a lot more pressure on organizations to extract more value from their existing Investments as well as extracting more value and more impact from their existing teams and so security Effectiveness Fierce prioritization and automation I think become the three key themes of security uh over the next 18 months so I'll do very quickly is run through a few other use cases um every host that we identified in the pen test were able to score and say this host allowed us to do something significant therefore it's it's really critical you should be increasing your logging here hey these hosts down here we couldn't really do anything as an attacker so if you do have to make trade-offs you can make some trade-offs of your logging resolution at the lower end in order to increase logging resolution on the upper end so you've got that level of of um justification for where to increase or or adjust your logging resolution another example is every host we've discovered as an attacker we Expose and you can export and we want to make sure is every host we found as an attacker is being ingested from a Splunk standpoint a big issue I had as a CIO and user of Splunk and other tools is I had no idea if there were Rogue Raspberry Pi's on the network or if a new box was installed and whether Splunk was installed on it or not so now you can quickly start to correlate what hosts did we see and how does that reconcile with what you're logging from uh finally or second to last use case here on the Splunk integration side is for every single problem we've found we give multiple options for how to fix it this becomes a great way to prioritize what fixed actions to automate in your soar platform and what we want to get to eventually is being able to automatically trigger soar actions to fix well-known problems like automatically invalidating passwords for for poor poor passwords in our credentials amongst a whole bunch of other things we could go off and do and then finally if there is a well-known kill chain or attack path one of the things I really wish I could have done when I was a Splunk customer was take this type of kill chain that actually shows a path to domain admin that I'm sincerely worried about and use it as a glass table over which I could start to layer possible indicators of compromise and now you've got a great starting point for glass tables and iocs for actual kill chains that we know are exploitable in your environment and that becomes some super cool Integrations that we've got on the roadmap between us and the Splunk security side of the house so what I'll leave with actually Patrick before I do that you know um love to get your comments and then I'll I'll kind of leave with one last slide on this wartime security mindset uh pending you know assuming there's no other questions no I love it I mean I think this kind of um it's kind of glass table's approach to how do you how do you sort of visualize these workflows and then use things like sore and orchestration and automation to operationalize them is exactly where we see all of our customers going and getting away from I think an over engineered approach to soar with where it has to be super technical heavy with you know python programmers and getting more to this visual view of workflow creation um that really demystifies the power of Automation and also democratizes it so you don't have to have these programming languages in your resume in order to start really moving the needle on workflow creation policy enforcement and ultimately driving automation coverage across more and more of the workflows that your team is seeing yeah I think that between us being able to visualize the actual kill chain or attack path with you know think of a of uh the soar Market I think going towards this no code low code um you know configurable sore versus coded sore that's going to really be a game changer in improve or giving security teams a force multiplier so what I'll leave you with is this peacetime mindset of security no longer is sustainable we really have to get out of checking the box and then waiting for the bad guys to show up to verify that security tools are are working or not and the reason why we've got to really do that quickly is there are over a thousand companies that withdrew from the Russian economy over the past uh nine months due to the Ukrainian War there you should expect every one of them to be punished by the Russians for leaving and punished from a cyber standpoint and this is no longer about financial extortion that is ransomware this is about punishing and destroying companies and you can punish any one of these companies by going after them directly or by going after their suppliers and their Distributors so suddenly your attack surface is no more no longer just your own Enterprise it's how you bring your goods to Market and it's how you get your goods created because while I may not be able to disrupt your ability to harvest fruit if I can get those trucks stuck at the border I can increase spoilage and have the same effect and what we should expect to see is this idea of cyber-enabled economic Warfare where if we issue a sanction like Banning the Russians from traveling there is a cyber-enabled counter punch which is corrupt and destroy the American Airlines database that is below the threshold of War that's not going to trigger the 82nd Airborne to be mobilized but it's going to achieve the right effect ban the sale of luxury goods disrupt the supply chain and create shortages banned Russian oil and gas attack refineries to call a 10x spike in gas prices three days before the election this is the future and therefore I think what we have to do is shift towards a wartime mindset which is don't trust your security posture verify it see yourself Through The Eyes of the attacker build that incident response muscle memory and drive better collaboration between the red and the blue teams your suppliers and Distributors and your information uh sharing organization they have in place and what's really valuable for me as a Splunk customer was when a router crashes at that moment you don't know if it's due to an I.T Administration problem or an attacker and what you want to have are different people asking different questions of the same data and you want to have that integrated triage process of an I.T lens to that problem a security lens to that problem and then from there figuring out is is this an IT workflow to execute or a security incident to execute and you want to have all of that as an integrated team integrated process integrated technology stack and this is something that I very care I cared very deeply about as both a Splunk customer and a Splunk CTO that I see time and time again across the board so Patrick I'll leave you with the last word the final three minutes here and I don't see any open questions so please take us home oh man see how you think we spent hours and hours prepping for this together that that last uh uh 40 seconds of your talk track is probably one of the things I'm most passionate about in this industry right now uh and I think nist has done some really interesting work here around building cyber resilient organizations that have that has really I think helped help the industry see that um incidents can come from adverse conditions you know stress is uh uh performance taxations in the infrastructure service or app layer and they can come from malicious compromises uh Insider threats external threat actors and the more that we look at this from the perspective of of a broader cyber resilience Mission uh in a wartime mindset uh I I think we're going to be much better off and and will you talk about with operationally minded ice hacks information sharing intelligence sharing becomes so important in these wartime uh um situations and you know we know not all ice acts are created equal but we're also seeing a lot of um more ad hoc information sharing groups popping up so look I think I think you framed it really really well I love the concept of wartime mindset and um I I like the idea of applying a cyber resilience lens like if you have one more layer on top of that bottom right cake you know I think the it lens and the security lens they roll up to this concept of cyber resilience and I think this has done some great work there for us yeah you're you're spot on and that that is app and that's gonna I think be the the next um terrain that that uh that you're gonna see vendors try to get after but that I think Splunk is best position to win okay that's a wrap for this special Cube presentation you heard all about the global expansion of horizon 3.ai's partner program for their Partners have a unique opportunity to take advantage of their node zero product uh International go to Market expansion North America channel Partnerships and just overall relationships with companies like Splunk to make things more comprehensive in this disruptive cyber security world we live in and hope you enjoyed this program all the videos are available on thecube.net as well as check out Horizon 3 dot AI for their pen test Automation and ultimately their defense system that they use for testing always the environment that you're in great Innovative product and I hope you enjoyed the program again I'm John Furrier host of the cube thanks for watching

Published Date : Sep 28 2022

SUMMARY :

that's the sort of stuff that we do you

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Patrick CoughlinPERSON

0.99+

Jennifer LeePERSON

0.99+

ChrisPERSON

0.99+

TonyPERSON

0.99+

2013DATE

0.99+

Raina RichterPERSON

0.99+

SingaporeLOCATION

0.99+

EuropeLOCATION

0.99+

PatrickPERSON

0.99+

FrankfurtLOCATION

0.99+

JohnPERSON

0.99+

20-yearQUANTITY

0.99+

hundredsQUANTITY

0.99+

AWSORGANIZATION

0.99+

20 yearsQUANTITY

0.99+

seven minutesQUANTITY

0.99+

95QUANTITY

0.99+

FordORGANIZATION

0.99+

2.7 billionQUANTITY

0.99+

MarchDATE

0.99+

FinlandLOCATION

0.99+

seven hoursQUANTITY

0.99+

sixty percentQUANTITY

0.99+

John FurrierPERSON

0.99+

SwedenLOCATION

0.99+

John FurrierPERSON

0.99+

six weeksQUANTITY

0.99+

seven hoursQUANTITY

0.99+

19 credentialsQUANTITY

0.99+

ten dollarsQUANTITY

0.99+

JenniferPERSON

0.99+

5 000 hostsQUANTITY

0.99+

Horizon 3TITLE

0.99+

WednesdayDATE

0.99+

30QUANTITY

0.99+

eightQUANTITY

0.99+

Asia PacificLOCATION

0.99+

American AirlinesORGANIZATION

0.99+

DeloitteORGANIZATION

0.99+

three licensesQUANTITY

0.99+

two companiesQUANTITY

0.99+

2019DATE

0.99+

European UnionORGANIZATION

0.99+

sixQUANTITY

0.99+

seven occurrencesQUANTITY

0.99+

70QUANTITY

0.99+

three peopleQUANTITY

0.99+

Horizon 3.aiTITLE

0.99+

ATTORGANIZATION

0.99+

Net ZeroORGANIZATION

0.99+

SplunkORGANIZATION

0.99+

UberORGANIZATION

0.99+

fiveQUANTITY

0.99+

less than two percentQUANTITY

0.99+

less than two hoursQUANTITY

0.99+

2012DATE

0.99+

UKLOCATION

0.99+

AdobeORGANIZATION

0.99+

four issuesQUANTITY

0.99+

Department of DefenseORGANIZATION

0.99+

next yearDATE

0.99+

three stepsQUANTITY

0.99+

node 0TITLE

0.99+

15 minutesQUANTITY

0.99+

hundred percentQUANTITY

0.99+

node zeroTITLE

0.99+

10xQUANTITY

0.99+

last yearDATE

0.99+

7 minutesQUANTITY

0.99+

one licenseQUANTITY

0.99+

second thingQUANTITY

0.99+

thousands of hostsQUANTITY

0.99+

five thousand hostsQUANTITY

0.99+

next weekDATE

0.99+

Jennifer Lee, Horizon3.ai | Horizon3.ai Partner Program Expands Internationally


 

(upbeat music) >> Welcome back everyone to theCUBE and Horizon3.ai special presentation. I'm John Furrier, host of theCUBE. We're here with Jennifer Lee head of channel sales Horizon3.ai, Jennifer, welcome to theCUBE, thanks for coming on. >> Great, well thank you for having me >> So big news around Horizon3.ai driving channel, first commitment you guys are expanding the channel partner program to include all kinds of new rewards, incentives, training programs to help educate, you know, partners, really drive more recurring revenue, certainly cloud and cloud scale has done that. You got a great product that fits into that kind of channel model, great services you can wrap around it, good stuff. So let's get into it. What are you guys doing? What are you guys doing with this news? Why is this so important? >> Yeah, for sure. So, yeah, we, like you said, we recently expanded our channel partner program. The driving force behind it was really just to align our, like you said, our channel first commitment and creating awareness around the importance of our partner ecosystems. So that's, it's really how we go to market, is through the channel. >> And a great international focus. I've talked with the CEO, you know, about the solution and he broke down all the action on why it's important on the product side, but why now on the go to market change? What's the why behind this big, this news on the channel? >> Yeah, for sure. So we are doing this now, really to align our business strategy, which is built on the concept of enabling our partners to create a high value, high margin business on top of our platform. And so we offer a solution called node zero. It provides autonomous pen testing as a service and it allows organizations to continuously verify their security posture. So our, we, our company vision, we have this tagline that states that our pen testing enables organizations to see themselves through the eyes of an attacker. And we use the, like the attacker's perspective to identify exploitable weaknesses and vulnerabilities. So we created this partner program from a perspective of the partner. So the partner's perspective and we've built it through the eyes of our partner, right? So we're prioritizing really what the partner is looking for and will ensure like mutual success for us. >> Yeah, the partners always want to get in front of the customers and bring new stuff to them. Pen tests have traditionally been really expensive. And so bringing it down and in one, to a service level that's, one, affordable and has flexibility to it allows a lot of capabilities. So I imagine people are going to get excited by it. So I have to ask you about the program. What specifically are you guys doing? Can you share any details around what it means for the partners, what they get, what's in it for them? Can you just break down some of the mechanics and mechanisms or details? >> Yeah. Yep, so, you know, we're really looking to create business alignment. And like I said, established mutual success with our partners, so we've got 2 key elements that we were really focused on that we bring to the partners. So the opportunity, the profit margin expansion is one of 'em and a way for our partners to really differentiate themselves and stay relevant in the market. So we've restructured our discount model, really, you know, highlighting profitability and maximizing profitability. And this includes our deal registration. We've created a deal registration program. We've increased discount for partners who take part in our partner certification trainings, and we've, we have some other partner incentives that we've created that's going to help out there. We've put this all, so we've recently gone live with our partner portal, it's a consolidated experience for our partners where they can access our sales tools. And we really view our partners as an extension of our sales and technical teams. And so we've extended all of our training material that we use internally, we've made it available to our partners through our partner portal. We've, I'm trying, I'm thinking now back, what else is in that partner portal here? We've got our partner certification information. So all the content that's delivered during that training can be found in the portal. We've got deal registration, co-branded marketing materials, pipeline management. And so this portal gives our partners a one stop place to go to final event information. And then just really quickly on the second part of that, that I mentioned is our technology really is really disruptive to the market. So, you know, like you said, autonomous pen testing, it's still, it's, well, it's still a relatively new topic for security practitioners and it's proving to be really disruptive. So that on top of just, well, recently we found an article that mentioned by markets to markets that reports that the global pen testing market's really expanding. And so it's expected to grow to like 2.7 billion by 2027. So the market's there, right? The market's expanding, it's growing. And so for our partners, it just really allows them to grow their revenue across their customer base, expand their customer base and offering this high profit margin while, you know, getting in early to market on this disruptive technology. >> Big market, a lot of opportunities to make some money. People love to put more margin on those deals, especially when you can bring a great solution that everyone knows is hard to do. So I think that's going to provide a lot of value. Is there a type of partner that you guys see emerging or you aligning with, you mentioned the alignment with the partners. I can see how that, the training and the incentives are all there. Sounds like it's all going well. Is there a type of partner that's resonating the most or is there categories of partners that can take advantage of this? >> Yeah, absolutely. So we work with all different kinds of partners. We work with our traditional resale partners. We're working with systems integrators. We have a really strong MSP, MSSP program. We've got consulting partners and the consulting partners especially with the ones that offer pen test services. So we, they use us as a, we act as a force multiplier, just really offering them profit margin expansion opportunity there. We've got some technology partners that we really work with for co-sell opportunities. And then we've got our cloud partners. You had mentioned that earlier and so we are in AWS marketplace, our CCPO partners, we're part of the ISV accelerate program. So we're doing a lot there with our cloud partners. And of course we go to market with distribution partners as well. >> Got to love the opportunity for more margin expansion. Every kind of partner wants to put more gross profit on their deals. Is there a certification involved, I have to ask? Is there like, do you get, do people get certified or is it just, you get train? Is it self-paced training? Is it in person? How are you guys doing the whole training, certification thing? Is that a requirement, or not? >> Yeah, absolutely. So we do offer a certification program and it's been very popular. This includes a seller's portion and an operator portion. And so this is at no cost to our partners and we offer it both virtually, it's live, it's virtually, but live, it's not self-paced. And we also have in person, you know, sessions as well. And we also can customize these to any partners that have a large group of people. And we can just, we can do one in person or virtual just specifically for that partner. >> Well, any kind of incentive opportunities and marketing opportunities? Everyone loves to get the deals just kind of rolling in leads, from what we can see, out early reportings, this looks like a hot product, price wise, service level wise. What incentives do you guys start thinking about and joint marketing, you mentioned co-sell earlier in pipeline, so I was kind of owning in on that piece. >> Sure and yes, and then to follow along with our partner certification program, we do incentivize our partners there. If they have a certain number certified, their discount increases. So that's part of it. We have our deal registration program that increases discount as well. And then we do have some partner incentives that are wrapped around meeting setting, and moving opportunities along to proof of value. >> Got to love the education driving value. I have to ask you, so you do, you've been around the industry, you've seen the channel relationships out there. You've seen companies, old school, new school, you know, Horizon3.ai is kind of like that new school, very cloud specific, a lot of leverage with, well, you mentioned AWS and all the clouds. Why is the company so hot right now? Why did you join them? And what's, why are people attracted to this company? What's the attraction, what's the vibe? What do you see and what do you, what did you see in this company? >> Well, this is just, you know, like I said, it's very disruptive. It's really in high demand right now. And just because it's new to market and a newer technology, so we are, we can collaborate with a manual pen tester. We can, you know, we can allow our customers to run their pen test with no specialty teams. And then, so we, and like, you know, like I said, we can allow, our partners can actually build businesses, profitable businesses, so we can, they can use our product to increase their services revenue and build their business model, you know, around, around our services. >> What's interesting about the pen testing is that it's very expensive and time consuming. And the people who do them are very talented people that could be working on really bigger things in the- >> Absolutely. >> In the customers. So bringing this into the channel allows them, if you look at the price dealt between a pen test and then what you guys are offering. I mean, that's a huge margin gap between street price of say today's pen test and what you guys offer. When you show people that, do they fall, do they say too good to be true? I mean, what are some of the things that people say when you kind of show 'em that? Are they like scratch their head, like, come on, what's the catch here? >> Right, so the cost savings is a huge, is huge for us. And then also, you know, like I said, working as a force multiplier with a pen testing company that offers the services and so they can do their annual manual pen test that may be required around compliance regulations. And then we can act as the continuous verification of their security, you know, that they can run weekly. And so it's just, you know, it's just an addition to what they're offering already and an expansion. >> So, Jennifer, thanks for coming on theCUBE, really appreciate you coming on, sharing the insights on the channel. What's next? What can we expect from the channel group? What are you thinking, what's going on? >> Right, so we're really looking to expand our channel footprint and very strategically, we've got some big plans for Horizon3.ai. >> Awesome, well, thanks for coming on. Really appreciate it, you're watching theCUBE, the leader in high tech enterprise coverage. (upbeat music)

Published Date : Sep 27 2022

SUMMARY :

Welcome back everyone to theCUBE What are you guys doing? like you said, our now on the go to market change? And so we offer a So I have to ask you about the program. And so it's expected to grow that you guys see emerging And of course we go to market How are you guys doing the whole training, And so this is at no cost to our partners What incentives do you And then we do have new school, you know, And then, so we, and like, you know, And the people who do them and what you guys offer. And then also, you know, like I said, really appreciate you coming on, really looking to expand the leader in high tech

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
JenniferPERSON

0.99+

Jennifer LeePERSON

0.99+

AWSORGANIZATION

0.99+

John FurrierPERSON

0.99+

2027DATE

0.99+

2.7 billionQUANTITY

0.99+

second partQUANTITY

0.99+

2 key elementsQUANTITY

0.99+

todayDATE

0.98+

bothQUANTITY

0.98+

oneQUANTITY

0.96+

theCUBEORGANIZATION

0.96+

Horizon3.aiTITLE

0.89+

node zeroTITLE

0.83+

Horizon3.ai Partner ProgramTITLE

0.76+

first commitmentQUANTITY

0.75+

first commitmentQUANTITY

0.75+

Horizon3.aiORGANIZATION

0.73+

Chris Hill, Horizon3.ai | Horizon3.ai Partner Program Expands Internationally


 

>>Welcome back everyone to the Cube and Horizon three.ai special presentation. I'm John Furrier, host of the Cube. We with Chris Hill, Sector head for strategic accounts and federal@horizonthree.ai. Great innovative company. Chris, great to see you. Thanks for coming on the Cube. >>Yeah, like I said, you know, great to meet you John. Long time listener. First time call. So excited to be here with >>You guys. Yeah, we were talking before camera. You had Splunk back in 2013 and I think 2012 was our first splunk.com. Yep. And boy man, you know, talk about being in the right place at the right time. Now we're at another inflection point and Splunk continues to be relevant and continuing to have that data driving security and that interplay. And your ceo, former CTO of Splunk as well at Horizons Neha, who's been on before. Really innovative product you guys have, but you know, Yeah, don't wait for a brief to find out if you're locking the right data. This is the topic of this thread. Splunk is very much part of this new international expansion announcement with you guys. Tell us what are some of the challenges that you see where this is relevant for the Splunk and the Horizon AI as you guys expand Node zero out internationally? >>Yeah, well so across, so you know, my role within Splunk was working with our most strategic accounts. And so I look back to 2013 and I think about the sales process like working with, with our small customers. You know, it was, it was still very siloed back then. Like I was selling to an IT team that was either using us for IT operations. We generally would always even say, yeah, although we do security, we weren't really designed for it. We're a log management tool. And you know, we, and I'm sure you remember back then John, we were like sort of stepping into the security space and in the public sector domain that I was in, you know, security was 70% of what we did. When I look back to sort of the transformation that I was, was witnessing in that digital transformation, you know when I, you look at like 2019 to today, you look at how the IT team and the security teams are, have been forced to break down those barriers that they used to sort of be silo away, would not communicate one, you know, the security guys would be like, Oh this is my BA box it, you're not allowed in today. >>You can't get away with that. And I think that the value that we bring to, you know, and of course Splunk has been a huge leader in that space and continues to do innovation across the board. But I think what we've we're seeing in the space that I was talking with Patrick Kauflin, the SVP of security markets about this, is that, you know, what we've been able to do with Splunk is build a purpose built solution that allows Splunk to eat more data. So Splunk itself, as you well know, it's an ingest engine, right? So the great reason people bought it was you could build these really fast dashboards and grab intelligence out of it, but without data it doesn't do anything, right? So how do you drive and how do you bring more data in? And most importantly from a customer perspective, how do you bring the right data in? >>And so if you think about what node zero and what we're doing in a Horizon three is that, sure we do pen testing, but because we're an autonomous pen testing tool, we do it continuously. So this whole thought of being like, Oh, crud like my customers, Oh yeah, we got a pen test coming up, it's gonna be six weeks. The wait. Oh yeah. You know, and everyone's gonna sit on their hands, Call me back in two months, Chris, we'll talk to you then. Right? Not, not a real efficient way to test your environment and shoot, we, we saw that with Uber this week. Right? You know, and that's a case where we could have helped. >>Well just real quick, explain the Uber thing cause it was a contractor. Just give a quick highlight of what happened so you can connect the >>Dots. Yeah, no problem. So there it was, I think it was one of those, you know, games where they would try and test an environment. And what the pen tester did was he kept on calling them MFA guys being like, I need to reset my password re to set my password. And eventually the customer service guy said, Okay, I'm resetting it. Once he had reset and bypassed the multifactor authentication, he then was able to get in and get access to the domain area that he was in or the, not the domain, but he was able to gain access to a partial part of the network. He then paralleled over to what would I assume is like a VA VMware or some virtual machine that had notes that had all of the credentials for logging into various domains. And so within minutes they had access. And that's the sort of stuff that we do under, you know, a lot of these tools. >>Like not, and I'm not, you know, you think about the cacophony of tools that are out there in a CTA orchestra architecture, right? I'm gonna get like a Zscaler, I'm gonna have Okta, I'm gonna have a Splunk, I'm gonna do this sore system. I mean, I don't mean to name names, we're gonna have crowd strike or, or Sentinel one in there. It's just, it's a cacophony of things that don't work together. They weren't designed work together. And so we have seen so many times in our business through our customer support and just working with customers when we do their pen test, that there will be 5,000 servers out there. Three are misconfigured. Those three misconfigurations will create the open door. Cause remember the hacker only needs to be right once, the defender needs to be right all the time. And that's the challenge. And so that's why I'm really passionate about what we're doing here at Horizon three. I see this my digital transformation, migration and security going on, which we're at the tip of the sp, it's why I joined say Hall coming on this journey and just super excited about where the path's going and super excited about the relationship with Splunk. I get into more details on some of the specifics of that. But you know, >>I mean, well you're nailing, I mean we've been doing a lot of things around super cloud and this next gen environment, we're calling it NextGen. You're really seeing DevOps, obviously Dev SecOps has, has already won the IT role has moved to the developer shift left as an indicator of that. It's one of the many examples, higher velocity code software supply chain. You hear these things. That means that it is now in the developer hands, it is replaced by the new ops, data ops teams and security where there's a lot of horizontal thinking. To your point about access, there's no more perimeter. So >>That there is no perimeter. >>Huge. A hundred percent right, is really right on. I don't think it's one time, you know, to get in there. Once you're in, then you can hang out, move around, move laterally. Big problem. Okay, so we get that. Now, the challenges for these teams as they are transitioning organizationally, how do they figure out what to do? Okay, this is the next step. They already have Splunk, so now they're kind of in transition while protecting for a hundred percent ratio of success. So how would you look at that and describe the challenges? What do they do? What is, what are the teams facing with their data and what's next? What do they, what do they, what action do they take? >>So let's do some vernacular that folks will know. So if I think about dev sec ops, right? We both know what that means, that I'm gonna build security into the app, but no one really talks about SEC DevOps, right? How am I building security around the perimeter of what's going inside my ecosystem and what are they doing? And so if you think about what we're able to do with somebody like Splunk is we could pen test the entire environment from soup to nuts, right? So I'm gonna test the end points through to it. So I'm gonna look for misconfigurations, I'm gonna, and I'm gonna look for credential exposed credentials. You know, I'm gonna look for anything I can in the environment. Again, I'm gonna do it at at light speed. And, and what we're, what we're doing for that SEC dev space is to, you know, did you detect that we were in your environment? >>So did we alert Splunk or the SIM that there's someone in the environment laterally moving around? Did they, more importantly, did they log us into their environment? And when did they detect that log to trigger that log? Did they alert on us? And then finally, most importantly, for every CSO out there is gonna be did they stop us? And so that's how we, we, we do this in, I think you, when speaking with Stay Hall, before, you know, we've come up with this boils U Loop, but we call it fine fix verify. So what we do is we go in is we act as the attacker, right? We act in a production environment. So we're not gonna be, we're a passive attacker, but we will go in un credentialed UN agents. But we have to assume, have an assumed breach model, which means we're gonna put a Docker container in your environment and then we're going to fingerprint the environment. >>So we're gonna go out and do an asset survey. Now that's something that's not something that Splunk does super well, you know, so can Splunk see all the assets, do the same assets marry up? We're gonna log all that data and think then put load that into the Splunk sim or the smoke logging tools just to have it in enterprise, right? That's an immediate future ad that they've got. And then we've got the fix. So once we've completed our pen test, we are then gonna generate a report and we could talk about about these in a little bit later. But the reports will show an executive summary the assets that we found, which would be your asset discovery aspect of that, a fixed report. And the fixed report I think is probably the most important one. It will go down and identify what we did, how we did it, and then how to fix that. >>And then from that, the pen tester or the organization should fix those. Then they go back and run another test. And then they validate through like a change detection environment to see, hey, did those fixes taste, play take place? And you know, SNA Hall, when he was the CTO of JS o, he shared with me a number of times about, he's like, Man, there would be 15 more items on next week's punch sheet that we didn't know about. And it's, and it has to do with how we, you know, how they were prioritizing the CVEs and whatnot because they would take all CVS was critical or non-critical. And it's like we are able to create context in that environment that feeds better information into Splunk and whatnot. That >>Was a lot. That brings, that brings up the, the efficiency for Splunk specifically. The teams out there. By the way, the burnout thing is real. I mean, this whole, I just finished my list and I got 15 more or whatever the list just can, keeps, keeps growing. How did Node zero specifically help Splunk teams be more efficient? Now that's the question I want to get at, because this seems like a very scalable way for Splunk customers and teams, service teams to be more efficient. So the question is, how does Node zero help make Splunk specifically their service teams be more efficient? >>So to, so today in our early interactions with building Splunk customers, what we've seen are five things, and I'll start with sort of identifying the blind spots, right? So kind of what I just talked about with you. Did we detect, did we log, did we alert? Did they stop node zero, right? And so I would, I put that at, you know, a a a more layman's third grade term. And if I was gonna beat a fifth grader at this game would be, we can be the sparring partner for a Splunk enterprise customer, a Splunk essentials customer, someone using Splunk soar, or even just an enterprise Splunk customer that may be a small shop with three people and, and just wants to know where am I exposed. So by creating and generating these reports and then having the API that actually generates the dashboard, they can take all of these events that we've logged and log them in. >>And then where that then comes in is number two is how do we prioritize those logs, right? So how do we create visibility to logs that are, have critical impacts? And again, as I mentioned earlier, not all CVEs are high impact regard and also not all are low, right? So if you daisy chain a bunch of low CVEs together, boom, I've got a mission critical AP CVE that needs to be fixed now, such as a credential moving to an NT box that's got a text file with a bunch of passwords on it, that would be very bad. And then third would be verifying that you have all of the hosts. So one of the things that Splunk's not particularly great at, and they, they themselves, they don't do asset discovery. So do what assets do we see and what are they logging from that? And then for, from, for every event that they are able to identify the, one of the cool things that we can do is actually create this low-code, no-code environment. >>So they could let, you know, float customers can use Splunk. So to actually triage events and prioritize that events or where they're being routed within it to optimize the SOX team time to market or time to triage any given event. Obviously reducing mtr. And then finally, I think one of the neatest things that we'll be seeing us develop is our ability to build glass tables. So behind me you'll see one of our triage events and how we build a lock Lockheed Martin kill chain on that with a glass table, which is very familiar to this Splunk community. We're going to have the ability, not too distant future to allow people to search, observe on those IOCs. And if people aren't familiar with an ioc, it's an incident of compromise. So that's a vector that we want to drill into. And of course who's better at drilling in into data and Splunk. >>Yeah, this is a critical, this is awesome synergy there. I mean I can see a Splunk customer going, Man, this just gives me so much more capability. Action actionability. And also real understanding, and I think this is what I wanna dig into, if you don't mind understanding that critical impact, okay. Is kind of where I see this coming. I got the data, data ingest now data's data. But the question is what not to log, You know, where are things misconfigured? These are critical questions. So can you talk about what it means to understand critical impact? >>Yeah, so I think, you know, going back to those things that I just spoke about, a lot of those CVEs where you'll see low, low, low and then you daisy chain together and you're suddenly like, oh, this is high now. But then to your other impact of like if you're a, if you're a a Splunk customer, you know, and I had, I had several of them, I had one customer that, you know, terabytes of McAfee data being brought in and it was like, all right, there's a lot of other data that you probably also wanna bring, but they could only afford, wanted to do certain data sets because that's, and they didn't know how to prioritize or filter those data sets. And so we provide that opportunity to say, Hey, these are the critical ones to bring in. But there's also the ones that you don't necessarily need to bring in because low CVE in this case really does mean low cve. >>Like an ILO server would be one that, that's the print server where the, your admin credentials are on, on like a, a printer. And so there will be credentials on that. That's something that a hacker might go in to look at. So although the CVE on it is low, if you daisy chain was something that's able to get into that, you might say, ah, that's high. And we would then potentially rank it giving our AI logic to say that's a moderate. So put it on the scale and we prioritize though, versus a, a vulner review scanner's just gonna give you a bunch of CVEs and good luck. >>And translating that if I, if I can and tell me if I'm wrong, that kind of speaks to that whole lateral movement. That's it. Challenge, right? Print server, great example, look stupid low end, who's gonna wanna deal with the print server? Oh, but it's connected into a critical system. There's a path. Is that kind of what you're getting at? >>Yeah, I used daisy chain. I think that's from the community they came from. But it's, it's just a lateral movement. It's exactly what they're doing. And those low level, low critical lateral movements is where the hackers are getting in. Right? So that's what the beauty thing about the, the Uber example is that who would've thought, you know, I've got my multifactor authentication going in a human made a mistake. We can't, we can't not expect humans to make mistakes. Were fall, were fallible, right? Yeah. The reality is is once they were in the environment, they could have protected themselves by running enough pen tests to know that they had certain exposed credentials that would've stopped the breach. Yeah. And they did not, had not done that in their environment. And I'm not poking. Yeah, >>They put it's interesting trend though. I mean it's obvious if sometimes those low end items are also not protected well. So it's easy to get at from a hacker standpoint, but also the people in charge of them can be fished easily or spear fished because they're not paying attention. Cause they don't have to. No one ever told them, Hey, be careful of what you collect. >>Yeah. For the community that I came from, John, that's exactly how they, they would meet you at a, an international event introduce themselves as a graduate student. These are national actor states. Would you mind reviewing my thesis on such and such? And I was at Adobe at the time though I was working on this and start off, you get the pdf, they opened the PDF and whoever that customer was launches, and I don't know if you remember back in like 2002, 2008 time frame, there was a lot of issues around IP being by a nation state being stolen from the United States and that's exactly how they did it. And John, that's >>Or LinkedIn. Hey I wanna get a joke, we wanna hire you double the salary. Oh I'm gonna click on that for sure. You know? Yeah, >>Right. Exactly. Yeah. The one thing I would say to you is like when we look at like sort of, you know, cuz I think we did 10,000 pen test last year is it's probably over that now, you know, we have these sort of top 10 ways that we think then fine people coming into the environment. The funniest thing is that only one of them is a, a CVE related vulnerability. Like, you know, you guys know what they are, right? So it's it, but it's, it's like 2% of the attacks are occurring through the CVEs, but yet there's all that attention spent to that. Yeah. And very little attention spent to this pen testing side. Yeah. Which is sort of this continuous threat, you know, monitoring space and, and, and this vulnerability space where I think we play such an important role and I'm so excited to be a part of the tip of the spear on this one. >>Yeah. I'm old enough to know the movie sneakers, which I love as a, you know, watching that movie, you know, professional hackers are testing, testing, always testing the environment. I love this. I gotta ask you, as we kind of wrap up here, Chris, if you don't mind the benefits to team professional services from this alliance, big news Splunk and you guys work well together. We see that clearly. What are, what other benefits do professional services teams see from the Splunk and Horizon three AI alliance? >>So if you're a, I think for, from our, our, from both of our partners as we bring these guys together and many of them already are the same partner, right? Is that first off, the licensing model is probably one of the key areas that we really excel at. So if you're an end user, you can buy for the enterprise by the enter of IP addresses you're using. But if you're a partner working with this, there's solution ways that you can go in and we'll license as to MSPs and what that business model on our MSPs looks like. But the unique thing that we do here is this c plus license. And so the Consulting Plus license allows like a, somebody a small to midsize to some very large, you know, Fortune 100, you know, consulting firms uses by buying into a license called Consulting Plus where they can have unlimited access to as many ips as they want. >>But you can only run one test at a time. And as you can imagine when we're going and hacking passwords and checking hashes and decrypting hashes, that can take a while. So, but for the right customer, it's, it's a perfect tool. And so I I'm so excited about our ability to go to market with our partners so that we underhand to sell, understand how not to just sell too or not tell just to sell through, but we know how to sell with them as a good vendor partner. I think that that's one thing that we've done a really good job building bringing into market. >>Yeah. I think also the Splunk has had great success how they've enabled partners and professional services. Absolutely. They've, you know, the services that layer on top of Splunk are multifold tons of great benefits. So you guys vector right into that ride, that wave with >>Friction. And, and the cool thing is that in, you know, in one of our reports, which could be totally customized with someone else's logo, we're going to generate, you know, so I, I used to work at another organization, it wasn't Splunk, but we, we did, you know, pen testing as a, as a for, for customers and my pen testers would come on site, they, they do the engagement and they would leave. And then another really, someone would be, oh shoot, we got another sector that was breached and they'd call you back, you know, four weeks later. And so by August our entire pen testings teams would be sold out and it would be like, wow. And in March maybe, and they'd like, No, no, no, I gotta breach now. And, and, and then when they do go in, they go through, do the pen test and they hand over a PDF and they pat you on the back and say, there's where your problems are, you need to fix it. And the reality is, is that what we're gonna generate completely autonomously with no human interaction is we're gonna go and find all the permutations that anything we found and the fix for those permutations and then once you fixed everything, you just go back and run another pen test. Yeah. It's, you know, for what people pay for one pen test, they could have a tool that does that. Every, every pat patch on Tuesday pen test on Wednesday, you know, triage throughout the week, >>Green, yellow, red. I wanted to see colors show me green, green is good, right? Not red. >>And once CIO doesn't want, who doesn't want that dashboard, right? It's, it's, it is exactly it. And we can help bring, I think that, you know, I'm really excited about helping drive this with the Splunk team cuz they get that, they understand that it's the green, yellow, red dashboard and, and how do we help them find more green so that the other guys are >>In Yeah. And get in the data and do the right thing and be efficient with how you use the data, Know what to look at. So many things to pay attention to, you know, the combination of both and then, then go to market strategy. Real brilliant. Congratulations Chris. Thanks for coming on and sharing this news with the detail around this Splunk in action around the alliance. Thanks for sharing, >>John. My pleasure. Thanks. Look forward to seeing you soon. >>All right, great. We'll follow up and do another segment on DevOps and IT and security teams as the new new ops, but, and Super cloud, a bunch of other stuff. So thanks for coming on. And our next segment, the CEO of Verizon, three AA, will break down all the new news for us here on the cube. You're watching the cube, the leader in high tech enterprise coverage.

Published Date : Sep 27 2022

SUMMARY :

I'm John Furrier, host of the Cube. Yeah, like I said, you know, great to meet you John. And boy man, you know, talk about being in the right place at the right time. the security space and in the public sector domain that I was in, you know, security was 70% And I think that the value that we bring to, you know, And so if you think about what node zero and what we're doing in a Horizon three is that, Just give a quick highlight of what happened so you And that's the sort of stuff that we do under, you know, a lot of these tools. Like not, and I'm not, you know, you think about the cacophony of tools that are That means that it is now in the developer hands, So how would you look at that and And so if you think about what we're able to do with before, you know, we've come up with this boils U Loop, but we call it fine fix verify. you know, so can Splunk see all the assets, do the same assets marry up? And you know, SNA Hall, when he was the CTO of JS o, So the question is, And so I would, I put that at, you know, a a a more layman's third grade term. And then third would be verifying that you have all of the hosts. So they could let, you know, float customers can use Splunk. So can you talk about what Yeah, so I think, you know, going back to those things that I just spoke about, a lot of those CVEs So put it on the scale and we prioritize though, versus a, a vulner review scanner's just gonna give you a bunch of Is that kind of what you're getting at? is that who would've thought, you know, I've got my multifactor authentication going in a Hey, be careful of what you collect. time though I was working on this and start off, you get the pdf, they opened the PDF and whoever that customer was Oh I'm gonna click on that for sure. Which is sort of this continuous threat, you know, monitoring space and, services from this alliance, big news Splunk and you guys work well together. And so the Consulting Plus license allows like a, somebody a small to midsize to And as you can imagine when we're going and hacking passwords They've, you know, the services that layer on top of Splunk are multifold And, and the cool thing is that in, you know, in one of our reports, which could be totally customized I wanted to see colors show me green, green is good, And we can help bring, I think that, you know, I'm really excited about helping drive this with the Splunk team cuz So many things to pay attention to, you know, the combination of both and then, then go to market strategy. Look forward to seeing you soon. And our next segment, the CEO of Verizon,

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
ChrisPERSON

0.99+

JohnPERSON

0.99+

Patrick KauflinPERSON

0.99+

2013DATE

0.99+

70%QUANTITY

0.99+

MarchDATE

0.99+

Chris HillPERSON

0.99+

VerizonORGANIZATION

0.99+

2019DATE

0.99+

SplunkORGANIZATION

0.99+

McAfeeORGANIZATION

0.99+

John FurrierPERSON

0.99+

WednesdayDATE

0.99+

UberORGANIZATION

0.99+

six weeksQUANTITY

0.99+

last yearDATE

0.99+

AdobeORGANIZATION

0.99+

three peopleQUANTITY

0.99+

5,000 serversQUANTITY

0.99+

2008DATE

0.99+

2002DATE

0.99+

TuesdayDATE

0.99+

bothQUANTITY

0.99+

Horizons NehaORGANIZATION

0.99+

four weeks laterDATE

0.99+

LinkedInORGANIZATION

0.99+

next weekDATE

0.99+

todayDATE

0.99+

United StatesLOCATION

0.99+

oneQUANTITY

0.99+

AugustDATE

0.99+

firstQUANTITY

0.99+

2012DATE

0.99+

2%QUANTITY

0.98+

thirdQUANTITY

0.98+

one pen testQUANTITY

0.98+

one timeQUANTITY

0.98+

this weekDATE

0.98+

one testQUANTITY

0.98+

hundred percentQUANTITY

0.98+

NextGenORGANIZATION

0.98+

15 more itemsQUANTITY

0.97+

two monthsQUANTITY

0.97+

First timeQUANTITY

0.97+

five thingsQUANTITY

0.96+

SECORGANIZATION

0.96+

one customerQUANTITY

0.96+

Lockheed MartinORGANIZATION

0.96+

15 moreQUANTITY

0.95+

one thingQUANTITY

0.95+

hundred percentQUANTITY

0.95+

Cloud native at scale: A Supercloud conversation with Madhura Maskasky, Platform9


 

(upbeat music) >> Hello, and welcome to theCUBE here in Palo Alto, California, for a special program on Cloud Native at Scale, Enabling Next Generation Cloud or Supercloud for Modern Application Cloud Native Developers. I'm John Furrier, host of theCUBE. My pleasure to have here, me Madhura Maskasky, Co-founder and VP of Product at Platform9. Thanks for coming in today for this cloud native at scale conversation. >> Thank you for having me. >> So cloud native at scale, something that we're talking about because we're seeing the next level of mainstream success of containers, Kubernetes and cloud native develop, basically DevOps in the CI/CD pipeline. It's changing the landscape of infrastructure as code. It's accelerating the value proposition. And the Supercloud as we call it, has been getting a lot of traction because this next generation cloud is looking a lot different, but kind of the same as the first generation. What's your view on Supercloud as it fits to cloud native, it scales up. >> Yeah, you know, I think what's interesting. And I think the reason why Supercloud is a really good and a really fit term for this. And I think I know my CEO was chatting with you as well, and he was mentioning this as well, but I think there needs to be a different term than just multicloud or cloud. And the reason is because as cloud native and cloud deployments have scaled, I think we've reached a point now where instead of having the traditional data center style model, where you have a few large distributions of infrastructure and workload at a few locations, I think the model's kind of flipped around, right? Where you have a large number of micro-sites. These micro-sites could be your public cloud deployment, your private OnPrem infrastructure deployment, or it could be your Edge environment, right? And every single enterprise, every single industry is moving in that direction. And so you got to refer that with a terminology that indicates the scale and complexity of it. And so I think Supercloud is an appropriate term for that. >> So you brought a couple things I want to dig into. You mentioned Edge nodes. We're seeing not only Edge nodes being the next kind of area of innovation, mainly because it's just popping up everywhere. And that's just the beginning, wouldn't even know what's around the corner. You got buildings, you got IoT, OT and IT kind of coming together, but you also got this idea of regions. Global infrastructure is a big part of it. I just saw some news around CloudFlare shutting down a site here. There's policies being made at scale, these new challenges there. Can you share, because you got to have Edge. So hybrid cloud is a winning formula. Everybody knows that, it's a steady state. But across multiple clouds brings in this new un-engineered area yet, It hasn't been done yet, Spanning Clouds. People say they're doing it, but you start to see the toe in the water. It's happening, it's going to happen. It's only going to get accelerated with the Edge and beyond globally. So I have to ask you, what is the technical challenges in doing this? Because there's something, business consequences as well, but there are technical challenges. Can you share your view on what the technical challenges are for the Supercloud across multiple edges and regions? >> Yeah, absolutely. So I think, you know, in the context of this term of Supercloud, I think it's sometimes easier to visualize things in terms of two axis, right? I think on one end you can think of the scale in terms of just pure number of nodes that you have deployed, a number of clusters in the Kubernetes space. And then on the other axis, you would have your distribution factor, right? Which is, do you have these tens of thousands of nodes in one site, or do you have them distributed across tens of thousands of sites, with one node at each site, right? And if you have just one flare of this, there is enough complexity, but potentially manageable. But when you are expanding on both these axis, you really get to a point where that scale really needs some well thought out, well structured solutions to address it, right? A combination of homegrown tooling, along with your, you know, favorite distribution of Kubernetes is not a strategy that can help you in this environment. It may help you when you have one of this, or when your scale is not at the level. >> Can you scope the complexity? Because, I mean, I hear a lot of moving parts going on there. The technology is also getting better. We're seeing cloud native become successful. There's a lot to configure. There's lot to install. Can you scope the scale of the problem because we're about at scale challenges here. >> Yeah absolutely, and I think I like to call it, you know, the problem that the scale creates, there's various problems. But I think one problem, one way to think about it is it works on my cluster problem, right? So, you know, I come from engineering background and there's a famous saying between engineers and QA, and the support folks, right. Which is, it works on my laptop, which is I tested this change, everything was fantastic. It worked flawlessly on my machine. On production, it's not working. The exact same problem now happens in these distributed environments, but at massive scale, right. Which is that, you know, developers test their applications, et cetera within these sanctity of their sandbox environments. But once you expose that change in the wild world of your production deployment, right. And the production deployment could be going at the radio cell tower at the Edge location where a cluster is running there. Or it could be sending, you know, these applications and having them run at my customer site, where they might not have configured that cluster exactly the same way as I configured it. Or they configured the cluster right. But maybe they didn't deploy the security policies, or they didn't deploy the other infrastructure plugins that my app relies on. All of these various factors add their own layer of complexity. And there really isn't a simple way to solve that today. And that is just, you know, one example of an issue that happens. I think another, you know, whole new ballgame of issues come in the context of security, right? Because when you are deploying applications at scale, in a distributed manner, you got to make sure someone's job is on the line to ensure that the right security policies are enforced regardless of that scale factor. So I think that's another example of problems that occur. >> Okay, so I have to ask about scale, because there are a lot of multiple steps involved when you see the success of cloud native, you know, you see some experimentation, they set up a cluster, say it's containers and Kubernetes. And then you say, okay, we got this. We configure it. And then they do it again, and again, they call it day two. Some people call it day one, day two operation, whatever you call it. Once you get past the first initial thing, then you got to scale it. Then you're seeing security breaches. You're seeing configuration errors. This seems to be where the hotspot is, in when companies transition from, I got this, to oh no, it's harder than I thought at scale. Can you share your reaction to that and how you see this playing out? >> Yeah, so, you know, I think it's interesting. There's multiple problems that occur when the two factors of scale, as we talked about, start expanding. I think one of them is what I like to call the, it works fine on my cluster problem, which is back in, when I was a developer, we used to call this, it works on my laptop problem. Which is, you know, you have your perfectly written code that is operating just fine on your machine, your sandbox environment. But the moment it runs production, it comes back with P 0s and POS from support teams, et cetera. And those issues can be really difficult to try us, right. And so in the Kubernetes environment, this problem kind of multi-folds. It goes, you know, escalates to a higher degree because you have your sandbox developer environments, they have their clusters, and things work perfectly fine in those clusters, because these clusters are typically handcrafted or a combination of some scripting and handcrafting. And so as you give that change to then run at your production Edge location, like say your radial cell power site, or you hand it over to a customer to run it on their cluster, they might not have configured that cluster exactly how you did, or they might not have configured some of the infrastructure plugins. And so things don't work. And when things don't work, triaging them becomes nightmarishly hard, right? It's just one of the examples of the problem. Another whole bucket of issues is security, which is, as you have these distributed clusters at scale. You got to ensure someone's job is on the line to make sure that the security policies are configured properly. >> So this is a huge problem. I love that comment. That's not happening on my system. It's the classic, you know, debugging mentality. But at scale, it's hard to do that with error prone. I can see that being a problem. And you guys have a solution you're launching, can you share what Arlon is? This new product? What is it all about? Talk about this new introduction. >> Yeah absolutely, I'm very, very excited. You know, it's one of the projects that we've been working on for some time now. Because we are very passionate about this problem and just solving problems at scale in OnPrem or in the cloud or at Edge environments. And what Arlon is, it's an open source project, and it is a tool, a Kubernetes native tool for complete end-to-end management of not just your clusters, but your clusters, all of the infrastructure that goes within and along the sites of those clusters, security policies, your middleware plugins, and finally your applications. So what Arlon lets you do in a nutshell is in a declarative way, it lets you handle the configuration and management of all of these components in at scale. >> So what's the elevator pitch simply put for what this solves in terms of the chaos you guys are reigning in, what's the bumper sticker. What did it do? >> There's a perfect analogy that I love to reference in this context, which is, think of your assembly line, you know, in a traditional, let's say an auto manufacturing factory, or et cetera, and the level of efficiency at scale that that assembly line brings, right. Arlon, and if you look at the logo we've designed, it's this funny little robot. And it's because when we think of Arlon, we think of these enterprise large scale environments, you know, sprawling at scale, creating chaos, because there isn't necessarily a well thought through, well-structured solution that's similar to an assembly line, which is taking each component, you know, addressing them, manufacturing, processing them in a standardized way, then handing to the next stage where again, it gets processed in a standardized way. And that's what Arlon really does. That's like the elevator pitch. If you have problems of scale, of managing your infrastructure, you know, that is distributed, Arlon brings the assembly line level of efficiency and consistency for those problems. >> So keeping it smooth, the assembly line, things are flowing, see CI/CD pipe-lining. So that's what you're trying to simplify that OPS piece for the developer. I mean, it's not really OPS, it's their OPS, it's coding. >> Yeah, not just developer the OPS, the operations folks as well, right. Because developers, you know, developers are responsible for one picture of that layer, which is my apps. And then maybe that middleware of applications that they interface with. But then they hand it over to someone else who's then responsible to ensure that these apps are secured properly, that they are logging, logs are being collected properly. Monitoring and observability is integrated. And so it solves problems for both those teams. >> Yeah, it's DevOps. So the DevOps is the cloud native developer. The OPS team have to kind of set policies. Is that where the declarative piece comes in? Is that why that's important? >> Absolutely, yeah. And you know, Kubernetes really introduced or elevated this declarative management, right. Because you know, Kubernetes clusters are you know your specifications of components that go in Kubernetes are defined in a declarative way. And Kubernetes always keeps that state consistent with your defined state. But when you go outside of that world of a single cluster, and when you actually talk about defining the clusters or defining everything that's around it, there really isn't a solution that does that today. And so Arlon addresses that problem at the heart of it. And it does that using existing open source, well known solutions. >> And do I want to get into the benefits, what's in it for me as the customer, developer, but I want to finish this out real quick and get your thoughts. You mentioned open source. Why open source? What's the current state of the product? You run the product group over there at Platform9. Is it open source, and you guys have a product that's commercial? Can you explain the open source dynamic? And first of all, why open source? And what is the consumption? I mean open source is great. People want opensource, they can download and look up the code, but maybe want to buy the commercial. So I'm assuming you have that thought through. Can you share open source and commercial relationship? >> Yeah, I think, you know, starting with why opensource? I think it's, you know, we, as a company, we have one of the things that's absolutely critical to us is that we take mainstream open source technologies, components, and then we make them available to our customers at scale through either a SaaS model or OnPrem model, right. But so as we are a company or startup, or a company that benefits, you know, in a massive way by this open source economy, it's only right I think in my mind that we do are part of the duty, right. And contribute back to the community that feeds us. And so, you know, we have always held that strongly as one of our principles. And we have, you know, created and built independent products, starting all the way with Fission, which was a serverless product that we had built, to various other examples that I can give. But that's one of the main reasons why open source. And also open source because we want the community to really first-hand engage with us on this problem, which is very difficult to achieve if your product is behind a wall, you know, behind a black box. >> Well, and that's what the developers want too. What we're seeing in reporting with Supercloud is the new model of consumption is I want to look at the code and see what's in there. >> That's right. >> And then also if I want to use it, I'll do it, great. That's open source, that's the value. But then at the end of the day, if I want to move fast, that's when people buy in. So it's a new kind of freemium, I guess, business model. I guess that's the way it is, but that's the benefit of open source. This is why standards and open source is growing so fast. You have that confluence of, you know, a way for developers to try before they buy, but also actually kind of date the application, if you will. We, you know, Adrian Kakroff uses the dating metaphor, you know, hey, you know, I want to check it out first before I get married. And that's what open source is. So this is the new, this is how people are selling. This is not just open source. This is how companies are selling. >> Absolutely, yeah, yeah. You know, I think two things, I think one is just, you know, this cloud native space is so vast that if you're building a cluster solution, sometimes there's also a risk that it may not apply to every single enterprises use cases. And so having it open source gives them an opportunity to extend it, expand it, to make it proper to their use case, if they choose to do so, right. But at the same time, what's also critical to us, is we are able to provide a supported version of it, with an SLA that's backed by us, a SaaS-hosted version of it as well for those customers who choose to go that route. You know, once they have used the open source version and loved it and want to take it at scale and in production and need a partner to collaborate with who can support them for that production environment. >> I have to ask you. Now let's get into what's in it for the customer? I'm a customer. Why should I be enthused about Arlon? What's in it for me? You know, 'cause if I'm not enthused about it, I'm not going to be confident, and it's going to be hard for me to get behind this. Can you share your enthusiastic view of, you know, why I should be enthused about Arlon, if I'm a customer. >> Yeah, absolutely. And so, and there's multiple, you know, enterprises that we talk to, many of them, are customers where this is a very kind of typical story that you will hear, which is we have a Kubernetes distribution. It could be On-Premise. It could be public cloud native Kubernetes. And then we have our CI/CD pipelines that are automating the deployment of applications, et cetera. And then there's this gray zone. And the gray zone is, well before you can, your CI/CD pipelines can deploy the apps, somebody needs to do all of their groundwork of, you know, defining those clusters, and yeah properly configuring them. And as these things start by being done hand-grown. And then as you scale, what typically enterprises would do today is they will have their homegrown DIY solutions for this. I mean, the number of folks that I talk to that have built Terraform automation, and then, you know, some of those key developers leave. So it's a typical open source, or typical, you know, DIY challenge. And the reason that they're writing it themselves is not because they want to. I mean, of course technology is always interesting to everybody, but it's because they can't find a solution that's out there that perfectly fits their problem. And so that's that pitch. I think OPS people would be delighted. The folks that we've talked, you know, spoken with have been absolutely excited and have shared that this is a major challenge we have today, because we have few hundreds of clusters on EKS, Amazon, and we want to scale them to few thousands, but we don't think we are ready to do that. And this will give us the ability to do that. >> Yeah, I think people are scared. I won't say scared, that's a bad word. Maybe I should say that they feel nervous because you know, at scale, small mistakes can become large mistakes. This is something that is concerning to enterprises. And I think this is going to come up at KubeCon this year where enterprises are going to say, okay, I need to see SLAs. I want to see track record. I want to see other companies that have used it. How would you answer that question to, or challenge, you know, hey I love this, but is there any guarantees? Is there any, what's the SLAs? I'm an enterprise, I got tight. You know, I love the open source trying to free, fast and loose, but I need hardened code. >> Yeah, absolutely. So two parts to that, right? One is Arlon leverages, existing opensource components, products that are extremely popular. Two specifically, one is Arlon uses Argo CD, which is probably one of the highest rated and used CD opensource tools that's out there, right. Created by folks that are as part of Intuit team now, you know, really brilliant team, and it's used at scale across enterprises. That's one. Second is Arlon also makes use of cluster API, CAPI, which is a Kubernetes sub-component, right for lifecycle management of clusters. So there is enough of, you know, community users, et cetera, around these two products or open source projects that will find Arlon to be right up in their alley, because they're already comfortable, familiar with Argo CD. Now Arlon just extends the scope of what Argo CD can do. And so that's one. And then the second part is going back to your point of the comfort. And that's where, you know, Platform9 has a role to play, which is when you are ready to deploy Arlon at scale, because you've been, you know playing with it in your DEV test environments, you're happy with what you get with it. Then Platform9 will stand behind it and provide that SLA. >> And what's been the reaction from customers you've talked to, Platform9 customers that are familiar with Argo, and then Arlo? What's been some of the feedback? >> Yeah, I think the feedback's been fantastic. I mean, I can give you examples of customers where you know, initially, when you're telling them about your entire portfolio of solutions, it might not strike a chord right away. But then we start talking about Arlon, and we talk about the fact that it uses Argo CD. They start opening up, they say, we have standardized on Argo, and we have built these components homegrown. We would be very interested. Can we co-develop? Does it support these use cases? So we've had that kind of validation. We've had validation all the way at the beginning of Arlon, before we even wrote a single line of code, saying this is something we plan on doing. And the customer said, if you had it today, I would've purchased it. So it's been really great validation. >> All right, so next question is what is the solution to the customer? If I asked you, look, I'm so busy. My team's overworked, I got a skills gap. I don't need another project. I'm so tied up right now, and I'm just chasing my tail. How does Platform9 help me? >> Yeah, absolutely. So I think, you know, one of the core tenants of Platform9 has always been, that we try to bring that public cloud like simplicity by hosting, you know, this and a lot of such similar tools in a SaaS hosted manner for our customers, right. So our goal behind doing that is taking away, or trying to take away all of that complexity from customer's hands and offloading it to our hands, right. And giving them that full white glove treatment as we call it. And so from a customer's perspective, one, something like Arlon will integrate with what they have, so they don't have to rip and replace anything. In fact, it will even in the next versions, it may even discover your clusters that you have today, and give you an inventory. >> So customers have clusters that are growing. That's a sign, call you guys. >> Absolutely, either they have massive, large clusters, right, that they want to split into smaller clusters, but they're not comfortable doing that today. Or they've done that already on say public cloud or otherwise. And now they have management challenges. >> So, especially operationalizing the clusters, whether they want to kind of reset everything and move things around, and reconfigure, and or scale out. >> That's right, exactly. >> And you provide that layer of policy. >> Absolutely, yes. >> That's the key value here. >> That's right. >> So policy based configuration for cluster scale up. >> Profile and policy based declarative configuration and life cycle management for clusters. >> If I asked you how this enables Supercloud, what would you say to that? >> I think this is one of the key ingredients to Supercloud, right? If you think about a Supercloud environment, there is at least few key ingredients that come to my mind that are really critical. Like they are, you know, life saving ingredients at that scale. One is having a really good strategy for managing that scale, you know, in a going back to assembly line, in a very consistent, predictable way. So that, Arlon solves. Then you need to compliment that with the right kind of observability and monitoring tools at scale, right? Because ultimately issues are going to happen, and you're going to have to figure out, you know, how to solve them fast. And Arlon, by the way also helps in that direction. But you also need observability tools. And then especially if you're running it on the public cloud, you need some cost management tools. In my mind, these three things are like the most necessary ingredients to make Supercloud successful. And you know, Arlon is one of them. >> Okay so now the next level is, okay, that makes sense is under the covers, kind of speak under the hood. How does that impact the app developers of the cloud native modern application workflows? Because the impact to me seems, the apps are going to be impacted. Are they going to be faster, stronger? I mean, what's the impact if you do all those things, as you mentioned, what's the impact of the apps? >> Yeah, the impact is that your apps are more likely to operate in production the way you expect them to, because the right checks and balances have gone through. And any discrepancies have been identified prior to those apps, prior to your customer running into them, right? Because developers run into this challenge today where there's a split responsibility, right. I'm responsible for my code. I'm responsible for some of these other plugins, but I don't own these stack end to end. I have to rely on my OPS counterpart to do their part, right. And so this really gives them the right tooling for that. >> This is actually a great kind of relevant point. You know, as cloud becomes more scalable, you're starting to see this fragmentation, gone are the days of the full stack developer, to the more specialized role. But this is a key point. And I have to ask you, because if this Arlo solution takes place, as you say, and the apps are going to do what they're designed to do, the question is what does the current pain look like? Are the apps breaking? What is the signals to the customer that they should be calling you guys up and implementing Arlo, Argo, and all the other goodness to automate, what are some of the signals? Is it downtime? Is it failed apps? Is it latency? What are some of the things that would be indications of things are effed up a little bit. >> Yeah, more frequent down times, down times that take longer to triage. And so your, you know, your mean times on resolution, et cetera, are escalating or growing larger, right? Like we have environments of customers where they have a number of folks in the field that have to take these apps, and run them at customer sites. And that's one of our partners. And they're extremely interested in this, because the rate of failures they're encountering for this, you know, the field when they're running these apps on site, because the field is automating their clusters that are running on sites using their own script. So these are the kinds of challenges. So those are the pain points, which is, you know, if you're looking to reduce your meantime to resolution. If you're looking to reduce the number of failures that occur on your production site, that's one. And second, if you're looking to manage these at scale environments with a relatively small focused nimble OPS team, which has an immediate impact on your budget. So those are the signals. >> This is the cloud native at scale situation. The innovation going on. Final thought is your reaction to the idea that if the world goes digital, which it is, and the confluence of physical and digital coming together, and cloud continues to do its thing, the company becomes the application. Not where IT used to be supporting the business, you know, the back office, and the immediate terminals and some PCs and handhelds. Now, if technology's running the business, is the business, company's the application. So it can't be down. So there's a lot of pressure on CSOs and CIOs now, and boards are saying, how is technology driving the top line revenue? That's the number one conversation. Do you see the same thing? >> Yeah, it's interesting. I think there's multiple pressures at the CSO, CIO level, right? One, is that there needs to be that visibility and clarity and guarantee almost that, you know, the technology that's going to drive your top line is going to drive that in a consistent, reliable, predictable manner. And then second, there is the constant pressure to do that while always lowering your costs of doing it, right. Especially when you're talking about, let's say retailers, or those kinds of large scale vendors, they many times make money by lowering the amount that they spend providing those goods to their end customers. So I think both those factors kind of come into play and the solution to all of them is usually in a very structured strategy around automation. >> Final question. What does cloud native at scale look like to you? If all the things happen the way we want 'em to happen, the magic wand, the magic dust, what does it look like? >> What that looks like to me is a CIO sipping at his desk on coffee. Production is running absolutely smooth. And he's running that at a nimble, nimble team size of, at the most, a handful of folks that are just looking after things, but things are just taking care of themselves. >> And the CIO doesn't exist. There's no CISO, they're at the beach. >> (laughing) Yeah. >> Madhura, thank you for coming on, sharing the cloud native at scale here on theCUBE. Thank you for your time. >> Fantastic, thanks for having me. >> Okay, I'm John Furrier here for special program presentation, special programming Cloud Native at Scale, Enabling Supercloud Modern Applications with Platform9. Thanks for watching. (upbeat music)

Published Date : Sep 20 2022

SUMMARY :

Co-founder and VP of Product at Platform9. And the Supercloud as we call it, And so you got to refer And that's just the beginning, So I think, you know, in the context Can you scope the complexity? And that is just, you know, And then you say, okay, we got this. And so as you give that change to then run It's the classic, you So what Arlon lets you do in a nutshell you guys are reigning in, Arlon, and if you look at that OPS piece for the developer. Because developers, you know, So the DevOps is the And you know, Kubernetes really introduced So I'm assuming you have or a company that benefits, you know, is the new model of consumption You have that confluence of, you know, I think one is just, you Can you share your enthusiastic view I mean, the number of folks that I talk to And I think this is going to And that's where, you know, where you know, initially, is what is the solution to the customer? clusters that you have today, That's a sign, call you guys. that they want to split operationalizing the clusters, So policy based configuration and life cycle management for clusters. for managing that scale, you know, Because the impact to me seems, the way you expect them to, and the apps are going to do for this, you know, the field that if the world goes and the solution to all of them If all the things happen the What that looks like to me And the CIO doesn't exist. Thank you for your time. for special program presentation,

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Madhura MaskaskyPERSON

0.99+

Adrian KakroffPERSON

0.99+

John FurrierPERSON

0.99+

AmazonORGANIZATION

0.99+

MadhuraPERSON

0.99+

oneQUANTITY

0.99+

Palo Alto, CaliforniaLOCATION

0.99+

second partQUANTITY

0.99+

ArlonORGANIZATION

0.99+

bothQUANTITY

0.99+

firstQUANTITY

0.99+

tens of thousands of sitesQUANTITY

0.99+

one siteQUANTITY

0.99+

secondQUANTITY

0.99+

todayDATE

0.99+

two partsQUANTITY

0.99+

two factorsQUANTITY

0.99+

one nodeQUANTITY

0.99+

TwoQUANTITY

0.99+

first generationQUANTITY

0.99+

two productsQUANTITY

0.98+

two thingsQUANTITY

0.98+

each siteQUANTITY

0.98+

one problemQUANTITY

0.98+

each componentQUANTITY

0.98+

SupercloudORGANIZATION

0.98+

SecondQUANTITY

0.98+

tens of thousands of nodesQUANTITY

0.98+

ArloORGANIZATION

0.97+

KubeConEVENT

0.97+

Platform9ORGANIZATION

0.97+

single lineQUANTITY

0.97+

one endQUANTITY

0.96+

CloudFlareTITLE

0.96+

one wayQUANTITY

0.96+

ArgoORGANIZATION

0.96+

three thingsQUANTITY

0.96+

OneQUANTITY

0.95+

KubernetesTITLE

0.94+

one flareQUANTITY

0.94+

FissionORGANIZATION

0.93+

single clusterQUANTITY

0.93+

one pictureQUANTITY

0.93+

DevOpsTITLE

0.92+

EKSORGANIZATION

0.91+

this yearDATE

0.91+

one exampleQUANTITY

0.91+

CloudTITLE

0.9+

Architecting SaaS Superclouds | Supercloud22


 

>>Welcome back to super cloud 22, our inaugural event. It's a pilot event here in the cube studios we're live and streaming virtually until we do it in person. Maybe next year. I'm John fury, host of the cube with Dave Lon two great guests, distinguished engineers managers, CTOs investors. Mariana Tessel is a CTO of Intuit ins Ray founder of vertex ventures. Both have a lot of DNA. Founder allow cloud here with mark Andre and Ben Horowitz, a variety of other great ventures you've done. And now you're an investor. Yep. Maria, you've been a seasoned CTO, VP of engineering, VMware Docker Intuit. Now thanks for joining us. >>Absolutely. >>So super cloud is a, is a thing. And apparently it's got a lot of momentum and you guys got stats over there at, at Intuit in, so you're investing and we were challenged on super cloud. Our initial thesis was you build on the clouds, get all that leverage like snowflake, you get a good differentiation and then you compete and then move to other clouds. Now it's becoming a thing where I can do this. Every enterprise could possibly do it. So I want to get your guys thoughts on what you think of super cloud concept and where are the holes in it, what needs to be defined. And so we'll start with you. You've done a lot of cloud things in your day. What >>Do you think? Yeah, it's the whole cloud journey started with a desire to consolidate and desire to actually provide uniformity and, and standards driven ways of doing things. And I think Amazon was a leader there. They helped kind of teach everybody else. You know, when I was in loud cloud, we were trying to do it with proprietary stacks just wouldn't work. But once everyone standardized upon Unix and you know, the chip sets no longer became as relevant. They did a lot of good things there, but what's happened since then is now you've got competing standards at the API layer at the interface layer no longer at the chip set layer, no longer at the operating system layer. Right? So the evolution of the, the, the battles are still there. When you talk about multicloud and super cloud, though, like one of the big things you have to keep in mind is latency is not free. Latency is very expensive and it's getting even more expensive now with, with multi-cloud. So you have to really understand where the separations of boundaries are between your data, your compute, and, and the network is just there as a facilitator to help binding compute and data. Right? And I think there's a lot of bets being made across different vendors like CloudFlare Akamai, as well as Amazon Google Microsoft in terms of how they think we should take computing either to the edge, from the core or back and forth. >>These, this is structural change. I mean, this is structural, >>It's desired by incumbents, but it's not something that I'm seeing from the consumption. I'd love to hear, hear from our end's per perspective, from a consumption point of view, like how much edge computing really matters. Right. >>Mario. >>So I think there's like, there's kind of a, a story of like two, like it's kind of, you can cut it for both edges. No, no pun intended on one end. It is really simplifying to actually go into like a single cloud and standardize on it and just have everything there. But I think what over time companies find is that they end up in multiple clouds, whether like, you know, through acquisitions or through like needing to use a service in another cloud. So you do find yourself in a situation where you have multi multi-cloud and you have to kind of work through it and understand how to make it all like work and latency is an issue, but also for many, many workloads, you can work around it and you can make it work where you have workloads that actually span multiple vendors and clouds. You know, again, having said that, I would say the world is such, that is still a simplifying assumption. When if you go to a single cloud, it's much easier to just go and, and bet on that >>Easier in terms of everything's integrated, IAS works with SAS, they solve a lot of problems. >>Correct. And you can do like for your developers, you can actually provide an environment that's super homogenous, simple. You can use services easily up and down the stack. And, you know, we, we actually made that deliberate decision. When we started migrating to the cloud at the beginning, it was like, oh, let's do like hybrid we'll, you know, make it, so it work anywhere. It was so complicated. It was not worth it. >>When was the, when did you give up, what was the moment? Was there a flash point where you said, oh, this is terrible. This is >>Dead. Yeah. When, when we started to try to make it interoperable and you just see what it requires to do that and the complexity of the architecture that it just became not worth it for the gains you have. >>So speaking obviously as a SAS provider, right. So it just doesn't, it didn't make business case sense for you guys to do that. So it was super cloud. Then an infrastructure thing we just heard from Ben wa deja VI that they're not, they're going beyond instantiating their, their data cloud. They're actually running, you know, their own little snow grid. They called it. And, and then when I asked him, well, what about latency? He said, well, we copied data over, you know, so, okay. That's you have to do, but that's a singular experience with the same governance or the same security. Just wasn't worth it for you guys is what I'm hearing. >>Correct. But again, like for some workload or for some services that we want to use, we are gonna go there and we are gonna then figure out what is the work around the latency issue, whether it's like copy or, you know, redundancy. >>Well, the question I have Dave on snowflake is maybe the question for you and in the panel is snowflake a tan expansion opportunity, or is there a technical reason to go to other clouds? >>I think they wanted to leverage the hyperscale infrastructure globally. And they said that they're out there, it's a free gift. We're gonna go take it. I, I think it started with we're on AWS. Do you think? And then we're on Azure and then we're on Google. And then they said, why don't we just connect all these and make it a singular experience? And yeah, I guess it's a TA expansion as a differentiator and it's, it adds value. Right. If I can share data across that global network, >>We have customers on Azure now, >>Right? Yeah. Yeah. Of course. >>You guys don't need to go CP. What do you think about that? >>Well, I think Snowflake's in a good position cuz they work mostly with analytical workloads and you have capacity. That's always gonna increase like no one subtracts, their analytical workload like ever, right. So there was just compounded growth is like 50% or 80% for, you know, many enterprises despite their best intentions, not to collect more data, they just can't stop doing it. So it's different than if you're like an Oracle or a transactional database where you don't have those, you know, like kind of infinite growth paths. So Snowflake's gonna continue to expand footprint their customers. They don't mind as long as you, they can figure out the, the lowest cost on denominator for, for that. >>Yeah. So it makes sense to be in all the clouds >>For them, for, for them, for sure. Yeah. >>But, but, but Oracle just announced with Microsoft what I would call super cloud, a, a cross cloud database service running on OCI and Azure with very low latency and a database that looks like a, the singular experience. Yeah. With, with a PAs layers >>That lost me after OCI that's >>Okay. You know, but that's the, that's the, the BS answer for all U VCs. The do nobody develops on Oracle? Well, it's a 240 billion market cap company. Show me who you all want be. >>We're gonna talk about SRDF and em C next, you >>All want Oracle. So there we go. You throw that into, you all want Oracle to buy your companies, your funding, you know, cause, cause we all wanna be like Oracle with that kinda cash flow. But, but anyway, >>Here's, here's one thing that I'm noticing that is gonna be really practical. I think for companies that do run SA is because like, you know, you have all these solutions, whether it's like analytics or like monitoring or logging or whatever. And each one of them is very data hungry and all of them have like SAS solutions that end up copy the data, moving data to their cloud, and then they might charge you by the size of your data. It does become kind of overwhelming for companies to use that many tools and basically maybe have that data kind of charge for it, multiple places because you use it for different purposes or just in general, if you have a lot of data, you know, that that is becoming an issue. So that's something that I've noticed in our, in our own kind of, you know, a world, but it's just something that I think companies need to think about how they solve because eventually a lot of companies will say, I cannot have all these solutions, so there's no way I'm gonna be willing to have so many copies of the data and actually pay for that. >>So many times, just something to think about. >>But one of the criticisms of the super cloud concept is that it's just SAS. If I'm running workload on prem and I, and I've got, you know, a connection to the cloud, which you probably do, that's, that's SAS, what's, what's the big deal and that's not anything new or different. So I'd love to get your thoughts on that. But Goldman Sachs, for instance, just announced the service last reinvent with AWS, connecting their tools, their data, and their software from on-prem to AWS, they're offering it as a service. I'm like, Hmm. Kind of looking like Supercloud, but maybe it's just SAS. >>It could be. And like, what I'm talking about is not so much like, you know, like what you wanna connect your data. But the idea is like a lot of the providers of different services, like in the past and, and like higher layer, they're actually COPI the data. They need the data in their cloud or their solution. And it just becomes complicated and expensive is, is kind of like my point. So yes, connecting it like for you to have the data in one place and then be able to connect to it. I think that is a valid, if, if that's kinda what you think about as a super cloud, that is a valid need, I think that companies will >>Have where developers actually want access to tools that might exist. >>Also the key is developers, right? Yeah. Developers decide all decisions, not database on administrators, not, you know, a hundred percent security engineers, not admins. So what's really interesting is where are the developers going next? If you look at the current winners in the current ecosystem, companies like MongoDB, I mean, they capture the minds of yeah. The JavaScript, you know, no JS developers absolutely very early on. And I started catch base and I could tell you like the difference was that capture motion was so important. So developers are basically used to this game-like experience now where they want to see tools that are free, whether it's open source or not, they actually don't care. They just want, and they want it SAS. They want it SAS delivered on demand. Right. And pay as you go. And so there's a lot of these different frameworks coming out next generation, no code, low code, whether it's Java, JavaScript, rust, you know, whatever, you know, go Lang. And there's a lot of people fighting religious wars about how to develop the next kind of modern pattern design pattern. Okay. And that's where a lot of excitement is how we look at like investment opportunities. Like where are those big bets who are, you know, frustrated developers, who are they frustrated, what's wrong with their current environment? You know, do they really enjoy using Kubernetes or trying to use Kubernetes? Yeah. Right. Like developers have a very different view than operator, >>But you mentioned couch base. I mean, I look at couch base what they're doing with Capellas as a form of Supercloud. I mean, I think that's an excellent, they're bringing that out to the edge. We're gonna hear later on from someone from couch base. That's gonna talk about that now. It's kind of a lightweight, you know, sort of, it's gonna be a, a synchronization, but it's the beginning >>A cool new venture deal that I'm not in, but was like duck DB. I'm like, what's duck DB like, well, it's an Emory database that has like this like remote store thing. I'm like, okay, that sounds interesting. Like let's call Mike Olson cuz that sounds like sleepy cat redone red distributed world. But like it's, it's like there's a lot of people refactoring design patterns that we're all grew up with since the popup days of, you know, typical round. Right? >>Yeah. That's the refactory I think that's the big pattern. So I have to ask you guys, what are you guys investing in? We've got a couple minutes left to chat about that. What are you investing at into it from a, from a, a CTO engineering perspective and what are you investing in that feels super cloud like to you? >>Well, the, the thing that like I'm focused on is to make sure that we have absolutely best in the world development environment for our engineers, where it's modern, it's easy to use and it incorporates as many things as we can into that environment. So the engineers don't have to think about it. Like one big example would be security and how we incorporated that into development environment. So again, the engineers don't have to bother with trying to think through how they secure their workloads and every step of the way their other things that we incorporated, whether it's like rollbacks or monitoring or, you know, like baly enough other things. But I think that's really an investment that has panned off for us. We actually started investing in development environment several years ago. We started measure our development velocity and we, it actually went up by six X justly investing. So >>User experience, developer experience and productivity pretty much right. >>Yeah. AB absolutely. Yeah. That's like a big investment area for us that, you know, cloud cloud >>Sounds like super cloudlike factor and I'm assuming it's you're on AWS. >>We are mostly on AWS. Yes. >>And so what are you investing in that from a VC money doling out standpoint? That feels super cloudlike >>So very similar to what we just touched on a lot of developer tool experiences. We have a company that we've invested in called ops level that the service catalogs it's, it's helping, you know, understand your, where your services live and how they could be accessed and, and you know, enterprise kind of that come with that. And then we have a company called Lugo that helps you do serverless debugging container debugging, cuz it turns out debugging distributed, you know, applications is a real problem right now just you can only do so much by log tracing, right? We have a company haven't announced yet that's in the web assembly space. So we're looking at modernizing the next generation past stack and throwing everything out the window, including Java and all of the, you know, current prebuilt components because turns out 90% of enterprise workloads are actually not used. They're they're just policy code. You compiled with they're sitting there as vulnerabilities that no one's actually accessing, but you still have to compile with all of it. So we have a lot of bloatware happening in the enterprise. So we're thinking about how do you skinny that up with the next generation paths that's enterprise capable with security context and frameworks >>Super pass. >>Well, yeah, super pass. That's a kind of good way to, well, is >>It, is it a consistent developer experience across clouds? >>It is. And, and, and, and web assembly is a very raw standard if you can call it that. I mean it's, but it's supported by every modern browser, every major platform, vendor cloud, and Adobe and others, and are using it for their uses. And it's not just about your edge browser compute. It's really, you can take the same framework and compile it down to server side as well as client site, just like JavaScript was a client side tool before it became node. Right. Right. So we're looking at that as a very interesting opportunity. It's very nascent. Yeah. >>Great patterns. Yeah. Well, thanks so much for spending the time outta your busy day. Ariana. Thanks for your commentary. Appreciate your coming on the cubes first in IGUR super cloud event, pilot. Thanks for, for sharing. Thanks for having, thanks for having us. Okay. More coverage here. Super cloud 2022. I'm Jeff David Alane stay with us. We got our cloud ARA panel coming up next.

Published Date : Sep 9 2022

SUMMARY :

I'm John fury, host of the cube with Dave Lon two great guests, distinguished engineers managers, lot of momentum and you guys got stats over there at, at Intuit in, So you have to really understand where the separations of boundaries are between your data, I mean, this is structural, It's desired by incumbents, but it's not something that I'm seeing from the consumption. whether like, you know, through acquisitions or through like needing to use a service And you can do like for your developers, you can actually provide an environment When was the, when did you give up, what was the moment? just became not worth it for the gains you have. They're actually running, you know, their own little snow grid. issue, whether it's like copy or, you know, redundancy. Do you think? Right? What do you think about that? So there was just compounded growth is like 50% or 80% for, you know, many enterprises despite Yeah. that looks like a, the singular experience. Show me who you all want be. You throw that into, you all want Oracle to buy your companies, moving data to their cloud, and then they might charge you by the size of your data. and I, and I've got, you know, a connection to the cloud, which you probably do, that's, And like, what I'm talking about is not so much like, you know, like what you wanna connect your data. And I started catch base and I could tell you like the difference was It's kind of a lightweight, you know, sort of, patterns that we're all grew up with since the popup days of, you know, typical round. So I have to ask you guys, what are you guys investing in? So again, the engineers don't have to bother with trying to think through how you know, cloud cloud We are mostly on AWS. And then we have a company called Lugo that helps you do serverless debugging container debugging, That's a kind of good way to, well, is It's really, you can take the same framework and compile it down to server side as well as client Thanks for your commentary.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
AmazonORGANIZATION

0.99+

Dave LonPERSON

0.99+

AWSORGANIZATION

0.99+

MicrosoftORGANIZATION

0.99+

MariaPERSON

0.99+

Ben HorowitzPERSON

0.99+

Mariana TesselPERSON

0.99+

OracleORGANIZATION

0.99+

50%QUANTITY

0.99+

Goldman SachsORGANIZATION

0.99+

ArianaPERSON

0.99+

90%QUANTITY

0.99+

80%QUANTITY

0.99+

Mike OlsonPERSON

0.99+

DavePERSON

0.99+

Jeff David AlanePERSON

0.99+

next yearDATE

0.99+

240 billionQUANTITY

0.99+

JavaTITLE

0.99+

GoogleORGANIZATION

0.99+

JavaScriptTITLE

0.99+

John furyPERSON

0.99+

LugoORGANIZATION

0.99+

Intuit insORGANIZATION

0.99+

mark AndrePERSON

0.99+

both edgesQUANTITY

0.99+

AdobeORGANIZATION

0.98+

BothQUANTITY

0.98+

KubernetesTITLE

0.97+

twoQUANTITY

0.97+

MarioPERSON

0.97+

single cloudQUANTITY

0.97+

SASORGANIZATION

0.96+

two great guestsQUANTITY

0.96+

VMware Docker IntuitORGANIZATION

0.96+

each oneQUANTITY

0.95+

UnixTITLE

0.95+

one placeQUANTITY

0.95+

one endQUANTITY

0.95+

SRDFORGANIZATION

0.94+

six XQUANTITY

0.94+

SnowflakeORGANIZATION

0.93+

one thingQUANTITY

0.93+

several years agoDATE

0.93+

oneQUANTITY

0.92+

SupercloudsORGANIZATION

0.92+

Ben wa deja VIPERSON

0.92+

firstQUANTITY

0.9+

IASTITLE

0.88+

MongoDBORGANIZATION

0.88+

SupercloudORGANIZATION

0.88+

super cloudORGANIZATION

0.88+

Supercloud22ORGANIZATION

0.87+

Intuit inORGANIZATION

0.85+

hundred percentQUANTITY

0.85+

nodeTITLE

0.84+

CapellasORGANIZATION

0.84+

ARAORGANIZATION

0.83+

OCIORGANIZATION

0.81+

AzureTITLE

0.81+

coupleQUANTITY

0.8+

IGUR super cloudEVENT

0.8+

super cloud 22EVENT

0.78+

one bigQUANTITY

0.77+

JSTITLE

0.76+

EmoryORGANIZATION

0.75+

CloudFlareTITLE

0.64+

Super cloud 2022EVENT

0.59+

AkamaiORGANIZATION

0.54+

SASTITLE

0.47+

RayPERSON

0.38+

Snehal Antani, Horizon3.ai | AWS Startup Showcase S2 E4 | Cybersecurity


 

(upbeat music) >> Hello and welcome to theCUBE's presentation of the AWS Startup Showcase. This is season two, episode four of the ongoing series covering the exciting hot startups from the AWS ecosystem. Here we're talking about cybersecurity in this episode. I'm your host, John Furrier here we're excited to have CUBE alumni who's back Snehal Antani who's the CEO and co-founder of Horizon3.ai talking about exploitable weaknesses and vulnerabilities with autonomous pen testing. Snehal, it's great to see you. Thanks for coming back. >> Likewise, John. I think it's been about five years since you and I were on the stage together. And I've missed it, but I'm glad to see you again. >> Well, before we get into the showcase about your new startup, that's extremely successful, amazing margins, great product. You have a unique journey. We talked about this prior to you doing the journey, but you have a great story. You left the startup world to go into the startup, like world of self defense, public defense, NSA. What group did you go to in the public sector became a private partner. >> My background, I'm a software engineer by education and trade. I started my career at IBM. I was a CIO at GE Capital, and I think we met once when I was there and I became the CTO of Splunk. And we spent a lot of time together when I was at Splunk. And at the end of 2017, I decided to take a break from industry and really kind of solve problems that I cared deeply about and solve problems that mattered. So I left industry and joined the US Special Operations Community and spent about four years in US Special Operations, where I grew more personally and professionally than in anything I'd ever done in my career. And exited that time, met my co-founder in special ops. And then as he retired from the air force, we started Horizon3. >> So there's really, I want to bring that up one, 'cause it's fascinating that not a lot of people in Silicon Valley and tech would do that. So thanks for the service. And I know everyone who's out there in the public sector knows that this is a really important time for the tactical edge in our military, a lot of things going on around the world. So thanks for the service and a great journey. But there's a storyline with the company you're running now that you started. I know you get the jacket on there. I noticed get a little military vibe to it. Cybersecurity, I mean, every company's on their own now. They have to build their own militia. There is no government supporting companies anymore. There's no militia. No one's on the shores of our country defending the citizens and the companies, they got to offend for themselves. So every company has to have their own military. >> In many ways, you don't see anti-aircraft rocket launchers on top of the JP Morgan building in New York City because they rely on the government for air defense. But in cyber it's very different. Every company is on their own to defend for themselves. And what's interesting is this blend. If you look at the Ukraine, Russia war, as an example, a thousand companies have decided to withdraw from the Russian economy and those thousand companies we should expect to be in the ire of the Russian government and their proxies at some point. And so it's not just those companies, but their suppliers, their distributors. And it's no longer about cyber attack for extortion through ransomware, but rather cyber attack for punishment and retaliation for leaving. Those companies are on their own to defend themselves. There's no government that is dedicated to supporting them. So yeah, the reality is that cybersecurity, it's the burden of the organization. And also your attack surface has expanded to not just be your footprint, but if an adversary wants to punish you for leaving their economy, they can get, if you're in agriculture, they could disrupt your ability to farm or they could get all your fruit to spoil at the border 'cause they disrupted your distributors and so on. So I think the entire world is going to change over the next 18 to 24 months. And I think this idea of cybersecurity is going to become truly a national problem and a problem that breaks down any corporate barriers that we see in previously. >> What are some of the things that inspired you to start this company? And I loved your approach of thinking about the customer, your customer, as defending themselves in context to threats, really leaning into it, being ready and able to defend. Horizon3 has a lot of that kind of military thinking for the good of the company. What's the motivation? Why this company? Why now? What's the value proposition? >> So there's two parts to why the company and why now. The first part was what my observation, when I left industry realm or my military background is watching "Jack Ryan" and "Tropic Thunder" and I didn't come from the military world. And so when I entered the special operations community, step one was to keep my mouth shut, learn, listen, and really observe and understand what made that community so impressive. And obviously the people and it's not about them being fast runners or great shooters or awesome swimmers, but rather there are learn-it-alls that can solve any problem as a team under pressure, which is the exact culture you want to have in any startup, early stage companies are learn-it-alls that can solve any problem under pressure as a team. So I had this immediate advantage when we started Horizon3, where a third of Horizon3 employees came from that special operations community. So one is this awesome talent. But the second part that, I remember this quote from a special operations commander that said we use live rounds in training because if we used fake rounds or rubber bullets, everyone would act like metal of honor winners. And the whole idea there is you train like you fight, you build that muscle memory for crisis and response and so on upfront. So when you're in the thick of it, you already know how to react. And this aligns to a pain I had in industry. I had no idea I was secure until the bad guy showed up. I had no idea if I was fixing the right vulnerabilities, logging the right data in Splunk, or if my CrowdStrike EDR platform was configured correctly, I had to wait for the bad guys to show up. I didn't know if my people knew how to respond to an incident. So what I wanted to do was proactively verify my security posture, proactively harden my systems. I needed to do that by continuously pen testing myself or continuously testing my security posture. And there just wasn't any way to do that where an IT admin or a network engineer could in three clicks have the power of a 20 year pen testing expert. And that was really what we set out to do, not build a autonomous pen testing platform for security people, build it so that anybody can quickly test their security posture and then use the output to fix problems that truly matter. >> So the value preposition, if I get this right is, there's a lot of companies out there doing pen tests. And I know I hate pen tests. They're like, cause you do DevOps, it changes you got to do another pen test. So it makes sense to do autonomous pen testing. So congratulations on seeing that that's obvious to that, but a lot of other have consulting tied to it. Which seems like you need to train someone and you guys taking a different approach. >> Yeah, we actually, as a company have zero consulting, zero professional services. And the whole idea is that build a true software as a service offering where an intern, in fact, we've got a video of a nine year old that in three clicks can run pen tests against themselves. And because of that, you can wire pen tests into your DevOps tool chain. You can run multiple pen tests today. In fact, I've got customers running 40, 50 pen tests a month against their organization. And that what that does is completely lowers the barrier of entry for being able to verify your posture. If you have consulting on average, when I was a CIO, it was at least a three month lead time to schedule consultants to show up and then they'd show up, they'd embarrass the security team, they'd make everyone look bad, 'cause they're going to get in, leave behind a report. And that report was almost identical to what they found last year because the older that report, the one the date itself gets stale, the context changes and so on. And then eventually you just don't even bother fixing it. Or if you fix a problem, you don't have the skills to verify that has been fixed. So I think that consulting led model was acceptable when you viewed security as a compliance checkbox, where once a year was sufficient to meet your like PCI requirements. But if you're really operating with a wartime mindset and you actually need to harden and secure your environment, you've got to be running pen test regularly against your organization from different perspectives, inside, outside, from the cloud, from work, from home environments and everything in between. >> So for the CISOs out there, for the CSOs and the CXOs, what's the pitch to them because I see your jacket that says Horizon3 AI, trust but verify. But this trust is, but is canceled out, just as verify. What's the product that you guys are offering the service. Describe what it is and why they should look at it. >> Yeah, sure. So one, when I back when I was the CIO, don't tell me we're secure in PowerPoint. Show me we're secure right now. Show me we're secure again tomorrow. And then show me we're secure again next week because my environment is constantly changing and the adversary always has a vote and they're always evolving. And this whole idea of show me we're secure. Don't trust that your security tools are working, verify that they can detect and respond and stifle an attack and then verify tomorrow, verify next week. That's the big mind shift. Now what we do is-- >> John: How do they respond to that by the way? Like they don't believe you at first or what's the story. >> I think, there's actually a very bifurcated response. There are still a decent chunk of CIOs and CSOs that have a security is a compliance checkbox mindset. So my attitude with them is I'm not going to convince you. You believe it's a checkbox. I'll just wait for you to get breached and sell to your replacement, 'cause you'll get fired. And in the meantime, I spend all my energy with those that actually care about proactively securing and hardening their environments. >> That's true. People do get fired. Can you give an example of what you're saying about this environment being ready, proving that you're secure today, tomorrow and a few weeks out. Give me an example. >> Of, yeah, I'll give you actually a customer example. There was a healthcare organization and they had about 5,000 hosts in their environment and they did everything right. They had Fortinet as their EDR platform. They had user behavior analytics in place that they had purchased and tuned. And when they ran a pen test self-service, our product node zero immediately started to discover every host on the network. It then fingerprinted all those hosts and found it was able to get code execution on three machines. So it got code execution, dumped credentials, laterally maneuvered, and became a domain administrator, which in IT, if an attacker becomes a domain admin, they've got keys to the kingdom. So at first the question was, how did the node zero pen test become domain admin? How'd they get code execution, Fortinet should have detected and stopped it. Well, it turned out Fortinet was misconfigured on three boxes out of 5,000. And these guys had no idea and it's just automation that went wrong and so on. And now they would've only known they had misconfigured their EDR platform on three hosts if the attacker had showed up. The second question though was, why didn't they catch the lateral movement? Which all their marketing brochures say they're supposed to catch. And it turned out that that customer purchased the wrong Fortinet modules. One again, they had no idea. They thought they were doing the right thing. So don't trust just installing your tools is good enough. You've got to exercise and verify them. We've got tons of stories from patches that didn't actually apply to being able to find the AWS admin credentials on a local file system. And then using that to log in and take over the cloud. In fact, I gave this talk at Black Hat on war stories from running 10,000 pen tests. And that's just the reality is, you don't know that these tools and processes are working for you until the bad guys have shown. >> The velocities there. You can accelerate through logs, you know from the days you've been there. This is now the threat. Being, I won't say lazy, but just not careful or just not thinking. >> Well, I'll do an example. We have a lot of customers that are Horizon3 customers and Splunk customers. And what you'll see their behavior is, is they'll have Horizon3 up on one screen. And every single attacker command executed with its timestamp is up on that screen. And then look at Splunk and say, hey, we were able to dump vCenter credentials from VMware products at this time on this host, what did Splunk see or what didn't they see? Why were no logs generated? And it turns out that they had some logging blind spots. So what they'll actually do is run us to almost like stimulate the defensive tools and then see what did the tools catch? What did they miss? What are those blind spots and how do they fix it. >> So your price called node zero. You mentioned that. Is that specifically a suite, a tool, a platform. How do people consume and engage with you guys? >> So the way that we work, the whole product is designed to be self-service. So once again, while we have a sales team, the whole intent is you don't need to have to talk to a sales rep to start using the product, you can log in right now, go to Horizon3.ai, you can run a trial log in with your Google ID, your LinkedIn ID, start running pen test against your home or against your network against this organization right now, without talking to anybody. The whole idea is self-service, run a pen test in three clicks and give you the power of that 20 year pen testing expert. And then what'll happen is node zero will execute and then it'll provide to you a full report of here are all of the different paths or attack paths or sequences where we are able to become an admin in your environment. And then for every attack path, here is the path or the kill chain, the proof of exploitation for every step along the way. Here's exactly what you've got to do to fix it. And then once you've fixed it, here's how you verify that you've truly fixed the problem. And this whole aha moment is run us to find problems. You fix them, rerun us to verify that the problem has been fixed. >> Talk about the company, how many people do you have and get some stats? >> Yeah, so we started writing code in January of 2020, right before the pandemic hit. And then about 10 months later at the end of 2020, we launched the first version of the product. We've been in the market for now about two and a half years total from start of the company till present. We've got 130 employees. We've got more customers than we do employees, which is really cool. And instead our customers shift from running one pen test a year to 40, 50 pen test. >> John: And it's full SaaS. >> The whole product is full SaaS. So no consulting, no pro serve. You run as often as you-- >> Who's downloading, who's buying the product. >> What's amazing is, we have customers in almost every section or sector now. So we're not overly rotated towards like healthcare or financial services. We've got state and local education or K through 12 education, state and local government, a number of healthcare companies, financial services, manufacturing. We've got organizations that large enterprises. >> John: Security's diverse. >> It's very diverse. >> I mean, ransomware must be a big driver. I mean, is that something that you're seeing a lot. >> It is. And the thing about ransomware is, if you peel back the outcome of ransomware, which is extortion, at the end of the day, what ransomware organizations or criminals or APTs will do is they'll find out who all your employees are online. They will then figure out if you've got 7,000 employees, all it takes is one of them to have a bad password. And then attackers are going to credential spray to find that one person with a bad password or whose Netflix password that's on the dark web is also their same password to log in here, 'cause most people reuse. And then from there they're going to most likely in your organization, the domain user, when you log in, like you probably have local admin on your laptop. If you're a windows machine and I've got local admin on your laptop, I'm going to be able to dump credentials, get the admin credentials and then start to laterally maneuver. Attackers don't have to hack in using zero days like you see in the movies, often they're logging in with valid user IDs and passwords that they've found and collected from somewhere else. And then they make that, they maneuver by making a low plus a low equal a high. And the other thing in financial services, we spend all of our time fixing critical vulnerabilities, attackers know that. So they've adapted to finding ways to chain together, low priority vulnerabilities and misconfigurations and dangerous defaults to become admin. So while we've over rotated towards just fixing the highs and the criticals attackers have adapted. And once again they have a vote, they're always evolving their tactics. >> And how do you prevent that from happening? >> So we actually apply those same tactics. Rarely do we actually need a CVE to compromise your environment. We will harvest credentials, just like an attacker. We will find misconfigurations and dangerous defaults, just like an attacker. We will combine those together. We'll make use of exploitable vulnerabilities as appropriate and use that to compromise your environment. So the tactics that, in many ways we've built a digital weapon and the tactics we apply are the exact same tactics that are applied by the adversary. >> So you guys basically simulate hacking. >> We actually do the hacking. Simulate means there's a fakeness to it. >> So you guys do hack. >> We actually compromise. >> Like sneakers the movie, those sneakers movie for the old folks like me. >> And in fact that was my inspiration. I've had this idea for over a decade now, which is I want to be able to look at anything that laptop, this Wi-Fi network, gear in hospital or a truck driving by and know, I can figure out how to gain initial access, rip that environment apart and be able to opponent. >> Okay, Chuck, he's not allowed in the studio anymore. (laughs) No, seriously. Some people are exposed. I mean, some companies don't have anything. But there's always passwords or so most people have that argument. Well, there's nothing to protect here. Not a lot of sensitive data. How do you respond to that? Do you see that being kind of putting the head in the sand or? >> Yeah, it's actually, it's less, there's not sensitive data, but more we've installed or applied multifactor authentication, attackers can't get in now. Well MFA only applies or does not apply to lower level protocols. So I can find a user ID password, log in through SMB, which isn't protected by multifactor authentication and still upon your environment. So unfortunately I think as a security industry, we've become very good at giving a false sense of security to organizations. >> John: Compliance drives that behavior. >> Compliance drives that. And what we need. Back to don't tell me we're secure, show me, we've got to, I think, change that to a trust but verify, but get rid of the trust piece of it, just to verify. >> Okay, we got a lot of CISOs and CSOs watching this showcase, looking at the hot startups, what's the message to the executives there. Do they want to become more leaning in more hawkish if you will, to use the military term on security? I mean, I heard one CISO say, security first then compliance 'cause compliance can make you complacent and then you're unsecure at that point. >> I actually say that. I agree. One definitely security is different and more important than being compliant. I think there's another emerging concept, which is I'd rather be defensible than secure. What I mean by that is security is a point in time state. I am secure right now. I may not be secure tomorrow 'cause something's changed. But if I'm defensible, then what I have is that muscle memory to detect, respondent and stifle an attack. And that's what's more important. Can I detect you? How long did it take me to detect you? Can I stifle you from achieving your objective? How long did it take me to stifle you? What did you use to get in to gain access? How long did that sit in my environment? How long did it take me to fix it? So on and so forth. But I think it's being defensible and being able to rapidly adapt to changing tactics by the adversary is more important. >> This is the evolution of how the red line never moved. You got the adversaries in our networks and our banks. Now they hang out and they wait. So everyone thinks they're secure. But when they start getting hacked, they're not really in a position to defend, the alarms go off. Where's the playbook. Team springs into action. I mean, you kind of get the visual there, but this is really the issue being defensible means having your own essentially military for your company. >> Being defensible, I think has two pieces. One is you've got to have this culture and process in place of training like you fight because you want to build that incident response muscle memory ahead of time. You don't want to have to learn how to respond to an incident in the middle of the incident. So that is that proactively verifying your posture and continuous pen testing is critical there. The second part is the actual fundamentals in place so you can detect and stifle as appropriate. And also being able to do that. When you are continuously verifying your posture, you need to verify your entire posture, not just your test systems, which is what most people do. But you have to be able to safely pen test your production systems, your cloud environments, your perimeter. You've got to assume that the bad guys are going to get in, once they're in, what can they do? So don't just say that my perimeter's secure and I'm good to go. It's the soft squishy center that attackers are going to get into. And from there, can you detect them and can you stop them? >> Snehal, take me through the use. You got to be sold on this, I love this topic. Alright, pen test. Is it, what am I buying? Just pen test as a service. You mentioned dark web. Are you actually buying credentials online on behalf of the customer? What is the product? What am I buying if I'm the CISO from Horizon3? What's the service? What's the product, be specific. >> So very specifically and one just principles. The first principle is when I was a buyer, I hated being nickled and dimed buyer vendors, which was, I had to buy 15 different modules in order to achieve an objective. Just give me one line item, make it super easy to buy and don't nickel and dime me. Because I've spent time as a buyer that very much has permeated throughout the company. So there is a single skew from Horizon3. It is an annual subscription based on how big your environment is. And it is inclusive of on-prem internal pen tests, external pen tests, cloud attacks, work from home attacks, our ability to harvest credentials from the dark web and from open source sources. Being able to crack those credentials, compromise. All of that is included as a singles skew. All you get as a CISO is a singles skew, annual subscription, and you can run as many pen tests as you want. Some customers still stick to, maybe one pen test a quarter, but most customers shift when they realize there's no limit, we don't nickel and dime. They can run 10, 20, 30, 40 a month. >> Well, it's not nickel and dime in the sense that, it's more like dollars and hundreds because they know what to expect if it's classic cloud consumption. They kind of know what their environment, can people try it. Let's just say I have a huge environment, I have a cloud, I have an on-premise private cloud. Can I dabble and set parameters around pricing? >> Yes you can. So one is you can dabble and set perimeter around scope, which is like manufacturing does this, do not touch the production line that's on at the moment. We've got a hospital that says every time they run a pen test, any machine that's actually connected to a patient must be excluded. So you can actually set the parameters for what's in scope and what's out of scope up front, most again we're designed to be safe to run against production so you can set the parameters for scope. You can set the parameters for cost if you want. But our recommendation is I'd rather figure out what you can afford and let you test everything in your environment than try to squeeze every penny from you by only making you buy what can afford as a smaller-- >> So the variable ratio, if you will is, how much they spend is the size of their environment and usage. >> Just size of the environment. >> So it could be a big ticket item for a CISO then. >> It could, if you're really large, but for the most part-- >> What's large? >> I mean, if you were Walmart, well, let me back up. What I heard is global 10 companies spend anywhere from 50 to a hundred million dollars a year on security testing. So they're already spending a ton of money, but they're spending it on consultants that show up maybe a couple of times a year. They don't have, humans can't scale to test a million hosts in your environment. And so you're already spending that money, spend a fraction of that and use us and run as much as you want. And that's really what it comes down to. >> John: All right. So what's the response from customers? >> What's really interesting is there are three use cases. The first is that SOC manager that is using us to verify that their security tools are actually working. So their Splunk environment is logging the right data. It's integrating properly with CrowdStrike, it's integrating properly with their active directory services and their password policies. So the SOC manager is using us to verify the effectiveness of their security controls. The second use case is the IT director that is using us to proactively harden their systems. Did they install VMware correctly? Did they install their Cisco gear correctly? Are they patching right? And then the third are for the companies that are lucky to have their own internal pen test and red teams where they use us like a force multiplier. So if you've got 10 people on your red team and you still have a million IPs or hosts in your environment, you still don't have enough people for that coverage. So they'll use us to do recon at scale and attack at scale and let the humans focus on the really juicy hard stuff that humans are successful at. >> Love the product. Again, I'm trying to think about how I engage on the test. Is there pilots? Is there a demo version? >> There's a free trials. So we do 30 day free trials. The output can actually be used to meet your SOC 2 requirements. So in many ways you can just use us to get a free SOC 2 pen test report right now, if you want. Go to the website, log in for a free trial, you can log into your Google ID or your LinkedIn ID, run a pen test against your organization and use that to answer your PCI segmentation test requirements, your SOC 2 requirements, but you will be hooked. You will want to run us more often. And you'll get a Horizon3 tattoo. >> The first hits free as they say in the drug business. >> Yeah. >> I mean, so you're seeing that kind of response then, trial converts. >> It's exactly. In fact, we have a very well defined aha moment, which is you run us to find, you fix, you run us to verify, we have 100% technical win rate when our customers hit a find, fix, verify cycle, then it's about budget and urgency. But 100% technical win rate because of that aha moment, 'cause people realize, holy crap, I don't have to wait six months to verify that my problems have actually been fixed. I can just come in, click, verify, rerun the entire pen test or rerun a very specific part of it on what I just patched my environment. >> Congratulations, great stuff. You're here part of the AWS Startup Showcase. So I have to ask, what's the relationship with AWS, you're on their cloud. What kind of actions going on there? Is there secret sauce on there? What's going on? >> So one is we are AWS customers ourselves, our brains command and control infrastructure. All of our analytics are all running on AWS. It's amazing, when we run a pen test, we are able to use AWS and we'll spin up a virtual private cloud just for that pen test. It's completely ephemeral, it's all Lambda functions and graph analytics and other techniques. When the pen test ends, you can delete, there's a single use Docker container that gets deleted from your environment so you have nothing on-prem to deal with and the entire virtual private cloud tears itself down. So at any given moment, if we're running 50 pen tests or a hundred pen tests, self-service, there's a hundred virtual private clouds being managed in AWS that are spinning up, running and tearing down. It's an absolutely amazing underlying platform for us to make use of. Two is that many customers that have hybrid environments. So they've got a cloud infrastructure, an Office 365 infrastructure and an on-prem infrastructure. We are a single attack platform that can test all of that together. No one else can do it. And so the AWS customers that are especially AWS hybrid customers are the ones that we do really well targeting. >> Got it. And that's awesome. And that's the benefit of cloud? >> Absolutely. And the AWS marketplace. What's absolutely amazing is the competitive advantage being part of the marketplace has for us, because the simple thing is my customers, if they already have dedicated cloud spend, they can use their approved cloud spend to pay for Horizon3 through the marketplace. So you don't have to, if you already have that budget dedicated, you can use that through the marketplace. The other is you've already got the vendor processes in place, you can purchase through your existing AWS account. So what I love about the AWS company is one, the infrastructure we use for our own pen test, two, the marketplace, and then three, the customers that span that hybrid cloud environment. That's right in our strike zone. >> Awesome. Well, congratulations. And thanks for being part of the showcase and I'm sure your product is going to do very, very well. It's very built for what people want. Self-service get in, get the value quickly. >> No agents to install, no consultants to hire. safe to run against production. It's what I wanted. >> Great to see you and congratulations and what a great story. And we're going to keep following you. Thanks for coming on. >> Snehal: Phenomenal. Thank you, John. >> This is the AWS Startup Showcase. I'm John John Furrier, your host. This is season two, episode four on cybersecurity. Thanks for watching. (upbeat music)

Published Date : Sep 7 2022

SUMMARY :

of the AWS Startup Showcase. I'm glad to see you again. to you doing the journey, and I became the CTO of Splunk. and the companies, they got over the next 18 to 24 months. And I loved your approach of and "Tropic Thunder" and I didn't come from the military world. So the value preposition, And the whole idea is that build a true What's the product that you and the adversary always has a vote Like they don't believe you and sell to your replacement, Can you give an example And that's just the reality is, This is now the threat. the defensive tools and engage with you guys? the whole intent is you We've been in the market for now about So no consulting, no pro serve. who's buying the product. So we're not overly rotated I mean, is that something and the criticals attackers have adapted. and the tactics we apply We actually do the hacking. Like sneakers the movie, and be able to opponent. kind of putting the head in the sand or? and still upon your environment. that to a trust but verify, looking at the hot startups, and being able to rapidly This is the evolution of and I'm good to go. What is the product? and you can run as many and dime in the sense that, So you can actually set the So the variable ratio, if you will is, So it could be a big and run as much as you want. So what's the response from customers? and let the humans focus on about how I engage on the test. So in many ways you can just use us they say in the drug business. I mean, so you're seeing I don't have to wait six months to verify So I have to ask, what's When the pen test ends, you can delete, And that's the benefit of cloud? And the AWS marketplace. And thanks for being part of the showcase no consultants to hire. Great to see you and congratulations This is the AWS Startup Showcase.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
WalmartORGANIZATION

0.99+

40QUANTITY

0.99+

SnehalPERSON

0.99+

January of 2020DATE

0.99+

JohnPERSON

0.99+

AWSORGANIZATION

0.99+

John FurrierPERSON

0.99+

10QUANTITY

0.99+

Silicon ValleyLOCATION

0.99+

ChuckPERSON

0.99+

Snehal AntaniPERSON

0.99+

two partsQUANTITY

0.99+

two piecesQUANTITY

0.99+

30 dayQUANTITY

0.99+

Tropic ThunderTITLE

0.99+

100%QUANTITY

0.99+

CiscoORGANIZATION

0.99+

20 yearQUANTITY

0.99+

second questionQUANTITY

0.99+

GE CapitalORGANIZATION

0.99+

30QUANTITY

0.99+

next weekDATE

0.99+

20QUANTITY

0.99+

New York CityLOCATION

0.99+

130 employeesQUANTITY

0.99+

IBMORGANIZATION

0.99+

10 peopleQUANTITY

0.99+

tomorrowDATE

0.99+

7,000 employeesQUANTITY

0.99+

PowerPointTITLE

0.99+

thirdQUANTITY

0.99+

SplunkORGANIZATION

0.99+

10 companiesQUANTITY

0.99+

5,000QUANTITY

0.99+

second partQUANTITY

0.99+

six monthsQUANTITY

0.99+

end of 2020DATE

0.99+

LinkedInORGANIZATION

0.99+

oneQUANTITY

0.99+

15 different modulesQUANTITY

0.99+

last yearDATE

0.99+

TwoQUANTITY

0.99+

firstQUANTITY

0.99+

CUBEORGANIZATION

0.99+

first partQUANTITY

0.99+

OneQUANTITY

0.99+

first versionQUANTITY

0.99+

Horizon3ORGANIZATION

0.99+

NetflixORGANIZATION

0.99+

three machinesQUANTITY

0.99+

CrowdStrikeTITLE

0.98+

first principleQUANTITY

0.98+

one screenQUANTITY

0.98+

threeQUANTITY

0.98+

one personQUANTITY

0.98+

thousand companiesQUANTITY

0.98+

SOC 2TITLE

0.98+

Jack RyanTITLE

0.98+

one line itemQUANTITY

0.98+

about two and a half yearsQUANTITY

0.98+

twoQUANTITY

0.98+

three use casesQUANTITY

0.98+

zero daysQUANTITY

0.98+

hundredsQUANTITY

0.98+

about four yearsQUANTITY

0.98+

Lie 3, Today’s Modern Data Stack Is Modern | Starburst


 

(energetic music) >> Okay, we're back with Justin Borgman, CEO of Starburst, Richard Jarvis is the CTO of EMIS Health, and Teresa Tung is the cloud first technologist from Accenture. We're on to lie number three. And that is the claim that today's "Modern Data Stack" is actually modern. So (chuckles), I guess that's the lie. Or, is that it's not modern. Justin, what do you say? >> Yeah, I think new isn't modern. Right? I think it's the new data stack. It's the cloud data stack, but that doesn't necessarily mean it's modern. I think a lot of the components actually, are exactly the same as what we've had for 40 years. Rather than Teradata, you have Snowflake. Rather than Informatica, you have Fivetran. So, it's the same general stack, just, y'know, a cloud version of it. And I think a lot of the challenges that have plagued us for 40 years still maintain. >> So, let me come back to you Justin. Okay, but there are differences, right? You can scale. You can throw resources at the problem. You can separate compute from storage. You really, there's a lot of money being thrown at that by venture capitalists, and Snowflake you mentioned, its competitors. So that's different. Is it not? Is that not at least an aspect of modern dial it up, dial it down? So what do you say to that? >> Well, it is. It's certainly taking, y'know what the cloud offers and taking advantage of that. But it's important to note that the cloud data warehouses out there are really just separating their compute from their storage. So it's allowing them to scale up and down, but your data's still stored in a proprietary format. You're still locked in. You still have to ingest the data to get it even prepared for analysis. So a lot of the same structural constraints that exist with the old enterprise data warehouse model on-preem still exist. Just yes, a little bit more elastic now because the cloud offers that. >> So Teresa, let me go to you, 'cause you have cloud-first in your title. So, what's say you to this conversation? >> Well, even the cloud providers are looking towards more of a cloud continuum, right? So the centralized cloud as we know it, maybe data lake, data warehouse in the central place, that's not even how the cloud providers are looking at it. They have use query services. Every provider has one that really expands those queries to be beyond a single location. And if we look at a lot of where our- the future goes, right? That's going to very much fall the same thing. There was going to be more edge. There's going to be more on-premise, because of data sovereignty, data gravity, because you're working with different parts of the business that have already made major cloud investments in different cloud providers, right? So, there's a lot of reasons why the modern, I guess, the next modern generation of the data stack needs to be much more federated. >> Okay, so Richard, how do you deal with this? You've obviously got, you know, the technical debt, the existing infrastructure, it's on the books. You don't want to just throw it out. A lot of conversation about modernizing applications, which a lot of times is, you know, of microservices layer on top of legacy apps. How do you think about the Modern Data Stack? >> Well, I think probably the first thing to say is that the stack really has to include the processes and people around the data as well is all well and good changing the technology. But if you don't modernize how people use that technology, then you're not going to be able to, to scale because just 'cause you can scale CPU and storage doesn't mean you can get more people to use your data to generate you more value for the business. And so what we've been looking at is really changing in very much aligned to data products and, and data mesh. How do you enable more people to consume the service and have the stack respond in a way that keeps costs low? Because that's important for our customers consuming this data but also allows people to occasionally run enormous queries and then tick along with smaller ones when required. And it's a good job we did because during COVID all of a sudden we had enormous pressures on our data platform to answer really important life threatening queries. And if we couldn't scale both our data stack and our teams we wouldn't have been able to answer those as quickly as we had. So I think the stack needs to support a scalable business not just the technology itself. >> Well thank you for that. So Justin let's, let's try to break down what the critical aspects are of the modern data stack. So you think about the past, you know, five seven years cloud obviously has given a different pricing model. Derisked experimentation, you know that we talked about the ability to scale up scale down, but it's, I'm taking away that that's not enough. Based on what Richard just said, the modern data stack has to serve the business and enable the business to build data products. I buy that. I'm you a big fan of the data mesh concepts, even though we're early days. So what are the critical aspects if you had to think about you know, the, maybe putting some guardrails and definitions around the modern data stack, what does that look like? What are some of the attributes and, and principles there >> Of how it should look like or, or how >> Yeah. What it should be? >> Yeah. Yeah. Well, I think, you know, in, in Theresa mentioned this in in a previous segment about the data warehouse is not necessarily going to disappear. It just becomes one node, one element of the overall data mesh. And I certainly agree with that. So by no means, are we suggesting that, you know Snowflake or what Redshift or whatever cloud data warehouse you may be using is going to disappear, but it's it's not going to become the end all be all. It's not the, the central single source of truth. And I think that's the paradigm shift that needs to occur. And I think it's also worth noting that those who were the early adopters of the modern data stack were primarily digital, native born in the cloud young companies who had the benefit of of idealism. They had the benefit of starting with a clean slate that does not reflect the vast majority of enterprises. And even those companies, as they grow up, mature out of that ideal state, they go by a business. Now they've got something on another cloud provider that has a different data stack and they have to deal with that heterogeneity that is just change and change is a part of life. And so I think there is an element here that is almost philosophical. It's like, do you believe in an absolute ideal where I can just fit everything into one place or do I believe in reality? And I think the far more pragmatic approach is really what data mesh represents. So to answer your question directly, I think it's adding you know, the ability to access data that lives outside of the data warehouse, maybe living in open data formats in a data lake or accessing operational systems as well. Maybe you want to directly access data that lives in an Oracle database or a Mongo database or, or what have you. So creating that flexibility to really future proof yourself from the inevitable change that you will you won't encounter over time. >> So thank you. So Theresa, based on what Justin just said, I I might take away there is it's inclusive whether it's a data mart, data hub, data lake, data warehouse, just a node on the mesh. Okay. I get that. Does that include Theresa on, on Preem data? Obviously it has to. What are you seeing in terms of the ability to, to take that data mesh concept on Preem I mean most implementations I've seen and data mesh, frankly really aren't, you know adhering to the philosophy there. Maybe, maybe it's data lake and maybe it's using glue. You look at what JPMC is doing, HelloFresh, a lot of stuff happening on the AWS cloud in that, you know, closed stack, if you will. What's the answer to that Theresa? >> I mean, I think it's a killer case for data mesh. The fact that you have valuable data sources on Preem, and then yet you still want to modernize and take the best of cloud. Cloud is still, like we mentioned, there's a lot of great reasons for it around the economics and the way ability to tap into the innovation that the cloud providers are giving around data and AI architecture. It's an easy button. So the mesh allows you to have the best of both world. You can start using the data products on Preem, or in the existing systems that are working already. It's meaningful for the business. At the same time, you can modernize the ones that make business sense because it needs better performance. It needs, you know, something that is, is cheaper or or maybe just tapping into better analytics to get better insights, right? So you're going to be able to stretch and really have the best of both worlds. That, again, going back to Richard's point, that is meaningful by the business. Not everything has to have that one size fits all set a tool. >> Okay. Thank you. So Richard, you know, talking about data as product wonder if we could give us your perspectives here what are the advantages of treating data as a product? What, what role do data products have in the modern data stack? We talk about monetizing data. What are your thoughts on data products? >> So for us, one of the most important data products that we've been creating is taking data that is healthcare data across a wide variety of different settings. So information about patients, demographics about their their treatment, about their medications and so on, and taking that into a standards format that can be utilized by a wide variety of different researchers because misinterpreting that data or having the data not presented in the way that the user is expecting means that you generate the wrong insight and in any business that's clearly not a desirable outcome but when that insight is so critical as it might be in healthcare or some security settings you really have to have gone to the trouble of understanding the data, presenting it in a format that everyone can clearly agree on. And then letting people consume in a very structured managed way, even if that data comes from a variety of different sources in the first place. And so our data product journey has really begun by standardizing data across a number of different silos through the data mesh. So we can present out both internally and through the right governance externally to, to researchers. >> So that data product through whatever APIs is is accessible, it's discoverable, but it's obviously got to be governed as well. You mentioned appropriately provided to internally. >> Yeah. >> But also, you know, external folks as well. So the, so you've, you've architected that capability today? >> We have and because the data is standard it can generate value much more quickly and we can be sure of the security and value that that's providing, because the data product isn't just about formatting the data into the correct tables, it's understanding what it means to redact the data or to remove certain rows from it or to interpret what a date actually means. Is it the start of the contract or the start of the treatment or the date of birth of a patient? These things can be lost in the data storage without having the proper product management around the data to say in a very clear business context what does this data mean, and what does it mean to process this data for a particular use case. >> Yeah, it makes sense. It's got the context. If the, if the domains on the data, you know you got to cut through a lot of the, the centralized teams, the technical teams that that data agnostic, they don't really have that context. All right, let's end. Justin. How does Starburst fit into this modern data stack? Bring us home. >> Yeah. So I think for us it's really providing our customers with, you know the flexibility to operate and analyze data that lives in a wide variety of different systems. Ultimately giving them that optionality, you know and optionality provides the ability to reduce costs store more in a data lake rather than data warehouse. It provides the ability for the fastest time to insight to access the data directly where it lives. And ultimately with this concept of data products that we've now, you know incorporated into our offering as well you can really create and, and curate, you know data as a product to be shared and consumed. So we're trying to help enable the data mesh, you know model and make that an appropriate compliment to you know, the modern data stack that people have today. >> Excellent. Hey, I want to thank Justin, Teresa, and Richard for joining us today. You guys are great. Big believers in the in the data mesh concept, and I think, you know we're seeing the future of data architecture. So thank you. Now, remember, all these conversations are going to be available on the cube.net for on demand viewing. You can also go to starburst.io. They have some great content on the website and they host some really thought provoking interviews and they have awesome resources. Lots of data mesh conversations over there and really good stuff in, in the resource section. So check that out. Thanks for watching the "Data Doesn't Lie... or Does It?" made possible by Starburst data. This is Dave Vellante for the Cube, and we'll see you next time. (upbeat music)

Published Date : Aug 22 2022

SUMMARY :

And that is the claim It's the cloud data stack, So, let me come back to you Justin. that the cloud data warehouses out there So Teresa, let me go to you, So the centralized cloud as we know it, it's on the books. the first thing to say is of the modern data stack. from the inevitable change that you will What's the answer to that Theresa? So the mesh allows you to in the modern data stack? or having the data not presented So that data product But also, you know, around the data to say in a on the data, you know enable the data mesh, you know in the data mesh concept,

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
RichardPERSON

0.99+

Teresa TungPERSON

0.99+

JustinPERSON

0.99+

TeresaPERSON

0.99+

Dave VellantePERSON

0.99+

Justin BorgmanPERSON

0.99+

Richard JarvisPERSON

0.99+

40 yearsQUANTITY

0.99+

TheresaPERSON

0.99+

StarburstORGANIZATION

0.99+

JPMCORGANIZATION

0.99+

AWSORGANIZATION

0.99+

InformaticaORGANIZATION

0.99+

AccentureORGANIZATION

0.99+

both worldsQUANTITY

0.99+

todayDATE

0.99+

EMIS HealthORGANIZATION

0.99+

first technologistQUANTITY

0.98+

one elementQUANTITY

0.98+

bothQUANTITY

0.98+

first thingQUANTITY

0.98+

five seven yearsQUANTITY

0.98+

oneQUANTITY

0.97+

TeradataORGANIZATION

0.97+

OracleORGANIZATION

0.97+

cube.netOTHER

0.96+

MongoORGANIZATION

0.95+

one sizeQUANTITY

0.93+

CubeORGANIZATION

0.92+

PreemTITLE

0.92+

both worldQUANTITY

0.91+

one placeQUANTITY

0.91+

Today’sTITLE

0.89+

FivetranORGANIZATION

0.86+

Data Doesn't Lie... or Does It?TITLE

0.86+

single locationQUANTITY

0.85+

HelloFreshORGANIZATION

0.84+

first placeQUANTITY

0.83+

CEOPERSON

0.83+

LieTITLE

0.82+

single sourceQUANTITY

0.79+

firstQUANTITY

0.75+

one nodeQUANTITY

0.72+

SnowflakeORGANIZATION

0.66+

SnowflakeTITLE

0.66+

threeQUANTITY

0.59+

CTOPERSON

0.53+

Data StackTITLE

0.53+

RedshiftTITLE

0.52+

starburst.ioOTHER

0.48+

COVIDTITLE

0.37+

Hillary Ashton, Teradata | Amazon re:MARS


 

(upbeat music) >> And welcome back. I'm John Furrier, host of theCUBE. We're excited to welcome Teradata back to theCUBE and today with us at the ARIA is re:MARS conference coverage. It's great to hear with Hillary Ashton, Chief Product Officer of Teradata. Great to have you on. Thanks for coming on. >> John, thanks so much for having me. I'm super excited to be joining you today. >> So re:MARS, what a great event. It brings together the confluence of machine learning, which is data, automation, robotics, and space. Which is to me, is a whole new genre of conversations, around technology and business value. It is going to be a big kind of area. And it's just, again just getting started any one, as they say, and super excited. Tell us about what you guys are doing there and yourself. >> About two and a half years ago I head up the products organization. That means I have responsibility for our roadmap and our and our strategy overall on the product side. Prior to coming Teradata, gosh, I have spent the last 20 years, if I can say that, in the data and analytics space. I grew up in marketing application space, spent 11 years at SaaS, really cut my teeth on hardcore AI, ML and analytics at SaaS, and most recently was at PTC, where I was in charge of, I was a general manager of augmented reality, the business unit at PTC, focused on IOT data and how IOT data and augmented reality can really bring machines to life. >> It's interesting. You talked about SaaS and kind of your background, you know everything SaaSified with the cloud now. So you think about platform as a service, SaaS models emerging, software is an open source game now. So it's an integration cloud-scale data conversation we're seeing. What's your reaction to that? What's your reaction to that kind of idea that, okay, everything's open to source, software value integrating in with data. What's your reaction to that? >> Yeah, I mean, I think open source absolutely has some awesome things going on there. I think there's great opportunities for commercial, reliable, governed software and open source capabilities to come together in an open ecosystem that allow our customers to choose the best way to deliver the analytic outcomes that they're focused on. >> So you guys have been in the news lately around connecting multicloud data analytics platforms and transforming businesses around there, obviously, the background with Teradata is well documented. What's this news about? What's really going on there? You got Vantage platform. What's happening? Take us through that story. What's the key point? >> Yeah, we've worked super hard to deliver a true, multicloud, hybrid, data platform. So, if you think customers, many of our enterprise customers started with on-premises data systems and are moving violently to the cloud, right? So they're super excited about moving to the cloud but being able to deploy on multiple clouds, I think is important and then importantly, sort of this hybrid notion of being able to leverage data that's on-premises and combine it with data in the cloud on AWS, for example. And so being able to do those hybrid use cases you may have data that's like older and kind of archaic, needs to stay on-premises. There's not a lot of value in moving it to the cloud but you want to combine it with some of the innovative, analytic capabilities that perhaps you're doing on AWS. And so Teradata allows you to live in that hybrid multicloud environment and deliver analytic outcomes wherever your data is. >> Hillary, one of the top conversations is data cloud. You got to have a data cloud. I want to deal with this, move this around, but there's a lot of now integration opportunities to bring data from different sources together whether you're in healthcare, all the verticals have the same use case, multiple access to different databases, bringing them all together, ETL, all that old-school stuff is coming back in and being kind of refactored with machine learning, with cloud scale, with platforms like AWS, there's now this new commitment to bringing this to the next level for enterprises. And you mentioned some of those partnerships. What specifically is going on in the cloud that's notable, that's realistically that customers are executing on now? Not the hype, the reality. >> The reality. Yeah, absolutely. So I mean, I think today with Teradata our customers are leveraging something that we call a query fabric. And so this is the idea, as you said, John, that data might be in a lot of different places and you want to be able to get value out of that data without the difficulty of moving it around unnecessarily. Sometimes you want to move it around but unnecessary data movement is both expensive and an inefficient use of precious time. And so I think that there's an opportunity for this query fabric to be able to do remote push-down queries, wherever that data is and return back the results that you are looking for, analytic results, AI and ML results, combining different data that's in different locations to deliver that analytic outcome quickly without having to move the data around. So I would say query fabric is one of the areas that we are super invested in and, today, is delivering real value for our customers. >> It's really interesting. Data being addressable and available, low latency. I mean, we're talking about space, automation, robotics, real-time, so you have different data types stored in different data vehicles or mechanisms that need to be real-time and available. Because machine learning only works as good as the data they has available to it. So again, this is a key, kind of new way that folks are re-architecting. And again, we're here at, at re:MARS, right? I mean to machine learning automation, robotics and space, kind of the real world, physical, digital, trust, scale, huge concepts here. What's the partnership? How's it working with AWS? Take us through that strong partnership that you guys are developing. >> Yeah. I mean, we have a fantastic relationship with AWS. We're really excited that we signed a strategic collaboration agreement at the end of last year that really puts us in an elite category of AWS partners. We're really committed to co-investing and co-engineering with Amazon and our product development organization and also in go-to market and marketing and other parts of our business. As the Chief Product Officer, I'm really excited about three key areas. First is we've optimized Teradata Vantage to run in the AWS cloud at great scale, with unparalleled scale at the highest level for our customers. And so we've partnered with them to be able to handle some of the complex analytic workloads. And we think of analytic models are one part of a workload. There may be other ELT that you talked about, right? Workloads that you may need to run, all of that running at tremendous scale with AWS in the cloud. The second area is deep integration. So Teradata used to think that we were the ecosystem. We built everything soup connects end-to-end. Today, we live in a really exciting data and analytics space and we partner closely with CSPs like AWS, where we are deeply integrated. We have dozens of AWS native integrations in our AWS offer today. And that lets customers take advantage of AWS X3 for Cloud Lake, for example Amazon Kinesis for data ingestion and streaming and on and on. So we're really focused on the integration area there. And then finally, we've developed, co-developed with AWS, a fast and low risk migration approach to move from on-premises to the cloud for our enterprise customers. >> You know, what's interesting is as we kind of weave together, I hear you talking about those three areas. I mentioned earlier at the top of the interview, how integration is now the competitive advantage. Software is almost going commodity with open source because you mentioned that. All good, right? All good stuff. But when you think about kind of the big trends in this new computing world, it's hybrid cloud, it's edge, and IOT, okay? Again, cloud-scale and these new connected points, trust, access, all these things have to be integrated. So integration, you guys have been in the middle, Teradata has been around for a long time, leader in data warehousing, but now with cloud and in the data types, this is a game changer. I mean, this is notable. Can you share more about how you see this evolving with customers because at the end of the day the integration becomes super critical. >> Yeah, absolutely. And I'm super passionate about the opportunities of IOT streaming data. And that's one of the key areas of partnership with Amazon is taking that streaming data, leveraging the analytic opportunities with Amazon. We'll talk about that in just a second, but I think some of the examples that I could share with you, everyone loves to hear, I love to hear, about what actual customers are doing. So Brinker International, they're one of the world's largest casual dining restaurants. If you've ever been to a Chili's Grill or Maggiano's Little Italy those are the guys, Brinker International owns those brands. So we leveraged Amazon SageMaker and Teradata Vantage together to apply advanced analytic and predictive modeling to be able to understand things like demand. And you're in the middle of COVID and trying to understand how many people should you have on staff today? What is the demand going to look like? What should sales look like? What's foot traffic look like? So that demand forecasting capability across their 1,600 different store fronts or restaurant fronts is one of the examples that I could share with you. The other one is Hertz. So one of the world's largest vehicle rental companies. They are using Vantage and AWS together to track and analyze transaction data across all of its global locations and manage again that complex inventory. And some of that is streaming data, some of that is data that we're getting from the cars themselves, and then create a new value-added program to their loyalty members which is sort of the name of the game. Is customer acquisition and extension of brand across those customers. So those are two examples I can share with you. There's many, many others but I know you probably had some other questions. >> Yeah. I want to come back to the SageMaker thing. I think that's important partnership there because it's been one of the fastest growing services. It's always at the top or in the top two or three whenever I talk to Andy Jassy and the team over there. But I want to talk about scalability and I want to ask you, if you can scope for me the scalability of what's going on with this data challenging, 'cause where are we on that scale? Can you share how you would scope the scale? >> Absolutely. And I love talking about scale because it is a home run for Teradata. I think many customers start looking at the cloud and they start with kind of a little tiny baby footprints but we are an enterprise solution, an enterprise platform. And so I think that we're looking at tens of thousands of users and thousands of business critical applications. That's what our customers are doing and have done for decades with Teradata and bringing all of that scale to the cloud. And with AWS in particular, we recently did 1,000 node testing. I'm going to walk through this a little bit slowly, which is hard for me, as you can tell, but it was a single system of more than 1,000 nodes which is just to give you a sense, that's double our largest on-premises system. So it's huge. It was the single largest system. >> John: Double is your largest customer deployment? >> Double our largest customer deployment on-premises. Yeah, that's right. So it was 1,000 nodes with more than 1,000 different users submitting thousands of concurrent queries. So huge enterprise scale. And this was a real-world use case. We took not a traditional benchmark but a real world customer set of mixed workloads. So lots of long running strategic queries and lots of fast running queries that needed really tight SLAs. All of that running simultaneously. We saw no system down times, we were able to roll out and roll back new capabilities seamlessly in a true software as a service fashion. So that was an awesome test all run on AWS. And I think that their team was just as excited as we were about it. >> Well, I love the scale. I love that test you guys ran. I see you're sponsoring re:MARS which is great, congratulations. We love covering since the beginning, we believe of kind of a whole new genre of programming brings together the confluence of exciting technologies that just a decade ago weren't always working together. They were bespoke. >> That's right. Yeah. >> So now it's all integrated in at cloud scale, you got the test, got thousands of concurrents queries. What else are you showcasing? You mentioned the SageMaker because that's really where Amazon's connecting all these tools. How are you integrating in? It sounds like you're bringing all that Amazon goodness in with Teradata and vice versa. >> Absolutely. We're delivering sort of the best in class to our customers jointly. So here re:MARS today, we're really excited to be talking about SageMaker and our relationship with AWS to be able to deliver that seamless integration between our solutions for machine learning services and Teradata Vantage. So I'm sure it won't come as any surprise to you as we just talked about, but we're finding that massive investments in AI and ML and other advanced analytic capabilities are out there, and many organizations are really only experimenting. They're just starting to explore some of these opportunities. We think that there's tremendous value in this scale that we just talked about, that we can offer, combined with best in class AI and ML capabilities like SageMaker. And so we are excited to talk about it. If you want to see it, we've got a booth set up, you can come and take a look at what we're doing there but I think there's huge opportunities for customers to get to the analytic value with Teradata Vantage and AWS SageMaker. >> Yeah, it's great to see Teradata seeing that headroom opportunity to extend the value proposition to kind of new territory with your customers. I can definitely see it. Love the connection here. Where can they learn more about the Teradata partnership with AWS and Amazon? Is there a site? Is there a program coming? Is there any more content that they can be expecting to see? Take a little plug time to plug the company. >> If you insist, I will, John. Thank you. I think, if you're at the event right now, you can swing by Teradata's booth. We're at booth 111. You can get a demo of our SageMaker integration and learn more about both our enterprise scale and the advanced outcomes that we're able to provide to our customers. If you're not at re:MARS and we really think you should be, we would encourage you to sign up for one of our upcoming SageMaker webinars that we're doing with AWS this year. And if you'd like to, you can also just email us at aws@teradata.com. Again, that's aws@teradata.com and we'll set up a private demo for you. >> Well, Hillary Ashton, great to have you on. Chief Product Officer, Teradata, you must be feeling good. You got a lot to work with. You've got an install base. You have new territory to take down. As the Chief Product Officer, you got the keys to the kingdom. Give us a quick bumper sticker of where you guys are going with the product. >> We are fast and furious. My team will tell you, we are so excited to be here with AWS and Teradata is on an epic trajectory forward in our cloud first approach, so we are so excited about our roadmap. If you'd like to learn more, please swing by teradata.com. >> Lot of innovation happening. Thanks for coming on theCUBE. Okay, this is theCUBE coverage of Amazon re:MARS machine learning, automation, robotics, and space. It cuts the confluence of digital, virtual data and real-world and space. You can't get any more than this. That's a big edge out there in space. Talk about edge computing and space. Of course, theCUBE's here covering it. I'm John Furrier, your host. Stay with us for more coverage here at Amazon re:MARS. (upbeat music)

Published Date : Jun 30 2022

SUMMARY :

Great to have you on. I'm super excited to be joining you today. It is going to be a big kind of area. I have spent the last 20 So you think about platform as a service, to choose the best way to obviously, the background with of being able to leverage and being kind of refactored for this query fabric to be able to do or mechanisms that need to and we partner closely with CSPs like AWS, and in the data types, What is the demand going to look like? and the team over there. that scale to the cloud. All of that running simultaneously. love that test you guys ran. That's right. You mentioned the SageMaker as any surprise to you to extend the value proposition that we're doing with AWS this year. great to have you on. so excited to be here with AWS It cuts the confluence

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
AWSORGANIZATION

0.99+

JohnPERSON

0.99+

Hillary AshtonPERSON

0.99+

AmazonORGANIZATION

0.99+

John FurrierPERSON

0.99+

John FurrierPERSON

0.99+

Andy JassyPERSON

0.99+

TeradataORGANIZATION

0.99+

Brinker InternationalORGANIZATION

0.99+

11 yearsQUANTITY

0.99+

HillaryPERSON

0.99+

aws@teradata.comOTHER

0.99+

todayDATE

0.99+

FirstQUANTITY

0.99+

two examplesQUANTITY

0.99+

threeQUANTITY

0.99+

HertzORGANIZATION

0.99+

teradata.comOTHER

0.99+

DoubleQUANTITY

0.99+

more than 1,000 different usersQUANTITY

0.99+

Chili's GrillORGANIZATION

0.99+

TodayDATE

0.99+

oneQUANTITY

0.99+

second areaQUANTITY

0.98+

this yearDATE

0.98+

thousandsQUANTITY

0.98+

bothQUANTITY

0.98+

singleQUANTITY

0.97+

About two and a half years agoDATE

0.97+

single systemQUANTITY

0.96+

PTCORGANIZATION

0.96+

VantageORGANIZATION

0.96+

doubleQUANTITY

0.95+

thousands of businessQUANTITY

0.95+

SageMakerTITLE

0.94+

three areasQUANTITY

0.93+

a decade agoDATE

0.93+

first approachQUANTITY

0.93+

Teradata VantageORGANIZATION

0.92+

1,000 nodesQUANTITY

0.92+

Sheila Rohra & Omer Asad, HPE Storage | HPE Discover 2022


 

>> Announcer: "theCUBE" presents HPE Discover 2022. Brought to you by HPE. >> Welcome back to HPE Discover 2022. You're watching "theCUBE's" coverage. This is Day 2, Dave Vellante with John Furrier. Sheila Rohra is here. She's the Senior Vice President and GM of the Data Infrastructure Business at Hewlett Packard Enterprise, and of course, the storage division. And Omer Asad. Welcome back to "theCUBE", Omer. Senior Vice President and General Manager for Cloud Data Services, Hewlett Packard Enterprise storage. Guys, thanks for coming on. Good to see you. >> Thank you. Always a pleasure, man. >> Thank you. >> So Sheila, I'll start with you. Explain the difference. The Data Infrastructure Business and then Omer's Cloud Data Services. You first. >> Okay. So Data Infrastructure Business. So I'm responsible for the primary secondary storage. Basically, what you physically store, the data in a box, I actually own that. So I'm going to have Omer explain his business because he can explain it better than me. (laughing) Go ahead. >> So 100% right. So first, data infrastructure platforms, primary secondary storage. And then what I do from a cloud perspective is wrap up those things into offerings, block storage offerings, data protection offerings, and then put them on top of the GreenLake platform, which is the platform that Antonio and Fidelma talked about on main Keynote stage yesterday. That includes multi-tenancy, customer subscription management, sign on management, and then on top of that we build services. Services are cloud-like services, storage services or block service, data protection service, disaster recovery services. Those services are then launched on top of the platform. Some services like data protection services are software only. Some services are software plus hardware. And the hardware on the platform comes along from the primary storage business and we run the control plane for that block service on the GreenLake platform and that's the cloud service. >> So, I just want to clarify. So what we maybe used to know as 3PAR and Nimble and StoreOnce. Those are the products that you're responsible for? >> That is the primary storage part, right? And just to kind of show that, he and I, we do indeed work together. Right. So if you think about the 3PAR, the primary... Sorry, the Primera, the Alletras, the Nimble, right? All that, right? That's the technology that, you know, my team builds. And what Omer does with his magic is that he turns it into HPE GreenLake for storage, right? And to deliver as a service, right? And basically to create a self-service agility for the customer and also to get a very Cloud operational experience for them. >> So if I'm a customer, just so I get this right, if I'm a customer and I want Hybrid, that's what you're delivering as a Cloud service? >> Yes. >> And I don't care where the data is on-premises, in storage, or on Cloud. >> 100%. >> Is that right? >> So the way that would work is, as a customer, you would come along with the partner, because we're 100% partner-led. You'll come to the GreenLake Console. On the GreenLake Console, you will pick one of our services. Could be a data protection service, could be the block storage service. All services are hybrid in nature. Public Cloud is 100% participant in the ecosystem. You'll choose a service. Once you choose a service, you like the rate card for that service. That rate card is just like a hyperscaler rate card. IOPS, Commitment, MINCOMMIT's, whatever. Once you procure that at the price that you like with a partner, you buy the subscription. Then you go to console.greenLake.com, activate your subscription. Once the subscription is activated, if it's a service like block storage, which we talked about yesterday, service will be activated, and our supply chain will send you our platform gear, and that will get activated in your site. Two things, network cable, power cable, dial into the cloud, service gets activated, and you have a cloud control plane. The key difference to remember is that it is cloud-consumption model and cloud-operation model built in together. It is not your traditional as a service, which is just like hardware leasing. >> Yeah, yeah, yeah. >> That's a thing of the past. >> But this answers a question that I had, is how do you transfer or transform from a company that is, you know, selling boxes, of course, most of you are engineers are software engineers, I get that, to one that is selling services. And it sounds like the answer is you've organized, I know it's inside baseball here, but you organize so that you still have, you can build best of breed products and then you can package them into services. >> Omer: 100%. 100%. >> It's separate but complementary organization. >> So the simplest way to look at it would be, we have a platform side at the house that builds the persistence layers, the innovation, the file systems, the speeds and feeds, and then building on top of that, really, really resilient storage services. Then how the customer consumes those storage services, we've got tremendous feedback from our customers, is that the cloud-operational model has won. It's just a very, very simple way to operate it, right? So from a customer's perspective, we have completely abstracted away out hardware, which is in the back. It could be at their own data center, it could be at an MSP, or they could be using a public cloud region. But from an operational perspective, the customer gets a single pane of glass through our service console, whether they're operating stuff on-prem, or they're operating stuff in the public cloud. >> So they get storage no matter what? They want it in the cloud, they got it that way, and if they want it as a service, it just gets shipped. >> 100%. >> They plug it in and it auto configures. >> Omer: It's ready to go. >> That's right. And the key thing is simplicity. We want to take the headache away from our customers, we want our customers to focus on their business outcomes, and their projects, and we're simplifying it through analytics and through this unified cloud platform, right? On like how their data is managed, how they're stored, how they're secured, that's all taken care of in this operational model. >> Okay, so I have a question. So just now the edge, like take me through this. Say I'm a customer, okay I got the data saved on-premise action, cloud, love that. Great, sir. That's a value proposition. Come to HPE because we provide this easily. Yeah. But now at the edge, I want to deploy it out to some edge node. Could be a tower with Telecom, 5G or whatever, I want to box this out there, I want storage. What happens there? Just ship it out there and connects up? Does it work the same way? >> 100%. So from our infrastructure team, you'll consume one or two platforms. You'll consume either the Hyperconverged form factor, SimpliVity, or you might convert, the Converged form factor, which is proliant servers powered by Alletras. Alletra 6Ks. Either of those... But it's very different the way you would procure it. What you would procure from us is an edge service. That edge service will come configured with certain amount of compute, certain amount of storage, and a certain amount of data protection. Once you buy that on a dollars per gig per month basis, whichever rate card you prefer, storage rate card or a VMware rate card, that's all you buy. From that point on, the platform team automatically configures the back-end hardware from that attribute-based ordering and that is shipped out to your edge. Dial in the network cable, dial in the power cable, GreenLake cloud discovers it, and then you start running the- >> Self-service, configure it, it just shows up, plug it in, done. >> Omer: Self-service but partner-led. >> Yeah. >> Because we have preferred pricing for our partners. Our partners would come in, they will configure the subscriptions, and then we activate those customers, and then send out the hardware. So it's like a hyperscaler on-prem at-scale kind of a model. >> Yeah, I like it a lot. >> So you guys are in the data business. You run the data portion of Hewlett Packard Enterprise. I used to call it storage, even if we still call it storage but really, it's evolving into data. So what's your vision for the data business and your customer's data vision, if you will? How are you supporting that? >> Well, I want to kick it off, and then I'm going to have my friend, Omer, chime in. But the key thing is that what the first step is is that we have to create a unified platform, and in this case we're creating a unified cloud platform, right? Where there's a single pane of glass to manage all that data, right? And also leveraging lots of analytics and telemetry data that actually comes from our infosite, right? We use all that, we make it easy for the customer, and all they have to say, and they're basically given the answers to the test. "Hey, you know, you may want to increase your capacity. You may want to tweak your performance here." And all the customers are like, "Yes. No. Yes, no." Basically it, right? Accept and not accept, right? That's actually the easiest way. And again, as I said earlier, this frees up the bandwidth for the IT teams so then they actually focus more on the business side of the house, rather than figuring out how to actually manage every single step of the way of the data. >> Got it. >> So it's exactly what Sheila described, right? The way this strategy manifests itself across an operational roadmap for us is the ability to change from a storage vendor to a data services vendor, right? >> Sheila: Right. >> And then once we start monetizing these data services to our customers through the GreenLake platform, which gives us cloud consumption model and a cloud operational model, and then certain data services come with the platform layer, certain data services are software only. But all the services, all the data services that we provide are hybrid in nature, where we say, when you provision storage, you could provision it on-prem, or you can provision it in a hyperscaler environment. The challenge that most of our customers have come back and told us, is like, data center control planes are getting fragmented. On-premises, I mean there's no secrecy about it, right? VMware is the predominant hypervisor, and as a result of that, vCenter is the predominant configuration layer. Then there is the public cloud side, which is through either Ajour, or GCP, or AWS, being one of the largest ones out there. But when the customer is dealing with data assets, the persistence layer could be anywhere, it could be in AWS region, it could be your own data center, or it could be your MSP. But what this does is it creates an immense amount of fragmentation in the context in which the customers understand the data. Essentially, John, the customers are just trying to answer three questions: What is it that I store? How much of it do I store? Should I even be storing it in the first place? And surprisingly, those three questions just haven't been answered. And we've gotten more and more fragmented. So what we are trying to produce for our customers, is a context to ware data view, which allows the customer to understand structured and unstructured data, and the lineage of how it is stored in the organization. And essentially, the vision is around simplification and context to ware data management. One of the key things that makes that possible, is again, the age old infosite capability that we have continued to hone and develop over time, which is now up to the stage of like 12 trillion data points that are coming into the system that are not corroborated to give that back. >> And of course cost-optimizing it as well. We're up against the clock, but take us through the announcements, what's new from when we sort of last talked? I guess it was in September. >> Omer: Right. >> Right. What's new that's being announced here and, or, you know, GA? >> Right. So three major announcements that came out, because to keep on establishing the context when we were with you last time. So last time we announced GreenLake backup and recovery service. >> John: Right. >> That was VMware backup and recovery as a complete cloud, sort of SaaS control plane. No backup target management, no BDS server management, no catalog management, it's completely a SaaS service. Provide your vCenter address, boom, off you go. We do the backups, agentless, 100% dedup enabled. We have extended that into the public cloud domain. So now, we can back up AWS, EC2, and EBS instances within the same constructs. So a single catalog, single backup policy, single protection framework that protects you both in the cloud and on-prem, no fragmentation, no multiple solutions to deploy. And the second one is we've extended our Hyperconverged service to now be what we call the Hybrid Cloud On-Demand. So basically, you go to GreenLake Console control plane, and from there, you basically just start configuring virtual machines. It supports VMware and AWS at the same time. So you can provision a virtual machine on-prem, or you can provision a virtual machine in the public cloud. >> Got it. >> And, it's the same framework, the same catalog, the same inventory management system across the board. And then, lastly, we extended our block storage service to also become hybrid in nature. >> Got it. >> So you can manage on-prem and AWS, EBS assets as well. >> And Sheila, do you still make product announcements, or does Antonio not allow that? (Omer laughing) >> Well, we make product announcements, and you're going to see our product announcements actually done through the HPE GreenLake for block storage. >> Dave: Oh, okay. >> So our announcements will be coming through that, because we do want to make it as a service. Again, we want to take all of that headache of "What configuration should I buy? How do I actually deploy it? How do I...?" We really want to take that headache away. So you're going to see more feature announcements that's going to come through this. >> So feature acceleration through GreenLake will be exposed? >> Absolutely. >> This is some cool stuff going on behind the scenes. >> Oh, there's a lot good stuff. >> Hardware still matters, you know. >> Hardware still matters. >> Does it still matter? Does hardware matter? >> Hardware still matters, but what matters more is the experience, and that's actually what we want to bring to the customer. (laughing) >> John: That's good. >> Good answer. >> Omer: 100%. (laughing) >> Guys, thanks so much- >> John: Hardware matters. >> For coming on "theCUBE". Good to see you again. >> John: We got it. >> Thanks. >> And hope the experience was good for you Sheila. >> I know, I know. Thank you. >> Omer: Pleasure as always. >> All right, keep it right there. Dave Vellante and John Furrier will be back from HPE Discover 2022. You're watching "theCUBE". (soft music)

Published Date : Jun 29 2022

SUMMARY :

Brought to you by HPE. and of course, the storage division. Always a pleasure, man. Explain the difference. So I'm responsible for the and that's the cloud service. Those are the products that That's the technology that, you know, the data is on-premises, On the GreenLake Console, you And it sounds like the Omer: 100%. It's separate but is that the cloud-operational and if they want it as a and it auto configures. And the key thing is simplicity. So just now the edge, and that is shipped out to your edge. it just shows up, plug it in, done. and then we activate those customers, for the data business the answers to the test. and the lineage of how it is And of course and, or, you know, GA? establishing the context And the second one is we've extended And, it's the same framework, So you can manage on-prem the HPE GreenLake for block storage. that's going to come through this. going on behind the scenes. and that's actually what we Omer: 100%. Good to see you again. And hope the experience I know, I know. Dave Vellante and John

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
SheilaPERSON

0.99+

JohnPERSON

0.99+

DavePERSON

0.99+

Sheila RohraPERSON

0.99+

Dave VellantePERSON

0.99+

SeptemberDATE

0.99+

Dave VellantePERSON

0.99+

three questionsQUANTITY

0.99+

Hewlett Packard EnterpriseORGANIZATION

0.99+

100%QUANTITY

0.99+

AWSORGANIZATION

0.99+

John FurrierPERSON

0.99+

oneQUANTITY

0.99+

OmerPERSON

0.99+

John FurrierPERSON

0.99+

two platformsQUANTITY

0.99+

Omer AsadPERSON

0.99+

firstQUANTITY

0.99+

HPEORGANIZATION

0.99+

NimbleORGANIZATION

0.99+

first stepQUANTITY

0.99+

console.greenLake.comOTHER

0.99+

yesterdayDATE

0.99+

second oneQUANTITY

0.99+

OneQUANTITY

0.99+

AntonioPERSON

0.98+

12 trillion data pointsQUANTITY

0.98+

Two thingsQUANTITY

0.98+

AlletrasORGANIZATION

0.97+

HPE StorageORGANIZATION

0.97+

5GORGANIZATION

0.97+

theCUBETITLE

0.97+

bothQUANTITY

0.95+

GALOCATION

0.95+

StoreOnceORGANIZATION

0.95+

EBSORGANIZATION

0.94+

three major announcementsQUANTITY

0.94+

Cloud Data ServicesORGANIZATION

0.93+

PrimeraORGANIZATION

0.92+

AjourORGANIZATION

0.9+

GreenLakeORGANIZATION

0.9+

single paneQUANTITY

0.88+

single backup policyQUANTITY

0.86+

single catalogQUANTITY

0.86+

Day 2QUANTITY

0.85+

single protection frameworkQUANTITY

0.84+

VMwareTITLE

0.82+

theCUBEORGANIZATION

0.82+

EC2TITLE

0.79+

Alletra 6KsTITLE

0.77+

VMwareORGANIZATION

0.73+

KeynoteEVENT

0.72+

single stepQUANTITY

0.72+

HPE DiscoverORGANIZATION

0.7+

dollars per gigQUANTITY

0.7+

Regina Manfredi, Teradata | Amazon re:MARS 2022


 

(light techno music) >> Okay, welcome back, everyone from theCUBE's coverage of AWS re:Mars here in Las Vegas. Back in person, I'm John Furrier, host of theCUBE. Re:MARS stands or Machine learning, Automation, Robotics, and Space. And we're covering all the action two days, day two. And we're here with Regina Manfredi, who's the VP of global CSPs, Cloud Service Providers Alliances with Teradata. Great to see you. Cloud service providers or- >> Cloud services providers, the hyperscalers. >> Hyperscalers, the big guys. All the CapEx, Amazon. >> Yes. >> The big guys. >> Indeed, thanks for having me. >> Yeah, Thanks for coming on. So tell about your role. So alliances, you're here with AWS. What's the role with AWS and Teradata? >> So AWS and Teradata have recently entered into a strategic collaboration agreement where we're really focused on building solutions together, leveraging AWS services, as well as Teradata's outstanding architecture, as it relates to the data analytics platform that we provide for our customers in the cloud today. And we're really trying to drive better outcomes for data scientists, business analysts, etc. >> You know, just recently, did a CUBE conversation with Teradata, and I was really surprised to find, not shocked, but kind of surprised, the scale of the computation that's going on in some of the cloud things you're doing. And you have the legacy on-premises data warehouse traditional business as well. >> Regina: We do. >> And there's a huge shift going on. A lot of the kind of upstarts, "Oh, data warehouse, old school. Data warehouse, it's antiquated, old," but that's not true. You guys have a lot of cloud action. >> We do, we have substantial cloud action that's occurring with our customers today. We actually just released earlier this year an announcement around 1,000 node tests in the cloud together with AWS, and had success, no downtime, no failures at all. And so we're pretty proud about that, and excited about what that's going to hold for our customers who need that level of scale. >> Well, Regina, I got to tell you, I have a little bit of a confession here. I'm a cloud data nerd by my training. And, you know, I've always watched all the different kind of levels of transformation with the industry, and you know, this is going to change that, that's going to kill that. Everything's going to be killed and then it never dies, but it just changes. Even today, SQL is still like the prominent language, it's never going to, in fact it's amplified further because that's what people like. So that just proves that things don't always get replaced. And so I wanted to ask you this because as we're here at this event at re:MARS, you have space, you have all these ambitious positive goals, and they just need to do some machine learning. They need some cloud, they need some, they need to have the solutions. >> Regina: Yes. They're not going to like get in the weed and say, "Oh, this is a better Hadoop cluster than this Kubernetes cluster. So it's not about sometimes the tech, it's about the solution. >> It is, and one of the things that was interesting for us in our session earlier this week was the fact that we had so many customers approach us after that session and say, "I just need help preparing my data. Running my models, training my models, and making sure that they run and can be deployed. And I don't want to move all this data all the time and have all this failure rate that I'm experiencing." And so it was very basic requirements and needs as people begin into their journey on AI/ML for their business. And so it was reaffirming that we're on the right track and driving the right tools for them. I want to get your perspective on what you're thinking about the show, but first, I want to ask this since you brought that up. Swami was on stage and he said, "You can spend your entire time and your career just trying to figure out what's going on, machine learning." >> Regina: Yup. >> "Which open source framework's going to be better than the other one." I mean, it's just a lot of work to even figure it out. We just had the Fiddler's AI CEO on who worked out all the hyperscalers, say Facebook tend to, you know, real, you know, super alpha geek, if you will. And he was saying, and we were talking about open source, free software, integrations are a big part of where cloud scale, and the value is being captured for companies and people who are doing projects. Integrating some managed services, so this is where I see you, guys, going right now with Teradata, having all these cloud services built on the install base. >> Right. Which is not, doesn't hurt that at all. It just only helps it as they would migrate to cloud, its integrations, so you take a little bit of Amazon here, a little bit of Teradata there. >> Regina: Absolutely. >> What's your perspective, what's your reaction to that? >> So, I agree. And we think that's part of our secret sauce. You know, what we want to have is a data analytics platform in the cloud that allows data scientists, and architects, etc., to bring their own tools. So whatever they're utilizing today, we want them to be able to utilize it in vantage, and make sure that, A, can drive some efficiencies, and also, some better, smarter economics, as it relates to their particular projects. And so I agree with you 100% , and would tell you that we view that as somewhat our competitive advantage. It's not about being all proprietary. We want those integrations, and we've got dozens of them with AWS, and- >> Can you give example, can you give a couple examples of some integrations that highlight that? >> Sure, so right now we've got an integration with SageMaker today that allows our customers or data scientists to come in, prepare the data, and actually leverage SageMaker to build and train the models, and then deploy very quickly and easily without having to do all the data movement within their architecture. >> It's just so fascinating. I can't wait to have more conversation with you guys about this because I just think the world's spinning in a direction where, with low code, no code, >> Regina: Yup. >> you can see code, companion whisperer, that they have CodeWhisperer they launched today, they're writing subroutines for machine learning. And so it's not autocomplete, it's subroutine. So you're seeing all these advances on the technology. So it comes back to the building blocks, the integration. It just seems like going to be like a plug and play. That's old, were all, are old words. Mix and match, plug and play, interoperability, were old words, like, in the old days. Now they're becoming more relevant. What's your take on all that? >> Yeah, I would agree. I don't think that we should be competing against the algorithms, and neither do we. We want to just actually build out the toolsets that drive the enablement based on what a customer's requirements and needs are, and based on what the investments that they've already made within their own enterprises. >> You know, what's interesting about this event, I love to get your reaction to what re:MARS means to you because it's machine learning, automation, robotics, and space. Not your typical tech conference. >> Regina: No. >> Okay, little bit of a mixed bag there, so to speak. I love it. I think it's like super alpha geek, very nerdy, super nerds are here. And the topics kind of reflect the future. For the people that are watching that aren't here, what's your vibe on the show? What's your takeaway? How would you explain what's going on here from a market perspective, from a vibe perspective, what's happening? >> This is my first re:MARS actually, and I would have to tell you that I feel like it just, general observation, a few things, one, the conversations are more meaningful and we're getting into the meat of what a data scientist truly needs in order to be successful in their role and help drive their enterprise. That's number one. So I think, to your point, we're all kind of geeking out together here. The other thing that I think is pretty exciting is the amount of use cases, and ways in which we are driving impact. AWS and Teradata driving impact for the business analysts in the enterprise environment, but also for the people, their customers. That's pretty exciting to see. >> You know, it's interesting. When I first, was kind of like thinking about the show and what I was going to expect, it kind of overexceeded my expectations in the sense of what I was thinking about IOT, industrial, and digital innovation. 'Cause that's going to scale. I think now we're at a tipping point with machine learning that the industrial, IOT markets is going to explode 'cause machine learning's ready. But there was a whole positive, save the earth angle >> Regina: Yes. >> that caught my attention. >> Regina: Yes. You know, the discoveries from space are going to potentially have impact for the good, not just a cliche some sustainability messaging. It was actually real. >> Right, I think that that's exciting in an area in which we're excited to explore. We're doing a lot of work behind the scenes around sustainability and ESG initiatives for our customers, but also for the greater good. It's about driving outcomes for the greater good and being responsible with how we approach that. You know, the other thing I noticed too from a robotics standpoint, given I live in California, is a huge robotics culture there, you know. It's like bigger than football and baseball, and some sports. They provide A and B team and people get cut from the B team. There's so much demand to be on the robotics team. It's not a club, it's a team. >> Regina: Right. And so, you look at what's going on robotics, it's so exciting in the sense that if you're young and you're into tech, this is like- >> Regina: This is the place to be. >> I mean, why wouldn't you be hanging out here? >> Yeah, well, and I visited the booth over at University of Michigan, and how they're driving robotics to help support the human body to go further distances, and to drive better performance and health for individuals, and was really impressed with the work that they're doing, and even saw a use case and a need where I thought, you know, I have a quadriplegic sister-in-law, who I thought, "Wow, someday, maybe she'll be upright and walking again." >> John: Yeah. >> And those were exciting conversations to have while I was here. >> The advances on the material management robots I think is fascinating to see that growth. Well, let's get back to Teradata real quick to kind of close out future of what's next. Obviously, a lot of migration to the cloud happening. What's the outlook on the landscape and where do you see it evolving? Because you're seeing what the hyperscalers are doing, the cloud service providers, they're providing the CapEx. In fact, we coined the term supercloud, last re:Invent, that's become a thing. And Charles Fitzgerald would think it's not a thing, he debates us online all the time on Twitter. But it's, you can build on top of a CapEx. >> Regina: Yup. >> They did all the heavy lifting. You know, Snowflake, Databricks, the list goes on and on. So building on top of that to build proprietary advantages or even just sustainable advantages is now easier to do. So superclouds are kind of in play. So that means whoever's got the playbook can win. So you guys seem to be executing that playbook of having the installed base, and then working with AWS >> Regina: Yes. >> to ride that wave. Tell us about the migration strategies you're seeing, and what are your customers doing specifically, and take us through a customer that's leaning into the cloud and driving. >> So when I think about specific customers that are leaning in, you know, the first and most important thing that we're hearing is, you've got to be able to scale. I've got 1,000 nodes or 100 nodes, or whatnot. And so we're addressing that because we think that there's a place for hybrid cloud. We think everyone's moving and rushing towards the cloud, but even one of our competitors last week announced that there's a place for on-prem, and we would agree. >> John: Yeah. >> So that is something that we're really focused on, and you take, for example, the automotive industry. We're seeing a lot of work being done together with our joint customers, AWS and Teradata, and some of these auto manufacturers who are experiencing supply chain issues and challenges today, and also need to drive better quality control measures within their own lines, in the manufacturing lines. And so we're working together with them to look at what type of machine learning and AI can we be leveraging together as part of the overall solution to drive those analytics, and make sure that they have better quality control >> You know, that's really good insight about the on-premise thing. And I think that supports what we're seeing around hybrid. We see hybrid as a steady state going forward, period. >> Regina: Yeah. >> And that will evolve into multi thing. Multi-cloud, you want to call it, or superclouds, and more things. Basically, distributed computing. So if you look at the edge here, the edge is just on-premise. What is the premise? It's an edge or big device, small device, data center is a large edge. >> Regina: Right. >> And so if you're using cloud hybrid, the distinction kind of goes away. And I think this is where we'll going to see the winners emerge in data. Because remember, you go back to 2010, Hadoop was the big thing, big data. And that kind of crashed and burned. And then now you're seeing Databricks picking up a lot of that. Snowflake, you guys are there. And so it's still going on, this transformation in data. >> Regina: It is. And I think hybrid's a huge deal. What are customers saying around that? Because I think they're just trying to figure out cloud scale. >> I think they're trying to figure out cloud scale, I think they're also trying to figure out security. And so, you know, when we're talking to our customers, that absolutely is critical. And I would also suggest that the customer base is really looking for, "Hey, don't just help me migrate, I really need to modernize." And so driving the right use cases for the customer is important. >> You know, another thing that you, guys, have a lot of core expertise in is governance. And we've seen how that has played in all the compliance, and all these conversations are kind of converging. Do you have closed, do you have open? Machine learning needs more data, dow do you protect it? So that set a hot area that I see as well. And that's something that's emerging, 'cause cyber's also involved too, like, you have cyber security threats on code, so I'm curious to see how that turns out. What's your perspective on, what's Teradata's perspective on the security, open, closed perspective? Any- >> It's a priority for, security is a priority for us. And I don't think that we've officially made that determination yet, right? We're still exploring, and we're going to do whatever our customers require of us. In terms of an open, closed perspective, I think we want to be flexible. Again, like I said before, it's about being open and supportive of whatever the customer requirement is especially across the different industries. >> Well, Regina, great to have you on theCUBE. Thanks for coming. I really appreciate it. Great insight, great to catch up on Teradata, cloud play. Very strong move. I think it's a good one. Final question I want to ask you though, is a little bit more about the personnel in the industry, like, obviously, if you're young, you're seeing all this space here, machine learning's not obvious. I know schools now are training it, but you start to see new personas come into the workforce. Where are the gaps? I mean, obviously, we have a lot of new opportunities, like, cybersecurity has a lot of job openings. Is there any observations that you have around or advice to younger folks coming in, from a career standpoint? Because a lot of job openings are skills that weren't even taught in school. >> Regina: Right, that's- >> You know. >> And then you got the women in check, and you have all kinds of opportunities now that aren't just engineering, right? >> Regina: Yes. >> It's not just engineering. It's computer science, so there's a whole in-migration of new talent coming in the industry. >> Yes, I think maintaining a curious mind is really critical, and taking time to invest in learning. You know, there are so many resources available to us at our disposal that that don't cost us a dime. And so my advice to anybody who is curious, remain curious, dig in, and get some experience, and don't be afraid to stick your neck out, and try it. >> Well, in this conference we have robots welcome, you know, in this out there. >> Yeah. (laughs) >> Regina, thanks for coming out here. Really appreciate it >> John, thank you, it's a pleasure. >> CUBE coverage here in Las Vegas for Amazon re:MARS. I'm John Furrier, your host. Stay with more live coverage after this short break. (upbeat bright music)

Published Date : Jun 23 2022

SUMMARY :

And we're here with Regina Manfredi, providers, the hyperscalers. Hyperscalers, the big guys. What's the role with AWS and Teradata? customers in the cloud today. in some of the cloud things you're doing. A lot of the kind of upstarts, in the cloud together with AWS, and they just need to do So it's not about sometimes the tech, and driving the right tools for them. and the value is being captured so you take a little bit of Amazon here, And so I agree with you 100% , prepare the data, with you guys about this advances on the technology. that drive the enablement to what re:MARS means to you And the topics kind of reflect the future. but also for the people, their customers. in the sense of what I You know, the discoveries from space You know, the other thing I noticed too it's so exciting in the and to drive better performance And those I think is fascinating to see that growth. of having the installed base, that's leaning into the cloud and driving. and we would agree. and also need to drive better And I think that supports what What is the premise? And I think this is where And I think hybrid's a huge deal. And so driving the right use cases in all the compliance, And I don't think that to have you on theCUBE. coming in the industry. and don't be afraid to we have robots welcome, you Really appreciate it I'm John Furrier, your host.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Regina ManfrediPERSON

0.99+

CaliforniaLOCATION

0.99+

ReginaPERSON

0.99+

AWSORGANIZATION

0.99+

JohnPERSON

0.99+

AmazonORGANIZATION

0.99+

100%QUANTITY

0.99+

John FurrierPERSON

0.99+

Charles FitzgeraldPERSON

0.99+

TeradataORGANIZATION

0.99+

last weekDATE

0.99+

Las VegasLOCATION

0.99+

FacebookORGANIZATION

0.99+

Las VegasLOCATION

0.99+

firstQUANTITY

0.99+

2010DATE

0.99+

todayDATE

0.99+

DatabricksORGANIZATION

0.99+

SnowflakeORGANIZATION

0.98+

oneQUANTITY

0.98+

SwamiPERSON

0.98+

1,000 nodesQUANTITY

0.98+

two daysQUANTITY

0.98+

CapExORGANIZATION

0.98+

earlier this yearDATE

0.98+

TwitterORGANIZATION

0.97+

100 nodesQUANTITY

0.97+

earlier this weekDATE

0.96+

SageMakerTITLE

0.91+

day twoQUANTITY

0.88+

FiddlerORGANIZATION

0.87+

around 1,000 node testsQUANTITY

0.86+

dozensQUANTITY

0.84+

SQLTITLE

0.8+

MARSTITLE

0.78+

earthLOCATION

0.77+

ESGORGANIZATION

0.74+

MichiganLOCATION

0.69+

Benoit Dageville, Snowflake | Snowflake Summit 2022


 

(upbeat music) >> Welcome back everyone, theCUBE's three days of wall to wall coverage of Snowflake Summit '22 is coming to an end, but Dave Vellante and I, Lisa Martin are so pleased to have our final guest as none other than the co-founder and president of products at Snowflake, Benoit Dageville. Benoit, thank you so much for joining us on the program. Welcome. >> Thank you. Thank you, thank you. >> So this is day four, 'cause you guys started on Monday. This is Thursday. The amount of people that are still here speaks volumes. We've had close to 10,000 people here. >> Yeah. >> Could you ever have imagined back in the day, 10 years ago that it would come to something like this in such a short period of time? >> Absolutely not. And I always say if I had imagined that I might not have started Snowflake, right. This is somehow scary. I mean and yeah, it's huge. And you can feel the excitement of everyone. It is like mind boggling and the fact that so many people are still there after four days is great. >> Your keynote on Tuesday was fantastic. Your energy was off the charts. It was standing room only. There were overflow rooms. Like we just mentioned, a lot of people are still here. Talk about the evolution of Snowflake, this week's announcements and what it means for the future of the data cloud. >> Yeah, so evolution, I mean, I will start with the evolution. It's true that that's what we have announced. This week is not where we started necessarily. So we started really very quickly with big data combined with data warehouse as one thing. We saw that the world was moving into fragmented siloing data and we thought with Thierry, we are going to combine big data and data warehouse in one system for the cloud with this elasticity and this service simplicity. So simplicity, amazing elasticity, which is this multi workload architecture that I was explaining during the keynotes and really extreme simplicity with the service. Then we realized that there is one other attribute in the cloud, which is unique, which doesn't exist on-premise, which is collaboration. How you can connect different tenets of the platform together. And Google showed that with Google Docs. I always say to me, it was amazing that you could share document and have direct access to document that you didn't produce and you can collaborate on this document. So we wanted to do the same thing for data and this is where we created the data cloud and the marketplace where you can have all these data sets available and really the next evolution I would say is really about applications that are (indistinct) by that data, but are way simpler to use for all the tenets of the data cloud. And this is the way you can share expertise also, including, ML model, everyone talks about ML and the democratization of ML. How are you going to democratize ML? It's not by making necessary training super easy. Such that everyone can train their ML for themselves. It's by having very specialized application where data and ML is at the core, which are shared, through the marketplace and we shall leverage by many tenets of this marketplace that have no necessary knowledge about building this ML models. So that's where, yeah. >> When you and Thierry started the company, I go back to the improbable rise of Kubernetes and there were other more sophisticated container management systems back then, but they chose to focus on simplicity. And you've told me before, that was our main tenet. We are not going to worry about all the complex database stuff. You knew how to do that, but you chose not to. So my question is, did you envision solving those complex problems over time yourselves or through an ecosystem? Was this by design or did you... As you started to get into it, say let's not even try to go there let's partner to go there. >> Yeah, I mean, it's both. It's a combination of both. Snowflake, the simplicity of the platform is really important because if our partners are struggling to put their solution and build solution on top of Snowflake they will not build it. So it's very important that number one, our platform is really easy to use from day one. And that really has to be built inside the platform. You cannot build simplicity on top. You cannot have a complex solution and all of a sudden realize that, oh, this is complex. I need to build another layer on top of it to make it simpler, that will not work. So it had to be built from day one, but you're right. What is going to be Snowflake? I always say in 10 years from now, we just turn 10 years old or we are going to turn 10 years old in few months. Actually a few months, yes. >> Right. >> So for the next 10 years I really believe that most of Snowflake will not be built by Snowflake. And that's the power of the partners and these applications. When you are going to say I'm using Snowflake, actually, probably you are not going to use directly code developed by Snowflake. That code will leverage our platform, but you will use a solution that has been built on top of Snowflake. And this is the way we are going to decouple, the effort of Snowflake and multiply it. >> It's an interesting balance, isn't it? When I think of what you did with Apache Iceberg, if I use Iceberg and I'm not going to get as much functionality, but I may want that openness, but I'm going to get more functionality inside of the data cloud. And I don't know, but if you know the answer to what's going to happen. >> No, that's a super good question. So to explain what we did with Apache Iceberg, and the fact that now it's a native format for us. So everything that you can do with our internal formats, you can do it with Apache Iceberg, including security, defining masking, data masking all the governors that we have, fine grain security aspects, the replications you can define you can use (indistinct) on top of... >> But there's a but, right? But if I do that with native Snowflake tools, I'm going to get an even greater advantage, am I not? >> Yes. So that's what I'm saying. So that's why we embraced Iceberg, because I think we can bring all the benefit of Snowflake to people who have decided to use Iceberg, I mean open formats. Iceberg is a table format. So and why it was important because people had massive investments in open source in Hadoop. And we had a lot of companies saying, we love Snowflake. We want to be a Snowflake customer, but we cannot really migrate all our data. I mean, it will be really costly. And we have a lot of tools that need access, direct access. So this is why we created Iceberg because we can really... I mean, we really think that we can bring the benefit of Snowflake to this data. >> Gives customers optionality. Okay. I use this term super cloud. You don't use the term, but that's okay. And I get a lot of heat for it. But to me, what you're doing is quite a bit different than multicloud because you're creating that abstraction layer. You're bringing value above it. My question to you is, the most of the heat I get is, oh, that's just SaaS. Are you just SaaS? >> No. I mean, no, absolutely not. I mean, you're right we are a super cloud. I mean it's a much better word than saying we are multicloud. Multicloud is often viewed as oh, I have my system and now I can run this system in the different cloud providers. Snowflake is different. We have one single platform for the world, which happens to have some regions are AWS region, some regions are Azure, some regions are GCP, Google and we merge them together. We have this Snowgrid technology that connects all our regions together so that we have really one platform for the world. And that's very important because when you talk about connections of data and expertise applications you want to have global reach, right. It doesn't exist. We are not siloed by region of the world, right? You have a lot of companies which are multinational that have presence everywhere. And you want to have this global reach. The world is not a independent set of regions and countries, right. And that's the realization. So we had to create this global platform for our customers. >> And now you have people building clouds on top of your data cloud, well that to me is the next signal. In your keynote, you talked about seven pillars, all data, all workloads, global architecture, self-managed, programmable, marketplace, governance, which ones are the most important? >> All of them. It's like when you have kids, you don't want to pick and say, this one is my preferred one, so they are really important. All of them, as I said without data, there is no Snowflake, right? So all data is so important that we can reach every data, wherever it is. And Iceberg is a part of that, but all workload is really important because you don't want to put your data in one platform, if you cannot run all your workloads and workloads are much broader than just data warehousing, there is data engineering, data science, ML engineering, (indistinct) all these workloads applications. So that's critical. Programmable is where we are moving, right. We want to be the place where data applications are built. And we think we have a lot of advantages because data application needs to use many workloads at once, right? It's not that that application will do only data warehousing, they need to store their states, they need to use this new workload that we define, which is Unistore. They need to do data engineering because they need to get data, right. They have to save this data. So they need to combine many workload and if they have to stitch this workload, because the platform was not designed as one single product where everything is consistent and works together, that you have to stitch, it's complicated for this application to make it work. So Snowflake is we believe an ideal platform to run these data applications. So all workloads, programmable, obviously, so that you can program. And programmable has two aspects, which is big part of our announcement. Is both data programmability, which is running Python against petabyte, terabytes of data at scale and doing it scale out. So that's what we call data programmability. So both Java, Python and (indistinct), but also running applications like UI. And we had this acquisition of Streamlit. Streamlit now has been fully integrated in Snowflake. We announced that such that not only you can have this data programmability, but you can expose your data through this nice UIs, interactive UI to business users potentially. So it goes all the way there. Global is super important. As we say, we want to be one platform for the world. And of course, as I said, the last pillar, which is somehow critical for us, because we are cloud, we need to have governance. We need to have security of our data. And why it took us so long to do Python is not because it's out to run Python, right? Everyone can run Python it's because we had to secure it. And I talk about it creating this amazing sandboxing technology, such that when you include third party libraries and third party codes, you are guaranteed that this third party code will not reach to infiltrate your data, right. We control the environment that Snowflake provides. >> Can you share us some of the feedback from the customer? You probably had many customer conversations over the last four days. >> Look at that smile. (interviewer laughing) (Lisa laughing) >> Actually not because I was so busy everywhere. Unfortunately, I didn't speak to many customers. Saying that, I had everyone stopping me and talking about what they heard and yeah, there is a huge excitement about all of this. >> What's been the feedback around the theme of the event? The world of data collaboration. Data collaboration is so critical as every company these days must be a data company to compete, to win. What's been from just some of the feedback that you've had customers really embracing data collaboration, what Snowflake is enabling. >> Yeah. I mean, almost every company which is using Snowflake, is collaborating with data. You have heard, the number of stable edges that we have, and there is a real need for that because your data alone... You cannot make sense of your data if it is just alone. It needs to be connected with other data. You haven't not generated. So all data, when you say the first pillar of Snowflake is all data is not only about your data, but is about all the data that's created around you. That puts perspective on your own data. And that's critical and it's so painful to get. I mean, even your data is difficult to have access to your data, but imagine data that you didn't produce. And so yes, so the data collaboration is critical, and then now we expanded it to application and expertise, sharing models, for example, That's going to have a huge impact. >> All data includes now transaction data, right? >> Yes. >> That's a big part of the announcements that you guys made. >> Yeah. So and that's the motivation for that was really, if we want to run application, full application, we announced native applications, which are fully executed and run inside the (indistinct) data cloud, right. They need all the services that application need and in particular managing their states. And so we created Unistore, which is a new workload, which allows you to combine transactional data, which are generated by this application. And at the same time being able to do analytics directly on this data. So we call it Hybrid Table because it has this hybrid aspect. You can do both transactional access to this data and at the same time analytic here without having data pipeline and moving data and transforming it from the transactional system to the analytical system, right. Snowflake is one system. Again, in the spirit of simplifying everything, this is the Snowflake (indistinct). >> I can ask the same question I ask at first, (indistinct) when was the aha moment that you and Thierry had that said, this is not just a better data warehouse, it's actually more than that. You probably didn't call it a data cloud until later on, but did you know that from the beginning or was that something you kind of stumbled into? >> No. So as I said, we founded Snowflake in 2012 and Thierry and I, we locked in my apartment and we were doing the blueprint of Snowflake and trying to find what is the revolution with the cloud for this data warehouse system and analytical system, both big data and data warehouse. And the aha moment was but of course cloud, okay. What is cloud? It's elasticity, it's service and later collaboration. So in the elasticity aspect, when you ask database people, what is elasticity, they will tell you, oh, you have a cluster of nodes. Like if it is Oracle, it would be a (indistinct) cluster. And the elasticities that you can add one node, two node to this cluster without having too much impact on the existing workload, because you need to shuffle data, right. It's hard and doing it online, right, that's elasticity. If you can do that, you are elastic. We thought that that was not very interesting to do that. What is interesting with elasticity is to plug new workloads. You can plug a workload like that and that workload is running without having any impact on other workloads, which are running on the platform. So elasticity for us was having dedicated computer resources to workloads. And these computer resources could start and be part as soon as the workload starts and will shut down when the workload finishes and they will be sized exactly for the demand of that workload. And we thought the aha moment was, okay if we can do that, now we can run a workload with, let's say 10X more computer resources than what you would have used or 100X more. Okay, let's say 100X more because we paralyzed things. Now this workload can run 100X faster, right? That's assuming we do a good job in the scale, which is our IP. And if we can do that, now the computer resources that you have used, you have used them for 100 times less. So you have used 100 times more resources because you have more nodes, but because you go fast, you use them for less time, right? So if you multiply the two it's constant. So you can run and accelerate workload dramatically 10X, 100X for the same price. Even if we are not better in efficiency than competition, just having that was the magic, right? >> You know how Google founders originally had trouble raising money because who needs another search engine? Did you get from original, like when you started going to raise money, Amazon's got a database, so who needs another cloud database? Did you get that early on or was it just obvious Speiser and companies as well. >> Speiser is a little bit on the crazy side and ambitious and so Speiser is Speiser. And of course he had no doubt, but even him was saying Benoit, Thierry, Hadoop, right. Everyone is saying Hadoop is going to be the revolution. And you guys are betting actually against Hadoop because we told Speiser, Hadoop is a bad system, it's going to fail, but at the time everyone was so bullish about Hadoop, everyone was implementing Hadoop that it didn't look like it was going to fail and we were probably wrong. So there was a lot of skepticism about not leveraging Hadoop and not being an Hadoop. Okay, something being on top of Hadoop. That was number one. There was no cloud warehouse at the time we started. Redshift was not started. It was the pioneer somewhere when Snowflake was founded. So creating a data warehouse in the cloud sounded crazy to people. How am I going to move my data over there? And security and what about security, the cloud is not secure. So that was another... >> So you guys predated that Parexel move by... >> Yes. >> Okay, so that's interesting. And I thought when Redshift... I mean, Amazon announced Redshift, I was sure that Mike Speiser will come and say, guys it's too sad, but they beat you guys and they build something and actually it was the reverse. Mike Speiser was super excited and so it was interesting to me. >> Wow, that's amazing. 'Cause John Furrier and I, we were early with theCUBE. when theCUBE started it was like the beginning of Hadoop. And so we brought theCUBE to, I think it was the second Hadoop World and we was rubbing nickels together at the time. And I was so excited bring compute to storage and it made so much sense. But I remember and I won't say who it was, but an early Hadoop committer told me this is going to fail. And I'm like, what? And he started going age basis crap and all this stuff. And I was sad because I was so excited, but it turned out that you had the same (indistinct). >> Because of complexity. Okay, Hadoop failed for two reasons. One is because they decided that, oh, a lot of this database thing, you don't need transaction, you don't need SQL, you don't necessarily, you don't need to go fast. It'll be batch, normal real time interaction with data, no one needs that. >> Cheap storage. >> So a lot of compromise on the very important technology. And at the same time, extreme complexity and complexity for me was, where I was I knew that it was going to fail big time and we bet Snowflake on the failure of Hadoop indeed. >> And there was no cloud early on in Hadoop. >> And there was no cloud too. >> And that was what killed it. That was like... >> You're right. And the model that Hadoop had for data didn't work on block storage. Block storage is not as efficient as HGFS. So that was also another figure. >> Do you ever sit back and think about... So you think about how much money has poured in to separating compute from storage and cloud databases and you started it all. (interviewer laughing) >> Yeah. No, this is... >> Pretty amazing. >> Yeah. >> Right, so that's good. That means that you're onto a good idea, but a lot of people get confused that again, they think that you're a cloud data warehouse and you're not, I mean, you're much more than that. >> Yeah, I hate that. I have to say, because from day one we were not a cloud data warehouse. As I said, it was all about combining the big data, massive amount of unstructured data, petabytes stored as files. Okay, that's very important, store as files where it's very easy to drop data in the system without... Very low cost to combine with data warehouse, full multi statement transaction when people will tell you today, oh, now we are a data warehouse. They don't have multi statement transaction, right. So we had from day one multi statement transaction really efficient SQL. You could run your dashboard. So combining these two worlds was I think the crazy thing, that's the crazy innovation that Snowflake did initially. >> Yeah. >> And I know it's really easy to build data warehouse somewhere, because if you don't think about big data, petabytes, extremely structured data, you remove a lot of complexity. >> This is why Lisa, when you get excited about technology, but you always have to have a, somebody who really deeply understands technology to stink test it, all right so awesome. Thank you for sharing that story. >> Yeah. >> Fantastic. So over 5,900 customers now. I saw over 500 in the Forbes G2K, over almost 10,000 people here this year. If we think back to 2019, there was about what? Less than 2000 people. >> Yeah. >> What do you think is going to happen next year? >> I don't know. I don't like to think about next year. I mean, I always say, Snowflake is so exciting to me because it is like a TV show, right. Where you wait the next season and we have one season every year. So I'm really excited to know what is going to happen next year. And I don't want to project what I think will happen, but all these movements to the Snowflake being the platform for data application. I want to see what people are going to build on our platform. I mean, that's the excitement. >> Season 11 coming up. >> Yes. Season 11. Yes. >> No binge watching here. Benoit, it's been a pleasure to have you on the program. >> Thank you. >> Congratulations on incredible success, the momentum, the energy is contagious. We love it. (Benoit laughing) >> Thank you so much. >> Thank you. >> Bye bye. >> For Benoit Dageville and Dave Vellante, I'm Lisa Martin. You're watching theCUBE's coverage of Snowflake Summit '22. Dave and I will be right back with a wrap. (upbeat music)

Published Date : Jun 16 2022

SUMMARY :

is coming to an end, Thank you, thank you. you guys started on Monday. And you can feel the future of the data cloud. and the marketplace where you So my question is, did you envision And that really has to be And that's the power of the and I'm not going to get So everything that you can the benefit of Snowflake to this data. My question to you is, the And that's the realization. And now you have people building clouds And of course, as I said, the last pillar, the feedback from the customer? Look at that smile. I was so busy everywhere. the feedback that you've had but imagine data that you didn't produce. announcements that you guys made. So and that's the motivation I can ask the same question And the elasticities that you can add like when you started at the time we started. So you guys predated and so it was interesting to me. And I was so excited you don't need to go fast. And at the same time, extreme complexity And there was no And that was what killed it. And the model that Hadoop had for data and you started it all. No, this is... but a lot of people get I have to say, because from day one because if you don't think about big data, This is why Lisa, when you I saw over 500 in the Forbes G2K, I mean, that's the excitement. Yes. to have you on the program. the momentum, the energy is contagious. Dave and I will be right back with a wrap.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Dave VellantePERSON

0.99+

Lisa MartinPERSON

0.99+

Mike SpeiserPERSON

0.99+

10XQUANTITY

0.99+

100XQUANTITY

0.99+

100 timesQUANTITY

0.99+

AmazonORGANIZATION

0.99+

Mike SpeiserPERSON

0.99+

2012DATE

0.99+

Benoit DagevillePERSON

0.99+

DavePERSON

0.99+

BenoitPERSON

0.99+

MondayDATE

0.99+

ThierryPERSON

0.99+

ThursdayDATE

0.99+

2019DATE

0.99+

TuesdayDATE

0.99+

SnowflakeTITLE

0.99+

GoogleORGANIZATION

0.99+

next yearDATE

0.99+

two aspectsQUANTITY

0.99+

LisaPERSON

0.99+

PythonTITLE

0.99+

This weekDATE

0.99+

one seasonQUANTITY

0.99+

two reasonsQUANTITY

0.99+

OneQUANTITY

0.99+

HadoopPERSON

0.99+

twoQUANTITY

0.99+

bothQUANTITY

0.99+

Snowflake Summit '22EVENT

0.99+

this weekDATE

0.99+

one platformQUANTITY

0.99+

StreamlitTITLE

0.99+

SpeiserORGANIZATION

0.99+

JavaTITLE

0.99+

one platformQUANTITY

0.99+

10 yearsQUANTITY

0.99+

one systemQUANTITY

0.98+

one nodeQUANTITY

0.98+

Less than 2000 peopleQUANTITY

0.98+

SnowflakeEVENT

0.98+

AWSORGANIZATION

0.98+

two nodeQUANTITY

0.98+

this yearDATE

0.98+

secondQUANTITY

0.98+

todayDATE

0.98+

John FurrierPERSON

0.98+

HadoopTITLE

0.97+

over 5,900 customersQUANTITY

0.97+

10 years agoDATE

0.97+

one single productQUANTITY

0.97+

first pillarQUANTITY

0.97+

Google DocsTITLE

0.97+

SnowflakeORGANIZATION

0.97+

MulticloudTITLE

0.97+

over 500QUANTITY

0.97+

ParexelORGANIZATION

0.96+