A Blueprint for Trusted Infrastructure, Episode 2: Full Episode


 

>>The cybersecurity landscape continues to be one characterized by a series of point tools designed to do a very specific job, often pretty well, but the mosaic of tooling has grown over the years, causing complexity, driving up costs, and increasing exposures. So the game of Whac-A-Mole continues. Moreover, the way organizations approach security is changing quite dramatically. The cloud, while offering so many advantages, has also created new complexities. The shared responsibility model redefines what the cloud provider secures, for example, the S3 bucket, and what the customer is responsible for, e.g., properly configuring the bucket. You know, this is all well and good, but because virtually no organization of any size can go all in on a single cloud, that shared responsibility model now spans multiple clouds, each with different protocols. Now, that of course includes on-prem and edge deployments, making things even more complex. Moreover, the DevOps team is being asked to be the point of execution to implement many aspects of an organization's security strategy.
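To make the customer's half of that shared responsibility model concrete, here is a minimal sketch of locking down an S3 bucket with boto3. The bucket name is hypothetical, and this illustrates the kind of configuration the host is alluding to, not a complete hardening checklist.

```python
import boto3

# The cloud provider secures the S3 service itself; the customer must
# still configure the bucket. The bucket name below is hypothetical.
s3 = boto3.client("s3")
BUCKET = "example-org-sensitive-data"

# Block every form of public access on the bucket.
s3.put_public_access_block(
    Bucket=BUCKET,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Enforce encryption at rest with an S3-managed key by default.
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)
```

Misconfigured public access and missing default encryption are exactly the class of customer-side mistakes the shared responsibility model leaves on the table.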
>>This extends to securing the runtime, the platform, and even now containers, which can end up anywhere. There's a real need for consolidation in the security industry, and that's part of the answer. We've seen this both in terms of mergers and acquisitions as well as platform plays that cover more and more ground. But the diversity of alternatives and infrastructure implementations continues to boggle the mind, with more and more entry points for the attackers. This includes sophisticated supply chain attacks that make it even more difficult to understand how to secure components of a system and how secure those components actually are. The number one challenge CISOs face in today's complex world is lack of talent to address these challenges. And I'm not saying that SecOps pros are not talented. They are. There just aren't enough of them to go around, and the adversary is also talented and very creative, and there are more and more of them every day.

>>Now, one of the very important roles that a technology vendor can play is to take mundane infrastructure security tasks off the plates of SecOps teams. Specifically, we're talking about shifting much of the heavy lifting around securing servers, storage, networking, and other infrastructure and their components onto the technology vendor via R&D and other best practices like supply chain management. And that's what we're here to talk about. Welcome to the second part in our series, A Blueprint for Trusted Infrastructure, made possible by Dell Technologies and produced by theCUBE. My name is Dave Vellante, and I'm your host. Previously, we looked at what trusted infrastructure means and the role that storage and data protection play in the equation. In this part two of the series, we explore the changing nature of technology infrastructure, how the industry generally, and Dell specifically, are adapting to these changes, and what is being done to proactively address threats that are increasingly stressing security teams.

>>Now today, we continue the discussion and look more deeply into servers, networking, and hyperconverged infrastructure to better understand the critical aspects of how one company, Dell, is securing these elements so that DevSecOps teams can focus on the myriad new attack vectors and challenges that they face. First up is Deepak Rangaraj, PowerEdge security product manager at Dell Technologies. And after that, we're gonna bring on Mahesh Nager, who is a consultant in the networking product management area at Dell. And finally, we'll close with Jerome West, who is the product management security lead for HCI, hyperconverged infrastructure, and converged infrastructure at Dell. Thanks for joining us today. We're thrilled to have you here and hope you enjoy the program.

Deepak Rangaraj, PowerEdge security product manager at Dell Technologies. Deepak, great to have you on the program.

>>Thank you for having me.

>>So we're going through the infrastructure stack, and in part one of this series we looked at the landscape overall and how cyber has changed, and specifically how Dell thinks about data protection and security in a manner that both secures infrastructure and minimizes organizational friction. We also hit on the storage part of the portfolio. So now we want to dig into servers. So my first question is, what are the critical aspects of securing server infrastructure that our audience should be aware of?

>>Sure. So if you look at compute in general, right, it has rapidly evolved over the past couple of years, especially with trends toward software-defined data centers, and with organizations having to deal with hybrid environments where they have private clouds, public cloud locations, remote offices, and also remote workers. On top of this, there's also an increase in the complexity of the supply chain itself, right? There are companies who are dealing with hundreds of suppliers as part of their supply chain. So all of this complexity provides a lot of opportunity for attackers, because it's expanding the threat surface of what can be attacked, and attacks are becoming more frequent, more severe, and more sophisticated. And this has also triggered a rise in the regulations and mandates around security needs.

>>And these regulations are not just in the government sector, right? They extend to critical infrastructure, and eventually they also get into the private sector. In addition to this, organizations are also looking at their own internal compliance mandates. And this could be based on the industry in which they're operating, or it could be their own security postures. And this is the landscape in which servers are operating today. And given that servers are the foundational blocks of the data center, it becomes extremely important to protect them. And given how complex the modern server platforms are, it's also extremely difficult, and it takes a lot of effort. And this means protecting everything from the supply chain to the manufacturing, and then eventually assuring the hardware and software integrity of the platforms, and also the operations. And there are very few companies that go to the lengths that Dell does in order to secure the server. We truly believe in the notion and the security mentality that, you know, security should enable our customers to go focus on their business and proactively innovate on their business, and it should not be a burden to them. And we heavily invest to make that possible for our customers.
>>So this is really important, because the premise that I set up at the beginning of this was really that, as a security pro (I'm not a security pro, but if I were), I wouldn't want to be doing all this infrastructure stuff, because I now have all these new things I gotta deal with. I want a company like Dell, who has the resources, to build that security in, to deal with the supply chain, to ensure the provenance, et cetera. So I'm glad you hit on that. But given what you just said, what does cybersecurity resilience mean from a server perspective? For example, are there specific principles that Dell adheres to that are non-negotiable, let's say? How does Dell ensure that its customers can trust your server infrastructure?

>>Yeah, when it comes to security at Dell, right, it's ingrained in our products. That's the best way to put it. And security is non-negotiable, right? It's never an afterthought, where we come up with a design and then later on figure out how to go make it secure. Through our security development lifecycle, the products are being designed to counter these threats right from the beginning. And in addition to that, we are also testing and evaluating these products continuously to identify vulnerabilities. We also have external third-party audits which supplement this process. And in addition to this, Dell makes the commitment that we will rapidly respond to any vulnerabilities and exposures found out in the field, and provide mitigations and patches in a timely manner. So this security principle is built into our server lifecycle, right? Every phase of it.

>>So we want our products to provide cutting-edge capabilities when it comes to security. And as part of that, we are constantly evaluating where our security model stands. We are building on it and continuously improving it. So until a few years ago, our model was primarily based on the NIST framework of protect, detect, and recover. And it still aligns really well to that framework, but over the past couple of years we have seen how compute has evolved, how the threats have evolved, and we have also seen the regulatory trends, and we recognize the fact that the best security strategy for the modern world is a zero trust approach. And so now, when we are building our infrastructure and tools and offerings for customers, first and foremost they're cyber resilient, right? What we mean by that is they're capable of anticipating threats, withstanding attacks, rapidly recovering from attacks, and also adapting to the adverse conditions in which they're deployed. The process of designing and identifying these capabilities, however, is done through the zero trust framework. And that's very important, because now we are also anticipating how our customers will end up using these capabilities to enable their own zero trust IT environments and zero trust deployments. We have completely adapted our security approach to make it easier for customers to work with us, no matter where they are in their journey towards zero trust adoption.
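The zero trust posture Deepak describes, where nothing is trusted implicitly and every access is verified, can be illustrated with a small sketch. This is a generic illustration, not Dell's implementation; the user store, device registry, and privilege model are hypothetical stand-ins.

```python
from dataclasses import dataclass

# Hypothetical stand-ins for real identity and device-attestation services.
TRUSTED_DEVICES = {"dev-123": "attested-2024-05"}
USER_ROLES = {"alice": {"firmware:read"}, "bob": {"firmware:read", "firmware:update"}}

@dataclass
class Request:
    user: str
    device_id: str
    action: str

def authorize(req: Request) -> bool:
    """Zero trust check: verify user, device, and privilege on every request.

    No request is trusted because of where it comes from; each one must
    pass all three checks, and anything that fails is denied by default.
    """
    if req.user not in USER_ROLES:            # verify the user is known/authenticated
        return False
    if req.device_id not in TRUSTED_DEVICES:  # verify the device is attested
        return False
    return req.action in USER_ROLES[req.user]  # least privilege: explicit grant only

# Every call is checked, even from "inside" the network.
print(authorize(Request("alice", "dev-123", "firmware:update")))  # False: no grant
print(authorize(Request("bob", "dev-123", "firmware:update")))    # True
```

The design point is deny-by-default: trust is established per request from explicit evidence, never inferred from network location.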
>>So thank you for that. You mentioned this framework, and you talked about zero trust. When I think about NIST, I think as well about layered approaches. And when I think about zero trust, I think about, if you don't have access to it, you're not getting access. You've gotta earn that access, and you've got layers, and then you still assume that bad guys are gonna get in. So you've gotta detect that, and you've gotta respond. So server infrastructure security is so fundamental. So my question is, what is Dell providing specifically to, for example, detect anomalies and breaches from unauthorized activity? How do you enable fast and easy, or facile, recovery from malicious incidents?

>>Right, that is exactly right. Breaches are bound to happen, and given how complex our current environment is, it's extremely distributed and extremely connected, right? Data and users are no longer contained within offices, where we can set up a perimeter firewall and say, yeah, everything within that is good, we can trust everything within it. That's no longer true. The best approach to protect data and infrastructure in the current world is to use a zero trust approach, which uses the principle that nothing is ever trusted, right? Nothing is trusted implicitly. You're constantly verifying every single user, every single device, and every single access in your system, at every single level of your IT environment. And these are the principles that we use on PowerEdge, right? But with an increased focus on providing granular controls and checks based on the principle of least privileged access.

>>So the idea is that servers first and foremost need to make sure that the threats never enter, and they're rejected at the point of entry. But we recognize breaches are going to occur, and if they do, they need to be minimized, such that the sphere of damage caused by the attacker is minimized, so they're not able to move from one part of the network to something else laterally, or escalate their privileges and cause more damage, right? So the impact radius, for instance, has to be reduced. And this is done through features like automated detection capabilities and automated remediation capabilities. So some examples: as part of our end-to-end boot resilience process, we have what we call a system lockdown, right? We can lock down the configuration of the system, and lock down the firmware versions and all changes to the system. And we have capabilities which automatically detect any drift from that locked-down configuration, and we can figure out if the drift was caused by authorized changes or unauthorized changes.

>>And if it is an unauthorized change, we can log it, generate security alerts, and we even have capabilities to automatically roll the firmware and OS versions back to a known good version, and also the configurations, right? And this becomes extremely important, because as part of zero trust we need to respond to these things at machine speed; we cannot do it at human speed. And having these automated capabilities is a big deal when achieving that zero trust strategy. And in addition to this, we also have chassis intrusion detection, where if the chassis, the box, the server box, is opened up, it logs alerts, and you can figure out even later, if there's an AC power cycle, you can go look at the logs to see that the box was opened up, and figure out if there was a known authorized access or some malicious actor opening and changing something in your system.
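The lockdown-and-drift-detection flow described above can be sketched generically: snapshot a baseline of configuration and firmware versions, then compare the live state against it and flag anything that drifted without an authorization record. This is an illustration of the pattern, not the PowerEdge/iDRAC implementation; the inventory format and the authorized-change log are hypothetical.

```python
import hashlib
import json

def fingerprint(inventory: dict) -> str:
    """Hash a canonical JSON rendering of config plus firmware versions."""
    canonical = json.dumps(inventory, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

# Baseline captured when the system was locked down (hypothetical content).
baseline = {"bios": "2.19.0", "nic_fw": "22.31.6", "boot_order": ["ssd0", "pxe"]}
baseline_hash = fingerprint(baseline)

# Changes that went through an approved workflow since lockdown.
authorized_changes = {"nic_fw": "22.36.1"}

def check_drift(live: dict) -> None:
    """Detect drift and classify each difference as authorized or not."""
    if fingerprint(live) == baseline_hash:
        print("no drift")
        return
    for key, value in live.items():
        if baseline.get(key) == value:
            continue
        if authorized_changes.get(key) == value:
            print(f"authorized change: {key} -> {value}")
        else:
            # In a real system this would raise a security alert and could
            # trigger an automatic rollback to the known-good version.
            print(f"ALERT unauthorized drift: {key} -> {value}")

check_drift({"bios": "2.19.0", "nic_fw": "22.36.1", "boot_order": ["pxe", "ssd0"]})
```

Because the comparison is mechanical, it can run on every boot or on a schedule, which is what lets the response happen at machine speed rather than human speed.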
>>Great, thank you for that. Lot of detail, and I appreciate that. I want to go somewhere else now, 'cause Dell has a renowned supply chain reputation. So what about securing the supply chain and the server bill of materials? What does Dell specifically do to track the provenance of components it uses in its systems, so that when the systems arrive, a customer can be a hundred percent certain that that system hasn't been compromised?

>>Right. And we've talked about how complex the modern supply chain is, right? And that's no different for servers. We have hundreds of components on the server, and a lot of these need firmware in order to be configured and run, and this firmware could be coming from third-party suppliers. So the complexity that we are dealing with calls for an end-to-end approach, and that's where Dell pays a lot of attention to assuring the security of the supply chain. It starts all the way from sourcing components, right, and then goes through the design, and then even the manufacturing process, where we are vetting the personnel at the factories and vetting the factories themselves. And the factories also have physical security controls built into them, and even shipping, right? We have GPS tagging of packages. So all of this is built to ensure supply chain security.

>>But a critical aspect of this is also making sure that the systems which are built in the factories are delivered to the customers without any changes or any tamper. And we have a feature called Secured Component Verification, which is capable of doing this. What the feature does is this: when the system gets built in a factory, it generates an inventory of all the components in the system, and it creates a cryptographic certificate based on the signatures presented by the components. And this certificate is stored separately and sent to the customers separately from the system itself. So once the customers receive the system at their end, they can run a tool which generates an inventory of the components on the system at their end, and then compare it to the golden certificate to make sure nothing was changed. And if any changes are detected, we can figure out if there's an authorized change or an unauthorized change. Again, authorized changes could be, you know, upgrades to the drives or memory, and unauthorized changes could be any sort of tamper. So that's the supply chain aspect of it.
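Here is a minimal sketch of the verification pattern behind a factory "golden certificate": hash a canonical component inventory at build time, then re-derive the hash at delivery and compare. It's a generic illustration of the idea, not Dell's Secured Component Verification tooling; the inventory fields are made up, and a real implementation would verify an X.509 signature chain rather than compare bare digests.

```python
import hashlib
import json

def inventory_digest(components: list[dict]) -> str:
    """Canonicalize and hash the component inventory."""
    canonical = json.dumps(sorted(components, key=lambda c: c["slot"]),
                           sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

# Inventory captured at the factory (hypothetical components).
factory_inventory = [
    {"slot": "dimm0", "part": "DDR5-4800-32G", "serial": "SN-A1"},
    {"slot": "nic0",  "part": "25GbE-DP",      "serial": "SN-B2"},
]
# In the real flow this digest is signed and shipped to the customer
# separately from the system, as the "golden certificate".
golden_digest = inventory_digest(factory_inventory)

# Inventory re-collected by the customer on delivery.
delivered_inventory = [
    {"slot": "dimm0", "part": "DDR5-4800-32G", "serial": "SN-A1"},
    {"slot": "nic0",  "part": "25GbE-DP",      "serial": "SN-XX"},  # swapped card
]

if inventory_digest(delivered_inventory) == golden_digest:
    print("inventory matches golden certificate")
else:
    print("MISMATCH: investigate authorized upgrade vs. tampering")
```

Shipping the certificate out of band is the crucial design choice: an attacker who tampers with the system in transit cannot also tamper with the reference it will be checked against.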
>>And the bill of materials is also an important aspect when it comes to security, right? We provide a software bill of materials, which is basically a list of ingredients of all the software pieces in the platform. What it allows our customers to do is quickly take a look at all the different pieces, compare them to the vulnerability databases, and see if any of the vulnerabilities which have been discovered out in the wild affect the platform. So that's a quick way of figuring out if the platform has any known vulnerabilities that have not been patched.

>>Excellent. That's really good. My last question is, I wonder if you could give us the sort of summary, from your perspective, of the key strengths of the Dell server portfolio from a security standpoint. I'm really interested in, you know, the uniqueness and the strong suits that Dell brings to the table.

>>Right. Yeah, we have talked enough about the complexity of the environment and how zero trust is necessary for the modern IT environment, right? And this is integral to Dell PowerEdge servers. And as part of that, you know, security starts with the supply chain. We already talked about Secured Component Verification, which is a unique feature that Dell platforms have. And on top of it, we also have a silicon-based platform root of trust. So this is a key which is programmed into the silicon on the servers during manufacturing, and it can never be changed after. And this immutable key is what forms the anchor for creating the chain of trust that is used to verify everything in the platform, from the hardware and software integrity to the boot, all pieces of it, right? In addition to that, we also have a host of data protection features.

>>Whether it is protecting data at rest, in use, or in flight, we have self-encrypting drives, which provide scalable and flexible encryption options. And this, coupled with external key management, provides really good protection for your data at rest. External key management is important because, you know, somebody could physically steal the server and walk away, but then the keys are not stored on the server; they're stored separately. So that provides an additional layer of security. And we also have dual-layer encryption, where you can complement the hardware encryption on the self-encrypting drives with software-level encryption. In addition to this, we have identity and access management features like multifactor authentication, single sign-on, and role-, scope-, and time-based access controls, all of which are critical to enable that granular control and those checks for a zero trust approach. So I would say, you know, if you look at the Dell feature set, it's pretty comprehensive, and we also have the flexibility built in to meet the needs of all customers, no matter where they fall on the spectrum of, you know, risk tolerance and security sensitivity. And we also have the capabilities to meet all the regulatory and compliance requirements. So in a nutshell, I would say that, you know, Dell PowerEdge servers' cyber-resilient infrastructure helps accelerate zero trust adoption for customers.

>>Got it. So you've really thought this through, all the various things that you would do to sort of make sure that your server infrastructure is secure, not compromised, and that your supply chain is secure, so that your customers can focus on some of the other things that they have to worry about, which are numerous. Thanks, Deepak. Appreciate you coming on theCUBE and participating in the program.

>>Thank you for having me.

>>You're welcome. In a moment, I'll be back to dig into the networking portion of the infrastructure. Stay with us for more coverage of A Blueprint for Trusted Infrastructure, in collaboration with Dell Technologies, on theCUBE, your leader in enterprise and emerging tech coverage.
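Before moving on to networking, the software bill of materials check Deepak described can be sketched as a simple join between an SBOM's component list and a known-vulnerability feed. The SBOM entries and the advisory data below are made up for illustration; a real check would parse a CycloneDX or SPDX document and query a source such as the NVD.

```python
# Hypothetical SBOM: (component, version) pairs, as parsed from e.g. CycloneDX.
sbom = [("openssl", "1.1.1k"), ("busybox", "1.35.0"), ("zlib", "1.2.13")]

# Hypothetical advisory feed: component -> versions known to be vulnerable.
known_vulnerable = {
    "openssl": {"1.1.1k": "CVE-XXXX-0001 (illustrative)"},
    "zlib": {"1.2.11": "CVE-XXXX-0002 (illustrative)"},
}

def audit(sbom_entries):
    """Report SBOM components that match a known-vulnerable version."""
    findings = []
    for component, version in sbom_entries:
        advisory = known_vulnerable.get(component, {}).get(version)
        if advisory:
            findings.append((component, version, advisory))
    return findings

for component, version, advisory in audit(sbom):
    print(f"unpatched: {component} {version} -> {advisory}")
```

The value of the SBOM is exactly this: when a new vulnerability lands, the audit is a lookup rather than a forensic investigation of what software the platform actually contains.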
>>We're back with A Blueprint for Trusted Infrastructure, in partnership with Dell Technologies, on theCUBE. And we're here with Mahesh Nager, who is a consultant in the area of networking product management at Dell Technologies. Mahesh, welcome. Good to see you.

>>Hey, good morning, Dave. Nice to meet you as well.

>>Hey, so we've been digging into all the parts of the infrastructure stack, and now we're gonna look at the all-important networking components. Mahesh, when we think about networking in today's environment, we think about the core data center, and we're connecting out to various locations, including the cloud and both the near and the far edge. So the question is, from Dell's perspective, what's unique and challenging about securing network infrastructure that we should know about?

>>Yeah, so a few years ago, IT security in an enterprise was primarily about putting a wrapper around the data center, because it was constrained to an infrastructure owned and operated by the enterprise, for the most part. So putting a wrapper around it, like a perimeter or a firewall, was a sufficient response, because you could basically control the environment, and it was small enough to control. Today, with distributed data, intelligent software-defined systems, multi-cloud environments, and as-a-service delivery, the infrastructure of the modern era changes the way you secure the network infrastructure. In today's data-driven world, IT operates everywhere, and data is created and accessed everywhere, far from the centralized, monolithic data centers of the past. The biggest challenge is how do we build the network infrastructure of the modern era, one that is intelligent, with automation, enabling maximum flexibility and business agility, without any compromise on security. We believe that in this data era, the security transformation must accompany digital transformation.

>>Yeah, that's very good. You talked about a couple of things there. Data by its very nature is distributed. There is no perimeter anymore, so you can't just, as you say, put a wrapper around it. I like the way you phrased that. So when you think about cybersecurity resilience from a networking perspective, how do you define that? In other words, what are the basic principles that you adhere to when thinking about securing network infrastructure for your customers?

>>So our belief is that cybersecurity and cybersecurity resilience need to be holistic. They need to be integrated, scalable, spanning the entire enterprise, with a common objective and policy implementation. So cybersecurity needs to span across all the devices and run across any application, whether the application resides in the cloud or anywhere else in the infrastructure. From a networking standpoint, what does it mean? It's again the same principles, right? The ability to prevent the threat actors from accessing, changing, destroying, or stealing sensitive data; this definition holds good for networking as well. So if you look at it from a networking perspective, it's the ability to protect from and withstand attacks on the networking systems as we continue to evolve. This will also include the ability to adapt and recover from these attacks, which is what the cyber resilience aspect is all about. Cybersecurity best practices, as you know, are a continuously changing landscape, primarily because the cyber threats also continue to evolve.

>>Yeah, got it. So I like that. So it's gotta be integrated, it's gotta be scalable, it's gotta be comprehensive, and adaptable. You're saying it can't be static.

>>Right, right. So I think, you know, you had a second part of the question that says, what are the basic principles when you think about securing network infrastructure? When you're looking at securing the network infrastructure, it revolves around the core security capabilities of the devices that form the network. And what are these security capabilities? They are access control, software integrity, and vulnerability response. When you look at access control, it's to ensure that only authenticated users are able to access the platform, and they're able to access only the kinds of assets that they're authorized to, based on their user level. Now, accessing a network platform like a switch or a router, for example, is typically for, say, configuration and management of the networking switch. So user access is based on roles, as in role-based access control, whether you are a security admin or a network admin or a storage admin.
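A minimal sketch of the role-based access control Mahesh describes for switch management follows. The role names echo his examples, but the permission sets and operations are hypothetical simplifications, not an actual network OS policy model.

```python
# Hypothetical permission model for managing a switch via RBAC.
ROLE_PERMISSIONS = {
    "security_admin": {"view_config", "edit_acl", "view_logs"},
    "network_admin":  {"view_config", "edit_vlan", "edit_routing"},
    "storage_admin":  {"view_config"},
}

def can_perform(role: str, operation: str) -> bool:
    """Allow an operation only if the role explicitly grants it."""
    return operation in ROLE_PERMISSIONS.get(role, set())

def execute(user: str, role: str, operation: str) -> None:
    # Every attempt, allowed or denied, is logged: the audit trail is
    # as much a part of access control as the permission check itself.
    if can_perform(role, operation):
        print(f"AUDIT: {user} ({role}) performed {operation}")
    else:
        print(f"AUDIT: {user} ({role}) DENIED {operation}")

execute("alice", "network_admin", "edit_vlan")  # allowed
execute("carol", "storage_admin", "edit_acl")   # denied
```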
So user access is based on say roles for that matter in a role based access control, whether you are a security admin or a network admin or a storage admin. >>And it's imperative that logging is enable because any of the change to the configuration is actually logged and monitored as that. Talking about software's integrity, it's the ability to ensure that the software that's running on the system has not been compromised. And, and you know, this is important because it could actually, you know, get hold of the system and you know, you could get UND desire results in terms of say validation of the images. It's, it needs to be done through say digital signature. So, so it's important that when you're talking about say, software integrity, a, you are ensuring that the platform is not compromised, you know, is not compromised and be that any upgrades, you know, that happens to the platform is happening through say validated signature. >>Okay. And now, now you've now, so there's access control, software integrity, and I think you, you've got a third element which is i I think response, but please continue. >>Yeah, so you know, the third one is about civil notability. So we follow the same process that's been followed by the rest of the products within the Dell product family. That's to report or identify, you know, any kind of a vulnerability that's being addressed by the Dell product security incident response team. So the networking portfolio is no different, you know, it follows the same process for identification for tri and for resolution of these vulnerabilities. And these are addressed either through patches or through new reasons via networking software. >>Yeah, got it. Okay. So I mean, you didn't say zero trust, but when you were talking about access control, you're really talking about access to only those assets that people are authorized to access. I know zero trust sometimes is a buzzword, but, but you I think gave it, you know, some clarity there. Software integrity, it's about assurance validation, your digital signature you mentioned and, and that there's been no compromise. And then how you respond to incidents in a standard way that can fit into a security framework. So outstanding description, thank you for that. But then the next question is, how does Dell networking fit into the construct of what we've been talking about Dell trusted infrastructure? >>Okay, so networking is the key element in the Dell trusted infrastructure. It provides the interconnect between the service and the storage world. And you know, it's part of any data center configuration for a trusted infrastructure. The network needs to have access control in place where only the authorized nels are able to make change to the network configuration and logging off any of those changes is also done through the logging capabilities. Additionally, we should also ensure that the configuration should provide network isolation between say the management network and the data traffic network because they need to be separate and distinct from each other. And furthermore, even if you look at the data traffic network and now you have things like segmentation isolated segments and via VRF or, or some micro segmentation via partners, this allows various level of security for each of those segments. So it's important you know, that, that the network infrastructure has the ability, you know, to provide all this, this services from a Dell networking security perspective, right? 
>>You know, there are multiple layers of defense, both at the edge and in the network, in the hardware and in the software, and essentially a set of rules and configuration that's designed to protect the integrity, confidentiality, and accessibility of the network assets. So each network security layer implements policies and controls, as I said, including network segmentation. We do have capabilities such as centralized management, automation, and scalability, for that matter. Now you add all of these things with the open networking standards and software-defined principles, and you essentially reach the point where you're looking at zero trust network access, which is essentially sort of a building block for increased cloud adoption. If you look at the different pillars of a zero trust architecture: on the device aspect, we do have support for security, for example, trusted platform modules, TPMs, on certain of our products, and plain old physical security. From a user trust perspective, it's all done via access control, say role-based access control, and capabilities to provide remote authentication, or things like sticky MAC or MAC learning limits, and so on.

>>If you look at the transport and session trust layer, these are essentially about how you access the switch. Is it by plain old Telnet, or is it secure SSH, right? And when a host communicates to the switch, we do have things like self-signed or certificate authority-based certificates. And one of the important aspects is the routing protocols; for a routing protocol, say BGP for example, we do have the capability to support MD5 authentication between the BGP peers, so that there is no man-in-the-middle attack on the network where the routing table gets compromised. And the other aspect is about securing the control plane. It's typical that if you don't have control plane policing, it could be flooded, and the switch could be compromised by, say, denial-of-service attacks.

>>From an application trust perspective, as I mentioned, we do have application-specific security rules, where you could actually define the specific security rules based on the specific applications that are running within the system. And I did talk about the digital signature and the cryptographic checks that we do for the authenticity and the validation of the image, the BIOS, and so on and so forth. Finally, for data trust, we are looking at network separation. The network separation could happen via VRFs or plain old VLANs, which can bring about, say, multi-tenant aspects. We talk about micro-segmentation as it applies to NSX, for example. The other aspect is, with our own SmartFabric Services that's enabled in a fabric, we have a concept of cluster security. So all of this, the different pillars, they sort of make up the zero trust infrastructure for the networking assets of an infrastructure.
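The transport-trust point, preferring SSH with strict host-key verification over plain Telnet, looks like this in a short sketch using the `paramiko` library. The management address, username, and command are hypothetical; the key detail is `RejectPolicy`, which refuses to connect to any switch whose host key isn't already known and trusted.

```python
import paramiko

client = paramiko.SSHClient()
client.load_system_host_keys()  # trust only keys already in known_hosts

# Never silently accept an unknown host key: an unrecognized key could
# mean a man-in-the-middle between the operator and the switch.
client.set_missing_host_key_policy(paramiko.RejectPolicy())

try:
    # Hypothetical management address; key-based auth via the SSH agent.
    client.connect("10.0.0.1", username="netadmin")
    _, stdout, _ = client.exec_command("show running-configuration")
    print(stdout.read().decode())
finally:
    client.close()
```

Telnet offers none of this: no server authentication and no encryption, so both credentials and configuration travel in the clear.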
>>Yeah, so thank you for that. There's a lot to unpack there. One of the premises of this segment that we're setting up in this series is that a lot of what you just described used to be the responsibility of the security team. And the premise we're putting forth is that because security teams are stretched so thin, those tasks have to shift to the vendor community, and Dell specifically is shifting a lot of them into its own R&D, because SecOps teams have a lot of other stuff to worry about. So my question relates to things like automation, which can help, and scalability. What about those topics as they relate to networking infrastructure?
>>Okay, our portfolio enables state-of-the-art automation software that simplifies the design. For example, we have the Fabric Design Center, a tool that automates the design of the fabric, and from a deployment and management standpoint there are simplicities using things like Ansible playbooks for SONiC, for example, for a better automation story. We also have SmartFabric Services that can automate the entire fabric for a storage solution or for one of the workloads, for example. Now, we help reduce the complexity by closely integrating the management of the physical and the virtual networking infrastructure, and again, we have those capabilities using SONiC or SmartFabric Services. If you look at SONiC, for example, it delivers an automated, intent-based, secure, containerized network, and it has the ability to provide network visibility and analytics, and all of these things are valid for a modern networking infrastructure. And the usage of the tools available within the SONiC NOS is not restricted to the data center infrastructure; it's a unified NOS that's applicable beyond the data center, right up to the edge. From a SmartFabric OS10 perspective, as I mentioned, we have SmartFabric Services, which essentially simplifies day one and day two deployment, expansion plans, and the lifecycle management of our converged infrastructure and hyperconverged infrastructure solutions. And finally, in order to enable zero-touch deployment, we have a VEP solution with our SD-WAN capability. So these are ways by which we bring down the complexity, by enhancing the automation capability using a single NOS that can extend from the data center right to the edge.
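As a toy picture of what intent-based automation buys you (and emphatically not the actual Fabric Design Center or SmartFabric Services), the sketch below turns a single declared intent, which VLANs belong on which switch ports, into per-switch configuration, so the per-box detail is generated rather than hand-typed.

```python
# Toy intent-based fabric generator: declare the intent once, emit
# per-switch config. Purely illustrative; not a real Dell tool.
intent = {
    "leaf1": {"eth1/1": {"vlan": 10, "desc": "storage"},
              "eth1/2": {"vlan": 20, "desc": "vmotion"}},
    "leaf2": {"eth1/1": {"vlan": 10, "desc": "storage"}},
}

def render_switch_config(switch, ports):
    lines = [f"! generated config for {switch}"]
    for port, attrs in sorted(ports.items()):
        lines += [
            f"interface {port}",
            f"  description {attrs['desc']}",
            f"  switchport access vlan {attrs['vlan']}",
        ]
    return "\n".join(lines)

for switch, ports in intent.items():
    print(render_switch_config(switch, ports))
    print()
```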
>>Great, thank you for that. Last question, real quick: just pitch me. Can you summarize, from your point of view, the strengths of the Dell networking portfolio?
>>Okay. The Dell networking portfolio supports capabilities at multiple layers. As I mentioned, there's physical security, for example disabling of unused interfaces; sticky MAC and trusted platform modules are other controls to go after. When you're talking about Secure Boot, for example, it delivers the authenticity and the integrity of the OS10 images at startup, and Secure Boot also protects the startup configuration, so that the startup configuration file is not compromised. Secure Boot also enables workload protection. Another aspect is software image integrity validation, wherein the image is validated against its digital signature prior to any upgrade process. If you're looking at secure access control, we have things like role-based access control, SSH to the switches, control plane access control, RADIUS and TACACS, and access control through multifactor authentication. We have various access methods for entry control to the network, and things like CAC and PIV support from a federal perspective. We have logging, wherein any event and any auditing capability is possible by looking at the syslog servers, which receive what's transmitted from the devices. And lastly, we talked about network segmentation and network separation. That separation ensures that there is a contained segment for a specific purpose or for a specific zone, and it can be implemented by micro-segmentation, a plain old VLAN, or using virtual routing and forwarding, VRF, for example.
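To ground the logging point just described, here is a minimal sketch of shipping configuration-change audit events to a remote syslog collector using Python's standard library. The collector address and the event fields are placeholders.

```python
# Minimal audit-event shipper: send configuration-change events to a
# remote syslog collector. Collector address and fields are placeholders.
import logging
import logging.handlers

logger = logging.getLogger("config-audit")
logger.setLevel(logging.INFO)
# UDP syslog to a hypothetical collector; use TCP/TLS in production.
handler = logging.handlers.SysLogHandler(address=("203.0.113.10", 514))
handler.setFormatter(logging.Formatter("%(name)s: %(message)s"))
logger.addHandler(handler)

def audit_config_change(user, role, command):
    # Every change is attributed to an authenticated user and role,
    # so the log doubles as an access-control audit trail.
    logger.info("user=%s role=%s command=%r", user, role, command)

audit_config_change("alice", "network-admin", "interface eth1/1 shutdown")
```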
>>A lot there. I mean, frankly, my takeaway is you guys do the heavy lifting on a very complicated topic. So thank you so much for coming on theCUBE and explaining that in quite some depth. Really appreciate it.
>>Thank you indeed.
>>Oh, you're very welcome. Okay, in a moment I'll be back to dig into the hyperconverged infrastructure part of the portfolio, and look at how, when you enter the world of software defined, where you're controlling servers and storage and networks via a software-led system, you can be sure that your infrastructure is trusted and secure. You're watching A Blueprint for Trusted Infrastructure, made possible by Dell Technologies in collaboration with theCUBE, your leader in enterprise and emerging tech coverage. With me now is Jerome West, product management security lead for HCI, hyperconverged infrastructure and converged infrastructure, at Dell Technologies. Jerome, welcome.
>>Thank you, Dave.
>>Hey Jerome, in this series, A Blueprint for Trusted Infrastructure, we've been digging into the different parts of the infrastructure stack, including storage, servers and networking, and now we want to cover hyperconverged infrastructure. So my first question is, what's unique about HCI that presents specific security challenges? What do we need to know?
>>So what's unique about hyperconverged infrastructure is the breadth of the security challenge. We can't simply focus on a single type of IT system, like a server or a storage system or a piece of virtualization software; HCI is all of those things. Luckily, we have excellent partners like VMware and Microsoft, and internal partners like the Dell PowerEdge team, the Dell storage team, the Dell networking team, and on and on. These partnerships and collaborations are what make us successful from a security standpoint. So let me give you an example to illustrate. In the recent past we're seeing growing scope and sophistication in supply chain attacks. This means an attacker is going to attack your software supply chain upstream, so that a piece of malicious code that wasn't identified early in the software supply chain gets distributed by a large player like VMware or Microsoft or Dell. To confront this kind of sophisticated, hard-to-defeat problem, we need short-term solutions, and we need long-term solutions as well.
>>For the short term, the obvious thing to do is to patch the vulnerability. The complexity for our HCI portfolio is that we build our software on VMware, so we have to consume a patch that VMware produces and provide it to our customers in a timely manner. Luckily, VxRail's engineering team has co-engineered a release process with VMware that significantly shortens our development life cycle, so that when VMware produces a patch, within 14 days of that VMware release we will have integrated our own code, tested and validated the update, and given it to our customers. As a result of this rapid development process, VxRail had over 40 releases of software updates last year. For a longer-term solution, we're partnering with VMware and others to develop a software bill of materials. We work with VMware to consume their software manifest, including their upstream vendors and their open source providers, to have a comprehensive list of software components. Then we aren't caught off guard by an unforeseen vulnerability, and we're more easily able to detect where a software problem lies so that we can quickly address it. So these are the kinds of relationships and solutions that we can co-engineer through effective collaborations with our partners.
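A simplified sketch of the long-term SBOM idea: given a bill of materials and a feed of known-vulnerable components, flag the intersection so the affected pieces can be patched. Real SBOMs use formats like SPDX or CycloneDX and richer matching; the structures below are deliberately stripped-down stand-ins.

```python
# Simplified SBOM cross-check: flag components that appear in a
# known-vulnerability feed. Real SBOMs (SPDX, CycloneDX) carry far
# more detail; these structures are illustrative stand-ins.
sbom = [
    {"name": "openssl", "version": "1.1.1k"},
    {"name": "log4j",   "version": "2.14.1"},
    {"name": "zlib",    "version": "1.2.13"},
]

vulnerable = {
    ("log4j", "2.14.1"):   "CVE-2021-44228",
    ("openssl", "1.0.2"):  "CVE-2016-2107",
}

def affected_components(sbom, vulnerable):
    hits = []
    for comp in sbom:
        cve = vulnerable.get((comp["name"], comp["version"]))
        if cve:
            hits.append((comp["name"], comp["version"], cve))
    return hits

for name, version, cve in affected_components(sbom, vulnerable):
    print(f"{name} {version} is affected by {cve}: schedule patch")
```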
>>Great, thank you for that description. So if I had to define what cybersecurity resilience means to HCI or converged infrastructure, my takeaway was that you've got to have a short-term, instant-patch solution, with that integration done in a very short time, say two weeks, and then longer term you have to have a software bill of materials so that you can ensure the provenance of all the components. Help us out: is that the right way to think about cybersecurity resilience? Do you have any additions to that definition?
>>I do. I really think that is cybersecurity resilience for HCI, because, like I said, it has sort of unprecedented breadth across our portfolio. It's not a single thing; it's a bit of everything. So really the strength, or the secret sauce, is to combine all the solutions that our partners develop while integrating them with our own layer. Let me give you an example. HCI is basically taking a software abstraction of hardware functionality and implementing it in a virtualization layer. Take a storage controller: you could implement it in hardware, but for HCI, in our VxRail product, for example, that functionality is delivered through vSAN, which is provided by our partner VMware. So that strength of portfolio comes through our partnerships. What we do is integrate these security functions and features into our product, and our partnership extends to our ecosystem through products like NSX, Horizon, Carbon Black and vSphere. All of them integrate seamlessly, and we also leverage VMware's software partnerships on top of that. So, for example, VxRail supports multifactor authentication through vSphere's integration with Active Directory Federation Services, ADFS. A lot of identity providers support ADFS, including Microsoft Azure, so now we can support a wide array of identity providers, such as Auth0, or, as I mentioned, Azure or Active Directory, through that partnership. So we can leverage all of our partners' partnerships as well; there's sort of a second layer. Being able to secure all of that provides a lot of options and flexibility for our customers. So, to summarize my answer: we consume all of the security advantages of our partners, but we also expand on them, to make a product that is comprehensively secured at multiple layers, from the hardware layer that's provided by Dell through PowerEdge, to the hyperconverged software that we build ourselves, to the virtualization layer that we get through our partnerships with Microsoft and VMware.
>>Great, that's super helpful. You've mentioned NSX, Horizon, Carbon Black, all the VMware components, Auth0, which the developers are going to love, Azure identity; so it's really an ecosystem. You may have actually answered my next question, but I'm going to ask it anyway. You've got this software-defined environment, and you're managing servers and networking and storage with this software-led approach. How do you ensure that the entire system is secure end to end?
>>That's a really great question. The answer is, we do testing and validation as part of the engineering process; it's not just bolted on at the end. For example, VxRail is the market's only co-engineered HCI solution with VMware; other vendors sell VMware as a hyperconverged solution, but we actually include security as part of the co-engineering process with VMware. So it's considered when VMware builds their code, and their process dovetails with ours, because we have a secure development life cycle, which other Dell teams may have talked about in their discussions with you, that we integrate into our engineering life cycle. Because we follow the same framework, all of the code should interoperate from a security standpoint, and when we do our final validation testing for a software release, we're already halfway there in ensuring that all these features give the customers what we promised.
>>That's great. All right, let's close. Pitch me: what would you say is the strong suit? Summarize the strengths of the Dell hyperconverged infrastructure and converged infrastructure portfolio, specifically from a security perspective, Jerome.
>>So I talked about how hyperconverged infrastructure simplifies security management, because you take all of these features that used to be abstracted in hardware, and they're now abstracted in the virtualization layer, and you can manage them from a single point of view; for VxRail, that would be vCenter, for example. By abstracting all this you make security very easy to manage, and highly flexible, because now you don't have limitations around a single vendor; you have a wide array of choices and partnerships to select from. So I would say that is the key to HCI.
Now, what makes Dell the market leader in HCI is not only that we have that functionality, but that we also make it exceptionally useful, because it's co-engineered, not bolted on. I gave the example of the SBOM; I gave the example of how we modified our software release process with VMware to make it very responsive.
>>A couple of other features that we have specific to HCI are digitally signed LCM updates. This is an example of a feature that's exclusive to Dell, not done through a partnership. We digitally sign our software updates, so the user can be sure that the update they're installing into their system is an authentic and unmodified product. We give it a Dell signature that's validated prior to installation. So not only do we consume the features that others develop in a seamless and fully validated way, we also add our own HCI-specific security features that work with all the other partnerships and give the user an exceptional security experience. The benefit to the customer is that you don't have to create a complicated security framework that's hard for your users to use and hard for your system administrators to manage; it all comes in a package. It can all be managed through vCenter, for example, and the specific hyperconverged functions can be managed through VxRail Manager or through SDDC Manager. So there are very few panes of glass that the administrator or user ever has to worry about. It's all self-contained and manageable.
>>That makes a lot of sense. So you've got your own infrastructure, you're applying your best practices to that, like the digital signatures; you've got your ecosystem, you're doing co-engineering with the ecosystem, delivering security in a package, minimizing the complexity at the infrastructure level. The reason, Jerome, this is so important is that SecOps teams have to deal with cloud security, they have to deal with multiple clouds, and now they have the shared responsibility model going across multiple clouds. They've got to secure the containers and the runtime and the platform and so forth. So they're being asked to do other things, and if they also have to worry about all the things that you just mentioned, security is going to get worse. So my takeaway is, you're removing that infrastructure piece and saying, okay guys, you can now focus on those other things that are not necessarily Dell's domain, where you can work with other partners and your own teams to really nail that. Is that a fair summary?
>>I think that is a fair summary, because absolutely the worst thing you can do from a security perspective is provide a feature that's so unusable that the administrator disables it, or other key security features. So when I work with my partners to define and develop a new security feature, the thing I keep foremost in mind is: will this be something our users want to use and our administrators want to administer? Because if it's something that's too difficult or onerous or complex, then I try to find ways to make it more user friendly and practical.
And this is a challenge sometimes, because our products operate in highly regulated environments, and sometimes they have to have certain rules and certain configurations that aren't the most user friendly or management friendly. So I put a lot of effort into thinking about how we can make a feature useful while still complying with all the regulations we have to comply with. And by the way, we're very successful in highly regulated spaces. We sell a lot of VxRail, for example, into the Department of Defense and banks and other highly regulated environments, and we're very successful there.
>>Excellent. Okay, Jerome, thanks. We're going to leave it there for now. I'd love to have you back to talk about the progress that you're making down the road. Things always advance in the tech industry, and so we would appreciate that.
>>I would look forward to it. Thank you very much, Dave.
>>You're really welcome. In a moment I'll be back to summarize the program and offer some resources that can help you on your journey to secure your enterprise infrastructure. I want to thank our guests for their contributions in helping us understand how investments by a company like Dell can both reduce the need for DevSecOps teams to worry about some of the more fundamental security issues around infrastructure, and give them greater confidence in the quality, provenance and data protection designed into core infrastructure like servers, storage, networking and hyperconverged systems. At the end of the day, whether your workloads are in the cloud, on prem or at the edge, you are responsible for your own security. But vendor R&D and vendor process must play an important role in easing the burden faced by security, dev and operations teams. And on behalf of theCUBE production, content and social teams, as well as Dell Technologies, we want to thank you for watching A Blueprint for Trusted Infrastructure. Remember, part one of this series, as well as all the videos associated with this program, and of course today's program, are available on demand at thecube.net, with additional coverage at siliconangle.com. And you can go to dell.com/security to learn more about Dell's security solutions and its approach to securing infrastructure; there are tons of additional resources there that can help you on your journey. This is Dave Vellante for theCUBE, your leader in enterprise and emerging tech coverage. We'll see you next time.

Published Date : Oct 4 2022


Matt Provo & Chandler Hoisington | CUBE Conversation, March 2022


 

(bright upbeat music) >> According to the latest survey from Enterprise Technology Research, container orchestration is the number one category as measured by customer spending momentum. It's ahead of AI/ML, it's ahead of cloud computing, and it's ahead of robotic process automation, all of which also show highly elevated levels of customer spending velocity. When we drill deeper into the survey of more than 1200 CIOs and IT buyers, we find that a whopping 70% of respondents are spending more on Kubernetes initiatives in 2022 as compared to last year. The rise of Kubernetes came about through a series of improbable events that changed the way applications are developed, deployed and managed. Very early on, Kubernetes committers chose to focus on simplicity and massive adoption rather than deep enterprise functionality. It's why initially virtually all activity around Kubernetes focused on stateless applications. That has changed. As Kubernetes adoption has gone mainstream, the need for stronger enterprise functionality has become much more pressing. You hear this constantly when you attend the various developer conferences, and the talk is all around, let's say, shifting left to improve security, better cluster management, more complete automation capabilities, support for data-driven workloads and, very importantly, vastly better application performance, visibility and management. And that last topic is what we're here to talk about today. Hello, this is Dave Vellante, and welcome to this special CUBE conversation, where we invite into our East Coast studio Matt Provo, who's the founder and CEO of StormForge, and Chandler Hoisington, the general manager of EKS Edge and Hybrid at AWS. Gentlemen, welcome, it's good to see you. >> Thanks. >> Thanks for having us. >> So Chandler, you have this convergence: you've got application performance, you've got developer speed and velocity, and you've got cloud economics, all coming together. What's driving that convergence, and why is it important for customers? >> Yeah, yeah, great question. I think it's important to understand how we got here in the first place. Kubernetes solves a lot of problems for users, but the complexity of Kubernetes, of just standing up a cluster to begin with, is not always simple. And that's where services like EKS come in, and where Amazon tried to solve that problem for users, saying, "Hey, the control plane is made up of 10, 15 different components; standing all these up, patching them, handling the CVEs for them, et cetera, is a very complicated process, let me help you do that." And where EKS has been so successful, and with EKS Anywhere, which we launched last year, that's what we're helping customers do, a very similar thing, in their own data centers. So we're kind of solving this problem of bringing the cluster online and helping customers launch their first application on it. But then what do you do once your application's there? That's the question. So now you've launched your application: does it have enough resources? Did you tune the right CPU? Did you tune the right amount of memory for it? All those questions need to be answered, and that's where working with folks like StormForge comes in. >> Well, it's interesting, Matt, because you're all about optimization and trying to maximize efficiency, which might mean people lower their AWS bill, but that's okay with Amazon, right? You guys have shown the cheaper it is, the more they buy. >> Yeah.
And it's all about loyalty and developer experience. When you can help create or add to the developer experience itself, over time that loyalty's there. So when we can come alongside EKS and services from Amazon, well, number one, StormForge is built on AWS, so it's a nice fit; but when we don't have to require developers to choose between things like cost and performance, and they can focus on innovation and on connecting the applications that they're managing on Kubernetes, as they operationalize them, to the actual business objectives that they have, it's a pretty powerful combination. >> So your entry into the market was in pre-production. >> Yeah. >> You can kind of simulate what performance is going to look like, and now you've announced Optimize Live. >> Yep. >> So that should allow you to turn the crank a little bit more. >> Yeah. >> Get a little bit more accurate and respond more quickly. >> Yeah. So we're the only ones that give you both views. We want to provide a view in what we call the experimentation side of our platform, which is pre-production, as well as an ongoing and continuous view, which we call the observation part of our solution, which is in production. And so for us it's about providing that view; it's also about taking an increased number of data inputs into the platform itself, so that our machine learning can learn from that and ultimately be able to automate the right kinds of tasks alongside the developers to meet their objectives. >> So, Chandler, in my intro I was talking about the spending velocity and how Kubernetes was at the top. But when we look at other survey questions that ETR asked, and this is post pandemic, it was interesting. We asked, what's the most important initiative? And the two top ones were security, no surprise, which became even more prominent after the pandemic hit and the lockdowns began, and cloud migration, which was number two. So how are you working with StormForge to effect cloud migrations? Talk about that relationship. >> Yeah. I think different enterprises have different strategies on how they're going to get their workloads to the cloud. Some of them want to modernize in place in their data centers and then take those modernized applications and move them to the cloud, and that's where something like, as I mentioned earlier, EKS Anywhere comes into play really nicely, because we can bring a consistent experience, a Kubernetes experience, to your data center. You can modernize your applications, and then you can bring those to EKS in the cloud, and as you're moving them back and forth you have a more consistent experience with Kubernetes. And luckily StormForge works on prem as well, even in air-gapped environments. So you can get your applications tuned correctly for your data center workloads, and then you're going to tune them differently when you move them to the cloud, and you can get them tuned correctly there; StormForge can run consistently in both environments. >> Now, can you add some color as to how you optimize EKS?
>> Yeah, so from an EKS standpoint, consider the number of parameters that you have to look at for your application inside of EKS, the associated services that go alongside it, and the packages coming in from a Kubernetes standpoint itself; then you start to transition and operationalize, where more and more of these are in production and connected to the business. We provide the ability to go beyond what developers typically do, which is either take the out-of-the-box defaults or recommendations that ship with the services they put into their application, or rely on any human's ability to keep up with a couple of parameters at a time. With two parameters for the typical Kubernetes application, you might have about a hundred different possible combinations that you could choose from, and sometimes humans can keep up with that, at least statically. So for us, we want to blow that wide open. We want developers to be able to take advantage of the entire footprint or environment itself. And by using machine learning to help augment what the developers themselves are doing, not replacing them, augmenting them and having them be a part of that process, this whole new world of optimization opens up to them, which is pretty fantastic. So knowing how the actual workloads are configured, on an ongoing basis and predictively, based on upcoming business events or even unknowns many times, is a pretty powerful position to be in.
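To see why even two tuning knobs get out of hand, here is a small sketch that enumerates candidate CPU and memory settings for a deployment; ten values per parameter already yields a hundred configurations to evaluate, the kind of search space a machine learning optimizer can explore faster than a human. The resource values and the scoring function are arbitrary examples, not StormForge's algorithm.

```python
# Why two tuning knobs get out of hand: 10 candidate CPU settings x
# 10 candidate memory settings = 100 configurations to evaluate.
# Values are arbitrary examples, not recommendations.
from itertools import product

cpu_millicores = [100, 200, 300, 400, 500, 750, 1000, 1500, 2000, 4000]
memory_mib     = [128, 256, 384, 512, 768, 1024, 1536, 2048, 3072, 4096]

combinations = list(product(cpu_millicores, memory_mib))
print(f"{len(combinations)} possible configurations")  # 100

# A human might statically pick one; an ML-driven optimizer can score
# each candidate against observed load and pick the best trade-off.
def score(cpu, mem, observed_cpu=350, observed_mem=600):
    over  = max(cpu - observed_cpu, 0) + max(mem - observed_mem, 0)  # waste
    under = max(observed_cpu - cpu, 0) + max(observed_mem - mem, 0)  # risk
    return over + 3 * under  # weight under-provisioning as riskier

best = min(combinations, key=lambda c: score(*c))
print(f"best candidate: cpu={best[0]}m, memory={best[1]}Mi")
```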
>> I mean, you said not to replace developers. I mentioned robotic process automation in my intro, and of course in the early days it was, oh, it's going to replace my job. What's actually happened is it's replacing all the mundane tasks. >> Yeah. >> So you can actually do your job. >> Yeah. >> Right? We're all working 24/7, 365 these days, so to the extent that you can automate the things that I hate doing... >> Yeah. >> That's a huge win. So Chandler, how do people get started? You mentioned EKS Anywhere; are they starting on prem and then kind of moving into the cloud? If I'm a customer and I'm interested and I'm sort of at the beginning, where do I start? >> Yeah, I mean, it really depends on your workload. Any workload that can run in the cloud should run in the cloud. I'm not just saying that because I work at Amazon, but I truly think that is the case, and I think customers think that as well. More and more customers are trying to move workloads to the cloud for that elasticity and all the benefits of using these huge platforms and the hundreds of services that you have advantage of in the cloud. But some workloads just can't move to the cloud yet. You have workloads that have latency requirements, like some gaming workloads, for example, where we don't have regions close enough to the consumers yet; so you want to put workloads in Turkey to service Egypt customers, or something like this. You also have workloads that are on cruise ships and lose connectivity in the middle of the Atlantic, or maybe you have highly secure workloads in air-gapped environments, or something like this. So there are still a lot of use cases that keep workloads on prem, and sometimes customers just have existing investments in hardware that they don't want to eat yet, right? And they want to slowly phase those out as they move to the cloud. And again, that's where EKS Anywhere really plays well for the workloads that you want to keep on prem, and then as you move to the cloud you can take advantage of, obviously, EKS. >> I'll put you on the spot. >> Sure. >> And don't hate me for doing this, but Andy Jassy, Adam Selipsky, and I've certainly heard Mai-Lan Tomsen Bukovec talk about this: in the fullness of time, all workloads will be in the cloud. >> Yeah. >> And I've said the cloud is expanding; we're going to bring the cloud to the edge. Edge is in your title. Is that a correct interpretation, and obviously it relates to Kubernetes? >> Absolutely. And you'll see that in Amazon's strategy. I mean, with Outposts and Wavelength and Local Zones, at the end of the day, Amazon tries to satisfy customers. And if customers are saying, "Hey, I want to run a workload in San Francisco, and it's really important to me that it's close to those end users that are in that area," we're going to help them do that at Amazon. And there's a variety of options now to do that; EKS Anywhere is actually only one piece of that whole strategy. >> Yeah. I mean, here you have your best people working on the speed-of-light problem, but until that's solved... >> That's right. >> How do you know about that? (all laughing) >> It's top secret. Sorry. You heard it on theCUBE first. Matt, we'll give you the last word; bring us home. >> So I couldn't agree more. The cloud is where workloads are going. What I love is the ability, for the same enterprises, a lot of the ones we work with, to have a public cloud and private cloud view, and the flexibility, depending on the nature of the applications, to shift between them from time to time. And I love EKS Anywhere. I think it's a fantastic addition to the ecosystem. For us, we're about staying focused on the set of problems that we solve. No developer that I've ever met, and probably neither of you have met, gets super excited about getting out of bed to manually tune their applications. And what we find is that the time spent doing that has almost a one-to-one correlation with not innovating: it means they're not doing what they love to be doing. So when we can come alongside that and automate away the manual tasks, to your point, I think there are a lot of parallels to RPA in that case; it becomes actually a pretty empowering process for our users, so that they feel like they're meeting the business objectives that they have, they get to innovate, and yet they're exploring this whole new world around not having to choose between something like cost and performance for their applications. >> Well, and we're entering an entire new era of scale... >> Yeah. >> ...that we've never seen before, and humans just are not going to be able to keep up with that. >> Yep. >> And that affects quality and speed and everything else. Guys, hey, thanks so much for coming in; a great conversation. And thank you for watching this CUBE conversation. This is Dave Vellante, and we'll see you next time. (upbeat music)

Published Date : Mar 15 2022


Breaking Analysis: What to Expect in Cloud 2022 & Beyond


 

>> From theCUBE studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR, this is Breaking Analysis with Dave Vellante.

We've often said that the next 10 years in cloud computing won't be like the last ten. Cloud has firmly planted its footprint on the other side of the chasm, with the momentum of the entire multi-trillion dollar tech business behind it. Both sellers and buyers are leaning in, adopting cloud technologies, and many are building their own value layers on top of cloud. In the coming years, we expect innovation will continue to coalesce around the three big U.S. clouds, plus Alibaba in APAC, with the ecosystem building value on top of the hardware and software tooling provided by the hyperscalers. Importantly, we don't see this as a race to the bottom. Rather, our expectation is that the large public cloud players will continue to take cost out of their platforms through innovation, automation and integration, while other cloud providers and the ecosystem, including traditional companies that buy IT, mine opportunities in their respective markets. As Matt Baker of Dell is fond of saying, this is not a zero-sum game. Welcome to this week's Wikibon CUBE Insights, powered by ETR. In this Breaking Analysis, we'll update you on our latest projections in the cloud market, share some new ETR survey data with some surprising nuggets, and drill into the important cloud database landscape.

First, we want to take a look at what people are talking about in cloud and what's been in the recent news. With the exception of Alibaba, all the large cloud players have reported earnings. Google continues to focus on growth at the expense of its profitability. Google reported that its cloud business, which includes applications like Google Workspace, grew 45 percent to five and a half billion dollars, but it had an operating loss of $890 million. Since Thomas Kurian joined Google to run its cloud business, Google has increased head count in its cloud business from 25,000 people to around 40,000, in an effort to catch up to the two leaders, but playing catch-up is expensive. To put this into perspective, let's go back to AWS's revenue in Q1 2018, when the company did $5.4 billion, almost exactly the same size as Google's current total cloud business, and AWS was growing faster at the time, at 49 percent. Don't forget, Google includes in its cloud numbers a big chunk of high-margin software. AWS at the time had an operating profit of $1.4 billion that quarter, around 26 percent of its revenues, so it was a highly profitable business, about as profitable as Cisco's overall business, which again is a great business. This is what happens when you're number three and didn't get your head out of your ads fast enough.

Now, in fairness, Google still gets high marks on the quality of its technology. According to Corey Quinn of the Duckbill Group, Amazon and Google Cloud are, as he put it, neck and neck with regard to reliability, with Microsoft Azure trailing because of significant disruptions in the past. These comments were made last week in a Bloomberg article, despite some recent high-profile outages on AWS. Not surprisingly, a Microsoft spokesperson said that the company's cloud offers industry-leading reliability and that it gives customers payment credits after some outages. Thank you.

Turning to Microsoft and cloud news: Microsoft's overall cloud business surpassed $22 billion in the December quarter, up 32 percent year on year. Like Google, Microsoft includes application software and SaaS offerings in its cloud numbers, and
gives little nuggets of guidance on its Azure infrastructure-as-a-service business. By the way, we estimate that Azure comprises about 45 percent of Microsoft's overall cloud business, which we think hit a $40 billion run rate last quarter. Microsoft guided in its earnings call that recent declines in the Azure growth rates will reverse in Q1, and that implies sequential growth for Azure. And finally, it was announced that the FTC, not the DOJ, will review Microsoft's announced $75 billion acquisition of Activision Blizzard. It appears FTC chair Lina Khan wants to take this one on herself. She, of course, has been very outspoken about the power of big tech companies, and in a recent CNBC interview suggested that the U.S. government's actions were a meaningful contributor, back then, to curbing Microsoft's power in the 90s. I personally found that dubious; just ask Netscape, WordPerfect, Novell, Lotus and SPC, the maker of Harvard presentation graphics, how effective the government was in curbing Microsoft's power. Generally, my take is that the U.S. government has had a dismal record regulating tech companies, most notably IBM and Microsoft, and it was market forces, company hubris, complacency and self-inflicted wounds, not government intervention, that were far more effective. Of course, if companies are breaking the law they should be punished, but the U.S. government hasn't been very productive in its actions, and the unintended consequences of regulation could be detrimental to U.S. competitiveness in the race with China. But I digress.

Lastly in the news, Amazon announced earnings Thursday, and the company's value increased by $191 billion on Friday; that's a record valuation gain for U.S. stocks. AWS, Amazon's profit engine, grew 40 percent year on year for the quarter. It closed the year at $62 billion in revenue and at a $71 billion revenue run rate. AWS is now larger than IBM, which, without Kyndryl, is at a $67 billion run rate. Just for context, IBM's revenue in 2011 was $107 billion. Now, there's a conversation going on in the media and on social that, in order to continue this growth and compete with Microsoft, AWS has to get into the SaaS business and offer applications. We don't think that's the right strategy for Amazon in the near future; rather, we see them enabling developers to compete in that business. Finally, Amazon disclosed that 48 of its top 50 customers are using Graviton2 instances. Why is this important? Because AWS is well ahead of the competition in custom silicon and is on a price-performance curve that is far better than alternatives, especially those based on x86. This is one of the reasons why we think this business is not a race to the bottom. AWS is being followed by Google, Microsoft and Alibaba in developing custom silicon, and they will continue to drive down their internal cost structures and deliver price performance equal to or better than the historical Moore's Law curves.

So that's the recent news for the big U.S. cloud providers. Let's now take a look at how the year ended for the big four hyperscalers, and look ahead to next year. Here's a table we've shown before. It shows the revenue estimates for worldwide IaaS and PaaS generated by AWS, Microsoft, Alibaba and Google. Remember, Amazon and Alibaba share clean IaaS figures, whereas Microsoft and Alphabet only give us nuggets that we have to interpret, and we correlate those tidbits with other data that we gather. We're one of the few outlets that actually
attempts to make these apples-to-apples comparisons. There's a company called Synergy Research, another firm that does this, but I really can't map to their numbers; their GCP figures look far too high, and Azure appears somewhat overestimated, and they do include other things like hosted private cloud services. But it's another data point that you can use. Okay, back to the table. We've slightly adjusted our GCP figures down, based on interpreting some of Alphabet's statements and other survey data. Only Alibaba has yet to announce earnings, so we'll stick to a 2021 market size of about $120 billion; that's a 41 percent growth rate relative to 2020, and we expect that figure to increase by 38 percent, to $166 billion, in 2022. We'll discuss this a bit later, but these four companies have created an opportunity for the ecosystem to build what we're calling superclouds on top of this infrastructure, and we're seeing it happen. It was increasingly obvious at AWS re:Invent last year, and we feel it will pick up momentum in the coming months and years; a little bit more on that later.

Now, here's a graphical view of the quarterly revenue shares for these four companies. Notice that AWS has reversed its share erosion and is trending up slightly. AWS has accelerated its growth rate four quarters in a row. It accounted for 52 percent of the big four hyperscaler revenue last year, and that figure was nearly 54 percent in the fourth quarter. Azure finished the year with 32 percent of the hyperscale revenue in 2021, which dropped to 30 percent in Q4, and you can see GCP and Alibaba are neck and neck, fighting for the bronze medal. By the way, in our recent 2022 predictions post we said Google Cloud Platform would surpass Alibaba this year, but given the recent trimming of our numbers, Google's got some work to do for that prediction to be correct.

Okay, just to put a bow on the Wikibon market data, let's look at the quarterly growth rates, and you'll see the compression trends there. This data tracks quarterly revenue growth rates back to Q1 2019, and you can see the steady downward trajectory and the reversal that AWS experienced in Q1 of last year. Remember, Microsoft guided for sequential growth in Azure, so that orange line should trend back up, and given GCP's much smaller size and the big go-to-market investments we talked about, we'd like to see an acceleration there as well. The thing about AWS is just remarkable: it's able to accelerate growth in a $71 billion run rate business. Alibaba is a bit more opaque, and likely still reeling from the crackdown by the Chinese government. We're admittedly not as close to the China market, but we'll continue to watch from afar, as that steep decline in growth rate is somewhat of a concern.

Okay, let's get into the survey data from ETR, and to do so we're going to take some time-series views of select cloud platforms that are showing spending momentum in the ETR data set. ETR uses a metric we've talked about a lot, called Net Score, to measure the spending velocity of products and services. Net Score basically asks customers: are you spending more, less or the same on a platform and a vendor? It then subtracts the lesses from the mores, and that yields a Net Score. This chart shows Net Score for five cloud platforms going back to January 2020.
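As a rough illustration of the arithmetic (ETR's actual methodology has more response categories and weighting detail than this sketch assumes), Net Score can be computed from survey responses like so:

```python
# Rough sketch of a Net Score calculation: the share of customers
# increasing spend (including new adoptions) minus the share
# decreasing spend (including defections). Flat spenders count in
# the denominator but not the numerator. Simplified vs. ETR's
# actual methodology.
from collections import Counter

responses = ["more", "more", "new", "flat", "flat", "less",
             "flat", "more", "defecting", "flat"]

def net_score(responses):
    c = Counter(responses)
    n = len(responses)
    positive = c["new"] + c["more"]
    negative = c["less"] + c["defecting"]
    return 100.0 * (positive - negative) / n

print(f"Net Score: {net_score(responses):.1f}%")  # 20.0% for this sample
```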
Note that the table we've inserted inside that chart shows the Net Score and Shared N; the latter metric indicates the number of mentions in the data set, and all the platforms we've listed here show strong presence in the survey. That red dotted line at 40 percent indicates spending is at an elevated level, and you can see Azure and AWS and VMware Cloud on AWS, as well as GCP, are all nicely elevated and bounding off their October figures, indicating continued cloud momentum overall. But the big surprise in these figures is the steady climb and the steep bounce up from Oracle, which came in just under the 40 percent mark. Now, one quarter is not necessarily a trend, but going back to January 2020, the Oracle peaks keep getting higher and higher, so we definitely want to keep watching this.

Now, here's a look at some of the other cloud platforms in the ETR survey. The chart here shows the same time series, and we've now brought in some of the big hybrid players: notably VMware Cloud, which is VCF and other on-prem solutions; Red Hat OpenStack, which, as we've reported in the past, is still popular in telcos that want to build their own cloud; we're also starting to see HPE with GreenLake and Dell with APEX show up more; and IBM, which years ago acquired SoftLayer, essentially a bare metal hosting company, and over the years cobbled together its own public cloud. IBM is now racing after hybrid cloud, using Red Hat OpenShift as the linchpin of that strategy.

What this data tells us: first of all, these platforms don't have the same presence in the data set as the previous players. VMware is the one possible exception, but other than VMware, these players don't have the spending velocity shown in the previous chart, and most are below the red line. HPE and Dell are interesting and notable in that they're transitioning their early private cloud businesses to HPE GreenLake and Dell APEX, respectively. Finally, after years of staring at their respective navels in cloud and milking their legacy on-prem models, they're building out cloud-like infrastructure for their customers. They're leaning into cloud and marketing it in a more sensible and attractive fashion, so we would expect these figures to bounce around for a little while for those two as they settle into a groove, and we'll watch that closely.

Now, IBM is in the process of a complete do-over. Arvind Krishna inherited three generations of leadership with a professional services mindset. In the post-Gerstner era, both Sam Palmisano and Ginni Rometty held on far too long to IBM's services heritage and protected the past from the future. They missed the cloud opportunity, and that forced the acquisition of Red Hat to position the company for hybrid cloud. Rometty tried to shrink to grow, but never got there. Krishna is moving faster, and with the Kyndryl spin is promising mid-single-digit growth, which would be a welcome change. IBM has a lot of work to do, and we would expect its Net Score figures to bounce around as well, as customers transition to the future.

All right, let's take a look at all these different players in context. These are all the clouds that we just talked about, in a two-dimensional view: the vertical axis is Net Score, or spending momentum, and the horizontal axis is market share, or presence, pervasiveness in the data set. A couple of call-outs that we'd like to make here. First, the data confirms what we've been saying, what everybody's been saying: AWS and Microsoft
stand alone, with a huge presence, many tens of billions of dollars in revenue, yet they are both well above the 40 percent line and show spending momentum, and they're well ahead of GCP on both dimensions. Second, VMware, while much smaller, is showing legitimate momentum, which correlates to its public statements. Alibaba really doesn't have enough sample in this survey to draw hardcore conclusions. You can see HPE and Dell and IBM similarly have a little more presence in the data set, but they clearly have some work to do; what you're seeing there is them transitioning their legacy install bases. Oracle's the big surprise. Look at where Oracle was in the January survey and how they've shot up recently. Now, we'll see if this holds up.

Let's posit some possibilities as to why. It really starts with the fact that Oracle is the king of mission-critical apps. Now, if you haven't seen the video on Twitter, you have to check it out; it's hilarious. We're not going to run the video here, but the link will be in our post. I'll give you the short version: some really creative person overlaid a data migration narrative on top of this one-toothed guy who speaks in Spanish gibberish. The setup is that he's a PM, a project manager, at a bank, and AWS came into the bank, this of course all hypothetical, and said, we can move all your apps to the cloud in 12 months. And the guy says, but wait, we're running mission-critical apps on Exadata, and AWS says there's nothing special about Exadata, and he starts howling and slapping his knee and laughing and giggling and talking about the 23-year-old senior engineer who says we're going to do this with microservices, and he could tell he was 23 because he was wearing expensive sneakers, and what a nightmare they encountered migrating their environment. Very, very funny video, and for anyone who's ever gone through a major migration of mission-critical systems, this is going to hit home. It's funny, not funny.

The point is, it's really painful to move off of Oracle, and Oracle, for all its haters and its faults, is really the best environment for mission-critical systems, and customers know it. So what's happening is Oracle's building out the best cloud for Oracle database, and it has a lot of really profitable customers running on-prem that the company is migrating to Oracle Cloud Infrastructure, OCI. It's a safer bet than ripping it out and putting it into somebody else's cloud that doesn't have all the specialized hardware and Oracle knowledge, because you can get the same integrated Exadata hardware and software to run your database in the Oracle cloud. It's frankly an easier and much more logical migration path for a lot of customers, and that's possibly what's happening here. Not to mention, Oracle jacks up the license price, nearly doubling it, if you run on other clouds. So not only is Oracle investing to optimize its cloud infrastructure, it spends money on R&D, as we've always talked about, really focused on mission-critical applications, and it's making its own cloud more cost effective by penalizing customers that run Oracle elsewhere. This possibly explains why, when the Gartner Magic Quadrant for cloud databases comes out, it's got Oracle so well positioned. You can see it there for yourself: Oracle's position is right there with AWS and Microsoft, and ahead of Google. On the right-hand side are Gartner's critical capabilities ratings for DBMS, and Oracle leads in virtually all of the categories Gartner tracks. This is for operational DBMS, so it's kind of a narrow view;
it's like the Red Stack sweet spot. Now, this graph shows traditional transactions, but Gartner has Oracle ahead of all vendors in stream processing, operational intelligence, and real-time augmented transactions. Now, you know Gartner, they're like old mainframers, and I say that lovingly, so maybe they're a bit biased, and they might be missing some of the emerging opportunities that, for example, Snowflake is pioneering. But it's hard to deny that Oracle, for its business, is making the right moves in cloud by optimizing for the Red Stack. There's little question in our view: when it comes to mission critical, we think Gartner's analysis is correct.

However, there's this other really exciting landscape emerging in cloud data, and we don't want it to be a blind spot. Snowflake calls it the data cloud. Zhamak Dehghani calls it data mesh. Others are using the term data fabric. Databricks calls it data lakehouse, and so does Oracle, by the way. Look, the terminology is going to evolve, and most of the action is happening in the cloud, quite frankly. This chart shows a select group of database and data warehouse companies, and we've filtered the data for AWS, Azure, and GCP customer accounts; in other words, we're showing how these vendors are doing within AWS, Azure, and GCP accounts. To make the cut, you had to have a minimum of 50 mentions in the ETR survey, so unfortunately Databricks didn't make it, just not enough presence in the data set quite yet. But just to give you a sense, Snowflake is represented in this cut with 131 accounts, AWS 240, Google 108, Microsoft a huge 407, [ __ ] 117, Cloudera 52 (just made the cut), IBM 92, and Oracle 208. Again, these are shared accounts, filtered by customers running AWS, Azure, or GCP.

The chart shows Net Score: lime green is new adds, forest green is spending more, gray is flat spending, pink is spending less, and bright red is defection. Again, you subtract the red from the green and you get Net Score. You can see that Snowflake, as we reported last week, is tops in the data set, with a Net Score in the 80s, virtually no red, and, by the way, even single-digit flat spend. AWS, Google, and Microsoft are all prominent in the data set, as are [ __ ] and Snowflake, as I just mentioned, and they're all elevated over the 40 mark. Cloudera, what can we say? Once they were a high flyer; they're really not in the news anymore with anything compelling, other than they just took the company private, so maybe they can re-emerge at some point with a stronger story. I hope so, because as you can see, they actually have some new adds and spending momentum in the green, just a lot of customers holding steady and a bit too much red. But they're in positive territory at least, with plus 17 percent, unlike IBM and Oracle.
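For readers who want to reproduce the arithmetic, here is a minimal sketch of the Net Score calculation as described above. The function and field names are illustrative, not ETR's actual schema:

def net_score(new_adds, spending_more, flat, spending_less, defecting):
    # Net Score = (% new adds + % spending more) - (% spending less + % defecting).
    # 'flat' spenders don't move the score; the parameter is listed so the
    # five response buckets sum to roughly 100 for a given vendor's accounts.
    return (new_adds + spending_more) - (spending_less + defecting)

# Hypothetical vendor profile: 30% new adds, 40% spending more, 20% flat,
# 7% spending less, 3% defecting yields a Net Score of 60.
print(net_score(30, 40, 20, 7, 3))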
And this is the flip side of the coin. IBM is knee-deep, really chest-deep, in the middle of a major transformation. We've said before that Arvind Krishna's strategy and vision is at least achievable: prune the portfolio, i.e., spin out Kyndryl and sell Watson Health; hold serve with the mainframe and deal with those product cycles; shift the mix to software; and use Red Hat to win the day in hybrid. Red Hat is working for IBM and growing well into the double digits. Unfortunately, that's not showing up in this chart, with little database momentum in AWS, Azure, and GCP accounts: zero new adds, not enough acceleration in spending, a big gray middle, and nearly a quarter of the base in the red. IBM's data and AI business only grew three percent this last quarter, and the word database wasn't even mentioned once on IBM's earnings call. This has to be a concern when you can see how important database is to AWS, Microsoft, and Google, and the momentum it's giving companies like Snowflake and [ __ ] and others.

Which brings us to Oracle, with a Net Score of minus 12. So how do you square the momentum in Oracle cloud spending, and the strong database ratings from Gartner, with this picture? Good question, and I would say the following. First, look at the profile. People aren't newly adopting Oracle, a large portion of the base, 25 percent, is reducing spend by 6 percent or worse, and a decent percentage of the base is migrating off Oracle, with a big fat middle that's flat. This accounts for the poor Net Score overall. But what ETR doesn't track is how much is being spent; rather, it's an account-based model, and Oracle is heavily weighted toward big spenders running mission critical applications and databases. Oracle's non-GAAP operating margins are comparable to IBM's gross margins on a percentage basis, so it's a very profitable company with a big license and maintenance install base. Oracle has focused its R&D investments on cloud ERP, database, and automation; it has vertical SaaS; and it has this integrated hardware and software story. That drives differentiation for the company, but as you can see in this chart, it has a legacy install base that is constantly trying to minimize its license costs.

Okay, here's a little bit of a different view on the same data. We expand the picture with the two dimensions of Net Score on the y-axis and market share, or pervasiveness, on the horizontal axis; the table insert shows how the data gets plotted, y and x respectively. Not much to add here, other than to say the picture continues to look strong for those companies above the 40 line that are focused, have figured out a clear cloud strategy, and aren't necessarily dealing with a big install base. The exception, of course, is Microsoft. The ones below the line definitely have parts of their portfolio with solid momentum, but they're fighting the inertia of a large install base that moves very slowly. Again, Microsoft had the advantage of Azure and migrated those customers very quickly.

Okay, so let's wrap it up, starting with the big three cloud players. AWS is accelerating and innovating; a great example is custom silicon with Nitro and Graviton and other chips, which will help the company address concerns related to the race to the bottom. It's not a race to zero. AWS, we believe, will let its developers go after the SaaS business, and for the most part AWS will offer solutions that address large vertical markets; think call centers. The edge remains a wild card for AWS, and for all the cloud players really. AWS believes that in the fullness of time all workloads will run in the public cloud. Now, it's hard for us to imagine Tesla autonomous vehicles running in the public cloud, but maybe AWS will redefine what it means by its cloud.

Microsoft? Well, they're everywhere, and they're expanding further now into gaming and the metaverse. When he became CEO in 2014, many people said that Satya Nadella should ditch Xbox. Just as an aside, the joke among many Oracle employees at the time was that Safra Catz would buy her kids, her nieces and nephews, and her kids' friends, everybody, Xbox game consoles for the holidays, because Microsoft lost money on every one it shipped. Well, Nadella has stuck with it, and he sees an opportunity to expand through online gaming communities. One of his first deals as CEO was
Minecraft. Now the acquisition of Activision will make Microsoft the world's number three gaming company by revenue, behind only Tencent and Sony, and all of this will be powered by Azure and drive more compute, storage, AI, and tooling. Google, for its part, is battling to stay relevant in the conversation. Luckily, it can afford the massive losses it endures in cloud, because the company's advertising business is so profitable. Don't expect, as many have speculated, that Google is going to bail on cloud; that would be a huge mistake, as the market is more than large enough for three players.

Which brings us to the rest of the pack. Cloud ecosystems generally, and AWS's specifically, are exploding. The idea of supercloud, that is, a layer of value that spans multiple clouds, hides the underlying complexity, and brings new value that the cloud players aren't delivering, is starting to bubble to the top. And legacy players are staying close to their customers and fighting to keep them spending, and it's working. Dell, HPE, Cisco, and smaller, predominantly on-prem players like Pure Storage continue to do pretty well; they're just not as sexy as the big cloud players. The really interesting activity is happening in the ecosystem of companies, and firms within industries, that are transforming to create their own digital businesses. Virtually all of them are running a portion of their offerings on the public cloud, but often connecting to on-premises workloads and data; think Goldman Sachs. Making that work, and creating a great experience across all environments, is a big opportunity, and we're seeing it form right before our eyes. Don't miss it.

Okay, that's it for now. Thanks to my colleague Stephanie Chan, who helped research this week's topics. Remember, these episodes are all available as podcasts wherever you listen; just search "Breaking Analysis podcast." Check out ETR's website at etr.ai, and we also publish a full report every week on wikibon.com and siliconangle.com. You can get in touch with me by email at david.vellante@siliconangle.com, DM me @dvellante, or comment on my LinkedIn posts. This is Dave Vellante for the Cube Insights, powered by ETR. Have a great week, stay safe, be well, and we'll see you next time.

Published Date : Feb 7 2022

Jas Bains, Jamie Smith and Laetitia Cailleteau | AWS Executive Summit 2021


 

(bright upbeat music) >> Welcome to The Cube. We're here for the AWS Executive Summit, part of Reinvent 2021. I'm John Furrier, your host of the Cube. We've got a great segment focus here; Art of the Possible is the segment. Jas Bains, Chief Executive at Hafod, and Jamie Smith, director of research and innovation, and Laetitia Cailleteau, who's the global lead of conversational AI at Accenture. Thanks for joining me today for this Art of the Possible segment. >> Thank you. >> So tell us a little bit about Hafod and what you guys are doing for the community, 'cause this is a really compelling story of how technology in home care is kind of changing the game and putting a stake in the ground. >> Yeah, so Hafod is one of the largest not-for-profits in Wales. We employ about 1400 colleagues. We have three strands of service, which focus on key demographics: people who are vulnerable and socioeconomically disadvantaged. Our three core strands of service are affordable housing, we provide several thousand homes to people in housing need across Wales. We also are an extensive provider of social provision, both residential and in the community. And then we have a third tier, which is a hybrid in between, so that supports people who are not quite ready for independent living, but neither are they ready for residential care. So that's a supportive provision. I suppose one of the things that marks Hafod out, and why we're here in this conversation, is that we're uniquely placed as one of the organizations that actually has a research and innovation capacity. And it's the work of the research and innovation capacity, led by Jamie, that brought about this collaboration with Accenture, which is bringing great meaning and benefits to thousands of our customers and, hopefully, universal application as it develops. >> You know, this is a really interesting discussion on multiple levels. One, the pandemic accelerated this need, so I want to get comments on that. But two, if you look at the future of work and home life, you're seeing the convergence of where people live. And I think this idea of having this independent home and the ecosystem around it, there's a societal impact as well. So what brought this opportunity together? How did this come together with Accenture and AWS? >> We'll go to Jamie and Laetitia. >> Yeah, I can start. Well, we were trying to apply for the LC Aging Grand Challenge in the U.K., so the United Kingdom recognized the need for change around independent living and ran a grand challenge. And then we got together as part of this grand challenge. You know, we had some technology we had trialed with AGK before and Hanover Housing Association. Hafod was really keen to actually start trying some of that technology with some of the residents. And we also worked with Swansea University, which was doing a lot of work around social isolation and loneliness. And we came together to pitch for the grand challenge. And we went quite far actually; unfortunately we didn't win, but we had built such a great collaboration that we couldn't really let it go no further. And we decided to continue to invest in this idea. And now we're here, probably 18 months on, with a number of people at Hafod using the technology, feedback and returns coming back, and us having grand ambitions to actually go much broader and scale this solution.
>> Jas and Jamie, I'd love to get your reaction and commentary on this trend of tech for good, because, I mean, I'm sure you didn't just wake up and say, "Oh, I want to do some tech for good." You guys have an environment, you have an opportunity, you have challenges you're going to turn into opportunities. But if you look at the global landscape right now, things that are jumping out at us include the impact of social media on people. You've got the pandemic with isolation. This is a first-order problem in this new world: how do we get technology to change how people feel and make their lives better? >> Yeah, I think for us, first there has to be a problem to solve. There's got to be a question to be answered. And for us, that question in this instance was: how do we mitigate loneliness, and how do we take services that rely on person-to-person contact and aren't particularly scalable, and replicate those through technology somehow? And even if we can do 10% of the job of that in-person service, then for us it's worth it, because that is scalable. And there are lots of small interventions we can make using technology, which is a really efficient way for us to support people in the community when we just can't be everywhere at once. >> So, John, just to add, I think that we have about 1500 people living alone and isolated in their households. And I think the issue for us was about more than just technology, because a lot of these people don't have access to basic technology features that most of us would take for granted. So for us this is a two-pronged journey. One is about increasing the accessibility of tech and familiarizing people so that they're comfortable with these devices, and two, importantly, making sure that we have the right means to help people reduce their loneliness and isolation. So the opportunity to try out something over the last 12 months, something that's bespoke, that's customized, and that will undoubtedly be tweaked as we go forward, has been absolutely marvelous. And for us, the collaboration with Accenture has been absolutely key. I think what we've seen during COVID is cross-fertilization. We've seen multi-disciplinary teams, we've got engineers, architects, manufacturers, clinicians, and scientists all trying to develop new solutions around COVID. And I think this is probably just exemplary of a post-COVID world, where industry and, in our case, the public sector and academia are working together. >> Yeah, that's a great example, and props to everyone there. And congratulations on this really, really important initiative. Let's talk about the home care solution. What does it do? How does it work? Take us through what's happening. >> Okay, so home care is actually a platform, which is obviously running on AWS technology, and the service this platform offers is accessible via voice through the Alexa device. We use the Echo Show to be able to use voice but also visuals, to make the technology more accessible for end users. On the platform itself, we have a series of services available, and we connect, in the background, a number of services from the community. So in the particular case of Hafod, we had something around shopping during the pandemic, where we had people wanting access to their food bank. We also had, during the pandemic, a need for access to financial coaching and things like that. So we actually brought all of those services onto the platform as skills, and the skill was really learning how to interact with the end user. It was all customized for them, to be able to access those things in a very easy way. It did work almost too well, because some of our end users have been, you know, not digitally literate before, and it was working so well, they were like, "But why can't it do pretty much anything on the planet? Why can't it do this or that?" So the expectations were really, really high, but we did manage to bring comfort to Hafod residents in a number of their daily needs, some of them things during COVID, 'cause people couldn't meet face to face. There was some challenge around understanding what events were running, so the coaches would publish events through the skill, and people would be able to subscribe, go to the event, and meet together virtually instead of physically. A number of things that really brought a voice-enabled experience to those end users.
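To make the shape of what Laetitia describes concrete: home care, as presented, is a custom Alexa skill running on AWS that surfaces community services. The implementation details aren't public in this conversation, so what follows is only a minimal sketch of how a skill handler of that shape might look using Amazon's ask-sdk for Python; the intent name, the food bank lookup, and the placeholder response are all hypothetical.

# pip install ask-sdk-core
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_intent_name
from ask_sdk_core.handler_input import HandlerInput
from ask_sdk_model import Response


class FoodBankIntentHandler(AbstractRequestHandler):
    """Handles a hypothetical 'FoodBankIntent', e.g. a resident asking
    about the food bank. The community-service lookup is a stand-in."""

    def can_handle(self, handler_input: HandlerInput) -> bool:
        return is_intent_name("FoodBankIntent")(handler_input)

    def handle(self, handler_input: HandlerInput) -> Response:
        # Placeholder; a real deployment would query whatever service
        # directory the platform aggregates in the background.
        next_slot = "Thursday at 10 a.m."
        speech = f"Your local food bank's next collection is {next_slot}."
        return (
            handler_input.response_builder
            .speak(speech)
            .set_should_end_session(False)
            .response
        )


sb = SkillBuilder()
sb.add_request_handler(FoodBankIntentHandler())
handler = sb.lambda_handler()  # entry point if deployed on AWS Lambda

Each community service (shopping, financial coaching, events) would follow the same pattern: one intent handler per service, with the customization per resident layered on top.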
>> You know, you mentioned that people like the solution. I'm going to get to Jamie in a second, but I want to just bring up something that you brought up. This is a digital divide evolution, because the digital divide, as Jas was saying, isn't only about technology. First, you need access, right? Then you have to bring broadband and internet access. And then you have to get the technology into the home. But here it seems to be a whole other level of digital divide, bridging to new heights. >> Yeah, completely, completely. And I think that's where COVID has really accelerated the digital divide. Before the solution was put in place for Hafod, in the sense that people couldn't move, if they were not digitally literate it was very hard to have access to services. And now we've brought this solution into the comfort of their own home, and they have access to services that they wouldn't have had otherwise on their own. So it's definitely helping, yeah. >> It's just another example of people refactoring their lives or businesses with technology. Jamie, what's your take on the innovation here and the technical aspects of the home care solution? >> I think the fact that it's so easy to use, it's personalized, it's a digital companion for the home. It overcomes that digital divide that we talked about, which is really important. If you've got a voice, you can use home care, and you can interact with it in this really simple way. And what I love about it is the fact that it was based on what our customers told us they were finding difficult during this time, during the early lockdowns of the pandemic. There were the 1500 or so people Jas talked about, who were living alone and at risk of loneliness. Now, we spoke to a good number of those through a series of welfare calls, and we found out exactly what it is they found challenging. >> What were some of the things that they were finding challenging? >> So, tracking how they feel on a day-to-day basis. What's my mood like, what's my wellbeing like, and knowing how that changes over time. Just keeping the fridge and the pantry stocked up: what can I cook with these basic ingredients that I've got in my home? You could be signposted to basic resources to help you with that.
Staying connected to the people who are really important to you. But the bit that shines out for me is the interface with our services, with our neighborhood coaching service, where we can just give these little nudges, these little interventions, just to mitigate and take the edge off that loneliness for people. We can see the potential of that coming out of the pandemic, where you can really encourage people to interact with one another, to be physically active, and to do all of those things that mitigate against loneliness. >> Let me ask you a question, 'cause I think it's a very important point. The timing of the signaling of data is super important. Could you comment on the relevance of having access to data? When you're connected like this, I can only imagine the benefits. It's all about timing, right? Knowing that someone might be feeling some way, whether it's tactical or in any scenario, it's the timing of data, the right place at the right time, as they say. What's your take on that? 'Cause it sounds like what you're saying is that you can see things early, when people are in the moment. >> Yeah, exactly. So if there's a trend beginning to emerge, for example, around someone's wellbeing, which has been on a low trajectory for a number of days, that can raise a red flag in our system, and it alerts one of our neighborhood coaches just to reach out to that person and say, "Well, John, what's going on? You haven't been out for a walk for a few days. We know you like to walk, what's happening?" And these early warning signs are really important when we think of the long-term effects of loneliness: getting upstream of those, preventing it from reaching a point where it moves from being a problem into being a crisis. And the earlier we can detect that, the more chance we've got of these negative long-term outcomes being mitigated.
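The alerting rule Jamie describes, flag a resident whose self-reported wellbeing has been on a low trajectory for a number of days, can be stated very compactly. Here is a minimal sketch under invented details: the 1-to-5 mood score, the three-day window, and the threshold of 2 are assumptions for illustration, not Hafod's actual parameters.

from collections import deque

def make_wellbeing_monitor(window: int = 3, threshold: float = 2.0):
    """Returns a function that accepts one daily wellbeing score and
    reports True when the last `window` scores all sit at or below
    `threshold`, i.e. the 'low trajectory' red flag described above."""
    scores = deque(maxlen=window)

    def record(score: float) -> bool:
        scores.append(score)
        return len(scores) == window and all(s <= threshold for s in scores)

    return record

monitor = make_wellbeing_monitor()
for day, mood in enumerate([4, 3, 2, 2, 1], start=1):
    if monitor(mood):
        print(f"Day {day}: flag for a neighborhood coach to reach out")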
>> You know, one of the things we see in the cloud business, it's kind of a separate track but it relates to the real-world work you're doing here, is automation and AI and machine learning bringing in a lot of value if applied properly. So how are you guys seeing, I can almost imagine, the patterns coming in, right? Do you see patterns in the data? How does AI and analytics technology improve this process, especially the emotional wellbeing of the elderly? >> I think one of the things we've learned through the pilot study we've done is there's not one size fits all. You know, all those people are very different individuals. They have very different habits. There are some people not sleeping through the night. There are some people wanting to be out early, wanting to be social. For some people, you have to put in much more. So it's definitely not one size fits all. And automation and digitalization of those kinds of services is really challenging, because if they're not personalized, it doesn't really catch the interest or the need of the individuals. So for me, as an IT professional who's been in the industry for 20-plus years, I think this is the time where personalization really has a true meaning: personalization at scale, for people who are not digitally literate, and also in more vulnerable settings, 'cause there are just so many different angles that can make them vulnerable. Maybe it's their body, maybe it's their economic position, their social condition; there are so many variations of all of that. So I think this is one of the use cases that has to be powered by technology, to complement the human side of it, if we really want to start scaling the services we provide to people in general. Meaning, obviously, in all the Western countries we're all growing old, it's no secret. In 20 years' time the majority of us will be old, and we'll obviously need people to take care of us. And at the moment, we don't have that population to take care of us coming up. So really, to crack those kinds of challenges, we need to have technology powering and helping the human side, to make it more efficient, connected, and human. >> It's interesting. I just did a story where you have these bots that use facial recognition via cameras and can detect, in hospitals or in care settings, how patients feel. So you see where this is going. Jas, I've got to ask you how all this changes the home care model and how Hafod works: your workforce, the carers' culture, the consortium you guys are bringing to the table, partners. You know, this is an ecosystem now, it's a system. >> Yes, John. I think it's also worth talking a little bit about the pressures on state governments around public health issues, which are coming to the fore. Clearly we need to develop alternative ways to engage with mass audiences, and technology is going to be absolutely key. One of the challenges I still think we've not resolved at the U.K. level, and this is probably a global issue, is data protection. When we're talking to cross-governmental agencies, it's about sharing data and establishing protocols, and we've enjoyed a few challenging conversations with colleagues around data protection. So I think those need to be set out in the context of the journey of this particular project. I think what's interesting around COVID is that it hasn't materially changed the nature of what we do; our focus and our work remain the same. But what we're seeing is very clear evidence of the ways, I mean, who would have thought, 12 months ago, that the majority of our workforce would be working from home? So we rapidly mobilized to ensure that people could set up and use IT at home effectively. And then, how does that relationship impact people in the communities we're serving, some of whom have got access to technology and others who haven't? That's been, I think, the biggest change, and it is a fundamental change in the design and delivery of the future services that organizations like us will be providing. So I would say that overall, some things remain the same by and large, but technology is having an absolutely profound effect on the way our engagement with customers will go forward. >> Well, you guys are on the front end of some massive innovation here with this Art of the Possible, and you're really delivering impact. And I think this is an example of that. And you brought up the data challenges; this is something that you guys call privacy by design. This is a cutting-edge issue here, because there are benefits around managing privacy properly. And I think here, your solution clearly has value, right? No one can debate that. But as these little blockers get in the way, what's your reaction to that? 'Cause this certainly is something that has to be solved. I mean, it's a problem. >> Yeah, so when we designed the solution, we co-designed it with the end-users, actually.
We had up to 14 lawyers working with us at one point in time, looking at different kinds of angles. So we definitely tackled the solution with privacy by design in mind, and with end users, but obviously you can't co-design with thousands of people; you have to co-design with a representative subset of a cohort. And some of the challenge we find is, obviously, the media have done a lot of scaremongering around technology, AI, and all those kinds of things, especially for people who are not necessarily digitally literate, people who are just not in it. And when we go and deploy the solution, people are a little bit worried. When we set them up, we obviously explain to them what's going to happen, whether they're happy, whether they want to consent, and all that kind of thing. But people are scared; they're just jumping onto a technology, and on top of it we're asking them questions around consent. So it's not that the solution isn't super secure, we've gone through millions of hoops within Accenture, but also with Hafod itself. It's more that the type of user we're deploying the solution to is just not in that world, and they're a little bit worried about sharing. Not only are they worried about sharing with us, but, you know, in home care there's an option as well to share some of that data with your family. And there we also see that people are kind of okay to share with us, but they don't want to share with their family, 'cause they don't want too much information potentially worrying or bothering some of their family members. So there is definitely a huge education angle to embracing the technology, not only when you create the solution but when you actually deploy it with users. >> It's a fabulous project, I am so excited by this story. It's a great story; it has all the elements: technology, innovation, societal impact, data privacy, social interactions, whether with family members and others, internal and external, and teams themselves. You guys are doing some amazing work, thank you for sharing. It's a great project, we'll keep track of it. My final question for you guys is, what comes next for home care after the trial? What are Hafod's plans and hopes for the future? >> Maybe if I just give an overview and then invite Jamie and Laetitia. So for us, without conversations you don't create possibilities, and this really is a reflection of the culture that we try to engender. So my ask of my team is to remain curious, to continue to explore opportunities, because it's home care up to today; it could be something else tomorrow. We also recognize that we live in a world of collaboration; we need more cross-industry partnerships. We'd love to explore more things with Accenture, Amazon, and others as well. So that's principally what I will be doing, ensuring that the culture invites it, and then I hand over to the clever people like Jamie and Laetitia to get on with the technology. >> I think for me, we've already learned an awful lot about home care, and there's clearly a lot more we can learn. We'd love to build on this initial small-scale trial and see how home care could work at a bigger scale. So how would it work with thousands of users? How do we scale it up from a cohort of 50 to a cohort of 5,000? How does it work when we bring different kinds of organizations into that mix? What if, for example, we could integrate it into health care?
So a variety of services could have a holistic view of an individual and interact with one another, to put that person on the right pathway and maybe keep them out of the health and care system for longer, actually reducing the costs to the system in the long run and improving that person's outcomes. That kind of evidence speaks to decision-makers and political partners, and I think that's the kind of evidence we need to build. >> Yeah, the financial impact is there, it's brutal; it's a great financial impact for the system. Efficiency, better care, everything. >> Yeah, and we are 100% on board for whatever comes next. >> Laetitia-- >> What about you, Laetitia? >> Great program you've got there. An amazing story, thank you for sharing. Congratulations on this awesome project. So much to unpack here. I think this is the future. I mean, I think this is a case study that represents all the moving parts that need to be worked on, so congratulations. >> Thank you. >> Thank you. >> This has been Art of the Possible here inside the Cube, part of the AWS Reinvent Executive Summit. I'm John Furrier, your host, thanks for watching. (bright upbeat music)

Published Date : Nov 9 2021

Jas Bains, Laetitia Cailleteau and Jamie Smith AWS Executive Summit 2021


 

(bright upbeat music) >> Welcome to The Cube. We're here for the AWS Executive Summit part of Reinvent 2021. I'm John Farrow, your host of the Cube. We've got a great segment focus here, Art of the Possible is the segment. Jas Bains, Chief Executive at Hafod and Jamie Smith, director of research and innovation and Laetitia Cailleteau who's the global lead of conversational AI at Accenture. Thanks for joining me today for this Art of the Possible segment. >> Thank you. >> So tell us a little bit about Hafod and what you guys are doing to the community 'cause this is a really compelling story of how technology in home care is kind of changing the game and putting a stake in the ground. >> Yeah, so Hafod is one of the largest not for profits in Wales. We employ about 1400 colleagues. We have three strands a service, which practices on key demographics. So people who are vulnerable and socioeconomically disadvantaged. Our three core strands of service are affordable housing, we provide several thousand homes to people in housing need across Wales. We also are an extensive provider of social provision, both residential and in the community. And then we have a third tier, which is a hybrid in between. So that supports people who are not quite ready for independent living but neither are they ready for residential care. So that's a supportive provision. I suppose what one of the things that marks Hafod out and why we're here in this conversation is that we're uniquely placed as one of the organizations that actually has a research and innovation capacity. And it's the work of the research and innovation capacity led by Jamie that brought about this collaboration with Accenture which is great in great meaning and benefits. So thousands of our customers and hopefully universal application as it develops. >> You know this is a really an interesting discussion because multiple levels, one, the pandemic accelerated this needs so, I want to get comments on that. But two, if you look at the future of work and work and home life, you seeing the convergence of where people live. And I think this idea of having this independent home and the ecosystem around it, there's a societal impact as well. So what brought this opportunity together? How did this come together with Accenture and AWS? >> We're going for Jamie and Laetitia. >> Yeah, I can start. Well, we were trying to apply for the LC Aging Grand Challenge in the U.K., so the United Kingdom recognized the need for change around independent living and run a grand challenge. And then we got together as part of this grand challenge. You know, we had some technology, we had trialed with AGK before and Hanover Housing Association. Hafod was really keen to actually start trying some of that technology with some of the resident. And we also worked with Swansea University, was doing a lot of work around social isolation and loneliness. And we came together to kind of pitch for the grand challenge. And we went quite far actually, unfortunately we didn't win but we have built such a great collaboration that we couldn't really let it be, you know, not going any further. And we decided to continue to invest in this idea. And now we here, probably 18 months on with a number of people, Hafod using the technology and a number of feedbacks and returns coming back and us having a grand ambitions to actually go much broader and scale this solution. 
>> Jas and Jamie, I'd love to get your reaction and commentary on this trend of tech for good because I mean, I'm sure you didn't wake up, oh, just want to do some tech for good. You guys have an environment, you have an opportunity, you have challenges you're going to turn into opportunities. But if you look at the global landscape right now, things that are jumping out at us are looking at the impact of social media on people. You got the pandemic with isolation, this is a first order problem in this new world of how do we get technology to change how people feel and make them better in their lives. >> Yeah, I think for us, the first has to be a problem to solve. There's got to be a question to be answered. And for us, that was in this instance, how do we mitigate loneliness and how do we take services that rely on person to person contact and not particularly scalable and replicate those through technology somehow. And even if we can do 10% of the job of that in-person service then for us, it's worth it because that is scalable. And there are lots of small interventions we can make using technology which is really efficient way for us to support people in the community when we just can't be everywhere at once. >> So, John, just to add, I think that we have about 1500 people living in households that are living alone and isolated. And I think the issue for us was more than just about technology because a lot of these people don't have access to basic technology features that most of us would take for granted. So far this is a two-prong journey. One is about increasing the accessibility to tech and familiarizing people so that they're comfortable with these devices technology and two importantly, make sure that we have the right means to help people reduce their loneliness and isolation. So the opportunity to try out something over the last 12 months, something that's bespoke, that's customized that will undoubtedly be tweaked as we go forward has been an absolutely marvelous opportunity. And for us, the collaboration with Accenture has been absolutely key. I think what we've seen during COVID is cross-fertilization. We've seen multi-disciplinary teams, we've got engineers, architects, manufacturers, and clinicians, and scientists, all trying to develop new solutions around COVID. And I think this probably just exemplary bias, especially as a post COVID where industry and in our case for example public sector and academia working together. >> Yeah, that's a great example and props to everyone there. And congratulations on this really, really important initiative. Let's talk about the home care solution. What does it do? How does it work? Take us through what's happening? >> Okay, so Home Care is actually a platform which is obviously running on AWS technology and this particular platform is the service offered accessible via voice through the Alexa device. We use the Echo Show to be able to use voice but also visuals to kind of make the technology more accessible for end user. On the platform itself, we have a series of services available out there. We connecting in the background a number of services from the community. So in the particular case of Hafod, we had something around shopping during the pandemic where we had people wanting to have access to their food bank. Or we also had during the pandemic, there was some need for having access to financial coaching and things like that. 
So we actually brought all of the service on the platform and the skills and this skill was really learning how to interact with the end user. And it was all customized for them to be able to access those things in a very easy way. It did work almost too well because some of our end users have been a kind of you know, have not been digital literate before and it was working so well, they were like, "But why can't it do pretty much anything on the planet? "Why can't it do this or that?" So the expectations were really, really high but we did manage to bring comfort to Hafod residents in a number of their daily kind of a need, some of the things during COVID 'cause people couldn't meet face to face. There was some challenge around understanding what events are running. So the coaches would publish events, you know, through the skills and people would be able to subscribe and go to the event and meet together virtually instead of physically. The number of things that really kind of brought a voice enabled experience for those end users. >> You know, you mentioned the people like the solution just before we, I'm going to get the Jamie in a second, but I want to just bring up something that you brought up. This is a digital divide evolution because digital divide, as Josh was saying, is that none about technology,, first, you have to access, you need access, right? First, then you have to bring broadband and internet access. And then you have to get the technology in the home. But then here it seems to be a whole nother level of digital divide bridging to the new heights. >> Yeah, completely, completely. And I think that's where COVID has really accelerated the digital divide before the solution was put in place for Hafod in the sense that people couldn't move and if they were not digitally literate, it was very hard to have access to services. And now we brought this solution in the comfort of their own home and they have the access to the services that they wouldn't have had otherwise on their own. So it's definitely helping, yeah. >> It's just another example of people refactoring their lives or businesses with technology. Jamie, what's your take on the innovation here and the technical aspects of the home care solutions? >> I think the fact that it's so easy to use, it's personalized, it's a digital companion for the home. It overcomes that digital divide that we talked about, which is really important. If you've got a voice you can use home care and you can interact with it in this really simple way. And what I love about it is the fact that it was based on what our customers told us they were finding difficult during this time, during the early lockdowns of the pandemic. There was 1500 so people Jas talked about who were living alone and at risk of loneliness. Now we spoke to a good number of those through a series of welfare calls and we found out exactly what it is they found challenging. >> What were some of the things that they were finding challenging? >> So tracking how they feel on a day-to-day basis. What's my mood like, what's my wellbeing like, and knowing how that changes over time. Just keeping the fridge in the pantry stocked up. What can I cook with these basic ingredients that I've got in my home? You could be signposted to basic resources to help you with that. 
Staying connected to the people who are really important to you but the bit that shines out for me is the interface with our services, with our neighborhood coaching service, where we can just give these little nudges, these little interventions just to mitigate and take the edge of that loneliness for people. We can see the potential of that coming up to the pandemic, where you can really encourage people to interact with one another, to be physically active and do all of those things that sort of mitigate against loneliness. >> Let me ask you a question 'cause I think a very important point. The timing of the signaling of data is super important. Could you comment on the relevance of having access to data? If you're getting something connected, when you're connected like this, I can only imagine the benefits. It's all about timing, right? Knowing that someone might be thinking some way or whether it's a tactical, in any scenario, timing of data, the right place at the right time, as they say. What's your take on that 'cause it sounds like what you're saying is that you can see things early when people are in the moment. >> Yeah, exactly. So if there's a trend beginning to emerge, for example, around some of these wellbeing, which has been on a low trajectory for a number of days, that can raise a red flag in our system and it alerts one of our neighborhood coaches just to reach out to that person and say, "Well, John, what's going on? "You haven't been out for a walk for a few days. "We know you like to walk, what's happening?" And these early warning signs are really important when we think of the long-term effects of loneliness and how getting upstream of those, preventing it reaching a point where it moves from being a problem into being a crisis. And the earlier we can detect that the more chance we've got of these negative long-term outcomes being mitigated. >> You know, one of the things we see in the cloud business is kind of separate track but it kind of relates to the real world here that you're doing, is automation and AI and machine learning bringing in a lot of value if applied properly. So how are you guys seeing, I can almost imagine that patterns are coming in, right? Do you see patterns in the data? How does AI and analytics technology improve this process especially with the wellbeing and emotional wellbeing of the elderly? >> I think one of the things we've learned through the pilot study we've done is there's not one size fits all. You know, all those people are very different individuals. They have very different habits. You know, there's some people not sleeping over the night. There's some people wanting to be out early, wanting to be social. Some people you have to put in much more. So it's definitely not one size fits all. And automation and digitalization of those kinds of services is really challenging because if they're not personalized, it doesn't really catch the interest or the need of the individuals. So for me as an IT professional being in the industry for like a 20 plus years, I think this is the time where personalization has really a true meaning. Personalization at scale for those people that are not digitally literate. But also in more vulnerable settings 'cause there's just so many different angles that can make them vulnerable. Maybe it's the body, maybe it's the economy position, their social condition, there's so many variation of all of that. 
So I think this is one of the use case that has to be powered by technology to complement the human side of it. If we really want to start scaling the services we provide to people in general, meaning obviously, in all the Western country now we all growing old, it's no secret. So in 20 years time the majority of everybody will be old and we obviously need people to take care of us. And at the moment we don't have that population to take care of us coming up. So really to crack on those kinds of challenges, we really need to have technology powering and just helping the human side to make it more efficient, connected than human. >> It's interesting. I just did a story where you have these bots that look at the facial recognition via cameras and can detect either in hospitals and or in care patients, how they feel. So you see where this is going. Jas I got to ask you how all this changes, the home care model and how Hafod works. Your workforce, the career's culture, the consortium you guys are bringing to the table, partners, you know this is an ecosystem now, it's a system. >> Yes John, I think that probably, it's also worth talking a little bit about the pressures on state governments around public health issues which are coming to the fore. And clearly we need to develop alternative ways that we engage with mass audiences and technology is going to be absolutely key. One of the challenges I still think that we've not resolved in the U.K. level, this is probably a global issue, is about data protection. When we're talking to cross governmental agencies, it's about sharing data and establishing protocols and we've enjoyed a few challenging conversations with colleagues around data protection. So I think those need to be set out in the context of the journey of this particular project. I think that what's interesting around COVID is that, hasn't materially changed the nature in which we do things, probably not in our focus and our work remains the same. But what we're seeing is very clear evidence of the ways, I mean, who would have thought that 12 months ago, the majority of our workforce would be working from home? So rapid mobilization to ensure that people can use, set IT home effectively. And then how does that relationship impact with people in the communities we're serving? Some of whom have got access to technology, others who haven't. So that's been, I think the biggest change, and that is a fundamental change in the design and delivery of future services that organizations like us will be providing. So I would say that overall, some things remain the same by and large but technology is having an absolutely profound change in the way that our engagement with customers will go forward. >> Well, you guys are in the front end of some massive innovation here with this, are they possible and that, you're really delivering impact. And I think this is an example of that. And you brought up the data challenges, this is something that you guys call privacy by design. This is a cutting edge issue here because there are benefits around managing privacy properly. And I think here, your solution clearly has value, right? And no one can debate that, but as these little blockers get in the way, what's your reaction to that? 'Cause this certainly is something that has to be solved. I mean, it's a problem. >> Yeah, so we designed a solution, I think we had, when we design, I co-designed with your end-users actually. 
We had up to 14 lawyers working with us at one point in time looking at different kinds of angles. So definitely really tackle the solution with privacy by design in mind and with end users but obviously you can't co-design with thousands of people, you have to co-design with a representative subset of a cohort. And some of the challenge we find is obviously, the media have done a lot of scaremongering around technology, AI and all of that kind of things, especially for people that are not necessarily digitally literate, people that are just not in it. And when we go and deploy the solution, people are a little bit worried. When we make them, we obviously explain to them what's going to happen if they're happy, if they want to consent and all that kind of things. But the people are scared, they're just jumping on a technology on top of it we're asking them some questions around consent. So I think it's just that the solution is super secured and we've gone over millions of hoops within Accenture but also with Hafod itself. You know, it's more that like the type of user we deploying the solution to are just not in that world and then they are little bit worried about sharing. Not only they're worried about sharing with us but you know, in home care, there there's an option as well to share some of that data with your family. And there we also see people are kind of okay to share with us but they don't want to share with their family 'cause they don't want to have too much information kind of going potentially worrying or bothering some of their family member. So there is definitely a huge education kind of angle to embracing the technology. Not only when you create the solution but when you actually deploy it with users. >> It's a fabulous project, I am so excited by this story. It's a great story, has all the elements; technology, innovation, cidal impact, data privacy, social interactions, whether it's with family members and others, internal, external. In teams themselves. You guys doing some amazing work, thank you for sharing. It's a great project, we'll keep track of it. My final question for you guys is what comes next for the home care after the trial? What are Hafod's plans and hopes for the future? >> Maybe if I just give an overview and then invite Jamie and Laetitia. So for us, without conversations, you don't create possibilities and this really is a reflection of the culture that we try to engender. So my ask of my team is to remain curious, is to continue to explore opportunities because it's home care up to today, it could be something else tomorrow. We also recognize that we live in a world of collaboration. We need more cross industrial partnerships. We love to explore more things that Accenture, Amazon, others as well. So that's principally what I will be doing is ensuring that the culture invites us and then I hand over to the clever people like Jamie and Laetitia to get on with the technology. I think for me we've already learned an awful lot about home care and there's clearly a lot more we can learn. We'd love to build on this initial small-scale trial and see how home care could work at a bigger scale. So how would it work with thousands of users? How do we scale it up from a cohort of 50 to a cohort of 5,000? How does it work when we bring different kinds of organizations into that mix? So what if, for example, we could integrate it into health care? 
So a variety of services can have a holistic view of an individual and interact with one another, to put that person on the right pathway and maybe keep them out of the health and care system for longer, actually reducing the costs to the system in the long run and improving that person's outcomes. That kind of evidence speaks to decision-makers and political partners and I think that's the kind of evidence we need to build. >> Yeah, financial impact is there, it's brutal. It's a great financial impact for the system. Efficiency, better care, everything. >> Yeah and we are 100% on board for whatever comes next. >> Laetitia-- >> What about you Laetitia? >> Great program you got there. A amazing story, thank you for sharing. Congratulations on this awesome project. So much to unpack here. I think this is the future. I mean, I think this is a case study of represents all the moving parts that need to be worked on, so congratulations. >> Thank you. >> Thank you. >> We are the Art of the Possible here inside the Cube, part of AWS Reinvent Executive Summit, I'm John Furrier, your host, thanks for watching. (bright upbeat music)

Published Date : Oct 27 2021


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Jamie | PERSON | 0.99+
Laetitia | PERSON | 0.99+
Laetitia Cailleteau | PERSON | 0.99+
Josh | PERSON | 0.99+
Amazon | ORGANIZATION | 0.99+
John Farrow | PERSON | 0.99+
John | PERSON | 0.99+
10% | QUANTITY | 0.99+
Jas Bains | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
John Furrier | PERSON | 0.99+
Jamie Smith | PERSON | 0.99+
Jas | PERSON | 0.99+
Accenture | ORGANIZATION | 0.99+
U.K. | LOCATION | 0.99+
Wales | LOCATION | 0.99+
thousands | QUANTITY | 0.99+
Hanover Housing Association | ORGANIZATION | 0.99+
Swansea University | ORGANIZATION | 0.99+
100% | QUANTITY | 0.99+
20 plus years | QUANTITY | 0.99+
AGK | ORGANIZATION | 0.99+
Echo Show | COMMERCIAL_ITEM | 0.99+
First | QUANTITY | 0.99+
one point | QUANTITY | 0.99+
first | QUANTITY | 0.99+
one | QUANTITY | 0.99+
tomorrow | DATE | 0.99+
two | QUANTITY | 0.99+
50 | QUANTITY | 0.99+
One | QUANTITY | 0.99+
12 months ago | DATE | 0.98+
both | QUANTITY | 0.98+
18 months | QUANTITY | 0.98+
pandemic | EVENT | 0.98+
today | DATE | 0.98+
Hafod | ORGANIZATION | 0.98+
three core strands | QUANTITY | 0.98+
about 1400 colleagues | QUANTITY | 0.98+
three strands | QUANTITY | 0.98+
5,000 | QUANTITY | 0.98+
Hafod | LOCATION | 0.97+
20 years | QUANTITY | 0.97+
United Kingdom | LOCATION | 0.97+
third tier | QUANTITY | 0.97+
Hafod | PERSON | 0.97+
thousands of users | QUANTITY | 0.96+
Reinvent 2021 | EVENT | 0.95+
first order | QUANTITY | 0.95+
Alexa | TITLE | 0.94+
about 1500 people | QUANTITY | 0.93+
COVID | EVENT | 0.92+
up to 14 lawyers | QUANTITY | 0.92+
AWS Executive Summit | EVENT | 0.9+
1500 so | QUANTITY | 0.88+
two-prong journey | QUANTITY | 0.84+



 

>>Okay, this is my presentation on coherent nonlinear dynamics and combinatorial optimization. This is going to be a talk to introduce an approach we're taking to the analysis of the performance of coherent Ising machines. So let me start with a brief introduction to Ising optimization. The Ising model represents a set of interacting magnetic moments, or spins, with the total energy given by the expression shown at the bottom left of this slide. Here, the sigma variables are binary values, the matrix element J_ij represents the interaction strength and sign between any pair of spins i and j, and h_i represents a possible local magnetic field acting on each spin. The Ising ground-state problem is to find an assignment of binary spin values that achieves the lowest possible value of the total energy, and an instance of the Ising problem is specified by giving numerical values for the matrix J and the vector h. Although the Ising model originates in physics, we understand the ground-state problem to correspond to what would be called quadratic binary optimization in the field of operations research, and in fact, in terms of computational complexity theory, it can be established that the Ising ground-state problem is NP-complete. Qualitatively speaking, this makes the Ising problem a representative sort of hard optimization problem, for which it is expected that the runtime required by any computational algorithm to find exact solutions should asymptotically scale exponentially with the number of spins N, for worst-case instances at each N. Of course, there's no reason to believe that the problem instances that actually arise in practical optimization scenarios are going to be worst-case instances, and it's also not generally the case in practical optimization scenarios that we demand absolute optimum solutions. Usually we're more interested in just getting the best solution we can within an affordable cost, where cost may be measured in terms of time, service fees, and/or energy required for a computation. This focuses great interest on so-called heuristic algorithms for the Ising problem and other NP-complete problems, which generally get very good but not guaranteed-optimum solutions, and run much faster than algorithms that are designed to find absolute optima.
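To make the ground-state formulation above concrete, here is a minimal Python sketch; it assumes the conventional form H(s) = -(1/2) sum_ij J_ij s_i s_j - sum_i h_i s_i (the talk's slide is not reproduced here), and the toy instance and brute-force search are purely illustrative, feasible only for small N:

```python
import itertools
import numpy as np

def ising_energy(s, J, h):
    """Total energy H(s) = -1/2 * s^T J s - h . s for spins s_i in {-1, +1}."""
    return -0.5 * s @ J @ s - h @ s

def brute_force_ground_state(J, h):
    """Exhaustive search over all 2^N spin assignments (exponential cost)."""
    n = len(h)
    best_s, best_e = None, np.inf
    for bits in itertools.product([-1, 1], repeat=n):
        s = np.array(bits, dtype=float)
        e = ising_energy(s, J, h)
        if e < best_e:
            best_s, best_e = s, e
    return best_s, best_e

# Toy antiferromagnetic triangle: frustrated, so the ground state is degenerate.
J = np.array([[0, -1, -1], [-1, 0, -1], [-1, -1, 0]], dtype=float)
h = np.zeros(3)
print(brute_force_ground_state(J, h))
```

The 2^N loop is exactly the exponential worst-case scaling mentioned above, which is what heuristics, and the hardware discussed next, try to sidestep.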
To get some feeling for present-day numbers, we can consider the famous traveling salesman problem (TSP), for which extensive compilations of benchmarking data may be found online. A recent study found that the best-known TSP solver required median run times, across a library of problem instances, that scaled as a very steep root-exponential for N up to approximately 4,500. This gives some indication of the change in runtime scaling for generic, as opposed to worst-case, problem instances. Some of the instances considered in this study were taken from a public library of TSPs derived from real-world VLSI design data. This VLSI TSP library includes instances with N ranging from 131 to 744,710. Instances from this library with N between 6,880 and 13,584 were first solved just a few years ago, in 2017, requiring days of run time on a 48-core 2-GHz cluster, while instances with N greater than or equal to 14,233 remain unsolved exactly by any means. Approximate solutions, however, have been found by heuristic methods for all instances in the VLSI TSP library, with, for example, a solution within 0.14% of a known lower bound having been discovered for an instance with N equal to 19,289, requiring approximately two days of run time on a single core at 2.4 GHz. Now, if we simple-mindedly extrapolate the root-exponential scaling from that study, which was fit up to N of approximately 4,500, we might expect that an exact solver would require something more like a year of run time on the 48-core cluster for the N = 13,584 instance, which shows how much a very small concession on the quality of the solution makes it possible to tackle much larger instances with much lower cost. At the extreme end, the largest TSP ever solved exactly has N equal to 85,900. This is an instance derived from a 1980s VLSI design, and it required 136 CPU-years of computation, normalized to a single core at 2.4 GHz. But the much larger so-called World TSP benchmark instance, with N equal to 1,904,711, has been solved approximately, with an optimality gap bounded below 0.474%. Coming back to the general practical concerns of applied optimization, we may note that a recent meta-study analyzed the performance of no fewer than 37 heuristic algorithms for MAX-CUT and quadratic binary optimization problems, and found that different heuristics work best for different problem instances selected from a large-scale heterogeneous test bed, with some evidence of cryptic structure in terms of what types of problem instances were best solved by any given heuristic. Indeed, there are reasons to believe that these results from MAX-CUT and quadratic binary optimization reflect a general principle of performance complementarity among heuristic optimization algorithms. In the practice of solving hard optimization problems, there thus arises a critical pre-processing issue of trying to guess which of a number of available good heuristic algorithms should be chosen to tackle a given problem instance. Assuming that any one of them would incur high costs to run on a large problem instance, making an astute choice of heuristic is a crucial part of maximizing overall performance. Unfortunately, we still have very little conceptual insight about what makes a specific problem instance good or bad for any given heuristic optimization algorithm, and this has certainly been pinpointed by researchers in the field as a circumstance that must be addressed. So adding this all up, we see that a critical frontier for cutting-edge academic research involves both the development of novel heuristic algorithms that deliver better performance with lower cost on classes of problem instances that are underserved by existing approaches, as well as fundamental research to provide deep conceptual insight into what makes a given problem instance easy or hard for such algorithms. In fact, these days, as we talk about the end of Moore's law and speculate about a so-called second quantum revolution, it's natural to talk not only about novel algorithms for conventional CPUs, but also about highly customized special-purpose hardware architectures on which we may run entirely unconventional algorithms for combinatorial optimization, such as the Ising problem. So against that backdrop, I'd like to use my remaining time to introduce our work on analysis of coherent Ising machine architectures and associated optimization algorithms.
These machines, in general, are a novel class of information-processing architectures for solving combinatorial optimization problems by embedding them in the dynamics of analog, physical, or cyber-physical systems, in contrast both to more traditional engineering approaches that build Ising machines using conventional electronics, and to more radical proposals that would require large-scale quantum entanglement. The emerging paradigm of coherent Ising machines leverages coherent nonlinear dynamics in photonic or opto-electronic platforms to enable near-term construction of large-scale prototypes that exploit post-CMOS information dynamics. The general structure of current CIM systems is shown in the figure on the right. The role of the Ising spins is played by a train of optical pulses circulating around a fiber-optic storage ring. A beam splitter inserted in the ring is used to periodically sample the amplitude of every optical pulse, and the measurement results are continually read into an FPGA, which uses them to compute perturbations to be applied to each pulse by synchronized optical injection. These perturbations are engineered to implement the spin-spin coupling and local magnetic field terms of the Ising Hamiltonian, corresponding to the linear part of the CIM dynamics; a synchronously pumped parametric amplifier, denoted here as a PPLN waveguide, adds a crucial nonlinear component to the CIM dynamics as well. In the basic CIM algorithm, the pump power starts very low and is gradually increased. At low pump powers, the amplitudes of the Ising spin pulses behave as continuous complex variables, whose real parts, which can be positive or negative, play the role of soft, or perhaps mean-field, spins. Once the pump power crosses the threshold for parametric self-oscillation in the optical fiber ring, however, the amplitudes of the Ising spin pulses become effectively quantized into binary values. While the pump power is being ramped up, the FPGA subsystem continuously applies its measurement-based feedback implementation of the Ising Hamiltonian terms. The interplay of the linearized Ising dynamics implemented by the FPGA and the threshold quantization dynamics provided by the synchronously pumped parametric amplifier results in a final state of the optical pulse amplitudes, at the end of the pump ramp, that can be read out as a binary string, giving a proposed solution of the Ising ground-state problem. This method of solving the Ising problem seems quite different from a conventional algorithm that runs entirely on a digital computer, as a crucial aspect of the computation is performed physically by the analog, continuous, coherent nonlinear dynamics of the optical degrees of freedom. In our efforts to analyze CIM performance, we have therefore turned to the tools of dynamical systems theory: namely, a study of bifurcations, the evolution of critical points, and the topologies of heteroclinic orbits and basins of attraction. We conjecture that such analysis can provide fundamental insight into what makes certain optimization instances hard or easy for coherent Ising machines, and hope that our approach can lead both to improvements of the core CIM algorithm and to a pre-processing rubric for rapidly assessing the CIM suitability of new instances.
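As a rough illustration of the pump-ramp dynamics just described, here is a minimal mean-field sketch; this is not the actual FPGA feedback code, and the gain form, coupling strength, and ramp schedule are invented for illustration:

```python
import numpy as np

def cim_mean_field(J, steps=4000, dt=0.01, eps=0.1, p_max=2.0, seed=0):
    """Integrate dx_i/dt = (p(t) - 1 - x_i^2) x_i + eps * sum_j J_ij x_j
    while ramping the pump p(t) from 0 to p_max, then read spins as sign(x)."""
    rng = np.random.default_rng(seed)
    n = J.shape[0]
    x = 1e-3 * rng.standard_normal(n)    # near-vacuum initial amplitudes
    for t in range(steps):
        p = p_max * t / steps            # linear pump ramp
        dx = (p - 1.0 - x**2) * x + eps * (J @ x)
        x += dt * dx
    return np.sign(x)

# Ferromagnetic ring of 6 spins: expect all spins aligned at the end of the ramp.
n = 6
J = np.zeros((n, n))
for i in range(n):
    J[i, (i + 1) % n] = J[(i + 1) % n, i] = 1.0
print(cim_mean_field(J))
```

Below threshold the amplitudes stay near zero; once p(t) crosses threshold each x_i settles to a positive or negative value, and the final signs are read out as the proposed spin configuration.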
Okay, to provide a bit of intuition about how this all works, it may help to consider the threshold dynamics of just one or two optical parametric oscillators in the CIM architecture just described. We can think of each of the pulse time slots circulating around the fiber ring as representing an independent OPO. We can think of a single OPO degree of freedom as a single resonant optical mode that experiences linear dissipation, due to out-coupling loss, and gain in a pumped nonlinear crystal, as shown in the diagram on the upper left of this slide. As the pump power is increased from zero, as in the CIM algorithm, the nonlinear gain is initially too low to overcome the linear dissipation, and the OPO field remains in a near-vacuum state. At a critical threshold value, gain equals dissipation, and the OPO undergoes a sort of lasing transition; the steady states of the OPO above this threshold are essentially coherent states. There are actually two possible values of the OPO coherent amplitude at any given above-threshold pump power, which are equal in magnitude but opposite in phase. When the OPO crosses this threshold, it basically chooses one of the two possible phases randomly, resulting in the generation of a single bit of information. If we consider two uncoupled OPOs, as shown in the upper right diagram, pumped at exactly the same power at all times, then as the pump power is increased through threshold, each OPO will independently choose a phase, and thus two random bits are generated. For any number of uncoupled OPOs, the threshold power per OPO is unchanged from the single-OPO case. Now, however, consider a scenario in which the two OPOs are coupled to each other by a mutual injection of their out-coupled fields, as shown in the diagram on the lower right. One can imagine that, depending on the sign of the coupling parameter alpha, when one OPO is lasing, it will inject a perturbation into the other that may interfere either constructively or destructively with the field that the other is trying to generate by its own lasing process. As a result, one can easily show that for alpha positive there's an effective ferromagnetic coupling between the two OPO fields, and their collective oscillation threshold is lowered from that of the independent-OPO case, but only for the collective oscillation mode in which the two OPO phases are the same; for alpha negative, the collective oscillation threshold is lowered only for the configuration in which the OPO phases are opposite. So then, looking at how alpha is related to the J_ij matrix of the Ising spin-coupling Hamiltonian, it follows that we could use this simplistic two-OPO CIM to solve the ground-state problem of a ferromagnetic or antiferromagnetic N = 2 Ising model, simply by increasing the pump power from zero and observing what phase relation occurs as the two OPOs first start to lase.
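This two-OPO threshold argument has a simple linear-algebra form: near the origin, the linearized network dynamics is dx/dt = (p - 1)x + (alpha J)x, so the collective mode that reaches threshold first, the "first bifurcated critical point" in the language used below, is the eigenvector of alpha J with the largest eigenvalue. A hedged sketch (the numerical values are arbitrary):

```python
import numpy as np

def collective_thresholds(alpha_times_J):
    """Linearized OPO network near x = 0: dx/dt = (p - 1) x + (alpha*J) x.
    Mode k starts to oscillate when p > 1 - lambda_k, so the eigenvector of
    alpha*J with the largest eigenvalue bifurcates first."""
    lam, vecs = np.linalg.eigh(alpha_times_J)
    return 1.0 - lam, vecs

# Two OPOs, ferromagnetic coupling alpha > 0:
alpha = 0.2
J = np.array([[0.0, 1.0], [1.0, 0.0]])
p_th, modes = collective_thresholds(alpha * J)
for thr, mode in zip(p_th, modes.T):
    print(f"threshold p = {thr:.2f} for mode with signs {np.sign(mode)}")
# The in-phase mode (equal signs) has the lower threshold, matching the
# ferromagnetic case described above; flipping the sign of alpha instead
# favors the out-of-phase (antiferromagnetic) mode.
```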
Clearly, we can imagine generalizing this story to larger N; however, the story doesn't stay as clean and simple for all larger problem instances, and to find a more complicated example we only need to go to N = 4. For some choices of J_ij at N = 4 the story remains simple, like the N = 2 case: the figure on the upper left of this slide shows the energies of various critical points for a non-frustrated N = 4 instance, in which the first bifurcated critical point, that is, the one that bifurcates at the lowest pump value, flows asymptotically into the lowest-energy Ising solution. In the figure on the upper right, however, the first bifurcated critical point flows to a very good but suboptimal minimum at large pump power. The global minimum is actually given by a distinct critical point that first appears at a higher pump power and is not adiabatically connected to the origin; the basic CIM algorithm is thus not able to find this global minimum. Such non-ideal behaviors seem to become more common at larger N. For the N = 20 instance shown in the lower plots, where the lower right plot is just a zoom into a region of the lower left plot, it can be seen that the global minimum corresponds to a critical point that first appears at a pump parameter a around 0.16, at some distance from the adiabatic trajectory of the origin. It's curious to note that in both of these small-N examples, however, the critical point corresponding to the global minimum appears relatively close to the adiabatic trajectory of the origin, as compared to most of the other local minima that appear. We're currently working to characterize the phase-portrait topology between the global minimum and the adiabatic trajectory of the origin, taking clues as to how the basic CIM algorithm could be generalized to search for non-adiabatic trajectories that jump to the global minimum during the pump ramp. Of course, N = 20 is still too small to be of interest for practical optimization applications, but the advantage of beginning with the study of small instances is that we're able reliably to determine their global minima and to see how they relate to the adiabatic trajectory of the origin in the basic CIM algorithm. In the small-N limit, we can also analyze fully quantum-mechanical models of CIM dynamics, but that's a topic for future talks. Existing large-scale prototypes are pushing into the range of N equal to 10^4 to 10^5 or 10^6, so our ultimate objective in theoretical analysis really has to be to try to say something about CIM dynamics in the regime of much larger N. Our initial approach to characterizing CIM behavior in the large-N regime relies on the use of random matrix theory, and this connects to prior research on spin glasses, SK models, the TAP equations, etc. At present, we're focusing on statistical characterization of the CIM gradient-descent landscape, including the evolution of critical points and their eigenvalue spectra as the pump power is gradually increased. We're investigating, for example, whether there could be some way to exploit differences in the relative stability of the global minimum versus other local minima. We're also working to understand the deleterious, or potentially beneficial, effects of non-idealities, such as asymmetry in the implemented Ising couplings. Looking one step ahead, we plan to move next in the direction of considering more realistic classes of problem instances, such as quadratic binary optimization with constraints. So in closing, I should acknowledge the people who did the hard work on the things that I've shown: my group, including graduate students Edwin Ng, Daniel Wennberg, Tatsuya Nagamoto, and Atsushi Yamamura, has been working in close collaboration with Surya Ganguli, Marty Fejer, and Amir Safavi-Naeini, all of us within the Department of Applied Physics at Stanford University, and also in collaboration with Yoshihisa Yamamoto over at the NTT PHI research labs. I should also acknowledge funding support from the NSF via the Coherent Ising Machines Expedition in Computing, and from the NTT PHI research labs, the Army Research Office, and ExxonMobil. That's it, thanks very much.
>>I'd like to thank NTT Research and Yoshi for putting together this program, and also for the opportunity to speak here. My name is Alireza Marandi, I'm from Caltech, and today I'm going to tell you about the work that we have been doing on networks of optical parametric oscillators, how we have been using them for Ising machines, and how we're pushing them toward quantum photonics. I want to acknowledge my team at Caltech, which is now eight graduate students and five researchers and postdocs, as well as collaborators from all over the world, including NTT Research, and also the funding from different places, including NTT. So this talk is primarily about networks of resonators, and these networks are everywhere, from nature, for instance the brain, which is a network of oscillators, all the way to optics and photonics. Some of the biggest examples are metamaterials, which are arrays of small resonators, and, more recently, the field of topological photonics, which is trying to implement a lot of the topological behaviors of condensed-matter physics models in photonics. And if you want to extend it even further, some of the implementations of quantum computing are technically networks of quantum oscillators. So we started thinking about these things in the context of Ising machines, which are based on the Ising problem, which is based on the Ising model: the simple summation over the spins, where spins can be either up or down and the couplings are given by the J_ij. And the Ising problem is: if you know J_ij, what is the spin configuration that gives you the ground state? This problem has been shown to be NP-hard. So it's computationally important because it's a representative of the NP problems, and NP problems are important because, first, they're hard for standard computers if you use a brute-force algorithm, and they're everywhere on the application side. That's why there is this demand for making a machine that can target these problems, and hopefully provide some meaningful computational benefit compared to standard digital computers. So I've been building these Ising machines based on this building block, which is a degenerate optical parametric oscillator: a resonator with nonlinearity in it. We pump these resonators and we generate a signal at half the frequency of the pump: one photon of pump splits into two identical photons of signal, and they have some very interesting phase- and frequency-locking behaviors. And if you look at the phase-locking behavior, you realize that you can actually have two possible phase states as the oscillation result of these OPOs, which are off by pi, and that's one of their important characteristics. So I want to emphasize that a little more, and I have this mechanical analogy, which is basically two simple pendulums. But they are parametric oscillators, because I'm going to modulate a parameter of them in this video, which is the length of the string; that modulation acts as the pump, and it makes them oscillate at a signal which is half the frequency of the pump. And I have two of them, to show you that they can acquire these phase states: they're still phase- and frequency-locked to the pump, but they can land in either the zero or the pi phase state. And the idea is to use this binary phase to represent the binary Ising spin, so each OPO is going to represent a spin, which can be either zero or pi, up or down.
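A minimal numerical cartoon of this random phase choice (the pump level and noise strength are invented, and a real DOPO would be a stochastic equation for a complex field, not this one-variable toy):

```python
import numpy as np

def dopo_phase_bit(p=1.5, dt=0.01, steps=2000, noise=0.02, seed=None):
    """Single degenerate OPO above threshold: the in-phase amplitude obeys
    dx/dt = (p - 1) x - x^3 plus noise, and the steady state is one of two
    values +/- sqrt(p - 1) that differ by a pi phase shift: a random bit."""
    rng = np.random.default_rng(seed)
    x = 0.0
    for _ in range(steps):
        x += dt * ((p - 1.0) * x - x**3)
        x += noise * np.sqrt(dt) * rng.standard_normal()
    return 0 if x > 0 else 1

print([dopo_phase_bit(seed=s) for s in range(10)])  # roughly fair coin flips
```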
And to implement the network of these resonators, we use a time-multiplexing scheme. The idea is that we put N pulses in the cavity, separated by the repetition period T_R, and you can think about these pulses in one resonator as N temporally separated synthetic resonators. If you want to couple these resonators to each other, you can introduce delays, each of which is a multiple of T_R. If you look at the shortest delay, it couples resonator 1 to 2, 2 to 3, and so on. If you look at the second delay, which is two times the repetition period, it couples 1 to 3, and so on. And if you have N minus 1 delay lines, then you can have all potential couplings among these N synthetic resonators. And if I can introduce modulators in those delay lines, so that I can control the strength and the phase of these couplings at the right times, then I can have a programmable, all-to-all connected network in this time-multiplexed scheme, and the whole physical size of the system scales linearly with the number of pulses. So the idea of the OPO-based Ising machine is having these OPOs, each of which can be either zero or pi, and being able to arbitrarily connect them to each other. Then I start with programming this machine to a given Ising problem, by just setting the couplings via the controllers in each of those delay lines. So now I have a network which represents an Ising problem, and the Ising problem maps to finding the phase state that satisfies the maximum number of coupling constraints. The way it happens is that the Ising Hamiltonian maps to the linear loss of the network, and if I start adding gain, by just putting pump into the network, then the OPOs are expected to oscillate in the lowest-loss state. We have been doing this over the past six or seven years, and I'm just going to quickly show you the transition: what happened in the first implementation, which was using a free-space optical system, then the guided-wave implementation in 2016, and then the measurement-feedback idea, which led to increasing the size and doing actual computation with these machines. I just want to make the distinction here that the first implementation was an all-optical interaction; we also had an N = 16 all-optical implementation, and then we transitioned to this measurement-feedback idea, which I'll quickly describe. There's still a lot of ongoing work, especially on the NTT side, to make larger machines using measurement feedback, but I'm going to mostly focus on the all-optical networks: how we're using all-optical networks to go beyond simulation of the Ising Hamiltonian, both on the linear and the nonlinear side, and also how we're working on miniaturization of these OPO networks. So the first experiment, which was the four-OPO machine, was a free-space implementation; this is the actual picture of the machine, and we implemented a small N = 4 MAX-CUT problem on it. So, one problem for one experiment: we ran the machine 1,000 times, we looked at the states, and we always saw it oscillate in one of the ground states of the Ising Hamiltonian. Then the measurement-feedback idea was to replace those couplings and the controllers with a simulator: we basically simulate all those coherent interactions on an FPGA, compute the coupling feedback from all those measurements, and then inject it back into the cavity, while the nonlinearity still remains in the optics. So it is still a nonlinear dynamical system, but the linear side is all simulated. There are lots of questions about whether this system preserves the important information or not, or whether it behaves better computation-wise, and that's still a lot of ongoing study. But nevertheless, the reason this implementation was very interesting is that you don't need the N minus 1 delay lines, so you can just use one; then you can implement a large machine, run several thousands of problems in the machine, and compare the performance from the computational perspective.
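Here is a hedged sketch of one round trip of that measurement-feedback loop; the measurement model, gain form, and feedback constants are invented for illustration, and real systems use homodyne detection and FPGA arithmetic rather than this toy map:

```python
import numpy as np

def measurement_feedback_step(x, J, pump=1.1, sat=0.1, fb=0.05, rng=None):
    """One round trip of an idealized measurement-feedback Ising machine:
    1) every pulse amplitude is measured (optionally with noise),
    2) the 'FPGA' computes the coupling term J @ x_measured,
    3) the result is injected back, and the pulse sees saturable gain."""
    measured = x.copy()
    if rng is not None:
        measured += 0.01 * rng.standard_normal(len(x))  # measurement noise
    x = x + fb * (J @ measured)          # linear Ising coupling, done digitally
    return pump * x / (1.0 + sat * x**2) # saturable gain: the optical part

rng = np.random.default_rng(1)
J = np.array([[0.0, 1.0], [1.0, 0.0]])   # two ferromagnetically coupled pulses
x = 1e-3 * rng.standard_normal(2)
for _ in range(500):
    x = measurement_feedback_step(x, J, rng=rng)
print(np.sign(x))                        # expect equal signs (aligned spins)
```

The point of the design is visible in the code: only the linear coupling lives in the digital step, so a single delay line plus electronics replaces the N minus 1 optical delay lines.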
So I'm going to split this idea of the OPO-based Ising machine into two parts. One is the linear part: if you take the nonlinearity out of the resonators and just think about the connections, you can think about this as a simple matrix-multiplication scheme, and that's basically what gives you the Ising Hamiltonian modeling. The optical loss of this network corresponds to the Ising Hamiltonian. If I just want to show you the example of the N = 4 experiment, with all those phase states and the histogram that we saw, you can actually calculate the loss of each of those states, because all those interferences in the beam splitters and the delay lines are going to give you different losses, and then you will see that the ground states correspond to the lowest loss of the actual optical network. If you add the nonlinearity, the simple way of thinking about what the nonlinearity does is that it provides gain; then you start bringing up the gain so that it hits the loss, and you go through the gain saturation, or the threshold, which is going to give you this phase bifurcation: you go either to the zero or to the pi phase state. And the expectation is that the network oscillates in the lowest possible loss state. There are some challenges associated with this intensity-driven phase transition, which I'm going to briefly talk about; I'm also going to tell you about other types of nonlinear dynamics that we're looking at on the nonlinear side of these networks. So if you just think about the linear network, we're actually interested in looking at some topological behaviors in these networks. And the difference between looking at topological behaviors and at the Ising machine is that now, first of all, we're looking at types of Hamiltonians that are a little different from the Ising Hamiltonian; one of the biggest differences is that most of these topological Hamiltonians require breaking time-reversal symmetry, meaning that if you go from one site to another site you get one phase, and if you go back you get a different phase. And the other thing is that we're not just interested in finding the ground state; we're now interested in looking at all sorts of states, and at the dynamics and the behaviors of all these states in the network. So we started with the simplest implementation, of course, which is a 1D chain of these resonators, which corresponds to the so-called SSH model in the topological world. We get the similar energy-to-loss mapping, and now we can actually look at the band structure. This is an actual measurement that we get with this SSH model, and you see how reasonably well it actually follows the prediction and the theory.
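For reference, the SSH dispersion that such a measurement traces out can be computed in a few lines; v and w denote the two alternating coupling strengths (here, coupling strengths between time-multiplexed resonators), and the values below are arbitrary:

```python
import numpy as np

def ssh_bands(v, w, num_k=201):
    """Bulk band structure of the SSH chain: E(k) = +/- |v + w * exp(i k)|."""
    k = np.linspace(-np.pi, np.pi, num_k)
    e = np.abs(v + w * np.exp(1j * k))
    return k, e, -e

def winding(v, w, num_k=2001):
    """Winding number of h(k) = v + w exp(ik) around the origin:
    0 in the trivial phase (|v| > |w|), 1 in the topological phase (|v| < |w|),
    where the topological phase supports edge states."""
    k = np.linspace(-np.pi, np.pi, num_k)
    h = v + w * np.exp(1j * k)
    return int(round(np.ptp(np.unwrap(np.angle(h))) / (2 * np.pi)))

for v, w in [(1.0, 0.5), (0.5, 1.0)]:
    k, upper, lower = ssh_bands(v, w)
    print(f"v={v}, w={w}: gap = {2 * upper.min():.2f}, winding = {winding(v, w)}")
```

Note that the two parameter choices give the same bulk gap; only the winding number (and hence the presence of edge states) distinguishes the trivial from the topological phase, which is what the edge-state measurements described next probe.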
One of the interesting things about the time-multiplexing implementation is that now you have the flexibility of changing the network as you are running the machine, and that's something unique about this time-multiplexed implementation, so we can actually look at the dynamics. One example that we have looked at is that we can actually go through the transition from the topological to, I'm sorry, to the trivial behavior of the network: you can then look at the edge states, and you can see both the trivial end states and the topological edge states actually showing up in this network. We have just recently implemented a 2D network with the Harper-Hofstadter model, and although I don't have the results here, one of the other important characteristics of time multiplexing is that you can go to higher and higher dimensions while keeping that flexibility and those dynamics. And we can also think about adding nonlinearity, both in the classical and quantum regimes, which is going to give us a lot of exotic nonclassical and quantum nonlinear behaviors in these networks. So I told you mostly about the linear side; let me just switch gears and talk about the nonlinear side of the network. The biggest thing that I talked about so far in the Ising machine is this phase transition at threshold. Below threshold we have squeezed states in these OPOs; if you increase the pump, we go through this intensity-driven phase transition, and we get the phase states above threshold. This is basically the mechanism of the computation in these OPOs: it works through this phase transition from below to above threshold. So one of the characteristics of this phase transition is that below threshold you expect to see quantum states, and above threshold you expect to see more classical states, or coherent states, which basically corresponds to the intensity of the driving pump. So it's really hard to imagine that you can go above threshold, or have this phase transition happen, all within the quantum regime. And there are also some challenges associated with the intensity homogeneity of the network: for example, if one OPO starts oscillating and its intensity goes really high, it's going to ruin the collective decision-making of the network, because of the intensity-driven nature of the phase transition. So the question is: can we look at other phase transitions, can we utilize them for computing, and can we bring them to the quantum regime? I'm going to specifically talk about a phase transition in the spectral domain, which is the transition from the so-called degenerate regime, which is what I mostly talked about, to the non-degenerate regime, which happens by just tuning the phase of the cavity. What is interesting is that this phase transition corresponds to a distinct phase-noise behavior. In the degenerate regime, which we call the ordered state, you have the phase being locked to the phase of the pump, as I talked about. In the non-degenerate regime, however, the phase is mostly dominated by the quantum diffusion of the phase, which is limited by the so-called Schawlow-Townes limit, and you can see that transition from the degenerate to the non-degenerate regime, which also has distinct symmetry differences: this transition corresponds to a symmetry breaking. In the non-degenerate case, the signal can acquire any of the phases on the circle, so it has a U(1) symmetry.
And if you go to the degenerate case, then that symmetry is broken and you only have the zero and pi phase states. So now the question is: can we utilize this phase transition, which is a phase-driven phase transition, and use it for a similar computational scheme? That's one of the questions that we're also thinking about. And this phase transition is not just important for computing; it's also interesting for its sensing potential, and you can easily bring this phase transition below threshold and operate it in the quantum regime, either Gaussian or non-Gaussian. If you make a network of OPOs, we can now see all sorts of more complicated and more interesting phase transitions in the spectral domain. One of them is a first-order phase transition, which you get by just coupling two OPOs, and that's a very abrupt phase transition compared to the single-OPO phase transition. And if you do the couplings right, you can actually get a lot of non-Hermitian dynamics and exceptional points, which are very interesting to explore, both in the classical and quantum regimes. I should also mention that you can think about the couplings being nonlinear couplings as well, and that's another behavior you can see, especially in the non-degenerate regime. So with that, I have basically told you about these OPO networks, how we can think about the linear scheme and the linear behaviors, and how we can think about the rich nonlinear dynamics and nonlinear behaviors, both in the classical and quantum regimes. Now I want to switch gears and tell you a little bit about the miniaturization of these OPO networks. Of course, the motivation is: if you look at electronics and what we had 60 or 70 years ago with vacuum tubes, and how we transitioned from relatively small-scale computers, on the order of thousands of nonlinear elements, to the billions of nonlinear elements where we are now, the optics of today is probably very similar to 70 years ago: a tabletop implementation. And the question is, how can we utilize nanophotonics? I'm going to just briefly show you the two directions we're working on: one is based on lithium niobate, and the other is based on even smaller resonators. So, the work on nanophotonic lithium niobate was started in collaboration with Marko Loncar at Harvard and also Marty Fejer at Stanford, and we could show that you can do the periodic poling in thin-film lithium niobate and get all sorts of very highly nonlinear processes happening in this nanophotonic, periodically poled lithium niobate. And now we're working on building OPOs based on that kind of thin-film lithium niobate photonics. These are some examples of the devices that we have been building in the past few months, which I'm not going to tell you more about, but the OPOs and the OPO networks are in the works. And that's not the only way of making large networks; I also want to point out that the reason these nanophotonic platforms are actually exciting is not just because you can make large networks and make them compact in a small footprint: they also provide some opportunities in terms of the operation regime. One of them is about making cat states in an OPO, which is: can we have the quantum superposition of the zero and pi states that I talked about?
The nanophotonic lithium niobate provides some opportunities to actually get closer to that regime, because of the spatio-temporal confinement that you can get in these waveguides. So we're doing some theory on that, and we're confident that the ratio of nonlinearity to losses that you can get with these platforms is actually much higher than what you can get with the existing platforms. And to go even smaller, we have been asking the question: what is the smallest possible OPO that you can make? Then you can think about really wavelength-scale resonators, adding the chi-2 nonlinearity, and seeing how and when you can get the OPO to operate. Recently, in collaboration with USC and CREOL, we have demonstrated that you can use nanolasers and get some spin-Hamiltonian implementations on those networks. So if you can build these OPOs, we know that there is a path for implementing OPO networks at such a nanoscale. We have looked at these calculations and tried to estimate the threshold of such OPOs, say for a wavelength-scale resonator, and it turns out that it can actually be even lower than the type of bulk PPLN OPOs that we have been building in the past 50 years or so. So we're working on the experiments, and we're hoping that we can actually make larger and larger-scale OPO networks. So let me summarize the talk: I told you about the OPO networks and our work on Ising machines and the measurement feedback, I told you about the ongoing work on the all-optical implementations, both on the linear side and on the nonlinear behaviors, and I also told you a little bit about the efforts on miniaturization and going to the nanoscale. So with that, I would like to thank you. >>I'm from the University of Tokyo. Before I start, I would like to thank Yoshi and all the staff of NTT for the invitation and the organization of this online meeting, and I would also like to say that it has been very exciting to see the growth of this new PHI lab. I'm happy to share with you today some of the recent works that have been done either by me or by collaborators. The title of my talk is: a neuromorphic in-silico simulator for the coherent Ising machine. And here is the outline. I would like to make the case that the simulation, in digital electronics, of the CIM can be useful for better understanding or improving its function principles, by introducing some ideas from neural networks. This is what I will discuss in the first part; then I will show some proof of concept of the gain in performance that can be obtained using this simulation in the second part, and projections of the performance that can be achieved using a very large-scale simulator in the third part, and finally I will talk about future plans. So first, let me start by comparing recently proposed Ising machines, using this table, which is adapted from a recent Nature Electronics paper. This comparison shows that there's always a trade-off between energy efficiency, speed, and scalability that depends on the physical implementation. So, in red here are the limitations of each of the various hardware approaches, and, interestingly, the FPGA-based systems, such as the Digital Annealer, the Toshiba bifurcation machine, or a recently proposed restricted Boltzmann machine on FPGA by a group in Berkeley, offer a good compromise between speed and scalability.
And this is why, despite the unique advantages that some of these other hardware approaches have, such as the coherent superposition in photonic systems or the energy efficiency of memristors, FPGAs are still an attractive platform for building large-scale Ising machines in the near future. The reason for the good performance of FPGAs is not so much that they operate at high frequency, nor that they are particularly energy efficient, but rather that the physical wiring of their elements can be reconfigured in a way that limits the von Neumann bottleneck, large fan-in and fan-outs, and the long propagation of information within the system. In this respect, FPGAs are interesting from the perspective of the physics of complex systems, and not just the physics of the underlying devices. So, to put the performance of these various hardware approaches in perspective, we can look at the computation the brain completes using billions of neurons and only about 20 watts of power, even though it operates at what is, theoretically, a very slow rate. These impressive characteristics motivate us to investigate what kind of neuro-inspired principles might be useful for designing better Ising machines. The idea of this research project and collaboration is to temporarily alleviate the limitations that are intrinsic to the realization of an optical coherent Ising machine, shown in the top panel here, by designing a large-scale simulator in silicon, shown in the bottom here, that can be used for investigating better organization principles for the CIM. In this talk, I will discuss three neuro-inspired principles. The first is the asymmetry of connections, together with the neural dynamics, which are often chaotic because of that asymmetry. Second, neural networks are not composed of the repetition of always the same types of neurons: there is a local micro-structure that is repeated, and here is a schematic of the micro-column in the cortex. And lastly, the hierarchical organization of connectivity: connectivity is organized as a tree structure in the brain, and here you see a representation of the hierarchical organization of the monkey cerebral cortex. So how can these principles be used to improve the performance of Ising machines and their in-silico simulation? First, about the two principles of asymmetry and the micro-structure. We know that the classical approximation of the coherent Ising machine, which is analogous to rate-based neural networks, can be obtained using, for example, a truncated-Wigner-type approximation, so that the dynamics of both of these systems can be described by the following ordinary differential equations, in which, in the case of the CIM, the x_i represent the in-phase component of one DOPO, the function f represents the nonlinear optical part, that is, the degenerate optical parametric amplification, and the sum of omega_ij x_j represents the coupling, which is done, in the case of the measurement-feedback CIM, using homodyne detection and an FPGA, and then injection of the computed coupling term. And these dynamics, in both the CIM and neural-network cases, can be written as the gradient descent of a potential function V, written here, and this potential function includes the Ising Hamiltonian.
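Written out, as a reconstruction under the standard CIM conventions (the slide's equations are not reproduced here, and an external-field term of the form minus the sum of h_i x_i can be added in the same way):

```latex
\frac{dx_i}{dt} \;=\; f(x_i) + \sum_j \omega_{ij}\,x_j \;=\; -\frac{\partial V}{\partial x_i},
\qquad f(x_i) = (-1 + p - x_i^2)\,x_i,
\qquad
V(\mathbf{x}) \;=\; \sum_i \left(\frac{x_i^4}{4} - \frac{(p-1)\,x_i^2}{2}\right) \;-\; \frac{1}{2}\sum_{i,j}\omega_{ij}\,x_i x_j .
```

The last term is exactly the Ising coupling energy evaluated on the soft spins x_i, which is the sense in which this potential "includes the Ising Hamiltonian"; the identity dx_i/dt = -dV/dx_i only holds when the omega_ij are symmetric, which is the point made next.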
So this is why it's natural to use this type of dynamics to solve the Ising problem, in which the omega_ij are the Ising couplings and the h_i is the external field of the Ising Hamiltonian. Note that this potential function can only be defined if the omega_ij are symmetric. The well-known problem of this approach is that the potential function V that we obtain is very non-convex at low temperature, and so one strategy is to gradually deform this landscape, using an annealing process; but there is, unfortunately, no theorem that guarantees convergence to the global minimum of the Ising Hamiltonian using this approach. And so this is why we propose to introduce a micro-structure into the system, where one analog spin, or one DOPO, is replaced by a pair of one analog spin and one error-correction variable. The addition of this micro-structure introduces an asymmetry into the system, which in turn induces chaotic dynamics: a chaotic search, rather than an annealing process, for the ground state of the Ising Hamiltonian. Within this micro-structure, the role of the error variable is to control the amplitude of the analog spins, to force the amplitude of the x_i to become equal to a certain target amplitude a. This is done by modulating the strength of the Ising couplings: as you see, the error variable e_i multiplies the Ising coupling term here in the dynamics of each DOPO, and then the whole dynamics is described by these coupled equations. Because the e_i do not necessarily take the same value for the different i, this introduces an asymmetry into the system, which in turn creates chaotic dynamics, which I show here for solving a certain size of SK problem, in which the x_i are shown here, the e_i here, and the value of the Ising energy in the bottom plot. You see this chaotic search that visits various local minima of the Ising Hamiltonian and eventually finds the global minimum. It can be shown that this modulation of the target amplitude can be used to destabilize all the local minima of the Ising Hamiltonian, so that the dynamics does not get stuck in any of them. Moreover, the other types of attractors that can eventually appear, such as limit-cycle or chaotic attractors, can also be destabilized using the modulation of the target amplitude. And so we have proposed in the past two different modulations of the target amplitude: the first one is a modulation that ensures that the entropy production rate of the system becomes positive, which forbids the creation of any nontrivial attractors; but in this work I will talk about another modulation, a restricted modulation, which is given here, that works as well as the first modulation but is easier to implement on the FPGA.
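A minimal sketch of these coupled spin/error-variable equations, following the structure just described; the rate constants, target amplitude, and the instance are all invented for illustration, and this is a toy integrator rather than the FPGA design:

```python
import numpy as np

def cim_with_error_variables(J, steps=20000, dt=0.01, p=0.9, a=1.0, beta=0.3, seed=0):
    """Integrate the pair of equations described above:
        dx_i/dt = (p - 1 - x_i^2) x_i + e_i * sum_j J_ij x_j
        de_i/dt = -beta * e_i * (x_i^2 - a)
    The e_i modulate the coupling so that no analog spin can settle at an
    amplitude different from the target a; this destabilizes local minima and
    yields a chaotic search rather than an annealing schedule."""
    rng = np.random.default_rng(seed)
    n = J.shape[0]
    x = 1e-2 * rng.standard_normal(n)
    e = np.ones(n)
    best_s, best_energy = None, np.inf
    for _ in range(steps):
        dx = (p - 1.0 - x**2) * x + e * (J @ x)
        de = -beta * e * (x**2 - a)
        x += dt * dx
        e += dt * de
        s = np.where(x >= 0, 1.0, -1.0)
        energy = -0.5 * s @ J @ s           # track the best Ising energy visited
        if energy < best_energy:
            best_s, best_energy = s.copy(), energy
    return best_s, best_energy

# Small random SK-like instance with +-1 couplings, scaled by 1/sqrt(N):
rng = np.random.default_rng(3)
n = 10
J = rng.choice([-1.0, 1.0], size=(n, n))
J = np.triu(J, 1)
J = (J + J.T) / np.sqrt(n)
print(cim_with_error_variables(J))
```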
So these coupled equations, which represent the classical simulation of the coherent Ising machine with error correction, can be implemented particularly efficiently on an FPGA. Here I show the time it takes to simulate the system, and in particular the time to compute the x_i term, the e_i term, the dot product, and the Ising Hamiltonian, for a system with 500 spins and error variables, equivalent to 500 DOPOs. On the FPGA, the nonlinear dynamics, which corresponds to the degenerate optical parametric amplification, the OPA of the CIM, can be computed in only 13 clock cycles at 300 MHz, which corresponds to about 0.1 microseconds. And this is to be compared with what can be achieved in the measurement-feedback CIM, in which, if we want to get 500 time-multiplexed DOPOs with a 1-GHz repetition rate through the optical nonlinearity, we would require 0.5 microseconds to do this; so the simulation on FPGA can be at least as fast as a 1-GHz repetition-rate pulsed-laser CIM. Then, the dot product that appears in this differential equation can be computed in 43 clock cycles, that's to say, about 0.15 microseconds. So, at least for problem sizes larger than 500 spins, the dot product clearly becomes the bottleneck, and this can be seen by looking at the scaling of the number of clock cycles it takes to compute either the nonlinear optical part or the dot product, with respect to the problem size. If we had an infinite amount of resources on the FPGA to simulate the dynamics, then the nonlinear part could be done in O(1), and the matrix-vector product could be done in O(log N), because computing the dot product involves summing all the terms in the product, which is done on an FPGA by an adder tree, whose height scales logarithmically with the size of the system. But that is only in the case of infinite FPGA resources; for dealing with larger problems of more than about 100 spins, we usually need to decompose the matrix into smaller blocks, with a block size that is denoted U here, and then the scaling becomes, for the nonlinear part, linear in N over U, and for the dot product, quadratic, (N/U) squared. Typically, for a low-end FPGA, the block size U of this matrix is about 100. So clearly we want to make U as large as possible, in order to maintain the O(log N) scaling for the number of clock cycles needed to compute the dot product, rather than the (N/U) squared scaling that occurs when we decompose the matrix into smaller blocks. But the difficulty in having these larger blocks is that a very large adder tree introduces large fan-in and fan-out and long-distance data paths within the FPGA. So the solution for getting higher performance for a simulator of the coherent Ising machine is to get rid of this bottleneck for the dot product by increasing the size of this adder tree, and this can be done by organizing the electrical components within the FPGA hierarchically, which is shown here in the right panel, in order to minimize the fan-in and fan-out of the system and to minimize the long-distance data paths in the FPGA. I'm not going into the details of how this is implemented on the FPGA, but this gives you an idea of why the hierarchical organization of the system becomes extremely important for getting good performance when simulating Ising machines.
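A back-of-the-envelope way to see why the block size U matters so much; the latency constants below are made up, and real FPGA timing depends entirely on the design:

```python
import math

def dot_product_cycles(n, block=None, overhead=10):
    """Rough clock-cycle estimate for the dot products in an N x N matrix-vector
    product (illustrative only). With an N-wide adder tree the latency is about
    log2(N) plus a fixed pipeline overhead; with a U-wide block, the matrix is
    decomposed into (N/U)^2 blocks that are processed sequentially."""
    if block is None:
        return math.ceil(math.log2(n)) + overhead
    b = math.ceil(n / block)
    return b * b * (math.ceil(math.log2(block)) + overhead)

for n in (512, 2048, 8192):
    print(n, dot_product_cycles(n), dot_product_cycles(n, block=100))
```

The gap between the two columns grows quadratically, which is the (N/U) squared penalty the hierarchical adder-tree layout is designed to avoid.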
So, instead of getting into the details of the FPGA implementation, I would like to give some benchmark results for this simulator, which was used as a proof of concept for this idea and which can be found in this arXiv paper. Here I show results for solving SK problems: fully connected, random, plus-or-minus-one spin-glass problems. We use as a metric the number of matrix-vector products, since it's the bottleneck of the computation, needed to get the optimal solution of the SK problem with 99% success probability, plotted against the problem size. In red here is this proposed FPGA implementation; in blue is the number of matrix-vector products necessary for the CIM without error correction to solve these SK problems; and in green, noisy mean-field annealing, whose behavior is similar to the coherent Ising machine. And so you clearly see that the scaling of the number of matrix-vector products necessary to solve this problem has a better exponent than these other approaches. So that's an interesting feature of the system, and next we can look at the real time-to-solution for solving these SK instances. So on this axis is the time-to-solution in seconds to find the ground state of SK instances with 99% success probability, for different state-of-the-art hardware. In red is the FPGA implementation proposed in this paper, and the other curves represent breakout local search, in orange, and simulated annealing, in purple, for example. And so you see that the scaling of this proposed simulator is rather good, and that for larger problem sizes we can be orders of magnitude faster than the state-of-the-art approaches. Moreover, the relatively good scaling of the time-to-solution with respect to problem size indicates that the FPGA implementation would be faster than other recently proposed Ising machines, such as the Hopfield neural network implemented on memristors, which is very fast for small problem sizes, in blue here, but whose scaling is not good, or the restricted Boltzmann machine implemented on FPGA, proposed recently by a group in Berkeley, which is also very fast for small problem sizes but whose scaling is bad, so that it is worse than the proposed approach. So we can expect that for problem sizes larger than 1,000 spins, the proposed approach would be the fastest one. Let me jump to this other slide. Another confirmation that the scheme scales well is that we can find maximum-cut values for the G-set benchmark instances that are better than those previously found by any other algorithm, so they are the best-known cut values, to the best of our knowledge, which is shown in the table of this paper. In particular, for instances 14 and 15 of this G-set, we can find better cuts than previously known, and we can find these cut values 100 times faster than the state-of-the-art algorithm used to do this. Note that getting these good results on the G-set does not require any particularly hard tuning of the parameters; the tuning used here is very simple, and it just depends on the degree of connectivity within each graph. So these good results on the G-set indicate that the proposed approach would be good not only at solving SK problems, but all types of graph Ising and MAX-CUT problems more generally.
So, given that the performance of the design depends on the height of this adder tree, we can try to maximize the height of the adder tree on a large FPGA by carefully routing the components within the FPGA, and we can draw some projections of what type of performance we can achieve in the near future, based on the implementation that we are currently working on. So here you see projections for the time-to-solution, with 99% success probability, for solving SK problems with respect to the problem size, compared to different such published Ising machines, particularly the Digital Annealer, shown in green here, the green line without dots. And we show two different hypotheses for these projections: either that the time-to-solution scales as an exponential of N, or that the time-to-solution scales as an exponential of the square root of N. It seems, according to the data, that the time-to-solution scales more as an exponential of the square root of N, and these projections show that we could probably solve SK problems of size 2,000 spins, that is, find the real ground state of these problems with 99% success probability, in about 10 seconds, which would be much faster than all the other proposed approaches. So, some of the future plans for this coherent Ising machine simulator. The first thing is that we would like to make the simulation closer to the real DOPO optical system, in particular, as a first step, to get closer to the measurement-feedback CIM. And to do this, what is simulatable on the FPGA is the quantum Gaussian model that is described in this paper, proposed by people in the NTT group. The idea of this model is that, instead of having the very simple ODEs that I have shown previously, it includes paired ODEs that take into account not only the mean of the amplitude of the in-phase component, but also its variance, so that we can take into account more quantum effects of the DOPO, such as squeezing. Then we plan to make the simulator open-access, for the members to run their instances on the system. There will be a first version in September that will be based on simple command-line access to the simulator, and which will have just the classical approximation of the system, with a noise term and binary weights. But then we will propose a second version that will extend the current Ising machine to a rack of FPGAs, in which we will add the more refined models, the truncated-Wigner and the quantum Gaussian model I just talked about, and in which we will allow real-valued weights for the Ising problems. We will announce later when this is available, and the team is working hard on it.
I think everyone here knows what Boolean satisfiability problems are. You have N Boolean variables and M clauses; each clause is a disjunction of literals, where a literal is a variable or its negation, and the goal is to find an assignment to the variables such that all clauses are true. This is a decision-type problem from the NP class, which means you can check in polynomial time the satisfiability of any assignment. And 3-SAT is NP-complete (k-SAT with k of three or larger), which means an efficient 3-SAT solver implies an efficient solver for all the problems in the NP class, because all the problems in NP can be reduced in polynomial time to 3-SAT. As a matter of fact, you can reduce the NP-complete problems into each other: you can go from 3-SAT to set packing, or to maximum independent set, which is set packing in graph-theoretic terms, or to the decision version of the Ising ground-state problem. This is useful when you are comparing different approaches that work on different kinds of problems. When not all the clauses can be satisfied, you are looking at the optimization version of SAT, called MaxSAT, and the goal there is to find an assignment that satisfies the maximum number of clauses; this is in the NP-hard class. In terms of applications: if we had an efficient solver for SAT, or for the NP-complete problems, it would literally, positively influence thousands of problems and applications in industry and in science. I'm not going to read this list, but it gives, of course, a strong motivation to work on this kind of problem. Now, our approach to SAT solving involves embedding the problem in a continuous space, and we use ODEs to do that. So instead of working with zeros and ones, we work with minus one and plus one, and we allow the corresponding variables to change continuously between the two bounds. We formulate the problem with the help of a clause matrix: if a clause does not contain a variable or its negation, the corresponding matrix element is zero; if it contains the variable in positive form, it is plus one; if it contains the variable negated, it is minus one. We then use this matrix to formulate products, called clause-violation functions, one for every clause, which vary continuously between zero and one and are zero if and only if the clause is satisfied. Then, in order to define a dynamics in the N-dimensional hypercube where the search happens (if solutions exist, they sit at some of the corners of this hypercube), we define the energy, a potential or landscape function, shown here, in such a way that it is zero if and only if all the clause functions are zero, that is, all the clauses are satisfied, keeping the auxiliary variables a_m always positive. What you do here is essentially a gradient descent on this potential energy landscape. If you were to keep all the a_m constant, it would get stuck in some local minimum. However, what we do here is couple it with a dynamics for the auxiliary variables, driven by the clause-violation functions, as shown here. If you didn't have the a_m here, just the K's, you would essentially have positive feedback, an increasing variable, but in that case you would still get stuck.
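The construction just described follows the published continuous-time SAT solver; reconstructed in that notation (the transcript itself omits the formulas), the pieces are:

$$
c_{mi} \in \{-1, 0, +1\}, \qquad s_i \in [-1, 1],
$$
$$
K_m(\mathbf{s}) \;=\; 2^{-k_m} \prod_{i=1}^{N} \bigl(1 - c_{mi}\, s_i\bigr) \;\in\; [0, 1],
$$
$$
V(\mathbf{s}, \mathbf{a}) \;=\; \sum_{m=1}^{M} a_m\, K_m(\mathbf{s})^2, \qquad
\dot{s}_i = -\frac{\partial V}{\partial s_i}, \qquad
\dot{a}_m = a_m\, K_m(\mathbf{s}), \quad a_m > 0,
$$

where $k_m$ is the number of literals in clause $m$. Each $K_m$ vanishes exactly when clause $m$ is satisfied, so $V = 0$ characterizes the solutions, and the exponential growth of the $a_m$ on violated clauses supplies the time-varying landscape the speaker refers to next.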
So this is better than the constant version but would still get stuck; only when you put in this a_m, which makes the dynamics in this variable exponential-like, does it keep searching until it finds a solution. There is a reason for that, which I'm not going to talk about here, but essentially it boils down to performing a gradient descent on a globally time-varying landscape, and this is what works. Now I'm going to talk about the good, the bad, and maybe the ugly. What's good is that it's a hyperbolic dynamical system, which means that if you take any domain in the search space that doesn't have a solution in it, the number of trajectories in it decays exponentially quickly, and the decay rate is an invariant characteristic of the dynamics itself; in dynamical systems this is called the escape rate. The inverse of that is the time scale on which you find solutions with this dynamical system. You can see here some sample trajectories that are chaotic, because the system is nonlinear, but it is transiently chaotic: given that there are solutions, it eventually converges to the solution. Now, in terms of performance: what we show here, for a bunch of constraint densities, defined by M over N, the ratio between clauses and variables for random SAT problems, random 3-SAT problems, as a function of N, is the monitored wall-clock time, and it behaves quite well, behaves polynomially, until you actually reach the SAT/UNSAT transition, where the hardest problems are found. But what's more interesting is what happens if you monitor the continuous time t, the performance in terms of the analog continuous time t, because that seems to be polynomial. The way we show that is this: we consider random 3-SAT for a fixed constraint density, to the right of the threshold, where it's really hard, and we monitor the fraction of problems that have not yet been solved. We select thousands of problems at that constraint ratio, solve them with our algorithm, and monitor the fraction of problems that have not yet been solved by continuous time t. As you see, this decays exponentially, with different decay rates for different system sizes, and this plot shows that the decay rate behaves polynomially, actually as a power law. If you combine these two facts, you find that the time needed to solve all problems, except maybe a vanishing fraction of them, scales polynomially with the problem size. So you have polynomial continuous-time complexity. And this is also true for other types of very hard constraint-satisfaction problems, such as exact cover, because you can always transform them into 3-SAT as we discussed before, and Ramsey coloring; on these problems, even algorithms like survey propagation will fail. But this doesn't mean that P equals NP, because if you were to implement these equations in a device whose behavior is described by these ODEs, then of course t, the continuous-time variable, becomes a physical wall-clock time, and that would have polynomial scaling; but you have the other variables, the auxiliary variables, which can grow in an exponential manner. So if they represent currents or voltages in your realization, it would come at an exponential cost.
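The continuous-time versus discrete-step distinction can be reproduced numerically. The sketch below integrates the solver's ODEs for a toy 3-SAT instance and reports both the returned assignment and the number of right-hand-side evaluations the integrator needed, the quantity whose blow-up, discussed next, reflects stiffness. The instance, tolerances, and time horizon are illustrative choices, not the speaker's.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy 3-SAT instance: rows are clauses, columns are variables;
# +1 = positive literal, -1 = negated literal, 0 = absent.
C = np.array([[ 1,  1, -1],
              [-1,  1,  1],
              [ 1, -1,  1],
              [-1, -1, -1]], dtype=float)
M, N = C.shape
k = (C != 0).sum(axis=1)                     # literals per clause

def clause_K(s):
    """Clause-violation functions K_m(s); zero iff clause m is satisfied."""
    return 2.0 ** (-k) * np.prod(1.0 - C * s, axis=1)

def rhs(t, y):
    s, a = y[:N], y[N:]
    ds, da = np.zeros(N), np.zeros(M)
    for m in range(M):
        f = 1.0 - C[m] * s                   # factor is 1 where C[m, i] == 0
        Km = 2.0 ** (-k[m]) * np.prod(f)
        da[m] = a[m] * Km                    # auxiliaries grow while violated
        for i in range(N):
            if C[m, i] != 0.0:
                Kmi = 2.0 ** (-k[m]) * np.prod(np.delete(f, i))
                ds[i] += 2.0 * a[m] * C[m, i] * Km * Kmi   # -dV/ds_i
    return np.concatenate([ds, da])

rng = np.random.default_rng(0)
y0 = np.concatenate([rng.uniform(-0.9, 0.9, N), np.ones(M)])
sol = solve_ivp(rhs, (0.0, 50.0), y0, method="RK45", rtol=1e-6, atol=1e-9)
assignment = np.sign(sol.y[:N, -1])
print("assignment:", assignment)
print("violated clauses:", int((clause_K(assignment) > 1e-9).sum()))
print("discrete RHS evaluations:", sol.nfev)  # the discrete-time cost
```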
But this is some kind of trade between time and energy: I don't know how to generate time, but I do know how to generate energy, so energy can be used for it. There are other issues as well, though, especially if you're trying to do this on a digital machine, and other problems also appear in physical devices, as we'll discuss later. If you implement this on a GPU, you can get an order of magnitude, or two, of speedup, and you can also modify this to solve MaxSAT problems quite efficiently; we are competitive with the best heuristic solvers on problems from the 2016 MaxSAT competition. So this definitely seems like a good approach, but there are, of course, interesting limitations. I would say interesting because they make you think about what it means, and how you can exploit these observations to better understand analog continuous-time complexity. If you monitor the number of discrete steps taken by the Runge-Kutta integrator when you solve this on a digital machine (you're using some kind of integrator, and, following the same approach, you now measure the number of problems you haven't solved by a given number of discrete steps taken by the integrator), you find that you have exponential discrete-time complexity, and of course this is a problem. And if you look closely at what happens, even though the analog mathematical trajectory is the curve shown here, in discrete time the integrator advances very little; the step size sits out in the third or fourth decimal place and fluctuates like crazy, so it is as if the integration freezes out. This is because of the phenomenon of stiffness, which I'll talk a little more about a bit later. >>It might look like an integration issue on digital machines that you could improve, and you could definitely improve it, but actually the issue is bigger than that. It's deeper than that, because on a digital machine there is no time-energy conversion: the auxiliary variables are efficiently represented on a digital machine, so there is no exponentially fluctuating current or voltage in your computer when you do this. So if P is not equal to NP, then the exponential time complexity, or exponential cost complexity, has to hit you somewhere. One would be tempted to think maybe this wouldn't be an issue in an analog device, and to some extent that's true; analog devices can be orders of magnitude faster, but they also suffer from their own problems, because they are not going to be perfect either. Indeed, if you look at other systems, like the measurement-feedback coherent Ising machine, polariton graphs, or oscillator networks, they all hinge on some ability to control your variables with arbitrarily high precision. In oscillator networks you want to read out phases across frequencies; in the case of CIMs you require identical pulses, which are hard to keep identical, as they fluctuate and shift away from one another, and if you could control that, of course, you could control the performance. So one can actually ask whether or not this is a universal bottleneck, and it seems so, as I will argue next. We can recall a fundamental result by Schönhage from 1978,
who showed, in a purely computer-science proof, that if you are able to compute the addition, multiplication, and division of real variables with infinite precision, then you can solve NP-complete problems in polynomial time. He didn't actually propose a solver; he just showed mathematically that this would be the case. Now, of course, in the real world you have finite precision, so the next question is: how does that affect the computation of these problems? That is what we're after. Loss of precision means information loss, or entropy production, so what you're really looking at is the relationship between the hardness of a problem and the cost of computing it. According to Schönhage, there's this left branch, which in principle could be polynomial time, but whether that is achievable is the question, and it is not: there is always going to be some information loss, some amount of degradation, that can keep you away from polynomial time. So this is what we'd like to understand, and this information loss, the source of it, is not just physical, I will argue; in any physical system it is also of an algorithmic nature. Schönhage's result is purely theoretical; no actual solver is proposed. So we can ask, just theoretically, out of curiosity: could there in principle be such solvers? Schönhage does not propose a solver with such properties, but, in principle, if you look mathematically and precisely at what a solver does, could it have the right properties? I argue yes. I don't have a mathematical proof, but I have some arguments that that would be the case, and that this is the case for our continuous-time solver: if you could compute its trajectory losslessly, then it would solve NP-complete problems in polynomial continuous time. Now, as a matter of fact, this is a slightly more difficult question, because time in ODEs can be rescaled however you want, so what one actually has to measure is the length of the trajectory, which is an invariant of the dynamical system, a property of the dynamical system itself, not of its parameterization. And we did that. My student did that first, improving on the stiffness of the integration, using implicit solvers and some smart tricks, such that you stay closer to the actual trajectory; and using the same approach, monitoring what fraction of problems you can solve within a given length of trajectory, you find that it scales polynomially with the problem size. We have polynomial-length complexity. That means our solver is both poly-length and, as time is defined here, also poly-time as an analog solver. But if you look at it as a discrete algorithm, if you measure the discrete steps on a digital machine, it is an exponential solver, and the reason is all this stiffness. Every integrator has to truncate (digitizing truncates the equations), and what it has to do is keep the integration within the so-called stability region for that scheme: you have to keep the product of the eigenvalues of the Jacobian and the step size within this region. If you use explicit methods, you want to stay within this region.
But what happens is that some of the eigenvalues grow fast for stiff problems, and then you're forced to reduce the step size so the product stays in this bounded domain, which means you're forced to take smaller and smaller time steps; you're freezing out the integration, and I will show you that that's the case. Now, you can move to implicit solvers, which is a trick: in that case the domain to avoid is actually on the outside. But what happens then is that some of the eigenvalues of the Jacobian, again for stiff systems, start to move toward zero, and as they move toward zero they enter this instability region, so your solver tries to keep them out by increasing the step size. But if you increase the step size, you increase the truncation errors, so you get randomized in the large search space; so it's really not going to work out either. Now, one can introduce a theory, or a language, to discuss computational complexity using the language of dynamical systems theory. I don't have time to go into this, but basically, for hard problems there is an invariant object, a chaotic saddle, somewhere in the middle of the search space, and that dictates how the dynamics happens; the invariant properties of the dynamics, of that saddle, are what dictate performance, among many other things. An important measure that we find helpful in describing this analog complexity is the so-called Kolmogorov, or metric, entropy. Intuitively, what this describes is the rate at which the uncertainty contained in the insignificant digits of a trajectory flows toward the significant ones, as you lose information because errors grow into larger errors at an exponential rate, due to positive Lyapunov exponents. This is an invariant property; it's a property of the set of all trajectories, not of how you compute them, and it's really the intrinsic rate of accuracy loss of a dynamical system. As I said, in such a high-dimensional system you have positive and negative Lyapunov exponents, as many in total as the dimension of the space: the number of unstable manifold dimensions gives the positive ones, and the stable manifold directions give the rest. And there's an interesting and, I think, important relation, an equality called the Pesin equality, that connects the information-theoretic aspect, the rate of information loss, with the geometric rate at which trajectories separate, minus kappa, the escape rate that I already talked about. Now one can actually prove simple theorems, back-of-the-envelope calculations. The idea here is that you know the rate at which closely started trajectories separate from one another, so you can say: that is fine, as long as my trajectory finds the solution before the trajectories separate too quickly. In that case, I can hope that if I start several closely started trajectories from some region of phase space, they will often go into the same solution, and that gives this upper bound, this limit, and it really shows that it has to be an exponentially small number. What matters is the n-dependence of the exponent right here, which combines the information-loss rate and the time-to-solution performance.
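The relation being invoked, reconstructed here since the transcript garbles it, is, as best I can tell, the open-system (Kantz-Grassberger) form of Pesin's identity, connecting the metric entropy $h_{\mathrm{KS}}$, the positive Lyapunov exponents $\lambda_i$, and the escape rate $\kappa$:

$$
h_{\mathrm{KS}} \;=\; \sum_{\lambda_i > 0} \lambda_i \;-\; \kappa .
$$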
So if this exponent has a large n-dependence, or even a linear n-dependence, then you really have to start trajectories exponentially close to one another in order to end up in the same solution. This is the sort of direction this is going in, and this formulation is applicable to all deterministic dynamical systems. I think we can expand this further, because there is a way of getting the expression for the escape rate in terms of n, the number of variables, from cycle expansions, which I don't have time to talk about; it's the kind of program one can try to pursue, and that is it. The conclusions, I think, are self-explanatory. I think there is a lot of future in analog continuous-time computing. It can be more efficient, by orders of magnitude, than digital computing in solving NP-hard problems, because, first of all, many of these systems don't have the von Neumann bottleneck, there is parallelism involved, and you can also have a much larger spectrum of continuous-time dynamical algorithms than discrete ones. But we also have to be mindful of the limits: what are the possibilities, and what are the limits? And one very important open question is: what are these limits? Is there some kind of no-go theorem that tells you that you can never perform better than this or that limit? I think that's the exciting part, to derive these limits.

Published Date : Sep 27 2020

Coherent Nonlinear Dynamics and Combinatorial Optimization


 

Hi, I'm Hideo Mabuchi from Stanford University. This is my presentation on coherent nonlinear dynamics and combinatorial optimization. This is going to be a talk to introduce an approach we are taking to the analysis of the performance of coherent Ising machines. Let me start with a brief introduction to Ising optimization. The Ising model represents a set of interacting magnetic moments, or spins, with total energy given by the expression shown at the bottom left of the slide. Here the sigma variables take binary values. The matrix element J_ij represents the interaction strength and sign between any pair of spins i and j, and h_i represents a possible local magnetic field acting on each spin. The Ising ground-state problem is defined as an assignment of binary spin values that achieves the lowest possible value of total energy, and an instance of the Ising problem is specified by given numerical values for the matrix J and vector h. Although the Ising model originates in physics, we understand the ground-state problem to correspond to what would be called quadratic binary optimization in the field of operations research, and in fact, in terms of computational complexity theory, it can be established that the Ising ground-state problem is NP-complete. Qualitatively speaking, this makes the Ising problem a representative sort of hard optimization problem, for which it is expected that the runtime required by any computational algorithm to find exact solutions should asymptotically scale exponentially with the number of spins n, for worst-case instances at each n. Of course, there's no reason to believe that the problem instances that actually arise in practical optimization scenarios are going to be worst-case instances, and it's also not generally the case in practical optimization scenarios that we demand absolute optimum solutions. Usually we're more interested in just getting the best solution we can within an affordable cost, where costs may be measured in terms of time, service fees, and/or energy required for computation. This focuses great interest on so-called heuristic algorithms for the Ising problem and other NP-complete problems, which generally get very good but not guaranteed-optimum solutions, and run much faster than algorithms that are designed to find absolute optima. To get some feeling for present-day numbers, we can consider the famous traveling salesman problem, for which extensive compilations of benchmarking data may be found online. A recent study found that the best known TSP solver required median runtimes, across a library of problem instances, that scaled as a very steep root exponential for n up to approximately 4,500. This gives some indication of the change in runtime scaling for generic, as opposed to worst-case, problem instances. Some of the instances considered in this study were taken from a public library of TSPs derived from real-world VLSI design data. This VLSI TSP library includes instances with n ranging from 131 to 744,710. Instances from this library with n between 6,880 and 13,584 were first solved just a few years ago, in 2017, requiring days of runtime on a 48-core two-gigahertz cluster; all instances with n greater than or equal to 14,233 remain unsolved exactly by any means.
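The total-energy expression referenced on the slide takes the standard Ising form (sign conventions vary across the literature):

$$
H(\boldsymbol{\sigma}) \;=\; -\sum_{i<j} J_{ij}\,\sigma_i \sigma_j \;-\; \sum_i h_i\,\sigma_i, \qquad \sigma_i \in \{-1, +1\}.
$$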
Approximate solutions, however, have been found by heuristic methods for all instances in the VLSI TSP library, with, for example, a solution within 0.014% of a known lower bound having been discovered for an instance with n equal to 19,289, requiring approximately two days of runtime on a single core at 2.4 gigahertz. Now, if we simple-mindedly extrapolate the root-exponential scaling from the study beyond n equal to 4,500, we might expect that an exact solver would require something more like a year of runtime on the 48-core cluster used for the n equals 13,584 instance, which shows how much a very small concession on the quality of the solution makes it possible to tackle much larger instances with much lower cost. At the extreme end, the largest TSP ever solved exactly has n equal to 85,900. This is an instance derived from 1980s VLSI design, and it required 136 CPU-years of computation, normalized to a single core at 2.4 gigahertz. But the 20-fold larger, so-called world TSP benchmark instance, with n equals 1,904,711, has been solved approximately, with an optimality gap bounded below 0.0474%. Coming back to the general practical concerns of applied optimization, we may note that a recent meta-study analyzed the performance of no fewer than 37 heuristic algorithms for MaxCut and quadratic binary optimization problems, and found that different heuristics work best for different problem instances selected from a large-scale heterogeneous test bed, with some evidence of cryptic structure in terms of what types of problem instances were best solved by any given heuristic. Indeed, there are reasons to believe that these results for MaxCut and quadratic binary optimization reflect a general principle of performance complementarity among heuristic optimization algorithms in the practice of solving hard optimization problems. There thus arises the critical pre-processing issue of trying to guess which of a number of available good heuristic algorithms should be chosen to tackle a given problem instance. Assuming that any one of them would incur high cost to run on a large problem instance, making an astute choice of heuristic is a crucial part of maximizing overall performance. Unfortunately, we still have very little conceptual insight about what makes a specific problem instance good or bad for any given heuristic optimization algorithm. This is certainly pinpointed by researchers in the field as a circumstance that must be addressed. So, adding this all up, we see that a critical frontier for cutting-edge academic research involves both the development of novel heuristic algorithms that deliver better performance with lower costs on classes of problem instances that are underserved by existing approaches, as well as fundamental research to provide deep conceptual insight into what makes a given problem instance easy or hard for such algorithms. In fact, these days, as we talk about the end of Moore's law and speculate about a so-called second quantum revolution, it's natural to talk not only about novel algorithms for conventional CPUs but also about highly customized, special-purpose hardware architectures on which we may run entirely unconventional algorithms for combinatorial optimizations, such as the Ising problem. So against that backdrop, I'd like to use my remaining time to introduce our work on analysis of coherent Ising machine architectures and associated optimization algorithms.
Ising machines in general are a novel class of information-processing architectures for solving combinatorial optimization problems by embedding them in the dynamics of analog, physical, or cyber-physical systems. In contrast to both more traditional engineering approaches that build Ising machines using conventional electronics, and more radical proposals that would require large-scale quantum entanglement, the emerging paradigm of coherent Ising machines leverages coherent nonlinear dynamics in photonic or optoelectronic platforms to enable near-term construction of large-scale prototypes. The general structure of current CIM systems is as shown in the figure on the right. The role of the Ising spins is played by a train of optical pulses circulating around a fiber-optic storage ring. A beam splitter inserted in the ring is used to periodically sample the amplitude of every optical pulse, and the measurement results are continually read into an FPGA, which uses them to compute perturbations to be applied to each pulse by synchronized optical injections. These perturbations are engineered to implement the spin-spin coupling and local magnetic field terms of the Ising Hamiltonian, corresponding to a linear part of the CIM dynamics. A synchronously pumped parametric amplifier, denoted here as a PPLN waveguide, adds a crucial nonlinear component to the CIM dynamics as well. In the basic CIM algorithm, the pump power starts very low and is gradually increased. At low pump powers, the amplitudes of the Ising spin pulses behave as continuous complex variables, whose real parts, which can be positive or negative, play the role of soft or, perhaps, mean-field spins. Once the pump power crosses the threshold for parametric self-oscillation in the optical fiber ring, however, the amplitudes of the Ising spin pulses become effectively quantized into binary values. While the pump power is being ramped up, the FPGA subsystem continuously applies its measurement-based feedback implementation of the Ising Hamiltonian terms. The interplay of the linearized Ising dynamics implemented by the FPGA and the threshold quantization dynamics provided by the sync-pumped parametric amplifier results in a final state of the optical pulse amplitudes, at the end of the pump ramp, that can be read as a binary string giving a proposed solution of the Ising ground-state problem. This method of solving Ising problems seems quite different from a conventional algorithm that runs entirely on a digital computer, as a crucial aspect of the computation is performed physically by the analog, continuous, coherent nonlinear dynamics of the optical degrees of freedom. In our efforts to analyze CIM performance, we have therefore turned to dynamical systems theory: namely, a study of bifurcations, the evolution of critical points, and topologies of heteroclinic orbits and basins of attraction. We conjecture that such analysis can provide fundamental insight into what makes certain optimization instances hard or easy for coherent Ising machines, and hope that our approach can lead both to improvements of the core CIM algorithm and to a pre-processing rubric for rapidly assessing the CIM suitability of given instances. To provide a bit of intuition about how this all works, it may help to consider the threshold dynamics of just one or two optical parametric oscillators in the CIM architecture just described.
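As a rough aid to intuition, the pump-ramp loop just described can be caricatured classically: saturable parametric gain on each pulse amplitude plus a feedback injection proportional to J x. This sketch is not the speaker's model (the actual CIM dynamics are continuous-time, optical, and include measurement noise), and every rate and parameter below is an invented illustration.

```python
import numpy as np

def cim_pump_ramp(J, steps=2000, dt=0.01, p_max=2.0, eps=0.3,
                  noise=0.05, seed=0):
    """Classical caricature of a measurement-feedback CIM pump ramp.

    x[i] is the in-phase amplitude of pulse i. The (p - 1 - x**2) * x
    term mimics saturable parametric gain; eps * (J @ x) mimics the
    FPGA feedback injection. All parameters are illustrative.
    """
    rng = np.random.default_rng(seed)
    x = 0.001 * rng.standard_normal(J.shape[0])
    for t in range(steps):
        p = p_max * (t + 1) / steps                  # gradual pump ramp
        dx = (p - 1.0 - x**2) * x + eps * (J @ x)
        x += dt * dx + np.sqrt(dt) * noise * rng.standard_normal(x.size)
    return np.sign(x)                                # binary spin readout
```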
We can think of each of the pulse time slots circulating around the fiber ring as representing an independent OPO. We can think of a single OPO degree of freedom as a single resonant optical mode that experiences linear dissipation, due to coupling loss, and gain in a pumped nonlinear crystal, as shown in the diagram on the upper left of the slide. As the pump power is increased from zero, as in the CIM algorithm, the nonlinear gain is initially too low to overcome linear dissipation, and the OPO field remains in a near-vacuum state. At a critical threshold value, gain equals dissipation, and the OPO undergoes a sort of lasing transition, and the steady states of the OPO above this threshold are essentially coherent states. There are actually two possible values of the OPO coherent amplitude at any given above-threshold pump power, which are equal in magnitude but opposite in phase. When the OPO crosses this threshold, it basically chooses one of the two possible phases randomly, resulting in the generation of a single bit of information. If we consider two uncoupled OPOs, as shown in the upper right diagram, pumped at exactly the same power at all times, then as the pump power is increased through threshold, each OPO will independently choose a phase, and thus two random bits are generated; for any number of uncoupled OPOs, the threshold power per OPO is unchanged from the single-OPO case. Now, however, consider a scenario in which the two OPOs are coupled to each other by a mutual injection of their out-coupled fields, as shown in the diagram on the lower right. One can imagine that, depending on the sign of the coupling parameter alpha, when one OPO is lasing it will inject a perturbation into the other that may interfere either constructively or destructively with the field that it is trying to generate via its own lasing process. As a result, one can easily show that for alpha positive there is an effective ferromagnetic coupling between the two OPO fields, and their collective oscillation threshold is lowered from that of the independent-OPO case, but only for the collective oscillation modes in which the two OPO phases are the same. For alpha negative, the collective oscillation threshold is lowered only for the configurations in which the OPO phases are opposite. So then, looking at how alpha is related to the J_ij matrix of the Ising spin-coupling Hamiltonian, it follows that we could use this simplistic two-OPO CIM to solve the ground-state problem of the ferromagnetic or antiferromagnetic two-spin Ising model, simply by increasing the pump power from zero and observing what phase relation occurs as the two OPOs first start to lase. Clearly we can imagine generalizing this story to larger n; however, the story doesn't stay as clean and simple for all larger problem instances, and to find a more complicated example we only need to go to n equals four. For some choices of J_ij at n equals four, the story remains simple, like the n equals two case. The figure on the upper left of this slide shows the energy of various critical points for a non-frustrated n-equals-four instance, in which the first bifurcated critical point, that is, the one that bifurcates at the lowest pump value a, flows asymptotically into the lowest-energy Ising solution. In the figure on the upper right, however, the first bifurcated critical point flows to a very good, but suboptimal, minimum at large pump power.
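The two-OPO threshold statement can be checked with a short linearization (the notation here is introduced for this note, not taken from the slide). Near the origin the real amplitudes obey

$$
\dot{x}_{1} = (p - 1)\,x_{1} + \alpha\, x_{2}, \qquad
\dot{x}_{2} = (p - 1)\,x_{2} + \alpha\, x_{1},
$$

so the collective modes $x_\pm = x_1 \pm x_2$ decouple, with growth rates $(p - 1) \pm \alpha$. For $\alpha > 0$ the in-phase mode $x_+$ reaches threshold first, at the lowered pump value $p = 1 - \alpha$; for $\alpha < 0$ the out-of-phase mode $x_-$ does, at $p = 1 - |\alpha|$, exactly the ferromagnetic/antiferromagnetic selection described above.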
The global minimum is actually given by a distinct critical point that first appears at a higher pump power and is not adiabatically connected to the origin; the basic CIM algorithm is thus not able to find this global minimum. Such non-ideal behavior seems to become more common at larger n. For the n-equals-20 instance shown in the lower plots, where the lower right plot is just a zoom into a region of the lower left plot, it can be seen that the global minimum corresponds to a critical point that first appears at a pump parameter a of around 0.16, at some distance from the adiabatic trajectory of the origin. It's curious to note that in both of these small-n examples, however, the critical point corresponding to the global minimum appears relatively close to the adiabatic trajectory of the origin, as compared to most of the other local minima that appear. We're currently working to characterize the phase-portrait topology between the global minimum and the adiabatic trajectory of the origin, taking clues as to how the basic CIM algorithm could be generalized to search for non-adiabatic trajectories that jump to the global minimum during the pump-up. Of course, n equals 20 is still too small to be of interest for practical optimization applications, but the advantage of beginning with the study of small instances is that we're able to reliably determine their global minima and to see how they relate to the adiabatic trajectory of the origin and the basic CIM algorithm. In the small-n limit we can also analyze fully quantum-mechanical models of CIM dynamics, but that's a topic for future talks. Existing large-scale prototypes are pushing into the range of n equals 10 to the four, 10 to the five, 10 to the six, so our ultimate objective in theoretical analysis really has to be to try to say something about CIM dynamics in the regime of much larger n. Our initial approach to characterizing CIM behavior in the large-n regime relies on the use of random matrix theory, and this connects to prior research on spin glasses, SK models, the TAP equations, et cetera. At present we're focusing on statistical characterization of the CIM gradient-descent landscape, including the evolution of critical points and their eigenvalue spectra as the pump power is gradually increased. We're investigating, for example, whether there could be some way to explain differences in the relative stability of the global minimum versus other local minima. We're also working to understand the deleterious, or potentially beneficial, effects of non-idealities, such as asymmetry in the implemented Ising couplings. Looking one step ahead, we plan to move next in the direction of considering more realistic classes of problem instances, such as quadratic binary optimization with constraints. So in closing, I should acknowledge the people who did the hard work on the things I've shown: my group, including graduate students Edwin Ng, Daniel Wennberg, Ryotatsu Yanagimoto, and Atsushi Yamamura, has been working in close collaboration with Surya Ganguli, Marty Fejer, and Amir Safavi-Naeini, all of us within the Department of Applied Physics at Stanford University, and also in collaboration with Yoshihisa Yamamoto over at NTT PHI research labs. And I should acknowledge funding support from the NSF through the Coherent Ising Machines Expedition in Computing, and also from NTT PHI research labs, the Army Research Office, and ExxonMobil. That's it. Thanks very much.

Published Date : Sep 21 2020

Securing Your Cloud, Everywhere


 

>>Welcome to our session on security, titled Securing Your Cloud, Everywhere. With me is Brian Langston, senior solutions engineer from Mirantis, who leads security initiatives for Mirantis' most security-conscious customers. Our topic today is security, and we're setting the bar high by talking in some depth about the requirements of the most highly regulated industries. So, Brian, for regulated industries, what do you perceive as the benefits of the evolution from classic infrastructure as a service to container orchestration? >>Yeah, the adoption of container orchestration has given rise to five key benefits. The first is accountability. Think about the evolution of DevOps and the security-focused version of that team, DevSecOps. These two competencies have emerged to provide, among other things, accountability for the processes they oversee and the outputs that they enable. The second benefit is auditability. Logging has always been around, but the pervasiveness of logging data within container environments allows for the definition of audit trails in new and interesting ways. The third area is transparency. Organizations that have well-developed container orchestration pipelines are much more likely to have a higher degree of transparency in their processes. This helps development teams move faster, it helps operations teams identify and resolve issues more easily, and it helps simplify the observation and certification of security operations by security organizations. Next is quality. Several decades ago, Toyota revolutionized the manufacturing industry when they implemented the philosophy of continuous improvement. Included within that philosophy was a dependency on, and trust in, the process: as the process was improved, so was the quality of the output. Similarly, the refinement of the process of container orchestration yields a higher-quality output. The four things I've mentioned ultimately point to a natural outcome, which is speed. When you don't have to spend so much time wondering who does what or who did what, when you have clear visibility into your processes, and because you can continuously improve the quality of your work, you aren't wasting time in a process that produces defects or spending time in wasteful rework phases. You can move much faster, and we've seen this to be the case with our customers. >>So what is it specifically about container orchestration that gives these benefits? I guess I'm really asking: why are these benefits emerging now, around these technologies? What's enabling them? >>Right. I think it boils down to four things related to orchestration pipelines that are also critical components of successful security programs for our customers in related industries. The first one is policy. One of the core concepts in container orchestration is this idea of declaring what you want to happen, or declaring the way you want things done. One place where declarations are made is policies. So as long as we can define what we want to happen, it's much easier to do complementary activities like enforcement, which is our second enabler. Tools that allow you to define a policy typically have a way to enforce that policy; where this isn't the case, you need to have another way of enforcing and validating the policy's objectives. Mirantis tools allow custom policies to be written, and they also enforce those policies. The third enabler is the idea of a baseline. Having a well-documented set of policies and processes allows you to establish a baseline.
It allows you to know what's normal. Having a baseline allows you to measure against it as a way of evaluating whether or not you're achieving your objectives with container orchestration. The fourth enabler of benefits is continuous assessment, which is about measuring constantly, back to what I said a few minutes ago about the Toyota Way. Measuring constantly helps you see whether your processes and your target end state are being delivered, and as your output deviates from that baseline, your adjustments can be made more quickly. So these four concepts, I think, can really make or break your compliance status. >>That's a really interesting way of thinking about compliance. I had previously thought of compliance mostly as a matter of legally declaring and then trying to do something. But at this point we have methods beyond legal boilerplate for asserting what we want to happen, as you say, and this is actually opening up new ways to detect deviation and to enforce against failure to comply. That's really exciting. So, you've touched on the benefits of container orchestration here, and you've provided some thoughts on what the drivers and enablers are. So where does Mirantis fit in all this? How are we helping enable these benefits? >>Right. Well, our goal at Mirantis is ultimately to make the world's most compliant distribution. We understand what our customers need, and we have developed our product around those needs, and I can describe a few key security aspects of our product. Mirantis promotes this idea of building and enabling a secure software supply chain. The simplified version of that, as it pertains directly to our product, follows a build-ship-run model. At the build stage is Docker Trusted Registry. This is where images are stored, following numerous security best practices. Image scanning is an optional but highly recommended feature to enable within DTR, and image tags can be regularly pruned so that you have the most current, validated images available to your developers. The second, or middle, stage is the ship stage, where Mirantis enforces policies that follow industry best practices, as well as custom image-promotion policies that our customers can write and align to their own internal security requirements. The third and final stage is the run stage, and at this stage we're talking about the engine itself. Docker Engine Enterprise is the only container runtime with FIPS 140-2 validated cryptography, and it has many other security features built in; communications across the cluster, across the container platform, are all secure by default. So this build-ship-run model is one way our product helps support this idea of a secure software supply chain. There are other aspects of the secure supply chain that are more customer-specific, which I won't go into, but that's how our product can help.
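To illustrate the kind of rule a ship-stage image-promotion policy encodes, here is a deliberately simplified sketch. The class, its fields, and the thresholds are invented for illustration; they are not Mirantis' or DTR's actual API.

```python
from dataclasses import dataclass

@dataclass
class ScanSummary:
    """Hypothetical summary of an image scan (illustrative fields only)."""
    image: str
    critical: int      # count of critical-severity findings
    high: int          # count of high-severity findings
    signed: bool       # image signature verified

def may_promote(scan: ScanSummary, max_high: int = 0) -> bool:
    """Example promotion gate: only signed images with no critical
    findings (and at most max_high high-severity findings) move from
    the staging repository toward production."""
    return scan.signed and scan.critical == 0 and scan.high <= max_high

print(may_promote(ScanSummary("registry.example.com/app:1.4.2",
                              critical=0, high=0, signed=True)))
```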
The second big area, so I just touched on the secure supply chain, is STIG certification. A STIG is basically an implementation or configuration guide published by the U.S. government for products used by the U.S. government. It's not exclusive to them, but customers that value security highly, especially in regulated industries, will understand the significance and value that STIG certification brings. In achieving the certification, we've demonstrated compliance or alignment with a very rigid set of guidelines. Our FIPS 140-2 validation of the cryptography and the STIG certification are third-party attestations that our product is secure, whether you're using our product as a government customer, as a customer in a regulated industry, or something else. >>I did not understand what a STIG really was, so that's helpful, because this is not something that I think people in the industry by and large talk about; I suspect because these things are hard and time-consuming to get, so they don't tend to bubble up to the top of marketing speak the way glitzy new features do, features that may or may not be secure. So then, moving on: how has container orchestration changed how your customers approach compliance assessment and reporting? >>Yeah, this has been an interesting experience and observation as we've worked with some of our customers in these areas. I'll call out three areas. One is the integration of assessment tooling into the overall development process; the second is assessment frequency; and the third is how results are being reported, which includes what data is needed to go into the reporting. There are very likely others that could be addressed, but those are three things that I have noticed personally in working with customers. >>What do you mean exactly by integration of assessment tooling? >>Yeah. So our customers all generally have some form of a development pipeline and process, with various third-party and open-source tools that can be inserted at various phases of the pipeline to do things like static source-code analysis, host scanning, image scanning, and other activities. What's not very well established in some cases is how everything fits within the overall pipeline framework. So, for many customers, it ends up in a conversation with us about what commands should be run, with what permissions, where in the environment things should run, how the code that does the scanning gets there, where the data goes once the scan is done, and how it will be consumed. These are real things where we can help our customers understand what integration of assessment tooling really means. >>It is fascinating to hear this, and maybe we can come back to it at the end. But what I'm picking out of the way you speak about this is a kind of re-emergence of these Japanese innovations in factory-floor productivity, like just-in-time delivery, the Toyota miracle, and that kind of thing. Yesterday Anders Wallgren from CloudBees, the CI/CD expert, told me that one of the things he likes to tell his consultees and customers is to put a GoPro on the head of your code and figure out where it's going and how it's spending its time, which is very reminiscent of those 1950s time-and-motion studies, isn't it, the ones that pioneered accelerating the factory floor in the industrial America of the mid-century? The idea that we should be coming back around to this and doing it at light speed with code now is quite fascinating. >>Yeah, it's funny how many of those same principles are really transferable from 50, 60, 70 years ago to today. Quite fascinating. >>So getting back to what you were just talking about, integrating assessment tooling: it sounds like that's very challenging. And you mentioned assessment frequency and reporting.
What is it about those areas that has required adaptation? >>So, on assessment frequency: if we think about what legacy environments looked like not too long ago, compliance assessment used to be a relatively infrequent activity in the form of some kind of audit, whether a friendly peer review, an intercompany audit, or a formal third-party assessment. In many cases these were big, lengthy reviews, full of interview questions, requests for information, periods of data collection, and then the actual review itself. One of the big drawbacks to these lengthy, infrequent engagements is that vulnerabilities would sometimes go unnoticed or unmitigated until these reviews happened. But in this era of container orchestration, with the decomposition of everything in the software supply chain, and with clearer visibility into the various inputs to the build life cycle, our customers can now focus on what tooling and processes can be assembled together in the form of a pipeline that allows constant inspection of a continuous flow of code from start to finish. And they're asking how our product can integrate into their pipelines and their QA frameworks to help simplify this continuous assessment framework. So that addresses the frequency challenge. Now, regarding reporting: our customers have had to reevaluate how results are being reported and the data that's needed in the reporting. The root of this change is in the fact that security has multiple stakeholder groups, and I'll just focus on two of them. One is development, and their primary focus, if you think about it, is really about finding and fixing defects; that's all they're focused on as they push code. The other group, though, is the security project management office, or PMO. This group is interested in what security controls are at risk due to those defects. So the data that you need for these two stakeholder groups is very different; but because it's also related, it requires a different approach to how the data is expressed, formatted, and ultimately integrated, sometimes with different data sources, to be able to serve both use cases.
So essentially we've been following this philosophy of transparency, insecurity. What we mean by that is security isn't or should not be a black box of information on Lee, accessible and consumable by security professionals. Assessment is happening proactively in our product, and it's happening automatically. We're bringing security out of obscurity by exposing the aspects of our product that ultimately have a bearing on your compliance status and then making that information available to you in very user friendly ways. >>It's fascinating. Uh uh. I have been excited about Oscar's since, uh, since first hearing about it, Um, it seems extraordinarily important to have what is, in effect, a ah query capability. Um, that that let's that that lets different people for different reasons formalize and ask questions of a system that is constantly in flux, very, very powerful. So regarding security, what do you see is the basic requirements for container infrastructure and tools for use in production by the industries that you are working with, >>right? So obviously, you know, the tools and infrastructure is going to vary widely across customers. But Thio generalize it. I would refer back to the concept I mentioned earlier of a secure software supply chain. There are several guiding principles behind us that are worth mentioning. The first is toe have a strategy for ensuring code quality. What this means is being able to do static source code analysis, static source code analysis tools are largely language specific, so there may be a few different tools that you'll need to have to be able to manage that, um, second point is to have a framework for doing regular testing or even slightly more formal security assessments. There are plenty of tools that can help get a company started doing this. Some of these tools are scanning engines like open ESCAP that's also a product of n'est open. ESCAP can use CS benchmarks as inputs, and these tools do a very good job of summarizing and visualizing output, um, along the same family or idea of CS benchmarks. There's many, many benchmarks that are published. And if you look at your own container environment, um, there are very likely to be many benchmarks that can form the core platform, the building blocks of your container environment. There's benchmarks for being too, for kubernetes, for Dr and and it's always growing. In fact, Mirante is, uh, editing the benchmark for container D, so that will be a formal CSCE benchmark coming up very shortly. Um, next item would be defining security policies that line with your organization's requirements. There are a lot of things that come out of box that comes standard that comes default in various products, including ours, but we also give you through our product. The ability to write your own policies that align with your own organization's requirements, uh, minimizing your tax surface. It's another key area. What that means is only deploying what's necessary. Pretty common sense. But sometimes it's overlooked. What this means is really enabling required ports and services and nothing more. Um, and it's related to this concept of least privilege, which is the next thing I would suggest focusing on these privileges related to minimizing your tax service. It's, uh, it's about only allowing permissions to those people or groups that excuse me that are absolutely necessary. Um, within the container environment, you'll likely have heard this deny all approach. 
This deny-all approach is recommended here, which means deny everything first and then explicitly allow only what you need. That's a very common thing that's sometimes overlooked in some of our customer environments. And finally, there's the idea of defense in depth, which is about minimizing your blast radius by implementing multiple layers of defense that are also in line with your own risk management strategy. Following these basic principles, and adapting them to your own use cases and requirements, can, in our experience with our customers, go a long way toward a secure software supply chain. >>Thank you very much, Brian. That was pretty eye-opening, and I had the privilege of listening to it from the perspective of someone who has been working behind the scenes on the Launchpad 2020 event. So I'd like to use that privilege to recommend to our listeners, if you're interested in this stuff, and certainly if you work within one of these regulated industries in a development role, a few sessions to check out, which will be easy for you to do today, since everything is available once it's been presented. There's Matt Bentley's live presentation on the secure supply chain, where he demonstrates one possible example of a secure supply chain that permits image signing, scanning, and content trust. You may want to check out the session that I conducted with Anders Wallgren at CloudBees, who talks about these industrial-efficiency, factory-floor, time-and-motion models for assessing where software is, in order to understand what policies can and should be applied to it. And you will probably want to frequent the tutorial sessions in that track to see how Docker Enterprise Container Cloud implements many of these concentric security policies in order to provide, as you say, defense in depth. There's a lot going on in there, and it's fascinating to see it all expressed. Brian, thanks again. This has been really, really educational. >>My pleasure. Thank you. >>Have a good afternoon. >>Thank you too. Bye.

Published Date : Sep 15 2020


Anurag Goel, Render & Steve Herrod, General Catalyst | CUBE Conversation, June 2020


 

>> Announcer: From theCUBE studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation. >> Hi, and welcome to this CUBE Conversation, from our Boston area studio, I'm Stu Miniman, happy to welcome to the program, first of all we have a first time guest, always love when we have a founder on the program, Anurag Goel is the founder and CEO of Render, and we've brought along a longtime friend of the program, Dr. Steve Herrod, he is a managing director at General Catalyst, an investor in Render. Anurag and Steve, thanks so much for joining us. >> Thank you for having me. >> Yeah, thanks, Stu. >> All right, so Anurag, Render, your company, the tagline is the easiest cloud for developers and startups. It's a rather bold statement, most people feel that the first generation of cloud has happened and there were certain clear winners there. The hearts and minds of developers have absolutely been a key thing for many, many companies, and one of those drivers in the software world. Why don't you give us a little bit of your background, and as the founder of the company, what was the opportunity that you saw that had you create Render? >> Yeah, so I was the fifth engineer at Stripe, and helped launch the company and grow it to five billion dollars in revenue. And throughout that period, I saw just how much money we were spending on just hiring DevOps engineers; AWS was a huge, huge management headache, really, there's no other way to describe it. And even after I left Stripe, I was thinking hard about what I wanted to do next, and a lot of those ideas required some form of development and deployment, and putting things in production, and every single time I had to do the same thing over and over and over again, as a developer. So despite all the advancements in the cloud, it was always repetitive work, and that wasn't just for my projects, I think a lot of my friends felt the same way. And so, I decided that we needed to automate some of these new things that have come about as part of the regular application deployment process, and how it evolves, and that's how Render was born. >> All right, so Steve, remember in the early days, cloud was supposed to be easy and inexpensive, I've been saying on theCUBE it's like well, I guess it hasn't quite turned out that way. Love your viewpoint a little bit, because you've invested here; to really be competitive in the cloud, it's tens of billions of dollars a year that need to go into this, right? >> Yeah, I had the fortunate chance to meet Anurag early on, General Catalyst was an investor in Stripe, and so seeing what they did sort of spurred us to think about this, but I think we've talked about this before, also, on theCUBE, even back, long ago in the VMware days, we looked very seriously at buying Heroku, one of the early players, and still around, obviously, at Salesforce in this PaaS space, and every single infrastructure conversation I've had from the start, I have to come back to myself and come back to everyone else and just say, don't forget, the only reason any infrastructure even exists is to run applications. And as we talked about, the first generation of cloud was about, let's make the infrastructure disappear, and make it programmatic, but I think even that, we're realizing from developers, is just still way too low of an abstraction level.
You want to write code, you want to have it in GitHub, and you want to just press go, and it should automatically deploy, automatically scale, automatically secure itself, and just let the developer focus purely on the app, and that's an idea that people have been talking about for 20 years, and should continue to talk about, but I really think with Render, we found a way to make it just super easy to deploy and run, and certainly there are big players out there, but it really starts with developers loving the platform, and that's been Anurag's obsession since I met him. >> Yeah, it's interesting, when I first was reading I'm like "Wait," this reminds me a lot of somebody like DigitalOcean, cloud for developers. Steve, we walked through how the PaaS discussion has gone through so many iterations, what would containerization do for things, or serverless, which from its name says I don't need to think about that underlying layer. Anurag, give us a little bit as to how we should think of Render: you are a cloud, but you're not an infrastructure layer, you're not trying to compete against the laundry list of features that AWS, Azure, or Google have, you're a little bit different than some of the previous PaaS players, and you're not serverless, so, what is Render? >> Yeah, it is actually a new category that has come about because of the advent of containers, and because of container orchestration tools, and all of the surrounding technologies, that make it possible for companies like Render to innovate on top of those things, and provide experiences to developers that are essentially serverless. So by serverless you could mean one of two things, or many things really, but the way in which Render is serverless is you just don't have to think about servers: all you need to do is connect your code to GitHub, and give Render a quick start command for your server and a build command if needed, and we suggest a lot of those values ourselves, and then every push to your GitHub repo deploys a new version of your service. And then if you want to check out pull requests, which are a way developers test out code before actually pushing it to deployment, every pull request ends up creating a new instance of your service, and you can do everything from a single static site, to building complex clusters of several microservices, as well as managed Postgres, things like clustered Kafka and Elasticsearch. And really, one way to think about Render is it is the platform that every company ends up building internally, and spends a lot of time and money to build, and we're just doing it once for everyone and doing it right, and this is what we specialize in, so you don't have to. >> Yeah, just to add to that if I could, Stu, what I think is interesting is that we've had and talked about a lot of startups doing a lot of different things, and there's a huge amount of complexity to enable all of this to work at scale, and to make it work with all the things you look for, whether it's storage or CDNs, or metrics and alerting and monitoring, all of these little startups that we've gone through and big companies alike; if you could just hide that entirely from the developer and just make it super easy to use and deploy, that's been the mission that Anurag's been on from the start, and as you hear it from some of the early customers, and how they're increasing the usage, it's just that love of making it simple that is key in this space.
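To illustrate how little the developer supplies in the workflow Anurag describes, here is a minimal sketch of a deployable Python web service. The build and start commands in the comments are illustrative stand-ins (assumptions, not official platform defaults) for the two values a team would enter; everything else, such as TLS, scaling, deploy-on-push, and pull-request previews, is the platform's job.

```python
# app.py: the entire deployable unit is just code in a Git repository.
# Illustrative platform settings (assumptions, not official defaults):
#   build command: pip install -r requirements.txt
#   start command: gunicorn app:app
# On each git push the platform rebuilds and redeploys; each pull request
# can get its own preview instance of this same service.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Redeployed on every push; no servers to manage.\n"

if __name__ == "__main__":
    # Local development only; in production the start command runs the app.
    app.run(host="0.0.0.0", port=8000)
```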
>> All right, yeah, Anurag, maybe it would really help illustrate things if you could talk a little bit about some of your early customers, their use case, and give us what stats you can about how your company's growing. >> Certainly. So, one of our more prominent customers was the Pete Buttigieg campaign, which ran through most of 2019, and through the first couple of months of 2020. And they moved to us from Google Cloud, because they just could not or did not want to deal with the complexity in today's standard infrastructure providers, where you get a VM and then you have to figure out how to work with it, or even managed Kubernetes; actually, they were trying to run on managed Kubernetes on GKE, and that was too complex or too much to manage for the team. And so they moved all of their infrastructure over to Render, and they were able to service billions of requests over the next few months, just on our platform, and every time Pete Buttigieg went on stage during a debate and said "Oh, go to PeteForAmerica.com," there was a huge spike in traffic on our platform, and it scaled with every debate. And so that's just one example of where really high-quality engineering teams are saying "No, this stuff is too complex, it doesn't need to be," and there is a simpler alternative, and Render is filling in that gap. We also have customers all over, from single indie hackers who are just building out their new project ideas, to late-stage companies like Stripe, where we make sure that we scale with our users, and we give them the things that they would need without them having to "mature" into AWS, or grow into AWS. I think Render is built for the entire lifecycle of a company, which is: you start off really easily, and then you grow with us, and that is what we're seeing with Render, where a lot of customers are starting out simple and then continuing to grow their usage and their traffic with us. >> Yeah, I was doing some research getting ready for this, Anurag, and I saw, not necessarily that you're saying that you're cheaper, but that sometimes price can help, performance can be better. If I were a Heroku customer, or an AWS customer, I guess what might be some of the reasons that I'd be considering Render? >> So, for Heroku, in that comparison, of course, there's a big difference in price, because we think Heroku is significantly overpriced: they have a perpetual free tier, and so their paid customers end up footing the bill for that. We don't have a perpetual free tier that way; we make sure that our paid customers pay what's fair. But more importantly, we have features that just haven't been available in any platform as a service up until now. For example, you cannot spin up persistent storage, block storage, in Heroku, and you cannot set up private networking in Heroku as a developer, unless you pay for some crazy enterprise tier which is 1,500 to 3,000 dollars a month.
And Render just builds all of that into the platform out of the box. And when it comes to AWS, again, there's no comparison in terms of ease of use. We'll never be cheaper than AWS, that's not our goal either; it's our goal to make sure that you never have to deal with the complexity of AWS while still giving you all of the functionality that you would need from AWS. And when you think about applications as applications and services, as opposed to applications that are running on servers, that's where Render makes it much easier for developers and development teams to say "Look, we don't actually need to hire hundreds of DevOps people"; we can significantly reduce our DevOps team, and the existing DevOps team that we have can focus on application-level concerns, like performance. >> All right, so Steve, I guess, a couple questions for you. Number one is, we haven't talked about security yet, which I know is a topic near and dear to your heart, was one of the early concerns about cloud, but now often is a driver to move to cloud. Give us the security angle for this space. >> Yeah, I mean the key thing in all of this space is to get rid of the complexity, and complexity and human error is often, as we've talked about, the number one security problem. So by taking this fresh approach that's all about just the application, and a very simple GitOps-based workflow for it, you're not going to have the human error that typically has misconfigured things coming into there. I think more broadly, the overall notion of the serverless world has also been a very nice move forward for security. If you're only bringing up and taking down the pieces of the application as needed, they're not there to be hacked or attacked. So I think for those two reasons, this is really a more modern way of looking at it, and again, as we've talked about many times, security is the bane of DevOps, it's the slowest part of any deployment, and the more we get rid of that, the more the value proposition becomes both safer and faster to deploy. >> The question I'd like to hear both of you on is, the role of the developer has changed an awful lot. Five years ago, if I talked to companies, and they were trying to bring DevOps to the enterprise, or anything like that, it seemed like they were doomed, but things have matured, we all understand how important the developer is, and it feels like that line between the infrastructure team and the developer team is starting to move, or at least have tools and communication happening between them. I'd love, maybe Steve, if you can give us a little bit of your macro view of it, and Anurag, where that plays for Render too. >> Yeah, and Anurag especially would be able to go into our existing customers. What I love about Render is this is a completely clean-sheet approach to thinking about it: get rid of infrastructure, just make it all go away, and have it be purely there for the developers. Certainly the infrastructure people need to audit and make sure that you're passing the certifications, and make sure that it has acceptable security, and data retention, and all those other pieces, but that becomes Anurag's problem, not the developer's problem. And so that's really how you look at it. The second thing I've seen across all these startups, and you don't typically have this with startups, but with mid-sized companies and above: they don't convert all the way to DevOps.
You typically have people peeling off individual projects, and trying to move faster, and use some new approach for those, and then as those hopefully prove successful, more and more of the existing projects will begin to move over there. And so what Render's been doing, and what we've been hoping from the start, is let's attract some of the key developers and key new projects, and then word will spread within the companies from there. So the answer in a lot of these companies: make developers love you, and make the infrastructure team at least support you. >> Yeah, and that was a really good point about developers and infrastructure, DevOps people, the line between them sort of thinning, and becoming more of a gray area. I think that's absolutely right. I think the developers want to continue to think about code, but then, in today's environment, outside of Render, when we see things like AWS, and things like DigitalOcean, you still see developers struggling. And in some ways, Render is making it easy for smaller companies and developers and startups to use the same best practices that a fully fledged DevOps team would give them, and then for larger companies, again, it makes it much easier for them to focus their efforts on business development and making sure they're building features for their users, and making their apps more secure outside of the infrastructure realm, and not spending as much time just herding servers, and making those servers more secure. To give you an example, Render's machines, where our workloads run, aren't even accessible from the public internet, so there's no firewall to configure, really, for your app, there's no DMZ, there's no VPN. And then when you want a private network, that's just built into Render, along with service discovery. All your services are visible to each other, but not to anyone else. And just setting those things up on something like AWS, and then managing them on an ongoing basis, is a huge, huge, huge cost in terms of resources and people. >> All right, so Anurag, you just opened your first region, in Europe, Frankfurt if I remember right. Give us a little bit as to what growth we should expect, what you're seeing, and how you're going to be expanding your services. >> Yeah, so the expansion to Europe was by far our most requested feature. We had a lot of European users using Render, even though our servers were, until now, based in the US. In fact, one of, or perhaps the largest recipe-sharing site in Italy was using Render, even though the servers were in the US and all their users were in Italy, and when we moved to Europe, it was Christmas come early for them, and they just started moving things over to our European region. But that's just the start. We have to make sure that we make compute as accessible to everyone, not just in the US or Europe but also in other places, so we're looking forward to expanding in Asia, to expanding in South America, and even Africa. And our goal is to make sure that your applications can run in a way that is completely transparent to where they're running, and you can even say "Look, I just want my application to run in these four regions across the globe, you figure out how to do it," and we will.
And that's really the sort of dream that a lot of platforms as a service have been selling, but haven't been able to deliver yet, and I think, again, Render is at this point in time where we can work on those crazy, crazy dreams that we've been selling all along, and actually make them happen for companies that have been burned by platforms as a service before. >> Yeah, I guess it brings up a question. You talk about platforms, and one of the original ideas of PaaS and one of the promises of containerization was, I should be able to focus on my code and not think about where it lives, but part of that was, if I need to be able to run it somewhere else, or want to be able to move it somewhere else, that I can. So that whole discussion of portability, in the Kubernetes space, is definitely something that gets talked about quite a bit. And can I move my code? So where does multicloud fit into your customers' environments, Anurag, and is it once they come onto Render, they're happy and it's easy and they're just doing it, or are there things that they develop on Render and then run somewhere else also, maybe for a region that you don't have? How does multicloud fit into your customers' world? >> That's a great question, and I think that multicloud is a reality that will continue to exist, and just grow over time, because not every cloud provider can give you every possible service you can think of, obviously, and so we have customers who are using, say, Redshift on AWS, but they still want to run their compute workloads on Render. And as a result, they connect to AWS from their services running on Render. The other thing to point out here is that Render does not force you into a specific paradigm of programming. So you can take your existing apps, that have been containerized or not, and just run them as-is on Render, and then if you don't like Render for whatever reason, you can take them away without really changing anything in your app, and run them somewhere else. Now obviously, you'll have to build out all the other things that Render gives you out of the box, but we don't lock you in by forcing you to program in a way that, for example, AWS Lambda does. And when it comes to the future of multicloud, I think Render will continue to run in all the major clouds, as well as our own data centers, and make sure that our customers can run the appropriate workloads wherever they are, as well as connect to them from their Render services with ease. >> Excellent. >> And maybe I'll make one more point if I could, Stu, which is one thing I've been excited to watch: in any of these platforms as a service, you can't do everything yourself, so you want the opensource package vendors and other folks to really buy into this platform too, and one exciting thing we've seen at Render is a lot of the big opensource packages are saying "Boy, it'd be easier for our customers to use our opensource if it were running on Render." And so this ecosystem and this set of packages that you can use will just be easier and easier over time, and I think that's going to lead to, at the end of the day, people being able to move their applications and have them run anywhere, and I think by having those services here, ultimately they're going to deploy to AWS or Google or somewhere else, but it is really the right abstraction layer for letting people build the app they want, and that's going to be future-proof.
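To ground the multicloud point, here is a minimal sketch of the pattern Anurag mentions: a service running on one platform connecting out to Redshift on AWS, with every environment-specific value kept in environment variables so the same code runs unchanged wherever it is deployed. The variable names are illustrative assumptions; Redshift speaks the PostgreSQL wire protocol, so the standard psycopg2 driver applies.

```python
# Illustrative 12-factor style connection from a Render-hosted service to an
# external AWS data store. The environment variable names are assumptions;
# set them per environment rather than baking values into the image.
import os

import psycopg2  # Redshift is PostgreSQL-compatible at the wire level

conn = psycopg2.connect(
    host=os.environ["REDSHIFT_HOST"],  # e.g. the cluster endpoint in AWS
    port=int(os.environ.get("REDSHIFT_PORT", "5439")),  # Redshift default
    dbname=os.environ["REDSHIFT_DB"],
    user=os.environ["REDSHIFT_USER"],
    password=os.environ["REDSHIFT_PASSWORD"],
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT current_date")
    print(cur.fetchone())
```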
>> Excellent, well Steve and Anurag, thank you so much for the update, great to hear about Render, look forward to hearing more updates in the future. >> Thank you, Stu. >> Thanks, Stu, good to talk to you. >> All right, and stay tuned, lots more coverage, if you go to theCUBE.net you can see all of the events that we're doing with remote coverage, as well as the back catalog of what we've done. I'm Stu Miniman, thank you for watching theCUBE. (calm music)

Published Date : Jun 8 2020


Brian Kumagai & Scott Beekman, Toshiba Memory America | CUBE Conversation, December 2018


 

>> Hi, I'm Peter Burris, and welcome to another CUBE Conversation from theCUBE Studios in Palo Alto, California. In this conversation, we're going to build upon some other recent conversations we've had, which explored this increasingly important relationship between semiconductor memory, or flash, and new classes of applications that are really making life easier and changing the way that human beings interact with each other, both in business as well as in consumer domains. And to explore these crucial issues, we've got two great guests. Brian Kumagai is the director of business development at Toshiba Memory America. Scott Beekman is the director of managed flash at Toshiba Memory America. Well, gentlemen, welcome to theCUBE. So I'm going to give you my perspective. I think this is pretty broadly held, generally, that as a technology gets more broadly adopted, people get experience with it. And as designers, developers, and users gain experience with a technology, they start to apply their own creativity, and it starts to morph and change, and pull and stretch the technology in a lot of different directions. And that leads to increased specialization. That's happening in the flash world. Have I got that right, Scott? >> Yes, you know, the great thing about flash is just how ubiquitous it is and how widely it's used. And if you think about any electronic device, it needs a brain, a processor, and it needs to remember what it's doing, memory, and memory is what we do. And so we see it used in, you know, so many applications, from smartphones, tablets, printers, laptops, you know, streaming media devices. And so, you know, we see that technology used, for example, like eMMC memory: it's a low-power memory designed for, like, smartphones that aren't plugged in. And so when you see smartphones, one point five billion smartphones, it drives that technology, and then it migrates into all kinds of other applications as well. And then we see new technologies that come and replace that, like UFS, Universal Flash Storage. It's intended to be the high-performance replacement for eMMC, and so now that's also migrating its way through smartphones and all these other applications. >> So there's a lot of new applications that are requiring new classes of flash, but there's still a fair amount of applications that require traditional flash technology. These are not coming in and squashing old flash, or traditional flash, or other types of parts, but amplifying their use in specialized ways. Is that about right, Brian? >> That's about right. It's interesting that these days no one really talks about the original NAND flash that was first developed back in nineteen eighty seven, and that was based on a single-level cell, or SLC, technology, which today still offers the highest-reliability and fastest-performing NAND device available in the market. And because of that, designers have found this type of memory to work well for storing boot code and some levels of operating system code. And these are in a wide variety of devices, both in consumer and industrial segments, anything from set-top boxes to streaming video. You've got your printers, your AI speakers, just a numerous breadth of products. >> I've got to also believe a lot of IoT, a lot of industrial edge devices, are going to feature a lot of these kinds of parts: maybe disconnected, maybe connected, but needing low power, very high speed, low cost, highly reliable. >> That's correct.
And because these particular devices are still offered in lower densities, they do offer a very cost-effective solution for designers today. >> Okay, well, let's start with one of the applications that is very, very popular in the press: automated driving, autonomous vehicles on the road. There are autonomous vehicles, but there are autonomous robots more broadly. Let's start with autonomous vehicles. Scott, what types of flash-based technologies are ending up in cars, and why? >> Okay, so we've seen a lot of changes within vehicles over the last few years. You know, increasing storage requirements for, like, infotainment systems, more sophisticated navigation, voice recognition, instrument clusters moving to digital displays, and then ADAS features, you know, collision avoidance, things like that, and all that's driving more and more memory storage and faster-performance memory. And in particular, what we've seen for automotive is it's basically adopting the type of memory that you have in your smartphone. So smartphones have for a long time used this eMMC memory, and that has made its way, migrated its way, into automotive. And now, as smartphones have been transitioning to UFS, in fact, Toshiba was the first to introduce samples of UFS in early two thousand thirteen, and then you started to see it in smartphones in two thousand fifteen, well, that's now migrating into automotive as well. They need to take advantage of the higher performance, the higher densities, and so Toshiba, we're supporting, you know, this growth within automotive as well. >> But automotive is a market, and again, I think it's a great distinction you made, it's not just autonomous. Even when the human being is still driving, it's the class of services provided to that driver, both from an entertainment, safety, and overall experience standpoint, that is driving the volume very aggressively forward, and the ability to demonstrate what you can do in a car is having significant implications on the other classes of applications that we envision for some of these high-end parts. How is the experience being incorporated into an automotive application, or set of applications, starting to impact how others envision how their consumer products can be made better, a better experience, safer, etcetera, in other domains? >> Uh, well, yeah, I mean, we see that all kinds of applications are taking advantage of these technologies, like even AR and VR, for example. Again, it's all taking advantage of this idea of needing higher, larger densities of storage at a lower cost, with low power, good performance, and all these applications are taking advantage of that, including automotive. And if you look at automotive, you know, it's not just within the vehicle. Actually, it's estimated, projected, that autonomous vehicles will need, like, one to three terabytes of storage within the vehicle. But then all the data that's collected from cameras and sensors needs to be uploaded to the cloud, and all that needs to be stored. So that's driving storage to data centers, because you basically need to learn from that to improve the software. >> For the time being. >> Yeah, exactly. So all these things are driving more and more storage, both within the devices themselves, like a car is like a device, but also in the data centers as well.
>> So, Brian, take us through some of the decisions that a designer has to go through to start to marry some of these different memory technologies together to create, whether it's an autonomous car, or perhaps something a little bit more mundane, say a computing device. How does a designer think about how these fit together to serve the needs of the user and the application? >> Um, I think these days, you know, a lot of new products require a lot of features and capabilities, so I think a lot of input or thought is going into the memory size itself. You know, I think software guys are always wanting to have more storage, to write more code, that sort of thing. So I think that is one step: they think about the size of the package, and then cost is always a factor as well. So, you know, the nice thing about Toshiba is we do offer a broad product breadth, producing all types of NAND flash memory that'll fit everyone's needs. >> So give us some examples of what that product line looks like and how it maps to some of these application needs. >> So, like I mentioned, we offer the lower-density SLC NAND that starts at a one-gigabit density and then maxes out at a thirty-two-gigabit die. And as you get into multi-level cell, or triple-level cell, or QLC-type devices, you're able to use memory where a single die could be up to one point three three terabits. So there's such a huge range of memory devices available today. >> And so if we think about where the memory devices are today and where applications are pulling us, what kind of stuff is on the horizon, Scott? >> Well, one is just more and more storage for smartphones. We want more, you know, two fifty six gigabyte, five twelve gigabyte, one terabyte. And in particular, for a lot of these mobile devices, you know, like we mentioned, UFS is really where things are going, and continuing to advance that technology, continuing to increase the performance, continuing to increase the densities. And so, you know, that enables a lot of applications that we actually haven't envisioned at this point. And we know autonomous vehicles are important. I'm really excited about that, because I'll need that when I'm ninety, you know, to drive anywhere I want, wherever I want to go. And then there's wherever AI is going, so it's a lot of things. So you know, we have some idea now, but there are things that we can't envision, and this technology enables that, and enables other people who can see: how do I take advantage of that, the faster performance, the greater densities, the lower cost per bit? >> So if we think about general compute, especially some of these use cases we're talking about, where the customer experience is a function of how fast a device starts up, or how fast the service starts up, or how rich the service can be in terms of different classes of input, voice or visual or whatever else it might be. And if we think about these data centers, where the closed loop between the processing and the inferencing of some of these models affects what that transaction is going to do, we're talking about lower latency.
That's driving a lot of designers to think about how they can start moving certain classes of function closer to the memory, both from a security standpoint and from an error-correction standpoint. Talk to us a little bit about the direction that Toshiba imagines, the differentiability of future memories relative to memories today, relative to where they've been. What kinds of features and functions are being added to some of these parts to make them that much more robust in some of these applications? >> I think, as you mentioned, the robustness of the memory itself: I think that actually some current memory devices will allow you to identify the number of bits that are being corrected, and that kind of gives an indication of the integrity or the reliability of a particular block of memory. And I think as users are able to get early detection of this, they can do things to move the data around and then make their overall storage more reliable. >> Yeah, I mean, we continue to figure out how to cram more bits within a given space, you know, moving from SLC to MLC, to TLC, and on to QLC. That's all enabling greater storage, lower cost, and then, as we just talked about from the beginning, there's all kinds of differentiation in terms of flash products that are really tailored for certain things. Some are focused on really high performance and give up some power, and others need a certain balance of that, where, you know, for a mobile device, a handheld device, you give up some performance for less power. And so there's a whole spectrum. For some, you know, endurance is incredibly important. So we have a full breadth of products that address all those particular needs. >> So for the designer, it's just: whatever I need, I can come to you. >> Yeah, that's right. Toshiba has the full breadth of products available. >> All right, gentlemen, thank you very much for being on theCUBE. Brian Kumagai, director of business development at Toshiba Memory America, and Scott Beekman, director of managed flash at Toshiba Memory America. Again, thanks very much for being on theCUBE. >> Thank you. >> Thank you. And this closes this CUBE Conversation. I'm Peter Burris. Until next time, thank you very much for watching.

Published Date : Jan 30 2019

Anthony "Tony G" Giandomenico, Fortinet & FortiGuard Labs | CUBEConversation, August 2018


 

(Intense orchestral music) >> Hi, I'm Peter Burris, and once again welcome to a CUBE Conversation from our beautiful studios here in Palo Alto, California. For the last few quarters I've been lucky enough to speak with Tony Giandomenico, who's the Senior Security Strategist and Researcher at Fortinet, specifically in the FortiGuard Labs, about some of the recent trends that they've been encountering and some of the significant, groundbreaking, industry-wide research they do on security threats and trends in vulnerabilities. And once again, Tony's here on theCUBE to talk about the second quarter report. Tony, welcome back to theCUBE. >> Hey, Peter, it's great to be here man, you know, sorry I actually couldn't be right there with you though, I'm actually in Las Vegas for the Black Hat DEF CON conference this time so, I'm havin' a lot of fun here, but definitely missin' you back in the studio. >> Well, we'll getcha next time, but, it's good to have you down there because, (chuckles) we need your help. So, Tony, let's start with the obvious, the second quarter report, this is the Fortinet threat landscape report. What were some of the key findings? >> Yeah, so there's a lot of them, but I think some of the key ones were, one, you know, cryptojacking is actually moving into the IoT and media device space. Also, we did an interesting report, that we'll talk about a little bit later within the actual threat report itself, really around the number of vulnerabilities that are actually actively being exploited over that Q2 period. And then lastly, we did start to see the bad guys using agile development methodologies to quickly get updates into their malware code. >> So let's take each of those in turn, because they're all three crucially important topics, starting with crypto, starting with cryptojacking, and the relationship between it and IoT. The world is awash in IoT, it's an especially important domain, it's going to have an enormous number of opportunities for businesses, and it's going to have an enormous impact on people's lives. So as these devices roll out, and they get more connected through TCP/IP and related types of protocols, they become a threat. What's happening? >> Yeah, what we're seeing now is, I think the bad guys continue to experiment with this whole cryptojacking thing, and for the audience who may not be familiar with cryptojacking, it's really malware that helps the bad guys mine for cryptocurrencies, and we're seeing that cryptojacking malware move into those IoT devices now, as well as those media devices. And, you know, you might be saying well, are you really getting a lot of resources out of those IoT devices? Well, not necessarily, but, like you mentioned Peter, there's a lot of them out there, right, so the strength is in the numbers, so I think if they can get a lot of IoT devices compromised into an actual botnet, really the strength's in the numbers, and I think you can start to see a lot more of those CPU resources being leveraged across an entire botnet. Now adding onto that, we did see some cryptojacking affecting some of those media devices as well; we have a lot of honeypots out there. Examples would be, say, different types of smart TVs: a lot of these software frameworks have kind of plugins that you can download, and at the end of the day these media devices are basically browsers.
And what some folks will do is they'll kind of jailbreak the stuff, and they'll go out there and maybe, for example, they want to be able to download the latest movie, they want to be able to stream that live, it may be a bootleg movie; however, when they go out there and download that stuff, often malware actually comes along for the ride, and we're seeing cryptojacking being downloaded onto those media devices as well. >> So, the act of trying to skirt some of the limits that are placed on some of these devices often gives the bad guys an opportunity to piggyback on top of that file that's coming down. So, don't break the law, period, and copyright is the law, because when you do, you're likely going to be encountering other people who are going to break the law, and that could be a problem. >> Absolutely, absolutely. And then I think also, for folks who are actually starting to do that, we talk a lot about segmentation, segmenting your network in your corporate environment, things of that nature, but those same methodologies now have to apply at your home, right? Because at your home office, your home network, you're actually starting to build a fairly significant network, so, kind of separating a lot of that stuff from your work environment, because everybody these days seems to be working remotely from time to time. So, the last thing you want is to create a conduit for you to actually get malware on your machine, one that maybe you go and use for work resources; you don't want that malware then to end up in your work environment. >> So, cryptojacking: exploiting IoT devices to dramatically expand the amount of processing power that can be applied to doing bad things. That leads to the second question. There's this kind of notion, it's true about data, but I presume it's also true about bad guys and the things that they're doing, that there are these millions and billions of files out there that are all bad, but your research has discovered that yeah, there are a lot, but there are a few that are especially responsible for the bad things that are being done. What did you find out about the actual scope of vulnerabilities from a lot of these different options? >> Yeah, so what's interesting is, I mean we always play this, and I think all the vendors talk about this cyber hygiene, you got to patch, got to patch, got to patch, well that's easier said than done, and what organizations end up doing is actually trying to prioritize what vulnerabilities they really should be patching first, 'cause they can't patch everything. So we did some actual research where we took about 108 thousand plus vulnerabilities that are publicly known, and we wanted to see which ones are actually actively being exploited over an actual quarter, in this case Q2 of this year, and we found out only 5.7% of those vulnerabilities were actively being exploited. So this is great information, I think, for the IT security professional: leverage these types of reports to see which particular vulnerabilities are actively being exploited, because the bad guys are going to look at the ones that are most effective, and they're going to continue to use those, so prioritize your patching really based on these types of reports. >> Yeah, but let's be clear about this Tony, right, that 108 thousand, looking at 108 thousand potential vulnerabilities, 5.7% is still six thousand possible sources of vulnerability.
(Tony laughs) >> So, prioritize those, but that's not something that people are going to do in a manual way, on their own, is it? >> No, no, no, not at all, so there's a lot of stuff that goes into the automation of those vulnerabilities and things of that nature, and there are different types of methodologies that they can use, but at the end of the day, if you look at these types of reports, and you can read some of the top 10 or top 20 exploits out there, you can determine, hey, I should probably start patching those first. And even, what we see, we see also this trend now of, once the malware's in there, it starts to spread laterally, oftentimes with worm-like spreading capabilities; it will look for other vulnerabilities to exploit, and move the malware into those systems laterally in the environment. So, just even taking that information and saying, oh, okay, so once the malware's in there it's going to start leveraging X, Y, Z vulnerability, let me make sure that those are actually patched first. >> You know Tony, the idea of cryptojacking IoT devices and utilizing some new approaches, new methods, new processes to take advantage of that capacity, the idea of lateral movement through that 5.7% of the potential vulnerabilities, suggests that even the bad guys are starting to accrete a lot of new experience, new devices, new ways of doing things, taking what they've already learned about some of these vulnerabilities and extending it to different domains. Sounds like the bad guys themselves are starting to develop a fairly high degree of sophistication in the use of advanced application development methodologies, 'cause at the end of the day, they're building apps too, aren't they? >> Yeah, absolutely, it's funny, I always use this analogy: from the good-guy side, for us to have a good strong security program, of course we need technology controls, but we need the expertise, right, so we need the people, and we also need the processes, right, so very good, streamlined sorts of processes. Same thing on the bad-guy side, and this is what we're starting to see: a lot more agile development methodologies that the bad guys (clears throat) are actually using. Prior to this, well, I think it still happens, but earlier on, for the bad guys to be able to circumvent a lot of these security defenses, they were leveraging polymorphism, modifying those kinds of malware fairly quickly to evade our defenses. Now, that still happens, and it's very effective still, but I think the industry as a whole is getting better. So the bad guys, I think, are starting to use better, more streamlined processes to update their malicious software, their malicious code, to then always try to stay one step ahead of the actual good guys. >> You know it's interesting, we did a, what we call a crowd chat yesterday, which is an opportunity to bring our communities together and have a conversation about a crucial issue, and this particular one was about AI and the adoption of AI, and we asked the community: What domains are likely to see significant investment and attention? And a domain that was identified as number one was crypto, and a lot of us kind of stepped back and said well why is that, and we kind of concluded that one of the primary reasons is that the bad guys are as advanced, and have an economic incentive to continue to drive the state of the art in bad application development, and that includes the use of AI, and other types of technologies.
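A minimal sketch of the kind of automated prioritization just discussed: cross-referencing a vulnerability scanner's findings against a feed of actively exploited CVEs, so the small actively exploited subset gets patched first. Both input files and their field names are hypothetical stand-ins for a real scanner export and threat-intelligence feed.

```python
# Hypothetical patch-prioritization sketch: rank findings by whether the CVE
# appears in an actively-exploited feed, instead of treating all equally.
import json

with open("scanner_export.json") as f:
    findings = json.load(f)      # assumed: [{"host": ..., "cve": ...}, ...]

with open("actively_exploited.json") as f:
    active = set(json.load(f))   # assumed: ["CVE-2017-0144", ...]

urgent = [v for v in findings if v["cve"] in active]
deferred = [v for v in findings if v["cve"] not in active]

print(f"patch first ({len(urgent)} findings):")
for v in sorted(urgent, key=lambda v: (v["cve"], v["host"])):
    print(f"  {v['host']}: {v['cve']}")
print(f"schedule for the normal cycle: {len(deferred)} findings")
```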
So, as you think about prices for getting access to these highly powerful systems, including cryptojacking, going down, the availability of services that allow people to exploit these technologies, the expansive use of data, the availability of data everywhere, suggests that we're in a pretty significant arms race for how we utilize these new technologies. What's on the horizon, do you think, over the course of the next few quarters? And what kinds of things do you anticipate that we're going to be talking about, what headlines will we be reading about over the course of the next few quarters as this war game continues? >> Well I think a lot of it is, and I think you touched upon it, AI, right, so using machine learning. In the industry, in cyber, we are really excited about this type of technology; it's still immature, we still have a long way to go, but it's definitely helping at being able to quickly identify these types of malicious threats. But, on the flip side, the bad guys are doing the same thing: they're leveraging that same artificial intelligence, that machine learning, to be able to modify their malware. So I think we'll continue to see more and more malware that might be AI focused, or AI driven. But at the same time, we've been talking about this a little bit, this swarm type of technology where you have these larger botnet infrastructures, and instead of the actual mission of a malware being very binary, where if it's in the system, it's either yes or no, it does or it doesn't, and that's it, I think we'll start to see a little bit more of: what's the mission? And whatever that mission is, using artificial intelligence then to be able to determine, well, what do I need to do to be able to complete that mission? I think we'll see more of that type of stuff. So with that though, on the good-guy side, for the defenses, we need to continue to make sure that our technology controls are talking with each other, and that they're making some automated decisions for us. 'Cause I'd rather give a security professional working in a SOC an alert saying: hey, we've detected a breach, and I've actually quarantined this particular threat at these particular endpoints, or we've contained it in this area, rather than: hey, you got an alert, you got to figure out what to do. Minimize the actual impact of the breach, let me fight the attack a little longer, give me some more time. >> False positives are not necessarily a bad thing when the risk is very high. Alright. >> Yeah, absolutely. >> Tony Giandomenico, Senior Security Strategist and Researcher at Fortinet, the FortiGuard Labs, enjoy Black Hat, talk to you again. >> Thanks Peter, it's always good seein' ya! >> And once again this is Peter Burris, CUBE Conversation from our Palo Alto studios, 'til next time. (intense orchestral music)

Published Date : Aug 13 2018

SUMMARY :

Peter Burris sits down with Tony Giandomenico, Senior Security Strategist and Researcher at Fortinet's FortiGuard Labs, around Black Hat to walk through the quarter's threat landscape findings: the sheer volume of potential vulnerabilities observed, the rise of cryptojacking against IoT devices, malware that spreads laterally with worm-like behavior, and attackers adopting agile development practices and AI to stay a step ahead. They close on why defenders should prioritize patching the most-exploited vulnerabilities first and automate containment decisions so analysts can respond faster.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Tony Giandomenico | PERSON | 0.99+
Tony | PERSON | 0.99+
Peter Burris | PERSON | 0.99+
Peter | PERSON | 0.99+
5.7% | QUANTITY | 0.99+
Fortinet | ORGANIZATION | 0.99+
August 2018 | DATE | 0.99+
second question | QUANTITY | 0.99+
Las Vegas | LOCATION | 0.99+
FortiGuard | ORGANIZATION | 0.99+
Palo Alto | LOCATION | 0.99+
108 thousand | QUANTITY | 0.99+
Palo Alto, California | LOCATION | 0.99+
FortiGuard Labs | ORGANIZATION | 0.99+
one | QUANTITY | 0.99+
yesterday | DATE | 0.98+
six thousand possible sources | QUANTITY | 0.98+
top 10 | QUANTITY | 0.97+
108 thousand potential vulnerabilities | QUANTITY | 0.96+
each | QUANTITY | 0.96+
Black Hat DEF CON Conference | EVENT | 0.95+
Anthony "Tony G" | PERSON | 0.94+
about 108 thousand plus vulnerabilities | QUANTITY | 0.94+
one step | QUANTITY | 0.93+
top 20 exploits | QUANTITY | 0.92+
Q2 | DATE | 0.86+
millions and | QUANTITY | 0.86+
first | QUANTITY | 0.84+
billions of files | QUANTITY | 0.83+
CUBEConversation | EVENT | 0.82+
Giandomenico | ORGANIZATION | 0.81+
Q2 of | DATE | 0.75+
three crucially important topics | QUANTITY | 0.74+
few quarters | DATE | 0.72+
this year | DATE | 0.71+
agile | TITLE | 0.7+
Black Hat | TITLE | 0.62+
second quarter | QUANTITY | 0.61+
quarters | DATE | 0.6+
Fortinet | TITLE | 0.49+
next | DATE | 0.49+

Lily Chang, VMware | Women Transforming Technology (wt2) 2018


 

>> Narrator: From the VMware Campus in Palo Alto, California, it's The Cube covering Women Transforming Technology. (upbeat music) >> I'm Lisa Martin with the Cube and we are on the ground in Palo Alto with VMware for the third annual Women Transforming Technology event. Excited to welcome back to the Cube, Lily Chang, VP of strategic transformation here at VMware. Lily, it's great to have you back. >> Thank you, it's fantastic to have this event again, for the third time in its history. >> Yes, in fact, I read online that it was sold out within hours, and the keynote this morning was... >> Lily: Fantastic. >> Fantastic. >> And very inspiring. >> Very inspiring. For those of you who don't know, Laila Ali was the keynote this morning. What a great analogy, not just being a sports star, but being someone, a woman, in a very male-dominated industry, who just had this sort of natural confidence, who just knew what her purpose was. I thought that was a very inspiring message for those of us in tech as well. >> Yeah, and it's also very key that women leaders such as herself are willing to come out and share the story, be the role model, set a path, and show the example for the younger generation to follow and look up to. That is incredible. >> I love one of the things she said, Lily, when she said she still sometimes kind of loses sight and has to reignite that inner warrior. I thought that was a really important and empowering message, too, that even really strong women who are naturally confident still have times where they have to remind themselves of what their purpose is. I just thought that was a very impactful statement, regardless of the industry you're in. >> That is absolutely true. I mean, we're only human, right? So every one of us experiences challenges in life, so there are times, for all genders, when you're going to bump into roadblocks, you're going to bump into challenges, and then you need to self-motivate and lift yourself up and rise to the occasion of the challenge. A lot of times these challenges, and I'm sure it's true for her as well, actually make you a better leader. >> Definitely. So you are one of the board members of Women Who Code. This is something that's very near and dear to VMware's heart. VMware got involved in 2016 when it was about a 10,000-person organization. >> Actually, a little bit less than that. >> A little less than 10,000? And now it's? >> We were very young. >> And now how large is it? >> It's 137,000 members globally, 20 countries, 60 cities. >> So what's the mission of Women Who Code? >> The mission is very simple. Basically we want to help all women be inspired and excel in their technical career journey and in their career development. So that's the simple mission statement, and toward that, a very critical thrust that Women Who Code has, which coincides with VMware's community vision, is basically the technical women's community. They were very young, but we saw the passion, we saw the commitment, and we believed this was a great mutual opportunity, because we want to be a global company. We want to not only build leadership within the U.S., we wanted it in EMEA, in APJ. We have R & D research offices everywhere, and so we basically collaborated with Women Who Code, and that has been a very successful leadership program that we run with them.
And they basically blossomed under the collaboration, and we're not the only company, but we are one of the two founding partners and sponsors for Women Who Code. >> It's grown dramatically, as you said. >> Lily: Dramatically. >> Yeah, in just the couple of years since VMware has been involved. What are some of the things that have surprised you, not just the growth, but some of the lessons you've learned by watching these other women come into this organization, be inspired, and impact their careers? >> So I see the story both in VMware's women leadership and in the outside community's women leadership, right? What I see is that all these women basically have the passion, but they were a little bit worried about letting it come out. When you're actually in a community, you're supporting one another, and you have that platform where they feel very comfortable to communicate, network, share, and learn. So that is a very powerful thing, and I see the growth and the boost of their potential; it's kind of like we lift them up all of a sudden, right? One of the recent stories, for example, on the external side: we have a Canada city director, and these are all volunteer positions, right? Within a year, she actually moved from a line management position to a director position, because the city director role exposes you to the community view, and it encourages you and challenges you to build hands-on soft leadership skills. A lot of technical women have the technology and the technologist mentality, but you need to accompany that with the soft skills, and the combination of the two makes a perfect combination. And we see a lot of that in our VMware women as well. So we set out to do cities in China; we actually opened China for Women Who Code. It was zero members, and now it has like 3,000-4,000 members, in China, which is a little bit of a difficult, mysterious place, right? But we made it happen in Beijing. We made it happen in Shanghai. And it's participated in by a lot of the local companies, not just multinational companies. And in India we opened it up, and India has now blossomed like crazy; since VMware's opening in Bangalore, three other cities have joined in. India is basically a rose at its blossoming peak right now. And we also opened up Sofia, where we work with Women Who Code to do a corporate leadership program. And within the first year, where we appointed some of the city directors from our women, we have experienced about a 50% promotion rate and pretty much a 100% retention rate. >> Lisa: Wow. >> Yeah. >> 50% promotion and 100% retention is incredible. >> It is incredible, so I see that miracle happening, and I became very convinced after year one, and I've also learned that I'm not the only leader in the world who believes in this. That's the reason why they blossom like crazy. >> I imagine, growing up in China, I was reading a little bit about your story, that the expansion in China must mean something a bit personal for you as well. It sounds like you were a bit fortunate, though, with your parents saying "hey," you had two choices when you graduated from college, flight attendant or secretary, and your parents thought "she should have more options than that."
So maybe kind of full circle, how was that for you when those two chapters in Shanghai and Beijing opened? >> To me, I feel like that is what the 21st century is supposed to be. I wish it were true in the 19th century. But bottom line, minor correction: I actually did interview for those two positions. I was rejected. I was not qualified. >> Lisa: Lucky VMware. >> Yeah. (laughing) Actually, lucky United States. >> There you go. >> So basically my dad and my mom raised me very differently in that era. They gave me kind of almost a virtual space where I did not feel there was any difference between genders. They always made me feel like I was an equal citizen in the family; I had the same speaking rights, and my dad and my mom both fostered that in me. So when they learned that I could not get those two possible jobs, and I was very well educated, graduated from the best university on the island, quoting my dad, he had "invested in me," right? So he basically said, well, what he needed to do was continue to invest in me. That's the reason why he exported me to the United States, and then I went to graduate school here, and since then I've been very blessed. So this is almost like the Beijing and Shanghai success of Women Who Code; it's almost like I'm giving it back to my origin, right? And I'm bringing a lot of the blend between the western and eastern cultures together, right? To open that up, which is fantastic, and in the global environment to make it very diverse and inclusive at the same time. >> So you had really strong parents who instilled this belief in you that you could do anything. When we look at some of the statistics that show that less than 25% of technical roles are held by women, and then we also look at the retention, the attrition is so high in tech. What were some of the things that kept you focused on your dreams? How did you foster that persistence? And I'm wondering what your advice is for women who are in tech and might be thinking of leaving. >> Well, very interesting. So the first advice I have is: believe in yourself and dream very big. And the second thing is: never be afraid of change. Change is always a good thing, and that has been true throughout my growth in a foreign country as well as here, right? I remember when I was in the university, even though it was the best university, I actually changed department and major twice, and the third time I attempted to do it, because at that time I told my dad, "hey, I heard there's this cool computer science thing I really want to go do," he did some calculation and said, "look, if you transfer again, the third time, it will take you five to six years to graduate," so he said, "no, just stick with it, and then later on if you want to move, go ahead," right? So in grad school I changed again, and I was very blessed that there were a lot of sponsors and mentors, not just my parents, throughout my growth and throughout my journey in my career, who really fostered and helped me, supported me, and gave me a lot of advice. So I'm a big believer in mentorship and sponsorship, and that's what I believe the technical women's community will offer. It's kind of genetically built within the philosophy of the community, right? It doesn't matter which forum.
It is basically bringing the common belief and the vision together, and it's basically peer-to-peer mentorship, and because there are different walks and different levels of women and technologists in that community, you can actually do the tiering and peering and help people to either be inspired, move into a new career journey, or elevate themselves. So I'm a very big believer in mentorship and sponsorship. >> Speaking of change, we talked about the changes you've made previously. You've made a big change, from R & D to finance. >> Lily: That's correct. >> The very first at VMware to do that? >> Lily: Yes, very first... >> Tell us about the impetus, what excited you, and what you are benefiting from. >> Well, I'd been in the R & D career for a couple of decades, and every ten years I look at my resume and try to have an out-of-body experience to advise myself and say: what would you do differently, so that you are actually set up for the growth of the next ten years, right? So when I looked at my career about a year ago, I basically said to myself, "well, you've got enough R & D experience, you've made enough investment. For you to be on the next journey you really need to have the business experience." And even though, with VMware's support and sponsorship, I did go back to business school and got the Berkeley business certificate, and I've got lots of great executives supporting me, the reality is that if you don't do that role day in and day out, and really experience it blended into your DNA, it's not going to come naturally, right? And I don't want to be an imposter, so essentially I made a fairly major determination that I wanted to switch into the business world. I'm kind of a unique case in the sense that I'm both over-qualified and under-qualified at the same time. I'm very lucky that I have a lot of executive sponsorship, and I was able to find a perfect role that allows me to learn and excel and be inspired in my role today, and that is something fantastic. Only after I transferred did I learn that I'm actually the first employee in VMware's history to move from R & D to finance, and I still remain the only one so far. I hope that my success can inspire more R & D people, because I truly believe that a lot of times, when you can look at things from the other lens, it simply makes you able to do your original job better. Like right now, I would tell my old R & D self that some of the decisions I made, I would have debated and petitioned and argued and thought about in a completely different way, because my thinking has shifted, which I think is a very healthy shift. >> I agree, and you know, one of the things that Laila Ali said this morning was basically encouraging people to get uncomfortable to be comfortable, and you talked about change; absolutely there are so many opportunities, and we know that on one level, but it can be pretty intimidating to change something. But I love also what you said. I think there's a parallel in saying that now that you have this business experience, looking through that other lens at R & D, you would have made decisions differently, and I think that is very reflective, and an opportunity for organizations to invest in creating a more diverse executive team, when you bring in that thought diversity. >> Lily: Exactly.
>> And it just opens the door, not just seeing things through different lenses and perspectives, whether we're talking about gender or whatnot, but the profitability that can come from that alone is tremendous. >> Yeah, so for example, there is a statistic, actually based on McKinsey, that a company with a reasonable percentage blend of women leadership actually grows better and makes much sounder decisions. The experience I have moving from R & D to business, where I now still work very closely with the R & D community and the product business units, is kind of a testimonial for that, because the decision making all of a sudden is multi-faceted. You will always be able to make a better decision and a sounder decision. You will be able to see a different risk at a different level, and we will be communicating in a more common language. I used to not be able to speak the business tone and the business language; now I actually can be that effective communication bridge, which I find very powerful and very exciting and very illuminating, in terms of just the whole shift, making it very worthwhile actually. It's just a very fantastic personal and professional experience so far. >> You mentioned that McKinsey report, and it was actually cited this morning in the press release that VMware did with the Stanford Institute, investing 15 million in building a women's innovation lab to study the barriers and identify how to remove those barriers. In that press release, the McKinsey report found, and this is shocking, that companies that have more diversity at the executive level are 21% more profitable. >> Lily: Exactly. >> That's a huge number. >> That's because, for business, right, the technology moves so fast, and there are so many different factors coming in hitting the business and driving business decisions. If you just go down a unique lane and don't bring in all the different facets of perspective, you tend to gradually work yourself into a corner, or you may just believe what you want to believe, right? So that's what the other gender's perspective, or even an inclusive culture, will bring you. This is my firm belief, right? It's just a different dimension, basically. >> And I think that's great advice for all walks of life, Lily. Thank you so much for stopping by The Cube and sharing with us what you're doing with Women Who Code, and congratulations on being the first at VMware to successfully transition from R & D to finance. >> Yeah, I actually hit my one-year anniversary. >> Oh, congratulations, and thanks so much for your time. >> Thank you. >> We want to thank you for watching the Cube. I'm Lisa Martin, on the ground at Women Transforming Technology at VMware. Thanks for watching. (digital music)

Published Date : May 24 2018

SUMMARY :

Lisa Martin talks with Lily Chang, VP of strategic transformation at VMware, at the third annual Women Transforming Technology event in Palo Alto. Lily discusses Laila Ali's keynote, VMware's role as one of two founding partners of Women Who Code and its growth to 137,000 members across 20 countries and 60 cities, new chapters in China, India, and Sofia, the roughly 50% promotion and 100% retention rates among appointed city directors, the value of mentorship and sponsorship, the McKinsey finding that companies with more diverse executive teams are 21% more profitable, and her own move from R & D to finance, a first in VMware's history.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Lisa Martin | PERSON | 0.99+
VMware | ORGANIZATION | 0.99+
Shanghai | LOCATION | 0.99+
five | QUANTITY | 0.99+
2016 | DATE | 0.99+
Lily | PERSON | 0.99+
China | LOCATION | 0.99+
Beijing | LOCATION | 0.99+
Lily Chang | PERSON | 0.99+
Bangalore | LOCATION | 0.99+
India | LOCATION | 0.99+
Laila Ali | PERSON | 0.99+
21% | QUANTITY | 0.99+
60 cities | QUANTITY | 0.99+
20 counties | QUANTITY | 0.99+
Palo Alto | LOCATION | 0.99+
100% | QUANTITY | 0.99+
VMwear | ORGANIZATION | 0.99+
21st century | DATE | 0.99+
One | QUANTITY | 0.99+
Lisa | PERSON | 0.99+
19th century | DATE | 0.99+
two | QUANTITY | 0.99+
137,000 members | QUANTITY | 0.99+
Stanford Institute | ORGANIZATION | 0.99+
two possible jobs | QUANTITY | 0.99+
50% | QUANTITY | 0.99+
United States | LOCATION | 0.99+
15 million | QUANTITY | 0.99+
NIA | LOCATION | 0.99+
third time | QUANTITY | 0.99+
zero member | QUANTITY | 0.99+
six years | QUANTITY | 0.99+
Canada | LOCATION | 0.99+
3,000-4,000 members | QUANTITY | 0.99+
less than 25% | QUANTITY | 0.99+
twice | QUANTITY | 0.99+
two positions | QUANTITY | 0.99+
Palo Alto California | LOCATION | 0.99+
U.S. | LOCATION | 0.99+
2018 | DATE | 0.99+
both | QUANTITY | 0.98+
two choices | QUANTITY | 0.98+
Women Who Code | TITLE | 0.98+
first employee | QUANTITY | 0.98+
one | QUANTITY | 0.98+
Women Who Code | ORGANIZATION | 0.98+
today | DATE | 0.98+
Women Transforming Technology | ORGANIZATION | 0.97+
first | QUANTITY | 0.97+
first year | QUANTITY | 0.97+
one level | QUANTITY | 0.96+
a year | QUANTITY | 0.96+
United State | LOCATION | 0.96+
three other cities | QUANTITY | 0.95+
one year anniversary | QUANTITY | 0.95+
McKinsey | ORGANIZATION | 0.95+
second | QUANTITY | 0.94+

Sanjay Poonen - VMworld 2014 - theCUBE - #VMworld


 

>> Narrator: Live from San Francisco, California, it's theCUBE at VMworld 2014, brought to you by VMware, Cisco, EMC, HP, and Nutanix. Now, here are your hosts, John Furrier and Dave Vellante. >> Okay, welcome back, we're live in San Francisco, California. This is theCUBE at VMworld 2014, our fifth year covering VMworld. I'm John Furrier with my co-host Dave Vellante. Sanjay Poonen is the EVP and general manager of end-user computing, a friend of theCUBE; he's been on throughout his career at SAP, and he moved right across the street to VMware last year. Great to see you. >> Great, good to be back in theCUBE. Thank you, John, it's a pleasure. >> What a year, right? So last year you came on board guns blazing, Pat was really excited, and you've accomplished some of your goals. When I asked what your goals were for next year, you laid out some goals, and then came a big acquisition: AirWatch. Security's hot, mobile was booming, we are living in a multi-cloud, mobile infrastructure world. Tell us what happened over the past year, obviously big M&A, give us the details. >> John and Dave, I was like on day point-five, day one, when I came down to theCUBE, but I was actually watching the replay and I'm like, I actually said that, and it made sense. No, it's been a great year, and it's really been a team effort. The first thing that I did was I said, you know, before we decide the what and the how, I really want to figure out who's on the bus. So we promoted a couple of key people within the company, like Kit Colbert; remember Kit was like the star of last year's show, he's now our CTO of end-user computing. We hired a couple of rock stars from the industry, like Sumit Dhawan and a few others, who've really come in and shaped us. And then, as the team started to gel, we began to ask our customers what the key missing part in our strategy was, and it was mobile, very clear. And we began to ask ourselves: listen, if we're going to get into the mobile space, do we build, do we buy, do we partner? We were winning deals in the desktop space, primarily against Citrix; we compete there and are getting a lot of market share. But in the mobile space we'd lose deals, and I'd go and ask our customers, who are you picking? And eighty, ninety percent of the time it was AirWatch. At the same time our CIO was doing an evaluation internally; we were running on an SMB tool, Fiberlink, that has since been bought by IBM, and we were running out of steam with it because it's an SMB tool. And I said, listen, you evaluate the market, look at all the options, and what you pick will probably influence our acquisition decision. They loved AirWatch too. So, you know, those were two or three key moments. >> It's the franchise player on the team, right? >> I mean, ultimately, you know, mobile is today kind of that sizzle point; if you're talking mobile cloud, it is the sizzle point. John Marshall and Alan Dabbiere came in, they've added a lot. So, you know, I talked in my keynote about three core pillars: desktop, mobile, content collaboration. We really feel like, today, when I was looking back, we had a tenth of the portfolio this time last year, and I think, you know, lots of good vision, but now we actually have vision and substance, which I think is pretty powerful. >> So is it the LeBron James, is it the Tom Brady, is it the Ray Allen, you know, the key role player? >> I love basketball, all those teams are great; some of my favorites are the Phil Jackson teams. My role is really to be the coach and to bring into the construct the Michael Jordans, the Scottie Pippens, you know, all of that construct, so that when you put together a world-class team... I really believe we have the best end-user computing team in the industry, bar none, and this team really is now packed with people and process and product innovation, and that's what you've seen the last 12 months. It's a real tribute to this fantastic end-user computing team. >> So on the news this morning around SAP, we didn't catch the detail while we were on theCUBE here; can you just take us through some of those key highlights? >> I mean, clearly I have a soft corner for SAP, as you would expect; I was there for seven years and have tremendous respect. They are the leader in business applications, a tremendous player, you know, hundreds of thousands of customers. And what we felt was, if you could marry the best-of-breed aspects of what SAP does well, applications: mobile applications, cloud applications, on-premise applications, all of that, with what we do very well, which is management and security for mobile, that's what our customers want. Among the 13 thousand customers of AirWatch is probably the biggest base of enterprise SAP customers, and they've been longing for better integration. You know, at the end of the day we want to do what's best for customers, and so Pat, Bill McDermott, myself, and Kevin Ichhpurani, who was on stage, felt that we could build integration between the mobile apps and the mobile platform of SAP, where SAP is very good, and the management and security of AirWatch, where we're very good. You get the combination of the best of breed, and I think the customer quote in that press release put it well. G. Abraham, a CIO at Sigma-Aldrich, basically said: we love the fact that you're bringing together the best-of-breed aspects of mobile security from AirWatch with the mobile apps and mobile platform of SAP. And that's a real win for the enterprise, because the challenge people were having is it was too hard, it was taking too long. >> So how does that change now with this integration? >> I mean, in essence, what AirWatch provides is an elegant, simple, cloud-centric mobile management and security solution, much more than MDM: device management, application management. And, you know, in every ranking by the analysts, they are the undisputed gold medal. Now you can basically use that solution and make sure that your applications also work. So let's say you're bringing up, we showed in the demo an example of SAP medical records, or maybe SAP Fiori or Syclo, whatever have you; you can now bring that up on a device that's secure, and the posture is checked with AirWatch. That's the best combination of both, and this could apply to any application: it could be Box, it could be our own Content Locker. SAP is clearly the leader in business applications. >> I saw a tweet recently that said VMware is working with Apple and United Airlines to bring mobility to airplanes, all secured by AirWatch; obviously United Airlines is a big customer, GE and other things. So the interface to pretty much everything, whether it's big data, is going to be some mobile or edge device. Is that the number one requirement that you're hearing from customers? It's not just mobile users; is the Internet of Things part of this? How do you see that? That's the interesting piece, is that true? >> Absolutely. I think, well, I talked about the United Airlines case study; in fact, it's right off the website of Apple. You go to Apple and look at the business case studies they have; United Airlines is one of those case studies, and the case is actually pretty simple. You know, you've got these pilots that are lugging around 30-40 pound bags, lots of paper manuals, their flight landing instructions; now those are being digitized with iPads in the cockpit. So as you think about what the future is, everything goes digital. That first invades the cockpit; then the flight attendants have it, so they can check to make sure they have a list of the passengers and can serve their passengers better, and that's the way the world is moving. But then you take that same concept and extend it now to machines, where every single potential machine that is on the Internet can be tracked, can be managed and secured. Our proposition there is to manage and secure every possible machine and thing, and then analyze the data coming out of it. We think that's a huge opportunity. >> I met with United in Chicago last year, and the chairman of United told me a one percent savings in efficiency, just on gas, is billions of dollars of real savings. So, you know, this brings it back down to the whole concept that it's not just an IT thing, it's a business process thing. How far along are you seeing the customer base on things like this? Okay, IT has got workers out there, you know, bring your own device to work, okay, but outside of that, what is the uptake, if you will, on really connected intelligence? >> Yeah, I think when we have, you know, 13,000 customers with AirWatch, 50,000 customers with Horizon, 500,000 customers at VMware, many of them start speaking, and we're finding, in a couple of industries, the consumer packaged goods and retail industries, people are looking at things like, for example, smart vending devices and medical devices. Medtronic was on stage, and they are an AirWatch customer; they were talking about the fact that their vision is well beyond just the mobile devices, with every medical device potentially being protected by AirWatch. You look at oil and gas customers: practically every oil and gas customer is an AirWatch customer, and there's going to be embedded intelligence inside a lot of the oil and gas machinery and infrastructure that protects people from potential damage; we expect to be able to secure that. So our proposition in that equation is the management and security of every machine and everything, and then the beautiful part of it is, beyond just management and security, I think the analytics of the data coming out of that is a treasure trove of incredibly valuable places for big data. >> You know, we spoke with Bill McDermott when you were also at SAP, and they had a very vertical approach, and when we go talk at the big data conferences with theCUBE, it's all vertical: we need to have a vertical niche to be a major player, or even a differentiated niche player. But how does that affect your business? Is it verticalized? You mentioned oil and gas, but, you know, airlines; is there a horizontal platform that can work across the industries, or is it specifically verticals? You're at a different level now, you're at the edge of the network. What's your take on that? Do you have to be a vertical player or a horizontal player? >> That's a great question, Jon. I think that as the world's fastest-growing and biggest infrastructure software company, VMware, that's what we've been, going from zero to, you know, roughly a run rate of six billion in 15 years, there is fundamentally, first off, a horizontal play that goes across and cuts across many industries. But very quickly we find we're able to package solutions by industry. I talked, for example, at the keynote about the healthcare industry: you imagine a doctor walking into their office, moving from their office to the ward, from their desktop to an iPad, to potentially getting into the room where they have a thin client terminal, and then they collaborate with another doctor that has, you know, an iPad. Healthcare is one example; state and local public sector is a different example; we're being successful in education, retail, manufacturing. We picked four or five verticals. I've been fortunate in the fact that much of my experience at SAP was running the industries at SAP, so I have a good amount of experience with industry solutions. We're certainly not an applications player like SAP, where we're going to verticalize in a vertical stack of applications, but you're going to see us drive solutions, and when you drive industry solutions in, let's say, five or ten industries where we're relevant, you're going to see our average selling price grow. >> And differentiation that's application-specific tends to be vertical, but as a platform product player, you don't verticalize fundamentally to start with, but then you start creating solutions, which are scenarios that work in a particular industry to enable those guys. >> Exactly, and we pick the five or ten industries where we think we're going to go focus, and we're starting to see, as we do that, our average selling price grow. And you know what, the other thing that happens is that you actually start becoming relevant to a line-of-business buyer beyond just IT, and that's very important. >> On the performance metrics, give us some data, can you share some of that? Pat was glowing, it's always performing well, so can you share some numbers? >> Yeah, I'll tell you what we did the last three quarters in growth; this is one of the fastest growing business units. In Q4 last year we grew north of thirty percent; in Q1 we announced we grew north of thirty percent again; and then in Q2 we said we grew north of fifty percent, right? Now, some of that is the contribution of AirWatch, but organic or inorganic, we are growing, and it's not a small business where you can grow from one to two and that's a hundred percent. This is a sizable part of VMware's revenue and a growing part of it. >> We're talking hundreds of millions here, is that fair? >> I mean, it's well over ten percent of the revenue, and a growing percentage of the total company's revenue. I think this is going to become an increasing part of VMware's total revenue and total relevance to the CIO, because of mobile cloud, and a big part of the brand appeal of VMware. I mean, listen, VMware is well known as an infrastructure company that's done very well in the data center, but the moment you start talking mobile and cloud, you're appealing to the CIO, and that's a very different type of conversation. We want to raise the appeal of VMware to the CIO, and we think mobile is a big market. >> You guys did the TAM analysis; Pat probably has you doing that, or whoever, maybe Jonathan. >> It's a big chunk of it, and EUC is a sizable part, bigger than it was before, and we just have to kind of grow into that TAM and then grow the TAM further. >> And you've started to get that flywheel effect going. The problem with VDI was always cost, cost, cost, so it was a narrow niche; this mobile thing seems to change that whole discussion from cost to value. >> You know, Dave, it's a very good point. First off, mobile for us means more than just a device; it means being on the move, and on the move means you could be on the move and using a laptop, so we've got to think about the relevance of how you get solutions onto your laptop and desktop. I think part of the reason VDI hit a little bit of a bump, and some of our competitors have been stalling and declining, is that it's just too complex and too costly. We have fundamentally now reinvented a modern stack for desktop virtualization that runs on top of all the great innovation that we have in the software-defined data center, like Virtual SAN, like vSphere, and a lot of things we're doing. So all of a sudden, the cost of VDI we can show we take down by at least thirty to forty percent; that's a game changer. Now you add mobility: listen, when you go from a desktop or a laptop to a tablet or phone, you've got the leader in mobile security and management, AirWatch, integrated with Horizon; this is what we announced with the Workspace Suite. And the final pillar is being able to share that content in a very simple yet secure way, so think sort of Dropbox, but with all of the security that SharePoint brought you; that's the third pillar. All three of those: desktop, mobile, and content. >> So you're saying, Sanjay, the tipping point is the asset leverage that you're getting out of the infrastructure as you move toward this sort of software-defined thing; that enables this type of decline in cost and accelerated growth? >> Absolutely, and that's, you know, the whole aspect of how software has been done: you integrate things so you lower costs and you make it much, much easier to be able to deploy and buy, now either on-premise or in the cloud. So we're seeing that connection of, you know, the head and the body. Think of the body being the traditional software-defined data center, the head being end-user computing, and all the connective tissue, muscle fiber, blood vessels, and so on and so forth, making that connected. Now that makes us a lot more appealing than telling a customer: listen, buy your data center infrastructure from VMware, your desktop infrastructure from Citrix, your mobile infrastructure from MobileIron, and your content collaboration solution from, like, 10 different startups, right? Increasingly we think that that's not the way in which people are going to be buying software. >> Sanjay, just some highlights from the keynote, looking here on Twitter through our little listening tool: great reviews, by the way. "Electric, high energy, he's gonna be CEO someday, Pat, heads up on that," that was coming from the Twittersphere, and there was your moment on stage when you said Pat ought to be thinking about an ice bucket challenge. So anyway, an amazing executive, really great reviews on the Twittersphere, besides challenging Pat Gelsinger to the ice bucket challenge, which Joe Tucci already challenged; so let's see how that shakes out. All fun. But in all seriousness, two quotes I want to pull out from the Twittersphere. You said software in the modern car is more than in the NASA spacecraft, awesome comment, and I'll pivot on that in a second; the other one was "Sanjay is emphasizing the importance of world-class infrastructure." So first, define world-class infrastructure from your perspective, given your industry experience and vision for the future, and then talk about how it relates to the modern car versus the NASA spacecraft and the changing speed of technology. >> You know, John, when I gave my keynote I put up this beautiful picture of an incredible piece of modern architecture in Singapore, the Marina Bay Sands towers. It's three big towers, I think 40, 50, 60 floors, with a fantastic infinity swimming pool at the top; if you've not been to Singapore, you've got to go there and check out the swimming pool at the top of it. But the only way in which you could make those three towers work was world-class foundational infrastructure. The three towers, by the way, were a metaphor for desktop, mobile, and content collaboration, and of course the beautiful Workspace view at the top of it. So the thrust of it is: to us, the software-defined data center is the de facto infrastructure that makes a lot of that happen. We feel very, very fortunate and blessed to have the world's best infrastructure that makes that happen: virtual server, storage, networking, management, all of that put together allows me to be able to build world-class towers on top of it, and at the end of the day it's not just solid, it's lower cost of ownership and opportunity. Now, my comment about the 1970s spacecraft was just to say that today we live in a software economy. It's not to say that hardware is not important, but someone joked that software is like the wine and hardware is like the bottle; the bottle is important, but the software glue really ties hardware together in a very special way, and that's really the genius of what's making everything, whether it's a device or a machine, even more relevant. That clearly wasn't the case in the 1972 spacecraft, but today you can see this invading the automobile, the thermostat, the refrigerator, the vending machine; that, we believe, is the future. >> So let me ask you to shoot the arrow forward. What are you getting excited about, given the accelerated pace of change from the spacecraft to the car? You mentioned United Airlines and Apple; it's well documented as an end-user environment, certainly the interface is everything, and that seems to be the focus area. What's your view? What is exciting, where's the inflection point, the enabling technology that you're watching, from the foundation all the way to the top? >> I mean, listen, I spent seven years at SAP, primarily in the analytics and big data space, and prior to that another five years at companies like Informatica, and my life has just been about end-users. When we came in here, we coined this phrase, which is our big, broad vision: we want to allow end-users to work at the speed of life. If you think about your life in the consumer world, you don't lug around 300 CDs into your car; you have an iPod, you have an iPhone, you're connected to iCloud, and it's all seamlessly there. You watch a movie, you start off on Netflix, you go from San Francisco to New York to Barcelona; you may start and then stop, you know, someplace else, and you can pick up exactly where you stopped, House of Cards or whatever you have been watching. Enterprise software has unfortunately been hard to use, complex, hard to implement, and the more we can make enterprise software simple and secure... we tend to do the security part of it pretty well; we don't tend to do the simplicity part. So I think enterprise software companies can actually take a page out of the book of consumer software companies on simplicity, and the consumer companies could take a lesson out of the book from us on security. When you put simplicity and security together, you get magic; when you put control and choice together, you get magic. So it's not the consumerization of IT we all love, it's the IT of consumers. >> You could really flip that around; I mean, there are so many different places in the world that you could do that. >> Exactly. >> I think that's a great point. Sanjay, thanks so much for coming on theCUBE; congratulations on a great keynote, and thanks for coming to spend your valuable time with us here on theCUBE, appreciate it. >> Thanks, John. >> We're live here in San Francisco; we'll be right back with our next guest after this short break.
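The posture check Sanjay describes, AirWatch verifying a device before a managed app such as SAP medical records opens on it, boils down to a compliance gate evaluated at launch. A minimal sketch of that pattern follows; the posture fields, policy rules, and function names are hypothetical illustrations, not the actual AirWatch API.

```python
# Minimal sketch of an MDM-style compliance gate: a managed app opens only if
# the device's posture passes policy. Fields and rules are illustrative, not
# the actual AirWatch API.

from dataclasses import dataclass

@dataclass
class DevicePosture:
    encrypted: bool       # storage encryption enabled
    passcode_set: bool    # lock-screen passcode present
    jailbroken: bool      # device integrity compromised
    os_version: tuple     # e.g. (7, 1)

MIN_OS = (7, 0)  # hypothetical minimum supported OS release

def compliant(p: DevicePosture) -> bool:
    return (p.encrypted and p.passcode_set
            and not p.jailbroken and p.os_version >= MIN_OS)

def launch_managed_app(app: str, posture: DevicePosture) -> str:
    if compliant(posture):
        return f"launching {app} with managed data controls"
    return f"blocked {app}: device out of compliance, remediation pushed"

if __name__ == "__main__":
    ok = DevicePosture(encrypted=True, passcode_set=True, jailbroken=False, os_version=(7, 1))
    bad = DevicePosture(encrypted=True, passcode_set=False, jailbroken=True, os_version=(6, 0))
    print(launch_managed_app("SAP Medical Records", ok))
    print(launch_managed_app("SAP Medical Records", bad))
```

In a real deployment the posture facts would come from the management agent's device check-in, and a failed check would typically trigger server-side remediation rather than just a blocked launch.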

Published Date : Aug 28 2014

**Summary and Sentiment Analysis are not shown because of an improper transcript**

ENTITIES

Entity | Category | Confidence
Chicago | LOCATION | 0.99+
seven years | QUANTITY | 0.99+
Apple | ORGANIZATION | 0.99+
Jonathan | PERSON | 0.99+
United Airlines | ORGANIZATION | 0.99+
San Francisco | LOCATION | 0.99+
Michael Jordan | PERSON | 0.99+
Dave vellante | PERSON | 0.99+
Alan dabiri | PERSON | 0.99+
13,000 customers | QUANTITY | 0.99+
five | QUANTITY | 0.99+
Dave | PERSON | 0.99+
Barcelona | LOCATION | 0.99+
apple | ORGANIZATION | 0.99+
Cuba | LOCATION | 0.99+
thirty percent | QUANTITY | 0.99+
New York | LOCATION | 0.99+
John | PERSON | 0.99+
50,000 | QUANTITY | 0.99+
IBM | ORGANIZATION | 0.99+
iPad | COMMERCIAL_ITEM | 0.99+
John Marshall | PERSON | 0.99+
five years | QUANTITY | 0.99+
VMware | ORGANIZATION | 0.99+
Dave vellante | PERSON | 0.99+
nutanix | ORGANIZATION | 0.99+
Sanjay | PERSON | 0.99+
last year | DATE | 0.99+
two | QUANTITY | 0.99+
hundreds of millions | QUANTITY | 0.99+
four | QUANTITY | 0.99+
six billion | QUANTITY | 0.99+
1972 | DATE | 0.99+
last year | DATE | 0.99+
one | QUANTITY | 0.99+
ipod | COMMERCIAL_ITEM | 0.99+
iphone | COMMERCIAL_ITEM | 0.99+
one percent | QUANTITY | 0.99+
San Francisco California | LOCATION | 0.99+
Jon | PERSON | 0.99+
GE | ORGANIZATION | 0.99+
iPads | COMMERCIAL_ITEM | 0.99+
50 year | QUANTITY | 0.99+
thirty percent | QUANTITY | 0.99+
13 thousand customers | QUANTITY | 0.99+
NASA | ORGANIZATION | 0.99+
vmware | ORGANIZATION | 0.99+
San Francisco California | LOCATION | 0.99+
United Airlines | ORGANIZATION | 0.99+
Kevin ruchi bharani | PERSON | 0.99+
today | DATE | 0.98+
next year | DATE | 0.98+
Marikana | ORGANIZATION | 0.98+
hundred percent | QUANTITY | 0.98+
billions of dollars | QUANTITY | 0.98+
John furrier | PERSON | 0.98+
Tom Brady | PERSON | 0.98+
United | ORGANIZATION | 0.98+
three towers | QUANTITY | 0.98+
lebron james | PERSON | 0.98+
fifty percent | QUANTITY | 0.98+
40 | QUANTITY | 0.98+
hundreds of thousands of customers | QUANTITY | 0.98+
both | QUANTITY | 0.98+
netflix | ORGANIZATION | 0.98+
Ray Allen | PERSON | 0.98+
three towers | QUANTITY | 0.98+
first thing | QUANTITY | 0.97+
15 years | QUANTITY | 0.97+
two quotes | QUANTITY | 0.97+
q1 | DATE | 0.97+
Kohlberg | PERSON | 0.97+
10 different starters | QUANTITY | 0.97+
AirWatch | COMMERCIAL_ITEM | 0.97+
eighty ninety percent | QUANTITY | 0.96+
Pat | PERSON | 0.96+