Is Data Mesh the Next Killer App for Supercloud?

(upbeat music) >> Welcome back to our Supercloud 2 event, live coverage here of the stage performance in Palo Alto, syndicating around the world. I'm John Furrier with Dave Vellante. We've got exclusive news and a scoop here for SiliconANGLE and theCUBE. Zhamak Dehghani, creator of data mesh, has formed a new company called Nextdata.com, Nextdata. She's a CUBE alumni and contributor to our supercloud initiative, as well as our coverage and Breaking Analysis with Dave Vellante on data, the killer app for supercloud. Zhamak, great to see you. Thank you for coming into the studio, and congratulations on your newly formed venture and continued success on the data mesh. >> Thank you so much. It's great to be here. Great to see you in person. >> Dave: Yeah, finally. >> Wonderful. Your contributions to the data conversation have been well documented, certainly by us and others in the industry. Data mesh is taking the world by storm. Some people are debating it, throwing cold water on it. Some are thinking it's the next big thing. Tell us about the data mesh super data apps that are emerging out of cloud. >> I mean, data mesh, as you said, the pain points that it surfaced were universal. Everybody said, "Oh, why didn't I think of that?" It was just an obvious next step and people are approaching it, implementing it. I guess the last few years I've been involved in many of those implementations, and I guess supercloud is somewhat a prerequisite for it, because data mesh and building applications using data mesh is about sharing data responsibly across boundaries. And those boundaries include organizational boundaries, cloud technology boundaries, and trust boundaries. >> I want to bring that up because your venture, Nextdata, which is new, just formed. Tell us about that. What wave is that riding? What specifically are you targeting? What's the pain point? >> Absolutely. 
Yes, so Nextdata is the result of, I suppose, the pains that I suffered from implementing data mesh for many of the organizations. Basically a lot of organizations that I've worked with, they want decentralized data. So they really embrace this idea of decentralized ownership of the data, but yet they want interconnectivity through standard APIs, yet they want discoverability and governance. So they want to have policies implemented, they want to govern that data, they want to be able to discover that data, and yet they want to decentralize it. And we do that with a developer experience that is easy and native to a generalist developer. So we try to find the, I guess, the common denominator that solves those problems and enables that developer experience for data sharing. >> Since you just announced the news, what's been the reaction? >> I just announced the news right now, so what's the reaction? >> But people in the industry know you did a lot of work in the area. What has been some of the feedback on the new venture in terms of the approach, the customers, problem? >> Yeah, so we've been in stealth mode so we haven't publicly talked about it, but folks that have been close to us know that we already have implementations of our pilot platform with early customers, which is super exciting. And we're going to have multiple of those. Of course, we're a tiny, tiny company. We can't have many of those, but we are going to have multiple pilot implementations of our platform in the real world with real, global, large-scale organizations that have real-world problems. So we're not going to build our platform in a vacuum. And that's what's happening right now. >> Zhamak, when I think about your role at ThoughtWorks, you had a very wide observation space with a number of clients, helping them implement data mesh and other things as well prior to your data mesh initiative. But when I look at data mesh, at least the ones that I've seen, they're very narrow. 
I think of JPMC, I think of HelloFresh. They're generally, obviously not surprising, they don't include the big vision of inclusivity across clouds, across different data storage. But it seems like people are having to go through some gymnastics to get to the organizational reality of decentralizing data and at least pushing data ownership to the line of business. How are you approaching, or are you approaching, solving that problem? Are you taking a narrow slice? What can you tell us about Nextdata? >> Yeah, absolutely. Gymnastics, the cute word to describe what the organizations have to go through. And one of those problems is that the data, as you know, resides on different platforms. It's owned by different people, is processed by pipelines that who knows who owns them. So there's this very disparate and disconnected set of technologies that were very useful for when we thought about data and processing as a centralized problem. But when you think about data as a decentralized problem, the cost of integration of these technologies in a cohesive developer experience is what's missing. And we want to focus on that cohesive end-to-end developer experience to share data responsibly in these autonomous units. We call them data products, I guess, in data mesh. That constitutes computation, that governs that data, policies, discoverability. So I guess, I heard this expression in the last talks, that you can have your cake and eat it too. So we want people to have their cakes, which is data in different places, decentralization, and eat it too, which is interconnected access to it. So we start with standardizing and codifying this idea of a data product container that encapsulates data, computation, APIs to get to it, in a technology-agnostic way, in an open way. 
And then sit on top and use existing tech, Snowflake, Databricks, whatever exists, the millions of dollars of investments that companies have made, sit on top of those, but create this cohesive, integrated experience where data product is a first-class primitive. And that's really key here. The language and the modeling that we use is really native to data mesh, which is that I'm building a data product, I'm sharing a data product, and that encapsulates, I'm providing metadata about this, I'm providing computation that's constantly changing the data, I'm providing the API for that. So we're trying to kind of codify and create a new developer experience based on that. And developer experience, both from the provider side and user side, connected to peer-to-peer data sharing with data product as a primitive first-class concept. >> So the idea would be developers would build applications leveraging those data products, which are discoverable and governed. Now today you see some companies, take a Snowflake for example, attempting to do that within their own little walled garden. They even at one point used the term mesh. I don't know if they pulled back on that. And then they became aware of some of your work. But a lot of the things that they're doing within their little insulated environment support that governance, they're building out an ecosystem. What's different in your vision? >> Exactly. So we realized that, and this is a reality, like you go to organizations, they have a Snowflake and half of the organization happily operates on Snowflake. And the other half, "oh, we are on bare infrastructure on AWS, or we are on Databricks." This is the reality. This supercloud that's written up here, it's about working across boundaries of technology. So we try to embrace that. And even for our own technology, with the way we're building it, we say, "Okay, nobody's going to use Nextdata, data mesh operating system. People will have different platforms." 
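To make the idea concrete, here is a minimal, hypothetical sketch of the data product container Dehghani describes above: one unit that bundles the data with its metadata, the computation that shapes it, an output API, and the policies that travel with it. Every name and interface below is illustrative, not Nextdata's actual API.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class DataProduct:
    """One self-contained, technology-agnostic unit: data plus the
    metadata, computation, policies, and API that travel with it."""
    name: str
    owner: str                                      # owning domain team
    metadata: dict = field(default_factory=dict)    # discoverability info
    policies: list = field(default_factory=list)    # checks run on every access
    transform: Callable = lambda rows: rows         # computation that shapes the data

    def serve(self, consumer: str, rows: list) -> list:
        """The product's output API: enforce attached policies, then
        return the transformed data."""
        for allowed in self.policies:
            if not allowed(consumer):
                raise PermissionError(f"{consumer} denied access to {self.name}")
        return self.transform(rows)

# Usage: a sales-domain product, discoverable via its metadata,
# readable only by the analytics team.
orders = DataProduct(
    name="orders",
    owner="sales",
    metadata={"format": "parquet", "refresh": "hourly"},
    policies=[lambda consumer: consumer == "analytics"],
    transform=lambda rows: [r for r in rows if r["status"] == "complete"],
)
print(orders.serve("analytics", [{"status": "complete"}, {"status": "pending"}]))
# [{'status': 'complete'}]
```

The point of the sketch is only the shape: the policy check and the transformation live inside the product itself, so any platform underneath (Snowflake, Databricks, object storage) can sit below the same first-class primitive.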
So you have to build with openness in mind, and in the case of Snowflake, I think, they have, I'm sure, very happy customers, as long as customers can be on Snowflake. But once you cross that boundary of platforms, then that becomes a problem. And we try to keep that in mind in our solution. >> So it's worth reviewing that basically the concept of data mesh is that whether you're a data lake or a data warehouse, an S3 bucket, an Oracle database as well, they should all be inclusive inside of the data mesh. >> We did a session with AWS on the startup showcase, data as code. And remember I wrote a blog post in 2007 called "Data as the New Developer Kit." Back then we used to call them developer kits, if you remember. And we said at that time, whoever can code data will have a competitive advantage. >> Aren't the machines going to be doing that? Didn't we just hear that? >> Well, we have. Hey, Siri. Hey, Cube, find me that best video for data mesh. There it is. But this is the point. What's happening is that now data has to be addressable, for machines and for coding, because you need to call the data. So the question is, how do you manage the complexity of making things as promiscuous as possible, making it available, as well as then governing it? Because it's a trade-off. The more you make it open, the better the machine learning. But yet the governance issue, so this is the, you need an OS to handle this maybe. >> Yes. So yes, well, our mental model for our platform is an OS, an operating system. Operating systems have shown us how you can abstract what's complex and take care of a lot of complexities, but yet provide an open and dynamic enough interface. So we think about it that way. We try to solve the problem of policies living with the data, and enforcement of the policies happens at the most granular level, which is in this concept of the data product. And that would happen whether you read, write, or access a data product. 
But we can never imagine what these policies could be. So our thinking is we should have an open policy framework that can allow organizations to write their own policy drivers and policy definitions and encode and encapsulate them in this data product container. But I'm not going to fool myself to say that that's going to solve the problem that you just described. I think we are in this, I don't know, if I look into my crystal ball, what I think might happen is that right now the primitives that we work with to train machine learning models are still bits and bytes and data. They're fields, rows, columns, and that creates quite a large surface area and attack area for privacy of the data. So perhaps one of the trends that we might see is this evolution of data APIs to become more and more computationally aware, to bring the compute to the data to reduce that surface area. So you can really leave the control of the data to the sovereign owners of that data, that data product. So I think that evolution of our data APIs perhaps will become more and more computational. So you describe what you want, and the data owner decides how to manage it. >> That's interesting, Dave, 'cause it's almost like we just talked about ChatGPT in the last segment we had with you. Machine learning has been around the industry for a while. It's almost as if you're starting to see reasoning come into the data, not just metadata. Using the data to reason so that you don't have to expose the raw data. So almost like a, I won't say curation layer, but an intelligence layer. >> Zhamak: Exactly. >> Can you share your vision on that? 'Cause that seems to be where the dots are connecting. >> Yes, perhaps further into the future, because just from where we stand, we have to create still that bridge of familiarity between that future and present. So we are still in that bridge-making mode. 
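As an illustration of the open policy framework idea described above, the sketch below (all class and method names are hypothetical, not a real product's API) shows an organization registering its own policy drivers, with every access to a data product checked against all of them:

```python
class PolicyDriver:
    """Base interface an organization implements to encode its own policies."""
    def check(self, action: str, consumer: str) -> bool:
        raise NotImplementedError

class AllowListDriver(PolicyDriver):
    """Example org-specific driver: only named consumers may read."""
    def __init__(self, allowed):
        self.allowed = set(allowed)

    def check(self, action, consumer):
        # Writes pass through; reads must come from the allow list.
        return action != "read" or consumer in self.allowed

class PolicyEngine:
    """Enforcement point that lives with the data product: every access
    passes through all registered drivers."""
    def __init__(self):
        self.drivers = []

    def register(self, driver):
        self.drivers.append(driver)

    def enforce(self, action, consumer):
        if not all(d.check(action, consumer) for d in self.drivers):
            raise PermissionError(f"'{action}' by {consumer} denied")
        return True

# Usage: the organization plugs in its own driver; enforcement is uniform.
engine = PolicyEngine()
engine.register(AllowListDriver(allowed={"eu-analytics"}))
print(engine.enforce("read", "eu-analytics"))  # True
```

The design choice the sketch illustrates is the open extension point: the framework only defines the `check` hook, and each organization supplies drivers for policies the platform authors could never anticipate.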
However, by just the basic notion of saying, "I'm going to put an API in front of my data," and that API today might be as primitive as a level of indirection, as in, you tell me what you want, tell me who you are, let me go process that, all the policies and lineage, and insert all of this intelligence that needs to happen. And then today, I will still give you a file. But by just defining that API and standardizing it, now we have this amazing extension point where we can say, "Well, in the next revision of this API, you don't just tell me who you are, but you actually tell me what intelligence you're after. What's the logic that I need to go and now compute on your API?" And you can evolve that. Now you have a point of evolution to this very futuristic, I guess, future where you just described the question that you're asking of ChatGPT. >> Well, this is the supercloud, go ahead, Dave. >> I have a question from a fan, I got to get it in. It's George Gilbert. And so his question is, you're blowing away the way we synchronize data from operational systems to the data stack to applications. So the concern that he has, and he wants your feedback on this, is that data product app devs get exposed to more complexity with respect to moving data between data products, or maybe it's attributes between data products. How do you respond to that? How do you see it? Is that a problem? Is that something that is overstated, or do you have an answer for that? >> Absolutely. So I think there's a sweet spot in getting data developers, data product developers, closer to the app, but yet not overburdening them with the complexity of the application and application logic, and yet reducing their cognitive load by localizing what they need to know about, which is that domain where they're operating within. Because what's happening right now? 
What's happening right now is that data engineers, with a ton of empathy for them for the high threshold of pain that they can deal with, they have been centralized, they've been put into the data team, and they have been given this unbelievable task of make meaning out of data, put semantics over it, curate it, cleanse it, and so on. So what we are saying is, get those folks embedded into the domain, closer to the application developers. These are still separately moving units. Your app and your data products are independent, but yet tightly coupled with each other based on the context of the domain. So reduce cognitive load by localizing what they need to know about to the domain, get them closer to the application, but yet have them separate from the app, because the app provides a very different service: transactional data for my e-commerce transaction. The data product provides a very different service: longitudinal data for the variety of this intelligent analysis that I can do on the data. But yet it's all within the domain of e-commerce or sales or whatnot. >> It's a lot of decoupling and coupling to create that cohesive architecture. So I have to ask you, this is an interesting question 'cause it came up on theCUBE all last year. Back in the old server data center days and cloud, Google coined the term SRE, site reliability engineer, for someone to look over the hundreds of thousands of servers. We asked the question to the data engineering community, who have been suffering, by the way, I agree. Is there an SRE-like role for data? Because in a way data engineering, that platform engineer, they are like the SRE for data. In other words, managing the large scale to enable automation and self-service. What's your thoughts and reaction to that? >> Yes, exactly. So maybe we go through that history of how SRE came to be. So we had the first DevOps movement, which was remove the wall between dev and ops and bring them together. 
So you have one cross-functional unit of the organization that's responsible for: you build it, you run it. So then there is no, I'm going to just shoot my application over the wall for somebody else to manage it. So we did that, and then, as we decentralized and had these many microservices running around, we had to create a layer that abstracted a lot of the complexity around running, monitoring, and observing a lot of services, while giving autonomy to this cross-functional team. And that's where the SRE, a new generation of engineers, came to exist. So I think if I just look at. >> Hence, Kubernetes. >> Hence, hence, exactly. Hence, chaos engineering. Hence, embracing the complexity and messiness, and putting engineering discipline to embrace that and yet give a cohesive and high-integrity experience of those systems. So I think if we look at that evolution, perhaps something like that is happening by bringing data and apps closer and making them these domain-oriented data product teams, or domain-oriented cross-functional teams full stop, and still have a very advanced, maybe at the platform level, infrastructure level, operational team that's not busy doing two jobs, which is taking care of domains and the infrastructure, but they're building infrastructure that is embracing that complexity and interconnectivity of this data process. >> So you see similarities? >> I see, absolutely. But I feel like we're probably in the earlier days of that movement. >> So it's a data DevOps kind of thing happening, where scale is happening. Good things are happening, yet a little bit fast and loose with some complexities to clean up. >> Yes. This is a different restructure. As you said, the job of this industry as a whole, as architects, is decompose, recompose, decompose, recompose in new ways, and now we're decomposing the centralized team and recomposing them as domains. >> So is data mesh the killer app for supercloud? 
>> You had to do this to me. >> Sorry, I couldn't resist. >> I know. Of course you want me to say this. >> Yes. >> Yes, of course. I mean, supercloud, I think it's really, the terminology supercloud, open cloud, but I think in the spirit of it, this embracing of diversity and giving autonomy for people to make decisions for what's right for them and not lock them in. I think just embracing that is baked into how data mesh assumes the world would work. >> Well, thank you so much for coming on Supercloud 2. We really appreciate it. Data has driven this conversation. Your success with data mesh has really opened up the conversation and exposed the slow-moving data industry. >> Dave: Been a great catalyst. >> That's now going well. We can move faster. So thanks for coming on. >> Thank you for hosting me. It was wonderful. >> Supercloud 2 live here in Palo Alto, our stage performance. I'm John Furrier with Dave Vellante. We'll be back with more after this short break. Stay with us all day for Supercloud 2. (upbeat music)

Published Date : Jan 25 2023



HPE Compute Engineered for your Hybrid World - Next Gen Enhanced Scalable processors

>> Welcome to "theCUBE's" coverage of "Compute Engineered for Your Hybrid World," sponsored by HPE and Intel. I'm John Furrier, host of "theCUBE." With the new fourth gen Intel Xeon Scalable processor being announced, HPE is releasing four new HPE ProLiant Gen 11 servers, and here to talk about the features of those servers as well as the partnership between HPE and Intel, we have Darren Anthony, director of compute server product management with HPE, and Suzi Jewett, general manager of the Xeon products with Intel. Thanks for joining us, folks. Appreciate you coming on. >> Thanks for having us. (Suzi's speech drowned out) >> This segment is about next gen enhanced Scalable processors. Obviously the Xeon fourth gen. This is really cool stuff. What's the most exciting element of the new Intel fourth gen Xeon processor? >> Yeah, John, thanks for asking. Of course, I'm very excited about the fourth gen Intel Xeon processor. I think the best thing that we'll be delivering is our new on-package accelerators, which, you know, allow us to service the majority of the server market, which still is buying in that mid core count range, and provide workload acceleration that matters for every one of the products that we sell. And that workload acceleration allows us to drive better efficiency and allows us to really dive into improved sustainability and workload optimizations for the data center. >> It's all the rage about the cores. Now we've got the acceleration, continued innovation with Xeon. Congratulations. Darren, what do the new Intel fourth gen Xeon processors mean for HPE from the ProLiant perspective? You're on Gen 11 servers. What's in it? What's it mean for you guys and for your customers? >> Well, John, first we've got to talk about the great partnership. HPE and Intel have been partners delivering innovation for our server products for over 30 years, and we're continuing that partnership with HP ProLiant Gen 11 servers to deliver compelling business outcomes for our customers. 
Customers are on a digital transformation journey, and they need the right compute to power applications, accelerate analytics, and turn data into value. HP ProLiant Compute is engineered for your hybrid world and delivers optimized performance for your workloads. With HP ProLiant Gen 11 servers and Intel fourth gen Xeon processors, you can have the performance to accelerate workloads from the data center to the edge. With Gen 11, we have more. More performance to meet new workload demands. With PCIe Gen 5, which delivers increased bandwidth with room for more data and graphics accelerators for workloads like VDI and new demands at the edge. DDR5 memory brings greater bandwidth and performance increases for low-latency memory solutions for database and analytics workloads, and higher clock speed CPU chipset combinations for processor-intensive AI and machine learning applications. >> Got to love the low latency. Got to love the more performance. Got to love the engineered for the hybrid world. You mentioned that. Can you elaborate more on engineered for the hybrid world? What does that mean? Can you elaborate? >> Well, HP ProLiant Compute is based on three pillars. First, an intuitive cloud operating experience with HPE GreenLake compute ops management. Second, trusted security by design with a zero trust approach from silicon to cloud. And third, optimized performance for your workloads, whether you deploy as a traditional infrastructure or a pay-as-you-go model with HPE GreenLake, on-premises, at the edge, in a colo, and in the public cloud. >> Well, thanks Suzi and Darren, we'll be right back. We're going to take a quick break. We're going to come back and do a deep dive and get into the ProLiant Gen 11 servers. We're going to dig into it. You're watching "theCUBE," the leader in high tech enterprise coverage. We'll be right back. (upbeat music) >> Hello everyone. 
Welcome back, continuing coverage of "theCUBE's" "Compute Engineered for Your Hybrid World" with HP and Intel. I'm John Furrier, host of "theCUBE," joined back by Darren Anthony from HPE and Suzi Jewett from Intel, as we continue our conversation on the fourth gen Xeon Scalable processor and HP Gen 11 servers. Suzi, we'll start with you first. Can you give us some use cases around the new fourth gen Intel Xeon Scalable processors? >> Yeah, I'd love to. What we're really seeing with an ever-changing market, and, you know, adapting to that, is we're leading with that workload-focused approach. Some examples, you know, that we see are with vRAN. For vRAN, we estimate the 2021 market size was about 150 million, and we expect a CAGR of almost 30% all the way through 2030. So we're really focused on that, on, you know, deployed edge use cases, growing about 10% to over 50% in 2026. And HPC use cases, of course, continue to grow at a steady CAGR, around, you know, about 7%. Then last but not least is cloud. So we're, you know, targeting a growth rate of almost 20% over a five year CAGR. And the fourth gen Xeon is targeted to all of those workloads, both through our architectural improvements that, you know, deliver node level performance, as well as our operational improvements that deliver data center performance. And wrapping that all around with the accelerators that I talked about earlier that provide that workload-specific improvement that gets us to where our customers need to operationalize in their data center. >> I love the focused solutions around seeing compute used that way, and the processors. Great stuff. Darren, how do you see the new ProLiant Gen 11 servers being used on your side? I mean obviously, you've got the customers deploying the servers. What are you seeing on those workloads? Those targeted workloads? (John chuckling) >> Well, you know, very much in line with what Suzi was talking about. 
The generational improvements that we're seeing in performance for Gen 11, they're outstanding for many different use cases. You know, obviously VDI. What we're seeing a lot is around the analytics. You know, with moving to the edge, there's a lot more data. Customers need to convert that data into something tangible. Something that's actionable. And so we're really seeing the strong use cases around analytics in order to mine that data and to make better, faster decisions for the customers. >> You know what I love about this market is people really want to hear about performance. They love speed, they love the power, and low power, by the way, on the other side. So, you know, this has really been a big part of the focus now this year. We're seeing a lot more discussion. Suzi, can you tell us more about the key performance improvements on the processors? And Darren, if you don't mind, if you can follow up on the benefits of the new servers relative to the performance. Suzi? >> Sure. So, you know, as a standard expectation, we're looking at, you know, 60% gen over gen from our previous third gen Xeon, but more importantly, as we've been mentioning, is the performance improvement we get with the accelerators. As an example, an average accelerator proof point that we have is a 2.9 times improvement in performance per watt for accelerated workloads versus non-accelerated workloads. Additionally, we're seeing really great performance improvement in jitter, almost 20 to 50 times improvement versus the previous gen on particular workloads, which is really important, you know, to our cloud service providers. >> Darren, what's your follow up on this? This obviously translates into the Gen 11 servers. >> Well, you know, this generation. Huge improvements across the board. And what we're seeing is that not only are customers prepared for what they need now, you know, workloads are evolving and transitioning. Customers need more. They're doing more. 
They're doing more analytics. And so not only do you have the performance you need now, but it's actually built for the future. We know that customers are looking to take in that data and do something and work with the data wherever it resides within their infrastructure. We also see customers that are beginning to move servers out of a centralized data center, more to the edge, closer to where the data resides. And so this new generation is really tremendous for that. Seeing a lot of benefits for the customers from that perspective. >> Okay, Suzi, Darren, I want to get your thoughts on one of the hottest trends happening right now. Obviously machine learning and AI has always been hot, but recently more and more focus has been on AI. As you start to see this kind of next gen kind of AI coming on, and the younger generation of developers, you know, they're all into this. This is really one of the hottest trends, AI. We've seen the momentum and acceleration kind of going next level. Can you guys comment on how Xeon here and Gen 11 are tying into that? What's that mean for AI? >> So, exactly. With the fourth gen Intel Xeon, one of our key, you know, on-package accelerators in every core is our AMX. It delivers up to 10 times improvement on inference and training versus previous gens, and, you know, blows the competition out of the water. So we are really excited for our AI performance leading with Xeon. >> And- >> And John, what we're seeing is that this next generation, you know, you're absolutely right, you know. Workloads are a lot more focused. A lot more taking advantage of AI machine learning capabilities. And with this generation, together with the Intel Xeon fourth gen, you know, what we're seeing is the opportunity with that increase in IO bandwidth that now we have an opportunity for those applications and those use cases and those workloads to take advantage of this capability. 
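As a small, hedged illustration of the AMX capability Suzi mentions: on a Linux host, software can check whether the AMX extensions are exposed by looking for the `amx_*` flags the kernel reports in `/proc/cpuinfo` (the flag names `amx_tile`, `amx_bf16`, and `amx_int8` follow the Linux kernel's x86 feature naming; their absence simply means the host is not a fourth gen Xeon or AMX is not enabled).

```python
from pathlib import Path

def amx_flags(cpuinfo_text: str) -> set:
    """Return the AMX feature flags present in /proc/cpuinfo-style text."""
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags.update(f for f in line.split() if f.startswith("amx_"))
    return flags

# Demonstrated on a synthetic sample here; on a real host, pass
# Path("/proc/cpuinfo").read_text() instead.
sample = "processor : 0\nflags\t\t: fpu sse2 amx_tile amx_bf16 amx_int8\n"
print(sorted(amx_flags(sample)))  # ['amx_bf16', 'amx_int8', 'amx_tile']
```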
We haven't had that before, but now more than ever, we've actually, you know, opened the throttle with the performance and with the capabilities to support those workloads. >> That's great stuff. And you know, the AI stuff also does a lot of differentiated heavy lifting, and it needs processing power. It needs the servers. This is just, (John chuckling) it creates more and more value. This is right in line. Congratulations. Super excited by that call out. Really appreciate it. Thanks Suzi and Darren. Really appreciate it. A lot more to discuss with you guys as we go a little bit deeper. We're going to talk about security and wrap things up after this short break. I'm John Furrier, "theCUBE," the leader in enterprise tech coverage. (upbeat music) >> Welcome back to "theCUBE's" coverage of "Compute Engineered for Your Hybrid World." I'm John Furrier, host of "theCUBE," joined by Darren Anthony from HPE and Suzi Jewett from Intel as we turn our discussion to security. A lot of great features with the new fourth gen Xeon Scalable processors and the ProLiant Gen 11. Let's get into it. Suzi, what are some of the cool features of the fourth gen Intel Xeon Scalable processors? >> Sure, John, I'd love to talk about it. With fourth gen, Intel offers the most comprehensive confidential computing portfolio to really enhance data security and address regulatory compliance and sovereignty concerns. A couple examples of those features and technologies that we've included are a larger baseline enclave with the SGX technology, which is our application isolation technology, and our Intel CET, which substantially reduces the risk of whole-class software-based attacks. That, wrapped around at a platform level, really allows us, you know, to secure workload acceleration software and ensure platform integrity. >> Darren, this is a great enablement for HPE. Can you tell us about the security with the new HP ProLiant Gen 11 servers? >> Absolutely, John. 
So HP ProLiant is engineered with a fundamental security approach to defend against increasingly complex threats, and an uncompromising focus on state-of-the-art security innovations that are built right into our DNA, from silicon to software, from the factory to the cloud. It's our goal to protect the customer's infrastructure, workloads, and data from threats to hardware and risk from third party software and devices. So Gen 11 is just a continuation of the great technological innovations that we've had around providing zero trust architecture. We're extending our Silicon Root of Trust, and it's just a motion forward for innovating on that Silicon Root of Trust that we've had. So with Silicon Root of Trust, we protect millions of lines of firmware code from malware and ransomware, with a digital footprint that's unique to the server. With this Silicon Root of Trust, we're securing over 4 million HPE servers around the world. And beyond that silicon, extending this to our partner ecosystem, the authentication of platform components, such as network interface cards and storage controllers, gives us that protection against additional entry points of security threats that can compromise the entire server infrastructure. With this latest version, we're also doing authentication integrity with those components using the Security Protocol and Data Model protocol, or SPDM. But we know that trusted and protected infrastructure begins with a secure supply chain, a layer of protection that starts at the manufacturing floor. HPE provides you optimized protection for ProLiant servers from trusted suppliers, to the factories, and into transit to the customer. >> Any final messages, Darren, you'd like to share with your audience on engineering for the hybrid world, security overall, and the new Gen 11 servers with the fourth generation Xeon Scalable processors? >> Well, it's really about choice.
Having the right choice for your compute. And we know HPE ProLiant Gen 11 servers, together with the new Xeon processors, are the right choice, delivering the capabilities, the performance, and the efficiency that customers need to run their most complex and most performance-hungry workloads. We're really excited about this next generation of platforms. >> ProLiant Gen 11. Suzi, great customer for Intel. You've got the fourth generation Xeon Scalable processors. We've been tracking multiple generations for both of you guys for many, many years now, the past decade. A lot of growth, a lot of innovation. I'll give you the last word on the series here on this segment. Can you share the collaboration between Intel and HPE? What does it mean, and what does it mean for customers? Can you give your thoughts and share your views on the relationship with HPE? >> Yeah, we value, obviously, HPE as one of our key customers. We partner with them from the beginning of when we are defining the product, all the way through the development and validation. HPE has been a great partner in making sure that we deliver collaboratively to the needs of their customers and our customers all together, to make sure that we get the best product in the market that meets our customer needs, allowing for the flexibility, the operational efficiency, and the security that our markets demand. >> Darren, Suzi, thank you so much. You know, "Compute Engineered for Your Hybrid World" is really important. Compute is... (John stuttering) We need more compute. (John chuckling) Give us more power and less power on the sustainability side. So a lot of great advances. Thank you so much for spending the time and giving us an overview on the innovation around Xeon and the ProLiant Gen 11. Appreciate your time. Appreciate it. >> You're welcome. Thanks for having us.
>> You're watching "theCUBE's" coverage of "Compute Engineered for Your Hybrid World" sponsored by HPE and Intel. I'm John Furrier with "theCUBE." Thanks for watching. (upbeat music)

Published Date : Dec 27 2022


Breaking Analysis: re:Invent 2022 marks the next chapter in data & cloud


 

From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR, this is Breaking Analysis with Dave Vellante. The ascendancy of AWS under the leadership of Andy Jassy was marked by a tsunami of data and corresponding cloud services to leverage that data. Now, those services mainly came in the form of primitives, i.e., basic building blocks that were used by developers to create more sophisticated capabilities. AWS in the 2020s, being led by CEO Adam Selipsky, will be marked by four high-level trends, in our opinion: one, a rush of data that will dwarf anything we've previously seen; two, a doubling or even tripling down on the basic elements of cloud (compute, storage, database, security, etc.); three, a greater emphasis on end-to-end integration of AWS services to simplify and accelerate customer adoption of cloud; and four, significantly deeper business integration of cloud beyond IT, as an underlying element of organizational operations. Hello, and welcome to this week's Wikibon Cube Insights, powered by ETR. In this Breaking Analysis, we extract and analyze nuggets from John Furrier's annual sit-down with the CEO of AWS. We'll share data from ETR and other sources to set the context for the market and competition in cloud, and we'll give you our glimpse of what to expect at re:Invent in 2022.
Now, before we get into the core of our analysis: Alibaba has announced earnings. They always announce after the big three, you know, a month later, and we've updated our Q3/November hyperscale computing forecast for the year, as seen here. We're not going to spend a lot of time on this, as most of you have seen the bulk of it already, but suffice to say, Alibaba's cloud business is hitting that same macro trend that we're seeing across the board, but a more substantial slowdown than we expected, and more substantial than its peers. They're facing China headwinds, they've been restructuring their cloud business, and it's led to significantly slower growth, in the, you know, low double digits, as opposed to where we had it at 15%. This puts our year-end estimates for 2022 revenue at $161 billion, still a healthy 34% growth, with AWS surpassing $80 billion in 2022 revenue. Now, on a related note, one of the big themes in cloud that we've been reporting on is how customers are optimizing their cloud spend. It's a technique that they use when the economy looks a little shaky, and here's a graphic that we pulled from AWS's website, which shows the various pricing plans at a high level. As you know, they're much more granular and more sophisticated than that, but for simplicity we'll just keep it here. Basically, there are four levels. The first one here is on demand, i.e., pay by the drink. Now, we're going to jump down to what we've labeled as number two, spot instances; that's like the right place at the right time, I can use that extra capacity in the moment. The third is reserved instances, or RIs, where I pay up front to get a discount, and the fourth is sort of optimized savings plans, where customers commit to a one- or three-year term for a better price. Now, you'll notice we labeled the choices in a different order than AWS presented them on its website, and that's because we believe the order that we chose is the natural progression for customers. They start on demand, they maybe experiment with
spot instances, they move to reserved instances when the cloud bill becomes too onerous, and if you're large enough, you lock in for one or three years. Okay, the interesting thing is the order in which AWS presents them. We believe that on-demand accounts for the majority of AWS customer spending. Now, if you think about it, those on-demand customers, they're also at-risk customers. Yeah, sure, there are some switching costs, like egress and learning curve, but many customers have multiple clouds and they've got experience, so they're kind of already up the learning curve, and if you're not married to AWS with a longer-term commitment, there's less friction to switch. Now, AWS here presents the most attractive plan from a financial perspective second, after on demand, and it's also the plan that makes the greatest commitment from a lock-in standpoint. Now, in fairness to AWS, it's also true that there is a trend toward subscription-based pricing, and we have some data on that. This chart is from an ETR drill-down survey; the N is 300.
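The progression described above (on demand, then spot, then reserved or savings-plan commitments) comes down to simple cost arithmetic. Here is a minimal sketch comparing annual cost for one always-on instance under each plan; all hourly rates and the upfront figure are hypothetical placeholders chosen for illustration, not actual AWS prices.

```python
# Illustrative cost comparison across the four pricing levels discussed above.
# All dollar figures are hypothetical, NOT actual AWS rates.

HOURS_PER_YEAR = 24 * 365  # 8,760

def annual_cost(hourly_rate: float, upfront: float = 0.0) -> float:
    """Annual cost of one always-on instance at a given hourly rate."""
    return upfront + hourly_rate * HOURS_PER_YEAR

# Hypothetical rates for the same instance size under each plan.
on_demand = annual_cost(hourly_rate=0.40)                 # pay by the drink
spot      = annual_cost(hourly_rate=0.12)                 # interruptible capacity
reserved  = annual_cost(hourly_rate=0.10, upfront=876.0)  # 1-year RI, partial upfront
savings   = annual_cost(hourly_rate=0.24)                 # 1-year savings-plan commit

plans = {"on-demand": on_demand, "spot": spot,
         "reserved": reserved, "savings plan": savings}
for name, cost in sorted(plans.items(), key=lambda kv: kv[1]):
    print(f"{name:>12}: ${cost:,.0f}/yr ({cost / on_demand:.0%} of on-demand)")
```

With these placeholder rates, the lock-in plans undercut on demand only if the instance actually runs most of the year, which is why the natural progression starts at on demand and moves toward commitments as usage becomes predictable.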
Pay attention to the bars on the right. The left side is sort of busy, but the pink is subscription, and you can see the trend upward. The light blue is consumption-based, or on-demand, pricing, and you can see there's a steady trend toward subscription. Now, we'll dig into this in a later episode of Breaking Analysis, but we'll share with you some tidbits from the data that ETR provides. You can select which segment, IaaS or PaaS, or you can go up the stack, etc. So when you choose IaaS and PaaS, 44% of customers either prefer or are required to use on-demand pricing, whereas around 40% of customers say they either prefer or are required to use subscription pricing; again, that's for IaaS. Now, the further you move up the stack, the more prominent subscription pricing becomes, often with sixty percent or more for the software-based offerings that require or prefer subscription. And interestingly, cybersecurity tracks along with software, at around 60 percent that prefer subscription; it's likely because, as with software, you're not shutting down your cyber protection on demand. All right, let's get into the expectations for re:Invent, and we're going to start with an observation on data. In his 2018 book "Seeing Digital," author David Moschella made the point that whereas most companies apply data on the periphery of their business, kind of as an add-on function, successful data companies like Google and Amazon and Facebook have placed data at the core of their operations. They've operationalized data, and they apply machine intelligence to that foundational element. Why is this? The fact is, it's not easy to do what the internet giants have done; it takes very, very sophisticated engineering and cultural discipline. And this brings us to re:Invent 2022. In the future of cloud, machine learning and AI will increasingly be infused into applications. We believe the data stack and the application stack are coming together as organizations build data apps and data products. Data
expertise is moving from the domain of highly specialized individuals to everyday business people, and we are just at the cusp of this trend. This will, in our view, be a massive theme of not only re:Invent '22 but of cloud in the 2020s. The vision of data mesh, we believe Zhamak Dehghani's principles, will be realized in this decade. Now, what we'd like to do is share with you a glimpse of the thinking of Adam Selipsky from his sit-down with John Furrier. Each year, John has a one-on-one conversation with the CEO of AWS. He's been doing this for years, and the outcome is a better understanding of the directional thinking of the leader of the number one cloud platform. So we're now going to share some direct quotes. I'm going to run through them with some commentary and then bring in some ETR data to analyze the market implications. Here we go. This is from Selipsky, quote: "IT in general, and data, are moving from departments into becoming intrinsic parts of how businesses function." Okay, we're talking here about deeper business integration. Let's go on to the next one, quote: "In time we'll stop talking about people who have the word analyst" (we inserted "data," he meant data analyst) "in their title; rather, we'll have hundreds of millions of people who analyze data as part of their day-to-day job, most of whom will not have the word analyst anywhere in their title. We're talking about graphic designers and pizza shop owners and product managers, and data scientists as well." He threw that in; I'm going to come back to that, very interesting. So he's talking here about democratizing data, operationalizing data. Next quote: "Customers need to be able to take an end-to-end, integrated view of their entire data journey, from ingestion to storage to harmonizing the data, to being able to query it, doing business intelligence and human-based analysis, and being able to collaborate and share data. And we've been putting together" (we being Amazon) "a broad suite of tools, from database to analytics to
business intelligence, to help customers with that." And this last statement, it's true: Amazon has a lot of tools, and you know, they're beginning to become more and more integrated, but again, under Jassy there was not a lot of emphasis on that end-to-end integrated view. We believe it's clear from these statements that Selipsky's customer interactions are leading him to underscore that the time has come for this capability. Okay, continuing, quote: "If you have data in one place, you shouldn't have to move it every time you want to analyze that data." Couldn't agree more. "It would be much better if you could leave that data in place, avoid all the ETL, which has become a nasty three-letter word. More and more we're building capabilities where you can query that data in place," end quote. Okay, this we see a lot in the marketplace: Oracle with MySQL HeatWave, the entire trend toward converged database, Snowflake and [ __ ] extending their platforms into transaction and analytics, respectively, and so forth. A lot of the partners are doing things as well in that vein. Let's go to the next quote: "The other phenomenon is infusing machine learning into all those capabilities." Yes, the comments from the Moschella graphic come into play here: infusing AI and machine intelligence everywhere. Next one, quote: "It's not a data cloud, it's not a separate cloud, it's a series of broad but integrated capabilities to help you manage the end-to-end life cycle of your data." There you go: we, AWS, are the cloud. We're going to come back to that in a moment as well. Next set of comments around data, very interesting here, quote: "Data governance is a huge issue. Really what customers need is to find the right balance in their organization between access to data and control. If you provide too much access, then you're nervous that your data is going to end up in places that it shouldn't, be viewed by people who shouldn't be viewing it, and you feel like you lack security around that data. And by the way, what happens then is
people overreact and they lock it down so that almost nobody can see it." It's those handcuffs: there's data as an asset, or a liability; we've talked about that for years. Okay, very well put by Selipsky, but this is a gap, in our view, within AWS today, and we're hoping that they close it at re:Invent. It's not easy to share data in a safe way within AWS today, outside of your organization, so we're going to look for that at re:Invent 2022. Now, all this leads to the following statement by Selipsky, quote: "Data clean room is a really interesting area, and I think there are a lot of different industries in which clean rooms are applicable. I think that clean rooms are an interesting way of enabling multiple parties to share and collaborate on the data while completely respecting each party's rights and their privacy mandate." Okay, again, this is a gap currently within AWS today, in our view, and we know Snowflake is well down this path, and Databricks with Delta Sharing is also on this curve, so AWS has to address this and demonstrate this end-to-end data integration and the ability to safely share data, in our view. Now, let's bring in some ETR spending data to put some context around these comments, with reference points in the form of AWS itself and its competitors and partners. Here's a chart from ETR that shows Net Score, or spending momentum, on the x-axis, and overlap, or pervasiveness in the survey... um, sorry, let me go back up: Net Score is on the y-axis, and overlap, or pervasiveness in the survey, is on the x-axis. So, spending momentum by pervasiveness, or, sort of, share within the data set. The table that's inserted there, with the reds and the greens, informs us as to how the dots are positioned; so it's Net Score, and then the shared Ns are how the plots are determined. Now, we've filtered the data on the three big data segments, analytics, database, and machine learning/AI, and we've only selected one company with fewer than 100 Ns in the survey, and that's Databricks. You'll
see why in a moment. The red dotted line indicates highly elevated customer spend, at 40%. Now, as usual, Snowflake outperforms all players on the y-axis, with a Net Score of 63%, off the charts. All three big U.S. cloud players are above that line, with Microsoft and AWS dominating the x-axis; very impressive that they have such spending momentum and they're so large. And you see a number of other emerging data players like Grafana and Datadog; MongoDB is there in the mix; and then more established data players like Splunk and Tableau. Now you've got Cisco, which is, you know, adjacent to their core networking business, but they're definitely into the analytics business. Then the really established players in data, like Informatica, IBM, and Oracle, all with strong presence, but you'll notice they're in the red from the momentum standpoint. Now, what you're going to see in a moment is we put red highlights around Databricks, Snowflake, and AWS. Why? Let's bring that back up and we'll explain. So there's no way... let's bring that back up, Alex, if you would... there's no way AWS is going to hit the brakes on innovating at the base service level, what we call primitives. Earlier, Selipsky told Furrier as much in their sit-down: that AWS will serve the technical user and data science community, the traditional domain of Databricks, and at the same time address the end-to-end integration, data sharing, and business-line requirements that Snowflake is positioned to serve. Now, people often ask Snowflake and Databricks how they will compete with the likes of AWS, and we know the answer: focus on data exclusively; they have their multi-cloud plays. Perhaps the more interesting question is, how will AWS compete with the likes of specialists like Snowflake and Databricks? And the answer is depicted here in this chart. AWS is going to serve both the technical and developer communities and the data science audience, and through end-to-end integrations and future services that simplify
the data journey, they're going to serve the business lines as well. But the nuance is in all the other dots, in the hundreds or hundreds of thousands, that are not shown here, and that's the AWS ecosystem. You can see AWS has earned the status of the number one cloud platform that everyone wants to partner with; as they say, it has over a hundred thousand partners, and that ecosystem, combined with the capabilities that we're discussing, while perhaps behind in areas like data sharing and integrated governance, can wildly succeed by offering the capabilities and leveraging its ecosystem. Now, for their part, the Snowflakes of the world have to stay focused on the mission, build the best products possible, and develop their own ecosystems to compete and attract the mindshare of both developers and business users. And that's why it's so interesting to hear Selipsky basically say it's not a separate cloud, it's a set of integrated services. Well, Snowflake is, in our view, building a supercloud on top of AWS, Azure, and Google. When great products meet great sales and marketing, good things can happen, so it will be really fun to watch what AWS announces in this area at re:Invent. All right, one other topic that Selipsky talked about was the correlation between serverless and container adoption, and you know, I don't know if this gets into their hybrid play, maybe it starts to get into their multi-cloud, we'll see, but we have some data on this. So again, we're talking about the correlation between serverless and container adoption, but before we get into that, let's go back to 2017 and listen to what Andy Jassy said on theCUBE about serverless. Play the clip. "In the very, very earliest days of AWS, Jeff used to say a lot, 'If I were starting Amazon today, I'd have built it on top of AWS.' We didn't have all the capability and all the functionality at that very moment, but he knew what was coming, and he saw what people were still able to accomplish even with where the services were at that point. I
think the same thing is true here with Lambda, which is, I think if Amazon were starting today, it's a given they would build it on the cloud, and I think with a lot of the applications that comprise Amazon's consumer business, we would build those on our serverless capabilities. Now, we still have plenty of capabilities and features and functionality we need to add to Lambda and our various serverless services, so that may not be true from the get-go right now, but I think if you look at the hundreds of thousands of customers who are building on top of Lambda, and lots of real applications... you know, FINRA has built a good chunk of their market watch application on top of Lambda, and Thomson Reuters has built, you know, one of their key analytics apps. People are building real, serious things on top of Lambda, and the pace of iteration you'll see there will increase as well, and I really believe that to be true over the next year or two." So, years ago, Jassy gave a roadmap: serverless was going to be a key developer platform going forward. And Selipsky referenced the correlation between serverless and containers in the Furrier sit-down, so we wanted to test that within the ETR data set. Now, here's a screen grab of the view across 1,300 respondents from the October ETR survey, and what we've done here is we've isolated on the cloud computing segment. Okay, so you can see right there, cloud computing segment. Now, we've taken the functions from Google, AWS Lambda, and Microsoft Azure Functions, all the serverless offerings, and we've got Net Score on the vertical axis (oh, by the way, 40% is highly elevated, remember that), and then on the horizontal axis we have the presence in the data set, the overlap; okay, that's relative to each other. So remember, 40%: all these guys are above that 40% mark. Okay, so you see that. Now, this is just for serverless, and what we're going to do is turn on containers
to see the correlation and see what happens. So watch what happens when we click on containers: boom, everything moves to the right. You can see all three move to the right; Google drops a little bit, but all the others... now the filtered N drops as well, so you don't have as many people that are aggressively leaning into both, but all three move to the right. So watch again: containers off, and then containers on. Containers off, containers on. So you can see a really major correlation between containers and serverless. Okay, so to get a better understanding of what that means, I called my friend and former Cube co-host Stu Miniman. What he said was, people generally used to think of VMs, containers, and serverless as distinctly different architectures, but the lines are beginning to blur. Serverless makes things simpler for developers who don't want to worry about underlying infrastructure, and as Selipsky and the data from ETR indicate, serverless and containers are coming together. But as Stu and I discussed, there's a spectrum, where on the left you have kind of native cloud VMs, in the middle you've got AWS Fargate, and at the rightmost anchor is AWS Lambda. Now, traditionally in the cloud, if you wanted to use containers, developers would have to build a container image, they'd have to select and deploy the EC2 images, or instances, that they wanted to use, they'd have to allocate a certain amount of memory and then fence off the apps in a virtual machine, and then run the EC2 instances against the apps, and then pay for all those EC2 resources. Now, with AWS Fargate, you can run containerized apps with less infrastructure management, but you still have some, you know, things that you can do with the infrastructure. So with Fargate, what you do is you build the container images, then you allocate your memory and compute resources, then run the app, and pay for the resources only when they're used. So Fargate lets you control the runtime environment, while at the same time
simplifying the infrastructure management; you don't have to worry about isolating the app and other stuff, like choosing server types and patching; AWS does all that for you. Then there's Lambda. With Lambda, you don't have to worry about any of the underlying server infrastructure; you're just running code as functions, so the developer spends their time worrying about the applications and the functions that they're calling. The point is, there's a movement, and we saw it in the data, toward simplifying the development environment and allowing the cloud vendor, AWS in this case, to do more of the underlying management. Now, some folks will still want to turn knobs and dials, but increasingly we're going to see more higher-level service adoption. Now, re:Invent is always a fire hose of content, so let's do a rapid rundown of what to expect. We talked about operationalizing data and the organization; we talked about cloud optimization; there'll be a lot of talk on the show floor about best practices and customers sharing data. Selipsky is leading AWS into the next phase of growth, and that means moving beyond IT transformation into deeper business integration and organizational transformation, not just digital transformation, organizational transformation. So he's leading a multi-vector strategy: serving the traditional peeps who want fine-grained access to core services, so we'll see continued innovation in compute, storage, AI, etc., and simplification through integration and horizontal apps further up the stack; Amazon Connect is an example that's often cited. Now, as we've reported many times, Databricks is moving from its stronghold realm of data science into business intelligence and analytics, whereas Snowflake is coming from its data analytics stronghold and moving into the world of data science. AWS is going down a path of Snowflake-meets-Databricks, with an underlying cloud IaaS and PaaS layer, and that puts these three companies on a very interesting trajectory. And you can expect AWS to go right
after the data sharing opportunity, and in doing so, it will have to address data governance; they go hand in hand. Okay, price performance: that is a topic that will never go away, and it's something that we haven't mentioned today. Silicon: it's an area we've covered extensively on Breaking Analysis, from Nitro to Graviton to the AWS acquisition of Annapurna, its secret weapon, and new specialized capabilities like Inferentia and Trainium. We'd expect something more at re:Invent, maybe new Graviton instances. David Floyer, our colleague, said he's expecting at some point a complete system on a chip, an SoC, from AWS, and maybe an Arm-based server, eventually to include high-speed CXL connections to devices and memories, all to address next-gen applications: data-intensive applications with low power requirements and lower cost overall. Now, of course, every year Swami gives his usual update on machine learning and AI, building on Amazon's years of SageMaker innovation; perhaps a focus on conversational AI, or better support for vision, and maybe better integration across Amazon's portfolio of, you know, large language models, neural networks, generative AI, really infusing AI everywhere. Of course, security is always high on the list at re:Invent, and Amazon even has re:Inforce, a conference dedicated to security. Here we'd like to see more on supply chain security, and perhaps how AWS can help there, as well as tooling to make the CIO's life easier. But the key so far is that AWS is much more partner-friendly in the security space than, say, for instance, Microsoft traditionally has been, so firms like Okta and CrowdStrike and Palo Alto have plenty of room to play in the AWS ecosystem. We'd expect, of course, to hear something about ESG; it's an important topic, and hopefully not only how AWS is helping the environment, that's important, but also how they help customers save money and drive inclusion and diversity, again, very important topics. And finally, coming back to it, re:Invent is an ecosystem event.
It's the Super Bowl of tech events, and the ecosystem will be out in full force. Every tech company on the planet will have a presence, and theCUBE will be featuring many of the partners from the show floor as well as AWS execs, and of course our own independent analysis. So you'll definitely want to tune in to thecube.net and check out our re:Invent coverage. We start Monday evening, and then we go wall to wall through Thursday; hopefully my voice will come back. We have three sets at the show, and our entire team will be there, so please reach out or stop by and say hello. All right, we're going to leave it there for today. Many thanks to Stu Miniman and David Floyer for the input to today's episode, and of course John Furrier for extracting the signal from the noise in a sit-down with Adam Selipsky. Thanks to Alex Meyerson, who is on production and manages the podcast, Ken Schiffman as well; Kristen Martin and Cheryl Knight help get the word out on social and, of course, in our newsletters; and Rob Hof is our editor-in-chief over at SiliconANGLE and does some great editing. Thanks to all of you. Remember, all these episodes are available as podcasts; wherever you listen, you can pop in the headphones, go for a walk, just search "Breaking Analysis podcast." I publish each week on wikibon.com and siliconangle.com, or you can email me at david.vellante@siliconangle.com, or DM me @dvellante, or please comment on our LinkedIn posts. And do check out etr.ai for the best survey data in the enterprise tech business. This is Dave Vellante for theCUBE Insights, powered by ETR. Thanks for watching, and we'll see you at re:Invent, or we'll see you next time on Breaking Analysis. (upbeat music)
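The Lambda end of the spectrum discussed above, where the developer "is just running code as functions," can be made concrete with a tiny example. The sketch below is a minimal, hypothetical handler invoked locally; the `(event, context)` signature is the one AWS Lambda's Python runtime calls, but the event shape and field names here are illustrative assumptions, not a real deployment.

```python
import json

def handler(event, context):
    """Entry point a Lambda-style runtime would invoke.

    Note what is absent: no server selection, no memory fencing,
    no patching -- the infrastructure management the platform absorbs.
    """
    name = event.get("name", "world")  # event payload is caller-defined (assumed shape)
    body = {"message": f"hello, {name}"}
    return {"statusCode": 200, "body": json.dumps(body)}

# Local smoke test: a real runtime passes a context object; None suffices here.
response = handler({"name": "re:Invent"}, None)
print(response["statusCode"], response["body"])
```

Everything else a container deployment would require, image builds, instance sizing, runtime patching, sits on the platform side of the line, which is exactly the developer-experience simplification the ETR correlation data points at.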

Published Date : Nov 26 2022


Next Gen Servers Ready to Hit the Market


 

(upbeat music) >> The market for enterprise servers is large and it generates well north of $100 billion in annual revenue, and it's growing consistently in the mid to high single digit range. Right now, like many segments, the market for servers is, it's like slingshotting, right? Organizations, they've been replenishing their install bases and upgrading, especially at HQs coming out of the isolation economy. But the macro headwinds, as we've reported, are impacting all segments of the market. CIOs, you know, they're tapping the brakes a little bit, sometimes quite a bit and being cautious with both capital expenditures and discretionary opex, particularly in the cloud. They're dialing it down and just being a little bit more, you know, cautious. The market for enterprise servers, it's dominated as you know, by x86 based systems with an increasingly large contribution coming from alternatives like ARM and NVIDIA. Intel, of course, is the largest supplier, but AMD has been incredibly successful competing with Intel because of its focus, its outsourced manufacturing model and its innovation and very solid execution. Intel's frequent delays with its next generation Sapphire Rapids CPUs, now slated for January 2023, have created an opportunity for AMD. Specifically, AMD's next generation EPYC CPUs, codenamed Genoa, will offer as many as 96 Zen 4 cores per CPU when it launches later on this month. Observers can expect really three classes of Genoa. There's a standard Zen 4 compute platform for general purpose workloads, there's a compute density optimized Zen 4 package and then a cache optimized version for data intensive workloads. Indeed, the makers of enterprise servers are responding to customer requirements for more diversity in server platforms to handle different workloads, especially those high performance data-oriented workloads that are being driven by AI and machine learning and high performance computing, HPC needs.
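The framing developed later in this conversation, that combinatorial factors rather than CPU clock speed determine overall system performance, amounts to a bottleneck model: for a data-intensive workload, the system moves data no faster than the slowest component in the path. A minimal sketch in Python, with purely hypothetical throughput numbers:

```python
# Illustrative bottleneck model: a server's effective throughput for a
# data-intensive job is capped by its slowest component in the data path.
# All throughput figures below are hypothetical, for illustration only.

def effective_throughput(components: dict) -> tuple:
    """Return the bottleneck component name and its throughput (GB/s)."""
    name = min(components, key=components.get)
    return name, components[name]

server = {
    "nic_100gbe": 12.5,      # 100 GbE is ~12.5 GB/s
    "nvme_raid": 26.0,       # hypothetical NVMe RAID array
    "pcie_gen4_x16": 31.5,   # ~32 GB/s usable, one direction
    "memory": 200.0,         # aggregate memory bandwidth, hypothetical
}

bottleneck, rate = effective_throughput(server)
print(f"Bottleneck: {bottleneck} at {rate} GB/s")
```

Swap in measured numbers for a real configuration and the same comparison shows which upgrade, NIC, storage, or bus, actually moves the needle.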
OEMs like Dell are going to be tapping these innovations and trying to get to the market early. Dell, in particular, will be using these systems as the basis for its next generation Gen 16 servers, which are going to bring new capabilities to the market. Now, of course, Dell is not alone. There are other OEMs, you've got HPE, Lenovo, you've got ODMs, you've got the cloud players, they're all going to be looking to keep pace with the market. Now, the other big trend that we've seen in the market is the way customers are thinking about or should be thinking about performance. No longer is the clock speed of the CPU the sole and most indicative performance metric. There's much more emphasis on innovation around all those supporting components in a system, specifically the parts of the system that take advantage, for example, of faster bus speeds. We're talking about things like network interface cards and RAID controllers and memories and other peripheral devices that, in combination with microprocessors, determine how well systems can perform on compute operations, IO and other critical tasks. Now, these combinatorial factors ultimately determine the overall performance of the system and how well suited a particular server is to handling different workloads. So we're seeing OEMs like Dell building flexibility into their offerings and putting out products in their portfolios that can meet the changing needs of their customers. Welcome to our ongoing series where we investigate the critical question, does hardware matter? My name is Dave Vellante, and with me today to discuss these trends and the things that you should know about for the next generation of server architectures is former CTO from Oracle and EMC and adjunct faculty at Wharton CTO Academy, David Nicholson. Dave, always great to have you on "theCUBE." Thanks for making some time with me. >> Yeah, of course, Dave, great to be here.
>> All right, so you heard my little spiel in the intro, that summary, >> Yeah. >> Was it accurate? What would you add? What do people need to know? >> Yeah, no, no, no, 100% accurate, but you know, I'm a resident nerd, so just, you know, some kind of clarification. If we think of things like microprocessor release cycles, it's always going to be characterized as rolling thunder. I think 2023 in particular is going to be this constant release cycle that we're going to see. You mentioned the, (clears throat) excuse me, general processors with 96 cores, shortly after the 96 core release, we'll see that 128 core release that you referenced in terms of compute density. And then, we can talk about what it means in terms of, you know, nanometers and performance per core and everything else. But yeah, no, that's the main thing I would say, is just people shouldn't look at this like a new car's being released on Saturday. This is going to happen over the next 18 months, really. >> All right, so to that point, you think about Dell's next generation systems, they're going to be featuring these new AMD processes, but to your point, when you think about performance claims, in this industry, it's a moving target. It's that, you call it a rolling thunder. So what does that game of hopscotch, if you will, look like? How do you see it unfolding over the next 12 to 18 months? >> So out of the gate, you know, slated as of right now for a November 10th release, AMD's going to be first to market with, you know, everyone will argue, but first to market with five nanometer technology in production systems, 96 cores. What's important though is, those microprocessors are going to be resident on motherboards from Dell that feature things like PCIe 5.0 technology. So everything surrounding the microprocessor complex is faster. 
Again, going back to this idea of rolling thunder, we expect the Gen 16 PowerEdge servers from Dell to similarly be rolled out in stages with initial releases that will address certain specific kinds of workloads and follow on releases with a variety of systems configured in a variety of ways. >> So I appreciate you painting a picture. Let's kind of stay inside under the hood, if we can, >> Sure. >> And share with us what we should know about these kind of next generation CPUs. How are companies like Dell going to be configuring them? How important are clock speeds and core counts in these new systems? And what about, you mentioned motherboards, what about next gen motherboards? You mentioned PCIe Gen 5, where does that fit in? So take us inside deeper into the system, please. >> Yeah, so if you will, you know, if you will join me for a moment, let's crack open the box and look inside. It's not just microprocessors. Like I said, they're plugged into a bus architecture that interconnects. How quickly that interconnect performs is critical. Now, I'm going to give you a statistic that doesn't require a PhD to understand. When we go from PCIe Gen 4 to Gen 5, which is going to be featured in all of these systems, we double the performance. So just, you can write that down, two, 2X. The performance is doubled, but the numbers are pretty staggering in terms of gigatransfers per second, 128 gigabytes per second of aggregate bandwidth on the motherboard. Again, doubling when going from 4th Gen to 5th Gen. But the reality is, most users of these systems are still on PCIe Gen 3 based systems. So for them, just from a bus architecture perspective, you're doing a 4X or 8X leap in performance, and then all of the peripherals that plug into that faster bus are faster, whether it's RAID controllers or storage controllers or network interface cards. Companies like Broadcom come to mind.
All of their components are leapfrogging their prior generation to fit into this ecosystem. >> So I wonder if we could stay with PCIe for a moment and, you know, just understand what Gen 5 brings. You said, you know, 2X, I think we're talking bandwidth here. Is there a latency impact? You know, why does this matter? And just, you know, this premise that these other components increasingly matter more, which components of the system are we talking about that can actually take advantage of PCIe Gen 5? >> Pretty much all of them, Dave. So whether it's memory plugged in or network interface cards, so communication to the outside world, which computer servers tend to want to do in 2022, controllers that are attached to internal and external storage devices. All of them benefit from this enhancement in performance. And it's, you know, PCI Express performance is measured in essentially bandwidth and throughput in the sense of the number of transfers per second that you can do. It's mind numbing, I want to say it's 32 gigatransfers per second. And then in terms of bandwidth, again, across the lanes that are available, 128 gigabytes per second. I'm going to have to check if it's gigabits or gigabytes. It's a massive number. And again, it's double what PCIe 4 was before. So what does that mean? Just like the advances in microprocessor technology, you can consolidate massive amounts of work into a much smaller footprint. That's critical because everything in that server is consuming power. So when you look at next generation hardware that's driven by things like AMD Genoa or, you know, the EPYC processors with the Zen 4 microarchitecture, for every dollar that you're spending on power and equipment and everything else, you're getting far greater return on your investment.
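The figures quoted here can be sanity-checked from the published PCIe rates: Gen 3, 4 and 5 run at 8, 16 and 32 gigatransfers per second per lane with 128b/130b encoding, so a x16 Gen 5 link carries roughly 63 GB/s in each direction, about 126 GB/s counting both directions (commonly rounded to 128 GB/s), and exactly double Gen 4. A quick back-of-envelope check:

```python
# Back-of-envelope PCIe bandwidth, per the figures in the conversation.
# Gen 3/4/5 raw rates are 8/16/32 GT/s per lane; all three use 128b/130b
# encoding, so usable bits ~= raw rate * 128/130.

def x16_bandwidth_gbps(gt_per_s: float) -> float:
    """Usable GB/s for a x16 link, one direction."""
    per_lane_gbits = gt_per_s * (128 / 130)   # encoding overhead
    return per_lane_gbits * 16 / 8            # 16 lanes, 8 bits per byte

for gen, rate in {"Gen3": 8, "Gen4": 16, "Gen5": 32}.items():
    one_way = x16_bandwidth_gbps(rate)
    print(f"{gen}: ~{one_way:.0f} GB/s one way, ~{2 * one_way:.0f} GB/s both directions")
```

So the speaker's "128 gigabytes per second" is best read as a x16 Gen 5 slot counting both directions, and the 2X claim over Gen 4 holds exactly.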
Now, I need to say that we anticipate that these individual servers, if you're out shopping for a server, and that's a very nebulous term because they come in all sorts of shapes and sizes, I think there's going to be a little bit of sticker shock at first until you run the numbers. People will look at an individual server and they'll say, wow, this is expensive and the peripherals, the things that are going into those slots are more expensive, but you're getting more bang for your buck. You're getting much more consolidation, lower power usage and for every dollar, you're getting a greater amount of performance and transactions, which translates up the stack through the application layer and, you know, out to the end user's desire to get work done. >> So I want to come back to that, but let me stay on performance for a minute. You know, we all used to be, when you'd go buy a new PC, you'd be like, what's the clock speed of that? And so, when you think about performance of a system today and how measurements are changing, how should customers think about performance in these next gen systems? And where does that, again, where does that supporting ecosystem play? >> So if you are really into the speeds and feeds and what's under the covers, from an academic perspective, you can go in and you can look at the die size that was used to create the microprocessors, the clock speeds, how many cores there are, but really, the answer is look at the benchmarks that are created through testing, especially from third party organizations that test these things for workloads that you intend to use these servers for. So if you are looking to support something like a high performance environment for artificial intelligence or machine learning, look at the benchmarks as they're recorded, as they're delivered by the entire system. So it's not just about the core. So yeah, it's interesting to look at clock speeds to kind of compare where we are with regards to Moore's Law. 
Have we been able to continue to track along that path? We know there are physical limitations to Moore's Law from an individual microprocessor perspective, but none of that really matters. What really matters is what can this system that I'm buying deliver in terms of application performance and user requirement performance? So that's what I'd say you want to look for. >> So I presume we're going to see these benchmarks at some point, I'm hoping we can, I'm hoping we can have you back on to talk about them. Is that something that we can expect in the future? >> Yeah, 100%, 100%. Dell, and I'm sure other companies, are furiously working away to demonstrate the advantages of this next gen architecture. If I had to guess, I would say that we are going to see quite a few world records set because of the combination of things, like faster network interface cards, faster storage cards, faster memory, more memory, faster cache, more cache, along with the enhanced microprocessors that are going to be delivered. And you mentioned this is, you know, AMD is sort of starting off this season of rolling thunder and in a few months, we'll start getting the initial entries from Intel also, and we'll be able to compare where they fit in with what AMD is offering. I'd expect OEMs like Dell to have, you know, a portfolio of products that highlight the advantages of each processor's set. >> Yeah, I talked in my open Dave about the diversity of workloads. What are some of those emerging workloads and how will companies like Dell address them in your view? >> So a lot of the applications that are going to be supported are what we think of as legacy application environments. A lot of Oracle databases, workloads associated with ERP, all of those things are just going to get better bang for their buck from a compute perspective. 
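The advice above, to rank systems by workload benchmarks rather than clock speed, can be made concrete: compute benchmark score per watt and per dollar and pick the winner on delivered work. The servers and figures below are entirely made up for illustration:

```python
# Rank hypothetical servers by benchmark score per watt and per dollar,
# rather than by clock speed. All figures are invented for illustration.

servers = [
    {"name": "old_gen", "ghz": 3.8, "score": 100, "watts": 500, "price": 5},
    {"name": "new_gen", "ghz": 3.4, "score": 480, "watts": 700, "price": 10},
]

for s in servers:
    s["score_per_watt"] = s["score"] / s["watts"]
    s["score_per_dollar"] = s["score"] / s["price"]

best = max(servers, key=lambda s: s["score_per_dollar"])
# Note the lower-clocked system wins on delivered work per dollar.
print(best["name"])
```

Here the 3.4 GHz system beats the 3.8 GHz one on every metric that matters, which is exactly why clock speed alone is a poor buying signal.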
But what we're going to be hearing a lot about, and what's exciting about what the future really holds for us, is this arena of artificial intelligence and machine learning. These next gen platforms offer performance that allows us to do things in areas like natural language processing that we just couldn't do before cost effectively. So I think the next few years are going to see a lot of advances in AI and ML that will be debated in the larger culture and that will excite a lot of computer scientists. So that's it, AI/ML are going to be the big buzzwords moving forward. >> So Dave, you talked earlier about this, some people might have sticker shock. So some of the infrastructure pros that are watching this might be, oh, okay, I'm going to have to pitch this, especially in this, you know, tough macro environment. I'm going to have to sell this to my CIO, my CFO. So what does this all mean? You know, if they're going to have to pay more, how is it going to affect TCO? How would you pitch that to your management? >> As long as you stay away from per unit cost, you're fine. And again, we don't have necessarily, or I don't have necessarily insider access to street pricing on next gen servers yet, but what I do know from examining what the component suppliers tell us is that, these systems are going to be significantly more expensive on a per unit basis. But what does that mean? If the server that you're used to buying for five bucks is now 10 bucks, but it's doing five times as much work, it's a great deal, and anyone who looks at it and says, 10 bucks? It used to only be five bucks, well, the ROI and the TCO, that's where all of this really needs to be measured and a huge part of that is going to be power consumption. And along with the performance tests that we expect to see coming out imminently, we should also be expecting to see some of those ROI metrics, especially around power consumption.
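The five-bucks-versus-ten-bucks example reduces to simple arithmetic: ignore per unit price and compare cost per unit of work. A sketch using the hypothetical numbers from the conversation:

```python
# The sticker-shock vs. TCO point, in arithmetic form.
# Hypothetical numbers echoing the example in the conversation:
# the new server costs twice as much but does five times the work.

def cost_per_unit_of_work(price: float, work_units: float) -> float:
    return price / work_units

old = cost_per_unit_of_work(price=5, work_units=1)
new = cost_per_unit_of_work(price=10, work_units=5)
print(f"old: {old:.2f}, new: {new:.2f}, savings: {1 - new / old:.0%}")
```

The per unit price doubled, but the cost per unit of work fell from 5.00 to 2.00, a 60% improvement before even counting power and floor space.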
So I don't think it's going to be a problem moving forward, but there will be some sticker shock. I imagine you're going to be able to go in and configure a very, very expensive, fully loaded system on some of these configurators online over the next year. >> So it's consolidation, which means you could do more with less. It's going to be, or more with the same, it's going to be lower power, less cooling, less floor space and lower management overhead, which is kind of now you get into staff, so you're going to have to sort of identify how the staff can be productive in other areas. You're probably not going to fire people hopefully. But yeah, it sounds like it's going to be a really consolidation play. I talked at the open about Intel and AMD and Intel coming out with Sapphire Rapids, you know, of course it's been well documented, it's late but they're now scheduled for January. Pat Gelsinger's talked about this, and of course they're going to try to leapfrog AMD and then AMD is going to respond, you talked about this earlier, so that game is going to continue. How long do you think this cycle will last? >> Forever. (laughs) It's just that, there will be periods of excitement like we're going to experience over at least the next year and then there will be a lull and then there will be a period of excitement. But along the way, we've got lurkers who are trying to disrupt this market completely. You know, specifically you think about ARM where the original design point was, okay, you're powered by a battery, you have to fit in someone's pocket. You can't catch on fire and burn their leg. That's sort of the requirement, as opposed to the, you know, the x86 model, which is okay, you have a data center with a raised floor and you have a nuclear power plant down the street. So don't worry about it. As long as an 18-wheeler can get it to where it needs to be, we'll be okay. 
And so, you would think that over time, ARM is going to creep up as all disruptive technologies do, and we've seen that, we've definitely seen that. But I would argue that we haven't seen it happen as quickly as maybe some of us expected. And then you've got NVIDIA kind of off to the side starting out, you know, heavy in the GPU space saying, hey, you know what, you can use the stuff we build for a whole lot of really cool new stuff. So they're running in a different direction, sort of gnawing at the traditional x86 vendors certainly. >> Yes, so I'm glad- >> That's going to be forever. >> I'm glad you brought up ARM and NVIDIA, I think, but you know, maybe it hasn't happened as quickly as many thought, although there's clearly pockets and examples where it is taking shape. But this to me, Dave, talks to the supporting cast. It's not just about the microprocessor unit anymore, specifically, you know, generally, but specifically the x86. It's the supporting, it's the CPU, the NPU, the XPU, if you will, but also all those surrounding components that, to your earlier point, are taking advantage of the faster bus speeds. >> Yeah, no, 100%. You know, look at it this way. A server used to be measured, well, they still are, you know, how many U of rack space does it take up? You had pizza box servers with a physical enclosure. Increasingly, you have the concept of a server in quotes being the aggregation of components that are all plugged together that share maybe a bus architecture. But those things are all connected internally and externally, especially externally, whether it's external storage, certainly networks. You talk about HPC, it's just not one server. It's hundreds or thousands of servers. So you could argue that we are in the era of connectivity and the real critical changes that we're going to see with these next generation server platforms are really centered on the bus architecture, PCIe 5, and the things that get plugged into those slots.
So if you're looking at 25 gig or 100 gig NICs and what that means from a performance and/or consolidation perspective, or things like RDMA over Converged Ethernet, what that means for connecting systems, those factors will be at least as important as the microprocessor complexes. I imagine IT professionals going out and making the decision, okay, we're going to buy these systems with these microprocessors, with this number of cores and memory. Okay, great. But the real work starts when you start talking about connecting all of them together. What does that look like? So yeah, the definition of what constitutes a server and what's critically important I think has definitely changed. >> Dave, let's wrap. What can our audience expect in the future? You talked earlier about you're going to be able to get benchmarks, so that we can quantify these innovations that we've been talking about, bring us home. >> Yeah, I'm looking forward to taking a solid look at some of the performance benchmarking that's going to come out, these legitimate attempts to set world records and those questions about ROI and TCO. I want solid information about what my dollar is getting me. I think it helps the server vendors to be able to express that in a concrete way because our understanding is these things on a per unit basis are going to be more expensive and you're going to have to justify them. So that's really what, it's the details that are going to come the day of the launch and in subsequent weeks. So I think we're going to be busy for the next year focusing on a lot of hardware that, yes, does matter. So, you know, hang on, it's going to be a fun ride. >> All right, Dave, we're going to leave it there. Thank you so much, my friend. Appreciate you coming on. >> Thanks, Dave. >> Okay, and don't forget to check out the special website that we've set up for this ongoing series.
Go to doeshardwarematter.com and you'll see commentary from industry leaders, we got analysts on there, technical experts from all over the world. Thanks for watching, and we'll see you next time. (upbeat music)

Published Date : Nov 10 2022


Breaking Analysis: CEO Nuggets from Microsoft Ignite & Google Cloud Next


 

>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR, this is Breaking Analysis with Dave Vellante. >> This past week we saw two of the Big 3 cloud providers present the latest update on their respective cloud visions, their business progress, their announcements and innovations. The content at these events had many overlapping themes, including modern cloud infrastructure at global scale, applying advanced machine intelligence, AKA AI, end-to-end data platforms, collaboration software. They talked a lot about the future of work automation. And they gave us a little taste, each company of the Metaverse Web 3.0 and much more. Despite these striking similarities, the differences between these two cloud platforms and that of AWS remains significant. With Microsoft leveraging its massive application software footprint to dominate virtually all markets and Google doing everything in its power to keep up with the frenetic pace of today's cloud innovation, which was set into motion a decade and a half ago by AWS. Hello and welcome to this week's Wikibon CUBE Insights, powered by ETR. In this Breaking Analysis, we unpack the immense amount of content presented by the CEOs of Microsoft and Google Cloud at Microsoft Ignite and Google Cloud Next. We'll also quantify with ETR survey data the relative position of these two cloud giants in four key sectors: cloud IaaS, BI analytics, data platforms and collaboration software. Now one thing was clear this past week, hybrid events are the thing. Google Cloud Next took place live over a 24-hour period in six cities around the world, with the main gathering in New York City. Microsoft Ignite, which normally is attended by 30,000 people, had a smaller event in Seattle, in person with a virtual audience around the world. AWS re:Invent, of course, is much different. 
Yes, there's a virtual component at re:Invent, but it's all about a big live audience gathering the week after Thanksgiving, in the first week of December in Las Vegas. Regardless, Satya Nadella's keynote address was prerecorded. It was highly produced and substantive. It was visionary, energetic with a strong message that Azure was a platform to allow customers to build their digital businesses. Doing more with less, which was a key theme of his. Nadella covered a lot of ground, starting with infrastructure on the compute side, highlighting a collaboration with Arm-based Ampere processors. New block storage, 60 regions, 175,000 miles of fiber cables around the world. He presented a meaningful multi-cloud message with Azure Arc to support on-prem and edge workloads, as well as of course the public cloud. And talked about confidential computing at the infrastructure level, a theme we hear from all cloud vendors. He then went deeper into the end-to-end data platform that Microsoft is building from the core data stores to analytics, to governance and the myriad tooling Microsoft offers. AI was next with a big focus on automation, AI, training models. He showed demos of machines coding and fixing code and machines automatically creating designs for creative workers and how Power Automate, Microsoft's RPA tooling, would combine with Microsoft Syntex to understand documents and provide standard ways for organizations to communicate with those documents. There was of course a big focus on Azure as a developer cloud platform with GitHub Copilot as a linchpin using AI to assist coders in low-code and no-code innovations that are coming down the pipe. And another giant theme was workforce transformation and how Microsoft is using its heritage in collaboration and productivity software to move beyond what Nadella called productivity paranoia, i.e., are remote workers doing their jobs?
In a world where collaboration is built into intelligent workflows, he even showed a glimpse of the future with AI-powered avatars and partnerships with Meta and, of all firms, Cisco, with Teams. And finally, security with a bevy of tools from identity, endpoint, governance, et cetera, stressing a suite of tools from a single provider, i.e., Microsoft. So a couple points here. One, Microsoft is following in the footsteps of AWS with silicon advancements and didn't really emphasize that trend much except for the Ampere announcement. But it's building out cloud infrastructure at a massive scale, there is no debate about that. Its plan on data is to try and provide somewhat more abstracted and simplified solutions, which differs a little bit from AWS's approach of the right database tool, for example, for the right job. Microsoft's automation play appears to provide simple individual productivity tools, kind of a ground up approach, and make it really easy for users to drive these bottoms up initiatives. We heard from UiPath at FORWARD 5 last month a little bit of a different approach: horizontal automation, end-to-end across platforms. So quite a different play there. Microsoft's angle on workforce transformation is visionary and will continue to solidify in our view its dominant position with Teams and Microsoft 365, and it will drive cloud infrastructure consumption by default. On security, as a cloud player it has to have world-class security, and Azure does. There's not a lot of debate about that, but the knock on Microsoft is Patch Tuesday becomes Hack Wednesday because Microsoft releases so many patches, it's got so much Swiss cheese in its legacy estate, and patching frequently, it becomes a roadmap and a trigger for hackers. Hey, Patch Tuesday, these are all the exploits that you can go after so you can act before the patches are implemented. And so it's really become a problem for users.
As well, Microsoft is competing with many of the best-of-breed platforms like CrowdStrike and Okta, which have market momentum and appear to be more attractive horizontal plays for customers outside of just the Microsoft cloud. But again, it's Microsoft. They make it easy and very inexpensive to adopt. Now, despite the outstanding presentation by Satya Nadella, there are a couple of statements that should raise eyebrows. Here are two of them. First, as he said, Azure is the only cloud that supports all organizations and all workloads from enterprises to startups, to highly regulated industries. I had a conversation with Sarbjeet Johal about this, to make sure I wasn't just missing something, and we were both surprised, somewhat, by this claim. I mean most certainly AWS supports more certifications, for example, and we would think it has a reasonable case to dispute that claim. And the other statement Nadella made: Azure is the only cloud provider enabling highly regulated industries to bring their most sensitive applications to the cloud. Now, reasonable people can debate whether AWS is there yet, but very clearly Oracle and IBM would have something to say about that statement. Now maybe some would just say, "Oh, they're not real clouds, you know, they're just doing hosting in the cloud, if you will." But still, when it comes to mission-critical applications, you would think Oracle is really the leader there. Oh, and Satya also mentioned the claim that the Edge browser, the Microsoft Edge browser, no questions asked, he said, is the best browser for business. And we could see some people having some questions about that. Like isn't Edge based on Chrome? Anyway, so we just had to question these statements and challenge Microsoft to defend them, because to us it's a little bit of BS and makes one wonder what else in such an awesome keynote, and it was awesome, was hyperbole. Okay, moving on to Google Cloud Next.
The keynote started with Sundar Pichai doing a virtual session, he was remote, stressing the importance of Google Cloud. He mentioned that Google Cloud, from its Q2 earnings, was on a $25-billion annual run rate. What he didn't mention is that it's also on a $3.6-billion annual operating-loss run rate based on its first-half performance. Just saying. And we'll dig into that issue a little bit more later in this episode. He also stressed the investments that Google has made to support its core search business, like its global network of 22 subsea cables, which also supports things like YouTube video with the great performance that we all rely on; innovations in BigQuery that support its search business; the threat analysis capability it's always had; and its AI, it's always been an AI-first company, he stressed. All of these, he said, are leveraged by the Google Cloud Platform, GCP. This is all true, by the way. Google has absolutely awesome tech, and the talks, both Pichai's and Kurian's, were forward-thinking and laid out a vision of the future. But they didn't address, in our view, and I talked to Sarbjeet Johal about this as well, today's challenges to the degree that Microsoft did and we expect AWS will at re:Invent this year. It was more out there, more forward-thinking, what's possible in the future, somewhat less about today's problems, so I think it resonates less with today's enterprise players. Thomas Kurian then took over from Sundar Pichai and did a really good job of highlighting customers, and I think he has to, right? He has to say, "Look, we are in this game. We have customers. 9 out of the top 10 media firms use Google Cloud. 8 out of the top 10 manufacturers. 9 out of the top 10 retailers. Same for telecom, same for healthcare. 8 out of the top 10 retail banks."
He and Sundar specifically referenced a number of companies, customers, including Avery Dennison, Groupe Renault, H&M, Johns Hopkins, Prudential, Minna Bank out of Japan, ANZ Bank, and many, many others during the session. So, you know, they had some proof points, and you got to give 'em props for that. Now, like Microsoft, Google talked about infrastructure. They referenced training processors and regions and compute optionality and storage, and how new workloads were emerging, particularly data-driven workloads in AI that required new infrastructure. He explicitly highlighted partnerships with Nvidia and Intel. I didn't see anything on Arm, which somewhat surprised me, 'cause I believe Google's working on that, or at least could follow in AWS's footsteps, if you will. But maybe that's why they're not mentioning it, or maybe I've got to do more research there, but let's park that for a minute. But again, as we've extensively discussed in Breaking Analysis, in our view, when it comes to compute, AWS, via its Annapurna acquisition, is well ahead of the pack in this area. Arm is making its way into the enterprise, but all three companies are heavily investing in infrastructure, which is great news for customers and the ecosystem. We'll come back to that. Data and AI go hand in hand, and there was no shortage of data talk. Google didn't mention Snowflake or Databricks specifically, although, by the way, it did mention Mongo a couple of times, but it did tout Google's, quote, open data cloud. Now, maybe Google has used that term before, but Snowflake has been marketing the data cloud concept for a couple of years now. So that struck us as a shot across the bow to one of its partners and, obviously, competitors, Snowflake. And BigQuery is the main centerpiece of Google's data strategy. Kurian talked about how they can take any data, from any source, in any format, from any cloud provider, with BigQuery Omni, and aggregate and understand it.
And with the support of Apache Iceberg, and Delta and Hudi coming in the future, and its Open Data Cloud Alliance, they talked a lot about that. So without specifically mentioning Snowflake or Databricks, Kurian co-opted a lot of messaging from these two players. Kurian also talked about Google Workspace and how it's now at 8 million users, up from 6 million just two years ago. There was a lot of discussion on developer optionality and several details on tools supported and the open mantra of Google. And finally, on security, Google brought out Kevin Mandia, a CUBE alum and extremely impressive individual who's CEO of Mandiant, a leading security service provider and consultancy that Google recently acquired for around $5.3 billion. They talked about moving from a shared responsibility model to a shared fate model, which is, again, kind of a shot across the bow at AWS's shared responsibility model. It's unclear whether Google will pay the same penalty if a customer doesn't live up to its portion of the shared responsibility, but we can probably assume that the customer is still going to bear the brunt of the pain nonetheless. Mandiant is really interesting because it's a services play, and Google has stated that it is not a services company and that it's going to give partners in the channel plenty of room to play. So we'll see what it does with Mandiant. But Mandiant is a very strong enterprise capability in the single most important area: security. So, interesting acquisition by Google. Now, as well, unlike Microsoft, Google is not competing with security leaders like Okta and CrowdStrike. Rather, it's partnering aggressively with those firms and prominently putting them forth. All right, let's get into the ETR survey data and see how Microsoft and Google are positioned in four key markets that we've mentioned before: IaaS, BI analytics, database data platforms, and collaboration software. First, let's look at the IaaS cloud.
ETR is just about to release its October survey, so I cannot share that data yet. I can only show July data, but we're going to give you some directional hints throughout this conversation. This chart shows net score, or spending momentum, on the vertical axis, and overlap, or presence in the data, i.e., how pervasive the platform is, on the horizontal axis. And we've inserted the Wikibon estimates of IaaS revenue for the companies, the Big 3. Actually, the Big 4; we included Alibaba. So, a couple of points on this somewhat busy data chart. First, Microsoft and AWS, as always, are dominant on both axes. The red dotted line there, at 40% on the vertical axis, represents a highly elevated spending velocity, and all of the Big 3 are above the line. Now, at the same time, GCP is well behind the two leaders on the horizontal axis, and you can see that in the table insert as well, in our revenue estimates. Now, why is Azure bigger in the ETR survey when AWS is larger according to the Wikibon revenue estimates? And the answer is because Microsoft products like 365 and Teams will often be considered cloud by respondents in the survey, so they fit into that ETR category. But in the insert data, we're stripping out applications and SaaS from Microsoft and Google, and we're only isolating on IaaS. The other point is, when you take a look at the early October returns, you see downward pressure, as signified by those dotted arrows, on every name. The only exceptions were Dell and IBM, which are showing slightly improved momentum. So the survey data generally confirms what we know: that AWS and Azure have a massive lead and strong momentum in the marketplace. But the real story is below the line. Unlike Google Cloud, which is on pace to lose well over $3 billion on an operating basis this year, AWS's operating profit is around $20 billion annually, and Microsoft's Intelligent Cloud generated more than $30 billion in operating income last fiscal year.
Let that sink in for a moment. Now, again, that's not to say Google doesn't have traction. It does, and Kurian gave some nice proof points and customer examples in his keynote presentation, but the data underscores the lead that Microsoft and AWS have on Google in cloud. And here's a breakdown of ETR's proprietary net score methodology, that vertical axis that we showed you in the previous chart. It asks customers: Are you adopting the platform new? That's the lime green. Are you spending 6% or more? That's the forest green. Is your spending flat? That's the gray. Is your spending down 6% or worse? That's the pink. Or are you replacing the platform, defecting? That's the bright red. You subtract the reds from the greens and you get a net score. Now, one caveat here, which actually is really favorable for Microsoft: the Microsoft data that we're showing here is across the entire Microsoft portfolio. The other point is, this is July data; we'll have an update for you once ETR releases its October results. But we're talking about meaningful samples here, the Ns: 620 for AWS, over a thousand for Microsoft, and more than 450 respondents in the survey for Google. So the real tell is replacements, that bright red. There is virtually no churn for AWS and Microsoft, but Google's churn is 5x that of those two in the survey. Now, 5% churn is not high, but you'd like to see four things for Google, given its smaller size. One is less churn. Two is much, much higher adoption rates in the lime green. Three is a higher percentage of those spending more, the forest green. And four is a lower percentage of those spending less. And none of these conditions really applies here for Google. GCP is still not growing fast enough, in our opinion, and doesn't have nearly the traction of the two leaders, and that shows up in the survey data. All right, let's look at the next sector: BI analytics. Here we have those same XY dimensions. Again, Microsoft dominates the picture.
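An aside before digging into the next chart: the net score arithmetic described above is simple enough to sketch in a few lines. This is an illustrative reconstruction, not ETR's actual code, and the respondent shares below are hypothetical, not real survey figures:

```python
def net_score(adopting, increasing, flat, decreasing, replacing):
    """ETR-style net score: subtract the reds (spending down 6% or worse,
    replacing/defecting) from the greens (new adoption, spending up 6% or more).
    Flat spenders contribute nothing. Inputs are percentages summing to 100."""
    total = adopting + increasing + flat + decreasing + replacing
    assert abs(total - 100) < 1e-9, "category shares should sum to 100"
    return (adopting + increasing) - (decreasing + replacing)

# Hypothetical platform: strong adoption, modest churn.
print(net_score(adopting=15, increasing=40, flat=35, decreasing=7, replacing=3))
```

With these made-up shares the score comes out to 45, which would sit above the 40% red dotted line that marks highly elevated spending velocity in the charts.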
AWS is very strong also on both axes. Tableau, very popular and respectable, of course acquired by Salesforce, is still looking pretty good on the vertical axis, and again, on the horizontal axis, a big presence there for Tableau. And Google, with Looker and its other platforms, is also respectable, but it, again, has some work to do. Now, notice Streamlit, a recent Snowflake acquisition. It's strong on the vertical axis, and because of Snowflake's go-to-market (indistinct), it's likely going to move to the right over time. Grafana is also prominent on the Y axis, but a glimpse at the most recent survey data shows it slightly declining, while Looker actually improves a bit, as does Cloudera, which will move up slightly. Again, Microsoft just blows you away, doesn't it? All right, now let's get into database and data platforms. Same XY dimensions, but now database and data warehouse. Snowflake, as usual, takes the top spot on the vertical axis, and it actually keeps moving to the right as well, with, again, Microsoft and AWS dominant in the market, as is Oracle on the X axis, albeit with less spending velocity, but of course it's the database king. Google is well behind on the X axis but solidly above the 40% line on the vertical axis. Note that virtually all platforms will see pressure in the next survey due to the macro environment. Microsoft might even dip below the 40% line for the first time in a while. Lastly, let's look at the collaboration and productivity software market. This is such an important area for both Microsoft and Google. And just look at Microsoft, with 365 and Teams up and to the right. I mean, just so impressive and ubiquitous. And we've highlighted Google. It's in the pack. It certainly has a nice base, with an N of 174, and I can tell you that N will rise in the next survey, which is an indication that more people are adopting.
But given the investment and the tech behind it, and all the AI and Google's resources, you'd really like to see Google in this space above the 40% line, given the importance of this market, of this collaboration area, to Google's success and the degree to which they emphasize it in their pitch. And look, this brings up something that we've talked about before on Breaking Analysis. Google doesn't have a tech problem. This is a go-to-market and marketing challenge that Google faces, and it's up against two go-to-market champs in Microsoft and AWS. And Google doesn't have the enterprise sales culture. It's trying, it's making progress, but it's like that racehorse that has all the potential in the world but is just missing some kind of key ingredient to put it over the top. It's always coming in third. (chuckles) But we're watching, and Google's obviously making some investments, as we shared earlier. All right, some final thoughts on what we learned this week and in this research. Customers and partners should be thrilled that both Microsoft and Google, along with AWS, are spending so much money on innovation and building out global platforms. This is a gift to the industry, and we should be thankful, frankly, because it's good for business, good for competitiveness, and good for future innovation as a platform that can be built upon. Now, we didn't talk much about multi-cloud, we haven't even mentioned supercloud, but both Microsoft and Google have a story that resonates with customers in cross-cloud capabilities, unlike AWS at this time. But we never say never when it comes to AWS. They sometimes, and oftentimes, surprise you. One of the other things that Sarbjeet Johal and John Furrier and I have discussed is that each of the Big 3 is positioning to their respective strengths. AWS is the best IaaS. Microsoft is building out the, kind of, quote, we-make-it-easy-for-you cloud. And Google is trying to be the open data cloud, with its open-source chops and excellent tech.
And that puts added pressure on Snowflake, doesn't it? You know, Thomas Kurian made some comments, according to CRN, something to the effect that, "We are the only company that can do the data cloud thing across clouds," which, again, if I'm being honest, is not really accurate. Now, I haven't clarified these statements with Google, and often things get misquoted, but there's little question that, as AWS has done in the past with Redshift, Google is taking a page out of Snowflake's book, and Databricks' as well. A big difference among the Big 3 is that AWS doesn't have the big emphasis on up-the-stack collaboration software that both Microsoft and Google have, which for Microsoft and Google will drive captive IaaS consumption. AWS obviously does some of that, a lot of that, in database, but ISVs that compete with Microsoft and Google should have a greater affinity, one would think, to AWS for competitive reasons. And the same thing could be said in security, we would think, because, as I mentioned before, Microsoft competes very directly with CrowdStrike and Okta and others. One of the big things that Sarbjeet mentioned that I want to call out here, and I'd love to have your opinion on: AWS specifically, but also Microsoft with Azure, have successfully created what Sarbjeet calls brand distance. AWS has created distance from Amazon retail: even though AWS talks all the time about Amazon X and Amazon Y in its product portfolio, you don't really consider it part of the retail organization, 'cause it's not. Azure, same thing; it has created its own identity. And it seems that Google still struggles to do that. It's still very highly linked to the sort of core of Google. Now, maybe that's by design, but for enterprise customers there's still some potential confusion with Google. What are its intentions? How long will they continue to lose money and invest? Are they going to pull the plug like they do on so many other tools?
So, you know, maybe some rethinking of the marketing there and the positioning. Now, we didn't talk much about ecosystem, but it's vital for any cloud player, and Google again has some work to do relative to the leaders. Which brings us to supercloud. The ecosystem and end customers are now in a position this decade to digitally transform. And we're talking here about building out their own clouds, not by building data centers and installing racks of servers and storage devices, no. Rather, by building value on top of the hyperscaler gift that has been presented. And that is a mega trend that we're watching closely in theCUBE community. While there's debate about the supercloud name and so forth, there's little question in our minds that the next decade of cloud will not be like the last. All right, we're going to leave it there today. Many thanks to Sarbjeet Johal and my business partner, John Furrier, for their input to today's episode. Thanks to Alex Myerson, who's on production and manages the podcast, and Ken Schiffman as well. Kristen Martin and Cheryl Knight helped get the word out on social media and in our newsletters. And Rob Hof is our editor in chief over at SiliconANGLE, who does some wonderful editing. And check out SiliconANGLE; there's a lot of coverage on Google Cloud Next and Microsoft Ignite. Remember, all these episodes are available as podcasts wherever you listen. Just search "Breaking Analysis podcast." I publish each week on wikibon.com and siliconangle.com. And you can always get in touch with me via email, david.vellante@siliconangle.com, or you can DM me at dvellante or comment on my LinkedIn posts. And please do check out etr.ai, the best survey data in the enterprise tech business. This is Dave Vellante for theCUBE Insights, powered by ETR. Thanks for watching, and we'll see you next time on Breaking Analysis. (gentle music)

Published Date : Oct 15 2022

Saket Saurabh, Next | AWS Startup Showcase S2 E2


 

(upbeat music) >> Welcome everyone to theCUBE's presentation of the AWS Startup Showcase: Data as Code. This is season two, episode two of our ongoing series covering exciting startups in the AWS ecosystem, to talk about data and analytics. I'm your host, Lisa Martin. I have a CUBE alumni here with me, Saket Saurabh, the CEO and founder of Nexla. He's here to talk about the future of automated data engineering. Saket, welcome back. Great to see you. >> Lisa, thank you for having me. Pleasure to be here again. >> Let's dig into Nexla's mission: ready-to-use data in the hands of every user. What does that mean? >> That means that, you know, every organization, what are they trying to do with data? They want to make use of data, they want to make decisions from data, they want to make data a part of their business, right? The challenge is that every function in an organization today needs to leverage data, whether it is finance, whether it is HR, whether it is marketing, sales, or product. The problem for companies is that for each of these users and each of these teams, the data is not ready for them to use as it is. There is a lot that goes on before the data can be in their hands and in the tools that they like to work with. And that's where a lot of data engineering happens today. I would say that is by far one of the biggest bottlenecks today for companies in accelerating their business and being, you know, truly data-driven. >> So talk to me about what makes Nexla unique. When you're in customer conversations, as every company these days in every industry has to be a data company, what do you tell them about what differentiates you? >> Yeah, one of the biggest challenges out there is that the variety of data that companies work with is growing tremendously. You know, every SaaS application you use becomes a data source; every type of database, every type of user event, anything can be a source of data now. It is a tremendous engineering challenge for companies to make the data usable, and the biggest challenge there is people.
Companies just cannot have enough people to write that code to make the data engineering happen. And where we come in with a very unique value is how to start thinking about making this whole process much faster, much more automated. At the end of the day, Lisa, time to value and time to results is by far the number one thing on top of mind for customers. >> Time to value is critical. We're all thin on patience these days, whether in our consumer or our business lives, but being able to get access to data to make intelligent decisions, whether it's on something that you're going to buy or a product or service you're going to deliver, is really critical. Give me a snapshot of some of the users of Nexla. >> Yeah, the users of Nexla are actually across different industries. One of the interesting things is that the data challenges, whether you are in financial services, whether you are in retail and e-commerce, whether you are in healthcare, are very similar: it's basically getting connected to all these data systems and having the data. Now, what people do with the data is very specific to their industry. So for example, within the e-commerce or retail world itself, you know, companies the likes of Bed Bath & Beyond and Forever 21 and Poshmark, which are retailers or e-commerce companies, use Nexla today to bring a lot of data in. So do delivery companies like DoorDash and Instacart, and, you know, so do, for example, logistics providers like Narvar, or customer loyalty and customer data companies like Yotpo. So across the board, for example, just in retail we cover a whole bunch of companies. >> Got it. Now let's dig in. You're here to talk about the future of automated data engineering. Talk to me about data engineering. What is it? Let's define it and crack it open. >> Yeah, data engineering is, I would say, by far one of the hottest areas of work today, and data engineers are among the hardest people to hire if you're looking for one. Data engineering is basically, um, all the code, you
know, the processes and the people that are basically connecting to these systems. So just to give a very practical example, right, for somebody in e-commerce, let's take the case of DoorDash, right? It's extremely important for them to have data as to which stores have what products, what is available. Is this something they can list for people to go and buy? Is this something that they can therefore deliver, right? This is data that changes all the time. Now imagine them getting data from hundreds of different merchants across the board. So it is the task of data engineering to consume that data from all these different places, different formats, different APIs, different systems, and then somehow unify all the data so that it can be used by the applications that they are building. So data engineering in this case becomes taking data from different places and making it useful. Again, back to what I was talking about: ready-to-use data. It is a lot of code, it's a lot of people. Not just that, it is something that runs every single day, so it means it has monitoring, it has reliability, it has performance, it has every aspect of engineering as we know it going into it. >> You mentioned it's a hot topic, which it is, but it's also really challenging to accomplish. How does Nexla help enable that? >> Yeah, data engineering is quite interesting in that, one, it is difficult to implement, you know, the necessary sort of pieces, but it is also very repetitive at some level, right? I mean, when you connect to, say, 10 systems and get data from them, you know, that's not the end of it. You have 10 more and 10 more and 10 more, and at some point you have thousands of such, you know, data connections and data flows happening. It's hard to maintain them as well, right? So the way Nexla gets into the whole picture is looking at what can we understand about data, what can we observe about the data systems, what can be done from that, and then starting to automate certain pieces of data engineering, so that we are
helping those teams just accelerate a lot faster. And it, I would say, comes down to more people being able to do these tasks rather than only very, very specialized people. >> More people being able to do the tasks, more users, kind of a democratization of data, really. Can you talk to us in more detail about how Nexla is automating data engineering? >> Yeah, I think, you know, this is best shared through a visual, so let me walk you through that a little bit as to how we automate data engineering, right? So if we think about data engineering, three of the most core components, there are many parts to it, but three of the most core components are integrating with data systems, preparing and transforming data, and then monitoring that, right? So automating data engineering happens in, you know, three different ways. First of all, connecting. Connecting to data is basically about the gateway to data, the ability to read and write data from different systems. This is where the data journey starts, but it is extremely complex because people have to write code to connect to different systems. One part that we have automated is generating these connectors, so that you don't have to write code for that. Also, making them bi-directional is extremely valuable, because now you can read and write from any system. The second part is that the gateway, the connector, has read the data, but how do you represent it to the user so anybody can understand it? And that's where the concept of a data product comes in. So we also look at auto-generating data products. These become the common language and entity that people can understand and work with. And then the third part is taking all this automation and bringing the human into the loop. No automation is perfect, and therefore bringing the human into the loop means that somebody who is an expert in data, who can look at it and understand it, can now do things which only data systems experts were able to do before. So bringing that user of data directly into the
picture is one important part. But let's not forget, data challenges are very diverse and very complex, so the same system also remains accessible to the engineers who are experts in that, and now both of these can work together. While an engineer will come in through APIs and SDKs and command interfaces, a data user comes in through a nice no-code user interface, and all of these things coming together are what accelerates, back to that, time to value, which really everybody cares about. >> So if I'm in marketing and I'm a data user, I'm able to have a collaborative workflow with the data engineer? >> Yeah, yeah. For the first time, that is actually possible, and everybody focuses on their expertise and their know-how. So, you know, somebody who, for example, in financial services really understands portfolios and transactions and different types of asset classes, they have the data in front of them. The engineers who understand the underlying real-time data feeds, they are still involved in the loop, but now they are not doing that back and forth. You know, as the user of data, I'm not going to the engineer saying, "Hey, can you do this for me? Can you get the data here?" And that back and forth is not only time-consuming, it's frustrating, and the number one holdback, right? >> Yeah, and that's time that nobody has to waste, as we know, for many reasons. Talk to me about, when you look into your crystal ball, which I'm sure you have one, what is the future of data engineering from Nexla's perspective? You talked about the automation. What does the future hold? >> I think the future of data engineering is that we up-level this to a point where companies don't have to be slowed down by it. I think a lot of tooling is already happening. The way to think about this is that here in 2022, if we think that our data challenges are, you know, like X, they will be a thousand X in five years, right? I mean, this complexity is just increasing very rapidly. So we think that this becomes one of those fundamental layers,
you know, and, you know, as I was saying maybe the last time, this is like the road. You don't feel it, you just move on it. You do your job, you build your products, you deliver your services as a company, and this just works for you. And that's where I think the future is, and that's where I think the future should be. We all need to work towards that. >> We're not there yet. >> Not there yet. >> A lot of potential, a lot of opportunity, and a lot of momentum. Speaking of momentum, I want to talk about data mesh. That is a topic of a lot of excitement, a lot of discussion. Let's unpack that. >> Yeah, I think, you know, the idea that data should be democratized, that people should get access to the data, it all comes back to that sort of basic concept of scale. Companies can scale only when more people can do the relevant jobs without depending on each other, right? So the idea of data democratization has been there for a long time, but, you know, recently, in the last couple of years, the concept of data mesh was introduced by Zhamak Dehghani and ThoughtWorks, and that has really caught the attention of people, and the imagination of leadership as well. The idea that data should be available as a product, you know, that democratization can happen. What is the entity of that democratization? It's data presented as a product that people can use and collaborate on. That's extremely powerful. I think a lot of companies are gravitating towards that, and that's why it's exciting. This is promising a future that is, you know, possible. >> So, speaking of data products, we talked a little bit about this last time, but can you really help us understand, see, smell, touch, feel what a data product is, and give us that context? >> Yeah, absolutely. I think it's best to orient ourselves with the general thinking of how we consider something a product, right? A product is something that we find ready to use. For example, this table that I'm using right now, made out of raw materials: wood, metal, screws. Somebody designed it, somebody
produced it, and I'm using it right now. When we think about data products, we think about data as the raw material. So, for example, a spreadsheet, an API, a database query, those are the raw materials. What is a data product? It is something that further enriches and enhances that entity to be much more usable, ready to use, right? Let me illustrate that with a little bit of a visual, actually, and that might help. The idea of the data product, and this is how a data product looks in Nexla for a user, right? As you see, the concept of a data product is something that, first of all, is a logical entity. This simply means that it's not a new copy of the data. Just like containers or logical compute units, you know, these data products are logical entities, but they represent data in the same consistent fashion regardless of where the data comes from or what format it is in. They give the user an idea of what the structure of the data is, what the sample data looks like, what the characteristics of the data are. They allow people to have some documentation around it: what does the data mean, what do these attributes, you know, mean, and how should they be interpreted? How do you validate that data? That's something users often know in an industry: how should my data look? Well, this value can never be negative, because it's a price, for example, right? Then there's the ability to take these data products, which, you know, we automate by generating, as I was mentioning earlier, automatically creating these data products, and use them to create new data products. Now, that's something that's very unique about data. You could take data about an order from a company and say, well, the order data has an order ID and a user ID, but I need to look up the shipping address, so I can combine user and order data to get that information in one place. So, you know, creating new data products, giving people access: hey, I've designed a data product, I think you'll find it useful, you can go use that as it is, you don't have to go
from scratch. So all of those things together make a data product something people can find ready to use. And again, this is usable by that general user, the one in marketing or in sales, in the tool of their choice. You can say, oh, I'm most familiar with using data in a spreadsheet, I'd like it there, or I prefer my data in Tableau or Looker to visualize it, and you can have it there. So these data products give multiple interfaces for the end user to make use of them. >> Got it, I like it. You're meeting the user where they are with relevant data that helps them understand so much more contextually. I'm curious, when you're in customer conversations with customers that come to you saying, "Saket, we need to build a data mesh," how is Nexla relevant there? What is your conversation like? >> Yeah, when people want to build a data mesh, they're really looking for how their organization will scale into the future. There are multiple components to building a data mesh. There's a tooling part of it, the technology portion, and there are people and processes. Unless you train people in certain processes and say, hey, when you build a data product, make sure you've taken care of privacy, or compliance to certain rules, or that who you give access to follows certain rules, it doesn't work. So we provide the technology component of it, and then the people and processes are something that companies put in place as they adopt. The concept of the data product is core to building the data mesh, and having governance on it and having all of this be self-serve is an essential part of that. So that's where we come into the picture, as the technology component of the whole story, working to deliver on that mission of getting data into the hands of every user. >> You mentioned, and I want to dig into this in the last few minutes that we have, the target audience. You mentioned a few
by name, big-name customers that Nexla has. I heard retail, I heard e-commerce, I think I heard logistics. But talk to me about the target customer for Nexla. Any verticals in particular, or any company sizes in particular as well? >> Yeah. One of the top three banks in the country is a big user of Nexla as part of their data stack. We actually sit as part of their enterprise-wide AI platform, providing data to their data scientists. We're not allowed to share their name, unfortunately. There are multiple other companies in the asset management area, for example; they work with a lot of data on markets, portfolios, and so on. One of the leading medical device companies is using Nexla; data scientists there are using data coming in, in real time or streaming, from medical devices, to train on and combine with other data for the clinical trial related research that they do. LinkedIn is an excellent customer. LinkedIn is by far the largest social network, and their marketing team leverages Nexla to bring data from different types of systems together as well. There are companies in the education space, like Nerdy, a public company that uses Nexla for student enrollment and education data as they collaborate with school districts, for example. And there are companies across the board in marketing that use Nexla too. So in terms of who uses Nexla today, it is mostly mid-size to large to very large enterprises, which leverage Nexla as a very critical component, often for mission-critical data. >> Do you see that changing anytime soon? As every company these days has to be a data company, we expect as consumers, whether it's my grocery store or my local coffee shop, that they've got to use data to deliver me that personalized experience. Do you see the target audience shifting down into the mid-market and SMB space for Nexla? >> Oh yeah,
absolutely. Look, we started the journey of the company with the thinking that the most complex data challenges exist in the large enterprise, and that if we can make it no-code, self-serve, and easy to use for them, we can bring the same high-end technology to everybody. This is exactly why we recently launched in the Amazon marketplace, so anybody can go there, get access to Nexla, and start to use it. And you will see more and more of that happen, where we will be bringing out even some free versions of our product. So you're absolutely right: every company needs to leverage data, and I think people are getting much better at it. Especially in the last couple of years, I've seen that teams have become much more sophisticated. Even if you are a coffee shop and you're running campaigns, getting Yelp reviews and so on, this is data you can use to better understand your demographic and your customers and run your business better. So one day, yes, we will absolutely be in the hands of every single person. >> A lot more opportunity to delight a lot more consumers and customers. Saket, thank you so much for joining me on the program during the Startup Showcase. You did a great job of helping us understand the future of automated data engineering. We appreciate your insights. >> Thank you so much, Lisa. It's a pleasure talking to you. >> Likewise. For Saket Saurabh, I'm Lisa Martin. You're watching theCUBE's coverage of the AWS Startup Showcase, season two, episode two. Stick around, more great content coming up from theCUBE, the leader in hybrid tech event coverage. (upbeat music)
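The "data product" idea Saurabh walks through above, a logical entity that wraps raw data with a schema, documentation, and validation rules, and that can be composed into new products (the orders-plus-users shipping-address example), can be sketched roughly as follows. All names and the API shape here are hypothetical illustrations, not Nexla's actual interface:

```python
from dataclasses import dataclass, field

@dataclass
class DataProduct:
    """A logical entity over raw data: schema, docs, rules, and records."""
    name: str
    schema: dict                                # attribute -> type
    docs: dict = field(default_factory=dict)    # attribute -> meaning
    rules: list = field(default_factory=list)   # record-level validators
    records: list = field(default_factory=list)

    def validate(self, record: dict) -> bool:
        # valid = matches the schema's attributes and passes every rule
        if set(record) != set(self.schema):
            return False
        return all(rule(record) for rule in self.rules)

    def join(self, other: "DataProduct", key: str, name: str) -> "DataProduct":
        # compose two data products into a new one, as in the
        # orders + users -> shipping-address example
        index = {r[key]: r for r in other.records}
        merged = [{**r, **index[r[key]]} for r in self.records if r[key] in index]
        return DataProduct(name, {**self.schema, **other.schema}, records=merged)

orders = DataProduct(
    "orders", {"order_id": int, "user_id": int, "price": float},
    docs={"price": "order total in USD"},
    rules=[lambda r: r["price"] >= 0],   # "this value can never be negative"
    records=[{"order_id": 1, "user_id": 7, "price": 19.5}],
)
users = DataProduct(
    "users", {"user_id": int, "shipping_address": str},
    records=[{"user_id": 7, "shipping_address": "12 Main St"}],
)
shipped = orders.join(users, key="user_id", name="orders_with_address")
print(shipped.records[0]["shipping_address"])  # 12 Main St
```

The key design point from the interview is that `DataProduct` holds no new copy of the underlying data in a real system; it is a consistent logical view that any downstream tool (spreadsheet, Tableau, Looker) can read through.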

Published Date : Mar 30 2022



IBM, The Next 3 Years of Life Sciences Innovation


 

>>Welcome to this exclusive discussion: IBM, the next three years of life sciences innovation, precision medicine, advanced clinical data management, and beyond. My name is Dave Vellante from theCUBE, and today we're going to take a deep dive into some of the most important trends impacting the life sciences industry in the next 60 minutes. We're going to hear how IBM is utilizing Watson in some really important, life-impacting ways, but we'll also bring in real-world perspectives from industry and the independent analyst view to better understand how technology and data are changing the nature of precision medicine. Now, the pandemic has created a new reality for everyone, but especially for life sciences companies, one where digital transformation is no longer an option but a necessity. The upside is that the events of the past 22 months have presented an accelerated opportunity for innovation: technology and real-world data are coming together and being applied to support life science industry trends and improve drug discovery, clinical development, and treatment commercialization throughout the product life cycle. Now I'd like to introduce our esteemed panel. Let me first introduce Lorraine Marshawn, who is general manager of life sciences at IBM Watson Health. Lorraine leads the organization dedicated to improving clinical development research, showing greater treatment value, and getting treatments to patients faster with differentiated solutions. Welcome, Lorraine. Great to see you. >>Dr. Namita LeMay is the research vice president at IDC, where she leads the life sciences R&D strategy and technology program, which provides research-based advisory and consulting services as well as market analysis. Namita, thanks for joining us today. And our third panelist is Greg Cunningham, who's the director of the RWE Center of Excellence at Eli Lilly and Company. Welcome, Greg, you guys are doing some great work. Thanks for being here. 
>>Thanks, Dave. >>Now, today's panelists are very passionate about their work. If you'd like to ask them a question, please add it to the chat box located near the bottom of your screen, and we'll do our best to answer them all at the end of the panel. Let's get started. Okay, Greg, and then Lorraine and Namita, feel free to chime in after: what is one of the game-changers that you're seeing which is advancing precision medicine, and how do you see this evolving in 2022 and into the next decade? >>I'll give my answer from a life science research perspective. The game-changer I see in advancing precision medicine is moving from doing research using a single gene mutation, a single thing to look at, to doing this research using combinations of genes. The potential this brings is to bring better drug targets forward, but also to get the best product to a patient faster. I can give an example of how I see it playing out. In oncology real-world evidence over the last decade, we've seen an evolution in precision medicine as we've built out the patient record. As we've done that, the marketplace has evolved rapidly, particularly for electronic medical record data and genomic data. We were pretty happy to get our hands on electronic medical record data in the early days. Then later, genetic test results were combined with this data, and we could do research looking at a single mutation leading to better patient outcomes. But where I think we're going to evolve in 2022 and beyond is that, with genetic testing growing in oncology and providing us more data about the patient, more genes to look at, researchers can look at groups of genes to analyze that complex combination of gene mutations. And I think it'll open the door for things like using artificial intelligence to help researchers plow through the complex number of permutations when you think about all those genes you can look at in combination. >>Right. Lorraine, yes, 
Data and machine intelligence coming together, anything you would add? >>Yeah, thank you very much. Well, I think Greg's response really sets us up nicely, particularly when we think about the ability to utilize real-world data in the pharma industry across a number of use cases, from discovery to development to commercial. In particular, with real-world data and the comments Greg just made about clinical EMR data linked with genetic or genomic data, a real area of interest, and one that Watson Health in particular is focused on, is the idea of being able to create a data exchange, so that we can bring together claims data, clinical EMR data, genomics data, and increasingly wearables and data directly from patients, in order to create a digital health record that we like to call an intelligent patient health record, one that basically gives us the digital equivalent of a real-life patient. These can be used in randomized controlled clinical trials for synthetic control arms or natural history. They can be used to track patients' responses to drugs and to look at outcomes after they've been on various therapies, as Greg is speaking to. So I think the promise of data and technology, and the AI that we can apply on top of it, is really helping us advance getting therapies to market faster, with better information and lower sample sizes, and gives us a much more efficient way to do drug development and to track and monitor outcomes in patients. >>Great, thank you for that. Now to Namita: when I joined IDC many, many years ago, I really didn't know much about the industry I was covering, so it's great to see you as a former practitioner now bringing in your views. What do you see as the big game-changers? >>So I would agree with what both Lorraine and Greg said, but one thing I'd just like to call out is that, you know, everyone's talking about big data, and the volume of data is growing. 
It's growing exponentially. Actually, I think about 30% of the data that exists today is healthcare data, and it's growing at a rate of 36%. That's huge. But it's not just about the big, it's also about the broad. I think these are great points that Lorraine and Greg brought out: it's not just genomic data, it's multi-omic data, and it's also about things like medical history, social determinants of health, and behavioral data. Why? Because when you're talking about precision medicine, and we know we moved away from the terminology of "personalized" to "precision" because you want to talk about disease stratification, it's really about convergence. >>If you look at a recent JAMA paper from 2021, only 1% of EHRs actually included genomic data. So you really need the ability to look at data holistically, and an IDC prediction sees investments in AI to fuel in-silico drug discovery doubling by 2024. But how are you actually going to integrate all the different types of data? Just look at diabetes, for example. For type 2 diabetes, 40 to 70% of the risk is genetically inherited, and there are over 500 different genetic loci that could be involved in causing diabetes. So the earlier strategy for genetic risk scoring was really single-trait; now it's transitioning to multi-trait. And when you say multi-trait, you really need that integrated, converged view to be able to drive a precision medicine strategy. So to me, it's a very interesting contrast: on one side, you're trying to make it specific and focused toward an individual, and on the other side, you really have to go wider and bigger as well. >>Great. I mean, the technology is enabling that convergence, and the conditions are almost mandating it. 
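The single-trait to multi-trait shift in genetic risk scoring that Namita describes can be illustrated with a toy polygenic-score calculation: a weighted sum of risk-allele counts across loci, extended to one weight vector per trait. The loci names and weights below are invented purely for illustration and have no clinical meaning:

```python
# risk-allele counts (0, 1, or 2) at a few hypothetical loci
genotype = {"locus_a": 2, "locus_b": 0, "locus_c": 1}

# single-trait scoring: one weight vector, one score (e.g. type 2 diabetes)
t2d_weights = {"locus_a": 0.30, "locus_b": 0.10, "locus_c": 0.25}

def risk_score(genotype, weights):
    # polygenic score = sum over loci of (effect weight * allele count)
    return sum(weights[locus] * genotype[locus] for locus in weights)

print(round(risk_score(genotype, t2d_weights), 2))  # 0.85

# multi-trait scoring: a weight vector per trait, scored from the same
# genotype in one pass; this is the "wider and bigger" integrated view
trait_weights = {
    "type2_diabetes": t2d_weights,
    "hypertension":   {"locus_a": 0.05, "locus_b": 0.40, "locus_c": 0.15},
}
scores = {trait: risk_score(genotype, w) for trait, w in trait_weights.items()}
```

Real polygenic risk scores span hundreds of thousands of loci and require careful ancestry adjustment; the point of the sketch is only the structural difference between one weight vector and many.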
Let's talk some more about data, the data exchange, and building an intelligent health record. As it relates to precision medicine, how will the interoperability of real-world data create that more cohesive picture of the patient? Maybe Greg, you want to start, or anybody else who wants to chime in? >>I think the exciting thing, from my perspective, is the potential to gain access to data you maybe weren't aware of. An exchange implies some kind of cataloging, so I can see things that I just had no idea existed, bring my own data, and maybe link data. These are concepts that I think are starting to take off in our field, and they really open up those avenues. When you were talking about data, robustness and richness, volume isn't the only thing, as Namita said; I think really getting to rich, high-quality data and an exchange offers a far bigger range for all of us to use to get our work done. >>Yeah. And just to chime in on that response from Greg: what we hear increasingly, and it's pretty pervasive across the industry right now because this ability to create an exchange or an intelligent patient health record is a new idea, still rather nascent, is that it is always the operating model that is the difficult challenge here. And certainly that is the case. We do have data in various silos. They're in patient claims, they're in electronic medical records, they might be in labs, images, or genetic files on your smartphone. And so one of the challenges with this interoperability is being able to tap into these various sources of data and to identify quality data, as Greg has said and Namita is underscoring as well. 
We've got to be able to get to the depth of data that's really meaningful to us, but then we have to have technology that allows us to pull this data together. >>First of all, the data has to be de-identified because of security and patient privacy needs. Then we've got to be able to link it, so that you can create that likeness in terms of the record. It has to be what we call cleaned, or curated, so that you get the noise and all the missingness out of it; that's a big step. And then it needs to be enriched, which means that the various components that are going to be meaningful are brought together, so that you can create that cohort of patients, that individual patient record that now is useful in so many instances across pharma, from development all the way through commercial. So the idea of this exchange is to enable exactly the process I just described: to have a place, a platform, where various entities can bring their data in order to have it linked, integrated, cleaned, and enriched, so that they get something that is like a data package they can actually use. >>And it's easy to plug into their studies or into their use cases. I think a really important component of this is that it's got to be a place where various third parties can feel comfortable bringing their data together in order to match it with other third parties. A real value that the industry is increasingly saying would be important to them is the ability to bring in those third-party data sets and be able to link them and create these various data products. So that's really the idea of the data exchange: you can benefit from accessing data, as Greg mentioned, in catalogs that maybe sit across these various silos, so that you can do the kind of work you need, and we take a lot of the hard work out of it. I like to give an example. 
>>We spoke with one of our clients at one of the large pharma companies, and I think he expressed it very well. He said: what I'd like is a complete dataset of lupus. Lupus is an autoimmune condition, and I'd just like to have the quintessential lupus dataset that I can use to run any number of use cases across, whether it's looking at my phase one trial, whether it's selecting patients and enriching for later-stage trials, or whether it's understanding patient responses to different therapies as I design my studies. So this idea of adding in therapeutic-area, indication-specific data sets, and being able to create that for the industry, as Namita mentioned, being able to do that, for example, in diabetes, is how pharma clients need to have their needs met: by taking the hard work out, bringing the data together, and having it very therapeutically enriched so that they can use it very easily. >>Thank you for that detail. And Namita, you can't do this with humans at scale. Of all the things Lorraine was talking about, the enrichment, the provenance, the quality, and of course it's got to be governed and you've got to protect privacy, humans just can't do all that at massive scale, can they? That's where technology and automation come in, isn't it? >>Absolutely. >>I couldn't agree more. Whether you talk about precision medicine or you talk about decentralized trials, I think there's been a lot of hype around these terms, but what is really important to remember is that technology is the game-changer, and bringing all that data together is really going to be the key enabler. So multimodal data integration, looking at things like security or federated learning, and also, when you're leveraging AI, things like bias, are critical components that need to be addressed. 
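The sequence Lorraine lays out above for assembling an exchange-ready cohort, de-identify, link, then clean and curate, can be sketched as a tiny pipeline. Every step here is a deliberately simplified stand-in (real de-identification, for instance, is far more than hashing one identifier), and all field names and values are invented:

```python
import hashlib

def deidentify(records, id_field="patient_id", salt="s3cret"):
    # replace the direct identifier with a salted hash (pseudonymization);
    # the same salt makes hashes linkable across de-identified sources
    out = []
    for r in records:
        r = dict(r)
        digest = hashlib.sha256((salt + str(r[id_field])).encode()).hexdigest()
        r[id_field] = digest[:12]
        out.append(r)
    return out

def link(a, b, key="patient_id"):
    # join two de-identified sources on the pseudonymous key,
    # enriching each record in `a` with the matching fields from `b`
    idx = {r[key]: r for r in b}
    return [{**r, **idx[r[key]]} for r in a if r[key] in idx]

def clean(records, required):
    # "curate": drop records with missing required fields (the "missingness")
    return [r for r in records if all(r.get(f) is not None for f in required)]

claims = [{"patient_id": 1, "dx_code": "E11.9"},
          {"patient_id": 2, "dx_code": None}]          # noisy record
genomics = [{"patient_id": 1, "variant": "rs7903146"},
            {"patient_id": 2, "variant": "rs1801282"}]

cohort = clean(link(deidentify(claims), deidentify(genomics)),
               required=["dx_code", "variant"])
print(len(cohort))  # 1: the linked, curated record with complete data
```

The ordering matters: de-identification happens before linkage, so the downstream "data package" never carries the direct identifier, which mirrors the operating-model point made in the discussion.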
I think the industry is partly still trying to figure out the right use cases. So one part is getting the data together, but another is getting the right data together. I think data interoperability is going to be the absolute game-changer for enabling this. But yes, I really couldn't agree more with what Lorraine just said: it's about bringing all those different aspects of data together to really drive that precision medicine strategy. >>Excellent. Hey Greg, let's talk about protocols and decentralized clinical trials. They're not new to life sciences, but the adoption of DCTs has of course sped up due to the pandemic. We've had to make trade-offs, obviously, and the risk is clearly worth it, but are they going to continue to be a primary approach as we enter 2022? What are the opportunities that you see to improve how DCTs are designed and executed? >>I see a couple of opportunities to improve in this area. The first is back to technology. The infrastructure around clinical trials has evolved over the years, but now you're talking about moving away from a site focus to a patient focus. With that, you have to build out a new set of tools to help. One example would be novel trial recruitment and screening: how do you find patients, and how do you screen them to see if they are really a fit for this protocol? Another example is a very important document we have to get, the e-consent, where someone says, yes, I understand this study and I'm willing to do it; we have to obtain that in a more remote way than we've done in the past. >>The exciting area, I think, is the use of eCOA and ePRO, where we capture data from the patient using apps, devices, and sensors. And I think all of these capabilities will bring a new way of getting data faster in this kind of model. 
But the exciting thing from our perspective at Lilly is that it's going to bring more data about the patient from the patient, not just from the healthcare provider side; it's going to bring real data from these apps, devices, and sensors. The second thing is using real-world data to identify patients and also to improve protocols. We run scenarios today looking at the impact if you change a cut point on a lab value or a biomarker, to see how that would affect potential enrollment of patients. So the real-world data can definitely be used to make decisions about how you improve these protocols. >>But the challenge where this probably offers the biggest benefit is using real-world data to identify patients as we move away from the large academic centers that we've used for years as our sites. You can maybe get more patients from the rural areas of our countries, who are not near these large academic centers, and we think it'll bring a little more diversity to the population that's eligible. But also, we have their data, so we can see if they really fit the criteria, and the probability that they are a fit for the trial is much higher. >>Right. Lorraine, your clients must be really pushing you to help them improve DCTs. What are you seeing in the field? >>Yes. In fact, we just attended the inaugural meeting of the Decentralized Trials Research Alliance in Boston about two weeks ago, where all of the industry came together, pharma companies, consulting vendors, everyone who's been in this industry working to help define decentralized trials, to think through what their potential is and through various models in order to enable them, because again, this is a nascent concept that I think COVID has spurred into action. But it is important to take a look at the definition of DCT. 
I think there are entities that describe it as accessing data directly from the patient. That is a component of it, but I think it's much broader than that. To me, it's about really looking at the workflows and processes of bringing data in from various remote locations and enabling the whole ecosystem to work much more effectively along the data continuum. >>So a DCT is all about making a site more effective, whether it's being able to administer a televisit or the way they're getting data into the electronic data capture system. So I think we have to look at the workflows and the operating models for enabling decentralized trials, and that is a lot of what we're doing with our own technology. Greg mentioned the idea of electronic consent, of being able to do electronic patient-reported outcomes, and of other collection of data directly from the patient: wearables, telehealth. These are all data acquisition methodologies and technologies that we are enabling in order to get the best of the data into the electronic data capture system, so it can be put together, processed, and submitted to the FDA for regulatory use in a clinical trial submission. So we're working on that. I think the other thing that's happening is the ability to be much more flexible: having more cloud-based storage allows you to be much more interoperable and to allow APIs in order to bring in the various types of data. So we're really looking at technology that can make us much more fluid, flexible, and accommodating to all the ways that people live, work, and manage their health, because we have to reflect that in the way we collect those data types. That's a lot of what we're focused on. And in talking with our clients, we also spend a lot of time trying to understand where they are along the, let's say, decentralized clinical trials continuum. 
And I know Namita is going to talk a little bit about research they've done in terms of that adoption curve. Because COVID forced us into collecting data in a more remote fashion in order to allow some of these clinical trials to continue when a lot of them had to stop, what we want to make sure is that we understand and can codify some of those best practices, and that we can help our clients enable them, because the worst thing that could happen would be to have made some of that progress in that direction, but then, when COVID is over, to go back to the old ways of doing things and not bring some of those best practices forward. We actually hear from some of our clients in the pharma industry that they worry about that as well, because we don't yet have a system for operationalizing a decentralized trial. And so we really have to think about the protocol, its design, the indication, the types of patients, what makes sense to decentralize, and what makes sense to still collect in a more traditional fashion. So we spend a lot of time advising and consulting with our clients, as well as with CROs, on what the best model is for their portfolio of studies. And I think a really important aspect of trying to accelerate adoption is making sure that what we're doing is fit for purpose. Just because you can use technology doesn't mean you should; it really still requires human beings to think about the problems and solve them in a very practical way. >>Great, thank you for that, Lorraine. I want to pick up on some things Lorraine was just saying, and then get back to what Greg was saying about DCTs becoming more patient-centric. Namita, you had a prediction at IDC, and I presume your fingerprints were on it, that by 2025, 75% of trials will be patient-centric decentralized clinical trials and 90% will be hybrid. 
So maybe you could help us understand that relationship and what types of innovations are going to be needed to support that evolution of DCTs. >>Thanks, Dave. You know, I certainly believe that Lorraine was bringing up a very important point: it's about being able to continue what you have learned over the past two years. I feel it was not really a digital revolution; it was an attitude revolution that this industry underwent. The technology existed, just as clinical trials existed and drugs existed, but there was now a proof of concept that the technology works, that this model is working. So what telehealth, for example, did for healthcare, the transition to care anywhere, care anytime, and even becoming predictive, is what the decentralized clinical trials model is doing for clinical trials today. Great point, again, that you have to really look at where it's being applied; you just can't randomly apply it across clinical trials. >>And this is where the industry is maturing into the complexity. Some people think decentralized trials are very simple, that you just go and implement them, but it's not that simple. It's about being able to define which are the right technologies for that specific therapeutic area, for that specific phase of the study. Another very important point is bringing the patient's voice into the process. Hey, I had my first telehealth visit sometime last year, and I was absolutely thrilled about it. I said, no time wasted, everything's done in half an hour. But not all patients want that; some want to consider going back. So you, again, need to customize your decentralized trials model to the type of patient population and the demographics you're dealing with. So there are multiple factors. 
Also, stepping back, Lorraine mentioned they're consulting with their clients, advising them. >>And I think a lot of companies are still evolving in their maturity on DCTs. There's a lot of buzz about it, but not everyone is very mature in it. So I think one thing everyone agrees on is: yes, we want to do it, but it's really about how we go about it. How do we make this a flexible and scalable modern model? How do we integrate the patient's voice into the process? What are the key performance indicators that we define? Do we have a playbook to implement this model, to make it scalable? And finally, I think what organizations really need to look at is developing a decentralized-trials maturity scoring model, so that I can assess where I am today and use that playbook to define how I am going to move down the line to reach the next level of maturity. Those were some of my thoughts. >>Excellent. And remember, if you have any questions, use the chat box below to submit them. We have some questions coming in from the audience. >>One point to add to that: I think one common thread between the earlier discussion around precision medicine and the one around decentralized trials really is data interoperability. It is going to be a big game-changer in enabling both of these pieces. Thanks, Dave. >>Yeah, thank you. So again, put your questions in the chat box. I'm actually going to go to one of the questions from the audience. I have some other questions as well, but think about all the new data types that are coming in from social media, omics, wearables. The question is: with greater access to these new types of data, what trends are you seeing from pharma and device companies as far as developing capabilities to effectively manage and analyze these novel data types? 
Is there anything that you guys are seeing that you can share in terms of best practice or advice? >>I'll offer up one thing: I think the interoperability isn't quite there today. So what does that mean? You can take some of those data sources you mentioned, some omics data with some health claims data, and we spend too much time in our space putting that data together behind the scenes. I think the stat is 80% of the time is assembling the data, 20% analyzing it. And we've had conversations here at Lilly about how do we get to 80% of the time being analysis. It really requires us to take a step back and think about when you create a health record, you really have to have the same plugins so that data can be put together very easily, like Lorraine mentioned earlier. And that comes back to investing, as an industry, in standards, so that you have data standards we can all agree upon. Then those plugs get a lot easier, and we can spend our time figuring out how to make people's lives better with healthcare analysis versus putting data together, which is not a lot of fun behind the scenes. >>Other thoughts on how to take advantage of novel data coming from things like devices that you guys are seeing?
And there's a lot of value for patients here, because you can really monitor the patient in real time without the patient having to come and do a site visit once in, say, four weeks or six weeks. To take suicidal behavior as an example: if you can predict well in advance, based on those behavioral parameters, that an episode is likely to be triggered, the value of that is enormous. Again, I think Greg made a valid point about the industry still trying to resolve the data interoperability issue. There are so many players coming into the industry right now, and really few that have the maturity and the capability to address these challenges and provide intelligent solutions. >>Yeah, maybe I'll just go ahead and chime in on Namita's last comment there. I think that's what we're seeing as well. And it's very common, from an innovation standpoint, that you have a nascent industry, or a nascent innovation situation like we have right now, where it's very fragmented. You have a lot of small players, and you have some larger entrenched players that have the capability to help solve the interoperability challenge, the standards challenge. I think IBM Watson Health is certainly one of the entities that has that ability and is taking a stand in the industry in order to help lead in that way. Others are too. But with all of the small companies trying to find interesting and creative ways to gather that data, it does create a very fragmented type of environment and ecosystem that we're in. >>And I think as we mature, as we come forward with the KPIs and the operating models, because the devil's in the detail in terms of the operating models, it's really exciting to talk about these trends and think about the future state.
But as Greg pointed out, if you're spending 80% of your time just under the hood, trying to get the engine and all the spark plugs to line up, that's just hard grunt work that has to be done. So I think that's where we need to be focused. Bringing all the data in from these disparate tools is fine; we need a platform, or the APIs, that can enable that. But as we progress, we'll see more consolidation and more standards coming into play, solving the interoperability types of challenges. And so I think that's where we should focus: on what it's going to take in three years to really codify this and make it a well-humming machine. And I do know, having also been in pharma, that there's a very pilot-oriented approach to this, which I think is really healthy. Large pharma companies tend to place a lot of bets with different programs on different tools and technologies, to some extent to see what's going to stick, with an innovation mindset. And I think that's good; it's part of the process of figuring out what is going to work, and it will help us when we get to that point of consolidating our models and technologies going forward. So I think all of the efforts today are definitely driving us toward something that feels much more codified in the next three to five years. >>Excellent. We have another question from the audience; it's sort of related to the theme of this discussion. Given the FDA's recent guidance on using claims and electronic health records data to support regulatory decision-making, what advancements do you think we can expect with regards to regulatory use of real-world data in the coming years? It's kind of a two-parter, so maybe you guys can collaborate on this one. And then, what role do you think industry plays in influencing innovation within the regulatory space?
>>All right. Well, it looks like you've stumped the panel there, Dave. >>It's okay to take some time to think about it, right? Do you want me to repeat it, you guys? >>I'm sure the group is going to chime in on this. So the FDA has issued a guidance. It's exactly that: the FDA issues guidances and says that it's aware and supportive of the fact that we need to be using real-world data, and that we need to create the interoperability, the standards, and the ways to make sure that we can include it in regulatory submissions and the like. And I sort of think about it akin to the critical path initiative, probably 10 or 12 years ago in pharma, when the FDA also embraced this idea of the critical path and being able to allow more in silico modeling in clinical trial design and development. And it really took the industry a good 10 years before they were able to actually adopt and apply that sort of guidance, or openness, from the FDA in a way that started to influence how clinical trials were designed, or the in silico modeling. So I think the second part of the question is really important, because while the FDA is saying, yes, we recognize it's important, we want to encourage and support it, when you look, for example, at synthetic control arms, at the use of real-world data in regulatory submissions over the last five or six years, all of the use cases have been in oncology. I think there have been maybe somewhere between eight to 10 submissions, and I think only one was actually successful. In all the other situations, the real-world data arm of that oncology trial, that synthetic control arm, was actually rejected by the FDA because of a lack of completeness or equivalence in the data. So the FDA is not going to tell us how to do this.
So I think the second part of the question, which is what's the role of industry: it's absolutely on industry to figure out exactly what we're talking about. How do we figure out the interoperability? How do we apply the standards? >>How do we ensure good quality data? How do we enrich it and create a cohort that is equivalent to the patient in the real world who would otherwise be in the clinical trial? And how do we create something that the FDA can agree with? We'll certainly want to work with the FDA to figure out this model, and I think companies are already doing that, but the onus is going to be on industry to figure out how you actually operationalize this and make it real. >>Excellent. Thank you. A question: what's the most common misconception that clinical research stakeholders, whether sites or participants, et cetera, might have about DCTs? >>I could jump in there. So in terms of misconceptions, I think the commonest is that sites are going away forever, which I do not think is really happening today. The second part of it is the perspective that patients are potentially neglected because they're moving away; perhaps "neglected" is not the appropriate term, but the question is whether patient engagement will continue and whether retention will be strong, since the patients are not interacting in person with the investigator quite as much. So site retention and patient retention, or engagement, from both perspectives, I think remain a concern. But actually, if you look at assessments that have been done, I think patients are more than happy. The majority of patients have been really happy about the new model.
And in fact, sites seem to have increased investments in technology by 50% to support this kind of model. And the last misconception is that decentralized trials are a great model that can be applied to every possible clinical trial, and that in another couple of weeks the whole industry will be implementing only decentralized trials. I think we are far away from that. It's just not something that you would implement across every trial, and we discussed that already; you have to find the right use cases for it. So I think those were some of the key misconceptions in the industry right now. >>Yeah. And I would add that the misconception I hear the most is, similar to what Namita said, about the sites and healthcare professionals not being involved to the level that they are today. When I mentioned earlier in our conversation being excited about capturing more data from the patient, that was always in the context of: in addition to the healthcare professional's opinion, because I think both of them bring enrichment and a broader perspective on that patient experience, whatever disease they're faced with. So some people think it's just an all-internet trial with just someone putting their own perspective out there, and it's really a combination of both to deliver a robust data set. >>Yeah. Maybe I'll just comment: it reminds me of probably 10 or 15 years ago, maybe even more, when remote monitoring was first enabled. You didn't have to have the study coordinator travel to the investigative site to check the temperature of the freezer and make sure that patient records were being completed appropriately, because they could have a remote visit; they could send the data in electronically and do the monitoring visit in real time, just the way we're having this kind of communication here.
And there was just so much fear that you were going to replace or supplant the personal relationship between the sites and the study coordinators, that you were going to supplant the role of the monitor, which was always a very important role in clinical trials. >>And I think people that really did embrace the technology and the advantages it provided quickly saw that what it allowed was for the monitor to do higher-value work. Instead of going in and checking the temperature on a freezer, when they did have their visit they were able to sit and have a quality discussion, for example, about how patient recruitment was going or what was coming up in terms of the consent. And so it created a much more high-touch, high-quality type of interaction between the monitor and the investigative site. I think we should be looking for the same advantages from DCT. We shouldn't fear it. We shouldn't think that it's going to supplant the site or the investigator or the relationship. It's our job to figure out where the technology fits; clinical science has always got to be high touch combined with high tech, but the high touch has to lead, so it's about getting that balance right. And that's going to happen here as well: we will figure out other high-value, meaningful work for the site staff to do while they let the technology take care of the lower-value work, if you will. >>That's not an "or," it's an "and." And you're talking about the higher-value work, and it leads me to something that Greg said earlier about the 80/20: 80% is assembly, 20% is actually doing the analysis. That's not unique to life sciences, but it raises an organizational question in terms of how we think about data and how we approach data in the future.
So historically, big data in life sciences, in any industry really, has required highly centralized and specialized teams to do the things that Lorraine was talking about: the enrichment, the provenance, the data quality, the governance. Hyper-specialized teams do that, and they serve different constituencies, not necessarily with the business context; they're just kind of data people, but they have responsibility for doing all those things. Greg, for instance within Lilly, are you seeing a move to democratize data access? We've talked about data interoperability and data sharing; does that kind of break that centralized hold, or is that just too far in the future, too risky in this industry? >>It's actually happening now; it's a great point. We try to classify what people can do. An example would be, you give someone who's less analytically qualified a dashboard: let them interact with the data, let them better understand what we're seeing out in the real world. Then there's a middle user, someone you can give a tool to do some analysis with, and the nice thing there is you have some guardrails around it and you keep them in their lane, but it allows them to do some of their work without having to go ask those centralized experts you mentioned, those precious resources. And the third group is those highly analytical folks who can really deliver value beyond that, but when they're doing all those other things, it really hinders them from doing what we've been talking about, the high-value stuff. So we've kind of split into those three: we look at people using data in one of those three lanes, and it has helped us, I think, not try to make a one-size-fits-all solution for how we deliver data and analytic tools for people. Right. >>Okay.
I mean, DCT is a hot topic with the audience here. Another question: what capabilities do sponsors and CROs need to develop in-house to pivot toward DCT? >>Should I jump in here? Yeah. When we speak about DCTs, and when I speak with folks around the industry, it takes me back to the days of risk-based monitoring. When it was first being implemented, it was a huge organizational change from the conventional monitoring models to centralized monitoring and risk-based monitoring. It needs a mental reset. It needs, as Lorraine pointed out a little while ago, restructuring workflows and redefining processes. And I think that is one big piece, the first piece: when you're implementing a new model, organizational change management is a big part of it, because you are disturbing existing structures and existing methods. So you need to get buy-in across the organization toward the new model: seeing what the value-add is, and where you personally fit into that story. >>How do your workflows change, and how is your role impacted? I think without that, this industry will struggle. So I see organizations first trying to work on that piece, to build that in. And then, of course, I also want to step back for a second to the point that you brought up about data democratization. I think Greg gave excellent input about how it's happening in the industry, but I would also say that data democratization, really the empowerment of the stakeholders, also includes the sites and the investigators. What is the level of access to data that they have now? As well as patients: you see increasingly more companies trying to provide access to patients; finally, it's their data, so why shouldn't they have some insight into it, right? So access to patients, and, you know, the 80/20 part of it.
Yes, he's absolutely right that we want to see that flip from 20% of the time focused on actually integrating the data to 80% on analytics, but the real future will come when the 80/20 is gone altogether and you actually have the insights handed out on a silver platter. That's kind of wishful thinking, and some of the industry is getting there in small pieces, but that's the direction. >>Great points. >>And I really appreciate the point around democratizing the data and giving the patient access, ownership, and control over their own data. We see the health portals that are now available for patients to view their own records, images, labs, claims, and EMR. We have blockchain technology, which is really critical here in terms of the patient being able to pull all of their own data together in the blockchain, an immutable record that they can own and control. If they want to use that to transact clinical trial types of opportunities based on their data, they can, or other real-world scenarios. But if they just want to manage their own data because they're traveling, or if they're in a risky health situation, they've got their own record of their health history, which can help avoid medical errors. So even going beyond life sciences, I think this idea of democratizing data is just good for health. It's just good for people, and we definitely have the technology that can make it a reality now. >>We have just about 10 minutes left, and of course now all the questions are rolling in like crazy from the crowd. The audience would be curious to know if there are any comments from the panel on cost comparison analysis between traditional clinical trials and DCTs, and how the outcome could affect the implementation of DCTs. Any sort of high-level framework you can share?
>>I would say these are still early days to drive that analysis, because I think many companies are still in the early stages of implementation; they've done a couple of trials. The other part that's important to keep in mind is that organizations are still on the learning curve. So when you're calculating the cost efficiencies, where ideally you should have had two stakeholders involved, you could potentially have 20 stakeholders involved, because everyone's trying to learn the process and see how it's going to be implemented. And the third part, I think, is that organizations are still defining their KPIs: how do you measure it, and what do you measure? They're even still plugging in the pieces of technology they need to fit in: who are they partnering with? >>What are the pieces of technology they're implementing? So I don't think there is a clear-cut answer at this stage. I think as you scale this model, the efficiencies will be seen. It's like any new technology or any new solution: in the first stages of implementation it's always a little more complex, and in fact sometimes costs extra. But as you start scaling it, as you establish your workflows, as you streamline it, the cost efficiencies will start becoming evident. That's why the industry is moving there, and I think that's how it will turn out in the long run. >>Maybe I'll just make a comment, if you don't mind. Clinical trials have traditionally been costed, or budgeted, on a per-patient basis. So based on the difficulty of the therapeutic area to recruit, a rare oncology or neuromuscular disease, there's an average of what it costs to find that patient and then execute the various procedures throughout the clinical trial on that patient.
And so the difficulty of reaching the patient, and then the complexity of the trial, has led to what we might call a per-patient stipend, which is just the metric we use to figure out what the average cost of a trial will be. So I think, to the point, we're going to have to see where the ability to adjust workflows, get to patients faster, and collect data more easily makes the burden on the site less onerous. Once we start to see that work ease up because of technology, then I think we'll start to see those cost equations change. But right now the system isn't designed to really measure the economic benefit of decentralized models, and I think we're going to have to figure out what that looks like as we go along. Since it's patient-oriented right now, we'll have to ask: how does that work ease up, and do those costs actually come down? >>And at scale it's going to become more clear, as Namita was saying. Next question from the audience; it's kind of a best-fit question. You all have touched on this, but let me just ask it: what examples, and which phases, suit DCT in its current form, be it fully DCT or hybrid models? One of our horses-for-courses questions. >>Well, I think it has its efficiencies, obviously, in the later phases rather than the absolute early-phase trials; those are not the ideal models for DCTs, I would say. And again, the logic is that when you're going into the later-phase trials, the number of patients is increasing considerably, to the point that Lorraine brought up about access to patients and patient selection. I think what one should really look at is the advantages it brings in terms of patient access and patient diversity, which is a big piece that DCTs are enabling.
So if you look at the spectrum of these advantages, and just to step back for a moment, if you're looking at costs, things like remote site monitoring are a big, big plus, right? >>I mean, site monitoring alone accounts for around a third of trial costs. So there are many pieces that fall together. The challenge actually comes in defining DCTs, and there are, as Rick pointed out, multiple definitions of DCTs existing in the industry right now, whether you're talking about what DTRA is doing, or about ACRO or CTTI or others. But the point is that it's a continuum, a continuum of different pieces that have been woven together. And so how do you decide which pieces you're plugging in, and how does that impact the total cost of the solution that you're implementing? >>Great, thank you. Last question we have from the audience: what changes have you seen, or are there others that you can share, from the FDA, EU, and APAC regulators in supporting DCTs and precision medicine for approval processes? Anything you guys would highlight that we should be aware of? >>I could quickly add that I'm just publishing a report on decentralized clinical trials, which should be out shortly, with a perspective on that. But I would say that right now, there was a plan in the FDA agenda for a decentralized clinical trials guidance; as far as I'm aware, one has not yet been published. There have been significant guidances published both by the EMA and by the FDA around the implementation of clinical trials during the COVID pandemic, which incorporate various technology pieces that support the DCT model. And again, I think one of the reasons why it's not easy to publish a well-defined guidance on this is because there are so many moving pieces in it.
I think it's the Danish regulatory agency that has actually published a guidance on decentralized clinical trials, and revised it as well. >>Right. Okay. We're pretty much out of time, but I wonder, Lorraine, if you could give us some final thoughts and bring us home: things we should be watching, or how you see the future. >>Well, first of all, let me thank the panel. We really appreciate Greg from Lilly and Namita from IDC bringing their perspectives to this conversation, and I hope that the audience has enjoyed the discussion we've had around the future state of real-world data as well as DCT. Some of the themes we've talked about: number one, I think we have a vision, and I think we have the right strategies in terms of the future promise of real-world data in any number of different applications. We've certainly talked about the promise of DCT to be more efficient and to get us closer to the patient. I think what we have to focus on is how we come together as an industry to really work through these very vexing operational issues, because those are always the things that hang us up, whether it's clinical research or later-stage applications of data. >>The healthcare system is still very fragmented, particularly in the US. It's still very state-based; different states can have different cultures and geographic delineations. And so I think that figuring out a way to harmonize and bring all of the data together, and bring some of the models together, is what you need to look to us to do, both industry and consulting organizations such as IBM Watson Health. And through DTRA and other consortia and different bodies, I think we're all identifying what the challenges are in terms of making this a reality and working systematically on those.
It's always a pleasure to work with such great panelists. Thank you, Lorraine Marshawn, Dr. Namita LeMay, and Greg Cunningham; we really appreciate your participation today and your insights. The Next Three Years of Life Sciences: Innovation, Precision Medicine, Advanced Clinical Data Management and Beyond has been brought to you by IBM and theCUBE, your global leader in high-tech coverage. And while this discussion has concluded, the conversation continues, so please take a moment to answer a few questions about today's panel. On behalf of the entire IBM life sciences team and theCUBE, thanks for your time and your feedback, and we'll see you next time.

Published Date : Dec 7 2021

SUMMARY :

and the independent analyst view to better understand how technology and data are changing The loan to meta thanks for joining us today. And how do you see this evolving the potential that this brings is to bring better drug targets forward, And so I think that, you know, the promise of data the industry that I was covering, but it's great to see you as a former practitioner now bringing in your Um, but one thing that I'd just like to call out is that, you know, And on the other side, you really have to go wider and bigger as well. for the patient maybe Greg, you want to start, or anybody else wants to chime in? from my perspective is the potential to gain access to uh, patient health record, these are new ideas, you know, they're still rather nascent and of the record, it has to be what we call cleaned or curated so that you get is, is the ability to bring in those third-party data sets and be able to link them and create And so, you know, this idea of adding in therapeutic I mean, you can't do this with humans at scale in technology I, couldn't more, I think the biggest, you know, whether What are the opportunities that you see to improve? uh, very important documents that we have to get is, uh, you know, the e-consent that someone's the patient from the patient, not just from the healthcare provider side, it's going to bring real to the population, uh, who who's, uh, eligible, you to help them improve DCTs what are you seeing in the field? Um, but it is important to take and submitted to the FDA for regulatory use for clinical trial type And I know Namita is going to talk a little bit about research that they've done the adoption is making sure that what we're doing is fit for purpose, just because you can use And then back to what Greg was saying about, uh, uh, DCTs becoming more patient centric, It's about being able to continue what you have learned in over the past two years, Um, you know, some people think decentralized trials are very simple. 
And I think a lot of, um, a lot of companies are still evolving in their maturity in We have some questions coming in from the audience. It is going to be a big game changer to, to enable both of these pieces. to these new types of data, what trends are you seeing from pharma device have the same plugins so that, you know, data can be put together very easily, coming from things like devices in the nose that you guys are seeing. and just to take an example, if you can predict well in advance, based on those behavioral And it's very common, you know, the operating models, um, because you know, the devil's in the detail in terms of the operating models, to some extent to see what's gonna stick and, you know, kind of with an innovation mindset. records, data to support regulatory decision-making what advancements do you think we can expect Uh, Dave, And it really took the industry a good 10 years, um, you know, before they I think there've been about maybe somewhere between eight to 10 submissions. onus is going to be on industry in order to figure out how you actually operationalize that clinical research stakeholders with sites or participants, Um, but actually if you look at, uh, look at, uh, It's just not something that you would implement across you know, healthcare professional opinion, because I think both of them bring that enrichment and do the monitoring visit, you know, in real time, just the way we're having this kind of communication to do higher value work, you know, instead of going in and checking the the data quality, the governance, the PR highly hyper specialized teams to do that. And the nice thing with that is you have some guardrails around that and you keep them in in-house to pivot toward DCT? That is, I think the first piece, when, you know, when you're implementing a new model, to patients and, uh, you know, the 80, 20 part of it. 
I mean, you know, we see the health portals that We have just about 10 minutes left and now of course, now all the questions are rolling in like crazy from learn the process and see how it's going to be implemented. I think as you scale this model, the efficiencies will be seen. And so, you know, based on the difficulty of the therapeutic Just scale, it's going to be more, more clear as the media was saying, next question from the audiences, the logic is also the fact that, you know, when you're, you're going into the later phase trials, uh, you know, in the industry right now, whether you're talking of what Detroit is doing, Are there others that you can share from the FDA EU APAC, regulators and supporting you know, around the implementation of clinical trials during the COVID pandemic, which incorporate various if you could give us some, some final thoughts and bring us home things that we should be watching or how you see And I think, you know, some of the themes that we've talked about, number one, And so I think that, you know, figuring out a way that we can sort of harmonize and and beyond has been brought to you by IBM in the cube.


Next Gen Analytics & Data Services for the Cloud that Comes to You | An HPE GreenLake Announcement


 

(upbeat music) >> Welcome back to theCUBE's coverage of HPE GreenLake announcements. We're seeing the transition of Hewlett Packard Enterprise as a company, yes they're going all in for as a service, but we're also seeing a transition from a hardware company to what I look at increasingly as a data management company. We're going to talk today to Vishal Lall who's GreenLake cloud services solutions at HPE and Matt Maccaux who's a global field CTO, Ezmeral Software at HPE. Gents welcome back to theCube. Good to see you again. >> Thank you for having us here. >> Thanks Dave. >> So Vishal let's start with you. What are the big mega trends that you're seeing in data? When you talk to customers, when you talk to partners, what are they telling you? What's your optic say? >> Yeah, I mean, I would say the first thing is data is getting even more important. It's not that data hasn't been important for enterprises, but as you look at the last, I would say 24 to 36 months has become really important, right? And it's become important because customers look at data and they're trying to stitch data together across different sources, whether it's marketing data, it's supply chain data, it's financial data. And they're looking at that as a source of competitive advantage. So, customers were able to make sense out of the data, enterprises that are able to make sense out of that data, really do have a competitive advantage, right? And they actually get better business outcomes. So that's really important, right? If you start looking at, where we are from an analytics perspective, I would argue we are in maybe the third generation of data analytics. Kind of the first one was in the 80's and 90's with data warehousing kind of EDW. A lot of companies still have that, but think of Teradata, right? The second generation more in the 2000's was around data lakes, right? 
And that was all about Hadoop and others, and really the difference between the first and the second generation was the first generation was more around structured data, right? Second became more about unstructured data, but you really couldn't run transactions on that data. And I would say, now we are entering this third generation, which is about data lake houses, right? Customers what they want really is, or enterprises, what they want really is they want structured data. They want unstructured data altogether. They want to run transactions on them, right? They want to use the data to mine it for machine learning purposes, right? Use it for SQL as well as non-SQL, right? And that's kind of where we are today. So, that's really what we are hearing from our customers in terms of at least the top trends. And that's how we are thinking about our strategy in context of those trends. >> So lake house use that term. It's an increasing popular term. It connotes, "Okay, I've got the best of data warehouse "and I've got the best of data lake. "I'm going to try to simplify the data warehouse. "And I'm going to try to clean up the data swamp "if you will." Matt, so, talk a little bit more about what you guys are doing specifically and what that means for your customers. >> Well, what we think is important is that there has to be a hybrid solution, that organizations are going to build their analytics. They're going to deploy algorithms, where the data either is being produced or where it's going to be stored. And that could be anywhere. That could be in the trunk of a vehicle. It could be in a public cloud or in many cases, it's on-premises in the data center. And where organizations struggle is they feel like they have to make a choice and a trade-off going from one to the other. 
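The lakehouse property Vishal describes, running transactions over files in a data lake, comes from layering an append-only commit log over the storage. Here is a toy Python sketch of that pattern (hypothetical names, not the real Delta Lake protocol or API):

```python
import json
import os

class ToyTableLog:
    """Minimal append-only commit log over a directory of data files,
    mimicking the idea behind lakehouse table formats (not a real API)."""

    def __init__(self, path):
        self.path = path
        os.makedirs(path, exist_ok=True)

    def _versions(self):
        # Committed versions are numbered JSON files: 0.json, 1.json, ...
        return sorted(int(f.split(".")[0]) for f in os.listdir(self.path)
                      if f.endswith(".json"))

    def commit(self, rows):
        """Each commit writes a new numbered file; the atomic rename is
        the 'transaction', so readers never see a half-written version."""
        vs = self._versions()
        version = (vs[-1] + 1) if vs else 0
        target = os.path.join(self.path, f"{version}.json")
        tmp = target + ".tmp"
        with open(tmp, "w") as f:
            json.dump(rows, f)
        os.replace(tmp, target)  # atomic on POSIX and Windows
        return version

    def snapshot(self):
        """A read reconstructs the table from all committed versions."""
        rows = []
        for v in self._versions():
            with open(os.path.join(self.path, f"{v}.json")) as f:
                rows.extend(json.load(f))
        return rows
```

Readers always see a whole number of commits, never a partial write, which is the kind of guarantee that lets one copy of lake data serve SQL, transactions, and ML workloads at once.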
And so what HPE is offering is a way to unify the experiences of these different applications, workloads, and algorithms, while connecting them together through a fabric so that the experience is tied together with consistent, security policies, not having to refactor your applications and deploying tools like Delta lake to ensure that the organization that needs to build a data product in one cloud or deploy another data product in the trunk of an automobile can do so. >> So, Vishal I wonder if we could talk about some of the patterns that you're seeing with customers as you go to deploy solutions. Are there other industry patterns? Are there any sort of things you can share that you're discerning? >> Yeah, no, absolutely. As we kind of hear back from our customers across industries, I think the problem sets are very similar, right? Whether you look at healthcare customers. You look at telco customers, you look at consumer goods, financial services, they're all quite similar. I mean, what are they looking for? They're looking for making sense, making business value from the data, breaking down the silos that I think Matt spoke about just now, right? How do I stitch intelligence across my data silos to get more business intelligence out of it. They're looking for openness. I think the problem that's happened is over time, people have realized that they are locked in with certain vendors or certain technologies. So, they're looking for openness and choice. So that's an important one that we've at least heard back from our customers. The other one is just being able to run machine learning on algorithms on the data. I think that's another important one for them as well. And I think the last one I would say is, TCO is important as customers over the last few years have realized going to public cloud is starting to become quite expensive, to run really large workloads on public cloud, especially as they want to egress data. 
So, cost performance, trade offs are starting to become really important and starting to enter into the conversation now. So, I would say those are some of the key things and themes that we are hearing from customers cutting across industries. >> And you talked to Matt about basically being able to essentially leave the data where it belongs, bring the compute to data. We talk about that all the time. And so that has to include on-prem, it's got to include the cloud. And I'm kind of curious on the edge, where you see that 'cause that's... Is that an eventual piece? Is that something that's actually moving in parallel? There's lot of fuzziness as an observer in the edge. >> I think the edge is driving the most interesting use cases. The challenge up until recently has been, well, I think it's always been connectivity, right? Whether we have poor connection, little connection or no connection, being able to asynchronously deploy machine learning jobs into some sort of remote location. Whether it's a very tiny edge or it's a very large edge, like a factory floor, the challenge as Vishal mentioned is that if we're going to deploy machine learning, we need some sort of consistency of runtime to be able to execute those machine learning models. Yes, we need consistent access to data, but consistent access in terms of runtime is so important. And I think Hadoop got us started down this path, the ability to very efficiently and cost-effectively run large data jobs against large data sets. And it attempted to work into the source ecosystem, but because of the monolithic deployment, the tightly coupling of the compute and the data, it never achieved that cloud native vision. 
And so what Ezmeral at HPE through GreenLake services is delivering with open source-based Kubernetes, open source Apache Spark, open source Delta lake libraries, those same cloud native services that you can develop on your workstation, deploy in your data center in the same way you deploy through automation out at the edge. And I think that is what's so critical about what we're going to see over the next couple of years. The edge is driving these use cases, but it's consistency to build and deploy those machine learning models and connect it consistently with data that's what's going to drive organizations to success. >> So you're saying you're able to decouple the compute from the storage. >> Absolutely. You wouldn't have a cloud if you didn't decouple compute from storage. And I think this was sort of the demise of Hadoop: it was forcing that coupling. We have high-speed networks now. Whether I'm in a cloud or in my data center, even at the edge, I have high-performance networks, I can now do distributed computing and separate compute from storage. And so if I want to, I can have high-performance compute for my really data intensive applications and I can have cost-effective storage where I need to. And by separating that off, I can now innovate at the pace of those individual tools in that open source ecosystem. >> So, can I stay on this for a second 'cause you certainly saw Snowflake popularize that, they were kind of early on. I don't know if they're the first, but they're certainly one of the most successful. And you saw Amazon Redshift copied it. And Redshift was kind of a bolt on. What essentially they did is they tiered it off. You could never turn off the compute. You still had to pay for a little bit of compute, that's kind of interesting. Snowflake has the t-shirt sizes, so there's trade offs there. There's a lot of ways to skin the cat. How did you guys skin the cat? >> What we believe we're doing is we're taking the best of those worlds.
Through GreenLake cloud services, the ability to pay for and provision on demand the computational services you need. So, if someone needs to spin up a Delta lake job to execute a machine learning model, you spin up that. We're of course spinning that up behind the scenes. The job executes, it spins down, and you only pay for what you need. And we've got reserve capacity there. So you, of course, just like you would in the public cloud. But more importantly, being able to then extend that through a fabric across clouds and edge locations, so that if a customer wants to deploy in some public cloud service, like we know we're going to, again, we're giving that consistency across that, and exposing it through an S3 API. >> So, Vishal at the end of the day, I mean, I love to talk about the plumbing and the tech, but the customer doesn't care, right? They want the lowest cost. They want the fastest outcome. They want the greatest value. My question is, how are you seeing data organizations evolve to sort of accommodate this third era of this next generation? >> Yeah. I mean, the way at least, kind of look at, from a customer perspective, what they're trying to do is first of all, I think Matt addressed it somewhat. They're looking at a consistent experience across the different groups of people within the company that do something to data, right? It could be a SQL users. People who's just writing a SQL code. It could be people who are writing machine learning models and running them. It could be people who are writing code in Spark. Right now they are, you know the experience is completely disjointed across them, across the three types of users or more. And so that's one thing that they trying to do, is just try to get that consistency. We spoke about performance. I mean the disjointedness between compute and storage does provide the agility, because there customers are looking for elasticity. How can I have an elastic environment? 
So, that's kind of the other thing they're looking at. And performance and TCO, I think, are a big deal now. So, I think that's definitely on a customer's mind. So, as enterprises are looking at their data journey, those are at least the attributes that they are trying to hit as they organize themselves to make the most out of the data. >> Matt, you and I have talked about this sort of trend to the decentralized future. We're sort of hitting on that. And whether it's in a first gen data warehouse, second gen data lake, data hub, bucket, whatever, that essentially should ideally stay where it is, wherever it should be from a performance standpoint, from a governance standpoint and a cost perspective, and just be a node on this, I like the term data mesh, but be a node on that, and essentially allow the business owners, those with domain context, to, you've mentioned data products before, actually build data products, maybe air quotes, but a data product is something that can be monetized. Maybe it cuts costs. Maybe it adds value in other ways. How do you see HPE fitting into that long-term vision which we know is going to take some time to play out?
And as Vishal mentioned, Apache Spark, these are the common tools and frameworks. And so I want organizations to think about this unified analytics experience, where they don't have to trade off security for cost, efficiency for reliability. HPE through GreenLake cloud services is delivering all of that where they need to do it. >> And what about the speed to quality trade-off? Have you seen that pop up in customer conversations, and how are organizations dealing with that? >> Like I said, it depends on what you mean by speed. Do you mean a computational speed? >> No, accelerating the time to insights, if you will. We've got to go faster, faster, agile to the data. And it's like, "Whoa, move fast break things. "Whoa, whoa. "What about data quality and governance and, right?" They seem to be at odds. >> Yeah, well, because the processes are fundamentally broken. You've got a developer who maybe is able to spin up an instance in the public cloud to do their development, but then to actually do model training, they bring it back on-premises, but they're waiting for a data engineer to get them the data available. And then the tools to be provisioned, which is some esoteric stack. And then runtime is somewhere else. The entire process is broken. So again, by using consistent frameworks and tools, and bringing that computation to where the data is, and sort of blowing this construct of pipelines out of the water, I think is what is going to drive that success in the future. A lot of organizations are not there yet, but that's I think aspirationally where they want to be. >> Yeah, I think you're right. I think that is potentially an answer as to how you, not incrementally, but revolutionized sort of the data business. Last question, is talking about GreenLake, how this all fits in. Why GreenLake? Why do you guys feel as though it's differentiable in the market place? >> So, I mean, something that you asked earlier as well, time to value, right? 
I think that's a very important attribute and kind of a design factor as we look at GreenLake. If you look at GreenLake overall, kind of what does it stand for? It stands for experience. How do we make sure that we have the right experience for the users, right? We spoke about it in context of data. How do we have a similar experience for different users of data, but just broadly across an enterprise? So, it's all about experience. How do you automate it, right? How do you automate the workloads? How do you provision fast? How do you give folks a cloud... An experience that they have been used to in the public cloud, or using an Apple iPhone? So it's all about experience, I think that's number one. Number two is about choice and openness. I mean, as we look at it, GreenLake is not a proprietary platform. We are very, very clear that one of the important design principles is about choice and openness. And that's the reason you hear us talk about Kubernetes, about Apache Spark, about Delta lake et cetera, et cetera, right? We're using kind of those open source models where customers have a choice. If they don't want to be on GreenLake, they can go to public cloud tomorrow. Or they can run in our colos if they want to do it that way, or in their colos, if they want to do it. So they should have the choice. Third is about performance. I mean, what we've done is it's not just about the software, but we as a company know how to configure infrastructure for that workload. And that's an important part of it. I mean if you think about the machine learning workloads, we have the right Nvidia chips that accelerate those transactions. So, that's the third one. And the last one, I think, as I spoke about earlier, is cost. We are very focused on TCO, but from a customer perspective, we want to make sure that we are giving a value proposition, which is just not about experience and performance and openness, but also about costs.
So if you think about GreenLake, that's kind of the value proposition that we bring to our customers across those four dimensions. >> Guys, great conversation. Thanks so much, really appreciate your time and insights. >> Matt: Thanks for having us here, David. >> All right, you're welcome. And thank you for watching everybody. Keep it right there for more great content from HPE GreenLake announcements. You're watching theCUBE. (upbeat music)

Published Date : Sep 28 2021



RETAIL Next Gen | 3Soft


 

>> Hello everyone, and thanks for joining us today. My name is Brent Biddulph, managing director for retail and consumer goods here at Cloudera. Cloudera is very proud to be partnering with companies like 3Soft to provide data and analytic capabilities for over 200 retailers across the world, and to understand why demand forecasting could be considered the heartbeat of retail. What's at stake is really no mystery to most retailers. And really, just a quick level set before handing this over to my good friend Kamil at 3Soft: IDC, Gartner, and many other analysts kind of summed up an average here that I thought would be important to share, just to level set the importance of demand forecasting in retail and what's at stake, meaning the combined business value for retailers leveraging AI and IoT. This, above and beyond what demand forecasting has been in the past, is a $371 billion opportunity. And what's critically important to understand about demand forecasting is that it directly impacts both the top line and the bottom line of retail. So how does it affect the top line? Retailers that leverage AI and IoT for demand forecasting are seeing average revenue increases of 2%, and think of that as addressing the in-stock or out-of-stock issue in retail. And retail has become much more complex now, in that it's no longer just brick and mortar, of course, but fulfillment centers driven by e-commerce, so inventory is now having to be spread over multiple channels. Being able to leverage AI and IoT is driving 2% average revenue increases. Now, if you think about the size of most retailers, or the average retailer, that on its face is worth millions of dollars of improvement for any individual retailer. On top of that is balancing your inventory: getting the right product in the right place, and having productive inventory. And that is the bottom line.
So the average inventory reduction, leveraging AI and IoT as the analysts have found, and frankly, having spent time in this space myself in the past, a 15% average inventory reduction is significant for retailers: not being overstocked on product in the wrong place at the wrong time. And it touches everything from replenishment to out-of-stocks, labor planning, and customer engagement. For purposes of today's conversation, we're going to focus on inventory and inventory optimization and reducing out-of-stocks. And of course, even small incremental improvements, as I mentioned before, in demand forecast accuracy have millions of dollars of direct business impact, especially when it comes to inventory optimization. Okay. So without further ado, I would like to now introduce Dr. Kamil Volker to share with you what his team has been up to, and some of the amazing things they are driving at top retailers today. So over to you, Kamil. >> I'm happy to be here and I'm happy to speak to you about what we deliver to our customers, but let me first introduce 3Soft. We are a 100-person company based in Europe, in Southern Poland, and we, with 18 years of experience, specialize in providing what we call a data-driven business approach to our customers. Our roots are in solutions and services. We originally started as a software house, and on top of that, we build our solutions. We've been building automation and software for the biggest enterprises in Poland. Further, we understood the meaning of data and data management and how it can be translated into business profits. Adding artificial intelligence on top of that makes our solutions portfolio holistic, which enables us to realize very complex projects, which leverage all of those three pillars of our business. However, in recent times, we also understood that services are something which only the best and biggest companies can afford at scale.
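To make the stakes concrete, the averages Brent cites (2% revenue lift, 15% inventory reduction) can be applied to a hypothetical retailer. All inputs below, including the 25% annual carrying-cost rate, are illustrative assumptions, not figures from the session:

```python
def forecasting_impact(annual_revenue, inventory_value,
                       revenue_lift=0.02, inventory_reduction=0.15,
                       carrying_cost_rate=0.25):
    """Rough value of AI/IoT demand forecasting using the average
    figures cited in the session; every input is an assumption."""
    extra_revenue = annual_revenue * revenue_lift
    freed_inventory = inventory_value * inventory_reduction
    # Carrying cost saved each year on inventory no longer held;
    # 25%/year is a commonly assumed carrying-cost rate.
    carrying_savings = freed_inventory * carrying_cost_rate
    return extra_revenue, freed_inventory, carrying_savings

# A hypothetical $1B-revenue grocer holding $150M of inventory.
rev, freed, saved = forecasting_impact(1_000_000_000, 150_000_000)
```

Under these assumptions the lift is about $20M in revenue, $22.5M of working capital freed, and roughly $5.6M per year in carrying cost, which is why even small forecast-accuracy gains are worth millions.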
And we believe that the future of retail demand forecasting is in product solutions. That's why we created Occubee, our AI platform for data-driven retail that also covers the area we talk about today. I'm personally proud to be responsible for our technology partnerships with Cloudera and Microsoft. It's a great pleasure to work with such great companies and to be able to deliver solutions to our customers together, based on a common trust and understanding of the business, which culminates in customer success at the end. So why should we analyze data in retail? Why is it so important? It's kind of obvious that there is a lot of potential in the data per se, but understanding the different areas where it can be used in retail is also very important. We believe that thanks to using data, it's simply easier to derive good decisions for the business, based on facts and not intuition anymore. Of the four areas that we observe in retail, online data analysis is the fastest growing sector, let's say, for data analytics services, which is of course based on the e-commerce and online channels' availability to the customer. The pandemic only sped up the process of customer engagement in that channel, of course, but traditional offline, let's say brick-and-mortar, shops still play the biggest role for most retailers, especially in the FMCG sector. However, it's also very important to remember that there are plenty of business-related questions that need to be answered from the headquarters' perspective. So is it actually a good idea to open a store in a certain place? Is it a good idea to optimize stock for a certain producer? Is it a good idea to allocate goods to the online channel in a specific way? Those kinds of questions need to be answered in retail every day. And with the massive amount of factors coming into the equation, it's really not that easy to rely only on intuition and expert knowledge.
Of course, as Brent mentioned at the beginning, the supply chain and everything that relates to it is also super important. We observe our customers seeking huge improvements in revenue from that one single area as well. So let me present a case study of one of our solutions, which was delivered to a leading global grocery retailer. The project started with a set of challenges that we had to conquer, and of course the most important was how to limit overstocks and out-of-stocks. That's like the holy grail in retail, of course: how to do it without flooding the stores with goods, and at the same time, how to avoid empty shelves. From the perspective of the customer, it was obvious that we needed to provide a very high quality of sales forecast, to be able to predict what the actual sales of each individual product in each store would be every day. Considering the huge role of perishable goods at this specific grocery retailer, it was a huge challenge to provide a solution that was able to analyze, and provide meaningful information about, what's there in the sales data and the other factors we analyzed on a daily basis, at scale. However, our holistic approach, implementing AI with a data management background and automation solutions all together, created a platform that was able to significantly increase sales for our customer just by minimizing out-of-stocks. At the same time, we managed not to overflood the shops with goods, which actually decreased losses significantly, especially on fresh fruit. Having said that, these results of course translate into an increase in revenue, which can be calculated in hundreds of millions of dollars per year. So how does the solution actually work? Well, in principle, it's quite simple. We just collect the data.
We do it online; we put that in our data lake, based in the cloud on Cloudera technology, and we implement our artificial intelligence models on top of it. Then, based on the aggregated information, we create the forecast, and we do it every day or every night for every single product in every single store. This information is sent to the warehouses, and then the automated replenishment based on the forecast is on the way. The huge and most important aspect of that is the use of good tools to do the right job. Having said that, you can be sure that there is too much information in this data, and there are actually too many forecasts created every night for any expert to ever check. This means our solution needs to be very robust. It needs to provide information with high quality and high veracity. There are plenty of different business processes based on our forecast which need to be delivered on time for every product in each individual shop. Observing the success of this project, and having the huge market potential in mind, we decided to create Occubee, which can be used by many retailers who don't want to create dedicated software to solve this kind of problem. Occubee is our software-as-a-service offering, which enables retailers to go down the data-driven demand management path. We created Occubee with retailers, for retailers, implementing artificial intelligence on top of data science models created by our experts, and having data analysis in place based on the data management tools that we use, with a retail-first attitude. The uncertain times of the pandemic clearly show that it's very important to apply correction factors, which are sometimes required because we need to respond quickly to changes in the sales characteristics. That's why Occubee is an open-box solution, which means that you can implement it in your organization without changing your processes internally.
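The nightly cycle Kamil describes, one forecast per product per store, can be caricatured with a weekday-aware average. This is a toy stand-in, since the session does not detail Occubee's actual models:

```python
from collections import defaultdict
from statistics import mean

def nightly_forecast(sales_history, horizon_days=7):
    """sales_history maps (store, sku) -> list of (weekday, units_sold),
    weekday in 0..6, history assumed non-empty per key.
    Returns a forecast list per (store, sku): the average of past sales
    on the same weekday, falling back to the overall mean."""
    forecasts = {}
    for key, history in sales_history.items():
        by_weekday = defaultdict(list)
        for weekday, units in history:
            by_weekday[weekday].append(units)
        overall = mean(units for _, units in history)
        # One number per upcoming day; days index weekdays here for
        # simplicity, so a 7-day horizon covers one of each weekday.
        forecasts[key] = [
            mean(by_weekday[d]) if by_weekday[d] else overall
            for d in range(horizon_days)
        ]
    return forecasts
```

Run once per night over every (store, sku) pair and the output volume already explains Kamil's point: far more forecasts are produced than any expert could ever check by hand.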
It's all about mapping your process into the system, not the other way around. Fast trend and product data collection possibilities allow retailers to react to any changes which occur in sales every day. Also, it's worth mentioning that it's really not only FMCG: we believe the different use cases which we observe in the fashion, health and beauty, home and garden, pharmacy, and electronics flavors of retail are also very meaningful. They also have one common thread, and that's the growing importance of e-commerce. That's why we didn't want to leave that aside of Occubee, and we did everything we could to implement a solution which covers all those needs. When you think about the factors that affect sales, there is actually a huge variety of data that we can analyze. Of course the transactional data that every retailer possesses, like sales data from stores and from the e-commerce channel; also averaging numbers across weeks, months, and years makes sense. But it's also worth mentioning that using the right tool, one that allows you to collect data from internal and external sources, makes perfect sense for retail. It's very hard to imagine a competitive retailer that is not analyzing competitors' activity, changes in weather, or information about seasonal stores, which can be very important during the summer and other holidays, for example. On the other hand, having this information in one place creates the actual benefit and environment for the customer. Demand forecasting seems to be the most important and promising use case we can talk about when I think about retail, but it's also the whole process of replenishment that we can cover with different sets of machine learning models and data management tools.
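Downstream of the forecast, the automated replenishment step is commonly an order-up-to rule: order enough to cover forecast demand over the supplier lead time plus safety stock, net of what is already on hand or on order. A hedged sketch of that rule (not the actual Occubee logic, which isn't public):

```python
import math

def order_quantity(forecast_per_day, lead_time_days, on_hand,
                   on_order=0, safety_stock=0):
    """Units to order so stock covers forecast demand over the
    replenishment lead time, net of available inventory."""
    demand_over_lead_time = forecast_per_day * lead_time_days
    target = demand_over_lead_time + safety_stock
    need = target - on_hand - on_order
    # Round up to whole units; never order a negative quantity.
    return max(0, math.ceil(need))
```

For example, a store forecasting 12 units/day with a 3-day lead time, 20 units on the shelf, and a 10-unit safety buffer would order 26 units; a well-stocked store orders nothing, which is exactly the overstock/out-of-stock balance the case study targets.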
We believe that analyzing data from different parts of the retail replenishment process can be achieved by implementing a data management solution based on Cloudera products, and with some AI added on top of it, it makes perfect sense to focus not only on demand forecasting but also on further use cases down the line. When it comes to the actual benefits of implementing solutions for demand management, we believe it's really important to analyze them holistically. First, of course, there is out-of-stock minimization, which can be provided simply by better forecasting, but reducing overstocks through better inventory management can be achieved at the same time. Having said that, we believe that analyzing data without any specific new equipment required at the point of sale is the low-hanging fruit that can be easily achieved in almost every industry, for almost every retail customer. >> Hey, thanks, Kamil. Having worked with retailers in this space for a couple of decades myself, I was really impressed by a couple of things, and they might've been understated, frankly; the results, of course. I mean, as I kind of set up this session, you doubled the numbers on the statistics that the analysts found. So obviously, with the customers you're working with, you're doubling the average numbers that the industry overall is seeing, and most notably, the use of AI in Occubee has automated so many manual tasks of the past, like tuning item profiles, adding new items, et cetera. And also how quickly it felt like, and this is my core question, your team can cover or provide the solution for not only core center store, for example, in grocery, but you're covering fresh products. And frankly, there are solutions out on the market today that only focus on center-store, non-perishable departments. I was really impressed by the coverage that you're able to provide as well. 
So can you articulate what it takes to get up and running and your overall process to roll out the solution? I feel like, based on what you talked about and how you're approaching this in leveraging AI, you're streamlining the processes of legacy demand forecasting solutions that required more manual intervention. How quickly can you get people set up, and what is the overall process to get started with this software? >> Yeah, usually it takes three to six months to onboard a new customer to that kind of solution. And frankly, it depends on the data that the customer has. It's usually different for smaller and bigger companies, of course, but we believe it's very important to start with a good foundation. The platform needs to be there, a platform that is able to analyze or process different types of data: structured, unstructured, internal, external, and so on. But when you have this platform set up, it's all about starting to ingest data into it. Usually for smaller companies, it's easier to start with those, let's say, low-hanging fruits: the internal data, which is already there. This data has the highest veracity, and it's really easy to start working with it because everyone in the organization understands this data. For the bigger companies, it might be important to also ingest more unstructured data, or some kind of external data that needs to be acquired, so that may influence the length of the process. But we usually start with the customers with workshops. That's very important, to understand their reasons, because not every deal is the same. Of course, we believe that the success of our customers also comes from the fact that we train those AI models individually to the needs of each customer. 
So, that totally makes sense. But what you just described was months; with legacy and other solutions out there, this could be a year or longer process to roll out to the number of stores, for example, that you're scaling to. So that's highly impressive. And my guess is a lot of the barriers that have been knocked down with your solution come from the fact that you're running this in the cloud, from a compute standpoint on Cloudera and from a public cloud standpoint on Microsoft. So there's no IT intervention, if you will, or hurdles in preparation to get the database set up and all of that work. I would imagine that's part of the time savings in getting started. Would that be an accurate description? >> Yeah, absolutely. At the same time, this actually lowers the business risk, because we take the same data and put it into the data lake, which is in the cloud. We do not interfere with the existing processes that are processing this data in the company. So we just use the same data that is already in the company; we acquire some external data if needed, but it all sits aside from the customer's current infrastructure. So this is also a huge gain, as you said. 
not plenty, but I mean, there are enough demand forecasting solutions out on the market today for retailers. One of the things that really caught my eye, especially being a former retailer and talking with retailers, was the fact that you're promoting an open box solution. That is a key challenge for a lot of retailers that have seen black box solutions come and go, especially in this space, where you really need direct input from the customer to continue to fine-tune and improve forecast accuracy. Could you give just a little bit more of a description of your approach to open box versus black box? >> Yeah, of course. So, we've seen in the past the failures of projects based on the black box approach, and we believe that this is not the way to go, especially with the kind of specialized services that we provide, meaning understanding the customer's business first and then applying the solution. What stands behind our concept in Occubee is that your processes in the organization as a retailer have been optimized for years already. That's where retailers have put their focus for many years. We don't want to change that, and we are certainly not able to optimize it better ourselves. We are able to provide you a tool which can then be used for mapping those very well-optimized processes, not for changing them. That's our idea. And open box means that for every process you map in the solution, you can then monitor the execution of those processes in real time and see the result of every step. That way, we create a truly explainable experience for our customers, who can easily go through the whole process and see how the forecast was calculated and what the reason is for a specific number being there at the end of the day. 
Let's stay in touch. I want to make sure to leave Kamil's information here. So reach out to him directly, or feel free at any point in time obviously to reach out to me. Again, so glad everyone was able to join today, look forward to talking to you soon.

Published Date : Aug 5 2021


Rick Farnell, Protegrity | AWS Startup Showcase: The Next Big Thing in AI, Security, & Life Sciences


 

(gentle music) >> Welcome to today's session of the AWS Startup Showcase The Next Big Thing in AI, Security, & Life Sciences. Today we're featuring Protegrity for the life sciences track. I'm your host for theCUBE, Natalie Erlich, and now we're joined by our guest, Rick Farnell, the CEO of Protegrity. Thank you so much for being with us. >> Great to be here. Thanks so much Natalie, great to be on theCUBE. >> Yeah, great, and so we're going to talk today about the ransomware game, and how it has changed with kinetic data protection. So, the title of today's video segment makes a bold claim: how are kinetic data and ransomware connected? >> So first off, kinetic data: data is in use, it's moving, it's not static, it's no longer sitting still, and your data protection has to adhere to those same standards. And I think if you look at what's happening in the ransomware attacks, there are a couple of different things going on. Number one, bad actors are getting access to data in the clear, and they're holding that data ransom and threatening to release it. So from a Protegrity standpoint, with our protection capabilities, that data would be rendered useless to them in that scenario. So there are lots of ways in which data protection and backup mixed together really make a wonderful solution to the threat of ransomware. And it's a serious issue, and it's not just targeting the most highly regulated industries and customers; we're seeing attacks on pipeline and ferry companies, and really there is no end to what some of these bad actors are focusing on. The damages can run into the hundreds of millions of dollars, and the damage to brand reputation can last for years afterward. 
So I think if you look at how data is used today, there's that kind of opposing forces where the business wants to use data at the speed of light to produce more machine learning, and more artificial intelligence, and predict where customers are going to be, and have wonderful services at their fingertips. But at the same time, they really want to protect their data, and sometimes those architectures can be at odds, and at Protegrity, we're really focusing on solving that problem. So free up your data to be used in artificial intelligence and machine learning, while making sure that it is absolutely bulletproof from some of these ransomware attacks. >> Yeah, I mean, you bring a really fascinating point that's really central to your business. Could you tell us more about how you're actually making that data worthless? I mean, that sounds really revolutionary. >> So, it sounds novel, right? To kind of make your data worthless in the wrong hands. And I think from a Protegrity perspective, our kind of policy and protection capability follows the individual piece of data no matter where it lives in the architecture. And we do a ton of work as the world does with Amazon Web Services, so kind of helping customers really blend their hybrid cloud strategies with their on-premise and their use of AWS, is something that we thrive at. So protecting that data, not just at rest or while it's in motion, but it's a continuous protection policy that we can basically preserve the privacy of the data but still keep it unique for use in downstream analytics and machine learning. >> Right, well, traditional security is rather stifling, so how can we fix this, and what are you doing to amend that? >> Well, I think if you look at cybersecurity, and we certainly play a big role in the cybersecurity world but like any industry, there are many layers. 
And traditional cybersecurity investment has been at the perimeter level, at the network level keeping bad actors out, and once people do get through some of those fences, if your data is not protected at a fine grain level, they have access to it. And I think from our standpoint, yes, we're last line of defense but at the same time, we partner with folks in the cybersecurity industry and with AWS and with others in the backup and recovery to give customers that level of protection, but still allow their kinetic data to be utilized in downstream analytics. >> Right, well, I'd love to hear more about the types of industries that you're helping, and specifically healthcare obviously, a really big subject for the year and probably now for years to come, how is this industry using kinetic protection at the moment? >> So certainly, as you mentioned, some of the most highly regulated industries are our sweet spot. So financial services, insurance, online retail, and healthcare, or any industry that has sensitive data and sensitive customer data, so think first name last name, credit card information, national ID number, social security number blood type, cancer type. That's all sensitive information that you as an organization want to protect. So in the healthcare space, specifically, some of the largest healthcare organizations in the world rely on Protegrity to provide that level of protection, but at the same time, give them the business flexibility to utilize that data. So one of our customers, one of the leaders in online prescriptions, and that is an AWS customer, to allow a wonderful service to be delivered to all of their customers while maintaining protection. If you think about sharing data on your watch with your insurance provider, we have lots of customers that bridge that gap and have that personal data coming in to the insurance companies. 
All the way to, if in a use case in the future, looking at the pandemic, if you have to prove that you've been vaccinated, we're talking about some sensitive information, so you want to be able to show that information but still have the confidence that it's not going to be used for nefarious purposes. >> Right, and what is next for Protegrity? >> Well, I think continuing on our journey, we've been around for 17 years now, and I think the last couple, there's been an absolute renaissance in fine-grained data protection or that connected data protection, and organizations are recognizing that continuing to protect your perimeter, continuing to protect your firewalls, that's not going to go away anytime soon. Your access points, your points of vulnerability to keep bad actors out, but at the same time, recognizing that the data itself needs to be protected but with that balance of utilizing it downstream for analytic purposes, for machine learning, for artificial intelligence. Keeping the data of hundreds of millions if not billions of people saved, that's what we do. If you were to add up the customers of all of our customers, the largest banks, the largest insurance companies, largest healthcare companies in the world, globally, we're protecting the private data of billions of human beings. And it doesn't just stop there, I think you asked a great question about kind of the industry and yes, insurance, healthcare, retail, where there's a lot of sensitive data that certainly can be a focus point. 
But in the IOT space, kind of if you think about GPS location or geolocation, if you think about a device, and what it does, and the intelligence that it has, and the decisions that it makes on the fly, protecting data and keeping that safe is not just a personal thing, we're stepping into intellectual property and some of the most valuable assets that companies have, which is their decision-making on how they use data and how they deliver an experience, and I think that's why there's been such a renaissance, if you will, in kind of that fine grain data protection that we provide. >> Yeah, well, what is Protegrity's role now in future proofing businesses against cyber attacks? I mean, you mentioned really the ramifications of that and the impact it can have on businesses, but also on governments. I mean, obviously this is really critical. >> So there's kind of a three-step approach, and this is something that we have certainly kind of felt for a long, long time, and we work on with our customers. One is having that fine-grain data protection. So tokenizing your data so that if someone were to get your data, it's worthless, unless they have the ability to unlock every single individual piece of data. So that's number one, and then that's kind of what Protegrity provides. Number two, having a wonderful backup capability to roll kind of an active-active, AWS being one of the major clouds in the world where we deploy our software regularly and work with our customers, having multi-regions, multi-capabilities for an active-active scenario where if there's something that goes down or happens you can bring that down and bring in a new environment up. And then third is kind of malware detection in the rest of the cyber world to make sure that you rinse kind of your architecture from some of those agents. 
And I think when you kind of look at it, ransomware, they take data, they encrypt your data, so they force you to give them Bitcoin, or whatnot, or they'll release some of your data. And if that data is rendered useless, that's one huge step in kind of your discussions with these nefarious actors and be like you could release it, but there's nothing there, you're not going to see anything. And then second, if you have a wonderful backup capability where you wind down that environment that has been infiltrated, prove that this new environment is safe, have your production data have rolling and then wind that back up, you're back in business. You don't have to notify your customers, you don't have to deal with the ransomware players. So it's really a three-step process but ultimately it starts with protecting your data and tokenizing your data, and that's something that Protegrity does really, really well. >> So you're basically able to eliminate the financial impact of a breach? >> Honestly, we dramatically reduce the risk of customers being at risk for ransomware attacks 100%. Now, tokenizing data and moving that direction is something that it's not trivial, we are literally replacing production data with a token and then making sure that all downstream applications have the ability to utilize that, and make sure that the analytic systems and machine learning systems, and artificial intelligence applications that are built downstream on that data have the ability to execute, but that is something that from our patent portfolio and what we provide to our customers, again, some of the largest organizations in retail, in financial services, in banking, and in healthcare, we've been doing that for a long time. We're not just saying that we can do this and we're in version one of our product, we've been doing this for years, supporting the largest organizations with a 24 by seven capability. 
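The core idea running through this discussion — replacing production values with tokens so that exfiltrated data is worthless without the vault that maps tokens back — can be sketched like this. This is a hedged illustration of vault-based tokenization in general, not Protegrity's implementation; every class, method, and field name here is hypothetical.

```python
# Illustrative vault-based tokenization: sensitive values are swapped for
# random tokens; only the vault can reverse the mapping, so a stolen copy
# of the tokenized data reveals nothing.
import secrets

class TokenVault:
    def __init__(self):
        self._forward = {}   # sensitive value -> token
        self._reverse = {}   # token -> sensitive value

    def tokenize(self, value: str) -> str:
        # Same input always maps to the same token, so downstream joins
        # and analytics on the tokenized column still work.
        if value not in self._forward:
            token = secrets.token_hex(8)
            self._forward[value] = token
            self._reverse[token] = value
        return self._forward[value]

    def detokenize(self, token: str) -> str:
        return self._reverse[token]

vault = TokenVault()
record = {"name": "Jane Doe", "ssn": "078-05-1120", "blood_type": "O+"}
protected = {k: vault.tokenize(v) for k, v in record.items()}
# `protected` can flow to analytics and backups; a ransomware actor who
# steals it holds only meaningless tokens.
```

Note the design choice the interview highlights: because tokenization is deterministic per value, the protected data stays unique and joinable for machine learning, which plain encryption of whole records does not give you.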
>> Right, and tell us a bit about the competitive landscape. Where do you see your offering compared to your competitors? >> So, kind of historically, let's call it an era ago, maybe even before cloud and hybrid cloud became a thing, there were a handful of players that were acquired into much larger organizations; those organizations have been dusting off those acquired assets, and we're seeing them come back in. There are some new entrants into our space that have some protection mechanisms, whether it be encryption or anonymization, but unless you're doing fine-grained tokenization, you're not going to be able to allow that data to participate in the artificial intelligence world. So we see kind of a range of competition there. And then I'd say probably the biggest competitor, Natalie, is customers not doing tokenization. They're saying, "No, we're okay, we'll continue protecting our firewall, we'll continue protecting our access points, we'll invest a little bit more in maybe some governance, but that fine-grained data protection, maybe it's not for us." And that is the big shift that's happening. You look at the beginning of this year with the SolarWinds attack and the vulnerability it caused; and in the last few weeks, very large and important organizations have found themselves hit by the ransomware attacks happening on meat processing plants and facilities, shutting down meat production, and on pipelines, stopping oil and gas. So we're seeing a complete shift in the types of organizations and the industries that need to protect their data. It's not just the healthcare organizations, or the banks, or the credit card companies; it is every single industry and every single size of company. >> Right, and I've got to ask you this question: what is your defining contribution to the future of cloud scale? 
Well, ultimately we kind of have a charge here at Protegrity where we feel like we protect the world's most sensitive data. And when we come into work every day, that's what every single employee thinks at Protegrity. We are standing behind billions of individuals who are customers of our customers; that's a cultural thing for us, and we take it very seriously. We have maniacal customer support, supporting our biggest customers with a follow-the-sun, 24 by seven global capability. So that's number one. So, I think our part in this is really helping to educate the world that there is a solution for this ransomware and for some of these things that don't have to happen. Now, naturally with any solution, there's going to be some investment, there are going to be some architecture changes, but with partnerships like AWS, and our partnership with pretty much every data provider, data storage provider, and data solution provider in the world, we want to provide fine-grained data protection: any data, in any system, on any platform. And that's our mission. >> Well, Rick Farnell, this has been a really fascinating conversation with you, thank you so much. The CEO of Protegrity, really great to have you on this program for the AWS Startup Showcase, talking about how the ransomware game has changed with kinetic data protection. Really appreciate it. Again, I'm your host Natalie Erlich, thank you again very much for watching. (light music)

Published Date : Jun 24 2021


Ariel Assaraf, Coralogix | AWS Startup Showcase: The Next Big Thing in AI, Security, & Life Sciences


 

(upbeat music) >> Hello and welcome to today's session of the AWS Startup Showcase, the next big thing in AI, Security and Life Sciences, featuring Coralogix for the AI track. I'm your host, John Furrier with theCUBE. We're here joined by Ariel Assaraf, CEO of Coralogix. Ariel, great to see you calling in remotely, videoing in from Tel Aviv. Thanks for coming on theCUBE. >> Thank you very much, John. Great to be here. >> So you guys are featured as a hot, next big thing startup. And one of the things you guys do that we've been covering for many years is log analytics; from a data perspective, you guys decouple the analytics from the storage. This is a unique thing. Tell us about it. What's the story? >> Yeah. So what we've seen in the market is that, probably because of the great job that a lot of the earlier generation products have done, more and more companies see the value in log data. What used to be a couple of rows that you add whenever you have something very important to say became a standard for documenting all communication between different components: infrastructure, network, monitoring, and the application layer, of course. And what happens is that data grows extremely fast; all data grows fast, but log data grows even faster. What we always say is that, for sure, data grows faster than revenue. So as fast as a company grows, its data is going to outpace that. And so we found ourselves thinking, how can we help companies still get the full coverage they want, without cherry-picking data or deciding exactly what they want to monitor and what they're taking risk with, while still giving them the real time analysis they need to make sure they get the full insight suite for the entire data, wherever it comes from? And that's why we decided to decouple the analytics layer from storage. 
So instead of ingesting the data, then indexing and storing it, and then analyzing the stored data, we analyze everything, and then we only store what matters. So we go from the insights backwards. That allowed us to reduce the amount of data, reduce the digital exhaust that it creates, and also provide better insights. So the idea is that as this world of data scales, the need for real time streaming analytics is going to increase. >> So what's interesting is we've seen this decoupling of storage and compute be a great success formula at cloud scale, for instance; that's a known best practice. You're taking a little bit different approach. I love how you're coming at it backwards, you're working backwards from the insights, almost doing some intelligence on the front end of the data, probably saving a lot of storage cost. But I want to get specifically back to this real time. How do you do that? And how did you come up with this? What's the vision? How did you guys come up with the idea? What was the magic light bulb that went off for Coralogix? >> Yes, the Coralogix story is very interesting. Actually, it was no light bulb; it was a road of pain for years and years. We started by just, you know, doing the same, maybe faster, a couple more features. And it didn't work out too well. The first few years, the company was not very successful. And we've grown tremendously in the past three years, almost 100X since we've launched this, and it came from a pain. So once we started scaling, we saw that the side effects of accessing the storage for analytics, the latency it creates, the dependency on schema, and the cost it imposed on our customers became unbearable. And then we started thinking: okay, how do we get the same level of insights? Because there's this perception in the world of storage, and now it has started to happen in analytics also, that talks about tiers. 
So you want to get a great experience, you pay a lot, you want to get a less than great experience, you pay less, it's a lower tier. And we decided that we're looking for a way to give the same level of real time analytics and the same level of insights. Only without the issue of dependencies, decoupling all the storage schema issues and latency. And we built our real time pipeline, we call it Streama. Streama is a Coralogix real time analysis platform that analyzes everything in real time, also the stateful thing. So stateless analytics in real time is something that's been done in the past and it always worked well. The issue is, how do you give a stateful insight on data that you analyze in real time without storing and I'll explain how can you tell that a certain issue happened that did not happen in the past three months if you did not store the past three months? Or how can you tell that behavior is abnormal if you did not store what's normal, you did not store to state. So we created what we call the state store that holds the state of the system, the state of data, were a snapshot on that state for the entire history. And then instead of our state being the storage, so you know, you asked me, how is this compared to last week? Instead of me going to the storage and compare last week, I go to the state store, and you know, like a record bag, I just scroll fast, I find out one piece of state. And I say, okay, this is how it looked like last week, compared to this week, it changed in ABC. And once we started doing that we on boarded more and more services to that model. And our customers came in and say, hey, you're doing everything in real time. We don't need more than that. Yeah, like a very small portion of data, we actually need to store and frequently search, how about you guys fit into our use cases, and not just sell on quota? 
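The state-store idea described above — answering "is this abnormal versus last week?" from a compact in-memory state plus snapshots, instead of re-querying stored raw logs — might look like this at a toy scale. All names, data shapes, and the 2x threshold are assumptions for illustration, not Coralogix's actual Streama API.

```python
# Illustrative "analyze first, store state" sketch: raw log records are
# analyzed in stream and discarded; only compact per-service error counts
# and weekly snapshots of them are kept.
from collections import Counter

class StateStore:
    def __init__(self):
        self.current = Counter()   # state for the week in progress
        self.snapshots = []        # one Counter per closed week

    def ingest(self, log_record: dict):
        # Stateful analysis happens here, in memory; the raw record
        # itself never needs to be indexed or stored.
        if log_record.get("level") == "ERROR":
            self.current[log_record["service"]] += 1

    def close_week(self):
        self.snapshots.append(self.current)
        self.current = Counter()

    def abnormal(self, service: str, factor: float = 2.0) -> bool:
        # "How does this week compare to last week?" answered from the
        # snapshot, with zero round trips to log storage.
        if not self.snapshots:
            return False
        last_week = self.snapshots[-1][service]
        return self.current[service] > factor * max(last_week, 1)

store = StateStore()
for _ in range(3):
    store.ingest({"service": "checkout", "level": "ERROR"})
store.close_week()
for _ in range(10):
    store.ingest({"service": "checkout", "level": "ERROR"})
print(store.abnormal("checkout"))  # 10 errors vs 3 last week -> True
```

The design point is that the state is orders of magnitude smaller than the raw logs it summarizes, which is what makes the "no storage round trip" comparison possible.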
And we decided to basically allow our customers to choose what is the use case that they have, and route the data through different use cases. And then each log records, each log record stops at the relevant stops in our data pipeline based on the use case. So just like you wouldn't walk into the supermarket, you fill in a bag, you go out, they weigh it and they say, you know, it's two kilograms, you pay this amount, because different products have different costs and different meaning to you. That same way, exactly, We analyze the data in real time. So we know the importance of data, and we allow you to route it based on your use case and pay a different amount per use case. >> So this is really interesting. So essentially, you guys, essentially capture insights and store those, you call them states, and then not have to go through the data. So it's like you're eliminating the old problem of, you know, going back to the index and recovering the data to get the insights, did we have that? So anyway, it's a round trip query, if you will, you guys are start saving all that data mining cost and time. >> We call it node zero side effects, that round trip that you that you described is exactly it, no side effects to an analysis that is done in real time. I don't need to get the latency from the storage, a bit of latency from the database that holds the model, a bit of latency from the cache, everything stays in memory, everything stays in stream. >> And so basically, it's like the definition of insanity, doing the same thing over and over again and expecting a different result. Here, that's kind of what that is, the old model of insight is go query the database and get something back, you're actually doing the real time filtering on the front end, capturing the insights, if you will, storing those and replicating that as use case. Is that right? >> Exactly. But then, you know, there's still the issue of customer saying, yeah, but I need that data. 
Someday, I need to really frequently search, I don't know, you know, the unknown unknowns; or someday I need it for compliance, and I need an immutable record that stays in my compliance bucket forever. So we allowed customers, we have this screen, we call it the TCO optimizer, that allows them to define those use cases. And they can always access the data by querying their remote storage from Coralogix, or querying the hot data that is stored with Coralogix. So it's all about use cases. And it's all about how you consume the data, because it doesn't make sense for me to pay the same amount, or give the same amount of attention, to a record that is completely useless, that is just there for the record or for a compliance audit that may or may not happen in the future, and, you know, do the same with the most critical exception in my application log that has immediate business impact. >> What's really good too, is you can actually set some policy up if you want: for certain use cases, okay, store that data. So it's not to say you don't want to store it, but you might want to store it only for certain use cases. So I can see that. So I got to ask the question. So how does this differ from the competition? How do you guys compete? Take us through a use case of a customer. How do you guys go to the customer? Do you just say, hey, we got so much scar tissue from this, we learned the hard way, take it from us? How does it go? Take us through an example. >> So an interesting example is actually a company that is not your typical early adopter, let's call it this way. A very advanced-in-technology and smart company, but a huge one, one of the largest telecommunications companies in India. And they were actually cherry picking about 100 gigs of data per day, and sending it to one of the legacy providers, which has a great solution that does give value.
But they weren't even thinking about sending their entire data set, because of cost, because of scale, because of, you know, just the clutter. Whenever you search, you have to sift through millions of records, and many of them are not that important. And we helped them actually analyze their data, and worked with them to understand: these guys had over a terabyte of data that had incredible insights, it was like a goldmine of insights. But now you just needed to prioritize it by their use case, and they went from 100 gig with the other legacy solution to a terabyte, at almost the same cost, with more advanced insights, within one week, which in that scale of an organization is something that is out of the ordinary; it took them four months to implement the other product. But now, when you go from the insights backwards, you understand your data before you have to store it, you understand the data before you have to analyze it, or before you have to manually sift through it. So if you ask about the difference, it's all about the architecture. We analyze and only then index, instead of indexing and then analyzing. It sounds simple. But of course, when you look at this stateful analytics, it's a lot more complex. >> Take me through your growth story, because first of all, I'll get back to the secret sauce in a second. I want to get back to how you guys got here. (indistinct) you had this problem? You kind of broke through, you hit the magic formula. Talk about the growth. Where's the growth coming from? And what's the real impact? What's the situation relative to the company's growth? >> Yeah, so we had a rough first three years that I kind of mentioned, and then, I was not the CEO at the beginning, I'm one of the co-founders. I'm more of the technical guy, I was the product manager. And I became CEO after the company was kind of on the verge of closing at the end of 2017.
And the CTO left, the CEO left, the VP of R&D became the CTO, I became the CEO; we were five people with $200,000 in the bank, and you know that that's not a long runway. And we kind of changed attitudes. So first we launched this product, and then we understood that we need to go bottoms up; you can't go to enterprises and try to sell something that is out of the ordinary, or that changes how they're used to working, or just, you know, sell something, (indistinct) five people with under $1,000 in the bank. So we started going from bottoms up, with the early adopters. And it's still, until today, you know, the more advanced companies, the more advanced teams, that make Coralogix the preferred solution for advanced DevOps and platform teams. So they started adopting Coralogix, and then it grew to the larger organizations, and they were actually pushing; there are champions within their organizations. And ever since. So until the beginning of 2018, we raised about $2 million and sales were marginal. Today, we have over 1,500 paying accounts, and we raised almost $100 million more. >> Wow, what a great pivot. That was a great example of kind of catching the right wave here, the cloud wave. You said in terms of customers, you had the DevOps kind of (indistinct) initially. And now you said you expanded out to a lot more traditional enterprise; can you take me through the customer profile? >> Yeah, so I'd say the core would still be cloud native and (indistinct) companies. These are the typical ones; we have very tight integration with AWS, all the services, all the integrations required, we know how to read and write back to the different services and analysis platforms in AWS. Also for Azure and GCP, but mostly AWS. And then we do have quite a few big enterprise accounts; actually, five of the largest 50 companies in the world use Coralogix today.
And it grew from those DevOps and platform evangelists into the level of IT execs and even (indistinct). So today, we have our security product that already sells to some of the biggest companies in the world; it's a different profile. And the idea for us is that, you know, once you solve that issue of too much data, too expensive, not proactive enough, too coupled to the storage, you can actually expand from observability, logging, metrics, now into tracing, and then into security, and maybe even to other fields where the cost and the productivity are an issue for many companies. >> So let me ask you this question then, Ariel, if you don't mind. So if a customer has a need for Coralogix, is it because of data growth? Or they just got data kind of sprawled all over the place? Or is it that storage costs are going up on S3? What's some of the signaling that you would see, that would be telling you, okay, what's the opportunity to come in and either clean house or fix the mess or whatnot? Take us through what you see. What do you see as the trend? >> Yeah. So like the typical customer (indistinct) for Coralogix will be someone using one of the legacy solutions and growing very fast. That's the easiest way for us to know. >> What grows fast? The storage, the storage is growing fast? >> The company is growing fast. >> Okay. >> And you remember, the data grows faster than revenue. And we know that. So if I see a company that grew from, you know, 50 people to 500 in three years, specifically if it's a cloud native or internet company, I know that their data grew not 10X, but 100X. So I know that that company might have started with a legacy solution at, like, you know, $1,000 a month, and they're happy with it. And you know, for $1,000 a month, if you don't have a lot of data, those legacy solutions, you know, they'll do the trick. But now I know that they're going to get asked to pay 50, 60, $70,000 a month. And this is exactly where we kick in.
Because now, when it doesn't fit the economic model, when it doesn't fit the unit economics, it starts damaging the margins of those companies. Because remember, for those internet and cloud companies, these are not the classic costs that you'll see in an enterprise; they're actually damaging your unit economics and the valuation of the business, a bigger deal. So now, when I see that type of organization, we come in and say, hey, better coverage, more advanced analytics, easier integration within your organization, we support all the common open source syntaxes and dashboards, you can plug it into your entire environment, and the costs are going to be a quarter of whatever you're paying today. So once they see that, once they see, you know, the dev friendliness of the product, the ease of scale, the stability of the product, it makes a lot more sense for them to engage in a PoC, because at the end of the day, if you don't prove value, you know, you can come with a 90% discount, it doesn't do anything if you don't prove the value to them. So it's a great door opener. But from then on, you know, it's a PoC like any other. >> Cloud is all about the PoC, or pilot, as they say. So take me through the product today, and what's next for the product; take us through the vision of the product and the product strategy. >> Yeah, so today, the product allows you to send any log data, metric data or security information, and analyze it a million ways. We have one of the most extensive alerting mechanisms in the market, automatic anomaly detection, data clustering. And all the real time pipeline, you know, things that help companies make their data smarter and more readable: parsing, enriching, getting external sources to enrich the data, and so on, so forth.
Where we're stepping in now is actually to make the final step of decoupling the analytics from storage, what we call the dataless data platform, in which no data will sit or reside within the Coralogix cloud; everything will be analyzed in real time, stored in a storage of choice of our customers, and then we'll allow our customers to remotely query that at incredible performance. So that'll bring our customers a way to have the first ever true SaaS experience for observability. Think about no quota plans, no retention: you send whatever you want, you pay only for what you send, you retain it how long you want to retain it, and you get all the real time insights much, much faster than with any other product that keeps it on hot storage. So that'll be our next step, to really make sure that, you know, we're not kind of reselling cloud storage, because a lot of the time, when you are dependent on storage, and you know, we're a cloud company, like I mentioned, you got to keep your unit economics. So what do you do? You sell storage to the customer, you add your markup, and then you charge for it. And this is exactly where we don't want to be. We want to sell the intelligence and the insights and the real time analysis that we know how to do, and let the customers enjoy the, you know, the wealth of opportunities and choices their cloud providers offer for storage. >> That's a great vision. In a way, the hyperscalers' early days showed that decoupling compute from storage, which I mentioned earlier, was a huge category creation. Here, you're doing it for data. We'll call it hyper data scale, or, like, maybe there's got to be a name for this. What do you see about five years from now? Take us through the trajectory of the next five years, because certainly observability is not going away. I mean, it's data management, monitoring, real time, asynchronous, synchronous, linear, all this stuff's happening. What's the five year vision?
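The "dataless" flow described above, where events are analyzed in memory and the raw bytes land only in the customer's own storage, can be sketched in miniature as follows. The names are assumptions for illustration; the plain list standing in for the customer's bucket would be an S3 bucket or similar object store in practice.

```python
def dataless_pipeline(events, customer_bucket, analyzers):
    """Analyze each event in stream; raw data is written only to the
    customer's own storage, never retained by the vendor."""
    insights = []
    for event in events:
        for analyze in analyzers:
            insight = analyze(event)      # real-time, in-memory analysis
            if insight is not None:
                insights.append(insight)
        customer_bucket.append(event)     # raw event lands on the customer side
    return insights

# Toy analyzer: flag error events as they stream past.
def flag_errors(event):
    return f"alert: {event['msg']}" if event["level"] == "ERROR" else None

bucket = []  # stand-in for the customer's own object store
insights = dataless_pipeline(
    [{"level": "INFO", "msg": "ok"}, {"level": "ERROR", "msg": "disk full"}],
    bucket,
    [flag_errors],
)
print(insights)     # → ['alert: disk full']
print(len(bucket))  # → 2: both raw events retained, but only by the customer
```

Later "unknown unknown" questions would then be served by querying the customer's bucket remotely, so the vendor's bill covers analysis rather than marked-up storage.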
>> Now add security to observability, which is something we started preaching for, because no one can say, I have observability into my environment, when people, you know, come in and out and steal data. That's no observability. But the thing is that because data grows exponentially, because it grows faster than revenue, what we believe is that in five years, there's not going to be a choice: everyone is going to have to analyze the data in real time, extract the insights, and then decide whether to store it on, you know, a long term archive or not, or not store it at all. You still want to get the full coverage and insights. But you know, when you think about observability, unlike many other things, the more data you have, many times, the less observability you get. So think of log data, unlike statistics: if my system was only generating 10 records a day, I have full, incredible observability, I know everything that it's done. What happens is that you pay more, you get less observability, and more uncertainty. So I think that, you know, with time, we'll start seeing more and more real time streaming analytics, and a lot less storage based and index based solutions. >> You know, Ariel, I've always been saying to Dave Vellante on theCUBE, many times, that insights need to be the norm, not the exception, and then ultimately, there would be a database of insights. I mean, at the end of the day, the insights become more plentiful. You have the ability to actually store those insights, and refresh them and challenge them and model update them, verify them, either sunset them or add to them. Or, you know, when you start getting more data into your organization, AI and machine learning prove that pattern recognition works. So why not grab those insights? >> And use them as your baseline to know what's important, and not have to start by putting everything in a bucket.
So we're going to have new categories like insight-first software (indistinct) >> Go from insights backwards, that'll be my tagline if I have to, but I'm a terrible marketing (indistinct). >> Yeah, well, I mean, everyone's like cloud first, data driven, insight driven; what you're basically doing is you're moving into the world of insights driven analytics, really, as a way to kind of bring that forward. So congratulations. Great story. I love the pivot, love how you guys entrepreneurially put it all together, had the problem yourselves, and brought the solution out to the rest of the world. And certainly the DevOps and cloud scale wave is just getting bigger and bigger and taking over the enterprise. So great stuff. Real quick while you're here, give a quick plug for the company. What you guys are up to, stats, vitals, hiring, what's new, give the commercial. >> Yeah, so like I mentioned, over 1,500 paying customers, growing incredibly in the past 24 months, hiring, almost doubling the company in the next few months; offices in Israel, East and West US, UK and Mumbai. Looking for talented engineers to join the journey and build the next generation of dataless data platforms. >> Ariel Assaraf, CEO of Coralogix. Great to have you on theCUBE, and thank you for participating in the AI track for our next big thing in the Startup Showcase. Thanks for coming on. >> Thank you very much, John, really enjoyed it. >> Okay, I'm John Furrier with theCUBE. Thank you for watching the AWS Startup Showcase presented by theCUBE. (calm music)

Published Date : Jun 24 2021



Toni Manzano, Aizon | AWS Startup Showcase | The Next Big Thing in AI, Security, & Life Sciences


 

(up-tempo music) >> Welcome to today's session of theCUBE's presentation of the AWS Startup Showcase: The Next Big Thing in AI, Security, and Life Sciences. Today, we'll be speaking with Aizon as part of our life sciences track, and I'm pleased to welcome the co-founder and chief science officer of Aizon, Toni Manzano. We'll be discussing how artificial intelligence is driving key processes in pharma manufacturing. Welcome to the show. Thanks so much for being with us today. >> Thank you, Natalie, to you and to your introduction. >> Yeah. Well, as you know, Industry 4.0 is revolutionizing manufacturing across many industries. Let's talk about how it's impacting biotech and pharma, as well as Aizon's contributions to this revolution. >> Well, actually Pharma 4.0 is totally introducing a new concept of how to manage processes. So, nowadays the industry is considering that everything is practically static, nothing changes, and this is because they don't have the ability to manage the complexity and the variability around the biotech and drug manufacturing processes. Nowadays, with Pharma 4.0 technologies, cloud, power computing, IoT, AI, we can get all those data. We can understand the data and we can interact in real time with processes. This is how things are going on nowadays. >> Fascinating. Well, as you know, COVID-19 really threw a wrench in a lot of activity in the world, our economies, and also people's way of life. How did it impact manufacturing in terms of scale up and scale out? And what are your observations from this year? >> You know, the main problem when you want to do a scale-up process is not only the equipment, it is also the knowledge that you have around your process. When you're doing a vaccine on a smaller scale in your lab, the parameters you're controlling in your lab have to be escalated when you work from five liters to 2,500 liters. How to manage this difference of scale?
Well, AI is helping nowadays in order to detect and to identify the most relevant factors involved in the process: the critical relationships between the variables and the final control of the full process, following continued process verification. This is how we can help nowadays, using AI and cloud technologies in order to accelerate and to scale up vaccines like the COVID-19 one. >> And how do you anticipate pharma manufacturing to change in a post-COVID world? >> This is a very good question. Nowadays, we have some assumptions that we are trying to overcome yet with human efforts. Nowadays, with the new situation, with the pandemic that we are living in, in the next evolution humans will take care of the good practices of the new knowledge that we have to generate. So AI will manage the repetitive tasks, all the human-conducted activity that we are doing; that will be done by AI, and humans will never again do repetitive tasks in this way. They will manage complex problems and supervise AI output. >> So you're driving more efficiencies in the manufacturing process with AI. You recently presented at the United Nations Industrial Development Organization about the challenges brought by COVID-19 and how AI is helping with the equitable distribution of vaccines and therapies. What are some of the ways that companies like Aizon can now help with that kind of response? >> Very good point. Could you imagine you're a big company, a top pharma company, that you have the intellectual property of a COVID-19 vaccine based on the mRNA principle, and you are going to, or you would like to, expand this vaccination, in order not only to deliver vaccination, but also to manufacture the vaccine. What if you try to manufacture these vaccines in South Africa, or in Asia, in India? So the secret is to transport not only the raw material, not only the equipment, but also the knowledge.
How to operate, how to control the full process, from the initial phase 'till the packaging and the vial filling. So, this is how we are contributing. AI is packaging all this knowledge in just AI models. This is the secret. >> Interesting. Well, what are the benefits for pharma manufacturers when considering the implementation of AI and cloud technologies? And how can they progress in their digital transformation by utilizing them? >> One of the benefits is that you are able to manage the variability, the real complexity, in the world. So, you cannot create processes, in order to manufacture drugs, just considering that the raw material that you're using is never changing. You cannot consider that all the equipment works in the same way. You cannot consider that your recipe will work in the same way in Brazil as in Singapore. So the complexity and the variability must be understood as part of the process. This is one of the benefits. The second benefit is that when you use cloud technologies, you don't have to care much about computing licenses, software updates, antivirus, scale-up of computing. Everything is done in the cloud. So well, these are two main benefits. There are more, but these are maybe the two main ones. >> Yeah. Well, that's really interesting how you highlight that there's a big shift in how you handle this in different parts of the world. So, what role does compliance and regulation play here? And of course we see differences in the way that's handled around the world as well. >> Well, I think that this is the first time, in the human race in the pharma, let me say, experience, that we have a very strong commitment from the regulatory bodies, you know, to push forward using these kinds of technologies. Actually, for example, the FDA, they are using cloud to manage their own systems. So why not use them in pharma? >> Yeah. Well, how do AWS and Aizon help manufacturers address these kinds of considerations?
>> Well, we have a very great partner. AWS, for us, is simplifying our life a lot. So, we are a very, let me say, different startup company, Aizon, because we have a lot of PhDs in the company. So we are not the classical geeky company with guys developing all day. So we have a lot of science inside the company. So this is our value. So everything that is provided by Amazon, why do we have to aim to recreate it again? So we can rely on SageMaker, we can rely on Cognito, we can rely on Lambda, we can rely on S3 to have encrypted data with automatic backup. So, AWS is simplifying a lot of our life. And we can dedicate all our knowledge and all our efforts to the things that we know: pharma compliance. >> And how do you anticipate that pharma manufacturing will change further in the 2021 year? >> Well, we are participating not only with business cases. We also participate with the community, because we are leading an international project in order to anticipate these kinds of new breakthroughs. So, we are working with, let me say, initiatives in the - association; we are collaborating in two different projects in order to apply AI in computerized system validation, in order to create a more robust process for the mRNA vaccine. We are collaborating with the - university, creating the standards for AI application in GXP. We are collaborating with different initiatives with the pharma community in order to create the foundation to move forward during this year. >> And how do you see the competitive landscape? What do you think Aizon provides compared to its competitors? >> Well, good question. Probably, you can find a lot of AI services, platforms, programs, softwares that can run in the industrial environment. But I think that it will be very difficult to find a GXP, a full GXP-compliant platform, working on cloud with AI, where the AI is already qualified. I think that no one is doing that nowadays.
And one of the demonstrations of that is that we are also writing some scientific papers describing how to do that. So you will see that Aizon is the only company that is doing that nowadays. >> Yeah. And how do you anticipate that pharma manufacturing will change, or, excuse me, how do you see it providing a defining contribution to the future of cloud scale? >> Well, there are no limits in cloud. So as long as you accept that everything is varied and complex, you will need power computing. So the only way to manage this complexity is running a lot of power computation. So cloud is the only system, let me say, that allows that. Well, the thing is that, you know, pharma will also have to be compliant with the cloud providers. And for that, we created a new layer around the platform that we call qualification as a service. We are creating this layer in order to continuously qualify any kind of cloud platform that wants to work in this environment. This is how we are doing that. >> And in what areas are you looking to improve? How are you constantly trying to develop the product and bring it to the next level? >> Always we have, you know, the patient in mind. So Aizon is a patient-centric company. Everything that we do is to improve processes in order, at the end, to deliver the right medicine at the right time to the right patient. So this is how we are focusing all our efforts, in order to bring this opportunity to everyone around the world. For this reason, for example, we want to work with this project where we are delivering value to create vaccines for COVID-19, for example, everywhere. Just packaging the knowledge using AI. This is how we envision and how we are acting. >> Yeah. Well, you mentioned the importance of science and compliance. What do you think are the key themes that are the foundation of your company? >> The first thing is that we enjoy the task that we are doing. This is the first thing.
The other thing is that we are learning every day with our customers, and from real topics. So we are serving the patients. And everything that we do is enjoying science, enjoying how to achieve new breakthroughs in order to improve life in the factory. We know that at the end it will be delivered to the final patient. So enjoying making science and creating breakthroughs; being innovative. >> Right, and do you think, in the sense that we were lucky, in light of COVID, that we've already had these kinds of technologies moving in this direction for some time, that we were somehow able to mitigate the tragedy and the disaster of this situation because of these technologies? >> Sure. So we are lucky because of this technology, because we are breaking the distance, the physical distance, and we are putting together people, which was so difficult to do in all the different aspects. So, nowadays we are able to be closer to the patients, to the people, to the customer, thanks to these technologies. Yes. >> So now that we're also moving out of, I mean, hopefully out of this kind of COVID reality, what's next for Aizon? Do you see more collaboration? You know, what's next for the company? >> The next step for the company is to deliver AI models that are able to be encapsulated in the drug manufacturing for vaccines, for example. And that will be delivered with the full process: not only materials, equipment, personnel, recipes; the AI models will also go together as part of the recipe. >> Right, well, we'd love to hear more about your partnership with AWS. How did you get involved with them? And why them, and not another partner? >> Well, let me explain to you a secret. Seven years ago, we started with another top cloud provider, but we saw very soon that this other cloud provider was not well aligned with the GXP requirements. For this reason, we met with AWS. We went together to some seminars and conferences with top pharma communities and pharma organizations.
We went there to make speeches and talks. We felt that we fit very well together, because AWS has a GXP white paper describing very well how to rely on AWS components, one by one. So for us, this is a very good credential when we go to our customers. Do you know that when customers are acquiring and establishing the Aizon platform in their systems, they are auditing us? They are auditing Aizon. Well, we have to also audit AWS, because this is the normal chain in pharma supply. Well, that means that we need this documentation. We need all this transparency between AWS and our partners. This is the main reason. >> Well, this has been a really fascinating conversation, to hear how AI and cloud are revolutionizing pharma manufacturing at such a critical time for society all over the world. Really appreciate your insights, Toni Manzano, the chief science officer and co-founder of Aizon. I'm your host, Natalie Erlich, for theCUBE's presentation of the AWS Startup Showcase. Thanks very much for watching. (soft upbeat music)

Published Date : Jun 24 2021



Gil Geron, Orca Security | AWS Startup Showcase: The Next Big Thing in AI, Security, & Life Sciences


 

(upbeat electronic music) >> Hello, everyone. Welcome to theCUBE's presentation of the AWS Startup Showcase: The Next Big Thing in AI, Security, and Life Sciences. In this segment, we feature Orca Security as a notable trendsetter within, of course, the security track. I'm your host, Dave Vellante. And today we're joined by Gil Geron, who's the co-founder and Chief Product Officer at Orca Security. And we're going to discuss how to eliminate cloud security blind spots. Orca has a really novel approach to cybersecurity problems, without using agents. So welcome, Gil, to today's session. Thanks for coming on. >> Thank you for having me. >> You're very welcome. So Gil, you're a disruptor in security, and cloud security specifically, and you've created an agentless way of securing cloud assets. You call this side scanning. We're going to get into that and probe a little bit into the how and the why agentless is the future of cloud security. But I want to start at the beginning. What were the main gaps that you saw in cloud security that spawned Orca Security? >> I think that the main gaps that we saw when we started Orca were pretty similar in nature to gaps that we saw in legacy infrastructures, in more traditional data centers. But when you look at the cloud, when you look at the nature of the cloud, the ephemeral nature, the technical possibilities, and the disruptive way of working with a data center, we saw that the usage of traditional approaches like agents in these environments is lacking; it's not only not working as well as it did in the legacy world, it's also providing less value.
And in addition, we saw that the friction between the security team and the IT, the engineering, the DevOps in the cloud is much worse than it was, and we wanted to find a way for them to work together to bridge that gap and to actually allow them to leverage the cloud technology as it was intended, to gain superior security than what was possible in the on-prem world. >> Excellent, let's talk a little bit more about agentless. I mean, maybe we could talk a little bit about why agentless is so compelling. I mean, it's kind of obvious it's less intrusive. You've got fewer processes to manage, but how did you create your agentless approach to cloud security? >> Yes, so I think the basis of it all is around our mission and what we try to provide. We want to provide seamless security because we believe it will allow the business to grow faster. It will allow the business to adopt technology faster and to be more dynamic and achieve goals faster. And so we've looked at what the problems are, what the issues are that slow you down. And one of them, of course, is the fact that you need to install agents, that they cause a performance impact, that they are technically segregated from one another, meaning you need to install multiple agents and they need to somehow not interfere with one another. And we saw this friction causes organizations to slow down their move to the cloud or slow down the adoption of technology. In the cloud, it's not only having servers, right? You have containers, you have managed services, you have so many different options and opportunities. And so you need a different approach on how to secure that.
And so when we understood that this is the challenge, we decided to attack it using three pillars: one, trying to provide complete security and complete coverage with no friction; two, trying to provide comprehensive security, which is taking a holistic approach, a platform approach, and combining the data in order to provide you visibility into all of your security assets; and last but not least, of course, context awareness, meaning being able to understand and find the 1% that matters in the environment, so you can actually improve your security posture and improve your security overall. And to do so, you have to have a technique that does not involve agents. And so what we've done, we've found a way that utilizes the cloud architecture in order to scan the cloud itself. Basically, when you integrate Orca, you are able within minutes to understand, to read, and to view all of the risks. We are leveraging a technique that we are calling side scanning that uses the API. So it uses the infrastructure of the cloud itself to read the block storage device of every compute instance in the environment, and then we can deduce the actual risk of every asset. >> So that's a clever name, side scanning. Tell us a little bit more about that. Maybe you could double click on how it works. You've mentioned it's looking into block storage and leveraging the API, which is very clever, actually quite innovative. But help us understand in more detail how it works and why it's better than traditional tools that we might find in this space. >> Yes, so the way that it works is that by reading the block storage device, we are able to actually deduce what is running on your computer, meaning what OS, packages, and applications are running. And then by combining the context, meaning understanding what kind of services you have connected to the internet, what is the attack surface for these services? What will be the business impact?
Will there be any access to PII or any access to the crown jewels of the organization? You can not only understand the risks, you can also understand the impact, and then understand what should be our focus in terms of security of the environment. A differentiating factor is that we are doing it using the infrastructure itself: we are not installing any agents, and we are not running anything inside your environment. You do not need to change anything in your architecture or in the design of how you use the cloud in order to utilize Orca. Orca works in a pure SaaS way. And so it means that there is no impact, not on cost and not on performance of your environment, while using Orca. And so it reduces any friction that might happen with other parts of the organization when you improve your security in the cloud. >> Yeah, and no process management intrusion. Now, I presume, Gil, that you eat your own cooking, meaning you're using your own product. First of all, is that true? And if so, how has your use of Orca as chief product officer helped you scale Orca as a company? >> So it's a great question. I think that something that we understood early on is that there is quite a significant difference between the way you architect your security in the cloud and the way that things reach production, meaning there's a gap between how you imagine things will be, like with everything in life, and how they are in real life in production. And so, even though we have amazing customers that are extremely proficient in security and have thought of a lot of ways to secure the environment, and even though we, of course, are trying to secure our environment as much as possible, we are using Orca because we understand that no one is perfect. We are not perfect. My engineers might make mistakes, like in every organization. And so we are using Orca because we want to have complete coverage.
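As an aside, the side-scanning and context-awareness ideas Gil describes can be sketched in a few lines. The following is an illustrative outline only, not Orca's implementation: the snapshot call uses the real AWS EC2 API via boto3, but the vulnerability list, the scoring weights, and all field names are invented for the example.

```python
# Illustrative sketch of agentless "side scanning": snapshot a volume via the
# cloud API, analyze its contents offline, then prioritize findings by context.
# This is NOT Orca's implementation; the data and weights below are invented.

def snapshot_volume(volume_id: str, region: str = "us-east-1") -> str:
    """Create a point-in-time EBS snapshot to scan out-of-band (real AWS call)."""
    import boto3  # requires AWS credentials when actually run
    ec2 = boto3.client("ec2", region_name=region)
    snap = ec2.create_snapshot(VolumeId=volume_id, Description="side-scan")
    ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])
    return snap["SnapshotId"]

# After the snapshot's filesystem is read (e.g. via the EBS direct APIs),
# installed packages can be checked offline, with no agent on the instance:
KNOWN_VULNERABLE = {("openssl", "1.0.1"), ("log4j", "2.14.1")}  # hypothetical

def find_vulnerable(installed: dict) -> list:
    """Return (name, version) pairs present in a known-vulnerable set."""
    return sorted(p for p in installed.items() if p in KNOWN_VULNERABLE)

def priority(severity: float, internet_facing: bool, touches_pii: bool) -> float:
    """Toy context-aware score: scale raw severity by exposure and blast radius."""
    return severity * (2.0 if internet_facing else 1.0) * (1.5 if touches_pii else 1.0)

packages = {"openssl": "1.0.1", "nginx": "1.21.0"}
print(find_vulnerable(packages))  # [('openssl', '1.0.1')]
```

The context multiplier is the point of "the 1% that matters": a medium-severity flaw on an internet-facing asset that can reach PII can outrank a critical flaw on an isolated build server.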
We want to understand if we are making any mistakes. And sometimes the gap between the architecture and the hole that you have in your security could take years to appear. And you need a tool that will constantly monitor your environment. And so that's why we are using Orca all around from day one: not to find bugs or to do QA, we're doing it because we need security for our cloud environment that will provide these values. And so we've also passed compliance audits like SOC 2 and ISO using Orca, and it expedited and allowed us to do these processes extremely fast because of having all of these guardrails and metrics. >> Yeah, so, okay. So you recognized that you potentially had, and did have, that same problem as your customers. Has it helped you scale as a company? Obviously, but how has it helped you scale as a company? >> So it helped us scale as a company by increasing the trust, the level of trust customers have in Orca. It allowed us to adopt technology faster, meaning we need much less diligence or exploration of how to use technology, because we have these guardrails. So we can use the richness of the technology that we have in the cloud without the need to stop, to install agents, to try to re-architect the way that we are using the technology. We simply use it. We simply use the technology that the cloud offers, as it is. And so it allows you rapid scalability. >> It allows you to move at the speed of cloud. Now, I'm going to ask you, as a co-founder you've got to wear many hats: the co-founder and the leadership component there, and also the chief product officer. You've got to go out, you've got to get early customers, but even more importantly, you have to keep those customers, retention. So maybe you can describe how customers have been using Orca. What was their aha moment that you've seen customers react to when you showcase the new product?
And then how have you been able to keep them as loyal partners? >> So I think that we are very fortunate; we are blessed with our customers. Many of our customers are vocal about what they like about Orca. And I think that something that comes up a lot of times is that this is a solution they have been waiting for. I can't express how many times I could go on a call and a customer says, "I must say, I must share: this is a solution I've been looking for." And I think that in that respect, Orca is creating a new standard of what is expected from a security solution, because we are transforming security across the company from an inhibitor to an enabler. You can use the technology. You can use new tools. You can use the cloud as it was intended. And so (coughs) in one of these cases, we have a customer that has a lot of data, and they were all super scared about using S3 buckets. We've all heard about these incidents of S3 buckets being breached, or people connecting to an S3 bucket and downloading the data. So they had a policy saying, "S3 buckets should not be used. We do not allow any use of S3 buckets." And obviously you do need to use S3 buckets. It's a powerful technology. And so the engineering team in that customer environment simply installed a VM, installed an FTP server, with a very easy password for that FTP server. And obviously, two years later, someone also put all of the customer databases on that FTP server, open to the internet, open to everyone. And so I think it was a hard moment, for him and for us as well. He planned for no data to be leaked, but actually what happened was way worse. The data was open to the world, in a technology that has existed for a very long time and is probably being scanned by attackers all the time. But after that, he not only allowed them to use S3 buckets, because he knew that now he can monitor.
Now, you can understand that they are using the technology as intended, now that they are using it securely. It's not open to everyone; it's open in the right way. And there was no PII on that S3 bucket. And so I think the way he described it is that now, when he's coming to a meeting about things that need to be improved, people are waiting for this meeting, because he actually knows more than what they know about the environment. And I see it really so many times: a simple mistake, or something that looks benign, but when you look at the environment in a holistic way, when you are looking at the context, you understand that there is a huge gap that could be the breach. And another cool example was a case where a customer allowed access from a third-party service that everyone trusts to the crown jewels of the environment. And he did it in a very traditional way: he allowed a certain IP to be open to that environment. So overall it sounds like the correct way to go; you allow only a specific IP to access the environment. But what he failed to notice is that everyone in the world can register for free for this third-party service and access the environment from this IP. And so, even though it looks like you have access from a trusted third-party service, when it's a SaaS service it can actually mean that everyone can use it in order to access the environment. And using Orca, you saw immediately the access, you saw immediately the risk. And I see it time after time that people are simply using Orca to monitor, to guardrail, to make sure that the environment stays safe throughout time, and to communicate better in the organization to explain the risk in a very easy way.
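Misconfigurations like the open FTP server and the over-trusted IP range in these stories are exactly what automated posture checks catch. Below is a minimal, hand-rolled sketch of one such check, flagging S3 buckets whose ACL grants access to everyone. It is illustrative only, not Orca's product logic; a real scanner would also evaluate bucket policies and S3 Block Public Access settings.

```python
# Minimal sketch of a public-S3-bucket check. A real posture scanner would
# also evaluate bucket policies and S3 Block Public Access settings.
ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"
AUTH_USERS = "http://acs.amazonaws.com/groups/global/AuthenticatedUsers"

def is_public_grant(grant: dict) -> bool:
    """True if an S3 ACL grant opens the bucket to everyone."""
    grantee = grant.get("Grantee", {})
    return grantee.get("Type") == "Group" and \
           grantee.get("URI") in (ALL_USERS, AUTH_USERS)

def public_buckets(region: str = "us-east-1") -> list:
    """List buckets whose ACL contains a public grant (real AWS calls)."""
    import boto3  # requires AWS credentials when actually run
    s3 = boto3.client("s3", region_name=region)
    exposed = []
    for b in s3.list_buckets()["Buckets"]:
        acl = s3.get_bucket_acl(Bucket=b["Name"])
        if any(is_public_grant(g) for g in acl["Grants"]):
            exposed.append(b["Name"])
    return exposed

grant = {"Grantee": {"Type": "Group", "URI": ALL_USERS},
         "Permission": "READ"}
print(is_public_grant(grant))  # True
```

Running a check like this continuously, rather than once at design time, is the "constantly monitor your environment" point made above: the gap between architecture and production can take years to appear.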
And I would say the statistics show that within a few weeks, more than 85% of the different alerts and risks are being fixed, and I think it goes to show how effective it is in improving your posture, because people are taking action. >> Those are two great examples, and of course we have often said that the shared responsibility model is often misunderstood. And those two examples underscore the thinking that "oh, I hear all this, see all this press about S3," but it's up to the customer to secure the endpoint components, et cetera; configure it properly is what I'm saying. So what an unintended consequence, but Orca plays a role in helping the customer with their portion of that shared responsibility. Obviously AWS is taking care of their side. Now, as part of this program we ask a little bit of a challenging question to everybody, because, look, as a startup you want to do well, you want to grow a company, you want to have your employees grow and help your customers, and that's great, and grow revenues, et cetera, but we feel like there's more. And so we're going to ask you, because the theme here is all about cloud scale: what is your defining contribution to the future of cloud at scale, Gil? >> So I think that cloud has allowed a revolution of the data center, okay? The way that you are building services, the way that you are allowing technology to be more adaptive, dynamic, ephemeral, accurate, and you see that it is being adopted across all vendors, all types of industries, across the world. I think that Orca is the first company that allows you to use this technology to secure your infrastructure in a way that was not possible in the on-prem world, meaning that when you're using the cloud technology and you're using technologies like Orca, you're actually gaining superior security than what was possible in the pre-cloud world.
And I think that, in that respect, Orca is going hand in hand with the evolution, and actually revolutionizes the way that you expect to consume security, the way that you expect to get value from security solutions across the world. >> Thank you for that, Gil. And so we're at the end of our time, but we'll give you a chance for a final wrap-up. Bring us home with your summary, please. >> So I think that Orca is building the cloud security solution that actually works, with its innovative agentless approach to cybersecurity, to gain complete coverage, a comprehensive solution, and to understand the complete context of the 1% that matters in your security challenges across your data centers in the cloud. We are bridging the gap between the security teams and the business's need to grow, and to do so at the pace of the cloud. I think the approach of being able to install a security solution within minutes and get a complete understanding of your risk goes hand in hand with the way you expect to adopt cloud technology. >> That's great, Gil. Thanks so much for coming on. You guys are doing awesome work. Really appreciate you participating in the program. >> Thank you very much. >> And thank you for watching this AWS Startup Showcase. We're covering the next big thing in AI, Security, and Life Sciences on theCUBE. Keep it right there for more great content. (upbeat music)

Published Date : Jun 24 2021



Rohan D'Souza, Olive | AWS Startup Showcase | The Next Big Thing in AI, Security, & Life Sciences.


 

(upbeat music) (music fades) >> Welcome to today's session of theCUBE's presentation of the AWS Startup Showcase. I'm your host, Natalie Erlich. Today we're going to feature Olive, in the life sciences track. And of course, this is part of the future of AI, security, and life sciences. Here we're joined by our very special guest, Rohan D'Souza, the Chief Product Officer of Olive. Thank you very much for being with us. Of course, we're going to talk today about building the internet of healthcare. I do appreciate you joining the show. >> Thanks, Natalie. My pleasure to be here, I'm excited. >> Yeah, likewise. Well, tell us about AI and how it's revolutionizing health systems across America. >> Yeah, I mean, we're clearly living at this time of a lot of hype with AI, and there's a tremendous amount of excitement. Unfortunately for us, or, you know, depending on if you're an optimist or a pessimist, we had to wait for a global pandemic for people to realize that technology is here to really come to the aid of everybody in healthcare, not just on the consumer side, but on the industry side, and on the enterprise side of delivering better care. And it's truly an exciting time, but there's a lot of buzz, and we play an important role in trying to define that a little bit better, because you can't go too far today without hearing the term AI being used and misused in healthcare. >> Definitely. And also I'd love to hear about how Olive is fitting into this, and its contributions to AI in health systems. >> Yeah, so at its core, the industry thinks of us very much as an automation player. We've historically been in the trenches of healthcare, mostly on the provider side of the house, leveraging technology to automate a lot of the high-velocity, low-variability items. Our founding and our DNA is in this idea that we think it's unfair that healthcare relies on humans as being routers.
And we have looked to solve the problem of technology not talking to each other by using humans. And so we set out to really go into the trenches of healthcare and bring about core automation technology. And you might be sitting there wondering, well, why are we talking about automation under the umbrella of AI? And that's because we are challenging the very status quo of silo-based automation, and we're building what we say is the internet of healthcare. And more importantly, what we've done is we've brought in a human, very empathetic approach to automation, and we're leveraging technology by saying when one Olive learns, all Olives learn, so that we take advantage of the network effect of a single Olive worker in the trenches of healthcare, sharing that knowledge and wisdom both with her human counterparts, but also with her AI worker counterparts that are showing up to work every single day in some of the most complex health systems in this country. >> Right. Well, when you think about AI and, you know, computer technology, you don't exactly think of, you know, humanizing kind of potential. So how are you seeking to make AI really humanistic, and empathetic, potentially? >> Well, most importantly, the way we're starting with that is that we are treating Olive just like we would any single human counterpart. We don't want to think of this as just purely a technology player. Most importantly, healthcare is deeply rooted in this idea of investing in outcomes, and not necessarily investing in core technology, right? So we have learned that from the early days of us doing some really robust, integrated AI-based solutions, but we've humanized it, right? Take, for example: we treat Olive just like any other human worker. She shows up to work, she's onboarded, she has an obligation to her customers and to her human worker counterparts. And we care very deeply about the cost of the false positive that exists in healthcare, right?
And we do this in various different ways. Most importantly, we do it in an extremely transparent and interpretable way. By transparent I mean Olive provides deep insights back to her human counterparts in the form of reporting and status reports, and we even have a term internally that we call a sick day. So when Olive calls in sick, we don't just tell our customers Olive's not working today, we tell our customers that Olive is taking a sick day, just like a human worker might need to stay home and recover. In our case, we just happened to have to rewire a certain portal integration because a portal just went through a massive change, and Olive has to take a sick day in order to make that fix, right? And this is, you know, just helping our customers understand, or feel like they can achieve success with, AI-based deployments, and not sort of this robot hanging over them, where we're waiting for Skynet to come into place, and truly humanizing the aspects of AI in healthcare. >> Right. Well that's really interesting. How would you describe Olive's personality? I mean, could you attribute a personality? >> Yeah, she's unbiased, data-driven, extremely transparent in her approach, she's empathetic. There are certain days where she's direct, and there are certain ways where she could be quirky in the way she shares stuff. Most importantly, she's incredibly knowledgeable, and we really want to bring the knowledge that she has gained over the years of working in the trenches of healthcare to her customers. >> That sounds really fascinating, and I love hearing about the human side of Olive. Can you tell us about how this AI, though, is actually improving efficiencies in healthcare systems right now? >> Yeah, not too many people know that about a third of every single US healthcare dollar is spent on the administrative burden of delivering care. It's really, really unfortunate.
In the capitalistic world of the US healthcare system, there is a lot of the tail wagging the dog that ends up happening. I don't know the last time you've been through a process where you have to go and get an MRI or a CT scan, and your provider tells you that we first have to wait for the insurance company to give us permission to perform this particular task. And when you think about that, one, there's, you know, the tail-wagging-the-dog scenario, but two, the administrative burden to actually seek the approval for that test that your provider is telling you that you need to perform. Right? And what we've done, as humans, or as sort of systems, is we have just put humans in the supply chain of connecting the left side to the right side. So what we're doing is taking advantage of massive distributed cloud computing platforms, I mean, we're fully built on the AWS stack, we take advantage of things that we can very quickly stand up and spin up. And we're leveraging core capabilities in our computer vision, our natural language processing, to do a lot of the tasks that, unfortunately, we have relegated humans to do, and our goal is: can we allow humans to function at the top of their license? Irrespective of what the license is, right? It could be a provider, it could be somebody working in the trenches of revenue cycle management, or it could be somebody in a call center talking to a very anxious patient that just learned that he or she might need to take a test in order to rule out something catastrophic, like a very adverse diagnosis. >> Yeah, really fascinating. I mean, do you think that this is just like the tip of the iceberg? I mean, how much more potential does AI have for healthcare? >> Yeah, I think we're very much in the early, early days of AI being applied in a production, practical sense.
You know, AI has been talked about for many, many years in the trenches of healthcare. It has found its place very much in challenging status quos in research; it has struggled to find its way in the trenches of just the practical application of AI. And that's partly because, going back to the point that I raised earlier, the cost of the false positive in healthcare is really high. You know, it can't just be, I bought a pair of shoes online, and it recommended that I buy a pair of socks, and I happened to get the socks and I returned them because I realized that they're really ugly and hideous and I don't want them. In healthcare, you can't do that. Right? In healthcare you can't tell a patient or somebody else, oops, I really screwed up, I should not have told you that. So what that's meant for us, in the trenches of delivering AI-based applications, is we've been through a cycle of continuous pilots and proofs of concept. Now, though, with AI starting to take center stage, where a lot of what has been hardened in the research world can be applied toward the practical, to avoid the burnout and the sheer cost that the system is under, we're starting to see this real upward tick of people implementing AI-based solutions, whether it's for decision-making, whether it's for administrative tasks, drug discovery; it's just an amazing, amazing time to be at the intersection of practical application of AI and really, really good healthcare delivery for all of us. >> Yeah, I mean, that's really, really fascinating, especially your point on practicality. Now how do you foresee AI, you know, being able to be more commercial in its appeal? >> I think you have to have a couple of key wins under your belt, that's number one. And number two, the standard sort of outcomes-based publications that are required.
And I think we need real champions on the inside of systems to support the narrative that we as vendors are pushing heavily, on the AI-driven world or the AI-approachable world, and we're starting to see that right now. You know, it took a really, really long time for providers, first here in the United States, but now internationally, to adopt and move away from paper-based records to electronic medical records. You know, you still hear a lot of pain from people saying, oh my God, I used an EMR, but try to take the EMR away from them for a day or two, and you'll very quickly realize that life without an EMR is extremely hard right now. AI is starting to get to that point where, for us, you know, we always say that Olive needs to pass the Turing test. Right? So when you clearly get this sort of feeling that I can trust my AI counterpart, my AI worker, to go and perform these tasks, because I realize that, you know, as long as it's unbiased, as long as it's data-driven, as long as it's interpretable and something that I can understand, I'm willing to try this out on a routine basis. But we really, really need those champions on the internal side to promote this safe application of AI. >> Yeah. Well, just another thought here is, you know, looking at your website, you really focus on some of the broken systems in healthcare, and how Olive is uniquely prepared to shine the light on that, where others aren't. Can you just give us some insight into that? >> Yeah. You know, "shine the light" is a play on the fact that there's a tremendous amount of excitement in technology and AI in healthcare applied to the clinical side of the house. And it's the obvious place that most people would want to invest in, right? It's like, can I bring an AI-based technology to the clinical side of the house? Like decision support tools, drug discovery, clinical NLP, et cetera, et cetera.
But going back to what I said, 30% of what happens today in healthcare is on the administrative side. And so this is what we call, really, the dark side of healthcare, where it's not the most exciting place to do true innovation, because you're controlled very much by some big players in the house, and that's why we provide this insight, saying we can shine a light on a place that has typically been very dark in healthcare. It's around these mundane aspects of traditional operational and financial performance that don't get a lot of love from the tech community. >> Well, thank you, Rohan, for this fascinating conversation on how AI is revolutionizing health systems across the country, and also the unique role that Olive is now playing in driving those efficiencies that we really need. Really looking forward to our next conversation with you. And that was Rohan D'Souza, the Chief Product Officer of Olive, and I'm Natalie Erlich, your host for the AWS Startup Showcase, on theCUBE. Thank you very much for joining us, and we look forward to you joining us for the next session. (gentle music)

Published Date : Jun 24 2021



Zach Booth, Explorium | AWS Startup Showcase | The Next Big Thing in AI, Security, & Life Sciences.


 

(gentle upbeat music) >> Everyone, welcome to the AWS Startup Showcase presented by theCUBE. I'm John Furrier, host of theCUBE. We are here talking about the next big thing in cloud, featuring Explorium. For the AI track, we've got AI, cybersecurity, and life sciences. Obviously AI is hot, machine learning powering that. Today we're joined by Zach Booth, director of global partnerships and channels at Explorium. Zach, thank you for joining me today remotely. Soon we'll be in person, but thanks for coming on. We're going to talk about rethinking external data. Thanks for coming on theCUBE. >> Absolutely, thanks so much for having us, John. >> So you guys are a hot startup. Congratulations, we just wrote about it on SiliconANGLE, your new $75 million of fresh funding. So you're part of the Amazon Partner Network and growing like crazy. You guys have a unique value proposition: looking at external data and having a platform for advanced analytics and machine learning. Can you take a minute to explain what you guys do? What is this platform? What's the value proposition, and why do you exist? >> Bottom line, we're bringing context to decision-making. The premise of Explorium, and this is consistent with the framework of advanced analytics, is that we're helping customers to reach better, more relevant external data to feed into their predictive and analytical models. It's quite a challenge to actually integrate and effectively leverage data that's coming from beyond your organization's walls. It's manual, it's tedious, it's extremely time-consuming, and that's a problem. It's really a problem that Explorium was built to solve. And our philosophy is it shouldn't take so long. It shouldn't be such an arduous process, but it is. So we built a company, a technology, that's capable, for any given analytical process, of connecting a customer to relevant sources that are beyond their organization's walls.
And this really impacts decision-making by bringing variety and context into their analytical processes. >> You know, one of the things I see a lot in my interviews with theCUBE and talking to people in the industry is that everyone talks a big game about having some machine learning and AI. They're like, "Okay, I got all this cool stuff." But at the end of the day, people are still using spreadsheets. They're wrangling data. And a lot of it's dominated by these still fenced-off data warehouses, and you start to see the emergence of companies built on the cloud. I saw the Snowflake IPO; you're seeing a whole new shift of new brands emerging that are doing things differently, right? And because there's such a need to just move off the archaic spreadsheet and data presentation layers, which are slow, antiquated, outdated. How do you guys solve that problem? You guys are on the other side of that equation, you're on the new wave of analytics. What are you guys solving? How do you make that work? How do you get on that wave? >> So basically, the way Explorium sees the world, and I think that most analytical practitioners these days see it in a similar way, is that the key to any analytical problem is having the right data. And the challenge that we've talked about, and that we're really focused on, is helping companies reach that right data. Our focus is on the data part of data science. The science part is the algorithmic side. It's interesting: it was kind of the first frontier of machine learning, as practitioners and experts were focused on it, and cloud and compute really enabled that. The challenge today isn't so much "What's the right model for my problem?" but "What's the right data?" And that's the premise of what we do. Your model's only as strong as the data that it trains on. And it goes back to that concept of just bringing context to decision-making.
Within that framework that we talked about, the key is bringing comprehensive, accurate, and highly varied data into my model. But if my model is only being informed with internal data, which is wonderful data, but only internal, then it's missing context. And we're helping companies to reach that external variety through a pretty elegant platform that can connect the right data for my analytical process. And this really has implications across several different industries and a multitude of use cases. We're working with companies across consumer packaged goods, insurance, financial services, retail, e-commerce, even software as a service. And the use cases can range from fraud and risk to marketing and lifetime value. Now, why is this such a challenge today with maybe some antiquated or analog means? With a spreadsheet or with a rule-based approach we're pretty limited; it was an effective means of decision-making to generate and create actions, but it's highly limited in its ability to change, to be dynamic, to be flexible. With modeling and using data, it's really a huge arsenal that we have at our fingertips. The trick is extracting value from within it. There's obviously latent value within our org, but every day there's more and more data being created outside of our org. And that's a challenge to go out and get, to effectively filter, navigate, and connect to. So we've basically built the tech to help us navigate and query for any given analytical question: find me the right data, rather than starting with the problem and then thinking about the right data, which is kind of akin to going into a library and searching for a specific book, where you already know which book you're looking for. Instead we say: there's a world, a universe of data out there. I want to access it. I want to tap into what's right.
Can I use a tool that can effectively query all that data, find what's relevant for me, connect it and match it with my own, and distill signals or features from that data to provide more variety into my modeling efforts, yielding a robust decision as an output? >> I love that searchable kind of paradigm. I've got to ask you about one of the big things that I've heard people talk about, and I want to get your thoughts on this: how do I know if I even have the right data? Is the data addressable? Can I find it? Can it even be queried? How do you solve that problem for customers when they say, "I really want the best analytics, but do I even have the data, or is it the right data?" How do you guys look at that? >> So the way our technology was built, it's quite relevant for a few different profile types of customers. Some of these customers, really the genesis of the company, started with those cloud-based, model-driven-since-day-one organizations, and they're working with machine learning and they have models in production. They're quite mature, in fact. And the problem that they've been facing is, again, our models are only as strong as the data that they're training on. The only data that they're training on is internal data, and we're seeing diminishing returns from those decisions. So now suddenly we're looking for outside data, and we're finding that to effectively use outside data, we have to spend a lot of time. 60% of our time is spent thinking of data, going out and getting it, cleaning it, validating it, and only then can we actually train a model and assess if there's an ROI. That takes months. And if it doesn't push the needle from an ROI standpoint, then it's an enormous opportunity cost, which is very, very painful, which goes back to their decision-making: is it even worth it if it doesn't push the needle? That's why there had to be a better way.
And what we built is relevant for that audience, as well as for companies that are in the midst of their digital transformation. We're data rich but data science poor: we have lots of data, latent value to extract from within our own data, and at the same time tons of valuable data outside of our org. Instead of waiting 18 to 36 months to transform ourselves, get our infrastructure in place, get our data collection in place, and only then start having models in production based on our own data, you can now do this in tandem. And that's what we're seeing with a lot of our enterprise customers, using their analysts and their data engineers; some of them, in their innovation groups or centers of excellence, have a data science group as well. And they're using the platform to inform a lot of their different models across lines of business.
>> Well, it's amazing how fast organizations have been moving onto the cloud over the past year during COVID, and the fact that alternative or external data, depending on how you refer to it, has really, really blown up. And it's really exciting. This is coming in the form of data providers and data marketplaces, and more and more organizations are moving from rule-based decision-making to predictive decision-making, and that's exciting. Now, what's interesting about this company, Explorium: we're working with a lot of different types of customers, but our long game has a real high upside. There are more and more companies that are starting to use data and are transformed, or already are in the midst of their transformation. So they need outside data. And that challenge that I described exists for all of them. So how does it really work? Today, if I don't have outside data, I have to think. It's based on hypothesis, and it all starts with that hypothesis, which is already prone to error from the get-go. You and I might be domain experts for a given use case. Let's say we're focusing on fraud. We might think about a dozen different types of data sources, but going out and getting it, like I said, takes a lot of time; harmonizing it, cleaning it, and being able to use it takes even more time. And that's just for each one. So if we have to do that across dozens of data sources, it's going to take far too much time and the juice isn't worth the squeeze. And so I'm going to forego using that. There's a metaphor that I like to use when I try to describe what Explorium does to my mom: I basically compare it to buying your first home. It's a very, very important financial decision. When you're buying this home, you're thinking about all the different inputs in your decision-making. It's not just about the blueprint of the house and how many rooms and the criteria you're looking for.
You're also thinking about external variables. You're thinking about the school zone, the construction, the property value, alternative or similar neighborhoods. That's probably your most important financial decision, or one of the largest at least. A machine learning model in production is an extremely important and expensive investment for an organization. Now, the problem is, as a consumer buying a home, we have all this data at our fingertips to find out all of those external inputs. Organizations don't, which struck me as kind of crazy when I first got into this world. And so they're making decisions with their first-party data only. First-party data is wonderful data. It's the best, it's representative, it's high quality, it's high value for their specific decision-making and use cases, but it lacks context. And there's so much context, in the form of location-based data and business information, that can inform decision-making but isn't being used. It translates to sub-optimal decision-making, let's say. >> Yeah, and I think one of the insights around looking at signal data in context is that by merging it with the first party, it creates a huge value window. It gives you observational data, maybe potentially insights into customer behavior. So totally agree, I think that's a huge observation. You guys are definitely on the right side of history here. I want to get into how it plays out for the customer. You mentioned the different industries; obviously data's in every vertical. And vertical specialization with data is very metadata-driven. I mean, metadata in oil and gas is different than in fintech. I mean, some overlap, but for the most part you've got to have that acute context for each one. How are you guys working? Take us through an example of someone getting it right, getting the right setup. Take us through the use case of how someone onboards Explorium, how they put it to use, and what are some of the benefits?
>> So let's break it down into kind of a three-step phase, and let's use that example of fraud from earlier. An organization would basically have past historical data on which customers actually turned out to be fraudulent. So this use case, and it's a core business problem, comes with an intention to reduce that fraud. They would basically provide, going with your description earlier, something similar to an Excel file. This can be pulled from any database out there, we're working with loads of them, and they would provide what's called training data. This training data is their historical data, and it would have as an output the outcome, the conclusion: was this business fraudulent or not? Yes or no. Binary. The platform would use that data to train a model with external context in the form of enrichments. These data enrichments at the end of the day are important, they're relevant, but their purpose is to generate signals. So to your point, signals are the bottom line: what everyone's trying to achieve, identify, discover, and even engineer, by using data that they have and data that they have yet to integrate with. So the platform would connect to your data, infer and understand the meaning of that data. And based on this matching of internal plus external context, the platform automates the process of distilling signals, or, in machine learning, what's referred to as features. And these features are really the bread and butter of your modeling efforts. If you can leverage features that are coming from data that's outside of your org, and they're quantifiably valuable, which the platform measures, then you're putting yourself in a position to generate an edge in your modeling efforts. Meaning now, you might reduce your fraud rate. So your customers get a much better, more compelling offer or service or price point. It impacts your business in a lot of ways.
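The three-step flow described above (historical training data with a binary fraud label, enrichment with external context, and signals distilled from the match) can be sketched in plain Python. Everything below is illustrative: the field names, the external attributes, and the join key are invented for the example, not Explorium's actual schema or API.

```python
# Internal "training data": historical outcomes, one row per business,
# with the binary fraud label as the output column.
internal = [
    {"business_id": "b1", "monthly_volume": 1200, "fraud": 1},
    {"business_id": "b2", "monthly_volume": 800, "fraud": 0},
]

# External context keyed by the same entity. In practice, producing this
# keyed table is the hard entity-resolution step the platform automates.
external = {
    "b1": {"years_registered": 0.5, "web_presence": 0},
    "b2": {"years_registered": 9.0, "web_presence": 1},
}

def enrich(rows, context):
    """Join internal rows with external signals; unmatched rows get None."""
    out = []
    for row in rows:
        extra = context.get(row["business_id"], {})
        merged = {**row,
                  "years_registered": extra.get("years_registered"),
                  "web_presence": extra.get("web_presence")}
        out.append(merged)
    return out

enriched = enrich(internal, external)
print(enriched[0]["years_registered"])  # 0.5
```

In practice the enriched table, now carrying external feature columns alongside the label, would feed whatever classifier you train downstream.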
What Explorium is bringing to the table in terms of value is a single access point to a huge universe of external data. It expedites your time to value. So rather than data analysts, data engineers, and data scientists spending a significant amount of time on data preparation, they can now spend most of their time on feature or signal engineering. That's the more fun and interesting part, less so the boring part. And they can scale their modeling efforts. So: time to value, access to a huge universe of external context, and scale. >> So I see two things here. Just make sure I get this right, 'cause it sounds awesome. So one, the core assets on the engineering side, whether it's platform engineering or data engineering, are more optimized for getting more signal, which is more impactful for context acquisition, looking at contexts that might have a business outcome, versus wrangling and doing mundane heavy lifting. >> Yeah, so with it, sorry, go ahead. >> And the second one is you create a democratization for analysts or business people who are used to dealing with spreadsheets, who just want to kind of play with data and get a feel for it, or experiment, do querying, try to match planning with policy - >> Yeah, so the way I like to communicate this is Explorium's this one-two punch. It's got this technology layer that provides entity resolution, so matching with external data, which otherwise is a manual endeavor. Explorium's automated that piece. The second is a huge universe of outside data. So this circumvents procurement. You don't have to go out and spend all of these one-off efforts of time finding data, organizing it, cleaning it, etc.
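The entity-resolution layer described here (matching your records to external data, otherwise a manual endeavor) can be illustrated with a toy name-matching sketch using only Python's standard library. Real resolution combines many more signals (addresses, domains, phone numbers); the catalog, names, and threshold below are invented for the example and are not how any particular vendor does it.

```python
from difflib import SequenceMatcher

def normalize(name):
    """Crude canonical form: lowercase, strip punctuation and 'Inc'."""
    return name.lower().replace(",", "").replace(".", "").replace(" inc", "").strip()

def best_match(name, catalog, threshold=0.7):
    """Return the catalog entry most similar to `name`, or None if no
    candidate clears the similarity threshold."""
    norm = normalize(name)
    scored = [(SequenceMatcher(None, norm, normalize(c)).ratio(), c)
              for c in catalog]
    score, candidate = max(scored)
    return candidate if score >= threshold else None

catalog = ["Acme Corporation", "Globex LLC", "Initech"]
print(best_match("ACME Corp.", catalog))   # Acme Corporation
print(best_match("Umbrella Co", catalog))  # None (below threshold)
```

The threshold is the usual precision/recall dial: raise it and you match fewer, cleaner entities; lower it and coverage grows along with false matches.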
You can use Explorium as your single access point and gateway to external data and match it with your own, so this will accelerate your time to value and, ultimately, the number of valuable signals that you can discover and leverage through the platform and feed into your own pipelines or whatever system or analytical need you have. >> Zach, great stuff. I love talking with you and I love the hot startup action here, 'cause again, you're on the net new wave here. Like anything new, I was just talking to a colleague here. (indistinct) When you have something new, it's like driving a car for the first time. You need someone to give you some driving lessons, or figure out how to operationalize it, or take advantage of the one-two punch as you pointed out. How do you guys get someone up and running? 'Cause let's just say, I'm like, okay, I'm bought into this. So no-brainer, you got my attention. I still don't understand. Do you provide a marketplace of data? Do I need to get my own data? Do I bring my own data to the party? Do you guys provide relationships with other data providers? How do I get going? How do I drive this car? How do you answer that?
So how a business would start working with us would typically be with a use case that has high business value. Maybe this could be a fraud or risk use case in a B2B, or even B2SMB, context. This might be a marketing use case: we're talking about LTV modeling, lookalike modeling, lead acquisition and generation for CPGs, and field sales optimization. The platform would explore and understand your data, enrich that data automatically, generate and discover new signals from external data plus your own, and feed this into either a model that you have in-house or end to end in the platform itself. We provide customer success to help you build out your first model, perhaps, and hold your hands through that process. But typically, after a few months' time, most of our customers are building and running multiple models in production on their own. And that's really exciting, because we're helping organizations move away from more rule-based decision-making and being their bridge to data science. >> Awesome. I noticed that in your title you handle global partnerships and channels, which I'm assuming means you guys have a network and ecosystem you're working with. What are some of the partnerships and channel relationships that you have that you bring to bear in the marketplace? >> So data and analytics, this space is very much an ecosystem. Our customers are working across different clouds, working with all sorts of vendors and technologies. Basically they have a pretty big stack. We're a part of that stack, and we want to symbiotically play within our customer's stack so that we can contribute value whether they sit here, there, or in another place. Our partners range from consulting and system integration firms, those that perhaps are building out the blueprint for a digital transformation, to those actually implementing that digital transformation. And we contribute value in both of these cases as a technology innovation layer in our product.
And a customer would then consume Explorium afterwards, after that transformation is complete, as a part of their stack. We're also working with a lot of the different cloud vendors. Our customers are all cloud-based, and data enrichment is becoming more and more relevant alongside some wonderful machine-learning tools, be they AutoML offerings or even the data marketplaces that are popping up, which is very exciting. What we're bringing to the table as an edge is accelerating the connection between the data that I think I want as a company and how to actually extract value from that data. Being part of this ecosystem means that we can be working with, and should be working with, a lot of different partners to contribute incremental value to our end customers. >> Final question I want to ask you is if I'm in a conference room with my team and someone says, "Hey, we should be rethinking our external data," what would I say? How would I pound my fist on the table or raise my hand and say, "Hey, I have an idea, we should be thinking this way"? What would be my argument to the team, to re-imagine how we deal with external data? >> So it might be a scenario that rather than banging your hands on the table, you might be banging your heads on the table, because it's such a challenging endeavor today. Companies have to think about: what's the right data for my specific use cases? I need to validate that data. Is it relevant? Is it real? Is it representative? Does it have good coverage, good depth, and good quality? Then I need to procure that data, and this is about getting a license for it. I need to integrate that data with my own; that means I need to have some in-house expertise to do so. And then of course, I need to monitor and maintain that data on an ongoing basis.
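The validate step just listed (is it relevant, real, representative, with good coverage, depth, and quality?) lends itself to a few automatable pre-flight checks. The sketch below scores one hypothetical external source on match coverage, completeness, and freshness; the field names, thresholds, and data are invented for illustration, not any vendor's actual metrics.

```python
from datetime import date

def evaluate_source(internal_keys, external_rows, today, max_age_days=90):
    """Return simple health metrics for one candidate external source."""
    matched = [r for r in external_rows if r["key"] in internal_keys]
    coverage = len(matched) / len(internal_keys)           # match rate
    filled = [r for r in matched if r["value"] is not None]
    completeness = len(filled) / len(matched) if matched else 0.0
    age_days = (today - max(r["as_of"] for r in external_rows)).days
    return {"coverage": coverage,
            "completeness": completeness,
            "fresh": age_days <= max_age_days}

internal_keys = {"b1", "b2", "b3", "b4"}
external_rows = [
    {"key": "b1", "value": 3.2, "as_of": date(2021, 6, 1)},
    {"key": "b2", "value": None, "as_of": date(2021, 6, 1)},
    {"key": "b9", "value": 1.1, "as_of": date(2021, 5, 15)},
]
report = evaluate_source(internal_keys, external_rows, today=date(2021, 6, 24))
print(report)  # {'coverage': 0.5, 'completeness': 0.5, 'fresh': True}
```

Running checks like these before procurement is a cheap way to screen out sources whose match rate or staleness would sink the model anyway.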
All of this is a pretty big thing to undertake and undergo, and having a partner to facilitate that external data integration and ongoing refresh and monitoring, and being able to trust that this is all harmonized and high quality, and that I can find the valuable sources without having to manually pick and choose and try to discover them myself, is a huge value add, particularly the larger the organization or partner. Because there's so much data out there, and there's a lot of noise out there too. And so if I can, through a single partner or access point, tap into that data and quantify what's relevant for my specific problem, then I'm putting myself in a really good position and optimizing the allocation of my very expensive and valuable data analyst and engineering resources. >> Yeah, I think one of the things you mentioned earlier was a huge point and a good call-out: it goes beyond the first-party data. Even with just a first-party, internal view, some of the best, most successful innovators that we've been covering at cloud scale are extending their first-party data to external providers. So they're in the value chains of solutions that share their first-party data with other suppliers. And so that's, again, more of an extension of the first-party data. You're taking it to a whole 'nother level: there's another external set of data beyond it that's even more important. I think this is a fascinating growth area and I think you guys are onto it. Great stuff. >> Thank you so much, John. >> Well, I really appreciate you coming on, Zach. Final word, give a quick plug for the company. What are you up to, and what's going on? >> What's going on with Explorium? We are growing very fast. We're a very exciting company. I've been here since the very early days, and I can tell you that we have a stellar working environment, a very, very strong, down-to-earth, high-work-ethic culture.
Our offices in San Mateo, New York, and Tel Aviv are growing rapidly. As you mentioned earlier, we raised our series C, which brings Explorium's total raised to, I think, $127 million over the past two years and some change. And whether you want to partner with Explorium, work with us as a customer, or join us as an employee, we welcome that. And I encourage everybody to go to explorium.ai. Check us out, read some of the interesting content there around data science, around the processes, around the business outcomes that a lot of our customers are seeing, as well as joining a free trial. So you can check out the platform and everything it has to offer, from the machine learning engine to the signal studio, as well as what type of information might be relevant for your specific use case. >> All right, Zach, thanks for coming on. Zach Booth, director of global partnerships and channels at Explorium. The next big thing in cloud, featuring Explorium, as part of our AI track. I'm John Furrier, host of theCUBE. Thanks for watching.

Published Date : Jun 24 2021

HPE Accelerating Next | HPE Accelerating Next 2021


 

momentum is gathering [Music] business is evolving more and more quickly moving through one transformation to the next because change never stops it only accelerates this is a world that demands a new kind of compute deployed from edge to core to cloud compute that can outpace the rapidly changing needs of businesses large and small unlocking new insights turning data into outcomes empowering new experiences compute that can scale up or scale down with minimum investment and effort guided by years of expertise protected by 360-degree security served up as a service to let it control own and manage massive workloads that weren't there yesterday and might not be there tomorrow this is the compute power that will drive progress giving your business what you need to be ready for what's next this is the compute power of hpe delivering your foundation for digital transformation welcome to accelerating next thank you so much for joining us today we have a great program we're going to talk tech with experts we'll be diving into the changing economics of our industry and how to think about the next phase of your digital transformation now very importantly we're also going to talk about how to optimize workloads from edge to exascale with full security and automation all coming to you as a service and with me to kick things off is neil mcdonald who's the gm of compute at hpe neil always a pleasure great to have you on it's great to see you dave now of course when we spoke a year ago you know we had hoped by this time we'd be face to face but you know here we are again you know this pandemic it's obviously affected businesses and people in in so many ways that we could never have imagined but in the reality is in reality tech companies have literally saved the day let's start off how is hpe contributing to helping your customers navigate through things that are so rapidly shifting in the marketplace well dave it's nice to be speaking to you again and i look forward to being 
able to do this in person some point the pandemic has really accelerated the need for transformation in businesses of all sizes more than three-quarters of cios report that the crisis has forced them to accelerate their strategic agendas organizations that were already transforming or having to transform faster and organizations that weren't on that journey yet are having to rapidly develop and execute a plan to adapt to this new reality our customers are on this journey and they need a partner for not just the compute technology but also the expertise and economics that they need for that digital transformation and for us this is all about unmatched optimization for workloads from the edge to the enterprise to exascale with 360 degree security and the intelligent automation all available in that as a service experience well you know as you well know it's a challenge to manage through any transformation let alone having to set up remote workers overnight securing them resetting budget priorities what are some of the barriers that you see customers are working hard to overcome simply per the organizations that we talk with are challenged in three areas they need the financial capacity to actually execute a transformation they need the access to the resource and the expertise needed to successfully deliver on a transformation and they have to find the way to match their investments with the revenues for the new services that they're putting in place to service their customers in this environment you know we have a data partner called etr enterprise technology research and the spending data that we see from them is it's quite dramatic i mean last year we saw a contraction of roughly five percent of in terms of i.t spending budgets etc and this year we're seeing a pretty significant rebound maybe a six to seven percent growth range is the prediction the challenge we see is organizations have to they've got to iterate on that i call it the forced march to digital 
transformation, and yet they also have to balance their investments, for example, at the corporate headquarters, which have kind of been neglected. Is there any help in sight for the customers that are trying to reduce their spend and also take advantage of their investment capacity? >> I think you're right. Many businesses are understandably reluctant to loosen the purse strings right now, given all of the uncertainty, and often a digital transformation is viewed as a massive upfront investment that will pay off in the long term, and that can be a real challenge in an environment like this. But it doesn't need to be. We work through HPE Financial Services to help our customers create the investment capacity to accelerate the transformation, often by leveraging assets they already have and helping them monetize them, in order to free up the capacity to accelerate what's next for their infrastructure and for their business. >> So can we drill into that? I wonder if we could add some specifics. I mean, how do you ensure a successful outcome? What are you really paying attention to as those sort of markers for success? >> Well, when you think about the journey that an organization is going through, it's tough to be able to run the business and transform at the same time, and one of the constraints is having the people with enough bandwidth and enough expertise to be able to do both. So we're addressing that in two ways for our customers. One is by helping them confidently deploy new solutions which we have engineered, leveraging decades of expertise and experience in engineering, to deliver those workload-optimized portfolios that take the risk and the complexity out of assembling some of these solutions, and give them a pre-packaged, validated, supported solution intact that simplifies that work for them. But in other cases, we can enhance our customers' bandwidth by bringing them HPE Pointnext experts, with all of the capabilities we have, to help them plan, deliver, and support these IT projects and
transformations. Organizations can get on a faster track of modernization, getting greater insight and control as they do it. We're a trusted partner to get the most for a business that's on this journey, in making these critical compute investments to underpin the transformations, and whether that's planning, to optimizing, to safe retirement at end of life, we can bring that expertise to bear, to help amplify what our customers already have in-house, and help them accelerate and succeed in executing these transformations. >> Thank you for that, Neil. So let's talk about some of the other changes that customers are seeing. The cloud has obviously forced customers and their suppliers to really rethink how technology is packaged, how it's consumed, how it's priced. I mean, there's no doubt in that. Take GreenLake; it's obviously a leading example of a pay-as-you-scale infrastructure model, and it could be applied on-prem or hybrid. Can you maybe give us a sense as to where you are today with GreenLake? >> Well, it's really exciting. You know, from our first pay-as-you-go offering back in 2006, 15 years ago, to the introduction of GreenLake, HPE has really been paving the way on consumption-based services, through innovation and partnership, to help meet the exact needs of our customers. HPE GreenLake provides an experience that's the best of both worlds: a simple pay-per-use technology model, with the risk management of data that's under our customers' direct control. And it lets customers shift to everything-as-a-service in order to free up capital and avoid that upfront expense that we talked about. They can do this anywhere, at any scale or any size, and really, HPE GreenLake is the cloud that comes to you. >> Like that. So we've touched a little bit on how customers can maybe overcome some of the barriers to transformation. What about the nature of transformations themselves? I mean, historically, there was a lot of lip service paid to digital, and there's a lot of complacency, frankly. But
you know that COVID wrecking ball meme that so well describes that if you're not a digital business, essentially you're going to be out of business. So, Neil, as things have evolved, how has HPE addressed the new requirements? >> Well, the new requirements are really about what customers are trying to achieve, and four very common themes that we see are: enabling the productivity of a remote workforce, which was never really part of the plan for many organizations; being able to develop and deliver new apps and services, in order to service customers in a different way or drive new revenue streams; being able to get insights from data, so that in these tough times they can optimize their business more thoroughly; and then finally, the efficiency of an agile, hybrid private cloud infrastructure, especially one that now has to integrate the edge. And we're really thrilled to be helping our customers accelerate all of these and more with HPE compute. >> I want to double-click on that remote workforce productivity. I mean, again, the surveys that we see, 46 percent of the CIOs say that productivity improved with the whole work-from-home, remote-work trend, and on average those improvements were in the four percent range, which is absolutely enormous. I mean, when you think about that, how does HPE specifically, you know, help here? What do you guys do? >> Well, every organization in the world has had to adapt to a different style of working, with more remote workers than they had before, and for many organizations that's going to become the new normal, even post-pandemic. Many IT shops are not well equipped for the infrastructure to provide that experience, because if all your workers are remote, the resiliency of that infrastructure, the latencies of that infrastructure, the reliability, are all incredibly important. So we provide comprehensive solutions, expertise, and as-a-service options that support that remote work through virtual desktop infrastructure, or VDI, so that our customers can support that
new normal of virtual engagements, online everything, across industries, wherever they are. And that's just one example of many of the workload-optimized solutions that we're providing for our customers. It's about taking out the guesswork and the uncertainty in delivering on these changes that they have to deploy as part of their transformation, and we can deliver that range of workload-optimized solutions across all of these different use cases because of our broad range of innovation in compute platforms, which span from the ruggedized edge, to the data center, all the way up to exascale and HPC. >> I mean, that's key if you're trying to effect the digital transformation and you don't have to fine-tune, you know, basically build your own optimized solutions. If I can buy that, rather than having to build it, and rely on your R&D, you know, that's key. What else is HPE doing, you know, to deliver things: new apps, new services, you know, your microservices, containers, the whole developer trend? What's going on there? >> Well, that's really key, because organizations are all seeking to evolve their mix of business, and bring new services and new capabilities: new ways to reach their customers, new ways to reach their employees, new ways to interact in their ecosystem, all digitally. And that means app development, and many organizations, of course, are embracing container technology to do that today. So with the HPE Container Platform, our customers can realize that agility and efficiency that comes with containerization, and use it to provide insights to their data. More and more, that data, of course, is being machine-generated, or generated at the edge or the near edge, and it can be a real challenge to manage that data holistically and not have silos and islands. And HPE Ezmeral Data Fabric speeds the agility and access to data, with a unified platform that can span across the data centers, multiple clouds, and even the edge, and that enables data analytics that can create insights, powering a data-driven,
production-oriented, cloud-enabled analytics and AI, available any time, anywhere, at any scale. And it's really exciting to see the kind of impact that that can have in helping businesses optimize their operations in these challenging times. >> You've got to go where the data is, and the data is distributed, it's decentralized, so I like the Ezmeral vision and execution there. So that all sounds good, but with digital transformation, you're going to see more compute in hybrid deployments. You mentioned edge, so the surface area, it's like the universe: it's ever-expanding. You mentioned, you know, remote work and work from home before. So I'm curious: where are you investing your resources from a cybersecurity perspective? What can we count on from HPE there? >> Well, you can count on continued leadership from HPE as the world's most secure industry-standard server portfolio. We provide an enhanced and holistic, 360-degree view of security, that begins in the manufacturing supply chain and concludes with a safeguarded end-of-life decommissioning. And of course, we've long set the bar for security with our work on Silicon Root of Trust, and we're extending that to the application tier. But in addition to the security, customers that are building this modern hybrid or private cloud, including the integration of the edge, need other elements too. They need an intelligent, software-defined control plane, so that they can automate their compute fleets from all the way at the edge to the core. And while scale and automation enable efficiency, all private cloud infrastructures are competing with web-scale economics, and that's why we're democratizing web-scale technologies like Pensando, to bring web-scale economics and web-scale architecture to the private cloud. Our partners are so important in helping us serve our customers' needs. >> Yeah, I mean, HPE has really upped its ecosystem game since the middle of last decade, when you guys reorganized. You became, like, even more partner-friendly.
So maybe give us a preview of what's coming next in that regard from today's event. >> Well, Dave, we're really excited to have HPE's CEO, Antonio Neri, speaking with Pat Gelsinger from Intel, and later Lisa Su from AMD, and later I'll have the chance to catch up with John Chambers, the founder and CEO of JC2 Ventures, to discuss the state of the market today. >> Yeah, I'm jealous. You guys have some good interviews coming up. Neil, thanks so much for joining us today on the virtual Cube. You've really shared a lot of great insight into how HPE is partnering with customers. It's always great to catch up with you. Hopefully we can do so face to face, you know, sooner rather than later. >> Well, I look forward to that, and, you know, no doubt our world has changed, and we're here to help our customers and partners with the technology, the expertise, and the economics they need for these digital transformations. And we're going to bring them unmatched workload optimization, from the edge to exascale, with that 360-degree security, with the intelligent automation, and we're going to deliver it all as an as-a-service experience. We're really excited to be helping our customers accelerate what's next for their businesses, and it's been really great talking with you today about that, Dave. Thanks for having me. >> You're very welcome. It's been super, Neil. And I actually, you know, had the opportunity to speak with some of your customers about their digital transformation and the role that HPE plays there, so let's dive right in. We're here on theCUBE, covering HPE Accelerating Next, and with me is Roel Sijstermans, who is the head of IT at the Netherlands Cancer Institute, also known as NKI. Welcome, Roel. >> Thank you very much. Great to be here. >> Hey, what can you tell us about the Netherlands Cancer Institute? Maybe you could talk about your core principles, and also, if you could, weave in your specific areas of expertise. >> Yeah, maybe first an introduction to the Netherlands Cancer Institute. We are one of the top 10 comprehensive
cancer centers in the world, and what we do is we combine a hospital for treating patients with cancer and a research institute under one roof. So discoveries we make within the research, we can easily bring them back to the clinic, and vice versa. So we have about 750 researchers and about 3,000 other employees, doctors, nurses, and my role is to facilitate them at their best with IT. >> Got it. So, I mean, everybody talks about digital transformation. To us, it all comes down to data. So I'm curious how you collect and take advantage of medical data specifically, to support NKI's goals; maybe some of the challenges that your organization faces with the amount of data, the speed of data coming in, just, you know, the complexities of data. How do you handle that? >> Yeah, it's a challenge. We have a really large amount of data, so we produce terabytes a day, and we have stored now more than one petabyte of data at this moment. And, yeah, the challenge is to reuse the data optimally for research, and to share it with other institutions. So that needs a flexible infrastructure: a really fast network, a big data storage environment. But the real challenge is not so much the IT, but more the quality of the data. So we have a lot of medical systems, all producing those data, and how do we combine them, and, yeah, get the data FAIR: findable, accessible, interoperable, and reusable for research purposes. So I think that's the main challenge: the quality of the data. >> Yeah, very common themes that we hear from other customers. I wonder if you could paint a picture of your environment, and maybe you can share where HPE solutions fit in; what value they bring to your organization's mission. >> Yeah, I think it brings a lot of flexibility. So what we did with HPE is that we developed a software-defined data center, and then a virtual workplace for our researchers and doctors, and that's based on the HPE
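The FAIR goal described here (findable, accessible, interoperable, reusable) is often operationalized as metadata checks on each dataset. The sketch below is a minimal, illustrative set of required fields, not NKI's actual schema; every field name and the example record are invented.

```python
# Illustrative minimal FAIR-style metadata check (not NKI's actual
# schema): a dataset record is flagged if it lacks the fields that make
# it findable, accessible, interoperable, and reusable.
REQUIRED = {
    "identifier": "findable: a persistent, unique ID",
    "access_url": "accessible: where/how the data can be retrieved",
    "format": "interoperable: a standard, documented format",
    "license": "reusable: terms under which reuse is allowed",
}

def missing_fair_fields(record: dict) -> list:
    """Return the FAIR-related fields absent from a dataset record."""
    return [f for f in REQUIRED if not record.get(f)]

record = {"identifier": "doi:10.1234/example", "format": "DICOM"}
print(missing_fair_fields(record))  # ['access_url', 'license']
```

The point of a check like this is exactly what's said above: the hard part is not storing the petabyte, it's making sure each of the many medical systems producing data emits it with enough consistent metadata to be reusable.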
infrastructure. And what we wanted to build is something that meets the needs of doctors and nurses, but also the researchers: two kind of different blood groups, with different needs. But we wanted to create one infrastructure, because we wanted to make the connection between the hospital and the research; that's more important. So HPE helped us not only with the infrastructure itself, but also with designing the whole architecture of it. And, for example, what we did is we bought a lot of hardware, and the hardware is really doing its job between nine and five, when everyone is working within the institution. But all the other time, in evening and night hours, and also the redundant environment we have for our healthcare, that doesn't do much, more or less, in those dark hours. So what we created, together with NVIDIA and HPE and VMware, is what we call "video by day, compute by night." So we reuse those servers and that GPU capacity for computational research jobs within the research. >> That's ingenious. And so we're talking, you said, you know, a lot of hardware: ProLiant, I think, Synergy, Aruba networking is in there. How are you using this environment? Actually, the question really is, when you think about NKI's digital transformation, I mean, is this sort of the fundamental platform that you're using? Maybe you could describe that. >> Yeah, it's the fundamental platform to work on. And what we see is that we have now everything in place for it, but the real challenge is the next steps we are in. So we have a software-defined data center, we are cloud-ready, so the next step is to make the connection to the cloud, to give more automation to our researchers, so they don't have to wait a couple of weeks for IT to do it, but they can do it themselves with a couple of
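The "video by day, compute by night" idea can be sketched as a simple time-based policy that flips a pool of GPU servers between desktop duty and batch research jobs. Everything below (the 09:00-17:00 window and the two mode names) is a hypothetical illustration; NKI's actual orchestration with NVIDIA, HPE, and VMware is not detailed in the conversation.

```python
from datetime import time

# Hypothetical sketch of a "video by day, compute by night" policy:
# the same GPU nodes serve interactive desktop workloads during working
# hours and are handed to batch research jobs outside them.
WORKDAY_START = time(9, 0)
WORKDAY_END = time(17, 0)

def mode_for(now: time) -> str:
    """Return which workload class should own the GPU pool right now."""
    if WORKDAY_START <= now < WORKDAY_END:
        return "vdi"          # doctors, nurses, researchers at their desks
    return "batch-compute"    # overnight computational research jobs

print(mode_for(time(10, 30)))  # vdi
print(mode_for(time(23, 0)))   # batch-compute
```

A real deployment would also have to drain sessions gracefully at the boundary and checkpoint long-running research jobs, but the core economics are exactly this: the same hardware earning its keep around the clock.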
clicks. So I think the basics are there: we are really flexible, and we have a lot of opportunities for automation, for example. But the next step is to create that business value, really, for our employees. >> That's a great story, and a very important mission. Really fascinating stuff. Thanks for sharing this with our audience today. Really appreciate your time. >> Thank you very much. >> Okay, this is Dave Vellante with theCUBE. Stay right there for more great content. You're watching Accelerating Next, from HPE. >> I'm really glad to have you with us today, John. I know you stepped out of vacation, so thanks very much for joining us. >> Neil, it's great to be joining you from Hawaii, and I love the partnership with HPE and the way you're reinventing an industry. >> Well, you've always excelled, John, at catching market transitions, and there are so many transitions and paradigm shifts happening in the market, and tech specifically, right now. As you see companies rush to accelerate their transformation, what do you see as the keys to success? >> Well, I think you're seeing actually an acceleration following the COVID challenges that all of us faced, and I wasn't sure that would happen. It's probably at three times the pace it was before. There was a discussion point about how quickly companies need to go digital; that's no longer a discussion point. Almost all companies are moving with tremendous speed on digital, and it's the ability, as the cloud moves to the edge, with compute and security at the edge, and how you deliver these services to where the majority of applications reside, that are going to determine, I think, the future of the next generation of company leadership. And it's the area, Neil, that we're working together on in many, many ways. So I think it's about innovation, it's about the cloud moving to the edge, and an architectural play with silicon to speed up that innovation. >> Yes, we certainly see our customers of all sizes trying to accelerate what's next, and get that digital transformation moving even faster as a
result of the environment that we're all living in, and we're finding that workload focus is really key. Customers at all kinds of different scales are having to adapt and support the remote workforces with VDI, and, as you say, John, they're having to deal with the deployment of workloads at the edge, with so much data getting generated at the edge and being acted upon at the edge. The analytics, and the infrastructure to manage that as these processes get digitized and automated, is so important for so many workflows. We really believe that the choice of infrastructure partner that underpins those transformations really matters: a partner that can help create the financial capacity, that can help optimize your environments, and enable our customers to focus on supporting their business, are all super key to success. >> And you mentioned that in the last year there's been a lot of rapid course correction for all of us. A demand for velocity, and the ability to deploy resources at scale, is more and more needed, maybe more than ever. What are you hearing customers looking for as they're rolling out their digital transformation efforts? >> Well, I think they're being realistic that they're going to have to move a lot faster than before, and they're also realistic on core versus context. Their core capability is not the technology itself; it's how to deploy it, and they're looking for partners that can help bring them there together, but that can also innovate. And very often the leaders who might have been a leader in a prior generation may not be on this next move; hence the opportunity for HPE and startups like Pensando to work together, as the cloud moves to the edge, and perhaps really balance, or even challenge, some of the big incumbents in this category, as well as partner uniquely with our joint customers on how do we achieve their business goals. Tell me a little bit more about how you move from this being a technology positioning for HPE to literally helping
your customers achieve the outcomes they want, and how are you changing HPE in that way? >> Well, I think when you consider these transformations, the infrastructure that you choose to underpin them is incredibly critical. Our customers need a software-defined management plane that enables them to automate so much of their infrastructure. They need to be able to take faster action where the data is, and to do all of this in a cloud-like experience where they can deliver their infrastructure as code, anywhere from exascale, through the enterprise data center, to the edge. And really critically, they have to be able to do this securely, which becomes an ever-increasing challenge, and do it at the right economics relative to their alternatives. And part of the right economics, of course, includes adopting the best practices from web-scale architectures and bringing them to the heart of the enterprise, and in our partnership with Pensando, we're working to enable these new ideas of web-scale architecture and fleet management for the enterprise at scale. >> You know what is fun is, HPE has an unusual talent, from the very beginning in Silicon Valley, of working together with others and creating a win-win innovation approach. If you watch what your team has been able to do, and I want to say this for everybody listening: you work with startups better than any other company I've seen, in terms of how you do win-win together, and Pensando is just the example of that. This startup, which by the way is the ninth time I have done this with this team, a new generation of products, and we're designing that together with HPE, in terms of, as the cloud moves to the edge, how do we get the leverage out of that and produce the results for your customers? To give the audience a feel for it: you're talking, with Pensando alone, in terms of the efficiency versus an Amazon, Amazon Web Services, of an order of magnitude. I'm not talking 100 percent greater; I'm talking 10x greater, in things from throughput, number of connections
you do, the jitter capability, et cetera. And it shows how two companies, who uniquely believe in innovation, and trust each other, and have very similar cultures, can work uniquely together on it. How do you bring that to life within HPE? How do you get your company to really say, let's harvest the advantages of your ecosystem and your advantages of startups? >> Well, as you say, more and more companies are faced with these challenges of hitting the right economics for the infrastructure, and we see many enterprises of various sizes trying to come to terms with infrastructures that look a lot more like a service provider's, that require that software-defined management plane, and the automation to deploy at scale. And with the work we're doing with Pensando, the benefits that we bring, in terms of the observability, and the telemetry, and the encryption, and the distributed network functions, but also a security architecture that enables that efficiency on the individual nodes, are just so key to building a competitive architecture moving forward, for an on-prem private cloud or internal service provider operation. And we're really excited about the work we've done to bring that technology across our portfolio and bring it to our customers, so that they can achieve those kinds of economics and capabilities, and go focus on their own transformations, rather than building and running the infrastructure themselves artisanally, and having to deal with integrating all of that great technology themselves. >> Makes tremendous sense. You know, Neil, you and I work on a board together, et cetera. I've watched your summarization skills, and I always like to ask the question after you do a quick summary like this: what are the three or four takeaways we would like for the audience to get out of our conversation? >> Well, that's a great question. Thanks, John. We believe that customers need a trusted partner to work through these digital transformations that are facing them, and confront the challenge of the time that the COVID crisis
has taken away. As you said up front, every organization is having to transform, and transform more quickly and more digitally, and working with a trusted partner with the expertise that only comes from decades of experience is a key enabler for that: a partner with the ability to create the financial capacity to transform; the workload expertise to get more from the infrastructure and optimize the environment, so that you can focus on your own business; a partner that can deliver the systems, and the security, and the automation that makes it easily deployable and manageable anywhere you need them, at any scale, whether the edge, the enterprise data center, or all the way up to exascale in high-performance computing; and that can do all of that as a service, as we can at HPE through HPE GreenLake, enabling our customers' most critical workloads. It's critical that all of that is underpinned by an AI-powered, digitally enabled service experience, so that our customers can get on with their transformation and running their business, instead of dealing with their infrastructure. And really, only HPE can provide this combination of capabilities, and we're excited and committed to helping our customers accelerate what's next for their businesses. >> Neil, it's fun. I love being your partner and your wingman. Our values and cultures are so similar. Thanks for letting me be a part of this discussion today. >> Thanks for being with us, John. It was great having you here. >> Oh, it's friends for life. >> Okay, now we're going to dig into the world of video, which accounts for most of the data that we store, and requires a lot of intense processing capabilities to stream. Here with me is Jim Brickmeyer, who's the chief marketing and product officer at Velocix. Jim, good to see you. >> Good to see you as well. >> So tell us a little bit more about Velocix. What's your role in this TV streaming world, and maybe talk about your ideal customer? >> Sure, sure. So we're a leading provider of carrier-grade video solutions, video streaming solutions,
and advertising technology, to service providers around the globe. So we primarily sell software-based solutions to cable, telco, and wireless providers, and broadcasters, that are interested in launching their own video streaming services to consumers. >> Yeah, so this is big time. You know, we're not talking about a mom-and-pop, you know, little video outfit. But maybe you can help us understand that, and just the sheer scale of the TV streaming that you're doing; maybe relate it to, you know, overall internet usage. How much traffic are we talking about here? >> Yeah, sure. So our customers tend to be some of the largest network service providers around the globe, and if you look at video traffic with respect to the total amount of traffic that goes through the internet, video traffic accounts for about 90 percent of the total amount of data that traverses the internet. So video is a pretty big component of, when people look at internet technologies, they look at video streaming technologies. You know, this is where we focus our energy: in carrying that traffic as efficiently as possible, and trying to make sure, from a consumer standpoint, since we're all consumers of video, that the consumer experience is a high-quality experience; that you don't experience any glitches; and that, ultimately, if people are paying for that content, they're getting the value that they pay for, for their money, in their entertainment experience. >> I think people sometimes take it for granted. It's like we all forget about dial-up, right? Those days are long gone, but the early days of video were so jittery, and restarting. And the thing, too, is that, you know, when you think about the pandemic and the boom in streaming that hit, you know, we all sort of experienced that, but the service levels were pretty good. I mean, how much did the pandemic affect traffic? What kind of increases did
you see, and how did that impact your business? >> Yeah, sure. So, you know, obviously, while it was tragic to have a pandemic and have people locked down, what we found was that when people returned to their homes, what they did was turn on their televisions and watch on their mobile devices, and we saw a substantial increase in the amount of video streaming traffic over service provider networks. So what we saw was on the order of a 30 to 50 percent increase in the amount of data that was traversing those networks. So from an operator's standpoint, a lot more traffic, and a lot more challenging to go ahead and carry that traffic; a lot of work also on our behalf in trying to help operators prepare, because we could actually see, geographically, as the lockdowns happened. [Music] Certain areas locked down first, and we saw that increase, so as all the lockdowns happened around the world, we could help operators prepare for that increase in traffic. >> I mean, I was joking about dial-up performance. Again, in the early days of the internet, if your website got fifty percent more traffic, you know, suddenly your site was coming down. So that says to me, Jim, that architecturally, you guys were prepared for that type of scale. So maybe you could paint a picture; tell us a little bit about the solutions you're using, and how you differentiate yourself in your market, to handle that type of scale. >> Sure, yeah. So we really are focused on what we call carrier-grade solutions, which are designed for that massive amount of scale. So we really look at it, you know, at a very granular level. When you look at the software, and the performance capabilities of the software, what we're trying to do is get as many streams as possible out of each individual piece of hardware infrastructure, so that we can optimize, first of all maximize, the efficiency of that device, and make sure that the costs are very low. But
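The capacity-planning implication of that 30 to 50 percent surge can be sketched with simple arithmetic. The surge range is the one quoted; the baseline stream count and the per-server stream capacity below are invented purely for illustration.

```python
import math

# Hypothetical sizing sketch for the surge described above: if peak
# traffic grows 30-50% (the range quoted), how many streaming servers
# does an operator need? Baseline and per-server figures are invented.
STREAMS_PER_SERVER = 5000           # assumed capacity of one server
baseline_streams = 1_000_000        # assumed pre-pandemic peak concurrency

for surge in (0.30, 0.50):
    peak = baseline_streams * (1 + surge)
    servers = math.ceil(peak / STREAMS_PER_SERVER)
    print(f"+{surge:.0%} surge -> {servers} servers")
```

Under these assumptions, the fleet has to grow from 200 servers to somewhere between 260 and 300, which is why squeezing more streams out of each box, as described above, matters so much to the economics.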
one of the other challenges is, as you get to millions and millions of streams, and that's what we're delivering on a daily basis, millions and millions of video streams, you have to be able to scale those platforms out in a cost-effective way, and make sure that it's highly resilient as well. So we don't ever want a consumer to have a circumstance where a network glitch, or a server issue, or something along those lines, causes some sort of glitch in their video. And so there's a lot of work that we do in the software to make sure that it's a very, very seamless stream, and that we're always delivering at the very highest possible bit rate for consumers, so that if you've got that giant 4K TV, we're able to present a very high-resolution picture to those devices. >> And what does the infrastructure look like underneath? You're using HPE solutions; where do they fit in? >> Yeah, that's right. So we've had a long-standing partnership with HPE, and we work very closely with them to try to identify the specific types of hardware that are ideal for the type of applications that we run. So we run video streaming applications and video advertising applications, targeted kinds of video advertising technologies, and when you look at some of these applications, they have different types of requirements. In some cases, it's throughput, where we're taking a lot of data in and streaming a lot of data out. In other cases, it's storage, where we have to have very high-density, high-performance storage systems. In other cases, it's, I've got to have really high-capacity storage, but the performance does not need to be quite as high from an I/O perspective. And so we work very closely with HPE on trying to find exactly the right box for the right application, and then, beyond that, also talking with our customers to understand that there are different maintenance considerations associated with different types of hardware. So we tend to focus on, as much
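The "always delivering at the very highest possible bit rate" behavior described here is, in adaptive-bitrate streaming generally, a ladder selection: pick the top rendition the client's measured bandwidth can sustain. The sketch below is illustrative of that general technique, not Velocix's actual algorithm; the ladder values and safety margin are invented.

```python
# Illustrative adaptive-bitrate ladder selection (a generic sketch, not
# Velocix's actual algorithm): choose the highest rendition the measured
# bandwidth can sustain, with a safety margin. Ladder values (kbps) are
# invented; the top rung stands in for a 4K rendition.
LADDER_KBPS = [800, 1800, 3500, 7500, 16000]
SAFETY = 0.8  # only budget 80% of measured bandwidth to absorb jitter

def pick_bitrate(measured_kbps: float) -> int:
    """Return the highest ladder rung sustainable at the measured rate."""
    budget = measured_kbps * SAFETY
    sustainable = [r for r in LADDER_KBPS if r <= budget]
    return max(sustainable) if sustainable else LADDER_KBPS[0]

print(pick_bitrate(25000))  # 16000 (4K-class rendition)
print(pick_bitrate(5000))   # 3500
print(pick_bitrate(500))    # 800 (floor: lowest rung)
```

The safety margin is one design lever behind the glitch-free experience mentioned above: deliberately undershooting the measured rate leaves headroom so a momentary dip doesn't stall the player.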
as possible, if we're going to place servers deep at the edge of the network, we will make everything maintenance-free, or as maintenance-free as we can make it, by putting very high-performance solid-state storage into those servers, so that we don't have to physically send people to those sites to do any kind of maintenance. So it's a very cooperative relationship that we have with HPE, to try to define those boxes. >> Great, thank you for that. So, last question: maybe, what does the future look like? I love watching on my mobile device; headphones in, no distractions, I'm getting better recommendations. How do you see the future of TV streaming? >> Yeah, so I think the future of TV streaming is going to be a lot more personal, right? So this is what you're starting to see through all of the services that are out there, is that most of the video service providers, whether they're online providers or they're your traditional kinds of paid TV operators, are really focused on the consumer, and trying to figure out what is of value to you personally. In the past, it used to be that services were one-size-fits-all, and so everybody watched the same program, right, at the same time. And now we have this technology that allows us to deliver different types of content to people, on different screens, at different times, and to advertise to those individuals, and to cater to their individual preferences. And so, using that information that we have about how people watch, and what people's interests are, we can create a much more engaging and compelling entertainment experience on all of those screens, and ultimately provide more value to consumers. >> Awesome story, Jim. Thanks so much for helping us keep entertained during the pandemic. I really appreciate your time. >> Sure, thanks. >> All right, keep it right there, everybody. You're watching HPE's Accelerating Next. >> First of all, Pat, congratulations on your new role as Intel CEO. How are you
>> First of all, Pat, congratulations on your new role as Intel CEO. How are you approaching your new role, and what are your top priorities over your first few months?
>> Thanks, Antonio, for having me. It's great to be here with you all today to celebrate the launch of your Gen10 Plus portfolio and the long history our two companies share in deep collaboration to deliver amazing technology to our customers together. What an exciting time it is to be in this industry. Technology has never been more important for humanity than it is today. Everything is becoming digital, driven by what I call the four key superpowers: the cloud, connectivity, artificial intelligence, and the intelligent edge. They are superpowers because each expands the impact of the others, and together they are reshaping every aspect of our lives and work. In this landscape of rapid digital disruption, Intel's technology and leadership products are more critical than ever, and we are laser-focused on bringing to bear the depth and breadth of software, silicon and platforms, packaging and process, with at-scale manufacturing, to help you and our customers capitalize on these opportunities and fuel their next-generation innovations. I am incredibly excited about continuing the next chapter of a long partnership between our two companies.
>> The acceleration of the edge has been significant over the past year. With this next wave of digital transformation, we expect growth in the distributed edge and edge build-out. What are you seeing on this front?
>> Like you said, Antonio, the growth of edge computing and the edge build-out is the next key transition in the market. Telecommunications service providers want to harness the potential of 5G to deliver new services across multiple locations in real time. As we start building solutions that will be prevalent in a 5G digital environment, we will need a scalable, flexible, and programmable network. Some use cases are massive-scale IoT solutions, more robust consumer devices and solutions, AR/VR, remote healthcare, autonomous robotics and manufacturing environments, and ubiquitous smart-city solutions. Intel and HP are partnering to meet this new wave head-on. With the 5G build-out and the rise of the distributed enterprise, this build-out will enable even more growth, as businesses explore how to deliver new experiences and unlock new insights from new data creation beyond the four walls of traditional data centers and public cloud providers. Network operators need to significantly increase capacity and throughput without dramatically growing their capital footprint, and their ability to achieve this is built upon a virtualization foundation — an area of Intel expertise. For example, we've collaborated with Verizon for many years, and they are leading the industry in virtualizing their entire network, from the core to the edge — a massive redesign effort. This requires advancements in silicon and power management, and they expect Intel to deliver the new capabilities in our roadmap so ecosystem partners can continue to provide innovative and efficient products. With this optimization for hybrid, we can jointly provide a strong foundation to take on the growth of data-centric workloads — for data analytics and AI, to build and deploy models faster, to accelerate insights that will deliver additional transformation for organizations of all types. The network transformation journey isn't easy, and we are continuing to unleash the capabilities of 5G and the power of the intelligent edge.
>> Yeah, the combination of the 5G build-out and the massive new growth of data at the edge are the key drivers for the age of insight. These new market drivers offer incredible new opportunities for our customers. I am excited about the recent launch of our new Gen10 Plus portfolio with Intel. Together, we are laser-focused on delivering joint innovation for customers that stretches from the edge to exascale. How do you see these new solutions helping our customers solve their toughest challenges?
>> I talked earlier about the superpowers that are driving the rapid acceleration of digital transformation. First, the proliferation of the hybrid cloud is delivering new levels of efficiency and scale, and the growth of the cloud is democratizing high-performance computing, opening new frontiers of knowledge and discovery. Next, we see AI and machine learning increasingly infused into every application — from the edge to the network to the cloud — to create dramatically better insights. And the rapid adoption of 5G, as I talked about already, is fueling new use cases that demand lower latencies and higher bandwidth. This, in turn, is pushing computing to the edge, closer to where the data is created and consumed. The confluence of these trends is leading to the biggest and fastest build-out of computing in human history. To keep pace with this rapid digital transformation, we recognize that infrastructure has to be built with the flexibility to support a broad set of workloads, and that's why, over the last several years, Intel has built an unmatched portfolio to deliver every component of intelligent silicon our customers need to move, store, and process data: from CPUs to FPGAs, from memory to SSDs, from Ethernet to switch silicon to silicon photonics and software. Our 3rd Gen Intel Xeon Scalable processors and our data-centric portfolio deliver new core performance and higher bandwidth, providing our customers the capabilities they need to power these critical workloads. We love seeing all the unique ways customers like HPE leverage our technology and solution offerings to create opportunities and solve their most pressing challenges — from cloud gaming to blood flow to brain scans to financial market security, the opportunities are endless with flexible performance. I am proud of the amazing innovation we are bringing to support our customers, especially as they respond to new data-centric workloads like AI and analytics that are critical to digital transformation. These new requirements create a need for compute that's workload-optimized for performance, security, ease of use, and the economics of business. Now more than ever, compute matters. It is the foundation for this next wave of digital transformation. By pairing our compute with software and capabilities from HPE GreenLake, we can support our customers as they modernize their apps and data and quickly, seamlessly, and securely scale them anywhere, at any size, from edge to exascale.
>> Pat, thank you for joining us for Accelerating Next today. I know our audience appreciated hearing your perspective on the market and how we're partnering together to support their digital transformation journey. I am incredibly excited about what lies ahead for HP and Intel. Thank you.
>> Thank you, Antonio. Great to be with you today.
>> We just compressed about a decade of online commerce progress into about 13 or 14 months. So now we're going to look at how one retailer navigated through the pandemic and what the future of their business looks like. With me is Allan Jensen, who's the chief information officer and senior vice president of the Salling Group. Hello, Allan, how are you?
>> Fine, thank you. Good to see you.
>> Hey, look, when I look at the 100-year-plus history of your company, it's marked by transformations, and some of them quite dramatic. You're Denmark's largest retailer. I wonder if you could share a little bit more about the company, its history, and how it continues to improve the customer experience while at the same time keeping costs under control — so vital in your business.
>> Yeah. The company was founded approximately 100 years ago with a department store in Aarhus, in Denmark. In the '60s we founded the first supermarket in Denmark, with self-service and combined textile and food in the same store. In the beginning of the '70s we founded the first hypermarket in Denmark. Then the discounters came from Germany early in the 1980s, and we started a discount chain. So we are actually in the department store, the hypermarket, the supermarket, and the discount
sector. Today we are more than 1,500 stores in three different countries: Denmark, Poland, and Germany. In the Danish market especially, we have approximately a 38 percent share and are the leader. Over the last 10 years we have developed further into online — first in non-food, and now in food, with home delivery and click-and-collect. We have made some acquisitions in the convenience space, with meal-box solutions for our customers, and today we also have a restaurant burger chain, and we are running Starbucks in Denmark. So you can see a full plate of different opportunities for our customers, especially in Denmark.
>> It's an awesome story — and of course the founder's name is still on the masthead. Now, of course, the pandemic has forced many changes, quite dramatic ones, including in the behaviors of retail customers. Maybe you could talk a little bit about how your digital transformation at the Salling Group prepared you for this shift in consumption patterns, and any other challenges you faced.
>> Yeah. Luckily, for some of the core IT solutions, in 2019 we had just rolled out direct access on all our computers, so you can work from anywhere — whether you are traveling, at home, and so on. We introduced a new agile Scrum delivery model, and we had just finished rolling out Teams in January–February 2020. That turned out to be a very strong foundation for suddenly moving all our employees from the office to home, and more or less overnight we succeeded in continuing our work. In IT we did not miss any deadline or task for the business in 2020, which I think was pretty awesome to see. For the business, of course, the pandemic changed a lot. The change in customer behavior — more or less overnight, plus 50 to 80 percent on the online solutions — forced us to set some different priorities. We were looking at food home delivery and had originally expected to start rolling out in 2022, but we took a fast decision in April last year to launch immediately. We have been developing that over the last eight months, and it has now been live in the market for the last three months. So you can say the pandemic really front-loaded some of our strategic actions by two to three years.
>> Yeah, that was very exciting. What's that saying — luck is the byproduct of great planning and preparation. So let's talk about—
>> When you're a company in a strong financial situation, so that you can move immediately with investment when you take such a decision, it's really thrilling.
>> Yeah, right. Awesome. A two-part question: talk about how you leverage data to support the Salling Group's mission and drive value for customers, and maybe talk about some of the challenges you face with just the amount of data, the speed of data, et cetera.
>> I'd say data is everything when you are in retail. As a retailer you need to monitor your operation down to each store and each department. And the challenge we have is that data is just growing rapidly, year by year, because you are able to be more detailed and to capture more data. For a company like ours, we need to be updated every morning: our fully updated sales for every unit, department, and single SKU sold in the stores are updated at three o'clock in the night and sent out to all top management and our managers all over the company. It's actually 8,000 reports going out before six o'clock every morning. We have introduced a loyalty program, and you capture a lot of data on customer behavior — what are their preferred offers, what is their preferred time in the week for buying different things — and all this data is now used to personalize our offers to our most valuable customers, so we can hit exactly the best time and convert it to sales. Data is also now used for what we call intelligent price reductions. Instead of just reducing prices by 50 percent when an item is close to running out of date, the system now automatically calculates whether a store has just enough stock to finish at full price before end of day, or actually has much too much and may need to reduce by 80 percent to be able to sell it all. These automated solutions built on data bring efficiency into our operation.
>> Wow, you make it sound easy — these are non-trivial items, so congratulations on that. I wonder if we could close — HPE was kind enough to introduce us — tell us a little bit about the infrastructure and the solutions you're using and how they differentiate you in the market. And I'm interested in: why HPE? What distinguishes them? Why that choice?
>> Yeah. A lot of companies are looking at moving data to the cloud, but due to performance and availability, more or less on demand, we still don't see the cloud as strong enough for the Salling Group for capturing all our data. We have been quite successful in having one data truth across the whole company, with just one single BI solution. With that huge amount of data, I think we have one of the ten largest SAP Business Warehouse installations globally. But on the other hand, we also want to be agile and to scale when needed — getting close to a cloud solution. So we saw HPE GreenLake as a solution that gets close to the cloud but is still on-prem, and that could deliver the fast performance on data that we need, at high quality and still very secure for us to run.
>> Great, thank you for that. And Allan, thanks so much for your time. I really appreciate your insights. Congratulations on the progress, and best of luck in the future.
>> Thank you.
>> All right, keep it right there. We have tons more content coming. You're watching Accelerating Next from HPE.
(music)
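Allan's "intelligent price reductions" logic — mark an expiring item down only when the remaining stock exceeds what the store can sell at full price before the expiry date — can be sketched roughly as follows. The demand model and the 50/80 percent markdown tiers are assumptions for illustration, matching only the two discount levels he mentions, not the Salling Group's actual system:

```python
# Sketch of "intelligent price reduction": discount an expiring item only
# when projected full-price demand can't clear the remaining stock.
# The simple demand projection and tier cutoffs are illustrative assumptions.

def markdown_percent(stock_on_hand: int, expected_daily_sales: float,
                     days_until_expiry: int) -> int:
    """Return a suggested markdown (0, 50, or 80 percent)."""
    projected_full_price_sales = expected_daily_sales * days_until_expiry
    if stock_on_hand <= projected_full_price_sales:
        return 0   # the store will sell out at full price anyway
    if stock_on_hand <= projected_full_price_sales * 2:
        return 50  # moderate surplus: the standard reduction
    return 80      # large surplus: deep reduction to sell everything

print(markdown_percent(40, 50, 1))   # -> 0  (demand covers the stock)
print(markdown_percent(90, 50, 1))   # -> 50
print(markdown_percent(300, 50, 1))  # -> 80
```

The efficiency gain Allan describes comes from the first branch: a blanket 50 percent markdown gives away margin on items that would have sold out at full price regardless.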
>> Welcome, Lisa, and thank you for being here with us today.
>> Antonio, it's wonderful to be here with you, as always, and congratulations on your launch — very, very exciting for you.
>> Well, thank you, Lisa. We love this partnership, and especially our friendship, which has been very special for me over the many, many years we have worked together. But I wanted to have a conversation with you today, and obviously digital transformation is a key topic. We know the next wave of digital transformation is here, driven by massive amounts of data, an increasingly distributed world, and a new set of data-intensive workloads. So how do you see workload optimization playing a role in addressing these new requirements?
>> Absolutely, Antonio. If you look at the depth of our partnership over the last four or five years, it's really about bringing the best to our customers. The truth is, we're in this compute mega-cycle right now. It's amazing — when we talk to customers, they all need to do more, and frankly, compute is becoming quite specialized. Whether you're talking about large enterprises or research institutions trying to get to the next phase of compute, the workload optimization we're able to do with our processors, your system design, and then working closely with our software partners is really the next wave of this compute cycle.
>> Thanks, Lisa. You talk about the mega-cycle, so I want to make sure we take a moment to celebrate the launch of our new Gen10 Plus compute products. With the latest announcement, HPE now has the broadest AMD server portfolio in the industry, spanning from the edge to exascale. How important are this partnership and the portfolio for our customers?
>> Antonio, I'm so excited. First of all, congratulations on your 19 world records with Milan and Gen10 Plus. It really is building on — this is our third generation of partnership with EPYC, and you were with me right at the very beginning. If you recall, you joined us in Austin for our first launch of EPYC four years ago. I think what we've created now is just an incredible portfolio that really does go across all of the verticals that are required. We've always talked about how we customize and make things easier for our customers to use together, so I'm very excited about your portfolio, very excited about our partnership, and more importantly, about what we can do for our joint customers.
>> It's amazing to see 19 world records, and I'm really proud of the work our joint teams do every generation, raising the bar. And that's where we think we have a shared goal: ensuring that customers get the solutions and services they need, any way they want them. One way we are addressing that need is by offering everything as a service, delivered through HPE GreenLake. So let me ask a question: what feedback are you hearing from your customers with respect to choice — meaning consuming these new solutions as a service?
>> Great point. First of all, HPE GreenLake is very, very impressive, so congratulations on really having that solution. And I think we're hearing the same thing from customers. The truth is, compute infrastructure is getting more complex, and everyone wants to be able to deploy the right compute at the right price point — in terms of also accelerating time to deployment, with the right security and the right quality. I think these as-a-service offerings are going to become more and more important as we go forward in compute capabilities. GreenLake is a leadership product offering, and we're very pleased and honored to be part of it.
>> Yeah, Lisa, we feel we are ahead of the competition. You think about some of our competitors now coming out with their own offerings, but I think the ability to drive joint innovation is what really differentiates us, and that's why we value the partnership and what we have been doing together on giving customers choice. Finally, I know you and I are both incredibly excited about the joint work we're doing with the U.S. Department of Energy's Oak Ridge National Laboratory. We think about large data sets and the complexity of the analytics we're running — and together we are going to deliver the world's first exascale system, which is remarkable to me. So what does this milestone mean to you, and what type of impact do you think it will make?
>> Yes, Antonio, I think our work with Oak Ridge National Laboratory and HPE is really pushing the envelope on what can be done with computing. If you think about the science we're going to be able to enable with the first exascale machine, I would say there's a tremendous amount of innovation that has already gone into the machine, and we're so excited about delivering it together with HPE. We also think the supercomputing technology we're developing at this broad scale will end up being very, very important for enterprise compute as well, so it's really an opportunity to take that bleeding edge and deploy it over the next few years. I'm super excited about it. You and I have a lot to do over the next few months here, but it's an example of the great partnership, and of how much we're able to do when we put our teams together to really create that innovation.
>> I couldn't agree more. This is an incredible milestone for us, for our industry, and honestly for the country in many ways. We have many, many people working 24x7 to deliver against this mission, and it's going to change the future of compute — no question about it — and then, honestly, put it to work where we need it the most: to advance life science, to find cures, to improve the way people live and work. Lisa, thank you again for joining us today, and thank you, most importantly, for the incredible partnership and the friendship. I really enjoy working with you and your team, and together I think we can change this industry once again. So thanks for your time today.
>> Thank you so much, Antonio, and congratulations again to you and the entire HPE team on just a fantastic portfolio launch.
>> Thank you.
>> Okay, well, some pretty big hitters in those keynotes, right? Actually, I have to say, those are some of my favorite CUBE alums, and I'll add, these are some of the execs that are stepping up to change not only our industry but also society, and that's pretty cool. And of course, it's always good to hear from the practitioners — the customer discussions have been great so far today. Now the Accelerating Next event continues as we move to a roundtable discussion with Krista Satterthwaite, who's the vice president and GM of HPE Core Compute. Krista is going to share more details on how HPE plans to help customers move ahead with adopting modern workloads as part of their digital transformations. Krista will be joined by HPE subject matter experts Chris Idler, who's the VP and GM of the Element, and Mark Nickerson, director of solutions product management, as they share customer stories and advice on how to turn strategy into action and realize results within your business.
>> Thank you for joining us for the Accelerating Next event. I hope you're enjoying it so far. I know you've heard about the industry challenges, the IT trends, and HPE's strategy from leaders in the industry. So today what we want to do is focus on going deep on workload solutions — the most important workload solutions, the ones we always get asked about. Today we want to share with you some best practices, some examples of how we've helped other customers, and how we can help you. All right, with that, I'd like to start our panel now and introduce Chris Idler, who's the vice president and
general manager of the Element. Chris has extensive solution expertise and has led HPE solution engineering programs in the past. Welcome, Chris. And Mark Nickerson, who is the director of product management — his team is responsible for solution offerings, making sure we have the right solutions for our customers. Welcome, guys. Thanks for joining me.
>> Thanks for having us, Krista.
>> Yeah, so I'd like to start off with one of the big ones, the one we get asked about all the time, and that we've all experienced in the last year: remote work and remote education, and all the challenges that go along with them. So let's talk a little bit about the challenges customers have had in transitioning to this remote work and remote education environment.
>> So I really think there are a couple of things that have stood out for me when we're talking with customers about VDI. First, obviously, there was an unexpected and unprecedented level of interest in that area about a year ago, and we all know the reasons why. But what it really uncovered was how little planning had gone into this space around a couple of key dynamics. One is scale. It's one thing to say, "I'm going to enable VDI for part of my workforce" in a pre-pandemic environment, where the office was still the central hub of activity for work. It's a completely different scale when you think, "Okay, I'm going to have 50, 60, 80, maybe 100 percent of my workforce now distributed around the globe" — whether that's in an educational environment, where you're now trying to accommodate staff and students in virtual learning, or in an area like Formula One racing, where there was the desire to still hold events but a need for much more social distancing: not as many people able to be trackside, but still needing that real-time experience. This manifested in a lot of ways, and scale was something I think a lot of customers hadn't put much thought into initially. The other area is planning for experience. A lot of times the VDI experience was planned out with very specific workloads or applications in mind, and when you take it to a more broad-based environment — supporting multiple functions, multiple lines of business — there hasn't been as much planning or investigation on the application side. Think about how graphically intense some applications are. One customer that comes to mind is Tyler ISD, who did a fairly large rollout pre-pandemic. As part of their big modernization effort, what they uncovered was that even standard Windows applications had become so much more graphically intense — with Windows 10, with the latest updates, with programs like Adobe — that they really needed an accelerated experience for a much larger percentage of their install base than they had counted on. So in addition to planning for scale, you also need visibility into the actual applications these remote users will run, how graphically intense they might be, and what the login experience as well as the operating experience will look like. Planning through that experience side, as well as the scale and the number of users, are really the two biggest, most important things I've seen.
>> Yeah, Mark, I'll just jump in real quick. I think you covered that pretty comprehensively, and it was well done. A couple of observations I've made: one is that VDI has suddenly become mission-critical. For sales it's the front line; for schools it's the classroom. This isn't a cost-cutting measure or an optimization measure anymore — this is about running the business. In a way it's a digital transformation — one aspect of about a thousand aspects of what it means to completely change how your business operates. And I think what that translates to is that there's no margin for error. You really need to deploy this in a way that performs, that understands what you're trying to use it for, that gives the end user the experience they expect on their screen or their handheld device, wherever they might be — whether it's a racetrack, a classroom, or the other end of a conference call or a boardroom. So what we do on the engineering side of things when it comes to VDI is really understand what a tech worker, a knowledge worker, a power worker, a GPU-accelerated user really looks like. What does time of day look like? Who's using it in the morning, who's using it in the evening, when do you power up, when do you power down — does the system behave, beyond just the "it works" function? And what our clients can get from HPE is a worldwide set of experiences that we can apply to making sure the solution delivers on its promises. So we're seeing the same thing you are, Krista — we see it all the time with VDI and in the way businesses are changing how they do business.
>> Yeah, and it's funny, because when I talk to customers, one of the good tips I've heard is to roll it out to small groups first, so you can really get a good sense of the experience before you roll it out to a lot of other people. And then the expertise — it's not like every other workload people have done before, so if you're new at it, make sure you're getting the right advice and expertise, so you're doing it the right way. Okay, one of the other things we've been talking a lot about today is digital transformation and moving to the edge. So now I'd like to shift gears and talk a little bit about how we've helped customers make that shift, and this time I'll start with Chris.
>> All right, hey, thanks. Okay, so it's funny when it comes to edge, because the edge is different for every customer and every client, and every single client of HPE's that I've ever spoken to has an edge somewhere —
whether, just like we were talking about, the classroom might be the edge. But I think when the industry talks about edge, it's talking about the Internet of Things — if you remember that term from not too long ago — and the fact that everything's getting connected, and how we turn that into telemetry. I think Mark is going to be able to talk through a couple of examples of clients we have in things like racing and automotive. But what we're learning about edge is that it's not just how you make the edge work; it's how you integrate the edge into what you're already doing — nobody is just the edge. If it's AI/ML/DL, that's one way you want to use the edge; if it's a customer-experience point of service, that's another; there's yet another way after that. So it turns out that having a broad set of expertise, like HPE does, to understand the different workloads you're trying to tie together — including the ones running at the edge — often means really understanding the data pipeline: what information is at the edge, how does it flow to the data center, and then which data center, which private cloud, which public cloud are you using? Those are the areas where we really shine, because we understand the interconnectedness of these things. So, for example, Red Bull — and I know you're going to talk about that in a minute, Mark — the racing company. For them, the edge is the racetrack, and milliseconds or partial seconds mean winning and losing races. But then there's also an edge of workers doing the design for the cars — how do they get quick access? We have a broad variety of infrastructure and compute form factors to help with the edge, and this is another real advantage we have: we know how to put the right piece of equipment with the right software. We also have great containerized software with our Ezmeral Container Platform, so we're really becoming a perfect platform for hosting edge-centric workloads, applications, and data processing. That goes all the way up to things like our Superdome Flex in the background, if you have some really, really big data that needs to be processed, and of course our workhorse ProLiant, which can be configured to support almost every combination of workload you have. So I know you started with edge, Krista, and we nail the edge with those different form factors — but if you're listening to this show right now, make sure you don't isolate the edge; make sure you integrate it with the rest of your operation. Mark, what did I miss?
>> Yeah, to that point, Chris — and this actually ties together the two things we've been talking about — the edge has become more critical as we see more work moving to the edge, as where we do work changes and evolves, and the edge has also become that much closer, because it has to be that much more connected. To your point about where that edge exists: the edge can be a lot of different places, but the one commonality is that the edge is an area where work still needs to get accomplished. It can't just be a collection point where everything gets shipped back to a data center or somewhere else for the work — it's where the work actually needs to get done, whether that's edge work in a use case like VDI or edge work in the case of real-time analytics. You mentioned Red Bull Racing — I'll bring that up. You talk about an area where time is of the essence: everything about that sport comes down to time. You're talking about wins and losses measured, as you said, in milliseconds, and that applies not just to performance on the track but to how you're able to adapt and modify the car and respond to the evolving conditions on the track itself. So when you talk about putting together a solution for an edge like that, you're right — it can't just be "here's a product that collects data, ships it back someplace else, and waits for it to be processed in a couple of days." You have to be able to analyze it in real time. When we pull together a solution involving our compute, storage, and networking products — when we're able to deliver that full-package solution at the edge — what you see are results like a 50 percent decrease in processing time to make real-time analytic decisions about configurations for the car and to adapt to real-time test and track conditions.
>> Yeah, really great point there, and I really love the example of edge and racing, because that is where every millisecond counts, and it's so important to process that at the edge. Now, switching gears just a little bit, let's talk about some examples of how we've helped customers when it comes to business agility and optimizing their workloads for maximum outcomes. For business agility, let's talk about some things we've done to help customers with that. Mark?
>> Yeah, I'll give it a shot. When we think about business agility, what you're really talking about is the ability to implement on the fly, to scale up and scale down, and to adapt to real-time changing situations — and I think the last year has been an excellent example of exactly how so many businesses have been forced to do that. One of the areas where I think we've been most able to help customers with agility is the space of private and hybrid clouds. If you take a look at the need customers have to migrate workloads and data between public cloud environments, and app development environments that may be
hosted on-site or maybe in the cloud the ability to move out of development and into production and having the agility to then scale those application rollouts up having the ability to have some of that some of that private cloud flexibility in addition to a public cloud environment is something that is becoming increasingly crucial for a lot of our customers all right well i we could keep going on and on but i'll stop it there uh thank you so much uh chris and mark this has been a great discussion thanks for sharing how we helped other customers and some tips and advice for approaching these workloads i thank you all for joining us and remind you to look at the on-demand sessions if you want to double click a little bit more into what we've been covering all day today you can learn a lot more in those sessions and i thank you for your time thanks for tuning in today many thanks to krista chris and mark we really appreciate you joining today to share how hpe is partnering to facilitate new workload adoption of course with your customers on their path to digital transformation now to round out our accelerating next event today we have a series of on-demand sessions available so you can explore more details around every step of that digital transformation from building a solid infrastructure strategy identifying the right compute and software to rounding out your solutions with management and financial support so please navigate to the agenda at the top of the page to take a look at what's available i just want to close by saying that despite the rush to digital during the pandemic most businesses they haven't completed their digital transformations far from it 2020 was more like a forced march than a planful strategy but now you have some time you've adjusted to this new abnormal and we hope the resources that you find at accelerating next will help you on your journey best of luck to you and be well [Music] [Applause] [Music] [Applause] [Music] [Applause] [Music] 
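The pattern Mark describes for Red Bull Racing, analyzing at the edge instead of shipping raw telemetry back and waiting for batch processing, can be sketched in a few lines. This is a minimal illustration only, not HPE's or Red Bull Racing's actual pipeline; the sensor fields, window size, and alert threshold are all hypothetical:

```python
import random
import statistics
import time

def read_sensor():
    """Stand-in for one car telemetry sample (hypothetical fields)."""
    return {"ts": time.time(), "tire_temp_c": random.gauss(95, 4)}

def edge_window_summary(samples):
    """Aggregate a window of samples locally at the edge.

    Only this small summary (and any alert) leaves the edge node,
    instead of the full raw stream.
    """
    temps = [s["tire_temp_c"] for s in samples]
    return {
        "n": len(temps),
        "mean_temp": statistics.fmean(temps),
        "max_temp": max(temps),
        "alert": max(temps) > 110,  # illustrative threshold, not a real spec
    }

# Collect a window of 500 raw samples, but forward only one summary.
window = [read_sensor() for _ in range(500)]
summary = edge_window_summary(window)
print(summary["n"], summary["alert"])
```

The point of the sketch is the shape of the decision, not the numbers: the alert can be acted on in the same loop that reads the sensors, with no round trip to a data center.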

Published Date : Apr 19 2021

HPE Accelerating Next Preview | HPE Accelerating Next 2021


 

>> We are coming out of a year like none other, and organizations of all types are pressing forward with planning for the future. Now, in the realm of IT, and really every industry, no single topic is getting as much attention as digital transformation. Every organization and industry defines digital transformation differently based on the outcomes that they're looking for, from creating digital-first business models, to driving greater operational efficiency, or accelerating innovation by extracting better insights from their data. Regardless of how they define their next stage of business, they must all pursue a path of infrastructure, hardware, and software modernization with trusted partners that have the technology and expertise to deliver successful outcomes. At HPE's Accelerating Next event on April 21st, we have a really compelling lineup of industry luminaries: Pat Gelsinger, who is setting a new and bold direction for Intel; John Chambers, now investing in game-changing and society-changing tech; Dr. Lisa Su, the CEO of AMD, a company that completely transformed itself and has become a critical technology supplier for compute solutions; and of course, HPE's CEO, Antonio Neri. These execs will be sharing their perspectives on what's next in the market. We'll also have HPE leaders and experts providing details on new requirements being driven by the defining workloads of the digital era, and what capabilities and expertise are needed to enable great future outcomes. We'll also hear directly from some of HPE's customers that are on their own path to transformation and how they are accelerating next. This is Dave Vellante, inviting you to join us on the 21st of April at 11:00 AM Eastern time, 8:00 AM Pacific, and join the conversation. We'll see you there.

Published Date : Apr 15 2021


HPE Accelerating Next APAC Intro


 

(upbeat music) >> Welcome to Accelerating Next, to our participants from Asia Pacific, Japan, and India. My name is Sandeep Kapoor. I'm the general manager for the Compute business in HPE Asia Pacific, based out of Singapore. As you know, COVID has come and disrupted the way we live and work. Businesses now need to manage a totally new level of complexity, a distributed workforce, while an increasing amount of data adds yet more challenges to deal with. Several years of transformation have been accomplished in a very, very short period of time. However, for businesses to take the journey forward and take this to the next level of digital transformation, a set of new, very complex workloads needs to be dealt with, in order for our businesses, our customers, to stay agile, stay flexible, and develop new business models. The good news is HPE is here to partner with you to grow your transformed solutions, which will be delivered with as-a-service economics and a truly cloud-like experience. These workloads range from edge compute to the cloud and to exascale. As this event unfolds, you will find the sessions to be very insightful and informative. Not only will you hear the details about our transformed solutions from our HPE experts, but you will also hear from our partners like Intel and AMD, with whom we are jointly launching new offerings to deal with the challenges of the new workloads. For example, the in-CPU acceleration capabilities that are being brought forward will help to fast-track the adoption of AI and machine learning. The capabilities for dealing with diverse network security will fast-track the adoption of 5G, which I'm personally very excited about. And the enhanced in-memory capabilities offered on the platform are going to handle the huge amount of new workloads coming up in databases, managing hyperconvergence, and managing new levels of virtualization capabilities, all delivered through our new Gen10 Plus platform. So following the global broadcast, please do join us in a dedicated session room. During the sessions we will discover and talk about how these workload solutions can be adopted and fast-tracked, in joint partnership with you, to take your digital transformation forward. With that, I will hand it over to Dave as we start Accelerating Next.

Published Date : Apr 15 2021


Neil MacDonald, HPE | HPE Accelerating Next


 

>> Okay, welcome to Accelerating Next. Thank you so much for joining us today. We have a great program. We're going to talk tech with experts, we'll be diving into the changing economics of our industry, and we'll look at how to think about the next phase of your digital transformation. Now, very importantly, we're also going to talk about how to optimize workloads from edge to exascale, with full security and automation, all coming to you as a service. And with me to kick things off is Neil MacDonald, who's the GM of Compute at HPE. Neil, always a pleasure. Great to have you on. >> It's great to see you, Dave. >> Now, of course, when we spoke a year ago, we had hoped by this time we'd be face to face, but here we are again. You know, this pandemic has obviously affected businesses and people in so many ways that we could never have imagined, but the reality is, tech companies have literally saved the day. Let's start off: how is HPE contributing to helping your customers navigate through things that are so rapidly shifting in the marketplace? >> Well, it's nice to be speaking to you again, and I look forward to being able to do this in person at some point. The pandemic has really accelerated the need for transformation in businesses of all sizes. More than three quarters of CIOs report that the crisis has forced them to accelerate their strategic agendas: organizations that were already transforming are having to transform faster, and organizations that weren't on that journey yet are having to rapidly develop and execute a plan to adapt to this new reality. Our customers are on this journey, and they need a partner for not just the compute technology, but also the expertise and economics that they need for that digital transformation. And for us, this is all about unmatched optimization for workloads from the edge to the enterprise to exascale, with 360-degree security and intelligent automation, all available in an as-a-service experience.
>> Well, you know, as you well know, it's a challenge to manage through any transformation, let alone having to set up remote workers overnight, securing them, resetting budget priorities. What are some of the barriers that you see customers working hard to overcome? >> Simply put, the organizations that we talk with are challenged in three areas. They need the financial capacity to actually execute a transformation. They need access to the resources and the expertise needed to successfully deliver on a transformation. And they have to find a way to match their investments with the revenues for the new services that they're putting in place to serve their customers in this environment. >> You know, we have a data partner, ETR, Enterprise Technology Research, and the spending data that we see from them is quite dramatic. I mean, last year we saw a contraction of roughly 5% in IT spending budgets, et cetera, and this year we're seeing a pretty significant rebound; maybe a 6 to 7% growth range is the prediction. The challenge we see is organizations have to iterate on that, I call it the forced march to digital transformation, and yet they also have to balance their investments, for example at the corporate headquarters, which have kind of been neglected. Is there any help in sight for the customers that are trying to reduce their spending and also take advantage of their investment capacity? >> I think you're right. Many businesses are understandably reluctant to loosen the purse strings right now, given all of the uncertainty.
And often a digital transformation is viewed as a massive upfront investment that will pay off in the long term, and that can be a real challenge in an environment like this, but it doesn't need to be. We work through HPE Financial Services to help our customers create the investment capacity to accelerate the transformation, often by leveraging assets they already have and helping them monetize them, in order to free up the capacity to accelerate what's next for their infrastructure and for the business. >> So can we drill into that? I wonder if you could add some specifics. I mean, how do you ensure a successful outcome? What are you really paying attention to as the markers for success? >> Well, when you think about the journey that an organization is going through, it's tough to be able to run the business and transform at the same time, and one of the constraints is having people with enough bandwidth and enough expertise to be able to do both. So we're addressing that in two ways for our customers. One is by helping them confidently deploy new solutions, which we have engineered leveraging decades of expertise and experience, to deliver workload-optimized portfolios that take the risk and the complexity out of assembling some of these solutions, and give them a prepackaged, validated, supported solution that simplifies that work for them. But in other cases, we can enhance our customers' bandwidth by bringing them HPE Pointnext experts, with all of the capabilities we have, to help them plan, deliver, and support these IT projects and transformations. Organizations can get on a faster track of modernization, getting greater insight and control as they do it. We're a trusted partner to get the most for a business that's on this journey, making these critical compute investments to underpin the transformations, whether that's planning, optimizing, or safe retirement at end of life.
We can bring that expertise to bear to help amplify what our customers already have in house, and help them accelerate and succeed in executing these transformations. >> Thank you for that. Let's talk about some of the other changes that customers see. The cloud has obviously forced customers and their suppliers to really rethink how technology is packaged, how it's consumed, how it's priced; I mean, there's no doubt about that. So take GreenLake: it's obviously a leading example of a pay-as-you-scale infrastructure model, and it can be applied on-prem or hybrid. Can you maybe give us a sense as to where you are today with GreenLake? >> Well, it's really exciting. From our first pay-as-you-go offering back in 2006, 15 years ago, to the introduction of GreenLake, HPE has really been paving the way on consumption-based services through innovation and partnership to help meet the exact needs of our customers. HPE GreenLake provides an experience that is the best of both worlds: a simple pay-per-use technology model, with the risk management of data that's under our customers' direct control. And it lets customers shift to everything as a service in order to free up capital and avoid that upfront expense that we talked about. They can do this anywhere, at any scale or any size; really, HPE GreenLake is the cloud that comes to you. >> I like that. So we've touched a little bit on how customers can overcome some of the barriers to transformation. What about the nature of transformations themselves? I mean, historically, there was a lot of lip service paid to digital, and there was a lot of complacency, frankly, but you know that COVID wrecking-ball meme that so well describes it: if you're not a digital business, essentially you're going to be out of business. So those things have evolved; how has HPE addressed the new requirements? >> Well, the new requirements are really about what customers are trying to achieve.
And four very common themes that we see are: enabling the productivity of a remote workforce, which was never really part of the plan for many organizations; being able to develop and deliver new apps and services in order to serve customers in a different way or drive new revenue streams; being able to get insights from data, so that in these tough times they can optimize their business more thoroughly; and then finally, the efficiency of an agile, hybrid, private cloud infrastructure, especially one that now has to integrate the edge. And we're really thrilled to be helping our customers accelerate all of these and more with HPE Compute. >> I want to double-click on that remote workforce productivity. I mean, again, in the surveys that we see, 46% of CIOs say that productivity improved with the whole work-from-home, remote-work trend, and on average those improvements were absolutely enormous. I mean, when you think about that, how does HPE specifically help here? What do you guys do? >> Well, every organization in the world has had to adapt to a different style of working, with more remote workers than they had before, and for many organizations that's going to become the new normal, even post-pandemic. Many IT shops are not well equipped for the infrastructure to provide that experience, because if all your workers are remote, the resiliency of that infrastructure, the latencies of that infrastructure, and its reliability are all incredibly important. So we provide comprehensive solutions, expertise, and as-a-service options that support that remote work through virtual desktop infrastructure, or VDI, so that our customers can support that new normal of virtual engagements online, everything across industries, wherever they are.
And that's just one example of the many workload-optimized solutions that we're providing for our customers. It's about taking the guesswork and the uncertainty out of delivering on these changes that they have to deploy as part of their transformation, and we can deliver that range of workload-optimized solutions across all of these different use cases because of our broad range of innovation in compute platforms, which span from the ruggedized edge, to the data center, all the way up to exascale in HPC. >> I mean, that's key if you're trying to effect a digital transformation and you don't have to fine-tune, you know, basically build your own optimized solutions. If I can buy that rather than having to build it, and rely on your R&D, that's key. What else is HPE doing? You know, to deliver new apps, new services, your microservices, containers, the whole developer trend; what's going on there? >> Well, that's really key, because organizations are all seeking to evolve their mix of business and bring new services and new capabilities: new ways to reach their customers, new ways to reach their employees, new ways to interact in their ecosystem, all digitally. And that means development, and many organizations of course are embracing container technology to do that today. So with the HPE Container Platform, our customers can realize the agility and efficiency that comes with containerization, and use it to provide insight to their data. More and more of that data, of course, is being machine-generated, or generated at the edge or the near edge, and it can be a real challenge to manage that data holistically and not as silos and islands. HPE's Ezmeral Data Fabric speeds the agility and access to data with a unified platform that can span across the data centers, multiple clouds, and even the edge.
And that enables data analytics that can create insights, powering data-driven, production-oriented, cloud-enabled analytics and AI, available anytime, anywhere, and at any scale. And it's really exciting to see the kind of impact that that can have in helping businesses optimize their operations in these challenging times. >> You've got to go where the data is, and the data is distributed; it's decentralized. I like the Ezmeral vision and execution there. So that all sounds good, but with digital transformation you're going to see more compute in hybrid deployments. You mentioned edge; so the surface area, it's like the universe, it's ever-expanding. You mentioned, you know, remote work and work from home before. So I'm curious: where are you investing your resources from a cybersecurity perspective? What can we count on from HPE there? >> You can count on continued leadership from HPE, as the world's most secure industry-standard server portfolio. We provide an enhanced and holistic 360-degree view of security that begins in the manufacturing supply chain and concludes with safeguarded end-of-life decommissioning. And of course, we've long set the bar for security with our work on Silicon Root of Trust, and we're extending that to the application tier. But in addition to security, customers that are building this modern hybrid or private cloud, including the integration of the edge, need other elements too. They need an intelligent, software-defined control plane so that they can automate their compute fleets from all the way at the edge to the core. And while scale and automation enable efficiency, all private cloud infrastructures are competing with web-scale economics, and that's why we're democratizing web-scale technologies like Pensando, to bring web-scale economics and web-scale architecture to the private cloud. Our partners are so important in helping us serve our customers' needs. >> Yeah. I mean, HPE
has really upped its ecosystem game since the middle of the last decade, when you guys reorganized and became even more partner-friendly. So maybe give us a preview of what's coming next in that regard from today's event. >> Well, we're really excited to have HPE CEO Antonio Neri speaking with Pat Gelsinger from Intel, and later Lisa Su from AMD, and later I'll have the chance to catch up with John Chambers, the founder and CEO of JC2 Ventures, to discuss the state of the market today. >> Yeah, I'm jealous; those are good interviews coming up. Neil, thanks so much for joining us today on the virtual CUBE. You've really shared a lot of great insight into how HPE is partnering with customers. It's always great to catch up with you; hopefully we can do so face to face, you know, sooner rather than later. >> I look forward to that. And you know, no doubt our world has changed, and we're here to help our customers and partners with the technology, the expertise, and the economics they need for these digital transformations. We're going to bring them unmatched workload optimization from the edge to exascale, with that 360-degree security and intelligent automation, and we're going to deliver it all as an as-a-service experience. We're really excited to be helping our customers accelerate what's next for their businesses, and it's been really great talking with you today about that, Dave. Thanks for having me. >> You're very welcome. It's been super, Neil. And actually, you know, I had the opportunity to speak with some of your customers about their digital transformation and the role that HPE plays there, so let's dive right in.

Published Date : Apr 7 2021


Alan Jensen, CIO, The Salling Group | HPE Accelerating Next


 

(upbeat music) >> We just compressed about a decade of online commerce progress into about 13 or 14 months, so now we're going to look at how one retailer navigated through the pandemic and what the future of their business looks like. And with me is Alan Jensen, who is the chief information officer and senior vice president of the Salling Group. Hello, Alan, how are you? >> Fine, thank you. >> Good to see you. Look, you know, when I look at the hundred-year-plus history of your company, I mean, it's marked by transformations, and some of them are quite dramatic. So you're Denmark's largest retailer. I wonder if you could share a little bit more about the company, its history, and how it continues to improve the customer experience while at the same time keeping costs under control, so vital in your business. >> Yeah, the company was founded approximately 100 years ago with a department store in Denmark, and I think in the '60s we founded the first supermarket in Denmark, with self-service and combined textile and food in the same store. In the beginning of the '70s, we founded the first hypermarket in Denmark, and then the discounter came from Germany early in the 1980s and we started a discount chain. So we are actually operating department stores, hypermarkets, supermarkets, and in the discount sector. Today we have more than 1,500 stores in three different countries: Denmark, Poland, and Germany. Especially in the Danish market, we have approximately 38% market share, and we are the leader. Over the last 10 years we have developed further into online-first in non-food, and now in food, with home delivery with Clayton Calais, and we have done some acquisitions in convenience, with meal-box solutions for our customers. We also today have a restaurant burger chain, and we are running Starbucks in Denmark. So you can see a full plate of different opportunities for our customers, especially in Denmark. >> It's an awesome story.
And of course the founder's name is still on the masthead; what a great legacy. Now, of course, the pandemic has forced many changes, quite dramatic ones, including in the behaviors of retail customers. Maybe you could talk a little bit about how your digital transformation at the Salling Group prepared you for this shift in consumption patterns, and any other challenges that you faced. >> I think, luckily, you can say, with some of the solutions in '19 we had just rolled out new computers with direct access, so you can work from anywhere, whether you are traveling or at home and so on. We introduced a new agile delivery model, and we had just finalized rolling out Teams in January-February 2020. That was a very strong thing for suddenly moving all our employees from the office to home, and more or less overnight we succeeded in continuing our work, and in IT we have not missed any deadline or task for the business in 2020, so I think that was pretty awesome to see. And for the business, of course, the pandemic changed a lot, as the change in customer behavior, more or less overnight, with plus 50 to 80% on the online solutions, forced us to set some different priorities. We were looking at food home delivery and originally expected to start rolling it out in 2022, but we took a fast decision in April last year to launch immediately, and we have been developing that over the last eight months; it has been live for the last three months now in the market. So you can say the pandemic really front-loaded some of our strategic actions by two to three years. >> What's that saying? Luck is the by-product of great planning and preparation. So let's talk about what happened. >> When you are in a company with a strong financial situation, you can move immediately with investment when you take such a decision; then it's really failing fast, yeah. >> Right, awesome. Two-part question.
Talk about how you leverage data to support the Salling Group's mission and, you know, drive value for customers, and maybe you could talk about some of the challenges you face with just the amount of data, the speed of data, et cetera. >> Yeah, as I said, data is everything when you are in retail, as retail is detail: you need to monitor your operation down to each store and each department. And you can say we are challenged in that data is just growing rapidly; year by year it's growing more and more, because you're able to be more detailed and to capture more data. For a company like ours, we need to be updated every morning: our fully updated sales for every unit, department, and single SKU selling in the stores are updated at three o'clock in the night and sent out to all top management and our managers all over the company. It's actually 8,000 reports going out before six o'clock every morning. We have introduced a loyalty program, and we are capturing a lot of data on customer behavior: what are their preferred offers, what is their preferred time in the week for buying different things? And all this data is now used to personalize our offers to our value customers, so we can hit exactly the best time and convert it to sales. Data is also now used for what we call intelligent price reductions. Instead of just reducing prices by 50% when an item is close to running out of date, the system now automatically calculates whether a store has just enough to finish selling at full price before end of day, or actually has too much and needs to reduce by maybe 80% in order to sell it all. So these automated solutions built on data are bringing efficiency into our operation. >> Wow, you make it sound easy. These are non-trivial items, so congratulations on that. I wonder if we could close: HPE was kind enough to introduce us. Tell us a little bit about the infrastructure and the solutions you're using, and how they differentiate you in the market.
And I'm interested in, you know, why HPE, what distinguishes them, why you chose them. >> A lot of companies are looking at moving data to the cloud, but due to performance, and due to availability more or less on demand, we still don't see the cloud as strong enough for the Salling Group to capture all our data. We have been quite successful having one data truth across the whole company, having just one single BI solution with that huge amount of data. I think we have one of the 10 largest SAP Business Warehouses globally. On the other hand, we also want to be agile and to scale when needed. So getting close to a cloud solution, we saw HPE GreenLake as the answer: getting close to the cloud but still being on-prem, able to deliver the fast performance on data that we need, but still at high quality and still very secure for us to run. >> Great, thank you for that. Alan, thanks so much for your time, I really appreciate your insights, and congratulations on the progress and best of luck in the future. >> Thank you. >> All right, keep it right there. We have tons more content coming. You're watching Accelerating Next from HPE. (upbeat music)
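The intelligent price-reduction logic Jensen describes above can be sketched as a small decision rule: given a store's remaining stock of a near-expiry item and the units it expects to sell at full price before end of day, either keep the full price or apply a steep early markdown. The function name, parameters, and thresholds below are hypothetical illustrations, not the Salling Group's actual system.

```python
def markdown_decision(units_on_hand, expected_full_price_sales, steep_markdown=0.80):
    """Decide the discount for a near-expiry SKU in one store.

    If the store can sell its remaining stock at full price before
    end of day, keep the full price (0.0 discount); otherwise apply
    a steep markdown early enough to sell out before the item expires.
    """
    if units_on_hand <= expected_full_price_sales:
        return 0.0          # enough demand: no discount needed
    return steep_markdown   # excess stock: e.g. 80% off to clear it
```

Run per store and per SKU each morning, a rule like this replaces a blanket "50% off everything near its date" policy with store-level decisions, which is where the efficiency gain comes from.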

Published Date : Apr 7 2021



Jim Brickmeier, Velocix | HPE Accelerating Next


 

(light music) >> Okay. Now we're going to dig into the world of video, which accounts for most of the data that we store and requires a lot of intense processing capability to stream. Here with me is Jim Brickmeier, who's the chief marketing and product officer at Velocix. Jim, good to see you. >> Good to see you, as well. >> So tell us a little bit more about Velocix. What's your role in this TV streaming world? And maybe talk about your ideal customer. >> Sure. So we're a leading provider of carrier-grade video solutions, video streaming solutions and advertising technology, to service providers around the globe. So we primarily sell software-based solutions to cable, telco, wireless providers and broadcasters that are interested in launching their own video streaming services to consumers. >> Yeah, so this is big time. We're not (laughs) talking about a mom-and-pop little video outfit, but maybe you can help us understand the sheer scale of the TV streaming that you're doing, maybe relate it to overall internet usage. How much traffic are we talking about here? >> Yeah, sure. So our customers tend to be some of the largest network service providers around the globe. And if you look at video traffic with respect to the total amount of traffic that goes through the internet, video traffic accounts for about 90% of the total amount of data that traverses the internet. So video is a pretty big component of how people look at internet technologies and video streaming technologies. This is where we focus our energy: carrying that traffic as efficiently as possible.
And trying to make sure that, from a consumer standpoint (we're all consumers of video), the consumer experience is a high quality experience, that you don't experience any glitches, and that ultimately, if people are paying for that content, they're getting the value they pay for in their entertainment experience. >> Right. People sometimes take it for granted. We all forget about dial-up, right? Those days are long gone, but in the early days video was so jittery, always restarting. And the thing too is, when you think about the pandemic and the boom in streaming that hit, we all sort of experienced that, but the service levels were pretty good. I mean, how much did the pandemic affect traffic? What kind of increases did you see? And how did that impact your business? >> Yeah, sure. So obviously it was tragic to have a pandemic and have people locked down. What we found was that when people returned to their homes, they turned on their televisions, they watched on their mobile devices, and we saw a substantial increase in the amount of video streaming traffic over service provider networks. So what we saw was on the order of a 30 to 50% increase in the amount of data that was traversing those networks. So from an operator standpoint, a lot more traffic, and a lot more challenging to go ahead and carry that traffic. A lot of work also on our behalf in trying to help operators prepare, because we could actually see geographically, as the lockdowns happened, certain areas locked down first and where we saw that increase, so as all the lockdowns happened around the world, we could help operators prepare for that increase in traffic. >> And I was joking about dial-up before. Again, in the early days of the internet, if your website suddenly got 50% more traffic, your (chuckles) site was coming down. >> Yeah, that's right.
>> So that says to me, Jim, that architecturally you guys were prepared for that type of scale. So maybe you could paint a picture. Tell us a little bit about the solutions you're using and how you differentiate yourself in your market to handle that type of scale. >> Sure, yeah. So we really are focused on what we call carrier-grade solutions, which are designed for that massive amount of scale. So we look at it at a very granular level, at the software and performance capabilities of the software. What we're trying to do is get as many streams as possible out of each individual piece of hardware infrastructure, so that we can, first of all, maximize the efficiency of that device and make sure that the costs are very low. But one of the other challenges is, as you get to millions and millions of streams, and that's what we're delivering on a daily basis, millions and millions of video streams, you have to be able to scale those platforms out in an effective and cost-effective way, and to make sure that they're highly resilient as well. So we don't ever want a consumer to have a circumstance where a network glitch or a server issue or something along those lines causes some sort of a glitch in their video. And so there's a lot of work that we do in the software to make sure that it's a very, very seamless stream and that we're always delivering at the very highest possible bit rate for consumers, so that if you've got that giant 4K TV, we're able to present a very high resolution picture to those devices. >> Hey, and what's the infrastructure look like underneath? You're using HPE solutions, where do they fit in? >> Yeah, that's right. So we've had a longstanding partnership with HPE, and we work very closely with them to try to identify the specific types of hardware that are ideal for the type of applications that we run.
So we run video streaming applications and video advertising applications, targeted kinds of video advertising technologies. And when you look at some of these applications, they have different types of requirements. In some cases it's throughput, where we're taking a lot of data in and streaming a lot of data out. In other cases it's storage, where we have to have very high density, high performance storage systems. And in other cases it's: I've got to have really high capacity storage, but the performance does not need to be quite as high from an IO perspective. And so we work very closely with HPE on trying to find exactly the right box for the right application, and then beyond that, also talking with our customers to understand the different maintenance considerations associated with different types of hardware. So if we're going to place servers deep at the edge of the network, we make everything as maintenance-free as we can make it by putting very high performance solid state storage into those servers, so that we don't have to physically send people to those sites to do any kind of maintenance. So it's a very cooperative relationship that we have with HPE to try to define those boxes. >> Great! Thank you for that. So last question, maybe what does the future look like? I love watching on my mobile device, headphones in, no distractions, I'm getting better recommendations. How do you see the future of TV streaming?
In the past, it used to be that services were one size fits all and so everybody watched the same program, right? at the same time and now that's-- We have this technology that allows us to deliver different types of content to people on different screens at different times and to advertise to those individuals and to cater to their individual preferences. And, so using that information that we have about how people watch and what people's interests are, we can create a much more engaging and compelling entertainment experience on all of those screens and ultimately provide more value to consumers. >> Awesome story, Jim. Thanks so much for keeping us-- Helping us keep entertained during the pandemic. We appreciate your time (chuckles). >> Sure, Thanks. >> All right. Keep it right there. What are you watching? HPE's Accelerating Next.
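The scaling problem Brickmeier describes, squeezing as many streams as possible out of each box while keeping headroom for resilience, reduces to a simple capacity estimate: total egress is concurrent streams times average bitrate, divided by usable per-server throughput. The sketch below is a back-of-the-envelope illustration; the figures, names, and headroom fraction are hypothetical, not Velocix's actual sizing model.

```python
import math

def servers_needed(concurrent_streams, avg_bitrate_mbps,
                   server_throughput_gbps, headroom=0.25):
    """Estimate how many streaming servers a service needs.

    Total egress (Gbps) = streams * per-stream bitrate (Mbps) / 1000.
    Each server contributes its throughput minus a resilience
    headroom, so a single server failure doesn't cause glitches.
    """
    total_gbps = concurrent_streams * avg_bitrate_mbps / 1000
    usable_gbps = server_throughput_gbps * (1 - headroom)
    return math.ceil(total_gbps / usable_gbps)

# One million concurrent 5 Mbps streams on 100 Gbps servers
# with 25% headroom: 5000 Gbps / 75 Gbps per server = 67 servers.
```

Numbers like these make it clear why per-box efficiency matters: shaving the headroom or raising per-server stream counts directly cuts the fleet size at million-stream scale.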

Published Date : Apr 6 2021



Roel Sijstermans, Netherlands Cancer Institute | Accelerating Next


 

>>Okay, we're here on theCUBE covering HPE Accelerating Next, and with me is Roel Sijstermans, who is the head of IT at the Netherlands Cancer Institute, also known as NKI. Welcome, Roel. >>Thank you very much. Great to be here. >>Hey, what can you tell us about the Netherlands Cancer Institute? Maybe you could talk about your core principles, and also, if you could, weave in your specific areas of expertise. >>Yeah, maybe first an introduction to the Netherlands Cancer Institute. We are one of the top 10 comprehensive cancer centers in the world. And what we do is we combine a hospital for treating patients with cancer and a research institute under one roof. So the discoveries we make within the research we can easily bring back to the clinic, and vice versa. We have about 750 researchers and about 3,000 other employees, doctors, nurses, and my role is to facilitate them at their best with IT. >>Got it. So, I mean, everybody talks about digital transformation; to us, it all comes down to data. So curious how you collect and take advantage of medical data specifically to support NKI's goals. Maybe some of the challenges that your organization faces with the amount of data, the speed of data coming in, just the complexities of data. How do you handle that? >>Yeah, it's a challenge, and we have a really large amount of data. We produce terabytes a day, and we have stored one petabyte of data at this moment. And the challenge is to reuse the data optimally for research and to share it with other institutions. So that needs a flexible infrastructure: a really fast network and a big data storage environment. But the real challenge is not so much the IT, but more the quality of the data. We have a lot of medical systems all producing that data, and how do we combine them and get the data FAIR? So findable, accessible, interoperable and reusable for research purposes.
So I think that's the main challenge: the quality of the data. >>Very common themes that we hear from other customers. I wonder if you could paint a picture of your environment, and maybe you can share where HPE solutions fit in and what value they bring to your organization's mission. >>Yeah, I think it brings a lot of flexibility. What we did with HPE is that we developed a software-defined data center and then a virtual workplace for our researchers and doctors, and that's based on the HPE infrastructure. And what we wanted to build is something that meets the needs of doctors and nurses, but also the researchers: two kind of different blood groups with different needs. But we wanted to create one infrastructure, because we wanted to make the connection between the hospital and the research; that's most important. So HPE helped us not only with the infrastructure itself, but also with designing the whole architecture of it. And, for example, we bought a lot of hardware, and that hardware is really doing its job between 9 and 5, when everyone is working within the institution, but all the other time, in the evening and night hours, it does much less, and the same goes for the redundant environment we have for our healthcare in those dark hours. So what we created together with our partners is what we call "compute by night": we reuse those servers and that GPU capacity for computational research jobs within the research. >>That's how you get flexibility; that's genius. And so we're talking, you said, a lot of hardware: ProLiant, I think Synergy, Aruba networking is in there. How are you using this environment? Actually, the question really is, when you think about NKI's digital transformation, I mean, is this sort of the fundamental platform that you're using?
Maybe you could describe that? >>Yeah, it's the fundamental platform to work on, and what we see is that we now have everything in place for it. But the real challenge is the next steps we are in. So we have a software-defined data center, we are cloud-ready, and the next step is to make the connection to the cloud, to give more automation to our researchers, so they don't have to wait a couple of weeks for IT to do it, but can do it themselves with a couple of clicks. So I think the basis is: we are really flexible, and we have a lot of opportunities for automation, for example. But the next step is to create that business value for our employees. >>That's a great story and a very important mission. Really fascinating stuff. Thanks for sharing this with our audience today. Really appreciate your time. >>Thank you very much. >>Okay, this is Dave Vellante with theCUBE. Stay right there for more great content. You're watching Accelerating Next from HPE.
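The "compute by night" pattern Sijstermans describes, reusing clinical servers and GPUs for research batch jobs outside business hours, can be sketched as a simple admission gate: research jobs are only released to the shared infrastructure when the clinical workday is over. This is an illustrative sketch under assumed hours (9:00-17:00) and invented function names, not NKI's actual scheduler.

```python
from datetime import time

BUSINESS_START = time(9, 0)   # assumed clinical workday start
BUSINESS_END = time(17, 0)    # assumed clinical workday end

def compute_window_open(now):
    """True when clinical systems are largely idle, i.e. outside
    business hours, so research batch jobs may use the hardware."""
    return not (BUSINESS_START <= now < BUSINESS_END)

def admit_jobs(queue, now):
    """Release queued research jobs only during the night window;
    during the day the queue simply waits."""
    return list(queue) if compute_window_open(now) else []
```

A real implementation would live in a batch scheduler (reservation windows, preemption on clinical load), but the core idea is exactly this time-based gate over otherwise idle capacity.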

Published Date : Mar 29 2021



Ecosystems Powering the Next Generation of Innovation in the Cloud


 

>> We're here at the Data Cloud Summit 2020, tracking the rise of the data cloud. And we're talking about the ecosystem powering the next generation of innovation in cloud, you know, for decades, the technology industry has been powered by great products. Well, the cloud introduced a new type of platform that transcended point products and the next generation of cloud platforms is unlocking data-centric ecosystems where access to data is at the core of innovation, tapping the resources of many versus the capabilities of one. Casey McGee is here. He's the vice president of global ISV sales at Microsoft, and he's joined by Colleen Kapase, who is the VP of partnerships and global alliances at Snowflake. Folks, welcome to theCUBE. It's great to see you. >> Thanks Dave, good to see you. Thank you. >> Thanks for having us here. >> You're very welcome. So, Casey, let me start with you please. You know, Microsoft's got a long heritage, of course, working with partners, you're renowned in that regard, built a unbelievable ecosystem, the envy of many in the industry. So if you think about as enterprises, they're speeding up their cloud adoption, what are you seeing as the role and the importance of ecosystem, the ISV ecosystem specifically, in helping make customers' outcomes successful? >> Yeah, let me start by saying we have a 45 year history of partnership, so from our very beginning as a company, we invested to build these partnerships. And so let me start by saying from day one, we looked at a diverse ecosystem as one of the most important strategies for us, both to bring innovation to customers and also to drive growth. And so we're looking to build that environment even today. So 45 years later, focused on how do we zero in on the business outcomes that matter most to customers, usually identified by the industry that they're serving. So really building an ecosystem that helps us serve both the customers and the business outcomes they're looking to drive. 
And so we're building that ecosystem of ISVs on the Microsoft cloud and focused on bringing that innovation as a platform provider through those companies. >> So Casey, let's stay on that for a moment, if we can. I mean, you work with a lot of ISVs and you got a big portfolio of your own solutions. Now, sometimes they overlap with the ISV offerings of your partners. How do you balance the focus on first party solutions and third-party ISV partner solutions? >> Yeah, first and foremost, we're a platform company. So our whole intent is to bring value to that partner ecosystem. Well, sometimes that means we may have offers in market that may compliment one another. Our focus is really on serving the customer. So anytime we see that, we're looking at what is the most desired outcome for our customer, driving innovation into that specific business requirement. So for us, it's always focusing on the customer, and really zeroing in on making sure that we're solving their business problems. Sometimes we do that together with partners like Snowflake. Sometimes that means we do that on our own, but the key for us is really deeply understanding what's important to the customer and then bringing the best of the Microsoft and Snowflake scenarios to bear. >> You know, Casey, I appreciate that. A lot times people say "Dave, don't ask me that question. It's kind of uncomfortable." So Colleen, I want to bring you into the discussion. How does Snowflake view this dynamic, where you're simultaneously partnering and competing sometimes with some of the big cloud companies on the planet? >> Yeah, Dave, I think it's a great question, and really in this era of innovation, so many large companies like Microsoft are so diverse in their product set, it's almost impossible for them to not have some overlap with most of their ecosystem. 
But I think Casey said it really well, as long as we stay laser focused on the customer, and there are a lot of very happy Snowflake customers and happy Azure customers, we really win together. And I think we're finding ways in which we're working better and better together, from a technology standpoint, and from a field standpoint. And customers want to see us come together and bring best of breed solutions. So I think we're doing a lot better, and I'm looking forward to our future, too. >> So Casey, Snowflake, you know, they're really growing, they've got a pretty large footprint on Azure. You're talking hundreds of customers here that are active on that platform. I wonder if you could talk about the product integration points that you kind of completed initially, and then kind of what's on the horizon that you see as particularly important for your joint customers? >> You have to say, so one of the things that I love about this partnership is that, well, we start with what the customer wants. We bring that back into the engineering-level relationship that we have between the two companies. And so that's produced some pretty incredibly rich functionality together. So let me start by saying, you know, we've got eight Azure regions today with nine coming on soon. And so we have a geographic diversity that is important for many of our customers. We've also got a series of engineering-level integrations that we've already built. So that's functionality for Azure Private Link, as well as integration between Power BI, Azure Data Factory, and Azure Data Lake, all of this back again to serve the business outcomes that are required for our customers. So it's this level of integration that I think really speaks to the power of the partnership. So we are intently focused on the democratization of data. So we know that Snowflake is the premier partner to help us do that. 
So getting that right is key to enabling high concurrency use cases with large numbers of businesses, users coming together, and getting the performance they expect. >> Yeah, I appreciate that Casey, because a lot of times I'll, you know, I'll look at the press release. Sometimes we laugh, we call them Barney deals. You know, "I love you. You love me." But I listen for the word engineering and integration. Those are sort of important triggers. Colleen, or Casey too, but I want to start with Colleen. I mean, anything you would add to that, are there things that you guys have worked on together that you're particularly proud of, or maybe that have pushed the envelope and enabled new capabilities for customers where they've given you great feedback? Any examples you can share? >> Great question. And we're definitely focusing on making sure stability is a core value for both of us, so that what we offer, that our customers can trust, is going to work well and be dependable, so that's a key focus for us. We're also looking at how can we advance into the future, what can we do around machine learning, it's an area that's really exciting for a lot of the CXO-level leadership at our customers, so we're certainly focused on that. And also looking at Power BI and the visualization of how do we bring these solutions together as well. I'd also say at the same time, we're trying to make the buying experience frictionless for our customers, so we're also leveraging and innovating with Azure's Marketplace, so that our customers can easily acquire Snowflake together with Azure. And even that is being helpful for our customers. Casey, what are your thoughts, too? >> Yeah, let me add to that. I think the work that we've done with Power BI is pretty, pretty powerful. I mean, ultimately, we've got customers out there that are looking to better visualize the data, better inform decisions that they're making. 
So as much as AI and ML and the inherent power of the data that's being stored within Snowflake is important in and of itself, Power BI really unlocks that and helps drive better decisions, better visualization, and help drive to decision outcomes that are important to the customer. So I love the work that we're doing on Power BI and Snowflake. >> Yeah, and you guys both mentioned, you know, machine learning. I mean, they really are an ecosystem of tools. And the thing to me about Azure, it's all about optionality. You mentioned earlier, Casey, you guys are a platform. So, you know, customer A may want to use Power BI. Another customer might want to use another visualization tool, fine, from a platform perspective, you really don't care, do you? So I wonder Colleen, if we could, and again, maybe Casey can chime in afterwards. You guys, obviously everybody these days, but you in particular, you're focused on customer outcomes. That's the sort of starting point, and Snowflake for sure has built pretty significant experience working with large enterprises and working alongside of Microsoft to get other partners. In your experience, what are customers really looking for out of the two joint companies when they engage with Snowflake and Microsoft, so that one plus one is, you know, much bigger than two. Maybe Colleen, you could start. >> Yeah, I definitely think that what our customers are looking for is both trust and seamlessness. They just want the technology to work. The beauty of Snowflake is our ease of use. So many customers have questions about their business, more so now in this pandemic world than ever before. 
So the seamlessness, the ease of use, the frictionless, all of these things really matter to our joint customers, and seeing our teams come together, too, in the field, to show here's how Snowflake and Azure are better together, in your local area, and having examples of customers where we've had win-wins, which I'd say Casey, we're getting more and more of those every day, frankly, so it's pretty exciting times. And having our sales teams work as a partnership, even though we compete, we know where we play well together, and I see us doing that over and over again, more and more, around the world, too, which is really important as Snowflake pushes forward, beyond the North America geographies into stronger and stronger in the global regions, where frankly, Microsoft's had a long, storied history at. That's very exciting, especially in Europe and Asia. >> Casey, anything you'd add to that? >> Yeah. Colleen, it's well said. I think ultimately, what customers are looking for is that when our two companies come together, we bring new innovation, new ideas, new ways to solve old problems. And so I think what I love about this partnership is ultimately when we come together, whether it's engineering teams coming together to build new product, whether it's our sales and marketing teams out in front of the customers, across that spectrum, I think customers are looking for us to help bring new ideas. And I love the fact that we've engineered this partnership to do just that. And ultimately we're focused on how do we come together and build something new and different. And I think we can solve some of the most challenging problems with the power of the data and the innovation that we're bringing to the table. >> I mean, you know, Casey, I mean, everybody's really quite in awe and amazed at Microsoft's transformation, and really openness and willingness to really, change and lean into some of the big waves. 
I wonder if you could talk about your multi-platform strategy and what problems that you're solving in conjunction with Snowflake. >> Yeah, let me start by saying, you know, I think as much as we appreciate that feedback on the progress that we've been striving for, I mean, we're still learning every day, looking for new opportunities to learn from customers, from partners, and so a lot of what you see on the outside is the result of a really focused culture, really focusing on what's important to our customers, focusing on how do we build diversity and inclusion to everything we do, whether that's within Microsoft, with our partners, our customers, and ultimately, how do we show up as one Microsoft, I call one Microsoft kind of the partner's gift. It's ultimately how do our companies show up together? So I think if you look multi-platform, we have the same concept, right? We have the Microsoft cloud that we're offering out in the marketplace. The Microsoft cloud consists of what we're serving up as far as the platform, consists of what we're serving up for data and AI, modern workplace and business applications. And so this multi-cloud strategy for us is really focused on how do we bring innovation across each of the solution areas that matter most to customers. And so I see really the power of the Snowflake partnership playing in there. >> Awesome. Colleen, are there any examples you can share where, maybe this partnership has unlocked the customer opportunity or unique value? >> Yeah, I can't speak about the customer-specific, but what I can do and say is, Casey and I play very corporate roles in terms of we're thinking about the long-term partnership, we're driving the strategy. 
But hey, look, we'll get called in, we're working a deal right now, it's almost close of the quarter for us, we're literally working on an opportunity right now, how can we win together, how can we be competitive, the customer's CIO has asked us to come together, to work on that solution. Very large, well-known brand. And we're able to get up to the very senior levels of our companies very quickly to make decisions on what do we need to do to be better and stronger together. And that's really what a partnership is about, you can do the long-term plans and the strategics and you can have great products, but when your executives can pick up the phone and call each other to work on a particular deal, for a particular customer's need, I think that's where the power of the partnership really comes together, and that's where we're at. And that's been a growth opportunity for us this year; it wasn't necessarily where we were at before, and I really have to thank Casey for that. He's done a ton, getting us the right glue between our executives, making sure the relationships are there, and making sure the trust is there, so when our customers need us to come together, that dialogue and that shared conviction of putting customers first is there between both companies. So thank you, Casey. >> Oh, thanks, Colleen, the feeling's mutual. >> Well, I think this is key because as I said up front, we've gone from sort of very product-focused to platform-focused. And now we're tapping the power of the ecosystem. That's not always easy to get all the parts moving together, but we live in this API economy. You could say "Hey, I'm a company, everything's going to be homogeneous. Everything is going to be my stack." And maybe that's one way to solve the problem, but really that's not how customers want to solve the problem. Casey, I'll give you the last word. >> Yeah, let me just end by saying, you know, first off the cultures between our two companies couldn't be more well aligned.
So I think ultimately when you ask yourself the question, "What do we do to best show up in front of our customers?" It is, focus on their business outcomes, focus on the things that matter most to them. And this partnership will show up well. And I think ultimately our greatest opportunity is to tap into that need, to that interest. And I couldn't be happier about the partnership and the fact that we are so well aligned. So thank you for that. >> Well guys, thanks very much for coming on theCUBE and unpacking some of the really critical aspects of the ecosystem. It was really a pleasure having you. >> Thank you so much for having us. >> Okay, and thank you for watching. Keep it right there. We've got more great content coming your way at the Data Cloud Summit.

Published Date : Nov 19 2020


Expert Reaction | Workplace Next


 

>> From around the globe, it's theCUBE, with digital coverage of Workplace Next, made possible by Hewlett Packard Enterprise. >> Thanks very much. Welcome back to theCUBE 365 coverage of Workplace Next from HPE. I'm your host, Rebecca Knight. There was some great discussion in the past panel, and we now are coming to you for some reaction. We have a panel of three people: Harold Senate in Miami. He is a prominent workplace futurist and influencer. Thanks so much for joining us, Harold. >> My pleasure. Thanks for having me. >> We have Herbert Loaning Ger. He is a digital workplace expert and currently CIO of the University of Salzburg. Thanks so much for coming on the show. >> Thank you very much for the invitation. >> And last but not least, Chip McCullough. He is the executive director of Partner Ecosystems, and he is coming to us from Tampa, Florida. >> Thank you, Rebecca. Great to be here. >> Right. Well, I'm really looking forward to this. We're talking today about the future of work and COVID. The pandemic has certainly transformed so much about the way we live and the way we work. It's changed the way we communicate, the way we collaborate, the way we accomplish what we want to accomplish. I want to start with you, Harold. Can you give us, um, broad-brush thoughts about how this pandemic has changed the future of work? >> Well, this is quite interesting, because we were talking about the future of work as something that was going to come in the future. But the future was very, very far away from where we are right now. Now, suddenly, we brought the future of work to our current reality. COVID transformed, or accelerated, the digital transformation that was already happening. So digital transformation was something that we were pushing, or influencing a lot, because it's a need, because everything is becoming digital. All our life has transformed because of the implementation of new technologies in all areas.
But for companies, what was quite interesting is the fact that they were thinking about when to implement, or to start implementing, new solutions in terms of technology. And suddenly the decision was: now we are in this emergency mode that COVID, that the pandemic, created in our organizations, and this prompted and pushed a lot of the decisions that we were thinking maybe to start in the future to happen right now. But this also brought a lot of issues in terms of how we deal with customers, because business continuity is our priority. How we deal with employees, how we make sure that employees, customers, and we in the management are all connected in this strategy and work together to provide our services to our customers. >> So you're talking about COVID as really a forcing mechanism that has really accelerated the digital transformation at so many companies in the U.S. and also around the world. Um, we heard from the previous panel that there was this "yes, we can" attitude, this idea that we can make this happen, um, things that were ordinarily maybe too challenging, or something that we pushed a little bit further down the road. How pervasive is that attitude? And is it "yes, we can" and "yes, we have to"? >> Absolutely, absolutely. You know, here in Miami, in Florida, we are used to having hurricanes. When we have a hurricane, everybody gets in alarm mode, emergency mode, and everybody starts running. We work on business continuity, implementing the proper policies. But at the same time we think, okay, it'll be for a couple of weeks, no more than that. Now, in those situations, we really see this positive attitude. Everybody wants to work together. Everybody wants to push to make things happen. Everybody works in a very collaborative mode.
Everybody really wants to team up, bring ideas, and bring the energy that is necessary so we can make it happen. So I would say that is something the pandemic brought to the new situation, where we don't know how long this will take, maybe a couple of months more, maybe a year, maybe more than that; we still don't know. But what we really know is that the digital transformation and the future of work that we thought were a long way off are now something we're not going back from. >> Chip, I want to bring you in here. We're hearing that the future of work is now, and this shift toward the new normal. I want to hear you talk a little bit about what you're seeing in terms of increased agility and adaptability and flexibility. How is that playing out, particularly with regard to technology? >> Yeah, I think the "yes, we can" attitude, we see that all over the place, and in many instances it's like heroic efforts. And we heard that from the panel, right? Literally heroic efforts happening, and people are doing that. It reminds me of an example with the UK National Health System, where we rolled out 1.2 million Microsoft Teams users in seven days. I mean, those are the kinds of things we're seeing all over the place, and now that "yes, we can" approach is kind of sinking in. And I think Harold was kind of talking about that, right? It's sinking into how we're looking at technology every day. We're seeing things like, you know, the acceleration of the move to cloud, for example, a substantial acceleration of the movement to the cloud, a substantial acceleration to be more agile, and we're just seeing that in all of our work now. And that's the focus for organizations; they want to know now: how do we capture this amazing innovation that happened as a result of this event and take it forward in our organizations going forward? >> And so they're thinking about how they capture this.
But Herbert, at this time of tremendous uncertainty, and at a time when the economic recovery, the global economic recovery, is stop-and-start, how are you thinking about prioritizing? What kinds of criteria are you using, and how are you evaluating what needs to happen? >> I think that's very simple, and I use my standard procedure here. I think it must be possible for the users, and therefore for the companies, to work and be productive. Technology should provide the best possible support here, for example, through the state of our digital workplace. But in these uncertain times, we have some new demands at the moment. That means we have new priorities, for example, conducting team workshops online. Normally, we would conduct such events in special conference rooms or in a hotel, in the real world. For example, we now have the requirement to create all of our workshops, and also the documentation of them, online, instead of using, for example, physical pinboards to group topics and so on. So we saw a change here for larger events, too; we need the functions for breakout rooms and so on. And honestly, at the moment, big events in the virtual world will still not be the same as in the physical world, for example big conferences, technology conferences, and so on. >> No, absolutely. And what you're describing is this hybrid world, in which some people are going into offices and others of us are not, and we are doing what we need to do in digital formats. I want to ask you, Chip, about this hybrid workplace. This appears to be the construct that we're seeing more and more in the marketplace. We heard Gen. Brent of HP talking about this in the previous panel. How do you see this playing out in the next 12 to 24 months and beyond, in our pandemic and post-pandemic lives? And what do you see as the primary advantages and drawbacks of having this hybrid workforce?
>> Well, I think it's very interesting, right? And at Accenture, we were very lucky, because we are 500,000 employees that have been fully, you know, kind of hybrid-work or remote enabled, even going into the pandemic. And many other companies and organizations did not have that in place, right? The key to me is you had this protective environment we'll call the office, right, where everybody went in to work, they had their technology there, the security was in place around that office, and everything was kind of focused on that office. And all of a sudden, that office, it didn't disappear, but it became distributed. And the key behind it: we are a big user of Aruba Technologies within Accenture. And it became very important, in my view, to be able to take a lot of the concepts that you brought into the office and distribute them out. So we have offerings where we're using technologies such as Aruba's remote access points and virtual desktop technologies, right, that enable us to take all the rules and capability and functionality and security that you had in that nice, controlled office environment and roll it out to the workers wherever they may be sitting now, whether it be at home, whether it be sitting on the road someplace, um, traveling, whatever. And that's really important. And I did see a couple of instances with organizations where they had security incidents because of the way they rolled out that office of the future. So it's really important as we go forward that not only do we look at the enablement, but we also make sure we're securing that to our principles and standards going forward. >> So the principles and standards, I want to talk to you a little bit about that, Harold. There are the security elements that we just heard about. But there's also the culture, the workplace culture, the mission, the values of the organization when employees are not co-located.
When we are talking about distributed teams, how do you make sure that those values are consistent throughout the organization, and that employees do feel that they are part of something bigger, even if they're not in the cubicle next door or just in the hallway? >> That is a great question, because what happens now is that we still need to find a balance in the way we work. Maybe some company says we need to fill the day with video conferences so we can see each other, so we can be sure of what we're doing and that we're connected. But we also need to get some balance, because we need to make sure that we have time to do the job. Everybody needs to do their job, but also to communicate with each other, and communication in the whole group, in several video conferences in the day, maybe is not enough, or not effective, for that communication. So we need to find the right balance, because we have a lot of tools, a lot of technology, that can help us in this moment to make sure that we are sharing our values, that common set of values that defines how organizations need to be present in every interaction that we have with our employees. We also need to make sure that we're taking care of the needs of employees, because when we see it from an employee standpoint, we need to understand the context we're working in today. Instead of working at the office, we're working from home, at home, always. We also have our partners and children in the same place, also connected with work or with distance learning, so there is a new environment, the home environment, that from a company perspective also needs to be taken into consideration. Now, how do we share our values? Well, something that we always try to understand is that every crisis brings an opportunity with it. So we should see.
This also is an opportunity to refocus our strategies on culture, to emerge stronger, and to put everybody in that "yes" attitude, with a real desire to make things happen every day in this time, in this same symphony. But how do we do that? It's also an opportunity for delivering training. It's an opportunity to make sure that we identify the skills that are needed for the future of work in the digital world, because there is a lot of digital training that is needed, and those skills that are not exactly technical are needed too, from the human perspective, to make sure that we are creating a culture strong enough that, even working in a hybrid or remote mode, we can be strong in the market. >> So I want to let everyone here have the last word, picking up on that last point that this is an exceedingly complex time for everyone, unprecedented. There's so much uncertainty. What is your best advice for leaders as they navigate their employees through this hybrid remote work environment? Um, I want to start with you, Herbert. >> In my opinion, I think communication is very important. So communicate with your team and your employees much more than in the past, and be clear in your statements and in your answers. I think that's very important for the team. >> Chip, best advice? >> So, you know, it feels like we've jumped maybe two years ahead in innovation, and I think, you know, from an organization standpoint, accept that, you know, embrace it, capture it. But then also, at the same time, make sure you're applying your principles of security and those pieces to it, so do it in the right way, but embrace the change that's happened. >> Harold, last best advice for managers during this time? >> Communication is absolutely essential.
Now let's look for new ways of communicating; it's not only sending emails, not only sending text messages. We need to find ways to connect with each other in this remote working environment, and maybe come back to picking up the phone and having a chat conversation with our employees who are working remotely. Doing that fairly frequently, I would say, would be very effective to improve communication and to create this environment where everybody feels part of an organization. >> Everyone feels part of the team. Well, thank you so much, all of you, to Harold, Herbert, and Chip. I really appreciated a great conversation here. >> My pleasure. My pleasure. >> Stay tuned for more of theCUBE 365 coverage of HPE Workplace Next.

Published Date : Nov 10 2020


Intro | Workplace Next


 

>> From around the globe, it's theCUBE, with digital coverage of Workplace Next, made possible by Hewlett Packard Enterprise. >> Welcome to Workplace Next, brought to you by theCUBE 365 and sponsored by Hewlett Packard Enterprise. We've got a great show lineup for you today. If you're like me, you've had to change the way you work this year, and so have your teams. A lot of work has gone remote, of course, and very quickly; we've had to rethink how we operate on a day-to-day basis. And that's great if, like me, you can do your job remotely. But let's not forget there are a lot of industries where going remote isn't an option, or at least it's not as much of an option. But the show has to go on, of course, safely. This has brought about a major rethink, as leaders everywhere try to figure out how to adapt. How do you maintain productivity now and also position for the future? So let me run through today's lineup. First, we'll look at some of these leaders who are adapting. We'll hear how they've taken work remote, securely and unbelievably quickly, and how they're keeping people safe when the work has to happen in person, in proximate locations. We'll look at what they've done over the last six months or so and what learnings they'll take forward. Then we've got some great workplace experts to make sense of it all, to talk through what the prescription is going forward. What's this hybrid world going to look like? And not just to survive the pandemic, but to use this moment as a launch point to a transformation of the way in which we work that will serve us in the years and the decades to come. And finally, we'll delve into the practical. We'll look at some of the solutions that are available today and bring people and technology together with processes to help you realize this transformation. We have HPE's best experts lined up to answer your questions on what the practical steps are to reinvent the ways in which you work in these unpredictable times.
Whether you want to talk about security, IoT at the edge, AI technologies for safe workplaces, or any of the things that you need to do to navigate change successfully, they've been there, they've done that, and they're here to help. So with that, let's go to our first panel. I'll hand it over to our moderator, Maribel Lopez. She's with the independent analyst firm Lopez Associates and a friend of theCUBE. Over to you, Maribel.

Published Date : Nov 10 2020
