Scott Baker, IBM Infrastructure | VMware Explore 2022


 

(upbeat music) >> Welcome back everyone to theCUBE's live coverage in San Francisco for VMware Explore. I'm John Furrier with my host, Dave Vellante. Two sets, three days of wall to wall coverage. This is day two. We got a great guest, Scott Baker, CMO and VP of IBM Infrastructure. Great to see you. Thanks for coming on. >> Hey, good to see you guys as well. It's always a pleasure. >> Good time last night at your event? >> Great time last night. >> It was really well-attended. IBM always has the best food, so that was good, and great props, magicians, comedians, it was really a lot of fun. Good job. >> Yeah, I'm really glad you came on. One of the things we were chatting about before we came on camera was how much has changed. We've been covering IBM storage since back in the Edge days, when they had that event. Storage is at the center of all the conversations, cyber security- >> Right? >> ... But it's not just pure cyber. It's still important there. And data and the role of multi-cloud and hybrid cloud, data and security, are the two hottest areas, that I won't say are unresolved, but are resolving themselves. And people are talking. They're the most highly discussed topics. >> Right. >> Those two areas. And it all lands on storage. >> Yeah, it sure does. And in fact, what I would even go so far as to say is, people are beginning to realize the importance that storage plays, as the data custodian for the organization. Right? Certainly you have humans that are involved in setting strategies, but ultimately whatever those policies are that get applied, have to be applied to a device that must act as a responsible custodian for the data it holds. >> So what's your role at IBM and the infrastructure team? Storage is only one of the areas. >> Right. >> You're here at VMware Explore. What's going on here with IBM? Take us through what you're doing there at IBM, and then here at VMware. What's the conversations? >> Sure thing. I have the distinct pleasure to run both product marketing and strategy for our storage line. That's my primary focus, but I also have responsibility for the mainframe software, so the Z System line, as well as our Power server line, and our technical support organization, or at least the services side of our technical support organization. >> And one of the things that's going on here, lot of noise going on- >> Is that a bird flying around? >> Yeah. >> We got fire trucks. What's changed? 'Cause right now with VMware, you're seeing what they're doing. They've got the platform, under-the-hood, developer focus. It's still an ops game. What's the relationship with VMware? What are you guys talking about here? What are some of the conversations you're having here in San Francisco? >> Right. Well, IBM has been a partner with VMware for at least the last 20 years. And VMware does, I think, a really good job of trying to create a working space for everyone to be an equal partner with them. It can be challenging, too, if you want to stand out with your unique value to a customer. So one of the things that we've really been working on is, how do we partner much stronger? When we look at the customers that we support today, what they're looking for isn't just a solid product. They're looking for a solid ecosystem partnership. So we really lean in on that 20 years of partnership experience that we have with VMware. 
So one of the things that we announced was actually being one of the first VMware partners to bring both a technical innovation delivery mechanism, as well as technical services, alongside VMware technologies. I would say that was one of the first things that we really leaned in on, as we looked out at what customers are expecting from us. >> So I want to zoom out a little bit and talk about the industry. I've been following IBM since the early 1980s. I was trained in the mainframe market, and so we've seen a lot of things come back to the mainframe, but we won't go there. But prior to Arvind coming on, it seemed like, okay, storage, infrastructure, yeah it's good business, and we'll let it throw off some margin. That's fine. But it's all about services and software. Okay, great. With Arvind, and obviously Red Hat, the whole focus shifted to hybrid. We were talking, I think yesterday, about okay, where did we first hear hybrid? Obviously we heard that a lot from VMware. I heard it actually first, early on anyway, from IBM, talking hybrid. Some of the storage guys at the time. Okay, so now all of a sudden there's the realization that to make hybrid work, you need software and hardware working together. >> Right. >> So it's now a much more fundamental part of the conversation. So when you look out, Scott, at the trends you're seeing in the market, when you talk to customers, what are you seeing and how is that informing your strategy, and how are you bringing together all the pieces? >> That's a really awesome question because it always depends on who, within the organization, you're speaking to. When you're inside the data center, when you're talking to the architects and the administrators, they understand the value in, and the necessity for, a hybrid-cloud architecture. Something that's consistent at the edge, on-prem, and in the cloud. Something that allows them to expand the level of control that they have, without having to specialize on equipment and having to redo things as you move from one medium to the next. As you go upstack in that conversation, what I find really interesting is how leaders are beginning to realize that private cloud or on-prem, multi-cloud, super cloud, whatever you call it, whatever's in the middle, those are just deployment mechanisms. What they're coming to understand is it's the applications and the data that are hybrid. And so what they're looking to IBM to deliver, and something that we've really invested in on the infrastructure side, is, how do we create bidirectional application mobility? Making it easy for organizations, whether they're using containers, virtual machines, or just bare metal, how do they move that data back and forth as they need to, and not just back and forth from on-prem to the cloud, but effectively, how do they go from cloud to cloud? >> Yeah. One of the things I noticed is your pin, it says I love AI, with the I next to IBM and all these (indistinct) in there. AI, remember the quote from IBM is, "You can't have AI without IA." Information architecture. >> Right. >> Rob Thomas. >> Rob Thomas (indistinct) the sound bites. But that brings up the point about machine learning and some of these things that are coming down the pike. How is your area developing the smarts and the brains around leveraging AI in the systems itself? We're hearing more and more software being coded into the hardware. You see silicon advances. All this is kind of, not changing it, but bringing back the urgency of, hardware matters. >> That's right. 
>> At the same time, it's still software too. >> That's right. So let's connect a couple of dots here. We talked a little bit about the importance of cyber resiliency, so let's talk a little bit about how we use AI in that matter. So, if you look at the direct flash modules that are in the market today, or the SSDs that are in the market today, just standard-capacity drives. If you look at the flash core modules that IBM produces, we actually treat that as a computational storage offering, where you store the data, but it's got intelligence built into the processor, to offload some of the responsibilities of the controller head. The ability to do compression, single (indistinct), deduplication, you name it. But what if you can apply AI at the controller level, so that signals that are being derived by the flash core module itself, that look anomalous, can be handed up to an intelligence to say, "Hey, I'm all of a sudden getting encrypted writes from a host that I've never gotten encrypted writes from. Maybe this could be a problem." And then imagine if you connect that inferencing engine to the rest of the IBM portfolio: "Hey, QRadar. Hey, IBM Guardium. What's going on on the network? Can we see some correlation here?" So what you're going to see IBM infrastructure continue to do is invest heavily into entropy and the ability to measure IO characteristics with respect to anomalous behavior, and be able to report against that. And the trick here, because the array technically doesn't know if it's under attack or if the host just decided to turn on encryption, the trick here is using the IBM product relationships, and ecosystem relationships, to do correlation of data to determine what's actually happening, to reduce your false positives. >> And have that pattern of data too. It's all access to data too. Big time. >> That's right. >> And that innovation comes out of IBM R&D? Does it come out of the product group? Is it IBM Research that then trickles its way in? Is it the storage innovation? Where's that come from? Where's that bubble up? That partnership? >> Well, I've got to tell you, it doesn't take very long in this industry before your counterpart, your competitor, has a similar feature. Right? So we're always looking for, what's the next leg? What's the next advancement that we can make? We knew going into this process that we had plenty of computational power that was untapped on the FPGA, the processor running on the flash core module. Right? So we thought, okay, well, what should we do next? And we thought, "Hey, why not just set this thing up to start watching IO patterns, do calculations, do trending, and report that back?" And what's great about what you brought up too, John, is that it doesn't stay on the box. We push that upstack through the AIOps architecture. So if you're using Turbonomic, and you want to look application stack down, to know if you've got threat potential, or your attack surface is open, you can make some changes there. If you want to look at it across your infrastructure landscape with Storage Insights, you could do that. But our goal here is to begin to make the machine smarter and aware of impacts on the data, not just on the data it holds onto, but usage, to move it into the appropriate tier, different write activities or read activities or delete activities that could indicate malicious efforts that are underway, and then begin to start making more autonomous, how about managed autonomous, responses? 
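Scott's entropy point can be made concrete with a small sketch. Encrypted payloads are statistically close to random, so a sudden jump in the Shannon entropy of a host's writes is a usable ransomware signal. This is a generic illustration of that technique, not IBM's FlashCore or QRadar implementation; the threshold and payload sizes are assumptions.

```python
import math
import os
from collections import Counter

def shannon_entropy(block: bytes) -> float:
    """Shannon entropy in bits per byte: roughly 3-5 for text,
    approaching 8 for compressed or encrypted payloads."""
    if not block:
        return 0.0
    n = len(block)
    return -sum((c / n) * math.log2(c / n) for c in Counter(block).values())

def flag_anomalous_writes(writes, threshold=7.5):
    """Return (host, entropy) pairs whose write payloads look encrypted.
    `writes` is an iterable of (host, payload-bytes) pairs; the 7.5 bits/byte
    threshold is an assumption for illustration, not a tuned production value."""
    return [(host, round(shannon_entropy(payload), 2))
            for host, payload in writes
            if shannon_entropy(payload) >= threshold]

# A text-like payload vs. a random ("encrypted-looking") one
sample = [("host-a", b"quarterly sales report " * 40),
          ("host-b", os.urandom(4096))]
print(flag_anomalous_writes(sample))  # only host-b trips the entropy alarm
```

As Scott notes, an entropy spike alone cannot distinguish an attack from a host legitimately turning on encryption, which is why the correlation step with network and security tooling matters.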
I don't want to turn this into a, oh, it's smart, just turn it on and walk away and it's good. I don't know that we'll ever get there just yet, but the important thing here is, what we're looking at is, how do we continually safeguard and protect that data? And how do we drive features in the box that remove more and more of the day-to-day responsibility from the administrative staff, who are technically hired, really, to service and solve for bigger problems in the enterprise, not to be a specialist and have to manage one box at a time. >> Dave mentioned Arvind coming on, the new CEO of IBM, and the Red Hat acquisition and that change. I'd like to get your personal perspective, or industry perspective, so take your IBM hat off for a second and put the Scott-experience-in-the-industry hat on. The transformation at the customer level right now is robust, to use that word. I don't want to say chaotic, but it is chaotic. They talk about cloud chaos here at VMware, it's a big part of their messaging, but it's changing the business model, how things are consumed. You're seeing new business models emerge. So IBM has a lot of older storage systems, you're transforming, the company's transforming. Customers are also transforming, so that's going to change how people market products. >> Right. >> For example, we know that developers and DevOps love self-service. Why? Because they don't want to install it. Let me go faster. And they want to get rid of it if it doesn't work. Storage is infrastructure and still software, so how do you see, in your mind's eye, with all your experience, the vision of how to market products that are super important, that are infrastructure products, that have to be put into play for really new architectures that are going to transform businesses? It's not as easy as saying, "Oh, we're going to go to market and sell something." The old way. >> Right. >> This shift is happening, and I don't think there's an answer yet, but I want to get your perspective on that. Customers want to hear the storage message, but it might not be speeds and feeds. Maybe it is. Maybe it's not. Maybe it's solutions. Maybe it's security. There are multiple touch points now that you're dealing with at IBM for the customer, without becoming just a storage thing or just- >> Right. >> ... or just hardware. I mean, hardware does matter, but what's- >> Yeah, no, you're absolutely right, and I think what complicates that too is, if you look at the buying centers around a purchase decision, that's expanded as well, and so as you engage with a customer, you have to be sensitive to the message that you're telling, so that it touches the needs or the desires of the people that are all sitting around the table. Generally what we like to do when we step in and we engage isn't so much to talk about the product. At some point, maybe later in the engagements, the importance of speeds, feeds, interconnectivity, et cetera, those do come up. Those are a part of the final decision, but early on it's really about outcomes. What outcomes are you delivering? This idea of being able to deliver, if you use the term zero trust, a cyber-resilient storage capability as a part of a broader security architecture that you're putting into place, to help that organization, that certainly comes up. We also hear conversations with customers about, or requests from customers about, how do the parts of IBM themselves work together? Right? 
And I think a lot of that, again, continues to speak to, what kind of outcome are you going to give to me? Here's a challenge that I have. How are you helping me overcome it? And that's a combination of IBM hardware, software, and the services side, where we really have an opportunity to stand out. But the thing that I would tell you that's probably most important is, the engagement that we have up and down the stack in the market perspective always starts with, what's the outcome that you're going to deliver for me? And then that drags with it the story that would be specific to the gear. >> Okay, so let's say I'm a customer, and I'm buying into a zero trust architecture, but it's going to be somewhat of a long-term plan, but I have a tactical need. I'm really nervous about ransomware, and I don't feel as though I'm prepared, and I want an outcome that protects me. What are you seeing? Are you seeing any patterns? I know it's going to vary, but are you seeing any patterns, in terms of best practice to protect me? >> Man, the first thing that we wanted to do at IBM is divorce ourselves from the company as we thought through this. And what I mean by that is, we wanted to do what's right, on day zero, for the customer. So we sat back, using the experience that we've been able to amass going through various recovery operations and helping customers get through a ransomware attack. And we realized, "Hey, what we should offer is a free cyber resilience assessment." So, from the storage side, we like to look at what we offer to the customer as following the NIST framework. And most vendors will really lean in hard on the respond and recover side of that, as you should. But that means that there are four other steps that need to be addressed, and that free cyber resilience assessment, it's a consultative engagement that we offer. What we're really looking at doing is helping you assess how vulnerable you are. How big is that attack surface? And coming out of that, we're going to give you a vendor-agnostic report that says, here's your situation, here's your grade or your level of risk and vulnerability, and then here's a prioritized roadmap of where we would recommend that you go off and start solving to close up whatever the gaps or the risks are. Now you could say, "Hey, thanks, IBM. I appreciate that. I'm good with my storage vendor today. I'm going to go off and use it." Now, we may not get some kind of commission check. We may not sell the box. But what I do know is that you're going to walk away knowing the risks that you're in, and we're going to give you the recommendations to get started on closing those up. And that helps me sleep at night. >> That's a nice freebie. >> Yeah. >> Yeah, it really is, 'cause you guys have got deep expertise in that area. So take advantage of that. >> Scott, great to have you on. Thanks for spending time out of your busy day. Final question: put a plug in for your group. What are you communicating to customers? Share with the audience here. You're here at VMware Explore, the new rebranded- >> Right? >> ... multi-cloud, hybrid cloud, steady state. There are three levels of transformation: virtualization, hybrid cloud, DevOps, now- >> Right? >> ... multi-cloud, so they're in chapter three of their journey- >> That's right. >> Really innovative company, like IBM, so put the plug in. What's going on in your world? Take a minute to explain what you want. >> Right on. So here we are at VMware Explore, really excited to be here. 
We're showcasing two aspects of the IBM portfolio: all of the releases and announcements that we're making around IBM Cloud. In fact, you should come check out the product demonstration for IBM Cloud Satellite. And I don't think they've coined it this, but I like to call it the VMware edition, because it has all of the VMware services and tools built into it, to make it easier to move your workloads around. We certainly have the infrastructure side on the storage, talking about how we can help organizations not only accelerate their deployments in, let's say, Tanzu or containers, but even how we help them transform the application stack that's running on top of their virtualized environment in the most consistent and secure way possible. >> Multiple years of relationships with VMware. IBM, VMware together. Congratulations. >> That's right. >> Thanks for coming on. >> Hey, thanks (indistinct). Thank you very much. >> A lot more live coverage here at Moscone West. This is theCUBE. I'm John Furrier with Dave Vellante. Thanks for watching. Two more days of wall-to-wall coverage continuing here. Stay tuned. (soothing music)

Published Date : Aug 31 2022


Breaking Analysis: How Snowflake Plans to Change a Flawed Data Warehouse Model


 

>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> Snowflake is not going to grow into its valuation by stealing the croissant from the breakfast table of the on-prem data warehouse vendors. Look, even if Snowflake got 100% of the data warehouse business, it wouldn't come close to justifying its market cap. Rather, Snowflake has to create an entirely new market based on completely changing the way organizations think about monetizing data. Every organization I talk to says it wants to be, or many say they already are, data-driven. Why wouldn't you aspire to that goal? There's probably nothing more strategic than leveraging data to power your digital business and create competitive advantage. But many businesses are failing, or I predict will fail, to create a true data-driven culture because they're relying on a flawed architectural model formed by decades of building centralized data platforms. Welcome everyone to this week's Wikibon Cube Insights powered by ETR. In this Breaking Analysis, I want to share some new thoughts and fresh ETR data on how organizations can transform their businesses through data by reinventing their data architectures. And I want to share our thoughts on why we think Snowflake is currently in a very strong position to lead this effort. Now, on November 17th, theCUBE is hosting the Snowflake Data Cloud Summit. Snowflake's ascendancy and its blockbuster IPO have been widely covered by us and many others. Now, since Snowflake went public, we've been inundated with outreach from investors, customers, and competitors that wanted to either better understand the opportunities or explain why their approach is better or different. And in this segment, ahead of Snowflake's big event, we want to share some of what we learned and how we see it. Now, theCUBE is getting paid to host this event, so I need you to know that, and you can draw your own conclusions from my remarks. But neither Snowflake nor any other sponsor of theCUBE or client of SiliconANGLE Media has editorial influence over Breaking Analysis. The opinions here are mine, and I would encourage you to read my ethics statement in this regard. I want to talk about the failed data model. The problem is complex, I'm not debating that. Organizations have to integrate data and platforms with existing operational systems, many of which were developed decades ago. There's a culture and a set of processes that have been built around these systems, and they've been hardened over the years. This chart here tries to depict the progression of the monolithic data source, which, for me, began in the 1980s when Decision Support Systems, or DSS, promised to solve our data problems. The data warehouse became very popular and data marts sprung up all over the place. This created more proprietary stovepipes with data locked inside. The Enron collapse led to Sarbanes-Oxley. Now, this tightened up reporting, and the requirements associated with that breathed new life into the data warehouse model. But it remained expensive and cumbersome, I've talked about that a lot, like a snake swallowing a basketball. The 2010s ushered in the big data movement, and data lakes emerged. With Hadoop, we saw the idea of no schema on write, where you put structured and unstructured data into a repository and figure it all out on read. 
What emerged was a fairly complex data pipeline that involved ingesting, cleaning, processing, analyzing, preparing, and ultimately serving data to the lines of business. And this is where we are today, with hyper-specialized roles around data engineering, data quality, and data science. There's lots of batch processing going on, and Spark has emerged to reduce the complexity associated with MapReduce, and it definitely helped improve the situation. We're also seeing attempts to blend in real-time stream processing with the emergence of tools like Kafka and others. But I'll argue that in a strange way, these innovations actually compound the problem. And I want to discuss that, because what they do is heighten the need for more specialization, more fragmentation, and more stovepipes within the data life cycle. Now, in reality, and it pains me to say this, the outcome of the big data movement, as we sit here in 2020, is that we've created thousands of complicated science projects that have once again failed to live up to the promise of rapid, cost-effective time to insights. So, what will the 2020s bring? What's the next silver bullet? You hear terms like the lakehouse, which Databricks is trying to popularize, and I'm going to talk today about data mesh. These are efforts that look to modernize data lakes and sometimes merge the best of the data warehouse and second-generation systems into a new paradigm that might unify batch and stream frameworks. And this definitely addresses some of the gaps, but in our view, it still suffers from some of the underlying problems of previous-generation data architectures. In other words, if the next-gen data architecture is incremental, centralized, rigid, and primarily focused on making the technology to get data in and out of the pipeline work, we predict it's going to fail to live up to expectations again. Rather, what we're envisioning is an architecture based on the principles of distributed data, where domain knowledge is the primary target citizen, and data is not seen as a by-product, i.e., the exhaust of an operational system, but rather as a service that can be delivered in multiple forms and use cases across an ecosystem. This is why we often say the data is not the new oil. We don't like that phrase. A specific gallon of oil can either fuel my home or lubricate my car engine, but it can't do both. Data does not follow the same laws of scarcity as natural resources. Again, what we're envisioning is a rethinking of the data pipeline and the associated cultures to put the data needs of the domain owner at the core and provide automated, governed, and secure access to data as a service at scale. Now, how is this different? Let's take a look and unpack the data pipeline today and look deeper into the situation. You all know this picture that I'm showing. There's nothing really new here. The data comes from inside and outside the enterprise. It gets processed, cleansed, or augmented so that it can be trusted and made useful. Nobody wants to use data that they can't trust. And then we can add machine intelligence and do more analysis, and finally deliver the data so that domain-specific consumers can essentially build data products and services or reports and dashboards or content services, for instance an insurance policy, a financial product, a loan. These are packaged and made available for someone to make decisions on or to make a purchase. And all the metadata associated with this data is packaged along with the dataset. 
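To make the linear shape of this pipeline concrete, here is a minimal sketch in which each specialized stage hands off to the next and the domain consumer only sees the final packaged product. The stage functions and field names are invented for illustration; they stand in for entire teams and toolchains.

```python
# Each hyper-specialized stage hands off to the next; the domain consumer only
# sees data at the very end of the chain. Stage names are illustrative.

def ingest(sources):                 # data engineering
    return [record for source in sources for record in source]

def cleanse(records):                # data quality: make the data trustworthy
    return [r for r in records if r.get("valid", True)]

def enrich(records):                 # machine intelligence / analysis
    return [{**r, "score": len(str(r.get("payload", "")))} for r in records]

def serve(records):                  # package data plus metadata for consumers
    return {"dataset": records,
            "metadata": {"stages": ["ingest", "cleanse", "enrich", "serve"]}}

# Any new source or domain requirement must be threaded through every stage,
# in order: the pipeline, not the domain, is the unit of change.
product = serve(enrich(cleanse(ingest([[{"payload": "policy-123", "valid": True}]]))))
print(product["metadata"]["stages"])
```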
Now, we've broken down these steps into atomic components over time so we can optimize on each and make them as efficient as possible. And down below, you have these happy stick figures. Sometimes they're happy. But they're highly specialized individuals and they each do their job, and they do it well, to make sure that the data gets in, gets processed, and gets delivered in a timely manner. Now, while these individual pieces seemingly are autonomous and can be optimized and scaled, they're all encompassed within the centralized big data platform. And it's generally accepted that this platform is domain agnostic. Meaning the platform is the data owner, not the domain-specific experts. Now, there are a number of problems with this model. The first: while it's fine for organizations with a smaller number of domains, organizations with a large number of data sources and complex domain structures struggle to create a common data parlance, for example, in a data culture. Another problem is that, as the number of data sources grows, organizing and harmonizing them in a centralized platform becomes increasingly difficult, because the context of the domain and the line of business gets lost. Moreover, as ecosystems grow and you add more data, the processes associated with the centralized platform tend to get further genericized. They again lose that domain-specific context. Wait (chuckling), there are more problems. Now, while in theory organizations are optimizing on the piece parts of the pipeline, the reality is, as the domain requires a change, for example, a new data source, or an ecosystem partnership requires a change in access or processes that can benefit a domain consumer, the reality is the change is subservient to the dependencies and the need to synchronize across these discrete parts of the pipeline, or actually, orthogonal to each of those parts. In other words, in actuality, the monolithic data platform itself remains the most granular part of the system. Now, when I complain about this faulty structure, some folks tell me this problem has been solved. That there are services that allow new data sources to really easily be added. A good example of this is Databricks Ingest, which is an auto loader. And what it does is simplify the ingestion into the company's Delta Lake offering. And rather than centralizing in a data warehouse, which struggles to efficiently allow things like machine learning frameworks to be incorporated, this feature allows you to put all the data into a centralized data lake. The problem that I see with this is, while the approach definitely minimizes the complexity of adding new data sources, it still relies on this linear end-to-end process that slows down the introduction of data sources from the domain consumer side of the pipeline. In other words, the domain expert still has to elbow her way into the front of the line, or the pipeline, in this case, to get stuff done. And finally, the way we are organizing teams is a point of contention, and I believe is going to continue to cause problems down the road. Specifically, we've again optimized on technology expertise, where, for example, data engineers, while really good at what they do, are often removed from the operations of the business. Essentially, we created more silos and organized around technical expertise versus domain knowledge. 
As an example, a data team has to work with data that is delivered with very little domain specificity, and serve a variety of highly specialized consumption use cases. All right. I want to step back for a minute and talk about some of the problems that people bring up with Snowflake, and then I'll relate it back to the basic premise here. As I said earlier, we've been hammered by dozens and dozens of data points, opinions, and criticisms of Snowflake. And I'll share a few here. But I'll post a deeper technical analysis from a software engineer that I found to be fairly balanced. There are five Snowflake criticisms that I'll highlight. And there are many more, but here are some that I want to call out. Price transparency. I've had more than a few customers tell me they chose an alternative database because of the unpredictable nature of Snowflake's pricing model. Snowflake, as you probably know, prices based on consumption, just like AWS and other cloud providers. So just like AWS, for example, the bill at the end of the month is sometimes unpredictable. Is this a problem? Yes. But like AWS, I would say, "Kill me with that problem." Look, if users are creating value by using Snowflake, then that's good for the business. But clearly this is a sore point for some users, especially for procurement and finance, which don't like unpredictability. And Snowflake needs to do a better job communicating and managing this issue with tooling that can predict and help better manage costs. Next, workload management, or lack thereof. Look, if you want to isolate higher-performance workloads with Snowflake, you just spin up a separate virtual warehouse. It's kind of a brute-force approach. It works generally, but it will add expense. I'm kind of reminded of Pure Storage and its approach to storage management. The engineers at Pure always design for simplicity, and this is the approach that Snowflake is taking. The difference between Pure and Snowflake, as I'll discuss in a moment, is that Pure's ascendancy was based largely on stealing share from legacy EMC systems. Snowflake, in my view, has a much, much larger incremental market opportunity. Next is caching architecture. You hear this a lot. At the end of the day, Snowflake is based on a caching architecture. And a caching architecture has to be working for some time to optimize performance. Caches work well when the size of the working set is small. Caches generally don't work well when the working set is very, very large. In general, transactional databases have pretty small datasets. And in general, analytics datasets are potentially much larger. Is Snowflake in the analytics business? Yes. But the good thing that Snowflake has done is they've enabled data sharing, and its caching architecture serves its customers well because it allows domain experts, you're going to hear this a lot from me today, to isolate and analyze problems or go after opportunities based on tactical needs. That said, very big queries across whole datasets, or badly written queries that scan the entire database, are not the sweet spot for Snowflake. Another good example would be if you're doing a large audit and you need to analyze a huge, huge dataset. Snowflake's probably not the best solution. Complex joins, you hear this a lot. The working sets of complex joins, by definition, are larger. So, see my previous explanation. Read only. Snowflake is pretty much optimized for read-only data. Maybe stateless data is a better way of thinking about this. 
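The working-set argument above can be illustrated with a toy LRU cache simulation: hit rates stay high while the working set fits in cache and collapse once it is much larger, which is why full scans and very large joins are poor fits for a caching architecture. The sizes below are invented for illustration and say nothing about Snowflake's actual cache.

```python
import random
from collections import OrderedDict

def lru_hit_rate(cache_size, working_set, accesses=20_000, seed=1):
    """Fraction of cache hits for uniform random access over `working_set` keys."""
    random.seed(seed)
    cache, hits = OrderedDict(), 0
    for _ in range(accesses):
        key = random.randrange(working_set)
        if key in cache:
            hits += 1
            cache.move_to_end(key)          # mark as most recently used
        else:
            cache[key] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)   # evict least recently used
    return hits / accesses

# Illustrative sizes only: a working set inside the cache vs. one far beyond it
print(lru_hit_rate(cache_size=1_000, working_set=800))     # ~0.96: cache-friendly
print(lru_hit_rate(cache_size=1_000, working_set=50_000))  # ~0.02: scans defeat it
```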
Heavily write-intensive workloads are not the wheelhouse of Snowflake. So where this is maybe an issue is real-time decision-making and AI inferencing. Now, I've talked about this a number of times; Snowflake might be able to develop products or acquire technology to address this opportunity. Now, I want to explain. These issues would be problematic if Snowflake were just a data warehouse vendor. If that were the case, this company, in my opinion, would hit a wall, just like the MPP vendors that preceded them, which built a better mousetrap for certain use cases, hit a wall. Rather, my premise in this episode is that the future of data architectures will be really to move away from large centralized warehouse or data lake models to a highly distributed data-sharing system that puts power in the hands of domain experts at the line of business. Snowflake is less computationally efficient and less optimized for classic data warehouse work. But it's designed to serve the domain user much more effectively, in our view. We believe that Snowflake is optimizing for business effectiveness, essentially. And as I said before, the company can probably do a better job at keeping passionate end users from breaking the bank. But as long as these end users are making money for their companies, I don't think this is going to be a problem. Let's look at the attributes of what we're proposing around this new architecture. We believe we'll see the emergence of a total flip of the centralized and monolithic big data systems that we've known for decades. In this architecture, data is owned by domain-specific business leaders, not technologists. Today, it's not much different in most organizations than it was 20 years ago. If I want to create something of value that requires data, I need to cajole, beg, or bribe the technology and the data team to accommodate. The data consumers are subservient to the data pipeline. Whereas in the future, we see the pipeline as a second-class citizen, with the domain expert elevated. In other words, getting the technology and the components of the pipeline to be more efficient is not the key outcome. Rather, the time it takes to envision, create, and monetize a data service is the primary measure. The data teams are cross-functional and live inside the domain, versus today's structure where the data team is largely disconnected from the domain consumer. Data in this model, as I said, is not the exhaust coming out of an operational system or an external source that is treated as generic and stuffed into a big data platform. Rather, it's a key ingredient of a service that is domain-driven and monetizable. And the target system is not a warehouse or a lake. It's a collection of connected, domain-specific datasets that live in a global mesh. What is a distributed global data mesh? A data mesh is a decentralized architecture that is domain-aware. The datasets in the system are purposely designed to support a data service, or data product, if you prefer. The ownership of the data resides with the domain experts because they have the most detailed knowledge of the data requirements and its end use. Data in this global mesh is governed and secured, and every user in the mesh can have access to any dataset as long as it's governed according to the edicts of the organization. Now, in this model, the domain expert has access to a self-service and abstracted infrastructure layer that is supported by a cross-functional technology team. 
Again, the primary measure of success is the time it takes to conceive and deliver a data service that could be monetized. Now, by monetize, we mean a data product or data service that either cuts cost, drives revenue, saves lives, whatever the mission is of the organization. The power of this model is it accelerates the creation of value by putting authority in the hands of those individuals who are closest to the customer and have the most intimate knowledge of how to monetize data. It reduces the diseconomies of scale of having a centralized or monolithic data architecture. And it scales much better than legacy approaches because the atomic unit is a data domain, not a monolithic warehouse or lake. Zhamak Dehghani is a software engineer who is attempting to popularize the concept of a global mesh. Her work is outstanding, and it's strengthened our belief that practitioners see this the same way that we do. And to paraphrase her view, a domain-centric system must be secure and governed with standard policies across domains. It has to be trusted. As I said, nobody's going to use data they don't trust. It's got to be discoverable via a data catalog with rich metadata. The datasets have to be self-describing and designed for self-service. Accessibility for all users is crucial, as is interoperability, without which distributed systems, as we know, fail. So what does this all have to do with Snowflake? As I said, Snowflake is not just a data warehouse. In our view, it's always had the potential to be more. Our assessment is that attacking the data warehouse use case gave Snowflake a straightforward, easy-to-understand narrative that allowed it to get a foothold in the market. Data warehouses are notoriously expensive, cumbersome, and resource intensive, but they're a critical aspect of reporting and analytics. So it was logical for Snowflake to target on-premise legacy data warehouses, and their smaller cousins the data lakes, as early use cases. By putting forth and demonstrating a simple data warehouse alternative that can be spun up quickly, Snowflake was able to gain traction, demonstrate repeatability, and attract the capital necessary to scale to its vision. This chart shows the three layers of Snowflake's architecture that have been well-documented: the separation of compute and storage, and the outer layer of cloud services. But I want to call your attention to the bottom part of the chart, the so-called cloud agnostic layer that Snowflake introduced in 2018. This layer is somewhat misunderstood. Not only did Snowflake make its cloud-native database compatible to run on AWS, then Azure, and in 2020 GCP; what Snowflake has done is abstract cloud infrastructure complexity and create what it calls the data cloud. What's the data cloud? We don't believe the data cloud is just a marketing term that doesn't have any substance. Just as SaaS simplified application software and IaaS made it possible to eliminate the value drain associated with provisioning infrastructure, a data cloud, in concept, can simplify data access, break down fragmentation, and enable shared data across the globe. Snowflake has a first-mover advantage in this space, and we see a number of fundamental aspects that comprise a data cloud. First, massive scale with virtually unlimited compute and storage resources that are enabled by the public cloud. We talk about this a lot. Second is a data or database architecture that's built to take advantage of native public cloud services. 
This is why Frank Slootman says, "We've burned the boats. We're not ever doing on-prem. We're all in on cloud and cloud native." Third is an abstraction layer that hides the complexity of infrastructure, and fourth is a governed and secured shared-access system where any user in the system, if allowed, can get access to any data in the cloud. So a key enabler of the data cloud is this thing called the global data mesh. Now, earlier this year, Snowflake introduced its global data mesh. Over the course of its recent history, Snowflake has been building out its data cloud by creating data regions, strategically tapping key locations of AWS regions and then adding Azure and GCP. The complexity of the underlying cloud infrastructure has been stripped away to enable self-service, and any Snowflake user becomes part of this global mesh, independent of the cloud that they're on. Okay. So now, let's go back to what we were talking about earlier. Users in this mesh will be our domain owners. They're building monetizable services and products around data. They're most likely dealing with relatively small, read-only datasets. They can ingest data from any source very easily and quickly set up security and governance to enable data sharing across different parts of an organization, or, very importantly, an ecosystem. Access control and governance is automated. The datasets are addressable. The data owners have clearly defined missions and they own the data through the life cycle. Data that is specific and purposely shaped for their missions. Now, you're probably asking, "What happens to the technical team and the underlying infrastructure and the cluster it's in? How do I get the compute close to the data? And what about data sovereignty and the physical storage layer, and the costs?" All these are good questions, and I'm not saying these are trivial. But the answer is, these are implementation details that are pushed to a self-service layer managed by a group of engineers that serves the data owners. And as long as the domain expert/data owner is driving monetization, this piece of the puzzle becomes self-funding. As I said before, Snowflake has to help these users optimize their spend with predictive tooling that aligns spend with value and shows ROI. While there may not be a strong motivation for Snowflake to do this, my belief is that they'd better get good at it or someone else will do it for them and steal their ideas. All right. Let me end with some ETR data to show you just how Snowflake is getting a foothold in the market. Followers of this program know that ETR uses a consistent methodology to go to its practitioner base, its buyer base, each quarter and ask them a series of questions. They focus on the areas that the technology buyer is most familiar with, and they ask a series of questions to determine the spending momentum around a company within a specific domain. This chart shows one of my favorite examples. It shows data from the October ETR survey of 1,438 respondents, and it isolates on the data warehouse and database sector. I know I just got through telling you that the world is going to change and Snowflake's not a data warehouse vendor, but there's no construct today in the ETR dataset to cut a data cloud or globally distributed data mesh. So you're going to have to deal with this. What this chart shows is net score on the y-axis. That's a measure of spending velocity, and it's calculated by asking customers, "Are you spending more or less on a particular platform?" 
And then subtracting the lesses from the mores. It's more granular than that, but that's the basic concept. Now, on the x-axis is market share, which is ETR's measure of pervasiveness in the survey. You can see superimposed in the upper right-hand corner a table that shows the net score and the shared N for each company. Now, shared N is the number of mentions in the dataset within, in this case, the data warehousing sector. Snowflake, once again, leads all players with a 75% net score. This is a very elevated number and is higher than that of all other players, including the big cloud companies. Now, we've been tracking this for a while, and Snowflake is holding firm on both dimensions. When Snowflake first hit the dataset, it was in the single digits along the horizontal axis, and it continues to creep to the right as it adds more customers. Now, here's another chart. I call it the wheel chart. It breaks down the components of Snowflake's net score, or spending momentum. The lime green is new adoption, the forest green is customers spending more than 5%, the gray is flat spend, the pink is declining by more than 5%, and the bright red is retiring the platform. So you can see the trend. It's all momentum for this company. Now, what Snowflake has done is they've grabbed hold of the market by simplifying the data warehouse. But the strategic aspect of that is that it enables the data cloud, leveraging the global mesh concept. And the company has introduced a data marketplace to facilitate data sharing across ecosystems. This is all about network effects. In the mid-to-late 1990s, as the internet was being built out, I worked at IDG with Bob Metcalfe, who was the publisher of InfoWorld. During that time, we'd go on speaking tours all over the world, and I would listen very carefully as he applied Metcalfe's law to the internet. Metcalfe's law states that the value of the network is proportional to the square of the number of connected nodes or users on that system. Said another way, while the cost of adding new nodes to a network scales linearly, the consequent value scales exponentially. Now, apply that to the data cloud. The marginal cost of adding a user is negligible, practically zero, but the value of being able to access any dataset in the cloud... Well, let me just say this. There's no limitation to the magnitude of the market. My prediction is that this idea of a global mesh will completely change the way leading companies structure their businesses and, particularly, their data architectures. It will be the technologists that serve domain specialists, as it should be. Okay. Well, what do you think? DM me @dvellante or email me at david.vellante@siliconangle.com, or comment on my LinkedIn. Remember, these episodes are all available as podcasts, so please subscribe wherever you listen. I publish weekly on wikibon.com and siliconangle.com, and don't forget to check out etr.plus for all the survey analysis. This is Dave Vellante for theCUBE Insights powered by ETR. Thanks for watching. Be well, and we'll see you next time. (upbeat music)
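For readers who want the arithmetic behind the two methodologies cited above, here is a hedged sketch. The net score function follows the "subtract the lesses from the mores" description and the wheel-chart buckets; the exact ETR weighting may differ, and the bucket values below are made up. Metcalfe's law is included as stated: value grows with the square of users while the cost of adding nodes grows linearly.

```python
def net_score(adoption, increase, flat, decrease, replacing):
    """ETR-style net score as described above: 'subtract the lesses from the
    mores'. Inputs are the wheel-chart buckets as percentages of respondents;
    the exact ETR formula may be more granular than this simplified reading."""
    return (adoption + increase) - (decrease + replacing)

# Made-up bucket values (summing to 100%) that land on the 75% cited above
print(net_score(adoption=30, increase=50, flat=15, decrease=3, replacing=2))  # 75

def metcalfe_value(n):
    """Metcalfe's law: network value grows with the square of connected users,
    while the cost of adding nodes grows only linearly."""
    return n ** 2

print(metcalfe_value(20) / metcalfe_value(10))  # doubling the users -> 4x the value
```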

Published Date : Nov 14 2020


Dinesh Nirmal, IBM | IBM Think 2020


 

>> From theCUBE Studios in Palo Alto and Boston, it's theCUBE, covering IBM Think, brought to you by IBM. >> Welcome back. I'm Stu Miniman, and this is theCUBE's coverage of IBM Think 2020, the digital experience. Happy to welcome to the program Dinesh Nirmal, who's the chief product officer for Cloud Paks inside IBM. Dinesh, nice to see you. Thanks so much for joining us. >> Thank you, Stu. Really appreciate you taking the time. >> All right, so I've been to many IBM shows, and of course, I'm an analyst in the cloud space, so I'm familiar with IBM Cloud Paks. But maybe, you know, just refresh our audience's minds here: what they are, how long they've been around, you know, what clouds they live on, and maybe what's new in 2020 that somebody who had looked at this in the past might not know about. >> Yeah, so thanks, Stu. To start with, let me say that Cloud Paks are cloud agnostic. The whole goal is that you build once and it can run anywhere. That is the basic mantra or principle that we want to build Cloud Paks with. So look at them as a set of microservices, containerized in a form that can run on any public cloud or behind the firewall. That's the whole premise of Cloud Paks. It's an integrated set of services that solve a specific set of business problems and also accelerate building a rich set of applications and solutions. That's what Cloud Paks bring. So, you know, especially in these times, if you think about it, if I'm an enterprise, my goal is, how can I accelerate and how can I automate? Those are the two key things, you know, that come to my mind if I'm a C-level exec at an enterprise. So Cloud Paks enable that, meaning you already have a set of stitched-together services that accelerate application development. It automates a lot of things for you. Say you have a lot of applications running on multiple clouds or behind the firewall. How do you manage those, right? Cloud Paks will help. So let me give you one example. Since we're focused specifically on Cloud Paks, let's take Cloud Pak for Data. The set of services that is available in Cloud Pak for Data will make it easier all the way from ingest to visualization. There's a set of services that you can use, so you don't have to go build a service or a product, or use a product for ingest, then use another product for ETL, use another product for building models, another product to manage those models. Cloud Pak for Data will solve all those problems end to end. It's a rich set of services that will give you all the value that you need, all the way from ingest to visualization, and with any persona. Whether you are a data engineer, a data scientist, or, you know, a business analyst, you can all collaborate through the Pak. So that's the, you know, two-minute answer to your question of what Cloud Paks are. >> Awesome. Thanks, Dinesh. Actually, I guess you pointed out something right at the beginning: I hear IBM Cloud Pak and I think IBM Cloud. But you said specifically this is really cloud agnostic. So you know, this week is Think. Last week I was covering Red Hat Summit, so I heard a lot about multi-cloud deployments. You know, talked to the RHEL team, talked to the OpenShift team. Um, so help me understand, you know, where do Cloud Paks fit when we're talking about, you know, these multi-cloud deployments, you know? 
And is there some connection with the partnership that, of course, IBM has with Red Hat? >> Of course. I mean, all Cloud Paks are optimized for OpenShift. Meaning, you know, how do we use the set of services that OpenShift gives, that container management that OpenShift provides? So as we build containers or microservices, how do we make sure that we are optimizing or taking advantage of OpenShift? So, for example, the set of services like logging, monitoring, security, all those services that come from OpenShift are what we are using. That's how Cloud Paks are optimized for OpenShift. Um, you know, from an automation perspective, how do we use Ansible, right? So all the value that Red Hat and OpenShift bring is what Cloud Paks are built on. So if you look at it as a layer, as a Lego, the base Lego is OpenShift and RHEL, and then on top of that sit Cloud Paks, and applications and solutions on top of those. So if I look at it layer by layer, the base Lego layer is OpenShift and Red Hat Enterprise Linux. >> Well, great. That's super important, because, you know, one of the things we've been looking at for a while is, you talk about hybrid cloud, you talk about multi-cloud, and often it's that platform, that infrastructure discussion. But the biggest challenge for companies today is, how do I build new applications? How do I modernize what I have? So it sounds like this is exactly, you know, where you're targeting to help people, you know, through that transformation that they're going through. >> Yeah, exactly, Stu. Because if you look at it, you know, in the past, products were siloed. I mean, you know, you build a product, you use a set of specs to build it, it was a silo, and customers became the system integrators, where they had to take the different products and put them together. So even if I am, you know, focused on the data space or AI space, before, I had to bring in three or four or five different products and make them all work together to build a model, deploy the model, manage the model, the lifecycle of the model, the life cycle of the data. But Cloud Paks bring it all in one box, where, out of the box, you're ready to go. So your time to value is much higher with Cloud Paks, because you already get a set of stitched-together services that work right out of the box. >> So I love the idea of out of the box, but when I think of cloud-native, modern application development, simplicity is not the first thing I think of, Dinesh. So help me understand. You know, so many customers, it's, you know, the tools, the skill set. You know, they don't necessarily have the experience. How are your products and your teams helping customers deal with, you know, the ever-changing landscape and the complexity that they're faced with? >> Yeah, so the honest truth, Stu, is that enterprise applications are not an app that you create and put on an iPhone, right? I mean, it is much more complex, because it's dealing with, you know, hundreds of millions of people trying to transact with the system. You need to make sure there is disaster recovery, backup, scalability, elasticity. I mean, all those things. Security, I mean, obviously a very critical piece, and multi-tenancy. All those things have to come together in an enterprise application. So when people talk about, you know, simplicity, it comes at a price. So what we've done with Cloud Paks is, you know, really focus on the user experience and design piece. 
>> So I love the idea of out of the box, but when I think of cloud-native, modern application development, simplicity is not the first thing I think of, Dinesh. So help me understand: so many customers, you know, the tools, the skill sets, they don't necessarily have the experience. How is what your products and your teams are doing helping customers deal with the ever-changing landscape and the complexity they're faced with?

>> Yeah. So the honest truth, Stu, is that an enterprise application is not an app you create and put on an iPhone, right? It's much more complex, because it's dealing with hundreds of millions of people trying to transact with the system. You need to make sure there's disaster recovery, backup, scalability, elasticity. Security, obviously, is a very critical piece, and multi-tenancy. All those things have to come together in an enterprise application. So when people talk about simplicity, it comes at a price. What Cloud Paks have done is really focus on the user experience and design piece, so you, as an end user, have a great experience using the integrated set of services. The complexity will still be there to some extent, because you're building a very complex, multi-tenant enterprise application. But how do we make it easier for a developer or a data scientist to collaborate, to reuse assets, to find the data, or trusted data, much more easily than before? How do we use AI to predict a lot of these things, including bias detection? So we're making a lot of the development automation and acceleration easier. The complexity will still be there, because enterprise applications tend to be complex by nature, but we're making it much easier for you to develop, deploy, manage, and govern what you're building.

>> Yeah. So how do Cloud Paks allow you to really work with customers, focus on things like innovation, and show them the latest in the IBM software portfolio?

>> Yeah. So the first piece is that we made it much easier for the different personas to collaborate. In the past, what was the biggest challenge I had as a data scientist? The biggest challenge was getting access to the data, trusted data. Now we've put governance around it, so with Cloud Pak for Data you can get trusted data much more easily, with governance around the data. Meaning, if you're a CDO, you want to see who is using the data and how clean that data is, right? A lot of times the data might not be clean, so we want to make sure we can help with that. Now let me move to the line of business, not just the data. If I'm an LOB and I want to automate a lot of the processes I have today in my enterprise, and not have every request go to a superior or supervisor for approval, how do we use AI in business process automation as well? Those kinds of things you also get with Cloud Paks. Then there's the other piece: if I'm in the IT space, there are the day-2 operations, scalability, security, delivery of the software, backup and restore. How do we automate and help at the storage layer? Those are day-2 operations. So we're taking it all the way from day one, the whole experience of setting it up, through day 2, where enterprises really operate, and making it seamless and easy using Cloud Paks. I go back to what I said in the beginning: how do we accelerate and automate a lot of the work enterprises do today, much more easily?
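Day-2 tasks like backup are the kind of thing that gets automated as Kubernetes-native objects on the same cluster the Paks run on. As a rough illustration only: this is a generic Kubernetes pattern, not the actual Cloud Pak backup mechanism, and the image, namespace, and arguments are all hypothetical.

```python
# Rough illustration of day-2 automation as a Kubernetes-native object:
# a nightly backup CronJob. Generic Kubernetes pattern, NOT a Cloud Pak API;
# the image, namespace, and args below are hypothetical.
from kubernetes import client, config

config.load_kube_config()
batch = client.BatchV1Api()

container = client.V1Container(
    name="backup",
    image="registry.example.com/backup-job:latest",  # hypothetical image
    args=["--target", "s3://backups/nightly"],       # hypothetical target
)
cron = client.V1CronJob(
    metadata=client.V1ObjectMeta(name="nightly-backup", namespace="cp4d"),
    spec=client.V1CronJobSpec(
        schedule="0 2 * * *",  # every night at 02:00
        job_template=client.V1JobTemplateSpec(
            spec=client.V1JobSpec(
                template=client.V1PodTemplateSpec(
                    spec=client.V1PodSpec(
                        restart_policy="OnFailure", containers=[container]
                    )
                )
            )
        ),
    ),
)
batch.create_namespaced_cron_job(namespace="cp4d", body=cron)
```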
>> Okay. We talked earlier in the discussion about how this can be used across multiple cloud environments. You mentioned one of the IBM Cloud Paks, the one for data, and there are a number of different Cloud Paks out there. How does that work from a customer standpoint? Do I have to choose a Cloud Pak for a specific cloud? Is it a license that goes across all of my environments? Help me understand the deployment mechanism and how support works.

>> Right. So we have the base. As I said, look at it as a modular Lego model. The base is OpenShift and RHEL. On top of that sits a common set of services and a common experience layer. And on top of that sit Cloud Pak for Data, Cloud Pak for Security, Cloud Pak for Applications, Cloud Pak for Multicloud Management, Cloud Pak for Automation, and Cloud Pak for Integration. So there's a total of six Cloud Paks available, and you can pick and choose which Pak you want. Let's say you're a CDO at an enterprise and you want to focus on data and AI: you can just pick Cloud Pak for Data. Or let's say you're focused on processes, BPM, decision rules: you can go with Cloud Pak for Automation, which gives you that set of tools. But the biggest benefit, Stu, is that all these Cloud Paks are integrated sets of services that can work together, since they're all optimized on top of OpenShift. So say you start by buying Cloud Pak for Data, and now you want to expand into your line of business and you want automation: you can bring that in, and those two Cloud Paks work together. Now you have data or applications running on multiple clouds, so you bring in Cloud Pak for Multicloud Management, MCM, and those three work together. So it's all an integrated set of services, optimized on top of OpenShift, which makes it much easier for customers to bring a rich set of services together and accelerate and automate their lifecycle journey within the enterprise.

>> Great. Last question for you, Dinesh. What's new in 2020? What should customers be looking at today? I'd love it if you could give a little guidance on where customers should be looking for things that might be coming a little bit down the line. And if they want to learn more about IBM Cloud Paks, where should they be looking?

>> Yeah, if they want to learn more, there's www.ibm.com/cloudpaks; that's the place to go, and all the details on Cloud Paks are there. You can also get in touch with me, and I can go into much more detail. But here's what's coming. Look, we have a set of Cloud Paks, but we want to expand on them and make them extensible. They're already built on an open platform, but how do we make sure our partners and ISVs can come and build on top of the base Cloud Paks? So the focus is going to be, as each Cloud Pak innovates and adds more value within itself, we also want to expand it so that our partners, our ISVs, and GSIs can build on top of it. This year the focus is to continuously innovate across the Cloud Paks, but also make them much more extensible for third parties to come and build more value. That's one area of focus. The other is MCM, multi-cloud management, because there's tremendous appetite among customers to move data or applications to the cloud, and not only to one cloud: hybrid cloud. How do you manage that? Multi-cloud management definitely helps from that perspective. So our focus this year is going to be, one, make it extensible, make it more open, but at the same time continuously innovate on every single Cloud Pak to make that journey of automating and accelerating application development easier for customers.
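The multi-cloud management idea rests on the fact that one kubeconfig can hold contexts for clusters running on different clouds, and tooling can fan out across them. A minimal sketch of that fan-out, not the MCM API itself; the context names would be whatever is in your kubeconfig.

```python
# Minimal fan-out across every cluster context in the local kubeconfig,
# the kind of cross-cloud view a multi-cloud manager builds on.
# Illustrative only; this is NOT the MCM API.
from kubernetes import client, config

contexts, _active = config.list_kube_config_contexts()
for ctx in contexts:
    # Build a client bound to this specific cluster context.
    api_client = config.new_client_from_config(context=ctx["name"])
    core = client.CoreV1Api(api_client=api_client)
    nodes = core.list_node().items
    print(f"{ctx['name']}: {len(nodes)} nodes")
```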
>> All right. Well, Dinesh, thank you so much. The things you talked about are absolutely top of mind for the customers we talk to. Multi-cloud management: as you said, it was ACM, the Advanced Cluster Management, that we heard about from the Red Hat team last week at Summit. So thank you so much for the updates. It's definitely exciting to watch Cloud Paks and how you're helping customers deal with that huge opportunity, but also the challenge, of building their next applications and modernizing what they're doing while still having to think about what they have existing. Thanks so much. Great to talk.

>> Thanks to you.

>> All right, lots more coverage from IBM Think 2020, the digital experience. I'm Stu Miniman, and as always, thank you for watching theCUBE.

Published Date : Apr 24 2020
