

Abdullah Abuzaid, Dell Technologies & Gil Hellmann, Wind River | MWC Barcelona 2023


 

(intro music) >> Narrator: "theCUBE's" live coverage is made possible by funding from Dell Technologies, creating technologies that drive human progress. (gentle music) >> Hey everyone, welcome back to "theCUBE," the leader in live and emerging tech coverage. As you well know, we are live at MWC23 in Barcelona, Spain. Lisa Martin with Dave Nicholson. Day three of our coverage, as you know, 'cause you've been watching the first two days. A lot of conversations about ecosystem, a lot about disruption in the telco industry. We're going to be talking about Open RAN. You've heard some of those great conversations, the complexities, the opportunities. Two guests join Dave and me. Abdullah Abuzaid, Technical Product Manager at Dell, and Gil Hellmann, VP Telecom Solutions Engineering and Architecture at Wind River. Welcome to the program, guys. >> Thank you. >> Nice to be here. >> Let's talk a little bit about Dell and Wind River. We'll each ask you both the same question: talk to us about how you're working together to really address the complexities that organizations are having when they're considering moving from a closed environment to an open environment. >> Definitely. Thank you for hosting us. At the end of the day, the relationship between Dell and Wind River is not new. We've been collaborating in the open ecosystem for a long time, and our partnership is a result of this collaboration, where we've been trying to make operations in the ecosystem more efficient. The open environment ecosystem has its pluses and its concerns: the plus of simplicity and the choice of multiple vendors, and then the concern of the complexity of managing these vendors. Especially if we look at examples from the Open RAN ecosystem, dealing with multiple vendors, trying to align them.
It brings a lot of operational complexity and TCO challenges for our customers, and that's the outcome we built our partnership with Wind River around: to help our customers simplify Open RAN deployment, operation, and lifecycle management, and sustain it. >> And who are the customers, by the way? >> Mainly the CSP customers who are targeting Open RAN and Virtual RAN deployments, the digital transformation moving towards a unified cloud environment, or a seamless cloud experience from Core to RAN. Those are the customers we are working with. >> Gil, give us your perspective, your thoughts on the partnership, and the capabilities that you're enabling the CSPs with. >> Sure. It actually started last year, here in Barcelona, when we sat together and started to look at, you know, the industry, the adoption of Open RAN, and the challenges. Open RAN brings a lot of possibilities and benefits, but it does bring a lot of challenges of reintegrating what you disaggregate. In the past, you purchased everything from one vendor; they provided the whole solution. Now you open it, you have different layers. So if you're looking at Open RAN, I like to look at it as three major layers: the management, the application, and the infrastructure. And we started to look at what the challenges are: the challenges of integration, of complexity, of the knowledge that an operator has with cloud infrastructure. And this is where Dell and Wind River basically sat together and said, "How can we ease this? How can we make it simpler?" And we decided to partner and bring a joint infrastructure solution to market, one that's not only integrated in a lab at the factory level, but basically comes with complete lifecycle management, from the day-zero deployment through the day-two operation, everything managed from a central location, Dell supported, working out of the box.
So basically we're taking this whole infrastructure-layer integration pain out, de-risking everything, and then continuing from there to work with the ecosystem vendors to reintegrate and validate the applications on top of this infrastructure. >> So what is the Wind River secret sauce in this mix, for folks who aren't familiar with what Wind River does? >> Yes, absolutely. So Wind River, for many who don't know, has been in business since 1981, so over 40 years. We specialize in high-performance, high-reliability infrastructure. We touch every aspect of your day and your life: the airplane that you fly, the cars, the medical equipment. And if we go into the telco, most of the telco equipment that is not virtualized, that's out in the field today, is using our operating system. So that's all the leading equipment manufacturers, and even the smaller ones. And as the world started to go into disaggregation and cloud, Wind River started to look at this and say, "Okay, everything is evolving. Instead of a device that included the application, the hardware, everything fused together, it's now being decomposed. So instead of providing the operating environment to develop and deploy the application to the device manufacturer, now we're providing it basically to build the cloud." To oversimplify, I call it a cloud OS. Okay, it's a lot more than an OS, it's an operating environment. But we took basically our experience, the same experience that, you know, we used in all those years with the telco equipment manufacturers, and brought it into the cloud. So we're basically providing a solution to build an on-premises, scalable cloud from the core all the way to the far edge that doesn't compromise reliability, doesn't compromise performance, and addresses all the telco needs. >> So, Abdullah, maybe you can answer this. >> Yeah.
>> What does the go-to-market motion look like, considering that you have two separate companies that can address customers directly, separately? What does that look like if you're approaching a possible customer who's knocking on the door? >> How does that work? >> Exactly. >> This effort is a Dell turnkey service offering, or solution offering, to our customers, where Dell, in collaboration with Wind River, proactively validates, integrates, and productizes the solution as an engineered system, and knocks on the door of customers who are trying to transform to Open RAN or an open ecosystem. We can help you go through that seamless experience by pre-validating with whatever workload you want to introduce, enabling zero-touch provisioning during the day-one deployment, and ensuring we have sustainable lifecycle management throughout the lifecycle of the product in the operational network, as well as having a unified, single point of support from the Dell side. >> Okay. So I was just going to ask you about support. So I'm a CSP, I have the solution, I go to Dell for support. >> Exactly. >> Okay. >> So you start with Dell, at level one, level two. And if there are complex issues related to the cloud core itself, then Wind River will be behind us, supporting us. >> Talk a little bit about a CSP example that is using the technology, and some of the outcomes that they're able to achieve. I'd love to get both of your perspectives on that. >> Vodafone is a great example. We're here in Barcelona. Vodafone is the first Open RAN network in Europe, and it's using our joint solution. >> What are some of the outcomes that it's helping them to achieve? >> Faster time to market. As you see, they've already started to deploy Open RAN in the commercial network, and were very successful in the trials that they did last year. We're also not stopping there. We're evolving, working with them together to improve stuff around energy efficiency.
So we continue to optimize. So the outcome, it's just simplified and, you know, ready to go, using the experience that we have: Wind River is powering the first, basically, virtualized RAN 5G network in the world. This is with Verizon, at a very large scale. We started this deployment with the first site in late '19, '20, and then through 2020 to 2022 we basically rolled out at large scale. We have a lot of experience and learning from it, which is what we brought to the table when we partnered with Dell: a lot of experience in how you deploy at scale, many sites from a central location, updates, upgrades. So the whole day-two operation, and this is coming to bear in the solution that Vodafone is deploying now. If I look at my engagement with Verizon, it started years before, and it took quite some time until we got stuff running. If you look at the Vodafone time schedule, it was significantly compressed compared to the Verizon first deployment. And I can tell you that there are other service providers, like the one announced here by KDDI, for example; that's another one moving even faster. So it's accelerating the whole movement to Open RAN. >> We've heard a lot of acceleration talk this week. I'd love to get your perspective, Abdullah. You just mentioned two huge names in telco, Vodafone and Verizon. >> Yep. >> Talk a little bit about Dell's commitment to helping telecommunications companies really advance, accelerate innovation, so that all of us on the other end have this thing that just works wherever we are, 24 by 7. >> No, exactly. And this goes back to the challenges in the open ecosystem. Managing multiple vendors at the same time is a challenge for our customers, and that's why we are trying to simplify their lifecycle by being a trusted partner, working with our customers through all the journey. We started with DISH in their 5G deployment. Also with Vodafone.
We're finding the right partners and working with them proactively before getting in front of the customer, so we've done our homework and we are ready to simplify the process for you to go for it. If you look at the RAN in particular, with 5G we have RAN simplification, but on the other side they still have limited resources and skillsets to support it. So bringing a proactively, ahead-of-time engineered system, with zero-touch provisioning enablement and sustainable lifecycle management, leads to faster time-to-market deployment, TCO savings, improved margins for our customers, and faster business revenue for their end users. >> Solid outcomes. >> And what you just described justifies the pain associated with disaggregating and reintegrating, which is the way that Gil referenced it, which I think is great, because you're not re-aggregating, (laughs) you're reintegrating, and you're creating something that's better. >> Exactly. >> Moving forward. Otherwise, why would you do it? >> Exactly. And if you look at the players in the ecosystem, you have the vendors, you have the service integrators, you have the automation enablers, but they're kind of talking in silos. Everyone says, this is my RACI, this is what I'm responsible for; I don't want to get into something else. Whereas we are going the extra mile by working proactively in that ecosystem: let's bring brains together, find out how one plus one can bring three for our customers, so we make it an end-to-end seamless experience, not only on the technical part, but also on the business aspect of it. >> So the partnership, it's about reducing the pain. >> I will say eliminating it. So this is the core of it. And you mentioned getting better coverage for your phone.
I do want to point out that the phones are great, but if you look at the premise of a 5G network, it's to enable a lot more things that will touch your life, beyond the consumer and the phone. Stuff like connected vehicles. So, for example, something as simple as collision avoidance: the ability for the car that goes in front of you to see what's happening and broadcast this information to the car behind that has no ability to see it, and basically affect our life in a way that makes our driving safer. And for this, you need ultra-reliable, low-latency communication. You need a 5G network. >> I'm glad you brought that up, because, you know, we think, "Well, we just have to be connected all the time." But those are some of the emerging technologies that are going to be potentially lifesaving, and really life transforming, that you guys are helping to enable. So, really great stuff there, and so much promise coming down the road. What's next for Dell and Wind River? And when you're in conversations with prospective CSPs, what is the superpower that you deliver together? I'd love to get both of your perspectives. >> So if you look at it, number one, customers look at cost savings in their day-to-day operation. In 5G, we are talking about the introduction of Open RAN; this is still picking up. Then there is the mutualization and densification of Open RAN, and this is where we're talking about monetizing my deployment. Then the third phase, we're talking sustainability and advanced service introduction, where I want to move not only Open RAN, I want to bring the edge to the same site. I want to define the advanced use cases of edge, where it enables me, with this pre-work being done, to deliver more services and better-SLA services. At the end of the day, 5G, as Gil mentioned earlier, is not about better phone coverage or better speeds, but about what customized SLAs I can deliver.
So it enables me to deliver different business streams to my end users. >> Yeah. >> So yeah, I will say there are two pains. One, it's the technology side. So, for example, energy efficiency. It's a very big pain point. And sustainability. So we work a lot around this, basically to advance it. If you look at the integrated solution today, it's very highly optimized for resource consumption. But you want to be able to more dynamically change your power profile without compromising the SLA. So this is one side. The other side, it's about all those applications that will come to the 5G network to make our life better. It's about integrating, validating, certifying those applications, so that it's not just easy to deploy an Open RAN network, but it's easy to deploy those applications. >> I'd be curious to get your perspective on the question of ROI in this space, specifically with the sort of macro headwinds (clears throat) the economies of the world are facing right now, if you accept that. What does the ROI timeline look like when you're talking about moving towards Open RAN, adopting vRAN, and an amazing, you know, plethora of new services that can be delivered? Will these operators have the appetite to make that investment and take on that risk, based upon the ROI time horizon? Any thoughts on that? >> Yeah. So if you look at the early days of Open RAN introduction in particular, most of the early adopters of Open RAN and Virtual RAN ran into the challenges of not only the complexity of the open ecosystem, but the integration; it's like the redos of the work. And that's where we are trying to address it via a pre-engineered system, building an engineered system proactively before getting it to the customers. Per the results and outcomes we get, we are talking about 30 to 50% savings on the OpEx. We are talking about 110% ROI for our customers, simply because we are reducing the redos and the time spent to discover and explore.
Because we've done that rework ahead of time, we found the optimization issues. Just for example, any customer can buy the same components from multiple vendors, but how I can bring them together and deliver the best performance that I can fully utilize, that's where it brings the value for our customers, and accelerates the deployment and the operation of the network. >> Do you have anything to add before we close in the next 30 seconds? >> Yeah. Yeah. (laughs) >> Absolutely. I would say we're starting to see the data coming from two years of operation at scale, and the data supports that the performance is the same as or better than a traditional system, and the cost of operation is as good as or better than traditional. Unfortunately, I can't provide more specific data. But the point is, when something is unknown in the beginning, of course you're more afraid, you take a more conservative approach. Now the data starts to flow, and from here, the intention is to get even better: more efficiency, so it costs less than a traditional system, both to operate as well as to build. But the data that we have today definitely says the Open RAN system is at par, at a minimum. >> So, definite ROI there. Guys, thank you so much for joining Dave and me, talking about how you're helping organizations not just address the complexities of moving from closed to open, but, to your point, eliminating them. We appreciate your time and your insights. >> Thank you. >> All right. For our guests and for Dave Nicholson, I'm Lisa Martin. You're watching "theCUBE," the leader in live and emerging tech coverage. Live from MWC23. We'll be back after a short break. (outro music)

Published Date: Mar 1, 2023



Srinivas Mukkamala & David Shepherd | Ivanti


 

(gentle music) >> Announcer: "theCube's" live coverage is made possible by funding from Dell Technologies, creating technologies that drive human progress. (upbeat music) (logo whooshing) >> Hey, everyone, welcome back to "theCube's" coverage of day one, MWC23, live from Barcelona. Lisa Martin here with Dave Vellante. Dave, we've got some great conversations so far. This is the biggest, most packed show I've been to in years. About 80,000 people here so far. >> Yeah, down from its peak of 108,000, but still pretty good. You know, a lot of folks from China come to this show, but with the COVID situation in China, that's impacted the attendance, but still quite amazing. >> Amazing for sure. We're going to be talking about trends and mobility, and all sorts of great things. We have a couple of guests joining us for the first time on "theCUBE." Please welcome Dr. Srinivas Mukkamala, or Sri, chief product officer at Ivanti. And Dave Shepherd, VP at Ivanti. Guys, welcome to "theCUBE." Great to have you here. >> Thank you. >> So, day one of the conference, Sri, we'll go to you first. Talk about some of the trends that you're seeing in mobility. Obviously, the conference renamed from Mobile World Congress to MWC, mobility being part of it, but what are some of the big trends? >> It's interesting, right? I mean, I was catching up with Dave. The first thing is, from the keynotes, it took 45 minutes to talk about security. I mean, it's quite interesting when you look at the show floor. We're talking about Edge, we're talking about 5G, the whole evolution. And there's also the concept of, are we going into the Cloud? Are we coming back from the Cloud, back to the Edge? They're really two different things. Edge is all decentralized while you recompute. And one thing I observed here is they're talking about near real-time reality. When you look at automobiles, when you look at medical, when you look at robotics, you can't have things processed in the Cloud. It'll be too late.
Because you've got to make millisecond-based decisions. That's a big trend for me. When I look at this stuff... okay, the compute it takes to process in the Cloud versus what needs to happen on-prem, on device, is going to revolutionize the way we think about mobility. >> Revolutionize. David, what are some of the things that you're seeing? Do you concur? >> Yeah, 100%. I mean, look, just reading some of the press recently, they're predicting 22 billion IoT devices by 2024. Everything Sri just talked about there, it's growing exponentially. You know, problems we have today are a snapshot. We're probably in the slowest place we are today. Everything's just going to get faster and faster and faster. So, yeah, 100% concur with that. >> You know, Sri, on your point, so Jose Maria Alvarez, the CEO of Telefonica, said there are three pillars of the future of telco: low latency, programmable networks, and Cloud and Edge. So, as to your point, Cloud and low latency haven't gone hand in hand. But the Cloud guys are saying, "All right, we're going to bring the Cloud to the Edge." That's sort of an interesting dynamic. We're going to bypass them. We heard somebody, another speaker, say, "You know, Cloud can't do it alone." You know? (chuckles) And so, it's like these worlds need each other in a way, don't they? >> Definitely, right. That's a fantastic way to look at it. The Cloud guys can say, "We're going to come closer to where the compute is." And if you really take a look at it with data localization, where are we going to put the Cloud in, right? I mean, so data sovereignty becomes a very interesting thing. Localization becomes a very interesting thing. And when it comes to security, it gets completely different. I mean, we talked about moving everything to centralized compute, really having massive processing, and giving you the result back wherever you are. Whereas when you're localized, I have to process everything within the local environment.
So there's already a conflict right there. How are we going to address that? >> Yeah. So another statement, I think it was the CEO of Ericsson, he was kind of talking about the OTT guys: "We can't let that happen again. And we're going to find new ways to charge for the network." Basically, he's talking about monetizing the API access. But I'm interested in what you're hearing from customers, right? 'Cause our mindset is, what value you're going to give to customers that they're going to pay for, versus, "I got this data, I'm going to charge developers for it." But what are you hearing from customers? >> It's amazing, Dave, the way you're looking at it, right? So if we take a look at what we were used to, perpetual, and we said we're going to move to a subscription, right? I mean, everybody talks about the subscription economy. Telcos, on the other hand, had a subscription economy for a long time, right? They were always based on usage, right? It's a usage economy. But today, we are basically realizing on compute. We haven't even started charging for compute. If you go to AWS, go to Azure, go to GCP, they still don't quite charge you for actual compute, right? It's kind of, they're still leaning on it. So think about API-based charging; we're going to break the bank. What people don't realize is, we do millions of API calls for any high-transaction environment. A consumer can't afford that. What people don't realize is... I don't know how you're going to monetize. Even if you charge a cent a call, that is still going to be hundreds and thousands of dollars a day. And that's where, if you look at what you call the low-code no-code motion, you see a plethora of companies being built on that. They're saying, "Hey, you don't have to write code. I'll give you authentication as a service. What that means is, every single time you call my API to authenticate a user, I'm going to charge you." So just imagine how many times we authenticate on a single day.
You're talking a few dozen times. And if I have to pay every single time I authenticate... >> Real friction in the marketplace, David. >> Yeah, and I tell you what. It's a big topic, right? And it's a topic that we haven't had to deal with at the Edge before, and we hear it probably daily, really: complexity. The complexity's growing all the time. That means that we need to start to get insight, visibility. You know? I think a part of... Something that came out of the EU actually this week stated, you know, there's a cyber attack every 11 seconds. That's fast, right? 2016, that was 40 seconds. So actually that speed I talked about earlier, everything Sri says that's coming down to the Edge, we want to embrace the Edge, and that is the way we're going to move. But customers are mindful of the complexity that's involved in that. And that, you know, lends thought to how we are going to deal with those complexities. >> I was just going to ask you, how are you planning to deal with those complexities? You mentioned one ransomware attack every 11 seconds. That's down considerably from just a few years ago. Ransomware is a household word. It's no longer, "Are we going to get attacked?" It's when, it's to what extent, it's how much. So how is Ivanti helping customers deal with some of the complexities, and the changes in the security landscape? >> Yeah. Shall I start on that one first? Yeah, look, we want to give all our customers and prospective customers full visibility of their environment. You know, devices that are attached to the environment. Where are they? What are they doing? How often are we going to look for those devices? And when we find those devices, what applications are they running? Are those applications secure? How are we going to manage those applications moving forward? And overall, wrapping it round, what kind of service are we going to do? What processes are we going to put in place? To Sri's point, the low-code no-code angle.
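[Editor's note: the per-call pricing concern Sri raises above is easy to quantify. The sketch below is illustrative only; the one-cent rate comes from his "even if you charge a cent a call" remark, while the call volumes and user counts are assumptions for illustration, not figures quoted by the speakers.]

```python
# Back-of-the-envelope sketch of metered, per-call API pricing.
# Rates and volumes below are hypothetical, chosen only to show how
# quickly per-call charges compound at telco-scale transaction volumes.

def daily_api_cost(calls_per_day: int, price_per_call: float) -> float:
    """Total daily spend for an API billed per call."""
    return calls_per_day * price_per_call

# "Even if you charge a cent a call" at an assumed one million calls a day:
print(daily_api_cost(1_000_000, 0.01))      # 10000.0 (dollars per day)

# A few dozen authentications per user per day, across an assumed
# 100,000-user base, at the same one-cent rate:
print(daily_api_cost(36 * 100_000, 0.01))   # 36000.0 (dollars per day)
```

Even under these modest assumptions, per-call authentication alone reaches tens of thousands of dollars a day, which is the friction in the marketplace the conversation is pointing at.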
How do we build processes that protect our organization? But probably a point where I'll pass to Sri in a moment is, how do we add a level of automation to that? How do we add a level of intelligence that doesn't always require a human to be fixing or remediating a problem? >> So, Sri, you mentioned... You're right, the keynote, it took 45 minutes before it even mentioned security. And I suppose it's because they've historically had this hardened stack. Everything's controlled and it's a safe environment. And now that's changing. So what would you add? >> You know, great point, right? If you look at telcos, they're used to a perimeter-based network. >> Yep. >> I mean, that's what we are. Boxed in, we knew our perimeter. Today, our perimeter is extended to our home, to everywhere we work, right? >> Yeah- >> We don't have a definition of a perimeter. Your browser is the new perimeter. And a good example, segueing to that: what we have seen is horizontal-based security. What we haven't seen is verticalization, especially in mobile. We haven't seen vertical mobile security solutions, right? Yes, you hear a little bit about automobile, you hear a little bit about healthcare, but what we haven't seen is, what about the food sector? What about the frontline in food? What about supply chain? What security are we really doing? And I'll give you a simple example. You brought up ransomware. Last night, Dole was attacked with ransomware. We have seen the beef producer, the Colonial Pipeline. Now, if we have seen agritech being hit, what does it mean? We are starting to hit humanity. If you can't really put food on the table, you're starting to really disrupt the supply chain, right? In a massive way. So you've got to start thinking about that. Why is Dole related to mobility? Think about that. They don't carry servers and computers. What they carry is mobile devices. That's where the supply chain works. And then that's where you have to start thinking about it.
And the evolution of ransomware: rather than a single-trick pony, you see them using multiple vulnerabilities. And Pegasus was the best example. Spyware across all politicians, right? And CEOs. It is six or seven vulnerabilities put together that actually were constructed to do an attack. >> Yeah. How does AI kind of change this? Where does it fit in? The attackers are going to have AI, but we could use AI to defend. But attackers are always ahead, right? (chuckles) So what's your... Do you have a point of view on that? 'Cause everybody's crazy about ChatGPT, right? The banks have all banned it. Certain universities in the United States have banned it. Another one's forcing its students to learn how to use ChatGPT to prompt it. It's all over the place. Do you have a point of view on this? >> So definitely, Dave, it's a great point. First, we all have to have our own generative AI. I mean, I look at it as your digital assistant, right? So when you had calculators... you can't function without a calculator today, right? It's not harmful. It's not going to take you away from doing multiplication. So we'll still teach arithmetic in school. You'll still use your calculator. So to me, AI will become an integral part. That's one beautiful thing I've seen on the show floor: every little thing there has an AI-based solution, right? So ChatGPT is well played from multiple perspectives. I would rather up-level it and say generative AI is the way to go. So there are three things. There is human-intensive triaging, where humans keep doing easy work, minimal work. You can use ML and AI to do that. There is human designing that you need to do. That's when you need to use AI. >> But I would say this: in the Enterprise, the quality of the AI has to be better than what we've seen so far out of ChatGPT, even though I love ChatGPT, it's amazing. But what we've seen from being... It's got to be... Don't you think it has to be cleaner, more accurate?
It can't make up stuff if I'm going to be automating my network with AI. >> I'll answer that question. It comes down to three fundamentals. The reason ChatGPT is giving outdated answers is it's not trained on the latest data. So for any AI and ML method, you've got to look at three things: your data, your domain expertise, who is training it, and your data model. In ChatGPT, it's older data, and it's biased to the people that trained it, right? >> Mm-hmm. >> And then the data model: it's going to spit out what it's trained on. That's a precursor of any GPT, right? It's a pre-trained transformer. >> So if we narrow that, right? Train it better for the specific use case, that AI has huge potential. >> You flip that to what the Enterprise customers talk to us about: insight is invaluable. >> Right. >> But then too much insight, too quickly, all the time, means we go remediation crazy. So we haven't got enough humans to be fixing all the problems. Sri's point with the ChatGPT data: some of that data we are looking at there could be old. So we're trying to triage something that may still be an issue, but it might have been superseded by something else as well. So that's my overriding point when I'm talking to customers and we talk ChatGPT; it's in the news all the time, it's very topical. >> It's fun. >> It is. I even said to my 13-year-old son yesterday, "Your homework's out of date," 'cause I knew he was doing some summary stuff on ChatGPT. So a little wind-up that it's out of date, just to make that emphasis around the model. And that's where we, with our Neurons platform at Ivanti, that's what we want to give the customers all the time, which is the real-time snapshot. So they can make a priority or a decision based on what that information is telling them. >> And we've kind of learned, I think, over the last couple of years, that access to real-time data, real-time AI, is no longer nice to have.
It's a massive competitive advantage for organizations, and it's going to enable the on-demand everything that we expect in our consumer lives, in our business lives. This is going to be table stakes for organizations, I think, in every industry going forward. >> Yeah. >> But that assumes 5G, right? Is going to actually happen and somebody's going to- >> Going to absolutely. >> Somebody's going to make some money off it at some point. When are they going to make money off of 5G, do you think? (all laughing) >> No. And then you asked a very good question, Dave. I want to answer that question. Will bad guys use AI? >> Yeah. Yeah. >> Offensive AI is a very big thing. We have to pay attention to it. It's going to create an asymmetric war. If you look at the president of the United States, he said, "If somebody's going to attack us on cyber, we are going to retaliate." For the first time, the US is willing to launch a cyber war. What that really means is, we're going to use AI for offensive reasons as well. And we as citizens have to pay attention to that. And that's what I'm worried about, right? AI bias, whether it's data, or domain expertise, or algorithmic bias, is going to be a big thing. And offensive AI is something everybody has to pay attention to. >> To your point, Sri, earlier about critical infrastructure getting hacked, I had this conversation with Dr. Robert Gates several years ago, and I said, "Yeah, but don't we have the best offensive, you know, technology in cyber?" And he said, "Yeah, but we got the most to lose too." >> Yeah, 100%. >> We're the wealthiest nation, the United States is. So you got to be careful. But to your point, the president of the United States saying, "We'll retaliate," right? Not necessarily start the war, but who started it? >> But that's the thing, right? Attribution is the hardest part. And then you talked about a very interesting thing, rich nations, right? There's emerging nations. There are nations left behind.
One thing I've seen on the show floor today is digital inequality. Digital poverty is a big thing. While we have this amazing technology, 90% of the world doesn't have access to this. >> Right. >> What we have done is we have created an inequality across the board, and especially in mobility and cyber. If this technology doesn't reach the last mile, which is the emerging nations, I think we are creating a crater back again and putting societies a few miles back. >> And at much greater risk. >> 100%, right? >> Yeah. >> Because those are the guys. In cyber, all you need is a laptop and a brain to attack. >> Yeah. Yeah. >> If I don't have it, that's where the civil war is going to start again. >> Yeah. What are some of the things in our last minute or so, guys, David, we'll start with you and then Sri, we'll go to you, that you're looking forward to at this MWC? The theme is velocity. We're talking about so much transformation and evolution in the telecom industry. What are you excited to hear and learn in the next couple of days? >> Just getting a complete picture. One is actually being out after the last couple of years, so you learn a lot. But just walking around and seeing, from my perspective, some vendor names that I haven't seen before, but seeing what they're doing and bringing to the market. But I think it goes back to the point made earlier around APIs and integration. Everybody's talking about how we can kind of do this together in a way. So integrations, those smart things, are what I'm kind of looking for as well, and how we plug into that as well. >> Excellent, and Sri? >> So for us, there is a lot to offer, right? So while I'm enjoying what I'm seeing here, I'm seeing it as an opportunity. We have an amazing portfolio of what we can do. We are into mobile device management. We are the last (indistinct) company. When people find problems, somebody has to go remediate them. We are the world's largest patch management company.
And what I'm finding is, yes, all these people are embedding software, pumping it out like nobody's business. As you find a vulnerability, somebody has to go fix it, and we want to be the (indistinct) company. We have the last mile. And I find an amazing opportunity, not only can we do device management, but we can do mobile threat defense and give them a risk prioritization on what needs to be remediated, and manage all that in our ITSM. So I look at this as an amazing, amazing opportunity. >> Right. >> Which is exponentially bigger than what I've seen before. >> So last question then. Speaking of opportunities, Sri, for you, what are some of the things that customers can go to? Obviously, you guys talk to customers all the time. In terms of learning what Ivanti is going to enable them to do, to take advantage of these opportunities. Any webinars, any events coming up that we want people to know about? >> Absolutely, ivanti.com is the best place to go because we keep everything there. Of course, "theCUBE" interview. >> Of course. >> You should definitely watch that. (all laughing) No. So we have quite a few industry events we do. And especially there's a lot of learning. And we just released the ransomware report that actually talks about ransomware from a global index perspective. So one thing we have done is, rather than just looking at vulnerabilities, we showed them the weaknesses that led to the vulnerabilities, and how attackers are using them. And we even talked about DHS, how behind they are in disseminating the information and how it's actually being used by nation states. >> Wow. >> And we did cover mobility as a part of that as well. So there's quite a bit we did in our report and it actually came out very well. >> I have to check that out. Ransomware is such a fascinating topic.
Guys, thank you so much for joining Dave and me on the program today, sharing what's going on at Ivanti, the changes that you're seeing in mobile, and the opportunities that are there for your customers. We appreciate your time. >> Thank you. >> Thank you. >> Yes. Thanks, guys. >> Thanks, guys. >> For our guests and for Dave Vellante, I'm Lisa Martin. You're watching "theCUBE" live from MWC23 in Barcelona. As you know, "theCUBE" is the leader in live tech coverage. Dave and I will be right back with our next guest. (gentle upbeat music)

Published Date : Feb 27 2023



Scott Walker, Wind River & Gautam Bhagra, Dell Technologies | MWC Barcelona 2023


 

(light music) >> Narrator: theCUBE's live coverage is made possible by funding from Dell Technologies. Creating technologies that drive human progress. (upbeat music) >> Welcome back to Spain everyone. Lisa Martin here with Dave Vellante, my co-host for the next four days. We're live in Barcelona, covering MWC23. This is only day one, but I'll tell you the theme of this conference this year is velocity. And I don't know about you Dave, but this day is flying by already. This is ecosystem day. We're going to have a great discussion on the ecosystem next. >> Well we're seeing the disaggregation of the hardened telco stack, and that necessitates an open ecosystem- we're going to talk about Open RAN, we've been talking about it even leading up to the show. It's a critical technology enabler and it's compulsory to have an ecosystem to support that. >> Absolutely compulsory. We've got two guests here joining us, Gautam Bhagra, Vice President of Partnerships at Dell, and Scott Walker, Vice President of Global Telco Ecosystem at Wind River. Guys, welcome to the program. >> Nice to be here. >> Thanks for having us. >> Thanks for having us. >> So you've got some news, this is day one of the conference, there's some news, Gautam, let's start with you, unpack it. >> Yeah, well there's a lot of news, as you know, at Dell World. One of the things we are very excited to announce today is the launch of the Open Telecom Ecosystems Community. I think, Dave, as you mentioned, getting into an Open RAN world is a challenge. And we know some of the challenges that our customers face. To help solve for those challenges, Dell wants to work with like-minded partners and customers to build innovative solutions, and do joint go-to-market. So we are launching that today. Wind River is one of our flagship partners for that, and I'm excited to be here to talk about that as well.
>> Can you guys talk a little bit about the partnership, maybe a little bit about Wind River so the audience gets that context? >> Sure, absolutely, and the theme of the show, Velocity, is what this partnership is all about. We create velocity for operators if they want to adopt Open RAN, right? We simplify it. Wind River as a company has been around for 40 years. We were part of Intel at one point, and now we're independent, owned by a company called Aptiv. And with that we get another round of investment to help continue our acceleration into this market. So, the Dell partnership is about, like I said, velocity, accelerating the adoption. When we talk to operators, they have told us there are many roadblocks that they face, right? Like systems integration, operating at scale. 'Cause when you buy a traditional radio access network solution from a single supplier, it's very easy. It works, it's been tested. When you break these components apart and disaggregate 'em, as we talked about, David, it creates integration points and support issues, right? And what Dell and Wind River have done together is created a cloud infrastructure solution that can host a variety of RAN workloads, and essentially create a two-layer cake. Overall, what we're trying to do is create a traditional RAN experience, with the innovation, agility, and flexibility of Open RAN. And that's really what this partnership does. >> So this workload innovation is interesting to me because you've got now developers, you know, and what the telco developer looks like, you know, is to be defined, right? I mean it's like this white sheet of paper that can create all this innovation. And to do that, you've got to have, as I said earlier, an ecosystem. But I'm interested in your Open RAN agenda and how you see that sort of maturity model taking place. 'Cause today, you got disruptors that are going to lean right in and say "Hey, yeah, that's great."
The traditional carriers, they have to have a, you know, they have to migrate, they have to have a hybrid world. We know that takes time. So what's that look like in the marketplace today? >> Yeah, so I mean, I can start, right? So from Dell's perspective, what we see in the market is, yes, there is a drive towards open- everyone understands the benefits of being open, right? There's the agility piece, the innovation piece. That's a no-brainer. The question is how do we get there? And I think that's where partnerships become critical to get there, right? So we've been working with partners like Wind River to build solutions that make it easier for customers to start adopting some of the foundational elements of an open network. One of the purposes in the agenda of building this community is to bring like-minded developers- like you said, we want those guys to come and work with the customers to create new solutions, and come up with something creative, which no one's even thought about, that accelerates adoption even quicker, right? So that's exactly what we want to do as well. And that's one of the reasons why we launched the community. >> Yeah, and what we find with a lot of carriers, they are used to buying, like I said, traditional RAN solutions which are provided by a single provider like Ericsson or Nokia and others, right? And when you break this apart and cloudify that network infrastructure, there's usually a skills gap we see at the operator level, right? And so from a developer standpoint, they struggle with having the expertise in order to execute on that. Wind River helps them, working with companies like Dell, simplify that bottom portion of the stack, the infrastructure stack. And we lifecycle manage it, we test- we're continually testing it and integrating it, so that the operator doesn't have to do that.
In addition to that, Wind River also has a history and legacy of working with different RAN vendors, both disruptors like Mavenir and Parallel Wireless, as well as traditional RAN providers like Samsung, Ericsson, and others soon to be announced. So what we're doing on the northbound side is making it easy by integrating that, and on the southbound side with Dell, so that again, instead of four or five solutions that you need to put together, it's simply two. >> And you think about today how you consume telco services- there are these fixed blocks of services that you can buy, and that has to change. It's more like the app stores. It's got to be an open marketplace, and that's where the innovation's going to come in, you know, from the developers, you know, top down maybe. I don't know, how do you see that maturity model evolving? People want to know how long it's going to take. So many questions, when will Open RAN be as reliable. Does it even have to be? You know, so many interesting dynamics going on. >> Yeah, and I think that's something we at Dell are also trying to find out, right? So we have been doing a lot of good work here to help our customers move in that direction. The work with Dish is an example of that. But I think we do understand the challenges as well in terms of adopting the technologies, and adopting the innovation that's being driven by Open RAN. So one of the agendas that we have as a company this year is to work with the community to drive this a lot further, right? We want to have customers adopt the technology more broadly with the tier one, tier two telcos globally. And our sales organizations are going to be working together with Wind River's to figure out who's the right set of customers to have these conversations with, so we can start driving this agenda a lot quicker than what we've seen historically. >> And where are you having those customer conversations?
Is that at the operator level, is it higher, is it both? >> Well, all operators are deploying 5G in preparation for 6G, right? And we're all looking for those killer use cases which will drive top-line revenue and not just make it a TCO discussion. And that starts at a very basic level today by doing things like integrating with Juniper, for their cloud router. So instead of, at the far edge cell site, having a separate device that's doing the routing function, right? We take that and we cloudify that application, and run it on the same server that's hosting the RAN applications, so you eliminate a device and reduce TCO. Now with Aptiv, which is primarily known as an automotive company, we're having lots of conversations, including with Dell and Intel and others, about vehicle to vehicle communication, vehicle to anything communication. And although that's a little bit futuristic, there are shorter term use cases, like vehicle to vehicle accident avoidance, which are going to be much nearer term than autonomous driving, for example, and which will help drive traffic and new revenue streams for operators.
>> Well, there's a couple of things on that, because Wind River, as you know, our legacy and history is in embedded devices like F-15 fighter jets, right? Or the Mars Rover or the James Webb telescope- they all run Wind River software. So, we know about can't-fail, ultra-reliable systems, and operators are not letting us off the hook whatsoever. It has to be as hardened and locked down, as secure as a traditional RAN environment. Otherwise they will (indistinct). >> That's table stakes. >> That's table stakes that gets us there. And Wind River, with our legacy and history, and having operator experience running live commercial networks with a disaggregated stack in the tens of thousands of nodes, understands what this is like, because they're running live commercial traffic with live customers. So we can't fail, right? And with that, they want their cake and to eat it too, right? Which is, I want ultra-reliable, I want what I have today, but I want the agility and flexibility to onboard third party apps. Like for example, this JCNR, this Juniper Cloud-Native Router. You cannot do something as simple as that on a traditional RAN appliance. In an open ecosystem you can take that workload and onboard it because it is an open ecosystem, and that's really one of the true benefits. >> So they want the mainframe, but they want (Scott laughs) the flexibility of the developer cloud, right? >> That's right. >> They want to have their cake and eat it too, and not gain weight. (group laughs) >> Yeah I mean David, I come from the public cloud world. >> We all don't want to do that. >> I used to work with a public cloud company, and nine years ago, public cloud was in the same stage, where you would go to a bank, and they would be like, we don't trust the cloud. It's not secure, it's not safe. It was the digital natives that adopted it, and that drove the industry forward, right?
And that's where the enterprises realized that they were losing business because of all these innovative new companies that came out. That's what I saw over the last nine years in the cloud space. I think in the telco space also, something similar might happen, right? So a lot of, I mean a lot of the new age telcos are understanding the value, are looking to innovate, and are adopting the open technologies, but there's still some inertia and hesitancy, for the reasons Scott mentioned, to go there so quickly. So we just have to work through it and balance between both sides. >> Yeah, well with that said, if there's still some inertia, but there's a theme of velocity, how do you help organizations balance that so they trust evolving? >> Yeah, and I think this is where our solution, like Infrastructure Block, is a foundational pillar to make that happen, right? So if we can take away the concerns that the organizations have in terms of security and reliability from the fundamental elements that build their infrastructure, by working with partners like Wind River- and Dell takes the ownership end-to-end to make sure that service works, and we have those telco grade SLAs- then the telcos can start focusing on what's next: the applications and the customer services on top. >> Customer service, customer experience. >> You know, that's an interesting point Gautam brings up, too, because support is an issue too. We all talk about how, when you break these things apart, it creates integration points that you need to manage, right? But there's also the support aspect of it. So imagine if you will, you had one vendor; you have an outage, you call that one vendor, one necktie to choke, right, for accountability for the network. Now you have four or five vendors that you have to work with. You get a lot of finger pointing. So at least at the infrastructure layer, right?
Dell takes first-call support for both the hardware infrastructure and the Wind River cloud infrastructure. And we are training and spinning them up to support it, but we're always behind them of course as well. >> Can you give us a favorite customer example that really articulates the value of the partnership and the technologies that it's delivering to customers? >> Well, Infra Block- >> (indistinct) >> Is quite new, and we do have our first customer, which is LG U plus, which was announced yesterday. Out of Korea, a small customer, but a very important one. Okay, and I think they saw the value of the integrated system. They don't have the (indistinct) expertise and they're leveraging Dell and Wind River in order to make that happen. But I always also say historically, before this new offering, it was Vodafone, right? Vodafone is a leader in Europe in terms of Open RAN, been very- Yago and Paco have been very vocal about what they're doing in Open RAN, and Dell and Wind River have been there with them every step of the way. And that's what I would say, kind of, led up to where we are today. We learned from engagements like Vodafone and I think KDDI as well. And it got us where we are today in understanding what the operators need and what the impediments are. And this directly addresses that. >> Those are two very different examples. You were talking about TCO before. I mean, so the earlier example is, that's an example to me of a disruptor. They'll take some chances, you know, maybe not as focused on TCO, of course they're concerned about it. Vodafone, I would think, is very concerned about TCO. But I'm inferring from your comments that you're trying to get the industry to check the TCO box, get there, and then move on to higher levels of value monetization.
The TCO is going to come down to how many humans it takes to run the network, is it not, is that- >> Well a lot of, okay- >> Or is it devices- >> So the big one now, particularly with Vodafone, is energy cost, right? >> Of course, greening the network. >> Two-thirds of the energy consumption in the network is the Radio Access Network. Okay, the OPEX, right? So any reductions, even if they're 5% or 10%, can save tens or hundreds of millions of dollars. So we do things creatively with Dell to understand if there's a lot of traffic at the cell site, and if there's not, we will change the C-state or P-state of the server, which basically spins it down, so it's not consuming power. But that's just at the infrastructure layer. Where this gets really powerful is working with the RAN vendors like Samsung and Ericsson and others, taking data from the traffic information there, and applying algorithms and AI to that to shut it down and spin it back up as needed. 'Cause the idea is you don't want that thing powered up if there's no traffic on it. >> Well there's a sustainability, ESG, benefit to that, right? >> Yes. >> And it's very compute intensive. >> A hundred percent. >> Which is great for Dell. But at the same time, if you're not able to manage that power consumption, the whole thing fails. I mean it's, because there's going to be so much data, and such an intense requirement. So this is a huge issue. Okay, so Scott, you're saying that in the TCO equation, a big chunk is energy consumption? >> On the OPEX piece. Now there's also the CapEx, right? And Open RAN solutions are now, from what we've heard from our customers today, roughly at parity. 'Cause you can do things like repurpose servers after their useful life for a lower demand application, which helps the TCO, right? Then you have situations like Juniper, where you can now take software that runs on the same device, eliminating a whole other device at the cell site.
So we're not just taking a server and software point of view, we're taking a whole cell site point of view as it relates to both CapEx and OPEX. >> And then once that infrastructure really gets adopted, that's when the innovation occurs. The ecosystem comes in. Developers now start to think of new applications that we haven't thought of yet. >> Gautam: Exactly. >> And that's where, that's going to force the traditional carriers to respond. They're responding, but they're doing so very carefully right now, and it's understandable why. >> Yeah, and I think you're already seeing some news in the, I mean Nokia's announcement yesterday with the rebranding, et cetera. That's all positive momentum in my opinion, right? >> What'd you think of the logo? >> I love the logo. >> I liked it too. (group laughs) >> It was beautiful. >> I thought it was good. You had the connectivity down below, you need pipes, right? >> Exactly. >> But you had this sort of cool letters, and then the pink horizon or pinkish, it was like (Scott laughs) endless opportunity. It was good, I thought it was well thought out. >> Exactly. >> Well, you pick up on an interesting point there, and what we're seeing, like advanced carriers like Dish, who has one of the true Open RAN networks, publishing APIs for programmers to build in their 5G network as part of the application. But we're also seeing the network equipment providers enable carriers to do that, 'cause carriers historically have not been advanced in that way. So there is a real recognition that in order for these networks to monetize new use cases, they need to be programmable, and they need to publish standard APIs, so you can access the 5G network capabilities through software. >> Yeah, and the problem for the carriers is there's not enough APIs that the carriers have produced yet.
So that's where the ecosystem comes in, is going to- >> A hundred percent. >> I think there's eight APIs that are published out of the traditional carriers, which is, I mean there's got to be 8,000 for a marketplace. So that's where the open ecosystem really has the advantage. >> That's right. >> That's right. >> That's right. >> Yeah. >> So it all makes sense on paper, now you just, you got a lot of work to do. >> We got to deliver. Yeah, we launched it today. We got to get some like-minded partners and customers to come together. You'll start seeing results coming out of this hopefully soon, and we'll talk more about it over time. >> Dave: Great. Awesome, thanks for sharing with us. >> Excellent. Guys, thank you for sharing, stopping by, sharing what's going on with Dell and Wind River, and why the opportunity's in it for customers and the technological evolution. We appreciate it, you'll have to come back, give us an update. >> Our pleasure, thanks for having us. (Group talks over each other) >> All right, thanks guys >> Appreciate it. >> For our guests and for Dave Vellante, I'm Lisa Martin. You're watching theCUBE, live from MWC23 in Barcelona. theCUBE is the leader in live tech coverage. (upbeat music)

Published Date : Feb 27 2023



Brian Stevens, Neural Magic | Cube Conversation


 

>> John: Hello and welcome to this cube conversation here in Palo Alto, California. I'm John Furrier, host of theCUBE. We got a great conversation on making machine learning easier and more affordable in an era where everybody wants more machine learning and AI. We're featuring Neural Magic, whose CEO is also a Cube alumni, Brian Stevens. Great to see you, Brian. Thanks for coming on this cube conversation. Talk about machine learning. >> Brian: Hey John, happy to be here again. >> John: What a buzz going on right now. Machine learning, one of the hottest topics, AI front and center, kind of going mainstream. We're seeing the success of the, of the kind of NextGen capabilities in the enterprise and in apps. It's a really exciting time. So perfect timing. Great, great to have this conversation. Let's start with taking a minute to explain what you guys are doing over there at Neural Magic. I know there's some history there, neural networks, MIT. But the, the convergence of what's going on, this big wave hitting, it's an exciting time for you guys. Take a minute to explain the company and your mission. >> Brian: Sure, sure, sure. So, as you said, the company's Neural Magic, and it spun out of MIT four plus years ago, along with some people and, and some intellectual property. And you summarized it better than I can, 'cause you said, we're just trying to make, you know, AI that much easier. And so, but like another level of specificity around it is: you know, in the world you have a lot of like data scientists really focusing on making AI work for whatever their use case is. And then the next phase of that, then they're looking at optimizing the models that they built. And then it's not good enough just to work on models. You got to put 'em into production. So, what we do is we make it easier to optimize the models that have been developed and trained, and then trying to make it super simple when it comes time to deploying those in production and managing them.
>> John: You know, we've seen this movie before with the cloud. You start to see abstractions come out. Data science, we saw, was like the, the secret art of being like a data scientist, now democratization of data. You're kind of seeing a similar wave with machine learning models, foundational models, some call it, developers are getting involved. Model complexity's still there, but, but it's getting easier. There's almost like the democratization happening. You got complexity, you got deployment, its challenges, cost, you got developers involved. So it's like, how do you grow it? How do you get more horsepower? And then how do you make developers productive, right? So like, this seems to be the thread. So, so where, where do you see this going? Because there's going to be a massive demand for, I want to do more with my machine learning. But what's the data source? What's the formatting? This kind of a stack develops. What, what are you guys doing to address this? Can you take us through and demystify this, this wave that's hitting, that everyone's seeing? >> Brian: Yeah. Now like you said, like, you know, the democratization of all of it. And that brings me all the way back to like the roots of open source, right? When you think about like, like back in the day you had to build your own tech stack yourself. A lot of people probably, probably don't remember that. And then you went, you're building, you're always starting on a body of code or a module that was out there with open source. And I think that's what I equate to where AI has gotten to, with what you were talking about, the foundational models that didn't really exist years ago. So you really were like putting the layers of your models together and the formulas, and it was a lot of heavy lifting. And so there was so much time spent on development, with far too few success cases, you know, to get into production to solve like a business or a technical need.
But as these, what's happening is, as these models are becoming foundational, it's meaning people don't have to start from scratch. They're actually able to, you know, the avant-garde now is, start with an existing model that almost does what you want, but then applying your data set to it. So it's, you know, it's really the industry moving forward. And then we, you know, and, and the best thing about it is open source plays a new dimension, but this time, you know, in the, in the realm of AI. And so to us though, like, you know, I've been like, I spent a career focusing on, I think, on like the, not just the technical side, but the consumption of the technology, and how it's still way too hard for somebody to actually like, operationalize technology that all those vendors throw at them. So I've always been like empathetic to the user around, like, you know, what their job is once you give them great technology. And so it's still too difficult, even with the foundational models, because what happens is there's really this impedance mismatch between the development of the model and then where, where the model has to live and run and be deployed, and the life cycle of the model, if you will. And so what we've done in our research is we've developed techniques to introduce what's known as sparsity into a machine learning model that's already been developed and trained. And what that sparsity does is, it unlocks, by making that model so much smaller. So in many cases we can make a model 90 to 95% smaller, even smaller than that in research. And, and so by doing that, we do that in a way that preserves all the accuracy of the foundational model, as you talked about. So now all of a sudden you get this much smaller model, just as accurate. And then the even more exciting part about it is we developed a software-based engine called DeepSparse.
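The sparsification step Brian describes, zeroing out most of a trained layer's weights while keeping the large-magnitude ones that carry accuracy, can be sketched in a few lines. This is an illustrative toy using numpy, not Neural Magic's actual algorithm; real pipelines typically prune gradually during fine-tuning rather than in one shot.

```python
import numpy as np

# Toy sketch of unstructured magnitude pruning (illustrative, not the
# production algorithm): zero the 90% of weights with the smallest
# absolute value, leaving the layer's shape unchanged.
rng = np.random.default_rng(0)
weights = rng.normal(size=(512, 512)).astype(np.float32)  # one dense layer

sparsity = 0.90
threshold = np.quantile(np.abs(weights), sparsity)  # magnitude cutoff
mask = np.abs(weights) >= threshold                 # True = weight survives
pruned = weights * mask

print(f"zeroed weights: {1 - mask.mean():.2%}")
print(f"remaining parameters: {int(mask.sum())} of {weights.size}")
```

The point of the exercise is that the surviving 10% of weights are exactly the ones with the largest magnitudes, which is why accuracy can be largely preserved while the stored and executed model shrinks dramatically.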
And what that, what the inference runtime does is, takes that now sparsified model and it runs it, but because you sparsified it, it only needs a fraction of the compute that it, that it would've needed otherwise. So what we've done is make these models much faster, much smaller, and then, by pairing that with an inference runtime, you now can actually deploy that model anywhere you want on commodity hardware, right? So x86 in the cloud, x86 in the data center, Arm at the edge. It's like this massive unlock that happens, because you get the, the state-of-the-art models, but you get 'em, you know, on the IT assets and the commodity infrastructure that is where all the applications are running today. >> John: I want to get into the inference piece and the DeepSparse you mentioned, but I first have to ask, you mentioned open source. Dave and I, with some fellow Cube alumni, were having a chat about, you know, the iPhone and Android moment, where you got proprietary versus open source. You got a similar thing happening with some of these machine learning models, where there's a lot of proprietary things happening, and the open source movement is growing. So is there a balance there? Are they all trying to do the same thing? Is it more like a chip, you know, silicon's involved, all kinds of things going on that are really fascinating from a science standpoint. What's your, what's your reaction to that? >> Brian: I think it's like anything. You know, the way we talk about AI, you'd think it had been around for decades, but the reality is, it's been some of the deep learning models. When we first, when we first started taking models that the Brain team was working on at Google and building APIs around them on Google Cloud, the first cloud to even have AI services, that was 2015, 2016. So when you think about it, it's really been, what, 6 years since like this thing is even getting liftoff. So I think with that, everybody's throwing everything at it.
You know, there's tons of funded hardware thrown at specialty for training or inference, new companies. There's legacy companies that are getting into like AI now, whether it's a, you know, a CPU company that's now building specialized ASICs for training. There's new tech stacks, proprietary software, and there's a ton of "as a service." So it really is, you know, what's gone from nascent 8 years ago is the wild, wild west out there. So there's a, there's a little bit of everything right now, and I think that makes sense, because at the early part of any industry it really becomes really specialized. And that's the, you know, showing my age of like, you know, the early part of the two thousands, you know, Red Hat. People weren't running x86 in the enterprise back then, and they thought it was a toy, and they certainly weren't running open source. But you really, and it made sense that they weren't, because it didn't deliver what they needed to at that time. So they needed specialty stacks, they needed expensive, they needed expensive hardware that did what an Oracle database needed to do. They needed proprietary software. But what happens is that commoditizes, through both hardware and through open source, and the same thing's really just starting with, with AI. >> John: Yeah. And I think that's a great point, and I want to call that out, because in any industry timing's everything, right? I mean, I remember back in the 80s, late 80s and 90s, AI, you know, stuff was going on, and it just wasn't, there wasn't enough horsepower, there wasn't enough tech. >> Brian: Yep. >> John: You mentioned some of the processing. So AI is this industry that has all these experts who have been scratching that itch for decades. And now with cloud and custom silicon, the tech fundamentals at the lower end of the stack, if you will, on the performance side, are significantly more performant. It's there, you got more capabilities. >> Brian: Yeah. >> John: Now you're kicking into more software, faster software. So it just seems like we're at a tipping point where finally it's here, like that AI moment or machine learning, and now data is, is involved. So this is where I see organizations really jumping in with the CEO mandate: hey team, make ML work for us, go figure it out. It's got to be an advantage for us. >> Brian: Yeah. >> John: So now they go, okay boss, we will. So what, what do they do? What's the steps an enterprise takes to get machine learning into their organizations? 'Cause you know, it's coming down from the boards, you know. How does this work for them? >> Brian: Yeah. Like the, you know, the, what we're seeing is it's like anything, like whether that was open source adoption or whether that was cloud adoption, it always starts usually with one person. And increasingly it is the CEO, who realizes they're getting further behind the competition because they're not leaning in, you know, faster. But typically it really comes down to like a really strong practitioner that's inside the organization, right? And, that realizes that the number one goal isn't doing more and just training more models and, and necessarily being proprietary about it. It's really around understanding the art of the possible. Something that's grounded in the art of the possible, what, what deep learning can do today, and what business outcomes you can deliver, you know, if you can employ it. And then there's well-proven paths through that. It's just that, because of where it's been, it's not that industrialized today. It's very much, you know, you see ML project by ML project, it's very snowflakey, right? And that was kind of the early days of open source as well. And so, we're just starting to get to the point where it's getting easier, it's getting more industrialized, there's less steps, there's less burden on developers, there's less burden on, on the deployment side.
And we're trying to bring that, that whole last mile, by saying, you know what? Deploying deep learning and AI models should be as easy as it is to deploy your application, right? You shouldn't have to take an extra step to deploy an AI model. It shouldn't require new hardware, it shouldn't require a new process, a new DevOps model. It should be as simple as what you're already doing. >> John: What is the best practice for companies to effectively bring an acceptable level of machine learning and performance into their organizations? >> Brian: Yeah, I think like the, the number one start is, like what you hinted at before, is they, they have to know the use case. They have to, in most cases, you're going to find across every industry, you know, that that problem's been tackled by some company, right? And then you have to have the best practice around fine-tuning the models that already exist. So fine-tuning that existing model, that foundational model, on your unique dataset. You, you know, if you are in medical instruments, it's not good enough to identify that it's a medical instrument in the picture. You got to know what type of medical instrument. So there's always a fine-tuning step. And so we've created open source tools that make it easy for you to do two things at once. You can fine-tune that existing foundational model, whether that's in the language space or whether that's in the vision space. You can fine-tune that on your dataset. And at the same time, you get an optimized model that comes out the other end. So you get kind of both things. So you, you no longer have to worry about, we're freeing you from worrying about the complexity of that transfer learning, if you will. And we're freeing you from worrying about, well, where am I going to deploy the model? Where does it need to be? Does it need to be on a device, an edge, a data center, a cloud edge? What kind of hardware is it? Is there enough hardware there?
We're liberating you from all of that. Because what you want, what you can count on, is there'll always be commodity capability, commodity CPUs where you want to deploy, in abundance, 'cause that's where your application is. And so all of a sudden we're just freeing you of that, of that whole step. >> John: Okay. Let's get into DeepSparse, because you mentioned that earlier. What inspired the creation of DeepSparse, and how does it differ from any other solutions in the market that are out there? >> Brian: Sure. So, so where is it unique? It starts by, by two things. One is, what the industry's pretty good at from the optimization side is they're good at like this thing called quantization, which turns like, you know, big numbers into small numbers, lower precision. So a 32-bit representation of a, of an AI weight into an 8-bit one. And they're good at like cutting out layers, which also takes away accuracy. What we've figured out is to take those, the industry techniques for those that are best practice, but we combined it with unstructured sparsity. So by reducing that model by 90 to 95% in size, that's great, because it's made it smaller. But we've taken that, with the DeepSparse engine: when you deploy it, it looks at that model and says, because it's so much smaller, I no longer have to run the part of the model that's been essentially sparsified. So what that's done is, it's meant that you no longer need a supercomputer to run models, because there's not nearly as much math and processing as there was before the model was optimized. So now what happens is, every CPU platform out there has, has an enormous amount of compute, because we've sparsified the rest of it away. So you can pick a, you can pick your, your laptop, and you have enough compute to run state-of-the-art models. The second thing is, and you need a software engine to do that, 'cause it ignores the parts of the model it doesn't need to run, which is what like specialized hardware can't do.
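The quantization idea mentioned here, turning 32-bit weights into 8-bit ones, can be sketched with a simple symmetric scheme. This is a generic illustration of the technique, not the specific method Neural Magic uses: one per-tensor scale maps float32 values onto the int8 range and back.

```python
import numpy as np

# Generic symmetric int8 quantization sketch (illustrative only): map
# float32 weights onto the integer range [-127, 127] with one scale factor.
rng = np.random.default_rng(1)
w = rng.normal(scale=0.1, size=1000).astype(np.float32)

scale = np.abs(w).max() / 127.0
w_int8 = np.round(w / scale).astype(np.int8)    # 4 bytes -> 1 byte per weight
w_restored = w_int8.astype(np.float32) * scale  # dequantize for compute

print(f"storage: {w.nbytes} bytes -> {w_int8.nbytes} bytes")
print(f"max round-trip error: {np.abs(w - w_restored).max():.6f}")
```

Storage drops by 4x, and the worst-case rounding error is bounded by half the scale, which is why accuracy loss from quantization alone is usually small and is further controlled in practice with calibration or quantization-aware fine-tuning.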
The second part is, it's then turned into a memory efficiency problem. So it's really around just getting memory, getting the models loaded into the cache of the computer and keeping it there, never having to go back out to memory. So, so our techniques are both: we reduce the model size, and then we only run the part of the model that matters, and then we keep it all in cache. And so what that does is, it gets us to like these, these low, low latencies, faster, and we're able to increase, you know, the CPU processing by an order of magnitude. >> John: Yeah. That low latency is key. And you got developers, you know, coding super fast. We'll get to the developer angle in a second. I want to just follow up on this, this motivation behind the, the DeepSparse, because you know, as we were talking earlier before we came on camera about the old days, I mean, not too long ago, virtualization and VMware abstracted away the OS from, from the hardware, right? And the server virtualization changed the game. >> Brian: Yeah. >> John: And that basically invented cloud computing as we know it today. So, so we see that abstraction. >> Brian: Yeah. >> John: There seems to be a motivation behind abstracting the machine learning models away from the hardware. And that seems to be bringing advantages to the AI growth. Can you elaborate on, is that true? And it's, what's your comment? >> Brian: It's true. I think it's true for us. I don't think the industry's there yet, honestly. 'Cause I think the industry still is of that mindset that if it took these expensive GPUs to train my model, then I want to run my model on those same expensive GPUs. Because there's often like not a separation between the people that are developing AI and the people that have to manage and deploy it where you need it. So the reality is, is that that's everything that we're after. Like, do we decrease the cost? Yes. Do we make the models smaller? Yes. Do we make them faster? Yes. But I think the most amazing power is that we've turned AI into a docker-based microservice. And so like, who in the industry wants to deploy their apps the old way, on an OS without virtualization, without docker, without Kubernetes, without microservices, without service mesh, without serverless? You want all those tools for your apps. By converting AI models so they can be run inside a docker container, with no apologies around latency and performance, 'cause it's faster, you get the best of that whole world that you just talked about, which is, you know, what we're calling, you know, software-delivered AI. So now the AI lives in the same world. Organizations that have gone through that digital cloud transformation with their app infrastructure, AI fits into that world. >> John: And this is where the abstraction concepts matter. When you have these inflection points, the convergence of compute, data, machine learning that powers AI, it really becomes a developer opportunity. Because now applications and businesses, when they actually go through the digital transformation, their businesses are completely transformed. There is no IT. Developers are the application. They are the company, right? So AI will be part of whatever business or app will be out there. So there is an application developer angle here. Brian, can you explain >> Brian: Oh, completely. >> John: how they're going to use this? Because you mentioned docker container microservice, I mean, this really is an insane flipping of the script for developers. >> Brian: Yeah. >> John: So what's that look like? >> Brian: Well, it's because like AI's kind of, I mean, again, like it's come so fast. So you figure there's my app team and here's my AI team, right? And they're in different places, and the AI team is dragging in specialized infrastructure in support of that as well. And that's not how app developers think. Like, they've run on fungible infrastructure that's abstracted and virtualized forever, right? And so what we've done is we've, in addition to fitting into that world that they, that they like, we've also made it simple for them, where they don't have to be a machine learning engineer to be able to experiment with these foundational models and transfer learn 'em. We've done that. So they can do that in a couple of commands, and it has a simple API that they can either link to their application directly, as a library, to make inference calls, or they can stand it up as a standalone, you know, scale-up, scale-out inference server. They get two choices. But it really fits into that, you know, you know, that world that the modern developer, whether they're just using Python or C or otherwise, we made it just simple. So as opposed to like, go learn something else, they kind of don't have to. So in a way though, it's made it, it's almost made it hard, because people expect, when we talk to 'em for the first time, the old way. Like, how do you look like a piece of hardware? Are you compatible with my existing hardware that runs ML? Like, no, we're, we're not. Because you don't need that stack anymore. All you need is a library call to make your prediction, and that's it. That's it. >> John: Well, I mean, we were joking on Twitter the other day with someone saying, is AI a pet or a cattle? Right? Because they love their, their AI bots right now. So, so I'd say pet there. But you look at a lot of, there's going to be a lot of AI. So on a more serious note, you mentioned microservices. Will DeepSparse have an API for developers? And what does that look like? What do I do? >> Brian: Yeah. >> John: Tell me what my, as a developer, what's the roadmap look like? What's the... >> Brian: Yeah, it, it really can go in both modes. It can go in a standalone server mode, where it handles, you know, REST API, and it can scale out with K8s as the workload comes up, and scale back. And like, try to make hardware do that. Hardware may scale back, but it's just sitting there dormant, you know. So with this, it scales the same way your application needs to. And then for a developer, they basically just, they just pip install deepsparse, you know, it has one command to do an install, and then they do two calls, really. The first call is a library call that the app makes to create the model. And the model's really already trained, but it's, it's called a model create call. And the second command they do is they make a call to do a prediction. And it's as simple as that. So it's, it's AI as simple as using any other library that the developers are already using, which, which sounds hard to fathom, because it is just so simplified. >> John: Software-delivered AI. Okay, that's a cool thing. I believe in it personally. I think that's the way to go. I think there's going to be plenty of hardware options if you look at the advances of cloud players that got more silicon coming out. Yeah. More GPU. I mean, there's more instances, I mean, everything's out there right now. So the question is, how does that evolve in your mind? Because that seems to be key. You have open source projects emerging. What, what path does this take? Is there a parallel mental model that you see, Brian, that is similar? You mentioned open source earlier. Is it more like a VMware virtualization thing, or is it more of a cloud thing? Is it going to evolve in a, in a trajectory that looks similar to what we might've seen in the past? >> Brian: Yeah, we're, you know, when I, when I got involved with the company, what I, when I thought about it and I was reasoning about it, like we all do when you want to join something full-time, I thought about it and said, where will the industry eventually get to, right? To fully realize the value of, of deep learning, and what's plausible as it evolves. And to me, like, I know it's the old adage of, you know, software eats hardware. But it truly was like, you know, we can solve these problems in software. Like, there's nothing special that's happening at the hardware layer in the processing of AI. The reality is that it's just early in the industry. So the view that, that we had was like, this is eventually the best place where the industry will be: the liberation of being able to run AI anywhere. Like, you're really not democratizing, you democratize the model, but if you can't run the model anywhere you want, because these models are getting bigger and bigger with these large language models, then you're kind of not democratizing, if you got to go and like, buy a cluster to run this thing on. So the democratization comes by if, all of a sudden, that model can be consumed anywhere on demand, without planning, without provisioning, wherever infrastructure is. And so I think that's, with or without Neural Magic, that's where the industry will go and will get to. I think we're the leaders in getting it there, right, because we're more advanced on these techniques. >> John: Yeah. And your background too. You've seen OpenStack, pre-cloud, you saw open source grow, and still exponentially growing. And so you have the same similar dynamic with machine learning models growing. And they're also segmenting into almost an ML stack, or foundational model, as we talk about. So you're starting to see the formation of tooling, inference. So a lot of components coming. It's almost a stack, it's almost, it literally is like an operating system problem space, you know? How do you run things, how do you link things? How do you bring things together? Is that what's going on here? Is this like a data modeling operating environment, kind of Red Hat type thing going on? Like... >> Brian: Yeah. Yeah. Like, I think there is, you know, I thought about that too. And I think there is the role of like distribution, because the industrialization is not happening fast enough of this. Like, every customer, every, every user does it in their own kind of way. Like it's not, everyone's a little bit of a snowflake. And I think that's okay. There's definitely plenty of companies that want to come in and say, well, this is the way it's going to be, and we industrialize it as long as you do it our way. The reality is, technology doesn't get industrialized by one company just saying, do it our way. And so that's why, like, we've taken the approach through open source, by saying like, hey, you haven't really industrialized it if you said, we made it simple, but you always got to run AI here. Yeah, right. You only like really industrialize it if you break it down into components that are simple to use, and they work integrated in the stack the way you want them to. And so to me, that first principle was getting things into microservices and dockers that could be run on VMware, OpenShift, on the cloud, in the edge. And so that's the, that's the real part that we're happening with. The other part, like I do agree, like I think it's going to quickly move into less about the model, less about the training of the model and the transfer learning, you know, the data set of the model. We're taking away the complexity of optimization, liberating deployment to be anywhere. And I think the last mile, John, is going to be around the ML ops around that. Because it's easy to think of, now that it's just a software problem, we've turned it into a software problem, it's easy to think of software as like kind of a point release, but that's not the reality, right? It's a life cycle. And it's, and so I think ML very much brings in the, what is the lifecycle of that deployment?
And, you know, you get into more interesting conversations, to be honest, than like, once you've deployed in a docker container, it's around like model drift, and accuracy, and the dataset changes, and the user changes. It's how do you, from an ML perspective, send a signal back for retraining. And, and that's where I think a lot of the, more of the innovation's going to start to move. >> John: Yeah. And software also, the software problem, the software opportunity as well, is developer focused. And if you look at the cloud native landscape now, similar stacks developing, a lot of components, a lot of things to, to stitch together, a lot of things that are automating under the hood, a lot of developer productivity conversations. I think this is going to go down that same road. I want to get your thoughts, because developers will set the pace. And this is something that's clear in this next wave: developer productivity. They're the de facto standards bodies. They will decide what microservices check, API check. Now, skill gap is going to be a problem, because it's relatively new. So model sprawl, model sizes, proprietary versus open. There has to be a way to kind of crunch that down into a, like a DevOps, like, just make it, get the developer out of the, the muck. So what's your view? Are we early days like that? Or what's the young kid in college, studying CS or whatever degree, who comes into this with, with both feet, what are they doing? >> Brian: I'll probably say like the, the non-popular answer to that, a little bit, is it's happening so fast that it's going to get kind of boring fast. Meaning like, yeah, you could go to school and go to MIT, right? Sorry. Like, and you could go end to end, like becoming a model architect, like inventing the next model, right? And the layers, and combining 'em, and et cetera, et cetera, and the operators, and, and building a model that's bigger than the last one and trains faster, right?
And there will be those people, right, that actually, like, they're building the engines. The same way, you know, I grew up as an infrastructure software developer. There's not a lot of companies that hire those anymore, because they're all sitting inside of three big clouds. Yeah. Right? So you better be a good app developer. But I think what you're going to see is, before, you had to be everything. If you were going to use infrastructure, you had to know how to build infrastructure. And I think the same thing's true, and it's quickly exiting, in ML: to be able to use ML in your company, you'd better be like, great at every aspect of ML, including every intricacy inside of the model and every operation it's doing. That's quickly changing. Like, you're going to start with a starting point. You know, in the future you're not going to be like cracking open these GPT models, you're going to just be pulling them off the shelf, fine-tuning 'em, and go. You don't have to invent it. You don't have to understand it. And I think that's going to be a pivot point, you know, in the industry, between, you know, what's the future, what's, what's the future of a, a data scientist, ML engineer, researcher look like? >> John: I think that's, the outcome's going to be determined. I mean, you mentioned, you know, doing it yourself, what an SRE is for a Google, with the server scale's huge. So yeah, it might, at the beginning, get boring, you get obsolete quickly, but that means it's progressing. So, the scale becomes huge. And that's where I think it's going to be interesting, when we see that scale. >> Brian: Yep. Yeah, I think that's right. I think that's right. And we always, and, and what I've always said, and, much the same, the directive to my ML team, is that I want every developer to be as adept at being able to take advantage of ML as an ML engineer, right? It's got to be that simple. And I think, I think it's getting there. I really do.
>> John: Well, Brian, great, great to have you on theCUBE here on this cube conversation, as part of the startup showcase that's coming up. You're going to be featured, or rather, your company will be featured, on the upcoming AWS Startup Showcase on making machine learning easier and more affordable as more machine learning models come in. You guys got DeepSparse and some great technology. We're going to dig into that next time. I'll give you the final word right now. What do you see for the company? What are you guys looking for? Give a plug for the company right now. >> Brian: Oh, give a plug that I haven't already doubled in as the plug. >> John: You're hiring engineers, I assume, from MIT and other places. >> Brian: Yep. I think like the, the biggest thing is like, like, we're on the developer side. We're here to make this easy. The majority of inference today is, is on CPUs already, believe it or not, as much as, kind of, we like to talk about hardware and specialized hardware. The majority is already on CPUs. We're basically bringing 95% cost savings to CPUs through this acceleration. So, but we're trying to do it in a way that makes it community first. So I think the, the shout out would be: come find the Neural Magic community and engage with us, and you'll find, you know, a thousand other like-minded people in Slack that are willing to help you, as well as our engineers. And, and let's, let's go take on some successful AI deployments. >> John: Exciting times. This is, I think, one of the pivotal moments: NextGen data, machine learning, and now starting to see AI not be that chat bot, just, you know, customer support or some basic natural language processing thing. You're starting to see real innovation. Brian Stevens, CEO of Neural Magic, bringing the magic here. Thanks for the time. Great conversation. >> Brian: Thanks John. >> John: Thanks for joining me. >> Brian: Cheers. Thank you. >> John: Okay.
I'm John Furrier, host of theCUBE here in Palo Alto, California for this cube conversation with Brian Stevens. Thanks for watching.
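The 95% CPU cost savings Brian describes rests on sparsity: most weights in a trained network can be zeroed with little accuracy loss, and sparse models are much cheaper to execute. The toy magnitude-pruning sketch below is illustrative only; Neural Magic's DeepSparse engine involves far more than this (structured sparsity, quantization, cache-aware execution), and the function name and example weights here are invented for demonstration.

```python
def magnitude_prune(weights, sparsity=0.9):
    """Zero out the smallest-magnitude fraction of weights (toy illustration)."""
    flat = sorted(abs(w) for w in weights)
    cutoff_index = int(len(flat) * sparsity)
    # Everything below the threshold magnitude is pruned to exactly zero.
    threshold = flat[cutoff_index] if cutoff_index < len(flat) else float("inf")
    return [w if abs(w) >= threshold else 0.0 for w in weights]

w = [0.01, -0.5, 0.03, 0.9, -0.02, 0.4, 0.001, -0.7, 0.05, 0.2]
pruned = magnitude_prune(w, sparsity=0.8)
kept = sum(1 for x in pruned if x != 0.0)
print(f"{kept}/{len(pruned)} weights kept")  # only the large-magnitude weights survive
```

A real sparse runtime then skips the zeroed multiplications entirely, which is where the CPU speedup comes from.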

Published Date : Feb 13 2023


Chris Grusz, AWS | AWS Marketplace Seller Conference 2022


 

>>Hello. And welcome back to theCUBE's live coverage here in Seattle of the AWS Marketplace Seller Conference. Now part of a really big move and news: Amazon Partner Network combines with AWS Marketplace to form one organization, the Amazon Partner Organization, APO, where the efficiencies, the next iteration, as they say in Amazon language, where they make things better, simpler, faster for customers, is happening. We're here with Chris Grusz, who's the general manager, worldwide leader of ISV alliances and marketplace, which includes all the channel partners and the buyer and seller relationships, all now under one partner organization, bringing together years of work. If you work with AWS and are a partner and, or sell with them, it's all kind of coming together in a new way for the next generation. Chris, congratulations on the new role and the reorg. >>Thank you. Yeah, it's very exciting. We think it invents and simplifies the process on how we work with our partners, and we're really optimistic so far. The feedback's been great. And I think it's just gonna get even better as we kind of work out the final details. >>This is huge news because, one, we've been very close to the partners that you've been working with and we talk to; we cover them. We cover the news, the startups, from startups to channel partners to big ISVs, big and small, from the dorm room to the board room. You guys have great relationships. And marketplace, the future of procurement, how software will be bought, implemented, and deployed, has also changed. So you've got the confluence of two worlds coming together, growth in the ecosystem. Yep. NextGen cloud on the horizon for AWS and the customers, as digital transformation goes from lift and shift to refactoring businesses. Yep. This is really a seminal moment. Can you share what you talked about on the keynote stage here, around why this is happening now? What's the guiding principle? 
What's the north star? Why? What's the big news? >>Yeah. So, you know, there are a lot of reasons why we pulled the two teams together, but a lot of it gets centered around co-sell. If you take a look at marketplace, where we started off, it was really a machine image business, and it was a great self-service model, and we were working with ISVs that wanted to have this new delivery mechanism, which at the time was Amazon Machine Images. And as you fast forward, we started adding more product types, like SaaS and containers. And the experience that we saw was that customers would use marketplace up to a certain limit from a self-service perspective. But then, invariably, they wanted to buy at a quantity discount, they wanted to get an enterprise discount, and we couldn't do that through marketplace. And so they would exit us and go do a direct deal with an ISV. >>And so to remedy that, we launched private offers, you know, four years ago. And private offers now allowed ISVs to do these larger deals, but do 'em all through marketplace. And so they could start off doing self-service business, and then, as a customer graduated up to buying for a full department or an organization, they could use private offers to execute that larger agreement. And we started to do more and more private offers, which really coincided with a lot of the initiatives that were going on within Amazon Partner Network at the time around co-sell. And so we started to launch programs like ISV Accelerate that really focused on our co-sell relationship with ISVs. And what we found was that marketplace private offers became this awesome way to automate how we co-sell with ISVs. And so we had these two organizations that were parallel, and we said, you know what, this is gonna be better together. 
If we put them together, it's gonna invent and simplify, and we can use marketplace private offers as part of that co-sell experience and really feed that automation layer for all of our ISVs as they interact with AWS natively. >>Well, I gotta give you props; you and Mona's work on stage, you guys did a great job, and it reminds me of the humble nature of AWS and Amazon. I used to talk to Andy Jassy about this all the time. It reminds me of 2013 right now, because you're in that mode where Amazon re:Invent was in 2013, where you knew it was breaking out, but everyone said it was kind of small, "we haven't made it yet." But you guys are doing billions of dollars in transactions. And this event is really, I think, the beginning of what we're seeing as the changeover in securing and deploying applications in the cloud, because there are a lot of nuanced things I want to get your reaction on. One, I heard: making your product, as an ISV, more native to AWS's stack. That was one major callout. The other one I heard was, hey, if you're a channel partner, you can play too. And by the way, there's more choice. There's a lot going on here that's about to kind of explode in a good way for customers. Buyers get more access to assemble their solutions, and you've got all kinds of business logic, compensation, integration, and scale. This is like unprecedented. >>Yeah. It's exciting to see what's going on. I mean, I think we saw the tipping point probably about two years ago. Prior to that, you know, we would be working with ISVs and customers, and it was really much more of an evangelism role, where we were just getting people to try it. Just list a product, we think this is gonna be a good idea. And if you're a buyer, it's like, just try out a private offer, try out a self-service subscription. And what's happened now is there's no longer a lot of that convincing that needs to happen. 
It's really become accepted. And so a lot of the conversations I have now with ISVs are not about "should I do marketplace?" but "how do I do it better? How do I really leverage marketplace as part of my co-sell initiatives, as part of my go-to-market strategy?" 
They might have, you know, other services that are coming online from Amazon. How do I, as an ISV, get my stuff in there. Yeah. And how do I succeed? And what are you doing to make that better? Cause I know it's kind of new, but not new. Yeah, >>No, it's not. I mean, that's one of the things that we've really invested on is how do we make it really easy to list marketplace? And, you know, again, when we first start started, it was a big, huge spreadsheet that you had to fill out. It was very cumbersome and we've really automated all those aspects. So now we've exposed an API as an example. So you can go straight out of your own build process and you might have your own C I CD pipeline. And then you have a build step at the end. And now you can have that execute marketplace update from your build script, right across that API all the way over to AWS marketplace. So it's taking that effectively, a C CD pipeline from an ISV and extending it all the way to AWS and then eventually to a customer, because now it's just an automated supply chain for that software coming into their environment. And we see that being super powerful. There's nowhere manual steps >>Along. Yeah. I wanna dig into that because you made a comment and I want you to clarify it here in the cube. Some have said, even us on the cube. Oh, marketplace. Just the website's a catalog. Yeah. Feels old school. Yeah. Feels like 1995 database. I'm kind of just, you know, saying no offense sake. And now you're saying, you're now looking at this and, and implementing more of a API based. Why is that relevant? I'm I know the answer. You already set up with APIs, but explain the transition from the mindset of it's a website. Yeah. Buy stuff on a catalog to full blown API layer. Yeah. Services. >>Absolutely. 
Well, when you look at all AWS services, you know, our customers will interface, you know, they'll interface them through a console initially, but when they're using them in production, they're, it's all about APIs and marketplace, as you mentioned, did start off as a website. And so we've kind of taken the opposite approach. We've got this great website experience, which is great for demand gen and, you know, highlighting those listings. But what we want to do is really have this API service layer that you're interfacing with so that an ISV effectively is not even in our marketplace. They interfacing over APIs to do a variety of their high, you know, value functions, whether it's listing soy, private offers. We don't have that all available through APIs and the same thing on the buyer side. So it's integrating directly into their AWS environment and then they can view all their third party spend within things like our cost management suites. They can look at things like cost Explorer, see third party software, right next to first party software, and have that all integrated this nice as seamless >>For the customer. That's a nice cloud native kind of native experience. I think that's a huge advantage. I'm gonna track that closer. We're we're gonna follow that. I think that's gonna be the killer killer feature. All right. Now let's get to the killer feature and the business logic. Okay. Yeah. All partners all wanna know what's in it for me. Yeah. How do I make more cash? Yeah. How do I compensate my sales people? Yeah. What do you guys don't compete with me? Give me leads. Yeah. Can I get MDF market development funds? Yeah. So take me through the, how you're thinking about supporting the partners that are leaning in that, you know, the parachute will open when they jump outta the plane. Yeah. It's gonna be, they're gonna land safely with you. Yeah. MDF marketing to leads. What are you doing to support the partners to help them serve their >>Customers? 
It's interesting. Market marketplace has become much more of an accepted way to buy, you know, our customers are, are really defaulting to that as the way to go get that third party software. So we've had some industry analysts do some studies and in what they found, they interviewed a whole cohort of ISVs across various categories within marketplace, whether it was security or network or even line of business software. And what they've found is that on average, our ISVs will see a 24% increased close rate by using marketplace. Right. So when I go talk to a CRO and say, do you want to close, you know, more deals? Yes. Right. And we've got data to show that we're also finding that customers on average, when an ISV sales marketplace, they're seeing an 80% uplift in the actual deal size. And so if your ASP is a hundred K 180 K has a heck of a lot better, right? >>So we're seeing increased deal sizes by going through marketplace. And then the third thing that we've seen, that's a value prop for ISVs is speed of closure. And so on average, what we're finding is that our ISVs are closing deals 40% faster by using marketplace. So if you've got a 10 month sales cycle, shaving four months off of a sales cycle means you're bringing deals in, in an earlier calendar year, earlier quarter. And for ISVs getting that cash flow early is very important. So those are great metrics that we're seeing. And, and, you know, we think that they're only >>Gonna improve and from startups who also want, they don't have a lot of cash ISVs that are rich and doing well. Yeah. They have good, good, good, good, good to market funding. Yeah. You got the range of partners and you know, the next startup could be the next Figma could be in that batch startups. Exactly. Yeah. You don't know the game is changing. Yeah. The next brand could be one of those batch of startups. Yeah. What's the message to the startup community. Yeah. >>I mean, marketplace in a lot of ways becomes a level in effect, right. 
Because, you know, if, if you look at pre marketplace, if you were a startup, you were having to go generate sales, have a sales force, go compete, you know, kind of hand to hand with these largest ISVs marketplace is really kind of leveling that because now you can both list in marketplace. You have the same advantage of putting that directly in the AWS bill, taking advantage of all the management go features that we offer all the automation that we bring to the table. And so >>A lot of us joint selling >>And joint selling, right? When it goes through marketplace, you know, it's gonna feed into a number of our APN programs like ISV accelerate, our sales teams are gonna get recognized for those deals. And so, you know, it brings nice co-sell behavior to how we work with our, our field sales teams together. It brings nice automation that, you know, pre marketplaces, they would have to go build all that. And that was a heavy lift that really now becomes just kind of table stakes for any kind of ISV selling to an, any of >>Customer. Well, you know, I'm a big fan of the marketplace. I've always have been, even from the early days, I saw this as a procurement game changer. It makes total sense. It's so obvious. Yeah. Not obvious to everyone, but there's a lot of moving parts behind the scenes behind the curtain. So to speak that you're handling. Yeah. What's your message to the audience out there, both the buyers and the sellers. Yeah. About what your mission is, what you're you wake up every day thinking about. Yeah. And what's your promise to them and what you're gonna work on. Cause it's not easy. You're building a, an operating model. That's not a website. It's a full on cloud service. Yeah. What's your promise. And what's >>Your goals. No. And like, you know, ultimately we're trying to do from an Aus market perspective is, is provide that selection experience to the ABUS customer, right? 
There's the famous flywheel that Jeff put together that had the concepts of why Amazon is successful, and one of the concepts he points to is selection. And what we mean by that is, if you come to Amazon, it's effectively the everything store. And when you come across to AWS, AWS Marketplace becomes that selection experience. And so that's what we're trying to do: whatever our AWS customers wanna buy, whatever form factor, whatever software type, whatever data type, it's gonna be available in AWS Marketplace for consumption. And that ultimately helps our customers, because now they can get whatever technologies they need to use alongside AWS. >>And I wanna give you props too. You answered the hard question on stage. I've asked Andy Jassy this on theCUBE when he was the CEO, and Adam Selipsky last year; I asked him the same question, and the answer has been consistent: we have some solutions that people want from AWS end to end, but in your ecosystem, you want people to compete, yes, and build a product, and they mostly point to things like Snowflake, New Relic, other people that compete with Amazon services. You guys want that, you encourage that. You're ratifying that same statement. >>Absolutely, right. Again, it feeds into that selection experience. If a customer wants something, we wanna make sure it's gonna be a great experience. And so a lot of these ISVs are building on top of AWS; we wanna make sure that they're successful. And, you know, while we have a number of our own first-party services, we have a variety of third-party technologies that run very well in AWS, and ultimately the customer's gonna make their decision. We're customer-obsessed, and if they want to go with a third-party product, we're absolutely gonna support them in every way, shape, or form we can, and make sure that's a successful experience for our customers. 
I know you referenced two studies; check out the website, it's got the buyer and seller surveys on there. I don't want to get into that. I want to just end on one kind of final note: you got a lot of successful buyers and a lot of successful sellers. The word billions, with an S, was on the slide. Can you say the number, how many billions are sold through the marketplace? And the buyer experience future, what are those two things? >>Yeah. So we went on record at re:Invent last year, so it's approaching its birthday, but it was the first year in our 10-year history that we announced how much was actually being sold through the marketplace. And, you know, we are now selling billions of dollars through our marketplace, and that's with an S, so you can assume it's at least two, but it's a large number and it's growing >>Very quickly. Can't disclose, you know. >>But it's been a very healthy part of our business. And, you know, we look at this, the experience that we >>Saw, there's a lot of headroom. I mean, oh yeah, you have infrastructure nailed down, and that'll keep getting better, but you have basically growth upside with these other categories. What are the hot categories? >>You know, we started off with infrastructure-related products, and we've kind of hit critical mass there. There are very few ISVs left in that infrastructure-related space that are not in our marketplace. And what's happened now is our customers are saying, well, I've been buying infrastructure products for years; I'm gonna buy everything. I wanna buy my line-of-business software, I wanna buy my vertical solutions, I wanna buy my data, and I wanna buy all my services alongside of that. And so there's tons of upside. We're seeing all of these either horizontal business applications coming to our marketplace, or vertical-specific solutions. 
Which, you know, when we first designed our marketplace, we weren't sure would ever happen. We're starting to see that really accelerate, because customers are now just defaulting to buying everything through their marketplace. >>Chris, thanks for coming on theCUBE. I know we went a little extra long; we wanted to get that clarification on the new role. New organization, great reorg, it makes a lot of sense. Next level, NextGen. Thanks for coming on theCUBE. >>Thank you for the opportunity. >>All right, covering the big news here of AWS Marketplace and the AWS Partner Network coming together under one coherent organization, serving buyers and sellers, billions sold, the future of how people are gonna be buying software, deploying it, managing it, operating it. It's all happening in the marketplace. This is the big trend. It's theCUBE here in Seattle with more coverage of the AWS Marketplace Seller Conference, after the short break.

Published Date : Sep 21 2022


Bill Stratton, Snowflake | Snowflake Summit 2022


 

(ethereal music) >> Good morning, everyone, and welcome to theCUBE's day-two coverage of Snowflake Summit '22. Lisa Martin here with Dave Vellante. We are live in Las Vegas at Caesar's Forum, looking forward to an action-packed day here on theCUBE. Our first guest joins us, Bill Stratton, the global industry lead, media, entertainment and advertising at Snowflake. Bill, great to have you on the program talking about industry specifics. >> Glad to be here, excited to have a conversation. >> Yeah, the media and entertainment industry has been keeping a lot of us alive the last couple of years, probably more of a dependence on it than we've seen stuck at home. Talk to us about the data culture in the media, entertainment and advertising landscape, how is data being used today? >> Sure. Well, let's start with what you just mentioned, these last couple of years, I think, coming out of the pandemic, a lot of trends and impact to the media industry. I think there were some things happening prior to COVID, right? Streaming services were starting to accelerate. And obviously, Netflix was an early mover. Disney launched their streaming service right before the pandemic, Disney+, with ESPN+ as well. I think then, as the pandemic occurred these last two years, the acceleration of consumers' habits, obviously, of not just unbundling their cable subscription, but then choosing, you know, what services they want to subscribe to, right? I mean, I think we all sort of grew up in this era of, okay, the bundle was the bundle, you had sports, you had news, you had entertainment, whether you watched the channel or not, you had the bundle. And what the pandemic has accelerated is what I call, and I think a lot of folks call, the golden age of content. And really, the golden age of content is about the consumer. They're in control now, they pick and choose what services they want, what they watch, when they watch it. 
And I think that has extremely, sort of accelerated this adoption on the consumer side, and then it's creating this data ecosystem, as a result of companies like Disney having a direct-to-consumer relationship for the first time. It used to be a Disney or an NBC was a wholesaler, and the cable or satellite company had the consumer data and relationship. Now, the companies that are producing the content have the data and the consumer relationships. It's a fascinating time. >> And they're still coming over the top on the Telco networks, right? >> Absolutely right. >> Telco's playing in this game? >> Yeah, Telco is, I think what the interesting dynamic with Telco is, how do you bundle access, high speed, everybody still needs high speed at their home, with content? And so I think it's a similar bundle, but it takes on a different characteristic, because the cable and Telcos are not taking the content risk. AT&T sold Warner Media recently, and I think they looked at it and said, we're going to stay with the infrastructure, let somebody else do the content. >> And I think I heard, did I hear this right the other day, that Roku is now getting into the content business? >> Roku is getting into it. And they were early mover, right? They said the TVs aren't, the operating system in the television is not changing fast enough for content. So their dongle that you would slide into a TV was a great way to get content on connected televisions, which is the fastest growing platform. >> I was going to say, what are the economics like in this business? Because the bundles were sort of a limiting factor, in terms of the TAM. >> Yeah. >> And now, we get great content, all right, to watch "Better Call Saul", I have to get AMC+ or whatever. >> You know, your comment, your question about the economics and the TAM is an interesting one, because I think we're still working through it. 
One of the things, I think, that's coming to the forefront is that you have to have a subscription revenue stream. Okay? Netflix had a subscription revenue stream for the last six, eight, 10 years, significantly, but I think you even see with Netflix that they have to go to a second revenue model, which is going to be an ad-supported model, right? We see it in the press these last couple days with Reed Hastings. So I think you're going to see, obviously subscription, obviously ad-supported, but the biggest thing, back to the consumer, is that the consumer's not going to sit through two minutes of advertising to watch a 22-minute show. >> Dave: No way. >> Right? So what's then going to happen is that the content companies want to know what's relevant to you, in terms of advertising. So if I have relevancy in my ad experience, then it doesn't feel intrusive, and it's relevant to my experience. >> And the other vector in the TAM, just one last follow-up, is you see Amazon, with Prime, going consumption. >> Bill: That's right. >> You get it with Prime, it's sort of there, and the movies aren't the best in the world, but you can buy pretty much any movie you want on a consumption basis. >> Yeah. Just to your last quick point, there is, we saw last week, the Boston Red Sox are bundling season tickets with a subscription to their streaming service. >> NESN+, I think it is, yeah. So just like Prime, NESN+- >> And it's like 30 bucks a month. >> -just like Prime bundling with your delivery service, you're going to start to see all kinds of bundles happen. >> Dave: Interesting. >> Man, the sky is the limit, it's like it just keeps going and proliferating. >> Bill: It does. 
>> You talk about, on the ad side for a second, you mentioned the relevance, and we expect that as consumers, we're so demanding, (clears throat) excuse me, we don't have the patience; one of the things I think that was in short supply during COVID, and probably still is, is patience. >> That's right. >> I think with all of us. But we expect that brands know us well enough to serve up the content based on what they think we've watched; we watched "Breaking Bad" and "Better Call Saul", so don't show me other things that aren't relevant to the patterns I've been showing you. The content creators have to adapt quickly to the rising and changing demands of the consumer. >> That's right. Some people even think, as you go forward and consumers have this expectation, like you just mentioned, that brands not only need to understand their own view of the consumer, and this is going to come into the Snowflake points that we talk about in a minute, but the larger view that a brand has about a consumer: not just their own view, but how they consume content, where they consume it, what other brands they even like. That all builds the picture of making it relevant for the consumer and viewer. >> Where does privacy come into the mix? So we want it to be relevant and personalized in a non-creepy way. Talk to us about the data clean rooms that Snowflake launched, >> Bill: That's right. >> and how is that facilitating from a PII perspective, or is it? >> Yeah. Great question. So I think the other major development, in addition to the pandemic driving people to watch all these shows, is the fact that privacy legislation is increasing. So we started with California with the CCPA, we had GDPR in Europe, and what we're starting to see is state after state rolling out different privacy legislation. At some point, it may be true that we have federal privacy legislation, and there are some bills working through the legislature right now. Hard to tell what's going to happen. 
But to your question, the importance of privacy, and respecting privacy, is exactly happening at the same time that media companies and publishers need to piece together all the viewing habits that you have. You've probably watched, already this morning, on your PC, on your phone, and in order to bring that experience together a media company has to be able to tie that together, right? Collaborate. So you have collaboration on one side, and then you have privacy on the other, and they don't necessarily, normally, go together, right? They're opposing forces. So now though, with Snowflake, and our data clean room, we like to call it a data collaboration platform, okay? It's not really what a data warehouse function traditionally has been, right? So if I can take data collaboration, and our clean room, what it does is it brings privacy controls to the participants. So if I'm an advertiser, and I'm a publisher, and I want to collaborate to create an advertising campaign, they both can design how they want to do that privacy-based collaboration, because it's interesting, one company might have a different perspective of privacy, or a risk profile, than another company. So it's very hard to say one size is going to fit all. So what we at Snowflake do, with our infrastructure, is let you design how you create your own clean room. >> Is that a differentiator for Snowflake, the clean rooms? >> It's absolutely a very big differentiator. Two reasons, or probably two, three reasons, really. One is, it's cross cloud. So all the advertisers aren't going to be in the same cloud, all the publishers aren't going to be in the same cloud. One big differentiator there. Second big differentiator is, we want to be able to bring applications to the data, so our clean room can enable you to create measurement against an ad campaign without moving your data. So bringing measurement to the data, versus sending data to applications, then improves the privacy.
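The clean-room idea described here, collaborating on overlapping audiences without exposing raw identifiers, can be sketched in a few lines. This is only a toy illustration of the concept, not Snowflake's actual clean-room API; the hashed join keys and minimum-cohort threshold are assumptions for the sketch:

```python
import hashlib

def hashed_keys(emails, salt="shared-salt"):
    # Both parties hash identifiers with an agreed salt, so raw
    # emails never cross the collaboration boundary.
    return {hashlib.sha256((salt + e).encode()).hexdigest() for e in emails}

def clean_room_overlap(advertiser_emails, publisher_emails, k_min=50):
    # Only an aggregate count leaves the "room", and only when the
    # cohort is large enough to avoid re-identifying individuals.
    overlap = hashed_keys(advertiser_emails) & hashed_keys(publisher_emails)
    return len(overlap) if len(overlap) >= k_min else None
```

Each participant can choose its own `k_min`, which mirrors the point that one company's privacy risk profile may differ from another's.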
And then the third one is, frankly, our pricing model. You only pay for Snowflake what you use. So in the advertising world, there's what's called an ad tech tax; there is no ad tech tax for Snowflake, because we're simply a pay-as-you-go service. So it's a very interesting dynamic. >> So what's that stack look like, in your world? So I've pulled up Frank's chart, I took a picture of his, he's called it the new, modern data stack, I think he called it, but it had infrastructure in the bottom, okay, that's AWS, Google, Azure, and then a lot of, you know, live data, that would be the media data cloud, the workload execution, the specific workload here is media and entertainment, and then application development, that's a new layer of value that you're bringing in, marketplace, which is the whole ecosystem, and then monetization comes from building on top. >> Bill: Yes. >> So I got AWS in there, and other clouds, you got a big chunk of that, where do your customers add value on top of that? >> Yeah. So the way you described it, I think, with Frank's point, is right on. You have the infrastructure. We know that a lot of advertisers, for example, aren't going to use Amazon, because the retailer competes with Amazon, so they might want to be in Google or Azure. And then sort of as you go up the stack, for the data layer that is Snowflake, especially what we call first-party data, is sitting in that Snowflake environment, right? But that Snowflake environment is a distributed environment, so a Disney, who was on stage with me yesterday, she talked about, Jaya talked about their first-party data's in Snowflake, their advertisers' data's in their own Snowflake account, in their own infrastructure. And then what's interesting is that application layer is coming to the data, and so what we're really seeing is an acceleration of companies building that application natively on Snowflake to do measurement, to do targeting, to do activation.
And so, that growth of that final application layer is what we're seeing as the acceleration in the stack. >> So the more data that's in that massive distributed data cloud, the more value your customers can get out of it. And I would imagine you're just looking to tick things off, so that where customers are going outside of the Snowflake data cloud, let's attack that so they don't have to. >> Yeah, I think these partners, (clears throat) excuse me, and customers, it's an interesting dynamic, because they're customers of ours. But now, because anybody who is already in Snowflake can be their customer, then they're becoming our partner. So it's an interesting dynamic, because we're bringing advertisers to a Disney or an NBCU, because they already have their data in Snowflake. So the network effect that's getting created because of this layer that's being built is accelerated. >> In 2013, right after the second re:Invent, I wrote a piece called "How to Compete with the Amazon Gorilla." And it seemed to us pretty obvious at the time, you're not going to win the infrastructure game, you've got to build on top of it, you've got to build ecosystems within industries, and the data, the connection points, that network effect that you just talked about, it's actually quite thrilling to see you guys building that. >> Well, and I think you know this too, I mean, Amazon's a great partner of ours as well, right? So they're part of our media data cloud, as Amazon, right? So we're making it easier and easier for companies to be able to spin up a clean room in places like AWS, so that they get the privacy controls and the governance that's required as well. >> What do you advise to, say, the next generation of media and advertising companies who may be really early in the data journey?
Obviously, there's competition right here in the rear view mirror, but we've seen services that launch and fail, what do you advise to those folks that maybe are early in the journey, and how can Snowflake help them accelerate that to be able to launch services they can monetize, and get those consumers watching? >> I think the first thing for a lot of these brands is that they need to really own their data. And what I mean by that is, they need to understand the consumer relationship that they have, they need to take the privacy and the governance very seriously, and they need to start building that muscle. It's almost, it's a routine and a muscle that they just need to continue to kind of build up, because if you think about it, a media company spends two, three hours a day with their customer. You might watch two hours of a streaming show, but how much time do you spend with a single brand a day? Maybe 30 seconds, maybe 10 seconds, right? And so, they need to build the muscle, to be able to collect the data in a privacy-compliant way, build the intelligence off of that, and then leverage the intelligence. We talked about it a few days ago, and you look at a retailer, as a really good example, a retailer is using Snowflake and the retail data cloud to optimize their supply chain. Okay? But their supply chain extends beyond their own infrastructure to the advertising and marketing community, because if I can't predict demand, how do I then connect it to my supply chain? So our media data cloud is helping retailers and consumer product goods companies actually drive demand into their reconstructed supply chain. So they both work together. >> So you have a big focus, obviously, on the monetization piece, of course, that's a great place to start. Where do you see the media data cloud going? >> Yeah. I think we'll start to expand beyond advertising and beyond marketing. There's really important sub-segments of media. Gaming is one.
You talk about the pandemic and teenagers playing games on their phones. So we'll have an emphasis around gaming. We'll have an emphasis in sports. Sports is going through a big change in an ecosystem. And there's a big opportunity to connect the dots in those ecosystems as well. And then I think, to what we were just talking about, I think connecting commerce and media is a very important area. And I think the two are still very loosely connected today. It used to be, could I buy the Jennifer Aniston sweater from "Friends", right? That was always the analogy. Now, media and social media, and TikTok and everything else, are combining media and commerce very closely. So I think we'll start to see more focus around that as well. So that adds to your monetization. >> Right, right. And you can NFT that. (Lisa laughs) >> Bill: That's right, there you go, you can mint an NFT on that. >> It's the tip of the iceberg. >> Absolutely. >> There's so much more potential to go. Bill, thank you so much for joining us bright and early this morning, talking about what Snowflake is doing in media, entertainment and advertising. Exciting stuff, relevant to all of us, we appreciate your insights and your forward-looking statements. >> Thank you for having me. I enjoyed it. >> Our pleasure. >> Thank you. >> Good. >> Bill: Bye now. >> For our guest and Dave Vellante, I'm Lisa Martin, you're up early with us watching theCUBE's day-two coverage of Snowflake Summit '22. We'll be back in a moment with our next guest. (upbeat music)

Published Date : Jun 15 2022



Atri Basu & Necati Cehreli | Root Cause as a Service - Never dig through logs again


 

(upbeat music) >> Okay, we're back with Atri Basu, who is Cisco's resident philosopher, who also holds a master's in computer science. We're going to have to unpack that a little bit. And Necati Cehreli, who's technical lead at Cisco. Welcome, guys. Thanks for coming on theCUBE. >> Happy to be here. >> Thanks a lot. >> All right, let's get into it. We want you to explain how Cisco validated the Zebrium technology and the proof points that you have that it actually works as advertised. So first, Atri, first tell us about Cisco TAC. What does Cisco TAC do? >> So TAC, which is an acronym for Technical Assistance Center, is Cisco's support arm, the support organization. At the risk of sounding like I'm spouting a corporate line, the easiest way to summarize what TAC does is provide world class support to Cisco customers. What that means is we have about 8,000 engineers worldwide, and any of our Cisco customers can either go on our web portal or call us to open a support request. And we get about 2.2 million of these support requests a year. And what these support requests are, are essentially the customer will describe something that they need done, some networking goal that they have that they want to accomplish. And then it's TAC's job to make sure that that goal does get accomplished. Now, it could be something like they're having trouble with an existing network solution and it's not working as expected, or it could be that they're integrating with a new solution. They're, you know, upgrading devices, maybe there's a hardware failure, anything really to do with networking support and, you know, the customer's network goals. If they open up a case asking for help, then TAC's job is to respond and make sure the customer's, you know, questions and requirements are met. About 44% of these support requests are usually trivial and, you know, can be solved within a call or within a day.
But the rest of TAC cases really involve getting into the network device, looking at logs. It's a very technical role. It's a very technical job. You need to be conversant with network solutions, their designs, protocols, et cetera. >> Wow. So 56% non-trivial. And so I would imagine you spend a lot of time digging through logs. Is that true? Can you quantify that, like, you know, every month how much time you spend digging through logs, and is that a pain point? >> Yeah, it's interesting you asked that, because when we started on this journey to augment our support engineers' workflow with the Zebrium solution, one of the things that we did was we went out and asked our engineers what their experience was like doing log analysis. And the anecdotal evidence was that on average an engineer will spend three out of their eight hours reviewing logs, either online or offline. So what that means is either with the customer live on a WebEx, they're going to be going over logs, network state information, et cetera, or they're going to do it offline, where the customer sends them the logs, it's attached to a, you know, a service request, and they review it and try to figure out what's going on and provide the customer with information. So it's a very large chunk of our day. You know, I said 8,000 plus engineers, and so three hours a day, that's 24,000 man hours a day spent on log analysis. Now the struggle with logs, or analyzing logs, is that, out of necessity, logs are very concise. They try to pack a lot of information in a very little space. And this is for performance reasons, storage reasons, et cetera, but the side effect of that is they're very esoteric. So they're hard to read if you're not conversant, if you're not the developer who wrote these logs or you aren't doing code deep dives.
And you're looking at where these logs are getting printed and things like that, it may not be immediately obvious, or even after a little while it may not be obvious, what that log line means or how it correlates to whatever problem you're troubleshooting. So it requires tenure. It requires, you know, like I was saying before, it requires a lot of knowledge about the protocol, what's expected, because when you're doing log analysis what you're really looking for is a needle in a haystack. You're looking for that one anomalous event, that single thing that tells you this shouldn't have happened, and this was a problem, right. Now doing that kind of anomaly detection requires you to know what is normal. It requires, you know, what the baseline is. And that requires a very in depth understanding of, you know, the state changes for that network solution or product. So it requires tenure and expertise to do well. And it takes a lot of time even when you have that kind of expertise. >> Wow. So thank you, Atri. And Necati, that's almost two days a week for a technical resource. That's not inexpensive. So what was Cisco looking for to sort of help with this, and how'd you stumble upon Zebrium? >> Yeah, so, we have our internal automation system which has been running more than a decade now. And what happens is when a customer attaches a log bundle or diagnostic bundle to the service request, we take that from the SR, we analyze it, and we present some kind of information, you know, it can be alerts or some tables, some graphs, to the engineer, so they can, you know, troubleshoot this particular issue. This is an incredible system, but it comes with its own challenges around maintenance to keep it up to date and relevant with Cisco's new products or a new version of a product, new defects, new issues, and all kinds of things. And what I mean by those challenges is, let's say Cisco comes up with a product today. We need to come together with those engineers.
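The "needle in a haystack" search described here, finding the one anomalous event against a learned baseline, is the general shape of unsupervised log anomaly detection. A minimal sketch of that idea (not Zebrium's actual algorithm) collapses log lines into structural templates and flags the rare ones:

```python
import re
from collections import Counter

def template(line):
    # Replace variable tokens (hex ids, numbers, IPs) so lines
    # collapse into structural "event types".
    return re.sub(r"0x[0-9a-fA-F]+|\d+(\.\d+)*", "<*>", line).strip()

def rare_events(log_lines, max_count=1):
    # Anything whose template appears at most max_count times is
    # anomalous relative to the baseline of this log bundle.
    counts = Counter(template(l) for l in log_lines)
    return [l for l in log_lines if counts[template(l)] <= max_count]
```

In practice the baseline would be learned continuously per product, but even this toy version shows why no hand-written rules or labeled training data are needed.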
We need to figure out how this bundle works, how it's structured. We need to select individual logs which are relevant, and then start modeling these logs and get some values out of those logs, using parsers or some regexes, to come to a level where we can consume the logs. And then people start writing rules on top of that abstraction. So people can say, in this log I'm seeing this value together with this other value in another log, maybe I'm hitting this particular defect. So that's how it works. And if you look at it, the abstraction can fail the next time. In the next release, when the developer or engineer decides to change that log line which you wrote that regex for, or they can come up with a new version which completely changes the services or processes, then whatever you have written needs to be re-written for the new service. And we see that a lot with products, like for instance WebEx, where you have a very short release cycle, where things can change maybe the next week with a new release. So whatever you are writing, especially for that abstraction and for those rules, is maybe not relevant with that new release. With that being said, we have an incredible rule creation process and governance process around it, which starts with maybe a defect, and then it takes it to a level where we have an automation in place. But if you look at it, this really ties to human bandwidth. And our engineers are really busy working on, you know, customer facing work, working on issues daily, and sometimes creating new rules or these parsers are not their biggest priority, so they can be delayed a bit. So we have this delay between a new issue being identified to a level where we have the automation to detect it the next time that some customer faces it. So with all these questions and with all these challenges in mind, we started looking into ways of actually how we can automate this automation. So these things that we are doing manually, how we can move it a bit further and automate.
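The brittleness of that hand-written rule approach can be seen in a tiny hypothetical example; the rule text and field values below are made up for illustration, not real Cisco log formats:

```python
import re

# A hand-written rule of the kind described above: "if this value
# appears in this exact log line, we are probably hitting defect X".
RULE = re.compile(r"BGP neighbor (\S+) state changed to Idle")

def matches_defect(log_lines):
    # Return the neighbor addresses from lines the rule recognizes.
    return [m.group(1) for l in log_lines for m in [RULE.search(l)] if m]
```

A release that rewords the log line (say, "BGP peer ... moved to Idle") silently stops matching, which is exactly the re-writing burden and detection delay described above.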
And we had actually a couple of things in mind that we were looking for, one of them being that this has to be product agnostic. Like if Cisco comes up with a product tomorrow, I should be able to take its logs without writing, you know, complex regexes, parsers, whatever, and deploy it into this system, so it can ingest our logs and make sense of them. And we wanted this platform to be unsupervised. So none of the engineers need to create rules, you know, label logs, this is bad, this is good, or train the system, which requires a lot of computational power. And the other most important thing for us was we wanted this to be not noisy at all, because what happens with noise is, when your level of false positives is really high, your engineers start ignoring the good things in between that noise. So they start, the next time, you know, thinking that this thing will not be relevant. So we wanted something with a lot less noise. And ultimately we wanted this new platform or new framework to be easily adaptable to our existing workflow. So this is where we started. We started looking into, you know, first of all, internally, whether we could build this thing, and also started researching it, and we came upon Zebrium. Actually, it was Larry, one of the co-founders of Zebrium; we came upon his presentation where he clearly explained why this is different, how this works, and it immediately clicked, and we said, okay, this is exactly what we were looking for. We dove deeper. We checked the blog posts, where the Zebrium guys really explain everything very clearly. They're really open about it. And most importantly, there is a button in their system. What happens usually with AI ML vendors is they have this button where you fill in your details and a sales guy calls you back and, you know, explains the system. Here, they were like, this is our trial system. We believe in the system, you can just sign up and try it yourself. And that's what we did.
We took one of our Cisco DNA Center wireless platforms. We started streaming logs out of it. And then we synthetically, you know, introduced errors, like we broke things. And then we realized that Zebrium was really catching the errors perfectly. And on top of that, it was really quiet unless you were really breaking something. And the other thing we realized during that first trial was Zebrium was actually bringing a lot of context on top of the logs. During those failures, we worked with a couple of technical leaders and they said, "Okay, if this failure happens, I'm expecting this individual log to be there." And we found out with Zebrium, apart from that individual log, there were a lot of other things which give a bit more context around the root cause, which was great. And that's where we wanted to take it to the next level. Yeah. >> Okay. So, you know, a couple things to unpack there. I mean, you have the dart board behind you, which is kind of interesting, 'cause a lot of times it's like throwing darts at the board to try to figure this stuff out. But to your other point, Cisco actually has some pretty rich tools with AppD and doing observability, and you've made acquisitions like ThousandEyes. And like you said, I'm presuming you've got to eat your own dog food or drink your own champagne. And so you've got to be tools agnostic. And when I first heard about Zebrium, I was like, wait a minute. Really? I was kind of skeptical. I've heard this before. You're telling me all I need is plain text and a timestamp, and you've got my problem solved. So, and I understand that you guys said, okay, let's run a POC. Let's see if we can cut that from, let's say, two days a week down to one day a week. In other words, 50%; let's see if we can automate 50% of the root cause analysis. And so you funded a POC. How did you test it? You put, you know, synthetic, you know, errors and problems in there, but how did you test that it actually works, Necati? >> Yeah.
So we wanted to take it to the next level, which means that we wanted to back test it with existing SRs. And we decided, you know, we chose four different products from four different verticals: data center, security, collaboration, and enterprise networking. And we found SRs where the engineer put some kind of log in the resolution summary. So they closed the case, and in the summary of the SR they put "I identified these log lines and they led me to the root cause," and we ingested those log bundles. And we tried to see if Zebrium could surface that exact same log line in its analysis. So we initially did it ourselves, and after 50 tests or so we were really happy with the results. I mean, in most of them we saw the log line that we were looking for, but that was not enough. And we brought it of course to our management and they said, "Okay, let's try this with real users," because the log being there is one thing, but the engineer reaching that log is another thing. So we wanted to make sure that when we put it in front of our users, our engineers, they can actually come to that log themselves, because, you know, we know this platform, so we can, you know, make searches and find whatever we are looking for, but we wanted to test that. So we extended our pilots to some selected engineers and they tested with their own SRs, and also did some back testing for some SRs which were closed in the past or recently. And with a sample set of, I guess, close to 200 SRs, we found that the majority of the time, almost 95% of the time, the engineer could find the log they were looking for in Zebrium's analysis. >> Yeah. Okay. So you were looking for 50%, you got 95%. And my understanding is you actually did it with four pretty well known Cisco products: WebEx client, DNA Center, Identity Services Engine, ISE, and then UCS, Unified Computing System. So you used actual real data, and that was kind of your proof point, but Atri, that sounds pretty impressive.
And have you put this into production now, and what have you found? >> Well, yes, we've launched this with the four products that you mentioned. We're providing our TAC engineers with the ability that, whenever a support bundle for that product gets attached to the support request, we are processing it using sense, and then providing that sense analysis to the TAC engineer for their review. >> So are you seeing the results in production? I mean, are you actually able to reclaim that time that people are spending? I mean, it was literally almost two days a week down to, you know, a part of a day, is that what you're seeing in production, and what are you able to do with that extra time? Are people getting their weekends back? Are you putting 'em on more strategic tasks? How are you handling that? >> Yeah. So what we're seeing is, and I can tell you from my own personal experience using this tool, that troubleshooting any one of the cases, I don't take more than 15 to 20 minutes to go through the Zebrium report. And I know within that time either what the root cause is, or I know that Zebrium doesn't have the information that I need to solve this particular case. So we've definitely seen, well, it's been very hard to measure exactly how much time we've saved per engineer, right? Again, anecdotally, what we've heard from our users is that out of those three hours that they were spending per day, we're definitely able to reclaim at least one of those hours, and even more importantly, you know, the kind of feedback that we've gotten, I think one statement that really summarizes how Zebrium's impacted our workflow was from one of our users. And they said, "Well, you know, until you provided us with this tool, log analysis was a very black and white affair, but now it's become really colorful." And I mean, if you think about it, log analysis is indeed black and white.
You're looking at it on a terminal screen where the background is black and the text is white, or you're looking at it as a text where the background is white and the text is black, but what they're really trying to say is there are hardly any visual cues that help you navigate these logs, which are so esoteric, so dense, et cetera. But what Zebrium does is it provides a lot of color and context to the whole process. So now you're able to quickly get to, you know, using their Word Cloud, using their interactive histogram, using the summaries of every incident, you're very quickly able to summarize what might be happening and what you need to look into. Like, what are the important aspects of this particular log bundle that might be relevant to you? So we've definitely seen that. A really great use case that kind of encapsulates all of this came very early on in our experiment. There was this support request that had been escalated to the business unit or the development team. And the TAC engineer had really, they had an intuition about what was going wrong, because of their experience, because of, you know, the symptoms that they'd seen. They kind of had an idea, but they weren't able to convince the development team, because they weren't able to find any evidence to back up what they thought was happening. And it was entirely happenstance that I happened to pick up that case and did an analysis using Zebrium. And then I sat down with the TAC engineer, and very quickly, within 15 minutes, we were able to get down to the exact sequence of events that highlighted what the customer thought was happening, evidence of what the, sorry, not the customer, what the TAC engineer thought was the root cause. And then we were able to share that evidence with our business unit and, you know, redirect their resources so that we could chase down what the problem was. And that really shows you how that color and context helps in log analysis. >> Interesting.
You know, we do a fair amount of work in theCUBE in the RPA space, the robotic process automation, and the narrative in the press when RPA first started taking off was, oh, it's, you know, machines replacing humans, or we're going to lose jobs. And what actually happened was people were just eliminating mundane tasks, and the employees were actually very happy about it. But my question to you is: was there ever reticence amongst your team? Like, oh, wow, I'm going to lose my job if the machine's going to replace me? Or have you found that people were excited about this, and what's been the reaction amongst the team? >> Well, I think, you know, every automation and AI project has that immediate gut reaction of you're automating away our jobs and so forth. And initially there's a little bit of reticence, but I mean, it's like you said, once you start using the tool, you realize that it's not your job that's getting automated away. It's just that your job's becoming a little easier to do, and it's faster and more efficient. And you're able to get more done in less time. That's really what we're trying to accomplish here. At the end of the day, Zebrium will identify these incidents. They'll do the correlation, et cetera. But if you don't understand what you're reading, then that information's useless to you. So you need the human, you need the network expert, to actually look at these incidents. But what we are able to skim away, or get rid of, is all the fat that's involved in our process, like without having to download the bundle, which, you know, when it's many gigabytes in size, and now we're working from home with the pandemic and everything, you're, you know, pulling massive amounts of logs from the corporate network onto your local device, that takes time, and then opening it up, loading it in a text editor, that takes time. All of these things we're trying to get rid of.
And instead we're trying to make it easier and quicker for you to find what you're looking for. So it's like you said, you take away the mundane, you take away the difficulties and the slog, but you don't really take away the work; the work still needs to be done. >> Yeah, great. Guys, thanks so much, appreciate you sharing your story. It's quite, quite fascinating, really. Thank you for coming on. >> Thanks for having us. >> You're very welcome. >> Excellent. >> Okay. In a moment, I'll be back to wrap up with some final thoughts. This is Dave Vellante and you're watching theCUBE. (upbeat music)

Published Date : May 25 2022


Manish Devgan, Hazelcast | KubeCon + CloudNativeCon Europe 2022


 

>> theCUBE presents KubeCon and CloudNativeCon Europe, 2022. Brought to you by Red Hat, the Cloud Native Computing Foundation, and its ecosystem partners. >> Welcome to Valencia, Spain, and KubeCon + CloudNativeCon Europe 2022. I'm Keith Townsend, along with Paul Gillon, senior editor, enterprise architecture, for SiliconANGLE. We're gonna talk to some amazing folks. Day two coverage of KubeCon + CloudNativeCon, Paul. We did the wrap up yesterday. A great back and forth with Enrico about yesterday's session. What are you looking for today? >> I'm looking to understand better how Kubernetes is being put into production, the types of applications that are being built on top of it. Yesterday we talked a lot about infrastructure; today I think we're gonna talk a little bit more about applications, including with our first guest. >> Yeah, speaking of our first guest: we have Manish Devgan, chief product officer at Hazelcast. Hazelcast has been on the program before, but this is your first time on theCUBE, correct? >> It is, Keith. Yeah. >> Well, welcome to theCUBE. So we're talking data, which is always a fascinating topic. Containers have been known for not being supportive of stateful applications; at least, per the traditional thought, you shouldn't hold stateful data in containers. Tell me about the relationship between Hazelcast and containers; we're at KubeCon. >> Yeah, so a little bit about Hazelcast. We are a real-time data platform, and we are not a database, but a data platform, because we basically allow data at rest as well as data in motion. So you can imagine that if you're writing an application, you can basically query and join data coming in as events, as well as data which might have been persisted. So you can do both stream processing as well as, you know, low-latency data access. And this platform, of course, is supported on all the clouds. 
And we kind of delegate the orchestration of this kind of scale-out system to Kubernetes. Um, and you know, that provides resiliency and many things which go along with that. >> So you say you're not a database platform. What are you used for, to manage the data? >> So we are memory-first. We started with low-latency applications, but then we realized that real time has really become a business term; it's more of a business SLA. The punctuated change which is happening in the market today is about real-time data access. I mean, there are real-time applications our customers are building around real-time offers, real-time threat detection. Just imagine, you know, one of our customers, like BNP Paribas: they basically originate a loan while the customer is banking. So you are in an ATM machine and you swipe your card and you are asking for, you know, taking 50 euros out. And at that point they can actually originate a custom loan offer based on your existing balance, your existing request, and your credit score, in that moment. So that's a value moment for them, and they actually saw loan origination go up 400% because of that, because nobody's gonna be thinking about a line of credit after they're done banking. So it's in that value moment, and our data platform allows you to have fast access to data and also process incoming streams. So not before they get stored, but as they're coming in. >> So if I'm a developer, and KubeCon is definitely a conference for developers, and I come to the booth and I hear <inaudible>, that's the end value. I hear what I can do with my application. I guess the question is, how do I get there? I mean, if it's not a database, how do I make a call from a container, from my microservice, to Hazelcast? Like, do I think of this as a CNI, or a CSI? 
How do I access it? >> Yeah. So, you know, our server is actually built in Java, so a lot of the applications which get written on top of the data platform are basically accessing it through Java APIs. Or if you are a .NET shop, you can actually use the .NET API. So we are basically an API-first platform, and SQL is basically the polyglot way of accessing data, both streaming data as well as stored data. So most of the application developers, a lot of it is done in microservices, and they're doing these fast gets for data. So they have a key, they want to get to a customer, they give a customer ID. And the beauty is that while they're processing the events, they can actually enrich them, because you need contextual information as well. So going back to the ATM example: that event happened, somebody swiped the card and asked for 50 euros, and now you want more information, like credit score information; all that needs to be combined in that value moment. >> So we allow you to do those joins, and, you know, the contextual information is very important. You see a lot of streaming platforms out there which just do streaming, but if you're an application developer, like you asked, you have to basically do a call out to a streaming platform to do streaming analytics, and then do another call to get the context of that: you know, what is the credit score for this customer? Whereas in our case, because the data platform supports both streaming as well as data at rest, you can do that in one call. And, you know, you don't want the operational complexity of standing up two different scale-out servers; that's humongous, right? I mean, you want to build your business application. >> So you are querying streaming data and data at rest, yes, in the same query? >> Yes, in the same query. And we are memory-first. So what happens is that we store a lot of the hot data in memory. 
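To make that "one call" pattern concrete, here's a rough sketch in Hazelcast-flavored SQL. The mapping names, connector options, and fields are illustrative assumptions on my part, not details from the interview:

```sql
-- Hypothetical sketch: join an incoming card-swipe stream with stored
-- customer context in a single query. All names/options are illustrative.

-- Map a Kafka topic of ATM card-swipe events as a streaming source.
CREATE MAPPING card_swipes (
    customer_id VARCHAR,
    amount      DECIMAL,
    swiped_at   TIMESTAMP
) TYPE Kafka
OPTIONS (
    'keyFormat' = 'varchar',
    'valueFormat' = 'json-flat',
    'bootstrap.servers' = 'kafka:9092'
);

-- Map an in-memory map holding customer balance and credit score.
CREATE MAPPING customers (
    __key        VARCHAR,
    balance      DECIMAL,
    credit_score INT
) TYPE IMap
OPTIONS ('keyFormat' = 'varchar', 'valueFormat' = 'json-flat');

-- One call: enrich each swipe with data at rest in the "value moment,"
-- instead of calling a streaming system and a database separately.
SELECT s.customer_id, s.amount, c.balance, c.credit_score
FROM card_swipes AS s
JOIN customers  AS c ON c.__key = s.customer_id
WHERE c.credit_score > 650;
```

The same query shape could feed a fraud check or a loan-offer service; the point is that the stream and the stored context are addressed through one SQL surface rather than two systems.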
So we have a scale-out, RAM-based server. So that's where you get the low latency from. In fact, last year we did a benchmark: we were able to process a billion events a second, with 99% of the latencies under 30 milliseconds. So that's the kind of processing and the kind of power, and the most important thing is determinism. I mean, if you look at what real time is, it's about this predictable latency at scale, because ultimately you're adhering to a business SLA; it's not about milliseconds or microseconds. It's what your business needs. If your business needs you to deny or approve a credit card transaction in 50 milliseconds, that's your business SLA, and you need that predictability for every transaction. >> So talk to us about how this is packaged and consumed, 'cause I'm hearing a bunch of server RAM; I'm hearing numbers that we're trying to abstract away from at this conference. We don't wanna see the underlay. We just want to use it. >> Yeah. So we kind of take away that complexity of managing this scale-out cluster, which actually utilizes RAM from each server. And then, you know, you can configure it so that the hot set of data is in RAM, but the data which is, you know, not so hot can actually go into a tiered storage model. So we are memory-first. But what you are doing is simple: it's an API. You basically do CRUD, right? You create records, you read them through SQL. So for you, it's kind of like how you access a database. And, you know, real time is also a journey. A lot of customers don't want to rip out their existing system and deploy another kind of scale-out platform, right? So we see a lot of these use cases where they have a database, and we can sit in between the database, the system of record, and the application. So we are kind of in between there. So that's the journey you can take to real time. >> How do containers and Kubernetes change the game for real-time analytics? >> Yeah. So Kubernetes does change it, because, first of all, we service mostly operational workloads. We have most of the big banks and credit card companies; financial services and retail are the two big sectors for us. And, you know, a lot of these operational workloads are moving to the cloud, and with the move to the cloud, they're actually taking their existing applications and moving to, you know, one of the providers, and Kubernetes orchestrates this scale-out platform and does the auto-scaling; that's where the benefit comes from. And it also gives them freedom of choice. Kubernetes is, you know, a standard which goes across cloud providers. So that gives them the benefit that they can actually take their application and, if they want, move it to a different cloud provider, because we take away the orchestration complexity in that abstraction layer. >> So what happens when I need to go really fast? I mean, I'm looking at bare metal, and I'm looking at really scaling a homogeneous application in a single data center or set of data centers. Is there a bare metal play here? >> Yes. Like, if you want microsecond latency, you know, we have customers who actually store two to four terabytes in RAM, and they can actually stand that up. Again, it depends on what kind of deployment you want. You can either scale up or scale out. Scaling up is expensive, you know, because those boxes are not cheap, but if you have a requirement like that, where there is a sub-millisecond or microsecond latency requirement, you could actually store the entire data set in RAM. I mean, a lot of the operational data sets are under four terabytes, so it's not uncommon that you could actually move the entire operational, transactional data set to pure RAM. But I think now we also see that for these operational workloads, there's a need for analytics to be done on top as well. >> I mean, going back to the example I gave you: this customer is not only doing stream processing, they're also inferencing a machine learning algorithm in that same life cycle. So they might have trained a machine learning algorithm on a data lake somewhere, but once they're ready, they're actually inferencing the ML model in our life cycle right there. So, you know, that really brings analytics and transactions together, because after all, transactions are where the real, you know, insights are. >> Yeah, I'm struggling a little bit with these two different use cases, where I have basically a transactional database or transactional data platform alongside an analytics platform. Those are two different things. I have, you know, spinning rust for one, and then I have memory and NVMe for another. And that requires tuning, requires DBAs; it requires a lot of overhead. There seems to be some type of secret sauce going on here. >> Yeah. So, I mean, we basically say that if you have a business case where you want to make a decision, the only chance to succeed is where you are not making a decision tomorrow based on today's data, right? I mean, the only way to act on that data is today. So "act" is the keyword here. We actually let you generate a real-time offer; we let you do credit card fraud detection in that moment. Analytics is about knowing; this is about acting on it, right? Most of our applications are mission-critical. They're acting in real time. I think when you talk about the data lakes, there's actually a real time there as well, but it's about knowing, and we believe that the operational side is where, you know, that value moment is. What good is it to know about something tomorrow, you know, if something wrong happened? I mean, yeah, so there's a latency squeeze there as well, but we are more on the transaction and operational side. >> I gotcha. Yeah. So help me understand, like, integrations. When I think of transactions, I'm thinking of SAP, Oracle, where the processing is done, or some legacy, or not legacy, new modern banking app. How does the data get from one platform, to Hazelcast, so I can make those >> Decisions? Yeah. So the streaming engine we have has a whole bunch of connectors to a lot of data sources. In fact, most of our use cases already have data sources underneath, their databases; there's Kafka connectors, you know, joining us, because if you look at it, events are comprised of transactions. So something a customer did, a credit card swipe, right. And events could also be machine or IoT. So it's really about connectivity and data ingestion before you can process that. So we have a whole suite of connectors to kind of bring data into our platform. >> We've been talking a lot these last couple of days about the edge, and about moving processing capability closer to the edge. How do you enable that? >> Yeah. So edge is actually very relevant, because what's happening is that, you know, if you look at an edge deployment use case, we have a use case where data is being pushed from these different edge devices to a cloud data warehouse, right? But just imagine that you want to be filtering data where it is being originated, and you wanna push only relevant data to maybe a central data lake, where you might want to, you know, train your machine learning models. So at the edge, we are actually able to process that data. Hazelcast will allow you to actually write a data pipeline and do stream processing, so that you might push only, you know, a part or a subset of the data, which complies with the rules. So I think with edge, you know, there's a lot of data being generated, and you don't want, like, garbage in and garbage out; there's filtration done at the edge, so that only the relevant data lands in a data lake or something like that. >> Well, Manish, we really appreciate you stopping by. Real-time data is an exciting area of coverage for theCUBE. From Valencia, Spain, I'm Keith Townsend, along with Paul Gillon, and you're watching theCUBE, the leader in high tech coverage.

Published Date : May 19 2022


Breaking Analysis: What You May Not Know About the Dell Snowflake Deal


 

>> From theCUBE Studios in Palo Alto and Boston, bringing you Data-Driven Insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> In the pre-cloud era, hardware companies would run benchmarks showing how database and/or application performance ran better on their systems relative to competitors or previous-generation boxes, and they would make a big deal out of it. And the independent software vendors, you know, they'd do a little golf clap, if you will, in the form of a joint press release. It became a game of leapfrog amongst hardware competitors. That was pretty commonplace over the years. The Dell Snowflake deal underscores that the value proposition between hardware companies and ISVs is changing, and has much more to do with distribution channels, volumes, and the amount of data that lives On-Prem in various storage platforms. Cloud-native ISVs like Snowflake are realizing that despite their Cloud-only dogma, they have to grit their teeth and deal with On-premises data, or risk getting shut out of evolving architectures. Hello, and welcome to this week's Wikibon Cube Insights, powered by ETR. In this Breaking Analysis, we unpack what little is known about the Snowflake announcement from Dell Technologies World and discuss the implications of a changing Cloud landscape. We'll also share some new data for Cloud and Database platforms from ETR that shows Snowflake has actually come back into Earth's orbit when it comes to spending momentum on its platform. Now, before we get into the news, I want you to listen to Frank Slootman's answer to my question as to whether or not Snowflake would ever architect the platform to run On-Prem, because it's doable technically. Here's what he said. Play the clip.
You know, we think that it'll, people will come to the Public Cloud a lot sooner than we will ever come to the Private Cloud. It's not that we can't run a private Cloud. It's just diminishes the potential and the value that we bring. >> So you may be asking yourselves how do you square that circle? Because basically the Dell Snowflake announcement is about bringing Snowflake to the private cloud, right? Or is it let's get into the news and we'll find out. Here's what we know at Dell Technologies World. One of the more buzzy announcements was the, by the way this was a very well attended vet event. I should say about I would say 8,000 people by my estimates. But anyway, one of the more buzzy announcements was Snowflake can now run analytics on Non-native Snowflake data that lives On-prem in a Dell object store Dell's ECS to start with. And eventually it's software defined object store. Here's Snowflake's clark, Snowflake's Clark Patterson describing how it works this past week on theCUBE. Play the clip. The way it works is I can now access Non-native Snowflake data using what materialized views, external tables How does that work? >> Some combination of the, all the above. So we've had in Snowflake, a capability called External Tables, which you refer to, it goes hand in hand with this notion of external stages. Basically there's a through the combination of those two capabilities, it's a metadata layer on data, wherever it resides. So customers have actually used this in Snowflake for data lake data outside of Snowflake in the Cloud, up until this point. So it's effectively an extension of that functionality into the Dell On-Premises world, so that we can tap into those things. So we use the external stages to expose all the metadata about what's in the Dell environment. And then we build external tables in Snowflake. So that data looks like it is in Snowflake. 
And then the experience for the analyst, or whomever it is, is exactly as though that data lives in the Snowflake world. >> So as Clark explained, this capability of External Tables has been around in the Cloud for a while, mainly to suck data out of Cloud data lakes. Snowflake External Tables use file-level metadata, for instance the name of the file and the versioning, so that data in a stage can be queried. A stage is just an external location outside of Snowflake. It could be an S3 bucket or an Azure Blob, and soon it will be a Dell object store. In using this feature, the Dell data looks like it lives inside of Snowflake, and Clark is essentially correct to say that to an analyst it looks exactly like the data is in Snowflake. But not exactly: the data's read-only, which means you can't do what are called DML operations. DML stands for Data Manipulation Language, and it allows for things like inserting data into tables or deleting and modifying existing data. The data can be queried, however; but the performance of those queries to External Tables will almost certainly be slower. Now, users can build things like materialized views, which are going to speed things up a bit, but at the end of the day, it's going to run faster in the Cloud. And you can be almost certain that's where Snowflake wants it to run. But some organizations can't or won't move data into the Cloud for a variety of reasons: data sovereignty, compliance, security policies, culture, you know, whatever. So data can remain in place On-prem, or it can be moved into the Public Cloud with this new announcement. Now, the compute today presumably is going to be done in the Public Cloud; I don't know where else it's going to be done. They really didn't talk about the compute side of things. Remember, one of Snowflake's early innovations was to separate compute from storage, and what that gave them is you could more efficiently scale, with unlimited resources when you needed them.
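To ground the stage and external-table mechanics described above, here's a hedged sketch of the Snowflake SQL involved. The URL, endpoint, credentials, and columns are placeholders of my own; the exact options for a Dell object store weren't disclosed in the announcement, so an S3-compatible endpoint is assumed as a stand-in:

```sql
-- Hypothetical sketch: an external stage points at object storage outside
-- Snowflake (an S3-compatible endpoint is assumed here as a stand-in).
CREATE STAGE onprem_stage
  URL = 's3compat://analytics-bucket/sales/'
  ENDPOINT = 'objectstore.example.internal'
  CREDENTIALS = (AWS_KEY_ID = '<key>' AWS_SECRET_KEY = '<secret>');

-- An external table layers file-level metadata over the staged files,
-- so they can be queried (read-only; no DML) as if inside Snowflake.
CREATE EXTERNAL TABLE sales_ext (
  order_id NUMBER AS (VALUE:order_id::NUMBER),
  amount   NUMBER AS (VALUE:amount::NUMBER)
)
LOCATION = @onprem_stage
FILE_FORMAT = (TYPE = PARQUET);

-- A materialized view over the external table can claw back some of the
-- query performance lost relative to native Snowflake data.
CREATE MATERIALIZED VIEW sales_by_order AS
  SELECT order_id, SUM(amount) AS total
  FROM sales_ext
  GROUP BY order_id;
```

Note that the external table is queryable but not writable, which is exactly the DML limitation discussed above.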
And you could shut off the compute when you don't need it. If you needed more storage, you didn't have to buy more compute, and vice versa. So everybody in the industry has copied that, including AWS with Redshift, although, as we've reported, not as elegantly as Snowflake did. Redshift's more of a storage-tiering solution, which minimizes the compute required, but you can't really shut it off. And there are companies like Vertica, with Eon Mode, that have enabled this capability to be done On-prem. But of course, in that instance you don't have unlimited elastic compute scale On-prem, although with solutions like Dell Apex and HPE GreenLake, you can start to simulate that Cloud elasticity On-prem. I mean, it's not unlimited, but it sort of gets you there. According to a Dell-Snowflake joint statement, the companies, quote, will pursue product integrations and joint go-to-market efforts in the second half of 2022. So that's a little vague and kind of benign. It's not really clear when this is going to be available based on that statement from the two firms, but, you know, we're left wondering: will Dell develop an On-Prem compute capability and enable queries to run locally, maybe as part of an extended Apex offering? I mean, we don't know, really; not sure there's even a market for that. But it's probably a good bet that, again, Snowflake wants that data to land in the Snowflake Data Cloud. Kind of makes you wonder how this deal came about. You heard Slootman earlier: Snowflake has always been pretty dogmatic about getting data into its native Snowflake format to enable the best performance, as we talked about, but also data sharing and governance. But you could imagine data architects building out their data mesh, we've reported on this quite extensively, and their data fabric and those visions around that.
And they're probably telling Snowflake: hey, if you want to be a strategic partner of ours, you're going to have to be more inclusive of the data that, for whatever reason, we're not putting in your Cloud. So Snowflake had to kind of hold its nose and capitulate. Now, the good news is it further opens up Snowflake's TAM, the total available market. It's obviously good marketing posture, and ultimately it provides an on-ramp to the Cloud. We're going to come back to that shortly, but let's look a little deeper into what's happening with data platforms, and to do that we'll bring in some ETR data. Now, let me just say, as companies like Dell, IBM, Cisco, HPE, Lenovo, Pure and others build out their hybrid Clouds, the cold hard fact is: not only do they have to replicate the Cloud operating model, you will hear them talk about that a lot, and that's critical from a user experience standpoint, but in order to gain that flywheel momentum, they need to build a robust ecosystem that goes beyond their proprietary portfolios. And, you know, honestly, most companies are really not even in the first inning. And for the likes of Snowflake to sort of flip this, they've had to recognize that not everything is moving into the Cloud. Now, let's bring up the next slide. One of the big areas of discussion at Dell Tech World was Apex. That's essentially Dell's nascent as-a-service offering. Apex is infrastructure as a service, Cloud On-prem, and it obviously has the vision of connecting to the Cloud, across Clouds, and out to the Edge. And it's no secret that database is one of the most important ingredients of infrastructure as a service generally, and of Cloud infrastructure specifically. So this chart here shows the ETR data for data platforms inside of Dell accounts. The beauty of the ETR platform is you can cut the data a million different ways. So we cut it. We said, okay, give us the data platforms inside Dell accounts; how are they performing?
Now, this is a two-dimensional graphic. You've got net score, or spending momentum, on the vertical axis, and, on the horizontal axis, what ETR now calls Overlap, formerly called Market Share, which is a measure of pervasiveness in the survey. That red dotted line at 40% represents highly elevated spending on the Y axis. The table insert shows the raw data for how the dots are positioned. Now, the first callout here is Snowflake. According to ETR, quote, after 13 straight surveys of astounding net scores, Snowflake has finally broken the trend, with its net score dropping below the 70% mark among all respondents. Now, as you know, net score is measured by asking customers: are you adding the platform new? That's the lime green in the bar pointing from Snowflake in the graph. Are you increasing spend by 6% or more? That's the forest green. Is spending flat? That's the gray. Is your spend decreasing by 6% or worse? That's the pinkish. Or are you decommissioning the platform? That's the bright red, which is essentially zero for Snowflake. Subtract the reds from the greens and you get a net score. Now, what's somewhat interesting is that Snowflake's net score overall in the survey is 68, which is still huge, just under 70%, but its net score inside the Dell account base drops to the low sixties. Nonetheless, this chart tells you why: Snowflake's highly elevated spending momentum, combined with an increasing presence in the market over the past two years, makes it a perfect initial data platform partner for Dell. Now, in the Ford-versus-Ferrari dynamic that's going on between the likes of Dell's Apex and HPE GreenLake, database deals are going to become increasingly important, beyond what we're seeing with this recent Snowflake deal. Notice, by the way, how HPE is positioned on this graph with its acquisition of MapR, which is now part of HPE Ezmeral. 
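For reference, the net score methodology described above reduces to simple arithmetic over the survey percentages. The sample numbers below are illustrative, not ETR's actual breakdown:

$$\text{Net Score} = (\%\ \text{adding new} + \%\ \text{spending more}) - (\%\ \text{spending less} + \%\ \text{decommissioning})$$

So a platform with, say, 30% new adoptions, 45% increasing spend, 7% decreasing, and 0% decommissioning would land at $(30 + 45) - (7 + 0) = 68$, right around Snowflake's overall figure; flat spenders don't move the number either way.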
But if these companies want to be taken seriously as Cloud players, they need to further expand their database affinity to compete, ideally spinning up databases as part of their super Clouds (we'll come back to that) that span multiple Clouds and include Edge data platforms. We're a long ways off from that. But look, there's Mongo, there's Couchbase, MariaDB, Cloudera, Redis. All of those should be on the short list in my view, and why not Microsoft? And what about Oracle? Look, that's to be continued, maybe as a future topic in a Breaking Analysis, but I'll leave you with this. There are a lot of people like John Furrier who believe that Dell is playing with fire in the Snowflake deal because he sees it as a one way ticket to the Cloud. He calls it a one way door sometimes. Listen to what he said this past week. >> I would say that that's a dangerous game because we've seen that movie before, VMware and AWS. >> Yeah, but we've talked about this, don't you think that was the right move for VMware? >> At the time, but if you don't nurture the relationship, AWS will take all those customers ultimately from VMware. >> Okay, so what does the data say about what John just said? How is VMware actually doing in Cloud after its early missteps and then its subsequent embracing of AWS and other Clouds? Here's that same XY graphic, spending momentum on the Y and pervasiveness on the X, and the same table insert that plots the dots and the breakdown of net score granularity. You see that at the bottom of the chart in those colors. So as usual, you see Azure and AWS up and to the right, with Google well behind in a distant third, but still in the mix. So very impressive for Microsoft and AWS to have both that market presence and such elevated spending momentum. But the story here in context is that the VMware Cloud on AWS and VMware's On-Prem Cloud, like VMware Cloud Foundation, VCF, are doing pretty well in the market.
Look at HPE, gaining some traction in Cloud. And remember, you may not think HPE and Dell and VCF are true Cloud, but these are customers answering the survey, so their perspective matters more than the purist's view. And the bad news is the Dell Cloud is not setting the world on fire from a momentum standpoint on the vertical axis, but it's above the line of zero, and compared to Dell's overall net score of 20 you can see it's got some work to do. Okay, so overall Dell's got a pretty solid net score, you know, positive 20, but as I say, their Cloud perception needs to improve. Look, Apex has to be the Dell Cloud brand, not Dell reselling VMware. And that requires more maturity of Apex, its feature sets, its selling partners, its compensation models and its ecosystem. And I think Dell clearly understands that, I think they're pretty open about that. Now this includes partners that go beyond being just sellers; it has to include more tech offerings in the marketplace. And actually they've got to build out a marketplace-like Cloud platform, so they've got a lot of work to do there. And look, you've got Oracle coming up. I mean, they're actually just below the magic 40% line, which is pretty impressive. And we've been telling you for years, you can hate Oracle all you want. You can hate its price, its closed system, all of that, its red stack, sure. You can say it's legacy. You can say it's old and outdated, blah, blah, blah. You can say Oracle is irrelevant, in trouble. You are dead wrong. When it comes to mission critical workloads, Oracle is the king of the hill. They're a founder led company that knows exactly what it's doing, and they're showing Cloud momentum. Okay, the last point is that while Microsoft, AWS and Google have major presence as shown on the X axis, VMware and Oracle now have more than a hundred citations in the survey. You can see that on the insert in the right-hand, rightmost column.
And IBM had better keep the momentum from last quarter going, or it won't be long before they get passed by Dell and HP in Cloud. So look, John might be right. And I would think Snowflake quietly agrees that this Dell deal is all about access to Dell's customers and their data, so they can Hoover it into the Snowflake Data Cloud, but the data right now, anyway, doesn't suggest that's happening with VMware. Oh, by the way, we're keeping a close eye on NetApp, who last September inked a deal similar to VMware Cloud on AWS, to see how that fares. Okay, let's wrap with some closing thoughts on what this deal means. We learned a lot from the Cloud generally and AWS specifically: two pizza teams, working backwards, customer obsession. We talk about flywheel all the time, and we've been talking today about marketplaces. These have all become common parlance and often fundamental narratives within strategic plans, investor decks and customer presentations. Cloud ecosystems are different. They take both competition and partnerships to new heights. You know, when I look at as a service offerings like Apex, GreenLake and similar services, and I hear the vendor noise that's being made around them, I kind of shake my head and ask, you know, which movie were these companies watching last decade? I really wish we would've seen these initiatives start to roll out in 2015, three years before AWS announced Outposts, not three years after. But hey, the good news is that not only was Outposts a wake up call for the On-Prem crowd, but it's showing how difficult it is to build a platform like Outposts and bring it On-Premises. I mean, Outposts isn't currently even a rounding error in the marketplace. It really doesn't do much in terms of database support and support of other services. And, you know, it's unclear where that is going. And I don't think it has much momentum. And so the Hybrid Cloud vendors, they've had time to figure it out.
But now it's game on. Companies like Dell are promising a consistent experience between On-Prem into the Cloud, across Clouds and out to the Edge. They call it MultiCloud, which, by the way, in my view has really been multi-vendor. Chuck Whitten, who's the new co-COO of Dell, called it Multi-Cloud by default. (laughing) That's really, I think, an accurate description of it. I call this new world Super Cloud. To me, it's different than MultiCloud. It's a layer that runs on top of hyperscale infrastructure and kind of hides the underlying complexity of the Cloud, its APIs, its primitives. And it stretches not only across Clouds but out to the Edge. That's a big vision, and that's going to require some seriously intense engineering to build out. It's also going to require partnerships that go beyond the portfolios of companies like Dell, beyond their own proprietary stacks, if you will. It's going to have to replicate the Cloud Operating Model, and to do that, you're going to need more and more deals like Snowflake, and even deeper than Snowflake, not just in database. Sure, you'll need to have a catalog of databases that run in your On-Prem and Hybrid and Super Cloud, but also other services that customers can tap. I mean, can you imagine a day when Dell offers and embraces a directly competitive service inside of Apex? I have trouble envisioning that, you know, not with their historical posture. You think about companies like, you know, Nutanix, you know, or Cisco, where those relationships cooled quite quickly. But you know, look, think about it. That's what AWS does. It offers, for instance, Redshift and Snowflake side by side happily, and the Redshift guys, they probably hate Snowflake. I wouldn't blame them, but the EC2 folks, they love them. And Adam Selipsky understands that ISVs like Snowflake are a key part of the Cloud ecosystem.
Again, I have a hard time envisioning that occurring with Dell or even HPE, you know, maybe less so with HPE. But what does this imply? That the Edge will allow companies like Dell to reach around the Cloud and somehow create a new type of model that begrudgingly accommodates the Public Cloud but drafts off the new momentum of the Edge, which right now to these companies is kind of mostly telco and retail? It's hard to see that happening. I think it's got to evolve in a more comprehensive and inclusive fashion. What's much more likely is companies like Dell are going to substantially replicate that Cloud Operating Model for the pieces that they own, pieces that they control, which admittedly are big pieces of the market. But unless they're able to really tap that ecosystem magic, they're not going to be able to grow much beyond their existing install bases. You take that lime green we showed you earlier, that new adoption metric from ETR, as an example: by my estimates, AWS and Azure are capturing new accounts at a rate between three to five times faster than Dell and HPE. And in the more mature US market it's probably more like 10X, and a major reason is because of the Cloud's robust ecosystem and the optionality and simplicity of transaction that it is bringing to customers. Now, Dell for its part is a hundred billion dollar revenue company, and it has the capability to drive that kind of dynamic, if it can pivot its partner ecosystem mindset from kind of resellers to Cloud services and technology optionality. Okay, that's it for now. Thanks to my colleague Stephanie Chan, who helped research topics for Breaking Analysis. Alex Myerson is on the production team. Kristen Martin, Cheryl Knight and Rob Hof on editorial helped get the word out, and thanks to Jordan Anderson for the new Breaking Analysis branding and graphics package. Remember, these episodes are all available as podcasts wherever you listen.
All you do is search Breaking Analysis podcasts. You can check out the ETR website at etr.ai. We publish a full report every week on wikibon.com and siliconangle.com. Want to get in touch? Email dave.vellante@siliconangle.com. You can DM me @dvellante. You can make a comment on our LinkedIn posts. This is Dave Vellante for theCUBE Insights powered by ETR. Have a great week, stay safe, be well. And we'll see you next time. (upbeat music)

Published Date : May 7 2022

Jon Dahl, Mux | AWS Startup Showcase S2 E2


 

(upbeat music) >> Welcome, everyone, to theCUBE's presentation of the AWS Startup Showcase. This episode two of season two is called "Data as Code," the ongoing series covering exciting new startups in the AWS ecosystem. I'm John Furrier, your host of theCUBE. Today, we're excited to be joined by Jon Dahl, who is the co-founder and CEO of MUX, a hot new startup building cloud video for developers, video with data. Jon, great to see you. We did an interview on theCUBE Conversation, went into big detail on the awesomeness of your company and the trend that you're on. Welcome back. >> Thank you, glad to be here. >> So, video is everywhere, and you hear all these kinds of terms in the industry, like "pivot to video," but now more than ever, video is everywhere and people are building with it, and it's becoming part of the developer experience in applications. So people have to stand up video in their code fast, and data is code, video is data. So you guys are specializing in this. Take us through that dynamic. >> Yeah, so video clearly is a growing part of how people are building applications. We see a lot of trends of categories that did not involve video in the past making a major move towards video. Think of what Peloton did five years ago to the world of fitness; that was not really a big category, and now video fitness is a huge thing. Video in education, video in business settings, video in a lot of places. I think Marc Andreessen famously said "software is eating the world" as a pretty, pretty good indicator of what the internet is actually doing to the economy. I think there's a lot of ways in which video right now is eating software. So categories that were not video first are becoming video first. And that's what we help with. >> It's not obvious to most software developers when they think about video; the video industry has its own industry shows, NAB, others.
People know, the video folks know what's going on in video, but when you start to bring it mainstream, it becomes an expectation in the apps. And it's not that easy; even just provisioning video is hard for a developer, 'cause you've got to know the full, I guess, stack of video. That's like low level, and then kind of just basic high level, just play something. So, in between, this is a media stack kind of dynamic. Can you talk about how hard it is to build video for developers? How is it going to become easier? >> Yeah, I mean, I've lived this story for too long, maybe 13 years now, since I first built my first video stack. And, you know, I'll sometimes say I think it's kind of a miracle every time a video plays on the internet, because the internet is not a medium designed for video. It's been hijacked by video; video is 70% of internet traffic today, in an unreliable, sort of untrusted network space, which is totally different than how television used to work, or cable, or things like that. So yeah, video is hard because there are so many problems from top to bottom that need to be solved to make video work. You have to worry about video compression encoding, which is a complicated topic in itself. You have to worry about delivering video around the world at scale, delivering it at low cost, at low latency, with good performance. You have to worry about devices, and how every device, Android, iOS, web, TVs, handles video differently, and so there's a lot of work there. And at the end of the day, these are kind of unofficial standards that everyone's using. So one of the miracles is, like, if you want to watch a video, somehow you have to get Apple and Google to agree on things, which is not always easy. And so there's just so many layers of complexity behind it. I think one way to think about it is, if you want to put an image online, you just put an image online.
And if you want to put video online, you build complex software, and that's the exact problem that MUX was started to help solve. >> It's interesting, you guys are almost creating a whole new category around video infrastructure. And as you look at it, you mentioned stack, video stack. I'm looking at a market where the notion of a media stack is developing, and you're seeing these verticals having similar dynamics with cloud. If you go back to the early days of cloud computing, what was the developer experience or entrepreneurial experience? You had to actually do a lot of stuff before you could even do anything: provision a server. And this has all kind of been covered in great detail in the glory of Agile and whatnot. It was expensive, and you had to actually engineer before you could even stand up any code. Now you've got video, and the same thing's happening. So the developers have two choices: go do a bunch of complex stuff, building their own infrastructure, which is like building a data center, or lean in on MUX and say, "Hey, thank you for doing all those years of experience building out the stacks to take that hard part away," and use the APIs that they have. This is a developer focused problem that you guys are solving. >> Yeah, that's right. My last company was a company called Zencoder, which was an API to video encoding. So it was kind of an API to a small part of what MUX does today, just one of those problems. And I think the thing that we got right at Zencoder, that we're doing again here at MUX, was building for developers first. So our number one persona is a software developer. Not necessarily a video expert; we think any developer should be able to build with video. It shouldn't be like, yeah, you've got to go be a specialist to use this technology, because it should just become part of the internet. Video should just be something that any developer can work with.
So yeah, we build for developers first, which means we spend a lot of time thinking about API design, we spend a lot of time thinking about documentation, transparent pricing, the right features, great support, and all those kinds of things that tend to be characteristics of good developer companies. >> Tell me about the pipelining of the products. I'm a developer, I work for a company, my boss is putting pressure on me. We need video, we have all this library, it's all stacking up. We hired some people, they left. Where's the video? We've stored it somewhere. I mean, it's a nightmare, right? So I'm like, okay, I'm cloud native, I've got an API. I need to get my product to market fast, 'cause that is what Agile developers want. So how do you describe that acceleration for time to market? You mentioned you guys are API first, video first. How do these customers get their product into the market as fast as possible? >> Yeah, well, I mean, the first thing we do is we put what we think is probably, on average, three to four months of hard engineering work behind a single API call. If you want to build a video platform yourself, we tell our customers, "Hey, you can do that. You probably need a team, you probably need video experts on your team, so hire them or train them." And then it takes several months just to kind of get video flowing. One API call at MUX gives you on-demand video or live video that works at scale, works around the world, with good performance, good reliability, a rich feature set. So maybe just a couple of specific examples: we worked with Robinhood a few years ago to bring video into their newsfeed, which was hugely successful for them. And they went from talking to us for the first time to a big launch in, I think it was three months, but the actual code time there was really short. I want to say they had a proof of concept up and running in a couple of days, and then the full launch in three months.
Another customer of ours, Bandcamp, I think switched from a legacy provider to MUX in two weeks. So one of the big advantages of going a little bit higher in the abstraction layer than just building it yourself is that time to market. >> Talk about this notion of video pipeline, 'cause I know I've heard people talk about it: "Hey, I just want to get my product out there. I don't want to get stuck in the weeds on video pipeline." What does that mean for folks that aren't understanding the nuances of video? >> Yeah, I mean, it's all the steps that it takes to publish video. So from ingesting the video, if it's live video, making sure that you have secure, reliable ingest of that live feed, potentially around the world, to the transcoding, which we talked a little bit about, but which, you know, on its own is a massively complicated problem. And doing that well is hard. Part of the reason it's hard is you really have to know where you're publishing to, and you might want to transcode video differently for different devices, for different types of content. You know, the pipeline typically would also include all of the workflow items you want to do with the video. You want to thumbnail a video, you want to create clips of the video, maybe you want to restream the video to Facebook or Twitter or a social platform. You want to archive the video, you want it to be available for download after an event. Or maybe it's just a VOD upload, it's not live in the first place. You have all those things, and you might want to do simulated live with the video. You might want to actually record something and then play it back as a live stream. So, the pipeline typically refers to everything from the ingest of the video to the time that the bits are delivered to a device.
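The publish pipeline Jon walks through, ingest, transcode, workflow steps, delivery, can be sketched as a simple data structure. Everything below (the class, step names, rendition labels) is invented for illustration and is not MUX's actual API:

```python
from dataclasses import dataclass, field

# Illustrative sketch of the publish pipeline stages described above:
# ingest -> transcode -> workflow (thumbnails, clips, restream, archive) -> deliver.
@dataclass
class VideoPipeline:
    source: str                      # upload URL or live ingest endpoint
    live: bool = False
    renditions: list = field(default_factory=lambda: ["1080p", "720p", "480p"])
    workflow: list = field(default_factory=list)

    def publish(self):
        # Flatten the stages into an ordered list of steps to run.
        steps = ["ingest", "transcode:" + ",".join(self.renditions)]
        steps += ["workflow:" + w for w in self.workflow]
        steps.append("deliver")      # bits reach the device: end of the pipeline
        return steps

p = VideoPipeline("rtmp://ingest.example.com/stream", live=True,
                  workflow=["thumbnail", "clip", "restream", "archive"])
print(p.publish())
```

Even this toy version shows why "one API call" is valuable: every step here hides months of engineering when built from scratch.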
>> You know, I hear a lot of people talking about video these days, whether it's events, training, or just a peer to peer experience. Video is powerful, but customers want to own their own platform, right? They want to have the infrastructure as a service. They kind of want platform as a service, this is cloud talk now, but they want to have their own capability to build it out. This allows them to get what they want. And so you see this, like, is it SaaS? Is it platform? Do people want customization? So does the kind of general purpose video solution really exist or not? I mean, 'cause this is the question. Can I just buy software that works, or is it always going to be customized? How do you see that? Because this becomes a huge discussion point. Is it a SaaS product, or is someone going to make a SaaS product? >> Yeah, so I think one of the most important elements of designing any software, but especially when you get into infrastructure, is choosing an abstraction level. So if you think of computing, you can go all the way down to building a data center, you can go all the way down to getting a colo and racking a server, like maybe some of us used to do, who are older than others. And that's one way to run a server. On the other extreme, just think of the early days of cloud computing: you had App Engine, which was a really fantastic, really incredible product. It was one push deploy of, I think, Python code, if I remember correctly, and everything just worked. But right in the middle of those, you had EC2, which is basically an API to a server. And it turns out that that abstraction level, not colo, not the full App Engine kind of platform, but the API to a virtual server, was the right abstraction level for maybe the last 15 years. Maybe now some of the higher level application platforms are doing really well, and maybe the needs will shift. But I think that's a little bit of how we think about video. What developers want is an API to video.
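A rough sketch of what a single call to such an "API to video" might look like. The endpoint path, JSON fields, and token below are all invented for illustration and come from no real provider's documentation; the point is only the shape, one request in, a playable asset out:

```python
import json
import urllib.request

# Hypothetical "API to video" call: describe the video you want, and the
# service handles encoding, storage, and delivery behind the scenes.
def create_asset_request(api_base, token, source_url):
    # Build (but don't send) the HTTP request, so its shape is inspectable
    # without a network connection or a real account.
    body = {"input": source_url, "playback_policy": "public"}
    return urllib.request.Request(
        f"{api_base}/assets",
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = create_asset_request("https://api.example.com/video/v1", "DEMO_TOKEN",
                           "https://example.com/movie.mp4")
print(req.get_method(), req.full_url)
```

Contrast this with the "building blocks" abstraction level, where the developer would be making separate calls for transcoding, storage, and edge caching and wiring them together.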
They don't want an API to the building blocks of video: an API to transcoding, to video storage, to edge caching. They want an API to video. On the other extreme, they don't want a big application that's a drop-in, white label video in a box, like a Shopify kind of thing. Shopify is great, but developers don't want to build on top of Shopify. In the payments world, developers want Stripe. And that abstraction level, the API to the actual thing you're getting, tends to be the abstraction level that developers want to build on. And the reason for that is it's the most productive layer to build on. You get maximum flexibility and also maximum velocity when you have that API directly to a function like video. So, we like to tell our customers: you own your video when you build on top of MUX. You have full control over everything, how it's stored, when it's stored, where it goes, how it's published. We handle all of the hard technology, and we give our customers all of the flexibility in terms of designing their products. >> I want to get back to some use cases, but you brought that up, so I might as well just jump to my next point. I'd like you to come back and circle back on some references, 'cause I know you have some. You said building on infrastructure that you own; this is a fundamental cloud concept. You mentioned API to a server; for the nerds out there, they know that's cool, but for the people who aren't super nerdy, that means you've basically got an interface into a server behind the scenes. You're doing the same for video. So, that is a big thing around building services. So what wide range of services can we expect beyond MUX? If I'm going to have an API to video, what could I possibly do? >> What sort of experience could you build? >> Yes, I've got a team of developers saying I'm all in on an API to video, I don't want to do all that plumbing, I want to go straight there, I want to build experiences, video experiences, in my app.
>> Yeah, I mean, I think one way to think about it is: what's the range of key use cases that people have for video? We tend to think about six at MUX. One is kind of the places where the content is the product. So one of the things you can do with video is create great video content. Think of online courses or fitness or entertainment or news or things like that. That's kind of the first thing everyone thinks of: when you think video, you think Netflix, and that's great. But we see a lot of really interesting uses of video in the world of social media. So customers of ours like VSCO, which is an incredible photo sharing application, really for photographers who really care about the craft. And they were able to bring video in and bring that same kind of VSCO experience to video using MUX. We think about B2B tools and video. When you think about it, all video is, is a high bandwidth way of communicating. And so customers like HubSpot use video for their marketing platform, for business collaboration; you'll see a lot of growth of video in terms of helping businesses engage their customers or engage with their employees. We see live events, which obviously have been a massive category over the last few years. You know, we were all forced into a world where we had to do live events two years ago, but I think now we're reemerging into a world where the online part of a conference will be just as important as the in-person component of a conference. So that's another big use case we see. >> Well, full disclosure, if you're watching this live right now, it's being powered by MUX. So shout out, we use MUX on theCUBE platform that you're experiencing, actually in real time, 'cause this is one application, and there's many more. So video as code is data as code, that's the theme, and that's going to bring up data ops. Video also is code because (laughs) it's just like you said, it's just communicating, but it gets converted to data.
So data ops, video ops could be its own new category. What's your reaction to that? >> Yeah, I mean, I have a couple of thoughts on that. The first thought is, because video is the way that companies interact with customers or users, it's really important to have good monitoring and analytics of your video. And so the first product we ever built was actually a product called MUX video, sorry, MUX data, which is the best way to monitor a video platform at scale. So we work with a lot of the big broadcasters, we work with CBS and Fox Sports and Discovery, we work with big tech companies like Reddit and Vimeo to help them monitor their video. And you just get a huge amount of insight when you look at robust analytics about video delivery that you can use to optimize performance, to make sure that streaming works well globally, especially in hard to reach places or on every device. We actually built the MUX data platform first because when we started MUX, we spent time with some of our friends at companies like YouTube and Netflix, and got to know how they use data to power their video platforms. And they do really sophisticated things with data to ensure that their video streams well, and we wanted to build the product that would help everyone else do that. So, that's one use. I think the other obvious use is just really understanding what people are doing with their video: who's watching what, what's engaging, those kinds of things.
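The kind of delivery monitoring described here boils down to aggregating playback events into quality-of-experience metrics. A minimal sketch, where the event names and the rebuffering metric are invented for illustration rather than taken from MUX data's actual schema:

```python
# Hypothetical playback event stream for one viewing session; "t" is the
# wall-clock time in seconds since playback started.
events = [
    {"type": "play", "t": 0.0},
    {"type": "rebuffer_start", "t": 12.0},
    {"type": "rebuffer_end", "t": 13.5},
    {"type": "ended", "t": 120.0},
]

def rebuffer_ratio(events):
    """Fraction of the session spent stalled (a common streaming QoE metric)."""
    stall, start = 0.0, None
    for e in events:
        if e["type"] == "rebuffer_start":
            start = e["t"]
        elif e["type"] == "rebuffer_end" and start is not None:
            stall += e["t"] - start
            start = None
    total = events[-1]["t"] - events[0]["t"]
    return stall / total

print(round(rebuffer_ratio(events), 4))  # 0.0125
```

Rolled up across millions of sessions and sliced by device, region, or CDN, a metric like this is what lets a platform spot exactly where streaming "works well globally" and where it doesn't.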
>> Yeah, I mean, I think that the most gratifying thing about being a startup founder is when your customers like what you're doing. And so we get a lot of this, but it's always, we always pay attention to what customers say. But yeah, people, the number one thing developers say when they think about MUX is that the developer experience is great. I think when they say that, what they mean is two things, first is it's easy to work with, which helps them move faster, software velocity is so important. Every company in the world is investing and wants to move quickly and to build quickly. And so if you can help a team speed up, that's massively valuable. The second thing I think when people like our developer experience is, you know, in a lot of ways that think that we get out of the way and we let them do what they want to do. So well, designed APIs are a key part of that, coming back to abstraction, making sure that you're not forcing customers into decisions that they actually want to make themselves. Like, if our video player only had one design, that that would not be, that would not work for most developers, 'cause developers want to bring their own design and style and workflow and feel to their video. And so, yeah, so I think the way we do that is just think comprehensively about how APIs are designed, think about the workflows that users are trying to accomplish with video, and make sure that we have the right APIs, make sure they're the right information, we have the right webhooks, we have the right SDKs, all of those things in place so that they can build what they want. >> We were just having a conversation on theCUBE, Dave Vellante and I, and our team, and I'd love to get you a reaction to this. And it's more and more, a riff real quick. 
We're seeing a trend where video as code, data as code, media stack, where you're starting to see the emergence of the media developer, where the application of media looks a lot like kind of software developer, where the app, media as an app. It could be a chat, it could be a peer to peer video, it could be part of an event platform, but with all the recent advances, in UX designers, coders, the front end looks like an emergence of these creators that are essentially media developers for all intent and purpose, they're coding media. What's your reaction to that? How do you see that evolving? >> I think the. >> Or do you agree with it? >> It's okay. >> Yeah, yeah. >> Well, I think a couple things. I think one thing, I think this goes along through saying, but maybe it's disagreement, is that we don't think you should have to be an expert at video or at media to create and produce or create and publish good video, good audio, good images, those kind of things. And so, you know, I think if you look at software overall, I think of 10 years ago, the kind of DevOps movement, where there was kind of a movement away from specialization in software where the same software developer could build and deploy the same software developer maybe could do front end and back end. And we want to bring that to video as well. So you don't have to be a specialist to do it. On the other hand, I do think that investments and tooling, all the way from video creation, which is not our world, but there's a lot of amazing companies out there that are making it easier to produce video, to shoot video, to edit, a lot of interesting innovations there all the way to what we do, which is helping people stream and publish video and video experiences. You know, I think another way about it is, that tool set and companies doing that let anyone be a media developer, which I think is important. 
>> It's like DevOps turning into low-code, no-code, eventually it's just composability, almost like, you know, "Hey Siri, give me some video." That kind of thing. Final question for you while I've got you here. At the end of the day, a lot of people face the decision between build versus buy: "I got to get a developer. Why not just roll my own?" You mentioned data center, "I want to build a data center." So why MUX versus do it yourself? >> Yeah, I mean, part of the reason we started this company is we have a pretty strong opinion on this. When we started MUX five, six years ago, if you were a developer and you wanted to accept credit cards, if you wanted to bring payment processing into your application, you didn't go build a payment gateway. You probably just used Stripe. And if you wanted to send text messages, you didn't build your own SMS gateway, you probably used Twilio. But if you were a developer and you wanted to stream video, you built your own video gateway, you built your own video application, which was really complex. Like we talked about, you know, probably three, four months of work to get something basic up and running, probably not live video, probably only on-demand video at that point. And you get no benefit by doing it yourself. You're no better than anyone else because you rolled your own video stack. What you get is risk that you might not do a good job, maybe you do worse than your competitors, and you also get distraction, where you take 10 engineers and 10 sprints and you apply them to a problem that doesn't actually give you differentiated value for your users. So we started MUX so that people would not have to do that. 
It's fine if you want to build your own video platform once you get to a certain scale, if you can afford a dozen engineers for a VOD platform and you have some really massively differentiated use case. And live, I don't know, I don't have a rule of thumb, but live video is maybe five times harder than on-demand video to work with. But, you know, in general, there's such a shortage of software engineers today, and software engineers, frankly, are in such high demand. Like, you see what happens in the marketplace and the hiring markets, how competitive it is. You need to use your software team where they're maximally effective, and where they're maximally effective is building differentiation into your products for your customers. And video is just not that; very few companies actually differentiate on their video technology. So we want to be that team for everyone else. We're 200 people building the absolute best video infrastructure as APIs for developers, and making that available to everyone else. >> Jon, great to have you on with the showcase, love the company, love what you guys do. Video as code, data as code, great stuff. Final plug for the company, for the developers out there and prospects watching: why should they go to MUX? What are you guys up to? What's the big benefit? >> I mean, first, just check us out. Try our APIs, read our docs, talk to our support team. We put a lot of work into making our platform the best. You know, as you dig deeper, I think you'd be looking at the performance, the global performance of what we do, looking at our analytics stack and the insight you get into video streaming. We have an emerging open source video player that's really exciting, and I think is going to be the direction that open source players go for the next decade. And then, you know, we're a quickly growing team. We were 60 people at the beginning of last year. 
You know, we were 150 at the beginning of this year, and we're going to grow really quickly again this year. And this whole team is dedicated to building the best video infrastructure for developers. >> Great job, Jon. Thank you so much for spending the time sharing the story of MUX here on the show, Amazon Startup Showcase season two, episode two. Thanks so much. >> Thank you, John. >> Okay, I'm John Furrier, your host of theCUBE. This is season two, episode two, the ongoing series covering the most exciting startups from the AWS Cloud Ecosystem. Talking data analytics here, video cloud, video as a service, video infrastructure, video APIs, hottest thing going on right now, and you're watching it live here on theCUBE. Thanks for watching. (upbeat music)

Published Date : Mar 30 2022


Breaking Analysis: What to Expect in Cloud 2022 & Beyond


 

From theCUBE studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR, this is Breaking Analysis with Dave Vellante.

We've often said that the next 10 years in cloud computing won't be like the last ten. Cloud has firmly planted its footprint on the other side of the chasm, with the momentum of the entire multi-trillion dollar tech business behind it. Both sellers and buyers are leaning in by adopting cloud technologies, and many are building their own value layers on top of cloud. In the coming years we expect innovation will continue to coalesce around the three big U.S. clouds, plus Alibaba in APAC, with the ecosystem building value on top of the hardware and software tooling provided by the hyperscalers. Importantly, we don't see this as a race to the bottom. Rather, our expectation is that the large public cloud players will continue to take cost out of their platforms through innovation, automation, and integration, while other cloud providers and the ecosystem, including traditional companies that buy IT, mine opportunities in their respective markets. As Matt Baker of Dell is fond of saying, this is not a zero-sum game. Welcome to this week's Wikibon CUBE Insights, powered by ETR. In this Breaking Analysis we'll update you on our latest projections in the cloud market, share some new ETR survey data with some surprising nuggets, and drill into the important cloud database landscape.

First, let's take a look at what people are talking about in cloud and what's been in the recent news. With the exception of Alibaba, all the large cloud players have reported earnings. Google continues to focus on growth at the expense of profitability. Google reported that its cloud business, which includes applications like Google Workspace, grew 45 percent to five and a half billion dollars, but it had an operating loss of 890 million. Since Thomas Kurian joined Google to run its cloud business, Google has increased head count in that business
from 25,000 people; it's now up to 40,000 in an effort to catch up to the two leaders, but playing catch-up is expensive. To put this into perspective, let's go back to AWS's revenue in Q1 2018, when the company did 5.4 billion dollars, almost exactly the same size as Google's current total cloud business, and AWS was growing faster at the time, at 49 percent. Don't forget, Google includes in its cloud numbers a big chunk of high-margin software. AWS at the time had an operating profit of 1.4 billion dollars that quarter, around 26 percent of its revenue, so it was a highly profitable business, about as profitable as Cisco's overall business, which again is a great business. This is what happens when you're number three and didn't get your head out of your ads fast enough. In fairness, Google still gets high marks on the quality of its technology. According to Corey Quinn of The Duckbill Group, Amazon and Google Cloud are what he called neck and neck with regard to reliability, with Microsoft Azure trailing because of significant disruptions in the past. These comments were made last week in a Bloomberg article, despite some recent high-profile outages on AWS. Not surprisingly, a Microsoft spokesperson said that the company's cloud offers industry-leading reliability and that it gives customers payment credits after some outages. Turning to Microsoft and cloud news, Microsoft's overall cloud business surpassed 22 billion dollars in the December quarter, up 32 percent year on year. Like Google, Microsoft includes application software and SaaS offerings in its cloud numbers, and gives little nuggets of guidance on its Azure infrastructure-as-a-service business. By the way, we estimate that Azure comprises about 45 percent of Microsoft's overall cloud business, which we think hit a 40 billion dollar run rate last quarter. Microsoft guided in its earnings call that recent declines in the Azure growth rate will reverse in Q1, and that implies sequential growth for Azure. And finally, it was announced that the FTC, not the DOJ,
will review Microsoft's announced 75 billion dollar acquisition of Activision Blizzard. It appears FTC chair Lina Khan wants to take this one on herself. She, of course, has been very outspoken about the power of big tech companies, and in a recent CNBC interview suggested that the U.S. government's actions were a meaningful contributor back then to curbing Microsoft's power in the '90s. I personally found that dubious. Just ask Netscape, WordPerfect, Novell, Lotus, and SPC, the maker of Harvard Presentation Graphics, how effective the government was in curbing Microsoft's power. Generally, my take is that the U.S. government has had a dismal record regulating tech companies, most notably IBM and Microsoft, and it was market forces, company hubris, complacency, and self-inflicted wounds, not government intervention, that were far more effective than the government. Of course, if companies are breaking the law they should be punished, but the U.S. government hasn't been very productive in its actions, and the unintended consequences of regulation could be detrimental to U.S. competitiveness in the race with China. But I digress. Lastly in the news, Amazon announced earnings Thursday, and the company's value increased by 191 billion dollars on Friday; that's a record valuation gain for U.S. stocks. AWS, Amazon's profit engine, grew 40 percent year on year for the quarter. It closed the year at 62 billion dollars in revenue and a 71 billion dollar revenue run rate. AWS is now larger than IBM, which without Kyndryl is at a 67 billion dollar run rate. Just for context, IBM's revenue in 2011 was 107 billion dollars. Now, there's a conversation going on in the media and social that in order to continue this growth and compete with Microsoft, AWS has to get into the SaaS business and offer applications. We don't think that's the right strategy for Amazon in the near future; rather, we see them enabling developers to compete in that business. Finally, Amazon disclosed that 48 of its top 50 customers
are using Graviton2 instances. Why is this important? Because AWS is well ahead of the competition in custom silicon chips and is on a price/performance curve that is far better than alternatives, especially those based on x86. This is one of the reasons why we think this business is not a race to the bottom. AWS is being followed by Google, Microsoft, and Alibaba in developing custom silicon, and will continue to drive down its internal cost structures and deliver price/performance equal to or better than the historical Moore's Law curves. So that's the recent news for the big U.S. cloud providers. Let's now take a look at how the year ended for the big four hyperscalers and look ahead to next year. Here's a table we've shown before; it shows the revenue estimates for worldwide IaaS and PaaS generated by AWS, Microsoft, Alibaba, and Google. Remember, Amazon and Alibaba share clean IaaS figures, whereas Microsoft and Alphabet only give us nuggets that we have to interpret, and we correlate those tidbits with other data that we gather. We're one of the few outlets that actually attempts to make these apples-to-apples comparisons. There's a company called Synergy Research, another firm that does this, but I really can't map to their numbers; their GCP figures look far too high, Azure appears somewhat overestimated, and they do include other stuff like hosted private cloud services. But it's another data point that you can use. Okay, back to the table. We've slightly adjusted our GCP figures down based on interpreting some of Alphabet's statements and other survey data. Only Alibaba has yet to announce earnings, so we'll stick to a 2021 market size of about 120 billion dollars. That's a 41 percent growth rate relative to 2020, and we expect that figure to increase by 38 percent to 166 billion dollars in 2022. We'll discuss this a bit later, but these four companies have created an opportunity for the ecosystem to build what we're calling superclouds on top of
this infrastructure, and we're seeing it happen. It was increasingly obvious at AWS re:Invent last year, and we feel it will pick up momentum in the coming months and years; a little bit more on that later. Now here's a graphical view of the quarterly revenue shares for these four companies. Notice that AWS has reversed its share erosion and is trending up slightly. AWS has accelerated its growth rate four quarters in a row. It accounted for 52 percent of the big four hyperscaler revenue last year, and that figure was nearly 54 percent in the fourth quarter. Azure finished the year with 32 percent of the hyperscaler revenue in 2021, which dropped to 30 percent in Q4, and you can see GCP and Alibaba are neck and neck, fighting for the bronze medal. By the way, in our recent 2022 predictions post we said Google Cloud Platform would surpass Alibaba this year, but given the recent trimming of our numbers, Google's got some work to do for that prediction to be correct. Okay, just to put a bow on the Wikibon market data, let's look at the quarterly growth rates, and you'll see the compression trends there. This data tracks quarterly revenue growth rates back to Q1 2019, and you can see the steady downward trajectory and the reversal that AWS experienced in Q1 of last year. Remember, Microsoft guided for sequential growth in Azure, so that orange line should trend back up, and given GCP's much smaller share and the big go-to-market investments that we talked about, we'd like to see an acceleration there as well. The thing about AWS is just remarkable, that it's able to accelerate growth at a 71 billion dollar run rate. And Alibaba is a bit more opaque, and likely still reeling from the crackdown of the Chinese government. We're admittedly not as close to the China market, but we'll continue to watch from afar, as that steep decline in growth rate is somewhat of a concern. Okay, let's get into the survey data from ETR, and to do so we're going to take some time-series views on some of the select cloud
platforms that are showing spending momentum in the ETR data set. ETR uses a metric we've talked about a lot called Net Score to measure the spending velocity of products and services. Net Score basically asks customers: are you spending more, less, or the same on a platform and a vendor? It then subtracts the lesses from the mores, and that yields a Net Score. This chart shows Net Score for five cloud platforms going back to January 2020. Note that the table we've inserted inside the chart shows the Net Score and shared N; the latter metric indicates the number of mentions in the data set, and all the platforms we've listed here show strong presence in the survey. The red dotted line at 40 percent indicates spending is at an elevated level, and you can see Azure, AWS, VMware Cloud on AWS, and GCP are all nicely elevated and bounding off their October figures, indicating continued cloud momentum overall. But the big surprise in these figures is the steady climb and the steep bounce up from Oracle, which came in just under the 40 mark. One quarter is not necessarily a trend, but going back to January 2020, the Oracle peaks keep getting higher and higher, so we definitely want to keep watching this. Now here's a look at some of the other cloud platforms in the ETR survey. The chart here shows the same time series, and we've now brought in some of the big hybrid players: notably VMware Cloud, which is VCF and other on-prem solutions; Red Hat OpenStack, which as we've reported in the past is still popular in telcos who want to build their own cloud; we're also starting to see HPE with GreenLake and Dell with APEX show up more; and IBM, which years ago acquired SoftLayer, essentially a bare-metal hosting company, and over the years cobbled together its own public cloud. IBM is now racing after hybrid cloud, using Red Hat OpenShift as the linchpin of that strategy. What this data tells us, first of all, is that these platforms don't
have the same presence in the data set as the previous players. VMware is the one possible exception, but other than VMware, these players don't have the spending velocity shown in the previous chart, and most are below the red line. HPE and Dell are interesting and notable in that they're transitioning their early private cloud businesses to HPE GreenLake and Dell APEX respectively. Finally, after years of kind of staring at their respective navels in cloud and milking their legacy on-prem models, they're building out cloud-like infrastructure for their customers. They're leaning into cloud and marketing it in a more sensible and attractive fashion, so we would expect these figures to bounce around for a little while for those two as they settle into a groove, and we'll watch that closely. Now, IBM is in the process of a complete do-over. Arvind Krishna inherited three generations of leadership with a professional-services mindset. In the post-Gerstner era, both Sam Palmisano and Ginni Rometty held on far too long to IBM's services heritage and protected the past from the future. They missed the cloud opportunity, and they forced the acquisition of Red Hat to position the company for hybrid cloud. Rometty tried to shrink to grow but never got there. Krishna is moving faster, and with the Kyndryl spin is promising mid-single-digit growth, which would be a welcome change. IBM has a lot of work to do, and we would expect its Net Score figures to bounce around as well as customers transition to the future. All right, let's take a look at all these different players in context. These are all the clouds that we just talked about in a two-dimensional view: the vertical axis is Net Score, or spending momentum, and the horizontal axis is market share, or presence and pervasiveness in the data set. A couple of call-outs we'd like to make here. First, the data confirms what we've been saying, what everybody's been saying: AWS and
Microsoft stand alone with a huge presence, many tens of billions of dollars in revenue, yet they are both well above the 40 line, show spending momentum, and are well ahead of GCP on both dimensions. Second, VMware, while much smaller, is showing legitimate momentum, which correlates to its public statements. Alibaba really doesn't have enough sample in this survey to draw hardcore conclusions. You can see HPE, Dell, and IBM similarly have a little bit more presence in the data set, but they clearly have some work to do; what you're seeing there is them transitioning their legacy install bases. Oracle's the big surprise. Look at where Oracle was in the January survey and how it has shot up recently. We'll see if this holds up, but let's posit some possibilities as to why. It really starts with the fact that Oracle is the king of mission-critical apps. Now, if you haven't seen the video on Twitter, you have to check it out; it's hilarious. We're not going to run the video here, but the link will be in our post. I'll give you the short version: some really creative person overlaid a data-migration narrative on top of this one-tooth guy who speaks in Spanish gibberish. The setup is, he's a project manager at a bank, and AWS came into the bank, this of course all hypothetical, and said, "We can move all your apps to the cloud in 12 months." And the guy says, "But wait, we're running mission-critical apps on Exadata," and AWS says there's nothing special about Exadata. And he starts howling and slapping his knee and laughing and giggling, and talking about the 23-year-old senior engineer who says, "We're going to do this with microservices," and he could tell he was 23 because he was wearing expensive sneakers, and what a nightmare they encountered migrating their environment. A very, very funny video, and for anyone who's ever gone through a major migration of mission-critical systems, this is going to hit home. It's funny, not funny. The point
is, it's really painful to move off of Oracle, and Oracle, for all its haters and its faults, is really the best environment for mission-critical systems, and customers know it. So what's happening is Oracle is building out the best cloud for Oracle Database, and it has a lot of really profitable customers running on-prem that the company is migrating to Oracle Cloud Infrastructure, OCI. It's a safer bet than ripping it out and putting it into somebody else's cloud that doesn't have all the specialized hardware and Oracle knowledge, because you can get the same integrated Exadata hardware and software to run your database in the Oracle cloud. It's frankly an easier and much more logical migration path for a lot of customers, and that's possibly what's happening here. Not to mention, Oracle jacks up the license price, nearly doubling it, if you run on other clouds. So not only is Oracle investing to optimize its cloud infrastructure, it spends money on R&D, we've always talked about that, really focused on mission-critical applications, and it's making its own cloud more cost-effective by penalizing customers that run Oracle elsewhere. This possibly explains why, when the Gartner Magic Quadrant for cloud databases comes out, it's got Oracle so well positioned; you can see it there for yourself. Oracle's position is right there with AWS and Microsoft, and ahead of Google. On the right-hand side is Gartner's critical capabilities ratings for DBMS, and Oracle leads in virtually all of the categories Gartner tracks. This is for operational DBMS, so it's kind of a narrow view; it's like the red-stack sweet spot. This graph shows traditional transactions, but Gartner has Oracle ahead of all vendors in stream processing, operational intelligence, and real-time augmented transactions. Now, you know, Gartner, they're like old mainframers, and I say that lovingly, so maybe they're a bit biased, and they might be missing some of the emerging opportunities that, for example, Snowflake is pioneering. But it's hard
to deny that Oracle, for its business, is making the right moves in cloud by optimizing for the red stack. There's little question in our view: when it comes to mission-critical, we think Gartner's analysis is correct. However, there's this other really exciting landscape emerging in cloud data, and we don't want it to be a blind spot. Snowflake calls it the data cloud, Zhamak Dehghani calls it data mesh, others are using the term data fabric, and Databricks calls it data lakehouse; so does Oracle, by the way. Look, the terminology is going to evolve, and most of the action is happening in the cloud, quite frankly. This chart shows a select group of database and data warehouse companies, and we've filtered the data for AWS, Azure, and GCP customer accounts, so we're showing how these vendors are doing within AWS, Azure, and GCP accounts. To make the cut you had to have a minimum of 50 mentions in the ETR survey, so unfortunately Databricks didn't make it, not enough presence in the data set quite yet. But just to give you a sense, Snowflake is represented in this cut with 131 accounts, AWS 240, Google 108, Microsoft 407 (huge), [ __ ] 117, Cloudera 52, just made the cut, IBM 92, and Oracle 208.
Again, these are shared accounts filtered by customers running AWS, Azure, or GCP. The chart shows Net Score: lime green is new adds, forest green is spending more, gray is flat spending, pink is spending less, and bright red is defection. Again, you subtract the red from the green and you get Net Score, and you can see that Snowflake, as we reported last week, is tops in the data set, with a Net Score in the 80s, virtually no red, and, by the way, single-digit flat spend. AWS, Google, and Microsoft are all prominent in the data set, as are [ __ ] and Snowflake, as I just mentioned, and they're all elevated over the 40 mark. Cloudera, what can we say; once they were a high flyer, but they're really not in the news anymore with anything compelling, other than they just took the company private, so maybe they can re-emerge at some point with a stronger story. I hope so, because as you can see, they actually have some new additions and spending momentum in the green, just a lot of customers holding steady and a bit too much red. But they're in positive territory at least, at plus 17 percent, unlike IBM and Oracle. And this is the flip side of the coin. IBM is knee-deep, really chest-deep, in the middle of a major transformation. We've said before, Arvind Krishna's strategy and vision is at least achievable: prune the portfolio, i.e., spin out Kyndryl, sell Watson Health, hold serve with the mainframe and deal with those product cycles, shift the mix to software, and use Red Hat to win the day in hybrid. Red Hat is working for IBM, growing well into the double digits. Unfortunately, it's not showing up in this chart, with little database momentum in AWS, Azure, and GCP accounts: zero new adds, not enough acceleration in spending, a big gray middle, and nearly a quarter of the base in the red. IBM's data and AI business only grew three percent last quarter, and the word database wasn't even mentioned once on IBM's earnings call. This has to be a concern, as you can see how important
database is to AWS, Microsoft, and Google, and the momentum it's giving companies like Snowflake and [ __ ] and others. Which brings us to Oracle, with a Net Score of minus 12. So how do you square the momentum in Oracle cloud spending, and the strong database ratings from Gartner, with this picture? Good question, and I would say the following. First, look at the profile: people aren't adding Oracle anew, a large portion of the base, 25 percent, is reducing spend by 6 percent or worse, and there's a decent percentage of the base migrating off Oracle, with a big fat middle that's flat. This accounts for the poor Net Score overall. But what ETR doesn't track is how much is being spent; rather, it's an account-based model, and Oracle is heavily weighted toward big spenders running mission-critical applications and databases. Oracle's non-GAAP operating margins are comparable to IBM's gross margins on a percentage basis, so it's a very profitable company with a big license and maintenance install base. Oracle has focused its R&D investments on cloud, ERP, and database automation; they've got vertical SaaS, and they've got this integrated hardware and software story, and that drives differentiation for the company. But as you can see in this chart, it has a legacy install base that is constantly trying to minimize its license costs. Okay, here's a little bit of a different view on the same data. We expand the picture with the two dimensions of Net Score on the y-axis and market share, or pervasiveness, on the horizontal axis, and the table insert shows how the data gets plotted, y and x respectively. Not much to add here, other than to say the picture continues to look strong for those companies above the 40 line that are focused, have figured out a clear cloud strategy, and aren't necessarily dealing with a big install base; the exception, of course, is Microsoft. The ones below the line definitely have parts of their portfolio with solid momentum, but they're fighting the inertia of a large install base
that moves very slowly; again, Microsoft had the advantage of Azure and migrated those customers very quickly. Okay, so let's wrap it up, starting with the big three cloud players. AWS is accelerating and innovating; a great example is custom silicon, with Nitro, Graviton, and other chips that will help the company address concerns related to the race to the bottom. It's not a race to zero. AWS, we believe, will let its developers go after the SaaS business, and for the most part AWS will offer solutions that address large vertical markets; think call centers. The edge remains a wild card for AWS, and for all the cloud players, really. AWS believes that in the fullness of time all workloads will run in the public cloud. Now, it's hard for us to imagine Tesla autonomous vehicles running in the public cloud, but maybe AWS will redefine what it means by its cloud. Microsoft, well, they're everywhere, and they're expanding further now into gaming and the metaverse. When he became CEO in 2014, many people said that Satya Nadella should ditch Xbox. Just as an aside, the joke among many Oracle employees at the time was that Safra Catz would buy her kids, her nieces and nephews, and her kids' friends, everybody, Xbox game consoles for the holidays, because Microsoft lost money on every one they shipped. Well, Nadella has stuck with it, and he sees an opportunity to expand through online gaming communities; one of his first deals as CEO was Minecraft. Now the acquisition of Activision will make Microsoft the world's number three gaming company by revenue, behind only Tencent and Sony. All this will be powered by Azure and drive more compute, storage, AI, and tooling. Now, Google, for its part, is battling to stay relevant in the conversation. Luckily, it can afford the massive losses it endures in cloud, because the company's advertising business is so profitable. Don't expect, as many have speculated, that Google is going to bail on cloud; that would be a huge mistake, as the market is more than large enough for
three players which brings us to the rest of the pack cloud ecosystems generally and aws specifically are exploding the idea of super cloud that is a layer of value that spans multiple clouds hides the underlying complexity and brings new value that the cloud players aren't delivering that's starting to bubble to the top and legacy players are staying close to their customers and fighting to keep them spending and it's working dell hpe cisco and smaller predominantly on-plan prem players like pure storage they continue to do pretty well they're just not as sexy as the big cloud players the real interesting activity it's really happening in the ecosystem of companies and firms within industries that are transforming to create their own digital businesses virtually all of them are running a portion of their offerings on the public cloud but often connecting to on-premises workloads and data think goldman sachs making that work and creating a great experience across all environments is a big opportunity and we're seeing it form right before our eyes don't miss it okay that's it for now thanks to my colleague stephanie chan who helped research this week's topics remember these episodes are all available as podcasts wherever you listen just search breaking analysis podcast check out etr's website at etr dot ai and also we publish a full report every week on wikibon.com and siliconangle.com you can get in touch with me email me at david.velante siliconangle.com you can dm me at divalante or comment on my linkedin post this is dave vellante for the cube insights powered by etr have a great week stay safe be well and we'll see you next time [Music] you
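As an aside for readers, the net score metric that anchors the charts above can be sketched in a few lines of code. ETR's actual survey instrument is more involved; the category names and the 100-account sample below are illustrative assumptions loosely matching the Oracle profile described, not real ETR data.

```python
from collections import Counter

def net_score(responses):
    """ETR-style net score: percentage of accounts adopting or increasing
    spend, minus the percentage decreasing or replacing; flat responses
    cancel out of the numerator but still count in the base."""
    counts = Counter(responses)
    positive = counts["adopting"] + counts["increasing"]
    negative = counts["decreasing"] + counts["replacing"]
    return round(100 * (positive - negative) / len(responses))

# Hypothetical 100-account profile: little new adoption, a big flat
# middle, and roughly a quarter of the base cutting spend.
sample = (["adopting"] * 4 + ["increasing"] * 14 + ["flat"] * 52
          + ["decreasing"] * 25 + ["replacing"] * 5)
print(net_score(sample))  # -> -12
```

This is why an account-based metric can be negative while dollar volume stays large: the handful of big spenders holding flat contribute nothing positive to the score.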

Published Date : Feb 7 2022


Breaking Analysis: Cyber, Blockchain & NFTs Meet the Metaverse


 

>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is "Breaking Analysis" with Dave Vellante. >> When Facebook changed its name to Meta last fall, it catalyzed a chain reaction throughout the tech industry. Software firms, gaming companies, chip makers, device manufacturers, and others have joined in the hype machine. Now, it's easy to dismiss the metaverse as futuristic hyperbole, but do we really believe that tapping on a smartphone, or staring at a screen, or two-dimensional Zoom meetings are the future of how we work, play, and communicate? As the internet itself proved to be larger than we ever imagined, it's very possible, and even quite likely, that the combination of massive processing power, cheap storage, AI, blockchains, crypto, sensors, AR, VR, brain interfaces, and other emerging technologies will combine to create new and unimaginable consumer experiences, and massive wealth for creators of the metaverse. Hello, and welcome to this week's Wikibon CUBE Insights, powered by ETR. In this "Breaking Analysis," we welcome in cyber expert, hacker, gamer, NFT expert, and founder of ORE System, Nick Donarski. Nick, welcome, thanks so much for coming on theCUBE. >> Thank you, sir, glad to be here. >> Yeah, okay, so today we're going to traverse two parallel paths, one that took Nick from security expert and PenTester to NFTs, tokens, and the metaverse. And we'll simultaneously explore the complicated world of cybersecurity in the enterprise, and how the blockchain, crypto, and NFTs will provide key underpinnings for digital ownership in the metaverse. We're going to talk a little bit about blockchain, and crypto, and get things started there, and some of the realities and misconceptions, and how innovations in those worlds have led to the NFT craze. We'll look at what's really going on in NFTs and why they're important as both a technology and societal trend.
Then, we're going to dig into the tech and try to explain why and how blockchain and NFTs are going to lay the foundation for the metaverse. And, finally, who's going to build the metaverse? And how long is it going to take? All right, Nick, let's start with you. Tell us a little bit about your background, your career. You started as a hacker at a really, really young age, and then got deep into cyber as a PenTester. You did some pretty crazy stuff. You have some great stories about sneaking into buildings. You weren't just doing it all remote. Tell us about yourself. >> Yeah, so I mean, really, I started a long time ago. My dad was really my foray into technology. I wrote my first program on an Apple IIe in BASIC in 1989. So, I like to say I was born on the internet, if you will. But, yeah, in high school at 16, I incorporated my first company, did just tech support for parents and teachers. And then in 2000 I transitioned really into security and focused there ever since. I joined Rapid7, and after they picked up Metasploit, I joined HP. I was one of their founding members of ShadowLabs and really have been part of the information security and the cyber community all throughout, whether it's training at various different conferences or talking. My biggest thing, and my most awesome moments aside from the various things I've broken into, is really when I get to actually work with somebody that's coming up in the industry and who's new and actually has that light bulb moment of really kind of understanding of technology, understanding an idea, or getting it when it comes to that kind of stuff. >> Yeah, and when you think about what's going on in crypto and NFTs, and okay, now the metaverse, you get to see some of the most innovative people. Now I want to first share a little bit of data on enterprise security and maybe, Nick, get you to comment.
We've reported over the past several years on the complexity in the security business and the numerous vendor choices that SecOps pros face. And this chart really tells that story in the cybersecurity space. It's an X-Y graph. We've shown it many times from the ETR surveys, where the vertical axis is a measure of spending momentum called net score. And the horizontal axis is market share, which represents each company's presence in the data set, and a couple of points stand out. First, it's really crowded. That red dotted line that you see there, that's 40%; above that line on the net score axis marks highly elevated spending momentum. Now, let's just zoom in a bit, and I've cut the data by those companies that have more than a hundred responses in the survey. And you can see here on this next chart, it's still very crowded, but a few call-outs are noteworthy. First, companies like SentinelOne, Elastic, Tanium, Datadog, Netskope and Darktrace. They were all above that 40% line in the previous chart, but they've fallen off. They still have actually a decent presence in the survey, over 60 responses, but under that hundred. And you can see Auth0, now Okta, a big $7 billion acquisition. They got the highest net score. CrowdStrike's up there, Okta classic, their kind of enterprise business, and Zscaler and others above that line. You see Palo Alto Networks and Microsoft, very impressive, because they're both big and they're above that elevated spending velocity. So Nick, kind of a long-winded intro, and it was a little bit off topic, but I wanted to start here because this is the life of a SecOps pro. They lack the talent and the capacity to keep bad guys fully at bay. And so they have to keep throwing tooling at the problem, which adds to the complexity, and as a PenTester and hacker, this chaos and complexity means cash for the bad guys. Doesn't it? >> Absolutely.
You know, the more systems that these organizations find to integrate into the systems means that there's more components, more dollars and cents as far as the amount of time and the engineers that need to actually be responsible for these tools. There's a lot of reasons that, the more, I guess, hands in the cookie jar, if you will, when it comes to the security architecture, the more links that are, or avenues for attack, built into the system. And really one of the biggest things that organizations face is being able to have engineers that are qualified and technical enough to be able to support that architecture as well, 'cause buying it from a vendor and deploying it, putting it onto a shelf is good, but if it's not tuned properly, or if it's not connected properly, that security tool can just open up more avenues of attack for you. >> Right, okay, thank you. Now, let's get into the meat of the discussion for today and talk a little bit about blockchain and crypto for a bit. I saw a Substack post the other day, and it was ripping Matt Damon for peddling crypto on TV ads and how crypto is just this big pyramid scheme. And it's all about allowing criminals to be anonymous, and it's ransomware and drug trafficking. And yes, there are definitely scams and you got to be careful and lots of dangers out there, but these are common criticisms in the mainstream press, that overlook the fact, by the way, that IPOs and SPACs are just as much of a pyramid scheme. Now, I'm not saying there shouldn't be more regulation, there should, but Bitcoin was born out of the 2008 financial crisis, cryptocurrency, and you think about it, it's really the confluence of software engineering, cryptography and game theory. And there's some really powerful innovation being created by the blockchain community. Crypto and blockchain are really at the heart of a new decentralized platform being built out. And where today, you've got a few large internet companies.
They control the protocols and the platform. Now the aspiration of people like yourself is to create new value opportunities. And there are many more chances for the little guys and girls to get in on the ground floor, and blockchain technology underpins all this. So Nick, what's your take, what are some of the biggest misconceptions around blockchain and crypto? And do you even pair those two in the same context? What are your thoughts? >> So, I mean, really, we like to separate ourselves and say that we are a blockchain company, as opposed to necessarily saying (indistinct) anything like that. We leverage those tools. We leverage cryptocurrencies, we leverage NFTs and those types of things within there, but blockchain is a technology, which is the underlying piece, is something that can be used and utilized in a very large number of different organizations out there. So, cryptocurrency and a lot of that negative context comes with a fear of something new, without having that regulation in place, without having the rules in place. And we're a big proponent of, we want the regulation, right? We want to do right. We want to do it by the rules. We want to do it under the context of, this is what should be done. And we also want to help write those rules as well, because a lot of the lawmakers, a lot of the lobbyists and things, they have a certain aspect or a certain goal when they're trying to get these things. Our goal is simplicity. We want the ability for the normal average person to be able to interact with crypto, interact with NFTs, interact with the blockchain. And basically, by saying blockchain in quotes, it's very ambiguous, 'cause there's many different things that blockchain can be. The easiest way, right, the easiest way to understand blockchain is simply a distributed database. That's really the core of what blockchain is. It's a record keeping mechanism that allows you to reference that. And the beauty of it is that it's quote unquote immutable.
You can't edit that data. So, especially when we're talking about blockchain being underlying for technologies in the future, things like security, where you have logging, you have record keeping, or whether you're talking about sales, where you may have to have multiple different locations (indistinct) users from different locations around the globe. It creates a central repository that provides distribution and security in the way that you're ensuring your data, ensuring the validation of where that data exists when it was created. Those types of things are what blockchain really is. If you go to the historical, right, the very early on, Bitcoin absolutely was made to have a way of not having to deal with the fed. That was the core functionality of the initial crypto. And then you had a lot of the illicit trades, those black markets that jumped onto it because of what it could do. The maturity of the technology, though, of where we are now versus, say, back in '97 is a much different world of blockchain, and there's a much different world of cryptocurrency. You still have to be careful, because with any fad, you're still going to have that FUD that goes out there and sells that fear, uncertainty and doubt, which spurs a lot of those types of scams, and a lot of those things that target end users that we face as security professionals today. You still get mailers that go out, looking for people to give their social security number over during tax time. Snail mail is considered a very ancient technology, but it still works. You still get a portion of the population that falls for those tricks, phishing, whatever it might be. It's all about trying to make sure that you have fear about what is that change. And I think that as we move forward, and move into the future, the simpler and the more comfortable these types of technologies become, the easier it is to utilize and indoctrinate normal users, to be able to use these things.
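Nick's "distributed database that's quote unquote immutable" framing can be made concrete with a short sketch. This is an illustrative toy, not any production chain: each record stores a hash of its predecessor, so editing any historical entry invalidates every later link.

```python
import hashlib
import json

def entry_hash(entry: dict) -> str:
    # Hash a canonical JSON encoding so the digest is deterministic.
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

class Ledger:
    """Append-only chain: every record stores the hash of its predecessor."""

    def __init__(self):
        self.chain = [{"index": 0, "data": "genesis", "prev": "0" * 64}]

    def append(self, data):
        entry = {"index": len(self.chain), "data": data,
                 "prev": entry_hash(self.chain[-1])}
        self.chain.append(entry)
        return entry

    def verify(self) -> bool:
        # Recompute every link; editing any historical entry breaks the chain.
        return all(self.chain[i]["prev"] == entry_hash(self.chain[i - 1])
                   for i in range(1, len(self.chain)))

ledger = Ledger()
ledger.append({"from": "alice", "to": "bob", "amount": 5})
ledger.append({"from": "bob", "to": "carol", "amount": 2})
assert ledger.verify()

ledger.chain[1]["data"]["amount"] = 500   # rewrite history...
assert not ledger.verify()                # ...and every later link fails
```

The "immutability" is really tamper evidence: a single copy can still be edited, which is why the distribution and consensus Nick mentions next are what make real networks hard to rewrite.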
>> You know, I want to ask you about that, Nick, because you mentioned immutability, there's a lot of misconceptions about that. I had somebody tell me one time, "Blockchain's BS," and they say, "Well, oh, hold on a second. They say it's immutable, but you can hack Coinbase, whatever it is." So I guess a couple of things, one is that the killer app for blockchain became money. And so we learned a lot through that. And you had Bitcoin, and it really wasn't programmable through its interface. And then Ethereum comes out. I know you know a lot about Ether, and you have Solidity, which is a lot simpler, but it ain't JavaScript, which is ubiquitous. And so now you have a lot of potential for the initial ICOs, and probably still the ones today, the white papers, a lot of security flaws in there. I'm sure you can talk to that, but maybe you can help square that circle about immutability and security. I've mentioned game theory before; it's harder to hack Bitcoin and the Bitcoin blockchain than it is to mine. So that's why people mine, but maybe you could add some context to that. >> Yeah, you know, it goes to just about any technology out there. Now, when you're talking about blockchain specifically, the majority of the attacks happen with the applications and the smart contracts that are actually running on the blockchain, as opposed to necessarily the blockchain itself. And like you said, the impact, whether that's loss of revenue or loss of tokens or whatever it is, in most cases that results from something that was a phishing attack: you gave up your credentials, somebody said, paste your private key in here and you win a cookie, or whatever it might be, but those are still the fundamental pieces. When you're talking about various different networks out there, depending on the blockchain, it depends on how much the overall security really is.
The more distributed it is, and the more stable it is as the network goes, the better or the more stable any of the code is going to be. The underlying architecture of any system is the key to success when it comes to the overall security. So the blockchain itself is immutable, in the sense that the owners are the ones that have to be trusted. If you look at distributed networks, something like Ethereum or Bitcoin, where you have those proof of work systems, that disperses that information to much more remote locations. So the more dispersed that information is, the less likely it is to be able to be impacted by one small instance. If you look at like the DAO hack, or if you look at a lot of the other vulnerabilities that exist on the blockchain, it's more about the code. And like you said, Solidity being as new as it is, it's not JavaScript. The industry is very early and very infantile, as far as the developers that are skilled in doing this. And with that just comes the inexperience and the lack of information that you don't learn until JavaScript is 10 or 12 years old.
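The "harder to hack than to mine" point Dave raises comes from the work/verify asymmetry of proof of work: finding a valid nonce is brute force, while checking one is a single hash. A toy version follows; real networks use numeric difficulty targets, block headers, and Merkle roots, all omitted here for brevity.

```python
import hashlib

def mine(block: str, difficulty: int) -> int:
    """Brute-force a nonce whose hash has `difficulty` leading zero hex digits."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block}:{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce
        nonce += 1

def check(block: str, nonce: int, difficulty: int) -> bool:
    # Verification is a single hash, no matter how long mining took.
    digest = hashlib.sha256(f"{block}:{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

# Finding a nonce takes ~16**difficulty hashes on average; checking takes one.
# Rewriting history means redoing this work for the edited block and for
# every block chained after it, faster than the rest of the network.
nonce = mine("block-42", difficulty=4)
assert check("block-42", nonce, difficulty=4)
```

This is the game theory in miniature: honest participation (mining forward) is cheap relative to attacking (re-mining the past), so rational actors mine.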
But I want to get into NFTs, because it's sort of the next big thing here before we get into the metaverse, what Nick, why should people pay attention to NFTs? Why do they matter? Are they really an important trend? And what are the societal and technological impacts that you see in this space? >> Yeah, I mean, NFTs are a very new technology and ultimately it's just another entry on the blockchain. It's just another piece of data in the database. But how it's leveraged in the grand scheme of how we, as users see it, it can be the classic idea of an NFT is just the art, or as good as the poster on your wall. But in the case of some of the new applications, is where are you actually get that utility function. Now, in the case of say video games, video games and gamers in general, already utilize digital items. They already utilize digital points. As in the case of like Call of Duty points, those are just different versions of digital currencies. You know, World of Warcraft Gold, I like to affectionately say, was the very first cryptocurrency. There was a Harvard course taught on the economy of WOW, there was a black market where you could trade your end game gold for Fiat currencies. And there's even places around the world that you can purchase real world items and stay at hotels for World of Warcraft Gold. So the adoption of blockchain just simply gives a more stable and a more diverse technology for those same types of systems. You're going to see that carry over into shipping and logistics, where you need to have data that is single repository for being able to have multiple locations, multiple shippers from multiple global efforts out there that need to have access to that data. But in the current context, it's either sitting on a shipping log, it's sitting on somebody's desk. All of those types of paper transactions can be leveraged as NFTs on the blockchain. It's just simply that representation. 
And once you break the idea of this is just a piece of art, or this is a cryptocurrency, you get into a world where you can apply that NFT technology to a lot more things than I think most people think of today. >> Yeah, and of course you mentioned art a couple of times when people sold as digital art for whatever, it was 60, 65 million, 69 million, that caught a lot of people's attention, but you're seeing, I mean, there's virtually infinite number of applications for this. One of the Washington wizards, tokenized portions of his contract, maybe he was creating a new bond, that's really interesting use cases and opportunities, and that kind of segues into the latest, hot topic, which is the metaverse. And you've said yourself that blockchain and NFTs are the foundation of the metaverse, they're foundational elements. So first, what is the metaverse to you and where do blockchain and NFTs, fit in? >> Sure, so, I mean, I affectionately refer to the metaverse just a VR and essentially, we've been playing virtual reality games and all the rest for a long time. And VR has really kind of been out there for a long time. So most people's interpretation or idea of what the metaverse is, is a virtual reality version of yourself and this right, that idea of once it becomes yourself, is where things like NFT items, where blockchain and digital currencies are going to come in, because if you have a manufacturer, so you take on an organization like Nike, and they want to put their shoes into the metaverse because we, as humans, want to individualize ourselves. We go out and we want to have that one of one shoe or that, t-shirt or whatever it is, we're going to want to represent that same type of individuality in our virtual self. So NFTs, crypto and all of those digital currencies, like I was saying that we've known as gamers are going to play that very similar role inside of the metaverse. >> Yeah. Okay. So basically you're going to take your physical world into the metaverse. 
You're going to be able to, as you just mentioned, acquire things- I loved your WOW example. And so let's stay on this for a bit, if we may, of course, Facebook spawned a lot of speculation and discussion about the concept of the metaverse and really, as you pointed out, it's not new. You talked about why second life, really started in 2003, and it's still around today. It's small, I read recently, it's creators coming back into the company and books were written in the early 90s that used the term metaverse. But Nick, talk about how you see this evolving, what role you hope to play with your company and your community in the future, and who builds the metaverse, when is it going to be here? >> Yeah, so, I mean, right now, and we actually just got back from CES last week. And the Metaverse is a very big buzzword. You're going to see a lot of integration of what people are calling, quote unquote, the metaverse. And there was organizations that were showing virtual office space, virtual malls, virtual concerts, and those types of experiences. And the one thing right now that I don't think that a lot of organizations have grasp is how to make one metaverse. There's no real player one, if you will always this yet, There's a lot of organizations that are creating their version of the metaverse, which then again, just like every other software and game vendor out there has their version of cryptocurrency and their version of NFTs. You're going to see it start to pop up, especially as Oculus is going to come down in price, especially as you get new technologies, like some of the VR glasses that look more augmented reality and look more like regular glasses that you're wearing, things like that, the easier that those technologies become as in adopting into our normal lifestyle, as far as like looks and feels, the faster that stuff's going to actually come out to the world. 
But when it comes to like, what we're doing is we believe that the metaverse should actually span multiple different blockchains, multiple different segments, if you will. So what ORE system is doing, is we're actually building the underlying architecture and technologies for developers to bring their metaverse too. You can leverage the ORE Systems NFTs, where we like to call our utility NFTs as an in-game item in one game, or you can take it over and it could be a t-shirt in another game. The ability for having that cross support within the ecosystem is what really no one has grasp on yet. Most of the organizations out there are using a very classic business model. Get the user in the game, make them spend their money in the game, make all their game stuff as only good in their game. And that's where the developer has you, they have you in their bubble. Our goal, and what we like to affectionately say is, we want to bring white collar tools and technology to blue collar folks, We want to make it simple. We want to make it off the shelf, and we want to make it a less cost prohibitive, faster, and cheaper to actually get out to all the users. We do it by supporting the technology. That's our angle. If you support the technology and you support the platform, you can build a community that will build all of the metaverse around them. >> Well, and so this is interesting because, if you think about some of the big names, we've Microsoft is talking about it, obviously we mentioned Facebook. They have essentially walled gardens. Now, yeah, okay, I could take Tik Tok and pump it into Instagram is fine, but they're really siloed off. And what you're saying is in the metaverse, you should be able to buy a pair of sneakers in one location and then bring it to another one. >> Absolutely, that's exactly it. 
>> And so my original kind of investment in attractiveness, if you will, to crypto, was that, the little guy can get an early, but I worry that some of these walled gardens, these big internet giants are going to try to co-op this. So I think what you're doing is right on, and I think it's aligned with the objectives of consumers and the users who don't want to be forced in to a pen. They want to be able to live freely. And that's really what you're trying to do. >> That's exactly it. You know, when you buy an item, say a Skin in Fortnite or Skin in Call of Duty, it's only good in that game. And not even in the franchise, it's only good in that version of the game. In the case of what we want to do is, you can not only have that carry over and your character. So say you buy a really cool shirt, and you've got that in your Call of Duty or in our case, we're really Osiris Protocol, which is our proof of concept video game to show that this all thing actually works, but you can actually go in and you can get a gun in Osiris Protocol. And if we release, Osiris Protocol two, you'll be able to take that to Osiris Protocol two. Now the benefit of that is, is you're going to be the only one in the next version with that item, if you haven't sold it or traded it or whatever else. So we don't lock you into a game. We don't lock you into a specific application. You own that, you can trade that freely with other users. You can sell that on the open market. We're embracing what used to be considered the black market. I don't understand why a lot of video games, we're always against the skins and mods and all the rest. For me as a gamer and coming up, through the many, many years of various different Call of Duties and everything in my time, I wish I could still have some this year. I still have a World of Warcraft account. I wasn't on, Vanilla, Burning Crusade was my foray, but I still have a character. 
If you look at it that way, if I had that wild character and that gear was NFTs, in theory, I could actually pass that onto my kid who could carry on that character. And it would actually increase in value because they're NFT back then. And then if needed, you could trade those on the open market and all the rest. It just makes gaming a much different thing. >> I love it. All right, Nick, hey, we're out of time, but I got to say, Nick Donarski, thanks so much for coming on the program today, sharing your insights and really good luck to you and building out your technology platform and your community. >> Thank you, sir, it's been an absolute pleasure. >> And thank you for watching. Remember, all these episodes are available as podcasts, just search "Breaking Analysis Podcast", and you'll find them. I publish pretty much every week on siliconangle.com and wikibond.com. And you can reach me @dvellante on Twitter or comment on my LinkedIn posts. You can always email me david.vellante@siliconangle.com. And don't forget, check out etr.plus for all the survey data. This is Dave Vellante for theCUBE Insights, powered by ETR, happy 2022 be well, and we'll see you next time. (upbeat music)

Published Date : Jan 17 2022


Shruthi Murthy, St. Louis University & Venkat Krishnamachari, MontyCloud | AWS Startup Showcase


 

(gentle music) >> Hello and welcome to today's session, theCUBE presentation of the AWS Startup Showcase powered by theCUBE. I'm John Furrier, your host of theCUBE. This is a session on breaking through with DevOps data analytics tools, cloud management tools with MontyCloud and cloud management migration. Thanks for joining me, I've got two great guests. Venkat Krishnamachari, who's the co-founder and CEO of MontyCloud, and Shruthi Sreenivasa Murthy, solution architect, Research Computing Group, St. Louis University. Thanks for coming on to talk about transforming IT, day one, day two operations. Venkat, great to see you. >> Great to see you again, John. >> So in this session, I really want to get into this cloud powerhouse theme you guys were talking about before on our previous Cube Conversations, and what it means for customers, because there is a real market shift happening here. And I want to get your thoughts on what the solution to the problem is, basically, that you guys are targeting. >> Yeah, John, cloud migration is happening rapidly. Not an option. It is the current and the immediate future of many IT departments, and any type of computing workloads. And applications and services these days are better served by cloud adoption. This rapid acceleration is where we are seeing a lot of challenges, and we've been helping customers with our platform so they can go focus on their business. So happy to talk more about this. >> Yeah, and Shruthi, if you can just explain your relationship with these guys, because you're a cloud architect, you can try to put this together. MontyCloud is your customer, talk about your solution. >> Yeah, I work at St. Louis University as the solutions architect for the office of the Vice President of Research. We can address St. Louis University as SLU, just to keep it easy. SLU is a 200-year-old university with more focus on research.
And our goal at the Research Computing Group is to help researchers by providing the right infrastructure and computing capabilities that help them to advance their research. So here, SLU's research portfolio is quite diverse, right? We do research on vaccines, economics, geospatial intelligence, and many other really interesting areas, and, you know, it involves really large data sets. So one of the Research Computing Group's ambitious plans is to move as many high-end computation applications from on-prem to AWS, and I lead all the cloud initiatives for St. Louis University. >> Yeah, Venkat and I, we've been talking many times on theCUBE, previous interviews, about, you know, the rapid agility that's happening with serverless and functions, and, you know, microservices; you start to see massive acceleration of how fast cloud apps are being built. It's put a lot of pressure on companies to hang on and manage all this, whether your security group is trying to lock down something, or it's just, it's so fast. The cloud development scene is really fun, and you're implementing it at a large scale. What's it like these days from a development standpoint? You've got all this greatness in the cloud. What's the DevOps mindset right now? >> SLU is slowly evolving itself as the AWS Center of Excellence here in St. Louis, and most of the workflows that we are trying to implement on AWS involve DevOps and, you know, CI/CD pipelines. And basically we want it ready and updated for the researchers, where they can use it and not have to wait on any of the resources. So it has a lot of importance. >> Research as code; it's like the internet, infrastructure as code is DevOps' ethos.
Venkat, let's get into where this all leads to, because you're seeing a culture shift in companies as they start to realize if they don't move fast, and the blockers that get in the way of the innovation, you really can't get your arms around this growth as an opportunity to operationalize all the new technology. Could you talk about the transformation goals that are going on with your customer base? What's going on in the market? Can you explain and unpack the high-level market around what you guys are doing? >> Sure thing, John. Let's bring up slide one. John, every legacy application, commercial application, even internal IT departments, they're all transforming fast. Speed has never been more important than in the era we are in today. For example, COVID research, you know, analyzing massive data sets to come up with some recommendations. They demand a lot from the IT departments so that researchers and developers can move fast. And IT departments are not only moving current workloads to the cloud, they're also ensuring the cloud is being consumed the right way, so researchers can focus on what they do best. What we're learning, working closely with customers, is that there are three steps, or three major, you know, milestones, that they like to achieve. I would start with the outcome, right? The important milestone IT departments are trying to get to is transforming such that they're directly tied to the key business objectives. Everything they do has to be connected to the business objective, which means the time and, you know, budget and everything's aligned towards what they want to deliver. IT departments we talk with have one common goal. They want to be experts in cloud operations. They want to deliver cloud operations excellence so that researchers and developers can move fast. But they're almost always under the, you know, they're time poor, right?
And there are budget gaps, and there is a talent and tooling gap. A lot of that is what's causing the, you know, challenges on their journey. And we have taken a methodical and deliberate position in helping them get there. >> Shruthi, what's your reaction to that? Because, I mean, you want it faster, cheaper, better than before. You don't want to have all the operational management hassles. You mentioned that you guys want to do this turnkey. Is that the use case that you're going after? Just research, kind of researchers having the access at their fingertips, all these resources? What's the mindset there, what's your expectation? >> Well, one of the main expectations is to be able to deliver it to the researchers on demand and need, and, you know, moving from a traditional on-prem HPC to cloud would definitely help, because, you know, we are able to give the right resources to the researchers and able to deliver projects in a timely manner, and, you know, with some additional help from the MontyCloud data platform, we are able to do it even better. >> Yeah, I like the onboarding thing, and you get value quickly, that's the cloud business model. Let's unpack the platform, let's go under the hood. Venkat, if you can take us through some of the moving parts under the platform; you guys have it up at the high level, and the market's obvious for everyone out there watching: cloud ops, speed, stability. But let's go look at the platform. Let's unpack that, do you mind picking up on slide two, and let's go look at what's going on in the platform. >> Sure. Let's talk about what comes out of the platform, right? They are directly tied to what the customers would like to have, right? Customers would like to fast-track their day one activities. Solution architects, such as Shruthi, their role is to try and help get out of the way of the researchers while quickly delivering cloud solutions, right?
Our platform acts like a seasoned cloud architect. It's as if you've instantly turned on a cloud solution architect that they can bring online and say, hey, I want help here to go faster. Our platform then has capabilities that help customers provision a set of governance contracts and drive consumption in the right way. One of the key things about driving consumption the right way is to ensure that we prevent a security, cost, or compliance issue from happening in the first place, which means you're shifting a lot of the operational burden to the left and making sure that when provisioning happens, you have guardrails in place; we help with that. The platform solves the problem without writing code. An important takeaway here, John, is that it was built for architects and administrators who want to move fast without having to write a ton of code. And it is also a platform where they can bring online autonomous bots that can solve problems. For example, when it comes to post-provisioning, everybody is in the business of ensuring security, because it's a shared model. Everybody has to keep an eye on compliance; that is also a shared responsibility, and so is cost optimization. So we thought, wouldn't it be awesome to have architects such as Shruthi turn on a compliance bot on the platform that gives them the peace of mind that somebody else, an autonomous bot, is watching over it 24 by 7 and making sure that these day two operations don't throw curveballs at them, right? That's important for agility. So the platform solves that problem with an automation approach. Going forward, on an ongoing basis, the operational burden is what gets IT departments. We've seen that happen repeatedly. You know this, John; maybe you have some thoughts on this. If you have some comments on how IT can face this, then maybe that's better to hear from you.
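The "shift left" guardrail idea Venkat describes, evaluating a provisioning request against policy before anything is created, can be sketched in a few lines. This is a hypothetical illustration, not MontyCloud's actual API; the rule names and the request shape are assumptions.

```python
# Hypothetical pre-provisioning guardrail check: policies are evaluated
# *before* a resource is created, so security, cost, or compliance issues
# are prevented rather than remediated after the fact.
# Rule names and the request format are illustrative assumptions.

GUARDRAILS = [
    ("encryption-at-rest", lambda req: req.get("encrypted", False)),
    ("no-public-ingress", lambda req: "0.0.0.0/0" not in req.get("ingress", [])),
    ("required-owner-tag", lambda req: "owner" in req.get("tags", {})),
]

def evaluate_request(request):
    """Return (allowed, violations) for a provisioning request."""
    violations = [name for name, check in GUARDRAILS if not check(request)]
    return (not violations, violations)

request = {
    "resource": "s3-bucket",
    "encrypted": True,
    "ingress": [],
    "tags": {"owner": "research-computing"},
}
allowed, violations = evaluate_request(request)
print(allowed, violations)  # → True []
```

A request that fails any rule is blocked at provisioning time, which is the point of the "prevent in the first place" approach rather than a post-hoc audit.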
>> No, well, first I want to unpack that platform, because I think one of the advantages I see here, and that people are talking about in the industry, is the collision between the security postures and rapid cloud development, because DevOps and cloud folks are moving super fast. They want things done at the point of coding and in the CI/CD pipeline, as well as any kind of changes; they want it fast, not weeks. They don't want to have someone blocking it, like a security team, so automation with the compliance is beautiful, because now the security teams can provide policies. Those policies can then go right into your platform. And then everyone's got the rules of the road, and then anything that comes up gets managed through the policy. So I think this is a big trend that nobody's talking about, because this allows the cloud to go faster. What's your reaction to that? Do you agree? >> No, precisely right. I'll let Shruthi jump on that, yeah. >> Yeah, you know, I just wanted to bring up one of the use cases where we used MontyCloud and their compliance bot. So REDCap, the Research Electronic Data Capture, also known as REDCap, is a HIPAA-compliant web application, and one of the flagship projects for the research group at SLU. REDCap was running on traditional on-prem infrastructure, so maintaining the servers and updating the application to its latest version was definitely a challenge. And also, granting access to the researchers had long lead times because of the rules and security protocols in place. So we wanted to be able to build a secure and reliable environment on the cloud where we could just provision on demand and, in turn, ease the job of updating the application to its latest version without disturbing the production environment. Because this is a really important application, most of the doctors and researchers at St. Louis University, the School of Medicine, and St. Louis University Hospital use it.
So given this challenge, we wanted to bring in MontyCloud's cloud ops and, you know, security expertise to simplify the provisioning. And that's when we implemented this compliance bot. Once it is implemented, it's pretty easy to understand, you know, what is compliant, what is noncompliant with the HIPAA standards, where it needs remediation efforts, and what we need to do. And again, that can also be automated. It's nice and simple, and you don't need a lot of cloud expertise to go through the compliance bot and come up with your remediation plan. >> What's the change in the outcome in terms of the speed, the turnaround time, the before and after? So before, you're dealing with obviously provisioning stuff and lead time, but just on the compliance closed loop, just to ask a question: I mean, there's a lot of manual work and maybe some workflows in there, but nothing as cool as an instant bot that solves a yes-or-no decision. And after MontyCloud, what are some of the times? Can you share any data there, just an order of magnitude? >> Yeah, definitely. So the provisioning was never simpler, I mean, we are able to provision with just one or two clicks, and then we have a better governance guardrail, like Venkat says, and I think, you know, to give you specific data, the compliance bot does more than 160 checks, and it's all automated, so when it comes to security, definitely we have been able to save a lot of effort on that. And I can tell you that our researchers are able to be 40% more productive with the infrastructure. And our research computing group is able to kind of save the time on, you know, the security measures and the remediation efforts, because we get customized alerts and notifications, and you just need to go in and, you know. >> So people are happier, right? People are getting along at the office or virtually, you know, no one is yelling at each other on Slack, hey, where's?
'Cause that's really the harmony here then, okay. I'm joking aside, this is a real cultural issue between speed of innovation and what could be viewed as a blocker, or just the time that, say, security teams or other teams might take to get back to you to make sure things are compliant. So that could slow things down; that tension is real, and there are some disconnects within companies. >> Yeah, John, that's spot on, and that means we have to do a better job, not only solving the traditional problems and making them simple, but for the modern work culture of integrations. You know, it's not uncommon, like you called out, for researchers and architects to talk in a Slack channel. Often they say, hey, I need this resource, or I want to reconfigure this. How do we make that collaboration better? How do you make the platform intelligent so that the platform can take some of the burden off of people, so that the platform can monitor, react, notify in a Slack channel, or, should the administrator say, hey, next time this happens, automatically go create a ticket for me; if it happens next time in this environment, automatically go run a playbook that remediates it. That gives a lot of time back and puts peace of mind into the process and the operating model that you have inherited, where you're trying to deliver excellence and could use more help, particularly because it is a very dynamic footprint. >> Yeah, I think this whole guardrail thing is a really big deal; I think it's like a feature, but it's a super important outcome, because if you can have policies that map into these bots that can check rules really fast, then developers will have the freedom to drive as fast as they want, and literally go hard, and then shift left and do the coding and do all their stuff on the hygiene side from day one on security; it's really a big deal. Can we go back to this slide again for the other project? There's another project on that slide.
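The escalation flow Venkat describes above — notify by default, open a ticket, or auto-run a remediation playbook when an event recurs — can be sketched as a small rule-based dispatcher. Event names and actions here are hypothetical, not MontyCloud's product behavior.

```python
# Minimal sketch of rule-based day-two escalation: an operator maps event
# types to actions (ticket, playbook, or a default Slack-style notification).
# All event types and action names are illustrative assumptions.

actions_log = []

def open_ticket(event):
    # Stand-in for an ITSM integration (e.g., creating a ticket).
    actions_log.append(f"ticket:{event['type']}")

def run_playbook(event):
    # Stand-in for an automated remediation playbook.
    actions_log.append(f"playbook:{event['type']}")

RULES = {
    "unencrypted-volume": run_playbook,   # safe to auto-remediate
    "budget-threshold": open_ticket,      # needs a human decision
}

def handle_event(event):
    action = RULES.get(event["type"])
    if action:
        action(event)
    else:
        actions_log.append(f"notify:{event['type']}")  # default: notify a channel

for e in [{"type": "unencrypted-volume"}, {"type": "budget-threshold"}, {"type": "new-service"}]:
    handle_event(e)
print(actions_log)
```

The design choice is the one described in the interview: the platform acts first on recurring, well-understood events, and only the ambiguous ones consume a person's time.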
You talked about REDCap; that was one. >> Yeah, so REDCap. What's the other project? >> So SCAER, the Sinfield Center for Applied Economic Research at SLU, is also known as SCAER. They're pretty data intensive, and they're into some really sophisticated research. The Center gets daily dumps of sensitive, de-identified geo data from various sources, and it's a terabyte or so every day, which becomes petabytes. So you know, we don't get the data in workable formats for the researchers to analyze. So the first process is to convert this data into a workable format and keep it analysis-ready, and doing this at a large scale has many challenges. We had to make this data available to a group of users too, and some external collaborators, which adds, you know, more challenges again, because we also have to do this without compromising on the security. To handle this large-size data, we had to deploy compute-heavy instances, such as, you know, multiple R5 12xlarge instances, and optimizing the cost and the resources deployed on the cloud, again, was a huge challenge. So that's when we had to take MontyCloud's help in automating the whole process of ingesting the data into the infrastructure and then converting it into a workable format. And this was all automated. And after automating most of the efforts, we were able to bring down the data processing time from two weeks or more to three days, which really helped the researchers. MontyCloud's data platform also helped us with automating the, you know, resource optimization process, and that in turn helped bring the costs down, so it's been pretty helpful. >> That's impressive, weeks to days. I mean, this is the theme, Venkat: speed, speed, speed, hybrid, hybrid. A lot of stuff happening. I mean, this is the new normal; this is going to make companies more productive if they can get the apps built faster.
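The ingest-and-convert step Shruthi describes — raw daily dumps arriving in a non-workable format and being normalized into analysis-ready records — might look roughly like the sketch below. The pipe-delimited input and field names are assumptions for illustration; the real SCAER data formats are not described in the interview.

```python
# Sketch of an automated ingest step: convert a raw delimited dump into
# analysis-ready records. The input format and field names are assumptions,
# not the actual SCAER data layout.

def normalize_dump(raw_text):
    """Convert a raw pipe-delimited dump into analysis-ready dict records."""
    records = []
    for line in raw_text.strip().splitlines():
        device_id, lat, lon, ts = line.split("|")
        records.append({
            "device_id": device_id.strip(),
            "lat": float(lat),
            "lon": float(lon),
            "timestamp": int(ts),
        })
    return records

raw = "dev-001| 38.63 | -90.23 |1632268800\ndev-002| 38.64 | -90.20 |1632268860"
records = normalize_dump(raw)
print(len(records), records[0]["device_id"])  # → 2 dev-001
```

In practice a step like this would be triggered automatically on each arriving dump and fanned out across workers, which is what turns a two-week manual process into days.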
What do you see, as the CEO and founder of the company? You're out there, you know, forging new ground with this great product. What do you see as the blockers from customers? Is it cultural, is it lack of awareness? Why aren't people jumping all over this? >> Oh, people are, right? They go at it in so many different ways, whether it's, you know, a one-person IT team or a massively well-funded IT team. Everybody wants to excel at what they're delivering in cloud operations; the path to that is the challenging part, right? What we are seeing is customers trying to build their own operating model, and they're writing custom code; then there's a lot of need for provisioning, governance, security, compliance, and monitoring. So they start integrating point tools, and then suddenly the IT department is now having, what they call, a tax, right? They have to maintain the technical debt while cloud services move fast. It's not uncommon for one of the developers or one of the projects to suddenly consume a brand new resource. And as you know, AWS throws up a lot more services every month, right? So suddenly you're not keeping up with that service. So we've been able to look at this from a point of view of: how do we get customers to focus on what they want to do, and automate things that we can help them with? >> Let me, let me rephrase the question, if you don't mind, 'cause I didn't want to give the impression that you guys aren't, you guys have a great solution, but I think when I see enterprises, you know, they're transforming, right? So it's not so much the cloud innovators, like you guys; it's really the mainstream enterprise. So I have to ask you, from a customer standpoint, what are some of the cultural or technical reasons why they're not going faster?
'Cause everyone's, maybe it's the pandemic forcing projects to be doubled down on, or some are going to be cut; this common theme of making things available faster, cheaper, stronger, more secure is what cloud does. What are some of the enterprise challenges that they have? >> Yeah, you know, it might be money, right? There are some cultural challenges, like Andy Jassy says; sometimes it's leadership, right? You want top-down leadership that takes a deterministic step towards transformation, then adequately funding the team with the right skills and the tools; a lot of that plays into it. And there's inertia typically in an existing process. And when you go to cloud, you can do 10X better; people see that, but it doesn't always percolate down to how you get there. So those challenges are compounded, and digital transformation leaders have to, you know, make that deliberate bet, be more KPI-driven. One of the things we are seeing in companies that do well is that the leadership decides that here are our top business objectives and KPIs; now, if we want the software and the services and the cloud division to support those objectives, when they take that approach, transformation happens. But that is a lot easier said than done. >> Well, you're making it really easy with your solution. And we've done multiple interviews. I've got to say you're really onto something, really, with this provisioning and the compliance bots. That's really strong, and that only gets stronger from there, with the trends with security being built in. Shruthi, got to ask you, since you're the customer: what's it like working with MontyCloud? It sounds so awesome. You're the customer, you're using it. What's your take on them? >> Yeah, they are doing a pretty good job in helping us automate most of our workflows.
And when it comes to keeping a tab on the resources and the utilization of the resources, so we can keep a tab on the cost in turn, you know, their compliance bot and their cost optimization tab are pretty helpful. >> Yeah, well, you're knocking projects down from three weeks to days; looking good, I mean, looking real strong. Venkat, this is the track record you want to see with successful projects. Take a minute to explain what else is going on with MontyCloud, other use cases that you see that are really primed for MontyCloud's platform. >> Yeah, John, quick minute there. Autonomous cloud operations is the goal. It's never done, right? There's always some work that you do hands-on. But if you set a goal such that customers have a solution that automates most of the routine operations, then they can focus on the business. So we are going to relentlessly focus on the fact that autonomous operations will have the digital transformation happen faster, and we can create a lot more value for customers if they deliver to their KPIs and objectives. So our investments in the platform are going more towards that. Today we already have a fully automated compliance bot, a security bot, a cost optimization recommendation engine, and a provisioning and governance engine. Where we're going is, we are enhancing all of this and providing customers a lot more fluidity in how they can use our platform: click to perform your routine operations, click to set up rules-based automatic escalation or remediation. Cut down the number of hops a particular process will take, and foster collaboration. All of this is what our platform is enhancing more and more. We intend to learn more from our customers and deliver better for them as we move forward. >> That's a good business model: make things easier, reduce the steps it takes to do something, and save money. And you're doing all those things with the cloud, and awesome stuff.
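A cost optimization recommendation engine of the kind Venkat lists can be sketched as a simple rightsizing pass: compare observed utilization against a threshold and flag candidates. The threshold, resource shape, and instance names below are illustrative assumptions, not MontyCloud's actual logic.

```python
# Hypothetical rightsizing recommendation pass: flag resources whose average
# CPU utilization sits below a threshold as downsize candidates.
# Thresholds and the resource record shape are illustrative assumptions.

def recommend(resources, cpu_threshold=20.0):
    """Return downsize recommendations for under-utilized resources."""
    recs = []
    for r in resources:
        if r["avg_cpu_percent"] < cpu_threshold:
            recs.append({"id": r["id"], "action": "downsize", "current": r["type"]})
    return recs

fleet = [
    {"id": "i-01", "type": "r5.12xlarge", "avg_cpu_percent": 8.5},
    {"id": "i-02", "type": "r5.2xlarge", "avg_cpu_percent": 71.0},
]
print(recommend(fleet))
# → [{'id': 'i-01', 'action': 'downsize', 'current': 'r5.12xlarge'}]
```

A real engine would look at memory, network, and time-of-day patterns as well, but the shape is the same: continuous measurement feeding click-to-apply recommendations rather than a quarterly manual audit.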
It's really great to hear your success stories and the work you're doing over there. Great to see researchers doing their jobs faster, and tons of data; you've got petabytes coming in. It's pretty impressive. Thanks for sharing your story. >> Sounds good, and, you know, one quick call-out is customers can go to MontyCloud.com today. Within 10 minutes, they can get an account. They get very actionable and valuable recommendations on where they can save costs and what security compliance issues they can fix. There's a ton of out-of-the-box reports. One click to find out whether you have some data that is not encrypted, or if any of your servers are open to the world. A lot of value that customers can get in under 10 minutes. And we believe in that model: give the value to customers. They know what to do with that, right? So customers can go sign up for a free trial at MontyCloud.com today and get the value. >> Congratulations on your success and great innovation. A startup showcase here with theCUBE coverage of AWS Startup Showcase: breakthrough in DevOps, Data Analytics and Cloud Management with MontyCloud. I'm John Furrier, thanks for watching. (gentle music)

Published Date : Sep 22 2021



Rupesh Chokshi, AT&T Cybersecurity | Fortinet Security Summit 2021


 

>> Narrator: From around the globe, it's theCUBE, covering Fortinet Security Summit, brought to you by Fortinet. >> Welcome back to theCUBE. Lisa Martin here at the Fortinet Championship security summit. Napa Valley has been beautiful and gracious to us all day. We're very pleased to be here. I'm very pleased to welcome a first-timer to theCUBE: Rupesh Chokshi, VP, AT&T Cybersecurity and Edge Solutions at AT&T Cybersecurity. Rupesh, welcome. >> Thank you. Thank you so much for having me, Lisa. I'm looking forward to our conversation today. >> Me too. First of all, we're in Napa, we're outdoors, it's a beautiful venue, no complaints, right? We're at a golf PGA tournament, very exciting. Talk to me about the AT&T-Fortinet relationship. Give me a good insight into the partnership. >> Sure, sure. So, as you said, you know, beautiful weather in California, Napa; it's my first time, uh, so it's kind of a new experience for me. Going back to your question in terms of the relationship between AT&T and Fortinet, uh, a long-lasting, you know, 10-plus-years, hand-in-hand relationship in terms of the product, the technology, the capabilities that we have brought together in the security space for our customers. So a strategic relationship, and I'm so thrilled to be here today, as Fortinet invited us to be part of the championship. >> Talk to me about your role, VP of AT&T Cybersecurity and Edge Solutions; give me a deep dive into what's in your purview. >> Sure, sure. So I, uh, sort of, you know, run the P&L, or the profit and loss center, for product management for all of AT&T Cybersecurity and Edge Solutions, and the whole concept behind putting the teams together is the convergence in networking and security. Um, so, you know, we are supporting the entire customer continuum, whether it's a Fortune 50 or Fortune 1000 company, mid-market customers, small businesses, or, you know, government agencies, whether it's a local government agency or a school district or a federal agency, et cetera. And my team and I focus on bringing new products and capabilities to the marketplace, you know, working with our sales team from an enablement perspective, go-to-market strategy. Um, and the whole idea is about, uh, you know, winning in the marketplace, right? So delivering growth and revenue to the business. >> Competitive differentiation. So we've seen so much change in the last year and a half. I know that's an epic understatement, but we've also seen the proliferation at the edge. What are some of the challenges that you're seeing and hearing from customers where that's concerned? >> As you stated, right, there's a lot happening in the edge. And sometimes the definition for edge varies when you talk with different people. Uh, the way we look at it is, you know, definitely focused on the customer edge, right? So if you think about many businesses, whether I am a quick-serve restaurant or a banking institute or a financial services or an insurance agency, or a retailer, et cetera, you know, lots of different branches, lots of different transformation taking place. So one way of approaching it is that when you think about the customer edge, you see a lot of virtualization, software-driven, a lot of IoT endpoints, et cetera, taking place. So the cyber landscape becomes more important. Now you're connecting users, devices, capabilities, your point-of-sale system to a multi-cloud environment, and that, you know, encryption of that data, the speed at which it needs to happen, all of that is very important.
And as we think ahead with 5g and edge compute and what that evolution revolution is going to bring, it's going to get even more excited because to me, those are kind of like in a playgrounds of innovation, but we want to do it right and keep sort of, you know, cyber and security at the core of it. So we can innovate and keep the businesses safe. >>How do you help customers to kind of navigate edge cybersecurity challenges and them not being synonymous? >>That's a great, great question. You know, every day I see, you know, different teams, different agendas, different kinds of ways of approaching things. And what I tell customers and even my own teams is that, look, we have to have a, a blueprint and architecture, a vision, you know, what are the business outcomes that we want to achieve? What the customer wants to achieve. And then start to look at that kind of technology kind of convergence that is taking place, and especially in the security and the networking space, significant momentum on the convergence and utilize that convergence to create kind of full value stack solutions that can be scaled, can be delivered. So you are not just one and done, but it's a continuous innovation and improvement. And in the security space, you need that, right. It's never going to be one and done. No >>We've seen so much change in the last year. We've seen obviously this rapid pivot to work from home that was overnight for millions and millions of people. We're still in that too. A fair amount. There's a good amount of people that are still remote, and that probably will be permanently there's. Those that are going to be hybrid threat landscape bloated. I was looking at and talking with, um, 40 guard labs and the, the nearly 11 X increase in the last 12 months in ransomware is insane. And the ransomware as a business has exploded. So security is a board level conversation for businesses I assume in any. >>Absolutely. Absolutely. 
I agree with you, it's a board-level conversation. Security is not about picking a tool; it's about the business risk and what we need to do about it. You mentioned a couple of interesting stats, right? So there are two things I'll share. One is that we've seen 440 petabytes of data cross the AT&T network in one average business day. So, 440 petabytes of data; most people don't know what that is. So you can imagine the amount of information, and you can imagine the amount of security apparatus that you need to monitor, protect, and defend, and to provide the right kind of insights. And the other thing that we saw, along the same lines of what you were mentioning, is significant ransomware but also significant DDoS attacks, right? We would say around a 300%-plus increase in the DDoS mitigations that we did, year over year.
So there's a lot of focus on protecting the customer, securing the endpoints, the applications, the data, the network, the devices, et cetera. The other two points that I want to mention in this space, again, with all of this happening: first, you have to focus on innovation at the speed of light. So artificial intelligence, machine learning, the software capabilities that are more forward-looking have to be applied in the security space more than ever before. And second, we're seeing alliances, right? We're seeing this sort of crowdsourcing of action on the good guys' side. You see the national security agencies leaning in, saying, hey, let's build this concept of defense together, because we're all going to be doing business together. Whether it's public to public, public to private, or private to private, all of those different entities have to work together. 
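As a rough back-of-the-envelope illustration (the 440-petabyte figure is the one quoted above; the arithmetic and units are mine, assuming decimal petabytes), that daily volume implies a striking sustained data rate:

```python
# Scale of "440 petabytes of data in one average business day".
petabytes_per_day = 440
bytes_per_day = petabytes_per_day * 10**15   # decimal petabytes
seconds_per_day = 24 * 60 * 60

bytes_per_second = bytes_per_day / seconds_per_day
terabits_per_second = bytes_per_second * 8 / 10**12

print(f"{bytes_per_second / 10**12:.1f} TB/s sustained")   # -> 5.1 TB/s
print(f"{terabits_per_second:.0f} Tbit/s sustained")       # -> 41 Tbit/s
```

In other words, roughly five terabytes every second, around the clock, which is the scale any monitoring and defense apparatus on such a network has to keep up with.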
So it comes down to having security be a digital trust. >>Do you think the Biden administration's fairly recent executive order was a catalyst for that? >>I give the president and the administration a lot of kudos for taking it head-on and saying, look, we need to take care of this. And I think there's the other acknowledgement that it is not just one company or one agency, right? It's the whole ecosystem that has to come together, not just nationally but at the global level, because we live in a hyper-connected world. And one of the things that you mentioned was this hybrid work, and I was joking with somebody the other day that where real estate's word is location, location, location, in network security and networking the word is hybrid, hybrid, hybrid, because you've got a hybrid workforce, a hybrid cloud, a hyper-connected enterprise. So we're going to be in this sort of hybrid mode for quite some time, and it has to >>Be secure. And, you know, with all the disruption of folks going to remote work and trying to get connected, everyone on video conferencing, kids in school, a spouse working, maybe kids gaming, the connectivity alone has been a huge challenge. And Fortinet is doing a lot there, including with Linksys, especially to help that remote environment, because we know a lot of it is going to remain. But in the spirit of transformation, you had a session today here at the security summit that talked about a transformation plan. We talk about that word at every event: digital transformation, infrastructure transformation, IT security. In what context were you talking about transformation in IT today? What does an IT transformation plan mean for your customers? >>That's a great question, because I sometimes feel it's an overused term, right? You just take something and add it: it's IT 
transformation, network transformation, digital transformation. But what we were talking about this morning was more around, again, network security and the transformation that customers have to do. We hear a lot about SASE and the convergence; we are seeing SD-WAN take off significantly from an adoption perspective, application-aware experiences, et cetera. Customers are looking at doing things like internet offload and having connectivity back into the SaaS applications, secure connectivity back into the SaaS applications, which directly ties to their outcomes. So the three tenets of my conversation today were: first, make sure you have a clear view of the business outcomes that you want to accomplish. The second was to work with a trusted advisor, and AT&T in many cases is providing that trusted-advisor perspective. And third, going back to the one and done: it is not a one and done, right? This is a continuous process. So we have to be thinking about whether we are doing it in a way that we will always be future-ready, always able to deal with the security threats that we don't even know about today. >>You bring up the term future-ready, and I hear that all the time. When you think about it, man, we really weren't future-ready when the pandemic struck; there was so much that wasn't there. And when I was talking with Fortinet earlier, I said, you know, how much has the pandemic been a catalyst for so much innovation? I imagine it has been the same thing for you. >>Absolutely. And, you know, I remember the early days, February, March, where we were all just trying to better understand, right, what is this going to be? And the first thing was, hey, we're all going to work remote. Is it a one-week thing? Is it a two-week thing? Right? 
And then if you're the CIO or the CISO or the other folks who are worried about it: how am I going to deliver the productivity tools, right? One customer we work with, again, talk about innovation, said, hey, I have 20,000 call center agents that I need to take remote. How do you deliver connectivity and security? Because that call center agent is the bloodline of that business, interacting with their end customers. So I think it has accelerated what would have happened over 10 years into 18 months, and it's still unknown, right? So we're still discovering the future. >>There will be more silver linings to come, I think. I want to pick your brain on SASE adoption trends. One of the things I noticed in the abstract of your session here was that, according to Gartner, the convergence of networking and security into the SASE framework is the most vigorous technology trend coming out of 2020. That's a big description, most vigorous. >>It's a big description, a big statement, and we are definitely seeing it. We saw some of that in the second half of last year, as organizations were getting more organized to deal with the pandemic and the change; then coming into this year, it's even more accelerated. And what I mean by that is, I look at three things, right? One is going back to hybrid work, remote work, work from anywhere: how do you continue to deliver a differentiated, highly secure experience to that workforce? Because productivity and human capital are very important, right? The second is that there's a back-and-forth on branch transformation. So yes, restaurants are opening back up, retailers are opening back up, so businesses are thinking about how do I do that branch transformation? And then the third is explosive business IoT. 
So the IoT endpoints that you put into manufacturing, into airports, in many industries, we continue to see that. So when you think about SASE, the framework, it's about delivering a framework that allows you to protect and secure all of those endpoints at scale. And I think that trend is real. I've seen customer demand, we've signed a number of deals, and we're implementing them as we speak across all verticals: healthcare, retail, finance, manufacturing, transportation, government agencies, small businesses, mid-sized businesses. >>Nope, nope, not at all. Talk to me about, I'm curious, you've been at AT&T a long time. You've seen a lot of innovation. Talk to me about your perspective on seeing that, and then what do you think is a silver lining that has come out of the acceleration of the last 18 months? >>Sure. And I get that question, you know, I've been with AT&T a long time, right? And I still remember the day I joined AT&T Labs. It was one of my dreams coming out of engineering school; every engineer wants to go work for a brand that is recognized, right? And I drove from Clemson, South Carolina to Holmdel, New Jersey, and, as you can see, I still have the smile on my face. So I think innovation is key, and that's what we do at AT&T. I think about the ability to move fast: what the pandemic has taught us is speed, right? The speed at which we have to move, the speed at which we have to collaborate, the speed at which we have to deliver. Agility has become the differentiator for all of us. >>And we're focusing on that. I also feel that there have been times when product organizations and technology organizations struggle with jumping this sort of S-curve, right? Which is: hey, I'm holding onto something. Do I let go or not let go? 
And I think the pandemic has taught us that you have to jump the S-curve, you have to accelerate, because that is where you need to be. In a way, going back to the SASE trend, right, it is something that is real, and it's going to be there for the next three to five years. So let's get ready. >>I call that getting comfortably uncomfortable; no business is safe if it rests on its laurels these days. I think we've learned that. Speaking of speed, I want to get your perspective on 5G: where you guys are at, and when do you think it's going to be really impactful to businesses, consumers, first responders? >>The 5G investments are happening, and they will continue to happen. And if you look at what's happened with the network, what AT&T has announced, we've gotten a lot of kudos for our 5G network, for our mobile network, for our wireless network. And we are starting to see that innovation, as we anticipated, happening for the enterprise customers first, right? So there's a lot of robotics or warehouse equipment that needs to connect with low-latency, high-speed, highly secure data movement, and edge compute that sits next to the campus, delivering a very different application experience. So we're seeing that momentum. I think on the consumer side it is starting to come in, and it's going to take a little bit more time as the devices and the applications catch up to what we are doing in the network. And if you think about the value creation that has happened on the mobile networks: companies like Uber or Lyft, right, did not exist before, and many businesses are dependent on that network. And I think it will carry on. 
And I think in the next year or two we'll see firsthand the outcomes and the value that it is delivering. You go to a stadium, AT&T Stadium in Dallas, you know, 5G-enabled, and the experience is very different. >>I can't wait to go to a stadium again and see a game or live music. Oh, that sounds great. Rupesh, thank you so much for joining me today, talking about what AT&T is doing with Fortinet, the challenges that you're helping your customers combat at the edge, and the importance of really being future-ready. >>Yes, thank you. Thank you so much. Really appreciate you having me, and thanks to Fortinet for inviting us to be at this event. >>Thank you. For Rupesh Chokshi, I'm Lisa Martin. You're watching theCUBE at the Fortinet Championship Security Summit.

Published Date : Sep 14 2021



Mike Tarselli, TetraScience | CUBE Conversation May 2021


 

>>Mhm. >>Yes, welcome to this CUBE Conversation. I'm Lisa Martin, excited about this conversation: it's combining my background in life sciences with technology. Please welcome Mike Tarselli, the chief scientific officer at TetraScience. Mike, I'm so excited to talk to you today. >>Thank you, Lisa, and thank you very much to theCUBE for hosting us. >>Absolutely. So we talk about cloud and data all the time. This is going to be a very interesting conversation, especially because the events of the last, what are we on, 14 months and counting have really accelerated the need for drug discovery, and everyone's kind of focused on that. But I want you to talk with our audience about TetraScience: who you guys are, what you do. You were founded in 2014, and you just raised 80 million in Series B, but give us an idea of who you are and what you do. >>Got it. TetraScience, what are we? We are digital plumbers, and that may seem funny, but really we are taking the world of data and we are trying to resolve it in such a way that people can actually pipe it from the data sources they have, in a vendor-agnostic way, to the data targets in which they need to consume that data. So, bringing that metaphor a little bit more to life sciences: let's say that you're a chemist and you have a mass spec and an NMR and some other piece of technology, and you need all of those to speak the same language, right? Generally speaking, all of these are going to be made by different vendors. They're all going to have different control software, and they're all going to have slightly different ways of sending their data in. TetraScience takes those all in. We bring them up to the cloud, our cloud-native solution. We harmonize them: we extract the data first, and then we actually put it into what we call our special sauce, our intermediate data schema, to harmonize it. So you have sort of a picture and a diagram of what the prototypical mass spec or HPLC 
or cell counting data should look like. And then we build pipelines to export that data over to where you need it. So if you need it to live in an ELN or a LIMS or in a visualization tool like Spotfire or Tableau, we've got you covered. So again, we're trying to pipe things from left to right, from sources to targets, and we're trying to do it with scientific context. >>That was an outstanding description: data plumbers who have secret sauce. Never would have thought I would have heard that when I woke up this morning. But I'm going to unpack this more, because one of the things that I read in the press release that went out just a few weeks ago announcing the Series B funding, it said that TetraScience is pioneering a $300 billion greenfield data market and operating, and this is what got my attention, without a direct cloud-native and open-platform competitor. Why is that? >>That's right. If you look at the way pharma data is handled today, the solutions out there tend to be either on-prem solutions with a sort of license model, or a distribution into a company and therefore maintenance costs, professional services, etcetera. Or you're looking at somebody who is maybe cloud, but they're cloud second: they started with their on-prem journey and then said, we should go and build out something in the cloud, we should migrate. However, we're cloud first, cloud native. So that's one first strong point. And the second is that, in terms of data harmonization and in terms of looking at data in a vendor-agnostic way, many companies claim to do it. But the real test of the mettle, we would say, is when you can look at this with the scientific contextualization we offer. So yes, you can collect the data and put it on a cloud. Okay, great. Yes, you may be able to do an extract, transform, and load and move it to somewhere else. Okay. 
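The "digital plumbing" described a moment ago (vendor-specific instrument output parsed by per-vendor connectors, normalized into an intermediate schema, then exported to targets such as an ELN or LIMS) can be sketched roughly as follows. All class, field, and vendor names here are hypothetical illustrations, not TetraScience's actual intermediate data schema:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Hypothetical harmonized record for a chromatography run; the field names
# are illustrative only, not the real TetraScience IDS.
@dataclass
class HarmonizedRun:
    instrument_type: str            # e.g. "hplc", "mass_spec", "cell_counter"
    vendor: str                     # who made the instrument
    sample_id: str                  # scientific context carried with the data
    measurements: Dict[str, float]  # normalized values in agreed units
    raw_source: dict                # original payload kept for traceability

# Each vendor-specific parser maps its own format into the common schema.
def parse_vendor_a(payload: dict) -> HarmonizedRun:
    return HarmonizedRun("hplc", "vendor_a", payload["SampleName"],
                         {"retention_time_min": payload["RT"]}, payload)

def parse_vendor_b(payload: dict) -> HarmonizedRun:
    # vendor B reports seconds, so the connector converts to minutes
    return HarmonizedRun("hplc", "vendor_b", payload["sample"]["id"],
                         {"retention_time_min": payload["rt_seconds"] / 60.0},
                         payload)

PARSERS: Dict[str, Callable[[dict], HarmonizedRun]] = {
    "vendor_a": parse_vendor_a,
    "vendor_b": parse_vendor_b,
}

def harmonize(source: str, payload: dict) -> HarmonizedRun:
    """Pipe a vendor-specific payload into the vendor-agnostic schema."""
    return PARSERS[source](payload)

# Two instruments, two formats, one harmonized view ready for export.
runs: List[HarmonizedRun] = [
    harmonize("vendor_a", {"SampleName": "S-001", "RT": 2.5}),
    harmonize("vendor_b", {"sample": {"id": "S-002"}, "rt_seconds": 150}),
]
print([(r.sample_id, r.measurements["retention_time_min"]) for r in runs])
```

The design point the sketch tries to capture is the one made in the interview: adding a new instrument means adding one parser, while every downstream target keeps consuming the same schema, and the raw payload travels along for traceability.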
But can you actually do that from front to back while retaining all the context of the data, while keeping all of the metadata in the right place, with veracity, with GxP readiness, with data fidelity? And when it gets over to the other side, can somebody say: oh yeah, that's all the data from all the HPLCs we control. I got it. I see where it is, I see where to go get it, I see who created it. I see the full data chain and validation landscape, and I can rebuild that back, and I can look back at the old raw source files if I need to. I challenge someone to find another direct company that's doing that today. >>You talk about that context, and the thing that sort of surprises me is, with how incredibly important scientific discovery is and has been since the beginning of time, why has nobody come out in the last seven years and tried to facilitate this for life sciences organizations? >>Right. I would say that people have tried, and I would say that there are definitely strides being made in the open-source community, in the data science community, and inside pharma and biotech themselves, on this sort of build motif, right? If you are inside of a company and you understand your own ontology and processes, well, you can probably design an application or a workflow using several different tools in order to get that data there. But will it be generally useful to the bioscience community? One thing we pride ourselves on is that when we productize a connector, or an integration as we call it, we actually do it with many different companies' generic cases in mind. So we say, okay, you have an HPLC problem over at this top pharma, you have an HPLC problem at this biotech, and you have another one at this CRO. Okay, what are the common points between all of those? Can we actually distill that down to a workflow everyone's going to need? For example, a compliance workflow. So, everybody needs compliance, right? 
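A compliance workflow of the kind just mentioned (did someone sign off, did the file arrive uncorrupted, is the provenance known) can be sketched in miniature. The audit-record fields below are hypothetical; real systems such as Empower or UNICORN expose sign-off and audit trails through their own interfaces:

```python
import hashlib
import json

def sha256_of(data: bytes) -> str:
    """Content fingerprint used to detect corruption in transit."""
    return hashlib.sha256(data).hexdigest()

def compliance_check(file_bytes: bytes, audit_record: dict) -> list:
    """Return a list of findings; an empty list means the file passes."""
    findings = []
    if audit_record.get("signed_off_by") is None:
        findings.append("missing sign-off")      # did you sign off on that?
    if sha256_of(file_bytes) != audit_record.get("sha256"):
        findings.append("checksum mismatch")     # was the data corrupted?
    if not audit_record.get("source_instrument"):
        findings.append("unknown provenance")    # who or what created it?
    return findings

# A toy instrument result plus its (hypothetical) accompanying audit record.
raw = json.dumps({"sample": "S-001", "rt": 2.5}).encode()
record = {"signed_off_by": "j.doe",
          "sha256": sha256_of(raw),
          "source_instrument": "hplc-07"}

print(compliance_check(raw, record))        # -> []
print(compliance_check(b"tampered", record))
```

Because the checks only look at the harmonized record and its metadata, the same routine can run regardless of which vendor produced the underlying file, which is what makes a workflow like this "generically useful" in the sense described above.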
So we can actually look into an Empower or a UNICORN operation and we can say: okay, did you sign off on that? Did it come through the right way? Was the data corrupted, etcetera? That's going to be generically useful to everybody, and that's just one example of something we can do right now for anybody in biopharma. >>Let's talk about the events of the last 14 months or so. You mentioned 10x revenue growth in 2020. COVID really highlighted the need to accelerate drug discovery, and we've seen that. But talk to me about some of the things that TetraScience has seen and done to facilitate that. >>Yeah, this past 14 months. I will say that the global pandemic has been a challenge for everyone involved, ourselves as well. We've basically gone to a fully remote workforce. We have tried our very best to stay on top of it with remote collaboration tools, with Jira, with GitHub, with everything. However, I'll say that it's actually been some of the most successful time in our company's history, because of that lack of any kind of friction from the physical world, right? We've really been able to dig down and dig deep on our integrations, our connections, our business strategy. And because of that, we've actually been able to deliver a lot of value to customers, because, let's be honest, we don't actually have to be on-prem for what we're doing. Since we're not an on-prem solution and we're not an original equipment manufacturer, we don't have to say, okay, we're going to go plug the thing into the HPLC. We don't have to be there to tune the specific wireless protocols or your AWS protocols; it can all be done remotely. So it's about building good relationships, building trust with our colleagues and clients, and making sure we're delivering and over-delivering every time. 
And then people say, great: when I select a Tetra solution, I know what's going right to the cloud, I know I can pick my hosting options, and I know you're going to keep delivering more value to me every month. >>I like that you make it sound simple, and you bring up a great point: one of the many things that was accelerated this last year-plus is the need to be remote, the need to be able to still communicate and collaborate, but also the need to establish and really foster those relationships that you have with existing customers and partners as everybody was navigating very different challenges. I want to talk now about how you're helping customers unlock the problem that is in every industry: data silos and point-to-point integrations where things can't talk to each other. Talk to me about how you're helping customers, like, where do they start with Tetra? Where do you start that kind of journey to unlock data value? >>Sure, the journey to unlock data value, great question. So first I'll say that customers tend to come to us; it's the oddest thing, and we're very lucky and very grateful for this, but they tend to have heard about what we've done with other companies, and they come to us and say: listen, we've heard about a deployment you've done with Novo Nordisk, and I can say that, for example, because it's publicly known. So they'll say, we hear about what you've done, we understand that you have deep expertise in chromatography or in bioprocess. And they'll say: here's my really sticky problem, what can you do here? And invariably they're going to lay out a long list of instruments and software for us. We've seen lists that go up past 2,000 instruments. And they'll say, here are all the things we need connected, here are four or five different use cases. 
They'll say: we'll bring you start to finish, we'll give you 20 scientists in the room to talk through them. And then we get somewhere between two and four weeks to think about that problem and come back and say, here's how we might solve that. Invariably, all of these problems are going to have a data silo somewhere: there's going to be an org where preclinical doesn't see biology, or biology doesn't see screening, etcetera. So we say, all right, give us one scientist from each of those, hence establishing trust, establishing input from everybody. And collaboratively we'll work with you: we'll set up an architecture diagram, we'll set up a first version of a prototype connector, we'll set up all the stuff they need in order to get moving. We'll deliver value upfront, before we've ever signed a contract, and we'll say, is this a good way to go for you? And they'll say either no, thank you, or they'll say yes, let's go forward: let's do a pilot, a proof of concept, or let's do a full production rollout. And invariably this data-silos problem can usually be resolved by, again, these genericized connectors and our intermediate data schema, which moves things into a common format, right? And then also organizationally: since we're already connecting all these groups in this problem statement, they tend to continue working together even when we're no longer front and center, right? They say, oh, we set up that thing together; let's keep thinking about how to make our data more available to one another. >>Interesting. So culturally, within the organization, it sounds like Tetra is having significant influence on the collaboration but also on data ownership. Sometimes that becomes a sticky situation, where there are owners and they want to retain that control, right? You're laughing, you've been through this before. 
I'd like to understand a little bit more, though, about the conversation, because typically we're talking about tech, but we're also talking about science. Are you having these technical conversations with scientists as well as IT? What does that actual team from the customer perspective look >>like? Oh, sure. So the technical conversation and the science conversation are going on sometimes in parallel and sometimes in the same thread entirely. Oftentimes the folks who reach out to us first tend to be the scientists. They say: I've got a problem in my research, and IT will probably hear about this later, but let's go. And then we will invariably say, well, let's bring in your R&D IT counterparts, because we need them to help solve it, right? But yes, we are usually having those conversations in parallel at first, and then we unite them into one large discussion. And we have varied team members here on the Tetra side: we have me from science, along with multiple other PhD holders and pharma lifers in our business, who can look at the scientific use cases and recommend best practices and visualizations for them. We also have a lot of solutions architects and delivery engineers who can look at it from the angle of how the platform should assemble the solution and how we can carry it through. And those two or three groups really unite to provide a unified front and to help the customer through, and the customer ends up providing the same thing as we do. So they'll give us, on the one call, a technical expert, a data and QA person, and a scientist, all in one group, and they'll say: you guys work together to make sure that our org is best represented here. And I think that that's actually a really productive way to do this, because we end up finding out things and going deeper into the connector than we would have otherwise. 
>>It's very collaborative, and I bet those are such interesting conversations to be a part of. So is part of the conversation there helping them understand how to establish a common vision for data across their organization? >>Yes, that tends to be a sort of further-reaching conversation. I'll say in the initial, short-term conversation we don't usually say: you three scientists or engineers are going to change the fate of the entire org. That's maybe a little outside of our scope for now. But yes, that first group tends to describe a limited solution. We help to solve that and then go one step past it, and then they'll nudge somebody else in the org and say: do you see what Tetra did over here? Maybe you could use it over here in your process. And so in that way we sort of get this cultural buy-in and then increased collaboration inside a single company. >>Talk to me about some customers that you've worked with. I'd especially love to know some of the ones that you've helped in the last year, where things have been so incredibly dynamic in the market. But give us an insight into maybe some specific customers that work with you guys. >>Sure, I'd love to. I'll speak to the ones that are already in our case studies; you can go anytime to tetrascience.com and read all of these. We've worked with Prelude Therapeutics, for example. We looked at a high-throughput screening cascade with them, and we were able to take an instrument that was basically unloved in a corner, a Tecan liquid handler, hook it up into their ELN and their screening application, bring in and incorporate data from an external party, and merge all of that together so they could actually see, out the other side, a screening cascade and see their data in minutes as opposed to hours or days. We've also worked, as you've seen in the press release, with Novo Nordisk; we worked on automating much of the backend for their chromatography fleet. 
And finally, we've also worked with several smaller biotechs, looking at sort of instantiation. They say: well, we've just started, we don't have an ELN, we don't have a LIMS, we're about to buy these 50 instruments, what can you do with us? And we'll actually help them to scope what their initial data storage and harmonization strategy should even be. So we're really everywhere from the enterprise, where it's fleets of thousands of instruments and we're really giving data to a large number of scientists worldwide, all the way down to the small biotech with 50 people, where we're helping add value. >>So a big range there in terms of the data conversation. I'm curious, have you seen it change in the last year-plus with respect to elevating to the C-suite level, or the board saying we've got to be able to figure this out? Because, as we saw with the race for the COVID-19 vaccine for example, time to value and time to discovery is so critical. Is the C-suite or board involved in having conversations with you guys? >>It's funny, because they are, but they come in a little later. We tend to be a scientist- and user-driven solution. So at the beginning we get a power user, an engineer, or an R&D IT person in who really has a problem to solve. And as they are going through and developing with us, eventually they're going to need approval for the time, the resources, or the budget, and then they'll go up to their VP or their CIO or someone else at the executive level and say, let's start having more of this conversation. As a tandem effort, we are starting to become involved in some thought-leadership exercises with some larger firms. And we are looking at the strategic aspect through conferences, through white papers, etcetera, to speak more directly to that C-suite and to say: hey, you know, we could fit your industry's data motif. And then, one other thing you said: time to value. 
So I'll say that the Tetra Science executive team actually looks at that as a tracked metric. So we're actually looking at driving that down every single week. >>That's outstanding. That's a hard one to measure, especially in a market that is so dynamic, but that time to value for your customers is critical. Again, COVID sort of surfaced a number of things, and some silver linings, but being able to get hands on the data, to make sure that you can actually pull insights from it and accelerate and facilitate drug discovery, that time to value there is absolutely critical. >>Yeah. I'll say, if you look at the companies that really, you know, went first and foremost, let's look at Moderna, right? Not our customer, by the way, but we'll look at Moderna quickly as an example. Everything they do is automated, right? Everything they do is cloud first. Everything they do is global collaboration networks, you know, with harmonized data, etcetera. That is the model we believe everyone's going to go to in the next three to five years. If you look at the fact that Moderna went from sequence to initial vaccine in what, 50, 60 days, that kind of delivery is what the market will become accustomed to. And so we're going to see many more pharmas and biotechs move to that cloud-first, distributed model. All data has to go in somewhere centrally, everyone has to be able to benefit from it, and we are happy to help them get there. >>Well, you know, setting a new record for pace is key there, but it's also one of those silver linings that has come out of this, to show that not only was that critical to do, but it can be done. We have the technology, we have the brain power to be able to bring all those users together, harmonize the data, and drive this. So, last question: give me an insight into some of the things that are ahead for Tetra Science the rest of this year. >>Oh gosh, so many things.
One of the nice parts about having funding in the bank and having a dedicated team is the ability to do more. So first, of course, for our enterprise pharma and biopharma clients, there are plenty more use cases, workflows, and instruments. We've just about scratched the surface, but we're going to keep growing our integrations and connectors. First of all, we want to be like a Netflix for connectors. You know, we just want you to come and say, look, do they have the connector? No? Well, don't worry, they're going to have it in a month or two. So that we can be basically the Swiss Army knife for every single connector you can imagine. Then we're going to be developing a lot more data apps, so things that you can use to derive value from your data. And then again, we're going to be looking at helping to educate everybody. So how is cloud useful? Why go to a system with harmonization? How does this influence your compliance? How can you do bi-directional communication? There's lots of ways: once you have harmonized, centralized data, you can do things with it to influence your org and drive times down again from days and weeks to minutes and seconds. So let's get there. And I think we're going to try doing that over the next year. >>That's awesome. Never a dull moment. And you should partner with your marketing folks, because you talked about data plumbing, the secret sauce, and becoming the Netflix of connectors. These are three gems that you dropped on us this morning, Mike. This has been awesome. Thank you for sharing with us what Tetra Science is doing, how you're really helping to fast-track a lot of the incredibly important research that we're all really dependent on, and helping to heal the world through data. It's been a pleasure talking with you. >>Hey, Lisa, really quickly: it's a team effort. The entire Tetra Science team deserves credit for this.
I'm just lucky enough to be able to speak to you. So thank you very much for the opportunity. >>Absolutely, and cheers to the whole Tetra Science team. Keep up the great work, guys. For Mike Tarselli, I'm Lisa Martin. You're watching this Cube Conversation.

Published Date : May 13 2021

Jerome Lecat, Scality and Chris Tinker, HPE | CUBE Conversation


 

(uplifting music) >> Hello and welcome to this Cube Conversation. I'm John Furrier, host of theCube here in Palo Alto, California. We've got two great remote guests to talk about some big news hitting with Scality and Hewlett Packard Enterprise. Jerome Lecat, CEO of Scality, and Chris Tinker, Distinguished Technologist from HPE, Hewlett Packard Enterprise. Jerome, Chris, great to see you both, Cube alumni from the original gangster days, as we'd say back then when we started almost 11 years ago. Great to see you both. >> It's great to be back. >> Good to see you, John. >> So, really compelling news around kind of this next generation storage, cloud native solution. It's really kind of an impact on the next gen, I call it next gen: DevOps meets application, modern application world, and something we've been covering heavily. There's some big news here around Scality and HPE offering a pretty amazing product. You guys introduced essentially the next gen piece of it, Artesca, which we'll get into in a second, but this is a game-changing announcement you guys made. This is an evolution continuing, I think it's more of a revolution, but, you know, storage is kind of the abstraction layer in this evolution to an app-centric world. So talk about this environment we're in, and we'll get to the announcement, which is object store for modern workloads, but this whole shift is happening, Jerome. This is a game changer to storage, and customers are going to be deploying workloads. >> Yeah. I mean, I personally really started working on Scality more than 10 years ago, close to 15 now. And if we think about it, the cloud has really revolutionized IT. And within the cloud, we really see layers and layers of technology. I mean, it all started around 2006 with Amazon and Google and Facebook finding ways to do, initially, what was consumer IT at very large scale, very low cost, incredible reliability, and then it slowly creeped into the enterprise.
And at the very beginning, I would say that everyone was kind of wizards, trying things and really coupling technologies together. And to some degree we were some of the first wizards doing this, but we're now close to 15 years later, and there's a lot of knowledge, a lot of experience, a lot of tools. And this is really a new generation. I'll call it cloud native, or you can call it next gen, whatever, but there is now enough experience in the world, both at the development level and at the infrastructure level, to deliver truly distributed, automated systems that run on industry standard servers. Obviously good quality servers deliver a better service than others, but there is now enough knowledge for this to truly go at scale. And call this cloud, or call this cloud native: really the core concept here is to deliver scalable IT at very low cost, at a very high level of reliability, all based on software. And we've been participating in this motion, but we feel that now the breadth of what's coming is at a new level, and it was time for us to think, develop, and launch a new product that's specifically adapted to that. And Chris, I will let you comment on this, because the customers, or some of them, you know the customers, you work with them. >> Well, you know, you're right. I've been, like you, in this industry for a long time, 20, 21 years at HPE in engineering. And look at how the actual landscape has changed with how we're doing scale-out, software-defined storage for particular workloads. And where the catalyst has evolved here is in analytics. Normally, what was only done in the three-letter acronyms, the massively scale-out parallel file systems, that application space has encroached into the enterprise world, where the enterprise world needed a way to actually take a look at, how do I simplify the operations?
How do I actually bring about an application that can run in the public cloud, or on premise, or hybrid? How do I look at a workload-optimized stack that aligns the actual cost to the actual analytics that I'm going to be doing, the workload that I'm going to be doing, and be able to bridge those gaps, be able to spin this up and simplify operations? And you know, if you are familiar with these parallel file systems, which by the way we actually have on our truck, I do engineer those, they do have their own unique challenges. But in the world of enterprise, where customers are looking to simplify operations and then take advantage of new application analytic workloads, whether it be smart Mesa, whatever it might be, right, I mean, if I want to spin up a MongoDB, or maybe, you know, an Elasticsearch capability, how do I actually take those technologies and embrace a modern scale-out storage stack, without breaking the bank, but also providing simple operations? And that's why we look to object storage capabilities, because it brings us this massive parallelization. Back to you, John. >> Well, before we get into the product, I want to just touch on one thing, Jerome, you mentioned, and Chris, you brought up: the DevOps piece, next gen, next level, whatever term you use. It is cloud native, and cloud native has proven that DevOps and infrastructure as code are not only legit, they're being operationalized in all enterprises, and add security in there, you have DevSecOps. This is the reality, and hybrid cloud in particular has been pretty much the consensus: it's the standard, the de facto standard, whatever you want to call it. That's happening. Multicloud is on the horizon. So these new workloads have these new architectural changes: cloud, on premises, and edge. This is the number one story, and the number one challenge all enterprises are now working on.
How do I build the architecture for the cloud, on premises, and edge? This is forcing the DevOps team to flex and build new apps. Can you guys talk about that particular trend? And is that relevant here? >> Yeah, I now talk about really storage anywhere and cloud anywhere, and really the key concept is edge to core to cloud. I mean, we all understand now that the edge will host a lot of data, and the edge is many different things. It's obviously a smartphone, whatever that is, but it's also factories, it's also production, it's also, you know, moving machinery, trains, planes, satellites, that's all the edge, cars obviously. And a lot of data will be both produced and processed there, but from the edge you will want to be able to send the data for analysis, for backup, for logging to a core, and that core could be regional, maybe not, you know, one core for the whole planet, but maybe one per corporate region, per state in the U.S. And then from there you will also want to push some of the data to the public cloud. One of the things that we see more and more is that the DR site, the disaster recovery site, is not another physical data center, it's actually the cloud, and that's a very efficient infrastructure, very cost efficient especially. So really it's changing the paradigm on how you think about storage, because you really need to integrate these three layers in a consistent approach, especially around the topic of security, because you want the data to be secure all along the way. And data is not just data: it's the data, and who can access the data, who can modify the data, what are the conditions that allow modification or automatic erasure of the data? In some cases, it's super important that the data is automatically erased after 10 years, and all this needs to be carried from edge to core to cloud. So that's one of the aspects.
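As an aside for readers: on an S3-compatible object store, the kind of ten-year automatic-erasure requirement Jerome mentions is typically expressed as a bucket lifecycle rule. A minimal sketch follows; the field names follow the S3 lifecycle-configuration schema, while the bucket name and key prefix are invented for illustration, and this is not presented as Scality's specific mechanism.

```python
# Hypothetical S3-style lifecycle rule: expire objects under "factory-logs/"
# roughly 10 years (3650 days) after creation. Field names follow the S3
# lifecycle-configuration schema; bucket and prefix are illustrative only.
lifecycle_config = {
    "Rules": [
        {
            "ID": "erase-after-10-years",
            "Filter": {"Prefix": "factory-logs/"},  # hypothetical key prefix
            "Status": "Enabled",
            "Expiration": {"Days": 3650},           # ~10 years
        }
    ]
}

# Against a real S3-compatible endpoint this would be applied roughly as:
#   s3.put_bucket_lifecycle_configuration(
#       Bucket="edge-site-042", LifecycleConfiguration=lifecycle_config)
```

Because the rule lives with the bucket rather than with any one application, the same policy can follow the data as it is replicated from edge to core to cloud.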
Another aspect that resonates for me with what you said is a word you didn't say, but it's actually crucial to this whole revolution: it's Kubernetes. I mean, Kubernetes is now a mature technology, and it's just, you know, the next level of automated operation for distributed systems, which we didn't have 5 or 10 years ago. And that is so powerful that it's going to allow application developers to develop much faster systems that can be distributed, again, edge to core to cloud, because it's going to be an underlying technology that spans the three layers. >> Chris, your thoughts on hybrid cloud. I've been having questions with the HPE folks for, God, years and years on hybrid cloud, and now it's here. >> Right (chuckles) >> Well, you know, it's exciting in the layout, right? So you look at, whether it be enterprise virtualization, that is, scale-out general purpose virtualization workloads, or whether it be analytic workloads, data protection is paramount to all of this, orchestration is paramount. If you look at DevSecOps, absolutely. I mean, securing the actual data, the digital asset, is absolutely paramount. And if you look at how we do this, look at the investments we're making, and look at the collaborative platform development, which goes to our partnership with Scality. We're providing them an integral aspect of everything we do, whether we're bringing in Ezmeral, which is our software we use for orchestration, look at the veneer of its control plane, controlling Kubernetes, being able to actually control the active clusters and the actual backing store for all the analytics that we just talked about.
Whether it be a web-scale app that was traditionally using a POSIX namespace and has now been modernized to take advantage of newer technologies, running on NVMe burst buffers or hundred-gig networks, with Slingshot networks of 200 and 400 gigabit, we're looking at how we actually get the analytics, the workload, to the CPU and have it attached to the data at rest. Where's the data? How do we land the data? How do we actually align, essentially, locality of the actual asset to the compute? And this is where, you know, we can leverage, whether it be Azure or Google or name your favorite hyperscaler, leverage those technologies, leveraging the actual persistent store. And this is where Scality, with this object store capability, has been an industry trendsetter, setting the actual landscape of how to provide an object store on premise and hybrid cloud, run it in a public cloud, but being able to facilitate data mobility and tie it back to an application. And this is where a lot of things have changed in the world of analytics, because the newer technologies that are coming on the market have taken advantage of this particular protocol, S3, so they can do web-scale, massively parallel, concurrent workloads. >> You know what, let's get into the announcement. I love cool and relevant products, and I think this hits the mark. Scality, you guys have Artesca, which was just announced. And obviously we reported on it. You guys have a lightweight, true enterprise grade object store software for Kubernetes. This is the announcement. Jerome, tell us about it. What's the big deal? Cool and relevant, come on, this is cool. Right, tell us. >> I'm super excited. I'm not sure if you can see it as well on the screen, but I'm super, super excited. You know, we introduced the RING 11 years ago, and this is our biggest announcement for the past 11 years. So yes, do pay attention.
And, you know, after looking at all these trends and understanding where we see the future going, we decided that it was time to embark (indistinct). So there's not one line of code that's the same as our previous generation product. They will both exist, they both have a space in the market. And Artesca was specifically designed for this cloud native era. What we see is that people want something that's lightweight, especially because it has to go to the edge. They still want the enterprise grade that Scality is known for. And it has to be modern. What we really mean by modern is, we see object storage now being the primary storage for more and more applications. And so we have to be able to deliver the performance that primary storage expects. This idea of Scality serving primary storage is actually not completely new. When we launched Scality 10 years ago, the first application that we were supporting was consumer email, for which we were, and we are still today, the primary storage. So we know what it is to be the primary store. We know what level of reliability you need to hit. We know what latency means, and latency is different from throughput: you really need to optimize both. And I think that still today we're the only object storage company that protects data with both replication and erasure coding, because we understand that replication is faster, but erasure coding is more storage-efficient, and for files where latency doesn't matter so much it's a better fit. So we've been bringing all that experience, but really rethinking a product for that new generation that really is here now. And so we're truly excited. I'll tell people a bit more about the product. It's software; Scality is a software company, and that's why we love to partner with HPE, who's producing amazing servers, you know, for the record and the history. The very first deployment of Scality in 2010 was on HP servers.
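As an aside for readers: the replication-versus-erasure-coding tradeoff Jerome describes can be made concrete with a small calculation. The layouts below are generic examples, not Scality's actual defaults.

```python
# Raw-to-usable storage ratio for a layout of `data_parts` data fragments
# plus `parity_parts` parity fragments. Replication is the degenerate case
# of 1 data fragment plus (n-1) full copies.
def storage_overhead(data_parts: int, parity_parts: int) -> float:
    return (data_parts + parity_parts) / data_parts

# 3-way replication: tolerates the loss of 2 copies, but stores every
# byte three times.
print(storage_overhead(1, 2))                  # 3.0

# A 9+3 erasure-coded layout: also tolerates 3 fragment losses, at only
# about 1.33x raw storage, at the cost of extra encode/decode work.
print(round(storage_overhead(9, 3), 2))        # 1.33
```

This is why replication tends to win for small, latency-sensitive objects while erasure coding wins for larger objects where storage efficiency dominates.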
So this is a long love story here. And so, to come back to Artesca: it's lightweight in the sense that it's easy to use. We can start small, from just one server or one VM, I mean, you can start really small, but it can grow infinitely. The fact that we start small didn't, you know, limit the technology. So you can start from one and go to many. And it's cloud native in the sense that it's completely Kubernetes compatible, it's Kubernetes orchestrated. It will deploy on many Kubernetes distributions. We're talking obviously with Ezmeral, and we're also talking with the other Kubernetes distributions, and it will also be able to run in the cloud. Now, I'm not sure that there will be many true production deployments of Artesca in the cloud, because you already have really good object storage from the cloud providers, but when you are developing something and you want to test it, you know, just doing it in the cloud is very practical. So you'll be able to deploy on a Kubernetes distribution in the cloud. And it's more than object storage, in the sense that it's application centric. A lot of our work is actually validating that our storage is fit for a single-purpose application, making sure that we understand the requirements of these applications, so that we can guide our customers on how to deploy. And it's really designed to be the primary storage for these new workloads. >> The big part of the news is your relationship with Hewlett Packard Enterprise. There's some exclusivity here as part of this, and as you mentioned, the relationship goes back many, many years. We've covered your relationship in the past. Chris, also, you know, we cover HP like a blanket. This is big news for HPE as well. >> This is very big news. >> What is the relationship? Talk about this exclusivity. Could you share about the partnership and the exclusivity piece? >> Well, the partnership expands into the pan-HPE portfolio.
Look, we made a massive investment in edge and IoT devices. So we actually looked at, how do we align the cost to the demand? Our customers come to us wanting to think about what we're doing with GreenLake, with consumption-based modeling. They want to be able to consume the asset without having to do a capital outlay out of the gate. Number two, look at, you know, how you deploy technology, really on demand. It depends on the scale, right? So in a lot of your web-scale, scale-out technologies, putting them on a diet is challenging, meaning how skinny can you get it, getting it down into the 50 terabyte range. And then with the complexities of those technologies, as you take a day one implementation and scale it out over, you know, multiple iterations over quarters, the growth becomes a challenge. So working with Scality, we believe we've actually cracked this nut. We figured out, number one, how to start small, but without limiting a customer's ability to scale it out incrementally or aggressively, depending on the quarter, the month, whatever the workload is: how do you actually align and be able to consume it? So now, whether it be on our Edgeline products or our DL products, and, as Jerome was talking about earlier, you know, we ship a server every few seconds, that won't be a problem, and of course our density-optimized compute with the Apollo products. And this is where our two companies have worked in an exclusivity, where the Scality software runs on the HPE ecosystem. And then we can, of course, provide our customers the ability to consume that through our GreenLake financial models or through traditional CapEx. >> Awesome. So Jerome and Chris, who's the customer here? Obviously, there's an exclusive period. Talk about the target customer, how the customers get the product, how they get the software, and how does this exclusivity with HPE fit into it?
>>Yeah, so there's really three types of customers, and we've worked a lot with a company called UseDesign to optimize the user interface for each of the types of customers. So we really thought about each customer role, providing each of them with the best product. The first type of customer are application owners who are deploying an application that requires an object storage in the backend. They typically want a simple object store for one application; they want it to be simple and to work. Honestly, they want no frills, just an object store that works, and they want to be able to start as small as they start with their application. Often, you know, the first deployment may be a small deployment: applications like backup, like Veeam or Rubrik, or analytics like (indistinct), file systems that are now available as software, you know, like CGI does a really great departmental NAS that works very well and needs an object store in the backend, or, for high performance computing, Weka's file system is an amazing file system. We also have vertical applications like Broadpeak, for example, who provide origin and video delivery software for broadcasters. So all these are applications that require an object store in the backend, and you just need a simple, high-performance object store that works well, and Artesca is perfect for that. Now, the second type of people that we think will be interested in Artesca are essentially developers who are currently developing cloud native, next gen applications. And as part of their development stack, when you're developing a cloud native application, it's getting better and better to really target an object store rather than NFS as your persistence layer. Just, you know, think about generations of technologies: NFS and file systems were great 25 years ago. I mean, it's an amazing technology.
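As an aside for readers: the contrast Jerome draws between NFS and an object store comes down to the interface, a flat key space with put, get, and list-by-prefix instead of POSIX paths and directories. A toy in-memory sketch follows; it is purely illustrative, not Scality's or any real S3 API.

```python
# Toy sketch of the flat object interface cloud-native apps target,
# in contrast to a POSIX directory tree. Illustrative only.
class ToyObjectStore:
    def __init__(self):
        self._objects = {}  # key -> bytes; keys are flat strings, not paths

    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data

    def get(self, key: str) -> bytes:
        return self._objects[key]

    def list(self, prefix: str = ""):
        # "Directories" are only a key-naming convention in an object store.
        return sorted(k for k in self._objects if k.startswith(prefix))

store = ToyObjectStore()
store.put("backups/2021/05/db.dump", b"nightly dump")
store.put("backups/2021/06/db.dump", b"nightly dump")
print(store.list("backups/2021/"))
```

Because there is no directory hierarchy to keep consistent, this interface parallelizes naturally across many nodes, which is what makes S3-style APIs a good fit for the distributed applications discussed here.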
Now, when you want to develop a distributed, scalable application, object storage is a better fit, because it's the same generation. And so, same thing: they're developing something, they need an object store that they can develop against. So they want it very lightweight, but they also want a product that their enterprise, or their customers, will be able to rely on for years and years, and Artesca is a really great fit for that. The third type of customer are more the architects, I would say, the architects that are designing a system where they are going to have 50 factories, a thousand planes, a million cars; they are going to have some local storage, which they will want to replicate to the core and possibly also to the cloud. And as the design is really for new generation workloads that are incredibly distributed, but with local storage, Artesca is really great for that. >> And tell us about the HPE exclusive, Chris. How does that fit in? Do they buy through Scality? Can they get it through HPE? Are you guys working together on how customers can procure it? >> Both ways, yeah, both ways. They can procure it through Scality, they can procure it through HPE, and it's the software stack running on our density-optimized compute platforms, which you would choose and align to provide enterprise quality. Because what it comes back to, in all of these use cases, is how do we align up into a true enterprise stack: bringing about multitenancy, bringing about, you know, if you look at the erasure coding, one of the things that they're bringing to it, so that we can get down into the DL325. So with the exclusivity, you actually get choice.
And that choice comes into our entire portfolio, whether it be the Edgeline platform, the DL325 AMD processing stack or the Intel-based DL380, or whether it be the Apollos. Like I said, there are so many ample choices there that facilitate this, and this allows us to align those two strategies. >> Awesome, and I think the Kubernetes piece is really relevant because, you know, I've been interviewing folks, practitioners, and Kubernetes is very much maturing fast. It's definitely the centerpiece of cloud native, both below the line, if you will, under the hood for the infrastructure, and then for apps: they want to program on top of it, that's critical. I mean, Jerome, this is the future. >> Yeah, and if you don't mind, I'd like to come back to the point on the exclusivity with HPE. So we did a six-month exclusive, and the very reason we could do this is because HPE has such breadth of server portfolio. And so we can go from, you know, a really simple, very cheap, you know, DL380 machine that we sell for a few dollars, I mean, it's really a simple system, 50 terabytes; we can have the DL325 that Chris mentioned, that is really a powerhouse, all NVMe, flash storage that's all NVMe, very fast processors; you know, dense, large systems like the Apollo 4500. So it's a very large breadth of portfolio. We support the whole portfolio, and we work together on this. So I want to say, you know, I want to send kudos to HPE for the breadth of their server line, really. As mentioned, Artesca can be ordered from either company, hand in hand together, so anyway, you'll see both of us and our field teams working incredibly well together. >> Well, just on that point, and just for clarification, was this co-designed by Scality and HPE? Because, Chris, you mentioned, you know, the configuration of your systems. Can you, Chris, quickly talk about the design?
From the code base, the software is entirely designed and developed by Scality. For testing and performance, though, this really was joint work, with HPE providing both hardware and manpower so that we could accelerate the testing phase. >> You know, Chris, HPE has just been doing such a great job of really focusing on this. I know, I've been covering it for years, before it was fashionable: the idea of apps working no matter where they live, public cloud, data center, edge. And you mentioned Edgeline's been around for a while. You know, app centric, developer friendly, cloud first has been an HPE kind of guiding first principle for many, many years. >> Well, it has. And, you know, as our CEO Antonio Neri stated, by 2022 everything will be able to be consumed as a service in our portfolio. And this stack allows us the simplicity and the consumability of the technology, and the granularity of it allows us to simplify the installation, simplify the actual deployment, bringing it into a cloud ecosystem. But more importantly, for the end customer, they simply get an enterprise-quality product running on an optimized stack that they can consume through an orchestrated, simplistic interface. That's what customers are wanting today. They come to me and ask, hey, I've got this new app, this new project. And, you know, it goes back to who's actually coming: it's no longer the IT people who are coming to us, it's the lines of business, it's that entire dimension of business owners, coming to us, going, this is my challenge, and how can you, HPE, help us? And we rely on our breadth of technology, but also our breadth of partners, to come together. Of course, Scality, hand in hand with our collaborative business unit, our collaborative storage product engineering group, actually brought this to market. So we're very excited about this solution. >> Chris, thanks for that input and great insight.
Jerome, congratulations on a great partnership with HPE and obviously a great joint customer base. Congratulations on the product release here. Really moving the ball down the field, as they say: new functionality, a cloud native object store. Phenomenal. So to wrap up the interview, tell us your vision for Scality and the future of storage. >> Yeah. I think Scality is going to be an amazing leader; it already is. There are three things that I think will govern how storage is going. Obviously, Marc Andreessen said it: software is everywhere and software is eating the world. So that's definitely going to be true in the data center, and in storage in particular. But the three trends that are more specific are, first of all, that security, performance and agility are now a basic expectation. They're not additional features; they're just table stakes. The second thing, and we've talked about it during this conversation, is edge to core to cloud. You need to think of your platform with edge, core and cloud together. You don't want separate systems and separate design points for the edge, and then think about the core, and then think about the cloud, each on its own. All of this needs to be integrated in one design. And the third thing that I see as a major trend for the next 10 years is data sovereignty. More and more, you need to think about where the data is residing. What are the legal challenges? What is the level of protection, and against whom are you protected? What is your independence strategy? How do you, as a company, stay independent from the vendors you depend on? And I say companies, but this is also true for public services. So these, for me, are the three big trends.
And I do believe that software-defined, distributed architectures are necessary for these trends, but you also need to think about being truly enterprise grade, and that has been one of our focuses with the design of Artesca: how do we combine a lightweight product with all of the security and data sovereignty requirements that we expect to have over the next ten years? >> That's awesome. Congratulations on the news, Scality Artesca, the big release with the six-month HPE exclusive. Chris Tinker, Distinguished Engineer at HPE, great to see you. Jerome Lecat, CEO of Scality, great to see you as well. Congratulations on the big news. I'm John Furrier from theCUBE. Thanks for watching. (uplifting music)

Published Date : Apr 26 2021



Craig Hyde, Splunk | Leading with Observability | January 2021


 

>> Narrator: From theCUBE studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation. >> Hello and welcome to this special CUBE Conversation. I'm John Furrier, your host. We're here for a special series, Leading with Observability, and this segment is: End-to-end observability drives great digital experiences. We've got a great guest here, Craig Hyde, senior director of product management for Splunk. Craig, great to see you. Thanks for coming on. >> And thanks for having me. This is great. >> So this series, Leading with Observability, is a super hot topic, obviously, with cloud native. In the pandemic, COVID-19 has really shown that the cloud native trend has been a tailwind for people who invested in it, who have been architecting for cloud and on premises, where data is a key part of the value proposition, and then there are people who haven't been doing it. So out of this trend, the word observability has become a hot segment. And for us insiders in the industry, we know observability is just kind of network management on steroids in the cloud, so it's about data and all this. But at the end of the day, there's value that's enabled from observability. So I want to talk to you about that value that's enabled in the experience of the end user, whether it's in a modern application or for a user inside the enterprise. Tell us what you think about this end user perspective.
And when done right, it can tell you the end result of all this technology you're piecing together, what's actually getting delivered to the user, both quantitatively and qualitatively. So, my background, I actually started a company in this domain. It was called Rigor, and we focused purely on looking at user experience and digital experience. And the idea was, you know, this was 10 years ago, we were just thinking, look, 10 years from now, more and more people are going to do business digitally, they're going to work more digitally, and at the same time we saw the legacy data centers being shut down and things moving to the cloud. So we said, look, the future is in the users, and where it all comes together is on the user's desktop or on their phone, and so we set out to focus specifically on that space. Fast forward 10 years, we're now a part of Splunk and we're really excited to bolt this onto an overall observability strategy. You know, I believe it's becoming more and more popular, like you said, with the pandemic and COVID-19. It was already on a tear from a digital perspective, the adoption was going through the roof, people were working more remotely and buying more and more online, but the pandemic has just pushed it through the roof. But, you know, there are also other things driving the need for this and the importance of it, and part of it comes with the way technology is growing. It's becoming much more complex in terms of moving parts. Where an app used to run off three different tiers in a data center, now it could be across hundreds of machines and opaque networks, opaque data centers all over the world, and often the only place you see how things come together is on the user's desktop.
And so that's where we really think you've got to start from the user experience and work back. And, you know, all the drive in computing is about making things better, faster and cheaper, but without this context of the user, often the customer and the experience get left out from reaping the rewards of all these gains. So that sort of encapsulates my overall view of the space, why we got into it and why I'm so excited about it. >> Well Craig, I've got to ask on a personal level. I mean, you look at what happened with the pandemic. You're a pioneer, you had a vision. Folks on the entrepreneurial side say, hey, digital business is coming, and they get it, and it slowly gets known in the real world, becomes more certain, but with the pandemic it just happened all of a sudden, so fast, for everybody, because everyone's impacted. Teachers, students, families, work, everyone's at home. So the entire user experience was impacted in the entire world. What was going through your mind when you saw all this happening? And you see the winners obviously were people who had invested in cloud native and data-driven technologies. What was your take on all this when you saw this coming? >> Well, the overall trend has been going on for decades, right? And so the direction of it isn't that surprising, but the magnitude and the acceleration are. There are some stats out there from Forbes where e-commerce adoption doubled within the first six months of the pandemic. So we're talking, you know, 10, 12 years of things ticking up, and then within six months, a doubling of the adoption of e-commerce. And so like anybody else, you first freeze and say, what does this mean? But when people start working remote and people start ordering things from Amazon and all the other websites, it's quick to see like, aha!
It no longer matters what chair somebody is sitting in when they're doing work, or that they're close to a store and you have a physical storefront when they're trying to buy something. It's all about that digital experience, and it needs to be ubiquitous. So it's been interesting to see the change over the past few months, for sure. But again, it doesn't change the trend, it just magnified it, and I don't see it going back anytime soon. >> Yeah, I mean, digital transformation has always been a buzzword that everyone kind of uses as a way to talk about the big picture. >> Right. >> It's actually transforming, and there are also share shifts that happen in every transformation, in any market shift. Obviously that's happening with cloud. Cloud native and edge are becoming super important. And by the way, all the applications that sit on that infrastructure, which is now infrastructure as code, have a data requirement, so that observability piece becomes super critical, not just for identifying and resolving, but also for training machine learning and AI, right? So, again, you have this new flywheel, observability, that's really at the heart of digital transformation. What should companies think about when they associate observability to digital transformation, as they're sitting around, whether they're CXOs or CSOs or solution architects, going, okay, how does observability plug into my plans?
So without understanding what the actual user experience is, you don't have a good enough yardstick to go out there and start working towards. So availability on a server or CPU time or transaction time in a database, like, those are all great, but without the context of the goal you're actually going after, they're kind of useless. So, like I said, it's not uptime, it's not server time, it's not any of that stuff; it's user experience, and these things are different. So they're like visual metrics, right? What a user sees. Because all kinds of things are going on in the background, but if the person can see that they're getting some kind of response from the machine, then that's how you measure where the end point is and what the overall goal is. And so to keep going with that: you start with the end in mind, you use that end to set your goals, and you use that domain and that visibility to troubleshoot faster. So when the calls start rolling in and they say, hey, I'm stuck at home and I'm on a slow internet connection, I can't get on the app, and core IT is taking the phone call, you can quickly look at and instrument that user and see exactly what they're seeing. So when you're troubleshooting, you're looking at the data from their perspective and then working backwards to the technology.
So when you unpack those kinds of trends, there are features of observability underneath each. Could you talk about that? Because I think that seems to be the common pattern I'm seeing. Okay, high availability, okay, check. Everyone has to have that. Almost table stakes. But it's hard when you're scaling, right? And then integrations, all kinds of APIs being slung around. You've got microservices, you've got Kubernetes, people are integrating data flows, control planes, whatever. And then finally users. They want new things. New patterns emerge, which is new data. >> Yeah, absolutely. And to talk about that, it reminds me of like a Maslow's hierarchy of needs of visibility, right? Like, okay, the machine is on, check. Like you said, it's table stakes, make sure it's up and running. That's great. Then you want to see sort of the applications that are running on the machine, how they're talking to each other. Other components that you're making API calls to, are they timing out or are they breaking things? And so you get that visibility of like, okay, they're on, what's going on on top of those machines or inside of them, or in the containers or the virtual machines or whatever segment of computing you're looking at. And then that cherry on top, the highest point, is like, how is that stack of technology serving your customer? How's it serving the user, and what's the experience? So those are sort of the three levels that we look at when we're thinking of user experience. And so it's a different way to look at it, but it's sort of the way we see the world: that three tier, that three layer cake.
Can you explain what that is, how it fits into all this, and what's in it for the customers? What's the benefit? >> Right, sure. So with digital experience monitoring and the platform that we have, we're giving people the ability to basically do what I was talking about: it enables you to take a look at what the user's experience is, pull metrics, and then correlate them from the user all the way through the technical journey to the back end, through the different tiers of the application and so on. One technology is called real user monitoring, where we instrument the users. And then we also layer in synthetic monitoring, which is the sort of robot users that are always on, for when you're in lower-level environments and you want to see, you know, what the experience is going to look like when you push out new software, or, when nobody's on the application, did something break? So you couple those two together and then we feed that into our overall observability platform that's fed with machine data, and we have all the metrics from all the components that you're looking at in that single pane of glass. And the idea is that we're not only bringing you the metrics and the events from logs and all the happenings, but we're also trying to help tease out some of these problems for you. So many problems that happen in technology have happened before, and we've got a catalog with our optimization platform of 300 plus things that go wrong when webpages or web applications or API calls start acting funky.
And so we can provide, based on the intelligence that's built into the platform, basically runbooks for people to fix things faster, and build those playbooks into the release process so you don't break the applications to begin with. And you can set flags so people understand what performance is before it's delivered to the customer, and if there are problems, let's fix them before we break the experience and lose the trust of the user. So again, it's the metrics from the stats that are coming across the wire, everything all the way to the users; it's the events from the logs that are coming in so you can see context; and then it's that user experience, the trace-level data, where you can double-click into each of the tiers and say, like, what's going on in here? What's going on in the browser? What's going on in the application? What's going on in the backend? And so you can pull all that together in a single pane of glass and find problems faster, fix them faster and prevent users from having problems to begin with. And to do this properly, you really need it all under one roof, and that's why we're so excited to bring this all together. >> Yeah, I've been sitting on theCUBE for 10 years now. We're in our 11th year doing theCUBE. With digital, you can measure everything. So why not? There should be no debate if done properly. So that brings up this whole concept that you guys are talking about, full fidelity. Can you just take a minute to explain what that is? What does full fidelity mean?
They can look at my experience and what it would look like in my browser, you know, what all the services were that I was interacting with, what was going on in the application, what code was being called, what services were being called, and look at specifically me, as opposed to an aggregate of all the domains all put together. And it really is important from a troubleshooting standpoint. It's really important for understanding the actuals, because without full fidelity and capturing all of the data, you're taking guesses; capturing everything eliminates a lot of that guesswork. And so that's something that's special about our platform, that ability to have full fidelity. >> When does a client, a customer, not have full fidelity? I might think I have it, someone sold me a product. What's the telltale sign that I don't have full fidelity? >> Oh yeah, well with observability, there are a lot of tricks in the game. And so you see a lot of summary data that looks like, hey, this is that one call, but usually it's knitted together from a bunch of different calls. That summary data exists because this stuff takes up a lot of storage and there are a lot of problems with scale, and so when you see something that looks like it's this call, it's actually like: in general, when a call like this happens, this is what it looks like. And so you've got to ask, is this the exact call? And, you know, it makes a big difference from a troubleshooting perspective, and it's really hard to implement, and that's something Splunk's very good at, right? It's data at scale. It's the 800 pound gorilla in collecting and slicing apart machine data. So you have to have something of that scale in order to ingest all this information. It's a hard problem for sure. >> Yeah, totally. And I appreciate that. While I got you here, you're an expert, I've got to ask you about Open Telemetry. We've heard that term kicked around. What does that mean?
Is it an open source thing, is it an open framework? What is Open Telemetry, and what does it mean for your customers or the marketplace? >> Yeah, I think of Open Telemetry as finally creating a standard for how we're collecting telemetry data from applications. In the past, it's been onesie-twosie, each company coming up with it themselves, and there were never any standards for how to look at transactions across data, across applications and across tiers. And Open Telemetry is the attempt, and it's a consortium, so there are many people involved in pushing this together. Think of it like the W3C, which creates the standards for how websites operate; without it, the web wouldn't be what it is today. And now Open Telemetry is coming behind and doing that same thing from an observability standpoint. So you're not totally locked into one vendor and the way that they do it, held hostage to only looking at that visibility. We're trying to set the standards to lower the barrier of entry into application performance monitoring and network performance monitoring, and to get that telemetry to where there are standards across the board. And so it's an open source project. We're committed to it, and it's a really important project for observability in general.
Without standards for, you know, the naming conventions, where you instrument, how you instrument, it becomes very hard to put some things in a single pane of glass because they just look different everywhere. And so that's the idea behind it. >> Well Craig, great to have you on. You're super smart on this, and Leading with Observability, it's a hot topic. It's super relevant right now with digital transformation, as companies are looking to rearchitect and flip the script on software development, modern applications, modern infrastructure, edge. All of this is top of mind in everyone's plans. And we certainly want to have you back in some of the conversations we have around this on our editorial side as well, when we have these clubhouses we're going to start doing a lot of. We definitely want to bring you in. I'll give you a final word here. Tell us what you're most excited about. Put in the commercial for Splunk. Why Splunk? Why you guys are excited. Take a minute to get the plug in. >> It's so easy. Splunk has the base to make this possible. Splunk is, like I said, an 800 pound gorilla in machine data and taking in data at scale. And when you start going off into the observability abyss, the really hard part about it is having the scale to not only go broad in the levels of technology that you can collect, but also go deep. And that depth, when we talked about that full fidelity, is really important when you get down to brass tacks and you start implementing changes, troubleshooting things, and turning the data that you have into action, understanding what you can do with it. And Splunk is fully committed to going not only broad, to get everything under one roof, but also deep, so that you can make all of the information that you collect actionable and useful.
And it's something that I haven't seen anybody even attempt and I'm really excited to be a part of building towards that vision. >> Well, I've been covering Splunk for, man, many, many years. 10 years plus, I think, since it's been founded, and really the growth and the vision and the mission still is the same. Leveraging data, making use of it, unlocking the power of data as it evolves and there's more of it. And it gets more complicated when data is involved in the user experience end-to-end from cybersecurity to user flows and new expectations. So congratulations. Great product. Thanks for coming on and sharing. >> Thanks again for having us. >> Okay, this is John Furrier in theCUBE. Leading with Observability is the theme of this series and this topic was End-to-end observability to enable great digital experiences. Thanks for watching. (lighthearted music)
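The naming-convention point Craig makes about Open Telemetry can be made concrete with a small sketch. Everything below is invented for illustration: the two vendor record shapes and the `normalize` helper are hypothetical, and real instrumentation would use the OpenTelemetry SDK rather than hand-rolled dictionaries. Only the shared attribute names are modeled on OpenTelemetry's semantic conventions (e.g. `http.method`, `http.status_code`), which is exactly what lets two differently-shaped telemetry sources land in one pane of glass.

```python
# Two telemetry sources emitting spans for HTTP requests in their own
# house formats. Without a shared vocabulary, one query can't span both.
vendor_a = {"verb": "GET", "code": 200, "ms": 12}
vendor_b = {"httpMethod": "GET", "statusCode": 500, "latency_ms": 87}

def normalize(span):
    """Map vendor-specific keys onto one shared attribute vocabulary,
    in the spirit of OpenTelemetry's semantic conventions."""
    key_map = {
        "verb": "http.method", "httpMethod": "http.method",
        "code": "http.status_code", "statusCode": "http.status_code",
        "ms": "duration_ms", "latency_ms": "duration_ms",
    }
    return {key_map[key]: value for key, value in span.items()}

spans = [normalize(vendor_a), normalize(vendor_b)]

# With one vocabulary, a single query works across both sources.
errors = [s for s in spans if s["http.status_code"] >= 500]
print(len(errors))  # 1
```

The design choice mirrored here is the one Craig describes: the standard lives in the attribute names, not in any one vendor's collector, so no single tool holds the visibility hostage.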

Published Date : Feb 22 2021



Eron Kelly, AWS | AWS re:Invent 2020


 

>> From around the globe, it's theCUBE, with digital coverage of AWS re:Invent 2020, sponsored by Intel and AWS. Welcome to theCUBE's live coverage of AWS re:Invent 2020. I'm Lisa Martin, and I have a CUBE alumni joining me next: Eron Kelly, the GM of product marketing at AWS. Eron, welcome back to the program. >> Thanks, Lisa. It's great to be here. >> Likewise, even though we don't all get to be crammed into Las Vegas together. Excited to talk to you about Amazon Connect. Talk to our audience about what that is, and then let's talk about how it's been a big facilitator during this interesting year that is 2020. >> Great, yes, for sure. So Amazon Connect is a cloud contact center, where we're really looking to reinvent how contact centers work by bringing them into the cloud. It's an omnichannel, easy-to-use contact center that allows customers to spin up contact centers in minutes instead of months. It's very scalable, so it can scale to tens of thousands of agents, but it also scales down when it's not in use. And because it's got a pay-as-you-go business model, you only pay when you're engaging with callers or customers; you're not paying high upfront per-agent fees every month. So it's really been a great service during this pandemic, as there have been a lot of unpredictable spikes in demand that customers have had to deal with across many sectors. >> And we've been talking for months now about the acceleration that COVID has delivered with respect to digital transformation. And of course, as patience has been wearing thin globally, I think with everybody, when we're calling a contact center we want a resolution quickly. And of course, as we all know, as all of us in every industry are working from home, so are they. So I can imagine during this time that being able to have a cloud contact center has been transformative, I guess, to help some businesses keep the lights on.
But now to really be successful moving forward, knowing that they can operate and scale up or down as things change. >> Yeah, that's exactly right. And so one of the key benefits of Connect is the ability to very quickly onboard and get started. You know, we have some very interesting examples, like Morrisons, which is a retailer in the UK. They wanted to create a new service, as you highlighted, which was a doorstep delivery service, and so they needed to quickly spin up a new contact center to handle those orders. They were able to do it, move all their agents remote in about a day, and immediately start taking those orders, which is really powerful. Another interesting example is the Rhode Island Department of Labor and Training, where part of their responsibility is to deliver unemployment benefits for their citizens. Obviously there was a huge surge of demand there, and they were able to build an entirely new contact center in about nine days to support their citizens. They went from an average capacity of about 74 calls per minute to 1,000 calls per minute, and in the first day of standing up this new contact center, they were able to serve 75,000 Rhode Island citizens with their unemployment benefits. So really a great example of having that cloud scalability, that ability to bring agents remote, and then helping citizens in need during a very, very difficult time. >> Right. So a lot of uses, private sector and public sector. What are some of the new capabilities of Amazon Connect you're announcing at re:Invent? >> Yeah. So we announced five big capabilities during re:Invent yesterday that really span the entire experience, and our goal is to make it better for agents so they're more efficient. That actually helps customers reduce their costs, but it also creates a better caller experience, so CSAT can go up and the callers can get what they need quickly and then move on.
And so the first capability is Amazon Connect Voice ID, which makes it easier to validate that the person calling is, in fact, who they say they are. So in this case, Lisa, let's say you're calling in. You can opt in to have a voiceprint made of you. The next time you call in, we're able to use machine learning to match that voiceprint to know, yes, it is Lisa. I don't need to ask Lisa questions about her mother's maiden name and Social Security number. We can validate you quickly. As an agent, I'm confident it's you, so I'm less concerned about things like fraud, and we can move on. That's the first great new feature. The second is Amazon Connect Customer Profiles. So now, once you join the call, rather than me as an agent having to click around in different systems and find your order history, et cetera, I can get all of that surfaced to me directly. So I have that context, I can create a more personalized experience, and I can move faster through the call. The third one is called Wisdom, Amazon Connect Wisdom, which now, based on either what you're asking me or a search that I might make, gets answers to your questions pushed to me using machine learning. So if you're asking about a refund policy, or when a new product may launch, I may not know; rather than clicking around and sort of finding that in different systems, it's pushed right to me. Now, the fourth feature is the real-time capability of Contact Lens for Amazon Connect. And what this does is, while we're having our conversation, it measures the sentiment based on what you're saying, or any keywords. So let's say you called in and said, "I want a refund," or "I want to cancel." That keyword will trigger an alert to my supervisor, who can see that this call may be going in the wrong direction. Let me go help Aaron with Lisa.
Maybe there's a special offer I can provide, or extra assistance, so I can help turn that call around and create a great customer experience, which right now feels like it's not going in that direction. And then the last one is Amazon Connect Tasks, where about half of an agent's time is spent on tasks other than the call: follow-up items. So you're looking for a refund, or you want me to ship you a new version of the product or something. Well, today I might write that on a sticky note or send myself a reminder in email; it's not tracked very well. With Amazon Connect Tasks, I can create that task. As a supervisor, I can then assign those tasks and make sure that the follow-up items are prioritized. And then when I look at my work queue as an agent, I can see my calls, my chats, and my tasks, which allows me to be more efficient and to follow up faster with you, my customer. Overall, it's going to help lower the cost and improve the efficiency of the contact center. So we're really excited about all five of these features and how they improve the entire life cycle of a customer contact. >> And that could be table stakes for any business in terms of customer satisfaction. You talked about that, but I always say, you know, customer satisfaction is inextricably linked to employee satisfaction. The agents need to be empowered with that information in real time, but also, I want them to know why I'm calling; they should already know what I have. We have that growing expectation as consumers. So the agent experience and the customer experience you've also really streamlined. And I can just see this being something that is, like I said, kind of table stakes for an organization to reduce churn, to be able to service more customers in a shorter amount of time, and also for employee satisfaction, right? >> Right, that's exactly right.
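The real-time alerting described above (a keyword like "cancel" or a negative sentiment turn triggering a supervisor alert) can be sketched in a few lines. This is a hypothetical illustration of the idea, not Amazon Connect's actual Contact Lens API; the keyword list and threshold are made-up assumptions.

```python
# Hypothetical sketch of keyword/sentiment-triggered supervisor alerts,
# in the spirit of the real-time Contact Lens capability described above.
# ALERT_KEYWORDS and the -0.5 threshold are illustrative assumptions.

ALERT_KEYWORDS = {"refund", "cancel", "supervisor"}

def scan_utterance(utterance: str, sentiment_score: float) -> list[str]:
    """Return the alert reasons (if any) for a single caller utterance."""
    reasons = []
    words = {w.strip(".,!?").lower() for w in utterance.split()}
    hits = words & ALERT_KEYWORDS
    if hits:
        reasons.append(f"keywords: {', '.join(sorted(hits))}")
    if sentiment_score < -0.5:  # strongly negative turn
        reasons.append(f"negative sentiment ({sentiment_score:.2f})")
    return reasons

print(scan_utterance("I want to cancel and get a refund!", sentiment_score=-0.7))
```

In a real deployment this logic runs continuously against the live transcript, and a non-empty result would surface the call on the supervisor's dashboard.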
Traeger Grills, which is one of our beta customers using some of these capabilities, is seeing 25% faster handle times, so shorter calls, and a 10% increase in customer satisfaction, because now it's personalized. When you call in, I know what grill you purchased. And so, based on the grill you purchased, I have a sense of just what your question might be, or what special offers I might have available, and that's all pushed to me as an agent. So I feel more empowered, I can give you better service, and you have greater loyalty towards my brand, which is a win for everyone. >> Absolutely, that empowerment of the agent, that personalization for the customer. I think again we have that growing expectation that you should know why I'm calling, and you should be able to solve my problem. If you can't, I'm going to turn and find somebody else who can. That's a huge risk that businesses face. Let's talk about some of the trends that you're seeing. This has been a very interesting year, to say the least; what are some of the trends in the contact center space that you guys are seeing and that you're working to help facilitate? >> Yeah, absolutely. So I think one of the biggest trends that we're seeing is this move towards remote work. As you can imagine, with the pandemic, almost immediately most customers needed to quickly move their agents to a remote work scenario. And this is where Amazon Connect was a great benefit. For, as I mentioned before, we saw about 5,000 new contact centers created in March and April, at the very beginning of the pandemic. So that's a very big trend we're seeing. And now what we're seeing is customers saying, hey, when I have something like Amazon Connect that's in the cloud, it scales up, it provides me a great experience, and I really just need a headset and an Internet connection for my agents.
I'm not dealing with VPNs and a lot of the complexity that comes with trying to move an on-premises system remote. We're seeing a huge surge of adoption and usage around that. The ability to very quickly create a new contact center around specific scenarios or use cases has been really, really powerful. So those are the big trends: moving to remote work, and a trend towards spinning up new contact centers quickly and then spinning them back down as that demand moves or those situations change. >> Right. And as we're all experiencing, the one thing that is a given during this time is the uncertainty that remains: scaling up, scaling down, volume changes. But it looks as if a lot of what's currently going on from home is going to stay for a while longer. I actually now think about it when I'm calling in, whether it's, you know, cable service or whatnot: that agent is probably on their couch at home, like I am, working. And so I think being able to facilitate that is transformative, and I'll step out on a limb and say it could very well impact the winners and the losers of tomorrow: making sure that the consumer experience is tailored and personalized, to your point, and that the agents are empowered in real time to facilitate a seamless and fast resolution of whatever the issue is. >> Well, and I think you hit on it earlier as well. Agents want to be helpful. They want to solve a customer problem. They want to have that information at their fingertips. They want to be empowered to take action, because at the end of their day, they want to feel like they helped people, right? And so being able to give them that information, say, from Wisdom, or being able to see your entire customer profile right when you come on the call, or know that you are Lisa and have the confidence that I'm talking to Lisa, and this is not some sort of, you know, phishing exercise.
These are all really important scenarios and features that empower the agent, lower costs significantly for the customer, and create a much better customer experience for you, the caller. >> Absolutely. And we all know how important that is these days, to get some sort of satisfying experience. Last question, Aaron: talk to us about, as we all look forward to 2021 for many reasons, what can we expect with Amazon Connect? >> Well, we're going to continue to listen to our customers and hear their feedback on what they need. What we certainly anticipate is continued focus on that agent efficiency: giving agents more of the information they need to be successful and answer customers' questions quickly, and continuing to invest in machine learning as a way of doing that. So using ML to identify that you are who you say you are, finding the right information, getting data that I can use as an agent to handle those tasks, and then automating the things that I really shouldn't have to take steps as a human to go do. So if we need to send you a follow-up email when your product ships or when your refund is issued, let me just put that in the system once and have it happen when it executes. So that level of automation, continuing to bring machine learning in to make the agent experience better and more efficient, which ultimately leads to lower costs and better CSAT. These are all investments you'll see us continue next year. >> Excellent stuff, Aaron, thank you so much for joining me on the program today, sharing what's next and the potential impact that Amazon Connect is making. >> Thanks, Lisa. It's great to be here. >> For Aaron Kelly, I'm Lisa Martin. You're watching theCUBE's live coverage of AWS re:Invent 2020.

Published Date : Dec 8 2020


Christian Keynote with Disclaimer


 

(upbeat music) >> Hi everyone, thank you for joining us at the Data Cloud Summit. The last couple of months have been an exciting time at Snowflake. And yet, what's even more compelling to all of us at Snowflake is what's ahead. Today I have the opportunity to share new product developments that will extend the reach and impact of our Data Cloud and improve the experience of Snowflake users. Our product strategy is focused on four major areas. First, Data Cloud content. In the Data Cloud, silos are eliminated, and our vision is to bring the world's data within reach of every organization. You'll hear about new data sets and data services available in our data marketplace and see how previous barriers to sourcing and unifying data are eliminated. Second, extensible data pipelines. As you gain frictionless access to a broader set of data through the Data Cloud, Snowflake's platform brings additional capabilities and extensibility to your data pipelines, simplifying data ingestion and transformation. Third, data governance. The Data Cloud eliminates silos and breaks down barriers, and in a world where data collaboration is the norm, the importance of data governance is amplified and elevated. We'll share new advancements to support how the world's most demanding organizations mobilize data while maintaining high standards of compliance and governance. Finally, our fourth area focuses on platform performance and capabilities. We remain laser focused on continuing to lead with the most performant and capable data platform. We have some exciting news to share about the core engine of Snowflake. As always, we love showing you Snowflake in action, and we prepared some demos for you.
Also, we'll keep coming back to the fact that one of the characteristics of Snowflake that we're proudest of is that we offer a single platform from which you can operate all of your data workloads, across clouds and across regions. Which workloads, you may ask? Specifically: data warehousing, data lake, data science, data engineering, data applications, and data sharing. Snowflake makes it possible to mobilize all your data in service of your business without the cost, complexity and overhead of managing multiple systems, tools and vendors. Let's dive in. As you heard from Frank, the Data Cloud offers a unique capability to connect organizations and create collaboration and innovation across industries, fueled by data. The Snowflake data marketplace is the gateway to the Data Cloud, providing visibility for organizations to browse and discover data that can help them make better decisions. For data providers on the marketplace, there is a new opportunity to reach new customers, create new revenue streams, and radically decrease the effort and time to data delivery. Our marketplace dramatically reduces the friction of sharing and collaborating with data, opening up new possibilities to all participants in the Data Cloud. We introduced the Snowflake data marketplace in 2019, and it is now home to over 100 data providers, with half of them having joined the marketplace in the last four months. Since our most recent product announcements in June, we have continued broadening the availability of the data marketplace, across regions and across clouds. Our data marketplace provides the opportunity for data providers to reach consumers across cloud and regional boundaries. A critical aspect of the Data Cloud is that we envision organizations collaborating not just in terms of data, but also data-powered applications and services.
Think of instances where a provider doesn't want to open access to the entirety of a data set, but wants to provide access to business logic that has access to and leverages such a data set. That is what we call data services. And we want Snowflake to be the platform of choice for developing, discovering and consuming such rich building blocks. To see how the data marketplace comes to life, and in particular one of these data services, let's jump into a demo. For all of our demos today, we're going to put ourselves in the shoes of a fictional global insurance company. We've called it Insureco. Insurance is a data intensive and highly regulated industry. Having the right access control and insight from data is core to every insurance company's success. I'm going to turn it over to Prasanna to show how the Snowflake data marketplace can solve a data discoverability and access problem. >> Let's look at how Insureco can leverage data and data services from the Snowflake data marketplace and use it in conjunction with its own data in the Data Cloud to do three things: better detect fraudulent claims, arm its agents with the right information, and benchmark business health against competition. Let's start with detecting fraudulent claims. I'm an analyst in the Claims Department. I have auto claims data in my account. I can see there are 2000 auto claims, many of these submitted by auto body shops. I need to determine if they are valid and legitimate. In particular, could some of these be insurance fraud? By going to the Snowflake data marketplace, where numerous data providers and data service providers can list their offerings, I find the Quantifind data service. It uses a combination of external data sources and predictive risk typology models to inform the risk level of an organization. Quantifind's external sources include sanctions and blacklists, negative news, social media, and real time search engine results.
That's a wealth of data, and models built on that data, which we don't have internally. So I'd like to use Quantifind to determine a fraud risk score for each auto body shop that has submitted a claim. First, the Snowflake data marketplace made it really easy for me to discover a data service like this. Without the data marketplace, finding such a service would be a lengthy ad hoc process of doing web searches and asking around. Second, once I find Quantifind, I can use Quantifind's service against my own data in three simple steps using data sharing. I create a table with the names and addresses of auto body shops that have submitted claims. I then share the table with Quantifind to start the risk assessment. Quantifind does the risk scoring and shares the data back with me. Quantifind uses external functions, which we introduced in June, to get results from their risk prediction models. Without Snowflake data sharing, we would have had to contact Quantifind to understand what format they wanted the data in, then extract this data into a file, FTP the file to Quantifind, wait for the results, then ingest the results back into our systems for them to be usable. Or I would have had to write code to call Quantifind's API. All of that would have taken days. In contrast, with data sharing, I can set this up in minutes. What's more, now that I have set this up, as new claims are added in the future, they will automatically leverage Quantifind's data service. I view the scores returned by Quantifind and see that two entities in my claims data have a high score for insurance fraud risk. I open up the link returned by Quantifind to read more, and find that this organization has been involved in an insurance crime ring. Looks like that is a claim that we won't be approving. Using the Quantifind data service through the Snowflake data marketplace gives me access to a risk scoring capability that we don't have in house, without having to call custom APIs.
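The three-step sharing flow just described (share a table of body shops, the provider scores it, the scores come back as a share) can be modeled as a simple round trip. This is a toy stand-in: Quantifind's actual models and data are proprietary, so the watchlist and scores below are invented for illustration.

```python
# Toy model of the data-sharing round trip described above. The provider-side
# scoring function is a stand-in for Quantifind's risk models; the watchlist
# and score values are fabricated for illustration only.

def provider_score(shared_rows):
    """Provider side: score each shared row and 'share' the results back."""
    watchlist = {"Acme Auto Body", "Shady Repairs LLC"}  # fictional watchlist
    return [
        {"name": row["name"], "fraud_risk": 0.9 if row["name"] in watchlist else 0.1}
        for row in shared_rows
    ]

# Consumer side: step 1, the table of body shops from submitted claims.
claims_shops = [{"name": "Acme Auto Body"}, {"name": "Main St Collision"}]

# Steps 2-3: provider scores the shared table and shares scores back.
scores = provider_score(claims_shops)
high_risk = [s["name"] for s in scores if s["fraud_risk"] > 0.5]
print(high_risk)
```

The point of the demo is that with data sharing, both directions of this exchange happen in place, with no file extracts or custom API integration.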
For a provider like Quantifind, this drives new leads and monetization opportunities. Now that I have identified potentially fraudulent claims, let's move on to the second part. I would like to share this fraud risk information with the agents who sold the corresponding policies. To do this, I need two things. First, I need to find the agents who sold these policies. Then I need to share with these agents the fraud risk information that we got from Quantifind. But I want to share it such that each agent only sees the fraud risk information corresponding to claims for policies that they wrote. To find agents who sold these policies, I need to look up our Salesforce data. I can find this easily within Insureco's internal data exchange. I see there's a listing with Salesforce data. Our Sales Ops team has published this listing, so I know it's our officially blessed data set, and I can immediately access it from my Snowflake account without copying any data or having to set up ETL. I can now join Salesforce data with my claims to identify the agents for the policies that were flagged to have fraudulent claims. I also have the Snowflake account information for each agent. Next, I create a secure view that joins on an entitlements table, such that each agent can only see the rows corresponding to policies that they have sold. I then share this directly with the agents. This share contains the secure view that I created, with the names of the auto body shops and the fraud risk identified by Quantifind. Finally, let's move on to the third and last part. Now that I have detected potentially fraudulent claims, I'm going to move on to building a dashboard that our executives have been asking for. They want to see how Insureco compares against other auto insurance companies on key metrics, like total claims paid out for the auto insurance line of business nationwide. I go to the Snowflake data marketplace and find SNL U.S. Insurance Statutory Data from S&P.
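The entitlements join behind the secure view described above boils down to filtering rows by who is asking. The sketch below emulates that behavior in plain Python; table shapes, names, and the entitlements mapping are illustrative assumptions, not Insureco's actual schema.

```python
# Minimal sketch of the entitlements-based secure view described above:
# each agent sees only the fraud-risk rows for policies they sold.
# Row shapes and the entitlements mapping are illustrative assumptions.

fraud_rows = [
    {"policy_id": 1, "shop": "Acme Auto Body", "fraud_risk": 0.9},
    {"policy_id": 2, "shop": "Shady Repairs LLC", "fraud_risk": 0.8},
]
entitlements = {"alex": {1}, "jordan": {2}}  # agent -> policies they wrote

def secure_view(rows, entitlements, current_user):
    """Emulate a secure view: keep only rows the caller is entitled to."""
    allowed = entitlements.get(current_user, set())
    return [r for r in rows if r["policy_id"] in allowed]

print([r["shop"] for r in secure_view(fraud_rows, entitlements, "alex")])
```

In Snowflake the same effect comes from joining the view against an entitlements table keyed on the current user or account, so a single shared view serves every agent with a different slice of the data.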
This data is included with Insureco's existing subscription with S&P, so when I request access to it, S&P can immediately share this data with me through Snowflake data sharing. I create a virtual database from the share, and I'm ready to query this data, no ETL needed. And since this is a virtual database pointing to the original data in S&P's Snowflake account, I have access to the latest data as it arrives in S&P's account. I see that the SNL U.S. Insurance Statutory Data from S&P has data on assets, premiums earned and claims paid out by each US insurance company in 2019. This data is broken up by line of business and geography, and in many cases goes beyond the data that would be available from public financial filings. This is exactly the data I need. I identify a subset of comparable insurance companies whose net total assets are within 20% of Insureco's, and whose lines of business are similar to ours. I can now create a Snowsight dashboard that compares Insureco against similar insurance companies on key metrics, like net earned premiums and net claims paid out in 2019 for auto insurance. I can see that while we are below median on net earned premiums, we are doing better than our competition on total claims paid out in 2019, which could be a reflection of our improved claims handling and fraud detection. That's a good insight that I can share with our executives. In summary, the Data Cloud enabled me to do three key things. First, seamlessly find data and data services that I need to do my job, be it an external data service like Quantifind, an external data set from S&P, or internal data from Insureco's data exchange. Second, get immediate live access to this data. And third, control and manage collaboration around this data. With Snowflake, I can mobilize data and data services across my business ecosystem in just minutes. >> Thank you Prasanna. Now I want to turn our focus to extensible data pipelines.
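The peer-group selection in Prasanna's benchmarking demo (companies whose net total assets fall within 20% of Insureco's, compared against the median) reduces to a simple filter and aggregate. The figures below are made up purely to illustrate the arithmetic.

```python
from statistics import median

# Sketch of the peer-group selection from the benchmarking demo above:
# keep insurers whose net total assets are within 20% of Insureco's, then
# compare a metric against the peer median. All figures are fabricated.

INSURECO_ASSETS = 100.0  # hypothetical, in $B

industry = [
    {"name": "A", "assets": 95.0,  "auto_claims_paid": 4.1},
    {"name": "B", "assets": 118.0, "auto_claims_paid": 5.0},
    {"name": "C", "assets": 150.0, "auto_claims_paid": 7.2},  # outside the band
    {"name": "D", "assets": 82.0,  "auto_claims_paid": 3.8},
]

comparable = [p for p in industry
              if abs(p["assets"] - INSURECO_ASSETS) <= 0.2 * INSURECO_ASSETS]
benchmark = median(p["auto_claims_paid"] for p in comparable)
print([p["name"] for p in comparable], benchmark)
```

In the demo this filter runs as SQL directly against the live shared S&P data, so the peer set updates automatically as new statutory filings arrive.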
We believe there are two different and important ways of making Snowflake's platform highly extensible. First, by enabling teams to leverage services or business logic that live outside of Snowflake, interacting with data within Snowflake. We do this through a feature called external functions, a mechanism to conveniently bring data to where the computation is. We announced this feature for calling regional endpoints via Amazon API Gateway in June, and it's currently available in public preview. We are also now in public preview supporting Azure API Management, and will soon support Google Cloud API Gateway and AWS private endpoints. The second extensibility mechanism does the converse. It brings the computation to Snowflake, to run closer to the data. We will do this by enabling the creation of functions and procedures in SQL, Java, Scala or Python, ultimately providing choice based on the programming language preference for you or your organization. You will see Java, Scala and Python available through private and public previews in the future. The possibilities enabled by these extensibility features are broad and powerful. However, our commitment to being a great platform for data engineers, data scientists and developers goes far beyond programming language. Today, I am delighted to announce Snowpark, a family of libraries that will bring a new experience to programming data in Snowflake. Snowpark enables you to write code directly against Snowflake in a way that is deeply integrated into the languages I mentioned earlier, using familiar concepts like DataFrames. But the most important aspect of Snowpark is that it has been designed and optimized to leverage the Snowflake engine, with its main characteristics and benefits: performance, reliability, and scalability with near zero maintenance. Think of the power of declarative SQL statements available through a well known API in Scala, Java or Python, all of this against data governed in your core data platform.
We believe Snowpark will be transformative for data programmability. I'd like to introduce Sri to showcase how our fictitious insurance company Insureco will be able to take advantage of the Snowpark API for data science workloads. >> Thanks Christian, hi, everyone. I'm Sri Chintala, a product manager at Snowflake focused on extensible data pipelines. And today, I'm very excited to show you a preview of Snowpark. In our first demo, we saw how Insureco could identify potentially fraudulent claims. Now, for all the valid claims, InsureCo wants to ensure they're providing excellent customer service. To do that, they put in place a system to transcribe all of their customer calls, so they can look for patterns. A simple thing they'd like to do is detect the sentiment of each call, so they can tell which calls were good and which were problematic. They can then better train their claim agents for challenging calls. Let's take a quick look at the work they've done so far. InsureCo's data science team used Snowflake's external functions to quickly and easily train a machine learning model in H2O AI. Snowflake has direct integrations with H2O and many other data science providers, giving Insureco the flexibility to use a wide variety of data science libraries, frameworks or tools to train their model. Now that the team has a custom trained sentiment model tailored to their specific claims data, let's see how a data engineer at Insureco can use Snowpark to build a data pipeline that scores customer call logs using the model hosted right inside of Snowflake. As you can see, we have the transcribed call logs stored in the customer call logs table inside Snowflake. Now, as a data engineer trained in Scala, and used to working with systems like Spark and Pandas, I want to use familiar programming concepts to build my pipeline. Snowpark solves for this by letting me use popular programming languages like Java or Scala.
It also provides familiar concepts and APIs, such as the DataFrame abstraction, optimized to leverage and run natively on the Snowflake engine. So here I am in my IDE, where I've written a simple Scala program using the Snowpark libraries. The first step in using the Snowpark API is establishing a session with Snowflake. I use the session builder object and specify the required details to connect. Now, I can create a DataFrame for the data in the transcripts column of the customer call logs table. As you can see, the Snowpark API provides native language constructs for data manipulation. Here, I use the select method provided by the API to specify the column names to return, rather than writing "select transcripts" as a string. By using the native language constructs provided by the API, I benefit from features like IntelliSense and type checking. Here you can see some of the other common methods that the DataFrame class offers, like filter, join and others. Next, I define a get sentiment user defined function that will return a sentiment score for an input string by using our pre-trained H2O model. From the UDF, we call the score method that initializes and runs the sentiment model. I've built this helper into a Java file, which, along with the model object and license, are added as dependencies that Snowpark will send to Snowflake for execution. As a developer, this is all programming that I'm familiar with. We can now call our get sentiment function on the transcripts column of the DataFrame and write back the results of the scored transcripts to a new target table. Let's run this code and switch over to Snowflake to see the scored data, and also all the work that Snowpark has done for us on the back end. If I do a select star from scored logs, we can see the sentiment score of each call right alongside the transcript. With Snowpark, all the logic in my program is pushed down into Snowflake.
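The select-then-apply-a-UDF pattern shown in the demo can be mimicked in a few lines of plain Python. The toy DataFrame class and the keyword-based scoring rule below are stand-ins of our own (the real Snowpark API connects to a live Snowflake account and pushes this work into the engine, and the demo's model is a trained H2O model, not a keyword list).

```python
# Pure-Python mock of the Snowpark pattern shown above: a DataFrame-style
# select plus a per-row sentiment UDF. ToyDataFrame and get_sentiment are
# stand-ins; they are not the Snowpark API or the demo's H2O model.

class ToyDataFrame:
    def __init__(self, rows):
        self.rows = rows

    def select(self, *cols):
        return ToyDataFrame([{c: r[c] for c in cols} for r in self.rows])

    def with_column(self, name, fn, src):
        return ToyDataFrame([{**r, name: fn(r[src])} for r in self.rows])

def get_sentiment(text: str) -> float:
    # Stand-in scoring rule in place of the pre-trained model.
    negative = {"angry", "delay", "broken"}
    return -1.0 if any(w in negative for w in text.lower().split()) else 1.0

logs = ToyDataFrame([{"id": 1, "transcript": "shipment broken and late"},
                     {"id": 2, "transcript": "quick and friendly service"}])
scored = logs.select("transcript").with_column("score", get_sentiment, "transcript")
print([r["score"] for r in scored.rows])
```

The design point the demo makes is that these typed, composable method calls are translated into SQL and executed inside Snowflake, rather than pulling the rows out to the client as this mock does.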
I can see in the query history that Snowpark has created a temporary Java function to host the pre-trained H2O model, and that the model is running right in my Snowflake warehouse. Snowpark has allowed us to do something completely new in Snowflake. Let's recap what we saw. With Snowpark, Insureco was able to use their preferred programming language, Scala, and use the familiar DataFrame constructs to score data using a machine learning model. With support for Java UDFs, they were able to run a trained model natively within Snowflake. And finally, we saw how Snowpark executed computationally intensive data science workloads right within Snowflake. This simplifies Insureco's data pipeline architecture, as it reduces the number of additional systems they have to manage. We hope that extensibility with Scala, Java and Snowpark will enable our users to work with Snowflake in their preferred way while keeping the architecture simple. We are very excited to see how you use Snowpark to extend your data pipelines. Thank you for watching, and with that, back to you, Christian. >> Thank you Sri. You saw how Sri could utilize Snowpark to efficiently perform advanced sentiment analysis. But of course, if this use case was important to your business, you'd want to fully automate this pipeline and analysis. Imagine being able to do all of the following in Snowflake. Your pipeline could start far upstream of what you saw in the demo, by storing your actual customer care call recordings in Snowflake. You may notice that this is new for Snowflake; we'll come back to the idea of storing unstructured data in Snowflake at the end of my talk today. Once you have the data in Snowflake, you can use our streams and tasks capabilities to call an external function to transcribe these files. To simplify this flow even further, we plan to introduce a serverless execution model for tasks, where Snowflake can automatically size and manage resources for you.
After this step, you can use the same serverless task to execute sentiment scoring of your transcript, as shown in the demo, with incremental processing as each transcript is created. Finally, you can surface the sentiment score either via Snowsight, or through any tool you use to share insights throughout your organization. In this example, you see data being transformed from a raw asset into a higher level of information that can drive business action, all fully automated, all in Snowflake. Turning back to Insureco, you know how important data governance is for any major enterprise, but particularly for one in this industry. Insurance companies manage highly sensitive data about their customers, and have some of the strictest requirements for storing and tracking such data, as well as managing and governing it. At Snowflake, we think about governance as the ability to know your data, manage your data and collaborate with confidence. As you saw in our first demo, the Data Cloud enables seamless collaboration, control and access to data via the Snowflake data marketplace. And companies may set up their own data exchanges to create similar collaboration and control across their ecosystems. In future releases, we expect to deliver enhancements that create more visibility into who has access to what data, and provide usage information of that data. Today, we are announcing a new capability to help Snowflake users better know and organize your data. This is our new tagging framework. Tagging in Snowflake will allow user defined metadata to be attached to a variety of objects. We built a broad and robust framework with powerful implications. Think of the ability to annotate warehouses with cost center information for tracking, or think of annotating tables and columns with sensitivity classifications. Our tagging capability will enable the creation of company-specific business annotations for objects in Snowflake's platform.
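The tagging idea described above (user-defined key/value metadata on warehouses, tables, and columns, queryable for tracking) can be sketched as a small metadata store. This models the concept only; in Snowflake, tags are managed through SQL DDL, and the object names below are invented for illustration.

```python
# Hypothetical sketch of the tagging framework described above: attach
# user-defined key/value metadata to named objects, then query by tag.
# Object names and tag keys are illustrative; Snowflake's real tags use SQL DDL.

tags: dict[str, dict[str, str]] = {}

def set_tag(obj: str, key: str, value: str) -> None:
    """Attach (or overwrite) a tag on an object."""
    tags.setdefault(obj, {})[key] = value

def objects_with_tag(key: str, value: str) -> list[str]:
    """Find every object carrying a given tag value, e.g. for cost tracking."""
    return sorted(o for o, t in tags.items() if t.get(key) == value)

set_tag("warehouse.analytics", "cost_center", "claims")
set_tag("table.customers.ssn", "sensitivity", "pii")
set_tag("table.claims", "cost_center", "claims")
print(objects_with_tag("cost_center", "claims"))
```

The reverse lookup is the useful half: once cost-center or sensitivity tags exist, auditing "everything marked PII" or "everything billed to claims" becomes a single query.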
Another key aspect of data governance in Snowflake is our policy-based framework, where you specify what you want to be true about your data, and Snowflake enforces those policies. We announced one such policy earlier this year, our dynamic data masking capability, which is now available in public preview. Today, we are announcing a great complementary policy to achieve row level security. To see how row level security can enhance InsureCo's ability to govern and secure data, I'll hand it over to Artin for a demo. >> Hello, I'm Artin Avanes, Director of Product Management for Snowflake. As Christian has already mentioned, the rise of the Data Cloud greatly accelerates the ability to access and share diverse data, leading to greater data collaboration across teams and organizations. Controlling data access with ease and ensuring compliance at the same time is top of mind for users. Today, I'm thrilled to announce our new row access policies that will allow users to define various rules for accessing data in the Data Cloud. Let's check back in with InsureCo to see some of these in action and highlight how those work with other existing policies one can define in Snowflake. Because InsureCo is a multinational company, it has to take extra measures to ensure data across geographic boundaries is protected to meet a wide range of compliance requirements. The InsureCo team has been asked to segment what data sales team members have access to based on where they are regionally. In order to make this possible, they will use Snowflake's row access policies to implement row level security. We are going to apply policies for three InsureCo sales team members with different roles. Alice, an executive, must be able to view sales data from both North America and Europe. Alex, a North America sales manager, will be limited to access sales data from North America only. And Jordan, a Europe sales manager, will be limited to access sales data from Europe only. 
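For reference, the dynamic data masking capability mentioned above (in public preview at the time) is defined in SQL along these lines; policy, role and column names are illustrative:

```sql
-- Sketch of a masking policy: full values for one role, masked
-- output for everyone else.
CREATE MASKING POLICY mask_email AS (val STRING) RETURNS STRING ->
  CASE
    WHEN CURRENT_ROLE() IN ('SALES_EXECUTIVE') THEN val
    ELSE '*** MASKED ***'
  END;

ALTER TABLE sales MODIFY COLUMN customer_email
  SET MASKING POLICY mask_email;
```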
As a first step, the security administrator needs to create a lookup table that will be used to determine which data is accessible based on each role. As you can see, the lookup table has the role and its associated region, both of which will be used to apply the policies that we will now create. Row access policies are implemented using standard SQL syntax to make it easy for administrators to create policies like the one our administrator is looking to implement. And similar to masking policies, row access policies leverage our flexible and expressive policy language. In this demo, our admin is going to create a row access policy that uses the role and region of a user to determine what row level data they have access to when queries are executed. When user queries are executed against a table protected by such a row access policy, Snowflake's query engine will dynamically generate and apply the corresponding predicate to filter out rows the user is not supposed to see. With the policy now created, let's log in as our sales users and see if it worked. Recall that as a sales executive, Alice should have the ability to see all rows from North America and Europe. Sure enough, when she runs her query, she can see all rows, so we know the policy is working for her. You may also have noticed that some columns are showing masked data. That's because our administrator is also using our previously announced data masking capabilities to protect these data attributes for everyone in sales. When we look at our other users, we should notice that the same columns are also masked for them. As you see, you can easily combine masking and row access policies on the same data sets. Now let's look at Alex, our North American sales manager. When Alex runs the same query as Alice, row access policies leverage the lookup table to dynamically generate the corresponding predicates for this query. The result is we see that only the data for North America is visible. 
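The lookup table and policy described in this demo could be sketched as follows; table, role and column names are illustrative assumptions, not taken from the demo:

```sql
-- Lookup table mapping each role to the regions it may see.
CREATE TABLE region_lookup (role_name STRING, region STRING);
INSERT INTO region_lookup VALUES
  ('SALES_EXEC',       'NORTH_AMERICA'),
  ('SALES_EXEC',       'EUROPE'),
  ('SALES_MGR_NA',     'NORTH_AMERICA'),
  ('SALES_MGR_EUROPE', 'EUROPE');

-- Policy returning TRUE only for rows whose region is allowed
-- for the current role.
CREATE ROW ACCESS POLICY sales_region_policy AS (region STRING)
  RETURNS BOOLEAN ->
  EXISTS (
    SELECT 1 FROM region_lookup l
    WHERE l.role_name = CURRENT_ROLE()
      AND l.region = region
  );

-- Attach the policy; the query engine now injects the predicate
-- automatically into every query against the table.
ALTER TABLE sales ADD ROW ACCESS POLICY sales_region_policy ON (region);
```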
Notice too that the same columns are still masked. Finally, let's try Jordan, our European sales manager. Jordan runs the query and the result is only the data for Europe, with the same columns also masked. Earlier you were introduced to masking policies; today you saw row access policies in action. And similar to our masking policies, row access policies in Snowflake will be a first-class capability, integrated seamlessly across all of Snowflake: everywhere you expect it to work, it does. If you're accessing data stored in external tables, semi-structured JSON data, or building data pipelines via streams, or plan to leverage Snowflake's data sharing functionality, you will be able to implement complex row access policies for all these diverse use cases and workloads within Snowflake. And with Snowflake's unique replication feature, you can instantly apply these new policies consistently to all of your Snowflake accounts, ensuring governance across regions and even across different clouds. In the future, we plan to demonstrate how to combine our new tagging capabilities with Snowflake's policies, allowing advanced auditing and enforcement of those policies with ease. And with that, let's pass it back over to Christian. >> Thank you, Artin. We look forward to making these new tagging and row level security capabilities available in private preview in the coming months. One last note on the broad area of data governance. A big aspect of the Data Cloud is the mobilization of data to be used across organizations. At the same time, privacy is an important consideration to ensure the protection of sensitive, personal or potentially identifying information. We're working on a set of product capabilities to simplify compliance with privacy-related regulatory requirements, and simplify the process of collaborating with data while preserving privacy. 
Earlier this year, Snowflake acquired a company called CryptoNumerics to accelerate our efforts on this front, including the identification and anonymization of sensitive data. We look forward to sharing more details in the future. We've just shown you three demos of new and exciting ways to use Snowflake. However, I want to also remind you that our commitment to the core platform has never been greater. As you move workloads on to Snowflake, we know you expect exceptional price performance and continued delivery of new capabilities that benefit every workload. On price performance, we continue to drive performance improvements throughout the platform. Let me give you an example comparing an identical set of customer-submitted queries that ran both in August of 2019 and August of 2020. If I look at the set of queries that took more than one second to compile, 72% of those improved by at least 50%. When we make these improvements, execution time goes down. And by implication, the required compute time is also reduced. Based on our pricing model to charge for what you use, performance improvements not only deliver faster insights, but also translate into cost savings for you. In addition, we have two new major announcements on performance to share today. First, we announced our search optimization service during our June event. This service, currently in public preview, can be enabled on a table-by-table basis, and is able to dramatically accelerate lookup queries on any column, particularly those not used as clustering columns. We initially support equality comparisons only, and today we're announcing expanded support for searches on values, such as pattern matching within strings. This will unlock a number of additional use cases, such as analytics on logs data for performance or security purposes. This expanded support is currently being validated by a few customers in private preview, and will be broadly available in the future. 
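As a sketch, enabling the search optimization service and the kinds of lookups it targets could look like this; table and column names are illustrative:

```sql
-- Enable the service on a per-table basis.
ALTER TABLE event_logs ADD SEARCH OPTIMIZATION;

-- Equality lookups on non-clustering columns are supported today:
SELECT * FROM event_logs WHERE session_id = 'a1b2c3';

-- Pattern matching within strings is the expanded support being
-- announced, useful for analytics on logs data:
SELECT * FROM event_logs WHERE message LIKE '%login failed%';
```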
Second, I'd like to introduce a new service that will be in private preview in a future release: the query acceleration service. This new feature will automatically identify and scale out parts of a query that could benefit from additional resources and parallelization. This means that you will be able to realize dramatic improvements in performance. This is especially impactful for data science and other scan-intensive workloads. Using this feature is pretty simple. You define a maximum amount of additional resources that can be recruited by a warehouse for acceleration, and the service decides when it would be beneficial to use them. Given enough resources, a query over a massive data set can see orders-of-magnitude performance improvement compared to the same query without acceleration enabled. In our own usage of Snowflake, we saw a common query go 15 times faster without changing the warehouse size. All of these performance enhancements are extremely exciting, and you will see continued improvements in the future. We love to innovate and continuously raise the bar on what's possible. More importantly, we love seeing our customers adopt and benefit from our new capabilities. In June, we announced a number of previews, and we continue to roll those features out and see tremendous adoption, even before reaching general availability. Two of those announcements were the introduction of our geospatial support and policies for dynamic data masking. Both of these features are currently in use by hundreds of customers. The number of tables using our new geography data type recently crossed the hundred thousand mark, and the number of columns with masking policies also recently crossed the same hundred thousand mark. This momentum and level of adoption since our announcements in June is phenomenal. I have one last announcement to highlight today. 
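Since the query acceleration service is only in private preview here, any syntax is an assumption; a configuration sketch of how a warehouse-level cap on additional resources might be expressed:

```sql
-- Hypothetical sketch: opt a warehouse into query acceleration and
-- cap the additional resources it may recruit. Parameter names are
-- assumptions, not confirmed syntax from this announcement.
ALTER WAREHOUSE analytics_wh SET
  ENABLE_QUERY_ACCELERATION = TRUE
  QUERY_ACCELERATION_MAX_SCALE_FACTOR = 8;
```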
In 2014, Snowflake transformed the world of data management and analytics by providing a single platform with first-class support for both structured and semi-structured data. Today, we are announcing that Snowflake will be adding support for unstructured data on that same platform. Think of the ability to use Snowflake to store, access and share files. As an example, would you like to leverage the power of SQL to reason through a set of image files? We have a few customers as early adopters and we'll provide additional details in the future. With this, you will be able to leverage Snowflake to mobilize all your data in the Data Cloud. Our customers rely on Snowflake as the data platform for every part of their business. However, the vision and potential of Snowflake is actually much bigger than the four walls of any organization. Snowflake has created the Data Cloud, a data-connected network with a vision where any Snowflake customer can leverage and mobilize the world's data. Whether it's data sets or data services from traditional data providers or SaaS vendors, our marketplace creates opportunities for you and raises the bar in terms of what is possible. As examples, you can unify data across your supply chain to accelerate your time and quality to market. You can build entirely new revenue streams, or collaborate with a consortium on data for good. The possibilities are endless. Every company has the opportunity to gain richer insights, build greater products and deliver better services by reaching beyond the data that it owns. Our vision is to enable every company to leverage the world's data through seamless and governed access. Snowflake is your window into this data network and into this broader opportunity. Welcome to the Data Cloud. (upbeat music)

Published Date : Nov 19 2020



Joshua Spence, State of West Virginia | AWS Public Sector Online


 

>> Narrator: From around the globe, it's theCUBE with digital coverage of AWS Public Sector Online, brought to you by Amazon Web Services. >> Hi and welcome back to theCUBE's coverage of AWS Summit Online. I'm Stu Miniman, your host for this segment. Always love when we get to talk to the practitioners in this space, and of course at AWS Public Sector there's a broad diversity of backgrounds and areas, everything from government to education and the like, so really happy we were able to bring in Joshua Spence, the Chief Technology Officer for West Virginia's Office of Technology. Josh, thank you so much for joining us. >> I appreciate the invitation to be here. >> All right so, technology for an entire state, quite a broad mandate. When you talk about that, maybe give our audience a little bit of your background and the role of your organization for West Virginia. >> Yeah, absolutely, so in the public sector space, especially at state government, we're involved in a myriad of services for government to the citizens, and from a central IT perspective, we're seeking to provide those enterprise services and support structures to keep those costs controlled and efficient, and be able to enable these agencies to service the citizens of the state. >> Excellent, maybe just to talk about the role of the state versus more local. From a technology standpoint, how many applications do you manage? How many people do you have? Is everything that you do in the Cloud, or do you also have some data centers? Just give us a little thumbnail sketch if you would, of what's under that umbrella. >> Sure, absolutely, I think you'll see at the state level we have... 
We typically administer a lot of the federal programs that come down through funding, ranging from health and human resources to environmental protection, to public safety; you've got just a broad spectrum of services that are being provided at the state level. And so the central office, the Office of Technology, services approximately 22,000 state employees and their ability to carry out those services to the citizens. And then of course you have local government, like in the State of West Virginia with 55 counties, and then your local municipalities. The interesting thing though in public sector is, from the citizen's perspective, government is government, whether it's local, state or federal. >> Yeah, that's such a good point, and right now of course there's a strain on everything. With the global pandemic, services from the public sector are needed more than ever. Maybe help us understand a little bit, things like work from home and unemployment, I expect, may require a shift and some reaction from your office. So tell us what's been happening in your space the last few months. >> Yeah absolutely, well, the first part, you get the work from home piece, right? West Virginia, although the last state to have a confirmed positive test of COVID-19, we were in a little bit of a position of advantage, as we were watching what was happening across the world, across the country, and so we didn't hesitate to react in West Virginia, and through great leadership here, we shut down the state quickly. We put protections in place to help slow and prevent the spread of COVID. And to do that though with the government facilities, government services, we had to be able to enable a remote workforce and do so very quickly, at a scale that no one ever anticipated having to do. COOP plans for the most part expected just picking up from the location you're working at to go work at another centralized location. 
No one really ever thought, "Well, we wouldn't be able to all congregate to work." So that created our first challenge that we had to respond to. The second challenge was then, how do we adjust government services to interface with citizens from a remote perspective, and in addition to that, a surge of need. When you look at unemployment all across the country, the demand became exponentially larger than what was ever experienced. The systems were not equipped to take on that type of load, and we had to leverage technology to very quickly adapt to the situation. >> Yeah, I'd love you to drill in a little bit on that technology piece. Obviously you think about certain services, if I had them just in a data center and I needed to all of a sudden ramp up, do I run into capacity issues? Can I actually get to that environment? How do I scale that up fast? The promise of Cloud always has been, well, I should be able to react immediately, I have in theory infinite scale. So what has been your experience? Are there certain services that you say, "Oh boy, I'm so glad I have them in the Cloud," and have there been any struggles with being able to react to what you're dealing with? >> Well yeah, the struggles have absolutely been there, and it's been a combination of not just on-premise infrastructure, but then legacy infrastructure. And that's what we saw when we were dealing with the unemployment surge here in West Virginia. Just from a citizen contact perspective, being able to answer the phone calls that were coming in, it was overwhelming, and what we found is we unfortunately had a number of disparate phone systems, whether supporting the central office or the regional offices, some of which were legacy. We therefore had no visibility on the metrics; we didn't even know how many calls were actually coming in a day. 
When you compound that, the citizens are just trying to find answers; well, they're not going to just call the numbers you provide, they're going to call any numbers. So then they're now also calling other agencies seeking assistance, just 'cause they're wanting help, and that's understandable. So we needed to make a change, and we needed to make that change very quickly. And that's when we looked to see if a solution in the Cloud might be a better option. And would it enable us to not only correct the situation, but get visibility and scale? What could we do so extremely quickly, because the time to value was what was really important. >> Excellent, so my understanding is that you were not using any cloud-based contact center before this hit. >> We were in only... There were some other agencies that had some hosted contact center capabilities, but on a small scale. This was the first large project around a Cloud Contact Center, and needing to run the project from the go-live decision on a Friday at one o'clock to rolling over the first call center on the following Monday at 6:00 p.m. was a speed that we had never seen before. >> Oh boy yeah, I think back, I worked in telecom back in the 90s, and you talk about a typical deployment, you used to measure in months, and you're talking more like hours for getting something up and running. And there's not only the technology, there's the people, the training, all these sorts of things there. So yeah, tell us, how did you come to such a fast decision and deployment? Walk us through a little bit of that. >> Sure, so we went out to the market and asked several providers to give us their solution proposals, and to do so very quickly 'cause we knew we had to move quickly, and then upon evaluation of the options before us, we made our selection, indicated that selection, and started working with both the Cloud provider and the integrator to build out a phased-approach deployment of the technology. 
Phase one was, hey, let's get everybody calling the same 800 number as best as we can. And then where we can't have the 800 number be that focal point, let's forward all other phone numbers to the same call center. Because before we were able to bring in the technology, our only solution was to put more people on the phones, and we had physical limitations there. So we went after the Amazon contact center with our integrator, Smartronix, and we were able to do so very quickly and get that phase one change in place, which then allowed us to decide what was phase two and what was going to be phase three. >> Josh, you've got some background in cybersecurity. I guess in general, there's been a raised awareness and need for security with the pandemic going on; bad actors are still out there. I've talked to some who, when they're rolling out their call centers, they need to worry about... Sounds like you've got everything in your municipality, so you might not need to worry about government per se, but I guess if you could touch on security right now, for what's happening in general, and anything specific about the contact center that you need, to make sure that people working from home were following policy and procedure, not breaking any regulations and guidelines. >> Yeah, absolutely, I think the most important piece of the puzzle when you're looking at security is understanding, so it's always a question of risk, right? If you're seeking first and foremost to put in security with the understanding that now, hey, we've put it in, we don't have to think about it anymore, that's not the answer, 'cause you're not going to stop all risk, right? You have to weigh it and understand which risks you need to address, so that's a really important piece. The second part that we've looked at in the current situation with the response to COVID is, not only do we see threat actors trying to take advantage of the circumstances, right? 
Because more people are working from home, there are fewer computers on the hard network, right? They're now either VPN-ing in or they are just simply outside the network, and there may be limited visibility that the central agency or the central entity has on those devices. So what do you do? We've got to extend that protection out to the account and to the devices themselves, and not worry so much about the boundary, right? 'Cause the boundary now is, for all intents and purposes, the accounts. But then I think an additional piece of the puzzle right now is to look at how important technology is to your organization, look at the role it's performing in enabling your ability to continue to function remotely, (indistinct) the risk associated with those devices becoming compromised or unavailable. So, we see that the most important aspects of our security changes were to extend that protection as best we could, and to push out education to the users on the changing threats that might be coming their way. >> Yeah, it's fascinating to think if this pandemic had hit 10 years ago, you wouldn't have the capability of this. I'm thinking back to like, well, we could forward numbers to a certain place and do some cascading, but the Cloud Contact Center absolutely wasn't available. Have you had a chance to think about, now that you have this capability, what this means as we progress down the road? Do you think you'll be keeping a hybrid model, or stay fully Cloud once people are moving back to the offices? >> Well, I definitely think that the near future is a hybrid model, and we'll see where it goes from there. There are workloads without a doubt that are better served putting them in the Cloud, giving you that on-demand scalability. I mean, if we look at what a project like this would have required, had we had to procure equipment, install equipment, there was just no time to do that. 
So having the services, the capability, whether it's microservices or VMs or whatever, all available, that just need to be turned on and configured to be used, there's a lot of power there. And as government seeks to develop digital government, right? How do we transition from providing services where citizens stand in line to doing it online? I think Cloud's going to continue to play a key piece in that. >> Yeah, I'm wondering if you could speak a little bit to the financial impact of this. So typically you think about, I roll out a project, it's budgeted, we write it off over a certain number of years. Cloud of course by its nature has flexibility, and I'm paying for what I'm using, but this was something that was unexpected. So how were you... Did you have oversight on this? Was there additional funding put out? How was that financial discussion happening? >> Yeah, so that's a big piece of the puzzle. When a government entity like a state is under a state of emergency, the good thing is there's processes and procedures that we leverage regularly to understand how we're going to fund those response activities. And then the Federal Government plays a role also in responding to states of emergency, that enables the state and local government to have additional funding to cover during the state of emergency. So that makes things a little easier to start, in a sense. I think the bigger challenge is going to be what comes in the following years after COVID, because obviously tax revenues are going to take a hit across the board. And what does that mean to government budgets, that then in turn are going to have to be adjusted? So the advantage of Cloud services and other technology services, where they're sold under that OPEX model, is they do give states flexibility in ways to scale services, scale solutions as needed, and give us a little bit more flexibility in adjusting for budget challenges. 
>> Yeah, it's been fascinating to watch. We know how the speed of adoption in technology tends to run at a certain pace. The last three months, there are definitely certain technologies where there's been massive acceleration, like you've discussed. So, I'm wondering, you've had the modernization; things like the unemployment claims were the immediate requirement that you needed, but have there been other pieces, other use cases and applications where this modernization, this leverage of cloud technologies, is impacting you today, or other things that you see a little bit down the path? >> Yeah, I think it's... We're going to see a modernization of government applications designed to interface directly with the citizen, right? So we're going to want to be able to give the citizen the opportunity, whether it's on a smartphone, a tablet, or a computer, to interface with government, whether it's communications to inquire about a service, or to get support around a service, or to file paperwork around a service. We want to enable that digital interface, and so that's going to be a big push, and it's going to be amplified. There was already a look towards that, right? With the smart cities, smart states and some of the initiatives there, but what's happened with COVID basically has forced the issue of not being able to be physically together; well, how do you do it using technology? So if there was a silver lining in an awful situation that we have with COVID, one might be that we've been able to stretch our use of technology to better serve the citizens. >> Well, great, really really impressive story. Josh, I want to give you the final word. Just what advice would you give your peers dealing with things in a crisis, and any other advice you'd have in general about managing and leveraging the Cloud? 
>> I think in a closing comment, one of the most important aspects that can be considered is having that translation capability: talking to the business element, the government service component, and understanding what they're trying to achieve, what their purpose or their mission is, and then being able to tie it back to the technology in a way where all parties, all stakeholders, understand their roles and responsibilities to make that happen. Unfortunately, I think what happens too often is, on the business side or the non-technical side of the equation, they see the end state, but they don't truly understand their responsibilities to get to the end state. And it's definitely a partnership, and the better that partnership's understood at the start, the more successful the project's going to be at getting there under budget and on time. >> Well, thank you so much for joining us, best of luck with the project, and please stay safe. >> Thank you for having me. >> All right, stay tuned for more coverage from AWS Public Sector Online. I'm Stu Miniman, and thank you for watching theCUBE. (soft music)

Published Date : Jun 30 2020



Willie Tejada, IBM | IBM Think 2020


 

>> Announcer: From theCUBE Studios in Palo Alto and Boston, it's theCUBE, covering IBM Think, brought to you by IBM. >> Welcome back, I'm Stu Miniman, and this is theCUBE's coverage of IBM Think 2020. It is the digital experience online, so rather than all gathering together in San Francisco, we're getting to talk to everybody where they are, and we're happy to bring back one of our CUBE alums; it's actually been a little while since we've had him on the program. Willie Tejada, who is the general manager and Chief Developer Advocate at IBM. Willie, so great to see you, thanks for joining us. >> Hey Stu, thanks for having me, it's good to be back, it's been too long. 
So definitely some different dynamics as we actually talk about this new normal that we're in, and everybody utilizing digital vehicles to reach the people that they want to talk to. >> All right. So I know last time we talked with you a big topic was Call for Code, and IBM has done different initiatives there, and you've got a very relevant one, so bring our audience up to speed on this year's Call for Code, what that involves. >> Yeah Stu, thanks very much. The Call for Code initiative inside of IBM is now in its third year. We did it in 2018, the concept was fairly simple, developers always love to solve problems and we said what if we challenge the 24 million developers to come and take a crack at society's most pressing issues? And in the first two years we focused on natural disasters, all you had to do was take a look at the coverage prior to the COVID-19 pandemic and you had wildfires in Australia and in Northern California where my home actually is based, and you had tsunamis and hurricanes and floodings. And so the ability for us to actually bring the developer community to bear on some of society's most pressing issues was really kind of the concept upfront, and IBM would help by bringing subject matter experts together, making available tools, because we're thinking let's solve the problem exactly how we solve it when we apply business. You get an expert on supply chain, you get a user of supply chain, you bring them together, developers build these things. Well, not all the time can you get an expert in disaster, a first responder, so we actually created a lot of that fusion from there. Then, over the course of the first two years, we've had over 210,000 developers participate across 168 nations with over 8,000 applications submitted. So, wildly successful. Now this year, Stu, to your point, we had something that we could really bear down on very heavily.
We announced that we were taking on climate change, kind of laddering up from natural disasters to look at the root, climate change, and then the COVID pandemic came about. We said let's tilt people towards that and it's been a tremendous outcome. We've asked the developers to focus on three areas: crisis communications, you may have been one of those folks that's on a conference call or emails that haven't been responded to, on wait times forever, so those communication systems, how do we fortify them and get them to scale? The second area is remote learning, really look at where all the students are actually these days and what they're doing there, not just teaching but basically how do you give them entertainment, how do you actually provide them some level of social interaction. And the third area with the COVID focus is community collaboration. We really want to try to make sure people's spirits are up and that really does require everybody leaning in, and again you look at the news and tremendous examples of community collaboration and where technology can help scale or broaden that, that's really where Call for Code actually comes into play. >> Yeah, maybe it would be helpful, tell us a little bit about some of the previous winners, what have been some of the outcomes, more than just rallying the community, what resources is IBM putting into this? >> So one of the things that makes it different is rather than it just being a regular hack, this is really a process inside of IBM that we've developed over the course of these last three years. Where the challenge is one piece, the Call for Code challenge, we also developed and rolled out and committed another $25 million, with Call for Code we committed $30 million over that five years and in the following year we recognized the need to see the solutions actually get deployed. And so we committed another $25 million for the fortification, testing, scaling and deployment.
So when you win a Call for Code Global Challenge, you also get IBM's support around deployment, fortification, some counseling in relation to everything from development, to architecture, to even the business side of it. In our first year, we had a team called Project Owl actually come out and win, and one of the first things that happens especially in hurricanes or these natural disasters, communication grids go down. So they developed a solution that could quickly establish an ad hoc communication grid, and anybody that had a typical cell phone could connect up to that Wi-Fi grid or that grid very similar to the way they actually connect into a Starbucks Wi-Fi system. And it would allow both the first responders to understand where folks were at, and then establish communications. So that was in the first year. The second year was a team called Prometeo, and in October we selected them as the Global Challenge winner, and they were a solution that was built by a firefighter, a nurse and a developer with this concept roughly of how do they monitor essentially a firefighter's situation when they're actually in the heat of battle to best allocate the resources to the people who need them most. Understanding a little bit about their environment, understanding a little bit about the health that's actually happening with the firefighter, and again it's one of those scenarios where you couldn't just build it from the firefighter's side, you couldn't just build it from the nurse's side, and a developer would have a difficult time building it just by themselves. So bringing those people together, a nurse, a firefighter and a developer, and creating a system like this is really, really what we're aspiring to do. Now, they won in October, and in February, they're in a field deployment actually doing real testing in the field in Catalonia, Spain.
So, we've seen it first-hand exactly what happens when they win, the Project Owl team actually did some hurricane deployment testing in Puerto Rico, that of course IBM helped fortify, and built connections with the Puerto Rico government so that we're really seeing essentially the challenge winner see this type of deployment. >> Willie, I love it, it's even better than a punch line I could do, what do you get when you combine a firefighter, a nurse and a developer? The answer is you can positively impact the world, so phenomenal there. >> Absolutely. >> I'm curious, where does open source play into this activity? We were just covering Red Hat Summit last week, of course, lots of open source, lots of community engagement, in hearing how they are helping communities engage, and of course open source has been a big rallying point, everything from 3D printing to other projects in the community. So where does open source fit into this initiative? >> 100%. The amazing part about activating developers these days is just the broad availability of the technologies. And it's certainly stimulated by the community aspect of open source, this idea that they democratize access to technology, and it's really community-centric, and folks can start building very quickly on open source technologies that are material. So number one, all the things that are part of Call for Code and what we actually deployed are based on open source technologies. Now, again one of the differences is how do we actually make those winners and those technology sets become real? And becoming real requires this idea of how do you actually build durable, sustainable solutions.
So each of the five winners every year have the opportunity essentially to go through the Linux Foundation and have their solutions established as a project, with the idea roughly that people can download it and fork it, people can actually fortify it, but it's available to the whole globe, everybody in the world, to help build upon and fortify and continue to innovate on. So open source is right at the root of it, not just from the technology side, but from the ecosystem and community side that open source stands for. And so we've seen as an example the formal establishment of Project Owl's software being open sourced by the Linux Foundation. And it's been fantastic to see both the participation actually there and see how people are basically deriving it and using it exactly as we intended to see in the vision of Call for Code, and Code and Response. >> Well, that's phenomenal. We're huge fans of the community activity, of course open source is a great driver of everything you were talking about. So I'm curious, one of the things we're all looking at is where people are spending their time, how this global pandemic is impacting what people are doing. There's plenty of memes out there on social media, it doesn't mean that you all of a sudden are going to learn a new language, or learn to play an instrument because you have lots of time at home, but I'm curious from what you've seen so far, compared to previous years, how's the engagement? What's the numbers? What can you share? Is there a significant difference or change from previous years? >> Yeah, there's so much good will, I would say, that's been brought about around the world in what we're seeing around the COVID-19 pandemic. The way I would describe it is the rate of submissions and interest that we've seen is 3x above what we've seen in the prior years. Now keep in mind, we're not even actually at the area where we see the most.
So keep in mind, right now we tried to accelerate the time to highlight some of these solutions. So April 27th will be the first deadline for COVID-19 challenge, and we'll highlight some of the solutions on May 5th. Now, when we think about it basically from that standpoint we typically actually see people waiting until that submission timeframe. And so when you think of it from that standpoint you really oftentimes see this acceleration, right? At that submission deadline. But we're already seeing 3x what we've seen in the past in terms of participation just because of the amount of good will that's actually out there, and what people are trying to do in solving these problems. And developers, they're problem solvers overall, and putting out those three areas, community crisis communications, remote learning, and community collaboration, they'll see examples of what they see on the news and think they can actually do something better, and then express that in software. >> That's excellent. So, Willie, one of the things, we've been talking to leaders across the industry and one of things we don't know is how much of what we are going through is temporary, and how much will actually be long term. I'm curious if there's any patterns you're seeing out there, discussions you're having with developers, you talk about remote work, you talk about communication. Are there anything that you've seen so far that you think that this will fundamentally just alter the way things might've been in the past going forward? >> Developers are always actually looking for this idea of how they actually sharpen their skills, their craft, new languages that they actually know, new platforms, whatever it actually might be. And I think in the past there was probably, even from our perspective, this balance of face-to-face versus digital, and a mix of both, but I think what we'll find going forward is a more robust mix of that. 
Because you can't deny the power of reach that actually happens when you actually move something digital. And then I would say that think about how you at theCUBE have refined your studios in dealing with an interview like mine, it gets better and better, you refine it. How you do an online workshop, and how you do a workshop on an Istio service mesh, you get better and better about how you engage from real time, hands-on keyboard experience, in what information, what chat, what community pieces do you put on the screen to stimulate these pieces, I think in general the industry and our company and our teams have gotten better even in this short amount of time. I think those things will be long-lasting. I think we're all humans, so I think they still want the physical face-to-face and community interaction and camaraderie that comes from being in that physical energy, but I do think it'll be complemented by the things that we refined through the digital delivery that's been refined during this situation. >> All right, so Willie, final thing of course, this week, the winners are all being announced, how about people that are watching this and say this sounds phenomenal, how do I learn more, if I didn't get to participate in some of the initial pieces what should I be looking for? And how can I contribute and participate even after Think? >> Well, number one keep in mind that the challenge for the year will still actually go all the way to October, and submissions for that whole challenge will go to February first. So that's number one. But number two, going to developer.ibm.com/callforcode you'll find all the resources, we have these things called starter kits that help developers actually get up and going very quickly, finding out more information about both the competition structure, and really how you become part of the movement, go there basically and answer the call. >> Awesome.
Love it, Willie, thanks so much, pleasure to catch up with you and definitely looking forward to seeing all the outcomes that the community is putting forth to focus on this really important challenge. >> Hey Stu, thanks for having me, I really appreciate it. >> All right, be sure to check out thecube.net for all the coverage from IBM Think, and the backlog, we had Willie on a couple years ago when he was on the program, and check out where we will be later in the year. I'm Stu Miniman, and as always, thanks for watching. (gentle music)

Published Date : May 5 2020


Bill Welch, IronNet | Cube Conversation, April 2020


 

>> Woman: From theCUBE studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE conversation. >> Hello everyone, welcome to this special CUBE conversation, I'm John Furrier, host of theCUBE here in Palo Alto, California, doing a remote interview in our quarantine studio where we're getting the stories out there and sharing the content during the time of crisis when we're sheltering in place, as we get through this and get through to the other side of the new normal. It's not necessarily normal, but it'll certainly create some normalcy around some of the new work at home, but also cybersecurity. I want to bring in a special guest who's going to talk with me about the impact of COVID-19 on cybersecurity, work at home, work in general, and also business practices. So, welcome Bill Welch, who's the CEO of IronNet, who has taken over the helm of operations with General Keith Alexander, CUBE alumni as well, former NSA and former Cyber Command, who's now leading a new innovative company called IronNet, which is deploying something really clever, but also something really realistic around cybersecurity. So, Bill, thanks for joining me. >> Hey John, thanks for being with you.
You guys are in the cyber intelligence, cybersecurity area, what's your take on all of this and what are you hearing? >> Well I agree with you, John, I think that this is the invisible enemy, and as you know, right now with that going on, there's going to be adversaries that are going to take advantage of it. You see right now in some of the nation states where they're looking at opportunities to use this, to go after other countries, maybe just to test and see what their vulnerabilities are. You're seeing some activity overseas with nation states where they're looking at some of the military incursions, they're thinking about possible weaknesses with this invisible enemy. You know, it's affecting us in so many ways, whether it's economic, financial, our healthcare system, our supply chains, whether it's the supplies and groceries that we get to our people, so these are all challenging times that the adversaries are not going to just sit back and say oh well, you're in a crisis right now, we'll wait for the crisis to be alleviated, we are now going to take advantage of it. >> And certainly the death toll is also the human impact as well, this is real world. This is something that we can have a longer conversation on, at the time when we get more data in, and we'll certainly want to track this new kind of digital warfare paradigm, whether it's bio and or packets in cybersecurity, but the real impact has been this at scale exposure of problems and opportunities. For instance, IT folks were telling me that they underprovisioned their VPN access, now it's 100% everyone's at home. That's a disruption, that's not a hurricane, that's not a flood, this is now a new kind of disruption to their operations. Other folks are seeing more hacks and more surface area, more threats from the outside getting hit. This has certainly impacted the cyber side, but also people's anxiety at home.
How are you guys looking at this, what are you guys doing, what's going on at IronNet right now around cyber and COVID-19? >> Yeah, and what we're seeing right now is that our customers are seeing increasing awareness of their employees to understand what is going on around them, and one of the things we formed the company around was the ability to assist enterprises of all sizes to collectively defend against threats that target their industries. We believe that collective defense is our collective responsibility. And it can't be just about technology, it's about some of the IT systems you talked about, being able to leverage them together. When I look at our top energy companies that we partner with, these individuals have great operators, but when you think about it, they have operators just for their company. What we're able to do within our environment, in our Iron Dome, is bring all that together. We bring the human element and the IT element in order to help them drive positive outcomes for their industries. >> I want to dig into that because I think one of the things that I'm seeing coming out of this trend, post-pandemic, is going to be the real emphasis on community. You're seeing people realizing, whether it's doing Zoomification or Cubification, doing CUBE interviews and zooming and talking, I think you're going to see this element of I could do better, I can contribute either to society or to the collective as a whole, and I think this collective idea you guys have with Iron Dome is very relevant, because I think people are going to say wow, if I contribute, we might not have this kind of crisis again. This is something that's new, you guys have been on this collective thing with Iron Dome for a long time. I think this is pretty clever and I think it's going to be very relevant. Can you explain the Iron Dome collective intelligence paradigm and the vision? >> Yeah, absolutely.
And just to back up a little bit, what I will tell you is that what we observed, as far as the problem statement, was that cyber is an element of national power, and people are using it to achieve their political, economic, and military objectives, and now what you're seeing is are there other ways, cause while this COVID-19 may or may not have been anything as far as a bio-weapon, now others will see, well here's a way to bring down a country or an economy or something like that. We're also seeing that the cyber attacks are getting more and more destructive, whether it's WannaCry or NotPetya, we're also seeing the toolkits being more advanced, we're seeing how slow the response is by their cyber tools, so what we've looked at is we said wait, stop defending in isolation. That's what enterprises have been doing, they've been defending in isolation, no sharing, no collective intelligence as I would call it. And what we've been able to do is bring the power of those people to come together to collectively defend when something happens. So instead of having one security operation center defending a company, you can bring five or six or seven to defend the entire energy grid, this is one example. And over in Asia, we have the same thing. We have one of our largest customers over there, they have 450 companies, so if you think about it, 450 companies times the number of SOC operators that they have in the security operation centers, you can think about the magnitude of the arms, the warriors, that we can bring to bear to attack this crisis.
Speed, quality-- >> Yeah, it's at cloud scale, network speed, you get the benefit of all these operators, individuals that have incredible backgrounds in offensive and defensive operator experience including the people that we have, and then our partnership with either national governments or international governments that are allies, to make sure that we're sharing that collective intelligence so they can take action, because what we're doing is we're making sure that we analyze the traffic, we're bringing the advanced analytics, we're bringing the expert systems, and we're bringing the experts there, both at a technology level and also a personnel level. >> You know, General Alexander, one of the architects behind the vision here, who's obviously got a background in the military, NSA, Cyber Command, et cetera, uses the analogy of an airport radar, and I think that's a great metaphor because you need to have real-time communications on anything going on, and telemetry on what's landing or approaching, almost like landing that airplane, so he uses that metaphor and he says if there's no communication, or it lags, you don't have it. He was using that example. Do you guys still use that example or can you explain further this metaphor? >> Absolutely, and I think another example that we have seen some of our customers, our prospects and partners really embrace is this concept of an immersive visualization, almost gaming environment. You look at what is happening now where people have the opportunity, even at home because of COVID-19, my teenage boys are spending way too much time probably on Call of Duty and Fortnite and that, but apply that same logic to cyber.
Apply that logic to where you could have multiple players, multiple individuals, you can invite people in, you can invite others that might have subject matter expertise, you might be able to go and invite some of the IT partners that you have, whether it's other companies to come in that are partners of yours, to help solve a problem and make it visualized, immersive, and in a gaming environment, and that is what we're doing in our Iron Dome. >> I think that's compelling and I've always loved the vision of abstracting away gaming to real world problems because it's very efficient, those kids are great, and the new Call of Duty came out so everyone's-- >> And they're also the next generation, they're the next generation of individuals that are going to be taking over security for us. So this is a great mindset... Cause this is something they already know, something they're already practicing, and something they're experts at, and if you look at how the military is advancing, they've gone from having these great fighter pilots to putting people in charge of drones. It's the same thing with us, is that possibility of having a cyber avatar go and fight that initiative is going to be something that we're doing. >> I think you guys are really rethinking security, and this brings up my next topic I want to get your thoughts on, is this crisis of COVID-19 has really highlighted old and new, and it's really kind of exposed, again at scale because it's an at scale problem, everyone's been forced to shelter in place and it exposes everything from deliveries to food to all the services, and you can see what's important, what's not in life, and it exposes kind of the old and new. So you have a lot of old antiquated, outdated systems and you have new emerging ones. How do you see those two sides of the street, old and new, what's emerging, what's your vision on what you think will be important post-pandemic?
>> Well, I think the first thing is the individuals that are really the human element. So one, we have to make sure that individuals at home have all the things that they require in order to be successful and drive great outcomes, because I believe that the days of going into an office and sitting in a cube, yes, that is the old norm, but the new norm is individuals who are either at home or on a plane, on a train, on a bus, or wherever they might be, practicing and being a part of it. So I think that the one thing we have to get our arms around is the ability to invite people into this experience no matter where they are and meet them where they are, so that's number one. Number two is making sure that those networks are available and that they're high speed, right? That we are making sure that they're not being used necessarily for streaming of Netflix, but being able to solve the cyber attacks. So there might be segmentation, there might be, as you said, this collective intelligence sharing that'll go across these entities. >> You know, it's interesting, Bill, you're bringing up something that we've been riffing on and I want to just expose that to you and kind of think out loud here. You're mentioning the convergence of physical, hybrid, 100% virtual as it kind of comes together. And then community and collective intelligence, we just talked about that, certainly relevant, you can see more movement on that side and more innovation. But the other thing that comes out of the woodwork and I want to get your thoughts on this is the old IoT Edge, Internet of things. Because if you think about that convergence of operational technologies and Internet technologies, IT, you now have that world that's been going on for a while, so obviously, you got to have telemetry on physical devices, you got to bring it into IT, so as you guys have this Iron Dome, collective, holistic view of things, it's really physical and virtual coming together.
The virtualization-- >> It's all the above, it's all the above. The whole concept of IoT and OT, whether it's a device that's sitting in a solar wind panel or whether it's a device that's sitting in your network, it could be the human element, or it could actually be a device, that is where you require that cyber posture, that ability to do analytics on it, the ability to respond. And the ability to collectively see all of it, and that goes to that whole visualization I talked to you about, is being able to see your entire network, you can't protect something if you can't see it, and that's something that we've done across IronDome, and with our customers and prospects and with IronDefense, so it's something that absolutely is part of the things we're seeing in the cyber world. >> I want to get your reaction to some commentary that we've been having, Dave Vellante and myself on the team, and we were talking about how events have been shut down, the physical space, the venues where they have events. Obviously, we go to a lot of events with theCUBE, you know that. So, obviously that's kind of our view, but when you think about Internet of things, you think about collective intelligence with community, whether it's central to gamification or Iron Dome that you're innovating on, as we go through the pandemic, there's going to be a boomerang back, we think, to the importance of the physical space, cause at some point, we're going to get back to the real world, and so, the question is what operational technology, what version of learnings do we get from this shelter in place that gets applied to the physical world? This is the convergence of physical and virtual. We see that as a big wave, want to get your reaction to that.
>> I absolutely agree with you, I think that we're going to learn some incredible lessons in so many different ways, whether it's healthcare, financial, but I also believe, as you said, that convergence of physical and virtual will become almost one and the same. We will see individuals that will leverage the physical when they need to and leverage the virtual when they need to. And I think that that's something that we will see more and more of, companies looking at how they actually respond and support their customer base. You know, some might decide to have more individuals on an at-home basis, to support a continuity of operations, some might decide that we're going to have some physical spaces and not others, and then we're going to leverage physical IT and some virtual IT, especially the cloud infrastructures are going to become more and more valuable, as we've seen within our IronDome infrastructure. >> You know, we were riffing the other day in the remote interviews, theCUBE is going virtual, and we were joking that Amazon Web Services was really created through the trend of virtualization. I mean, VMware and the whole server virtualization created the opportunity for Amazon to abstract and create value. And we think that this next wave is going to be this pandemic has woken us up to this remote, virtual contribution, and it might create a lot of opportunities, for us, for instance, virtual CUBE, for virtual business. I'm sure you, as the CEO of IronNet, are thinking about how you guys recover post-pandemic, is it going to be a different world, are you going to have a mix of virtual, digital, integrated into your physical, whether it's how you market your products and engage customers to solving technical problems. This is a new management challenge, and it's an opportunity if you get it right, it could be a headwind or a tailwind, depending on how you look at it.
So I want to get your thoughts on this virtualization post-pandemic management structure, management philosophy, obviously, dislocation with spatial economics, I get that, and we may not always go to work in the office as much, but beyond that, management style, posture, incentives. >> Yes, I think that there's a lot of things unpacked there. I mean, one is it is going to be about a lot more communication. You know, I will tell you that since we have gone into this quarantine, we're holding weekly all-hands, every Friday, all in a virtual environment. I think that the transparency will be even more. You know, one of the things that I'm most encouraged by and inspired by is the productivity. I will tell you, getting access to individuals has gotten easier and easier for us. The ability to get people into this virtual environment. They're not spending hours upon hours on commuting or flying on planes or going different places, and it doesn't mean that that won't be an important element of business, but I think it's going to give time back to individuals to focus on the most important priorities for the companies that they're driving. So this is an opportunity, I will tell you, our productivity has increased exponentially. We've seen more and more meetings, with more and more access to very high-level individuals, who have said we want to hear what you guys are doing, and they have the time to do it now instead of jumping on a plane and wasting six hours and not being productive. >> It's interesting, it's also a human element too, you can hear babies crying, kids playing, dogs barking, you'd kind of laugh and chuckle in the old days, but now this is a humanization piece of it, and that should foster real communities, so I think...
Obviously, we're going to be watching this virtualization of communities and collective intelligence, and congratulations, I think IronDome, and IronDefense, which is obviously the core product, I think your IronDome is a paradigm that is super relevant, you guys are visionaries on this and I think it's turning out to be quite the product, so I want to congratulate you on that. Thanks for-- >> Thank you, John. Thanks for your time today and stay safe. >> Bill, thanks for joining us and thanks for your great insights on cyber and COVID-19, and we'll follow up more on this trend of bio weaponry and the trajectory of how cyber and scale cloud is going to shape how we defend and take offense in the future, on how to defend our country and make the world a safer place. I'm John Furrier, you're watching theCUBE here in our remote interviews in our quarantine studio in Palo Alto, thanks for watching. (lively music)

Published Date : Apr 16 2020


Zeus Kerravala, ZK Research | CUBE Conversation, March 2020


 

>> Narrator: From theCUBE Studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation. >> Hey, welcome to this CUBE Conversation. I'm John Furrier, Host of theCUBE here in Palo Alto, California, for a special conversation with an industry analyst who travels a lot, does a lot of events, covers the industry up and down, economically and also some of the big trends, to talk about the at-scale problems that COVID-19 is causing. Whether it's a lot of people working at home for the first time, or at-scale network problems, the pressure points that this is exposing for what I would call the mainstream world is a great topic. Zeus Kerravala, Founder and Principal Analyst at ZK Research, friend of theCUBE. Zeus, welcome back to theCUBE. Good to see you remotely. We're, as you know, working in place here. I came to the studio with our quarantine crew here, to get these stories out, 'cause they're super important. Thanks for spending the time. >> Hi, yeah, thanks, it's certainly been an interesting last couple months, and we're probably, maybe halfway through this, I'm guessing. >> Yeah, and no matter what happens, the new reality of this current situation, or mess, or whatever you want to call it, is the fact that it has awakened what we industry insiders have been seeing for a long time: big data, new networks, cloud native, micro-services, at-scale, scale-out infrastructure. The stuff that we've been covering is now exposed for the whole world to see on a Petri dish called COVID-19, going, "Wow, this world has changed." This is highlighting the problems. Can you share your view of what are some of those things that people are experiencing for the first time, and what's your reaction to it all? >> Yeah, it's been kind of an interesting last couple of months when I talk to CIOs about how they're adapting to this.
You know, when, before I was an analyst, John, I was actually in corporate IT. I was part of a business continuity plans group for companies and the whole definition of business continuity's changed. When I was in corporate IT, we thought of business continuity as being able to run the company with a minimal set of services for a week or a month or something like that. So, for instance, I was in charge of corporate technology and financial services firm and we thought, "Well, if we have 50 traders, can we get by with 10", right? Business continuity today is I need to run the entire organization with my full staff for an indefinite period of time, right? And that is substantially different mandate than thinking of how I run a minimal set of services to just maintain the bare minimum business operations and I think that's exposed a lot of things for a lot of companies. You know, for instance, I've talked to so many companies today where the majority of their employees have never worked remote. For you or I, we're mobile professionals. We do this all the time. We travel around. We go to conferences. We do this stuff all, it's second nature. But for a lot of employees, you think of contact center agents, in store people, things like that, they've never worked from home before. And so, all of a sudden, the new reality is they've got to set up a computer in the kitchen or their bedroom or something like that and start working from home. Also for companies, they've never had to think about a world where everybody worked remotely, right? So the VP in Infrastructure would have, the cloud apps they have, the remote access technology they have was set up for a subset of users, maybe 10%, maybe 15%, but certainly not everybody. And so now we're seeing corporate networks get crushed. All the cloud providers are getting crushed. I know some of the conferencing companies, the video companies are having to double, triple capacity. 
And so I think to your point when you started this, we would have seen this eventually, with all the data coming in and all the new devices being connected. I think what COVID did was just accelerate it, to the point where it's exposed everything at once. >> Yeah, and you know, being an entrepreneur who's done a lot of corporate legal contracts, the phrase force majeure is legal jargon which means an act of God, so to speak, something you can't control. I think what's interesting, to your point, is that the playbook in IT, even some of the most cutting-edge IT, is forecasting some disruption, but never like this. And also disaster recovery and business continuity, as you mentioned, have been practices, but state of the art has been percentages of the overall. Disaster recovery was a hurricane, or a power outage, so generators, failover sites or regions of your cloud, not a change in a new vector. So the disruption is not disruption. It's an amplification of a new work stream. That's the disruption. That's what you're saying.
And I've talked to some companies where they're bringing in medical doctors, they're bringing in psychologists to talk to their employees, because if you've never worked from home before, it's quite a big difference. The other aspect of this that's underappreciated, I think, is the fact that now our kids are home, right? >> John: Yeah. (laughter) >> So we've got to contend with that. And I know that the first day that the shelter in place order got put in place for the San Francisco area, I believe a new version of Call of Duty had just come out. You know, we had some new shows pop up on Netflix, some series continuances. So now these kids who are at home are bored. They're downloading content. They're playing games. At the same time, we're trying to work and we're trying to do video calls, and we're trying to bring in multiple video streams, or even if they're in classrooms, they're doing Zoom-based calls, that type of thing, or using WebEx or an application like that, and it's played havoc on corporate networks, not just company networks, and so... >> Also Comcast and the providers, AT&T. You've got fiber, which seems to be doing well, but Comcast is throttling. I mean, this is the crisis. It's a new vector of disruption. But how do you develop... >> Yeah, YouTube said that they're going to throttle down. Well, I think what this is is it makes you look at how you handle your traffic. And I think there's plenty of bandwidth out there. And even the most basic home routers are capable of prioritizing traffic, and I think there's a number of IT leaders I've talked to who have actually gone through the steps of helping their employees understand how to use their home networking technology to prioritize video and corporate voice traffic over the top. There are corporate ways to do that.
You know, for instance, Aruba and Extreme Networks both offer these remote access points where you just plug 'em in and you're connected through a corporate network and you pick up all the policies. But even without that, there's ways to do this at home. So I think it's made us rethink networking. Instead of the network being a home network, a WiFi network, a data center network, right, the Internet, we need to think about this grand network as one network, and then how we control the quality of a cloud app from the person's home to the cloud, all the way back to the company, because that's what drives user experience. >> I think you're highlighting something really important. And I just want to illustrate and have you double down on more commentary on this, because I think, you know, the one network where we're all part of one network concept shows that the perimeter's dead. That's what we've been saying about the cloud, but also if you think about just the crimes of opportunity that are happening. You've got the hacker and hacking situation. You have all kinds of things that are impacted. There's crimes of opportunity, and there's disruption that's happening because of the opportunity. Can you just share more and unpack that concept of this one network? What are some of the things that businesses are thinking about now? You've got the VPN. You've got collaboration tools that sometimes are half-baked. I mean, I love Zoom and all, but Zoom is crashing too. I mean, WebEx is more corporate-oriented, but not really as strong as what Zoom is for the consumer. But still they have an opportunity, but they have a challenge as well. So all these work tools are kind of half-baked too. (laughing) >> Well, the thing is they were never designed... I remember seeing in an interview that Chuck Robbins had on CNBC where he said, "We didn't design WebEx to support everybody working from home". It just, that wasn't even a thought.
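The prioritization Zeus describes earlier, voice and video ahead of bulk downloads at the congested link, boils down to priority queueing. Here is a minimal Python sketch of strict-priority scheduling; the traffic classes and priority values are illustrative, not any router vendor's defaults:

```python
import heapq

# Priority classes for a congested home/office uplink.
# Lower number = served first. The assignments are illustrative,
# not any vendor's configuration.
PRIORITY = {"voice": 0, "video": 1, "web": 2, "bulk": 3}

class PriorityScheduler:
    """Strict-priority scheduler: always dequeue the highest-priority packet."""

    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker preserves FIFO order within a class

    def enqueue(self, traffic_class, packet):
        heapq.heappush(self._heap, (PRIORITY[traffic_class], self._seq, packet))
        self._seq += 1

    def dequeue(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

sched = PriorityScheduler()
sched.enqueue("bulk", "game-download-1")   # kid's game download arrives first
sched.enqueue("voice", "voip-frame-1")     # then a voice frame
sched.enqueue("video", "zoom-frame-1")     # then a video-call frame
sched.enqueue("bulk", "game-download-2")

order = [sched.dequeue() for _ in range(4)]
print(order)
# ['voip-frame-1', 'zoom-frame-1', 'game-download-1', 'game-download-2']
```

In practice a home router applies the same idea through QoS rules or DSCP markings; the point is simply that latency-sensitive flows always dequeue before bulk transfers.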
Nowhere did he ever go to his team and say, build this for the whole world to connect, right? And so, every one of the video providers and the cloud collaboration providers have problems, and I don't really blame them, because this is a dynamic we were never expecting to see. I think you brought up a good point on the security side. A lot has been written about how more and more companies are moving to these online tools, like Zoom and WebEx and applications like that to let us communicate, but what does that mean from a security perspective? Now all of a sudden I have people working from home. They're using these Web-based applications. I remember a conversation I had about six months ago with one of the world's most famous hackers who does nothing but penetration tests now. He said that the cloud-based applications are his number one entry point into companies and to penetrate them, because people's passwords and things like that are fairly weak. So, now we're moving everything to the cloud. We're moving everything to these SaaS apps, right? And so now it's creating more exposure points. We've got phishers out there that are using the term COVID or Corona as a way to get people to click on links they shouldn't. And so now our whole security paradigm has blown up, right? So we used to have this hard shell we could drop around our company. We can't do that anymore. And we have to start worrying about things on an app-by-app basis. And it's caused companies to rethink security, to look at multi-factor authentication tools. I think those are a lot better. We have to look at CASB tools, the cloud access security broker tools, to kind of monitor what apps people are using, what they're not using. Trying to cut down on the use of consumer tools, right? So it's a lot for the security practice to take ahold of too. And you have to understand, even from a company standpoint, your security operations center was built on the concept that they pull all their data into one location.
SOC engineers aren't used to working remotely as well, so that's a big change as well. How do I get my data analyzed and to my SOC engineers when they're working from home? >> You know, we have coined the term Black Friday for the day after, you know, Thanksgiving. >> Thanksgiving, yeah. >> You know, the big surge, but that's a term to describe that first experience of, holy shit, everyone's going to the websites and they all crashed. So we're kind of having that same moment now, to your point earlier. So I want to read a statement that was on Nima Baidey's LinkedIn. He's at Google now, former Pivotal guy. You probably know him. He had a little graphic that says, "Who led the digital transformation of your company?" It's got a poll with a question mark. "A) Your CEO, B) your CTO, or C) COVID-19"? And it circles COVID-19 and that's the image and that's the meme that's going around. But the reality is it is highlighting it and I want to get your thoughts on this next track of thinking around how people may shift their focus and their spend, because, hey, hybrid cloud's great and multicloud's the next big wave, but screw multicloud. If I can't actually fix my current situation, maybe I'll push off some of the multicloud stuff or maybe I won't. So, how do you see the give and get of project prioritization, because I think this is going to wake everyone up. You mentioned security, clearly. >> Yeah, well, I think it has woken everybody up and I think companies now are really rethinking how they operate. I don't believe we're going to stop traveling. I think once this is over, people are going to hop back on planes. I also don't believe that we'll never go back into the office. I think the big shift here though, John, is we will see more acceptance to hire people out of region. I think that it's proved that you don't have to be in the office, right, which will drive these collaboration tools. 
And I also think we'll see less use of desktop phones and more use of video meetings. So now that people are getting used to using these types of tools, I think they're starting to like the experience. And so voice calls get replaced by video calls, and that is going to crush our networks in buildings. So we've got WiFi 6 coming. We've got 5G coming, right. We've got lots of security tools out there. And I think you'll see a lot of prioritization to the network, and that's kind of an interesting thing, because historically, the network didn't get a lot of C-level time, right? It was those people in the basement. We didn't really know what they did. I'm a former network engineer. I was treated that way. (laughing) But most digital organizations now have to come to the realization that they're network-centric, and so the network is the business, and that's not something that anybody's ever put a lot of focus on. But if you look at the building blocks of digital, IoT, mobility, cloud, the writing's been on the wall for a while, and I've written this several times. But you need to pay more attention to the network. And I think we're finally going to see that transition, some prioritization of dollars there. >> Yeah, I will attest you have been very vocal and right on point on that, so props to that. I do want to also double amplify your point. The network drives everything, that's clear. I think the other thing that's interesting, and used to be kind of a cliche in a pejorative way, is the user is the product. I think that's a term that's been applied to Facebook. You know, with your data, you're the product. If you're the product, that's a problem, you know. To describe Facebook as the app that monetizes you, the user. I think this situation has really pointed out that yes, it's good to be the product. The user value and the network are now two end points of the spectrum.
The network's got to be kick ass from the ground up, but the user is the product now, and it should be, in a good way, not exploiting. So I think if you're thinking about user-centric value, how my kid can play Call of Duty, how my family can watch the new episode on Netflix, how I can do a kick ass Zoom call, that's my experience. The network does its job. The application service takes advantage of making me happy. So I think this is interesting, right. So we're getting a new thing here. How real do you think that is? Where are we on the spectrum of that nirvana? >> I think we're rapidly approaching that. I think it's been well documented that 2020 was the year that customer experience became the number one brand differentiator, right. In fact, I think it was actually 2018 that that happened, but Walker and Gartner and a few other companies said it would be 2020. And what that means is that if you're a business, you need to provide exemplary customer service in order to gain share. I think one of the things that was lost in there is that employee experience has to be best in class as well. And so I think a lot of businesses over-rotated the spin away from employee experience to customer experience, and rightfully so, but now they've got to rotate back to make sure their workers have the right tools, have the right services, have the right data, to do their jobs better, because when they do, they can turn around and provide customers a better experience. So this isn't just about training your people to service customers well. It's about making sure people have the right data, the right information to do their jobs, to collaborate better, right. And there's really a tight coupling now between the consumer and the employee, or the customer and the employee. And, you know, Corona kind of exposed that, 'cause it shows that we're all connected, in a way. And the connection of people, whether they're customers or employees, is something that businesses have to focus on.
So I think we'll see some dollars shift back to internal, not just customer-facing. >> Yeah, well, great insight. And, first of all, you're a great CUBE alumnus. But you're also right up the street in California. We're in Palo Alto. You're in San Mateo. You literally could have driven here, but we're sheltering in place. >> We're sheltered in place. >> Great insight and, you know, thanks for sharing that, and I think it's good content for people, you know, to be aware of this. Obviously they're living in it right now, but I think the world is going to be back to business soon, but it's never going to be the same. I think it's digital... >> No, it'll never be the same. I think this is a real watershed point for the way we work and the way we treat our employees and our customers. I think you'll see a lot of companies make a lot of change. And that's good for the whole industry, 'cause it'll drive innovation. And I think we'll have some innovation come out of this that we never saw before. >> Quick final word for the folks that are on this big wave that's happening. It's reality. It's the current situation now. What's your advice for them as they get on their surfboard, so to speak, and ride this wave? What's your advice to them? >> Yeah, I think use this opportunity to find those weak points in your networks and find out where the bottlenecks are, because I think having everybody work remotely exposes a lot of problems in processes, and where a lot of the hiccups happen. But I do think my final word is invest in the network. I think a lot of the networks out there have been badly under-invested in, which I think is why people get frustrated when they're in stadiums or hotels or casinos. I think the world is shifting. Applications and people are becoming network-centric. And if those don't work, nothing works. And I think that's really been proven over the last couple months.
If our networks can't handle the traffic and our networks can't handle what we're doing, nothing works. >> You know, you and I could do a podcast show called "No Latency"... >> (mumbles) so it'll be good. >> Zeus, thanks for coming on. I appreciate taking the time. >> No problem, John. >> Stay safe. And I want to follow up with you and get a check in further down the road, in a couple days or maybe next week, if you can. >> Yeah, looking forward to it. >> Thanks a lot. Okay, I'm John Furrier here in Palo Alto Studios doing the remote interviews, getting the quick stories that matter, help you out, and (mumbles) great guest there. Check out ZK Research, a great friend of theCUBE, cutting edge, knows the networking. This is an important area. The network, the users' experience is critical. Thanks for coming and watching today. I'm John Furrier. Thanks for watching. (lighthearted music)

Published Date : Mar 31 2020


Vertica Database Designer - Today and Tomorrow


 

>> Jeff: Hello everybody, and thank you for joining us today for the Virtual VERTICA BDC 2020. Today's breakout session is titled "VERTICA Database Designer Today and Tomorrow." I'm Jeff Healey, from VERTICA Product Marketing, and I'll be your host for this breakout session. Joining me today is Yuanzhe Bei, Senior Technical Manager from VERTICA Engineering. But before we begin, (clearing throat) I encourage you to submit questions or comments during the virtual session. You don't have to wait, just type your question or comment in the question box below the slides and click Submit. As always, there will be a Q&A session at the end of the presentation. We'll answer as many questions as we're able to during that time; any questions we don't address, we'll do our best to answer offline. Alternatively, visit the VERTICA forums at forum.vertica.com to post your questions there after the session. Our engineering team is planning to join the forums to keep the conversation going. Also, a reminder that you can maximize your screen by clicking the double arrow button at the lower right corner of the slides. And yes, this virtual session is being recorded and will be available to view on demand this week. We will send you a notification as soon as it's ready. Now let's get started. Over to you, Yuanzhe. >> Yuanzhe: Thanks Jeff. Hi everyone, my name is Yuanzhe Bei, I'm a Senior Technical Manager in the VERTICA Server R&D Group. I run the query optimizer, catalog and the disaggregated engine team. Very glad to be here today to talk about the "VERTICA Database Designer Today and Tomorrow". This presentation will be organized as follows: I will first refresh some knowledge about VERTICA fundamentals, such as Tables and Projections, which will bring us to the questions, "What is Database Designer?" and "Why do we need this tool?".
Then I will take you through a deep dive into the Database Designer, or as we call it, DBD, and see how DBD's internals work. After that I'll show you some exciting DBD improvements we have planned for the 10.0 release, and lastly, I will share with you some of the DBD future roadmap we have planned next. As most of you should already know, VERTICA is built on a columnar architecture. That means data is stored column-wise. Here we can see a very simple example of a table with four columns, and as many of you may also know, a table in VERTICA is a virtual concept. It's just a logical representation of data, which means users can write SQL queries that reference the table names and columns, just like in other relational database management systems, but the actual physical storage of data is called a Projection. A Projection can reference a subset, or all, of the columns of its anchor table, and must be sorted by at least one column. Each table needs at least one superprojection, which references all the columns of the table. If you load data into a table with no projection, an automatic superprojection will be created, which will be arbitrarily sorted by the first couple of columns in the table. As you can imagine, even though such a projection can be used to answer any query, the performance is not optimized in most cases. A common practice in VERTICA is to create multiple projections on the same table, containing different sets of columns and sorted in different ways. When a query is sent to the server, the optimizer will pick the projection that can answer the query in the most efficient way. For example, here, let's say you have a query that selects columns B, D, C and is sorted by B and D; the third projection will be ideal, because the data is already sorted, so you can save the sorting cost while executing the query. Basically, when you choose the design of a projection, you need to consider four things. First and foremost, of course, the sort order.
Data already sorted in the right way can actually benefit quite a lot of query operations, like Order By, Group By, Analytics, Merge Join, Predicates and so on. The selected column group is also important, because the projection must contain all the columns referenced by your workload queries. If the projection is missing even one column, it cannot be used for a particular query. In addition, VERTICA is a distributed database and allows projections to be segmented based on the hash of a set of columns, which is beneficial if the segmentation matches the join keys or group keys. And finally, the encoding of each column is also part of the design, because data sorted in a different way may completely change the optimal encoding for each column. This example only shows the benefit of the first two, but you can imagine the rest are also important. But even with that, it doesn't sound that hard, right? Well, I hope you change your mind when you see this; at least I do. These machine-generated queries really beat me. It would probably take an experienced DBA hours to figure out which projections can benefit these queries, not even mentioning there could be hundreds of such queries in the regular workloads in the real world. So what can we do? That's why we need DBD. DBD is a tool integrated in the VERTICA server that can help the DBA perform an analysis of their workload queries, table schema and data, and then automatically figure out the most optimized projection design for their workload. In addition, DBD is also a sophisticated tool that can be customized by the user, by setting a lot of parameters, objectives and so on. And lastly, DBD has access to the optimizer, so DBD knows what kind of attributes the projection needs to have in order for the optimizer to benefit from them.
DBD has been there for years, and I'm sure there are plenty of materials available online to show you how DBD can be used in different scenarios, whether to achieve query optimization or load optimization, whether it's a comprehensive design or an incremental design, whether it's dumping a deployment script for manual deployment later, or letting DBD do the auto deployment for you, and many other options. I'm not planning to talk about this today; instead, I will take the opportunity today to open this black box, DBD, and show you what exactly hides inside. DBD is a complex tool and I have tried my best to summarize the DBD design process into seven steps: Extract, Permute, Prune, Build, Score, Identify and Encode. What do they mean? Don't worry, I will show you step by step. The first step is Extract: extract interesting columns. In this step, DBD parses the design queries, figures out the operations that can benefit from the potential projection design, and extracts the corresponding columns as interesting columns. So Predicates, Group By, Order By, Join Conditions, and analytics are all interesting columns to the DBD. As you can see from these three simple sample queries, DBD extracts the interesting column sets on the right. Some of these column sets are unordered. For example, for the green one, Group By a1 and b1, the DBD extracts the interesting column set and puts it in an unordered set, because data sorted either by a1 first or by b1 first can benefit from this Group By operation. Some of the other sets are ordered, and the best example is here, the Order By clause on a2 and b2; obviously you cannot sort by b2 and then a2. These interesting column sets will be used as seeds, to be extended into actual projection sort order candidates. The next step is Permute: once DBD extracts all the seeds, it will enumerate sort orders using them. How does DBD do that? Let's start with a very simple example.
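The Extract step can be sketched as follows. This is a minimal sketch under assumed, simplified inputs: each design query is already parsed into its clauses, and the clause keys (`predicates`, `group_by`, `order_by`) and helper name are hypothetical, not DBD's real internals.

```python
# Collect "interesting" column sets from a pre-parsed query: unordered for
# GROUP BY (either sort order helps), ordered for ORDER BY (order is fixed),
# and single-column sets for predicates.
def extract_interesting(query):
    sets = []
    for col in query.get("predicates", []):
        sets.append(("ordered", (col,)))  # a predicate column is a 1-column seed
    if query.get("group_by"):
        sets.append(("unordered", frozenset(query["group_by"])))
    if query.get("order_by"):
        sets.append(("ordered", tuple(query["order_by"])))
    return sets

q = {"predicates": ["d1"], "group_by": ["a1", "b1"], "order_by": ["a2", "b2"]}
for kind, cols in extract_interesting(q):
    print(kind, sorted(cols) if kind == "unordered" else list(cols))
```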
Here you can see DBD can enumerate two sort orders by extending d1 with the unordered set a1, b1, deriving two sort order candidates: d1, a1, b1, and d1, b1, a1. These sort orders can benefit queries with a predicate on d1, and also benefit queries with Group By a1, b1 when d1 is constant. With the same idea, DBD will try to extend the other seeds with each other and populate more sort order permutations. You can imagine how many of them there could be; based on how many queries you have in the design, there can be hundreds of sort order candidates. That brings us to the third step, which is Prune. This step limits the candidate sort orders so that the design won't run forever. DBD uses a very simple capping mechanism: it ranks all the candidates by length, and only a certain number of the sort orders with the longest length will be moved forward to the next step. Now we have all the sort order candidates that we want to try, but to know whether a sort order candidate will actually benefit the optimizer, DBD needs to ask the optimizer. So before that happens, this step has to build those projection candidates in the catalog. This step generates the projection DDLs around the sort orders and creates these projections in the catalog. These projections won't be loaded with real data, because that takes a lot of time; instead, DBD will copy over the statistics from existing projections to these projection candidates, so that the optimizer can use them. The next step is Score: scoring with the optimizer. Now that the projection candidates are built in the catalog, DBD can send the workload queries to the optimizer to generate a query plan. The optimizer will return the query plan, and DBD will go through the query plan and investigate whether certain benefits are being achieved.
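The Permute and Prune steps above can be sketched together. This is only an illustration of the shape of the algorithm; the helper names and the cap value are made up for the example, and real DBD enumeration is more involved.

```python
from itertools import permutations

# Permute: extend a seed column with every ordering of an unordered
# interesting set. Prune: rank candidates by length and keep only the
# longest ones (DBD's simple capping mechanism).
def permute(seed, unordered_set):
    return [seed + list(p) for p in permutations(unordered_set)]

def prune(candidates, cap):
    ranked = sorted(candidates, key=len, reverse=True)
    return ranked[:cap]

cands = permute(["d1"], ["a1", "b1"])  # d1,a1,b1 and d1,b1,a1
cands += [["d1"], ["a1"]]              # shorter candidates lose to longer ones
print(prune(cands, 2))
```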
The benefits list has been growing over time as the optimizer adds more optimizations. Let's say in this case, because the projection candidate can be sorted by b1 and a1, it is eligible for the Group By Pipe benefit. Each benefit has a preset score. The overall benefit score across all design queries will be aggregated and then recorded for each projection candidate. We are almost there. Now we have the total benefit score for each projection candidate, derived from the workload queries. Now the job is easy: you can just pick the sort order with the highest score as the winner. Here we have the winner: d1, b1 and a1. Sometimes you need to find more winners, because the chosen winner may only benefit a subset of the workload queries you provided to the DBD. So in order for the rest of the queries to also benefit, you need more projections. In this case, DBD will go to the next iteration and, let's say, find another winner, d1, c1, to benefit the workload queries that cannot be benefited by d1, b1 and a1. The number of iterations, and thus the winner outcome, really depends on the design objective the user sets. It can be load optimized, which means only one superprojection winner will be selected; or query optimized, where DBD tries to create as many projections as needed to cover most of the workload queries; or a somewhat balanced objective in the middle. The last step is to decide the encoding for each projection column for the projection winners. Because the data is sorted differently, the encoding benefits can be very different from the existing projection. So choosing the right projection encoding design can reduce the disk footprint by a significant factor, and it's worth the effort to find the best encoding. DBD picks the encoding based on actually sampling the data and measuring the storage footprint. For example, in this case, the projection winner has three columns, and say each column has a few encoding options.
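The Score and iterative winner-picking steps can be sketched like this. The benefit scores and candidate names below are made up; only the overall shape (sum preset scores per candidate, pick the top, repeat on the still-uncovered queries) follows the description above.

```python
# scores[c][q] = benefit score of candidate c for query q (0 = no benefit).
# Winners are picked iteratively so later winners cover queries the earlier
# winners did not benefit, mimicking DBD's query-optimized objective.
def pick_winners(scores, n_queries):
    uncovered, winners = set(range(n_queries)), []
    while uncovered:
        totals = {c: sum(s[q] for q in uncovered) for c, s in scores.items()}
        winner = max(totals, key=totals.get)
        if totals[winner] == 0:
            break  # no remaining candidate helps the leftover queries
        winners.append(winner)
        uncovered -= {q for q in uncovered if scores[winner][q] > 0}
    return winners

scores = {
    "d1,b1,a1": [10, 8, 0],  # benefits queries 0 and 1
    "d1,c1":    [0, 0, 6],   # benefits query 2 only
}
print(pick_winners(scores, 3))
```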
DBD will write the sample data in the way this projection is sorted, and then you can see that with different encodings, the disk footprint is different. DBD will then compare the disk footprints of the different options for each column, and pick the best encoding option, based on the one that has the smallest storage footprint. Nothing magical here, but it just works pretty well. And that's basically how DBD's internals work. Of course, I have skipped quite a lot; for example, I didn't mention how DBD handles segmentation, but the idea is similar to analyzing the sort order. I hope this section gave you some basic idea about DBD for today. So now let's talk about tomorrow, and here comes the exciting part. In version 10.0, we significantly improved the DBD in many ways. In this talk I will highlight four issues in the old DBD and describe how the new DBD in 10.0 addresses those issues. The first issue is that the DBD API is too complex. In most situations, what the user really wants is very simple: my queries were slow yesterday; can a new or different projection help speed them up? However, to answer a simple question like this using DBD, the user will very likely have the documentation open on the side, because they have to go through its whole complex flow, from creating a design, running the design, and getting the outputs, to deploying the design in the end. And that's not all; for each step, there are several functions the user needs to call in order. Adding these up, the user needs to write quite a long script with dozens of function calls. It's just too complicated, and most of you may find it annoying. They either manually tune the projections themselves, or simply live with the performance and come back when it gets really slow again, and of course in most situations, they never come back to use the DBD. In 10.0, VERTICA supports a new simplified API to run DBD easily.
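The encoding-selection idea can be sketched with stand-in encodings. The "raw" and "rle" encoders and their cost models below are illustrative only, not VERTICA's actual encodings; the point is the mechanism of measuring each option's footprint on sampled data and keeping the smallest.

```python
# Stand-in footprint models: "raw" stores each value as-is; "rle"
# (run-length) pays per run, so it wins on sorted, repetitive columns.
def raw_size(values):
    return sum(len(str(v)) for v in values)

def rle_size(values):
    runs, prev = 0, object()
    for v in values:
        if v != prev:
            runs, prev = runs + 1, v
    return runs * 4  # assume ~4 bytes per (value, count) run

def best_encoding(values):
    sizes = {"raw": raw_size(values), "rle": rle_size(values)}
    return min(sizes, key=sizes.get)

sorted_col = ["x"] * 50 + ["y"] * 50  # sorted, highly repetitive column
print(best_encoding(sorted_col))      # repetition favors run-length
```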
There is just one function, designer_single_run, with one argument: the interval during which you think your queries were slow. In this case, the user complained about it yesterday, so all the user needs to do is specify one day as the argument and run it. The user doesn't need to provide anything else, because DBD will look up the query history within that time window and automatically populate the design, run the design, export the projection design, and clean up; no user intervention needed. No need to have the documentation on the side and carefully write a script and debug it; just one function call. That's it. Very simple. So that must be pretty impressive, right? Now here comes another issue. To fully utilize this single-run function, users are encouraged to run DBD on the production cluster. However, in fact, VERTICA used to not recommend running a design on a production cluster. One of the reasons is that DBD takes massive locks, both table locks and catalog locks, which will badly interfere with the running workload on a production cluster. As of 10.0, we eliminated all the table and catalog locks from DBD. Yes, we eliminated 100% of them; a simple improvement, a clear win. The third issue, which users may not be aware of, is that DBD writes intermediate results into real VERTICA tables. The reason DBD has to do that is that DBD is a background task, so the user needs to be able to monitor the progress of the DBD from a concurrent session through the intermediate results. For a complex design, the intermediate results can be quite massive; as a result, many files will be created and written to the disk, which stresses both the catalog and the disk, and can slow down the design. For Eon mode, it's even worse, because the tables are shared on communal storage. So writing to a regular table means the data has to be uploaded to communal storage, which is even more expensive and disruptive.
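Based on the description above, the simplified API boils down to a single call. A hedged sketch: the function name comes from the talk, but the exact invocation syntax and interval format are assumptions that may differ in your VERTICA version, so check the documentation before relying on it.

```sql
-- Ask DBD to analyze the last day of query history and propose projections.
-- One call covers populate, run, export, and cleanup; no other input needed.
SELECT designer_single_run('1 day');
```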
In 10.0, we significantly restructured the intermediate results buffer and made it a shared in-memory data structure. Monitoring queries will directly look up the in-memory data structure through a system table and return the results. No intermediate result files will be written anymore. Another expensive usage of local disk by DBD is encoding design. As I mentioned earlier in the deep dive, to determine which encoding works best for the new projection design, there's no magic way; DBD needs to actually write the sample data to disk using the different encoding options, to find out which one has the smallest footprint and pick it as the best choice. This written sample data is useless afterwards and will be wiped out right away, and you can imagine this is a huge waste of system resources. In 10.0 we improved this process: instead of writing the differently encoded data to disk and then reading the file sizes, DBD aggregates the data block sizes on the fly. The data blocks will not be written to disk, so the overall encoding design is more efficient and non-disruptive. Of course, this is just the start. The reason why we put a significant amount of resources into improving the DBD in 10.0 is that the VERTICA DBD is an essential component of the out-of-box performance design campaign. To simply illustrate the timeline, we are now on the second step, where we significantly reduced the running overhead of the DBD, so that users will no longer fear running DBD on their production cluster. Please note that as of 10.0, we haven't really started changing how the DBD design algorithm works, so what we have discussed in the deep dive today still holds.
For the next phase of DBD, we will make the design process smarter, and this will include a better enumeration mechanism, so that the pruning is more intelligent rather than brute force, which will result in better design quality and also faster design. The longer-term goal is for DBD to achieve automation. What automation entails, and what I really mean, is that instead of having the user decide when to use DBD once their queries are slow, VERTICA will detect this event and have DBD run automatically for users, and suggest a better projection design if the existing projections are not good enough. Of course, there is a lot of work that needs to be done before we can actually fully achieve that automation, but we are working on it. At the end of the day, what the user really wants is a fast database, right? Thank you for listening to my presentation; I hope you found it useful. Now let's get ready for the Q&A.

Published Date : Mar 30 2020


Fran Scott | Nutanix .NEXT EU 2019


 

(upbeat music) >> Live, from Copenhagen, Denmark. It's theCUBE. Covering Nutanix.NEXT 2019. Brought to you by Nutanix. >> Welcome back everyone to theCUBE's live coverage of Nutanix.NEXT. We are in Copenhagen, Denmark. I'm your host, Rebecca Knight, hosting alongside Stu Miniman. We're joined by Fran Scott. She is a science and engineering presenter. Thanks so much for coming on the show. >> No worries at all. It's good to be here actually. >> So you are a well known face to UK audiences. You are a three times BAFTA nominated science and engineering presenter. Well-known. >> Give her a winner. (laughter) >> You're the Susan Lucci of science. You are the pyrotechnician and you lead the Christmas lectures at the Royal Institute. >> Yeah. I head up the demonstration team at the Royal Institution. We come up with all the science demonstrations, so the visual ways to show the science ideas. I head up that team. We build the demonstrations and we show science to people rather than just tell them about it. >> So mostly, you have a very cool job. (chuckles) >> I love my job. >> I want to hear how you got into this. What was it? What inspired you? >> Oh gosh, two very different questions. In terms of what inspired me, I was very lucky enough to be able to pursue what I love. And I came from a family where answers weren't given out willy-nilly. If you didn't know something, it wasn't a bad thing. It was like a, "Let's look it up. Let's look it up." I grew up in an atmosphere where you could be anything because you didn't have to know what you had to be. You could just have a play with it. I love being hands-on and making things, and I grew up on a farm, so I was quite practical. But I also loved science. Went to university, did neuroscience at university. 
I enjoyed the learning part but, where I was in terms of the science hierarchy, I found out that once you actually go into a lab, there's a lot of lab work and not much learning straight away, and it was the learning that I loved. And so my friends actually got me into science communication. They took me to the science museum and they were like, "Fran, you will love this." And I was like, "Will I?" And I was like, "You are so right." I got a job at the science museum in London by just approaching someone on that visit and being like, "How do I get a job here?" And they were like, "Well, you got to do this, this, this." I was like, "I can do that." I got the job there and I realized I loved science demonstrations and building stuff. Eventually I just combined that love of science and being practical together. And now I produce and write, build science props and science stage shows. And then it became a thing. (laughter) Hand it to me, I love it. >> So Fran, our audience is very much the technology community. Very supportive of STEM initiatives. Give us a little flavor as to some of the things you're working on. Where is there need for activities? >> I suppose the biggest example of that would be a show that I did a few years ago where there was a big push for new coders within the UK. And I was getting approached time and time again for visual ways to show computer coding. Or programming, as we used to call it back in the day. I didn't have an answer because then, I wasn't a coder. So I was like, "Well, I'll learn. And then I'll figure out a demonstration because this is what I do. So why don't I do it on coding?" And so yeah, I set about. I learnt code. And I came up with an explosions based coding show. Error 404. And we toured around the country with that. Google picked it up and it was a huge success just because it was something that people wanted to learn about. And people were stumped as to how to show coding visually. 
But because this is what we do day in and day out with different subjects, we could do it with coding just like we do it with physics. >> What do you think is the key? A lot of your audience is kids. >> Yes and family audiences. >> So what is the key to getting people excited about science? >> I think science itself is exciting if people are allowed to understand how brilliant it is. I think some of the trouble comes from when people take the step too big, and so you'd be like, "Hang on but, why is that cool? Why?" Because they don't under... Well they would understand if they were fed to them in a way that they get it. The way I say it is, anyone can understand anything as long as you make the steps to get there small enough. Sometimes the steps are too big for you to understand the amazingness of that thing that's happening. And if you don't understand that amazingness, of course you're going to lose interest. Because everyone around you is going, "Ah, this is awesome, this is awesome!" And you're like, "What? What's awesome?" I think it's up to us as adults and as educators to just try and not patronize the children, definitely not, but just give them those little steps so they can really see the beauty of what it is that we're in awed by. >> One of the things that is a huge issue in the technology industry is the dearth of women in particular, in the ranks of technology and then also in leadership roles. As a woman in science and also showing little girls everywhere all over the UK what it is to be a woman in science, that's a huge responsibility. How do you think of that, and how are you in particular trying to speak to them and say, "You can do this"? >> I've done a lot of research onto this because this was the reason I went into what I'm into. I worked a lot of the time behind the scenes just trying to get the science right. And then I realized there was no one like me doing science presenting. 
The girl was always the little bit of extra on the side and it was the man who was the knowledgeable one that was showing how to do the science. And the woman was like, "Oh, well that's amazing." And I was like, "Hang on. Let's try and flip this." And it just so happened that I didn't care if it was me. I just wanted a woman to do it. And it just happened that that was me. But now that I'm in that position, one, well I run a business as well. I run a business where we can train other new presenters to do it. It's that giving back. So yes, I train other presenters. I also make sure there's opportunity for other presenters. But I also try, and actually I work with a lot of TV shows, and work on their language. And work on the combination of like, "Okay, so you've got a man doing that, you got women doing this. Let's have a look at more diversity." And just trying to show the kids that there are people like them doing science. There's that classic phrase that, "You can't be what you can't see." So yes, it comes responsibility, but also there's a lot of fun. And if you can do the science, be intelligent, be fun, and just be normal and just enjoy your job, then people go, "Hang on," whether they're a boy or a girl, they go, "I want a bit of that," in terms of, "I want that as my job." And so by showing that, then I'm hopefully encouraging more people to do it. But it's about getting out and encouraging the next generation to do it as well. >> Fran, you're going to be moderating a panel in the keynote later this afternoon. Give our audience a little bit. What brought you to this event? What's going into it? And for those that don't get to see it live, what they're missing. >> I am one lucky woman. So the panel I'm moderating, it's all about great design and I am a stickler for great design. As a scientist, prop-builder, person that does engineering day in and day out, I love something when it's perfectly designed. If there is such a thing as a perfect design. 
So this panel that we've got, Tobias Manisfitz, Satish Ramachandran, and Peter Kreiner from Noma. And so they all come with their own different aspect of design. Satish works at Nutanix. Peter works at Noma, the restaurant here in Copenhagen. And Tobias, he designs the visual effects for things such as Game of Thrones and Call of Duty. And so yes, they each design things for... They're amazing at their level but in such a different way and for a different audience. I'm going to be questioning them on what is great design to them and what frictionless design means and just sort of picking their amazing brains. >> I love that fusion of technology and design as something they talked about in the keynote this morning. Think of Apple or Tesla, those two things coming together. I studied engineering and I feel like there was a missing piece of my education to really go into the design. Something I have an appreciation for, that I've seen in my career. But it's something special to bring those together. >> Yeah. I think I was brought in mostly because yes, one, I love design. But also I've worked a lot with LEGO. And so I was brought in to be the engineering judge on the UK version of LEGO Masters. Apparently, design in children's builds is the same as questioning the owner of NOMA restaurant. (chuckles) >> So what do you think? Obviously you're doing the panel tomorrow. What is in your mind the key to great design? Because as you said, you're a sucker for anything that is just beautiful and seamless and intuitive. And we all know what great design is when we hold it in our hands or look at it. But it is this very ineffable quality of something that...
You can't have something that's just beautiful. But you can't have something that just works. You need to have it as a mixture of both. It's those engineers talking with the designers, the designers talking with the engineers. The both of them talking with the consumers. And from that, good design comes. But don't forget, good design means they're for different people as well. >> What are some of the most exciting things you're working on, because you are a professional pyrotechnician. We've never had someone like this on theCUBE before. This is amazing. This is a first time ever. >> I was strictly told no fire. >> Yes, thank you. We appreciate that. >> Well at the moment, as I said at the beginning, I'm lucky enough to head up the demo team at the Royal Institution. We are just heading into our Christmas lectures. Now if you don't know these Christmas lectures, they were the first science ever done to a juvenile audience. Back in 1825 was when they started. It's a tradition in the UK and so this year, we're just starting to come up with the demonstrations for them. And this year they presented by Hannah Fry, and so they're going to be on maths and algorithms and how that makes you lucky or does it make you lucky? We've been having some really fun meetings. I can't give away too much, but there definitely be some type of stunt involved. That's all I can say. But there's going to be a lot of building. I really need to get back, get my sore out, get stuff made. >> Excellent. And who is the scientist you most admire? >> Oh my word. >> Living or dead? >> Who is the scientist I most admire? (sighs) I do have... Oh gosh, this is... >> The wheels are churning. >> It's a cheesy one though, but Da Vinci. Just for his multi-pronged approach and the fact that he had so much going on in his brain that he couldn't even get everything down on paper. He'd half draw something and then something else would come to him. 
>> I had the opportunity of interviewing Walter Isaacson last year, and he loved... It was the, as we talked about, the science and the design and the merging of those. But reading that biography of him, what struck me is he never finished anything because it would never meet the perfection in his mind to get it done. I've seen that in creative people. They'll start things and then they'll move on to the next thing. Me, as an engineer by training, it's like no, no, you need to finish work. From a manufacturing standpoint, work in progress is the worst thing you could have out there. >> He would be a rubbish entrepreneur. (chuckling) >> Right, but we're so lucky to have had his brain. >> Exactly. I think that's the thing. I think it gives us an insight into what the brain is capable of and what you can design without even knowing you're designing something. >> Well Fran, thank you so much for coming on theCUBE. This was so fun. >> Thanks for having me. >> I'm Rebecca Knight for Stu Miniman. Stay tuned for more of theCUBE's live coverage of .NEXT. (upbeat music)

Published Date : Oct 9 2019
