Dave Malik, Cisco | Cisco Live US 2019
>> Narrator: Live from San Diego, California. It's theCUBE, covering Cisco Live US 2019. Brought to you by Cisco and its ecosystem partners. >> Welcome back to San Diego, everybody. You're watching Cisco Live 2019. This is theCUBE, the leader in live tech coverage. This is day three of our wall-to-wall coverage. We go out to the events, we extract the signal from the noise. My name is Dave Vellante. Stu Miniman is here. Our third host, Lisa Martin, is also in the house. Dave Malik is here. He's a fellow and Chief Architect at Cisco. David, good to see you. >> Oh, glad to be here. >> Thanks for coming on. First of all, congratulations on being a fellow. What does that mean, a Cisco Fellow? What do you have to go through to achieve that status? >> It's a pretty arduous task. It's one of the highest technical designations in Cisco, but we work across multiple architectures and technologies, as well as with our partners, to drive corporate-wide strategy. >> So you've been talking to customers here, you've been presenting. I think you said you gave three presentations here? Multi-cloud, blockchain, and some stuff on machine intelligence, ML. >> Yes. >> Let's hit those. Kind of summarize the overall themes, and then we'll maybe get into each, and then we've got a zillion questions for you. >> Sure, excellent. So multi-cloud, I think one of the things we're clearly hearing from customers is around, how do we get a universal policy model and connectivity model, and how do you orchestrate workloads seamlessly? And those are some of the challenges that we're trying to address at this conference. On blockchain, a lot of buzz out there. We're not talking about Bitcoin or cryptocurrency, it's really about leveraging blockchain from a networking perspective, or for identity and encryption, and providing a uniform ledger that's pervasive across the infrastructure. And then ML, I think it's at the heart of every conversation. 
How do we take pervasive analytics and bring it into the network so we can drive actionable insights into automation? >> So let's start with the third one. When you talk about ML, was your talk on machine learning? Did it spill into artificial intelligence? What's the difference to you from a technology perspective? >> Machine learning is really getting a lot of the data and looking at repetitive patterns in a very common fashion, and doing a massive correlation across multiple domains. So you may have some things happening in the branch, the data center, or the WAN and cloud, but the whole idea is how do you put them together to drive insight? And through artificial intelligence and algorithms, we can try to take those insights and automate them and push them back into the infrastructure or to the application layer. So now you're driving intelligence for not just consumers or devices, but also humans as well, to drive insight. >> All right. So Dave, I wonder if you'd help connect for us what you were talking about there, and we'll get to the multicloud piece because I was at an Amazon show last week, talking about how when they look at all the technologies that they use to get packages, their fulfillment centers, everything that they do as a business, ML and AI, they said, is underneath that, and AWS is what's driving that technology from that standpoint. Now, multicloud, AWS is a partner of yours. >> Yes. >> Can you tell us how you work in multicloud, and does ML and AI, is that Cisco-specific? Are you working with some of the standards out there to connect all those pieces? Help us look at some of the big picture of those items. >> So we believe we're agnostic, whether you connect to Amazon, Azure, Google, et cetera, we believe in a uniform policy model and connectivity model, which is very, very arduous today. So you shouldn't have to have a specific policy model, connectivity model, security model for that matter, for each provider. 
So we're normalizing that plane completely, which is awesome. Then, at a workload level, regardless of whether your workload is spun up or spun down, it should have the same security posture and visibility. We have certain customers that are running single applications across multiple clouds, so your data is going to be obviously on-prem, you may be running analytics in TensorFlow, compute in EC2, and connecting to O365, and that's one app. And where we're seeing the models go is: are you leveraging technologies such as these? Do you offer a service mesh? How do we tie a lot of these microservices together and then be able to layer workload orchestration on top? So regardless of where your workload sits, one key point that we keep hearing from our customers is around governance. How do we provide cloud-based governance regardless of where their workload is? That's something we're doing in a very large fashion with customers that have a multicloud strategy. >> So Stu, I think there's still some confusion around multicloud generally, and maybe Cisco's strategy. I wonder if we could maybe clear it up a little bit. >> Dave, it's that big elephant in the room, and I always feel like everybody describes multicloud from a different angle. >> So let's dig into this a little bit, and let's hear from Cisco's perspective. So you've got, to my count, five companies really going after this space. You've got Cisco, VMware, IBM Red Hat, Microsoft, and Google with Anthos. Probably all those guys are partners of yours. >> Yes. >> Okay, but you guys want to provide the bromide or the single pane of glass, okay. I'm hearing open and agnostic. That's a differentiator. Security, you're in a good position to make the argument that you can make things secure. You've got the network and so forth. High-performance network, and cost-effective. Everybody's going to make that argument relative to having multiple stovepipes, but that's part of your story as well. So the question. 
Why Cisco? What's the key differentiator and what gives you confidence that you can really win in this marketplace? >> So our core competencies are networking and security. Whether it's cloud-based security or on-prem security, it's uniform. From a security perspective, we have a universal architecture. Whether it's the endpoint, the edge, or the cloud, they're all sharing information and intelligence. That's really important. Instead of having bespoke products, these products and solutions need to communicate with each other, so if something's detected in one area, we're informing the other ones. So threat intelligence and network intelligence are huge. Then more importantly, after working with Google, Microsoft, and Amazon, we have on-prem solutions as well, so as customers are going on their multicloud journey, and eventually the workload will transition, you have the same management experience and security experience. So Anthos was a recent announcement, AWS as well, where you can run on-prem Kubernetes, and you can take the same workload and move it to AWS or GCP, but the management model and the control plane model are extremely similar and you don't have to learn anything new from a training perspective. >> Okay, but I used the term agnostic, oh, no. You did agnostic, I said open. But you don't care if it's Anthos or VMware, or OpenShift, you don't care. >> Don't care. >> And, architecturally, how is it that you can successfully not care? >> Because the underlying, fundamental principle is you can load any workload you want on this, bare metal, virtualized, or Kubernetes-based containers, they all need the same things. For example, everyone needs bread and water. It's not different. So why should you discriminate against a workload on OpenShift, or if they're using Pivotal Cloud Foundry, for example? 
The same model: all applications still need security, visibility, networking, and management, but they should not be different across clouds, and that's traditionally what you're seeing from the other vendors in the market. They're very unique to their stovepipe, and we want to break down those stovepipes across the board, regardless of what app and what workload you have. >> Dave, talk a little bit about the automation that Cisco's delivering to help enable this, because there's skill set challenges, just the scale of these environments is more than humans alone can take care of, so how does that automation, I know you're heavily involved in the CX piece of Cisco. How does that all tie together? >> So we're working on a lot of automation projects with our large enterprises and SPs, I mean, you see Rakuten being fairly prominent in the show, but more importantly, we understand not everyone's building a greenfield environment, not everything is purely public cloud. We have to deal with brownfield, we have to deal with third-party ecosystem partners, so you can't have a vertically tight single-vendor solution. So again, to your point, it's completely open. Then we have frameworks, meaning you have orchestrators that can talk down to the device through programmatic interfaces. That's why we see DevNet surrounding us, but then more importantly, we're looking at services that have workflows that could span on-prem, off-prem, third-party, it doesn't really matter. And we stitch a lot of those workflows southbound, but more importantly, northbound to security and ITSM systems. So those frameworks are coming to life, whether you're a telecom cloud provider or a large enterprise. And they slowly fall into those workflows as they become more multi-domain. You saw David Goeckeler the other day, talking about SD-WAN, ACI, and campus wired and wireless. These domains are coming together and that's where we're driving a lot of the automation work. 
>> So automation is a linchpin to what business outcome? Ultimately, what are customers trying to achieve through automation? >> There's a couple of things. Mean time to value. So if you're a service provider, to your internal customers or external, time to value and speed and agility are key. The other ones are mean time to repair and mean time to detect. If I can shorten the time to detect and shorten the time to react, then I can take proactive and preemptive action in situations that may happen. So time to value is really, really important. Cost is a play, obviously, 'cause when you have more and more machines doing your work, your OPEX will come down, but it's really not purely a cost play. Agility and speed are really driving automation at that scale as we're working with folks like Rakuten and others. >> What do you see, Dave, as the big challenges of achieving automation? First of all, 10, 15 years ago, people were afraid of automation. Some still are. But I think they understand, as part of a digital transformation, they've got to automate. So what are the challenges that they're having and how are you helping them solve them? >> So typically, what people have thought about automation has been more network-centric, but as we just discussed with multicloud, automation is extending all the way to the public cloud, at the workload or at the functional level, if you're running in Lambda, for example. And then more importantly, traditionally, customers have been leveraging Python scripts and things of that nature, but while the days of scripters are still here, scripts cannot scale. You need a model-driven framework, you need model-driven telemetry to get insight. So I think the learning curve of customers moving to a model-driven mindset is extremely important, and it's not just about the network alone, it's also about the application. So that's why we're driving a lot of our frameworks and education and training. 
And talent's a big gap that we're helping close with our training programs. >> Okay, so you're talking about insights. There's a lot of data. The saying goes, "data is plentiful, insights aren't." So how do you get from data to insights? Is that where the machine intelligence comes in? Maybe you can explain that. >> There's a combination. Machines can process much faster than humans can, but more importantly, somebody has to bring the 30 or 40 years of experience that Cisco has from our tech, our architects and CX, and our customers and the community that we're developing through DevNet. So taking trusted expertise from humans, from all that knowledge base, and combining that with machine learning, we get the best of both worlds. 'Cause you need that experience. And that is driving insight so we can filter the signal from the noise, and then more importantly, how do you take that signal and then, in an automated fashion, push that down to an intent-based architecture across the board. >> Dave, can you take us inside a little bit of your touchpoints into customers? In the old days, it was a CCIE, his job, his title, it was equipment that he would touch, and today, talking about this multicloud and the automation, it's very dispersed as to who owns it, most of what they're managing is not something that's under their direct purview, so the touchpoints you have into the company and the relationships you have changed a lot in the last three to five years or so. >> Absolutely, 'cause the buying center's also changing, because folks are getting more and more centric around the line of business and the outcomes they want to drive for their clients. So the cloud architecture teams that are being built, they're more horizontal now. 
You'll have a security person, an application person, networking, operations, for example, and what we're actually pioneering with a lot of the enterprises and SPs is building out site reliability engineering teams, or SRE, which Google has obviously pioneered, and we're bringing those concepts and teams through a CX framework, through telcos and some of the high-end enterprises initially, and you'll see more around that over the coming months. For SRE jobs, if you go on LinkedIn, you'll probably see hundreds of them out there now. >> One of the other things we've been watching is Cisco has a very broad portfolio. This whole CX piece has to make sure that, from a customer's standpoint, no matter where in the portfolio, whether core, edge, IoT, all these various devices, I should have a simplified experience today, which isn't necessarily, my words, Cisco's legacy. How do you make sure, is software a unifying factor inside the company? Give us a little bit about those dynamics inside. >> Absolutely, so we take a life cycle approach. It's not one and done. From the time there's a concept, where you want to build out a blueprint for the transformation journey, we have to make sure we walk the client through preparation, planning, design, architecture optimization, and then making sure they actually adopt and get the true value. So we're working with our customers to make sure that they go around the entire life cycle, from end to end, from cradle to grave, and are able to constantly optimize. You're hearing the word continuous pretty much everywhere. It's kind of fundamental to CI/CD, so we believe in a continuous life cycle approach, walking customers end to end, from the point of purchase to the point of decommissioning, to make sure they're getting the most value out of the solutions they're getting from Cisco. >> All right Dave, we'll give you the last word on Cisco Live 2019. Thoughts? Takeaways? 
>> I think there's just amazing energy here, and there's a lot more to come. Come down to the CX booth and we'll show you some more gadgets and solutions where we're taking our customers forward. >> Great. David, thank you very much for coming to theCUBE. >> Pleasure, thank you. >> All right, 28,000 people and theCUBE bringing it to you live. This is Dave Vellante with Stu Miniman. Lisa Martin is also in the house. We'll be right back from Cisco Live San Diego 2019, Day 3. You're watching theCUBE.
Krishna Gade, Fiddler.ai | Amazon re:MARS 2022
(upbeat music) >> Welcome back. Day two of theCUBE's coverage of re:MARS in Las Vegas. Amazon re:MARS is part of the Re Series, as they call it at Amazon. re:Invent is their big show, re:Inforce is a security show, and re:MARS is the new emerging show for machine learning, automation, robotics, and space. The confluence of machine learning is powering a new industrial age and inflection point. I'm John Furrier, host of theCUBE. We're here to break it down with another round of wall-to-wall coverage. We've got a great guest here, a CUBE alumni from our AWS startup showcase, Krishna Gade, founder and CEO of Fiddler.ai. Welcome back to theCUBE. Good to see you. >> Great to see you, John. >> In person. We did the remote one before. >> Absolutely, great to be here, and I always love to be part of these interviews and love to talk more about what we're doing. >> Well, you guys have a lot of good street cred, a lot of good word of mouth around the quality of your product, the work you're doing. I know a lot of folks that I admire and trust in the AI machine learning area say great things about you. A lot going on, you guys are a growing company. So you're kind of like a startup on a rocket ship, getting ready to go, pun intended here at the space event. What's going on with you guys? You're here. Machine learning is the centerpiece of it. Swami gave the keynote here at day two and it really is an inflection point. Machine learning is now ready, it's scaling, and some of the examples that they were showing with the workloads and the data sets that they're tapping into, you know, you've got CodeWhisperer, which they announced, you've got trust and bias now being addressed, we're hitting a new level in ML: ML operations, ML modeling, ML workloads for developers. >> Yep, yep, absolutely. You know, I think machine learning has now become operational software, right? 
Like you know a lot of companies are investing millions and billions of dollars and creating teams to operationalize machine learning-based products. And that's the exciting part. The thing that is very exciting for us is that we are helping those teams observe how those machine learning applications are working, so that they can build trust into them. Because I believe, as Swami was alluding to today, without actually building trust into AI, it's really hard to have your business users use it in their business workflows. And that's where we are excited about bringing that trust and visibility factor into machine learning. >> You know, a lot of us all know what you guys are doing here in the ecosystem of AWS. And now extending here, take a minute to explain what Fiddler is doing for the folks that are in the space, that are in discovery mode, trying to understand who's got what, because like Swami said on stage, it's a full-time job to keep up on all the machine learning activities and tool sets and platforms. Take a minute to explain what Fiddler's doing, then we can get into some good questions. >> Absolutely. As enterprises take on the operationalization of machine learning models, one of the key problems that they run into is a lack of visibility into how those models perform. You know, for example, let's say if I'm a bank, I'm trying to introduce credit risk scoring models using machine learning. You know, how do I know when my model is rejecting someone's loan? You know, when my model is accepting someone's loan? And why is it doing it? And I think this is basically what makes machine learning a complex thing to implement and operationalize. Without this visibility, you cannot build trust and actually use it in your business. With Fiddler, what we provide is we actually open up this black box and we help our customers really understand how those models work. You know, for example, how is my model doing? 
Is it accurately working or not? You know, why is it actually rejecting someone's loan application? We provide both fine-grained as well as coarse-grained insights, so our customers can actually deploy machine learning in a safe and trustworthy manner. >> Who is your customer? Who are you targeting? What persona is it, the data engineer, is it data science, is it the CISO, is it all of the above? >> Yeah, our customer is the data scientist and the machine learning engineer, right? And we usually talk to teams that have a few models running in production, that's basically our sweet spot, where they're trying to look for a single pane of glass to see what models are running in their production, how they're performing, how they're affecting their business metrics. So we typically engage with a head of data science or head of machine learning that has a few machine learning engineers and data scientists. >> Okay, so those people that are watching, if you're into this, you can go check it out. It's good to learn. I want to get your thoughts on some trends that I see emerging, and I want to get your reaction to those. Number one, we're seeing the cloud scale now and integration is a big part of things. So time to value was brought up on stage today, Swami kind of mentioned time to value, showed some benchmark where they got four hours, some other teams were doing eight weeks. Where are we on the progression of time to value, and on the scale side? Can you scope that for me? >> I mean, it depends, right? You know, depending upon the company. So for example, when we work with banks, the time to operationalize a model can take months actually, because of all the regulatory procedures that they have to go through. You know, they have to get the models reviewed by model validators, model risk management teams, and then they audit those models, they have to then ship those models and constantly monitor them. So it's a very long process for them. 
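The loan-decision explainability Gade described a moment ago — opening the black box to show why a particular application was rejected — can be sketched minimally. Everything here is invented for illustration (the weights, feature names, and baseline are hypothetical, and leave-one-out perturbation is just one simple attribution technique, not a description of Fiddler's actual method):

```python
import numpy as np

# Hypothetical credit-scoring model: a fixed logistic function over three
# features (income, debt ratio, delinquencies). In production this would be
# an opaque model artifact; the weights here are made up.
WEIGHTS = np.array([0.8, -1.5, -0.9])
BIAS = 0.2

def score(x):
    """Probability of approving the loan."""
    return 1.0 / (1.0 + np.exp(-(x @ WEIGHTS + BIAS)))

def explain(x, baseline):
    """Leave-one-out attribution: how much does each feature move the score
    relative to replacing that feature with a baseline (e.g. population mean)?"""
    full = score(x)
    attributions = {}
    for i, name in enumerate(["income", "debt_ratio", "delinquencies"]):
        perturbed = x.copy()
        perturbed[i] = baseline[i]
        attributions[name] = full - score(perturbed)
    return full, attributions

applicant = np.array([1.2, 2.0, 1.0])   # standardized feature values
baseline = np.array([0.0, 0.0, 0.0])    # population average
prob, attr = explain(applicant, baseline)
print(f"approval probability: {prob:.2f}")
for name, contrib in sorted(attr.items(), key=lambda kv: kv[1]):
    print(f"{name:>14}: {contrib:+.2f}")
```

The point of the fine-grained view is exactly this output: not just "the loan was rejected," but which inputs pushed the score down.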
And even for non-regulated sectors, if you do not have the right tools and processes in place, operationalizing machine learning models can take a long time. You know, with tools like Fiddler, what we are enabling is we are basically compressing that life cycle. We are helping them automate model monitoring and explainability so that they can actually ship models faster. You get velocity in terms of shipping models. For example, one of the growing fintech companies that started with us last year started with six models in production; now they're running about 36 models in production. So within a year, they were able to grow like 10x. So that is basically what we are trying to do. >> Among other things, we're at re:MARS, so first of all, you've got a great product and a lot of markets to grow into, but here you've got space. I mean, anyone who's coming out of a college or university PhD program, if they're into aero, they're going to be here, right? This is where they are. Now you have a new core competency with machine learning, not just the engineering that you see in the space or aerospace area; you have a new kind of engineering. Now I go back to the old days, my parents' days; there was Fortran, and Fortran was the lingua franca to manage the equipment. Little throwback to the old school. But now machine learning is a companion, a first-class citizen, to the hardware. And in fact, some will say more important. >> Yep, I mean, the machine learning model is the new software artifact. It is going into production in a big way. And I think it has two differences compared to traditional software. Number one, unlike traditional software, it's a black box. You cannot just read a machine learning model and see why it's making those predictions. Number two, it's a stochastic entity. What that means is its predictive power can wane over time. So it needs to be constantly monitored and constantly refreshed so that it's actually working as intended. 
So those are the two main things you need to take care of. And if you can do that, then machine learning can give you a huge amount of ROI. >> There is some practitioner kind of craft to it. >> Correct. >> As you said, you've got to know when to refresh, what data sets to bring in, which to stay away from, certainly when you get to the bias, but I'll get to that in a second. My next question is really along the lines of software. So if you believe that open source will dominate the software business, which I do, I mean, most people won't argue. I think you would agree with that, right? Open source is driving everything. If everything's open source, where's the differentiation coming from? So if I'm a startup entrepreneur or I'm a project manager working on the next Artemis mission, I've got to open source. Okay, there's definitely security issues here. I don't want to talk about shift left right now, but like, okay, open source is everything. Where's the differentiation, where do I have the proprietary edge? >> It's a great question, right? So I used to work in tech companies before Fiddler. You know, when I used to work at Facebook, we would build everything in house. We would not even use a lot of open source software. So there are companies like that that build everything in house. And then I also worked at companies like Twitter and Pinterest, which actually used a lot of open source, right? So now, the thing is, it depends on the maturity of the organization. So if you're a Facebook or a Google, you can build a lot of things in house. Then if you're a modern tech company, you would probably leverage open source, but there are lots of other companies in the world that still don't have the talent pool to take things from open source and productionize it. 
And that's where the opportunity for startups comes in, so that we can commercialize these things, create a great enterprise experience, and actually operationalize things for them so that they don't have to do it in house. And that's the advantage of working with startups. >> I don't want to get all operating systems theory with you here on the stage, but I will have to ask you the next question, which I totally agree with you on, by the way, that's the way to go. There's not a lot of people out there that have that skill set. That's just statistical, and it'll get better. Data engineering is really narrow. That is like the SRE of data. That's a new role emerging. Okay, all these things are happening. So if open source is there, integration is a huge deal. And you start to see the rise of a lot of MSPs, managed service providers. I run Kubernetes clusters, I do this, that, and the other thing. So what's your reaction to the growth of the integration side of the business and this role of new services coming from third parties? >> Yeah, absolutely. I think one of the big challenges for a chief data officer or someone like a CTO is how do they devise this infrastructure architecture with components, either homegrown components or open source components or some vendor components, and how do they integrate? You know, when I used to run data engineering at Pinterest, we had to devise a data architecture combining all of these things and create something that actually flows very nicely, right? >> If you didn't do it right, it would break. >> Absolutely. And this is why it's important for us at Fiddler to really make sure that Fiddler can integrate with all varieties of ML platforms. Today, a lot of our customers build machine learning models on SageMaker. So Fiddler integrates nicely with SageMaker, so they get a seamless experience to monitor their models. 
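The integration pattern described here — models trained on a platform like SageMaker emitting prediction events to a separate monitoring service — might look something like the sketch below. The `MonitoringClient`, its event schema, and the batching behavior are hypothetical stand-ins, not Fiddler's or SageMaker's actual API:

```python
import json
import time
import uuid

class MonitoringClient:
    """Hypothetical client that batches prediction events for a monitoring
    backend. A real integration would POST each batch to a service endpoint;
    here we just serialize and buffer the batches locally."""

    def __init__(self, model_id, batch_size=100):
        self.model_id = model_id
        self.batch_size = batch_size
        self.buffer = []   # events waiting to be sent
        self.flushed = []  # serialized batches "sent" so far

    def log_prediction(self, features, prediction, actual=None):
        self.buffer.append({
            "event_id": str(uuid.uuid4()),
            "model_id": self.model_id,
            "timestamp": time.time(),
            "features": features,
            "prediction": prediction,
            "actual": actual,  # ground-truth label, if it arrives later
        })
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        # Stand-in for an HTTP POST of the serialized batch.
        self.flushed.append(json.dumps(self.buffer))
        self.buffer = []

client = MonitoringClient("credit-risk-v3", batch_size=2)
client.log_prediction({"income": 1.2}, prediction=0.06)
client.log_prediction({"income": 0.1}, prediction=0.41)
print(f"batches flushed: {len(client.flushed)}")
```

The optional `actual` field is what makes later accuracy and drift reporting possible: ground truth often arrives long after the prediction, and the monitoring side joins it back on `event_id`.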
>> Yeah, I mean, this might not be the right words for it, but I think data engineering as a service is really what I see you guys doing, as well as other things, you're providing all that. >> And ML engineering as a service. >> ML engineering as a- Well it's hard. I mean, it's like the hard stuff. >> Yeah, yeah. >> Hear, hear. But that has to enable. So you as a business entrepreneur, you have to create a multiple on the value proposition for your customers. What's your vision on that? What is that value? It has to be a multiple, at least 5 to 10. >> I mean, the value is simple, right? You know, if you have to operationalize machine learning, you need visibility into how these things work. You know, if your CTO or chief data officer is asking how is my model working and how is it affecting my business, you need to be able to show them a dashboard, how it's working, right? And a data scientist today struggles to do this. They have to manually generate a report, manually do this analysis. What Fiddler is doing for them is basically reducing their work so that they can automate these things and still focus on the core aspects of model building and data preparation, and this boring aspect of monitoring the model and creating reports around the models is automated for them. >> Yeah, you guys got a great business. I think there's a lot of great future there and it's only going to get bigger. Again, the TAM's going to expand as the rising tide comes in. I want to ask you, while we're on that topic of rising tides: Dave Vellante and I, since re:Invent last year, have been kicking around this term that we made up called supercloud. And supercloud was a word that came out of these clouds that were not Amazon hyperscalers. So Snowflake, Goldman Sachs, Capital One, you name it, they're building massive proprietary value on top of the CapEx of Amazon. Jerry Chen at Greylock calls it castles in the cloud. You can create these moats. >> Yeah, right. 
>> So this is a phenomenon, right? And you land on one, and then you go to the others. So the strategy is, everyone goes to Amazon first, and then hits Azure and GCP. That then creates this kind of multicloud. So, okay, supercloud's kind of happening, it's a thing. Charles Fitzgerald will disagree, he's a platformer, he says he's against the term. I get why, but he's off base a little. We can't wait to debate him on that. So superclouds are happening, but now what do I do about multicloud? Because now I understand multicloud, I have this on that cloud, and integrating across clouds is a very difficult thing. >> Krishna: Right, right, right. >> If I'm Snowflake or whatever, hey, I'll go to Azure, more TAM expansion, more market. But are people actually working together? Are we there yet? Where it's like, okay, I'm going to re-operationalize this code base over here. >> I mean, the reality of it is, enterprises want optionality, right? I think they don't want to be locked into one particular cloud vendor or one particular piece of software. And therefore you actually have a multicloud scenario where they want to have some workloads in Amazon and some workloads in Azure. And this is an opportunity for startups like us because we are cloud agnostic. We can monitor models wherever you have them. So a lot of our customers have some of their models running in their data centers and some of their models running in Amazon. And so we can provide a universal single pane of glass, right? We can basically connect all of that data and actually showcase it. I think this is an opportunity for startups to combine the data streams coming from various different clouds and give them a single pane of glass experience. That way, where is your data, where are my models running, which cloud are they in, is all abstracted away from the customer. Because at the end of the day, enterprises will want optionality. And we are in this multicloud world.
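The "single pane of glass" idea described here is essentially an adapter pattern: one interface per monitoring back end, merged into a single view so the caller never sees which cloud a model runs in. This is a hedged sketch with stand-in class names and hardcoded metrics, not Fiddler's architecture:

```python
from abc import ABC, abstractmethod

class ModelMetricsSource(ABC):
    """Common interface over per-environment monitoring back ends."""
    @abstractmethod
    def fetch_metrics(self) -> dict: ...

class SageMakerSource(ModelMetricsSource):
    # Stand-in: a real adapter would call the cloud provider's APIs.
    def fetch_metrics(self):
        return {"fraud-model": {"cloud": "aws", "accuracy": 0.94}}

class OnPremSource(ModelMetricsSource):
    # Stand-in for models running in the customer's own data center.
    def fetch_metrics(self):
        return {"churn-model": {"cloud": "on-prem", "accuracy": 0.88}}

def single_pane(sources):
    """Merge metrics from every source into one view keyed by model,
    so where each model runs is abstracted away from the viewer."""
    view = {}
    for src in sources:
        view.update(src.fetch_metrics())
    return view

dashboard = single_pane([SageMakerSource(), OnPremSource()])
assert set(dashboard) == {"fraud-model", "churn-model"}
```

Adding another cloud then means adding one adapter class, without touching the dashboard code, which is the optionality point being made above.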
>> Yeah, I mean, this reminds me of the interoperability days back when I was growing up in the business. Everything was interoperability and OSI, and the standards came out, but what's your opinion on openness, okay? There's a kneejerk reaction right now in the market to silo your data for governance or whatever reasons, but machine learning gurus and experts will say, "Hey, if you want horizontal scalability and the best machine learning models, you've got to have access to data, and fast, in real time or near real time." And the antithesis of that is siloing. >> Krishna: Right, right, right. >> So what's the solution? Customers control the data plane and have a control plane that's... What do customers do? It's a big challenge. >> Yeah, absolutely. I think there are multiple different architectures for ML, right? You know, we've seen vendors like us deploy completely on-prem, right? And they still do it, we still do it for some customers. And then you had this managed cloud experience where you just abstract out the entire operations from the customer. And then now you have this hybrid experience where you split the control plane and the data plane. So you preserve the privacy of the customer from the data perspective, but you still control the infrastructure, right? I don't think there's a right answer. It depends on the product that you're trying to build. You know, Databricks is able to do this control plane, data plane split really well. I've seen some other tools that have not done this really well. So I think it all depends upon- >> What about Snowflake? I think they a- >> Sorry, correct. They have a managed cloud service, right? So predominantly that's their business. So I think it all depends on what is your go to market? You know, which customers are you talking to? You know, what does your product architecture look like?
You know, from Fiddler's perspective today, we have actually chosen to either go completely on-prem or provide a managed cloud service, and that's actually simpler for us instead of splitting- >> John: So it's customer choice. >> Exactly. >> That's your position. >> Exactly. >> Wherever you want to use Fiddler, go on-prem, no problem, or cloud. >> Correct, or cloud, yeah. >> You'll deploy and you'll work across whatever observability space you want to. >> That's right, that's right. >> Okay, yeah. So that's the big challenge, all right. What's the big observation from your standpoint? You've been on the hyperscaler side, your journey, Facebook, Pinterest, so back then you built everything, because no one else had software for you, but now everybody wants to be a hyperscaler, but there's a huge CapEx advantage. What should someone do? If you're a big enterprise, obviously, I could be a big insurance company, I could be financial services, oil and gas, whatever vertical, and I want a supercloud, what do I do? >> I think the biggest advantage enterprises today have is a plethora of tools. You know, when I used to work on machine learning way back at Microsoft on Bing Search, we had to build everything. You know, from training platforms, deployment platforms, experimentation platforms. You know, how do we monitor those models? You know, everything had to be homegrown, right? A lot of open source also did not exist at the time. Today, the enterprise has this advantage, they're sitting on this gold mine of tools. You know, obviously there's probably a little bit of tool fatigue as well. You know, which tools to select? >> There's plenty of tools available. >> Exactly, right? And then there are services available for you. So now you need to make smarter choices to cobble these together, to create a workflow for your engineers. And you can really get started quite fast, and actually get on par with some of these modern tech companies.
And that is the advantage that a lot of enterprises see. >> If you were going to be the CTO or CEO of a big transformation, knowing what you know, 'cause you just brought up the killer point about why it's such a great time right now, you got platform as a service and the tooling has essentially reset everything. So if you're going to throw everything out and start fresh, you're basically redoing the system architecture. It's a complete reset. That's doable. How fast do you think you could do that for, say, a large enterprise? >> See, I think if you set aside the organizational processes and whatever friction kind of comes in, from a technology perspective it's pretty fast, right? You can devise a data architecture today with tools like Kafka, Snowflake, and Redshift, and you can actually devise that data architecture very clearly right from day one and implement it at scale. And then once you have accumulated enough data and you can extract more value from it, you can go and implement your MLOps workflow on top of it as well. And I think this is where tools like Fiddler can help, too. So I would start with looking at data: do we have centralization of data? Do we have governance around data? Do we have analytics around data? And then kind of get into machine learning operations. >> Krishna, always great to have you on theCUBE. You're a great masterclass guest. Obviously great success in your company. Been there, done that, and doing it again. I got to ask you, since you just brought that up about the whole reset, what is the superhero persona right now? Because it used to be the full stack developer, you know? And then I called them, it didn't go over very well on theCUBE, the half stack developer, because nobody wants to be a half stack anything, a half sounds bad, worse than full. But cloud is essentially half a stack. I mean, you got infrastructure, you got tools.
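The "data architecture first, MLOps second" sequence described above (ingest events into a log like Kafka, materialize them into a warehouse like Snowflake or Redshift, then run analytics) can be sketched with in-memory stand-ins. The class and method names here are hypothetical, a toy model of the flow rather than any real Kafka or warehouse API:

```python
from collections import defaultdict, deque

class MiniPipeline:
    """Toy stand-in for the log -> warehouse -> analytics flow:
    events are appended to a log (Kafka's role), materialized into
    tables (the warehouse's role), then queried for analytics."""

    def __init__(self):
        self.log = deque()               # ingestion layer
        self.tables = defaultdict(list)  # warehouse layer

    def produce(self, topic, event):
        # Append-only write, like publishing to a Kafka topic.
        self.log.append((topic, event))

    def materialize(self):
        # Drain the log into per-topic tables, like a warehouse load job.
        while self.log:
            topic, event = self.log.popleft()
            self.tables[topic].append(event)

    def avg(self, topic, field):
        # The analytics layer: a simple aggregate over one table.
        rows = self.tables[topic]
        return sum(r[field] for r in rows) / len(rows)

pipe = MiniPipeline()
pipe.produce("orders", {"amount": 10.0})
pipe.produce("orders", {"amount": 30.0})
pipe.materialize()
assert pipe.avg("orders", "amount") == 20.0
```

Only once tables like these are centralized and governed does the MLOps layer (training, deployment, monitoring) get built on top, which is the ordering the answer recommends.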
Now you're talking about a persona that's going to reset, look at tools, make selections, build an architecture, build an operating environment, distributed computing operating. Who is that person? What's that persona look like? >> I mean, I think the superhero persona today is ML engineering. I'm usually surprised how much is put on an ML engineer to do these days. You know, when I entered the industry as a software engineer, I had three or four things in my job to do: I write code, I test it, I deploy it, I'm done. Today as an ML engineer, I need to worry about my data. How do I collect it? I need to clean the data, I need to train my models, I need to experiment with them and deploy them, and I need to make sure that they're working once they're deployed. >> Now you got to do all the DevOps behind it. >> And all the DevOps behind it. And so I'm working halftime as a data scientist, halftime as a software engineer, halftime as a DevOps- >> Cloud architect. >> It's like a heroic job. And I think this is why obviously these jobs are now really hard jobs and people want to be more and more in machine learning >> And they get paid. >> engineering. >> Commensurate with the- >> And they're paid commensurately as well. And this is where I think an opportunity for tools like Fiddler exists as well, because we can help those ML engineers do their jobs better. >> Thanks for coming on theCUBE. Great to see you. We're here at re:MARS. And great to see you again. And congratulations on being on the AWS Startup Showcase that we're in, year two, episode four, coming up. We'll have to have you back on. Krishna, great to see you. Thanks for coming on. Okay, this is theCUBE's coverage here at re:MARS. I'm John Furrier, bringing all the signal from all the noise here.
Not a lot of noise at this event, it's very small, very intimate, a little bit different, but all on point with space, machine learning, robotics, the future of industrial. We'll be back with more coverage after the short break. >> Man: Thank you, John. (upbeat music)