
Search Results for DevOps:

theCUBE's New Analyst Talks Cloud & DevOps


 

(light music) >> Hi everybody. Welcome to this Cube Conversation. I'm really pleased to announce a collaboration with Rob Strechay. He's a guest Cube analyst, and we'll be working together to extract the signal from the noise. Rob is a long-time product pro, working at a number of firms including AWS, HP, HPE, NetApp, Snowplow. He did a stint as an analyst at Enterprise Strategy Group. Rob, good to see you. Thanks for coming into our Marlborough Studios. >> Well, thank you for having me. It's always great to be here. >> I'm really excited about working with you. We've known each other for a long time. You've been on the Cube a bunch. You know, you're in between gigs, and I think we can have a lot of fun together. Covering events, covering trends. So, let's get into it. What's happening out there? We've sort of exited the isolation economy. Things were booming. Now, everybody's tapping the brakes. From your standpoint, what are you seeing out there? >> Yeah. I'm seeing that people are really looking at how to get more out of their data. How they're bringing things together, how they're looking at the costs of Cloud, and understanding how they're building out their SaaS applications. And understanding that when they go in and actually start to use Cloud, it's not only just using the base services anymore. They're looking at, how do I use these platforms as a service? Some are easier than others, and they're trying to understand, how do I get more value out of that relationship with the Cloud? They're also consolidating the number of Clouds that they have, I would say, to try to better optimize their spend, and getting better pricing for that matter. >> Are you seeing people unhook Clouds, or just reduce maybe certain Cloud activities and going maybe instead of 60/40 going 90/10? >> Correct. It's more like the 90/10 type of rule where they're starting to say, Hey, I'm not going to get rid of Azure or AWS or Google. I'm going to move a portion of this over that I was using on this one service. Maybe I got a great two-year contract to start with on this platform as a service or a database as a service. I'm going to unhook from that and maybe go with an independent. Maybe with something like a Snowflake or a Databricks on top of another Cloud, so that I can consolidate down. But it also gives them more flexibility as well. >> In our last breaking analysis, Rob, we identified six factors that were reducing Cloud consumption. There were factors and customer tactics. And I want to get your take on this. So, some of the factors really, you got fewer mortgage originations. FinTech, obviously a big Cloud user. Crypto, not as much activity there. Lower ad spending means less Cloud. And then one of 'em, which you kind of disagreed with, was less analytics, you know, fewer... less frequent calculations. I'll come back to that. But then optimizing compute, using Graviton or AMD instances, moving to cheaper storage tiers. That of course makes sense. And then optimize pricing plans. Maybe going from On Demand, you know, to, you know, instead of pay by the drink, buy in volume. Okay. So, first of all, do those make sense to you, with the exception? We'll come back and talk about the analytics piece. Is that what you're seeing from customers? >> Yeah, I think so. I think that was pretty much dead on with what I'm seeing from customers and the ones that I go out and talk to. A lot of times they're trying to really monetize their, you know, understand how their business utilizes these Clouds.
And, where their spend is going in those Clouds. Can they use, you know, lower tiers of storage? Do they really need the best processors? Do they need to be using Intel or can they get away with AMD or Graviton 2 or 3? Or do they need to move in? And, I think when you look at all of these Clouds, they always have pricing curves that are arcs from the newest to the oldest stuff. And you can play games with that. And understanding how you can actually lower your costs by looking at maybe some of the older generation. Maybe your application was written 10 years ago. You don't necessarily have to be on the best, newest processor for that application per se. >> So last, I want to come back to this whole analytics piece. Last June, I think it was June, Dev Ittycheria, who's the-- I call him Dev. Spelled Dev, pronounced Dave. (chuckles softly) Same pronunciation, different spelling. Dev Ittycheria, CEO of Mongo, on the earnings call. He was getting, you know, hit. Things were starting to get a little less visible in terms of, you know, the outlook. And people were pushing him like... Because you're in the Cloud, is it easier to dial down? And he said, because we're the document database, we support transaction applications. We're less discretionary than say, analytics. Well on the Snowflake earnings call, that same month or the month after, they were all over Slootman and Scarpelli. Oh, the Mongo CEO said that they're less discretionary than analytics. And Snowflake was an interesting comment. They basically said, look, we're the Cloud. You can dial it up, you can dial it down, but the area under the curve over a period of time is going to be the same, because they get their customers to commit. What do you say? You disagreed with the notion that people are running their calculations less frequently. Is that because they're trying to do a better job of targeting customers in near real time? What are you seeing out there? >> Yeah, I think they're moving away from using people and more expensive marketing. Or, they're trying to figure out what's my Google ad spend, what's my Meta ad spend? And what they're trying to do is optimize that spend. So, what is the return on advertising, or the ROAS as they would say. And what they're looking to do is understand, okay, I have to collect these analytics that better understand where are these people coming from? How do they get to my site, to my store, to my whatever? And when they're using it, how do they they better move through that? What you're also seeing is that analytics is not only just for kind of the retail or financial services or things like that, but then they're also, you know, using that to make offers in those categories. When you move back to more, you know, take other companies that are building products and SaaS delivered products. They may actually go and use this analytics for making the product better. And one of the big reasons for that is maybe they're dialing back how many product managers they have. And they're looking to be more data driven about how they actually go and build the product out or enhance the product. So maybe they're, you know, an online video service and they want to understand why people are either using or not using the whiteboard inside the product. And they're collecting a lot of that product analytics in a big way so that they can go through that. And they're doing it in a constant manner. This first party type tracking within applications is growing rapidly by customers. 
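To make Rob's point about first-party product analytics a bit more concrete, here is a minimal sketch of the kind of in-app event tracking he describes: a hypothetical SaaS product recording feature-usage events (say, whether anyone opens the whiteboard) so the product team can analyze adoption later. The class names, fields and the JSON-lines sink are illustrative assumptions for this sketch, not any particular vendor's API.

```python
import json
import time
import uuid
from dataclasses import dataclass, asdict, field

@dataclass
class ProductEvent:
    """A single first-party usage event captured inside the application."""
    user_id: str
    feature: str            # e.g. "whiteboard.open"
    properties: dict = field(default_factory=dict)
    event_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: float = field(default_factory=time.time)

class EventTracker:
    """Buffers events in memory and flushes them in batches to an analytics sink.

    The sink here is just a local JSON-lines file; in a real product it would
    be a collector endpoint or a warehouse loader.
    """
    def __init__(self, sink_path: str, batch_size: int = 50):
        self.sink_path = sink_path
        self.batch_size = batch_size
        self._buffer: list[ProductEvent] = []

    def track(self, user_id: str, feature: str, **properties) -> None:
        self._buffer.append(ProductEvent(user_id, feature, properties))
        if len(self._buffer) >= self.batch_size:
            self.flush()

    def flush(self) -> None:
        if not self._buffer:
            return
        with open(self.sink_path, "a", encoding="utf-8") as sink:
            for event in self._buffer:
                sink.write(json.dumps(asdict(event)) + "\n")
        self._buffer.clear()

# Example: record whether users touch the whiteboard feature Rob mentions.
tracker = EventTracker("product_events.jsonl")
tracker.track("user-123", "whiteboard.open", session_id="abc", plan="pro")
tracker.track("user-123", "whiteboard.draw", shapes=4)
tracker.flush()
```

In practice the flush target would be a collector endpoint or a warehouse loader, which is where the cloud analytics spend Rob and Dave discuss actually shows up.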
>> So, let's talk about who wins in that. So, obviously the Cloud guys, AWS, Google and Azure. I want to come back and unpack that a little bit. Databricks and Snowflake, we reported on our last breaking analysis, they're kind of on a collision course. You know, a couple years ago we were thinking, okay, AWS, Snowflake and Databricks, like a perfect sandwich. And then of course they started to become more competitive. My sense is they still, you know, complement each other in the field, right? But, you know, publicly, they've got bigger aspirations, they've got big TAMs that they're going after. But it's interesting, the data shows that-- So, Snowflake was off the charts in terms of spending momentum in our ETR surveys. Our partner down in New York. They kind of came into line. They're both growing in terms of market presence. Databricks couldn't get to IPO. So, we don't have as much, you know, visibility on their financials. You know, Snowflake obviously highly transparent 'cause they're a public company. And then you got AWS, Google and Azure. And it seems like AWS appears to be more partner friendly. Microsoft, you know, depends on what market you're in. And Google wants to sell BigQuery. >> Yeah. >> So, what are you seeing in the public Cloud from a data platform perspective? >> Yeah. I think that was pretty astute in what you were talking about there, because I think of the three, Google is definitely, I think, a little bit behind in how they go to market with their partners. Azure's done a fantastic job of partnering with these companies, to understand that even though they may have Synapse as their go-to, and where they want people to go to do AI and ML, what they're looking at is, Hey, we're going to also be friendly with Snowflake. We're also going to be friendly with a Databricks. And I think that Amazon has always been there because that's where the market has been for these developers. So many, like the Databricks and the Snowflakes, have gone there first because, you know, in Databricks' case, they built out on top of S3 first. And going and using somebody's object layer other than AWS was not as simple as you would think it would be, moving between those. >> So, at one of the financial meetups, I said meetup, but the... It was either the CEO or the CFO. It was either Slootman or Scarpelli talking at, I don't know, Merrill Lynch or one of the other financial conferences. Actually, I think it was probably their Q3 call. Snowflake said 80% of our business goes through Amazon. And he said to this audience, the next day we got a call from Microsoft. Hey, we got to do more. And, we know just from reading the financial statements that Snowflake is getting concessions from Amazon, they're buying in volume, they're renegotiating their contracts. Amazon gets it. You know, lower the price, people buy more. Long term, we're all going to make more money. Microsoft obviously wants to get into that game with Snowflake. They understand the momentum. They said Google, not so much. And I've had customers tell me that they wanted to use Google's AI with Snowflake, but they can't, they got to go to BigQuery. So, honestly, I haven't, like, vetted that, so. But, I think it's true. But nonetheless, it seems like Google's a little less friendly with the data platform providers. What do you think? >> Yeah, I would say so. I think this is a place that Google looks at and wants to own. The question now is, are they doing the right things long term?
I mean again, you know, you look at Google Analytics being you know, basically outlawed in five countries in the EU because of GDPR concerns, and compliance and governance of data. And I think people are looking at Google and BigQuery in general and saying, is it the best place for me to go? Is it going to be in the right places where I need it? Still, it's still one of the largest used databases out there just because it underpins a number of the Google services. So you almost get, like you were saying, forced into BigQuery sometimes, if you want to use the tech on top. >> You do strategy. >> Yeah. >> Right? You do strategy, you do messaging. Is it the right call by Google? I mean, it's not a-- I criticize Google sometimes. But, I'm not sure it's the wrong call to say, Hey, this is our ace in the hole. >> Yeah. >> We got to get people into BigQuery. Cause, first of all, BigQuery is a solid product. I mean it's Cloud native and it's, you know, by all, it gets high marks. So, why give the competition an advantage? Let's try to force people essentially into what is we think a great product and it is a great product. The flip side of that is, they're giving up some potential partner TAM and not treating the ecosystem as well as one of their major competitors. What do you do if you're in that position? >> Yeah, I think that that's a fantastic question. And the question I pose back to the companies I've worked with and worked for is, are you really looking to have vendor lock-in as your key differentiator to your service? And I think when you start to look at these companies that are moving away from BigQuery, moving to even, Databricks on top of GCS in Google, they're looking to say, okay, I can go there if I have to evacuate from GCP and go to another Cloud, I can stay on Databricks as a platform, for instance. So I think it's, people are looking at what platform as a service, database as a service they go and use. Because from a strategic perspective, they don't want that vendor locking. >> That's where Supercloud becomes interesting, right? Because, if I can run on Snowflake or Databricks, you know, across Clouds. Even Oracle, you know, they're getting into business with Microsoft. Let's talk about some of the Cloud players. So, the big three have reported. >> Right. >> We saw AWSs Cloud growth decelerated down to 20%, which is I think the lowest growth rate since they started to disclose public numbers. And they said they exited, sorry, they said January they grew at 15%. >> Yeah. >> Year on year. Now, they had some pretty tough compares. But nonetheless, 15%, wow. Azure, kind of mid thirties, and then Google, we had kind of low thirties. But, well behind in terms of size. And Google's losing probably almost $3 billion annually. But, that's not necessarily a bad thing by advocating and investing. What's happening with the Cloud? Is AWS just running into the law, large numbers? Do you think we can actually see a re-acceleration like we have in the past with AWS Cloud? Azure, we predicted is going to be 75% of AWS IAS revenues. You know, we try to estimate IAS. >> Yeah. >> Even though they don't share that with us. That's a huge milestone. You'd think-- There's some people who have, I think, Bob Evans predicted a while ago that Microsoft would surpass AWS in terms of size. You know, what do you think? >> Yeah, I think that Azure's going to keep to-- Keep growing at a pretty good clip. 
I think that for Azure, they still have really great account control, even though people like to hate Microsoft. The Microsoft sellers that are out there making those companies successful day after day have really done a good job of being in those accounts and helping people. I was recently over in the UK. And the UK market between AWS and Azure is pretty amazing, how much Azure there is. And it's growing within Europe in general. In the states, it's, you know, I think it's growing well. I think it's still growing, probably not as fast as it is outside the U.S. But, you go down to someplace like Australia, it's also Azure. You hear about Azure all the time. >> Why? Is that just because of the Microsoft's software state? It's just so convenient. >> I think it has to do with, you know, and you can go with the reasoning they don't break out, you know, Office 365 and all of that out of their numbers is because they have-- They're in all of these accounts because the office suite is so pervasive in there. So, they always have reasons to go back in and, oh by the way, you're on these old SQL licenses. Let us move you up here and we'll be able to-- We'll support you on the old version, you know, with security and all of these things. And be able to move you forward. So, they have a lot of, I guess you could say, levers to stay in those accounts and be interesting. At least as part of the Cloud estate. I think Amazon, you know, is hitting, you know, the large number. Laws of large numbers. But I think that they're also going through, and I think this was seen in the layoffs that they were making, that they're looking to understand and have profitability in more of those services that they have. You know, over 350 odd services that they have. And you know, as somebody who went there and helped to start yet a new one, while I was there. And finally, it went to beta back in September, you start to look at the fact that, that number of services, people, their own sellers don't even know all of their services. It's impossible to comprehend and sell that many things. So, I think what they're going through is really looking to rationalize a lot of what they're doing from a services perspective going forward. They're looking to focus on more profitable services and bringing those in. Because right now it's built like a layer cake where you have, you know, S3 EBS and EC2 on the bottom of the layer cake. And then maybe you have, you're using IAM, the authorization and authentication in there and you have all these different services. And then they call it EMR on top. And so, EMR has to pay for that entire layer cake just to go and compete against somebody like Mongo or something like that. So, you start to unwind the costs of that. Whereas Azure, went and they build basically ground up services for the most part. And Google kind of falls somewhere in between in how they build their-- They're a sort of layer cake type effect, but not as many layers I guess you could say. >> I feel like, you know, Amazon's trying to be a platform for the ecosystem. Yes, they have their own products and they're going to sell. And that's going to drive their profitability cause they don't have to split the pie. But, they're taking a piece of-- They're spinning the meter, as Ziyas Caravalo likes to say on every time Snowflake or Databricks or Mongo or Atlas is, you know, running on their system. They take a piece of the action. Now, Microsoft does that as well. 
But, you look at Microsoft and security, head-to-head competitors, for example, with a CrowdStrike or an Okta in identity. Whereas, it seems like at least for now, AWS is a more friendly place for the ecosystem. At the same time, you do a lot of business in Microsoft. >> Yeah. And I think that a lot of companies have always feared that Amazon would just throw, you know, bodies at it. And I think that people have come to the realization that a two pizza team, as Amazon would call it, is eight people. I think that's, you know, two slices per person. I'm a little bit fat, so I don't know if that's enough. But, you start to look at it and go, okay, if they're going to start out with eight engineers, if I'm a startup and they're part of my ecosystem, do I really fear them or should I really embrace them and try to partner closer with them? And I think the smart people and the smart companies are partnering with them because they're realizing, Amazon, unless they can see it to, you know, a hundred million, $500 million market, they're not going to throw eight to 16 people at a problem. I think when, you know, you could say, you could look at the elastic with OpenSearch and what they did there. And the licensing terms and the battle they went through. But they knew that Elastic had a huge market. Also, you had a number of ecosystem companies building on top of now OpenSearch, that are now domain on top of Amazon as well. So, I think Amazon's being pretty strategic in how they're doing it. I think some of the-- It'll be interesting. I think this year is a payout year for the cuts that they're making to some of the services internally to kind of, you know, how do we take the fat off some of those services that-- You know, you look at Alexa. I don't know how much revenue Alexa really generates for them. But it's a means to an end for a number of different other services and partners. >> What do you make of this ChatGPT? I mean, Microsoft obviously is playing that card. You want to, you want ChatGPT in the Cloud, come to Azure. Seems like AWS has to respond. And we know Google is, you know, sharpening its knives to come up with its response. >> Yeah, I mean Google just went and talked about Bard for the first time this week and they're in private preview or I guess they call it beta, but. Right at the moment to select, select AI users, which I have no idea what that means. But that's a very interesting way that they're marketing it out there. But, I think that Amazon will have to respond. I think they'll be more measured than say, what Google's doing with Bard and just throwing it out there to, hey, we're going into beta now. I think they'll look at it and see where do we go and how do we actually integrate this in? Because they do have a lot of components of AI and ML underneath the hood that other services use. And I think that, you know, they've learned from that. And I think that they've already done a good job. Especially for media and entertainment when you start to look at some of the ways that they use it for helping do graphics and helping to do drones. I think part of their buy of iRobot was the fact that iRobot was a big user of RoboMaker, which is using different models to train those robots to go around objects and things like that, so. >> Quick touch on Kubernetes, the whole DevOps World we just covered. The Cloud Native Foundation Security, CNCF. The security conference up in Seattle last week. 
First time they spun that out, kind of like re:Inforce, you know, AWS spins out re:Inforce from re:Invent. Amsterdam's coming up soon, KubeCon. What should we expect? What's hot in Kube-land? >> Yeah, I think, you know, in Kube-land you're going to be looking at how OpenShift keeps growing, and I think in that respect you get to see the momentum with people like Red Hat. You see others coming up and realizing how OpenShift has gone to market as being, like you were saying, partnering with those Clouds and really making it simple. I think the simplicity and the manageability of Kubernetes is going to be at the forefront. I think a lot of the investment is still going into, how do I bring observability and DevOps and AIOps and MLOps all together. And I think that's going to be a big place where people are going to be looking to see what comes out of KubeCon in Amsterdam. I think it's that manageability, ease of use. >> Well Rob, I look forward to working with you on behalf of the whole Cube team. We're going to do more of these and go out to some shows, extract the signal from the noise. Really appreciate you coming into our studio. >> Well, thank you for having me on. Really appreciate it. >> You're really welcome. All right, keep it right there, and thanks for watching. This is Dave Vellante for the Cube. And we'll see you next time. (light music)

Published Date : Feb 7 2023


DevOps Virtual Forum Panel 2020


 

>>From around the globe, it's theCUBE, with digital coverage of DevOps Virtual Forum, brought to you by Broadcom. >>Hi guys. Welcome back. So we have discussed the current state and the near future state of DevOps and how it's going to evolve from three unique perspectives. In this last segment, we're going to open up the floor and see if we can come to a shared understanding of where DevOps needs to go in order to be successful next year. So our guests today are, you've seen them all before: Jeffrey Hammond is here, the VP and principal analyst serving CIOs at Forrester. We've also got Serge Lucio, the GM of Broadcom's enterprise software division, and Glenn Martin, the head of QA transformation at BT. Guys, welcome back. Great to have you all three together. >>Great to be here. >>All right. So we're all very socially distanced, as we've talked about before. Great to have this conversation. So let's start with one of the topics that we kicked off the forum with, Jeff. We're going to start with you. Spiritual co-location, that's a really interesting topic that we've uncovered, but how much of the challenge is truly cultural and what can we solve through technology? Jeff, we'll start with you, then Serge, then Glen. Jeff, take it away. >>Yeah, I think fundamentally you can have all the technology in the world and if you don't make the right investments in the cultural practices in your development organization, you still won't be effective. Almost 10 years ago, I wrote a piece where I did a bunch of research around what made high performance teams, software delivery teams, high performance. And one of the things that came out as part of that was that these teams have a high level of autonomy. And that's one of the things that you see coming out of the Agile Manifesto. Let's take that to today, where developers are on their own in their own offices. If you've got teams where the team itself had a high level of autonomy, and they know how to work, they can make decisions. They can move forward. They're not waiting for management to tell them what to do. And so what we have seen is that organizations that embraced autonomy, and got their teams in the right place, and their teams had the information that they needed to make the right decisions, have actually been able to operate pretty well, even as they've been remote. And it's turned out that things like, well, how do we actually push the software that we've created into production, that has become the challenge, not, are we writing the right software? And that's why I think the term spiritual co-location is so important, because even though we may be physically distant, we're on the same plane, we're connected from a shared purpose. You know, Serge and I worked together a long, long time ago. Serge, it's been what, almost 15, 16 years since we were at the same place. And yet I would say there's probably still a certain level of spiritual co-location between us, because of the shared purposes that we've had in the past and what we've seen in the industry. And that's a really powerful tool to build on.
So what role do tools play as part of that? To the extent that tools make information available to build shared purpose on, to the extent that they enable communication so that we can build that spiritual co-location, to the extent that they reinforce the culture that we want to put in place, they can be incredibly valuable, especially when we don't have the luxury of physical co-location. Hope that makes sense. >>It does. I should have introduced this last segment as: we're all spiritually co-located. Serge, clearly you're still spiritually co-located with Jeff. Talk to me about what your thoughts are about spiritual co-location, the cultural impact, and how technology can move it forward. >>Yeah. So I think, well, I'm going to sound very similar to Jeff in that respect. I think, you know, it starts with kind of a shared purpose and how individuals and teams contribute to kind of a business outcome. What is our shared goal or shared vision? What is it we're trying to achieve collectively, and keeping it aligned to that. And so it really starts with that. Now, the big challenge, as always over the last 20 years, especially in large organizations, is the specialization of roles and functions. And so we all started to basically measure what we do on a daily basis using metrics, which oftentimes are completely disconnected from kind of a business outcome or purpose. We kind of revert back to, okay, what is my velocity? What is my cycle time like? And I think, you know, what we can do, or where we really should be focused as an industry, is to start to basically provide a lens for these different stakeholders to look at what they're doing in the context of kind of these business outcomes. So, you know, probably one of my favorite experiences was to actually witness, at one of the large financial institutions, you know, two stakeholders in both development and operations staring at the same data, right? Which was related to, you know, incoming changes, testing execution results, you know, code coverage, open liabilities and all the rest, rolled up to a common level. So that's when you start to put these things in context and represent that in a way that these different stakeholders can look at from their different lenses. And they can start to basically communicate and understand how they jointly are contributing to that kind of common view or objective. >>And Glen, we talked a lot about transformation with you last time. What are your thoughts on spiritual co-location and the cultural part, the technology impact? >>Yeah, I mean, I agree with Jeffrey that, you know, the people and culture are the most important thing. Actually, that's why it's really important when you're transforming to have partners who have the same vision as you, who you can work with, who have the same end goal in mind. And I've certainly found that with our, you know, continuing relationship with Broadcom. What it also does, though, is, you know, tools can accelerate what you're doing and can drive consistency.
You know, we've seen within Simplify, which is BT's flagship transformation program, where we're trying to, as it says, simplify the number of systems stacks that we have, the number of products that we have. Actually, at the moment, we've got different value streams within that program who have got organizational silos, who are trying to reinvent the wheel, who are still doing things manually. So in order to try and bring that consistency, we need the right tools that actually are at an enterprise grade, which can be flexible to work within BT, which is such a complex set of very different environments. In all the areas BT is in, whether it's consumer, whether it's a mobile area, whether it's large global or government organizations, you know, we found that we need tools that can drive that consistency, but also flex to greenfield, brownfield kind of technologies as well. So it's really important, as I say, for a number of different aspects, that you have the right partner to drive the right culture, who's got the same vision, but also who have the tool sets to help you accelerate. They can't do that on their own, but they can help accelerate what it is you're trying to do in IT. And a really good example of that is we're trying to shift left, which is probably quite a bit of a buzz phrase in the kind of testing world at the moment. But, you know, I could talk about things like Continuous Delivery Director, one of Broadcom's tools, and it has many different features to it, but very simply, on its own, it allows us to get visibility of what the teams are doing. And once we have that visibility, then we can talk to the teams around, you know, could they be doing better component testing? Could they be using some virtualized services here or there? And that's not even the main purpose of Continuous Delivery Director, but it's just an example of how tools themselves can give greater visibility and let us have much more intuitive and insightful conversations with other teams and reduce those organizational silos. >>Thanks, Glen. So we'd kind of sum that up: autonomy, collaboration, and tools that facilitate that. So let's talk now about metrics from your perspectives. What are the metrics that matter, Jeff? >>Well, I'm going to go right back to what Glenn said about data that provides visibility, that enables us to make decisions with shared purpose. And so business value has to be one of the first things that we look at. How do we assess whether we have built something that is valuable? You know, that could be sales revenue, it could be net promoter score. If you're not selling what you've built, it could even be what the level of reuse is within your organization, or other teams picking up the services that you've created. One of the things that I've begun to see organizations do is to align value streams with customer journeys, and then to align teams with those value streams. So that's one of the ways that you get to a shared purpose, 'cause we're all trying to deliver around that customer journey, the value associated with it. And we're all measured on that. There are flow metrics, which are really important. How long does it take us to get a new feature out, from the time that we conceive it to the time that we can run our first experiments with it? There are quality metrics, you know, some of the classics are maybe things like defect density or mean time to response.
One of my favorites came from a company called Ultimate Software, where they looked at the ratio of defects found in production to defects found in pre-production, and their developers were in fact measured on that ratio. It told them that, guess what, quality is your job too, not just the test department's or the QA group's. [A simple worked example of that ratio appears after this transcript.] The fourth level that I think is really important, in the current situation that we're in, is the level of engagement in your development organization. We used to joke that we measured this with the parking lot metric: how full was the parking lot at nine, and how full was it at five o'clock? I can't do that anymore, since we're not physically co-located, but what you can do is look at how folks are delivering. You can look at your metrics in your SCM environment. You can look at the relative rates of churn. You can look at things like, well, are our developers delivering during longer periods, earlier in the morning, later in the evening? Are they delivering, you know, on the weekends as well? Are those signs that we might be heading toward burnout, because folks are still running at sprint levels instead of marathon levels? So all of those in combination, business value, flow, engagement and quality, I think form the backbone of any sort of metrics program. The second thing that I think you need to look at is what are we going to do with the data, and the philosophy behind the data is critical. Unfortunately, I see organizations where they weaponize the data, and that's completely the wrong way to look at it. What you need to do is you need to say, how is this data helping us to identify the blockers? The things that aren't allowing us to provide the right context for people to do the right thing. And then what do we do to remove those blockers, to make sure that we're giving these autonomous teams the context that they need to do their job, in a way that creates the most value for the customer. >>Great advice, Jeff. Glenn, over to you. The metrics that matter to you, that really make a big difference, and also, how do you measure quality? >>Kind of following on to the advice that Jeff provided, I mean, Jeff provided some great advice. Actually, he talks about value, he talks about flow. Both of those things are very much on my mind at the moment. But, you know, I listened to a speaker called Mik Kersten a couple of months ago. He talked very much about how important flow management is, and, you know, using that to remove waste, to understand, in terms of making software changes, what it is that's causing us to take longer than we need to. So where are those areas where it takes too long? So I think that's a very important thing for us. It's even more basic than that at the moment; we're on a journey moving from kind of waterfall to agile. And the problem with moving from waterfall to agile is, with waterfall, the business had a kind of comfort that, you know, everything was tested together and therefore it's safer. And with agile, there's that kind of, how do we make sure that, you know, if we're doing things quick and we're getting stuff out the door, that we give that confidence that it's ready to go, or, if there's a risk, that we're able to truly articulate what that risk is.
So there's a bit about release confidence, and some of the metrics around that and how healthy those releases are, and actually saying, you know, we spend a lot of money and investment setting up our teams, training our teams; are we actually seeing them deliver more quickly, and are we actually seeing them deliver more value quickly? So yeah, those are the two main things for me at the moment. But I think it's also about, you know, generally bringing it all together, the DevOps, you know, we've got the kind of ValueOps, AIOps; how do we actually bring that together so we can make quick decisions and make sure that we are delivering the biggest bang for our buck? >>Absolutely, biggest bang for the buck. Serge, your thoughts? >>Yeah. So I think we all agree, right? It starts with business metrics, flow metrics. These are kind of the most important metrics. And ultimately, I mean, one of the things that's very common across highly functional teams is engagement, right? When you see a team that's highly functioning, that's agile, that practices DevOps every day, they are highly engaged. That's definitely true. Now, back to, I think, Jeff's point on weaponization of metrics. One of the key challenges we see is that organizations traditionally have been kind of, you know, setting up benchmarks, right? So what is a good cycle time? What is a good lead time? What is a good mean time to repair? The problem is that this is very contextual, right? It varies. It's going to vary quite a bit depending on the nature of the application and system. And so one of the things that we really need to evolve as an industry is to understand that it's not so much about those flow metrics; it's about whether these four metrics ultimately contribute to the business metrics, to the business outcome. So that's one thing. The second aspect, I think, is oftentimes misunderstood. >>Yeah. >>So, that cycle time, or what you perceive as being a bad cycle time or bad quality, the problem is, oftentimes, do you ever go and explore why, right? What is the root cause of this? And I think one of the key challenges is that we tend to focus a lot of time on metrics and not on the anti-patterns, which are pretty common across the industry. You know, you look at, for instance, things like lead time. It's very common that organizational boundaries are going to be a key contributor to bad lead time. And so I think that, beyond the metrics, there is a lot of work that we need to do in terms of classifying these anti-patterns. You know, back to you, Jeff, I think you're one of the co-authors of water-scrum-fall as a key pattern in the industry, or anti-pattern. But water-scrum-fall, right, is a key one, and you will detect that through defect arrival rates. That's where that looks like an S-curve. And so I think it's beyond kind of the metrics; it's what do you do with those metrics? >>Right. I'll tell you, Serge, one of the things that is really interesting to me in that space is, I think those of us who have been in the industry for a long time, we know the anti-patterns 'cause we've seen them in our careers, maybe multiple times. And one of the things that I think you could see tooling do is perhaps provide some notification of anti-patterns based on the telemetry that comes in.
I think it would be a really interesting place to apply machine learning and reinforcement learning techniques. So hopefully that's something that we'd see in the future with DevOps tools, because, you know, as a manager that, you know, may be only a 10-year veteran or a 15-year veteran, you may be seeing these anti-patterns for the first time. And it would sure be nice to know what to do when they start to pop up. >>That would be great. Insight, always helpful. All right, guys, I would like to get your final thoughts. The one thing that you believe our audience really needs to be on the lookout for and to put on our agendas for the next 12 months. Jeff, we'll go back to you. >>I would say, look for the opportunities that this disruption presents. And there are a couple that I see. First of all, as we shift to remote-centric working, we're unlocking new pools of talent; it's possible to implement more geographic diversity. So look to that as part of your strategy. Number two, look for new types of tools. We've seen a lot of interest in usage of low-code tools to very quickly develop applications. That's potentially part of a mainstream strategy as we go into 2021. Finally, make sure that you embrace this idea that you are supporting creative workers, that agile and DevOps are the peanut butter and chocolate to support creative workers with algorithmic capabilities. >>Peanut butter and chocolate. Glen, where do we go from there? What's the one silver bullet that you think folks should be on the lookout for? >>I certainly agree that low code is, next year, we'll see much more low code. We'd already started moving towards more of a SaaS-based world, but low code also. I think as well, for me, we've still got one foot in the kind of cloud camp. You know, we'll be fully trying to explore what that means going into the next year and exploiting the capabilities of cloud. But I think the last thing for me is how do you really instill quality throughout the kind of life cycle, where, when I heard the word water-scrum-fall, it kind of made me shudder, because I know that's a problem. That's where we're at with some of our things at the moment. So we need to get beyond that. We need to be releasing changes more frequently into production, and actually being a bit more brave and having the confidence to actually do more testing in production and going straight to production itself. So expect to see much more of that next year. Yeah. Thank you. I haven't got any food analogies, unfortunately. >>We all need some peanut butter and chocolate. All right, Serge, take us home. What's that nugget you think everyone needs to have on their agendas? >>That's interesting, right? So a couple of days ago we had kind of the latest State of DevOps report, right? And if you read through the report, it's all about velocity, right? It's all about, we still are perceiving DevOps as being all about speed. And so to me, the key advice is, in order to create kind of that spiritual co-location, in order to foster engagement, we have to go back to what is it we're trying to do collectively. We have to go back to tying everything to the business outcome. And so for me, it's absolutely imperative for organizations to start to plot their value streams, to understand how they're delivering value, and to align everything they do, from metrics to delivery to flow, to those outcomes.
And only with that, I think, are we going to be able to really start to align all these roles across the organization and drive not just speed, but business outcomes. >>All about business outcomes. I think you guys, the three of you, could write a book together. So I'll give you that as food for thought. Thank you, all of our guests, so much for joining me today. I think this was an incredibly valuable, fruitful conversation, and we appreciate all of you taking the time to spiritually co-locate with us today. Guys, thank you. >>Thank you, Lisa. >>For Jeff Hammond, Serge Lucio and Glen Martin, I'm Lisa Martin. Thank you for watching the Broadcom DevOps Virtual Forum.
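As a footnote to Jeff's Ultimate Software example and the panel's point about flow metrics, here is a minimal, hypothetical sketch of how a team might compute a defect escape ratio and a simple cycle-time flow metric. The defect and work-item records below are invented stand-ins for whatever a real defect tracker and delivery pipeline actually export.

```python
from datetime import datetime
from statistics import median

# Hypothetical exports from a defect tracker and a delivery pipeline.
defects = [
    {"id": "D-101", "found_in": "pre-production"},
    {"id": "D-102", "found_in": "pre-production"},
    {"id": "D-103", "found_in": "production"},
    {"id": "D-104", "found_in": "pre-production"},
]

work_items = [
    {"id": "F-1", "started": "2020-10-01", "released": "2020-10-09"},
    {"id": "F-2", "started": "2020-10-05", "released": "2020-10-20"},
    {"id": "F-3", "started": "2020-10-12", "released": "2020-10-19"},
]

def defect_escape_ratio(defects) -> float:
    """Ratio of defects found in production to defects found before it.

    The lower the ratio, the more quality work is happening upstream,
    which is the behaviour Jeff describes Ultimate Software measuring.
    """
    prod = sum(1 for d in defects if d["found_in"] == "production")
    pre = sum(1 for d in defects if d["found_in"] == "pre-production")
    return prod / pre if pre else float("inf")

def median_cycle_time_days(work_items) -> float:
    """Median days from starting a feature to releasing it (a flow metric)."""
    durations = []
    for item in work_items:
        started = datetime.fromisoformat(item["started"])
        released = datetime.fromisoformat(item["released"])
        durations.append((released - started).days)
    return median(durations)

print(f"Defect escape ratio: {defect_escape_ratio(defects):.2f}")         # 0.33
print(f"Median cycle time:   {median_cycle_time_days(work_items)} days")  # 8 days
```

The point, as the panel stresses, is to use numbers like these to find and remove blockers, not to weaponize them against individual teams.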

Published Date : Nov 20 2020


Jeffrey Hammond, Forrester | DevOps Virtual Forum 2020


 

>> Narrator: From around the globe, it's theCUBE! With digital coverage of DevOps Virtual Forum, brought to you by Broadcom. >> Hi, Lisa Martin here covering the Broadcom DevOps Virtual Forum. I'm very pleased to be joined today by a CUBE alumni, Jeffrey Hammond, the Vice President and Principal Analyst serving CIOs at Forrester. Jeffrey, nice to talk with you today. >> Good morning, it's good to be here. >> So, a virtual forum, a great opportunity to engage with our audiences. So much has changed in the last, it's an understatement, right? Or it's an overstated thing, but it's obvious. So much has changed. When we think of DevOps, one of the things that we think of is speed, enabling organizations to be able to better serve customers or adapt to changing markets like we're in now. Speaking of the need to adapt, talk to us about what you're seeing with respect to DevOps and Agile in the age of COVID. What are things looking like? >> Yeah, I think that for most organizations, we're in a period of adjustment. When we initially started, it was essentially a sprint. You run as hard as you can for as fast as you can for as long as you can and you just kind of power through it. And that's actually what the folks at GitHub saw in May, when they run an analysis of how developers commit times and level of work that they were committing and how they were working. In the first couple months of COVID, was progressing, they found that developers, at least in the Pacific Time Zone, were actually increasing their work volume, maybe 'cause they didn't have two hour commutes, or maybe because they work stuck away in their homes, but for whatever reason, they were doing more work. And it's almost like, if you've ever run a marathon, the first mile or two in the marathon, you feel great, you just want to run and you want to power through it, you want to go hard. And if you do that, by the time you get to mile 18 or 19, you're going to be gassed, sucking for wind. And that's I think where we're starting to hit. So as we start to gear our development shops up for the reality that most of us won't be returning into an office until 2021 at the earliest. And many organizations will be fundamentally changing their remote work policies, we have to make sure that the agile processes that we use, and the DevOps processes and tools that we use to support these teams are essentially aligned to help developers run that marathon, instead of just kind of power through. So, let me give you a couple specifics. For many organizations, they have been in an environment where they will tolerate remote work and what I would call remote work around the edges, like developers can be remote, but product managers and essentially scrum masters and all the administrators that are running the SCM repositories and the DevOps pipelines are all in the office. And it's essentially centralized work. That's not where we are anymore. We're moving from remote workers at the edge to remote workers at the center of what we do. And so, one of the implications of that is that we have to think about all the activities that you need to do from a DevOps perspective, or from an agile perspective. They have to be remotable. One of the things I found with some of the organizations I talked to early on was, there were things that administrators had to do that required them to go into the office, to reboot the SCM server as an example, or to make sure that the final approvals for production were made. 
And so, the code could be moved into the production environment. And so, it actually was a little bit difficult because they had to get specific approval from the HR organizations to actually be allowed to go into the office in some states. And so, one of the the results of that is that, while we've traditionally said tools are important, but they're not as important as culture, as structure, as organization, as process, I think we have to rethink that a little bit. Because to the extent that tools enable us to be more digitally organized and to achieve higher levels of digitization in our processes, and be able to support the idea of remote workers in the center. They're now on an equal footing with so many of the other levers that organizations have at their disposal. I'll give you another example. For years, we've said that the key to success with Agile at the team level is cross functional, co-located teams that are working together. Physically co-located. It's the easiest way to show agile success. We can't do that anymore. We can't be physically located at least for the foreseeable future. So, how do you take the low hanging fruits of an agile transformation and apply it in the time of COVID? Well, I think what you have to do is you have to look at what physical co-location has enabled in the past and understand that it's not so much the fact that we're together looking at each other across the table, it's the fact that we're able to get into a shared mind space. From a measurement perspective, we can have shared purpose, we can engage in high bandwidth communications. It's the spiritual aspect of that physical co-location that is actually important. So, one of the biggest things that organizations need to start to ask themselves is, how do we achieve spiritual co-location with our Agile teams, because we don't have the ease of physical co-location available to us anymore. >> Well, spiritual co-location is such an interesting kind of provocative phrase there, but something that probably was a challenge. Here we are seven, eight months in, for many organizations as you say, going from physical workspaces, co-location, being able to collaborate face to face to a light switch flip overnight, and this undefined indeterminate period of time where all we were living with was uncertainty. How does spiritual... When you talk about spiritual co-location in terms of collaboration and processes and technology. Help us unpack that and how are you seeing organizations adopt it? >> Yeah, it's a great question. And I think it goes to the very root of how organizations are trying to transform themselves to be more agile and to embrace DevOps. If you go all the way back to the original Agile Manifesto. There were four principles that were espoused. Individuals and interactions over processes and tools. That's still important, individuals and interactions are at the core of software development. Processes and tools that support those individuals in those interactions are more important than ever. Working software over comprehensive documentation. Working software is still more important. But when you are trying to onboard employees, and they can't come into the office, and they can't do the two day training session, and kind of understand how things work, and they can't just holler over theCUBE, to ask a question, you may need to invest a little bit more in documentation to help that onboarding process be successful in a remote context. Customer collaboration over contract negotiation. 
Absolutely still important. But employee collaboration is equally as important if you want to be spiritually co-located and if you want to have a shared purpose. And then, responding to change over following a plan. I think one of the things that's happened in a lot of organizations is we have focused so much of our DevOps effort around velocity. Getting faster, we need to run as fast as we can. Like that sprinter, okay? Trying to just power through it as quickly as possible. But as we shift to the marathon way of thinking, velocity is still important but agility becomes even more important. So when you have to create an application in three weeks to do track and trace for your employees, agility is more important than just flat out velocity. And so, changing some of the ways that we think about DevOps practices is important to make sure that that agility is there. For one thing, you have to defer decisions as far down the chain to the team level as possible. So those teams have to be empowered to make decisions. Because you can't have a program level meeting of six or seven teams in one large hall and say, here's the lay of the land, here's what we're going to do, here are our processes, and here are our guardrails. Those teams have to make decisions much more quickly. The developers are actually developing code in smaller chunks of flow. They have to be able to take two hours here, or 50 minutes there and do something useful. And so, the tools that support us have to become tolerant of the reality of how we're working. So, if they work in a way that it allows the team together to take as much autonomy as they can handle, to allow them to communicate in a way that delivers shared purpose, and allows them to adapt and master new technologies, then they're in the zone, they'll get spiritually connected. I hope that makes sense (chuckles). >> It does, I think we all could use some of that. But you talked about in the beginning and I've talked to numerous companies during the pandemic on theCUBE about the productivity or rather the number of hours worked has gone way up for many roles, and times that they normally at late at night on the weekends. So, but it's a cultural, it's a mind shift. To your point about DevOps focused on velocity, sprint, sprint, sprint, and now we have to. So that cultural shift is not an easy one for developers and even the biz folks to flip so quickly. What have you seen in terms of the velocity at which businesses are able to get more of that balance between the velocity, the sprint and the agility? >> I think at the core, this really comes down to management sensitivity. When everybody was in the office, you could kind of see the mental health of development teams by watching how they work, you can call it management by walking around, right? We can't do that, managers have to be more aware of what their teams are doing, because they're not going to see that developer doing a check in at 9:00 p.m. on a Friday, because that's what they had to do to meet the objectives. And they're going to have to find new ways to measure engagement and also potential burnout. A friend of mine once had a great metric that he called the Parking Lot Metric. It was helpful as the parking lot at nine and helpful was it at five. And that gives you an indication of how engaged your developers are. What's the digital equivalent of the Parking Lot Metric in the time of COVID, it's commit stats, it's commit rates, it's the turn rate that we have in our code. 
So we have this information; we may not be collecting it, but then the next question becomes how do we use that information? Do we use that information to say, well, this team isn't delivering at the same level of productivity as another team? Do we weaponize that data? Or do we use that data to identify impediments in the process? Why isn't a team working effectively? Is it because they have higher levels of family obligations, and they've got kids that are at home? Is it because they're working with hardware technology, and guess what, it's not easy to get the hardware technology into their home office, because it's in the lab at the corporate office? Or they're trying to communicate halfway around the world with an office lab that is also shut down, and the bandwidth just doesn't enable high-bandwidth communications. So, from a DevOps perspective, managers have to get much more sensitive to the exhaust that the DevOps tools are throwing off, but also to how they're going to use that in a constructive way to prevent burnout. And then, if they're not already managing, monitoring, or measuring the level of developer engagement they have, they really need to start. Whether that's surveys around developer satisfaction, whether it's more regular social events where developers can just get together and drink a beer and talk about what's going on in the project, and monitoring who checks in and who doesn't. They have to work harder, I think, than they ever have before. >> Well, and you mentioned burnout. And that's something that I think we've all faced in this time at varying levels, and it changes; there's a real tension in the air regardless of where you are. There's a challenge, as you mentioned, with people having their kids as co-workers and fighting for bandwidth, because everyone is forced into this situation. I'd love to get your perspective on some businesses that have done this adaptation well. What can you share in terms of some real-world examples that might inspire the audience? >> Yeah, I'll start with Stack Overflow. They recently published a piece in the Journal of the ACM around some of the things that they had discovered. First of all, just a cultural philosophy: if one person is remote, everybody is remote, and you just think that way from the executive level. Social spaces: one of the things that they talk about doing is leaving the video conference room open at the team level all day long. The team members will go on mute, so that they don't necessarily have somebody else listening to them, but if they have a question, they can just pop off mute really quickly and ask it, and if anybody else knows the answer, it's kind of like being in that virtual pod, if you will. Even here at Forrester, one of the things that we've done is we've invested in social ceremonies. We've actually moved the team meetings on my analyst team from once every two weeks to weekly, and we have built more time in for socialization, just so we can see how we're doing. I think Microsoft has really made some good information available on how they've managed things like the onboarding process. I think Amanda Silver over there mentioned in a presentation a couple of weeks ago that Microsoft has onboarded over 150,000 people since the start of COVID. If you don't have good remote onboarding processes, that's going to be a disaster.
Now, they're not all developers, but if you think about it, it's everything from how you do the interviewing process, to how you get people their badges, to how they get their equipment. Security is another issue that they called out. Typically, IT security, the security of developers' machines, ends at the corporate desktop. But now, since we're increasingly using our own machines, our own hardware, security organizations are going to have to extend their security policies to cover employee devices. And that's caused them to scramble a little bit. So, the examples are out there. It's not a lot of "we have to do everything completely differently," but it's a lot of subtle changes that have to be made. I'll give you another example. One of the things that we are seeing is that more and more organizations, to deal with the challenges around agility with respect to delivering software, are embracing low-code tools. In fact, we see about 50% of firms are using low-code tools right now, and we predict it's going to be 75% by the end of next year. So, figuring out how your DevOps processes support an organization that might be using Mendix or OutSystems or the Power Platform, building the front end of an application, like a track and trace application, really, really quickly, but then hooking it up to your back-end infrastructure. Does that happen completely outside the DevOps investments that you're making and the agile processes that you're running? Or do you adapt your organization? Are teams now hybrid teams, with not just professional developers but also business users doing some development with a low-code tool? Those are the kinds of things that we have to be willing to entertain in order to shift the focus a little bit more toward the agility side, I think. >> A lot of obstacles, but also a lot of opportunities for businesses to really learn, pay attention here, pivot and grow, and hopefully some good opportunities for the developers and the business folks to just get better at what they're doing and learn to embrace spiritual co-location. Jeffrey, thank you so much for joining us on the program today, very insightful conversation. >> It's my pleasure, it's an important thing. Just remember, if you're going to run that marathon, break it into 26 ten-minute runs, take a walk break in between each, and you'll find that you'll get there. >> Digestible components, wise advice. Jeffrey Hammond, thank you so much for joining. For Jeffrey, I'm Lisa Martin. You're watching Broadcom's DevOps Virtual Forum. (bright upbeat music)

Published Date : Nov 20 2020


Serge Lucio, Broadcom | DevOps Virtual Forum 2020


 

>> From around the globe, it's theCUBE, with digital coverage of DevOps Virtual Forum, brought to you by Broadcom. >> Continuing our conversations here at Broadcom's DevOps Virtual Forum, Lisa Martin here. Please welcome back to the program Serge Lucio, the general manager of the Enterprise Software Division at Broadcom. Hey Serge, welcome. >> Thank you. Good to be here. >> So I know you were just participating in the BizOps Manifesto that happened recently. I just had the chance to talk with Jeffrey Hammond, and he unlocked this really interesting concept, but I wanted to get your thoughts on spiritual co-location as really a necessity for BizOps to succeed in this unusual time in which we're living. What are your thoughts on spiritual co-location in terms of cultural change versus adoption of technologies? >> Yeah, it's quite interesting, right? When we think about the major impediments to DevOps implementation, it's all about culture, right? And sure, over the last 20 years we've been talking about silos, we've been talking about the challenges for these teams when it comes to aligning. And in many ways it's not so much about these teams aligning but about being in the same boat, right? It's really about fusing those teams around a common purpose, a common objective. So to me this is really about changing this culture, where people start to look at OKRs as the key objectives that drive the entire team. Now, what it means in practice is really that we need to change a lot of behaviors, right? It's not about the hierarchy, it's not about roles. It's about who can do what and when, and driving a bias towards action. It also means that, especially in these COVID times, it becomes very difficult, right, to drive that kind of collaboration and affinity between these teams. And so I think there's a significant role that tools especially can play in terms of providing this continuous feedback to teams, to put them in that place of spiritual co-location. >> Well, and talking about culture: it's something that, you know, we're so used to talking about DevOps with respect to velocity, it's all about speed here. But of course this time everything changed so quickly, going from physical spaces to everybody being remote. You can't replicate that digitally, but there are collaboration tools that can really be essential to help that cultural shift, right? >> Yeah, so to me, we tend to talk about collaboration in a very mundane way, right? Of course we can use Zoom, we can all get into the same room. But the point, I think, when Jeff says spiritual co-location, is really about whether we all share the same objective. Take, for instance, our pipeline, right? When we talk about DevOps, we probably all start thinking about this continuous delivery pipeline that basically drives the automation, the orchestration, across the team. But just thinking about a pipeline, right, at the end of the day it's all about what the mean time to feedback to these teams is. If I'm a developer and I commit code, how long does it take for that code to be processed through the pipeline before I get feedback? If I am a finance person who's funding a product or a project, what is my mean time to feedback?
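Serge's "mean time to feedback" is easy to make concrete. The sketch below is illustrative only: it assumes you have already exported pipeline records with a commit timestamp and a first-result timestamp (the field names are made up for the example, not any real CI API), and it simply summarizes the gap between them.

```python
# Sketch: mean time to feedback, from commit to the first pipeline result.
# Assumption: `runs` is data exported from your CI system as dicts with
# ISO-8601 "committed_at" and "first_result_at" fields; both field names
# are illustrative placeholders.
from datetime import datetime
from statistics import mean, median

def hours_between(start: str, end: str) -> float:
    delta = datetime.fromisoformat(end) - datetime.fromisoformat(start)
    return delta.total_seconds() / 3600

def feedback_report(runs):
    gaps = [hours_between(r["committed_at"], r["first_result_at"]) for r in runs]
    return {"runs": len(gaps), "mean_hours": mean(gaps), "median_hours": median(gaps)}

if __name__ == "__main__":
    sample = [
        {"committed_at": "2020-11-16T09:12:00+00:00", "first_result_at": "2020-11-16T09:47:00+00:00"},
        {"committed_at": "2020-11-16T14:03:00+00:00", "first_result_at": "2020-11-17T08:30:00+00:00"},
    ]
    print(feedback_report(sample))
```

The same calculation applies one level up: swap in the timestamps a finance stakeholder cares about, such as when funding was approved and when the first increment reached production, and you have their mean time to feedback as well.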
And so, when we think about the pipeline, I think what's been really inspiring to me in the last year or so is that there is much more adoption of the DORA metrics. There is way more of a focus around value stream management. And to me, when we talk about collaboration, this is really the balance: how do you provide that feedback to the different stakeholders across the life cycle in a very timely manner? And that's what we need to get to in terms of this notion of collaboration. It's not so much about people being in the same physical space. It's about, you know, when I check in code, can the system automatically identify what I'm going to break? If I'm about to release an application, how can the system help me reduce my change failure rate, because it's able to predict that some issue was introduced in the application or the product? So I think there's a great role for technology and AI to actually provide that new level of collaboration. >> So we'll get to AI in a second, but I'm curious, what are some of the metrics you think really matter right now, as organizations are still in some form of transformation to this new, almost 100% remote workforce? >> So I'll just say first, I'm not a big fan of metrics. And the reason being that, you know, you can look at a change failure rate, right, or a lead time or cycle time, and those are interesting metrics, right? The trend on a metric is absolutely critical. But what's more important is to get to the root cause: what is it that led that metric to degrade or improve over time? And so I'm much more interested, and we at Broadcom are much more interested, in understanding what are the patterns that contribute to this. So I'll use a very mundane example. We know that cycle time is heavily influenced by organizational boundaries. So, you know, we talk a lot about silos, but we've worked with many of our customers doing value stream mapping, and oftentimes what you see is that the boundaries of your organization create a lot of idle time, right? So to me, it's less about the metrics. I think the DORA metrics are a pretty valid set of metrics, but what's way more important is to understand, what are the anti-patterns? What are the things that we can detect through the data that actually are affecting those metrics? And over the last 10, 20 years, we've learned a lot about what the anti-patterns are within our large enterprise customers. And there are plenty of them. >> What are some of the things that you're seeing now with respect to patterns that have developed over the last seven to eight months? >> So I think the two areas which clearly are evolving very quickly are, on the front end of the life cycle, where DevOps is more and more embracing value stream management and value stream mapping. And I think what's interesting is that in many ways, the product is becoming the new silo. The notion of a product is very difficult by itself to actually define. People are starting to recognize that a value stream is not its own little island; that in reality, when I define a product, this product oftentimes has dependencies on other products, and that in fact you're looking at a network of value streams, if you will. So on that end there is clearly a new set, if you will, of anti-patterns, where products are being defined as a set of OKRs.
They have interdependencies, and you end up with a new set of silos. On the other hand, the other key movement is around the SRE space, where I think there is a cultural clash. While the DevOps side is very much embracing this notion of OKRs and value stream mapping and value stream management, on the other end you have IT operations teams who still think in business services, right? They think about configuration items, they think about infrastructure. And so it's not uncommon to see teams where the operations team is still thinking about tens of thousands, hundreds of thousands of business services. And so there is this boundary where, while SRE has been put in place and there's lots of thinking about what kind of metrics can be defined, going back to culture, I think there's a lot of cultural evolution that's still required for those operations teams. >> And that's a hard thing. Cultural transformation in any industry, pandemic or not, is a challenging thing. You talked about AI and automation a few minutes ago. How do you think those technologies can be leveraged by DevOps leaders to influence their successes and their ability to collaborate, and maybe see eye to eye with the SREs? >> Yeah, so even for myself, right, as the leader of a 1,500-person organization, there are a number of things I don't see on a daily basis. And I think the technologies that we have at our disposal today around AI are able to mine a lot of data and expose a lot of issues that, as leaders, we may not be aware of. And some of these are pretty easy to understand, right? We all think we're agile. And yet when you start to understand, for instance, what the work in progress is during the sprint, when you start to analyze the data, you can detect, for instance, that maybe the teams are over-committed, that there is too much work in progress. You can start to identify impediments, either from a technology or from a people point of view, which were hidden. You can start to understand that maybe the change failure rate is degrading. So I believe that there is a fundamental role to be played by the tools to expose, again, these anti-patterns, to make these things visible to the teams, and to be able to even compare teams, right? One of the things that's amazing is that now we have access to tons of data, not just from a given customer but across a large number of customers, and so we can start to compare how all of these teams operate, and what's working and what's not working. >> Thoughts on AI and automation as a facilitator of spiritual co-location? >> Yeah, absolutely. You know, the problem we all face is the unknown, right? The velocity, the volume, the variety of the data; every day we don't necessarily completely appreciate what the impact of our actions is, right? And so AI can really act as a safety net that enables us to understand what the impact of our actions is. And so, in many ways, the ability to be informed in a timely manner, to be able to interact with people on the basis of data and collaborate on that data in a factual manner, I think is a very powerful enabler in that respect. I mean, I've seen it countless times: for instance, at the SRE boundary, being able to show the quality attributes of an incoming release, right?
And exposing that to an operations person, an SRE person, and enabling that collaboration dialogue through it is a very, very powerful tool. >> Do you have any recommendations for how teams, the SRE folks, the DevOps folks, can use AI and automation in the right ways to be successful, rather than in ways that are going to be non-productive? >> Yeah, so to me the question really is about how we use data. There are different ways you can use data, right? So you can do a lot of analytics, predictive analytics. There is a tendency to look at, let's say, a specific KPI, like an availability KPI or change failure rate, and to basically do a regression analysis and project how these things are going to trend in the future. To me, that's a bad approach. The reason why I fundamentally think it's a bad approach is because our systems, the way we develop software, are non-linear kinds of systems, right? Software development is not linear in nature. And so this is probably the worst approach, to just focus on projecting metrics. On the other hand, you can start to actually understand, at a more granular level, what the things are which are contributing to this, right? So you start to understand, for instance, that whenever you change a specific part of the application, that translates into production issues. I've actually got a customer who identified that over 50% of their unplanned outages were related to specific components in their architecture, and whenever these components were changed, it resulted in these unplanned outages. So if you start to be able to basically establish causality, right, cause and effect between data across the life cycle, I think this is the right way to use AI. And so for me, I think it's way more of a classification problem: what are the causes of the problems that do exist and affect things, as opposed to a purely predictive approach, which I don't think is as powerful. >> So I mentioned at the beginning of our conversation that you just came off the BizOps Manifesto; you're one of the authors of that. I want to get your thoughts on DevOps and BizOps overlapping, complementing each other. From the BizOps perspective, what does it mean for the future of DevOps? >> Yeah, so it's interesting, right? If you think about DevOps, there's no founding document, right? We can refer to The Phoenix Project; I mean, there is a set of documents which have been written, but in many ways there is no clear definition of what DevOps is. If you go to the DevOps Institute today, you'll see that there are specific trainings, for instance, on value stream management, on SRE. And so in many ways, the problem we have as an industry is that there is a set of practices across agile, DevOps, SRE, value stream management, ITIL, right? And we all basically talk about the same things, right? We all talk about essentially accelerating the mean time to feedback, but yet we don't have a common framework to talk about that. The other key thing is that we had to wait for Gene Kim's latest book to really start to get into the business aspect, right, and for value stream mapping to start to emerge, for us as an industry, for IT, to start to think about what our connection with the business is, what our purpose is, right? And ultimately it's all about driving these business outcomes.
And so to me, BizOps is really about putting a lens on this critical element: that it's not business and IT, that we in fact need to fuse business and IT. That IT needs to transform itself to recognize that it's a value generator, right, not a cost center. And so the relationship, to me, is that BizOps provides this overall framework, if you will, that sets the context for why IT exists, and for the core values and principles that IT needs to embrace to, again, change from cost center to value center. And then we need to start to use this as a way to unify some of, again, the core practices, whether it's agile, DevOps, value stream mapping, SRE. So I think over time, my hope is that we start to organize a lot of our practices, language, and cultural elements. >> Last question, Serge, in the last few seconds we have here, talking about the relation between BizOps and DevOps: as DevOps evolves, and to circle back on some of your insights, what should our audience keep their eyes on in the next six to 12 months? >> So to me, the key challenge for the industry is really this: we are seeing a very rapid shift towards project to product, right? What we don't want to do is to recreate new silos, hard silos. So that's one of the big changes that I think we need to be really careful about. Because ultimately it is about culture; it's not about how we segment the work, right? And it's only through culture that we can overcome silos. So back to, I guess, Jeffrey's concept of spiritual co-location, I think it's really about that too. It's really about focusing on the business outcomes, on aligning, on driving engagement across the teams, but not creating a new set of silos which, instead of being vertical, are going to be these horizontal products. >> Great advice, Serge: looking at culture as a way of really addressing and helping to reduce or replace those challenges. We thank you so much for sharing your insights and your time at today's DevOps Virtual Forum. >> Thank you. Thanks for your time. >> For Serge Lucio, I'm Lisa Martin; we'll be right back. (upbeat music)
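Serge's preference for establishing cause and effect over simply projecting metrics lends itself to a small illustration. The sketch below is an assumption-laden example, not Broadcom tooling: it takes two datasets you would have to export yourself, changes tagged with the component they touched and incidents attributed to a change, and reports a per-component change failure rate in the spirit of his "50% of unplanned outages" example.

```python
# Sketch: per-component change failure attribution. Record shapes and
# field names are illustrative, not any particular tool's schema.
from collections import defaultdict

def change_failure_by_component(changes, incidents):
    """changes: dicts with "id" and "component"; incidents: dicts with an
    optional "caused_by_change" id. Returns {component: (total, failed, rate)}."""
    totals = defaultdict(int)
    for change in changes:
        totals[change["component"]] += 1

    failing_change_ids = {i["caused_by_change"] for i in incidents if i.get("caused_by_change")}
    failed = defaultdict(int)
    for change in changes:
        if change["id"] in failing_change_ids:
            failed[change["component"]] += 1

    return {comp: (totals[comp], failed[comp], failed[comp] / totals[comp]) for comp in totals}

if __name__ == "__main__":
    changes = [{"id": "c1", "component": "billing"}, {"id": "c2", "component": "billing"},
               {"id": "c3", "component": "portal"}]
    incidents = [{"id": "i1", "component": "billing", "caused_by_change": "c1"}]
    for comp, (total, bad, rate) in change_failure_by_component(changes, incidents).items():
        print(f"{comp:10s} changes={total} failures={bad} failure_rate={rate:.0%}")
```

As Serge frames it, this is classification and attribution rather than forecasting: it surfaces the components whose changes disproportionately precede outages, so teams know where to focus.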

Published Date : Nov 20 2020


Glyn Martin, BT Group | DevOps Virtual Forum


 

>> From around the globe, it's theCUBE, with digital coverage of DevOps Virtual Forum, brought to you by Broadcom. >> Welcome to Broadcom's DevOps Virtual Forum. I'm Lisa Martin, and I'm joined by another Martin, very socially distanced from me, all the way from Birmingham, England: Glyn Martin, head of QA Transformation at BT. Glyn, it's great to have you on the program. >> Thank you, Lisa. I'm looking forward to it. >> As we said before we went live, two Martins for the price of one in one segment, so this is going to be an interesting segment. What we're going to do is have Glyn give us a really deep, inside-out view of DevOps from an evolution perspective. So Glyn, let's start. Transformation is at the heart of what you do, and it's obviously been a very transformative year. How have the events of this year affected the transformation that you are responsible for driving? >> Yeah, thank you, Lisa. I mean, it has been a difficult year, and although BT, which is a global telecommunications company, has been relatively resilient as an industry through COVID, it obviously still has been affected and has got its challenges. If anything, it has actually caused us to accelerate our transformation journey. We had to do some great things during this time, you know, in the UK, for our emergency and health workers, giving them unlimited data, and for vulnerable people, supporting them, and that meant that we've had to deliver changes quickly. But what we want to be able to do is deliver those kinds of changes quickly but sustainably, for everything that we do, not just because there's an emergency. So we were already on that kind of journey, but it's ever more important now that when we do that kind of work, we do it more quickly, and that it works, because the implications of it not working could be terrible. We've been supporting testing centers and new hospitals to treat COVID patients, so we need to get it right, and therefore the coverage of what we do, the quality of what we do, and how quickly we do it has really taken on a new scale, in what was already a very competitive market within the telco industry in the UK. What I would say is that we are under pressure to deliver more value, but we also have cost challenges. We have to obviously deal with the fact that COVID-19 has hit most industries' revenues and profits. So we've got this paradox between having less cost but having to deliver more value, quicker, and to higher quality. So yeah, certainly the finances are on our minds, and that's why we need flexible cost models that allow us to grow, but we get that growth by showing that we're delivering value, especially in these times when there are financial challenges on companies. >> So one of the things that I want to ask you about, again looking at DevOps from the inside out and the evolution that you've seen: you talked about the speed of things really accelerating in these last nine months or so. When we think DevOps, we think speed. But one of the things I'd love to get your perspective on, which we've talked about in a number of the segments that we've done for this event, is cultural change. What are some of the things that you've seen there as needing to, as you said, get things right but done so quickly, to support essential businesses and essential workers? How have you seen that cultural shift? >> Yeah, I think, you know, before, test teams saw themselves as just one part of the software delivery cycle. Actually, now, our customers are expecting quality, and to deliver for our customers what they want, quality has to be ingrained throughout the life cycle. Obviously there are lots of buzzwords, like shift-left: how do you do shift-left testing? But for me, it's really about instilling quality and giving shared capabilities throughout the life cycle that drive automation and drive improvements. I always say that you're only as good as your lowest common denominator, and one thing that we were finding on our DevOps journey was that we were trying to do certain things quicker and had automated builds and automated tests, but if we were taking weeks to create test scripts, or weeks to manually craft data, and even then, when we had taken so long to do it, the coverage was quite poor, that led to lots of defects later in the life cycle, or even in our production environment. We just couldn't afford to do that. And actually, focusing on continuous testing over the last 9 to 12 months has really given us the ability to deliver quickly across the whole life cycle, and therefore to go from doing a kind of semi-agile thing, where we did use the stories and we did a few of the agile ceremonies but we weren't really deploying any quicker into production, because our stakeholders were scared that we didn't have the same control that we had with more waterfall releases, and at times we didn't think we did ourselves. So we've done a lot of work on every aspect, especially from a testing point of view, on every activity, rather than just looking at automated tests: whether it's actually creating the tests in the first place, whether it's doing security testing earlier in the life cycle, performance testing earlier in the life cycle, et cetera. So yeah, it's been a really key thing for continuous testing to drive DevOps for us. >> Talk to me a little bit about your team. What are some of the shifts in terms of expectations that you're experiencing, and how does your team interact with the internal folks, from pipeline through life cycle? >> Yeah, we've done a lot of work on this. There's a thing, I think people call it the customer experience gap; it reminds me of a Dilbert cartoon, where we start with the requirements here and, almost like a Chinese-whispers effect, what we deliver is completely, completely different. So the testing team or the delivery team think they've done a great job, this is what it said in the acceptance criteria, but then our customers say, well, actually, that's not working, this isn't working, and there's this kind of gap. We had a great launch this year of Agile Requirements Designer, one of the Broadcom tools, and for the first time since I can remember in my time working within BT, I had customers saying to me, wow, we want more of this, we want more projects to have requirements design on them, because it allowed us to actually work with the business collaboratively.
I mean, we talk about collaboration, but how do you actually do that, and have something that both the business and technical people can understand? And we've actually been working with the business, using Agile Requirements Designer, to really look at what the requirements are, tease out requirements they hadn't even thought of, and make sure that we've got high levels of test coverage. And with what we actually deliver at the end of it, not only have we been able to generate tests more quickly, but we've got much higher test coverage, and we can also, more smartly, using the AI within the tool and some of the other pipeline tools, actually choose the right tests to run, while still following a risk-based testing approach. So that's been a great launch this year, but it's just the start of many things that we're doing. >> What I hear in that, Glyn, is a lot of positives that have come out of a very challenging situation. Talk to me about that; I like that perspective. This is a very challenging time for everybody in the world, but it sounds like, from a collaboration perspective, you're right, we talk about that a lot as critical with DevOps, and those challenges you were able to overcome pretty quickly. What other challenges did you face and figure out quickly enough to be able to pivot so fast? >> I mean, you talked about culture. You know, BT is like most companies: it's very siloed, and we're still trying to work to become closer as a company. So I think there are a lot of challenges around how you integrate with other tools, how you integrate with the various different technologies. In BT we have 58 different IT stacks. That's not systems, that's stacks, and all of those stacks can have hundreds of systems. We're driving a simplification program at the moment where we're trying to reduce that number to 14 stacks, and even then there'll be complexity behind the scenes that we will be challenged with more and more as we go forward. How do we actually hide that from our users? As an IT organization, how do we make ourselves leaner, so that even when we've still got some of that legacy, and we'll never fully get rid of it, that's the trade-off we have to make, we can deal with it and hide it from our users, as I say, and drive those programs so we can actually accelerate change and take that kind of waste and legacy cost out of our business? The other thing as well, and I'm sure telecoms is probably no different to insurance or finance, is that when you take the number of products that we have and you combine them, the permutations are tens and hundreds of thousands of products. So we as a business are trying to simplify. We're trying to do that in an agile way, and rather than trying to do agile in the purest way, to really work at pace and really deliver value. So I think what we're looking at more and more at the moment is being more value-focused. Before, we used to deliver changes into production because someone had a great idea, or it was a great idea nine or 12 months ago, but then we end up deploying it, and then we look at the usage of that product or that application, or whatever it is.
And it's not been used for six months. Because of the last 12 months, we certainly haven't got room for that kind of waste, or for not really understanding the value of the changes that we are making. So I think the most important thing at the moment is really taking that waste out. There's lots of focus on things like flow management: which bits of our process are actually taking too long? We've started on that journey, but we've got a hell of a long way to go, and that involves looking at every aspect of the software delivery cycle. >> Going from, what, 58 IT stacks down to 14, or whatever it's going to be, that simplifying sounds magical to everybody, but it's a big challenge. What are some of the core technology capabilities that you see as really essential for enabling that, with this new way that you're working? >> Yeah, I mean, I think we've started on a continuous testing journey, and I think that's just the start. That's really, as I say, looking at every aspect of what we do from a QA point of view. But it's also looking at, you know, we're starting to branch into more AIOps and, really, the full life cycle. And that's just a stepping stone on to, I think, autonomics, which is the way forward, right? All of this stuff that happens, monitoring, monitoring systems, what's happening in production, how do we feed that back? How do you get to a point where we think about a change and then suddenly it's in production, safely, or if it's not going in safely, it's automatically backing out? So it's a very, very long journey, but in a world where the pace is ever increasing, and the demands on the team, and the pressures at the moment where we're being asked to do things more efficiently and as lean as possible, we need to be thinking about every part of the process, and how we put the stepping stones in place to lead us to a more automated future. >> Do you feel that planned outcomes are starting to align with what's delivered, given this massive shift that you're experiencing? >> I think it's starting to, and I think as we look at more of a value-based approach and, as I say, at our processes with that kind of flow management, that will become ever more important. So I think it's starting to. People certainly realize that teams need to work together, and the kind of closeness between business and IT, especially as we go to more SaaS-based solutions, low-code solutions, means there's not such a gap anymore. Actually, some of our business partners expect to be much more tech-savvy. So I think this is what we have to appreciate: what is IT's role, how do we give the capabilities, become more of a center of excellence rather than actually doing the amount of work, and for me, from a testing point of view, the amount of testing? How do we automate that? How do we actually generate that instead of creating it? I think that's the challenge going forward.
>> As we look forward, what are some of the things that you would like to see implemented or deployed in the next, say, 6 to 12 months, as we hopefully round a corner with this pandemic? >> Yeah, I think, certainly for where we are as a company, from a QA perspective, there are certain bits that we do well. We've started creating continuous delivery and DevOps pipelines, but there are still manual aspects of that. So certainly, for me, I've challenged my team with saying, how do we do an automated journey? So that if I put a requirement in Jira, or wherever it is, I can then click a button and, with either zero touch or one touch, put that into production and have confidence that it has been done safely and that it works. And what happens if it doesn't work? That's what our concentration is on over the next few months. But it's also about decision making: how do we actually make those value judgements? And I think there are lots of things, DevOps, AIOps, all those aspects of business operations, and it's about having the information in one place to make those kinds of decisions. How does it all tie together? As I say, even with DevOps we've still got elements within my company where lots of different organizations are doing similar kinds of things but working in silos still. So I think AIOps comes more and more to the fore as we go to the cloud, and that's what we need too. We're still very early on in our cloud journey, so we need to make sure the technologies work with cloud as well as with our kind of legacy systems. But it's about bringing that all together and having a fully visible pipeline that everybody can see and make decisions against. >> You said the word confidence, which jumped out at me right away, because absolutely you've got to be able to have confidence in what your team is delivering and how it's impacting the business and those customers. Last question for you: how would you advise your peers in a similar situation to leverage technology and automation, for example DevOps, to be able to gain the confidence that they're making the right decisions for their business? >> Yeah, I mean, the approach that we've taken actually has not started with technology. We've actually taken human-centered design as a core principle of what we do within the IT part of BT. So by using human-centered design, that means we talk to our customers, we understand their pain points, we map out their current processes, and when we map out those processes, we also understand their aspirations as well. Where do they want to be in six months? Do they want to be more agile, or is there a part of their business that they want to run better? We then have to look at why that's not running well and see what solutions are out there. We've been lucky that, with our partnership with Broadcom within the PLA, a lot of the tools in the PLA have directly answered some of the business's problems.
But I think by having those conversations and actually engaging with the business, especially if the business holds the purse strings, which in some companies, including ours, they do, and by understanding their pain points and then saying, this is how we can solve your problem, we've tended to be much more successful than trying to impose something and saying, here's the technology, when they don't quite understand it or how it resonates with their problems. So I think the heart of it is really about looking at the data, looking at the processes, looking at where the waste is, and then actually looking at the right solutions. And as I say, continuous testing is a massive one for us. We've also got a good relationship with a partner looking at visual AI, and actually there's a common theme through that. AI is becoming more and more prevalent, and I know sometimes people argue the semantics of what is AI, whether it's true AI or not, but certainly AI and machine learning are becoming more and more prevalent in the way that we work, and they're allowing us to be much more effective, to be quicker in what we do, and to be more accurate, whether it's finding defects, running the right tests, or being able to anticipate problems before they happen in a production environment. >> Well, Glyn, thank you so much for giving us this sort of insight and outlook on DevOps, sharing the successes that you're having, taking those challenges and converting them into opportunities, and for giving folks who might be in your shoes, or maybe slightly behind, some advice. I'm sure they appreciate it. We appreciate your time. >> It's been an absolute pleasure, really. Thank you for inviting me; I've extremely enjoyed it. So thank you ever so much. >> Excellent, me too, I've learned a lot. For Glyn Martin, I'm Lisa Martin. You're watching theCUBE.
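Glyn's point about using the tooling to "choose the right tests" under a risk-based approach can be sketched very simply. The example below is illustrative only, not BT's or Broadcom's implementation: the file-to-component mapping, the defect history, and the suite catalogue are invented placeholders, and a real pipeline would get this ranking from its test tooling rather than a hand-maintained table.

```python
# Sketch: naive risk-based test selection. Changed files are mapped to
# components, and suites for the touched components are ordered by the
# component's historical defect count. All data here is illustrative.
from collections import defaultdict

COMPONENT_OF = {"billing/": "billing", "portal/": "portal", "api/": "api"}
DEFECT_HISTORY = {"billing": 12, "portal": 3, "api": 7}   # defects previously found
TEST_SUITES = {
    "billing": ["test_invoices", "test_rating"],
    "portal": ["test_login_ui", "test_dashboard"],
    "api": ["test_contracts", "test_rate_limits"],
}

def component_for(path):
    for prefix, component in COMPONENT_OF.items():
        if path.startswith(prefix):
            return component
    return None

def select_tests(changed_files):
    """Return test suites for touched components, riskiest component first."""
    risk = defaultdict(int)
    for path in changed_files:
        component = component_for(path)
        if component:
            risk[component] += DEFECT_HISTORY.get(component, 0) + 1  # +1 so any change counts
    ordered = []
    for component in sorted(risk, key=risk.get, reverse=True):
        ordered.extend(TEST_SUITES.get(component, []))
    return ordered

if __name__ == "__main__":
    print(select_tests(["billing/invoice.py", "api/limits.py"]))
    # billing suites come first: it has the highest historical defect count
```

The shape of the decision is the point: changed components plus defect history in, an ordered test list out, which is what lets a pipeline spend a short test window on the riskiest areas first.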

Published Date : Nov 20 2020


DevOps Virtual Forum 2020 | Broadcom


 

>>From around the globe. It's the queue with digital coverage of dev ops virtual forum brought to you by Broadcom. >>Hi, Lisa Martin here covering the Broadcom dev ops virtual forum. I'm very pleased to be joined today by a cube alumni, Jeffrey Hammond, the vice president and principal analyst serving CIO is at Forester. Jeffrey. Nice to talk with you today. >>Good morning. It's good to be here. Yeah. >>So a virtual forum, great opportunity to engage with our audiences so much has changed in the last it's an understatement, right? Or it's an overstated thing, but it's an obvious, so much has changed when we think of dev ops. One of the things that we think of is speed, you know, enabling organizations to be able to better serve customers or adapt to changing markets like we're in now, speaking of the need to adapt, talk to us about what you're seeing with respect to dev ops and agile in the age of COVID, what are things looking like? >>Yeah, I think that, um, for most organizations, we're in a, uh, a period of adjustment, uh, when we initially started, it was essentially a sprint, you know, you run as hard as you can for as fast as you can for as long as you can and you just kind of power through it. And, and that's actually what, um, the folks that get hub saw in may when they ran an analysis of how developers, uh, commit times and a level of work that they were committing and how they were working, uh, in the first couple of months of COVID was, was progressing. They found that developers, at least in the Pacific time zone were actually increasing their work volume, maybe because they didn't have two hour commutes or maybe because they were stuck away in their homes, but for whatever reason, they were doing more work. >>And it's almost like, you know, if you've ever run a marathon the first mile or two in the marathon, you feel great and you just want to run and you want to power through it and you want to go hard. And if you do that by the time you get to mile 18 or 19, you're going to be gassed. It's sucking for wind. Uh, and, and that's, I think where we're starting to hit. So as we start to, um, gear our development chops out for the reality that most of us won't be returning into an office until 2021 at the earliest and many organizations will, will be fundamentally changing, uh, their remote workforce, uh, policies. We have to make sure that the agile processes that we use and the dev ops processes and tools that we use to support these teams are essentially aligned to help developers run that marathon instead of just kind of power through. >>So, um, let me give you a couple of specifics for many organizations, they have been in an environment where they will, um, tolerate Rover remote work and what I would call remote work around the edges like developers can be remote, but product managers and, um, you know, essentially scrum masters and all the administrators that are running the, uh, uh, the SCM repositories and, and the dev ops pipelines are all in the office. And it's essentially centralized work. That's not, we are anymore. We're moving from remote workers at the edge to remote workers at the center of what we do. And so one of the implications of that is that, um, we have to think about all the activities that you need to do from a dev ops perspective or from an agile perspective, they have to be remote people. 
One of the things I found with some of the organizations I talked to early on was there were things that administrators had to do that required them to go into the office to reboot the SCM server as an example, or to make sure that the final approvals for production, uh, were made. >>And so the code could be moved into the production environment. And so it actually was a little bit difficult because they had to get specific approval from the HR organizations to actually be allowed to go into the office in some States. And so one of the, the results of that is that while we've traditionally said, you know, tools are important, but they're not as important as culture as structure as organization as process. I think we have to rethink that a little bit because to the extent that tools enable us to be more digitally organized and to hiring, you know, achieve higher levels of digitization in our processes and be able to support the idea of remote workers in the center. They're now on an equal footing with so many of the other levers, uh, that, that, um, uh, that organizations have at their disposal. Um, I'll give you another example for years. >>We've said that the key to success with agile at the team level is cross-functional co located teams that are working together physically co located. It's the easiest way to show agile success. We can't do that anymore. We can't be physically located at least for the foreseeable future. So, you know, how do you take the low hanging fruits of an agile transformation and apply it in, in, in, in the time of COVID? Well, I think what you have to do is that you have to look at what physical co-location has enabled in the past and understand that it's not so much the fact that we're together looking at each other across the table. It's the fact that we're able to get into a shared mindspace, uh, from, um, uh, from a measurement perspective, we can have shared purpose. We can engage in high bandwidth communications. It's the spiritual aspect of that physical co-location that is actually important. So one of the biggest things that organizations need to start to ask themselves is how do we achieve spiritual colocation with our agile teams? Because we don't have the, the ease of physical co-location available to us anymore? >>Well, the spiritual co-location is such an interesting kind of provocative phrase there, but something that probably was a challenge here, we are seven, eight months in for many organizations, as you say, going from, you know, physical workspaces, co-location being able to collaborate face to face to a, a light switch flip overnight. And this undefined period of time where all we were living with with was uncertainty, how does spiritual, what do you, when you talk about spiritual co-location in terms of collaboration and processes and technology help us unpack that, and how are you seeing organizations adopted? >>Yeah, it's, it's, um, it's a great question. And, and I think it goes to the very root of how organizations are trying to transform themselves to be more agile and to embrace dev ops. Um, if you go all the way back to the, to the original, uh, agile manifesto, you know, there were four principles that were espoused individuals and interactions over processes and tools. That's still important. Individuals and interactions are at the core of software development, processes and tools that support those individual and interact. 
Uh, those individuals in those interactions are more important than ever working software over comprehensive documentation. Working software is still more important, but when you are trying to onboard employees and they can't come into the office and they can't do the two day training session and kind of understand how things work and they can't just holler over the cube, uh, to ask a question, you may need to invest a little bit more in documentation to help that onboarding process be successful in a remote context, uh, customer collaboration over contract negotiation. >>Absolutely still important, but employee collaboration is equally as important if you want to be spiritually, spiritually co-located. And if you want to have a shared purpose and then, um, responding to change over following a plan. I think one of the things that's happened in a lot of organizations is we have focused so much of our dev ops effort around velocity getting faster. We need to run as fast as we can like that sprinter. Okay. You know, trying to just power through it as quickly as possible. But as we shift to, to the, to the marathon way of thinking, um, velocity is still important, but agility becomes even more important. So when you have to create an application in three weeks to do track and trace for your employees, agility is more important. Um, and then just flat out velocity. Um, and so changing some of the ways that we think about dev ops practices, um, is, is important to make sure that that agility is there for one thing, you have to defer decisions as far down the chain to the team level as possible. >>So those teams have to be empowered to make decisions because you can't have a program level meeting of six or seven teams and one large hall and say, here's the lay of the land. Here's what we're going to do here are our processes. And here are our guardrails. Those teams have to make decisions much more quickly that developers are actually developing code in smaller chunks of flow. They have to be able to take two hours here or 50 minutes there and do something useful. And so the tools that support us have to become tolerant of the reality of, of, of, of how we're working. So if they work in a way that it allows the team together to take as much autonomy as they can handle, um, to, uh, allow them to communicate in a way that, that, that delivers shared purpose and allows them to adapt and master new technologies, then they're in the zone in their spiritual, they'll get spiritually connected. I hope that makes sense. >>It does. I think we all could use some of that, but, you know, you talked about in the beginning and I've, I've talked to numerous companies during the pandemic on the cube about the productivity, or rather the number of hours of work has gone way up for many roles, you know, and, and, and times that they normally late at night on the weekends. So, but it's a cultural, it's a mind shift to your point about dev ops focused on velocity, sprints, sprints, sprints, and now we have to, so that cultural shift is not an easy one for developers. And even at this folks to flip so quickly, what have you seen in terms of the velocity at which businesses are able to get more of that balance between the velocity, the sprint and the agility? >>I think, I think at the core, this really comes down to management sensitivity. Um, when everybody was in the office, you could kind of see the mental health of development teams by, by watching how they work. 
You call it management by walking around, right? We can't do that anymore. Managers have to be more aware of what their teams are doing, because they're not going to see the developer doing a check-in at 9:00 PM on a Friday because that's what it took to meet the objectives. They're going to have to find new ways to measure engagement and also potential burnout. A friend of mine once had a great metric he called the parking lot metric: how full was the parking lot at nine, and how full was it at five? >> That gives you an indication of how engaged your developers are. What's the digital equivalent of the parking lot metric in the time of COVID? It's commit stats, commit rates, the churn rate in our code. We have this information; we may just not be collecting it. The next question becomes, how do we use it? Do we use it to say this team isn't delivering at the same level of productivity as that team, do we weaponize the data, or do we use it to identify impediments in the process? Why isn't a team working effectively? Is it because they have higher levels of family obligations and their kids are at home? Is it because they're working with hardware, and it's not easy to get that hardware into a home office because it's sitting in the lab at the corporate office? Or are they trying to communicate halfway around the world with an office that is also shut down, where the bandwidth just doesn't enable high-bandwidth communication? So from a DevOps perspective, managers have to get much more sensitive to the exhaust that the DevOps tools are throwing off, and to how they use it in a constructive way to prevent burnout. And if they're not already monitoring or measuring developer engagement, they really need to start, whether that's surveys on developer satisfaction, more regular social events where developers can get together, have a beer, and talk about what's going on in the project, or simply watching who checks in and who doesn't. They have to work harder, I think, than they ever have before.
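As a minimal illustration of that "digital parking lot" idea, the sketch below summarizes when commits land in a repository. It is only a sketch of the concept, not any vendor's tooling; the repository path, the cutoff hours, and the buckets are assumptions.

```python
# Minimal sketch of a "digital parking lot" metric: when are commits landing?
# Assumes a local git repository; the path and the cutoff hours are illustrative.
import subprocess
from collections import Counter
from datetime import datetime

def commit_times(repo_path="."):
    # %aI gives the author date in strict ISO 8601, one commit per line.
    out = subprocess.run(
        ["git", "-C", repo_path, "log", "--pretty=%aI"],
        capture_output=True, text=True, check=True
    ).stdout
    return [datetime.fromisoformat(line) for line in out.splitlines() if line]

def engagement_report(repo_path=".", late_hour=21, early_hour=6):
    times = commit_times(repo_path)
    buckets = Counter()
    for t in times:
        if t.weekday() >= 5:
            buckets["weekend"] += 1
        elif t.hour >= late_hour or t.hour < early_hour:
            buckets["late_night"] += 1
        else:
            buckets["working_hours"] += 1
    total = sum(buckets.values()) or 1
    return {k: round(v / total, 2) for k, v in buckets.items()}

if __name__ == "__main__":
    print(engagement_report("."))
```

A rising late-night or weekend share is a prompt for a conversation about blockers and workload, not a productivity score, in keeping with the point about not weaponizing the data.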
>> You mentioned burnout, and that's something I think we've all faced at varying levels in this time. There's a tension in the air regardless of where you are, and a real challenge, as you mentioned, with people having their kids as coworkers and fighting for bandwidth, because everyone is forced into this situation. I'd love to get your perspective on businesses that have done this adaptation well. What can you share in terms of real-world examples that might inspire the audience? >> I'll start with Stack Overflow. They recently published a piece in the journal of the ACM about some of the things they've discovered. First of all, there's a cultural philosophy: if one person is remote, everybody is remote, and you think that way from the executive level down. Then there are social spaces. One of the things they talk about doing is leaving a video conference room open at the team level all day long. Team members go on mute so that nobody is listening in on them all day, but if they have a question they can pop off mute and ask it, and if anybody else knows the answer it's like being in a virtual pod. Even here at Forrester, one of the things we've done is invest in social ceremonies. >> We've moved the team meetings on my analyst team from once every two weeks to weekly, and we've built in more time for socialization, just so we can see how we're all doing. Microsoft has also made some good information available on how they've managed things like the onboarding process. I think Amanda Silver over there mentioned in a presentation a couple of weeks ago that Microsoft has onboarded over 150,000 people since the start of COVID. If you don't have good remote onboarding processes, that's going to be a disaster. They're not all developers, but think about everything from how you run the interviewing process to how people get their badges and their equipment. Security is another issue they called out: typically IT's security responsibility for developer machines ends at the corporate desktop, >> but since we're increasingly using our own machines and our own hardware, security organizations have to extend their policies to cover employee devices, and that's caused them to scramble a little bit. So the examples are out there. It's not that we have to do everything completely differently; it's a lot of subtle changes that have to be made. I'll give you another example. One of the things we're seeing is that more and more organizations, to deal with the challenges around agility in delivering software, are embracing low-code tools. We see about 50% of firms using low-code tools right now, and we predict that will be 75% by the end of next year. So figure out how your DevOps processes support an organization that might be using Mendix or OutSystems or the Power Platform to build the front end of an application, like a track-and-trace application, really quickly, and then hook it up to your backend infrastructure. Does that happen completely outside the DevOps investments and the agile processes you've made, or do you adapt your organization? Are your teams now hybrid teams that include not just professional developers but also business users doing some development with a low-code tool? Those are the kinds of things we have to be willing to entertain in order to shift the focus a little more toward the agility side, I think. >> A lot of obstacles, but also a lot of opportunities for businesses to really learn, pay attention, pivot, and grow, and hopefully some good opportunities for developers and business folks to get better at what they're doing and learn to embrace spiritual co-location. Jeffrey, thank you so much for joining us on the program today. Very insightful conversation. >> My pleasure.
It's an important thing. Just remember, if you're going to run that marathon, break it into 26 ten-minute runs, take a walk break in between each, and you'll find that you'll get there. >> Digestible components, wise advice. Jeffrey Hammond, thank you so much for joining. For Jeffrey, I'm Lisa Martin. You're watching Broadcom's DevOps Virtual Forum. >> From around the globe, it's theCUBE, with digital coverage of DevOps Virtual Forum, brought to you by Broadcom. >> Continuing our conversations here at Broadcom's DevOps Virtual Forum, Lisa Martin here, pleased to welcome back to the program Serge Lucio, the general manager of the Enterprise Software Division at Broadcom. Hey, Serge, welcome. >> Thank you. Good to be here. >> So I know you were just participating in the BizOps Manifesto launch that happened recently. I just had the chance to talk with Jeffrey Hammond, and he unpacked this really interesting concept, and I wanted to get your thoughts on it: spiritual co-location as a necessity for BizOps to succeed in this unusual time in which we're living. What are your thoughts on spiritual co-location in terms of cultural change versus adoption of technologies? >> Yeah, it's quite interesting, right? When we think about the major impediments to a DevOps implementation, it's all about culture. Over the last 20 years we've been talking about silos and about getting these teams to align, and in many ways it's not so much about the teams aligning as about being in the same boat, fusing those teams around a common purpose, a common objective. So to me this is really about changing the culture so that people start to look at OKRs, at the key objectives that drive the entire team. What that means in practice is that we need to change a lot of behaviors. It's not about hierarchy, it's not about roles; it's about who can do what and when, and driving a bias toward action. It also means, and in these difficult times this becomes very hard, driving collaboration between these teams. So I think there's a significant role that tools in particular can play in providing this continuous feedback across teams to enable that spiritual co-location. >> Talking about culture: we're so used to discussing DevOps with respect to velocity, it's all about speed. But this time everything changed so quickly, going from physical spaces to everybody being remote. You can't replicate that digitally, but there are collaboration tools that can be essential to helping that cultural shift, right? >> Yeah. In 2020 we tend to talk about collaboration in a very mundane way: of course we can use Zoom, we can all get into the same room. But the point, when Jeff says spiritual co-location, is really that we all share the same objective. Do we all have the same view of, for instance, our pipeline? When we talk about DevOps we probably all start thinking about the continuous delivery pipeline that drives the automation and orchestration across teams. But thinking about a pipeline, at the end of the day it's all about the mean time to feedback for these teams. If I'm a developer and I commit code, how long does it take for that code to be processed through the pipeline before I can get feedback? If I'm a finance person who is funding a product or a project, what is my mean time to feedback?
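That "mean time to feedback" is straightforward to compute once two timestamps are logged per change: when it was committed and when the pipeline first reported a result back. A hedged sketch follows; the event records, team names, and units are invented for illustration.

```python
# Sketch: mean time to feedback per team, from commit to first pipeline verdict.
# The event records and team names here are invented for illustration.
from datetime import datetime
from statistics import mean
from collections import defaultdict

events = [
    # (team, commit_time, first_feedback_time)
    ("payments", "2020-11-02T09:14:00", "2020-11-02T09:41:00"),
    ("payments", "2020-11-02T13:05:00", "2020-11-02T15:20:00"),
    ("mobile",   "2020-11-02T10:00:00", "2020-11-03T08:30:00"),
]

def mean_time_to_feedback(records):
    per_team = defaultdict(list)
    for team, committed, fed_back in records:
        delta = datetime.fromisoformat(fed_back) - datetime.fromisoformat(committed)
        per_team[team].append(delta.total_seconds() / 3600.0)  # hours
    return {team: round(mean(vals), 1) for team, vals in per_team.items()}

# Prints hours of mean time to feedback per team.
print(mean_time_to_feedback(events))
```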
When you talk about dev ops, probably we all started thinking about this continuous delivery pipeline that basically drives the automation, the orchestration across the team, but just thinking about a pipeline, right, at the end of the day, it's all about what is the meantime to beat back to these teams. If I'm a developer and a commit code, I don't, does it take where, you know, that code to be processed through pipeline pushy? Can I get feedback if I am a finance person who is funding a product or a project, what is my meantime to beat back? >>And so a lot of, kind of a, when we think about the pipeline, I think what's been really inspiring to me in the last year or so is that there is much more of an adoption of the Dora metrics. There is way more of a focus around value stream management. And to me, this is really when we talk about collaboration, it's really a balance. How do you provide the feedback to the different stakeholders across the life cycle in a very timely matter? And that's what we would need to get to in terms of kind of this, this notion of collaboration. It's not so much about people being in the same physical space. It's about, you know, when I checked in code, you know, to do I guess the system to automatically identify what I'm going to break. If I'm about to release some allegation, how can the system help me reduce my change pillar rates? Because it's, it's able to predict that some issue was introduced in the outpatient or work product. Um, so I think there's, there's a great role of technology and AI candidate Lynch to, to actually provide that new level of collaboration. >>So we'll get to AI in a second, but I'm curious, what are some of the, of the metrics you think that really matter right now is organizations are still in some form of transformation to this new almost 100% remote workforce. >>So I'll just say first, I'm not a big fan of metrics. Um, and the reason being that, you know, you can look at a change killer rate, right, or a lead time or cycle time. And those are, those are interesting metrics, right? The trend on metric is absolutely critical, but what's more important is you get to the root cause what is taught to you lean to that metric to degrade or improve or time. And so I'm much more interested and we, you know, fruit for Broadcom. Are we more interested in understanding what are the patterns that contribute to this? So I'll give you a very mundane example. You know, we know that cycle time is heavily influenced by, um, organizational boundaries. So, you know, we talk a lot about silos, but, uh, we we've worked with many of our customers doing value stream mapping. And oftentimes what you see is that really the boundaries of your organization creates a lot of idle time, right? So to me, it's less about the metrics. I think the door metrics are a pretty, you know, valid set metrics, but what's way more important is to understand what are the antiperspirants, what are the things that we can detect through the data that actually are affecting those metrics. And, uh, I mean, over the last 10, 20 years, we've learned a lot about kind of what are, what are the antiperspirants within our large enterprise customers. And there are plenty of them. >>What are some of the things that you're seeing now with respect to patterns that have developed over the last seven to eight months? 
>> What are some of the things you're seeing now with respect to patterns that have developed over the last seven to eight months? >> I think the two areas that are clearly evolving very quickly are these. On the front end of the lifecycle, DevOps is more and more embracing value stream management and value stream mapping. What's interesting is that in many ways the product is becoming the new silo. The notion of a product is difficult to define by itself, and people are starting to recognize that a value stream is not its own little island; in reality, when I define a product, that product often has dependencies on other products, and in fact you're looking at a network of value streams. So even there, there's a new set of anti-patterns, where products are defined as a set of OKRs, they have interdependencies, and you end up with a new set of silos. On the operations end, there's the whole movement toward SRE, where I think there's a cultural clash. The DevOps side is very much embracing the notion of OKRs, value stream mapping, and value stream management. >> On the other end, you have the IT operations teams, who still think in terms of business services. They think about configuration items; they think about infrastructure. So it's not uncommon to see operations teams still managing tens or even hundreds of thousands of business services. There's a boundary there: while SRE is being put in place and there's lots of thinking about what kinds of metrics can be defined, I think, going back to culture, there's a lot of cultural evolution still required on the operations side. >> And that's a hard thing. Cultural transformation in any industry, pandemic or not, is a challenge. You talked about AI and automation a few minutes ago. How do you think those technologies can be leveraged by DevOps leaders to influence their success and their ability to collaborate, maybe see eye to eye with the SREs? >> Even for myself, as the leader of a 1,500-person organization, there are a number of things I don't see on a daily basis. The technologies we have at our disposal today around AI are able to mine a lot of data and expose a lot of issues that, as leaders, we may not be aware of. Some of these are pretty easy to understand. We all think we're agile, and yet when you start to look at, for instance, what the work in progress actually is during a sprint, when you start to analyze the data, you can detect that maybe the teams are over-committed, that there is too much work in progress.
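That over-commitment signal can be checked with a few lines against sprint data. The board fields, the numbers, and the 1.5x threshold below are assumptions chosen only to illustrate the idea.

```python
# Sketch: flag sprints where work in progress ran well ahead of completed work.
# "wip_peak" is the peak number of items in progress during the sprint;
# the 1.5x threshold is an arbitrary illustration, not an industry standard.
sprints = [
    {"team": "billing", "sprint": 41, "wip_peak": 14, "completed": 6},
    {"team": "billing", "sprint": 42, "wip_peak": 9,  "completed": 8},
    {"team": "portal",  "sprint": 41, "wip_peak": 22, "completed": 7},
]

def overcommitted(records, ratio=1.5):
    flagged = []
    for s in records:
        if s["completed"] and s["wip_peak"] / s["completed"] > ratio:
            flagged.append((s["team"], s["sprint"],
                            round(s["wip_peak"] / s["completed"], 1)))
    return flagged

print(overcommitted(sprints))  # [('billing', 41, 2.3), ('portal', 41, 3.1)]
```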
You can also start to identify interdependencies, either from a technology or from a people point of view, that were hidden, and you can start to see that maybe the change failure rate is degrading. So I believe there's a fundamental role for the tools to play in exposing these anti-patterns, making these things visible to the teams, and even making it possible to compare teams. One of the things that's amazing is that we now have access to tons of data, not just from a given customer but across a large number of customers, and so we can start to compare how all of these teams operate, what's working and what's not. >> Thoughts on AI and automation as a facilitator of spiritual co-location? >> Yeah, absolutely. The problem we all face is the unknown: the velocity, volume, and variety of the data. Every day we don't necessarily appreciate the full impact of our actions, and AI can act as a safety net that helps us understand that impact. The ability to be informed in a timely manner, to interact with people on the basis of data, and to collaborate on that data is a very powerful enabler in that respect. I've seen countless times, for instance at the SRE boundary, that surfacing the quality attributes of an incoming release, exposing that to an operations person and an SRE, and enabling that collaborative dialogue through data is a very, very powerful tool. >> Do you have any recommendations for how teams, the SRE folks and the DevOps folks, can use AI and automation in the right ways to be successful, rather than in ways that turn out to be unproductive? >> To me, part of the question is that when we talk about data, there are different ways to use it. You can do a lot of analytics, predictive analytics. There's a tendency to look at a specific KPI, say an availability KPI or a change failure rate, do a regression analysis, and project what's going to happen in the future. To me that's a bad approach, and the reason I fundamentally think so is that the way we develop software is a non-linear system; software development is not linear in nature. So focusing purely on projecting the metrics is probably the worst approach. On the other hand, if you start to understand at a more granular level which things are actually contributing to a metric, for instance that whenever you change a specific part of the application it translates into production issues, you get somewhere. We actually have a customer who identified that over 50% of their unplanned outages were related to specific components in their architecture; whenever those components were changed, it resulted in unplanned outages. So if you can establish causality, cause and effect, between data across the lifecycle, that, I think, is the right way to use AI. For me it's much more of a classification problem, understanding which classes of problems exist and what they affect, as opposed to predictive analytics, which I don't think is as powerful.
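The kind of association described in that outage example can be surfaced with a simple count over change and incident records before reaching for anything fancier. The sketch below is purely illustrative; the component names and records are invented.

```python
# Sketch: which components are most often implicated in unplanned outages?
# This is a descriptive classification of past incidents, not a predictive model.
from collections import Counter

incidents = [
    # each outage lists the components changed in the release that preceded it
    {"id": "INC-101", "components_changed": ["rating-engine", "web-ui"]},
    {"id": "INC-114", "components_changed": ["rating-engine"]},
    {"id": "INC-131", "components_changed": ["billing-adapter", "rating-engine"]},
    {"id": "INC-142", "components_changed": ["web-ui"]},
]

def implicated_components(records):
    counts = Counter()
    for inc in records:
        counts.update(inc["components_changed"])
    total = len(records)
    # share of outages in which a change to each component appears
    return {comp: round(n / total, 2) for comp, n in counts.most_common()}

print(implicated_components(incidents))
# {'rating-engine': 0.75, 'web-ui': 0.5, 'billing-adapter': 0.25}
```

A change gate could then request extra review or testing whenever one of the frequently implicated components appears in a release.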
>> I mentioned at the beginning of our conversation that you just came off the BizOps Manifesto, and you're one of its authors. I want to get your thoughts on DevOps and BizOps overlapping and complementing each other. From the BizOps perspective, what does it mean for the future of DevOps? >> It's interesting. If you think about DevOps, there's no founding document. We can refer to the Phoenix Project, and there's a set of documents that have been written, but in many ways there's no clear definition of what DevOps is. If you go to the DevOps Institute today, you'll see specific trainings, for instance, on value stream management and on SRE. So in many ways the problem we have as an industry is that there are sets of practices across agile, DevOps, SRE, value stream management, and ITIL, and we're all basically talking about the same things, essentially accelerating the mean time to feedback, yet we don't have a common framework to talk about it. The other key thing is that we had to wait for Gene Kim's most recent work to really start to get into the business aspect, >> for value stream mapping to start to emerge, and for us as an industry to start thinking about our connection with the business. What's our purpose? Ultimately it's all about driving business outcomes. And so to me BizOps is really about putting a lens on the critical point that it's not business and IT; we in fact need to fuse business and IT, and IT needs to transform itself to recognize that it is a value generator, not a cost center. So the relationship, to me, is that BizOps provides the overall framework that sets the context for why IT exists, and the core values and principles it needs to embrace to change from a cost center to a value center. Then we can start to use that as a way to unify the core practices, whether it's agile, DevOps, value stream mapping, or SRE. So over time my hope is that we start to align a lot of our practices, language, and cultural elements. >> Last question, Serge, in the few seconds we have left. Talking about this relationship between BizOps and DevOps, as DevOps evolves, what should our audience keep their eyes on in the next six to twelve months? >> To me the key challenge for the industry is this: we're seeing a very rapid shift toward product-centric organizations, and what we don't want to do is recreate new, hard silos. That's one of the big changes we need to be really careful about, because ultimately it is about culture, not about how we segment the work; it's only through culture that we can overcome silos. So back to Jeffrey's concept of spiritual co-location, I think it's really about that: focusing on the business outcomes, aligning on them, driving engagement across the teams, and not creating a new set of silos that, instead of being vertical, are horizontal product silos. >> Great advice, Serge: looking at culture as a way of addressing and reducing those challenges. We thank you so much for sharing your insights and your time at today's DevOps Virtual Forum.
>> Thank you. Thanks for your time. >> We'll be right back. >> From around the globe, it's theCUBE, with digital coverage of DevOps Virtual Forum, brought to you by Broadcom. >> Welcome to Broadcom's DevOps Virtual Forum. I'm Lisa Martin, and I'm joined by another Martin, very socially distanced from me and coming all the way from Birmingham, England: Glynn Martin, the head of QA transformation at BT. Glynn, it's great to have you on the program. >> Thank you, Lisa. I'm looking forward to it. >> As we said before we went live, two Martins for the price of one in one segment, so this is going to be an interesting one. What we're going to do is have Glynn give us a really deep, inside-out view of DevOps from an evolution perspective. So Glynn, let's start. Transformation is at the heart of what you do, and it's obviously been a very transformative year. How have the events of this year affected the transformation that you are still responsible for driving? >> Yeah, thank you, Lisa. It has been a difficult year. Although BT, as a global telecommunications company, is relatively resilient as an industry, it has still been affected by COVID and has its challenges, and if anything it's actually caused us to accelerate our transformation journey. We had to do some great things during this time, for example in the UK giving our emergency and health workers unlimited data and supporting vulnerable people, and that meant we had to deliver changes quickly. But what we want is to be able to deliver those kinds of changes quickly and sustainably for everything we do, not just when there's an emergency. So we were already on the journey to agile, but it's ever more important now that we're able to do that kind of work more quickly. >> And it has to work, because the implications of it not working can be terrible: we've been supporting testing centers and new hospitals to treat COVID patients, so we need to get it right. The coverage of what we do, the quality of what we do, and how quickly we do it have really taken on a new scale in what was already a very competitive market within the telco industry in the UK. What I would say is that we're under pressure to deliver more value, but we also have cost challenges; we obviously have to deal with the fact that COVID-19 has hit most industries' revenues and profits. So we've got this paradox of having less cost available but having to deliver more value, more quickly, and to higher quality. So certainly the finances are on our minds, and that's why we need flexible cost models that allow us to grow, where we earn that growth by showing that we're delivering value, especially in times when companies face financial challenges. >> So one of the things I want to ask you about, again looking at DevOps from the inside out and the evolution you've seen: you talked about the speed of things really accelerating in the last nine months or so. When we think DevOps, we think speed. But one of the things I'd love to get your perspective on, and something we've talked about in a number of the segments for this event, is cultural change.
What are some of the things you've seen there as being necessary to get right, as you said, but get right quickly, to support essential businesses and essential workers? How have you seen that cultural shift? >> Yeah. Before, test teams thought of themselves as the last part of the software delivery cycle. Now our customers expect quality, and to deliver for them, quality has to be ingrained throughout the lifecycle. There are lots of buzzwords, like shift left. How do we do shift-left testing? For me, it's about instilling quality and providing shared capabilities throughout the lifecycle that drive automation and drive improvements. I always say that you're only as good as your lowest common denominator, and one thing we found on our DevOps journey was that we would be trying to do certain things quickly, with automated builds and automated tests, but if we were taking weeks to create test scripts, or weeks to manually craft data, and even then the coverage was quite poor, that led to lots of defects later in the lifecycle, or even in our production environment. We just couldn't afford that. >> Actually, focusing on continuous testing over the last nine to twelve months has really given us the ability to deliver quickly across the whole lifecycle, and therefore to go beyond a kind of semi-agile approach, where we wrote the user stories and did a few of the agile ceremonies but weren't really deploying any quicker into production, because our stakeholders were scared we didn't have the same control we had with more waterfall-style releases. So we've done a lot of work on every aspect, especially from a testing point of view, on every activity rather than just automated tests: actually creating the tests in the first place, doing security testing earlier in the lifecycle, doing performance testing earlier in the lifecycle, and so on. So continuous testing has been a real key for us in driving DevOps.
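The "weeks to manually craft data" pain point is usually the first thing teams automate; dedicated test data management tools exist for this, but purely as an illustration of the idea, a tiny generator of synthetic, non-production records might look like the sketch below. The field names, value pools, and seed are made up.

```python
# Sketch: generate synthetic, non-production test data instead of hand-crafting it.
# Field names, value pools, and the seed are illustrative only.
import csv
import random

random.seed(7)  # deterministic output so test runs are repeatable

PLANS = ["mobile-5g", "broadband-fttp", "landline", "tv-bundle"]
STATUSES = ["active", "suspended", "ceased"]

def synthetic_customers(n):
    for i in range(n):
        yield {
            "customer_id": f"CUST{i:06d}",
            "plan": random.choice(PLANS),
            "status": random.choice(STATUSES),
            "monthly_spend_gbp": round(random.uniform(10, 120), 2),
        }

with open("test_customers.csv", "w", newline="") as fh:
    writer = csv.DictWriter(
        fh, fieldnames=["customer_id", "plan", "status", "monthly_spend_gbp"])
    writer.writeheader()
    writer.writerows(synthetic_customers(500))
```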
I mean, we talk about collaboration, but how do we actually, you know, do that and have something that both the business and technical people can understand. And we've actually been working with the business , using agile requirements designer to really look at what the requirements are, tease out requirements we hadn't even thought of and making sure that we've got high levels of test coverage. And what we actually deliver at the end of it, not only have we been able to generate tests more quickly, but we've got much higher test coverage and also can more smartly, using the kind of AI within the tool and then some of the other kinds of pipeline tools, actually deliver to choose the right tasks, and actually doing a risk based testing approach. So that's been a great launch this year, but just the start of many kinds of things that we're doing >>Well, what I hear in that, Glynn is a lot of positives that have come out of a very challenging situation. Talk to me about it. And I liked that perspective. This is a very challenging time for everybody in the world, but it sounds like from a collaboration perspective you're right, we talk about that a lot critical with devops. But those challenges there, you guys were able to overcome those pretty quickly. What other challenges did you face and figure out quickly enough to be able to pivot so fast? >>I mean, you talked about culture. You know, BT is like most companies  So it's very siloed. You know we're still trying to work to become closer as a company. So I think there's a lot of challenges around how would you integrate with other tools? How would you integrate with the various different technologies. And BT, we have 58 different IT stacks. That's not systems, that's stacks, all of those stacks can have hundreds of systems. And we're trying to, we've got a drive at the moment, a simplified program where we're trying to you know, reduce that number to 14 stacks. And even then there'll be complexity behind the scenes that we will be challenged more and more as we go forward. How do we actually highlight that to our users? And as an it organization, how do we make ourselves leaner, so that even when we've still got some of that legacy, and we'll never fully get rid of it and that's the kind of trade off that we have to make, how do we actually deal with that and hide that from our users and drive those programs, so we can, as I say, accelerate change,  reduce that kind of waste and that kind of legacy costs out of our business. You know, the other thing as well, I'm sure telecoms is probably no different to insurance or finance. When you take the number of products that we do, and then you combine them, the permutations are tens and hundreds of thousands of products. So we, as a business are trying to simplify, we are trying to do that in an agile way. >>And haven't tried to do agile in the proper way and really actually work at pace, really deliver value. So I think what we're looking more and more at the moment is actually  more value focused. Before we used to deliver changes sometimes into production. Someone had a great idea, or it was a great idea nine months ago or 12 months ago, but actually then we ended up deploying it and then we'd look at the users, the usage of that product or that application or whatever it is, and it's not being used for six months. So we haven't got, you know, the cost of the last 12 months. 
>> What I hear in that, Glynn, is a lot of positives that have come out of a very challenging situation, and I like that perspective. This is a very challenging time for everybody in the world, but it sounds like, from a collaboration perspective, which we've said is critical to DevOps, you were able to overcome those challenges pretty quickly. What other challenges did you face and figure out quickly enough to pivot so fast? >> You talked about culture. BT is like most companies: it's very siloed, and we're still trying to become closer as a company. So there are a lot of challenges around how you integrate with other tools and with the various different technologies. In BT we have 58 different IT stacks. That's not systems, that's stacks, and each of those stacks can have hundreds of systems. We've got a drive at the moment, a simplification program, where we're trying to reduce that number to 14 stacks, and even then there'll be complexity behind the scenes that will challenge us more and more as we go forward. How do we hide that from our users? As an IT organization, how do we make ourselves leaner, so that even where we've still got some of that legacy, and we'll never fully get rid of it, that's the trade-off we have to make, we can deal with it, hide it from our users, and drive those programs so we can accelerate change and take that waste and legacy cost out of our business? The other thing is that telecoms is probably no different from insurance or finance: when you take the number of products we have and then combine them, the permutations run into the tens and hundreds of thousands. So we as a business are trying to simplify, and we are trying to do that in an agile way. >> Before, we hadn't really tried to do agile in the proper way, actually working at pace and really delivering value. So what we're looking at more and more at the moment is being value focused. We used to deliver changes into production because someone had a great idea, or it was a great idea nine or twelve months ago, but then we'd deploy it, look at the usage of that product or application, and find it hadn't been used for six months. We certainly haven't got room now for that kind of waste, or for not really understanding the value of the changes we're making. So I think that's the most important thing at the moment: really taking that waste out. There's a lot of focus on things like flow management, on which parts of our process are taking too long. We've started on that journey, but we've got a hell of a long way to go, and it involves looking at every aspect of the software delivery cycle. >> Going from 58 IT stacks down to 14, or whatever it ends up being, simplifying sounds magical to everybody, but it's a big challenge. What are some of the core technology capabilities you see as essential for enabling that, with this new way that you're working? >> We've started on a continuous testing journey, and I think that's just the start. As I say, from a QA point of view we're looking at every aspect of what we do, and we're also starting to branch into things like AIOps and, really, the full lifecycle. That's just a stepping stone; I think autonomics is the way forward: monitoring, watching what's happening in production, feeding that back, and getting to a point where we think about a change and then suddenly it's in production, safely, or, if it isn't going in safely, it's automatically backing out. It's a very long journey, but in a world where the pace is ever increasing and the demands on the team keep growing, with the pressure at the moment to do things more efficiently and as lean as possible, we need to think about every part of the process and how we put the stepping stones in place to lead us to a more automated future. >> Do you feel that planned outcomes are starting to align with what's delivered, given this massive shift you're experiencing? >> I think it's starting to. As I say, as we take more of a value-based approach, and as we bring in things like flow management, that will become ever more important. People are certainly realizing that teams need to work together, and the chasm between business and IT is closing, especially as we go to more SaaS-based and low-code solutions; there's not such a gap anymore, and some of our business partners are expected to be much more tech-savvy. So we have to appreciate what IT's role is, how we provide capabilities and become more of a center of excellence rather than doing mounds and mounds of work, and for me, from a testing point of view, mounds and mounds of testing. How do we automate that? How do we generate tests instead of hand-crafting them? That's the challenge going forward. >> As we look forward, what are some of the things you'd like to see implemented or deployed in the next, say, six to twelve months, as we hopefully round a corner with this pandemic?
>> Certainly for where we are as a company from a QA perspective, let's start with the bits we do well. We've started creating continuous delivery and DevOps pipelines, but there are still manual aspects of that, so I've challenged my team with the question: how do we make it a fully automated journey? If I put a requirement in Jira or Rally or wherever it is, why can't I then click a button and, with either zero-touch or one-touch, put that into production and have confidence that it has been done safely, that it works, and that we know what happens if it doesn't work? That's what our concentration is on over the next few months. But it's also about decision-making: how do you actually make those value judgments? Across DevOps, AIOps, and the ValueOps aspects of business operations, I think it's about having the information in one place to make those kinds of decisions and tie it all together. Even with DevOps, we've still got parts of my company where lots of different organizations are doing similar kinds of things but working in silos. AIOps will come more and more to the fore as we go to cloud, and we're still very early on in our cloud journey, so we need to make sure the technologies work with cloud as well as with legacy systems. It's about bringing that all together and having a full, visible pipeline that everybody can see and make decisions on.
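The "what happens if it doesn't work" half of a zero-touch release is typically a canary gate: ship to a small slice of users, compare its health against a baseline, and back out automatically if it degrades. A minimal sketch follows; the metric names, numbers, and thresholds are invented for illustration, not recommended values.

```python
# Sketch: decide whether to promote or roll back a canary release.
# The metric names and thresholds are illustrative, not recommended values.
def canary_verdict(baseline, canary,
                   max_error_rate_increase=0.01,   # +1 percentage point
                   max_latency_ratio=1.2):         # no worse than 20% slower
    error_delta = canary["error_rate"] - baseline["error_rate"]
    latency_ratio = canary["p95_latency_ms"] / baseline["p95_latency_ms"]
    if error_delta > max_error_rate_increase or latency_ratio > max_latency_ratio:
        return "rollback"
    return "promote"

baseline = {"error_rate": 0.004, "p95_latency_ms": 180}
canary   = {"error_rate": 0.021, "p95_latency_ms": 195}

print(canary_verdict(baseline, canary))  # "rollback": error rate jumped ~1.7 points
```

In practice a gate like this would run repeatedly while traffic ramps up, and the thresholds would come from the service's error budget rather than fixed numbers.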
>> You said the word confidence, which jumped out at me right away, because you absolutely have to be able to have confidence in what your team is delivering and how it's impacting the business and its customers. Last question for you: how would you advise your peers in a similar situation to leverage technology and automation, DevOps for example, to gain the confidence that they're making the right decisions for their business? >> The approach we've taken actually doesn't start with technology. We've taken human-centered design as a core principle of what we do within the IT part of BT. Using human-centered design means we talk to our customers, understand their pain points, and map out their current processes, and in doing that we also understand their aspirations: where do they want to be in six months? Do they want to be more agile, or is this a part of their business where they want to do something better? We then look at why things aren't running well and see what solutions are out there. We've been lucky that, through our partnership with Broadcom, many of the tools in the PLA have directly answered some of the business's problems. But it's by having those conversations and genuinely engaging with the business, especially if the business holds the purse strings, which in some companies, ours included, they do, that you understand their pain points and can then say, this is how we can solve your problem. >> We've found that to be much more successful than trying to impose something and saying, here's the technology, when they don't quite understand it or how it relates to their problems. So that's the heart of it: really looking at the data, looking at the processes, looking at where the waste is, and then looking at the right solutions. Continuous testing is massive for us, and we've also got a good relationship with Applitools, looking at visual AI, and there's a common theme through all of that. AI is becoming more and more prevalent, and I know people sometimes debate the semantics of whether something is true AI or not, but AI and machine learning are certainly becoming more prevalent in the way we work, allowing us to be more effective, quicker in what we do, and more accurate, whether it's finding defects, running the right tests, or anticipating problems before they happen in a production environment. >> Well, thank you so much for giving us this inside-out look at DevOps, sharing the successes you're having, turning those challenges into opportunities, and giving advice to folks who might be in your shoes, or maybe slightly behind; they'll appreciate it. We appreciate your time. >> It's been an absolute pleasure, really. Thank you for inviting me. I've thoroughly enjoyed it, so thank you ever so much. >> Excellent, me too. I've learned a lot. For Glynn Martin, I'm Lisa Martin. You're watching theCUBE. >> Driving revenue today means getting better, more valuable software features into the hands of your customers. If you don't do it quickly, your competitors will, but going faster without quality creates risks that can damage your brand, destroy customer loyalty, and cost millions to fix. DevOps from Broadcom is a complete solution for balancing speed and risk, allowing you to accelerate the flow of value while minimizing the risk and severity of critical issues. With Broadcom, quality becomes integrated across the entire DevOps pipeline, from planning to production. Actionable insights, including our unique readiness score, provide a 360-degree view of software quality, giving you visibility into potential issues before they become disasters. DevOps leaders can manage these risks with tools like canary deployments, tested on a small subset of users, or immediate rollback to limit the impact of defects for subsequent cycles. DevOps from Broadcom makes innovation improvement easier, with integrated planning and continuous testing tools that accelerate the flow of value. Product requirements are used to automatically generate tests to ensure complete quality coverage, and tests are easily updated. As requirements change, developers can perform unit testing without ever leaving their preferred environment, improving efficiency and productivity. For the ultimate in shift-left testing, the platform also integrates virtual services and test data on demand, eliminating two common roadblocks to fast and complete continuous testing. When software is ready for the CI/CD pipeline, only DevOps from Broadcom uses AI to prioritize the most critical and relevant tests, dramatically improving feedback speed with no decrease in quality. This release is ready to go wherever you are in your DevOps journey.
Broadcom helps maximize innovation velocity while managing risk, so you can deploy ideas into production faster and release with more confidence. >> From around the globe, it's theCUBE, with digital coverage of DevOps Virtual Forum, brought to you by Broadcom. >> Hi guys, welcome back. We've discussed the current state and the near-future state of DevOps, and how it's going to evolve, from three unique perspectives. In this last segment we're going to open up the floor and see if we can come to a shared understanding of where DevOps needs to go in order to be successful next year. Our guests today you've seen before: Jeffrey Hammond, VP and principal analyst serving CIOs at Forrester; Serge Lucio, GM of Broadcom's Enterprise Software Division; and Glynn Martin, head of QA transformation at BT. Guys, welcome back. Great to have all three of you together. >> Good to be here. >> All right, and we're all very socially distanced, as we've talked about before. Great to have this conversation. Let's start with one of the topics we kicked off the forum with: spiritual co-location. It's a really interesting topic that we've uncovered, but how much of the challenge is truly cultural, and what can we solve through technology? Jeff, we'll start with you, then Serge, then Glynn. Jeff, take it away. >> I think fundamentally you can have all the technology in the world, and if you don't make the right investments in the cultural practices in your development organization, you still won't be effective. Almost ten years ago I wrote a piece based on research into what made high-performance software delivery teams high performance, and one of the things that came out of it was that these teams have a high level of autonomy. That's one of the things you see coming out of the agile manifesto as well. Take that to today, where developers are on their own in their own home offices: if you've got teams where the team itself has a high level of autonomy, and they know how to work, they can make decisions and move forward. They're not waiting for management to tell them what to do. >> What we have seen is that organizations that embraced autonomy, got their teams in the right place, and made sure those teams had the information they needed to make the right decisions have been able to operate pretty well even as they've been remote. The challenge turned out to be things like how we actually push the software we've created into production, not whether we're writing the right software. And that's why I think the term spiritual co-location is so important: even though we may be physically distant, we're on the same plane; we're connected by a shared purpose. Serge and I worked together a long, long time ago, almost 15 or 16 years since we were at the same place, and yet I'd say there's probably still a certain level of spiritual co-location between us, because of the shared purposes we've had in the past and what we've seen in the industry. That's a really powerful thing to build on.
So what role do tools play in that? To the extent that tools make information available to build shared purpose on, to the extent that they enable communication so we can build that spiritual co-location, and to the extent that they reinforce the culture we want to put in place, they can be incredibly valuable, especially when we don't have the luxury of physical co-location. >> That makes sense, and I should have introduced this last segment by saying we're all spiritually co-located; Serge, clearly you're still spiritually co-located with Jeff. Talk to me about your thoughts on spiritual co-location, the cultural impact, and how technology can move it forward. >> I'm going to sound very similar to Jeff in that respect. I think it starts with a shared purpose and with understanding how individuals and teams contribute to a business outcome. What is our shared goal or shared vision? What is it we're trying to achieve collectively, and how do we keep everything aligned to that? So it really starts there. Now, the big challenge is that over the last 20 years, especially in large organizations, there's been specialization of roles and functions, and we've all started to measure what we do on a daily basis using metrics that are oftentimes completely disconnected from the business outcome or purpose. We've reverted to, okay, what is my velocity? What is my cycle time? >> Where we really should be focused as an industry is on providing a lens for these different stakeholders to look at what they're doing in the context of those business outcomes. Probably one of my favorite experiences was to witness, at a large financial institution, development and operations staring at the same data: incoming changes, test execution results, code coverage, vulnerabilities and the like, all linked at the release level. When you start to put these things in context and present them in a way that different stakeholders can view through their own lens, they can start to communicate and understand how they jointly contribute to that common view or objective. >> And Glynn, we talked a lot about transformation with you last time. What are your thoughts on spiritual co-location, the cultural part, and the technology impact? >> I agree with Jeffrey that people and culture are the most important thing. That's why it's really important, when you're transforming, to have partners who have the same vision as you, who you can work with, and who have the same end goal in mind, and I've certainly found that with our continuing relationship with Broadcom. What tools also do, though, is accelerate what you're doing and drive consistency.
We've seen within Simplify, BT's flagship transformation program, where we're trying to, as the name says, simplify the number of systems, stacks, and products we have, that at the moment we've got different value streams within that program that have organizational silos, that are reinventing the wheel, and that are still doing things manually. >> In order to bring that consistency, we need the right tools: tools at an enterprise grade that can flex to work within BT, which is such a complex and varied environment depending on which area of BT you're in, whether it's consumer, mobile, or large global and government organizations. We've found that we need tools that drive that consistency but also flex to greenfield and brownfield technologies. So it's really important, for a number of reasons, that you have the right partner, to drive the right culture, with the same vision, but also with the toolsets to help you accelerate. Tools can't do it on their own, but they can help accelerate what you're trying to do. >> A really good example is where we're trying to shift left, which is probably a bit of a buzz phrase in the testing world at the moment. I could talk about Continuous Delivery Director, which is one of the Broadcom tools and has many different features, but very simply, on its own it gives us visibility of what the teams are doing. Once we have that visibility, we can talk to the teams about whether they could be doing better component testing, or whether they could be using virtualized services here or there. That's not even the main purpose of Continuous Delivery Director, but it shows how the tools themselves give greater visibility, enable much more intuitive and insightful conversations with other teams, and reduce those organizational silos. >> Thanks, Glynn. So to sum it up: autonomy, collaboration, and tools that facilitate both. Let's talk now about metrics, from your perspectives. What are the metrics that matter? Jeff?
Um, one of my favorites came from a, um, a company called ultimate software where they looked at the ratio of defects found in production to defects found in pre production and their developers were in fact measured on that ratio. It told them that guess what quality is your job to not just the test, uh, departments, a group, the fourth level that I think is really important, uh, in, in the current, uh, situation that we're in is the level of engagement in your development organization. >>We used to joke that we measured this with the parking lot metric helpful was the parking lot at nine. And how full was it at five o'clock. I can't do that anymore since we're not physically co-located, but what you can do is you can look at how folks are delivering. You can look at your metrics in your SCM environment. You can look at, uh, the relative rates of churn. Uh, you can look at things like, well, are our developers delivering, uh, during longer periods earlier in the morning, later in the evening, are they delivering, uh, you know, on the weekends as well? Are those signs that we might be heading toward a burnout because folks are still running at sprint levels instead of marathon levels. Uh, so all of those in combination, uh, business value, uh, flow engagement in quality, I think form the backbone of any sort of, of metrics, uh, a program. >>The second thing that I think you need to look at is what are we going to do with the data and the philosophy behind the data is critical. Um, unfortunately I see organizations where they weaponize the data and that's completely the wrong way to look at it. What you need to do is you need to say, you need to say, how is this data helping us to identify the blockers? The things that aren't allowing us to provide the right context for people to do the right thing. And then what do we do to remove those blockers, uh, to make sure that we're giving these autonomous teams the context that they need to do their job, uh, in a way that creates the most value for the customers. >>Great advice stuff, Glenn, over to your metrics that matter to you that really make a big impact. And, and, and also how do you measure quality kind of following onto the advice that Jeff provided? >>That's some great advice. Actually, he talks about value. He talks about flow. Both of those things are very much on my mind at the moment. Um, but there was this, I listened to a speaker, uh, called me Kirsten a couple of months ago. It taught very much around how important flow management is and removing, you know, and using that to remove waste, to understand in terms of, you know, making software changes, um, what is it that's causing us to do it longer than we need to. So where are those areas where it takes long? So I think that's a very important thing for us. It's even more basic than that at the moment, we're on a journey from moving from kind of a waterfall to agile. Um, and the problem with moving from waterfall to agile is with waterfall, the, the business had a kind of comfort that, you know, everything was tested together and therefore it's safer. >>Um, and with agile, there's that kind of, you know, how do we make sure that, you know, if we're doing things quick and we're getting stuff out the door that we give that confidence, um, that that's ready to go, or if there's a risk that we're able to truly articulate what that risk is. 
So there's a bit about release confidence, and some of the metrics around that, and how healthy those releases are. And it's actually asking: we spend a lot of money and investment setting up our teams and training our teams; are we actually seeing them deliver more quickly, and are we actually seeing them deliver more value, more quickly? So yeah, those are the two main things for me at the moment. But I think it's also about generally bringing it all together, the DevOps, the ValueOps, the AIOps: how do we actually bring that together so we can make quick decisions and make sure that we are delivering the biggest bang for our buck? >> Absolutely, biggest bang for the buck. Serge, your thoughts? >> Yeah. So I think we all agree, right? It starts with business metrics and flow metrics; these are the most important metrics. And ultimately, one of the things that's very common across highly functional teams is engagement. When you see a team that's highly functioning, that's agile, that practices DevOps every day, they are highly engaged. That's definitely true. Now, back to Jeff's point on the weaponization of metrics. One of the key challenges we see is that organizations traditionally have been setting up benchmarks: what is a good cycle time? What is a good lead time? What is a good mean time to repair? The problem is that this is very contextual; it varies. It's going to vary quite a bit depending on the nature of the application and system. >> And so one of the things that we really need to evolve as an industry is to understand that it's not so much about those flow metrics on their own; it's about how these four metrics ultimately contribute to the business metrics, to the business outcome. So that's one thing. The second aspect that I think is oftentimes misunderstood is that when you have a bad cycle time, or what you perceive as being a bad cycle time or poor quality, the question is, do you actually go and explore why? What is the root cause of this? And I think one of the key challenges is that we tend to focus a lot of time on the metrics and not on the anti-patterns, which are pretty common across the industry. If you look at, for instance, things like lead time, it's very common that organizational boundaries are going to be a key contributor to bad lead time. >> And so I think that, beyond the metrics, there is a lot of work that we need to do in terms of classifying these anti-patterns. You know, back to you, Jeff, I think you're one of the co-authors of water-scrum-fall as a key pattern, or anti-pattern, in the industry. But water-scrum-fall is a key one, right? And you will detect that through defect arrival rates that look like an S-curve. And so I think it's beyond the metrics; it's what you do with those metrics. >> Right. I'll tell you, Serge, one of the things that is really interesting to me in that space is that those of us who have been in the industry for a long time know the anti-patterns, because we've seen them in our careers, maybe multiple times. And one of the things that I think you could see tooling do is perhaps provide some notification of anti-patterns based on the telemetry that comes in.
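As a rough illustration of the kind of telemetry-driven check Serge and Jeff are describing, the sketch below flags one water-scrum-fall smell from defect arrival dates: when most defects show up only in the final stretch of a release, the cumulative arrival curve takes on that S-curve shape. The date fields, the 25 percent tail window, and the example threshold are assumptions made for this sketch, not how Continuous Delivery Director or any other product actually detects anti-patterns.

```python
from datetime import date, timedelta

def late_defect_share(defect_dates, release_start, release_end, tail_fraction=0.25):
    """Crude water-scrum-fall smell test on defect arrival dates.

    If a large share of defects arrive in the final stretch of a release
    window, the cumulative arrival curve looks like the steep back half of
    an S-curve, which often means integration and testing were deferred to
    a hardening phase. The 25% tail window is an illustrative assumption,
    not a benchmark from any tool or report.
    """
    window = release_end - release_start
    tail_start = release_end - timedelta(days=window.days * tail_fraction)
    in_window = [d for d in defect_dates if release_start <= d <= release_end]
    if not in_window:
        return 0.0
    late = [d for d in in_window if d >= tail_start]
    return len(late) / len(in_window)

# Example: 70% of defects landing in the last quarter of the release window
# is a strong hint that testing is still happening waterfall-style.
arrivals = [date(2020, 10, 1) + timedelta(days=n)
            for n in (2, 5, 40, 55, 56, 57, 58, 58, 59, 59)]
print(late_defect_share(arrivals, date(2020, 10, 1), date(2020, 11, 30)))  # 0.7
```

A real detector would look at the whole arrival curve and combine several signals, which is where the machine learning Jeff mentions next could come in.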
I think it would be a really interesting place to apply machine learning and reinforcement learning techniques. So hopefully that's something we'll see in the future with DevOps tools, because as a manager who may be only a 10-year or 15-year veteran, you may be seeing these anti-patterns for the first time, and it would sure be nice to know what to do when they start to pop up. >> It would, right. Insight is always helpful. All right, guys, I would like to get your final thoughts on this: the one thing that you believe our audience really needs to be on the lookout for and to put on our agendas for the next 12 months. Jeff, we'll go back to you. >> I would say look for the opportunities that this disruption presents, and there are a couple that I see. First of all, as we shift to remote-centric working, we're unlocking new pools of talent; it's possible to implement more geographic diversity, so look to that as part of your strategy. Number two, look for new types of tools. We've seen a lot of interest in and usage of low-code tools to very quickly develop applications; that's potentially part of a mainstream strategy as we go into 2021. Finally, make sure that you embrace this idea that you are supporting creative workers, and that agile and DevOps are the peanut butter and chocolate to support creative workers with algorithmic capabilities. >> Peanut butter and chocolate. Glenn, where do we go from there? What's the one silver bullet that you think folks should be on the lookout for now? >> I certainly agree that low code is the one for next year; we'll see much more low code. We'd already started moving towards more of a SaaS-based world, but low code also. I think as well, for me, we've still got one foot in the cloud camp; we'll be fully trying to explore what that means going into next year and exploiting the capabilities of cloud. But the last thing for me is how you really instill quality throughout the life cycle. When I heard the phrase water-scrum-fall it made me shudder, because I know that's a problem; that's where we're at with some of our things at the moment, and we need to get beyond that. We need to be releasing changes more frequently into production, and actually being a bit more brave and having the confidence to do more testing in production and to go straight to production itself. So expect to see much more of that next year. Yeah, thank you. I haven't got any food analogies, unfortunately. >> We all need some peanut butter and chocolate. All right, Serge, take us home. What's the nugget you think everyone needs to have on their agendas? >> That's interesting, right. A couple of days ago we had the latest State of DevOps report, and if you read through the report, it's all about velocity; it's all about speed. We are still perceiving DevOps as being all about speed. And so to me the key advice is: in order to create that kind of spiritual co-location, in order to foster engagement, we have to go back to what it is we're trying to do collectively. We have to go back to tying everything to the business outcome. And so for me it's absolutely imperative for organizations to start to plot their value streams, to understand how they're delivering value, and to align everything they do, the metrics, the delivery, the flow, to those business outcomes.
And only with that, I think, are we going to be able to actually start to align all of these roles across the organization and drive not just speed, but business outcomes. >> All about business outcomes. I think the three of you could write a book together, so I'll give you that as food for thought. Thank you all so much for joining me and our guests today. I think this was an incredibly valuable, fruitful conversation, and we appreciate all of you taking the time to spiritually co-locate with us today. Guys, thank you. >> Thank you, Lisa. >> Thank you. >> For Jeff Hammond, Serge Lucio and Glenn Martin, I'm Lisa Martin. Thank you for watching the Broadcom DevOps Virtual Forum.
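Two of the concrete measures Jeff describes in this discussion, the production-to-pre-production defect ratio and the engagement signal hidden in source-control timestamps, are simple enough to compute from data most teams already have. The sketch below is a minimal illustration of both; the field names, the git invocation, the 8 p.m. cutoff and the 90-day window are assumptions made for the example, not anything prescribed in the forum.

```python
import subprocess
from collections import Counter
from datetime import datetime

def defect_escape_ratio(defects):
    """Ratio of defects found in production to defects found before production.

    `defects` is an iterable of dicts with a 'found_in' field, e.g.
    {'id': 'D-101', 'found_in': 'production'}; the field name and values
    are illustrative, not any particular tracker's schema.
    """
    counts = Counter(d["found_in"] for d in defects)
    escaped = counts.get("production", 0)
    caught_early = sum(n for env, n in counts.items() if env != "production")
    return escaped / caught_early if caught_early else float(escaped)

def off_hours_commit_share(repo_path, since="90 days ago"):
    """Rough engagement signal: share of commits made on weekends or after
    8 p.m., read from `git log` author timestamps.

    The 8 p.m. cutoff and 90-day window are arbitrary assumptions; treat the
    result as a team-level conversation starter, never an individual score.
    """
    stamps = subprocess.run(
        ["git", "-C", repo_path, "log", f"--since={since}", "--pretty=%aI"],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    times = [datetime.fromisoformat(s) for s in stamps]
    if not times:
        return 0.0
    off = sum(1 for t in times if t.weekday() >= 5 or t.hour >= 20)
    return off / len(times)

# Example: three escaped defects against sixty caught pre-production -> 0.05.
sample = [{"found_in": "production"}] * 3 + [{"found_in": "system-test"}] * 60
print(defect_escape_ratio(sample))
```

As Jeff cautions, the point of numbers like these is to surface blockers and start conversations at the team level, not to score individuals.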

Published Date : Nov 18 2020


Jeffrey Hammond, Forrester | DevOps Virtual Forum Promo


 

>> Yeah. Hey, welcome back everybody, Jeff Frick here with theCUBE, coming to you from our Palo Alto studios today, talking about an event that we're gonna have in November. It's pretty exciting, and to talk about it and give us a little bit of a preview, we're joined in this segment by Jeffrey Hammond. He's the vice president and principal analyst at Forrester. Jeffrey, great to see you. >> It's good to be here, Jeff. Thanks for having me. >> Absolutely. So, lots of social media memes about, you know, what's driving your digital transformation: is it the CEO, the CIO, or COVID? And I think we all know what the answer is. But what's happened is, as we've accelerated digital transformation and we had the light-switch moment, everybody working from home, we're now six months, eight months into this, and this is gonna be going on for a while. So specifically in the context of DevOps, where such a foundation of it is us getting together every morning in a room, having a quick stand-up, talking about what our challenges are, and then going out to develop, we haven't been able to do that for six months, and we're probably not gonna be able to do it for a little while longer. So how is DevOps in 2021, the age of COVID and even post-COVID, gonna be different from what we had in, say, 2019? >> Yeah, Jeff. A couple years ago I wrote a piece called Designing Developer Spaces, and it was all about creating physical spaces for agile teams to work in, because as creative teams they needed to have an environment that supported them. And the idea of remote working was kind of an oddity; you know, there was a list up on GitHub of companies that supported remote developers, and it was maybe 100 companies long at that point. And now, in 2020, every company is a remote development company. And so all those investments in physical spaces to support cross-functional co-located teams aren't something that we're able to take advantage of today, and as a result it's forcing companies to become even more disciplined with respect to the things that they do to help development teams work together. It's forcing them to focus on what I would call spiritual co-location, because physical co-location is no longer an option. And you can't do that without an even higher attention to automation and the DevOps practices that enable it, but also an increased focus on enabling digital collaboration: moving from things like the physical kanban wall that you put index cards on onto tools that help you replicate that sort of capability, but do it in a digital world when you have 100% remote developers. >> Right, so that just begs a lot of questions. What should people be measuring? How should they be measuring? I mean, we have all kinds of measurement tools, and obviously the DevOps process is a continuous thing that's happening every day, pushing out new releases every day. How do managers rethink how they're measuring outcomes? I don't wanna say success, because it's really outcomes and not activity. >> Yeah, it's a really timely question, Jeff. You know, I've been getting a lot of questions from large enterprise development shops about, well, how do I make sure that my employees are still productive now that I can't see them? Should I be measuring individual productivity? You know my answer: you know, I don't think so.
You really want to measure at the team level, but you may want to allow individuals to begin to look at their own productivity metrics and benchmark themselves, because they can't see the person next to them at the other desk, or have that conversation and know that they're doing a good job. So the way that management works changes significantly, and that's one of the things that we'll talk about in November. >> Right. And I'm just curious, how much of this can we pull from generic leadership as well, because it's the same situation. I love your concept of spiritual alignment; that's also got to come not only from the DevOps team but from all the senior leadership, who don't necessarily have the opportunity to reinforce those messages in the hallway or whatever the normal communication channels were that they used before. But this is well beyond DevOps; it's really, you know, leadership in general, I would say. >> Yeah, it comes down to data, collaboration, and shared vision. You know, those principles are not unique to software development, but they're extremely important for any type of creative work, and that's what software development is. So we can learn a lot from the business as a whole, but then we need to apply it specifically in the process and context of developing software, and that's where DevOps creates the link to enable that to happen. >> Yeah, really an interesting kind of fork in the road, if you will. DevOps has been around for some 20-odd years, a fundamental change in the way software is specced and built and delivered. But as you said, even by definition, cross-functional co-located teams simply aren't enabled today and probably won't be for a little while longer. So I think this is probably a lot of information that people are really excited to hear. >> Yeah, especially because we're now out of the sprint phase; we're moving into a marathon. We're gonna have to deal with this for probably at least the next 8 to 12 months. So we've got to start thinking differently for the long term about how we keep our employees productive, but also keep them happy and make sure that they aren't burning out, so that they're developing great software that really matters. >> Yeah, that's great. Well, thanks for the little tease; we look forward to getting a lot more meat on this topic and diving in in November. So, Jeffrey, thanks for stopping by, and again, it's the DevOps Virtual Forum, November 18th, 11 a.m. Eastern, 8 a.m. Pacific. Jeffrey, we'll see you there. >> Can't wait. It'll be a lot of fun. >> Alright, he's Jeffrey, I'm Jeff. You're watching theCUBE. So get ready, mark your calendars for November: it's the DevOps Virtual Forum, November 18th, 11 a.m. Eastern, 8 a.m. Pacific. See you there. Thanks for watching.

Published Date : Oct 30 2020


Jeffrey Hammond, Forrester | DevOps Virtual Forum


 

>> Hey, welcome back everybody, Jeff Frick here with theCUBE, coming to you from our Palo Alto studios today, talking about an event that we're gonna have in November. It's pretty exciting, and to talk about it and give us a little bit of a preview, we're joined in this segment by Jeffrey Hammond. He's the vice president and principal analyst at Forrester. Jeffrey, great to see you. >> It's good to be here, Jeff. Thanks for having me. >> Absolutely. So, lots of social media memes about, you know, what's driving your digital transformation: is it the CEO, the CIO, or COVID? And I think we all know what the answer is. But what's happened is, as we've accelerated digital transformation and we had the light-switch moment, everybody working from home, we're now six months, eight months into this, and this is gonna be going on for a while. So specifically in the context of DevOps, where such a foundation of it is us getting together every morning in a room, having a quick stand-up, talking about what our challenges are, and then going out to develop, we haven't been able to do that for six months, and we're probably not gonna be able to do it for a little while longer. So how is DevOps in 2021, the age of COVID and even post-COVID, gonna be different from what we had in, say, 2019? >> Yeah, Jeff. A couple years ago I wrote a piece called Designing Developer Spaces, and it was all about creating physical spaces for agile teams to work in, because as creative teams they needed to have an environment that supported them. And the idea of remote working was kind of an oddity; you know, there was a list up on GitHub of companies that supported remote developers, and it was maybe 100 companies long at that point. And now, in 2020, every company is a remote development company. And so all those investments in physical spaces to support cross-functional co-located teams aren't something that we're able to take advantage of today, and as a result it's forcing companies to become even more disciplined with respect to the things that they do to help development teams work together. It's forcing them to focus on what I would call spiritual co-location, because physical co-location is no longer an option. And you can't do that without an even higher attention to automation and the DevOps practices that enable it, but also an increased focus on enabling digital collaboration: moving from things like the physical kanban wall that you put index cards on onto tools that help you replicate that sort of capability, but do it in a digital world when you have 100% remote developers. >> Right, so that just begs a lot of questions. What should people be measuring? How should they be measuring? I mean, we have all kinds of measurement tools, and obviously the DevOps process is a continuous thing that's happening every day, pushing out new releases every day. How do managers rethink how they're measuring outcomes? I don't wanna say success, because it's really outcomes and not activity. >> Yeah, it's a really timely question, Jeff. You know, I've been getting a lot of questions from large enterprise development shops about, well, how do I make sure that my employees are still productive now that I can't see them? Should I be measuring individual productivity? You know my answer: you know, I don't think so.
You really want to measure at the team level, but you may want to allow individuals to begin to look at their own productivity metrics and benchmark themselves, because they can't see the person next to them at the other desk, or have that conversation and know that they're doing a good job. So the way that management works changes significantly, and that's one of the things that we'll talk about in November. >> Right. And I'm just curious, how much of this can we pull from generic leadership as well, because it's the same situation. I love your concept of spiritual alignment; that's also got to come not only from the DevOps team but from all the senior leadership, who don't necessarily have the opportunity to reinforce those messages in the hallway or whatever the normal communication channels were that they used before. But this is well beyond DevOps; it's really, you know, leadership in general, I would say. >> Yeah, it comes down to data, collaboration, and shared vision. You know, those principles are not unique to software development, but they're extremely important for any type of creative work, and that's what software development is. So we can learn a lot from the business as a whole, but then we need to apply it specifically in the process and context of developing software, and that's where DevOps creates the link to enable that to happen. >> Yeah, really an interesting kind of fork in the road, if you will. DevOps has been around for some 20-odd years, a fundamental change in the way software is specced and built and delivered. But as you said, even by definition, cross-functional co-located teams simply aren't enabled today and probably won't be for a little while longer. So I think this is probably a lot of information that people are really excited to hear. >> Yeah, especially because we're now out of the sprint phase; we're moving into a marathon. We're gonna have to deal with this for probably at least the next 8 to 12 months. So we've got to start thinking differently for the long term about how we keep our employees productive, but also keep them happy and make sure that they aren't burning out, so that they're developing great software that really matters. >> Yeah, that's great. Well, thanks for the little tease; we look forward to getting a lot more meat on this topic and diving in in November. So, Jeffrey, thanks for stopping by, and again, it's the DevOps Virtual Forum, November 18th, 11 a.m. Eastern, 8 a.m. Pacific. Jeffrey, we'll see you there. >> Can't wait. It'll be a lot of fun. >> Alright, he's Jeffrey, I'm Jeff. You're watching theCUBE. So get ready, mark your calendars for November: it's the DevOps Virtual Forum, November 18th, 11 a.m. Eastern, 8 a.m. Pacific. See you there. Thanks for watching.

Published Date : Oct 12 2020


Gene Kim, DevOps Author & Researcher | Nutanix .NEXT Conference 2019


 

>> live from Anaheim, California. It's the queue covering nutanix dot Next twenty nineteen. Brought to you by Nutanix. >> Welcome back, everyone to the cubes. Live coverage of Nutanix Stott next here in Anaheim, California. I'm your host, Rebecca Night, along with my co host, John Farrier. We're joined by Jean Kim. He is an author, researcher, entrepreneur and founder of Revolution. Thank you so much for coming back on the Cube, Gene. >> Oh, thanks so much for Becca and always great seeing you and John. >> So you are a prolific author. You've written many books, including the Phoenix Project, The Deb Ops Handbook, given new one coming out. But this is this is the latest one we have here the Dev Ops Handbook >> twenty sixteen. And then we came up with a little bit cool accelerate based on the state of Davis report. And yeah, it's been a fun ride. Just what a great space to be writing about >> Dev ops has been. I'LL see that covered going back years. Now it's mainstream, and you started to see the impact of people who have taken the devil's mentality put promise and the place we see all the you know, Web scales from Facebook, you name. But now the enterprises is now really looking at agility scenario. You've been working a lot on you Host the Devil Devil Enterprise Summit. What's that been like? I mean, it seems to be well taken longer than some of the hard core cloud guys. So what's the State of the Union, if you will, for the enterprise from a devil standpoint? >> Yeah, What a great question. I mean, I think there's no doubt that the devil's principles and practices were pioneered in the tech giant's Facebook's Amazon necklace and Google's, but I've long believed with a certain level certainty that a CZ much economic values they've created, uh, that's just the tip of the iceberg. The real value will be created when you know the largest, most complex organization, the planet adopting same principles of patterns. And when you have Ah yeah, I think I. D. C said there's eighteen million developers on the planet of which, at maximum, no half million at the tech trying and the rest are in, you know, the largest brands across every industry vertical. And if we could get those seventeen and a half million developers as productive as if there were at Facebook Amazon, that for school I'm not, generates trillions of dollars of economic value per year. And when you know what, that much, um, economically being created. I mean, we'LL have undoubtedly, you know, incredible societal improving outcomes as well. So it has been such a treat to help chronicle that journey. >> One of the things I want to ask you. Genes that doesn't impressive numbers, but also UV factor and net new developers, younger generation, re skilled workers used to be a network. I now I'm a developer. You seeing developers really at the infrastructure level now. But show like this where Nutanix is a heart was a hardware company there now a software company. So they're ato heart of Jeb ops. In terms of their target audience, they're implementing this stuff, So this is a refreshing change. So I gotta ask you when you walk into an enterprise, what is the current temperature of our I Q of Dev ops are they are their percentage. That's you know, they're some are learning. Take us through kind of the progress. >> If I would guess right? This has much as I love statistics and you know, comprehensive benchmarking. Yeah, I think we're three percent of the way there. 
Alright, I percent Yeah, you know, we're in the earliest stages of it, Which means the best is yet to come. I think develops is an aspiration for many on DH. No, but having to change the I think Dave is often a rebellious group rebelling against agent powerful order right now, uh, forces far beyond their control. Conservative groups protecting their turf. I think that's kind of the, uh, probably a typical situation. And so, you know, we're a long way away from Devil's being the dominant orthodoxy. >> So if that's the case, just probably some people who have adopted it had success we're seeing in these new, innovative shifts. The early adopters have massive value extraction from that. So and that's an advantage. Committed advantage. Can you give us some examples of people who did that took the rebellion that went to Dev Ops were successful and then doubled down on it? >> Yeah, I think the one that come to mind immediately are like Capital one. Yeah, they went from eighty percent outsourcing to now. Almost hundred cent Insourced. Same with target, where they're really started off as a uh ah bottom up movement and then gain the support of the highest levels of leadership. And it has been so exciting to see the story's not just told by technology leaders, but increasingly shared and being told by both the technology leader and a business counterpart were the business leader is saying, I am wholly reliant upon my technology, Pierre, to achieve all the goals, dreams and aspirations of our organization. And that's what a treat, to be able to see that kind of recognition and appreciation. >> It's an operational shift to They have to buy into changing how they operate as a company. Yes, and believe me, they're like clutching on to the old ways. And that's just the way it is. A >> wonderful phrase from the NUTANIX CEO that Loved is that way often characterized that developers as the builders, but operation infrastructure, they are builders, too. In fact, you know, developers cannot be productive if they are mired in infrastructure, right? And so, uh, you know, uh, you know, you get a peek. Productivity focus flown joy when you don't have to deal with concerns outside of the business feature and the visibility. One solved. And I know that from personal experience where the frustration you have when you just want to do one thing and you just carved out a door ten things that you just can't do because you have two. Puzzle is a puzzle. They have solved >> it. Love to get your reaction, tio some of the trends that I'm seeing because Kev Ops has been such an important movement, at least from my standpoint, because people could get lost in the what the word means at the end of the day program ability, making infrastructures code, which is the original ethos. Making the officer programmable and invisible, which is one of the themes of nutanix was the dream. That kind of is the objective, right? I mean, to make it programmable. So you don't that stand up all these services and prep and provisions Hard infrastructure stuff? >> Yeah. Yeah. In November, the Unicorn project is coming out. So it's the follow into the Phoenix project, and I'm really trying to capture how great it feels when you could be productive and all of infrastructures taken care of for you by your friends and infrastructure. Right then allows youto you know, have your best energy focusing on solving a business problem, not on how to connect a to B. And we need to expect to see in the yamma files and configuring. 
You know, all these things that you don't really care about, but you're forced to write, and I think that allows ah, level of productivity and joy. But also, >> uh, >> of, uh, >> is that the idea working relationship between development and infrastructure, where developers are costly thanking their infrastructure, appears for making their life easy >> way. We're joking. Rebecca and I were joking about how we use Siri ate Siri. What's the weather in Palo Alto? This should be an app for the enterprises says Hey, Cube or whatever at NUTANIX or whatever. Give me some more storage. Why isn't it happening? But that's that's that's That's kind of a joke, but it's kind of goal. Oh, increasing the right >> that's just available on demand right on. You certainly don't have to open up thirty tickets these days. Like was so typical ten years ago that that's a modern miracle. >> My question for you is why books? I mean, so here here we have were in this very fast changing technological environment and landscape. And as you said, the Dev Ops is still relatively new. There's it's not. It's a three percent really who understand it. Why use a bunch of dead tree just to get your message across? I was like writing, in fact and an ideal >> month, and I get to spend half the time writing and half the time hanging out with the best in the game, studying now that the greatest in the field. And I think even in this day and age, there's still no Maur effective and viral mechanism spread ideas and books. You know, when people someone says, Hey, I love the finished project I'd loved reading it. It says a couple things right. They probably spent eight hours reading it on. You know, that's a serious commitment. And so I think, Imagine how many impression minutes, you know it takes a purchase. Eight minutes, eight hours of someone's time. And so for things like this, I really do think that you know, the written form is still won most effective ways. Tio communicate ideas. >> Your dream job. You're writing out the best people. What did you What have you learned from the these people. >> Oh, my goodness, >> you could write a book. Yeah, >> but for twenty years, I self identified as an operations person. Even that well, I was formally trained to develop Our got my graduate degree in compiler design in nineteen ninety five. And so for twenty years, I just loved operations. This because that's where the action was. That's what saves happened. But something changed. About four years ago. I learned at programming language called Closure. It's a functional programming languages, a list so very alien to me, the hardest thing I've ever learned. I mean, I must have read and watched eighty hours of video before I wrote one line of code, but it has been the most rewarding thing. And it's just that, uh, exactly brought the joy of development and encoding back into my daily life. So So I guess I should amend my answer. I would say it's half the time writing half the time hand with the best of game and twenty percent coding just because I love to solve problems, right? Yeah, my own problems. So So I have I would thank people I get I you know, I've been able to hang out with and had the privilege to watch because, um, if it weren't for that, I think I would been happy. No, just saying that coding was a thing of the past. Right? S o for that. I'm so grateful. >> How do you use what you learn about in terms of your writing and in your coding and vice a versa. I mean, So how are they different in how are they the same? 
>> Uh, that's a great question. You >> know, I think >> what's really nice about coding is that it's, uh that's very formal. I mean, in fact, the most extreme. It's all mathematics, right? The books are just a pile of words that may or may not have order and structure. And so, in the worst days, I felt like with the Unicorn Project, I wrote one hundred fifty thousand words. Target work count is one hundred thousand, and I was telling friends I wrote one hundred fifty thousand words that say nothing of significance, right? What have I done The best days and that's I think that's because you have to impose upon it a structure and a point right on the best days is very much like coding. Everything has a spot, right? Uh uh, And you know what to get rid of. So, uh, yeah, I think the fact that coding has structure, I think makes it in some ways an easier for me to work >> with. And what brings you to new tenants next this week? What's the story? Which >> I gotta say I had the privilege and was delighted to take part in what they called deaf days. So if they were gathering developers to learn about educate everyone on how to use, uh, the new Tanis capabilities through AP eyes just like he said, right to help enable automation, and, uh, I just find it very rewarding and fulfilling. I just because even though I think nutanix er as a community is known for being the, uh, the innovators and the, uh so the rebellion a cz productive as you know, that technology's made them to turn into an automated platform. And I think that's another order of magnitude gain in terms of value they could create for their organization. So that was a >> tree. And they've transformed from an operations oriented box company years ago and now officially subscription based software. They're going all software. They're flipping their model upside down, too. >> And it was just a delight to see the developers who are attracted to that one day thing I would recommend to anyone who's interested in development on just being on the cutting edge of what could be done with it. For example, if you have cameras in every store is their way to automate the analysis that you compute dwell times and, you know, Q abandonment rates. I mean, it's like a crash course in modern business practices that I thought was absolutely amazing. >> Well, Jean, you do great work. I've been following you for years. I know you're very humbles. Well, but give a plug. Take a minute to explain the things you're working on. You got a great event. You run, you gotta books. What other things you got going on? Shared the audience. >> Just those two things that were just Everything is about the book right now. The Unicorn project is coming in November. Uh, and so accepts Will be available at the Devil sent five summit in London s O. That's a conference for technology leaders from large, complex organizations and over the years, we've now chronicle of over two hundred case studies by technology leaders from almost every brand across every industry vertical. And it has been such a privilege toe. See, hear the stories and to see how they're being rewarded for their achievements. I mean there being promoted on being given more responsibility. So that is, Ah, treat beyond words >> and it's a revolution. It's a shift that's definitely happening. You're in the bin and doing it for years, and we're documenting it so and you are a CZ. Well, >> I'm looking forward to see you there. 
>> I just have one final question and this is about something you were saying about how Nutanix is the insurgent and the rebel the rebel in office. How does it How do you recommend it? As a researcher, as an entrepreneur yourself and as someone who's really in this mindset, how do you recommend it? Stay feisty and scrappy and with that mentality at it, especially as it grows and becomes more and more of a behemoth itself? >> Um, there was some statements made about, like how, ten years ago, virtual ization was the one key certification that was guaranteed. You relevant stuff forever in the future. And, yeah, I think there's some basis to say that, you know, that alone is not enough to guarantee lifetime employment. And I think the big lesson is you know, we all have to be continual learners and, you know, every year that goes by, you know, they're Mohr miracles being >> ah ah, >> being created for us to be able to use to solve problems. And if that doesn't think the lesson is if we're not, uh, always focused on being a continual Lerner, Yeah, there's great joy that comes with it and a great peril, You know, if we choose to forego it. >> Well, that's a great note to end. Thank you so much for coming back on the Cube. Gene. >> Thank you so much. And not great CD. Both. Thanks. >> I'm Rebecca Knight for John Furrier. We will have much more from dot next, just after this

Published Date : May 8 2019


Nicole Forsgren, DevOps Research & Assessment | PagerDuty Summit 2017


 

>> Hey, welcome back here everybody, Jeff Frick here with theCUBE. We're at PagerDuty Summit in San Francisco at Pier 27. It's a new facility, we've never been here; it's pretty unique, right between the Bay Bridge and Pier 39. Beautiful day out on the water, and it's all about DevOps here at PagerDuty. And I'm going to tease Jen later about whether people in this town even know what a pager is. So we are excited to have Nicole Forsgren. She's the founder, CEO and chief scientist of DevOps Research and Assessment. I had to read it, it's a big mouthful, but it goes by DORA for sure. Nicole, welcome. Good to see you. >> Thanks so much. It's good to be here. >> Alright, so you are the DevOps expert. You've got a really interesting past; I did some research on the LinkedIn profile: industry, academia, industry, academia, and now you're out helping people. >> Yes, bounced around a bit. It's all about the pivot, right? >> Absolutely. >> Out here doing DevOps. >> Absolutely, absolutely. So you do an annual report on the state of DevOps. So where are we? DevOps has been being talked about for a long, long time. How much is reality? How far are we on this journey? What are you seeing? >> Right, so it's really interesting you point that out, because for years everyone's been like, DevOps: what is it? Does it matter? And so DORA, and by the way, DORA is myself, Jez Humble, Gene Kim; we just brought on Sue Chow, but those are the core founders. We've partnered up with the team at Puppet, and for the last several years we've put out the State of DevOps report, to help define, at least from a research standpoint and from our standpoint, what it is, what the key contributors are that really drive value, and whether it drives value. For years, and I'll talk about this later this afternoon in my closing keynote, for years, and when I say years I mean decades, of rigorous, peer-reviewed academic research, technology didn't matter. Like, it didn't matter at all. It just never delivered value to organizations. But then we started seeing patterns, really interesting patterns, and companies saying, no, we're seeing results, we're delivering value, we're delivering outcomes, core essential outcomes for end users and customers and the business. And so we got together and said, okay, let's really take a look at this in an important way. >> Right, and how far we've come, right? 'Cause now most companies are technology companies; they just happen to wrap their technology around a particular product or a particular service. >> Yeah, exactly. >> And now most are leading with technology as a vehicle to drive value and to drive transformation. So DevOps is also very wrapped up in this whole concept of digital transformation. That's all anybody wants to talk about; it's in every earnings call. So how closely are the two related, and how do you see it, 'cause DevOps has got a little bit more history than the buzz of transformation: are people applying DevOps concepts beyond strictly development and operations? >> So, there's a lot to unpack there. Like you said, it's really, really involved, although it has become kind of a buzzword, right? Some people love it, some people embrace it, some people never want to hear it. So it's really all about what's important to the company in delivering value.
But it's core is really about taking important methodologies and practices to deliver value and it's about using technology and automation, in conjunction with core values and practices and processes that we've adopted from the lane and agile movements. >> Jeff: Right, right. And having a really good healthy culture that's about more than just DevOps. Right like you said. DevOps, QA, Info Sec. The business marrying all of that, pulling all of it together, working in conjunction in the right kind of ways to deliver value. To deliver key outcomes to help us pivot, move fast, learn, have fast feedback. So that we can do what we need to do for the company, for the business, because like you said, it's so many companies right now, really are technology organizations that happened to be wrapped around in some particular industry. >> Jeff: Right, right. >> Capital One is a financial institution. Really they are a technology organization that happens to do finance and deliver finance really, really well for their customers. So many other companies are doing retail but it's driven by technology. Right or they do insurance and it's driven by technology or they're a healthcare organizations that really can't do what they do unless they have technology to really drive it. >> Right, right. The financials institutions are interesting because if you talk to like my kids. If they've ever been inside of an actual bank and then and how often do they go to the atm? So not even atm, so the way that people more and more interact with the company is through digital mediums. >> Right. >> But I'm curious to get you're input on the big question that we always ask people is how do I get started. Right, what is the easy paths to success? How do I get some early success so I can build on that success? What's interesting is you have a very unique approach to solve that question as oppose to what I think or based on what I'm really good at, I think we should start here. >> Yes, we really do-- >> Do you guys have different-- >> And this is really why DORA exist and this is what we do. So myself Jess Humble, Jean Kim. This explains the genesis of DORA. So we have a couple different things so the mission of DORA is to help companies get better through science and proven methods. Ans so we have a couple of different things we do. The first is that state of DevOps report that we put together at Puppet. And those are all open sourced and so if you want some ideas of what really statistically drives improvement, go find those. They're open source, they're totally free. We've tried so many resources because we don't want companies to fail. We've all lived through that awful dot com mess. We've seen companies fail. Go find those resources. Now your question though, where should I start? If I'm a company, what should I do? We've all go into conferences myself, Jean, Jess and we've had companies come up and say well where should I start? And the answer is always, it depends. The answer is always it depends because I can't tell you absent context, absent data, absent information. If I don't know about someone's detail information. I can't tell you and so what we also have is we offer an assessment where I can collect data from the doers. Right there's this fantastic report from Forester. It's called the dangerous disconnect and that's such a great title because if you ask executives. They drastically over estimate technology and DevOps maturity in organizations. So you shouldn't be, I mean I love-- >> Over estimate. 
>> Of course they do. I mean, because we need to be really, really optimistic about where our organizations are going. >> Right, right. >> Those are our roles as executives. And so that's appropriate, but only in certain conditions is it appropriate. Where it's not appropriate is when you're setting detailed strategy for your organizations. And so what we do is we offer an assessment, using these strong, scientifically based measures that we have prepared and refined over, now, four years of rigorous academic research. We can go with a 15 minute survey, collect data from everyone in an organization that, like I said, are the doers: DevOps, TestOps, QA, InfoSec, including vendors, contractors, consultants, the people that are in the weeds every single day. I can measure you. I can benchmark you against the industry. I've got over 23,000 data points around the world. All industries, all company sizes. And then, where should they start? I can algorithmically tell you what your bottleneck is, what your constraint is. Where you should start to accelerate your performance. >> Based on my data? >> Based on your data. >> Based on your algorithms and based on your population data from this huge data set. >> Yes, and with the companies that we're working with right now, they're seeing amazing results. They're calling it out-sized results. So a really great example we have was with Capital One. They did the assessment across over a dozen lines of business. And by focusing on two core capabilities out of over 20, we focused them on the right two capabilities. They saw a 20X improvement in deploy frequency in only two months, with zero increase in incidents. >> 20% improvement-- >> 20X. >> 20X? >> 20X. >> In two months. >> 20 times. >> Wow. >> So it's that ability to measure consistently and see visibility throughout that software engineering life cycle. So we also had feedback from customers like Verizon that that visibility, that consistency of measurement, was also a really huge value add. >> Jeff: Right, right. >> Measurement's hard. >> Well it's interesting, I saw some of your videos and some of your prior keynotes and stuff, and talking about, everyone says data is the new oil. But data without context, data without the right algorithms, and you talk about a bunch of dirty data things and data problems. Data itself is not the new oil. So I wanted to get to your report, 'cause that's kind of your benchmark. That's your big stake in the ground. So how have you been doing it? What do you do differently than other things that are out there? Besides the fact that it's open source, which I'll ask you about as a follow up. What makes your research special? >> So why is our report different from any other reports out there? I think there's a couple things. The piece that makes me the proudest is that the State of DevOps Report is so different because it's academically rigorous. It's a true research report, and I love that the team has been so loving and so patient with me. Because when I started working with the rest of the group four years ago, I stepped in and I said, this is what I want to do. These are my ideas. I was still a professor at the time, so as you mentioned, I was industry and then academia, and I'm now in industry again. But I stepped in and I said, I think there's this really, really fantastic opportunity to take a look at what's going on, but we have to measure this in really rigorous ways. And by doing that, it allows us to look at predictive relationships, which is interesting because it lets us say:
If we focus on core capabilities, they will predict an organization's ability to develop and deliver quality software with speed and stability, which will in turn drive improvements in organizational performance: profitability, productivity, market share, effectiveness, efficiency in delivering mission and organizational goals. Notice I'm saying predict and drive. I'm not saying correlate, which is really interesting. And so in these years of research, we've been able to identify core capabilities that drive improvement. So it allows organizations to understand what's important to invest in. It's not just, this worked for my team, this worked for that team, hey, I think this is what I'm going to try. Because, as someone is fond of joking, anecdote is nice, but the plural of anecdote isn't data. (laughing) Right, and that was my frustration when I was in tech before, and when I was in consulting. You want to try a thing and you want to apply it, but it's really hard if I only have one or two or three or five, maybe even 10 stories. We need so much data to really understand what will likely work for teams and for industries as a whole. And like I said, God bless the team, because I came in and I was really rigorous, and I would say, that doesn't work, we can't measure that. That doesn't work here. And sometimes I'd come back and I'd say, that doesn't hold. The stats don't hold. And they'd say, "But it has to." "I know it worked here and I know it worked here." And I'm like, but we have no evidence to support that. The stats don't hold. This doesn't work. We can't say that. And we're like, hey, we'll have to try it again next year. Not try it again next year, but we have to find a different way to measure it. We have to have a different hypothesis to test. But then we also find really amazing things, like I said a couple times: it predicts a team's ability to develop and deliver code with speed and stability. Speed and stability. We found four years ago that speed and stability go together. For years, we didn't know that was the case, or we thought that in order to get stability, you had to slow down. It doesn't show up anywhere in the data. Nowhere. High performers get both. >> So do the executives, do the leaders realize that having better internal throughput for development has an impact on their business, relative to saving a few bucks on parts or spending a few more bucks on marketing? As a real driver of value, as opposed to it just always being internal apps that we have to build for whatever reason. >> They're starting to get there. And so what we're starting to do is we're really focusing heavily on delivering code with speed and stability. And then we're saying okay, imagine if you could deliver with speed and stability here. What could you do with delivering features? How does that help you get to market faster? How does that help you beat your competitors? How does it allow you to respond to complaints and regulatory changes? And so that's really what helps us drive. And then another way that we are a little different from other reports that are out there: other industry reports are also very helpful, but they are very different. So I don't say things like, 27% of the industry is using configuration management. Other reports say that, and that is interesting. I don't report on the percentage of the industry that's doing something. >> Right, right. >> But those other reports cannot say what is predictive of improvement. So we are the prediction.
Occasionally, I'll report correlations if I don't have the statistics to go as strong as-- >> And what moves it from correlation to prediction is the strength of the algorithms? >> No, it's the strength of the research design. >> The strength of the research design upfront? >> Yep, up front. >> Before you feed it in. >> Upfront and-- >> 'Cause really, you're anchoring it in research. >> Yes. >> Rigor. >> Yep. >> That's the underpinning of the whole thing. >> And much more data has been published in academic periodicals, so we are still actively doing research. >> And I would imagine that the annual report is really an ongoing, longitudinal study across a whole lot of the same companies over and over and over, year in, year out. So you get them-- >> So it's open every year. >> As well. >> Yep. >> Awesome, alright, Nicole. Well, that is fascinating, and everyone should go to DORA and get the free research. And then if they want to bring you guys in, you offer custom services to help the particular company execute and do better. >> Yes, absolutely. So you can go to DevOps-research.com to find all of our research and anything else you want to find out about engaging with us, or anything like that. >> Nicole Forsgren. She's DORA the explorer. She'll help you out with your DevOps. I'm Jeff Frick, you're watching theCUBE from PagerDuty Summit. Thanks for watching. (uptempo techno music)
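
To put Forsgren's measurement idea into something runnable, here is a minimal Python sketch under stated assumptions: DORA's actual assessment is survey-based and statistically modeled, so the deployment-log calculation and the benchmark numbers below are invented placeholders, meant only to show the shape of "measure speed and stability, compare against a benchmark, treat the bigger gap as the first constraint to work on."

```python
# Illustrative sketch only. DORA's real instrument is a survey with rigorous
# statistical modeling; the thresholds in BENCHMARK are made-up placeholders,
# not published figures.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Deploy:
    finished_at: datetime
    caused_failure: bool  # did this change lead to a rollback, hotfix, or incident?

def speed_and_stability(deploys, window_days=90):
    """Two rough delivery measures: deploy frequency and change failure rate."""
    cutoff = datetime.utcnow() - timedelta(days=window_days)
    recent = [d for d in deploys if d.finished_at >= cutoff]
    per_week = len(recent) / (window_days / 7)
    failure_rate = sum(d.caused_failure for d in recent) / len(recent) if recent else 0.0
    return per_week, failure_rate

# Hypothetical industry benchmark: deploys per week and change failure rate.
BENCHMARK = {"deploys_per_week": 10.0, "change_failure_rate": 0.15}

def biggest_gap(per_week, failure_rate):
    """Flag whichever measure is furthest from the benchmark as the constraint."""
    gaps = {
        "deploy frequency": max(0.0, 1 - per_week / BENCHMARK["deploys_per_week"]),
        "change failure rate": max(0.0, failure_rate / BENCHMARK["change_failure_rate"] - 1),
    }
    return max(gaps, key=gaps.get)

if __name__ == "__main__":
    now = datetime.utcnow()
    history = [Deploy(now - timedelta(days=i * 7), caused_failure=(i % 4 == 0)) for i in range(12)]
    freq, cfr = speed_and_stability(history)
    print(f"{freq:.1f} deploys/week, {cfr:.0%} change failure rate -> start with: {biggest_gap(freq, cfr)}")
```

In practice the inputs would come from a deployment pipeline or the survey data Forsgren describes, and the thresholds from a real benchmark set rather than the placeholder values above.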

Published Date : Sep 8 2017


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Nicole | PERSON | 0.99+
Jess Humble | PERSON | 0.99+
Jeff Frick | PERSON | 0.99+
Nicole Forsgren | PERSON | 0.99+
Jeff | PERSON | 0.99+
Jean | PERSON | 0.99+
Verizon | ORGANIZATION | 0.99+
two | QUANTITY | 0.99+
San Francisco | LOCATION | 0.99+
27% | QUANTITY | 0.99+
Capital One | ORGANIZATION | 0.99+
15 minute | QUANTITY | 0.99+
one | QUANTITY | 0.99+
Jen | PERSON | 0.99+
three | QUANTITY | 0.99+
next year | DATE | 0.99+
Jean Kim | PERSON | 0.99+
Jame Kim | PERSON | 0.99+
five | QUANTITY | 0.99+
Puppet | ORGANIZATION | 0.99+
Pier 39 | LOCATION | 0.99+
Jess | PERSON | 0.99+
10 stories | QUANTITY | 0.99+
Bay bridge | LOCATION | 0.99+
first | QUANTITY | 0.99+
Pier 27 | LOCATION | 0.99+
20% | QUANTITY | 0.99+
PagerDuty | ORGANIZATION | 0.99+
two months | QUANTITY | 0.98+
DevOps-research.com | OTHER | 0.98+
20 times | QUANTITY | 0.98+
four years | QUANTITY | 0.98+
20X | QUANTITY | 0.98+
two core capabilities | QUANTITY | 0.98+
two capabilities | QUANTITY | 0.98+
Sue Chow | PERSON | 0.98+
DevOps | TITLE | 0.98+
DORA | ORGANIZATION | 0.98+
LinkedIn | ORGANIZATION | 0.98+
four years ago | DATE | 0.98+
both | QUANTITY | 0.97+
over 20 | QUANTITY | 0.97+
decades | QUANTITY | 0.96+
God | PERSON | 0.92+
over 23,000 data points | QUANTITY | 0.92+
theCUBE | ORGANIZATION | 0.92+
later this afternoon | DATE | 0.91+
PagerDuty Summit | EVENT | 0.91+
over a dozen lines | QUANTITY | 0.85+
PagerDuty Summit 2017 | EVENT | 0.85+
more bucks | QUANTITY | 0.83+
DevOps | ORGANIZATION | 0.8+
zero increase | QUANTITY | 0.79+
single day | QUANTITY | 0.77+
years | DATE | 0.77+
years | QUANTITY | 0.77+
DORA | TITLE | 0.71+
couple times | QUANTITY | 0.68+
pager | ORGANIZATION | 0.66+
PagerDuty Summit | ORGANIZATION | 0.64+
Forester | ORGANIZATION | 0.63+
couple | QUANTITY | 0.6+
InfoSec | ORGANIZATION | 0.52+
Academe | ORGANIZATION | 0.51+
few bucks | QUANTITY | 0.51+
TestOps | ORGANIZATION | 0.4+

Emilia A'Bell Platform9


 

(Gentle music) >> Hello and welcome to theCUBE here in Palo Alto, California. I'm John Furrier here, joined by Platform9's Amelia Bell, the Chief Revenue Officer, really digging into the conversation around Kubernetes, cloud native, and the journey to this next generation cloud. Amelia, thanks for coming in and joining me today. >> Thank you, thank you. Great pleasure to be here. >> So, CRO, Chief Revenue Officer. So you're mainly in charge of serving the customers, making sure they're happy with the solution you guys have. >> That's right. >> And this market must be pretty exciting. >> Oh, it's very exciting, and we are seeing a lot of new use cases coming up all the time. So part of my job is to obtain new customers, but then of course, service our existing customers, and then there's a constant evolution. Nothing is standing still right now. >> We've had all your co-founders on the show here, and we've kind of talked about the trends and where you guys have come from, where you guys are going now. And it's interesting, if you look at the cloud native market, the scale is still huge. You're seeing now this next wave of AI coming on, which I call the real Web3 in my mind, in terms of, like, the next experiences, and it really still points to data infrastructure scale. These next gen apps are coming. And so that's being built on the previous generation of DevSecOps. >> Right. >> And so a lot of enterprises are having to grow up really, really fast. >> Right. >> And figure out, okay, I've got to have scale, I've got large scale data, I've got horizontal scalability, I've got to apply machine learning now, the new software engineering practice. And then, oh, by the way, I've got the Kubernetes clusters I've got to manage. >> Right. >> I've got to know what's in the containers, whether there are security problems. This is a really complicated but important area of build out right now in the marketplace. >> Right. What are you seeing? >> So it's really important that the infrastructure is not the hindrance in these cases. And one of our customers is in fact a large AI company, and I met with them yesterday and asked them, you know, why are you giving that to us? You've got really smart engineers. They can run and create the infrastructure, you know, in a custom way that you want it. And they said, we've got to focus on what's core to our business. There's plenty of work to do just on delivering the AI capabilities, and there's plenty of work to do. We can't get bogged down in the infrastructure. We don't want to have people running the engine, we want them driving the car. We want them creating value on top of that. So they can't have the infrastructure being the bottleneck for them. >> It's interesting, the AI companies, their value proposition to their customers is that they don't want their technical talent working on, you know, non-differentiated heavy lifting things. >> Right. >> And automate those and scale it up. Can you talk about the problem that you guys are solving? Because there's a lot going on here. >> Yeah. >> You can look at all aspects of the DevOps scale. There's a lot of little problems, some big problems. What are you guys focusing on? What's the bullseye for Platform9? >> Okay, so the bullseye is that Kubernetes infrastructure is really hard, right? It's really hard to create and run. So we introduce a time to market efficiency, let's get this up and running and let's get you into production and producing results for your customers fast.
But at the same time, let's reduce your cost and complexity and increase reliability. >> And what are some of the things that they're having problems with, that are breaking? Is it more updates on code? Is it the size of the clusters they have? What is it, is it more operational? What are some of the things that kind of get them to call you guys up? What's the main thing? >> It's the operations. It's all operations. So what happens is that if you have a look at a Kubernetes platform, it's made up of many, many components. And that's where it gets complex. It's not just Kubernetes. There's load balancers, networking, there's observability. All these things have to operate together. And all the piece parts have to be upgraded and maintained. The integrations need to work, you need to have probes into the system to predict where problems could be coming from. So the operational part of it is complex. So you need to be observing not only your clusters, the health of the clusters and the nodes and so on, but the health of the platform itself. >> We're going to get Peter Frey on here after to talk about some of the technical issues on deployments. But what's the big decision for the customer? Because there's kind of two schools of thought. One is, I'm going to build my own and have my team build it, or I'm going to go with a partner. >> Right. >> Say Platform9. What are the trade-offs there? Because it seems to me that there's a certain area where it's core competency, but I can outsource it, or partner with it, and work with Platform9 versus trying to take it all on internally. >> Right. >> Which requires more cost. So there's a line where customers have to figure out that piece. >> Right. >> What's your view on that? Because I'm hearing that more people are saying, hey, I want to focus my people on solutions. The app side, not so much the ops. >> Right. >> What's the trade-off? How do you talk about it? >> It's a really interesting question, because most companies think they have two options. It's either a DIY option, and they love that, engineers love playing with the new and the latest. And then they think the other option is going to cloud, public cloud, and have it semi-managed by them. And you get very different outcomes out of those. So in the DIY you get flexibility, 'cause you get to choose your infrastructure, but then you've got all the complexities of the DIY piece. You've got to not only choose all your components, but you've got to keep them working. Now if you go to the public cloud option, you lose flexibility, because a lot of those choices are made for you, but you gain agility, because quite frankly it's really easy to spin up clusters. So where we are is in the middle. We bring the agility and the flexibility, because we bring the control plane that allows you to spin up clusters and lifecycle manage them very quickly. So the agility's there, but you can do it on the infrastructure of your choice. And in the DIY culture, one of the hardest things to do actually is to convince them they don't have to do it themselves. They can focus on higher value activities, which are more focused on delivering outcomes to their customers. >> So you provide the solution that allows them to feel like they're building it themselves. >> Correct. >> And get the scale and speed and the efficiencies of the ops side. So it's kind of the best of both worlds.
It's not a full outsource. >> Right, right. >> You're bringing them in to make their jobs easier. >> Right, that's right. So they get choices. >> Yeah. >> They get choices on how they build it, and then we run and operate it for them. But they have all the observability. The benefit is that if we are managing their operations, and most of our customers choose the managed operations piece of it, then they don't have to. If something goes wrong, we fix that, and they get told, oh, by the way, you had a problem. We've dealt with it. But in the other model, they've got to create all that observability themselves, and they've got to get ahead of the issues themselves, and then they've got to raise tickets to whoever they need to raise tickets to. Whereas we have things like auto ticket generation and so on, where, look, just drive the car, let us worry about the engine and all of that. Let us deal with that. And you can choose whatever you want about the engine, but let us manage it for you. >> What do you say to folks out there that may have a need for Platform9? What are the signals inside their company that they should be calling you guys up and leaning in with Platform9? >> Right. >> Is it more sprawl on clusters? Is it more errors? Is it more tickets? Is it more hassle? What are some of the signs, if someone's watching this and saying, hey, I have an issue with this? >> I would say, if there's operational inefficiency, you can't get things to market fast enough because you are building this and it's just taking too long, you're spending way too much time operationally on the infrastructure, then you are not using your resources where they should best be used. And that is delivering services to the customer. >> We had Madhura on for International Women's Day, and she was talking about how they love to solve complex problems on the engineering team at Platform9. It's going to get pretty complex with the edge emerging. >> Indeed. >> And cloud native on-premises distributed computing. >> Indeed. >> That's essentially what it is. That's kind of the core DNA of the team. >> Yeah. >> How does that translate to the customers? Because IT seems to be, okay, I have virtual machines, they were great, now I've got to scale up and convert over, transform to containers, Kubernetes. >> Right. >> And then large-scale applications. >> Right, so when it comes to the edge, it gets complex pretty fast, because it's highly distributed. So how do you have standardization and governance across all the different edge locations? So what we bring into play is an ability to, at each edge location, provision from bare metal up all the way up to the application. So let's say you have thousands of stores and you want to modernize those stores. You know, rather than having a server sent somewhere to have an image loaded up, and then sending that out, and then you've got to send a technical person to the store and implement it all there. Forget all that. That's just a ridiculous waste of time. So what we've done is we've created the ability where the server can just be sent to the store. You can get your barista or your chef just to plug it in, right? You don't need to send any technical person over there. As long as we have access to it, we get access to it, and we provision the whole thing from bare metal up, and then we can maintain it according to the standards that are needed and upgrade accordingly.
And that gives standardization across all your stores or edge locations or 5G towers or whatever it is, distribution centers. And we can create nice governance and good standardization, which allows them to innovate fast as well. >> So this is a real opportunity for you guys. >> Yeah. >> This is an advantage from your expertise. >> Yes. >> The edge piece, dropping in a box, self-provisioning. >> That's right. So yeah. >> Can people do that? What's the-- >> No, actually, it's very difficult to do. From my understanding, we're the only people that can provision it from bare metal up, right? So if anyone has a different story, I'd love to hear about that. But that's my understanding today. >> That's a good value proposition. So talk about the value to the customer. What kind of scope do you have? Can you scope some of the customer environments you have, from-- >> Sure. >> From, you know, small to large? Give us an idea of the order of magnitude. >> Yeah, so small customers may have 20 clusters or something like that. 20 nodes, I beg your pardon. Our large customers, like, we are scaling one particular distributed environment from 2200 nodes to 10,000 nodes by the end of this year, and 26,000 nodes next year. We have another customer that's scaling up to 10,000 nodes this year as well. So we have some very large scale, but some smaller ones too. And we're happy to work with either end. >> Okay, so pretend I'm a customer. I've really got pain in Kubernetes, like, I can't hire enough people. I want to have all my focus on the apps. What's the pitch? >> Okay. So skill shortage is something that everyone is facing right now. And if you've got a skill shortage, it's going to be really hard to hire if you are competing against really, you know, high-salary-offering companies that are out there. So the pitch is, let us do it for you. We have a team of excellent, probably the best, Kubernetes engineers on the planet. We will create your environment for you. We will get it up and running. We will allow you to, you know, run your applications, just consume the platform, we'll run it for you. We'll have SLAs and uptimes guaranteed, and you can just focus on delivering the software and the value needed to your customers. >> What are some of the testimonials that you get from people? Just anecdotally, what do they say? Oh my god, you guys saved-- >> Yeah. >> Our butts. >> Yeah. >> This is amazing. We just shipped our code out much faster. >> Yeah. >> What are some of the things that you hear? >> So the number one thing I hear is, it just works, right? We don't have to worry about it, it just works. So that's really great feedback that we get. The other thing I hear is, if we do have issues, your team is amazing, they fix things, they're proactive, you know, we really enjoy working with you. So from that perspective, that's great. But the other side of it is we hear things like, if we were to do that ourselves, we would've taken six to 12 months to build that. And you guys have just saved us six to 12 months. The other thing that we hear is, with the same two engineers we started on, you know, a hundred nodes, we're now running thousands of nodes. We have not had to increase the size of the team and expand and scale exponentially. >> Awesome. What's next for you guys? What's on your plate? >> Yeah. >> With CRO, what are some of the goals you have? >> Yeah, so growth, of course. As a CRO, you don't get away from that.
We've got some very exciting initiatives coming up, actually. One of the things that we are seeing a lot of demand for is in the area of virtualization: bringing virtual machines, virtual, virtual containers, sorry, I'm saying that all wrong, bringing virtual machines onto the cloud native infrastructure using Kubernetes technology. So that provides an excellent stepping stone for those guys who are in the virtualization world and can't move to containers, can't refactor their applications and workloads fast enough. So just bring your virtual machine and put it onto the container infrastructure. So we're seeing a lot of demand for that, because it provides an excellent stepping stone. Why not use Kubernetes to orchestrate the virtual world? And then we've got some really interesting cost optimization. >> So a lot of migration kind of thinking around VMs and-- >> Oh, tremendous. The VM world is just massively bigger than the container world right now. So you can't ignore that. So we are providing basically the evolution, the journey, for the customers to utilize the greatest of technologies without having to do that in a way that just breaks the bank, and they can't get there fast enough. So we provide those stepping stones for them. Yeah. >> Amelia, thank you for coming on, sharing-- >> Thank you. >> --the update on Platform9. Congratulations on the big accounts you have, and-- >> Thank you. >> --the world could get more complex, which means-- >> Indeed. >> --you'll have more customers. >> Thank you, thank you, John. Appreciate that. Thank you. >> I'm John Furrier. You're watching Platform9 and theCUBE Conversations here. Thanks for watching. (gentle music)
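
Bell doesn't name the mechanism Platform9 uses to put virtual machines onto Kubernetes; KubeVirt is the common open-source way to do it, so the sketch below is a hedged illustration that assumes a cluster with KubeVirt already installed. The namespace, VM name, and demo disk image are placeholders, not anything taken from the interview.

```python
# Hedged sketch: a VirtualMachine submitted to a KubeVirt-enabled cluster via the
# official Kubernetes Python client. All names and the container-disk image below
# are illustrative placeholders; a real migration would import the existing VM disk.
from kubernetes import client, config

vm_manifest = {
    "apiVersion": "kubevirt.io/v1",
    "kind": "VirtualMachine",
    "metadata": {"name": "legacy-app-vm", "namespace": "demo"},
    "spec": {
        "running": True,
        "template": {
            "spec": {
                "domain": {
                    "devices": {"disks": [{"name": "rootdisk", "disk": {"bus": "virtio"}}]},
                    "resources": {"requests": {"memory": "2Gi", "cpu": "1"}},
                },
                "volumes": [{
                    "name": "rootdisk",
                    # Small demo image commonly used in KubeVirt examples.
                    "containerDisk": {"image": "quay.io/kubevirt/cirros-container-disk-demo"},
                }],
            }
        },
    },
}

def main():
    config.load_kube_config()  # or config.load_incluster_config() when run inside a pod
    api = client.CustomObjectsApi()
    # VirtualMachine is a custom resource, so it goes through the CustomObjectsApi.
    api.create_namespaced_custom_object(
        group="kubevirt.io", version="v1", namespace="demo",
        plural="virtualmachines", body=vm_manifest,
    )

if __name__ == "__main__":
    main()
```

The design point is the one Bell makes in the interview: once the VM is just another Kubernetes object, the same control plane that lifecycle-manages containers can manage the VM fleet, which is the stepping stone for workloads that can't be refactored yet.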

Published Date : Mar 10 2023


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Amelia | PERSON | 0.99+
Amelia Bell | PERSON | 0.99+
John | PERSON | 0.99+
six | QUANTITY | 0.99+
John Furrier | PERSON | 0.99+
yesterday | DATE | 0.99+
Emilia A'Bell | PERSON | 0.99+
John Furry | PERSON | 0.99+
Palo Alto, California | LOCATION | 0.99+
Peter Frey | PERSON | 0.99+
12 months | QUANTITY | 0.99+
International Women's Day | EVENT | 0.99+
two engineers | QUANTITY | 0.99+
two options | QUANTITY | 0.99+
20 clusters | QUANTITY | 0.99+
next year | DATE | 0.99+
two schools | QUANTITY | 0.99+
one | QUANTITY | 0.99+
One | QUANTITY | 0.99+
this year | DATE | 0.98+
today | DATE | 0.98+
20 nodes | QUANTITY | 0.97+
each edge | QUANTITY | 0.96+
Kubernetes | ORGANIZATION | 0.96+
thousands of stores | QUANTITY | 0.93+
end of this year | DATE | 0.93+
2200 nodes | QUANTITY | 0.93+
Cube | ORGANIZATION | 0.93+
10,000 nodes | QUANTITY | 0.93+
Kubernetes | TITLE | 0.92+
both worlds | QUANTITY | 0.91+
up to 10,000 nodes | QUANTITY | 0.88+
thousands of nodes | QUANTITY | 0.87+
Edge | TITLE | 0.84+
26,000 nodes | QUANTITY | 0.81+
Ed me Hora | PERSON | 0.8+
Platform nine | TITLE | 0.75+
hundred nodes | QUANTITY | 0.69+
DevSecOps | TITLE | 0.68+
Platform nine | ORGANIZATION | 0.68+
one thing | QUANTITY | 0.62+
wave | EVENT | 0.57+
Chief Revenue Officer | PERSON | 0.57+
nine | QUANTITY | 0.56+
CRO | PERSON | 0.54+
three | QUANTITY | 0.53+
nine | OTHER | 0.52+
DevOps | TITLE | 0.5+
next | EVENT | 0.49+
platform nine | OTHER | 0.49+
Cube | TITLE | 0.39+

Jay Marshall, Neural Magic | AWS Startup Showcase S3E1


 

(upbeat music) >> Hello, everyone, and welcome to theCUBE's presentation of the "AWS Startup Showcase." This is season three, episode one. The focus of this episode is AI/ML: Top Startups Building Foundational Models, Infrastructure, and AI. It's great topics, super-relevant, and it's part of our ongoing coverage of startups in the AWS ecosystem. I'm your host, John Furrier, with theCUBE. Today, we're excited to be joined by Jay Marshall, VP of Business Development at Neural Magic. Jay, thanks for coming on theCUBE. >> Hey, John, thanks so much. Thanks for having us. >> We had a great CUBE conversation with you guys. This is very much about the company focuses. It's a feature presentation for the "Startup Showcase," and the machine learning at scale is the topic, but in general, it's more, (laughs) and we should call it "Machine Learning and AI: How to Get Started," because everybody is retooling their business. Companies that aren't retooling their business right now with AI first will be out of business, in my opinion. You're seeing massive shift. This is really truly the beginning of the next-gen machine learning AI trend. It's really seeing ChatGPT. Everyone sees that. That went mainstream. But this is just the beginning. This is scratching the surface of this next-generation AI with machine learning powering it, and with all the goodness of cloud, cloud scale, and how horizontally scalable it is. The resources are there. You got the Edge. Everything's perfect for AI 'cause data infrastructure's exploding in value. AI is just the applications. This is a super topic, so what do you guys see in this general area of opportunities right now in the headlines? And I'm sure you guys' phone must be ringing off the hook, metaphorically speaking, or emails and meetings and Zooms. What's going on over there at Neural Magic? >> No, absolutely, and you pretty much nailed most of it. I think that, you know, my background, we've seen for the last 20-plus years. Even just getting enterprise applications kind of built and delivered at scale, obviously, amazing things with AWS and the cloud to help accelerate that. And we just kind of figured out in the last five or so years how to do that productively and efficiently, kind of from an operations perspective. Got development and operations teams. We even came up with DevOps, right? But now, we kind of have this new kind of persona and new workload that developers have to talk to, and then it has to be deployed on those ITOps solutions. And so you pretty much nailed it. Folks are saying, "Well, how do I do this?" These big, generational models or foundational models, as we're calling them, they're great, but enterprises want to do that with their data, on their infrastructure, at scale, at the edge. So for us, yeah, we're helping enterprises accelerate that through optimizing models and then delivering them at scale in a more cost-effective fashion. >> Yeah, and I think one of the things, the benefits of OpenAI we saw, was not only is it open source, then you got also other models that are more proprietary, is that it shows the world that this is really happening, right? It's a whole nother level, and there's also new landscape kind of maps coming out. You got the generative AI, and you got the foundational models, large LLMs. Where do you guys fit into the landscape? Because you guys are in the middle of this. How do you talk to customers when they say, "I'm going down this road. I need help. I'm going to stand this up." 
This new AI infrastructure and applications, where do you guys fit in the landscape? >> Right, and really, the answer is both. I think today, when it comes to a lot of what for some folks would still be considered kind of cutting edge around computer vision and natural language processing, a lot of our optimization tools and our runtime are based around most of the common computer vision and natural language processing models. So your YOLOs, your BERTs, you know, your DistilBERTs and what have you, so we work to help optimize those, which, again, have gotten great performance and great value for customers trying to get those into production. But when you get into the LLMs, and you mentioned some of the open source components there, our research teams have kind of been right in the trenches with those. So kind of the GPT open source equivalent being OPT, being able to actually take, you know, a multi-hundred-billion-parameter model and sparsify that or optimize that down, shaving away a ton of parameters, and being able to run it on smaller infrastructure. So I think the evolution here, you know, all this stuff came out in the last six months in terms of being turned loose into the wild, but we're staying in the trenches with folks so that we can help optimize those as well and not require, again, the heavy compute, the heavy cost, the heavy power consumption as those models evolve as well. So we're staying right in with everybody while they're being built, but trying to get folks into production today with things that help with business value today. >> Jay, I really appreciate you coming on theCUBE, and before we came on camera, you said you just were on a customer call. I know you've got a lot of activity. What specific things are you helping enterprises solve? What kind of problems? Take us through the spectrum from the beginning, people jumping in the deep end of the pool, some people kind of coming in, starting out slow. What's the scale? Can you scope the kind of use cases and problems that are emerging that people are calling you for? >> Absolutely, so I think if I break it down to kind of, like, your startup, or I maybe call 'em AI native to kind of steal from cloud native years ago, that group, it's pretty much, you know, part and parcel for how that group already runs. So if you have a data science team and an ML engineering team, you're building models, you're training models, you're deploying models. You're seeing firsthand the expense of starting to try to do that at scale. So it's really just a pure operational efficiency play. They kind of speak natively to our tools, which we're doing in the open source. So it's really helping, again, with the optimization of the models they've built, and then, again, giving them an alternative to expensive proprietary hardware accelerators to have to run them. Now, on the enterprise side, it varies, right? You have some kind of AI native folks there that already have these teams, but you also have kind of, like, AI curious, right? Like, they want to do it, but they don't really know where to start, and so for there, we actually have an open source toolkit that can help you get into this optimization, and then again, that runtime, that inferencing runtime, purpose-built for CPUs. It allows you to not have to worry, again, about do I have a hardware accelerator available? How do I integrate that into my application stack?
If I don't already know how to build this into my infrastructure, does my ITOps teams, do they know how to do this, and what does that runway look like? How do I cost for this? How do I plan for this? When it's just x86 compute, we've been doing that for a while, right? So it obviously still requires more, but at least it's a little bit more predictable. >> It's funny you mentioned AI native. You know, born in the cloud was a phrase that was out there. Now, you have startups that are born in AI companies. So I think you have this kind of cloud kind of vibe going on. You have lift and shift was a big discussion. Then you had cloud native, kind of in the cloud, kind of making it all work. Is there a existing set of things? People will throw on this hat, and then what's the difference between AI native and kind of providing it to existing stuff? 'Cause we're a lot of people take some of these tools and apply it to either existing stuff almost, and it's not really a lift and shift, but it's kind of like bolting on AI to something else, and then starting with AI first or native AI. >> Absolutely. It's a- >> How would you- >> It's a great question. I think that probably, where I'd probably pull back to kind of allow kind of retail-type scenarios where, you know, for five, seven, nine years or more even, a lot of these folks already have data science teams, you know? I mean, they've been doing this for quite some time. The difference is the introduction of these neural networks and deep learning, right? Those kinds of models are just a little bit of a paradigm shift. So, you know, I obviously was trying to be fun with the term AI native, but I think it's more folks that kind of came up in that neural network world, so it's a little bit more second nature, whereas I think for maybe some traditional data scientists starting to get into neural networks, you have the complexity there and the training overhead, and a lot of the aspects of getting a model finely tuned and hyperparameterization and all of these aspects of it. It just adds a layer of complexity that they're just not as used to dealing with. And so our goal is to help make that easy, and then of course, make it easier to run anywhere that you have just kind of standard infrastructure. >> Well, the other point I'd bring out, and I'd love to get your reaction to, is not only is that a neural network team, people who have been focused on that, but also, if you look at some of the DataOps lately, AIOps markets, a lot of data engineering, a lot of scale, folks who have been kind of, like, in that data tsunami cloud world are seeing, they kind of been in this, right? They're, like, been experiencing that. >> No doubt. I think it's funny the data lake concept, right? And you got data oceans now. Like, the metaphors just keep growing on us, but where it is valuable in terms of trying to shift the mindset, I've always kind of been a fan of some of the naming shift. I know with AWS, they always talk about purpose-built databases. And I always liked that because, you know, you don't have one database that can do everything. Even ones that say they can, like, you still have to do implementation detail differences. So sitting back and saying, "What is my use case, and then which database will I use it for?" I think it's kind of similar here. 
And when you're building those data teams, if you don't have folks that are doing data engineering, kind of that data harvesting, pre-processing, you've got to do all that before a model's even going to care about it. So yeah, it's definitely a central piece of this as well, and again, whether or not you're going to be AI native, as you're making your way, kind of, you know, on that journey, data's definitely a huge component of it. >> Yeah, you would have loved our Supercloud event we had. Talk about naming, and, you know, data meshes were talked about a lot. You're starting to see the control plane layers of data. I think that was the beginning of what I saw as that data infrastructure shift, to be horizontally scalable. So I have to ask you, with Neural Magic, when your customers and the people that are prospects for you guys, they're probably asking a lot of questions, because I think the general thing that we see is, "How do I get started? Which GPU do I use?" I mean, there's a lot of things that are kind of, I won't say technical or targeted towards people who are living in that world, but, like, as the mainstream enterprises come in, they're going to need a playbook. What do you guys see, what do you guys offer your clients when they come in, and what do you recommend? >> Absolutely, and I think where we hook in specifically tends to be on the training side. So again, I've built a model. Now, I want to really optimize that model. And then on the runtime side, when you want to deploy it, you know, we run that optimized model. And so that's where we're able to provide value. We even have a labs offering in terms of being able to pair up our engineering teams with a customer's engineering teams, and we can actually help with most of that pipeline. So even if it is something where you have a dataset and you want some help in picking a model, you want some help training it, you want some help deploying that, we can actually help there as well. You know, there's also a great partner ecosystem out there, like a lot of folks even in the "Startup Showcase" here, that extend beyond into kind of your earlier comment around data engineering or downstream ITOps or the all-up MLOps umbrella. So we can absolutely engage with our labs, and then, of course, you know, again, partners, which are always kind of key to this. So you are spot on. I think what's happened with this, they talk about a hockey stick. This is almost like a flat wall now with the rate of innovation right now in this space. And so we do have a lot of folks wanting to go straight from curious to native. And so that's definitely where the partner ecosystem comes in so hard, 'cause there just isn't anybody or any teams out there that literally do everything from "Here's my blank database" to "I want an API that does all the stuff," right? Like, that's a big chunk, but we can definitely help with the model to delivery piece.
So he's a 20-year professor at MIT. Actually, he was doing a lot of work on kind of multicore processing before there were even physical multicores, and actually even did a stint in computational neurobiology in the 2010s, and the impetus for this whole technology, has a great talk on YouTube about it, where he talks about the fact that his work there, he kind of realized that the way neural networks encode and how they're executed by kind of ramming data layer by layer through these kind of HPC-style platforms, actually was not analogous to how the human brain actually works. So we're on one side, we're building neural networks, and we're trying to emulate neurons. We're not really executing them that way. So our team, which one of the co-founders, also an ex-MIT, that was kind of the birth of why can't we leverage this super-performance CPU platform, which has those really fat, fast caches attached to each core, and actually start to find a way to break that model down in a way that I can execute things in parallel, not having to do them sequentially? So it is a lot of amazing, like, talks and stuff that show kind of the magic, if you will, a part of the pun of Neural Magic, but that's kind of the foundational layer of all the engineering that we do here. And in terms of how we're able to bring it to reality for customers, I'll give one customer quote where it's a large retailer, and it's a people-counting application. So a very common application. And that customer's actually been able to show literally double the amount of cameras being run with the same amount of compute. So for a one-to-one perspective, two-to-one, business leaders usually like that math, right? So we're able to show pure cost savings, but even performance-wise, you know, we have some of the common models like your ResNets and your YOLOs, where we can actually even perform better than hardware-accelerated solutions. So we're trying to do, I need to just dumb it down to better, faster, cheaper, but from a commodity perspective, that's where we're accelerating. >> That's not a bad business model. Make things easier to use, faster, and reduce the steps it takes to do stuff. So, you know, that's always going to be a good market. Now, you guys have DeepSparse, which we've talked about on our CUBE conversation prior to this interview, delivers ML models through the software so the hardware allows for a decoupling, right? >> Yep. >> Which is going to drive probably a cost advantage. Also, it's also probably from a deployment standpoint it must be easier. Can you share the benefits? Is it a cost side? Is it more of a deployment? What are the benefits of the DeepSparse when you guys decouple the software from the hardware on the ML models? >> No you actually, you hit 'em both 'cause that really is primarily the value. Because ultimately, again, we're so early. And I came from this world in a prior life where I'm doing Java development, WebSphere, WebLogic, Tomcat open source, right? When we were trying to do innovation, we had innovation buckets, 'cause everybody wanted to be on the web and have their app and a browser, right? We got all the money we needed to build something and show, hey, look at the thing on the web, right? But when you had to get in production, that was the challenge. So to what you're speaking to here, in this situation, we're able to show we're just a Python package. 
So whether you just install it on the operating system itself, or we also have a containerized version you can drop on any container orchestration platform, so ECS or EKS on AWS. And so you get all the auto-scaling features. So when you think about that kind of a world where you have everything from real-time inferencing to kind of after hours batch processing inferencing, the fact that you can auto scale that hardware up and down and it's CPU based, so you're paying by the minute instead of maybe paying by the hour at a lower cost shelf, it does everything from pure cost to, again, I can have my standard IT team say, "Hey, here's the Kubernetes in the container," and it just runs on the infrastructure we're already managing. So yeah, operational, cost and again, and many times even performance. (audio warbles) CPUs if I want to. >> Yeah, so that's easier on the deployment too. And you don't have this kind of, you know, blank check kind of situation where you don't know what's on the backend on the cost side. >> Exactly. >> And you control the actual hardware and you can manage that supply chain. >> And keep in mind, exactly. Because the other thing that sometimes gets lost in the conversation, depending on where a customer is, some of these workloads, like, you know, you and I remember a world where even like the roundtrip to the cloud and back was a problem for folks, right? We're used to extremely low latency. And some of these workloads absolutely also adhere to that. But there's some workloads where the latency isn't as important. And we actually even provide the tuning. Now, if we're giving you five milliseconds of latency and you don't need that, you can tune that back. So less CPU, lower cost. Now, throughput and other things come into play. But that's the kind of configurability and flexibility we give for operations. >> All right, so why should I call you if I'm a customer or prospect Neural Magic, what problem do I have or when do I know I need you guys? When do I call you in and what does my environment look like? When do I know? What are some of the signals that would tell me that I need Neural Magic? >> No, absolutely. So I think in general, any neural network, you know, the process I mentioned before called sparcification, it's, you know, an optimization process that we specialize in. Any neural network, you know, can be sparcified. So I think if it's a deep-learning neural network type model. If you're trying to get AI into production, you have cost concerns even performance-wise. I certainly hate to be too generic and say, "Hey, we'll talk to everybody." But really in this world right now, if it's a neural network, it's something where you're trying to get into production, you know, we are definitely offering, you know, kind of an at-scale performant deployable solution for deep learning models. >> So neural network you would define as what? Just devices that are connected that need to know about each other? What's the state-of-the-art current definition of neural network for customers that may think they have a neural network or might not know they have a neural network architecture? What is that definition for neural network? >> That's a great question. So basically, machine learning models that fall under this kind of category, you hear about transformers a lot, or I mentioned about YOLO, the YOLO family of computer vision models, or natural language processing models like BERT. 
If you have a data science team or even developers, some even regular, I used to call myself a nine to five developer 'cause I worked in the enterprise, right? So like, hey, we found a new open source framework, you know, I used to use Spring back in the day and I had to go figure it out. There's developers that are pulling these models down and they're figuring out how to get 'em into production, okay? So I think all of those kinds of situations, you know, if it's a machine learning model of the deep learning variety that's, you know, really specifically where we shine. >> Okay, so let me pretend I'm a customer for a minute. I have all these videos, like all these transcripts, I have all these people that we've interviewed, CUBE alumnis, and I say to my team, "Let's AI-ify, sparcify theCUBE." >> Yep. >> What do I do? I mean, do I just like, my developers got to get involved and they're going to be like, "Well, how do I upload it to the cloud? Do I use a GPU?" So there's a thought process. And I think a lot of companies are going through that example of let's get on this AI, how can it help our business? >> Absolutely. >> What does that progression look like? Take me through that example. I mean, I made up theCUBE example up, but we do have a lot of data. We have large data models and we have people and connect to the internet and so we kind of seem like there's a neural network. I think every company might have a neural network in place. >> Well, and I was going to say, I think in general, you all probably do represent even the standard enterprise more than most. 'Cause even the enterprise is going to have a ton of video content, a ton of text content. So I think it's a great example. So I think that that kind of sea or I'll even go ahead and use that term data lake again, of data that you have, you're probably going to want to be setting up kind of machine learning pipelines that are going to be doing all of the pre-processing from kind of the raw data to kind of prepare it into the format that say a YOLO would actually use or let's say BERT for natural language processing. So you have all these transcripts, right? So we would do a pre-processing path where we would create that into the file format that BERT, the machine learning model would know how to train off of. So that's kind of all the pre-processing steps. And then for training itself, we actually enable what's called sparse transfer learning. So that's transfer learning is a very popular method of doing training with existing models. So we would be able to retrain that BERT model with your transcript data that we have now done the pre-processing with to get it into the proper format. And now we have a BERT natural language processing model that's been trained on your data. And now we can deploy that onto DeepSparse runtime so that now you can ask that model whatever questions, or I should say pass, you're not going to ask it those kinds of questions ChatGPT, although we can do that too. But you're going to pass text through the BERT model and it's going to give you answers back. It could be things like sentiment analysis or text classification. You just call the model, and now when you pass text through it, you get the answers better, faster or cheaper. I'll use that reference again. >> Okay, we can create a CUBE bot to give us questions on the fly from the the AI bot, you know, from our previous guests. >> Well, and I will tell you using that as an example. 
So I had mentioned OPT before, kind of the open source version of ChatGPT. So, you know, typically that requires multiple GPUs to run. So our research team, I may have mentioned earlier, we've been able to sparcify that over 50% already and run it on only a single GPU. And so in that situation, you could train OPT with that corpus of data and do exactly what you say. Actually we could use Alexa, we could use Alexa to actually respond back with voice. How about that? We'll do an API call and we'll actually have an interactive Alexa-enabled bot. >> Okay, we're going to be a customer, let's put it on the list. But this is a great example of what you guys call software delivered AI, a topic we chatted about on theCUBE conversation. This really means this is a developer opportunity. This really is the convergence of the data growth, the restructuring, how data is going to be horizontally scalable, meets developers. So this is an AI developer model going on right now, which is kind of unique. >> It is, John, I will tell you what's interesting. And again, folks don't always think of it this way, you know, the AI magical goodness is now getting pushed in the middle where the developers and IT are operating. And so it again, that paradigm, although for some folks seem obvious, again, if you've been around for 20 years, that whole all that plumbing is a thing, right? And so what we basically help with is when you deploy the DeepSparse runtime, we have a very rich API footprint. And so the developers can call the API, ITOps can run it, or to your point, it's developer friendly enough that you could actually deploy our off-the-shelf models. We have something called the SparseZoo where we actually publish pre-optimized or pre-sparcified models. And so developers could literally grab those right off the shelf with the training they've already had and just put 'em right into their applications and deploy them as containers. So yeah, we enable that for sure as well. >> It's interesting, DevOps was infrastructure as code and we had a last season, a series on data as code, which we kind of coined. This is data as code. This is a whole nother level of opportunity where developers just want to have programmable data and apps with AI. This is a whole new- >> Absolutely. >> Well, absolutely great, great stuff. Our news team at SiliconANGLE and theCUBE said you guys had a little bit of a launch announcement you wanted to make here on the "AWS Startup Showcase." So Jay, you have something that you want to launch here? >> Yes, and thank you John for teeing me up. So I'm going to try to put this in like, you know, the vein of like an AWS, like main stage keynote launch, okay? So we're going to try this out. So, you know, a lot of our product has obviously been built on top of x86. I've been sharing that the past 15 minutes or so. And with that, you know, we're seeing a lot of acceleration for folks wanting to run on commodity infrastructure. But we've had customers and prospects and partners tell us that, you know, ARM and all of its kind of variance are very compelling, both cost performance-wise and also obviously with Edge. And wanted to know if there was anything we could do from a runtime perspective with ARM. And so we got the work and, you know, it's a hard problem to solve 'cause the instructions set for ARM is very different than the instruction set for x86, and our deep tensor column technology has to be able to work with that lower level instruction spec. 
But working really hard, the engineering team's been at it and we are happy to announce here at the "AWS Startup Showcase," that DeepSparse inference now has, or inference runtime now has support for AWS Graviton instances. So it's no longer just x86, it is also ARM and that obviously also opens up the door to Edge and further out the stack so that optimize once run anywhere, we're not going to open up. So it is an early access. So if you go to neuralmagic.com/graviton, you can sign up for early access, but we're excited to now get into the ARM side of the fence as well on top of Graviton. >> That's awesome. Our news team is going to jump on that news. We'll get it right up. We get a little scoop here on the "Startup Showcase." Jay Marshall, great job. That really highlights the flexibility that you guys have when you decouple the software from the hardware. And again, we're seeing open source driving a lot more in AI ops now with with machine learning and AI. So to me, that makes a lot of sense. And congratulations on that announcement. Final minute or so we have left, give a summary of what you guys are all about. Put a plug in for the company, what you guys are looking to do. I'm sure you're probably hiring like crazy. Take the last few minutes to give a plug for the company and give a summary. >> No, I appreciate that so much. So yeah, joining us out neuralmagic.com, you know, part of what we didn't spend a lot of time here, our optimization tools, we are doing all of that in the open source. It's called SparseML and I mentioned SparseZoo briefly. So we really want the data scientists community and ML engineering community to join us out there. And again, the DeepSparse runtime, it's actually free to use for trial purposes and for personal use. So you can actually run all this on your own laptop or on an AWS instance of your choice. We are now live in the AWS marketplace. So push button, deploy, come try us out and reach out to us on neuralmagic.com. And again, sign up for the Graviton early access. >> All right, Jay Marshall, Vice President of Business Development Neural Magic here, talking about performant, cost effective machine learning at scale. This is season three, episode one, focusing on foundational models as far as building data infrastructure and AI, AI native. I'm John Furrier with theCUBE. Thanks for watching. (bright upbeat music)
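To make the workflow Jay describes concrete (pulling a pre-sparsified model off the SparseZoo and running it on the DeepSparse runtime on ordinary CPUs), here is a minimal sketch. It is an illustration only: the Pipeline call reflects the publicly documented deepsparse Python package, and the specific SparseZoo stub is a hypothetical placeholder rather than something confirmed in the interview.

```python
# Minimal sketch of the off-the-shelf flow described above: grab a
# pre-sparsified BERT sentiment model and run it on the DeepSparse runtime.
# The task name, Pipeline.create signature, and zoo stub are assumptions
# based on the public deepsparse package, not taken from the episode.
from deepsparse import Pipeline

# Hypothetical SparseZoo stub for a pruned and quantized BERT sentiment model.
MODEL_STUB = (
    "zoo:nlp/sentiment_analysis/bert-base/pytorch/huggingface/sst2/"
    "pruned90_quant-none"
)

# Build a sentiment-analysis pipeline backed by the sparse model on CPU.
sentiment = Pipeline.create(task="sentiment-analysis", model_path=MODEL_STUB)

# Pass transcript text through the model, as discussed in the conversation.
print(sentiment(sequences=["theCUBE interviews are a great source of signal."]))
```

The same pattern would apply to other tasks such as text classification or question answering by swapping the task name and model stub.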

Published Date : Mar 9 2023

SUMMARY :

of the "AWS Startup Showcase." Thanks for having us. and the machine learning and the cloud to help accelerate that. and you got the foundational So kind of the GPT open deep end of the pool, that group, it's pretty much, you know, So I think you have this kind It's a- and a lot of the aspects of and I'd love to get your reaction to, And I always liked that because, you know, that are prospects for you guys, and you want some help in picking a model, Talk about what you guys have that show kind of the magic, if you will, and reduce the steps it takes to do stuff. when you guys decouple the the fact that you can auto And you don't have this kind of, you know, the actual hardware and you and you don't need that, neural network, you know, of situations, you know, CUBE alumnis, and I say to my team, and they're going to be like, and connect to the internet and it's going to give you answers back. you know, from our previous guests. and do exactly what you say. of what you guys call enough that you could actually and we had a last season, that you want to launch here? And so we got the work and, you know, flexibility that you guys have So you can actually run Vice President of Business

Robert Nishihara, Anyscale | AWS Startup Showcase S3 E1


 

(upbeat music) >> Hello everyone. Welcome to theCube's presentation of the "AWS Startup Showcase." The topic this episode is AI and machine learning, top startups building foundational model infrastructure. This is season three, episode one of the ongoing series covering exciting startups from the AWS ecosystem. And this time we're talking about AI and machine learning. I'm your host, John Furrier. I'm excited I'm joined today by Robert Nishihara, who's the co-founder and CEO of a hot startup called Anyscale. He's here to talk about Ray, the open source project, Anyscale's infrastructure for foundation as well. Robert, thank you for joining us today. >> Yeah, thanks so much as well. >> I've been following your company since the founding pre pandemic and you guys really had a great vision scaled up and in a perfect position for this big wave that we all see with ChatGPT and OpenAI that's gone mainstream. Finally, AI has broken out through the ropes and now gone mainstream, so I think you guys are really well positioned. I'm looking forward to to talking with you today. But before we get into it, introduce the core mission for Anyscale. Why do you guys exist? What is the North Star for Anyscale? >> Yeah, like you mentioned, there's a tremendous amount of excitement about AI right now. You know, I think a lot of us believe that AI can transform just every different industry. So one of the things that was clear to us when we started this company was that the amount of compute needed to do AI was just exploding. Like to actually succeed with AI, companies like OpenAI or Google or you know, these companies getting a lot of value from AI, were not just running these machine learning models on their laptops or on a single machine. They were scaling these applications across hundreds or thousands or more machines and GPUs and other resources in the Cloud. And so to actually succeed with AI, and this has been one of the biggest trends in computing, maybe the biggest trend in computing in, you know, in recent history, the amount of compute has been exploding. And so to actually succeed with that AI, to actually build these scalable applications and scale the AI applications, there's a tremendous software engineering lift to build the infrastructure to actually run these scalable applications. And that's very hard to do. So one of the reasons many AI projects and initiatives fail is that, or don't make it to production, is the need for this scale, the infrastructure lift, to actually make it happen. So our goal here with Anyscale and Ray, is to make that easy, is to make scalable computing easy. So that as a developer or as a business, if you want to do AI, if you want to get value out of AI, all you need to know is how to program on your laptop. Like, all you need to know is how to program in Python. And if you can do that, then you're good to go. Then you can do what companies like OpenAI or Google do and get value out of machine learning. >> That programming example of how easy it is with Python reminds me of the early days of Cloud, when infrastructure as code was talked about was, it was just code the infrastructure programmable. That's super important. That's what AI people wanted, first program AI. That's the new trend. And I want to understand, if you don't mind explaining, the relationship that Anyscale has to these foundational models and particular the large language models, also called LLMs, was seen with like OpenAI and ChatGPT. 
Before you get into the relationship that you have with them, can you explain why the hype around foundational models? Why are people going crazy over foundational models? What is it and why is it so important? >> Yeah, so foundational models and foundation models are incredibly important because they enable businesses and developers to get value out of machine learning, to use machine learning off the shelf with these large models that have been trained on tons of data and that are useful out of the box. And then, of course, you know, as a business or as a developer, you can take those foundational models and repurpose them or fine tune them or adapt them to your specific use case and what you want to achieve. But it's much easier to do that than to train them from scratch. And I think there are three, for people to actually use foundation models, there are three main types of workloads or problems that need to be solved. One is training these foundation models in the first place, like actually creating them. The second is fine tuning them and adapting them to your use case. And the third is serving them and actually deploying them. Okay, so Ray and Anyscale are used for all of these three different workloads. Companies like OpenAI or Cohere that train large language models. Or open source versions like GPTJ are done on top of Ray. There are many startups and other businesses that fine tune, that, you know, don't want to train the large underlying foundation models, but that do want to fine tune them, do want to adapt them to their purposes, and build products around them and serve them, those are also using Ray and Anyscale for that fine tuning and that serving. And so the reason that Ray and Anyscale are important here is that, you know, building and using foundation models requires a huge scale. It requires a lot of data. It requires a lot of compute, GPUs, TPUs, other resources. And to actually take advantage of that and actually build these scalable applications, there's a lot of infrastructure that needs to happen under the hood. And so you can either use Ray and Anyscale to take care of that and manage the infrastructure and solve those infrastructure problems. Or you can build the infrastructure and manage the infrastructure yourself, which you can do, but it's going to slow your team down. It's going to, you know, many of the businesses we work with simply don't want to be in the business of managing infrastructure and building infrastructure. They want to focus on product development and move faster. >> I know you got a keynote presentation we're going to go to in a second, but I think you hit on something I think is the real tipping point, doing it yourself, hard to do. These are things where opportunities are and the Cloud did that with data centers. Turned a data center and made it an API. The heavy lifting went away and went to the Cloud so people could be more creative and build their product. In this case, build their creativity. Is that kind of what's the big deal? Is that kind of a big deal happening that you guys are taking the learnings and making that available so people don't have to do that? >> That's exactly right. So today, if you want to succeed with AI, if you want to use AI in your business, infrastructure work is on the critical path for doing that. To do AI, you have to build infrastructure. You have to figure out how to scale your applications. That's going to change. 
We're going to get to the point, and you know, with Ray and Anyscale, we're going to remove the infrastructure from the critical path so that as a developer or as a business, all you need to focus on is your application logic, what you want the the program to do, what you want your application to do, how you want the AI to actually interface with the rest of your product. Now the way that will happen is that Ray and Anyscale will still, the infrastructure work will still happen. It'll just be under the hood and taken care of by Ray in Anyscale. And so I think something like this is really necessary for AI to reach its potential, for AI to have the impact and the reach that we think it will, you have to make it easier to do. >> And just for clarification to point out, if you don't mind explaining the relationship of Ray and Anyscale real quick just before we get into the presentation. >> So Ray is an open source project. We created it. We were at Berkeley doing machine learning. We started Ray so that, in order to provide an easy, a simple open source tool for building and running scalable applications. And Anyscale is the managed version of Ray, basically we will run Ray for you in the Cloud, provide a lot of tools around the developer experience and managing the infrastructure and providing more performance and superior infrastructure. >> Awesome. I know you got a presentation on Ray and Anyscale and you guys are positioning as the infrastructure for foundational models. So I'll let you take it away and then when you're done presenting, we'll come back, I'll probably grill you with a few questions and then we'll close it out so take it away. >> Robert: Sounds great. So I'll say a little bit about how companies are using Ray and Anyscale for foundation models. The first thing I want to mention is just why we're doing this in the first place. And the underlying observation, the underlying trend here, and this is a plot from OpenAI, is that the amount of compute needed to do machine learning has been exploding. It's been growing at something like 35 times every 18 months. This is absolutely enormous. And other people have written papers measuring this trend and you get different numbers. But the point is, no matter how you slice and dice it, it' a astronomical rate. Now if you compare that to something we're all familiar with, like Moore's Law, which says that, you know, the processor performance doubles every roughly 18 months, you can see that there's just a tremendous gap between the needs, the compute needs of machine learning applications, and what you can do with a single chip, right. So even if Moore's Law were continuing strong and you know, doing what it used to be doing, even if that were the case, there would still be a tremendous gap between what you can do with the chip and what you need in order to do machine learning. And so given this graph, what we've seen, and what has been clear to us since we started this company, is that doing AI requires scaling. There's no way around it. It's not a nice to have, it's really a requirement. And so that led us to start Ray, which is the open source project that we started to make it easy to build these scalable Python applications and scalable machine learning applications. And since we started the project, it's been adopted by a tremendous number of companies. 
Companies like OpenAI, which use Ray to train their large models like ChatGPT, companies like Uber, which run all of their deep learning and classical machine learning on top of Ray, companies like Shopify or Spotify or Instacart or Lyft or Netflix, ByteDance, which use Ray for their machine learning infrastructure. Companies like Ant Group, which makes Alipay, you know, they use Ray across the board for fraud detection, for online learning, for detecting money laundering, you know, for graph processing, stream processing. Companies like Amazon, you know, run Ray at a tremendous scale and just petabytes of data every single day. And so the project has seen just enormous adoption since, over the past few years. And one of the most exciting use cases is really providing the infrastructure for building training, fine tuning, and serving foundation models. So I'll say a little bit about, you know, here are some examples of companies using Ray for foundation models. Cohere trains large language models. OpenAI also trains large language models. You can think about the workloads required there are things like supervised pre-training, also reinforcement learning from human feedback. So this is not only the regular supervised learning, but actually more complex reinforcement learning workloads that take human input about what response to a particular question, you know is better than a certain other response. And incorporating that into the learning. There's open source versions as well, like GPTJ also built on top of Ray as well as projects like Alpa coming out of UC Berkeley. So these are some of the examples of exciting projects in organizations, training and creating these large language models and serving them using Ray. Okay, so what actually is Ray? Well, there are two layers to Ray. At the lowest level, there's the core Ray system. This is essentially low level primitives for building scalable Python applications. Things like taking a Python function or a Python class and executing them in the cluster setting. So Ray core is extremely flexible and you can build arbitrary scalable applications on top of Ray. So on top of Ray, on top of the core system, what really gives Ray a lot of its power is this ecosystem of scalable libraries. So on top of the core system you have libraries, scalable libraries for ingesting and pre-processing data, for training your models, for fine tuning those models, for hyper parameter tuning, for doing batch processing and batch inference, for doing model serving and deployment, right. And a lot of the Ray users, the reason they like Ray is that they want to run multiple workloads. They want to train and serve their models, right. They want to load their data and feed that into training. And Ray provides common infrastructure for all of these different workloads. So this is a little overview of what Ray, the different components of Ray. So why do people choose to go with Ray? I think there are three main reasons. The first is the unified nature. The fact that it is common infrastructure for scaling arbitrary workloads, from data ingest to pre-processing to training to inference and serving, right. This also includes the fact that it's future proof. AI is incredibly fast moving. And so many people, many companies that have built their own machine learning infrastructure and standardized on particular workflows for doing machine learning have found that their workflows are too rigid to enable new capabilities. 
If they want to do reinforcement learning, if they want to use graph neural networks, they don't have a way of doing that with their standard tooling. And so Ray, being future proof and being flexible and general gives them that ability. Another reason people choose Ray in Anyscale is the scalability. This is really our bread and butter. This is the reason, the whole point of Ray, you know, making it easy to go from your laptop to running on thousands of GPUs, making it easy to scale your development workloads and run them in production, making it easy to scale, you know, training to scale data ingest, pre-processing and so on. So scalability and performance, you know, are critical for doing machine learning and that is something that Ray provides out of the box. And lastly, Ray is an open ecosystem. You can run it anywhere. You can run it on any Cloud provider. Google, you know, Google Cloud, AWS, Asure. You can run it on your Kubernetes cluster. You can run it on your laptop. It's extremely portable. And not only that, it's framework agnostic. You can use Ray to scale arbitrary Python workloads. You can use it to scale and it integrates with libraries like TensorFlow or PyTorch or JAX or XG Boost or Hugging Face or PyTorch Lightning, right, or Scikit-learn or just your own arbitrary Python code. It's open source. And in addition to integrating with the rest of the machine learning ecosystem and these machine learning frameworks, you can use Ray along with all of the other tooling in the machine learning ecosystem. That's things like weights and biases or ML flow, right. Or you know, different data platforms like Databricks, you know, Delta Lake or Snowflake or tools for model monitoring for feature stores, all of these integrate with Ray. And that's, you know, Ray provides that kind of flexibility so that you can integrate it into the rest of your workflow. And then Anyscale is the scalable compute platform that's built on top, you know, that provides Ray. So Anyscale is a managed Ray service that runs in the Cloud. And what Anyscale does is it offers the best way to run Ray. And if you think about what you get with Anyscale, there are fundamentally two things. One is about moving faster, accelerating the time to market. And you get that by having the managed service so that as a developer you don't have to worry about managing infrastructure, you don't have to worry about configuring infrastructure. You also, it provides, you know, optimized developer workflows. Things like easily moving from development to production, things like having the observability tooling, the debug ability to actually easily diagnose what's going wrong in a distributed application. So things like the dashboards and the other other kinds of tooling for collaboration, for monitoring and so on. And then on top of that, so that's the first bucket, developer productivity, moving faster, faster experimentation and iteration. The second reason that people choose Anyscale is superior infrastructure. So this is things like, you know, cost deficiency, being able to easily take advantage of spot instances, being able to get higher GPU utilization, things like faster cluster startup times and auto scaling. Things like just overall better performance and faster scheduling. And so these are the kinds of things that Anyscale provides on top of Ray. It's the managed infrastructure. It's fast, it's like the developer productivity and velocity as well as performance. So this is what I wanted to share about Ray in Anyscale. 
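To ground the keynote's description of Ray core (taking an ordinary Python function or class and executing it across a cluster), here is a minimal sketch using Ray's public @ray.remote API. The toy workload is invented purely for illustration.

```python
# Minimal sketch of the Ray core primitives described in the keynote:
# ordinary Python functions become distributed tasks and classes become
# actors via @ray.remote. The toy workload here is purely illustrative.
import ray

ray.init()  # connect to an existing cluster, or start a local one

@ray.remote
def preprocess(doc: str) -> str:
    # Stand-in for a real pre-processing step (tokenization, cleaning, ...).
    return doc.lower().strip()

@ray.remote
class Counter:
    """A tiny stateful actor that lives somewhere on the cluster."""
    def __init__(self) -> None:
        self.total = 0

    def add(self, n: int) -> int:
        self.total += n
        return self.total

docs = ["Transcript One ", " Transcript Two"]
cleaned = ray.get([preprocess.remote(d) for d in docs])  # tasks run in parallel
counter = Counter.remote()
print(cleaned, ray.get(counter.add.remote(len(cleaned))))
```

The scalable libraries mentioned in the keynote (data ingest, training, tuning, serving) are built on these same primitives.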
>> John: Awesome. >> Provide that context. But John, I'm curious what you think. >> I love it. I love the, so first of all, it's a platform because that's the platform architecture right there. So just to clarify, this is an Anyscale platform, not- >> That's right. >> Tools. So you got tools in the platform. Okay, that's key. Love that managed service. Just curious, you mentioned Python multiple times, is that because of PyTorch and TensorFlow or Python's the most friendly with machine learning or it's because it's very common amongst all developers? >> That's a great question. Python is the language that people are using to do machine learning. So it's the natural starting point. Now, of course, Ray is actually designed in a language agnostic way and there are companies out there that use Ray to build scalable Java applications. But for the most part right now we're focused on Python and being the best way to build these scalable Python and machine learning applications. But, of course, down the road there always is that potential. >> So if you're slinging Python code out there and you're watching that, you're watching this video, get on Anyscale bus quickly. Also, I just, while you were giving the presentation, I couldn't help, since you mentioned OpenAI, which by the way, congratulations 'cause they've had great scale, I've noticed in their rapid growth 'cause they were the fastest company to the number of users than anyone in the history of the computer industry, so major successor, OpenAI and ChatGPT, huge fan. I'm not a skeptic at all. I think it's just the beginning, so congratulations. But I actually typed into ChatGPT, what are the top three benefits of Anyscale and came up with scalability, flexibility, and ease of use. Obviously, scalability is what you guys are called. >> That's pretty good. >> So that's what they came up with. So they nailed it. Did you have an inside prompt training, buy it there? Only kidding. (Robert laughs) >> Yeah, we hard coded that one. >> But that's the kind of thing that came up really, really quickly if I asked it to write a sales document, it probably will, but this is the future interface. This is why people are getting excited about the foundational models and the large language models because it's allowing the interface with the user, the consumer, to be more human, more natural. And this is clearly will be in every application in the future. >> Absolutely. This is how people are going to interface with software, how they're going to interface with products in the future. It's not just something, you know, not just a chat bot that you talk to. This is going to be how you get things done, right. How you use your web browser or how you use, you know, how you use Photoshop or how you use other products. Like you're not going to spend hours learning all the APIs and how to use them. You're going to talk to it and tell it what you want it to do. And of course, you know, if it doesn't understand it, it's going to ask clarifying questions. You're going to have a conversation and then it'll figure it out. >> This is going to be one of those things, we're going to look back at this time Robert and saying, "Yeah, from that company, that was the beginning of that wave." And just like AWS and Cloud Computing, the folks who got in early really were in position when say the pandemic came. 
So getting in early is a good thing and that's what everyone's talking about is getting in early and playing around, maybe replatforming or even picking one or few apps to refactor with some staff and managed services. So people are definitely jumping in. So I have to ask you the ROI cost question. You mentioned some of those, Moore's Law versus what's going on in the industry. When you look at that kind of scale, the first thing that jumps out at people is, "Okay, I love it. Let's go play around." But what's it going to cost me? Am I going to be tied to certain GPUs? What's the landscape look like from an operational standpoint, from the customer? Are they locked in and the benefit was flexibility, are you flexible to handle any Cloud? What is the customers, what are they looking at? Basically, that's my question. What's the customer looking at? >> Cost is super important here and many of the companies, I mean, companies are spending a huge amount on their Cloud computing, on AWS, and on doing AI, right. And I think a lot of the advantage of Anyscale, what we can provide here is not only better performance, but cost efficiency. Because if we can run something faster and more efficiently, it can also use less resources and you can lower your Cloud spending, right. We've seen companies go from, you know, 20% GPU utilization with their current setup and the current tools they're using to running on Anyscale and getting more like 95, you know, 100% GPU utilization. That's something like a five x improvement right there. So depending on the kind of application you're running, you know, it's a significant cost savings. We've seen companies that have, you know, processing petabytes of data every single day with Ray going from, you know, getting order of magnitude cost savings by switching from what they were previously doing to running their application on Ray. And when you have applications that are spending, you know, potentially $100 million a year and getting a 10 X cost savings is just absolutely enormous. So these are some of the kinds of- >> Data infrastructure is super important. Again, if the customer, if you're a prospect to this and thinking about going in here, just like the Cloud, you got infrastructure, you got the platform, you got SaaS, same kind of thing's going to go on in AI. So I want to get into that, you know, ROI discussion and some of the impact with your customers that are leveraging the platform. But first I hear you got a demo. >> Robert: Yeah, so let me show you, let me give you a quick run through here. So what I have open here is the Anyscale UI. I've started a little Anyscale Workspace. So Workspaces are the Anyscale concept for interactive developments, right. So here, imagine I'm just, you want to have a familiar experience like you're developing on your laptop. And here I have a terminal. It's not on my laptop. It's actually in the cloud running on Anyscale. And I'm just going to kick this off. This is going to train a large language model, so OPT. And it's doing this on 32 GPUs. We've got a cluster here with a bunch of CPU cores, bunch of memory. And as that's running, and by the way, if I wanted to run this on instead of 32 GPUs, 64, 128, this is just a one line change when I launch the Workspace. And what I can do is I can pull up VS code, right. Remember this is the interactive development experience. I can look at the actual code. Here it's using Ray train to train the torch model. 
We've got the training loop and we're saying that each worker gets access to one GPU and four CPU cores. And, of course, as I make the model larger, this is using deep speed, as I make the model larger, I could increase the number of GPUs that each worker gets access to, right. And how that is distributed across the cluster. And if I wanted to run on CPUs instead of GPUs or a different, you know, accelerator type, again, this is just a one line change. And here we're using Ray train to train the models, just taking my vanilla PyTorch model using Hugging Face and then scaling that across a bunch of GPUs. And, of course, if I want to look at the dashboard, I can go to the Ray dashboard. There are a bunch of different visualizations I can look at. I can look at the GPU utilization. I can look at, you know, the CPU utilization here where I think we're currently loading the model and running that actual application to start the training. And some of the things that are really convenient here about Anyscale, both I can get that interactive development experience with VS code. You know, I can look at the dashboards. I can monitor what's going on. It feels, I have a terminal, it feels like my laptop, but it's actually running on a large cluster. And I can, with however many GPUs or other resources that I want. And so it's really trying to combine the best of having the familiar experience of programming on your laptop, but with the benefits, you know, being able to take advantage of all the resources in the Cloud to scale. And it's like when, you know, you're talking about cost efficiency. One of the biggest reasons that people waste money, one of the silly reasons for wasting money is just forgetting to turn off your GPUs. And what you can do here is, of course, things will auto terminate if they're idle. But imagine you go to sleep, I have this big cluster. You can turn it off, shut off the cluster, come back tomorrow, restart the Workspace, and you know, your big cluster is back up and all of your code changes are still there. All of your local file edits. It's like you just closed your laptop and came back and opened it up again. And so this is the kind of experience we want to provide for our users. So that's what I wanted to share with you. >> Well, I think that whole, couple of things, lines of code change, single line of code change, that's game changing. And then the cost thing, I mean human error is a big deal. People pass out at their computer. They've been coding all night or they just forget about it. I mean, and then it's just like leaving the lights on or your water running in your house. It's just, at the scale that it is, the numbers will add up. That's a huge deal. So I think, you know, compute back in the old days, there's no compute. Okay, it's just compute sitting there idle. But you know, data cranking the models is doing, that's a big point. >> Another thing I want to add there about cost efficiency is that we make it really easy to use, if you're running on Anyscale, to use spot instances and these preemptable instances that can just be significantly cheaper than the on-demand instances. And so when we see our customers go from what they're doing before to using Anyscale and they go from not using these spot instances 'cause they don't have the infrastructure around it, the fault tolerance to handle the preemption and things like that, to being able to just check a box and use spot instances and save a bunch of money. 
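The demo walk-through above is easier to picture with a rough sketch of what such a Ray Train setup looks like in code. This is not the actual demo code: it assumes a recent Ray 2.x layout where TorchTrainer and ScalingConfig are importable from ray.train, and the training function body is left as a placeholder for the Hugging Face and DeepSpeed loop Robert mentions.

```python
# Rough sketch (not the demo itself) of scaling a PyTorch training loop with
# Ray Train: each worker gets one GPU and four CPU cores, and scaling from
# 32 workers to 64 or 128 is a one-line change to num_workers.
# Assumes a recent Ray 2.x API layout.
from ray.train import ScalingConfig
from ray.train.torch import TorchTrainer

def train_loop_per_worker(config: dict) -> None:
    # Placeholder for the vanilla PyTorch / Hugging Face (and DeepSpeed)
    # training loop described in the demo.
    ...

trainer = TorchTrainer(
    train_loop_per_worker,
    scaling_config=ScalingConfig(
        num_workers=32,  # the "one-line change" to run on 64, 128, ... workers
        use_gpu=True,
        resources_per_worker={"CPU": 4, "GPU": 1},
    ),
)
result = trainer.fit()
```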
>> You know, this was my whole, my feature article at Reinvent last year when I met with Adam Selipsky, this next gen Cloud is here. I mean, it's not auto scale, it's infrastructure scale. It's agility. It's flexibility. I think this is where the world needs to go. Almost what DevOps did for Cloud and what you were showing me that demo had this whole SRE vibe. And remember Google had site reliability engines to manage all those servers. This is kind of like an SRE vibe for data at scale. I mean, a similar kind of order of magnitude. I mean, I might be a little bit off base there, but how would you explain it? >> It's a nice analogy. I mean, what we are trying to do here is get to the point where developers don't think about infrastructure. Where developers only think about their application logic. And where businesses can do AI, can succeed with AI, and build these scalable applications, but they don't have to build, you know, an infrastructure team. They don't have to develop that expertise. They don't have to invest years in building their internal machine learning infrastructure. They can just focus on the Python code, on their application logic, and run the stuff out of the box. >> Awesome. Well, I appreciate the time. Before we wrap up here, give a plug for the company. I know you got a couple websites. Again, go, Ray's got its own website. You got Anyscale. You got an event coming up. Give a plug for the company looking to hire. Put a plug in for the company. >> Yeah, absolutely. Thank you. So first of all, you know, we think AI is really going to transform every industry and the opportunity is there, right. We can be the infrastructure that enables all of that to happen, that makes it easy for companies to succeed with AI, and get value out of AI. Now we have, if you're interested in learning more about Ray, Ray has been emerging as the standard way to build scalable applications. Our adoption has been exploding. I mentioned companies like OpenAI using Ray to train their models. But really across the board companies like Netflix and Cruise and Instacart and Lyft and Uber, you know, just among tech companies. It's across every industry. You know, gaming companies, agriculture, you know, farming, robotics, drug discovery, you know, FinTech, we see it across the board. And all of these companies can get value out of AI, can really use AI to improve their businesses. So if you're interested in learning more about Ray and Anyscale, we have our Ray Summit coming up in September. This is going to highlight a lot of the most impressive use cases and stories across the industry. And if your business, if you want to use LLMs, you want to train these LLMs, these large language models, you want to fine tune them with your data, you want to deploy them, serve them, and build applications and products around them, give us a call, talk to us. You know, we can really take the infrastructure piece, you know, off the critical path and make that easy for you. So that's what I would say. And, you know, like you mentioned, we're hiring across the board, you know, engineering, product, go-to-market, and it's an exciting time. >> Robert Nishihara, co-founder and CEO of Anyscale, congratulations on a great company you've built and continuing to iterate on and you got growth ahead of you, you got a tailwind. I mean, the AI wave is here. 
I think OpenAI and ChatGPT, a customer of yours, have really opened up the mainstream visibility into this new generation of applications, user interface, roll of data, large scale, how to make that programmable so we're going to need that infrastructure. So thanks for coming on this season three, episode one of the ongoing series of the hot startups. In this case, this episode is the top startups building foundational model infrastructure for AI and ML. I'm John Furrier, your host. Thanks for watching. (upbeat music)
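Serving is the third foundation-model workload Robert calls out, alongside training and fine-tuning. As a hedged illustration only (no serving code is shown in the episode), Ray Serve's public deployment API wraps a model behind an HTTP endpoint roughly like this; the Summarizer class and its stub model are hypothetical.

```python
# Hedged sketch of the serving workload using Ray Serve's deployment API.
# The Summarizer class and its stand-in "model" are hypothetical; a real
# deployment would load a fine-tuned foundation model instead.
from ray import serve
from starlette.requests import Request

@serve.deployment
class Summarizer:
    def __init__(self) -> None:
        # Stand-in for loading a fine-tuned model into the replica.
        self.model = lambda text: text[:80] + "..."

    async def __call__(self, request: Request) -> str:
        payload = await request.json()
        return self.model(payload["text"])

# Deploy the application; Ray Serve manages replicas, routing, and scaling.
serve.run(Summarizer.bind())
```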

Published Date : Mar 9 2023

SUMMARY :

Robert Nishihara, co-founder and CEO of Anyscale, joins John Furrier to discuss Ray, the open source project for building scalable Python and machine learning applications, and Anyscale, the managed Ray service. He explains why training, fine-tuning, and serving foundation models all demand massive scale, points to adopters such as OpenAI, Uber, Shopify, and Ant Group, and shows how Anyscale adds developer productivity and infrastructure efficiency on top, including a live demo of training an OPT model across 32 GPUs, cost savings from spot instances and higher GPU utilization, and a reminder that Ray Summit is coming in September.


Lena Smart & Tara Hernandez, MongoDB | International Women's Day


 

(upbeat music) >> Hello and welcome to theCube's coverage of International Women's Day. I'm John Furrier, your host of "theCUBE." We've got great two remote guests coming into our Palo Alto Studios, some tech athletes, as we say, people that've been in the trenches, years of experience, Lena Smart, CISO at MongoDB, Cube alumni, and Tara Hernandez, VP of Developer Productivity at MongoDB as well. Thanks for coming in to this program and supporting our efforts today. Thanks so much. >> Thanks for having us. >> Yeah, everyone talk about the journey in tech, where it all started. Before we get there, talk about what you guys are doing at MongoDB specifically. MongoDB is kind of gone the next level as a platform. You have your own ecosystem, lot of developers, very technical crowd, but it's changing the business transformation. What do you guys do at Mongo? We'll start with you, Lena. >> So I'm the CISO, so all security goes through me. I like to say, well, I don't like to say, I'm described as the ones throat to choke. So anything to do with security basically starts and ends with me. We do have a fantastic Cloud engineering security team and a product security team, and they don't report directly to me, but obviously we have very close relationships. I like to keep that kind of church and state separate and I know I've spoken about that before. And we just recently set up a physical security team with an amazing gentleman who left the FBI and he came to join us after 26 years for the agency. So, really starting to look at the physical aspects of what we offer as well. >> I interviewed a CISO the other day and she said, "Every day is day zero for me." Kind of goofing on the Amazon Day one thing, but Tara, go ahead. Tara, go ahead. What's your role there, developer productivity? What are you focusing on? >> Sure. Developer productivity is kind of the latest description for things that we've described over the years as, you know, DevOps oriented engineering or platform engineering or build and release engineering development infrastructure. It's all part and parcel, which is how do we actually get our code from developer to customer, you know, and all the mechanics that go into that. It's been something I discovered from my first job way back in the early '90s at Borland. And the art has just evolved enormously ever since, so. >> Yeah, this is a very great conversation both of you guys, right in the middle of all the action and data infrastructures changing, exploding, and involving big time AI and data tsunami and security never stops. Well, let's get into, we'll talk about that later, but let's get into what motivated you guys to pursue a career in tech and what were some of the challenges that you faced along the way? >> I'll go first. The fact of the matter was I intended to be a double major in history and literature when I went off to university, but I was informed that I had to do a math or a science degree or else the university would not be paid for. At the time, UC Santa Cruz had a policy that called Open Access Computing. This is, you know, the late '80s, early '90s. And anybody at the university could get an email account and that was unusual at the time if you were, those of us who remember, you used to have to pay for that CompuServe or AOL or, there's another one, I forget what it was called, but if a student at Santa Cruz could have an email account. And because of that email account, I met people who were computer science majors and I'm like, "Okay, I'll try that." 
That seems good. And it was a little bit of a struggle for me, a lot I won't lie, but I can't complain with how it ended up. And certainly once I found my niche, which was development infrastructure, I found my true love and I've been doing it for almost 30 years now. >> Awesome. Great story. Can't wait to ask a few questions on that. We'll go back to that late '80s, early '90s. Lena, your journey, how you got into it. >> So slightly different start. I did not go to university. I had to leave school when I was 16, got a job, had to help support my family. Worked a bunch of various jobs till I was about 21 and then computers became more, I think, I wouldn't say they were ubiquitous, but they were certainly out there. And I'd also been saving up every penny I could earn to buy my own computer and bought an Amstrad 1640, 20 meg hard drive. It rocked. And kind of took that apart, put it back together again, and thought that could be money in this. And so basically just teaching myself about computers any job that I got. 'Cause most of my jobs were like clerical work and secretary at that point. But any job that had a computer in front of that, I would make it my business to go find the guy who did computing 'cause it was always a guy. And I would say, you know, I want to learn how these work. Let, you know, show me. And, you know, I would take my lunch hour and after work and anytime I could with these people and they were very kind with their time and I just kept learning, so yep. >> Yeah, those early days remind me of the inflection point we're going through now. This major C change coming. Back then, if you had a computer, you had to kind of be your own internal engineer to fix things. Remember back on the systems revolution, late '80s, Tara, when, you know, your career started, those were major inflection points. Now we're seeing a similar wave right now, security, infrastructure. It feels like it's going to a whole nother level. At Mongo, you guys certainly see this as well, with this AI surge coming in. A lot more action is coming in. And so there's a lot of parallels between these inflection points. How do you guys see this next wave of change? Obviously, the AI stuff's blowing everyone away. Oh, new user interface. It's been called the browser moment, the mobile iPhone moment, kind of for this generation. There's a lot of people out there who are watching that are young in their careers, what's your take on this? How would you talk to those folks around how important this wave is? >> It, you know, it's funny, I've been having this conversation quite a bit recently in part because, you know, to me AI in a lot of ways is very similar to, you know, back in the '90s when we were talking about bringing in the worldwide web to the forefront of the world, right. And we tended to think in terms of all the optimistic benefits that would come of it. You know, free passing of information, availability to anyone, anywhere. You just needed an internet connection, which back then of course meant a modem. >> John: Not everyone had though. >> Exactly. But what we found in the subsequent years is that human beings are what they are and we bring ourselves to whatever platforms that are there, right. And so, you know, as much as it was amazing to have this freely available HTML based internet experience, it also meant that the negatives came to the forefront quite quickly. And there were ramifications of that. And so to me, when I look at AI, we're already seeing the ramifications to that. 
Yes, are there these amazing, optimistic, wonderful things that can be done? Yes. >> Yeah. >> But we're also human and the bad stuff's going to come out too. And how do we- >> Yeah. >> How do we as an industry, as a community, you know, understand and mitigate those ramifications so that we can benefit more from the positive than the negative. So it is interesting that it comes kind of full circle in really interesting ways. >> Yeah. The underbelly takes place first, gets it in the early adopter mode. Normally industries with, you know, money involved arbitrage, no standards. But we've seen this movie before. Is there hope, Lena, that we can have a more secure environment? >> I would hope so. (Lena laughs) Although depressingly, we've been in this well for 30 years now and we're, at the end of the day, still telling people not to click links on emails. So yeah, that kind of still keeps me awake at night a wee bit. The whole thing about AI, I mean, it's, obviously I am not an expert by any stretch of the imagination in AI. I did read (indistinct) book recently about AI and that was kind of interesting. And I'm just trying to teach myself as much as I can about it to the extent of even buying the "Dummies Guide to AI." Just because, it's actually not a dummies guide. It's actually fairly interesting, but I'm always thinking about it from a security standpoint. So it's kind of my worst nightmare and the best thing that could ever happen in the same dream. You know, you've got this technology where I can ask it a question and you know, it spits out generally a reasonable answer. And my team are working on with Mark Porter our CTO and his team on almost like an incubation of AI link. What would it look like from MongoDB? What's the legal ramifications? 'Cause there will be legal ramifications even though it's the wild, wild west just now, I think. Regulation's going to catch up to us pretty quickly, I would think. >> John: Yeah, yeah. >> And so I think, you know, as long as companies have a seat at the table and governments perhaps don't become too dictatorial over this, then hopefully we'll be in a good place. But we'll see. I think it's a really interest, there's that curse, we're living in interesting times. I think that's where we are. >> It's interesting just to stay on this tech trend for a minute. The standards bodies are different now. Back in the old days there were, you know, IEEE standards, ITF standards. >> Tara: TPC. >> The developers are the new standard. I mean, now you're seeing open source completely different where it was in the '90s to here beginning, that was gen one, some say gen two, but I say gen one, now we're exploding with open source. You have kind of developers setting the standards. If developers like it in droves, it becomes defacto, which then kind of rolls into implementation. >> Yeah, I mean I think if you don't have developer input, and this is why I love working with Tara and her team so much is 'cause they get it. If we don't have input from developers, it's not going to get used. There's going to be ways of of working around it, especially when it comes to security. If they don't, you know, if you're a developer and you're sat at your screen and you don't want to do that particular thing, you're going to find a way around it. You're a smart person. >> Yeah. >> So. >> Developers on the front lines now versus, even back in the '90s, they're like, "Okay, consider the dev's, got a QA team." 
Everything was Waterfall, now it's Cloud, and developers are on the front lines of everything. Tara, I mean, this is where the standards are being met. What's your reaction to that? >> Well, I think it's outstanding. I mean, you know, like I was at Netscape and part of the crowd that released the browser as open source and we founded mozilla.org, right. And that was, you know, in many ways kind of the birth of the modern open source movement beyond what we used to have, what was basically free software foundation was sort of the only game in town. And I think it is so incredibly valuable. I want to emphasize, you know, and pile onto what Lena was saying, it's not just that the developers are having input on a sort of company by company basis. Open source to me is like a checks and balance, where it allows us as a broader community to be able to agree on and enforce certain standards in order to try and keep the technology platforms as accessible as possible. I think Kubernetes is a great example of that, right. If we didn't have Kubernetes, that would've really changed the nature of how we think about container orchestration. But even before that, Linux, right. Linux allowed us as an industry to end the Unix Wars and as someone who was on the front lines of that as well and having to support 42 different operating systems with our product, you know, that was a huge win. And it allowed us to stop arguing about operating systems and start arguing about software or not arguing, but developing it in positive ways. So with, you know, with Kubernetes, with container orchestration, we all agree, okay, that's just how we're going to orchestrate. Now we can build up this huge ecosystem, everybody gets taken along, right. And now it changes the game for what we're defining as business differentials, right. And so when we talk about crypto, that's a little bit harder, but certainly with AI, right, you know, what are the checks and balances that as an industry and as the developers around this, that we can in, you know, enforce to make sure that no one company or no one body is able to overly control how these things are managed, how it's defined. And I think that is only for the benefit in the industry as a whole, particularly when we think about the only other option is it gets regulated in ways that do not involve the people who actually know the details of what they're talking about. >> Regulated and or thrown away or bankrupt or- >> Driven underground. >> Yeah. >> Which would be even worse actually. >> Yeah, that's a really interesting, the checks and balances. I love that call out. And I was just talking with another interview part of the series around women being represented in the 51% ratio. Software is for everybody. So that we believe that open source movement around the collective intelligence of the participants in the industry and independent of gender, this is going to be the next wave. You're starting to see these videos really have impact because there are a lot more leaders now at the table in companies developing software systems and with AI, the aperture increases for applications. And this is the new dynamic. What's your guys view on this dynamic? How does this go forward in a positive way? Is there a certain trajectory you see? For women in the industry? 
>> I mean, I think some of the states are trying to, again, from the government angle, some of the states are trying to force women into the boardroom, for example, California, which can be no bad thing, but I don't know, sometimes I feel a bit iffy about all this kind of forced- >> John: Yeah. >> You know, making, I don't even know how to say it properly so you can cut this part of the interview. (John laughs) >> Tara: Well, and I think that they're >> I'll say it's not organic. >> No, and I think they're already pulling it out, right. It's already been challenged so they're in the process- >> Well, this is the open source angle, Tara, you are getting at it. The change agent is open, right? So to me, the history of the proven model is openness drives transparency drives progress. >> No, it's- >> If you believe that to be true, this could have another impact. >> Yeah, it's so interesting, right. Because if you look at McKinsey Consulting or Boston Consulting or some of the other, I'm blocking on all of the names. There has been a decade or more of research that shows that a non homogeneous employee base, be it gender or ethnicity or whatever, generates more revenue, right? There's dollar signs that can be attached to this, but it's not enough for all companies to want to invest in that way. And it's not enough for all, you know, venture firms or investment firms to grant that seed money or do those seed rounds. I think it's getting better very slowly, but socialization is a much harder thing to overcome over time. Particularly, when you're not just talking about one country like the United States in our case, but around the world. You know, tech centers now exist all over the world, including places that even 10 years ago we might not have expected like Nairobi, right. Which I think is amazing, but you have to factor in the cultural implications of that as well, right. So yes, the openness is important and we have, it's important that we have those voices, but I don't think it's a panacea solution, right. It's just one more piece. I think honestly that one of the most important opportunities has been with Cloud computing and Cloud's been around for a while. So why would I say that? It's because if you think about like everybody holds up the Steve Jobs, Steve Wozniak, back in the '70s, or Sergey and Larry for Google, you know, you had to have access to enough credit card limit to go to Fry's and buy your servers and then access to somebody like Susan Wojcicki to borrow the garage or whatever. But there was still a certain amount of upfrontness that you had to be able to commit to, whereas now, and we've, I think, seen a really good evidence of this being able to lease server resources by the second and have development platforms that you can do on your phone. I mean, for a while I think Africa, that the majority of development happened on mobile devices because there wasn't a sufficient supply chain of laptops yet. And that's no longer true now as far as I know. But like the power that that enables for people who would otherwise be underrepresented in our industry instantly opens it up, right? And so to me that's I think probably the biggest opportunity that we've seen from an industry on how to make more availability in underrepresented representation for entrepreneurship. >> Yeah. >> Something like AI, I think that's actually going to take us backwards if we're not careful. >> Yeah. >> Because of we're reinforcing that socialization. >> Well, also the bias. 
A lot of people commenting on the biases of the large language inherently built in are also problem. Lena, I want you to weigh on this too, because I think the skills question comes up here and I've been advocating that you don't need the pedigree, college pedigree, to get into a certain jobs, you mentioned Cloud computing. I mean, it's been around for you think a long time, but not really, really think about it. The ability to level up, okay, if you're going to join something new and half the jobs in cybersecurity are created in the past year, right? So, you have this what used to be a barrier, your degree, your pedigree, your certification would take years, would be a blocker. Now that's gone. >> Lena: Yeah, it's the opposite. >> That's, in fact, psychology. >> I think so, but the people who I, by and large, who I interview for jobs, they have, I think security people and also I work with our compliance folks and I can't forget them, but let's talk about security just now. I've always found a particular kind of mindset with security folks. We're very curious, not very good at following rules a lot of the time, and we'd love to teach others. I mean, that's one of the big things stem from the start of my career. People were always interested in teaching and I was interested in learning. So it was perfect. And I think also having, you know, strong women leaders at MongoDB allows other underrepresented groups to actually apply to the company 'cause they see that we're kind of talking the talk. And that's been important. I think it's really important. You know, you've got Tara and I on here today. There's obviously other senior women at MongoDB that you can talk to as well. There's a bunch of us. There's not a whole ton of us, but there's a bunch of us. And it's good. It's definitely growing. I've been there for four years now and I've seen a growth in women in senior leadership positions. And I think having that kind of track record of getting really good quality underrepresented candidates to not just interview, but come and join us, it's seen. And it's seen in the industry and people take notice and they're like, "Oh, okay, well if that person's working, you know, if Tara Hernandez is working there, I'm going to apply for that." And that in itself I think can really, you know, reap the rewards. But it's getting started. It's like how do you get your first strong female into that position or your first strong underrepresented person into that position? It's hard. I get it. If it was easy, we would've sold already. >> It's like anything. I want to see people like me, my friends in there. Am I going to be alone? Am I going to be of a group? It's a group psychology. Why wouldn't? So getting it out there is key. Is there skills that you think that people should pay attention to? One's come up as curiosity, learning. What are some of the best practices for folks trying to get into the tech field or that's in the tech field and advancing through? What advice are you guys- >> I mean, yeah, definitely, what I say to my team is within my budget, we try and give every at least one training course a year. And there's so much free stuff out there as well. But, you know, keep learning. And even if it's not right in your wheelhouse, don't pick about it. Don't, you know, take a look at what else could be out there that could interest you and then go for it. You know, what does it take you few minutes each night to read a book on something that might change your entire career? 
You know, be enthusiastic about the opportunities out there. And there's so many opportunities in security. Just so many. >> Tara, what's your advice for folks out there? Tons of stuff to taste, taste test, try things. >> Absolutely. I mean, I always say, you know, my primary qualifications for people, I'm looking for them to be smart and motivated, right. Because the industry changes so quickly. What we're doing now versus what we did even last year versus five years ago, you know, is completely different though themes are certainly the same. You know, we still have to code and we still have to compile that code or package the code and ship the code so, you know, how well can we adapt to these new things instead of creating floppy disks, which was my first job. Five and a quarters, even. The big ones. >> That's old school, OG. There it is. Well done. >> And now it's, you know, containers, you know, (indistinct) image containers. And so, you know, I've gotten a lot of really great success hiring boot campers, you know, career transitioners. Because they bring a lot experience in addition to the technical skills. I think the most important thing is to experiment and figuring out what do you like, because, you know, maybe you are really into security or maybe you're really into like deep level coding and you want to go back, you know, try to go to school to get a degree where you would actually want that level of learning. Or maybe you're a front end engineer, you want to be full stacked. Like there's so many different things, data science, right. Maybe you want to go learn R right. You know, I think it's like figure out what you like because once you find that, that in turn is going to energize you 'cause you're going to feel motivated. I think the worst thing you could do is try to force yourself to learn something that you really could not care less about. That's just the worst. You're going in handicapped. >> Yeah and there's choices now versus when we were breaking into the business. It was like, okay, you software engineer. They call it software engineering, that's all it was. You were that or you were in sales. Like, you know, some sort of systems engineer or sales and now it's,- >> I had never heard of my job when I was in school, right. I didn't even know it was a possibility. But there's so many different types of technical roles, you know, absolutely. >> It's so exciting. I wish I was young again. >> One of the- >> Me too. (Lena laughs) >> I don't. I like the age I am. So one of the things that I did to kind of harness that curiosity is we've set up a security champions programs. About 120, I guess, volunteers globally. And these are people from all different backgrounds and all genders, diversity groups, underrepresented groups, we feel are now represented within this champions program. And people basically give up about an hour or two of their time each week, with their supervisors permission, and we basically teach them different things about security. And we've now had seven full-time people move from different areas within MongoDB into my team as a result of that program. So, you know, monetarily and time, yeah, saved us both. But also we're showing people that there is a path, you know, if you start off in Tara's team, for example, doing X, you join the champions program, you're like, "You know, I'd really like to get into red teaming. That would be so cool." If it fits, then we make that happen. 
And that has been really important for me, especially to give, you know, the women in the underrepresented groups within MongoDB just that window into something they might never have seen otherwise. >> That's a great comment; fit matters. Also, getting access to where you fit is also access to either mentoring or sponsorship or some sort of, at least some navigation. Like what's out there and not being afraid to like, you know, just ask. >> Yeah, we just actually kicked off our big mentor program last week, so I'm the executive sponsor of that. I know Tara is part of it, which is fantastic. >> We'll put a plug in for it. Go ahead. >> Yeah, no, it's amazing. There's, gosh, I don't even know the numbers anymore, but there's a lot of people involved in this and so much so that we've had to set up mentoring groups rather than one-on-one. And I think it was 45% of the mentors are actually male, which is quite incredible for a program called Mentor Her. And then what we want to do in the future is actually create a program called Mentor Them so that it's not, you know, just focused on the female side, and so that we can have other groups represented and, you know, kind of break down those groups a wee bit more and have some more granularity in the offering. >> Tara, talk about mentoring and sponsorship. Open source has been there for a long time. People help each other. It's community-oriented. What's your view of how to work with mentors and sponsors if someone's moving through the ranks? >> You know, one of the things that was really interesting, unfortunately, in some of the earliest open source communities is there was a lot of pervasive misogyny, to be perfectly honest. >> Yeah. >> And one of the important adaptations that we made as an open source community was the idea, the introduction, of codes of conduct. And so when I'm talking to women who are thinking about expanding their skills, I encourage them to join open source communities to have opportunity, even if they're not getting paid for it, you know, to develop their skills, to work with people, to get those code reviews, right. I'm like, "Whatever you join, make sure they have a code of conduct and a good leadership team. It's very important." And there are plenty, right. And then that idea has come into, you know, conferences now. So now conferences have codes of conduct, if they're any good, and maybe not all of them, but most of them, right. And that idea of intentional, healthy culture keeps expanding. >> John: Yeah. >> As a business goal and business differentiator. I mean, I won't lie, when I was recruited to come to MongoDB, the culture that I was able to discern through talking to people, in addition to seeing that there were actually women in senior leadership roles like Lena, like Kayla Nelson, that was a huge win. And so it just builds on momentum. And so now, you know, those of us who are in that are now representing. And so that kind of reinforces, but it all ties together, right. As the open source world goes, particularly for a company like MongoDB, which has an open source product, you know, and our community builds. You know, it's a good thing to be mindful of for us, how we interact with the community and, you know, because that could also become an opportunity for recruiting. >> John: Yeah. >> Right. So we, in addition to people who might become advocates on Mongo's behalf in their own company as a solution for themselves, so. >> You guys have a great, successful company and great leadership there.
I mean, I can't tell you how many times someone's told me "MongoDB doesn't scale. It's going to be dead next year." I mean, I was going back 10 years. It's like, just keeps getting better and better. You guys do a great job. So it's so fun to see the success of developers. Really appreciate you guys coming on the program. Final question, what are you guys excited about to end the segment? We'll give you guys the last word. Lena will start with you and Tara, you can wrap us up. What are you excited about? >> I'm excited to see what this year brings. I think with ChatGPT and its copycats, I think it'll be a very interesting year when it comes to AI and always in the lookout for the authentic deep fakes that we see coming out. So just trying to make people aware that this is a real thing. It's not just pretend. And then of course, our old friend ransomware, let's see where that's going to go. >> John: Yeah. >> And let's see where we get to and just genuine hygiene and housekeeping when it comes to security. >> Excellent. Tara. >> Ah, well for us, you know, we're always constantly trying to up our game from a security perspective in the software development life cycle. But also, you know, what can we do? You know, one interesting application of AI that maybe Google doesn't like to talk about is it is really cool as an addendum to search and you know, how we might incorporate that as far as our learning environment and developer productivity, and how can we enable our developers to be more efficient, productive in their day-to-day work. So, I don't know, there's all kinds of opportunities that we're looking at for how we might improve that process here at MongoDB and then maybe be able to share it with the world. One of the things I love about working at MongoDB is we get to use our own products, right. And so being able to have this interesting document database in order to put information and then maybe apply some sort of AI to get it out again, is something that we may well be looking at, if not this year, then certainly in the coming year. >> Awesome. Lena Smart, the chief information security officer. Tara Hernandez, vice president developer of productivity from MongoDB. Thank you so much for sharing here on International Women's Day. We're going to do this quarterly every year. We're going to do it and then we're going to do quarterly updates. Thank you so much for being part of this program. >> Thank you. >> Thanks for having us. >> Okay, this is theCube's coverage of International Women's Day. I'm John Furrier, your host. Thanks for watching. (upbeat music)

Published Date : Mar 6 2023


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Susan Wojcicki | PERSON | 0.99+
Dave Vellante | PERSON | 0.99+
Lisa Martin | PERSON | 0.99+
Jim | PERSON | 0.99+
Jason | PERSON | 0.99+
Tara Hernandez | PERSON | 0.99+
David Floyer | PERSON | 0.99+
Dave | PERSON | 0.99+
Lena Smart | PERSON | 0.99+
John Troyer | PERSON | 0.99+
Mark Porter | PERSON | 0.99+
Mellanox | ORGANIZATION | 0.99+
Kevin Deierling | PERSON | 0.99+
Marty Lans | PERSON | 0.99+
Tara | PERSON | 0.99+
John | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
Jim Jackson | PERSON | 0.99+
Jason Newton | PERSON | 0.99+
IBM | ORGANIZATION | 0.99+
Daniel Hernandez | PERSON | 0.99+
Dave Winokur | PERSON | 0.99+
Daniel | PERSON | 0.99+
Lena | PERSON | 0.99+
Meg Whitman | PERSON | 0.99+
Telco | ORGANIZATION | 0.99+
Julie Sweet | PERSON | 0.99+
Marty | PERSON | 0.99+
Yaron Haviv | PERSON | 0.99+
Amazon | ORGANIZATION | 0.99+
Western Digital | ORGANIZATION | 0.99+
Kayla Nelson | PERSON | 0.99+
Mike Piech | PERSON | 0.99+
Jeff | PERSON | 0.99+
Dave Volante | PERSON | 0.99+
John Walls | PERSON | 0.99+
Keith Townsend | PERSON | 0.99+
five | QUANTITY | 0.99+
Ireland | LOCATION | 0.99+
Antonio | PERSON | 0.99+
Daniel Laury | PERSON | 0.99+
Jeff Frick | PERSON | 0.99+
Microsoft | ORGANIZATION | 0.99+
six | QUANTITY | 0.99+
Todd Kerry | PERSON | 0.99+
John Furrier | PERSON | 0.99+
$20 | QUANTITY | 0.99+
Mike | PERSON | 0.99+
January 30th | DATE | 0.99+
Meg | PERSON | 0.99+
Mark Little | PERSON | 0.99+
Luke Cerney | PERSON | 0.99+
Peter | PERSON | 0.99+
Jeff Basil | PERSON | 0.99+
Stu Miniman | PERSON | 0.99+
Dan | PERSON | 0.99+
10 | QUANTITY | 0.99+
Allan | PERSON | 0.99+
40 gig | QUANTITY | 0.99+

Adam Wenchel, Arthur.ai | CUBE Conversation


 

(bright upbeat music) >> Hello and welcome to this Cube Conversation. I'm John Furrier, host of theCUBE. We've got a great conversation featuring Arthur AI. I'm your host. I'm excited to have Adam Wenchel who's the Co-Founder and CEO. Thanks for joining us today, appreciate it. >> Yeah, thanks for having me on, John, looking forward to the conversation. >> I got to say, it's been an exciting world in AI or artificial intelligence. Just an explosion of interest kind of in the mainstream with the language models, which people don't really get, but they're seeing the benefits of some of the hype around OpenAI. Which kind of wakes everyone up to, "Oh, I get it now." And then of course the pessimism comes in, all the skeptics are out there. But this breakthrough in generative AI field is just awesome, it's really a shift, it's a wave. We've been calling it probably the biggest inflection point, then the others combined of what this can do from a surge standpoint, applications. I mean, all aspects of what we used to know is the computing industry, software industry, hardware, is completely going to get turbo. So we're totally obviously bullish on this thing. So, this is really interesting. So my first question is, I got to ask you, what's you guys taking? 'Cause you've been doing this, you're in it, and now all of a sudden you're at the beach where the big waves are. What's the explosion of interest is there? What are you seeing right now? >> Yeah, I mean, it's amazing, so for starters, I've been in AI for over 20 years and just seeing this amount of excitement and the growth, and like you said, the inflection point we've hit in the last six months has just been amazing. And, you know, what we're seeing is like people are getting applications into production using LLMs. I mean, really all this excitement just started a few months ago, with ChatGPT and other breakthroughs and the amount of activity and the amount of new systems that we're seeing hitting production already so soon after that is just unlike anything we've ever seen. So it's pretty awesome. And, you know, these language models are just, they could be applied in so many different business contexts and that it's just the amount of value that's being created is again, like unprecedented compared to anything. >> Adam, you know, you've been in this for a while, so it's an interesting point you're bringing up, and this is a good point. I was talking with my friend John Markoff, former New York Times journalist and he was talking about, there's been a lot of work been done on ethics. So there's been, it's not like it's new. It's like been, there's a lot of stuff that's been baking over many, many years and, you know, decades. So now everyone wakes up in the season, so I think that is a key point I want to get into some of your observations. But before we get into it, I want you to explain for the folks watching, just so we can kind of get a definition on the record. What's an LLM, what's a foundational model and what's generative ai? Can you just quickly explain the three things there? >> Yeah, absolutely. So an LLM or a large language model, it's just a large, they would imply a large language model that's been trained on a huge amount of data typically pulled from the internet. And it's a general purpose language model that can be built on top for all sorts of different things, that includes traditional NLP tasks like document classification and sentiment understanding. 
But the thing that's gotten people really excited is it's used for generative tasks. So, you know, asking it to summarize documents or asking it to answer questions. And these aren't new techniques, they've been around for a while, but what's changed is just this new class of models that's based on new architectures. They're just so much more capable that they've gone from sort of science projects to something that's actually incredibly useful in the real world. And there's a number of companies that are making them accessible to everyone so that you can build on top of them. So that's the other big thing is, this kind of access to these models that can power generative tasks has been democratized in the last few months and it's just opening up all these new possibilities. And then the third one you mentioned foundation models is sort of a broader term for the category that includes LLMs, but it's not just language models that are included. So we've actually seen this for a while in the computer vision world. So people have been building on top of computer vision models, pre-trained computer vision models for a while for image classification, object detection, that's something we've had customers doing for three or four years already. And so, you know, like you said, there are antecedents to like, everything that's happened, it's not entirely new, but it does feel like a step change. >> Yeah, I did ask ChatGPT to give me a riveting introduction to you and it gave me an interesting read. If we have time, I'll read it. It's kind of, it's fun, you get a kick out of it. "Ladies and gentlemen, today we're a privileged "to have Adam Wenchel, Founder of Arthur who's going to talk "about the exciting world of artificial intelligence." And then it goes on with some really riveting sentences. So if we have time, I'll share that, it's kind of funny. It was good. >> Okay. >> So anyway, this is what people see and this is why I think it's exciting 'cause I think people are going to start refactoring what they do. And I've been saying this on theCUBE now for about a couple months is that, you know, there's a scene in "Moneyball" where Billy Beane sits down with the Red Sox owner and the Red Sox owner says, "If people aren't rebuilding their teams on your model, "they're going to be dinosaurs." And it reminds me of what's happening right now. And I think everyone that I talk to in the business sphere is looking at this and they're connecting the dots and just saying, if we don't rebuild our business with this new wave, they're going to be out of business because there's so much efficiency, there's so much automation, not like DevOps automation, but like the generative tasks that will free up the intellect of people. Like just the simple things like do an intro or do this for me, write some code, write a countermeasure to a hack. I mean, this is kind of what people are doing. And you mentioned computer vision, again, another huge field where 5G things are coming on, it's going to accelerate. What do you say to people when they kind of are leaning towards that, I need to rethink my business? >> Yeah, it's 100% accurate and what's been amazing to watch the last few months is the speed at which, and the urgency that companies like Microsoft and Google or others are actually racing to, to do that rethinking of their business. And you know, those teams, those companies which are large and haven't always been the fastest moving companies are working around the clock. 
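To make the idea of building on top of a pre-trained foundation model a bit more concrete, here is a minimal sketch of a generative task (summarization) using an off-the-shelf model. It assumes the Hugging Face transformers library and its default summarization checkpoint; the model choice and length limits are illustrative only, and this is not Arthur's tooling.

```python
# A rough sketch of building on top of a pre-trained foundation model for a
# generative task, in the spirit of what Adam describes. Assumes the Hugging
# Face `transformers` package is installed; settings are illustrative.
from transformers import pipeline

def summarize(document: str) -> str:
    # Downloads a general-purpose pre-trained model the first time it runs.
    summarizer = pipeline("summarization")
    result = summarizer(document, max_length=60, min_length=20, do_sample=False)
    return result[0]["summary_text"]

if __name__ == "__main__":
    text = (
        "Large language models are trained on huge amounts of data pulled "
        "from the internet and can be adapted for classification, sentiment "
        "understanding, and generative tasks such as summarization."
    )
    print(summarize(text))
```

The point is simply that the heavy lifting, pre-training on internet-scale data, is already done; the application work happens on top of it.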
And the pace at which they're rolling out LLMs across their suite of products is just phenomenal to watch. And it's not just the big, the large tech companies as well, I mean, we're seeing the number of startups, like we get, every week a couple of new startups get in touch with us for help with their LLMs and you know, there's just a huge amount of venture capital flowing into it right now because everyone realizes the opportunities for transforming like legal and healthcare and content creation in all these different areas is just wide open. And so there's a massive gold rush going on right now, which is amazing. >> And the cloud scale, obviously horizontal scalability of the cloud brings us to another level. We've been seeing data infrastructure since the Hadoop days where big data was coined. Now you're seeing this kind of take fruit, now you have vertical specialization where data shines, large language models all of a set up perfectly for kind of this piece. And you know, as you mentioned, you've been doing it for a long time. Let's take a step back and I want to get into how you started the company, what drove you to start it? Because you know, as an entrepreneur you're probably saw this opportunity before other people like, "Hey, this is finally it, it's here." Can you share the origination story of what you guys came up with, how you started it, what was the motivation and take us through that origination story. >> Yeah, absolutely. So as I mentioned, I've been doing AI for many years. I started my career at DARPA, but it wasn't really until 2015, 2016, my previous company was acquired by Capital One. Then I started working there and shortly after I joined, I was asked to start their AI team and scale it up. And for the first time I was actually doing it, had production models that we were working with, that was at scale, right? And so there was hundreds of millions of dollars of business revenue and certainly a big group of customers who were impacted by the way these models acted. And so it got me hyper-aware of these issues of when you get models into production, it, you know. So I think people who are earlier in the AI maturity look at that as a finish line, but it's really just the beginning and there's this constant drive to make them better, make sure they're not degrading, make sure you can explain what they're doing, if they're impacting people, making sure they're not biased. And so at that time, there really weren't any tools to exist to do this, there wasn't open source, there wasn't anything. And so after a few years there, I really started talking to other people in the industry and there was a really clear theme that this needed to be addressed. And so, I joined with my Co-Founder John Dickerson, who was on the faculty in University of Maryland and he'd been doing a lot of research in these areas. And so we ended up joining up together and starting Arthur. >> Awesome. Well, let's get into what you guys do. Can you explain the value proposition? What are people using you for now? Where's the action? What's the customers look like? What do prospects look like? Obviously you mentioned production, this has been the theme. It's not like people woke up one day and said, "Hey, I'm going to put stuff into production." This has kind of been happening. There's been companies that have been doing this at scale and then yet there's a whole follower model coming on mainstream enterprise and businesses. So there's kind of the early adopters are there now in production. 
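Adam's point above about making sure production models are explainable and not biased can be illustrated with a very small fairness-style check: compare the rate of positive outcomes across groups. The data, column names, and the four-fifths rule-of-thumb threshold below are assumptions for illustration only; real bias auditing goes well beyond a single ratio.

```python
# A hedged sketch of one simple fairness check: compare the model's
# positive-outcome rate across groups. Data and threshold are illustrative.
import pandas as pd

def selection_rate_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()   # closer to 1.0 means more even treatment

if __name__ == "__main__":
    decisions = pd.DataFrame({
        "group":    ["a", "a", "a", "b", "b", "b"],
        "approved": [1, 1, 0, 1, 0, 0],
    })
    ratio = selection_rate_ratio(decisions, "group", "approved")
    if ratio < 0.8:  # common four-fifths rule of thumb
        print(f"WARNING: selection-rate ratio {ratio:.2f} suggests possible bias")
```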
What do you guys do? I mean, 'cause I think about just driving the car off the lot is not, you got to manage operations. I mean, that's a big thing. So what do you guys do? Talk about the value proposition and how you guys make money? >> Yeah, so what we do is, listen, when you go to validate ahead of deploying these models in production, starts at that point, right? So you want to make sure that if you're going to be upgrading a model, if you're going to replacing one that's currently in production, that you've proven that it's going to perform well, that it's going to be perform ethically and that you can explain what it's doing. And then when you launch it into production, traditionally data scientists would spend 25, 30% of their time just manually checking in on their model day-to-day babysitting as we call it, just to make sure that the data hasn't drifted, the model performance hasn't degraded, that a programmer did make a change in an upstream data system. You know, there's all sorts of reasons why the world changes and that can have a real adverse effect on these models. And so what we do is bring the same kind of automation that you have for other kinds of, let's say infrastructure monitoring, application monitoring, we bring that to your AI systems. And that way if there ever is an issue, it's not like weeks or months till you find it and you find it before it has an effect on your P&L and your balance sheet, which is too often before they had tools like Arthur, that was the way they were detected. >> You know, I was talking to Swami at Amazon who I've known for a long time for 13 years and been on theCUBE multiple times and you know, I watched Amazon try to pick up that sting with stage maker about six years ago and so much has happened since then. And he and I were talking about this wave, and I kind of brought up this analogy to how when cloud started, it was, Hey, I don't need a data center. 'Cause when I did my startup that time when Amazon, one of my startups at that time, my choice was put a box in the colo, get all the configuration before I could write over the line of code. So the cloud became the benefit for that and you can stand up stuff quickly and then it grew from there. Here it's kind of the same dynamic, you don't want to have to provision a large language model or do all this heavy lifting. So that seeing companies coming out there saying, you can get started faster, there's like a new way to get it going. So it's kind of like the same vibe of limiting that heavy lifting. >> Absolutely. >> How do you look at that because this seems to be a wave that's going to be coming in and how do you guys help companies who are going to move quickly and start developing? >> Yeah, so I think in the race to this kind of gold rush mentality, race to get these models into production, there's starting to see more sort of examples and evidence that there are a lot of risks that go along with it. Either your model says things, your system says things that are just wrong, you know, whether it's hallucination or just making things up, there's lots of examples. If you go on Twitter and the news, you can read about those, as well as sort of times when there could be toxic content coming out of things like that. And so there's a lot of risks there that you need to think about and be thoughtful about when you're deploying these systems. But you know, you need to balance that with the business imperative of getting these things into production and really transforming your business. 
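As a concrete illustration of the "babysitting" Adam describes automating, checking that production data hasn't drifted away from the training baseline, here is a minimal sketch using a two-sample Kolmogorov-Smirnov test from SciPy. The feature, threshold, and alert are illustrative assumptions, not Arthur's implementation.

```python
# Compare a production feature distribution against the training baseline and
# raise an alert when it drifts. Threshold and feature names are illustrative.
import numpy as np
from scipy.stats import ks_2samp

def check_feature_drift(baseline: np.ndarray, live: np.ndarray,
                        p_threshold: float = 0.01) -> bool:
    """Return True if the live data looks significantly different."""
    statistic, p_value = ks_2samp(baseline, live)
    return p_value < p_threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    training_income = rng.normal(60_000, 15_000, size=10_000)   # baseline
    todays_income = rng.normal(72_000, 15_000, size=2_000)      # shifted feed
    if check_feature_drift(training_income, todays_income):
        print("ALERT: income feature has drifted from the training baseline")
```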
And so that's where we help people, we say go ahead, put them in production, but just make sure you have the right guardrails in place so that you can do it in a smart way that's going to reflect well on you and your company. >> Let's frame the challenge for the companies now that you have, obviously there's the people who doing large scale production and then you have companies maybe like as small as us who have large linguistic databases or transcripts for example, right? So what are customers doing and why are they deploying AI right now? And is it a speed game, is it a cost game? Why have some companies been able to deploy AI at such faster rates than others? And what's a best practice to onboard new customers? >> Yeah, absolutely. So I mean, we're seeing across a bunch of different verticals, there are leaders who have really kind of started to solve this puzzle about getting AI models into production quickly and being able to iterate on them quickly. And I think those are the ones that realize that imperative that you mentioned earlier about how transformational this technology is. And you know, a lot of times, even like the CEOs or the boards are very personally kind of driving this sense of urgency around it. And so, you know, that creates a lot of movement, right? And so those companies have put in place really smart infrastructure and rails so that people can, data scientists aren't encumbered by having to like hunt down data, get access to it. They're not encumbered by having to stand up new platforms every time they want to deploy an AI system, but that stuff is already in place. There's a really nice ecosystem of products out there, including Arthur, that you can tap into. Compared to five or six years ago when I was building at a top 10 US bank, at that point you really had to build almost everything yourself and that's not the case now. And so it's really nice to have things like, you know, you mentioned AWS SageMaker and a whole host of other tools that can really accelerate things. >> What's your profile customer? Is it someone who already has a team or can people who are learning just dial into the service? What's the persona? What's the pitch, if you will, how do you align with that customer value proposition? Do people have to be built out with a team and in play or is it pre-production or can you start with people who are just getting going? >> Yeah, people do start using it pre-production for validation, but I think a lot of our customers do have a team going and they're starting to put, either close to putting something into production or about to, it's everything from large enterprises that have really sort of complicated, they have dozens of models running all over doing all sorts of use cases to tech startups that are very focused on a single problem, but that's like the lifeblood of the company and so they need to guarantee that it works well. And you know, we make it really easy to get started, especially if you're using one of the common model development platforms, you can just kind of turn key, get going and make sure that you have a nice feedback loop. So then when your models are out there, it's pointing out, areas where it's performing well, areas where it's performing less well, giving you that feedback so that you can make improvements, whether it's in training data or futurization work or algorithm selection. 
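The feedback loop Adam mentions, pointing out where a model performs well and where it performs less well once ground truth arrives, might look something like the sketch below: join logged predictions to outcomes and report accuracy per segment. The column names and segments are hypothetical; a real system would read from a prediction log store.

```python
# Join ground truth back to logged predictions and rank segments by accuracy,
# worst first, so the weakest area gets attention. Fields are hypothetical.
import pandas as pd

def performance_by_segment(log: pd.DataFrame) -> pd.Series:
    """Accuracy per customer segment, worst first."""
    correct = log["prediction"] == log["ground_truth"]
    return correct.groupby(log["segment"]).mean().sort_values()

if __name__ == "__main__":
    log = pd.DataFrame({
        "segment":      ["retail", "retail", "smb", "smb", "enterprise", "enterprise"],
        "prediction":   [1, 0, 1, 1, 0, 1],
        "ground_truth": [1, 0, 0, 1, 1, 1],
    })
    print(performance_by_segment(log))  # flags which segment needs attention
```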
There's a number of, you know, depending on the symptoms, there's a number of things you can do to increase performance over time and we help guide people on that journey. >> So Adam, I have to ask, since you have such a great customer base and they're smart and they got teams and you're on the front end, I mean, early adopters is kind of an overused word, but they're killing it. They're putting stuff in the production's, not like it's a test, it's not like it's early. So as the next wave comes of fast followers, how do you see that coming online? What's your vision for that? How do you see companies that are like just waking up out of the frozen, you know, freeze of like old IT to like, okay, they got cloud, but they're not yet there. What do you see in the market? I see you're in the front end now with the top people really nailing AI and working hard. What's the- >> Yeah, I think a lot of these tools are becoming, or every year they get easier, more accessible, easier to use. And so, you know, even for that kind of like, as the market broadens, it takes less and less of a lift to put these systems in place. And the thing is, every business is unique, they have their own kind of data and so you can use these foundation models which have just been trained on generic data. They're a great starting point, a great accelerant, but then, in most cases you're either going to want to create a model or fine tune a model using data that's really kind of comes from your particular customers, the people you serve and so that it really reflects that and takes that into account. And so I do think that these, like the size of that market is expanding and its broadening as these tools just become easier to use and also the knowledge about how to build these systems becomes more widespread. >> Talk about your customer base you have now, what's the makeup, what size are they? Give a taste a little bit of a customer base you got there, what's they look like? I'll say Capital One, we know very well while you were at there, they were large scale, lot of data from fraud detection to all kinds of cool stuff. What do your customers now look like? >> Yeah, so we have a variety, but I would say one area we're really strong, we have several of the top 10 US banks, that's not surprising, that's a strength for us, but we also have Fortune 100 customers in healthcare, in manufacturing, in retail, in semiconductor and electronics. So what we find is like in any sort of these major verticals, there's typically, you know, one, two, three kind of companies that are really leading the charge and are the ones that, you know, in our opinion, those are the ones that for the next multiple decades are going to be the leaders, the ones that really kind of lead the charge on this AI transformation. And so we're very fortunate to be working with some of those. And then we have a number of startups as well who we love working with just because they're really pushing the boundaries technologically and so they provide great feedback and make sure that we're continuing to innovate and staying abreast of everything that's going on. >> You know, these early markups, even when the hyperscalers were coming online, they had to build everything themselves. That's the new, they're like the alphas out there building it. This is going to be a big wave again as that fast follower comes in. 
And so when you look at the scale, what advice would you give folks out there right now who want to tee it up and what's your secret sauce that will help them get there? >> Yeah, I think that the secret to teeing it up is just dive in and start like the, I think these are, there's not really a secret. I think it's amazing how accessible these are. I mean, there's all sorts of ways to access LLMs either via either API access or downloadable in some cases. And so, you know, go ahead and get started. And then our secret sauce really is the way that we provide that performance analysis of what's going on, right? So we can tell you in a very actionable way, like, hey, here's where your model is doing good things, here's where it's doing bad things. Here's something you want to take a look at, here's some potential remedies for it. We can help guide you through that. And that way when you're putting it out there, A, you're avoiding a lot of the common pitfalls that people see and B, you're able to really kind of make it better in a much faster way with that tight feedback loop. >> It's interesting, we've been kind of riffing on this supercloud idea because it was just different name than multicloud and you see apps like Snowflake built on top of AWS without even spending any CapEx, you just ride that cloud wave. This next AI, super AI wave is coming. I don't want to call AIOps because I think there's a different distinction. If you, MLOps and AIOps seem a little bit old, almost a few years back, how do you view that because everyone's is like, "Is this AIOps?" And like, "No, not kind of, but not really." How would you, you know, when someone says, just shoots off the hip, "Hey Adam, aren't you doing AIOps?" Do you say, yes we are, do you say, yes, but we do differently because it's doesn't seem like it's the same old AIOps. What's your- >> Yeah, it's a good question. AIOps has been a term that was co-opted for other things and MLOps also has people have used it for different meanings. So I like the term just AI infrastructure, I think it kind of like describes it really well and succinctly. >> But you guys are doing the ops. I mean that's the kind of ironic thing, it's like the next level, it's like NextGen ops, but it's not, you don't want to be put in that bucket. >> Yeah, no, it's very operationally focused platform that we have, I mean, it fires alerts, people can action off them. If you're familiar with like the way people run security operations centers or network operations centers, we do that for data science, right? So think of it as a DSOC, a Data Science Operations Center where all your models, you might have hundreds of models running across your organization, you may have five, but as problems are detected, alerts can be fired and you can actually work the case, make sure they're resolved, escalate them as necessary. And so there is a very strong operational aspect to it, you're right. >> You know, one of the things I think is interesting is, is that, if you don't mind commenting on it, is that the aspect of scale is huge and it feels like that was made up and now you have scale and production. What's your reaction to that when people say, how does scale impact this? >> Yeah, scale is huge for some of, you know, I think, I think look, the highest leverage business areas to apply these to, are generally going to be the ones at the biggest scale, right? And I think that's one of the advantages we have. 
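The "Data Science Operations Center" idea Adam describes, alerts that get worked, resolved, or escalated like SOC or NOC tickets, can be sketched as a simple case-queue data structure. Everything here (fields, severities, workflow states) is a simplified assumption, not a description of Arthur's product.

```python
# A toy case queue for model alerts: open -> acknowledged -> resolved/escalated.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

@dataclass
class ModelAlert:
    model_name: str
    issue: str                      # e.g. "feature drift", "accuracy drop"
    severity: str = "medium"
    status: str = "open"
    opened_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def escalate(self) -> None:
        self.severity, self.status = "high", "escalated"

    def resolve(self) -> None:
        self.status = "resolved"

def open_cases(queue: List[ModelAlert]) -> List[ModelAlert]:
    return [a for a in queue if a.status != "resolved"]

if __name__ == "__main__":
    queue = [ModelAlert("fraud-scorer", "feature drift"),
             ModelAlert("churn-model", "accuracy drop", severity="high")]
    queue[1].escalate()
    print([f"{a.model_name}: {a.issue} [{a.status}]" for a in open_cases(queue)])
```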
Several of us come from enterprise backgrounds and we're used to doing things enterprise grade at scale and so, you know, we're seeing more and more companies, I think they started out deploying AI and sort of, you know, important but not necessarily like the crown jewel area of their business, but now they're deploying AI right in the heart of things and yeah, the scale that some of our companies are operating at is pretty impressive. >> John: Well, super exciting, great to have you on and congratulations. I got a final question for you, just random. What are you most excited about right now? Because I mean, you got to be pretty pumped right now with the way the world is going and again, I think this is just the beginning. What's your personal view? How do you feel right now? >> Yeah, the thing I'm really excited about for the next couple years now, you touched on it a little bit earlier, but is a sort of convergence of AI and AI systems with sort of turning into AI native businesses. And so, as you sort of do more, get good further along this transformation curve with AI, it turns out that like the better the performance of your AI systems, the better the performance of your business. Because these models are really starting to underpin all these key areas that cumulatively drive your P&L. And so one of the things that we work a lot with our customers is to do is just understand, you know, take these really esoteric data science notions and performance and tie them to all their business KPIs so that way you really are, it's kind of like the operating system for running your AI native business. And we're starting to see more and more companies get farther along that maturity curve and starting to think that way, which is really exciting. >> I love the AI native. I haven't heard any startup yet say AI first, although we kind of use the term, but I guarantee that's going to come in all the pitch decks, we're an AI first company, it's going to be great run. Adam, congratulations on your success to you and the team. Hey, if we do a few more interviews, we'll get the linguistics down. We can have bots just interact with you directly and ask you, have an interview directly. >> That sounds good, I'm going to go hang out on the beach, right? So, sounds good. >> Thanks for coming on, really appreciate the conversation. Super exciting, really important area and you guys doing great work. Thanks for coming on. >> Adam: Yeah, thanks John. >> Again, this is Cube Conversation. I'm John Furrier here in Palo Alto, AI going next gen. This is legit, this is going to a whole nother level that's going to open up huge opportunities for startups, that's going to use opportunities for investors and the value to the users and the experience will come in, in ways I think no one will ever see. So keep an eye out for more coverage on siliconangle.com and theCUBE.net, thanks for watching. (bright upbeat music)

Published Date : Mar 3 2023


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
John Markoff | PERSON | 0.99+
Microsoft | ORGANIZATION | 0.99+
Google | ORGANIZATION | 0.99+
Adam Wenchel | PERSON | 0.99+
John | PERSON | 0.99+
Red Sox | ORGANIZATION | 0.99+
John Dickerson | PERSON | 0.99+
Amazon | ORGANIZATION | 0.99+
Adam | PERSON | 0.99+
John Furrier | PERSON | 0.99+
Palo Alto | LOCATION | 0.99+
2015 | DATE | 0.99+
Capital One | ORGANIZATION | 0.99+
five | QUANTITY | 0.99+
100% | QUANTITY | 0.99+
2016 | DATE | 0.99+
13 years | QUANTITY | 0.99+
Snowflake | TITLE | 0.99+
three | QUANTITY | 0.99+
first question | QUANTITY | 0.99+
two | QUANTITY | 0.99+
five | DATE | 0.99+
today | DATE | 0.99+
one | QUANTITY | 0.99+
four years | QUANTITY | 0.99+
Billy Beane | PERSON | 0.99+
over 20 years | QUANTITY | 0.99+
DARPA | ORGANIZATION | 0.99+
third one | QUANTITY | 0.98+
AWS | ORGANIZATION | 0.98+
siliconangle.com | OTHER | 0.98+
University of Maryland | ORGANIZATION | 0.97+
first time | QUANTITY | 0.97+
US | LOCATION | 0.97+
first | QUANTITY | 0.96+
six years ago | DATE | 0.96+
New York Times | ORGANIZATION | 0.96+
ChatGPT | ORGANIZATION | 0.96+
Swami | PERSON | 0.95+
ChatGPT | TITLE | 0.95+
hundreds of models | QUANTITY | 0.95+
25, 30% | QUANTITY | 0.95+
single problem | QUANTITY | 0.95+
hundreds of millions of dollars | QUANTITY | 0.95+
10 | QUANTITY | 0.94+
Moneyball | TITLE | 0.94+
wave | EVENT | 0.91+
three things | QUANTITY | 0.9+
AIOps | TITLE | 0.9+
last six months | DATE | 0.89+
few months ago | DATE | 0.88+
big | EVENT | 0.86+
next couple years | DATE | 0.86+
DevOps | TITLE | 0.85+
Arthur | PERSON | 0.85+
CUBE | ORGANIZATION | 0.83+
dozens of models | QUANTITY | 0.8+
a few years back | DATE | 0.8+
six years ago | DATE | 0.78+
theCUBE | ORGANIZATION | 0.76+
SageMaker | TITLE | 0.75+
decades | QUANTITY | 0.75+
Twitter | ORGANIZATION | 0.74+
MLOps | TITLE | 0.74+
supercloud | ORGANIZATION | 0.73+
super AI wave | EVENT | 0.73+
a couple months | QUANTITY | 0.72+
Arthur | ORGANIZATION | 0.72+
100 customers | QUANTITY | 0.71+
Cube Conversation | EVENT | 0.69+
theCUBE.net | OTHER | 0.67+

Prem Balasubramanian and Suresh Mothikuru | Hitachi Vantara: Build Your Cloud Center of Excellence


 

(soothing music) >> Hey everyone, welcome to this event, "Build Your Cloud Center of Excellence." I'm your host, Lisa Martin. In the next 15 minutes or so my guest and I are going to be talking about redefining cloud operations, an application modernization for customers, and specifically how partners are helping to speed up that process. As you saw on our first two segments, we talked about problems enterprises are facing with cloud operations. We talked about redefining cloud operations as well to solve these problems. This segment is going to be focusing on how Hitachi Vantara's partners are really helping to speed up that process. We've got Johnson Controls here to talk about their partnership with Hitachi Vantara. Please welcome both of my guests, Prem Balasubramanian is with us, SVP and CTO Digital Solutions at Hitachi Vantara. And Suresh Mothikuru, SVP Customer Success Platform Engineering and Reliability Engineering from Johnson Controls. Gentlemen, welcome to the program, great to have you. >> Thank. >> Thank you, Lisa. >> First question is to both of you and Suresh, we'll start with you. We want to understand, you know, the cloud operations landscape is increasingly complex. We've talked a lot about that in this program. Talk to us, Suresh, about some of the biggest challenges and pin points that you faced with respect to that. >> Thank you. I think it's a great question. I mean, cloud has evolved a lot in the last 10 years. You know, when we were talking about a single cloud whether it's Azure or AWS and GCP, and that was complex enough. Now we are talking about multi-cloud and hybrid and you look at Johnson Controls, we have Azure we have AWS, we have GCP, we have Alibaba and we also support on-prem. So the architecture has become very, very complex and the complexity has grown so much that we are now thinking about whether we should be cloud native or cloud agnostic. So I think, I mean, sometimes it's hard to even explain the complexity because people think, oh, "When you go to cloud, everything is simplified." Cloud does give you a lot of simplicity, but it also really brings a lot more complexity along with it. So, and then next one is pretty important is, you know, generally when you look at cloud services, you have plenty of services that are offered within a cloud, 100, 150 services, 200 services. Even within those companies, you take AWS they might not know, an individual resource might not know about all the services we see. That's a big challenge for us as a customer to really understand each of the service that is provided in these, you know, clouds, well, doesn't matter which one that is. And the third one is pretty big, at least at the CTO the CIO, and the senior leadership level, is cost. Cost is a major factor because cloud, you know, will eat you up if you cannot manage it. If you don't have a good cloud governance process it because every minute you are in it, it's burning cash. So I think if you ask me, these are the three major things that I am facing day to day and that's where I use my partners, which I'll touch base down the line. >> Perfect, we'll talk about that. So Prem, I imagine that these problems are not unique to Johnson Controls or JCI, as you may hear us refer to it. Talk to me Prem about some of the other challenges that you're seeing within the customer landscape. >> So, yeah, I agree, Lisa, these are not very specific to JCI, but there are specific issues in JCI, right? 
So the way we think about these are, there is a common issue when people go to the cloud and there are very specific and unique issues for businesses, right? So JCI, and we will talk about this in the episode as we move forward. I think Suresh and his team have done some phenomenal step around how to manage this complexity. But there are customers who have a lesser complex cloud which is, they don't go to Alibaba, they don't have footprint in all three clouds. So their multi-cloud footprint could be a bit more manageable, but still struggle with a lot of the same problems around cost, around security, around talent. Talent is a big thing, right? And in Suresh's case I think it's slightly more exasperated because every cloud provider Be it AWS, JCP, or Azure brings in hundreds of services and there is nobody, including many of us, right? We learn every day, nowadays, right? It's not that there is one service integrator who knows all, while technically people can claim as a part of sales. But in reality all of us are continuing to learn in this landscape. And if you put all of this equation together with multiple clouds the complexity just starts to exponentially grow. And that's exactly what I think JCI is experiencing and Suresh's team has been experiencing, and we've been working together. But the common problems are around security talent and cost management of this, right? Those are my three things. And one last thing that I would love to say before we move away from this question is, if you think about cloud operations as a concept that's evolving over the last few years, and I have touched upon this in the previous episode as well, Lisa, right? If you take architectures, we've gone into microservices, we've gone into all these server-less architectures all the fancy things that we want. That helps us go to market faster, be more competent to as a business. But that's not simplified stuff, right? That's complicated stuff. It's a lot more distributed. Second, again, we've advanced and created more modern infrastructure because all of what we are talking is platform as a service, services on the cloud that we are consuming, right? In the same case with development we've moved into a DevOps model. We kind of click a button put some code in a repository, the code starts to run in production within a minute, everything else is automated. But then when we get to operations we are still stuck in a very old way of looking at cloud as an infrastructure, right? So you've got an infra team, you've got an app team, you've got an incident management team, you've got a soft knock, everything. But again, so Suresh can talk about this more because they are making significant strides in thinking about this as a single workload, and how do I apply engineering to go manage this? Because a lot of it is codified, right? So automation. Anyway, so that's kind of where the complexity is and how we are thinking, including JCI as a partner thinking about taming that complexity as we move forward. >> Suresh, let's talk about that taming the complexity. You guys have both done a great job of articulating the ostensible challenges that are there with cloud, especially multi-cloud environments that you're living in. But Suresh, talk about the partnership with Hitachi Vantara. How is it helping to dial down some of those inherent complexities? >> I mean, I always, you know, I think I've said this to Prem multiple times. I treat my partners as my internal, you know, employees. 
I look at Prem as my coworker or my peer. So the reason for that is I want Prem to have the same vested interest as a partner in my success, or JCI's success, and vice versa, isn't it? I think that's how we operate and that's how we have been operating. And I think I would like to thank Prem and Hitachi Vantara for what really has been an amazing partnership. And as he was saying, we have taken a completely holistic approach to how we want to really be in the market and play in the market for our customers. So if you look at my jacket, it talks about the OpenBlue platform. This is what JCI is building, that we are building this OpenBlue digital platform. And within that, my team, along with Prem's or Hitachi's, we have built what we call Polaris. It's a technical platform where our apps can run. And this platform is automated end-to-end from a platform engineering standpoint. We stood up a platform engineering organization, a reliability engineering organization, as well as a support organization where Hitachi played a role. As I said previously, you know, for me to scale I'm not going to really have the talent and the knowledge of every function that I'm looking at. And Hitachi, not only did they bring the talent but they also brought what he was talking about, Harc. You know, they have set up a lot and now we can leverage it. And they also came up with some really interesting concepts. I went and met them in India. They came up with this concept called IPL. Okay, what is that? They really challenged all their employees that are working for JCI to come up with innovative ideas to solve problems proactively, which is self-healing. You know, how do you do that? So I think partners, you know, if they become really vested in your interests, they can do wonders for you. And I think in this case Hitachi is really working very well for us and in many aspects. And I'm leveraging them... You started with support, now I'm leveraging them in the automation, the platform engineering, as well as in the reliability engineering and then even in the engineering spaces. And just like that, they are my end-to-end partner right now. >> So you're really taking that holistic approach that you talked about and it sounds like it's a very collaborative two-way street partnership. Prem, I want to go back to, Suresh mentioned Harc. Talk a little bit about what Harc is and then how partners fit into Hitachi's Harc strategy. >> Great, so let me spend like a few seconds on what Harc is. Lisa, again, I know we've been using the term. Harc stands for Hitachi Application Reliability Centers. Now the reason we thought about Harc was, like I said in the beginning of this segment, there is an evolution from an architecture standpoint to be more modern, microservices, server-less, reactive architecture, so on and so forth. There is an evolution in your development methodology from Waterfall to agile, to DevOps, to lean agile, to path program, whatever, right? Extreme programming, so on and so forth. There is an evolution in the space of infrastructure from a point where you were buying these huge humongous servers and putting them in your data center to a point where people don't even see servers anymore, right? You buy it by a click of a button, you don't know the size of it. All you know is, it's (indistinct) whatever that name means. Let's go provision it on the fly, get going, get your work done, right?
When all of this is advanced when you think about operations people have been solving the problem the way they've been solving it 20 years back, right? That's the issue. And Harc was conceived exactly to fix that particular problem, to think about a modern way of operating a modern workload, right? That's exactly what Harc. So it brings together finest engineering talent. So the teams are trained in specific ways of working. We've invested and implemented some of the IP, we work with the best of the breed partner ecosystem, and I'll talk about that in a minute. And we've got these facilities in Dallas and I am talking from my office in Dallas, which is a Harc facility in the US from where we deliver for our customers. And then back in Hyderabad, we've got one more that we opened and these are facilities from where we deliver Harc services for our customers as well, right? And then we are expanding it in Japan and Portugal as we move into 23. That's kind of the plan that we are thinking through. However, that's what Harc is, Lisa, right? That's our solution to this cloud complexity problem. Right? >> Got it, and it sounds like it's going quite global, which is fantastic. So Suresh, I want to have you expand a bit on the partnership, the partner ecosystem and the role that it plays. You talked about it a little bit but what role does the partner ecosystem play in really helping JCI to dial down some of those challenges and the inherent complexities that we talked about? >> Yeah, sure. I think partners play a major role and JCI is very, very good at it. I mean, I've joined JCI 18 months ago, JCI leverages partners pretty extensively. As I said, I leverage Hitachi for my, you know, A group and the (indistinct) space and the cloud operations space, and they're my primary partner. But at the same time, we leverage many other partners. Well, you know, Accenture, SCL, and even on the tooling side we use Datadog and (indistinct). All these guys are major partners of our because the way we like to pick partners is based on our vision and where we want to go. And pick the right partner who's going to really, you know make you successful by investing their resources in you. And what I mean by that is when you have a partner, partner knows exactly what kind of skillset is needed for this customer, for them to really be successful. As I said earlier, we cannot really get all the skillset that we need, we rely on the partners and partners bring the the right skillset, they can scale. I can tell Prem tomorrow, "Hey, I need two parts by next week", and I guarantee it he's going to bring two parts to me. So they let you scale, they let you move fast. And I'm a big believer, in today's day and age, to get things done fast and be more agile. I'm not worried about failure, but for me moving fast is very, very important. And partners really do a very good job bringing that. But I think then they also really make you think, isn't it? Because one thing I like about partners they make you innovate whether they know it or not but they do because, you know, they will come and ask you questions about, "Hey, tell me why you are doing this. Can I review your architecture?" You know, and then they will try to really say I don't think this is going to work. Because they work with so many different clients, not JCI, they bring all that expertise and that's what I look from them, you know, just not, you know, do a T&M job for me. I ask you to do this go... They just bring more than that. That's how I pick my partners. 
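The "self-healing" idea Suresh credits the IPL work with earlier, solving problems proactively before a human is paged, can be sketched as a health-check loop that triggers an automated remediation after repeated failures. The endpoint, retry policy, and restart hook below are placeholders, not JCI's or Hitachi's actual implementation.

```python
# Probe a service endpoint and trigger a remediation after consecutive
# failures. URL, thresholds, and the restart hook are placeholders.
import time
import urllib.request

HEALTH_URL = "http://localhost:8080/healthz"   # hypothetical endpoint

def healthy(url: str, timeout: float = 2.0) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def restart_service() -> None:
    # Placeholder remediation: in practice this might call a Kubernetes API,
    # rerun a deployment pipeline, or invoke runbook automation.
    print("remediation triggered: restarting service")

def watch(checks: int = 3, interval: float = 5.0) -> None:
    failures = 0
    for _ in range(checks):
        failures = 0 if healthy(HEALTH_URL) else failures + 1
        if failures >= 2:           # two consecutive failures -> self-heal
            restart_service()
            failures = 0
        time.sleep(interval)

if __name__ == "__main__":
    watch()
```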
And that's how, you know, Hitachi Vantara is definitely a good partner in that sense, because they bring a lot more innovation to the table, and I appreciate that. >> It sounds like a flywheel of innovation. >> Yeah. >> I love that. Last question for both of you, and we're almost out of time here. Prem, I want to go back to you. So I'm a partner, I'm planning on redefining CloudOps at my company. What are the two things you want me to remember from Hitachi Vantara's perspective? >> So before I get to that question, Lisa, the partners that we work with are slightly different from the partners Suresh was describing, although again, there are some similar partners and some different partners, right? For example, we pick and choose, especially in the HARC space, partners that are more future focused, right? We don't care if they are huge companies or small companies. We go after companies that are future focused, that are really, really nimble and can change for our customers' needs, because it's not about our need, right? When I pick partners for HARC, my ultimate endeavor is to ensure, in this case because we've got (indistinct) JCI on, that we are able to operate (indistinct) with a level of satisfaction above and beyond what they're expecting from us. And whatever I don't have, I need to get from my partners so that I bring this solution to Suresh, as opposed to bringing a whole lot of people and making them stand in front of Suresh. So that's how I think about partners. What do I want them to do? We've always done this: we do workshops with our partners. We don't just go buy tools. When we say we are partnering with X, Y, Z, we do workshops with them and we say, this is how we are thinking. Either you build it into your roadmap, which helps us leverage you and continue to leverage you, or we make minimal investments where we fix gaps; we're building some utilities for us to deliver the best service to our customers. And our intention is not to build a product to compete with our partner. Our intention is just to fill the white space until they build it into their product suite, and then we can leverage it for our customers. So always think about end customers and how we can make it easy for them. Because for all the tool vendors out there seeing this and wanting to partner with Hitachi, the biggest thing is that tool sprawl, especially on the cloud, is very real. For every problem on the cloud, I have a billion tools being thrown at me as Suresh if I'm putting in my installation, and it's not easy at all. It's so confusing. >> Yeah. >> So that's what we want. We want people to simplify that landscape for our end customers, and we are looking at partners that are thinking through the simplification, not just making money. >> That makes perfect sense. There really is a very strong symbiosis, it sounds like, in the partner ecosystem, and there's a lot of enablement that goes on back and forth as well, which is really, to your point, all about the end customers and what they're expecting. Suresh, last question for you, which is the same one: if I'm a partner, what are the things that you want me to consider as I'm planning to redefine CloudOps at my company? >> I'll keep it simple. In my view, I mean, we've touched upon it in multiple facets in this interview: the three things. First and foremost, reliability.
You know, in today's day and age my products have to be reliable, available, and, you know, make sure that the customer's happy with what they're really dealing with, number one. Number two, my product has to be secure. Security is super, super important, okay? And number three, I need to really make sure my customers are getting the value, so I keep my cost low. So these three are what I would focus on and what I expect from my partners. >> Great advice, guys. Thank you so much for talking through this with me and really showing the audience how strong the partnership is between Hitachi Vantara and JCI and what you're doing together. We'll have to talk to you again to see where things go, but we really appreciate your insights and your perspectives. Thank you. >> Thank you, Lisa. >> Thanks Lisa, thanks for having us. >> My pleasure. For my guests, I'm Lisa Martin. Thank you so much for watching. (soothing music)

Published Date : Mar 2 2023

SUMMARY :

Lisa Martin talks with Prem Balasubramanian of Hitachi Vantara and Suresh Mothikuru of Johnson Controls (JCI) about redefining cloud operations with partners. They cover the complexity of multi-cloud and hybrid environments, JCI's OpenBlue digital platform and the Polaris technical platform built with Hitachi, what HARC (Hitachi Application Reliability Centers) is and how its delivery centers operate globally, how both sides pick and enable partners, and the three things Suresh expects from any partner: reliability, security, and cost efficiency.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Suresh | PERSON | 0.99+
Hitachi | ORGANIZATION | 0.99+
Lisa Martin | PERSON | 0.99+
Suresh Mothikuru | PERSON | 0.99+
Japan | LOCATION | 0.99+
Prem Balasubramanian | PERSON | 0.99+
JCI | ORGANIZATION | 0.99+
Lisa | PERSON | 0.99+
Harc | ORGANIZATION | 0.99+
Johnson Controls | ORGANIZATION | 0.99+
Dallas | LOCATION | 0.99+
India | LOCATION | 0.99+
Alibaba | ORGANIZATION | 0.99+
Hyderabad | LOCATION | 0.99+
Hitachi Vantara | ORGANIZATION | 0.99+
Johnson Controls | ORGANIZATION | 0.99+
Portugal | LOCATION | 0.99+
US | LOCATION | 0.99+
SCL | ORGANIZATION | 0.99+
Accenture | ORGANIZATION | 0.99+
both | QUANTITY | 0.99+
AWS | ORGANIZATION | 0.99+
two parts | QUANTITY | 0.99+
150 services | QUANTITY | 0.99+
Second | QUANTITY | 0.99+
First | QUANTITY | 0.99+
next week | DATE | 0.99+
200 services | QUANTITY | 0.99+
First question | QUANTITY | 0.99+
Prem | PERSON | 0.99+
tomorrow | DATE | 0.99+
Polaris | ORGANIZATION | 0.99+
T&M | ORGANIZATION | 0.99+
hundreds of services | QUANTITY | 0.99+
three things | QUANTITY | 0.98+
three | QUANTITY | 0.98+
agile | TITLE | 0.98+

Prem Balasubramanian and Manoj Narayanan | Hitachi Vantara: Build Your Cloud Center of Excellence


 

(Upbeat music playing) >> Hey everyone, thanks for joining us today. Welcome to this event of Building Your Cloud Center of Excellence with Hitachi Vantara. I'm your host, Lisa Martin. I've got a couple of guests here with me next to talk about redefining cloud operations and application modernization for customers. Please welcome Prem Balasubramanian, the SVP and CTO at Hitachi Vantara, and Manoj Narayanan is here as well, the Managing Director of Technology at GTCR. Guys, thank you so much for joining me today. Excited to have this conversation about redefining CloudOps with you. >> Pleasure to be here. >> Pleasure to be here. >> Prem, let's go ahead and start with you. You have done well over a thousand cloud engagements in your career. I'd love to get your point of view on how the complexity around cloud operations and management has evolved in the last, say, three to four years. >> It's a great question, Lisa. Before we understand the complexity around the management itself, the cloud has evolved significantly over the last decade, from being a backend infrastructure, or infrastructure as a service, for many companies to becoming the business for many companies. If you think about a lot of these cloud-born companies, cloud is where their entire workload and their business runs. With that as a background for this conversation, if you think about cloud operations, there was a lot of lift and shift happening in the market, where people lifted their workloads or applications and moved them onto the cloud and treated cloud significantly as an infrastructure. And the way they started to manage it was, again, the same format in which they were managing their on-prem infrastructure, and they call it I&O, Infrastructure and Operations. That's kind of the way cloud is traditionally managed. In the last few years, we are seeing a significant shift around thinking of cloud more as a workload rather than as just an infrastructure. And what I mean by workload is: in the cloud, everything is now code. So you are codifying your infrastructure. Your application is already code, and your data is also codified as data services. With that context applied, the way you think about managing the cloud has to significantly change, and many companies are moving towards trying to change their models to look at this complex environment as opposed to treating it like a simple infrastructure that is sitting somewhere else. So that's one of the biggest changes and shifts that are causing a lot of complexity and headache for a lot of customers in managing environments. The second critical aspect, which even exacerbates the situation, is multicloud environments. Now, there are companies that have got it right, with thinking about the right cloud for the right workload. So there are companies that I reach out to and talk with that have got their office applications and emails and stuff running on Microsoft 365, which can be on the Azure cloud, whereas they're running their engineering applications, the ones that they build and leverage for their end customers, on Amazon. And to some extent they've got it right, but still they have multiple clouds that they have to go after and maintain. This becomes complex when you have two clouds for the same type of workload. When I have to host applications for my end customers on Amazon as well as Azure, as well as Google, then I get into security issues where I have to be consistent across all three.
I get into talent, because I need to have people that focus on Amazon as well as Azure, as well as Google, which means I need so much more workforce, so many more skills that I need to build, right? That's becoming the second issue. The third one is around data costs. Can I make these clouds talk to each other? Then you get into ingress and egress costs, and that creates some complexity. So bringing all of this together and managing it is really becoming more complex for our customers. And obviously, as a part of this, we will talk about some of the ideas that we can bring for managing such complex environments, but this is what we are seeing in terms of why the complexity has become a lot more in the last few years. >> Right. A lot of complexity in the last few years. Manoj, let's bring you into the conversation now. Before we dig into your cloud environment, give the audience a little bit of an overview of GTCR. What kind of company are you? What do you guys do? >> Definitely, Lisa. GTCR is a Chicago-based private equity firm. We've been in the market for more than 40 years, and what we do is invest in companies across different sectors, and then we manage the company, drive it to increase its value, and then, over a period of time, sell it to future buyers. So in a nutshell, we've got a large portfolio of companies that we need to manage and make sure that they perform to expectations. And my role within GTCR is from a technology viewpoint, where I work with all the companies' technology leadership to make sure that we are getting the best out of technology, and technology today drives everything. So how can technology be a good complement to the business itself? My role is to play that intermediary role to make sure that there is synergy between the investment thesis and the technology levers that we can pull, and also to work with partners like Hitachi to make sure that it is done in an optimal manner. >> I like that you said, you know, technology needs to really complement the business and vice versa. So Manoj, let's get into the cloud operations environment at GTCR. Talk to me about what the experience has been the last couple of years. Give us an idea of some of the challenges that you were facing with existing cloud ops and the solution that you're using from Hitachi Vantara. >> Absolutely. In fact, Prem phrased it really well. One of the key things that we're facing is workload management. There are so many choices there, so many complexities. We have these companies buying more companies, and there is organic growth that is happening. So the variables that we have to deal with are very high, and in such a scenario, making sure that the workload management of each of the companies is done in an optimal manner is becoming an increasing concern. So that's one area where any help we can get, anything we can try to make sure it is done better, becomes a huge value-add. A second aspect is financial transparency. We need to know where the money is going, where the money is coming in from, what the scale is, especially in the cloud environment. We are talking about an auto-scale ecosystem. Having that financial transparency and the metrics associated with it becomes very, very critical to ensure that we have a successful presence in the multicloud environment. >> Talk a little bit about the solution that you're using with Hitachi and the challenges that it has eradicated.
>> Yeah, so at the end of the day, right, we need to focus on our core competence. We have got a very strong technology leadership team. We've got a very strong presence in the respective domains of each of the portfolio companies. But where Hitachi comes in, and HARC comes in as a solution, is that they allow us to excel in focusing on our core business and then make sure that we are able to take care of workload management or financial transparency. All of that is taken off the table from us, and Hitachi manages it for us, right? So it's such a perfectly complementary relationship, where they act as two partners, and HARC is a solution that is extremely useful in driving that. And I'm anticipating that it'll become more important with time, as the complexity of cloud and cloud-associated workloads is only becoming more challenging to manage, not less. >> Right, that's the thing, that complexity is there and it's also increasing. Prem, you talked about the complexities that exist today with respect to cloud operations and the things that have happened over the last couple of years. What are some of your tips, Prem, for the audience, like the top two or three things that you would say on cloud operations that people need to understand so that they can manage that complexity and allow their business to be driven and complemented by technology? >> Yeah, a great question again, Lisa, right? And I think Manoj alluded to a few of these things as well. The first one is, in the new world of the cloud, think of migration, modernization, and management as a single continuum to the cloud. Now there is no lift and shift where somebody else separately manages it, right? If you do not lift and shift the right applications the right way onto the cloud, you are going to deal with the complexity of managing it, and you'll end up spending more money, time, and effort in managing it. So that's number one: migration, modernization, and management of cloud workloads is a single continuum, and it's not three separate activities, right? That's number one. And the second is cost. Cost traditionally has been an afterthought, right? People move the workload to the cloud, and I think, again, like I said, I'll refer back to what Manoj said: once we move it to the cloud and then we put all this fancy engineering capability around self-provisioning, every developer can go and ask for what he or she wants and they get an environment immediately spun up, so on and so forth. Suddenly the CIO wakes up to a bill that is significantly larger than what he or she expected, right? And this has become a bit common nowadays, right? The challenge is that we think of cost in the cloud as an afterthought. But consider this example: in the previous world, you buy hardware, you put it in your data center, and you have already amortized the cost as CapEx. So you can write an application, throw it onto the infrastructure, and the application continues to use the infrastructure until you hit a ceiling; you don't care about the money you spent. But if I write a line of code that is inefficient today and I deploy it on the cloud, from minute one I am paying for the inefficiency. So if I realize it after six months, I've already spent the money. So financial discipline, especially when managing the cloud, is no longer an afterthought. It is something that you have to include in your engineering practice as much as any other DevOps practice, right?
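To make Prem's "paying for the inefficiency from minute one" point concrete, here is a minimal back-of-the-envelope sketch in Python. The hourly price, the fleet sizes, and the assumption that an optimization halves the required capacity are all hypothetical illustrations, not figures from the interview or from any provider's price list.

```python
# Back-of-the-envelope cloud cost of an inefficient code path vs. an optimized one.
# All numbers below are assumed for illustration only.

HOURS_PER_MONTH = 730                 # average hours in a month
PRICE_PER_INSTANCE_HOUR = 0.20        # assumed on-demand price, USD
INSTANCES_INEFFICIENT = 20            # capacity needed by the inefficient implementation
INSTANCES_OPTIMIZED = 10              # assumed capacity after optimization (2x efficiency)

def monthly_cost(instances: int) -> float:
    """Monthly compute cost for a fleet billed by the instance-hour."""
    return instances * PRICE_PER_INSTANCE_HOUR * HOURS_PER_MONTH

inefficient = monthly_cost(INSTANCES_INEFFICIENT)   # 20 * 0.20 * 730 = 2,920 USD
optimized = monthly_cost(INSTANCES_OPTIMIZED)       # 10 * 0.20 * 730 = 1,460 USD

print(f"Inefficient: ${inefficient:,.0f}/month")
print(f"Optimized:   ${optimized:,.0f}/month")
print(f"Cost of waiting six months to fix it: ${(inefficient - optimized) * 6:,.0f}")
```

The arithmetic is trivial, which is the point: on pay-per-use infrastructure the inefficiency becomes a recurring line item immediately, whereas on amortized on-prem hardware it would have stayed invisible until capacity ran out.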
Those are my top two tips, Lisa, from my standpoint: think about cloud, think about cloud workloads. And the last one, again, and you will hear me saying this again and again: get into the mindset that everything is code. You don't have touch-and-feel infrastructure anymore, so you don't really need to have feet on the ground to go manage that infrastructure. It's codified, so your code should be managing it, but think of how that happens, right? That's where we are going as an evolution. >> Everything is code. That's great advice, great tips for the audience there. Manoj, I'll bring you back into the conversation. You know, we can talk about skills gaps in many different facets of technology. The SRE role is a relatively new skillset; we're hearing a lot about it. SRE-led DevSecOps is probably even more so a new skillset. If I'm an IT leader or an application leader, how do I ensure that I have the right skillset within my organization to be able to manage my cloud operations, to dial down that complexity so that I can really operate successfully as a business? >> Yeah. And so unfortunately there is no perfect answer, right? It's such a scarce skillset that any day, if I go and talk to any of the portfolio company CTOs and say, "Hey, here's a great SRE team member," they'll be more than willing to fight with each other to get the person in, right? It's just that scarce of a skillset. So a few things we need to look at. One is, how can I build it within, right? Nobody is born as an SRE; you make a person an SRE. So how do you inculcate that culture? Like Prem said earlier, right? Everything is software. So how do we make sure that everybody inculcates that as part of their operating philosophy? Be they part of the operations team or the development team or the testing team, they need to understand that that is a common guideline and common objective that we are driving towards. So that skillset and the associated training need to be driven from within the organization, and that in my mind is the fastest way to make sure that the role gets propagated across the organization. That is one. The second thing is to rely on the right partners. It's not going to be possible for us to get all of these roles built in-house. So instead, prioritize what roles need to be done from within the organization and what roles we can rely on our partners to drive for us. So that becomes an important consideration for us to look at as well. >> Absolutely. That partnership angle is incredibly important from the beginning, really kind of weaving these companies together on this journey to redefine cloud operations and build, as we talked about at the beginning of the conversation, a cloud center of excellence that allows the organization to be competitive, successful, and really deliver what the end user is expecting. I want to ask... - Sorry Lisa. - Go ahead. >> May I add something to it, I think? >> Sure. >> Yeah. One of the common things that I tell customers when we talk about SRE, and to Manoj's point, is don't think of SRE as a skillset, which is the common way the industry tries to solve the problem today. SRE is a mindset, right? Everybody in... >> Well said, yeah. >> So everybody in a company should think of him or her as a site reliability engineer. And everybody has a role in it, right?
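Prem's "everything is code, so your code should be managing it" point above can be illustrated with a small operations-as-code sketch: instead of someone eyeballing a console, a script inspects the fleet and flags instances that break a tagging policy. It uses the AWS SDK for Python (boto3); the required tag keys and the policy itself are hypothetical examples, not anything JCI, GTCR, or Hitachi prescribes.

```python
# Operations as code: audit running instances for required ownership tags.
# Assumes boto3 is installed and AWS credentials/region are configured.
import boto3

REQUIRED_TAGS = {"owner", "env", "cost-center"}   # hypothetical policy

ec2 = boto3.client("ec2")

def untagged_instances() -> list[str]:
    """Return IDs of running instances missing any required tag."""
    offenders = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                if not REQUIRED_TAGS.issubset(tags):
                    offenders.append(instance["InstanceId"])
    return offenders

if __name__ == "__main__":
    for instance_id in untagged_instances():
        print(f"{instance_id} is missing required tags -> route to owning team")
```

In practice a check like this would run on a schedule or in CI rather than by hand; the design choice being illustrated is simply that the "feet on the ground" Prem mentions are replaced by a policy expressed in code.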
Even if you take the new process layout from SRE, there are individuals who are responsible, to whom we can go directly when there is a problem, as opposed to going through the traditional way of, I talk to L1 and L1 escalates, then it goes to L2 and then L3. So we are trying to move away from an issue escalation model to what we call an issue routing, or incident routing, model, right? Move away from incident escalation to an incident routing model, so you get to route to the right folks. So again, to sum it up, SRE should not be solved as a skillset, because there are not enough people in the market to solve it that way. If you start solving it as a mindset, I think companies can get a handle on it. >> I love that. I've actually never heard that before, but it makes perfect sense to think about SRE as a mindset rather than a skillset; that will allow organizations to be much more successful. Prem, I wanted to get your thoughts: as enterprises are innovating, they're moving more products and services to the as-a-service model. Talk about how the dev teams and the ops teams are working together to build and run reliable, cost-efficient services. Are they working better together? >> Again, a very polarizing question, because some customers are getting it right, and many customers aren't; there is still a big wall between development and operations, right? Even when you think about DevOps as a terminology, the fundamental principle was to make sure dev and ops work together. But what many companies have achieved today, honestly, is automating the operations for development. For example, as a developer, I can check in code and my code will appear in production without any friction, right? There is automated testing, automated provisioning, and it gets promoted to production. But after production, it goes back into the 20-year-old model of operating the code, right? So there is more work that needs to be done for dev and ops to come closer and work together. And one of the ways that we think this is achievable is not by doing radical org changes, but more by focusing on a product-oriented, single-backlog approach across development and operations. Which, again, involves change management, but I think that's a way to start embracing the culture of dev and ops coming together much better. Now, again, SRE principles, as we double-click and understand them more, and Google has done a very good job playing this out for the world: as you think about SRE principles, there are ways and means in that process to think about a single backlog. And in HARC, Hitachi Application Reliability Centers, we've really got a way to look at prioritizing the backlog. And what I mean by that is, dev teams try to work on backlog items that come from product managers as features. The SRE and operations teams try to put features into the same backlog for improving stability, availability, and financial optimization of your code. And there are ways, when you look at your SLOs and error budgets, to really coach the product teams to prioritize your backlog based on what's important for you. So if you understand you're spending more money, then you reduce the product features going in and implement the financial optimization items that came from your operations team, right? So you now have the ability to throttle these parameters, and that's where SRE becomes a mindset and a principle as opposed to a skillset, because this is not an individual telling you what to do.
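Here is a minimal sketch of the SLO and error-budget arithmetic Prem refers to, which teams can use to decide whether the next sprint leans toward features or toward the reliability and cost items the operations side adds to the shared backlog. The SLO target, the measurement window, and the 25% burn threshold are assumed values for illustration, not Hitachi's or Google's prescribed numbers.

```python
# Error-budget arithmetic for steering a single dev/ops backlog.
# Targets and thresholds below are assumed for illustration only.

SLO_TARGET = 0.999              # 99.9% availability objective
WINDOW_MINUTES = 30 * 24 * 60   # 30-day window = 43,200 minutes

error_budget_minutes = (1 - SLO_TARGET) * WINDOW_MINUTES   # ~43.2 minutes of allowed downtime

def budget_remaining(downtime_minutes: float) -> float:
    """Fraction of the 30-day error budget still unspent."""
    return 1 - downtime_minutes / error_budget_minutes

def next_sprint_focus(downtime_minutes: float, burn_threshold: float = 0.25) -> str:
    """Crude policy: once more than the threshold is burned, reliability work wins."""
    if budget_remaining(downtime_minutes) < (1 - burn_threshold):
        return "prioritize reliability/cost backlog items"
    return "prioritize product features"

print(f"Error budget: {error_budget_minutes:.1f} min per 30 days")
print(next_sprint_focus(downtime_minutes=10))   # ~23% burned -> features
print(next_sprint_focus(downtime_minutes=30))   # ~69% burned -> reliability work
```

A real implementation would pull downtime (or failed-request ratios) from monitoring rather than a hand-typed number, but the decision rule is the mindset being described: spend remaining budget on features, and protect an exhausted budget with reliability and cost work.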
It is the company, not an individual, that is embarking on how to prioritize the backlog beyond just user features. >> Right. Great point. Last question for both of you, and it's the same one: talk about the key takeaways you want me to remember. If I am an IT leader at an organization and I am planning on redefining CloudOps for my company, Manoj, we'll start with you and then, Prem, go to you: what are the top two things that you want me to walk away with to understand how to do that successfully? >> Yeah, so I'll go back to basics. The two things I would say need to be taken care of are: one is customer experience. For all the things that I do, at the end of the day, is it improving the customer experience or not? That's the first metric. The second thing is, for anything that I do, is there an ROI in taking that incremental step or not? Otherwise we might get lost in the technology itself, the new tech, et cetera. But at the end of the day, if the customers are not happy, if there is no ROI, you just can't do much on top of that. >> Now it's all about the customer experience, right? That's so true. Prem, what are your thoughts, the top things that I need to be taking away if I am a leader planning to redefine CloudOps at my company? >> Absolutely. And I think from a company standpoint, Manoj summarized it extremely well, right? There is the ROI and there is the customer experience. From my end, again, I'll suggest two more things as a takeaway, right? One, cloud cost is not an afterthought. It's essential for us to think about it upfront. Number two, do not delink migration, modernization, and operations. They are one stream. If you migrate the wrong workload onto the cloud, you're going to be stuck with it for a long time. And an example of a wrong workload, Lisa, for everybody that is listening to this: if my cost-per-transaction profile doesn't change and I am not improving my revenue per transaction for a piece of code that's going to run in production, it's better off running in a data center, where my cost is CapEx and amortized and I have control over when I want to upgrade, as opposed to putting it on a cloud and continuing to pay, unless it gives me more dividends towards improvement. But that's a simple example of how to think about what I should migrate and how it will cause pain when I want to manage it in the longer run. That's something that I'll leave the audience, and you, with as a takeaway. >> Excellent. Guys, thank you so much for talking to me today about what Hitachi Vantara and GTCR are doing together and how you've really dialed down those complexities, enabling the business and the technology folks to really live harmoniously. We appreciate your insights and your perspectives on building a cloud center of excellence. Thank you both for joining me. >> Thank you. >> For my guests, I'm Lisa Martin, and you're watching this event, Building Your Cloud Center of Excellence with Hitachi Vantara. Thanks for watching. (Upbeat music playing)
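Prem's "wrong workload" example lends itself to a simple comparison, sketched below: a steady workload whose cost per transaction never improves can be cheaper on amortized, already-owned hardware than on pay-as-you-go cloud. Every number here (hardware price, amortization period, cloud rate, fleet size) is an assumed placeholder chosen only to show the shape of the calculation.

```python
# Sketch: steady, non-improving workload -- amortized data-center CapEx vs. cloud OpEx.
# All figures are assumptions for illustration, not real price-list numbers.

YEARS = 3
MONTHS = YEARS * 12

# On-prem: servers bought up front and amortized over the period.
SERVERS = 10
SERVER_PRICE = 8_000            # USD per server, assumed
DC_OVERHEAD_PER_MONTH = 1_000   # power/space/ops allocation, assumed

onprem_total = SERVERS * SERVER_PRICE + DC_OVERHEAD_PER_MONTH * MONTHS

# Cloud: equivalent capacity rented by the hour, running 24x7.
INSTANCE_PRICE_PER_HOUR = 0.50  # assumed on-demand rate
HOURS_PER_MONTH = 730

cloud_total = SERVERS * INSTANCE_PRICE_PER_HOUR * HOURS_PER_MONTH * MONTHS

print(f"On-prem over {YEARS} years: ${onprem_total:,.0f}")   # 80,000 + 36,000 = 116,000
print(f"Cloud over {YEARS} years:   ${cloud_total:,.0f}")    # 10 * 0.50 * 730 * 36 = 131,400
print(f"Delta: ${cloud_total - onprem_total:,.0f} more on cloud for a flat, always-on workload")
```

The direction of the result obviously flips with different assumptions (spot pricing, rightsizing, real elasticity); the takeaway is only that the comparison should be run before migrating, not after.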

Published Date : Mar 2 2023

SUMMARY :

Lisa Martin talks with Prem Balasubramanian, SVP and CTO at Hitachi Vantara, and Manoj Narayanan, Managing Director of Technology at GTCR, about how cloud operations complexity has grown over the last few years. They discuss treating migration, modernization, and management as a single continuum, multicloud workload management and financial transparency across a private equity portfolio, making cloud cost an upfront engineering discipline rather than an afterthought, adopting an "everything is code" and SRE mindset, and relying on the right partners to close the skills gap.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Hitachi | ORGANIZATION | 0.99+
GTCR | ORGANIZATION | 0.99+
Lisa Martin | PERSON | 0.99+
Prem Balasubramanian | PERSON | 0.99+
HARC | ORGANIZATION | 0.99+
Lisa | PERSON | 0.99+
Manoj Narayanan | PERSON | 0.99+
Google | ORGANIZATION | 0.99+
Chicago | LOCATION | 0.99+
Amazon | ORGANIZATION | 0.99+
Hitachi Vantara | ORGANIZATION | 0.99+
two partners | QUANTITY | 0.99+
three | QUANTITY | 0.99+
second issue | QUANTITY | 0.99+
both | QUANTITY | 0.99+
more than 40 years | QUANTITY | 0.99+
Manoj | ORGANIZATION | 0.99+
each | QUANTITY | 0.99+
third one | QUANTITY | 0.99+
SRE | ORGANIZATION | 0.99+
today | DATE | 0.99+
first metric | QUANTITY | 0.99+
one stream | QUANTITY | 0.99+
Prem | PERSON | 0.99+
second | QUANTITY | 0.99+
One | QUANTITY | 0.99+
Martin | PERSON | 0.99+
one | QUANTITY | 0.98+
two | QUANTITY | 0.98+
first one | QUANTITY | 0.98+
four years | QUANTITY | 0.98+
second thing | QUANTITY | 0.98+
second aspect | QUANTITY | 0.98+
three things | QUANTITY | 0.98+
Manoj | PERSON | 0.98+
Devon | ORGANIZATION | 0.97+
one area | QUANTITY | 0.97+
two things | QUANTITY | 0.96+
Hitachi Application Reliability Centers | ORGANIZATION | 0.96+
single | QUANTITY | 0.95+
L two | OTHER | 0.95+
single backlog | QUANTITY | 0.93+
two tips | QUANTITY | 0.93+
three separate activities | QUANTITY | 0.92+
SRE | TITLE | 0.91+
20 year old | QUANTITY | 0.91+
CloudOps | TITLE | 0.9+
L three | OTHER | 0.9+
last decade | DATE | 0.9+
second critical aspect | QUANTITY | 0.89+
years | DATE | 0.89+
Microsoft | ORGANIZATION | 0.89+
last couple of years | DATE | 0.88+
Azure | TITLE | 0.88+

Prem Balasubramanian and Suresh Mothikuru | Hitachi Vantara: Build Your Cloud Center of Excellence


 

(soothing music) >> Hey everyone, welcome to this event, "Build Your Cloud Center of Excellence." I'm your host, Lisa Martin. In the next 15 minutes or so my guest and I are going to be talking about redefining cloud operations, an application modernization for customers, and specifically how partners are helping to speed up that process. As you saw on our first two segments, we talked about problems enterprises are facing with cloud operations. We talked about redefining cloud operations as well to solve these problems. This segment is going to be focusing on how Hitachi Vantara's partners are really helping to speed up that process. We've got Johnson Controls here to talk about their partnership with Hitachi Vantara. Please welcome both of my guests, Prem Balasubramanian is with us, SVP and CTO Digital Solutions at Hitachi Vantara. And Suresh Mothikuru, SVP Customer Success Platform Engineering and Reliability Engineering from Johnson Controls. Gentlemen, welcome to the program, great to have you. >> Thank. >> Thank you, Lisa. >> First question is to both of you and Suresh, we'll start with you. We want to understand, you know, the cloud operations landscape is increasingly complex. We've talked a lot about that in this program. Talk to us, Suresh, about some of the biggest challenges and pin points that you faced with respect to that. >> Thank you. I think it's a great question. I mean, cloud has evolved a lot in the last 10 years. You know, when we were talking about a single cloud whether it's Azure or AWS and GCP, and that was complex enough. Now we are talking about multi-cloud and hybrid and you look at Johnson Controls, we have Azure we have AWS, we have GCP, we have Alibaba and we also support on-prem. So the architecture has become very, very complex and the complexity has grown so much that we are now thinking about whether we should be cloud native or cloud agnostic. So I think, I mean, sometimes it's hard to even explain the complexity because people think, oh, "When you go to cloud, everything is simplified." Cloud does give you a lot of simplicity, but it also really brings a lot more complexity along with it. So, and then next one is pretty important is, you know, generally when you look at cloud services, you have plenty of services that are offered within a cloud, 100, 150 services, 200 services. Even within those companies, you take AWS they might not know, an individual resource might not know about all the services we see. That's a big challenge for us as a customer to really understand each of the service that is provided in these, you know, clouds, well, doesn't matter which one that is. And the third one is pretty big, at least at the CTO the CIO, and the senior leadership level, is cost. Cost is a major factor because cloud, you know, will eat you up if you cannot manage it. If you don't have a good cloud governance process it because every minute you are in it, it's burning cash. So I think if you ask me, these are the three major things that I am facing day to day and that's where I use my partners, which I'll touch base down the line. >> Perfect, we'll talk about that. So Prem, I imagine that these problems are not unique to Johnson Controls or JCI, as you may hear us refer to it. Talk to me Prem about some of the other challenges that you're seeing within the customer landscape. >> So, yeah, I agree, Lisa, these are not very specific to JCI, but there are specific issues in JCI, right? 
So the way we think about these are, there is a common issue when people go to the cloud and there are very specific and unique issues for businesses, right? So JCI, and we will talk about this in the episode as we move forward. I think Suresh and his team have done some phenomenal step around how to manage this complexity. But there are customers who have a lesser complex cloud which is, they don't go to Alibaba, they don't have footprint in all three clouds. So their multi-cloud footprint could be a bit more manageable, but still struggle with a lot of the same problems around cost, around security, around talent. Talent is a big thing, right? And in Suresh's case I think it's slightly more exasperated because every cloud provider Be it AWS, JCP, or Azure brings in hundreds of services and there is nobody, including many of us, right? We learn every day, nowadays, right? It's not that there is one service integrator who knows all, while technically people can claim as a part of sales. But in reality all of us are continuing to learn in this landscape. And if you put all of this equation together with multiple clouds the complexity just starts to exponentially grow. And that's exactly what I think JCI is experiencing and Suresh's team has been experiencing, and we've been working together. But the common problems are around security talent and cost management of this, right? Those are my three things. And one last thing that I would love to say before we move away from this question is, if you think about cloud operations as a concept that's evolving over the last few years, and I have touched upon this in the previous episode as well, Lisa, right? If you take architectures, we've gone into microservices, we've gone into all these server-less architectures all the fancy things that we want. That helps us go to market faster, be more competent to as a business. But that's not simplified stuff, right? That's complicated stuff. It's a lot more distributed. Second, again, we've advanced and created more modern infrastructure because all of what we are talking is platform as a service, services on the cloud that we are consuming, right? In the same case with development we've moved into a DevOps model. We kind of click a button put some code in a repository, the code starts to run in production within a minute, everything else is automated. But then when we get to operations we are still stuck in a very old way of looking at cloud as an infrastructure, right? So you've got an infra team, you've got an app team, you've got an incident management team, you've got a soft knock, everything. But again, so Suresh can talk about this more because they are making significant strides in thinking about this as a single workload, and how do I apply engineering to go manage this? Because a lot of it is codified, right? So automation. Anyway, so that's kind of where the complexity is and how we are thinking, including JCI as a partner thinking about taming that complexity as we move forward. >> Suresh, let's talk about that taming the complexity. You guys have both done a great job of articulating the ostensible challenges that are there with cloud, especially multi-cloud environments that you're living in. But Suresh, talk about the partnership with Hitachi Vantara. How is it helping to dial down some of those inherent complexities? >> I mean, I always, you know, I think I've said this to Prem multiple times. I treat my partners as my internal, you know, employees. 
I look at Prem as my coworker or my peers. So the reason for that is I want Prem to have the same vested interest as a partner in my success or JCI success and vice versa, isn't it? I think that's how we operate and that's how we have been operating. And I think I would like to thank Prem and Hitachi Vantara for that really been an amazing partnership. And as he was saying, we have taken a completely holistic approach to how we want to really be in the market and play in the market to our customers. So if you look at my jacket it talks about OpenBlue platform. This is what JCI is building, that we are building this OpenBlue digital platform. And within that, my team, along with Prem's or Hitachi's, we have built what we call as Polaris. It's a technical platform where our apps can run. And this platform is automated end-to-end from a platform engineering standpoint. We stood up a platform engineering organization, a reliability engineering organization, as well as a support organization where Hitachi played a role. As I said previously, you know, for me to scale I'm not going to really have the talent and the knowledge of every function that I'm looking at. And Hitachi, not only they brought the talent but they also brought what he was talking about, Harc. You know, they have set up a lot and now we can leverage it. And they also came up with some really interesting concepts. I went and met them in India. They came up with this concept called IPL. Okay, what is that? They really challenged all their employees that's working for GCI to come up with innovative ideas to solve problems proactively, which is self-healing. You know, how you do that? So I think partners, you know, if they become really vested in your interests, they can do wonders for you. And I think in this case Hitachi is really working very well for us and in many aspects. And I'm leveraging them... You started with support, now I'm leveraging them in the automation, the platform engineering, as well as in the reliability engineering and then in even in the engineering spaces. And that like, they are my end-to-end partner right now? >> So you're really taking that holistic approach that you talked about and it sounds like it's a very collaborative two-way street partnership. Prem, I want to go back to, Suresh mentioned Harc. Talk a little bit about what Harc is and then how partners fit into Hitachi's Harc strategy. >> Great, so let me spend like a few seconds on what Harc is. Lisa, again, I know we've been using the term. Harc stands for Hitachi application reliability sectors. Now the reason we thought about Harc was, like I said in the beginning of this segment, there is an illusion from an architecture standpoint to be more modern, microservices, server-less, reactive architecture, so on and so forth. There is an illusion in your development methodology from Waterfall to agile, to DevOps to lean, agile to path program, whatever, right? Extreme program, so on and so forth. There is an evolution in the space of infrastructure from a point where you were buying these huge humongous servers and putting it in your data center to a point where people don't even see servers anymore, right? You buy it, by a click of a button you don't know the size of it. All you know is a, it's (indistinct) whatever that name means. Let's go provision it on the fly, get go, get your work done, right? 
When all of this is advanced when you think about operations people have been solving the problem the way they've been solving it 20 years back, right? That's the issue. And Harc was conceived exactly to fix that particular problem, to think about a modern way of operating a modern workload, right? That's exactly what Harc. So it brings together finest engineering talent. So the teams are trained in specific ways of working. We've invested and implemented some of the IP, we work with the best of the breed partner ecosystem, and I'll talk about that in a minute. And we've got these facilities in Dallas and I am talking from my office in Dallas, which is a Harc facility in the US from where we deliver for our customers. And then back in Hyderabad, we've got one more that we opened and these are facilities from where we deliver Harc services for our customers as well, right? And then we are expanding it in Japan and Portugal as we move into 23. That's kind of the plan that we are thinking through. However, that's what Harc is, Lisa, right? That's our solution to this cloud complexity problem. Right? >> Got it, and it sounds like it's going quite global, which is fantastic. So Suresh, I want to have you expand a bit on the partnership, the partner ecosystem and the role that it plays. You talked about it a little bit but what role does the partner ecosystem play in really helping JCI to dial down some of those challenges and the inherent complexities that we talked about? >> Yeah, sure. I think partners play a major role and JCI is very, very good at it. I mean, I've joined JCI 18 months ago, JCI leverages partners pretty extensively. As I said, I leverage Hitachi for my, you know, A group and the (indistinct) space and the cloud operations space, and they're my primary partner. But at the same time, we leverage many other partners. Well, you know, Accenture, SCL, and even on the tooling side we use Datadog and (indistinct). All these guys are major partners of our because the way we like to pick partners is based on our vision and where we want to go. And pick the right partner who's going to really, you know make you successful by investing their resources in you. And what I mean by that is when you have a partner, partner knows exactly what kind of skillset is needed for this customer, for them to really be successful. As I said earlier, we cannot really get all the skillset that we need, we rely on the partners and partners bring the the right skillset, they can scale. I can tell Prem tomorrow, "Hey, I need two parts by next week", and I guarantee it he's going to bring two parts to me. So they let you scale, they let you move fast. And I'm a big believer, in today's day and age, to get things done fast and be more agile. I'm not worried about failure, but for me moving fast is very, very important. And partners really do a very good job bringing that. But I think then they also really make you think, isn't it? Because one thing I like about partners they make you innovate whether they know it or not but they do because, you know, they will come and ask you questions about, "Hey, tell me why you are doing this. Can I review your architecture?" You know, and then they will try to really say I don't think this is going to work. Because they work with so many different clients, not JCI, they bring all that expertise and that's what I look from them, you know, just not, you know, do a T&M job for me. I ask you to do this go... They just bring more than that. That's how I pick my partners. 
And that's how, you know, Hitachi's Vantara is definitely one of a good partner from that sense because they bring a lot more innovation to the table and I appreciate about that. >> It sounds like, it sounds like a flywheel of innovation. >> Yeah. >> I love that. Last question for both of you, which we're almost out of time here, Prem, I want to go back to you. So I'm a partner, I'm planning on redefining CloudOps at my company. What are the two things you want me to remember from Hitachi Vantara's perspective? >> So before I get to that question, Lisa, the partners that we work with are slightly different from from the partners that, again, there are some similar partners. There are some different partners, right? For example, we pick and choose especially in the Harc space, we pick and choose partners that are more future focused, right? We don't care if they are huge companies or small companies. We go after companies that are future focused that are really, really nimble and can change for our customers need because it's not our need, right? When I pick partners for Harc my ultimate endeavor is to ensure, in this case because we've got (indistinct) GCI on, we are able to operate (indistinct) with the level of satisfaction above and beyond that they're expecting from us. And whatever I don't have I need to get from my partners so that I bring this solution to Suresh. As opposed to bringing a whole lot of people and making them stand in front of Suresh. So that's how I think about partners. What do I want them to do from, and we've always done this so we do workshops with our partners. We just don't go by tools. When we say we are partnering with X, Y, Z, we do workshops with them and we say, this is how we are thinking. Either you build it in your roadmap that helps us leverage you, continue to leverage you. And we do have minimal investments where we fix gaps. We're building some utilities for us to deliver the best service to our customers. And our intention is not to build a product to compete with our partner. Our intention is to just fill the wide space until they go build it into their product suite that we can then leverage it for our customers. So always think about end customers and how can we make it easy for them? Because for all the tool vendors out there seeing this and wanting to partner with Hitachi the biggest thing is tools sprawl, especially on the cloud is very real. For every problem on the cloud. I have a billion tools that are being thrown at me as Suresh if I'm putting my installation and it's not easy at all. It's so confusing. >> Yeah. >> So that's what we want. We want people to simplify that landscape for our end customers, and we are looking at partners that are thinking through the simplification not just making money. >> That makes perfect sense. There really is a very strong symbiosis it sounds like, in the partner ecosystem. And there's a lot of enablement that goes on back and forth it sounds like as well, which is really, to your point it's all about the end customers and what they're expecting. Suresh, last question for you is which is the same one, if I'm a partner what are the things that you want me to consider as I'm planning to redefine CloudOps at my company? >> I'll keep it simple. In my view, I mean, we've touched upon it in multiple facets in this interview about that, the three things. First and foremost, reliability. 
You know, in today's day and age my products has to be reliable, available and, you know, make sure that the customer's happy with what they're really dealing with, number one. Number two, my product has to be secure. Security is super, super important, okay? And number three, I need to really make sure my customers are getting the value so I keep my cost low. So these three is what I would focus and what I expect from my partners. >> Great advice, guys. Thank you so much for talking through this with me and really showing the audience how strong the partnership is between Hitachi Vantara and JCI. What you're doing together, we'll have to talk to you again to see where things go but we really appreciate your insights and your perspectives. Thank you. >> Thank you, Lisa. >> Thanks Lisa, thanks for having us. >> My pleasure. For my guests, I'm Lisa Martin. Thank you so much for watching. (soothing music)

Published Date : Feb 27 2023

SUMMARY :

In the next 15 minutes or so and pin points that you all the services we see. Talk to me Prem about some of the other in the episode as we move forward. that taming the complexity. and play in the market to our customers. that you talked about and it sounds Now the reason we thought about Harc was, and the inherent complexities But at the same time, we like a flywheel of innovation. What are the two things you want me especially in the Harc space, we pick for our end customers, and we are looking it sounds like, in the partner ecosystem. make sure that the customer's happy showing the audience how Thank you so much for watching.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
SureshPERSON

0.99+

HitachiORGANIZATION

0.99+

Lisa MartinPERSON

0.99+

Suresh MothikuruPERSON

0.99+

JapanLOCATION

0.99+

Prem BalasubramanianPERSON

0.99+

JCIORGANIZATION

0.99+

LisaPERSON

0.99+

HarcORGANIZATION

0.99+

Johnson ControlsORGANIZATION

0.99+

DallasLOCATION

0.99+

IndiaLOCATION

0.99+

AlibabaORGANIZATION

0.99+

HyderabadLOCATION

0.99+

Hitachi VantaraORGANIZATION

0.99+

Johnson ControlsORGANIZATION

0.99+

PortugalLOCATION

0.99+

USLOCATION

0.99+

SCLORGANIZATION

0.99+

AccentureORGANIZATION

0.99+

bothQUANTITY

0.99+

AWSORGANIZATION

0.99+

two partsQUANTITY

0.99+

150 servicesQUANTITY

0.99+

SecondQUANTITY

0.99+

FirstQUANTITY

0.99+

next weekDATE

0.99+

200 servicesQUANTITY

0.99+

First questionQUANTITY

0.99+

PremPERSON

0.99+

tomorrowDATE

0.99+

PolarisORGANIZATION

0.99+

T&MORGANIZATION

0.99+

hundreds of servicesQUANTITY

0.99+

three thingsQUANTITY

0.98+

threeQUANTITY

0.98+

agileTITLE

0.98+

Prem Balasubramanian and Manoj Narayanan | Hitachi Vantara: Build Your Cloud Center of Excellence


 

(Upbeat music playing) >> Hey everyone, thanks for joining us today. Welcome to this event of Building your Cloud Center of Excellence with Hitachi Vantara. I'm your host, Lisa Martin. I've got a couple of guests here with me next to talk about redefining cloud operations and application modernization for customers. Please welcome Prem Balasubramanian the SVP and CTO at Hitachi Vantara, and Manoj Narayanan is here as well, the Managing Director of Technology at GTCR. Guys, thank you so much for joining me today. Excited to have this conversation about redefining CloudOps with you. >> Pleasure to be here. >> Pleasure to be here >> Prem, let's go ahead and start with you. You have done well over a thousand cloud engagements in your career. I'd love to get your point of view on how the complexity around cloud operations and management has evolved in the last, say, three to four years. >> It's a great question, Lisa before we understand the complexity around the management itself, the cloud has evolved over the last decade significantly from being a backend infrastructure or infrastructure as a service for many companies to become the business for many companies. If you think about a lot of these cloud bond companies cloud is where their entire workload and their business wants. With that, as a background for this conversation if you think about the cloud operations, there was a lot of there was a lot of lift and shift happening in the market where people lifted their workloads or applications and moved them onto the cloud where they treated cloud significantly as an infrastructure. And the way they started to manage it was again, the same format they were managing there on-prem infrastructure and they call it I&O, Infrastructure and Operations. That's kind of the way traditionally cloud is managed. In the last few years, we are seeing a significant shift around thinking of cloud more as a workload rather than as just an infrastructure. And what I mean by workload is in the cloud, everything is now code. So you are codifying your infrastructure. Your application is already code and your data is also codified as data services. With now that context apply the way you think about managing the cloud has to significantly change and many companies are moving towards trying to change their models to look at this complex environment as opposed to treating it like a simple infrastructure that is sitting somewhere else. So that's one of the biggest changes and shifts that are causing a lot of complexity and headache for actually a lot of customers for managing environments. The second critical aspect is even that, even exasperates the situation is multicloud environments. Now, there are companies that have got it right with things about right cloud for the right workload. So there are companies that I reach out and I talk with. They've got their office applications and emails and stuff running on Microsoft 365 which can be on the Azure cloud whereas they're running their engineering applications the ones that they build and leverage for their end customers on Amazon. And to some extent they've got it right but still they have a multiple cloud that they have to go after and maintain. This becomes complex when you have two clouds for the same type of workload. When I have to host applications for my end customers on Amazon as well as Azure, Azure as well as Google then, I get into security issues that I have to be consistent across all three. 
I get into talent because I need to have people that focus on Amazon as well as Azure, as well as Google which means I need so much more workforce, I need so many so much more skills that I need to build, right? That's becoming the second issue. The third one is around data costs. Can I make these clouds talk to each other? Then you get into the ingress egress cost and that creates some complexity. So bringing all of this together and managing is really become becoming more complex for our customers. And obviously as a part of this we will talk about some of the, some of the ideas that we can bring for in managing such complex environments but this is what we are seeing in terms of why the complexity has become a lot more in the last few years. >> Right. A lot of complexity in the last few years. Manoj, let's bring you into the conversation now. Before we dig into your cloud environment give the audience a little bit of an overview of GTCR. What kind of company are you? What do you guys do? >> Definitely Lisa. GTCR is a Chicago based private equity firm. We've been in the market for more than 40 years and what we do is we invest in companies across different sectors and then we manage the company drive it to increase the value and then over a period of time, sell it to future buyers. So in a nutshell, we got a large portfolio of companies that we need to manage and make sure that they perform to expectations. And my role within GTCR is from a technology viewpoint so where I work with all the companies their technology leadership to make sure that we are getting the best out of technology and technology today drives everything. So how can technology be a good compliment to the business itself? So, my role is to play that intermediary role to make sure that there is synergy between the investment thesis and the technology lures that we can pull and also work with partners like Hitachi to make sure that it is done in an optimal manner. >> I like that you said, you know, technology needs to really compliment the business and vice versa. So Manoj, let's get into the cloud operations environment at GTCR. Talk to me about what the experience has been the last couple of years. Give us an idea of some of the challenges that you were facing with existing cloud ops and and the solution that you're using from Hitachi Vantara. >> A a absolutely. In fact, in fact Prem phrased it really well, one of the key things that we're facing is the workload management. So there's so many choices there, so much complexities. We have these companies buying more companies there is organic growth that is happening. So the variables that we have to deal with are very high in such a scenario to make sure that the workload management of each of the companies are done in an optimal manner is becoming an increasing concern. So, so that's one area where any help we can get anything we can try to make sure it is done better becomes a huge value at each. A second aspect is a financial transparency. We need to know where the money is going where the money is coming in from, what is the scale especially in the cloud environment. We are talking about an auto scale ecosystem. Having that financial transparency and the metrics associated with that, it, these these become very, very critical to ensure that we have a successful presence in the multicloud environment. >> Talk a little bit about the solution that you're using with Hitachi and, and the challenges that it is eradicated. 
>> Yeah, so it end of the day, right, we we need to focus on our core competence. So, so we have got a very strong technology leadership team. We've got a very strong presence in the respective domains of each of the portfolio companies. But where Hitachi comes in and HAR comes in as a solution is that they allow us to excel in focusing on our core business and then make sure that we are able to take care of workload management or financial transparency. All of that is taken off the table from us and and Hitachi manages it for us, right? So it's such a perfectly compliment relationship where they act as two partners and HARC is a solution that is extremely useful in driving that. And, and and I'm anticipating that it'll become more important with time as the complexity of cloud and cloud associate workloads are only becoming more challenging to manage and not less. >> Right? That's the thing that complexity is there and it's also increasing Prem, you talked about the complexities that are existent today with respect to cloud operations the things that have happened over the last couple of years. What are some of your tips, Prem for the audience, like the the top two or three things that you would say on cloud operations that that people need to understand so that they can manage that complexity and allow their business to be driven and complimented by technology? >> Yeah, a big great question again, Lisa, right? And I think Manoj alluded to a few of these things as well. The first one is in the new world of the cloud I think think of migration, modernization and management as a single continuum to the cloud. Now there is no lift and shift and there is no way somebody else separately manages it, right? If you do not lift and shift the right applications the right way onto the cloud, you are going to deal with the complexity of managing it and you'll end up spending more money time and effort in managing it. So that's number one. Migration, modernization, management of cloud work growth is a single continuum and it's not three separate activities, right? That's number one. And the, the second is cost. Cost traditionally has been an afterthought, right? People move the workload to the cloud. And I think, again, like I said, I'll refer back to what Manoj said once we move it to the cloud and then we put all these fancy engineering capability around self-provisioning, every developer can go and ask for what he or she wants and they get an environment immediately spun up so on and so forth. Suddenly the CIO wakes up to a bill that is significantly larger than what he or she expected right? And, and this is this is become a bit common nowadays, right? The the challenge is because we think cost in the cloud as an afterthought. But consider this example in, in previous world you buy hard, well, you put it in your data center you have already amortized the cost as a CapEx. So you can write an application throw it onto the infrastructure and the application continues to use the infrastructure until you hit a ceiling, you don't care about the money you spent. But if I write a line of code that is inefficient today and I deploy it on the cloud from minute one, I am paying for the inefficiency. So if I realize it after six months, I've already spent the money. So financial discipline, especially when managing the cloud is now is no more an afterthought. It is as much something that you have to include in your engineering practice as much as any other DevOps practices, right? 
Those are my top two tips, Lisa, from my standpoint: think about cloud, think about cloud workloads. And the last one, again, and you will hear me saying this again and again, get into the mindset of everything is code. You don't have touch-and-feel infrastructure anymore, so you don't really need to have feet on the ground to go manage that infrastructure. It's codified. So your code should be managing it, but think of how that happens, right? That's where we are going as an evolution. >> Everything is code. That's great advice, great tips for the audience there. Manoj, I'll bring you back into the conversation. You know, we can talk about skills gaps in many different facets of technology. The SRE role is a relatively new skillset; we're hearing a lot about it. SRE-led DevSecOps is probably even more so of a new skillset. If I'm an IT leader or an application leader, how do I ensure that I have the right skillset within my organization to be able to manage my cloud operations, to dial down that complexity, so that I can really operate successfully as a business? >> Yeah. And so unfortunately there is no perfect answer, right? It's such a scarce skillset that any day, if I go and talk to any of the portfolio company CTOs and say, Hey, here's a great SRE team member, they'll be more than willing to fight with each other to get the person in, right? It's just that scarce of a skillset. So a few things we need to look at. One is, how can I build it within, right? Nobody gets born as an SRE; you make a person an SRE. So how do you inculcate that culture? Like Prem said earlier, right? Everything is software. So how do we make sure that everybody inculcates that as part of their operating philosophy? Be they part of the operations team or the development team or the testing team, they need to understand that that is a common guideline and common objective that we are driving towards. So that skillset and the associated training needs to be driven from within the organization. And that, in my mind, is the fastest way to make sure that that role gets propagated across the organization. That is one. The second thing is to rely on the right partners. It's not going to be possible for us to get all of these roles built in-house. So instead, prioritize what roles need to be done from within the organization and what roles we can rely on our partners to drive for us. That becomes an important consideration for us to look at as well. >> Absolutely. That partnership angle is incredibly important, from the beginning really kind of weaving these companies together on this journey to redefine cloud operations and build, as we talked about at the beginning of the conversation, a cloud center of excellence that allows the organization to be competitive, successful, and really deliver what the end user is expecting. I want to ask... - Sorry, Lisa. - Go ahead. >> May I add something to it, I think? >> Sure. >> Yeah. One of the common things that I tell customers when we talk about SRE, and to Manoj's point, is don't think of SRE as a skillset, which is the common way the industry tries to solve the problem today. SRE is a mindset, right? Everybody in- >> Well said, yeah. >> So everybody in a company should think of him or her as a site reliability engineer. And everybody has a role in it, right?
Even if you take the new process layout from SRE, there are individuals that are responsible, to whom we can go directly when there is a problem, as opposed to going through the traditional way where I talk to L1, and L1 handles it all, then it goes to L2 and then L3. So we are trying to move away from an issue-escalation model to what we call an issue-routing or incident-routing model, right? Move away from incident escalation to an incident-routing model, so you get routed to the right folks. So again, to sum it up, SRE should not be solved as a skillset, because there are not enough people in the market to solve it that way. If you start solving it as a mindset, I think companies can get a handhold of it. >> I love that. I've actually never heard that before, but it makes perfect sense to think about SRE as a mindset rather than a skillset; that will allow organizations to be much more successful. Prem, I wanted to get your thoughts. As enterprises are innovating, they're moving more products and services to the as-a-service model. Talk about how the dev teams and the ops teams are working together to build and run reliable, cost-efficient services. Are they working better together? >> Again, a very polarizing question, because some customers are getting it right and many customers aren't; there is still a big wall between development and operations, right? Even when you think about DevOps as a terminology, the fundamental principle was to make sure dev and ops work together. But what many companies have achieved today, honestly, is automating the operations for development. For example, as a developer, I can check in code and my code will appear in production without any friction, right? There is automated testing, automated provisioning, and it gets promoted to production. But after production, it goes back into the 20-year-old model of operating the code, right? So there is more work that needs to be done for dev and ops to come closer and work together. And one of the ways that we think this is achievable is not by doing radical org changes, but more by focusing on a product-oriented, single-backlog approach across development and operations. Again, there is change management involved, but I think that's a way to start embracing the culture of dev and ops coming together much better. Now, again, as we double-click on SRE principles and understand them more, and Google has done a very good job playing them out for the world, there are ways and means in that process to think about a single backlog. And in HARC, Hitachi Application Reliability Centers, we've really got a way to look at prioritizing the backlog. What I mean by that is, dev teams try to work on backlog that comes from product managers, on features. The SRE and operations teams try to put features into the same backlog for improving the stability, availability and financial optimization of your code. And there are ways, when you look at your SLOs and error budgets, to really coach the product teams to prioritize your backlog based on what's important for you. So if you understand you're spending more money, then you reduce the product features going in and implement the financial optimization that came from your operations team, right? So you now have the ability to throttle these parameters, and that's where SRE becomes a mindset and a principle as opposed to a skillset, because this is not an individual telling you what to do; it is the company embarking on how to prioritize the backlog beyond just user features.
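As a concrete illustration of the SLO and error-budget mechanism Prem refers to, here is a minimal Python sketch; the objective, window and consumption figures are hypothetical, not values from Hitachi or its customers.

    slo_target = 0.999             # 99.9% availability objective (assumed)
    window_minutes = 30 * 24 * 60  # 30-day rolling window

    # The error budget is whatever unreliability the SLO still permits.
    error_budget_minutes = (1 - slo_target) * window_minutes
    print(f"Error budget: {error_budget_minutes:.1f} minutes of downtime per 30 days")

    # If most of the budget is already consumed, the single backlog tilts toward
    # stability and optimization items instead of new features.
    downtime_consumed = 35.0  # minutes used so far in this window (assumed)
    remaining = error_budget_minutes - downtime_consumed
    if remaining <= 0:
        print("Budget exhausted: prioritize reliability work over features")
    else:
        print(f"{remaining:.1f} minutes of budget left: feature work can proceed")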
>> Right. Great point. Last question for both of you is the same: takeaway things that you want me to remember. If I am an IT leader at an organization and I am planning on redefining CloudOps for my company, Manoj, we'll start with you and then go to Prem: what are the top two things that you want me to walk away with, understanding how to do that successfully? >> Yeah, so I'll go back to basics. The two things I would say need to be taken care of are: one is customer experience. For all the things that I do, at the end of the day, is it improving the customer experience or not? That's the first metric. The second thing is, for anything that I do, is there an ROI in doing that incremental step or not? Otherwise we might get lost in the technology itself, the shiny new tech, et cetera. But at the end of the day, if the customers are not happy and there is no ROI, you just can't do much on top of that. >> Now it's all about the customer experience, right? That's so true. Prem, what are your thoughts, the top things that I need to be taking away if I am a leader planning to redefine cloud at my company? >> Absolutely. And from a company standpoint, I think Manoj summarized it extremely well, right? There is the ROI and there is the customer experience. From my end, again, I'll suggest two more things as a takeaway, right? One, cloud cost is not an afterthought. It's essential for us to think about it upfront. Number two, do not delink migration, modernization and operations. They are one stream. If you migrate the wrong workload onto the cloud, you're going to be stuck with it for a long time. And an example of a wrong workload, Lisa, for everybody that is listening to this: if my cost-per-transaction profile doesn't change and I am not improving my revenue per transaction for a piece of code that's going to run in production, it's better off running in a data center, where my cost is CapEx and amortized and I have control over when I want to upgrade, as opposed to putting it on a cloud and continuing to pay, unless it gives me more dividends towards improvement. That's a simple example of how we think about what I should migrate and how it will cause pain when I want to manage it in the longer run. That's something that I'll leave the audience and you with as a takeaway. >> Excellent. Guys, thank you so much for talking to me today about what Hitachi Vantara and GTCR are doing together, and how you've really dialed down those complexities, enabling the business and the technology folks to really live harmoniously. We appreciate your insights and your perspectives on building a cloud center of excellence. Thank you both for joining me. >> Thank you. >> For my guests, I'm Lisa Martin. You're watching this event, Building Your Cloud Center of Excellence with Hitachi Vantara. Thanks for watching. (Upbeat music playing)
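To make Prem's cost-per-transaction test concrete, here is a minimal Python sketch; the transaction volume and monthly costs are hypothetical assumptions used only to show the comparison.

    monthly_transactions = 2_000_000

    # Assumed fully loaded monthly cost of the same workload in each location.
    cloud_monthly_cost = 9_000.0       # pay-as-you-go, never amortizes
    datacenter_monthly_cost = 4_000.0  # amortized CapEx plus operations

    cloud_per_txn = cloud_monthly_cost / monthly_transactions
    dc_per_txn = datacenter_monthly_cost / monthly_transactions
    print(f"Cloud:       ${cloud_per_txn:.4f} per transaction")
    print(f"Data center: ${dc_per_txn:.4f} per transaction")

    # Per the rule of thumb above: if migrating neither improves this profile nor
    # grows revenue per transaction, the workload may not belong on the cloud.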

Published Date : Feb 27 2023

Prem Balasubramanian & Suresh Mothikuru


 

(soothing music) >> Hey everyone, welcome to this event, "Build Your Cloud Center of Excellence." I'm your host, Lisa Martin. In the next 15 minutes or so, my guests and I are going to be talking about redefining cloud operations and application modernization for customers, and specifically how partners are helping to speed up that process. As you saw in our first two segments, we talked about problems enterprises are facing with cloud operations, and we talked about redefining cloud operations to solve these problems. This segment is going to focus on how Hitachi Vantara's partners are really helping to speed up that process. We've got Johnson Controls here to talk about their partnership with Hitachi Vantara. Please welcome both of my guests: Prem Balasubramanian is with us, SVP and CTO, Digital Solutions at Hitachi Vantara, and Suresh Mothikuru, SVP Customer Success, Platform Engineering and Reliability Engineering from Johnson Controls. Gentlemen, welcome to the program, great to have you. >> Thank you. >> Thank you, Lisa. >> First question is to both of you, and Suresh, we'll start with you. We want to understand, you know, the cloud operations landscape is increasingly complex; we've talked a lot about that in this program. Talk to us, Suresh, about some of the biggest challenges and pain points that you've faced with respect to that. >> Thank you. I think it's a great question. I mean, cloud has evolved a lot in the last 10 years. You know, when we were talking about a single cloud, whether it's Azure or AWS or GCP, that was complex enough. Now we are talking about multi-cloud and hybrid, and you look at Johnson Controls: we have Azure, we have AWS, we have GCP, we have Alibaba, and we also support on-prem. So the architecture has become very, very complex, and the complexity has grown so much that we are now thinking about whether we should be cloud native or cloud agnostic. Sometimes it's hard to even explain the complexity, because people think, oh, "When you go to cloud, everything is simplified." Cloud does give you a lot of simplicity, but it also brings a lot more complexity along with it. And then the next one, which is pretty important, is, you know, generally when you look at cloud services, you have plenty of services that are offered within a cloud: 100, 150, 200 services. Even within those companies, take AWS, an individual resource might not know about all the services we see. That's a big challenge for us as a customer, to really understand each of the services that is provided in these clouds, and it doesn't matter which one it is. And the third one is pretty big, at least at the CTO, the CIO, and the senior leadership level, and that is cost. Cost is a major factor, because cloud, you know, will eat you up if you cannot manage it, if you don't have a good cloud governance process, because every minute you are in it, it's burning cash. So if you ask me, these are the three major things that I am facing day to day, and that's where I use my partners, which I'll touch on down the line. >> Perfect, we'll talk about that. So Prem, I imagine that these problems are not unique to Johnson Controls, or JCI, as you may hear us refer to it. Talk to me, Prem, about some of the other challenges that you're seeing within the customer landscape. >> So, yeah, I agree, Lisa, these are not very specific to JCI, but there are specific issues in JCI, right?
So the way we think about these is, there is a common set of issues when people go to the cloud, and there are very specific and unique issues for each business, right? So JCI, and we will talk about this in the episode as we move forward, I think Suresh and his team have done some phenomenal work around how to manage this complexity. But there are customers who have a less complex cloud, which is, they don't go to Alibaba, they don't have a footprint in all three clouds. So their multi-cloud footprint could be a bit more manageable, but they still struggle with a lot of the same problems around cost, around security, around talent. Talent is a big thing, right? And in Suresh's case I think it's slightly more exacerbated, because every cloud provider, be it AWS, GCP, or Azure, brings in hundreds of services, and there is nobody, including many of us, right? We learn every day nowadays, right? It's not that there is one service integrator who knows it all; technically people can claim that as a part of sales, but in reality all of us are continuing to learn in this landscape. And if you put all of this together with multiple clouds, the complexity just starts to grow exponentially. And that's exactly what I think JCI is experiencing and Suresh's team has been experiencing, and we've been working together. But the common problems are around security, talent and cost management, right? Those are my three things. And one last thing that I would love to say before we move away from this question is, think about cloud operations as a concept that's been evolving over the last few years, and I have touched upon this in the previous episode as well, Lisa, right? If you take architectures, we've gone into microservices, we've gone into all these serverless architectures, all the fancy things that we want. That helps us go to market faster and be more competitive as a business. But that's not simplified stuff, right? That's complicated stuff. It's a lot more distributed. Second, again, we've advanced and created more modern infrastructure, because all of what we are talking about is platform as a service, services on the cloud that we are consuming, right? In the same way, with development we've moved into a DevOps model. We kind of click a button, put some code in a repository, and the code starts to run in production within a minute; everything else is automated. But then, when we get to operations, we are still stuck in a very old way of looking at cloud as an infrastructure, right? So you've got an infra team, you've got an app team, you've got an incident management team, you've got a SOC and a NOC, everything. But again, Suresh can talk about this more, because they are making significant strides in thinking about this as a single workload, and how do I apply engineering to go manage it? Because a lot of it is codified, right? So, automation. Anyway, that's kind of where the complexity is and how we are thinking, including JCI as a partner, about taming that complexity as we move forward. >> Suresh, let's talk about taming that complexity. You guys have both done a great job of articulating the ostensible challenges that are there with cloud, especially the multi-cloud environments that you're living in. But Suresh, talk about the partnership with Hitachi Vantara. How is it helping to dial down some of those inherent complexities? >> I mean, I always, you know, I think I've said this to Prem multiple times: I treat my partners as my internal, you know, employees.
I look at Prem as my coworker or my peer. The reason for that is I want Prem to have the same vested interest as a partner in my success, or JCI's success, and vice versa, isn't it? I think that's how we operate and that's how we have been operating, and I would like to thank Prem and Hitachi Vantara for that; it's really been an amazing partnership. And as he was saying, we have taken a completely holistic approach to how we want to really be in the market and play in the market for our customers. So if you look at my jacket, it talks about the OpenBlue platform. This is what JCI is building, this OpenBlue digital platform. And within that, my team, along with Prem's, or Hitachi's, has built what we call Polaris. It's a technical platform where our apps can run, and this platform is automated end-to-end from a platform engineering standpoint. We stood up a platform engineering organization, a reliability engineering organization, as well as a support organization where Hitachi played a role. As I said previously, you know, for me to scale, I'm not going to really have the talent and the knowledge of every function that I'm looking at. And Hitachi, not only did they bring the talent, but they also brought what he was talking about, Harc. You know, they have set up a lot, and now we can leverage it. And they also came up with some really interesting concepts. I went and met them in India. They came up with this concept called IPL. Okay, what is that? They really challenged all their employees that are working for JCI to come up with innovative ideas to solve problems proactively, which is self-healing. You know, how do you do that? So I think partners, you know, if they become really vested in your interests, they can do wonders for you. And I think in this case Hitachi is really working very well for us, in many aspects. And I'm leveraging them... We started with support, and now I'm leveraging them in automation, in platform engineering, as well as in reliability engineering, and even in the engineering spaces. So they are my end-to-end partner right now. >> So you're really taking that holistic approach that you talked about, and it sounds like it's a very collaborative, two-way-street partnership. Prem, I want to go back to something Suresh mentioned: Harc. Talk a little bit about what Harc is, and then how partners fit into Hitachi's Harc strategy. >> Great, so let me spend a few seconds on what Harc is, Lisa, since I know we've been using the term. Harc stands for Hitachi Application Reliability Centers. Now the reason we thought about Harc was, like I said at the beginning of this segment, there is an evolution from an architecture standpoint to be more modern: microservices, serverless, reactive architecture, so on and so forth. There is an evolution in your development methodology, from Waterfall to agile, to DevOps, to lean agile, whatever, right? Extreme programming, so on and so forth. There is an evolution in the space of infrastructure, from a point where you were buying these huge, humongous servers and putting them in your data center, to a point where people don't even see servers anymore, right? You buy it by a click of a button; you don't know the size of it. All you know is, it's (indistinct), whatever that name means. Let's go provision it on the fly and get your work done, right?
While all of this has advanced, when you think about operations, people have been solving the problem the way they've been solving it 20 years back, right? That's the issue. And Harc was conceived exactly to fix that particular problem, to think about a modern way of operating a modern workload, right? That's exactly what Harc is. It brings together the finest engineering talent, so the teams are trained in specific ways of working. We've invested in and implemented some of the IP, and we work with a best-of-breed partner ecosystem, and I'll talk about that in a minute. And we've got these facilities in Dallas, and I am talking from my office in Dallas, which is a Harc facility in the US from where we deliver for our customers. And then back in Hyderabad we've got one more that we opened, and these are facilities from where we deliver Harc services for our customers as well, right? And then we are expanding into Japan and Portugal as we move into '23. That's kind of the plan that we are thinking through. So that's what Harc is, Lisa, right? That's our solution to this cloud complexity problem. Right? >> Got it, and it sounds like it's going quite global, which is fantastic. So Suresh, I want to have you expand a bit on the partnership, the partner ecosystem and the role that it plays. You talked about it a little bit, but what role does the partner ecosystem play in really helping JCI to dial down some of those challenges and the inherent complexities that we talked about? >> Yeah, sure. I think partners play a major role, and JCI is very, very good at it. I mean, I joined JCI 18 months ago, and JCI leverages partners pretty extensively. As I said, I leverage Hitachi for my, you know, A group and the (indistinct) space and the cloud operations space, and they're my primary partner. But at the same time, we leverage many other partners: you know, Accenture, SCL, and even on the tooling side we use Datadog and (indistinct). All these guys are major partners of ours, because the way we like to pick partners is based on our vision and where we want to go, and we pick the right partner who's going to really, you know, make you successful by investing their resources in you. And what I mean by that is, when you have a partner, the partner knows exactly what kind of skillset is needed for this customer for them to really be successful. As I said earlier, we cannot really get all the skillsets that we need, so we rely on the partners, and partners bring the right skillset and they can scale. I can tell Prem tomorrow, "Hey, I need two pods by next week," and I guarantee he's going to bring two pods to me. So they let you scale, they let you move fast. And I'm a big believer, in today's day and age, in getting things done fast and being more agile. I'm not worried about failure, but for me moving fast is very, very important, and partners really do a very good job bringing that. But then they also really make you think, isn't it? Because one thing I like about partners is they make you innovate, whether they know it or not. They will come and ask you questions like, "Hey, tell me why you are doing this. Can I review your architecture?" You know, and then they will really say, I don't think this is going to work, because they work with so many different clients, not just JCI. They bring all that expertise, and that's what I look for from them, you know, not just, you know, doing a T&M job for me where I ask you to do this and go... They just bring more than that. That's how I pick my partners.
And that's how, you know, Hitachi Vantara is definitely a good partner in that sense, because they bring a lot more innovation to the table, and I appreciate that. >> It sounds like a flywheel of innovation. >> Yeah. >> I love that. Last question for both of you, since we're almost out of time here. Prem, I want to go back to you. So I'm a partner, and I'm planning on redefining CloudOps at my company. What are the two things you want me to remember from Hitachi Vantara's perspective? >> So before I get to that question, Lisa, the partners that we work with are slightly different from the partners that, again, there are some similar partners and there are some different partners, right? For example, especially in the Harc space, we pick and choose partners that are more future-focused, right? We don't care if they are huge companies or small companies. We go after companies that are future-focused, that are really, really nimble and can change for our customers' needs, because it's not about our need, right? When I pick partners for Harc, my ultimate endeavor is to ensure, in this case because we've got (indistinct) JCI on, that we are able to operate (indistinct) with a level of satisfaction above and beyond what they're expecting from us. And whatever I don't have, I need to get from my partners so that I bring this solution to Suresh, as opposed to bringing a whole lot of people and making them stand in front of Suresh. So that's how I think about partners. What do I want them to do? We've always done this: we do workshops with our partners. We just don't go by tools. When we say we are partnering with X, Y, Z, we do workshops with them and we say, this is how we are thinking. Either you build it into your roadmap, which helps us continue to leverage you, or we make minimal investments where we fix gaps, building some utilities for us to deliver the best service to our customers. And our intention is not to build a product to compete with our partner. Our intention is just to fill the white space until they build it into their product suite, so we can then leverage it for our customers. So always think about end customers and how we can make it easy for them. Because for all the tool vendors out there seeing this and wanting to partner with Hitachi, the biggest thing is tool sprawl; especially on the cloud, it is very real. For every problem on the cloud I have a billion tools being thrown at me, and as Suresh, if I'm putting my installation together, it's not easy at all. It's so confusing. >> Yeah. >> So that's what we want. We want people to simplify that landscape for our end customers, and we are looking at partners that are thinking through the simplification, not just making money. >> That makes perfect sense. There really is a very strong symbiosis, it sounds like, in the partner ecosystem, and there's a lot of enablement that goes on back and forth as well, which is really, to your point, all about the end customers and what they're expecting. Suresh, last question for you, which is the same one: if I'm a partner, what are the things that you want me to consider as I'm planning to redefine CloudOps at my company? >> I'll keep it simple. In my view, I mean, we've touched upon it in multiple facets in this interview: the three things. First and foremost, reliability.
You know, in today's day and age my product has to be reliable and available, and, you know, I have to make sure that the customer is happy with what they're really dealing with; that's number one. Number two, my product has to be secure. Security is super, super important, okay? And number three, I need to really make sure my customers are getting the value, so I keep my costs low. These three are what I would focus on and what I expect from my partners. >> Great advice, guys. Thank you so much for talking through this with me and really showing the audience how strong the partnership is between Hitachi Vantara and JCI and what you're doing together. We'll have to talk to you again to see where things go, but we really appreciate your insights and your perspectives. Thank you. >> Thank you, Lisa. >> Thanks Lisa, thanks for having us. >> My pleasure. For my guests, I'm Lisa Martin. Thank you so much for watching. (soothing music)

Published Date : Feb 24 2023

Chris Jones, Platform9 | Finding your "Just Right” path to Cloud Native


 

(upbeat music) >> Hi everyone. Welcome back to this Cube conversation here in Palo Alto, California. I'm John Furrier, host of "theCUBE." We've got a great conversation around Cloud Native, the Cloud Native journey, how enterprises are looking at Cloud Native and putting it all together. And it comes down to operations, developer productivity, and security. It's the hottest topic in technology. We've got Chris Jones here in the studio, director of Product Management for Platform9. Chris, thanks for coming in. >> Hey, thanks. >> So, as we always chat about when we're at KubeCon, and KubeCon EU is coming up in a few months, the number one conversation is developer productivity. And the developers are driving all the standards. It's interesting to see how they just throw everything out there and whatever gets adopted ends up becoming the standard, not the old-school way of kind of getting stuff done. So that's cool. Security, Kubernetes and containers are all kind of at that next level now. So you're starting to see the early adopters moving to the mainstream. Enterprises are taking a variety of different approaches. You guys are at the center of this. We've had a couple of conversations with your CEO and your tech team over there. What are you seeing? You're building the products. What's the core product focus right now for Platform9? What are you guys aiming for? >> The core is that blend of enabling your infrastructure and PlatformOps or DevOps teams to be able to go fast and run in a stable environment, but at the same time enabling developers. We don't want people going back to what I've been calling Shadow IT 2.0. It's, hey, I've been told to do something. I kicked off this container initiative. I need to run my software somewhere. I'm just going to go figure it out. We want to keep those people productive. At the same time we want to enable velocity for our operations teams, be it PlatformOps or DevOps. >> Take us through, in your mind, how you see the industry rolling out this Cloud Native journey. Where do you see customers out there? Because DevOps has been around, DevSecOps is rocking, and you're seeing AI as the hot trend now. Developers are still in charge. Is there a change to how developers get their coding done and to the infrastructure? Setting up the DevOps is key, but when you add the Cloud Native journey for an enterprise, what changes? What is, I guess, the Cloud Native journey for an enterprise these days? >> The Cloud Native journey or the change? When- >> Let's start with what they want to do. What's the goal, and then how does that happen? >> I think the goal is that promised land: increased resiliency, better scalability, and overall reduced costs. I've gone from physical to virtual; that gave me a higher level of density, packing of resources. I'm moving to containers. I'm removing that OS layer again. I'm getting better density again, but all of a sudden I'm running Kubernetes. What does that fundamentally do to my operations? Does it magically give me scalability and resiliency? Or do I need to change what I'm running and how it's running so it fits that infrastructure? And that's the reality: you can't just take a container and drop it into Kubernetes and say, hey, I'm now Cloud Native, I've got reduced cost, or I've got better resiliency. There are things that your engineering teams need to do to make sure that application is Cloud Native.
And then there's what I think is one of the largest shifts of virtual machines to containers. When I was in the world of application performance monitoring, we would see customers saying, well, my engineering team have this Java app, and they said it needs a VM with 12 gig of RAM and eight cores, and that's what we gave it. But it's running slow. I'm working with the application team and you can see it's running slow. And they're like, well, it's got all of its resources. One of those nice features of virtualization is over provisioning. So the infrastructure team would say, well, we gave it, we gave it all a RAM it needed. And what's wrong with that being over provisioned? It's like, well, Java expects that RAM to be there. Now all of a sudden, when you move to the world of containers, what we've got is that's not a set resource limit, really is like it used to be in a VM, right? When you set it for a container, your application teams really need to be paying attention to your resource limits and constraints within the world of Kubernetes. So instead of just being able to say, hey, I'm throwing over the fence and now it's just going to run on a VM, and that VMs got everything it needs. It's now really running on more, much more of a shared infrastructure where limits and constraints are going to impact the neighbors. They are going to impact who's making that decision around resourcing. Because that Kubernetes concept of over provisioning and the virtualization concept of over provisioning are not the same. So when I look at this problem, it's like, well, what changed? Well, I'll do my scale tests as an application developer and tester, and I'd see what resources it needs. I asked for that in the VM, that sets the high watermark, job's done. Well, Kubernetes, it's no longer a VM, it's a Kubernetes manifest. And well, who owns that? Who's writing it? Who's setting those limits? To me, that should be the application team. But then when it goes into operations world, they're like, well, that's now us. Can we change those? So it's that amalgamation of the two that is saying, I'm a developer. I used to pay attention, but now I need to pay attention. And an infrastructure person saying, I used to just give 'em what they wanted, but now I really need to know what they've wanted, because it's going to potentially have a catastrophic impact on what I'm running. >> So what's the impact for the developer? Because, infrastructure's code is what everybody wants. The developer just wants to get the code going and they got to pay attention to all these things, or don't they? Is that where you guys come in? How do you guys see the problem? Actually scope the problem that you guys solve? 'Cause I think you're getting at I think the core issue here, which is, I've got Kubernetes, I've got containers, I've got developer productivity that I want to focus on. What's the problem that you guys solve? >> Platform operation teams that are adopting Cloud Native in their environment, they've got that steep learning curve of Kubernetes plus this fundamental change of how an app runs. What we're doing is taking away the burden of needing to operate and run Kubernetes and giving them the choice of the flexibility of infrastructure and location. Be that an air gap environment like a, let's say a telco provider that needs to run a containerized network function and containerized workloads for 5G. 
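To make the requests-versus-limits point above concrete, here is a minimal sketch, not Platform9 tooling, using the official Kubernetes Python client to declare a Deployment whose containers carry explicit resource requests and limits; the image name, sizing, and namespace are hypothetical placeholders. Unlike an over-provisioned VM, anything above the request is only best-effort, and exceeding the memory limit gets the container killed, which is why application and platform teams now have to agree on these numbers.

```python
from kubernetes import client, config

config.load_kube_config()  # uses the local kubeconfig

# Explicit requests/limits replace the VM-era habit of "give it 12 gig and move on".
resources = client.V1ResourceRequirements(
    requests={"cpu": "2", "memory": "4Gi"},   # guaranteed to the container
    limits={"cpu": "4", "memory": "8Gi"},     # hard ceiling; exceeding memory is fatal
)

container = client.V1Container(
    name="java-app",
    image="registry.example.com/java-app:1.0",  # hypothetical image
    resources=resources,
)

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="java-app"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "java-app"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "java-app"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```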
That's one thing that we can deploy and achieve in a completely inaccessible environment all the way through to Platform9 running traditionally as SaaS, as we were born, that's remotely managing and controlling your Kubernetes environments on-premise AWS. That hybrid cloud experience that could be also Bare Metal, but it's our platform running your environments with our support there, 24 by seven, that's proactively reaching out. So it's removing a lot of that burden and the complications that come along with operating the environment and standing it up, which means all of a sudden your DevOps and platform operations teams can go and work with your engineers and application developers and say, hey, let's get, let's focus on the stuff that, that we need to be focused on, which is running our business and providing a service to our customers. Not figuring out how to upgrade a Kubernetes cluster, add new nodes, and configure all of the low level. >> I mean there are, that's operations that just needs to work. And sounds like as they get into the Cloud Native kind of ops, there's a lot of stuff that kind of goes wrong. Or you go, oops, what do we buy into? Because the CIOs, let's go, let's go Cloud Native. We want to, we got to get set up for the future. We're going to be Cloud Native, not just lift and shift and we're going to actually build it out right. Okay, that sounds good. And when we have to actually get done. >> Chris: Yeah. >> You got to spin things up and stand up the infrastructure. What specifically use case do you guys see that emerges for Platform9 when people call you up and you go talk to customers and prospects? What's the one thing or use case or cases that you guys see that you guys solve the best? >> So I think one of the, one of the, I guess new use cases that are coming up now, everyone's talking about economic pressures. I think the, the tap blows open, just get it done. CIO is saying let's modernize, let's use the cloud. Now all of a sudden they're recognizing, well wait, we're spending a lot of money now. We've opened that tap all the way, what do we do? So now they're looking at ways to control that spend. So we're seeing that as a big emerging trend. What we're also sort of seeing is people looking at their data centers and saying, well, I've got this huge legacy environment that's running a hypervisor. It's running VMs. Can we still actually do what we need to do? Can we modernize? Can we start this Cloud Native journey without leaving our data centers, our co-locations? Or if I do want to reduce costs, is that that thing that says maybe I'm repatriating or doing a reverse migration? Do I have to go back to my data center or are there other alternatives? And we're seeing that trend a lot. And our roadmap and what we have in the product today was specifically built to handle those, those occurrences. So we brought in KubeVirt in terms of virtualization. We have a long legacy doing OpenStack and private clouds. And we've worked with a lot of those users and customers that we have and asked the questions, what's important? And today, when we look at the world of Cloud Native, you can run virtualization within Kubernetes. So you can, instead of running two separate platforms, you can have one. So all of a sudden, if you're looking to modernize, you can start on that new infrastructure stack that can run anywhere, Kubernetes, and you can start bringing VMs over there as you are containerizing at the same time. 
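As a rough illustration of the "VMs inside Kubernetes" idea Chris describes, here is a sketch, assuming KubeVirt is already installed in the cluster, that submits a KubeVirt VirtualMachine custom resource through the Kubernetes Python client; the VM name, sizing, and disk image are hypothetical.

```python
from kubernetes import client, config

config.load_kube_config()

# A KubeVirt VirtualMachine object, expressed as a plain dict.
vm = {
    "apiVersion": "kubevirt.io/v1",
    "kind": "VirtualMachine",
    "metadata": {"name": "legacy-app-vm", "namespace": "default"},
    "spec": {
        "running": True,
        "template": {
            "spec": {
                "domain": {
                    "cpu": {"cores": 2},
                    "resources": {"requests": {"memory": "4Gi"}},
                    "devices": {
                        "disks": [{"name": "rootdisk", "disk": {"bus": "virtio"}}]
                    },
                },
                "volumes": [
                    {"name": "rootdisk",
                     "containerDisk": {"image": "quay.io/containerdisks/fedora:latest"}}
                ],
            }
        },
    },
}

# Custom resources go through the generic CustomObjectsApi.
client.CustomObjectsApi().create_namespaced_custom_object(
    group="kubevirt.io", version="v1", namespace="default",
    plural="virtualmachines", body=vm,
)
```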
So now you can keep your application operations in one environment. And this also helps if you're trying to reduce costs. If you really are saying, we put that Dev environment in AWS, we've got a huge amount of velocity out of it now, can we do that elsewhere? Is there a co-location we can go to? Is there a provider that we can go to where we can run that infrastructure or run the Kubernetes, but not have to run the infrastructure? >> It's going to be interesting too, when you see the Edge come online, you start, we've got Mobile World Congress coming up, KubeCon events we're going to be at, the conversation is not just about public cloud. And you guys obviously solve a lot of do-it-yourself implementation hassles that emerge when people try to kind of stand up their own environment. And we hear from developers consistency between code, managing new updates, making sure everything is all solid so they can go fast. That's the goal. And that, and then people can get standardized on that. But as you get public cloud and do it yourself, kind of brings up like, okay, there's some gaps there as the architecture changes to be more distributed computing, Edge, on-premises cloud, it's cloud operations. So that's cool for DevOps and Cloud Native. How do you guys differentiate from say, some the public cloud opportunities and the folks who are doing it themselves? How do you guys fit in that world and what's the pitch or what's the story? >> The fit that we look at is that third alternative. Let's get your team focused on what's high value to your business and let us deliver that public cloud experience on your infrastructure or in the public cloud, which gives you that ability to still be flexible if you want to make choices to run consistently for your developers in two different locations. So as I touched on earlier, instead of saying go figure out Kubernetes, how do you upgrade a hundred worker nodes in place upgrade. We've solved that problem. That's what we do every single day of the week. Don't go and try to figure out how to upgrade a cluster and then upgrade all of the, what I call Kubernetes friends, your core DNSs, your metrics server, your Kubernetes dashboard. These are all things that we package, we test, we version. So when you click upgrade, we've already handled that entire process. So it's saying don't have your team focused on that lower level piece of work. Get them focused on what is important, which is your business services. >> Yeah, the infrastructure and getting that stood up. I mean, I think the thing that's interesting, if you look at the market right now, you mentioned cost savings and recovery, obviously kind of a recession. I mean, people are tightening their belts for sure. I don't think the digital transformation and Cloud Native spend is going to plummet. It's going to probably be on hold and be squeezed a little bit. But to your point, people are refactoring looking at how to get the best out of what they got. It's not just open the tap of spend the cash like it used to be. Yeah, a couple months, even a couple years ago. So okay, I get that. But then you look at the what's coming, AI. You're seeing all the new data infrastructure that's coming. The containers, Kubernetes stuff, got to get stood up pretty quickly and it's got to be reliable. So to your point, the teams need to get done with this and move on to the next thing. >> Chris: Yeah, yeah, yeah. >> 'Cause there's more coming. 
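For a sense of the low-level toil being taken off the table, here is a hedged sketch of the classic in-place node upgrade loop (cordon, drain, upgrade, uncordon) driven from Python by shelling out to kubectl; the upgrade script name is hypothetical, and a production workflow would also health-check each node and respect PodDisruptionBudgets before moving on.

```python
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def upgrade_node(node, upgrade_cmd):
    # Stop new pods landing on the node, then evict what is running there.
    run(["kubectl", "cordon", node])
    run(["kubectl", "drain", node, "--ignore-daemonsets", "--delete-emptydir-data"])
    # Placeholder for the actual in-place upgrade step (kubelet/package upgrade, reboot).
    run(upgrade_cmd + [node])
    run(["kubectl", "uncordon", node])

nodes = subprocess.check_output(
    ["kubectl", "get", "nodes", "-o", "jsonpath={.items[*].metadata.name}"]
).decode().split()

for node in nodes:
    upgrade_node(node, ["./upgrade-kubelet.sh"])  # hypothetical upgrade script
```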
I mean, there's a lot coming for the apps that are building in Data Native, AI-Native, Cloud Native. So it seems that this Kubernetes thing needs to get solved. Is that kind of what you guys are focused on right now? >> So, I mean to use a customer, we have a customer that's in AI/ML and they run their platform at customer sites and that's hardware bound. You can't run AI machine learning on anything anywhere. Well, with Platform9 they can. So we're enabling them to deliver services into their customers that's running their AI/ML platform in their customer's data centers anywhere in the world on hardware that is purpose-built for running that workload. They're not Kubernetes experts. That's what we are. We're bringing them that ability to focus on what's important and just delivering their business services whilst they're enabling our team. And our 24 by seven proactive management are always on assurance to keep that up and running for them. So when something goes bump at the night at 2:00am, our guys get woken up. They're the ones that are reaching out to the customer saying, your environments have a problem, we're taking these actions to fix it. Obviously sometimes, especially if it is running on Bare Metal, there's things you can't do remotely. So you might need someone to go and do that. But even when that happens, you're not by yourself. You're not sitting there like I did when I worked for a bank in one of my first jobs, three o'clock in the morning saying, wow, our end of day processing is stuck. Who else am I waking up? Right? >> Exactly, yeah. Got to get that cash going. But this is a great use case. I want to get to the customer. What do some of the successful customers say to you for the folks watching that aren't yet a customer of Platform9, what are some of the accolades and comments or anecdotes that you guys hear from customers that you have? >> It just works, which I think is probably one of the best ones you can get. Customers coming back and being able to show to their business that they've delivered growth, like business growth and productivity growth and keeping their organization size the same. So we started on our containerization journey. We went to Kubernetes. We've deployed all these new workloads and our operations team is still six people. We're doing way more with growth less, and I think that's also talking to the strength that we're bringing, 'cause we're, we're augmenting that team. They're spending less time on the really low level stuff and automating a lot of the growth activity that's involved. So when it comes to being able to grow their business, they can just focus on that, not- >> Well you guys do the heavy lifting, keep on top of the Kubernetes, make sure that all the versions are all done. Everything's stable and consistent so they can go on and do the build out and provide their services. That seems to be what you guys are best at. >> Correct, correct. >> And so what's on the roadmap? You have the product, direct product management, you get the keys to the kingdom. What is, what is the focus? What's your focus right now? Obviously Kubernetes is growing up, Containers. We've been hearing a lot at the last KubeCon about the security containers is getting better. You've seen verification, a lot more standards around some things. What are you focused on right now for at a product over there? >> Edge is a really big focus for us. And I think in Edge you can look at it in two ways. The mantra that I drive is Edge must be remote. 
If you can't do something remotely at the Edge, you are using a human being, that's not Edge. Our Edge management capabilities and being in the market for over two years are a hundred percent remote. You want to stand up a store, you just ship the server in there, it gets racked, the rest of it's remote. Imagine a store manager in, I don't know, KFC, just plugging in the server, putting in the ethernet cable, pressing the power button. The rest of all that provisioning for that Cloud Native stack, Kubernetes, KubeVirt for virtualization is done remotely. So we're continuing to focus on that. The next piece that is related to that is allowing people to run Platform9 SaaS in their data centers. So we do air gap today and we've had a really strong focus on telecommunications and the containerized network functions that come along with that. So this next piece is saying, we're bringing what we run as SaaS into your data center, so then you can run it. 'Cause there are many people out there that are saying, we want these capabilities and we want everything that the Platform9 control plane brings and simplifies. But unfortunately, regulatory compliance reasons mean that we can't leverage SaaS. So they might be using a cloud, but they're saying that's still our infrastructure. We're still closing that network down, or they're still on-prem. So they're two big priorities for us this year. And that on-premise experience is paramount, even to the point that we will be delivering a way that when you run on-premise, you can still say, wait a second, well I can send outbound alerts to Platform9. So their support team can still be proactively helping me as much as they could, even though I'm running Platform9's control plane. So it's sort of giving that blend of two experiences. They're big, they're big priorities. And the third pillar is all around virtualization. It's saying if you have economic pressures, then I think it's important to look at what you're spending today and realistically say, can that be reduced? And I think hypervisors and virtualization is something that should be looked at, because if you can actually reduce that spend, you can bring in some modernization at the same time. Let's take some of those nodes that exist that are two years into their five year hardware life cycle. Let's turn that into a Cloud Native environment, which is enabling your modernization in place. It's giving your engineers and application developers the new toys, the new experiences, and then you can start running some of those virtualized workloads with KubeVirt there. So you're reducing cost and you're modernizing at the same time with your existing infrastructure. >> You know Chris, the topic of this content series that we're doing with you guys is finding the right path, trusting the right path to Cloud Native. What does that mean? I mean, if you had to kind of summarize that phrase, trusting the right path to Cloud Native, what does that mean? Does it mean architecture, is it deployment? Is it operations? What's the underlying main theme of that quote? How would you talk to a customer and say, what does that mean if someone said, "Hey, what does that right path mean?" >> I think the right path means focusing on what you should be focusing on.
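Circling back to the "ship the server, the rest is remote" edge model described above, here is a purely illustrative phone-home sketch of what a just-racked node might run on first boot; the control-plane URL, token, and payload shape are hypothetical and not Platform9's actual registration API.

```python
import json
import os
import socket
import uuid

import requests  # pip install requests

# Hypothetical control-plane endpoint and token, injected at imaging time.
CONTROL_PLANE = os.environ.get("CP_URL", "https://mgmt.example.com/api/v1/nodes/register")
TOKEN = os.environ.get("CP_TOKEN", "changeme")

payload = {
    "hostname": socket.gethostname(),
    "machine_id": str(uuid.getnode()),            # MAC-derived identifier
    "site": os.environ.get("SITE_ID", "store-001"),
}

try:
    resp = requests.post(
        CONTROL_PLANE,
        headers={"Authorization": f"Bearer {TOKEN}"},
        data=json.dumps(payload),
        timeout=10,
    )
    resp.raise_for_status()
    print("registered, provisioning plan:", resp.json())
except requests.RequestException as exc:
    # A real agent would retry with backoff until the control plane answers.
    print("registration failed:", exc)
```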
I know I've said it a hundred times, but if your entire operations team is trying to figure out the nuts and bolts of Kubernetes and getting three months into a journey and discovering, ah, I need Metrics Server to make something function. I want to use Horizontal Pod Autoscaler or Vertical Pod Autoscaler and I need this other thing, now I need to manage that. That's not the right path. That's literally learning what other people have been learning for the last five, seven years that have been focused on Kubernetes solely. So the why- >> There's been a lot of grind. People have been grinding it out. I mean, that's what you're talking about here. They've been standing up the, when Kubernetes started, it was all the promise. >> Chris: Yep. >> And essentially manually kind of getting in in the weeds and configuring it. Now it's matured up. They want stability. >> Chris: Yeah. >> Not everyone can get down and dirty with Kubernetes. It's not something that people want to generally do unless you're totally into it, right? Like I mean, I mean ops teams, I mean, yeah. You know what I mean? It's not like it's heavy lifting. Yeah, it's important. Just got to get it going. >> Yeah, I mean if you're deploying with Platform9, your Ops teams can tinker to their hearts content. We're completely compliant upstream Kubernetes. You can go and change an API server flag, let's go and mess with the scheduler, because we want to. You can still do that, but don't, don't have your team investing in all this time to figure it out. It's been figured out. >> John: Got it. >> Get them focused on enabling velocity for your business. >> So it's not build, but run. >> Chris: Correct? >> Or run Kubernetes, not necessarily figure out how to kind of get it all, consume it out. >> You know we've talked to a lot of customers out there that are saying, "I want to be able to deliver a service to my users." Our response is, "Cool, let us run it. You consume it, therefore deliver it." And we're solving that in one hit versus figuring out how to first run it, then operate it, then turn that into a consumable service. >> So the alternative Platform9 is what? They got to do it themselves or use the Cloud or what's the, what's the alternative for the customer for not using Platform9? Hiring more people to kind of work on it? What's the? >> People, building that kind of PaaS experience? Something that I've been very passionate about for the past year is looking at that world of sort of GitOps and what that means. And if you go out there and you sort of start asking the question what's happening? Just generally with Kubernetes as well and GitOps in that scope, then you'll hear some people saying, well, I'm making it PaaS, because Kubernetes is too complicated for my developers and we need to give them something. There's some great material out there from the likes of Intuit and Adobe where for two big contributors to Argo and the Argo projects, they almost have, well they do have, different experiences. One is saying, we went down the PaaS route and it failed. The other one is saying, well we've built a really stable PaaS and it's working. What are they trying to do? They're trying to deliver an outcome to make it easy to use and consume Kubernetes. So you could go out there and say, hey, I'm going to build a Kubernetes cluster. Sounds like Argo CD is a great way to expose that to my developers so they can use Kubernetes without having to use Kubernetes and start automating things. 
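Since Argo CD comes up here as the way to hand Kubernetes to developers without handing them Kubernetes, this is a minimal sketch of registering an Argo CD Application custom resource via the Kubernetes Python client, assuming Argo CD is installed in the argocd namespace; the repository URL, path, and target namespace are hypothetical.

```python
from kubernetes import client, config

config.load_kube_config()

app = {
    "apiVersion": "argoproj.io/v1alpha1",
    "kind": "Application",
    "metadata": {"name": "java-app", "namespace": "argocd"},
    "spec": {
        "project": "default",
        "source": {
            "repoURL": "https://git.example.com/acme/java-app.git",  # hypothetical repo
            "targetRevision": "main",
            "path": "deploy/overlays/prod",
        },
        "destination": {"server": "https://kubernetes.default.svc", "namespace": "prod"},
        # Automated sync keeps the cluster matching Git, including pruning and self-heal.
        "syncPolicy": {"automated": {"prune": True, "selfHeal": True}},
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="argoproj.io", version="v1alpha1", namespace="argocd",
    plural="applications", body=app,
)
```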
That is an approach, but you're going to be going completely open source and you're going to have to bring in all the individual components, or you could just lay that, lay it down, and consume it as a service and not have to- >> And you mentioned Intuit. They were the ones who kind of brought that into the open. >> They did. Intuit is the primary contributor to the Argo set of products. >> How has that been received in the market? I mean, they had the event at the Computer History Museum last fall. What's the momentum there? What's the big takeaway from that project? >> Growth. To me, growth. I mean go and track the stars on that one. It's just, it's growth. It's unlocking machine learning. Argo workflows can do more than just make things happen. Argo CD, I think the approach they're taking is, hey, let's make this simple to use, which I think can be lost. And I think credit where credit's due, they're really pushing to bring in a lot of capabilities to make it easier to work with applications and microservices on Kubernetes. It's not just that, hey, here's a GitOps tool. It can take something from a Git repo and deploy it and maybe prioritize it and help you scale your operations from that perspective. It's taking a step back and saying, well how did we get to production in the first place? And what can be done down there to help as well? I think it's growth, expansion of features. They had a huge release just come out in, I think it was 2.6, that brought in things that, as a product manager, I don't often look at, like really deep technical things, and say wow, that's powerful. But they have, they've got some great features in that release that really do solve real problems. >> And as the product, as the product person, who's the target buyer for you? Who's the customer? Who's making that? And you got decision maker, influencer, and recommender. Take us through the customer persona for you guys. >> So that Platform Ops, DevOps space, right, the people that need to be delivering Containers as a service out to their organization. But then it's also important to say, well who else are our primary users? And that's developers, engineers, right? They shouldn't have to say, oh well I have access to a Kubernetes cluster. Do I have to use kubectl or do I need to go find some other tool? No, they can just log in to Platform9. It's integrated with your enterprise ID. >> They're the end customer at the end of the day, they're the user. >> Yeah, yeah. They can log in. And they can see the clusters you've given them access to as a Platform Ops Administrator. >> So job well done for you guys. And in your mind, the developers are moving fast, coding and happy. >> Chris: Yeah, yeah. >> And from a customer standpoint, you reduce the maintenance cost, because you keep the Ops smoother, so you got efficiency and maintenance costs kind of reduced, or is that kind of the benefits? >> Yeah, yep, yeah. And at two o'clock in the morning when things go inevitably wrong, they're not there by themselves, and we're proactively working with them. >> And that's the uptime issue. >> That is the uptime issue. And Cloud doesn't solve that, right? Everyone has experienced that Clouds can go down, entire regions can go offline. That's happened to all Cloud providers. And what do you do then? Kubernetes isn't your recovery plan. It's part of it, right, but it's that piece. >> You know Chris, to wrap up this interview, I will say that "theCUBE" is 12 years old now. We've been to OpenStack early days.
We had you guys on when we were covering OpenStack and now Cloud has just been booming. You got AI around the corner, AI Ops, now you got all this new data infrastructure, it's just amazing Cloud growth, Cloud Native, Security Native, Cloud Native, Data Native, AI Native. It's going to be all, this is the new app environment, but there's also existing infrastructure. So going back to OpenStack, rolling our own cloud, building your own cloud, building infrastructure cloud, in a cloud way, is what the pioneers have done. I mean this is what we're at. Now we're at this scale next level, abstracted away and make it operational. It seems to be the key focus. We look at CNCF at KubeCon and what they're doing with the cloud SecurityCon, it's all about operations. >> Chris: Yep, right. >> Ops and you know, that's going to sound counterintuitive 'cause it's a developer open source environment, but you're starting to see that Ops focus in a good way. >> Chris: Yeah, yeah, yeah. >> Infrastructure as code way. >> Chris: Yep. >> What's your reaction to that? How would you summarize where we are in the industry relative to, am I getting, am I getting it right there? Is that the right view? What am I missing? What's the current state of the next level, NextGen infrastructure? >> It's a good question. When I think back to sort of late 2019, I sort of had this aha moment as I saw what really truly is delivering infrastructure as code happening at Platform9. There's an open source project Ironic, which is now also available within Kubernetes that is Metal Kubed that automates Bare Metal as code, which means you can go from an empty server, lay down your operating system, lay down Kubernetes, and you've just done everything delivered to your customer as code with a Cloud Native platform. That to me was sort of the biggest realization that I had as I was moving into this industry was, wait, it's there. This can be done. And the evolution of tooling and operations is getting to the point where that can be achieved and it's focused on by a number of different open source projects. Not just Ironic and and Metal Kubed, but that's a huge win. That is truly getting your infrastructure. >> John: That's an inflection point, really. >> Yeah. >> If you think about it, 'cause that's one of the problems. We had with the Bare Metal piece was the automation and also making it Cloud Ops, cloud operations. >> Right, yeah. I mean, one of the things that I think Ironic did really well was saying let's just treat that piece of Bare Metal like a Cloud VM or an instance. If you got a problem with it, just give the person using it or whatever's using it, a new one and reimage it. Just tell it to reimage itself and it'll just (snaps fingers) go. You can do self-service with it. In Platform9, if you log in to our SaaS Ironic, you can go and say, I want that physical server to myself, because I've got a giant workload, or let's turn it into a Kubernetes cluster. That whole thing is automated. To me that's infrastructure as code. I think one of the other important things that's happening at the same time is we're seeing GitOps, we're seeing things like Terraform. I think it's important for organizations to look at what they have and ask, am I using tools that are fit for tomorrow or am I using tools that are yesterday's tools to solve tomorrow's problems? And when especially it comes to modernizing infrastructure as code, I think that's a big piece to look at. >> Do you see Terraform as old or new? >> I see Terraform as old. 
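To ground the "Bare Metal as code" point about Ironic and Metal Kubed (Metal3), here is a sketch of declaring a BareMetalHost custom resource so a physical machine gets imaged and managed like any other Kubernetes object; it assumes the Metal3 operator is running, and the MAC address, BMC endpoint, and image URLs are placeholders.

```python
from kubernetes import client, config

config.load_kube_config()

bmh = {
    "apiVersion": "metal3.io/v1alpha1",
    "kind": "BareMetalHost",
    "metadata": {"name": "worker-07", "namespace": "metal3"},
    "spec": {
        "online": True,                                  # power the host on
        "bootMACAddress": "52:54:00:ab:cd:07",           # placeholder MAC
        "bmc": {
            "address": "ipmi://10.0.10.7",               # placeholder BMC endpoint
            "credentialsName": "worker-07-bmc-secret",   # Secret holding BMC credentials
        },
        "image": {
            "url": "http://images.example.com/ubuntu-22.04.qcow2",          # hypothetical
            "checksum": "http://images.example.com/ubuntu-22.04.qcow2.md5sum",
        },
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="metal3.io", version="v1alpha1", namespace="metal3",
    plural="baremetalhosts", body=bmh,
)
```

Reimaging then becomes a matter of patching the `spec.image` of that object, which is the "just give it a new one" self-service behavior described above.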
It's a fantastic tool, capable of many great things and it can work with basically every single provider out there on the planet. It is able to do things. Is it best fit to run in a GitOps methodology? I don't think it is quite at that point. In fact, if you went and looked at Flux, Flux has ways that make Terraform GitOps compliant, which is absolutely fantastic. It's using two tools, the best of breeds, which is solving that tomorrow problem with tomorrow solutions. >> Is the new solutions old versus new. I like this old way, new way. I mean, Terraform is not that old and it's been around for about eight years or so, whatever. But HashiCorp is doing a great job with that. I mean, so okay with Terraform, what's the new address? Is it more complex environments? Because Terraform made sense when you had basic DevOps, but now it sounds like there's a whole another level of complexity. >> I got to say. >> New tools. >> That kind of amalgamation of that application into infrastructure. Now my app team is paying way more attention to that manifest file, which is what GitOps is trying to solve. Let's templatize things. Let's version control our manifest, be it helm, customize, or just a straight up Kubernetes manifest file, plain and boring. Let's get that version controlled. Let's make sure that we know what is there, why it was changed. Let's get some auditability and things like that. And then let's get that deployment all automated. So that's predicated on the cluster existing. Well why can't we do the same thing with the cluster, the inception problem. So even if you're in public cloud, the question is like, well what's calling that API to call that thing to happen? Where is that file living? How well can I manage that in a large team? Oh my God, something just changed. Who changed it? Where is that file? And I think that's one of big, the big pieces to be sold. >> Yeah, and you talk about Edge too and on-premises. I think one of the things I'm observing and certainly when DevOps was rocking and rolling and infrastructures code was like the real push, it was pretty much the public cloud, right? >> Chris: Yep. >> And you did Cloud Native and you had stuff on-premises. Yeah you did some lifting and shifting in the cloud, but the cool stuff was going in the public cloud and you ran DevOps. Okay, now you got on-premise cloud operation and Edge. Is that the new DevOps? I mean 'cause what you're kind of getting at with old new, old new Terraform example is an interesting point, because you're pointing out potentially that that was good DevOps back in the day or it still is. >> Chris: It is, I was going to say. >> But depending on how you define what DevOps is. So if you say, I got the new DevOps with public on-premise and Edge, that's just not all public cloud, that's essentially distributed Cloud Native. >> Correct. Is that the new DevOps in your mind or is that? How would you, or is that oversimplifying it? >> Or is that that term where everyone's saying Platform Ops, right? Has it shifted? >> Well you bring up a good point about Terraform. I mean Terraform is well proven. People love it. It's got great use cases and now there seems to be new things happening. We call things like super cloud emerging, which is multicloud and abstraction layers. So you're starting to see stuff being abstracted away for the benefits of moving to the next level, so teams don't get stuck doing the same old thing. They can move on. 
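The GitOps idea in this exchange, version-controlled manifests continuously reconciled against the cluster, reduces to a loop like the following toy sketch; real tools such as Argo CD or Flux add proper drift detection, health checks, and auditing, and the repository URL and paths here are hypothetical.

```python
import subprocess
import time

REPO = "https://git.example.com/acme/cluster-config.git"  # hypothetical config repo
CLONE_DIR = "/var/lib/gitops/cluster-config"
INTERVAL = 60  # seconds between reconcile passes

def sync_once():
    # Pull the desired state from Git...
    subprocess.run(["git", "-C", CLONE_DIR, "pull", "--ff-only"], check=True)
    # ...and make the cluster match it. Apply is idempotent, so drift gets corrected.
    subprocess.run(["kubectl", "apply", "-k", CLONE_DIR], check=True)

if __name__ == "__main__":
    subprocess.run(["git", "clone", REPO, CLONE_DIR], check=False)  # no-op if it exists
    while True:
        try:
            sync_once()
        except subprocess.CalledProcessError as exc:
            print("sync failed, will retry:", exc)
        time.sleep(INTERVAL)
```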
Like what you guys are doing with Platform9 is providing a service so that teams don't have to do it. >> Correct, yeah. >> That makes a lot of sense, So you just, now it's running and then they move on to the next thing. >> Chris: Yeah, right. >> So what is that next thing? >> I think Edge is a big part of that next thing. The propensity for someone to put up with a delay, I think it's gone. For some reason, we've all become fairly short-tempered, Short fused. You know, I click the button, it should happen now, type people. And for better or worse, hopefully it gets better and we all become a bit more patient. But how do I get more effective and efficient at delivering that to that really demanding- >> I think you bring up a great point. I mean, it's not just people are getting short-tempered. I think it's more of applications are being deployed faster, security is more exposed if they don't see things quicker. You got data now infrastructure scaling up massively. So, there's a double-edged swords to scale. >> Chris: Yeah, yeah. I mean, maintenance, downtime, uptime, security. So yeah, I think there's a tension around, and one hand enthusiasm around pushing a lot of code and new apps. But is the confidence truly there? It's interesting one little, (snaps finger) supply chain software, look at Container Security for instance. >> Yeah, yeah. It's big. I mean it was codified. >> Do you agree that people, that's kind of an issue right now. >> Yeah, and it was, I mean even the supply chain has been codified by the US federal government saying there's things we need to improve. We don't want to see software being a point of vulnerability, and software includes that whole process of getting it to a running point. >> It's funny you mentioned remote and one of the thing things that you're passionate about, certainly Edge has to be remote. You don't want to roll a truck or labor at the Edge. But I was doing a conversation with, at Rebars last year about space. It's hard to do brake fix on space. It's hard to do a, to roll a someone to configure satellite, right? Right? >> Chris: Yeah. >> So Kubernetes is in space. We're seeing a lot of Cloud Native stuff in apps, in space, so just an example. This highlights the fact that it's got to be automated. Is there a machine learning AI angle with all this ChatGPT talk going on? You see all the AI going the next level. Some pretty cool stuff and it's only, I know it's the beginning, but I've heard people using some of the new machine learning, large language models, large foundational models in areas I've never heard of. Machine learning and data centers, machine learning and configuration management, a lot of different ways. How do you see as the product person, you incorporating the AI piece into the products for Platform9? >> I think that's a lot about looking at the telemetry and the information that we get back and to use one of those like old idle terms, that continuous improvement loop to feed it back in. And I think that's really where machine learning to start with comes into effect. As we run across all these customers, our system that helps at two o'clock in the morning has that telemetry, it's got that data. We can see what's changing and what's happening. So it's writing the right algorithms, creating the right machine learning to- >> So training will work for you guys. You have enough data and the telemetry to do get that training data. 
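As a toy example of the telemetry-driven loop being described, not Platform9's actual algorithms, here is a simple rolling-window outlier check of the kind that could flag a misbehaving metric before the 2:00am page.

```python
from statistics import mean, stdev

def detect_anomalies(samples, window=30, threshold=3.0):
    """Flag points that sit more than `threshold` standard deviations
    away from the trailing window's mean."""
    alerts = []
    for i in range(window, len(samples)):
        history = samples[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma and abs(samples[i] - mu) > threshold * sigma:
            alerts.append((i, samples[i]))
    return alerts

# Toy data: API-server latency in milliseconds, with one bad sample at the end.
latency_ms = [12, 13, 11, 12, 14, 13, 12, 11, 13, 12] * 4 + [95]
print(detect_anomalies(latency_ms, window=20))
```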
>> Yeah, obviously there's a lot of investment required to get there, but that is something that ultimately that could be achieved with what we see in operating people's environments. >> Great. Chris, great to have you here in the studio. Going wide ranging conversation on Kubernetes and Platform9. I guess my final question would be how do you look at the next five years out there? Because you got to run the product management, you got to have that 20 mile steer, you got to look at the customers, you got to look at what's going on in the engineering and you got to kind of have that arc. This is the right path kind of view. What's the five year arc look like for you guys? How do you see this playing out? 'Cause KubeCon is coming up and we're you seeing Kubernetes kind of break away with security? They had, they didn't call it KubeCon Security, they call it CloudNativeSecurityCon, they just had in Seattle inaugural events seemed to go well. So security is kind of breaking out and you got Kubernetes. It's getting bigger. Certainly not going away, but what's your five year arc of of how Platform9 and Kubernetes and Ops evolve? >> It's to stay on that theme, it's focusing on what is most important to our users and getting them to a point where they can just consume it, so they're not having to operate it. So it's finding those big items and bringing that into our platform. It's something that's consumable, that's just taken care of, that's tested with each release. So it's simplifying operations more and more. We've always said freedom in cloud computing. Well we started on, we started on OpenStack and made that simple. Stable, easy, you just have it, it works. We're doing that with Kubernetes. We're expanding out that user, right, we're saying bring your developers in, they can download their Kube conflict. They can see those Containers that are running there. They can access the events, the log files. They can log in and build a VM using KubeVirt. They're self servicing. So it's alleviating pressures off of the Ops team, removing the help desk systems that people still seem to rely on. So it's like what comes into that field that is the next biggest issue? Is it things like CI/CD? Is it simplifying GitOps? Is it bringing in security capabilities to talk to that? Or is that a piece that is a best of breed? Is there a reason that it's been spun out to its own conference? Is this something that deserves a focus that should be a specialized capability instead of tooling and vendors that we work with, that we partner with, that could be brought in as a service. I think it's looking at those trends and making sure that what we bring in has the biggest impact to our users. >> That's awesome. Thanks for coming in. I'll give you the last word. Put a plug in for Platform9 for the people who are watching. What should they know about Platform9 that they might not know about it or what should? When should they call you guys and when should they engage? Take a take a minute to give the plug. >> The plug. I think it's, if your operations team is focused on building Kubernetes, stop. That shouldn't be the cloud. That shouldn't be in the Edge, that shouldn't be at the data center. They should be consuming it. If your engineering teams are all trying different ways and doing different things to use and consume Cloud Native services and Kubernetes, they shouldn't be. You want consistency. That's how you get economies of scale. 
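The developer self-service flow Chris sketches (download a kubeconfig, see your containers, events, and logs) looks roughly like this with the official Kubernetes Python client; the namespace and pod name are hypothetical.

```python
from kubernetes import client, config

# The developer loads the kubeconfig scoped to the clusters they were given access to.
config.load_kube_config()
v1 = client.CoreV1Api()

ns = "team-payments"  # hypothetical team namespace

# What is running?
for pod in v1.list_namespaced_pod(ns).items:
    print(pod.metadata.name, pod.status.phase)

# What happened recently?
for ev in v1.list_namespaced_event(ns).items:
    print(ev.last_timestamp, ev.reason, ev.message)

# Tail the logs of one workload.
print(v1.read_namespaced_pod_log(name="payments-api-0", namespace=ns, tail_lines=50))
```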
Provide them with a simple platform that's integrated with all of your enterprise identity where they can just start consuming instead of having to solve these problems themselves. It's those, it's those two personas, right? Where the problems manifest. What are my operations teams doing, and are they delivering to my company or are they building infrastructure again? And are my engineers sprinting or crawling? 'Cause if they're not sprinting, you should be asking the question, do I have the right Cloud Native tooling in my environment and how can I get them back? >> I think it's developer productivity, uptime, security are the tell signs. You get that done. That's the goal of what you guys are doing, your mission. >> Chris: Yep. >> Great to have you on, Chris. Thanks for coming on. Appreciate it. >> Chris: Thanks very much. >> Okay, this is "theCUBE" here, finding the right path to Cloud Native. I'm John Furrier, host of "theCUBE." Thanks for watching. (upbeat music)

Published Date : Feb 17 2023


Amir Khan & Atif Khan, Alkira | Supercloud2


 

(lively music) >> Hello, everyone. Welcome back to the Supercloud presentation here. I'm theCUBE, I'm John Furrier, your host. What a great segment here. We're going to unpack the networking aspect of the cloud, how that translates into what Supercloud architecture and platform deployment scenarios look like. And demystify multi-cloud, hybridcloud. We've got two great experts. Amir Khan, the Co-Founder and CEO of Alkira, Atif Khan, Co-Founder and CTO of Alkira. These guys been around since 2018 with the startup, but before that story, history in the tech industry. I mean, routing early days, multiple waves, multiple cycles. >> Welcome three decades. >> Welcome to Supercloud. >> Thanks. >> Thanks for coming on. >> Thank you so much for having us. >> So, let's get your take on Supercloud because it's been one of those conversations that really galvanized the industry because it kind of highlights almost this next wave, this next side of the street that everyone's going to be on that's going to be successful. The laggards on the legacy seem to be stuck on the old model. SaaS is growing up, it's ISVs, it's ecosystems, hyperscale, full hybrid. And then multi-cloud around the corners cause all this confusion, everyone's hand waving. You know, this is a solution, that solution, where are we? What do you guys see as this supercloud dynamic? >> So where we start from is always focusing on the customer problem. And in 2018 when we identified the problem, we saw that there were multiple clouds with many diverse ways of doing things from the network perspective, and customers were struggling with that. So we delved deeper into that and looked at each one of the cloud architectures completely independent. And there was no common solution and customers were struggling with that from the perspective. They wanted to be in multiple clouds, either through mergers and acquisitions or running an application which may be more cost effective to run in something or maybe optimized for certain reasons to run in a different cloud. But from the networking perspective, everything needed to come together. So that's, we are starting to define it as a supercloud now, but basically, it's a common infrastructure across all clouds. And then integration of high lift services like, you know, security or IPAM services or many other types of services like inter-partner routing and stuff like that. So, Amir, you agree then that multi-cloud is simply a default result of having whatever outcomes, either M&A, some productivity software, maybe Azure. >> Yes. >> Amazon has this and then I've got on-premise application, so it's kinds mishmash. >> So, I would qualify it with hybrid multi-cloud because everything is going to be interconnected. >> John: Got it. >> Whether it's on-premise, remote users or clouds. >> But have CTO perspective, obviously, you got developers, multiple stacks, got AWS, Azure and GCP, other. Not everyone wants to kind of like go all in, but yet they don't want to hedge too much because it's a resource issue. And I got to learn this stack, I got to learn that stack. So then now, you have this default multi-cloud, hybrid multi-cloud, then it's like, okay, what do I do? How do you spread that around? Is it dangerous? What's the the approach technically? What's some of the challenges there? >> Yeah, certainly. John, first, thanks for having us here. 
So, before I get to that, I'll just add a little bit to what Amir was saying, like how we started, what we were seeing and how it, you know, correlates with the supercloud. So, as you know, before this company, Alkira, we were doing, we did the SD-WAN company, which was Viptela. So there, we started seeing when people started deploying SD-WAN at like a larger scale. We started like, you know, customers coming to us and saying they needed connectivity into the cloud from the SD-WAN. They wanted to extend the SD-WAN fabric to the cloud. So we came up with an architecture, which was like later we started calling them Cloud onRamps, where we built, you know, a transit VPC and put like the virtual instances of SD-WAN appliances extended from there to the cloud. But before we knew, like it started becoming very complicated for the customers because it wasn't just connectivity, it also required, you know, other use cases. You had to instantiate or bring in security appliances in there. You had to secure all of that stuff. There were requirements for, you know, different regions. So you had to bring up the same thing in different regions. Then multiple clouds, what did you do? You had to replicate the same thing in multiple clouds. And now if there was was requirement between clouds, how were you going to do it? You had to route traffic from somewhere, and come up with all those routing controls and stuff. So, it was very complicated. >> Like spaghetti code, but on network. >> The games begin, in fact, one of our customers called it spaghetti mess. And so, that's where like we thought about where was the industry going and which direction the industry was going into? And we came up with the Alkira where what we are doing is building a common infrastructure across multiple clouds, across in, you know, on-prem locations, be it data centers or physical sites, branches sites, et cetera, with integrated security and network networking services inside. And, you know, nowadays, networking is not only about connectivity, you have to secure everything. So, security has to be built in. Redundancy, high availability, disaster recovery. So all of that needs to be built in. So that's like, you know, kind of a definition of like what we thought at that time, what is turning into supercloud now. >> Yeah. It's interesting too, you mentioned, you know, VPCs is not, configuration of loans a hassle. Nevermind the manual mistakes could be made, but as you decide to do something you got to, "Oh, we got to get these other things." A lot of the hyper scales and a lot of the alpha cloud players now, and cloud native folks, they're kind of in that mode of, "Wow, look at what we've built." Now, they're got to maintain, how do I refresh it? Like, how do I keep the talent? So they got this similar chaotic environment where it's like, okay, now they're already already through, so I think they're going to be okay. But then some people want to bypass it completely. So there's a lot of customers that we see out there that fit the makeup of, I'm cloud first, I've lifted and shifted, I move some stuff to the cloud. But I want to bypass all that learnings from all the people that are gone through the past three years. Can I just skip that and go to a multi-cloud or coherent infrastructure? What do you think about that? What's your view? >> So yeah, so if you look at these enterprises, you know, many of them just to find like the talent, which for one cloud as far as the IT staff is concerned, it's hard enough. 
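To illustrate why the cloud-onramp approach became, in the customer's words, a spaghetti mess, here is a hedged sketch of just the AWS slice of that plumbing using boto3; the VPC and subnet IDs are placeholders, and the comments call out everything that would still have to be repeated per region and re-implemented per cloud.

```python
import boto3

# One region of one cloud; the same plumbing has to be repeated per region and
# rebuilt, with a different API and different constructs, for every other cloud.
ec2 = boto3.client("ec2", region_name="us-east-1")

tgw = ec2.create_transit_gateway(Description="sdwan-transit")["TransitGateway"]
tgw_id = tgw["TransitGatewayId"]

# Attach each workload VPC to the transit hub (IDs are placeholders).
for vpc_id, subnet_ids in {
    "vpc-0aaa": ["subnet-0a1", "subnet-0a2"],
    "vpc-0bbb": ["subnet-0b1", "subnet-0b2"],
}.items():
    ec2.create_transit_gateway_vpc_attachment(
        TransitGatewayId=tgw_id,
        VpcId=vpc_id,
        SubnetIds=subnet_ids,
    )

# Still to do by hand: routes in every VPC route table, the virtual SD-WAN and
# firewall appliances in a transit VPC, and then all of the above again in the
# next region and the next cloud. That repetition is the spaghetti.
```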
And now, when you have multiple clouds, it's hard to find people the talent which is, you know, which has expertise across different clouds. So that's where we come into the picture. So our vision was always to simplify all of this stuff. And simplification, it cannot be just simplification because you cannot just automate the workflows of the cloud providers underneath. So you have to, you know, provide your full data plane on top of it, fed full control plane, management plane, policy and management on top of it. And coming back to like your question, so these nowadays, those people who are working on networking, you know, before it used to be like CLI. You used to learn about Cisco CLI or Juniper CLI, and you used to work on it. Nowadays, it's very different. So automation, programmability, all of that stuff is the key. So now, you know, Ops guys, the DevOps guys, so these are the people who are in high demand. >> So what do you think about the folks out there that are saying, okay, you got a lot of fragmentation. I got the stacks, I got a lot of stove pipes, if you will, out there on the stack. I got to learn this from Azure. Can you guys have with your product abstract the way that's so developers don't need to know the ins and outs of stack's, almost like a gateway, if you will, the old days. But like I'm a developer or team develop, why should I have to learn the management layer of Azure? >> That's exactly what we started, you know, out with to solve. So it's, what we have built is a platform and the platform sits inside the cloud. And customers are able to build their own network or a virtual network on top using that platform. So the platform has its own data plane, own control plane and management plane with a policy layer on top of it. So now, it's the platform which is sitting in different clouds, but from a customer's point of view, it's one way of doing networking. One way of instantiating or bringing in services or security services in the middle. Whether those are our security services or whether those are like services from our partners, like Palo Alto or Checkpoint or Cisco. >> So you guys brought the SD-WAN mojo and refactored it for the cloud it sounds like. >> No. >> No? (chuckles) >> We cannot said. >> All right, explain. >> It's way more than that. >> I mean, SD-WAN was wan. I mean, you're talking about wide area networks, talking about connected, so explain the difference. >> SD-WAN was primarily done for one major reason. MPLS was expensive, very strong SLAs, but very low speed. Internet, on the other hand, you sat at home and you could access your applications much faster. No SLA, very low cost, right? So we wanted to marry the two together so you could have a purely private infrastructure and a public infrastructure and secure both of them by creating a common secure fabric across all those environments. And then seamlessly tying it into your internal branch and data center and cloud network. So, it merely brought you to the edge of the cloud. It didn't do anything inside the cloud. Now, the major problem resides inside the clouds where you have to optimize the clouds themselves. Take a step back. How were the clouds built? Basically, the cloud providers went to the Ciscos and Junipers and the rest of the world, built the network in the data centers or across wide area infrastructure, and brought it all together and tried to create a virtualized layer on top of that. But there were many limitations of this underlying infrastructure that they had built. 
So number of routes per region, how inter region connectivity worked, or how many routes you could carry to the VPCs of V nets? That all those were becoming no common policy across, you know, these environments, no segmentation across these environments, right? So the networking constructs that the enterprise customers were used to as enterprise class carry class capabilities, they did not exist in the cloud. So what did the customer do? They ended up stitching it together all manually. And that's why Atif was alluding to earlier that it became a spaghetti mess for the customers. And then what happens is, as a result, day two operations, you know, troubleshooting, everything becomes a nightmare. So what do you do? You have to build an infrastructure inside the cloud. Cloud has enough raw capabilities to build the solutions inside there. Netflix's of the world. And many different companies have been born in the cloud and evolved from there. So why could we not take the raw capabilities of the clouds and build a network cloud or a supercloud on top of these clouds to optimize the whole infrastructure and seamlessly connecting it into the on-premise and remote user locations, right? So that's your, you know, hybrid multi-cloud solution. >> Well, great call out on the SD-WAN in common versus cloud. 'Cause I think this is important because you're building a network layer in the cloud that spans out so the customers don't have to get into the, there's a gap in the system that I'm used to, my operating environment, of having lockdown security and network. >> So yeah. So what you do is you use the raw capabilities like bandwidth or virtual machines, or you know, containers, or, you know, different types of serverless capabilities. And you bring it all together in a way to solve the networking problems, thereby creating a supercloud, which is an abstraction layer which hides all the complexity of the underlying clouds from the customer, right? And it provides a common infrastructure across all environments to that customer, right? That's the beauty of it. And it does it in a way that it looks like, if they have the networking knowledge, they can apply it to this new environment and carry it forward. One way of doing security across all clouds and hybrid environments. One way of doing routing. One way of doing large-scale network address translation. One way of doing IPAM services. So people are tired of doing individual things and individual clouds and on-premise locations, right? So now they're getting something common. >> You guys brought that, you brought all that to bear and flexible for the customer to essentially self-serve their network cloud. >> Yes, yeah. Is that the wave? >> And nowadays, from business perspective, agility is the key, right? You have to move at the pace of the business. If you don't, you are losing. >> So, would it be safe to say that you guys have a network supercloud? >> Absolutely, yeah. >> We, pretty much, yeah. Absolutely. >> What does that mean to our customer? What's in it for them? What's the benefit to the customer? I got a network supercloud, it connects, provides SLA, all the capabilities I need. What do they get? What's the end point for them? What's the end? >> Atif, maybe you can talk some examples. >> The IT infrastructure is all like distributed now, right? So you have applications running in data centers. You have applications running in one cloud. Other cloud, public clouds, enterprises are depending on so many SaaS applications. 
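A stripped-down way to picture the abstraction layer being described, one policy and one workflow presented to the user while per-cloud adapters absorb each provider's constructs and limits, is the following hypothetical sketch; it is illustrative only and not Alkira's actual design or API.

```python
from abc import ABC, abstractmethod

class NetworkSegment:
    def __init__(self, name: str, cidr: str):
        self.name, self.cidr = name, cidr

class CloudBackend(ABC):
    """One adapter per provider hides that cloud's constructs and limits."""
    @abstractmethod
    def create_segment(self, segment: NetworkSegment) -> str: ...
    @abstractmethod
    def attach_policy(self, segment_id: str, policy: dict) -> None: ...

class AwsBackend(CloudBackend):
    def create_segment(self, segment):
        # Would translate to VPCs, route tables, and TGW attachments via boto3.
        return f"aws::{segment.name}"
    def attach_policy(self, segment_id, policy):
        print("programming AWS security groups / NACLs for", segment_id)

class AzureBackend(CloudBackend):
    def create_segment(self, segment):
        # Would translate to a VNet plus user-defined routes via the Azure SDK.
        return f"azure::{segment.name}"
    def attach_policy(self, segment_id, policy):
        print("programming Azure NSGs for", segment_id)

def provision_everywhere(segment, policy, backends):
    """One call, one policy, applied the same way in every cloud."""
    for backend in backends:
        seg_id = backend.create_segment(segment)
        backend.attach_policy(seg_id, policy)

provision_everywhere(
    NetworkSegment("prod-payments", "10.20.0.0/16"),
    {"allow": ["443/tcp"], "deny": ["*"]},
    [AwsBackend(), AzureBackend()],
)
```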
So now, these are, you can call these endpoints. So a supercloud or a network cloud, from our perspective, it's a cloud in the middle or a network in the middle, which provides connectivity from any endpoint to any endpoint. So, you are able to connect to the supercloud or network cloud in one way no matter where you are. So now, whichever cloud you are in, whichever cloud you need to connect to. And also, it's not just connecting to the cloud. So you need to do a lot of stuff, a lot of networking inside the cloud also. So now, as Amir was saying, every cloud has its own from a networking, you know, the concept perspective or the construct, they are different. There are limitations in there also. So this supercloud, which is sitting on top, basically, your platform is sitting into the cloud, but the supercloud is built on top of using your platform. So that abstracts all those complexities, all those limitations. So now your limitations are whatever the limitations of that platform are. So now your platform, that platform is in our control. So we can keep building it, we can keep scaling it horizontally. Because one of the things is that, you know, in this cloud era, one of the things is autoscaling these services. So why can't the network now autoscale also, just like your other services. >> Network autoscaling is a genius idea, and I think that's a killer. I want to ask the the follow on question because I think, first of all, I love what you guys are doing. So, I think it's a great example of this new innovation. It's not obvious until you see it, right? Geographical is huge. So, you know, single instance, global instances, multiple instances, you're seeing global. How do you guys look at that global equation? Because as companies expand their clouds into geos, and then ultimately, you know, it's obviously continent, region and locales. You're going to have geographic issues. So, this is an extension of your network cloud? >> Amir: It is the extension of the network cloud because if you look at this hyperscalers, they're sitting pretty much everywhere in the globe. So, wherever their regions are, the beauty of building a supercloud is that you can by definition, be available in those regions. It literally takes a day or two of testing for our stack to run in those regions, to make sure there are no nuances that we run into, you know, for that region. The moment we bring it up in that region, all customers can onboard into that solution. So literally, what used to take months or years to build a global infrastructure, now, you can configure it in 10 minutes basically, and bring it up in less than one hour. Since when did we see any solution- >> And by the way, >> that can come up with. >> when the edge comes out too, you're going to start to see more clouds get bolted on. >> Exactly. And you can expand to the edge of the network. That's why we call cloud the new edge, right? >> John: Yeah, it is. Now, I think you guys got a good solutions, network clouds, superclouds, good. So the question on the premise side, so I get the cloud play. It's very cool. You can expand out. It's a nice layer. I'm sure you manage the SLAs between latency and all kinds of things. Knowing when not to do things. Physics or physics. Okay. Now, you've got the on-premise. What's the on-premise equation look like? >> So on-premise, the kind of customers, we are working with large enterprises, mid-size enterprises. So they have on-prem networks, they have deployed, in many cases, they have deployed SD-WAN. 
In many cases, they have MPLS. They have data centers also. And a lot of these companies are, you know, moving the applications from the data center into the cloud. But we still have large enterprise- >> But for you guys, you can sit there too with non server or is it a box or what is it? >> It's a software stack, right? So, we are a software company. >> Okay, so no box. >> No box. >> Okay, got it. >> No box. >> It's even better. So, we can connect any, as I mentioned, any endpoint, whether it's data centers. So, what happens is usually these enterprises from the data centers- >> John: It's a cloud endpoint for you. >> Cloud endpoint for us. And they need highspeed connectivity into the cloud. And our network cloud is sitting inside the or supercloud is sitting inside the cloud. So we need highspeed connectivity from the data centers. This is like multi-gig type of connectivity. So we enable that connectivity as a service. And as Amir was saying, you are able to bring it up in minutes, pretty much. >> John: Well, you guys have a great handle on supercloud. I really appreciate you guys coming on. I have to ask you guys, since you have so much experience in the industry, multiple inflection points you've guys lived through and we're all old, and we can remember those glory days. What's the big deal going on right now? Because you can connect the dots and you can imagine, okay, like a Lambda function spinning up some connectivity. I need instant access to a new route, throw some, I need to send compute to an edge point for process data. A lot of these kind of ad hoc services are going to start flying around, which used to be manually configured as you guys remember. >> Amir: And that's been the problem, right? The shadow IT, that was the biggest problem in the enterprise environment. So that's what we are trying to get the customers away from. Cloud teams came in, individuals or small groups of people spun up instances in the cloud. It was completely disconnected from the on-premise environment or the existing IT environment that the customer had. So, how do you bring it together? And that's what we are trying to solve for, right? At a large scale, in a carrier cloud center (indistinct). >> What do you call that? Shift right or shift left? Shift left is in the cloud native world security. >> Amir: Yes. >> Networking and security, the two hottest areas. What are you shifting? Up or down? I mean, the network's moving up the stack. I mean, you're seeing the run times at Kubernetes later' >> Amir: Right, right. It's true we're end-to-end virtualization. So you have plumbing, which is the physical infrastructure. Then on top of that, now for the first time, you have true end-to-end virtualization, which the cloud-like constructs are providing to us. We tried to virtualize the routers, we try to virtualize instances at the server level. Now, we are bringing it all together in a truly end-to-end virtualized manner to connect any endpoint anywhere across the globe. Whether it's on-premise, home, multiple clouds, or SaaS type environments. >> Yeah. If you talk about the technical benefits beyond virtualizations, you kind of see in virtualization be abstracted away. So you got end-to-end virtualization, but you don't need to know virtualization to take advantage of it. >> Exactly. Exactly. >> What are some of the tech involved where, what's the trend around on top of virtual? What's the easy button for that? 
>> So there are many, many use cases from the customers and they're, you know, some of those use cases, they used to deliver out of their data centers before. So now, because you, know, it takes a long time to spend something up in the data center and stuff. So the trend is and what enterprises are looking for is agility. And to achieve that agility, they are moving those services or those use cases into the cloud. So another technical benefit of like something like a supercloud and what we are doing is we allow customers to, you know, move their services from existing data centers into the cloud as well. And I'll give you some examples. You know, these enterprises have, you know, tons of partners. They provide connectivity to their partners, to select resources. It used to happen inside the data center. You would bring in connectivity into the data center and apply like tons of ACLs and whatnot to make sure that you are able to only connect. And now those use cases are, they need to be enabled inside the cloud. And the customer's customers are also, it's not just coming from the on-prem, they're coming from the cloud as well. So, if they're coming from the cloud as well as from on-prem, so you need like an infrastructure like supercloud, which is sitting inside the cloud and is able to handle all these use cases. So all of these use cases have to be, so that requires like moving those services from the data center into the cloud or into the supercloud. So, they're, oh, as we started building this service over the last four years, we have come across so many use cases. And to deliver those use cases, you have to have a platform. So you have to have your own platform because otherwise you are depending on somebody else's, you know, capabilities. And every time their capabilities change, you have to change. >> John: I'm glad you brought up the platform 'cause I want to get your both reaction to this. So Bob Muglia just said on theCUBE here at Supercloud, that supercloud is a platform that provides programmatically consistent services hosted on heterogeneous cloud providers. So the question is, is supercloud a platform or an architecture in your view? >> That's an interesting view on things, you know? I mean, if you think of it, you have to design or architect a solution before we turn it into a platform. >> John: It's a trick question actually. >> So it's a, you know, so we look at it as that you have to have an architectural approach end to end, right? And then you build a solution based on that approach. So, I don't think that they are mutually exclusive. I think they go hand in hand. It's an architecture that you turn into a solution and provide that agility and high availability and disaster recovery capability that it built into that. >> It's interesting that these definitions might be actually redefined with this new configuration. >> Amir: Yes. >> Because architecture and platform used to mean something, like, aight here's a platform, you buy this platform. >> And then you architecture solution. >> Architect it via vendor. >> Right, right, right. >> Okay. And they have to deal with that architecture in the place of multiple superclouds. If you have too many stove pipes, then what's the purpose of supercloud? >> Right, right, right. And because, you know, historically, you built a router and you sold it to the customer. And the poor customer was supposed to install it all, you know, and interconnect all those things. 
And if you have 40, 50,000 router network, which we saw in our lifetime, 'cause there used to be many more branches when we were growing up in the networking industry, right? You had to create hierarchy and all kinds of things to figure out how to solve that problem. We are no longer living in that world anymore. You cannot deploy individual virtual instances. And that's what approach a lot of people are taking, which is a pure overly network. You cannot take that approach anymore. You have to evolve the architecture and then build the solution based on that architecture so that it becomes a platform which is readily available, highly scalable, and available. And at the same time, it's very, very easy to deploy. It's a SaaS type solution, right? >> So you're saying, do the architecture to get the solution for the platform that the customer has. >> Amir: Yes. >> They're not buying a platform, they end up with a platform- >> With the platform. >> as a result of Supercloud path. All right. So that's what's, so you mentioned, that's a great point. I want to double click on what you just said. 'Cause I like that what you said. What's the deployment strategy in your mind for supercloud? I'm an architect. I'm at an enterprise in the Midwest. I'm an insurance company, got some cloud action going on. I'm mostly on-premise. I've got the mandate to transform the company. We have apps. We'll be fully transformed in five years. What's my strategy? What do I do? >> Amir: The resources. >> What's the deployment strategy? Single global instance, code in every region, on every cloud? >> It needs to be a solution which is available as a SaaS service, right? So from the customer's perspective, they are onboarding into the supercloud. And then the supercloud is allowing them to do whatever they used to do, you know, historically and in the new world, right? That needs to come together. And that's what we have built is that, we have brought everything together in a way that what used to take months or years, and now taking an hour or two hours, and then people test it for a week or so and deploy it in production. >> I want to bring up something we were talking about before we were on camera about the TCP/IP, the OSI model. That was a concept that destroyed the proprietary narcissist. Work operating systems of the mini computers, which brought in an era of tech prosperity for generations. TCP/IP was kind of the magical moment that allowed for that kind of super networking connection. Inter networking is what's called as a category. It feels like something's going on here with supercloud. The way you describe it, it feels like there's this unification idea. Like the reality is we've got multiple stuff sitting around by default, you either clean it up or get rid of it, right? Or it's almost a, it's either a nuance, a new nuisance or chaos. >> Yeah. And we live in the new world now. We don't have the luxury of time. So we need to move as fast as possible to solve the business problems. And that's what we are running into. If we don't have automated solutions which scale, which solve our problems, then it's going to be a problem. And that's why SaaS is so important in today's world. Why should we have to deploy the network piecemeal? Why can't we have a solution? We solve our problem as we move forward and we accomplish what we need to accomplish and move forward. >> And we don't really need standards here, dude. It's not that we need a standards body if you have unification. 
>> So because things move so fast, there's no time to create a standards body. And that's why you see companies like ours popping up, which are trying to create a common infrastructure across all clouds. Otherwise if we vent the standardization path may take long. Eventually, we should be going in that direction. But we don't have the luxury of time. That's what I was trying to get to. >> Well, what's interesting is, is that to your point about standards and ratification, what ratifies a defacto anything? In the old days there was some technical bodies involved, but here, I think developers drive everything. So if you look at the developers and how they're voting with their code. They're instantly, organically defining everything as a collective intelligence. >> And just like you're putting out the paper and making it available, everybody's contributing to that. That's why you need to have APIs and terra form type constructs, which are available so that the customers can continue to improve upon that. And that's the Net DevOps, right? So that you need to have. >> What was once sacrilege, just sayin', in business school, back in the days when I got my business degree after my CS degree was, you know, no one wants to have a better mousetrap, a bad business model to have a better mouse trap. In this case, the better mouse trap, the better solution actually could be that thing. >> It is that thing. >> I mean, that can trigger, tips over the industry. >> And that that's where we are seeing our customers. You know, I mean, we have some publicly referenceable customers like Coke or Warner Music Group or, you know, multiple others and chart industries. The way we are solving the problem. They have some of the largest environments in the industry from the cloud perspective. And their whole network infrastructure is running on the Alkira infrastructure. And they're able to adopt new clouds within days rather than waiting for months to architect and then deploy and then figure out how to manage it and operate it. It's available as a service. >> John: And we've heard from your customer, Warner, they were just on the program. >> Amir: Yes. Okay, okay. >> So they're building a supercloud. So superclouds aren't just for tech companies. >> Amir: No. >> You guys build a supercloud for networking. >> Amir: It is. >> But people are building their own superclouds on top of all this new stuff. Talk about that dynamic. >> Healthcare providers, financials, high-tech companies, even startups. One of our startup customers, Tekion, right? They have these dealerships that they provide sales and support services to across the globe. And for them to be able to onboard those dealerships, it is 80% less time to production. That is real money, right? So, maybe Atif can give you a lot more examples of customers who are deploying. >> Talk about some of the customer activity. What are they like? Are they laggards, they innovators? Are they trying to hit the easy button? Are they coming in late or are you got some high customers? >> Actually most of our customers, all of our customers or customers in general. I don't think they have a choice but to move in this direction because, you know, the cloud has, like everything is quick now. So the cloud teams are moving faster in these enterprises. So now that they cannot afford the network nor to keep up pace with the cloud teams. 
So, they don't have a choice but to go with something similar where you can, you know, build your network on demand and bring up your network as quickly as possible to meet all those use cases. So, I'll give you an example. >> John: So the demand's high for what you guys do. >> Demand is very high because the cloud teams have- >> John: Yeah. They're going fast. >> They're going fast and there's no stopping. And then network teams, they have to keep up with them. And you cannot keep deploying, you know, networks the way you used to deploy back in the day. And as far as the use cases are concerned, there are so many use cases which our customers are using our platform for. One of the use cases, I'll give you an example of these financial customers. Some of the financial customers, they have their customers who they provide data, like stock exchanges, that provide like market data information to their customers out of data centers part. But now, their customers are moving into the cloud as well. So they need to come in from the cloud. So when they're coming in from the cloud, you cannot be giving them data from your data center because that takes time, and your hair pinning everything back. >> Moving data is like moving, moving money, someone said. >> Exactly. >> Exactly. And the other thing is like you have to optimize your traffic flows in the cloud as well because every time you leave the cloud, you get charged a lot. So, you don't want to leave the cloud unless you have to leave the cloud, your traffic. So, you have to come up or use a service which allows you to optimize all those traffic flows as well, you know? >> My final question to you guys, first of all, thanks for coming on Supercloud Program. Really appreciate it. Congratulations on your success. And you guys have a great positioning and I'm a big fan. And I have to ask, you guys are agile, nimble startup, smart on the cutting edge. Supercloud concept seems to resonate with people who are kind of on the front range of this major wave. While all the incumbents like Cisco, Microsoft, even AWS, they're like, I think they're looking at it, like what is that? I think it's coming up really fast, this trend. Because I know people talk about multi-cloud, I get that. But like, this whole supercloud is not just SaaS, it's more going on there. What do you think is going on between the folks who get it, supercloud, get the concept, and some are who are scratching their heads, whether it's the Ciscos or someone, like I don't get it. Why is supercloud important for the folks that aren't really seeing it? >> So first of all, I mean, the customers, what we saw about six months, 12 months ago, were a little slower to adopt the supercloud kind of concept. And there were leading edge customers who were coming and adopting it. Now, all of a sudden, over the last six to nine months, we've seen a flurry of customers coming in and they are from all disciplines or all very diverse set of customers. And they're starting to see the value of that because of the practical implications of what they're doing. You know, these shadow IT type environments are no longer working and there's a lot of pressure from the management to move faster. And then that's where they're coming in. And perhaps, Atif, if you can give a few examples of. >> Yeah. And I'll also just add to your point earlier about the network needing to be there 'cause the cloud teams are like, let's go faster. And the network's always been slow because, but now, it's been almost turbocharged. 
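Atif's point about egress charges and hairpinning can be put in rough numbers. The per-gigabyte rate and monthly volume below are assumptions chosen only to illustrate the shape of the trade-off, not published cloud pricing.

# Hairpinning cloud-resident consumers back through a data center versus
# serving them from inside the cloud. Rates and volumes are assumptions.

EGRESS_RATE_PER_GB = 0.09      # assumed $/GB for data leaving the cloud
MONTHLY_FEED_GB = 50_000       # assumed market-data volume per month

# Hairpin path: the feed leaves the cloud to reach the data center, then is
# delivered back out to cloud-resident consumers, so egress is paid along the way.
hairpin_cost = MONTHLY_FEED_GB * EGRESS_RATE_PER_GB

# Keeping the exchange inside the cloud (or a fabric that optimizes the flow)
# avoids most of that metered egress and the extra round trip.
in_cloud_cost = 0.0

print(f"Hairpinned delivery, assumed egress cost per month: ${hairpin_cost:,.0f}")
print(f"In-cloud delivery, assumed egress cost per month: ${in_cloud_cost:,.0f}")

The latency penalty of the extra round trip compounds the cost, which is why the traffic-flow optimization described above matters.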
>> Atif: Yeah. Yeah, exactly. And as I said, like there was no choice here. You had to move in this industry. And the other thing I would add a little bit is now if you look at all these enterprises, most of their traffic is from, even from which is coming from the on-prem, it's going to the cloud SaaS applications or public clouds. And it's more than 50% of traffic, which is leaving your, you know, what you used to call, your network or the private network. So now it's like, you know, before it used to just connect sites to data centers and sites together. Now, it's a cloud as well as the SaaS application. So it's either internet bound or the public cloud bound. So now you have to build a network quickly, which caters to all these use cases. And that's where like something- >> And you guys, your solution to me is you eliminate all that work for the customer. Now, they can treat the cloud like a bag of Legos. And do their thing. Well, I oversimplify. Well, you know I'm talking about. >> Atif: Right, exactly. >> And to answer your question earlier about what about the big companies coming in and, you know, now they slow to adopt? And, you know, what normally happens is when Cisco came up, right? There used to be 16 different protocols suites. And then we finally settled on TCP/IP and DECnet or AppleTalk or X&S or, you know, you name it, right? Those companies did not adapt to the networking the way it was supposed to be done. And guess what happened, right? So if the companies in the networking space do not adopt this new concept or new way of doing things, I think some of them will become extinct over time. >> Well, I think the force and function too is the cloud teams as well. So you got two evolutions. You got architectural relevance. That's real as impact. >> It's very important. >> Cost, speed. >> And I look at it as a very similar disruption to what Cisco's the world, very early days did to, you know, bring the networking out, right? And it became the internet. But now we are going through the cloud. It's the cloud era, right? How does the cloud evolve over the next 10, 15, 20 years? Everything's is going to be offered as a service, right? So slowly data centers go away, the network becomes a plumbing thing. Very, you know, simple to deploy. And everything on top of that is virtualized in the cloud-like manners. >> And that makes the networks hardened and more secure. >> More secure. >> It's a great way to be secure. You remember the glory days, we'll go back 15 years. The Cisco conversation was, we got to move up to stack. All the manager would fight each other. Now, what does that actually mean? Stay where we are. Stay in your lane. This is kind of like the network's version of moving up the stack because not so much up the stack, but the cloud is everywhere. It's almost horizontally scaled. >> It's extending into the on-premise. It is already moving towards the edge, right? So, you will see a lot- >> So, programmability is a big program. So you guys are hitting programmability, compatibility, getting people into an environment they're comfortable operating. So the Ops people love it. >> Exactly. >> Spans the clouds to a level of SLA management. It might not be perfectly spanning applications, but you can actually know latencies between clouds, measure that. And then so you're basically managing your network now as the overall infrastructure. >> Right. And it needs to be a very intelligent infrastructure going forward, right? 
Because customers do not want to wait to be able to troubleshoot. They don't want to be able to wait to deploy something, right? So, it needs to be a level of automation. >> Okay. So the question for you guys both on we'll end on is what is the enablement that, because you guys are a disruptive enabler, right? You create this fabric. You're going to enable companies to do stuff. What are some of the things that you see and your customers might be seeing as things that they're going to do as a result of having this enablement? So what are some of those things? >> Amir: Atif, perhaps you can talk through the some of the customer experience on that. >> It's agility. And we are allowing these customers to move very, very quickly and build these networks which meet all these requirements inside the cloud. Because as Amir was saying, in the cloud era, networking is changing. And if you look at, you know, going back to your comment about the existing networking vendors. Some of them still think that, you know, just connecting to the cloud using some concepts like Cloud OnRamp is cloud networking, but it's changing now. >> John: 'Cause there's apps that are depending upon. >> Exactly. And it's all distributed. Like IT infrastructure, as I said earlier, is all distributed. And at the end of the day, you have to make sure that wherever your user is, wherever your app is, you are able to connect them securely. >> Historically, it used to be about building a router bigger and bigger and bigger and bigger, you know, and then interconnecting those routers. Now, it's all about horizontal scale. You don't need to build big, you need to scale it, right? And that's what cloud brings to the customer. >> It's a cultural change for Cisco and Juniper because they have to understand that they're still could be in the game and still win. >> Exactly. >> The question I have for you, what are your customers telling you that, what's some of the anecdotal, like, 'cause you guys have a good solution, is it, "Oh my god, you guys saved my butt." Or what are some of the commentary that you hear from the customers in terms of praise and and glory from your solution? >> Oh, some even say, when we do our demo and stuff, they say it's too hard to believe. >> Believe. >> Like, too hard. It's hard, you know, it's >> I dont believe you. They're skeptics. >> I don't believe you that because now you're able to bring up a global network within minutes. With networking services, like let's say you have APAC, you know, on-prem users, cloud also there, cloud here, users here, you can bring up a global network with full routed connectivity between all these endpoints with security services. You can bring up like a firewall from a third party or our services in the middle. This is a matter of minutes now. And this is all high speed connectivity with SLAs. Imagine like before connecting, you know, Singapore to U.S. East or Hong Kong to Frankfurt, you know, if you were putting your infrastructure in columns like E-connects, you would have to go, you know, figure out like, how am I going to- >> Seal line In, connect to it? Yeah. A lot of hassles, >> If you had to put like firewalls in the middle, segmentation, you had to, you know, isolate different entities. >> That's called heavy lifting. >> So what you're seeing is, you know, it's like customer comes in, there's a disbelief, can you really do that? And then they try it out, they go, "Wow, this works." Right? It's deployed in a small environment. 
And then all of a sudden they start taking off, right? And literally we have seen customers go from a few-thousand-dollars-a-month-or-a-year type of deployment to multi-million-dollars-a-year type of deployment in a very, very short amount of time, in a few months. >> And you guys are pay as you go? >> Pay as you go. >> Pay as you go, usage-based, cloud-based compatibility. >> Exactly. And it's amazing once they get to deploy the solution. >> What's the variable on the cost? >> On the cost? >> Is it traffic, or what is it? >> It's multiple different things. It's packaged into the overall solution. And as a matter of fact, we end up saving the customers a lot of money, and not only in one way, in multiple different ways. And we do a complete TCO analysis for the customers. So it's bandwidth, it's the number of connections, it's the amount of compute power that we are using. >> John: Similar things that they're used to. >> Just like the cloud constructs. Yeah. >> All right. Networking supercloud. Great. Congratulations. >> Thank you so much. >> Thanks for coming on Supercloud. >> Atif: Thank you. >> And looking forward to seeing more of the demand. Translate that into instant networking. I'm sure it's going to be huge with the edge exploding. >> Oh yeah, yeah, yeah, yeah. >> Congratulations. >> Thank you so much. >> Thank you so much. >> Okay. So this is the Supercloud 2 event here in Palo Alto. I'm John Furrier. The network supercloud is here. Check out Alkira. I'm John Furrier, the host. Thanks for watching. (lively music)
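The "global network in minutes" example Atif gives earlier — on-prem users in APAC, workloads in two clouds, a third-party firewall inserted in the path — comes down to declaring endpoints and services rather than building boxes. Alkira's actual API is not shown in this conversation, so every class and field in the sketch below is a hypothetical illustration of that declarative, as-a-service pattern, not their real SDK.

# Hypothetical sketch of declaring a global network as a service.
# None of these names come from a real SDK; they only illustrate the pattern.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Endpoint:
    name: str
    kind: str      # "vpc", "vnet", "data_center", or "remote_users"
    region: str

@dataclass
class ServiceInsertion:
    name: str
    vendor: str                 # e.g. a third-party firewall
    applies_to: List[str]       # endpoint names whose traffic passes through it

@dataclass
class GlobalNetwork:
    name: str
    endpoints: List[Endpoint] = field(default_factory=list)
    services: List[ServiceInsertion] = field(default_factory=list)

network = GlobalNetwork(
    name="acme-global",
    endpoints=[
        Endpoint("apac-onprem", "data_center", "ap-southeast"),
        Endpoint("us-east-workloads", "vpc", "us-east"),
        Endpoint("frankfurt-workloads", "vnet", "eu-central"),
        Endpoint("remote-staff", "remote_users", "global"),
    ],
    services=[
        ServiceInsertion("edge-firewall", "third-party", ["apac-onprem", "us-east-workloads"]),
    ],
)

print(f"{network.name}: {len(network.endpoints)} endpoints, {len(network.services)} inserted services")
# A platform delivered as a service would take a definition like this and bring
# up the routed, segmented fabric on demand, instead of months of manual builds.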

Published Date : Feb 17 2023


Brian Stevens, Neural Magic | Cube Conversation


 

>> John: Hello and welcome to this cube conversation here in Palo Alto, California. I'm John Furrier, host of theCUBE. We got a great conversation on making machine learning easier and more affordable in an era where everybody wants more machine learning and AI. We're featuring Neural Magic, whose CEO is also a Cube alumni, Brian Stevens. Great to see you, Brian. Thanks for coming on this cube conversation. Talk about machine learning. >> Brian: Hey John, happy to be here again. >> John: What a buzz going on right now. Machine learning, one of the hottest topics, AI front and center, kind of going mainstream. We're seeing the success of the kind of NextGen capabilities in the enterprise and in apps. It's a really exciting time. So perfect timing. Great, great to have this conversation. Let's start with taking a minute to explain what you guys are doing over there at Neural Magic. I know there's some history there, neural networks, MIT. But the convergence of what's going on, this big wave hitting, it's an exciting time for you guys. Take a minute to explain the company and your mission. >> Brian: Sure, sure, sure. So, as you said, the company's Neural Magic, and it spun out of MIT four plus years ago, along with some people and some intellectual property. And you summarized it better than I can 'cause you said, we're just trying to make, you know, AI that much easier. But another level of specificity around it is, you know, in the world you have a lot of data scientists really focusing on making AI work for whatever their use case is. And then the next phase of that, they're looking at optimizing the models that they built. And then it's not good enough just to work on models. You got to put 'em into production. So, what we do is we make it easier to optimize the models that have been developed and trained, and then we try to make it super simple when it comes time to deploying those in production and managing them. >> John: You know, we've seen this movie before with the cloud. You start to see abstractions come out. Data science was like the secret art of being a data scientist; now there's democratization of data. You're kind of seeing a similar wave with machine learning models, foundational models, some call it, and developers are getting involved. Model complexity's still there, but it's getting easier. There's almost like a democratization happening. You got complexity, you got deployment challenges, cost, you got developers involved. So it's like, how do you grow it? How do you get more horsepower? And then how do you make developers productive, right? So like, this seems to be the thread. So where do you see this going? Because there's going to be a massive demand for, I want to do more with my machine learning. But what's the data source? What's the formatting? This kind of a stack develops. What are you guys doing to address this? Can you take us through and demystify this wave that's hitting, that everyone's seeing? >> Brian: Yeah. Now like you said, like, you know, the democratization of all of it. And that brings me all the way back to the roots of open source, right? When you think about it, back in the day you had to build your own tech stack yourself. A lot of people probably don't remember that. And then, when you were building, you were always starting on a body of code or a module that was out there with open source.
And I think that's what I equate to where AI has gotten to with what you were talking about the foundational models that didn't really exist years ago. So you really were like putting the layers of your models together in the formulas and it was a lot of heavy lifting. And so there was so much time spent on development. With far too few success cases, you know, to get into production to solve like a business stereo technical need. But as these, what's happening is as these models are becoming foundational. It's meaning people don't have to start from scratch. They're actually able to, you know, the avant-garde now is start with existing model that almost does what you want, but then applying your data set to it. So it's, you know, it's really the industry moving forward. And then we, you know, and, and the best thing about it is open source plays a new dimension, but this time, you know, in the, in the realm of AI. And so to us though, like, you know, I've been like, I spent a career focusing on, I think on like the, not just the technical side, but the consumption of the technology and how it's still way too hard for somebody to actually like, operationalize technology that all those vendors throw at them. So I've always been like empathetic the user around like, you know what their job is once you give them great technology. And so it's still too difficult even with the foundational models because what happens is there's really this impedance mismatch between the development of the model and then where, where the model has to live and run and be deployed and the life cycle of the model, if you will. And so what we've done in our research is we've developed techniques to introduce what's known as sparsity into a machine learning model. It's already been developed and trained. And what that sparsity does is that unlocks by making that model so much smaller. So in many cases we can make a model 90 to 95% smaller, even smaller than that in research. So, and, and so by doing that, we do that in a way that preserves all the accuracy out of the foundational model as you talked about. So now all of a sudden you get this much smaller model just as accurate. And then the even more exciting part about it is we developed a software-based engine called Deep Source. And what that, what the Inference Runtime does is takes that now sparsified model and it runs it, but because you sparsified it, it only needs a fraction of the compute that it, that it would've needed otherwise. So what we've done is make these models much faster, much smaller, and then by pairing that with an inference runtime, you now can actually deploy that model anywhere you want on commodity hardware, right? So X 86 in the cloud, X 86 in the data center arm at the edge, it's like this massive unlock that happens because you get the, the state-of-the-art models, but you get 'em, you know, on the IT assets and the commodity infrastructure. That is where all the applications are running today. >> John: I want to get into the inference piece and the deep sparse you mentioned, but I first have to ask, you mentioned open source, Dave and I with some fellow cube alumnis. We're having a chat about, you know, the iPhone and Android moment where you got proprietary versus open source. You got a similar thing happening with some of these machine learning modules where there's a lot of proprietary things happening and there's open source movement is growing. So is there a balance there? Are they all trying to do the same thing? 
Is it more like a chip, you know, silicons involved, all kinds of things going on that are really fascinating from a science. What's your, what's your reaction to that? >> Brian: I think it's like anything that, you know, the way we talk about AI you think had been around for decades, but the reality is it's been some of the deep learning models. When we first, when we first started taking models that the brain team was working on at Google and billing APIs around them on Google Cloud where the first cloud to even have AI services was 2015, 2016. So when you think about it, it's really been what, 6 years since like this thing is even getting lift off. So I think with that, everybody's throwing everything at it. You know, there's tons of funded hardware thrown at specialty for training or inference new companies. There's legacy companies that are getting into like AI now and whether it's a, you know, a CPU company that's now building specialized ASEX for training. There's new tech stacks proprietary software and there's a ton of asset service. So it really is, you know, what's gone from nascent 8 years ago is the wild, wild west out there. So there's a, there's a little bit of everything right now and I think that makes sense because at the early part of any industry it really becomes really specialized. And that's the, you know, showing my age of like, you know, the early pilot of the two thousands, you know, red Hat people weren't running X 86 in enterprise back then and they thought it was a toy and they certainly weren't running open source, but you really, and it made sense that they weren't because it didn't deliver what they needed to at that time. So they needed specialty stacks, they needed expensive, they needed expensive hardware that did what an Oracle database needed to do. They needed proprietary software. But what happens is that commoditizes through both hardware and through open source and the same thing's really just starting with with AI. >> John: Yeah. And I think that's a great point before we to call that out because in any industry timing's everything, right? I mean I remember back in the 80s, late 80s and 90s, AI, you know, stuff was going on and it just wasn't, there wasn't enough horsepower, there wasn't enough tech. >> Brian: Yep. >> John: You mentioned some of the processing. So AI is this industry that has all these experts who have been itch scratching that itch for decades. And now with cloud and custom silicon. The tech fundamental at the lower end of the stack, if you will, on the performance side is significantly more performant. It's there you got more capabilities. >> Brian: Yeah. >> John: Now you're kicking into more software, faster software. So it just seems like we're at a tipping point where finally it's here, like that AI moment or machine learning and now data is, is involved. So this is where organizations I see really jumping in with the CEO mandate. Hey team, make ML work for us. Go figure it out. It's got to be an advantage for us. >> Brian: Yeah. >> John: So now they go, okay boss, we will. So what, what do they do? What's the steps does an enterprise take to get machine learning into their organizations? Cause you know, it's coming down from the boards, you know, how does this work for rob? >> Brian: Yeah. Like the, you know, the, what we're seeing is it's like anything, like it's, whether that was source adoption or whether that was cloud adoption, it always starts usually with one person. 
And increasingly it is the CEO, which realizes they're getting further behind the competition because they're not leaning in, you know, faster. But typically it really comes down to like a really strong practitioner that's inside the organization, right? And, that realizes that the number one goal isn't doing more and just training more models and and necessarily being proprietary about it. It's really around understanding the art of the possible. Something that's grounded in the art of the possible, what, what deep learning can do today and what business outcomes you can deliver, you know, if you can employ. And then there's well proven paths through that. It's just that because of where it's been, it's not that industrialized today. It's very much, you know, you see ML project by ML project is very snowflakey, right? And that was kind of the early days of open source as well. And so, we're just starting to get to the point where it's getting easier, it's getting more industrialized, there's less steps, there's less burdensome on developers, there's less burdensome on, on the deployment side. And we're trying to bring that, that whole last mile by saying, you know what? Deploying deep learning and AI models should be as easy as the as to deploy your application, right? You shouldn't have to take an extra step to deploy an AI model. It shouldn't have to require a new hardware, it shouldn't require a new process, a new DevOps model. It should be as simple as what you're already doing. >> John: What is the best practice for companies to effectively bring an acceptable level of machine learning and performance into their organizations? >> Brian: Yeah, I think like the, the number one start is like what you hinted at before is they, they have to know the use case. They have to, in most cases, you're going to find across every industry you know, that that problem's been tackled by some company, right? And then you have to have the best practice around fine-tuning the models already exist. So fine tuning that existing model. That foundational model on your unique dataset. You, you know, if you are in medical instruments, it's not good enough to identify that it's a medical instrument in the picture. You got to know what type of medical instrument. So there's always a fine tuning step. And so we've created open source tools that make it easy for you to do two things at once. You can fine tune that existing foundational model, whether that's in the language space or whether that's in the vision space. You can fine tune that on your dataset. And at the same time you get an optimized model that comes out the other end. So you get kind of both things. So you, you no longer have to worry about you're, we're freeing you from worrying about the complexity of that transfer learning, if you will. And we're freeing you from worrying about, well where am I going to deploy the model? Where does it need to be? Does it need to be on a device, an edge, a data center, a cloud edge? What kind of hardware is it? Is there enough hardware there? We're liberating you from all of that. Because what you want, what you can count on is there'll always be commodity capability, commodity CPUs where you want to deploy in abundance cause that's where your application is. And so all of a sudden we're just freeing you of that, of that whole step. >> John: Okay. Let's get into deep sparse because you mentioned that earlier. 
What inspired the creation of DeepSparse, and how does it differ from any other solutions in the market that are out there? >> Brian: Sure. So, where is it unique? It starts with two things. One is, what the industry's pretty good at from the optimization side is this thing called quantization, which turns, you know, big numbers into small numbers, lower precision. So a 32-bit representation of an AI weight into 8 bits. And they're good at cutting out layers, which also takes away accuracy. What we've figured out is to take those industry techniques that are best practice, but we combined them with unstructured sparsity. So by reducing that model by 90 to 95% in size, that's great because it's made it smaller. But we've taken it further: the DeepSparse engine, when you deploy it, looks at that model and says, because it's so much smaller, I no longer have to run the part of the model that's been essentially sparsified. So what that's done is, it's meant that you no longer need a supercomputer to run models because there's not nearly as much math and processing as there was before the model was optimized. So now what happens is, every CPU platform out there has an enormous amount of compute, because we've sparsified the rest of it away. So you can pick your laptop and you have enough compute to run state-of-the-art models. The second thing is, you need a software engine to do that, 'cause it ignores the parts of the model it doesn't need to run, which is what specialized hardware can't do. The second part is, it's then turned into a memory efficiency problem. So it's really around just getting the models loaded into the cache of the computer and keeping it there, never having to go back out to memory. So our techniques are both: we reduce the model size, and then we only run the part of the model that matters, and then we keep it all in cache. And what that does is it gets us to these low latencies, faster, and we're able to increase, you know, the CPU processing by an order of magnitude. >> John: Yeah. That low latency is key. And you got developers, you know, coding super fast. We'll get to the developer angle in a second. I want to just follow up on this motivation behind DeepSparse, because you know, as we were talking earlier before we came on camera about the old days, I mean, not too long ago, virtualization and VMware abstracted away the OS from the hardware, right? And that server virtualization changed the game. >> Brian: Yeah. >> John: And that basically invented cloud computing as we know it today. So we see that abstraction. >> Brian: Yeah. >> John: There seems to be a motivation behind abstracting the machine learning models away from the hardware. And that seems to be bringing advantages to AI growth. Can you elaborate, is that true? What's your comment? >> Brian: It's true. I think it's true for us. I don't think the industry's there yet, honestly. 'Cause I think the industry still is of that mindset that if it took these expensive GPUs to train my model, then I want to run my model on those same expensive GPUs. Because there's often not a separation between the people that are developing AI and the people that have to manage it and deploy it where you need it. So the reality is that that's everything that we're after. Like, do we decrease the cost? Yes. Do we make the models smaller? Yes.
Do we make them faster? A yes. But I think the most amazing power is that we've turned AI into a docker based microservice. And so like who in the industry wants to deploy their apps the old way on a os without virtualization, without docker, without Kubernetes, without microservices, without service mesh without serverless. You want all those tools for your apps by converting AI models. So they can be run inside a docker container with no apologies around latency and performance cause it's faster. You get the best of that whole world that you just talked about, which is, you know, what we're calling, you know, software delivered AI. So now the AI lives in the same world. Organizations that have gone through that digital cloud transformation with their app infrastructure. AI fits into that world. >> John: And this is where the abstraction concepts matter. When you have these inflection points, the convergence of compute data, machine learning that powers AI, it really becomes a developer opportunity. Because now applications and businesses, when they actually go through the digital transformation, their businesses are completely transformed. There is no IT. Developers are the application. They are the company, right? So AI will be part of whatever business or app will be out there. So there is a application developer angle here. Brian, can you explain >> Brian: Oh completely. >> John: how they're going to use this? Because you mentioned docker container microservice, I mean this really is an insane flipping of the script for developers. >> Brian: Yeah. >> John: So what's that look like? >> Brian: Well speak, it's because like AI's kind of, I mean, again, like it's come so fast. So you figure there's my app team and here's my AI team, right? And they're in different places and the AI team is dragging in specialized infrastructure in support of that as well. And that's not how app developers think. Like they've ran on fungible infrastructure that subtracted and virtualized forever, right? And so what we've done is we've, in addition to fitting into that world that they, that they like, we've also made it simple for them for they don't have to be a machine learning engineer to be able to experiment with these foundational models and transfer learning 'em. We've done that. So they can do that in a couple of commands and it has a simple API that they can either link to their application directly as a library to make difference calls or they can stand it up as a standalone, you know, scale up, scale out inference server. They get two choices. But it really fits into that, you know, you know that world that the modern developer, whether they're just using Python or C or otherwise, we made it just simple. So as opposed to like Go learn something else, they kind of don't have to. So in a way though, it's made it. It's almost made it hard because people expect when we talk to 'em for the first time to be the old way. Like, how do you look like a piece of hardware? Are you compatible with my existing hardware that runs ML? Like, no, we're, we're not. Because you don't need that stack anymore. All you need is a library called to make your prediction and that's it. That's it. >> John: Well, I mean, we were joking on Twitter the other day with someone saying, is AI a pet or a cattle? Right? Because they love their, their AI bots right now. So, so I'd say pet there. But you look at a lot of, there's going to be a lot of AI. 
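Brian's "one library call to create the model, one call to predict" description maps to only a few lines of Python. The deepsparse package name comes up later in this conversation; the Pipeline-style calls below are a plausible sketch of that two-call pattern under those assumptions, not a verified copy of Neural Magic's API.

# Sketch of the two-call pattern described in the conversation: create the
# optimized model once, then call it for predictions like any other library.
# Module and function names are assumptions based on the interview.

# pip install deepsparse

from deepsparse import Pipeline  # assumed entry point

# Call 1: create/load the already-trained, sparsified model.
pipeline = Pipeline.create(
    task="text-classification",   # assumed task identifier
    model_path="./model.onnx",    # assumed path to the optimized model file
)

# Call 2: make a prediction from ordinary application code.
result = pipeline(["Latency dropped after we switched to the sparse model."])
print(result)

# The same model could instead run as a standalone, scale-out inference server
# behind a REST endpoint, which is the second deployment mode mentioned above.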
So on a more serious note, you mentioned in microservices, will deep sparse have an API for developers? And how does that look like? What do I do? >> Brian: Yeah. >> John: tell me what my, as a developer, what's the roadmap look like? What's the >> Brian: Yeah, it, it really looks, it really can go in both modes. It can go in a standalone server mode where it handles, you know, rest API and it can scale out with ES as the workload comes up and scale back and like try to make hardware do that. Hardware may scale back, but it's just sitting there dormant, you know, so with this, it scales the same way your application needs to. And then for a developer, they basically just, they just, the PIP install de sparse, you know, has one commanded to do an install, and then they do two calls, really. The first call is a library call that the app makes to create the model. And models really already trained, but they, it's called a model create call. And the second command they do is they make a call to do a prediction. And it's as simple as that. So it's, it's AI's as simple as using any other library that the developers are already using, which I, which sounds hard to fathom because it is just so simplified. >> John: Software delivered AI. Okay, that's a cool thing. I believe in it personally. I think that's the way to go. I think there's going to be plenty of hardware options if you look at the advances of cloud players that got more silicon coming out. Yeah. More GPU. I mean, there's more instance, I mean, everything's out there right now. So the question is how does that evolve in your mind? Because that's seems to be key. You have open source projects emerging. What, what path does this take? Is there a parallel mental model that you see, Brian, that is similar? You mentioned open source earlier. Is it more like a VMware virtualization thing or is it more of a cloud thing? Is there Yeah. Is it going to evolve in a, in a trajectory that looks similar to what we might've seen in the past? >> Brian: Yeah, we're, you know, when I, when when I got involved with the company, what I, when I thought about it and I was reasoning about it, like, do you, you know, you want to, like, we all do when you want to join something full-time. I thought about it and said, where will the industry eventually get to? Right? To fully realize the value of, of deep learning and what's plausible as it evolves. And to me, like I, I know it's the old adage of, you know, you know, software, its hardware, cloudy software. But it truly was like, you know, we can solve these problems in software. Like there's nothing special that's happening at the hardware layer and the processing AI. The reality is that it's just early in the industry. So the view that that we had was like, this is eventually the best place where the industry will be, is the liberation of being able to run AI anywhere. Like you're really not democratizing, you democratize the model. But if you can't run the model anywhere you want because these models are getting bigger and bigger with these large language models, then you're kind of not democratizing. And if you got to go and like by a cluster to run this thing on. So the democratization comes by if all of a sudden that model can be consumed anywhere on demand without planning, without provisioning, wherever infrastructure is. And so I think that's with or without Neural Magic, that's where the industry will go and will get to. I think we're the leaders, leaders in getting it there. 
That's right, because we're more advanced on these techniques. >> John: Yeah. And your background, too. You've seen OpenStack, pre-cloud. You saw open source grow, and it's still growing exponentially. And so you have a similar dynamic with machine learning models growing. And they're also segmenting into almost an ML stack, or foundational models, as we talk about. So you're starting to see the formation of tooling and inference. A lot of components coming. It's almost a stack. It literally is like an operating system problem space, you know? How do you run things, how do you link things, how do you bring things together? Is that what's going on here? Is this like a data modeling operating environment, a kind of Red Hat type thing going on? >> Brian: Yeah, yeah. I thought about that too. And I think there is a role for, like, distribution, because the industrialization of this isn't happening fast enough. Every customer, every user does it in their own kind of way. Everyone's a little bit of a snowflake, and I think that's okay. There are definitely plenty of companies that want to come in and say, well, this is the way it's going to be, and we'll industrialize it as long as you do it our way. The reality is, technology doesn't get industrialized by one company just saying, do it our way. And so that's why we've taken the approach through open source, by saying, hey, you haven't really industrialized it if you've said, we made it simple, but you always have to run AI here. Right? You only really industrialize it if you break it down into components that are simple to use and that work integrated into the stack the way you want them to. And so, to me, that first principle was getting things into microservices and Docker containers that could be run on VMware, on OpenShift, on the cloud, at the edge. And so that's the real part that's happening. The other part, and I do agree, I think it's going to quickly move into being less about the model. Less about the training of the model and the transfer learning, you know, the dataset of the model. We're taking away the complexity of optimization and liberating deployment to be anywhere. And I think the last mile, John, is going to be around the MLOps around that. Because now that we've turned it into a software problem, it's easy to think of software as kind of a point release, but that's not the reality, right? It's a lifecycle. And so I think ML very much brings in the question of, what is the lifecycle of that deployment? And, to be honest, once you've deployed in a Docker container, you get into more interesting conversations, around model drift, and accuracy, and the dataset changing, and the users changing, and how, from an ML perspective, you send that signal back for retraining. And that's where I think more of the innovation is going to start to move. >> John: Yeah. And the software problem, the software opportunity as well, is developer focused. And if you look at the cloud native landscape now, similar stacks are developing. A lot of components, a lot of things to stitch together, a lot of things automating under the hood, a lot of developer productivity conversations. I think this is going to go down that same road.
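As a footnote to the MLOps point above, here is a toy sketch of the kind of drift signal Brian alludes to: compare recent prediction confidences against a training-time baseline and flag the model for retraining when they diverge. The mean-shift statistic and the threshold are illustrative assumptions, not a prescribed recipe.

```python
# Toy drift check: flag a deployed model for retraining when live prediction
# confidences drift away from a training-time baseline. The mean-shift statistic
# and the 0.1 threshold are illustrative assumptions, not a prescribed recipe.
from statistics import mean

def drift_score(baseline: list[float], recent: list[float]) -> float:
    # Simple proxy for drift: absolute shift in mean confidence between windows.
    return abs(mean(baseline) - mean(recent))

def needs_retraining(baseline: list[float], recent: list[float],
                     threshold: float = 0.1) -> bool:
    return drift_score(baseline, recent) > threshold

# Example: confidences logged at validation time vs. from live traffic.
baseline_conf = [0.91, 0.88, 0.93, 0.90]
live_conf = [0.72, 0.70, 0.75, 0.69]
if needs_retraining(baseline_conf, live_conf):
    print("Drift detected: send the signal back to the training pipeline.")
```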
I want to get your thoughts, because developers will set the pace. And this is something that's clear in this next wave of developer productivity. They're the de facto standards bodies. They will decide what: microservices, check; API, check. Now, the skill gap is going to be a problem because it's relatively new. So, model sprawl, model sizes, proprietary versus open. There has to be a way to kind of crunch that down, like a DevOps, just make it simple, get the developer out of the muck. So what's your view? Are we in early days like that? Or, the young kid in college studying CS or whatever degree, who comes into this with both feet, what are they doing? >> Brian: I'll probably give the unpopular answer to that, a little bit, which is that it's happening so fast that it's going to get kind of boring fast. Meaning, yeah, you could go to school and go to MIT, right? Sorry. And you could go the whole route of becoming a model architect, inventing the next model, right? And the layers, and combining them, et cetera, et cetera, and the operators, and building a model that's bigger than the last one and trains faster, right? And there will be those people, right, who are building the engines, the same way, you know, I grew up as an infrastructure software developer. There are not a lot of companies that hire those anymore, because they're all sitting inside of three big clouds. Yeah, right? So you'd better be a good app developer. But I think what you're going to see is, before, you had to be everything. If you were going to use infrastructure, you had to know how to build infrastructure. And I think the same thing is true around ML, and it's quickly exiting: to be able to use ML in your company, you had to be great at every aspect of ML, including every intricacy inside the model and every operation it's doing. That's quickly changing. You're going to start with a starting point. You know, in the future you're not going to be cracking open these GPT models, you're going to just be pulling them off the shelf, fine-tuning them, and go. You don't have to invent it. You don't have to understand it. And I think that's going to be a pivot point in the industry between, you know, what does the future of a data scientist, an ML engineer, a researcher look like? >> John: I think that's where the outcome's going to be determined. I mean, you mentioned doing it yourself, what an SRE is for a Google, where the server scale is huge. So yeah, it might, at the beginning, get boring, and you get obsolete quickly, but that means it's progressing. The scale becomes huge. And that's where I think it's going to be interesting, when we see that scale. >> Brian: Yep. Yeah, I think that's right. I think that's right. And what I've always said, and again, the directive to my ML team, is that I want every developer to be as adept at taking advantage of ML as a non-ML engineer, right? It's got to be that simple. And I think it's getting there. I really do. >> John: Well, Brian, great to have you on theCUBE here on this Cube Conversation, as part of the startup showcase that's coming up. You're going to be featured, or your company will be featured, on the upcoming AWS startup showcase on making machine learning easier and more affordable as more machine learning models come in. You guys have DeepSparse and some great technology.
We're going to dig into that next time. I'll give you the final word right now. What do you see for the company? What are you guys looking for? Give a plug for the company right now. >> Brian: Oh, give a plug that I haven't already doubled in as the plug. >> John: You're hiring engineers, I assume, from MIT and other places. >> Brian: Yep. I think the biggest thing is, we're on the developer side. We're here to make this easy. The majority of inference today is on CPUs already, believe it or not, as much as we like to talk about hardware and specialized hardware. The majority is already on CPUs. We're basically bringing 95% cost savings to CPUs through this acceleration. But we're trying to do it in a way that makes it community first. So I think the shout-out would be, come find the Neural Magic community and engage with us, and you'll find, you know, a thousand other like-minded people in Slack who are willing to help you, as well as our engineers. And let's go take on some successful AI deployments. >> John: Exciting times. This is, I think, one of the pivotal moments: next-gen data, machine learning, and now starting to see AI not be that chatbot, just, you know, customer support or some basic natural language processing thing. You're starting to see real innovation. Brian Stevens, CEO of Neural Magic, bringing the magic here. Thanks for the time. Great conversation. >> Brian: Thanks, John. >> John: Thanks for joining me. >> Brian: Cheers. Thank you. >> John: Okay. I'm John Furrier, host of theCUBE, here in Palo Alto, California, for this Cube Conversation with Brian Stevens. Thanks for watching.

Published Date : Feb 13 2023


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
John | PERSON | 0.99+
Brian | PERSON | 0.99+
Brian Stevens | PERSON | 0.99+
Dave | PERSON | 0.99+
95% | QUANTITY | 0.99+
2015 | DATE | 0.99+
John Furrier | PERSON | 0.99+
90 | QUANTITY | 0.99+
2016 | DATE | 0.99+
32 bit | QUANTITY | 0.99+
Neural Magic | ORGANIZATION | 0.99+
Brian Steve | PERSON | 0.99+
Neural Magic | ORGANIZATION | 0.99+
Google | ORGANIZATION | 0.99+
two calls | QUANTITY | 0.99+
both things | QUANTITY | 0.99+
Palo Alto, California | LOCATION | 0.99+
Palo Alto, California | LOCATION | 0.99+
second thing | QUANTITY | 0.99+
both | QUANTITY | 0.99+
iPhone | COMMERCIAL_ITEM | 0.99+
Python | TITLE | 0.99+
MIT | ORGANIZATION | 0.99+
first call | QUANTITY | 0.99+
two things | QUANTITY | 0.99+
second part | QUANTITY | 0.99+
One | QUANTITY | 0.99+
both feet | QUANTITY | 0.98+
Oracle | ORGANIZATION | 0.98+
both modes | QUANTITY | 0.98+
today | DATE | 0.98+
80s | DATE | 0.98+
first | QUANTITY | 0.98+
second command | QUANTITY | 0.98+

Opher Kahane, Sonoma Ventures | CloudNativeSecurityCon 23


 

(uplifting music) >> Hello, welcome back to theCUBE's coverage of CloudNativeSecurityCon, the inaugural event, in Seattle. I'm John Furrier, host of theCUBE, here in the Palo Alto Studios. We're calling it theCUBE Center. It's kind of like our Sports Center for tech. It's kind of remote coverage. We've been doing this now for a few years. We're going to amp it up this year as more events are remote, and happening all around the world. So, we're going to continue the coverage with this segment focusing on the data stack, entrepreneurial opportunities around all things security, and as, obviously, data's involved. And our next guest is a friend of theCUBE, and CUBE alumni from 2013, entrepreneur himself, turned, now, venture capitalist angel investor, with his own firm, Opher Kahane, Managing Director, Sonoma Ventures. Formerly the founder of Origami, sold to Intuit a few years back. Focusing now on having a lot of fun, angel investing on boards, focusing on data-driven applications, and stacks around that, and all the stuff going on in, really, in the wheelhouse for what's going on around security data. Opher, great to see you. Thanks for coming on. >> My pleasure. Great to be back. It's been a while. >> So you're kind of on Easy Street now. You did the entrepreneurial venture, you've worked hard. We were on together in 2013 when theCUBE just started. XCEL Partners had an event in Stanford, XCEL, and they had all the features there. We interviewed Satya Nadella, who was just a manager at Microsoft at that time, he was there. He's now the CEO of Microsoft. >> Yeah, he was. >> A lot's changed in nine years. But congratulations on your venture you sold, and you got an exit there, and now you're doing a lot of investments. I'd love to get your take, because this is really the biggest change I've seen in the past 12 years, around an inflection point around a lot of converging forces. Data, which, big data, 10 years ago, was a big part of your career, but now it's accelerated, with cloud scale. You're seeing people building scale on top of other clouds, and becoming their own cloud. You're seeing data being a big part of it. Cybersecurity kind of has not really changed much, but it's the most important thing everyone's talking about. So, developers are involved, data's involved, a lot of entrepreneurial opportunities. So I'd love to get your take on how you see the current situation, as it relates to what's gone on in the past five years or so. What's the big story? >> So, a lot of big stories, but I think a lot of it has to do with a promise of making value from data, whether it's for cybersecurity, for Fintech, for DevOps, for RevTech startups and companies. There's a lot of challenges in actually driving and monetizing the value from data with velocity. Historically, the challenge has been more around, "How do I store data at massive scale?" And then you had the big data infrastructure company, like Cloudera, and MapR, and others, deal with it from a scale perspective, from a storage perspective. Then you had a whole layer of companies that evolved to deal with, "How do I index massive scales of data, for quick querying, and federated access, et cetera?" But now that a lot of those underlying problems, if you will, have been solved, to a certain extent, although they're always being stretched, given the scale of data, and its utility is becoming more and more massive, in particular with AI use cases being very prominent right now, the next level is how to actually make value from the data. 
How do I manage the full lifecycle of data in complex environments, with complex organizations, complex use cases? And having seen this from the inside, with Origami Logic, as we dealt with a lot of large corporations, and post-acquisition by Intuit, and a lot of the startups I'm involved with, it's clear that we're now onto that next step. And you have fundamental new paradigms, such as data mesh, that attempt to address that complexity, and responsibly scaling access, and democratizing access in the value monetization from data, across large organizations. You have a slew of startups that are evolving to help the entire lifecycle of data, from the data engineering side of it, to the data analytics side of it, to the AI use cases side of it. And it feels like the early days, to a certain extent, of the revolution that we've seen in transition from traditional databases, to data warehouses, to cloud-based data processing, and big data. It feels like we're at the genesis of that next wave. And it's super, super exciting, for me at least, as someone who's sitting more in the coach seat, rather than being on the pitch, and building startups, helping folks as they go through those motions. >> So that's awesome. I want to get into some of these data infrastructure dynamics you mentioned, but before that, talk to the audience around what you're working on now. You've been a successful entrepreneur, you're focused on angel investing, so, super-early seed stage. What kind of deals are you looking at? What's interesting to you? What is Sonoma Ventures looking for, and what are some of the entrepreneurial dynamics that you're seeing right now, from a startup standpoint? >> Cool, so, at a macro level, this is a little bit of background of my history, because it shapes very heavily what it is that I'm looking at. So, I've been very fortunate with entrepreneurial career. I founded three startups. All three of them are successful. Final two were sold, the first one merged and went public. And my third career has been about data, moving data, passing data, processing data, generating insights from it. And, at this phase, I wanted to really evolve from just going and building startup number four, from going through the same motions again. A 10 year adventure, I'm a little bit too old for that, I guess. But the next best thing is to sit from a point whereby I can be more elevated in where I'm dealing with, and broaden the variety of startups I'm focused on, rather than just do your own thing, and just go very, very deep into it. Now, what specifically am I focused on at Sonoma Ventures? So, basically, looking at what I refer to as a data-driven application stack. Anything from the low-level data infrastructure and cloud infrastructure, that helps any persona in the data universe maximize value for data, from their particular point of view, for their particular role, whether it's data analysts, data scientists, data engineers, cloud engineers, DevOps folks, et cetera. All the way up to the application layer, in applications that are very data-heavy. And what are very typical data-heavy applications? FinTech, cyber, Web3, revenue technologies, and product and DevOps. So these are the areas we're focused on. I have almost 23 or 24 startups in the portfolio that span all these different areas. And this is in terms of the aperture. Now, typically, focus on pre-seed, seed. Sometimes a little bit later stage, but this is the primary focus. 
And it's really about partnering with entrepreneurs, and helping them make, if you will, original mistakes and avoid the mistakes I made. >> Yeah. >> And take it to the next level, whatever the milestone they're driving toward. So I'm very, very hands-on with many of those startups. Now, what is it that's happening right now, and why is it so exciting? So, on one hand, you have this scaling of data and its complexity, yet lagging value creation from it, across those different personas we've touched on. So that's one fundamental opportunity, which is secular. The other one, which is more of a cyclic situation, is the fact that we're going through a down cycle in tech, as is very evident in the public markets, and everything we're hearing about funding going slower and lower, terms shifting more into the hands of VCs versus an entrepreneur-friendly market, and so on and so forth. And a very significant amount of layoffs. Now, when you combine these two trends together, you're observing a very interesting thing: a lot of folks, really bright folks, who have sold a startup to a company, or have been in the guts of a large startup, or a large corporation, have, hands-on, experienced all those challenges we've spoken about earlier, in terms of maximizing value from data, irrespective of their role, from the specific angle or vantage point they have on those challenges. So, for many of them, it's an opportunity to say, "Now, let me go start a startup. I've been laid off, maybe, or my company's stock isn't doing as well as it used to, as a large corporation. Now I have an opportunity to actually go and take my entrepreneurial passion, and apply it to a product and experience as part of this larger company." >> Yeah. >> And you see a slew of folks who are emerging with these great ideas. So it's a very, very exciting period of time to innovate.
So I think we're going through another big bang, to a certain extent, whereby we end up with more specialized data stacks for specific use cases, as you need the performance, the data models, and the tooling to best adapt to the particular task at hand, and the particular personas at hand. The needs of the data analyst are quite different from the needs of an ML engineer, which are quite different from the needs of the data engineer. And what happens is, when you end up with these siloed stacks, you end up with new fragmentation, and new gaps that need to be filled with a new layer of innovation. And I suspect that, in part, that's what we're seeing right now, in terms of the next wave of data innovation. Whether it's in service of FinTech use cases, or cyber use cases, or others, it's a set of tools that end up having to try and stitch together those elements and bridge between them. So I see that as a fantastic gap to innovate around. I see, also, a fundamental need in creating a common data language, and common data management processes and governance, across those different personas, because ultimately, the same underlying data these folks need, albeit in different mediums, different access models, different velocities, et cetera, the subject matter, if you will, the underlying raw data, and some of the taxonomies right on top of it, do need to be consistent. So, once again, a great opportunity to innovate, whether it's about semantic layers, whether it's about data mesh, whether it's about CI/CD tools for data engineers, and so on and so forth. >> I got to ask you, first of all, I see you have a friend you brought into the interview. You have a dog in the background who made a little cameo appearance. And that's awesome. Sitting right next to you, making sure everything's going well. On the AI thing, 'cause I think that's the hot trend here. >> Yeah. >> You're starting to see that ChatGPT's got everyone excited, because it's kind of that first time you see next-gen functionality, large-language models, where you can bring data in, and it integrates well. So, to me, I think, connecting the dots, this kind of speaks to the beginning of what will be a trend of really blending data stacks together, or blending models. And so, as more data modeling emerges, you start to have this AI stack kind of situation, where you have things out there that you can compose. It's almost very developer-friendly, conceptually. This is kind of new, but it's kind of the same concept that's been worked on at Google and others. How do you see this emerging, as an investor? What are some of the things that you're excited about, around the ChatGPT kind of things that are happening? 'Cause it brings it mainstream. Again, a million downloads, one of the fastest applications to get a million downloads, even among all the successes. So it's obviously hit a nerve. People are talking about it. What's your take on that? >> Yeah, so, I think that's a great point, and clearly, it feels like an iPhone moment, right, to the industry, in this case, AI, and lots of applications. And I think there are, at a high level, probably three different layers of innovation. One is on top of those platforms. What use cases can one bring to the table that would drive on top of a ChatGPT-like service? Whereby the startup, the company, can bring some unique datasets to infuse and add value on top of it, by custom-focusing it and purpose-building it for a particular use case or particular vertical.
Whether it's applying it to customer service in a particular vertical, applying it to, I don't know, marketing content creation, and so on and so forth. That's one category. And I do know that, as one of my startups is in Y Combinator this season, winter '23, they're saying that a very large chunk of the YC companies in this cycle are about GPT use cases. So we'll see a flurry of that. The next layer, the one below that, is those who actually provide those platforms, whether it's ChatGPT, whatever will emerge from the partnership with Microsoft, and any competitive players that emerge from other startups, or from the big cloud providers, whether it's Facebook, if they ever get into this, and Google, which clearly will, as they need to, to survive around search. The third layer is the enabling layer. As you're going to have more and more of those different large-language models and use cases running on top of them, the underlying layers, all the way down to cloud infrastructure, the data infrastructure, and the entire set of tools and systems that take raw data and massage it into useful, labeled, contextualized features and data to feed the models, the AI models, whether it's during training or during inference stages, in production. Personally, my focus is more on the infrastructure than on the application use cases. And I believe that there's going to be a massive amount of innovation opportunity around that, to reach cost-effective, quality, fair models that are deployed easily and maintained easily, or at least with as little pain as possible, at scale. So there are startups that are dealing with it in various areas. Some are focusing on labeling automation, some on fairness, some, speaking about cyber, on protecting models from threats through data and other issues with it, and so on and so forth. And I believe that this will be, too, a big driver for massive innovation, the infrastructure layer. >> Awesome, and I love how you mentioned the iPhone moment. I call it the browser moment, 'cause it felt that way for me, personally. >> Yep. >> But I think, from a business model standpoint, there is that iPhone shift. It's not the BlackBerry. It's a whole 'nother thing. And I like that. But I do have to ask you, because this is interesting. You mentioned iPhone. iPhone's mostly proprietary. So, in these machine learning foundational models, >> Yeah. >> you're starting to see proprietary hardware, bolt-on acceleration, bundled together, for faster uptake. And now you've got open source emerging, so two things. It's almost an iPhone-Android situation happening. >> Yeah. >> So what's your view on that? Because there are pros and cons for either one. You're seeing a lot of these machine learning models are very proprietary, but they work, and do you care, right? >> Yeah. >> And then you've got open source, which is like, "Okay, let's get some open source code, and let people verify it, and then build with that." Is it a balance? >> Yes, I think- >> Is it mutually exclusive? What's your view? >> I think markets will drive the proportion of both, and I think, for certain use cases, you'll end up with more proprietary offerings. For certain use cases, I guess the fundamental infrastructure for ChatGPT-like, let's say, large-language models, and all the use cases running on top of it, that's likely going to be more platform-oriented and open source, and will allow innovation. Think of it as the equivalent of iPhone apps or Android apps running on top of those platforms, as in AI apps.
So we'll have a lot of that. Now, when you start going a little bit more into the guts, the lower layers, then it's clear that, for performance reasons in particular, for certain use cases, we'll end up with more proprietary offerings, whether it's advanced silicon, such as some of the silicon that emerged from entrepreneurs who have left Google, around TensorFlow, and all the silicon that powers that. You'll see a lot of innovation in that area as well. It hopefully will improve the cost efficiency of running large AI-oriented workloads, both in inference and in learning stages. >> I got to ask you, because this has come up a lot around Azure and Microsoft. Microsoft, pretty good move getting into ChatGPT >> Yep. >> and OpenAI, because I was talking to someone who's a hardcore Amazon developer, and they said they swore they would never use Azure, right? One of those types. And they're spinning up Azure servers to get access to the API. So, the developers are flocking, as you mentioned. The YC class is all doing large data things, because you can now program with data, which is amazing. So, what's your take on, I know you've got to be kind of neutral 'cause you're an investor, but Amazon has to respond, and Google, essentially, did all the work, so they have to have a solution. So, I'm expecting Google to have something very compelling, but Microsoft, right now, might just run the table on developers, this new wave of data developers. What's your take on the cloud responses to this? What do you think AWS is going to do? What should Google be doing? What's your take? >> So, each of them is coming from a slightly different angle, of course. I'll say, Google, I think, has massive assets in the AI space, and their underlying cloud platform, I think, has been designed to support such complicated workloads, but they have yet to go as far as opening it up the same way ChatGPT is now, in that Microsoft partnership, on Azure. Good question regarding Amazon. AWS has had a significant investment in AI-related infrastructure. I'm seeing it through my startups, and through other lenses as well. How will they respond to that higher layer, above and beyond the low-level, if you will, AI-enabling apparatuses? How do they elevate to at least one or two layers above, and get to the same ChatGPT layer? Good question. Is there an acquisition that would make sense for them to accelerate it? Maybe. Is there an in-house development that they can reapply from a different domain towards that? Possibly. But I do suspect we'll end up with acquisitions as the arms race around the next level of cloud wars emerges, and it's going to be no longer just about the basic tooling for basic cloud-based applications, and the infrastructure, and the cost management, but rather, faster time to deliver AI in data-heavy applications. Once again, each one of those cloud suppliers, each vendor, is coming with different assets, and different pros and cons. All of them will need to elevate the level of the fight, if you will, in this case, to the AI layer. >> It's going to be very interesting, the different stacks on the data infrastructure, like I mentioned, analytics, data lake, AI, all happening. It's going to be interesting to see how this turns into this AI cloud, like data clouds, data operating systems. So, super fascinating area. Opher, thank you for coming on and sharing your expertise with us. Great to see you, and congratulations on the work.
I'll give you the final word here. Give a plug for what you're looking for, for startup seeds, pre-seeds. What's the kind of profile that gets your attention, from a seed, pre-seed candidate or entrepreneur? >> Cool, first of all, it's my pleasure. I enjoy our chats, as always. Hopefully the next one's not going to be in nine years. As to what I'm looking for: ideally, smart data entrepreneurs, who have come from a particular domain problem, or problem domain, that they understand, they felt it in their own 10 fingers, or millions of neurons in their brains, and they figured out a way to solve it. Whether it's a data infrastructure play, a cloud infrastructure play, or a very, very smart application that takes advantage of data at scale. These are the things I'm looking for. >> One final, final question I have to ask you, because you're a seasoned entrepreneur, and now coach. What's different about the current entrepreneurial environment right now, vis-a-vis the past decade? What's new? Is it different, highly accelerated? What advice do you give entrepreneurs out there who are putting together their plan? Obviously, there's a global resource pool now of engineering. It might not be yesterday's formula for success for putting a venture together to get to that product-market fit. What's new and different, and what's your advice to the folks out there, about what's different about the current environment for being an entrepreneur? >> Fantastic, so I think it's a great question. So I think there are a few axes of difference, compared to, let's say, five years ago, 10 years ago, 15 years ago. First and foremost, given the amount of infrastructure out there, the amount of open-source technologies, the amount of developer toolkits and frameworks, trying to develop an application, at least at the application layer, is much faster than ever. So, it's faster and cheaper, for the most part, unless you're building very fundamental, core, deep tech, where you still have a big technology challenge to deal with. And absent that, the challenge shifts more to how you manage your resources, to product-market fit, to how you're integrating the GTM lens, the go-to-market lens, as early as possible in the product-market fit cycle, such that you reach from pre-seed to seed, from seed to A, from A to B, with an optimal amount of velocity and a minimal amount of resources. One big difference, specifically as of, let's say, the beginning of this year, late last year, is that money is no longer free for entrepreneurs, which means that you need to operate and build a startup in an environment with a lot more constraints. And in my mind, some of the best startups that have ever been built, and some of the big market-changing, generational-changing, if you will, technology startups, in their respective industry verticals, have actually emerged from these times. And these tend to be the smartest, best startups that emerge, because they operate with a lot less money. Money is not as available for them, which means that they need to make tough decisions and trade-offs every day. What you don't need to do, you can kick the can down the road. When you have plenty of money, it cushions a lot of mistakes; without it, you don't have that cushion. And hopefully we'll end up with companies that are more agile, more, if you will, resilient, and with better cultures of making those tough decisions that startups need to make every day.
Which is why I'm super, super excited to see the next batch of amazing unicorns, true unicorns, not just valuation, market rising with the water type unicorns that emerged from this particular era, which we're in the beginning of. And very much enjoy working with entrepreneurs during this difficult time, the times we're in. >> The next 24 months will be the next wave, like you said, best time to do a company. Remember, Airbnb's pitch was, "We'll rent cots in apartments, and sell cereal." Boy, a lot of people passed on that deal, in that last down market, that turned out to be a game-changer. So the crazy ideas might not be that bad. So it's all about the entrepreneurs, and >> 100%. >> this is a big wave, and it's certainly happening. Opher, thank you for sharing. Obviously, data is going to change all the markets. Refactoring, security, FinTech, user experience, applications are going to be changed by data, data operating system. Thanks for coming on, and thanks for sharing. Appreciate it. >> My pleasure. Have a good one. >> Okay, more coverage for the CloudNativeSecurityCon inaugural event. Data will be the key for cybersecurity. theCUBE's coverage continues after this break. (uplifting music)

Published Date : Feb 2 2023


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Satya Nadella | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
Microsoft | ORGANIZATION | 0.99+
Amazon | ORGANIZATION | 0.99+
Google | ORGANIZATION | 0.99+
2013 | DATE | 0.99+
Opher | PERSON | 0.99+
CapEx | ORGANIZATION | 0.99+
Seattle | LOCATION | 0.99+
John Furrier | PERSON | 0.99+
Sonoma Ventures | ORGANIZATION | 0.99+
BlackBerry | ORGANIZATION | 0.99+
10 fingers | QUANTITY | 0.99+
Airbnb | ORGANIZATION | 0.99+
CUBE | ORGANIZATION | 0.99+
nine years | QUANTITY | 0.99+
Facebook | ORGANIZATION | 0.99+
iPhone | COMMERCIAL_ITEM | 0.99+
Origami Logic | ORGANIZATION | 0.99+
Origami | ORGANIZATION | 0.99+
Intuit | ORGANIZATION | 0.99+
RevTech | ORGANIZATION | 0.99+
each | QUANTITY | 0.99+
Opher Kahane | PERSON | 0.99+
CloudNativeSecurityCon | EVENT | 0.99+
Palo Alto Studios | LOCATION | 0.99+
yesterday | DATE | 0.99+
One | QUANTITY | 0.99+
First | QUANTITY | 0.99+
third layer | QUANTITY | 0.98+
theCUBE | ORGANIZATION | 0.98+
two layers | QUANTITY | 0.98+
Android | TITLE | 0.98+
third career | QUANTITY | 0.98+
two things | QUANTITY | 0.98+
both | QUANTITY | 0.98+
MapR | ORGANIZATION | 0.98+
one | QUANTITY | 0.98+
one category | QUANTITY | 0.98+
late last year | DATE | 0.98+
millions of neurons | QUANTITY | 0.98+
a million downloads | QUANTITY | 0.98+
three startups | QUANTITY | 0.98+
10 years ago | DATE | 0.97+
Fintech | ORGANIZATION | 0.97+
winter '23 | DATE | 0.97+
first one | QUANTITY | 0.97+
this year | DATE | 0.97+
Stanford | LOCATION | 0.97+
Cloudera | ORGANIZATION | 0.97+
theCUBE Center | ORGANIZATION | 0.96+
five years ago | DATE | 0.96+
10 year | QUANTITY | 0.96+
ChatGPT | TITLE | 0.96+
three | QUANTITY | 0.95+
first time | QUANTITY | 0.95+
XCEL Partners | ORGANIZATION | 0.95+
15 years ago | DATE | 0.94+
24 startups | QUANTITY | 0.93+