Image Title

Search Results for Moore:

Venkat Venkataramani, Rockset & Doug Moore, Command Alkon | AWS Startup Showcase S2 E2


 

(upbeat music) >> Hey everyone. Welcome to theCUBE's presentation of the AWS Startup Showcase. This is Data as Code, The Future of Enterprise Data and Analytics. This is also season two, episode two of our ongoing series with exciting partners from the AWS ecosystem who are here to talk with us about data and analytics. I'm your host, Lisa Martin. Two guests join me, one, a cube alumni. Venkat Venkataramani is here CEO & Co-Founder of Rockset. Good to see you again. And Doug Moore, VP of cloud platforms at Command Alkon. You're here to talk to me about how Command Alkon implemented real time analytics in just days with Rockset. Guys, welcome to the program. >> Thanks for having us. >> Yeah, great to be here. >> Doug, give us a little bit of a overview of Command Alkon, what type of business you are? what your mission is? That good stuff. >> Yeah, great. I'll pref it by saying I've been in this industry for only three years. The 30 years prior I was in financial services. So this was really exciting and eye opening. It actually plays into the story of how we met Rockset. So that's why I wanted to preface that. But Command Alkon is in the business, is in the what's called The Heavy Building Materials Industry. And I had never heard of it until I got here. But if you think about large projects like building buildings, cities, roads anything that requires concrete asphalt or just really big trucks, full of bulky materials that's the heavy building materials industry. So for over 40 years Command Alkon has been the north American leader in providing software to quarries and production facilities to help mine and load these materials and to produce them and then get them to the job site. So that's what our supply chain is, is from the quarry through the development of these materials, then out to the to a heavy building material job site. >> Got it, and now how historically in the past has the movement of construction materials been coordinated? What was that like before you guys came on the scene? >> You'll love this answer. So 'cause, again, it's like a step back in time. When I got here the people told me that we're trying to come up with the platform that there are 27 industries studied globally. And our industry is second to last in terms of automation which meant that literally everything is still being done with paper and a lot of paper. So when one of those, let's say material is developed, concrete asphalt is produced and then needs to get to the job site. They start by creating a five part printed ticket or delivery description that then goes to multiple parties. It ends up getting touched physically over 50 times for every delivery. And to give you some idea what kind of scale it is there are over 330 million of these type deliveries in north America every year. So it's really a lot of favor and a lot of manual work. So that was the state of really where we were. And obviously there are compelling reasons certainly today but even 3, 4, 5 years ago to automate that and digitize it. >> Wow, tremendous potential to go nowhere but up with the amount of paper, the lack of, of automation. So, you guys Command Alkon built a platform, a cloud software construction software platform. Talk to me of about that. Why you built it, what was the compelling event? I mean, I think you've kind of already explained the compelling event of all the paper but give us a little bit more context. >> Yeah. That was the original. And then we'll get into what happened two years ago which has made it even more compelling but essentially with everything on premises there's really in a huge amount of inefficiency. So, people have heard the enormous numbers that it takes to build up a highway or a really large construction project. And a lot of that is tied up in these inefficiencies. So we felt like with our significant presence in this market, that if we could figure out how to automate getting this data into the cloud so that at least the partners in the supply chain could begin sharing information. That's not on paper a little bit closer to real time that we could make has an impact on everything from the timing it takes to do a project to even the amount of carbon dioxide that's admitted, for example from trucks running around and being delayed and not being coordinated well. >> So you built the connect platform you started on Amazon DynamoDB and ran into some performance challenges. Talk to us about the, some of those performance bottlenecks and how you found Venkat and Rockset. >> So from the beginning, we were fortunate, if you start building a cloud three years ago you're you have a lot of opportunity to use some of the what we call more fully managed or serverless offerings from Amazon and all the cloud vendors have them but Amazon is the one we're most familiar with throughout the past 10 years. So we went head first into saying, we're going to do everything we can to not manage infrastructure ourselves. So we can really focus on solving this problem efficiently. And it paid off great. And so we chose dynamo as our primary database and it still was a great decision. We have obviously hundreds of millions of billions of these data points in dynamo. And it's great from a transactional perspective, but at some point you need to get the data back out. And what plays into the story of the beginning when I came here with no background basically in this industry, is that, and as did most of the other people on my team, we weren't really sure what questions were going to be asked of the data. And that's super, super important with a NoSQL database like dynamo. You sort of have to know in advance what those usage patterns are going to be and what people are going to want to get back out of it. And that's what really began to strain us on both performance and just availability of information. >> Got it. Venkat, let's bring you into the conversation. Talk to me about some of the challenges that Doug articulated the, is industry with such little automation so much paper. Are you finding that still out there for in quite a few industries that really have nowhere to go but up? >> I think that's a very good point. We talk about digital transformation 2.0 as like this abstract thing. And then you meet like disruptors and innovators like Doug, and you realize how much impact, it has on the real world. But now it's not just about disrupting, and digitizing all of these records but doing it at a faster pace than ever before, right. I think this is really what digital transformation in the cloud really enable tools you do that, a small team in a, with a very very big mission and responsibility like what Doug team have been, shepherding here. They're able to move very, very, very fast, to be able to kind of accelerate this. And, they're not only on the forefront of digitizing and transforming a very big, paper-heavy kind of process, but real-time analytics and real time reporting is a requirement, right? Nobody's wondering where is my supply chain three days ago? Are my, one of the most important thing in heavy construction is to keep running on a schedule. If you fall behind, there's no way to catch up because there's so many things that falls apart. Now, how do you make sure you don't fall behind, realtime analytics and realtime reporting on how many trucks are supposed to be delivered today? Halfway through the day, are they on track? Are they getting behind? And all of those things is not just able to manage the data but also be able to get reporting and analytics on that is a extremely important aspect of this. So this is like a combination of digital transformation happening in the cloud in realtime and realtime analytics being in the forefront of it. And so we are very, very happy to partner with digital disruptors like Doug and his team to be part of this movement. >> Doug, as Venkat mentioned, access to real time data is a requirement that is just simple truth these days. I'm just curious, compelling event wise was COVID and accelerator? 'Cause we all know of the supply chain challenges that we're all facing in one way or the other, was that part of the compelling event that had you guys go and say, we want to do DynamoDB plus Rockset? >> Yeah, that is a fantastic question. In fact, more so than you can imagine. So anytime you come into an industry and you're going to try to completely change or revolutionize the way it operates it takes a long time to get the message out. Sometimes years, I remember in insurance it took almost 10 years really to get that message out and get great adoption and then COVID came along. And when COVID came along, we all of a sudden had a situation where drivers and the foreman on the job site didn't want to exchange the paperwork. I heard one story of a driver taping the ticket for signature to the foreman on a broomstick and putting it out his windows so that he didn't get too close. It really was that dramatic. And again, this is the early days and no one really has any idea what's happening and we're all working from home. So we launched, we saw that as an opportunity to really help people solve that problem and understand more what this transformation would mean in the long term. So we launched internally what we called Project Lemonade obviously from, make lemonade out of lemons, that's the situation that we were in and we immediately made some enhancements to a mobile app and then launched that to the field. So that basically there's now a digital acceptance capability where the driver can just stay in the vehicle and the foreman can be anywhere, look at the material say it's acceptable for delivery and go from there. So yeah, it made a, it actually immediately caused many of our customers hundreds to begin, to want to push their data to the cloud for that reason just to take advantage of that one capability >> Project lemonade, sounds like it's made a lot of lemonade out of a lot of lemons. Can you comment Doug on kind of the larger trend of real time analytics and logistics? >> Yeah, obviously, and this is something I didn't think about much either not knowing anything about concrete other than it was in my driveway before I got here. And that it's a perishable product and you've got that basically no more than about an hour and a half from the time you mix it, put it in the drum and get it to the job site and pour it. And then the next one has to come behind it. And I remember I, the trend is that we can't really do that on paper anymore and stay on top of what has to be done we'll get into the field. So a foreman, I recall saying that when you're in the field waiting on delivery, that you have people standing around and preparing the site ready to make a pour that two minutes is an eternity. And so, working a real time is all always a controversial word because it means something different to anyone, but that gave it real, a real clarity to mean, what it really meant to have real time analytics and how we are doing and where are my vehicles and how is this job performing today? And I think that a lot of people are still trying to figure out how to do that. And fortunately, we found a great tool set that's allowing us to do that at scale. Thankfully, for Rockset primarily. >> Venkat talk about it from your perspective the larger trend of real time analytics not just in logistics, but in other key industries. >> Yeah. I think we're seeing this across the board. I think, whether, even we see a huge trend even within an enterprise different teams from the marketing team to the support teams to more and more business operations team to the security team, really moving more and more of their use cases from real time. So we see this, the industries that are the innovators and the pioneers here are the ones for whom real times that requirement like Doug and his team here or where, if it is all news, it's no news, it's useless, right? But I think even within, across all industries, whether it is, gaming whether it is, FinTech, Bino related companies, e-learning platforms, so across, ed tech and so many different platforms, there is always this need for business operations. Some, certain aspects certain teams within large organizations to, have to tell me how to win the game and not like, play Monday morning quarterback after the game is over. >> Right, Doug, let's go back at you, I'm curious with connects, have you been able to scale the platform since you integrated with Rockset? Talk to us about some of the outcomes that you've achieved so far? >> Yeah, we have, and of course we knew and we made our database selection with dynamo that it really doesn't have a top end in terms of how much information that we can throw at it. But that's very, very challenging when it comes to using that information from reporting. But we've found the same thing as we've scaled the analytics side with Rockset indexing and searching of that database. So the scale in terms of the number of customers and the amount of data we've been able to take on has been, not been a problem. And honestly, for the first time in my career, I can say that we've always had to add people every time we add a certain number of customers. And that has absolutely not been the case with this platform. >> Well, and I imagine the team that you do have is far more, sorry Venkat, far more strategic and able to focus on bigger projects. >> It, is, and, you've amazed at, I mean Venkat hit on a couple of points that it's in terms of the adoption of analytics. What we found is that we are as big a customer of this analytic engine as our customers are because our marketing team and our sales team are always coming to us. Well how many customers are doing this? How many partners are connected in this way? Which feature flags are turned on the platform? And the way this works is all data that we push into the platform is automatically just indexed and ready for reporting analytics. So we really it's no additional ad of work, to answer these questions, which is really been phenomenal. >> I think the thing I want to add here is the speed at which they were able to build a scalable solution and also how little, operational and administrative overhead that it has cost of their teams, right. I think, this is again, realtime analytics. If you go and ask hundred people, do you want fast analytics on realtime data or slow analytics on scale data, people, no one would say give me slow and scale. So, I think it goes back to again our fundamental pieces that you have to remove all the cost and complexity barriers for realtime analytics to be the new default, right? Today companies try to get away with batch and the pioneers and the innovators are forced to solve, I know, kind of like address some of these realtime analytics challenges. I think with the platforms like the realtime analytics platform, like Rockset, we want to completely flip it on its head. You can do everything in real time. And there may be some extreme situations where you're dealing with like, hundreds of petabytes of data and you just need an analyst to generate like, quarterly reports out of that, go ahead and use some really, really good batch base system but you should be able to get anything, and everything you want without additional cost or complexity, in real time. That is really the vision. That is what we are really enabling here. >> Venkat, I want to also get your perspective and Doug I'd like your perspective on this as well but that is the role of cloud native and serverless technologies in digital disruption. And what do you see there? >> Yeah, I think it's huge. I think, again and again, every customer, and we meet, Command Alkon and Doug and his team is a great example of this where they really want to spend as much time and energies and calories that they have to, help their business, right? Like what, are we accomplishing trying to accomplish as a business? How do we enable, how do we build better products? How do we grow revenue? How do we eliminate risk that is inherent in the business? And that is really where they want to spend all of their energy not trying to like, install some backend software, administer build IDL pipelines and so on and so forth. And so, doing serverless on the compute side of that things like AWS lambda does and what have you. And, it's a very important innovation but that isn't, complete the story or your data stack also have to become serverless. And, that is really the vision with Rockset that your entire realtime analytics stack can be operating and managing. It could be as simple as managing a serverless stack for your compute environments like your APS servers and what have you. And so I think that is going to be a that is for here to stay. This is a path towards simplicity and simplicity scales really, really well, right? Complexity will always be the killer that'll limit, how far you can use this solution and how many problems can you solve with that solution? So, simplicity is a very, very important aspect here. And serverless helps you, deliver that. >> And Doug your thoughts on cloud native and serverless in terms of digital disruption >> Great point, and there are two parts to the scalability part. The second one is the one that's more subtle unless you're in charge of the budget. And that is, with enough effort and enough money that you can make almost any technology scale whether it's multiple copies of it, it may take a long time to get there but you can get there with most technologies but what is least scalable, at least that I as I see that this industry is the people, everybody knows we have a talent shortage and these other ways of getting the real time analytics and scaling infrastructure for compute and database storage, it really takes a highly skilled set of resources. And the more your company grows, the more of those you need. And that is what we really can't find. And that's actually what drove our team in our last industry to even go this way we reached a point where our growth was limited by the people we could find. And so we really wanted to break out of that. So now we had the best of both scalable people because we don't have to scale them and scalable technology. >> Excellent. The best of both worlds. Isn't it great when those two things come together? Gentlemen, thank you so much for joining me on "theCUBE" today. Talking about what Rockset and Command Alkon are doing together better together what you're enabling from a supply chain digitization perspective. We appreciate your insights. >> Great. Thank you. >> Thanks, Lisa. Thanks for having us. >> My pleasure. For Doug Moore and Venkat Venkatramani, I'm Lisa Martin. Keep it right here for more coverage of "theCUBE", your leader in high tech event coverage. (upbeat music)

Published Date : Mar 30 2022

SUMMARY :

Good to see you again. what type of business you are? and to produce them and then And to give you some idea Talk to me of about that. And a lot of that is tied and how you found Venkat and Rockset. and as did most of the that really have nowhere to go but up? and his team to be part of this movement. and say, we want to do and then launched that to the field. kind of the larger trend and get it to the job site and pour it. the larger trend of real time analytics team to the support teams And that has absolutely not been the case and able to focus on bigger projects. that it's in terms of the and the pioneers and the but that is the role of cloud native And so I think that is going to be a And that is what we really can't find. and Command Alkon are doing Thank you. Moore and Venkat Venkatramani,

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Lisa MartinPERSON

0.99+

Doug MoorePERSON

0.99+

DougPERSON

0.99+

Venkat VenkataramaniPERSON

0.99+

Command AlkonORGANIZATION

0.99+

RocksetORGANIZATION

0.99+

LisaPERSON

0.99+

Doug MoorePERSON

0.99+

AmazonORGANIZATION

0.99+

Two guestsQUANTITY

0.99+

AWSORGANIZATION

0.99+

27 industriesQUANTITY

0.99+

two minutesQUANTITY

0.99+

bothQUANTITY

0.99+

VenkatORGANIZATION

0.99+

north AmericaLOCATION

0.99+

Monday morningDATE

0.99+

two partsQUANTITY

0.99+

over 50 timesQUANTITY

0.99+

oneQUANTITY

0.99+

over 330 millionQUANTITY

0.99+

Venkat VenkatramaniPERSON

0.99+

hundred peopleQUANTITY

0.99+

three days agoDATE

0.99+

two thingsQUANTITY

0.99+

over 40 yearsQUANTITY

0.99+

two years agoDATE

0.98+

three years agoDATE

0.98+

secondQUANTITY

0.98+

five partQUANTITY

0.98+

first timeQUANTITY

0.98+

todayDATE

0.98+

VenkatPERSON

0.97+

hundredsQUANTITY

0.97+

30 years priorDATE

0.97+

both worldsQUANTITY

0.97+

TodayDATE

0.97+

three yearsQUANTITY

0.96+

one storyQUANTITY

0.95+

DynamoDBTITLE

0.94+

almost 10 yearsQUANTITY

0.94+

hundreds of millions of billionsQUANTITY

0.93+

dynamoORGANIZATION

0.92+

second oneQUANTITY

0.91+

about an hour and a halfQUANTITY

0.9+

theCUBEORGANIZATION

0.9+

NoSQLTITLE

0.89+

3DATE

0.87+

BinoORGANIZATION

0.85+

past 10 yearsDATE

0.84+

every yearQUANTITY

0.84+

DougORGANIZATION

0.83+

AnalyticsTITLE

0.83+

5 years agoDATE

0.82+

north AmericanOTHER

0.81+

Startup ShowcaseEVENT

0.81+

Wendy Moore, Trend Micro & Geva Solomonovich, Snyk | AWS re:Invent 2020


 

>> (narrator) From around the globe. It's theCUBE. With digital coverage of AWS re:Invent 2020. Sponsored by Intel, AWS and our community partners. >> Welcome to theCUBE virtual. Our coverage of AWS re:Invent 2020 continues. I'm Lisa Martin. Got a couple of guests joining me next. Wendy Moore the VP of product marketing from Trend Micro is here and Geva Solomonovich Global Alliances CTO from Snyk. Wendy and Geva, It's great to have you both on the program today. >> Thanks for having us. Great to be here. >> Hi, thanks for having us. >> Last year we were probably all crammed in Vegas together. Here we are virtually but it's great that we're still able to connect. So lot has gone on since we were all at re:Invent in Vegas last year. Wendy, let's start with you from a security perspective there's been a growth in open source vulnerabilities that have impacted enterprises globally. Talk to me about what you're seeing there. What's going on? >> Yeah. Well. I think everybody in this audience recognizes the rapid shift to the use of open source in development teams. And what we've seen alongside that is a rapid increase in the number of vulnerabilities that are showing up in open source software. So that means that vulnerabilities that can be exploited and cause damage to your company's application, reputation and your customers, are on the increase out there. >> And a number that you sent over was two and a half X growth in open source vulnerabilities in the last year. Has that number gone up during the pandemic? >> So I'm not sure if the vulnerabilities have gone up during the pandemic, but we've definitely seen an increase in exploitation of vulnerabilities. There's so much in the news about ransomware incidents in healthcare targeting pharmaceutical organizations, and most of those are taking advantage of vulnerabilities. Not necessarily in open source, but some of it is definitely happening in open source. >> Now we've been talking about the rise in ransomware for awhile, and it's all... The numbers and types of companies and healthcare organizations like is it schools, governments, for example lot of vulnerabilities being exploited that's for sure. >> So Geva let's go over to you. Talk about from Synk's perspective. The impact on businesses and how can you guys help. >> And then I'll put in a few insights there. on the open source risk. Wendy talked about it as well. Why is it growing? One of course is open source tuition usage is growing. So of course it bulges, the amounts of vulnerabilities is growing and the amount of exploits. But when you look at it from a hacker's perspective, attacking is an ROI based activity. Hackers want to spend their hacking hours where they're more likely to get our reward, be able to get that ransom or steal the data or do whatever they can. And open source actually makes it much easier for them than a lot of these other alternatives. One, the source is open. So just finding a vulnerability is much easier than trying to find the vulnerability in proprietary code. Two, there's like a market for these exploits and companies even like need for chapter. One of the byproducts of that is you can just go and feel the vulnerabilities out there and pick the ones that you want to try to exploit. But three, which is really the most critical piece is that if you do find the juicy vulnerability in a very popular open source package, the amount of companies you can attack is not one, is thousands or tens of thousands because that's precisely what makes the popular open source packages popular. It's being used broadly and so if you spend this effort to develop an exploit and then you can send it like there just across the world to 10 thousands of companies you're more likely to be successful. And that's what's driving a lot of the hacker attention into the open source vulnerabilities and that's why the growing. >> So it's a low cost high reward for those hackers. Wendy what are some of the ways that organizations can protect themselves from this? >> Well, one of the best ways to protect themselves against exploitation of vulnerabilities and against vulnerability showing up in their code is to actually analyze their code and scan it looking for vulnerabilities. And the best possible place to do that is actually in the code repository. So before code is ever packaged up and deployed it actually gets caught really early. So it's all about shifting security left. But some of the challenges with that is that you know the code repository, Tory and the code and open source has largely been the domain of DevOps and the developers and security who is tasked with managing the risk of the organization has little to know visibility into what vulnerabilities might exist. So something that's a growing part of an enterprise risk profile the security team doesn't really see. And that's a big gap for most organizations. >> So in terms of that visibility being essential, sounds like maybe even a cultural gap there. Geva what are your recommendations? We, you know, we talk about SecOps, we talk about DevOps. Is the solution DevSecOps or SecDevOps? >> I mean, all these partners are definitely helping there but you kind of need to break it down and understand what their problems, which is what Wendy was articulating. Why you have these traditional security teams have all their traditional tools. They look at mostly and let's call it the IC type security. Then you have this entire new category of risk which is lets say open source risk, but it's just inside the code repository inside a GitHub repo or somewhere, or they completely have no visibility into. And what that causes is one has to have a conversation with the developers who are those who are convenient to pick those vulnerabilities, remove them from the code. And, but to also, just from the mind ensuring that in our location it's hard for you to protect something that you don't have visibility into which causes opensource security to be possibly under provisioned in your entire a security fence. As you're looking at the security risk. And as we are talking about solution, so one of the movements we've seen with DevOps, where you know engineering team and IT teams have come together to have a shared ownership of the results of deploying these applications. In production now you expand out into DevSecOps. It's okay to actually make this work. We need to have a shared responsibility model where both developers step up to take some ownership and the traditional security each step up to understand what the developers are doing, build tools to make it easier for them. And ultimately I think Wendy nailed it on the head. She said the best way to protect yourself is actually to remove the vulnerable line of code from your application, not wait for it to be deployed and try to put some blocks in there. >> All right. So Wendy how are Trend Micro and Snyk working together to resolve that challenge that you guys just described? >> Yeah, we'll Trend Micro and Snyk have been working together for over a year now. And we came out with an initial offering and now we're coming out with a new offering that is really focused on basically delivering that code scanning ability right in the code repository. And through Trend Micro's Cloud One platform, we are delivering this as a service to the security operations team so that they get visibility of anything that Snyk finds in the code repository. And they can take action from there. So Trend Micro's Cloud One security services platform basically equips cloud builders with a whole bunch of different types of technologies to satisfy their different infrastructure requirements. So we've got things like workload security application security, network security, a number of different take types of security tools. And this just brings another security tool to the security operations team and the DevOps team so that they can basically extend their visibility and their security controls back to the code repository. >> Geva what are some of the impacts that you're seeing. So for obviously besides wanting to find those vulnerabilities faster as when you talk about shifting left. Give me some examples of some customers that you were working with maybe in the first iteration and what the impact has been. >> The impact is the... what, sorry, can you repeat the question? >> Yeah. Impact of your technologies together? You said that there's a new offering coming up but talk to me about some of the impact that these customers are making. >> Yeah. Okay. Sorry. Thank you for repeating the question. And so this joint product is very cunning from a multiple perspective. So one, it's going to be delivered inside the Cloud One platform, which Wendy just talked about. You asked before what is the impact of COVID? And one of the big impacts has been on the financial stress. Every company in every, every vendor is having. And so just the ease of managing less vendors and less tools and less places to procurement is of high value for every organization Just in terms of efficiency of operations. And just being able to acquire this new product on an existing platform where there are already consuming security tools. That by itself is amazing value. And number two, we're taking again... We're taking a technology which is a cloud native, it's a modern technology. And that's typically has been outside of the purview of a traditional security team and making it accessible to them in a place where it's easy for them to try out and they can, you know, start small and grow from there. They don't have to make a big commitment to get going. And more importantly, it's giving them visibility into this important technology that they didn't have before. >> So Wendy this is all intended at bridging that gap? I'm just curious, like if we take a peek inside, what this enables SecOps to do what it enables DevOps to do. What were some of the feedback that you're hearing from customers about those teams coming together and actually being able to work very collaboratively with that shift left actually being able to be done? >> Yeah. I mean, you know, if you talk to... There's some organizations who do this really well. They're very mature and their security operations teams and their DevOps teams work very closely together collaboratively, excuse me. And they also understand each other's needs. So they're able to insert tools into the security pipeline that don't slow DevOps down but also meet the needs of the security team. Whereas we see some other organizations where Dev is at one side of the pipeline and you've got security at the other and they don't tend to converse or meet. And those are the organizations where there tends to be more challenges. So the idea with this new solution is it's going to give the security team visibility of basically the scale and scope of their open source situation. So that they've actually got some data to go have conversations with the DevOps teams and start going in that direction of making those teams work more seamlessly together. I mean, you used the term DevSecOps before, some organizations that's a very real situation. Others still have a long way to go. And we think this is a great first step to bring those teams together. >> Fostering long-term friendships I'm sure. Just talk to me about the go to market, Wendy. How are you guys going to market together? Trend Micro and Snyk selling direct channel? What is it like? >> So this is actually going to be a Trend Micro Cloud One offering. So we jointly developed it with Snyk but it's going to be Trend Micro who is selling it. And we go to market a number of different ways. AWS marketplace is a big channel to market for us And this will be available for purchase there. When it becomes available in January. And also, we also work very closely with channel partners as well who also participate in AWS marketplace. >> So what are some of the things that you're expecting to customers to be able to take advantage of around the time of re:Invent and into early 2021? >> Yeah. I really encourage customers to visit our page on the AWS re:Invent platform. We're going to have all kinds of exciting demos there. You can go learn more about this new offering that we're delivering jointly developed with Snyk. And you can also ask about how you can sign up for early access to this new offering. So highly encourage you to go check that out. >> Excellent, early access is always nice to be a beta tester and really get that symbiotic relationship. >> Geva last question for you is as the Global Alliances CTO I imagine your customer conversations in the last year have changed dramatically. Talk to me about some of the things that you really think like in terms of like exposing vulnerabilities. Let's talk about exposing opportunities that that Snyk is helping organizations do so that they can not just keep the lights on during this very unprecedented time but actually be winners of tomorrow. >> Yeah, I think again at the heart of the DevOps movement and why it's been successful it's reducing that feedback loop between writing some codes, getting it to production in the hands of customers, getting the feedback from them and rinse and repeat and starting that loop. And those who have it, the faster you can get to market faster and can deliver value faster ultimately are the winners. Now, one of the things we've seen with the COVID is a lot of the this outbound activity has been going down. People have been going less to events and need to look more internally and how you can become better as an organization. And you've actually seen an increase in the investment of a digital transformation and cloud journeys and stuff like that. And one of the... One of kind of the traditional inhibitors that's going fast and all in into the cloud is the loss of control of the traditional security teams on the application development. Where now people can, you know... deploy hundreds of times every application to the cloud a day. And what we've seen is that they come to Snyk or to companies like ours, so we can secure those new modern development life cycles and give the security feedback to the developers as they're building the applications and give the security teams the visibility into those pipelines and application domain. So they have a sense that they're not losing all the control they used to have. They're still getting visibility into those application development and actually allowing their organizations to go faster because of it they can sign up to and be doing the technologies and actually increase the speed of going to the cloud. >> Yeah and that's critical because as we, you mentioned as we've been talking about for months now that the acceleration of cloud adoption, the speed of digital transformation it's one of those things that's challenging to do. You've got to have visibility. Period. In order to facilitate that. And if it's another thing that you kind of were describing Geva as that visibility provides that sense of control or trust, and that's also huge for not just a business to catch vulnerabilities but for teams the DevOps teams, the SecOps teams to be working together in a highly collaborative way. Do you agree Wendy? >> Absolutely. And the beautiful thing is this sets that up This tool. So it allows them to work together very collaboratively but it also sets up that visibility. So that down the road there could be even further automation into that process. Because you know, the whole purpose of DevOps is to take the people out of it. Right. So, but in order... You need to set up those processes to begin with. So this is a first step in terms of setting up that automation and visibility amongst those two teams. >> Excellent. And can you say one more time Wendy where prospective customers can go to learn more and become a early adopter? >> Yeah, absolutely. So visit our Trend Micro page at the AWS reinvent platform. And there you'll be able to learn much more about the offering and also learn how you can access the early adopter program. >> Excellent. You guys thank you so much for joining me on the program today. Sharing what Trend Micro and Snyk are doing together and how you're helping organizations cross-functionally be successful. We appreciate your time. >> Thank you, Lisa. Appreciate it. >> Thank you so much. >> My pleasure. For my guests, I'm Lisa Martin and you're watching theCUBE virtual. (upbeat music)

Published Date : Dec 2 2020

SUMMARY :

(narrator) From around the globe. It's great to have you both Great to be here. Talk to me about what you're seeing there. in the number of vulnerabilities And a number that you sent over and most of those are taking advantage and it's all... So Geva let's go over to you. and pick the ones that you want So it's a low cost Tory and the code So in terms of that and the traditional security each step up that you guys just described? and the DevOps team of some customers that you were working can you repeat the question? but talk to me about some of the impact and less places to procurement is to do what it enables DevOps to do. of the security team. the go to market, Wendy. but it's going to be Trend Micro on the AWS re:Invent platform. and really get that of the things that you really think like and all in into the cloud the SecOps teams to be working together So that down the road can go to learn more and also learn how you can access for joining me on the program today. Thank you, Lisa. and you're watching theCUBE virtual.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
GevaPERSON

0.99+

Wendy MoorePERSON

0.99+

Lisa MartinPERSON

0.99+

JanuaryDATE

0.99+

thousandsQUANTITY

0.99+

Geva SolomonovichPERSON

0.99+

WendyPERSON

0.99+

Trend MicroORGANIZATION

0.99+

AWSORGANIZATION

0.99+

LisaPERSON

0.99+

two teamsQUANTITY

0.99+

VegasLOCATION

0.99+

Last yearDATE

0.99+

last yearDATE

0.99+

SnykORGANIZATION

0.99+

two and a halfQUANTITY

0.99+

pandemicEVENT

0.99+

first iterationQUANTITY

0.99+

tens of thousandsQUANTITY

0.99+

first stepQUANTITY

0.99+

IntelORGANIZATION

0.99+

Global AlliancesORGANIZATION

0.98+

10 thousands of companiesQUANTITY

0.98+

oneQUANTITY

0.98+

OneQUANTITY

0.98+

threeQUANTITY

0.98+

GitHubORGANIZATION

0.98+

eachQUANTITY

0.97+

TwoQUANTITY

0.97+

SynkORGANIZATION

0.97+

todayDATE

0.97+

early 2021DATE

0.97+

tomorrowDATE

0.96+

DevSecOpsTITLE

0.96+

bothQUANTITY

0.95+

over a yearQUANTITY

0.95+

SecDevOpsTITLE

0.94+

DevOpsTITLE

0.92+

re:InventEVENT

0.91+

COVIDOTHER

0.9+

both developersQUANTITY

0.9+

one sideQUANTITY

0.87+

hundreds of timesQUANTITY

0.85+

coupleQUANTITY

0.83+

a dayQUANTITY

0.82+

Cloud OneTITLE

0.8+

InventEVENT

0.77+

Fred Moore, Horison Information Strategies | CUBE Conversation, August 2020


 

>> Introducer: From the CUBE studios in Palo Alto and in Boston connecting with thought leaders all around the world. This is a CUBE Conversation. >> Hi everybody this is Dave Volante. Welcome to the special CUBE Conversation. I'm really excited to invite in my mentor and friend. We go way back. Fred Moore is here. He's the president of Horizon Information Strategies. We going to talk about managing data in the zettabyte era. Fred, I think when we first met, we were talking about like the megabyte era. >> Right, exactly. I think back then we had, you know, maybe 10 bytes in our telephone and one on the wristwatch, you know, but now you can put a whole data center in a single cartridge of tape and take off. Things that really changed. >> It's pretty amazing. And of course, for those who don't know Fred, he was the first a systems engineer at Storage Tech. And as I said, somebody who taught me a lot in my early days, of course he's very famous for the term that everybody uses today. Backup is one thing, recovery is everything. And Fred just wrote, you know, this fantastic paper. He's done this year after year after year. He's just dug in, he's a clear thinker, strategic planner with a technical bent in a business bent. You're like one of those five tool baseball players, Fred. But tell me about this paper. Why, did you write it? >> Well, the reason I wrote that is there's been so much focus in the last year or so on the archive component of the storage hierarchy. And the thing that's happening, we're generating data lots faster than we're analyzing it. So it's piling up being unanalyzed and sitting basically untapped for years at a time. So that has posed a big challenge for people. The other thing that got me deeper into this last year was the Hyperscale market. They are, those people are so big in terms of footprint and infrastructure that they can no longer keep everything on disk. It's just economically not possible. The energy consumption per disk, the infrastructure costs, the frequency of, you know, taking a disc out every three, four or five years for just for replacement, has made it very difficult to do that. So Hyperscale has gone to tape in a big way, and it's kind of where most of the tape business in the future is going to wind up in these Hyperscale businesses. >> Right. >> We know tape doesn't exist in the home. It doesn't exist in a small data center. It's only a large scale data center technology, but that whole cosmos led me into the archive space and in a need for a new archive technology beyond tape. >> So, I want to set up the premise here. Just going to pull this out of your paper. It says a 60% of all data is archival, and could reach 80% or more by 2024, making archival data by far the largest storage class. And given this trajectory, the traditional storage hierarchy paradigm is going to to need to disrupt itself. And quickly we're going to talk about that. That really is the premise of your paper here, isn't it? >> It is, you know, to do all this with traditional technologies is going to get very painful for a variety of reasons. So the stage is set for a new tier and a new technology to appear in the next five years. Fortunately, I'm actually working with somebody who is after this in a big way, and in a different way than what you and I know. So I think there is some hope here that we can redefine and really add a new tier down at the bottom. You see it kind of emerging on that picture of the deep archive tier it's. Beginning to show up now and it's, you know, infinite storage. I mean, if you look at major league sports, the world series and Superbowl, you know, that data will never be deleted. It'll be here forever. It'll be used periodically based on circumstances. >> Yeah, well, we've got that pyramid chart up here. I mean, you invented this chart, essentially. At least you were the first person that ever showed it to me. I honestly think that you first created this concept where you had a high performance tier, and a high cost per bit, and then an archive tier. Maybe it wasn't this granular, you know, back in the '70s and '80s? But it's constantly been changing with different media types and different use cases. >> You know, you're right. I mean, and you all know this because you know, when storage deck introduced the nearline architecture, nearline set in between online and offline storage, we called it nearline, and trademarked that term. So that was the tape library concept to move data from offline status to online status, with a robotic library. So that brought up that third tier online, nearline, and offline, but you're right. This pyramid has evolved and morphed into several things. And, you know, I keep it alive. Somebody said, I'll have a pyramid on my tombstone instead of my name when I go down. (both chuckles) But it's really the heart and soul of the infrastructure for data. And then out of this comes all the management and security, the deletion, the immutable storage concepts, the whole thing starts here. So it's like your house, you got to have a foundation, then you can build everything on top of it. >> Well, and as you pointed out in your paper, a minute ago, it always comes down to economics. So I want to bring up the sort of 10 year expected cost of ownership the TCO for the three levels you got all disk, you got all cloud and you got LTO and you got the different aspects of the cost. The purple is always the biggest piece of cost. It's the labor costs. But of course, you know, in cloud, you've got the big media cost because they've done so much automation. I wonder if you could take us through this slide, what are the key takeaways there? >> Well, you know the thing that hurts here with all these technologies is, as you can see up on top up there, what the key issues are with this and the staff and personnel. So the less people you have to manage data, the better off you are. And then, you know, it's pretty high for disk compared to a lot of things to do on desk, but lack of manage a lot of, you know, sadly what you and I had to deal with years ago and provision kind of, I mean, a lot of this stuff is just labor intensive. The further you get, the further down the pyramid and you also get less labor intensive storage. And that helps then you get a lower cost for energy and cost of ownership. The TCO thing is kind of taking on a new meaning. I hate to put up a TCO chart in some regards, because it's all based on what your input variables are. So you can decide something different, but we've tried to normalize all kinds of pricing and come up with everything. And the cloud is a big question for most people as to how does it stack up. And if you don't ever touch the data in the cloud, you know, the price comes way down. If you want to start moving data in and out of the cloud, you're going to have to ante up in a big way like that. But, you know we're going to see dollar a terabyte storage prices down at the bottom of this pyramid here in the next five years. But hey, you can get down to four or five terabyte with drives media in libraries tape, just entire flash and certainly higher than that. But you know, we're going to have the race to a dollar a terabyte, total TCO cost here in 2025. >> So when Amazon announced, they just announced a glacier. Everybody said, okay, what is that? Is that tape is that, you know, this spun down disk, cause it took a while to get it back. But you're kind of seeing that tape technology as you said, really move into the Hyperscale space and that's going to accommodate this massive, you know, lower part of the pyramid, isn't it? >> Exactly. Yeah. And we don't have a spin down disk solution today. I was actually on the board of a company that started that called Copay and years ago, right up here near Boulder. >> You watch him (both chuckles) You absolutely right. And a few other people that, you know also, but the spin down disk never made it. And you know, you can spin up and down on a desk on your desktop computer, but doing that in a data center, then on a fiber channel drive never made it. So we don't have a spin down disk to do that. The archive space is kind of dominated by very high capacity disc and then tape. And most of the archive data in the world today, unfortunately sits on display. It's not used and spinning seven by 24, three 65 and not touch much. So that's a bad economic move, but customers just found that easier to handle by doing that then going back to tape. So we've got a lot of data stored in the wrong place from a total economics point of view. >> But the Hyperscalers are solving this problem, or they're not through automation. And, you know, you referenced storage, tiering, really trying to take the labor cost out. How are they doing? Are they doing a good job? >> They've done really well taking the labor costs down, I mean, they have optimized every screw, nut and bolt in the 42 chassis that you could imagine to make it as clean as possible to do that. So they've done a whole lot to bring that cost down, but still the magnitude of these data centers, we're going to finish the year 2020 with about 570 Hyperscale data centers. So it's going right now around the world. You know, each one of these things is 350 400,000 square feet, and up of race wars space. And the economics just don't allow you to keep putting inactive data on spinning disk. We don't have to spin down disk, tape You know, I feel like the only guy in the industry that says this sometimes, but, you know, tapes had a, you know, a renaissance. That people don't appreciate in terms of reliability, throughput, you know, tapes three orders of reliability higher than disc right now. And most people don't know this. So tape's viable, the Hyperscalers see that. And read one Hyperscalers or you know, by over a million pieces of LTO tape last year alone. Just to handle this, you know, be the pressure valve to take all of this inactive stuff off of the gigantic disc farms that they have. >> Well, so let's talk about that a little bit. So you just try to keep it simple. You've got, you know, flash disk and tape. It feels like disc is getting squeezed. We know what flash has done in terms of eating into disc. And you see in that, in the storage market generally, it's soft right now. And I've posited that a lot of that is the headroom that data centers have with flash, is they don't have to buy spindles anymore for performance reasons. And the market is soft. Only pure is showing consistent growth, and ends up a little bit, cause because of mainframe, you've got Dell popping back and forth, but generally speaking, the primary storage market is not a great place to be right now, all the actions and sort of secondary storage and data protection. And so just going to get squeezed, and you mentioned tape, you said that if your only person talking about it, but you said in your paper, you know, it's sequential. So time to first bite is, is sometimes problematic, but you can front end a tape with cash. You can use algorithms and, you know, smart scans and to really address that problem. And dramatically lower the cost. Plus you could do things like you tell me Fred, you're the technologists here, but you're going to have multiple heads things that you can't necessarily do in a hermetically sealed disc drive. >> (chuckles) You can. And what you just described is called the active archive layer in the pyramid. So when you front end a tape library with a disk array for a cash buffer, you create an active archive and that data will sit in there three or four or five days before it gets demoted based on inactivity. So, you know for repetitive use and you're going to get dislike performance for tape data, and that's the same cash in concept that deserve systems had 30 years ago. So that does work and the active archive has got a lot of momentum right now. There's right here near me, where I live in Boulder. We have the Active Archive Alliances headquarters, and I get to do their annual report every year. And this whole active archives thing is a big way to make and overcome that time, the first bike problem that we've had in tape. And we'll have for quite a while. >> In your paper, you've talked about some of the use cases and workloads and you laid out, you know basically taking the pyramid and saying, okay based on the workload, some certain percentage should be up at the top of the pyramid for the high performance stuff. And of course lower for the, you know, the less, you know, important traditional workloads, et cetera. And it was striking to see the Delta between annual, the highest performance we had 70% , I think was up in the top of the pyramid versus, you know the last use case. So in you're talking about what it costs to store a zettabyte in services is that if I talk about 108 million at the high end versus a about 11 or 12 million, so huge Delta 10 X Delta between the top and the bottom based on those, you know allocations based on the workload. >> Yeah, I tried to get at the value of tiered storage based on your individual workload in your business. So I looked at five different workloads, the top one that you referenced. That was in there at 108 million, you know, is the HPC market. I mean, when I visited a few of the HPC people, you know, their DOD agencies in many cases, you know that and I threw the pyramid up. The first thing they would say our permanents inverted. You know (chuckles), all of our archive data is about 10%. You know, we were all flash as much as we can. And we have a little bit archived, we're in constant. Simulation and compute mode and producing results like crazy from the data. So we do an IO, bring in maybe a whole file at a time and compute for minutes before we come up with an answer. So just the reverse. And then I got to look into all the different workloads talking to people, and that's how we develop these profiles. >> So let's pull up this future of the storage hierarchy, was again kind of of talks to the premise of your paper. Walk us through this like, what changes should we be expecting, and you got air gap in here. We're going to, I'm going to ask you about remastering and lifespan, but take us through this. >> Yeah, you know, the traditional chart that you had up on the first big year had four tiers, you know, two disturbs and solid state at the top. And then the big archive tier, which is kind of everything falling down into tape at this point. But you know again, tape has some challenges. You know time to first bite and sequential access on. And then when we couple using tape or disc as an archive, most of that data that's archival is captured as unstructured data. So we don't have, we don't have tags, we don't have metadata, we don't have indices, and that has led to the movement for object storage, to be a primary, maybe in the next five years, the primary format in store archived data, because it's got all that information inside of it. So now we have a way to search things and we can get to objects, but in the interim, you know, it's hard to find and search out things that are unstructured and, you know, most estimates would say 80% of the world's data is at least that much is unstructured. So archives are hard to find once you store it, there's one storing is one thing, retrieving it is another thing. And that's led to the formation of another layer in the story tier. It's going to be data that doesn't have to be remastered or converted to a new technology. in the case of the disc, every three, four or five years or tape drive every eight, maybe 10 years take large lost. Kate Media can go 30 years, but with all new modern tape media, but unfortunately, you know, the underlying drive doesn't go back that far, you can't support that many different versions. So the media life is actually longer than it needs to be. So the stage is set for a new technology to appear down here to deal with this archives. So it'll have faster access will not need to be remastered every five or 10 years, but you'll have, you know, a 50 year life in here. And I believe me, I've been looking for a long time to be able find something like this. And, you know we have a shot at this now, and I'm actually working with the technology that could pull this off. >> Well, it's interesting also as well, you calling out the air gap and the chart we go back to our mainframe guesses, is not a lot we haven't seen before, you know, maybe data D duplication, but you know, the adversary has become a lot more sophisticated. And so air gaps and, you know, ransomware on everybody's mind today, but you've sort of highlighted three layers of the pyramid that are actually candidates for that air gapping. >> Yeah. The active archive up there, of course, you know, with the disk and tape combined, then just pure tape. And then this new technology, which can be removable. You know, when you have removability you create an air gap. little did we know when you and I met that removability would be important to take. We thought we were trying to get rid of the Chevy truck access method, and now without electricity with a terrorist attack and pandemic or whatever. The fastest way to move data is put it on a truck and get it out of town. So that has got renewed life right now. Removability much to my shock from where we started. >> You talked about remastering and you said it's a costly labor intensive process that typically migrates previously archived data to new media every five to 10 years. First of all, explain why you have to do that and how a data center operators can solve that problem. >> Yeah. And let's start with data where most of it sits today on described, you know it describes useful life is four to five years before it either fails or is replaced. That's pretty much common now. So then they have to start replacing these things. And that means you have to copy, you know, read the data off the disk and write it somewhere else, big data move. And as the years go by that amount of data to revamp or gets bigger and bigger. So, I mean, you can do the math as you well know, you want to move, you know, 50 petabytes of data. It's going to take several weeks to do that electronically. So this gets to be a real time consuming effort. So most data centers that I've seen will keep about one fifth of their disposal every year migrating to a new technology, just kind of rolling forward as they go like that rather than do the whole thing every five years. So that's the new build in the disc world. And then for tape the drive stay in there longer, you know the LTO family drives a good read. You know two generations back from the current one that's been there. They cut that off a year ago. They'll go back to something like this soon. But you know, you can go into 10 years on a tape drive. The media life because of very unfair right media, which was already oxidized the last 30 years or more. The old media metal particle was not oxidized. So, you know, the oxidized flake, the particles would fall off people will say shit. I've had this in here eight years, you know, and it's kind flake it I put it back in. So that didn't work well. But now that we had various Verite Media, it was all oxidized, the media lives skyrocket. So that was the whole trick with tape to get into something that was preoxidized before time could cause it to decay. So the remastering is a lot, is less on tape by two to one to three to one, but still when you've got petabytes, maybe an exabyte sitting on tape in the future, that's going to take a long time to do that. >> Right. >> So remastering you'd love a way to scale capacity without having to continue to move the data to something new ever so often. >> So my last question is you've , you know, you went from a technical role into a strategic planning role, which of course the more technical you are in that role, the better off you're going to be. You don't understand that the guardrails, but you've always had a sort of telescope in the industry and you close the paper and it's kind of where I want to end here on, you know, what's ahead. And you talk about some of the technologies that obviously have legs, like three D NAND and obviously magnetic storage. You got optical in here, but then you've got all these other ones that you even mentioned, you know, don't hold your breath waiting for these multilayer photonics and dedic DNA. What class media, holographic storage, quantum storage we do a lot about quantum. What should we be thinking about and expecting as observers as to, you know, new technologies that might drive some innovation in the storage business? >> Well, I've listed the ones that are in the lab that have any life at all, right on this paper. So, you know can kind of take your pick at what goes on there. I mean, optical disk has not made it in the data center. We talked about it for 35 years. We invested in it in storage deck and never saw the light of day. You know, optical disk has remained an entertainment technology throughout the last 35 years. And the bigger rate is very low compared to data center technology. So, you know optical would have to take a huge step going forward. We got a lot of legs left in the solid state business. That's really active SSB, the whole nonvolatile memory spaces. Probably not 45% of the total disc shipments in terms of units, from what it was at it's high and in 2010. Unbelievable though. You know, in disc shipment 650 million drives a year announced just under 400, 35,400. So flashes has taken this stuff away, like crazy. Tape shouldn't be taking just away, but the tape industry doesn't do a very effective job of marketing itself. Most people still don't know what's going on with tape. They're still looking out of the roof, still looking out of the rear view mirror at a tape, as opposed to the front windshield. We see all the new things that have happened. So, you know they have bad memories of taping the past load stretch, edge damage tape, wouldn't work a tear or anything like that. It was a problem. Oh, that's pretty well gone away now. In a moderate tape is a whole different ball game, but most people don't know that. So, you know tapes going to have to struggle with access time and sequential reality. They've done a few things to come over excess time and the order request now to take the optimizer based on physical movement on the tape that can take out 50% of your access time for multiple requests on a cartridge. The one on here that's got the most promise right now would be a version of a multilayer photonic storage, which is. I would say sort like optical, but, you know, with data center, class characteristics, multi-layer recording capability on that random access, which tape doesn't have. And, you know, I would say that's probably the one that you would want to take some look at going forward like this. The others are highly specular. You know, we've been talking about DNA since we were kids. So we don't have a DNA product out here yet. You know, it's access times eight hours. It's probably not going to work for us. That's your, that's not your deep archive anymore. That's your time capsule storage. >> Yeah, right. >> Lock the earth. So, I mean, I think you kind of see what's here. I mean, the chances are it's still going to be the magnetic technologies tape disc, and then the solid state number and stuff. >> Right. >> But these are the ones that I'm tracking and looking at, trying to have worked with a few of the companies that are in this. Future list and I'd love to see something breakthrough out there, but it's like, we've always said about a holographic storage. For example, you know, there's been more written about it than there's ever been written on it. (both chuckles) >> Well, the paper's called Reinventing Archival Storage. You can get it on your website I presume Fredhorizon.com >> Yep, absolutely. >> Awesome. >> Fred Moore, great to see you again. Thanks so much for coming on the CUBE. >> My pleasure, Dave. Thanks a lot. Great job. >> All right. And thank you for watching everybody. This is Dave Volante for the CUBE. We'll see you next time. (upbeat music)

Published Date : Aug 5 2020

SUMMARY :

all around the world. data in the zettabyte era. I think back then we had, you know, And Fred just wrote, you business in the future is going to We know tape doesn't exist in the home. That really is the premise the world series and Superbowl, you know, you know, back in the '70s and '80s? this because you know, But of course, you know, in cloud, So the less people you Is that tape is that, you know, of a company that started that And most of the archive And, you know, you that says this sometimes, but, you know, lot of that is the headroom and that's the same cash in concept the, you know, the less, the top one that you referenced. to ask you about remastering that are unstructured and, you know, And so air gaps and, you know, up there, of course, you know, and you said it's a costly the math as you well know, continue to move the data and you close the paper ones that are in the lab I mean, the chances For example, you know, Well, the paper's called Fred Moore, great to see you again. Thanks a lot. This is Dave Volante for the CUBE.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Fred MoorePERSON

0.99+

FredPERSON

0.99+

Dave VolantePERSON

0.99+

BostonLOCATION

0.99+

30 yearsQUANTITY

0.99+

Palo AltoLOCATION

0.99+

AmazonORGANIZATION

0.99+

DavePERSON

0.99+

70%QUANTITY

0.99+

August 2020DATE

0.99+

2025DATE

0.99+

2010DATE

0.99+

10 yearQUANTITY

0.99+

60%QUANTITY

0.99+

80%QUANTITY

0.99+

BoulderLOCATION

0.99+

50 yearQUANTITY

0.99+

10 yearsQUANTITY

0.99+

35 yearsQUANTITY

0.99+

HyperscaleORGANIZATION

0.99+

eight hoursQUANTITY

0.99+

2024DATE

0.99+

50 petabytesQUANTITY

0.99+

last yearDATE

0.99+

CopayORGANIZATION

0.99+

five yearsQUANTITY

0.99+

Horizon Information StrategiesORGANIZATION

0.99+

fourQUANTITY

0.99+

Active Archive AlliancesORGANIZATION

0.99+

12 millionQUANTITY

0.99+

108 millionQUANTITY

0.99+

10 bytesQUANTITY

0.99+

24QUANTITY

0.99+

twoQUANTITY

0.99+

eight yearsQUANTITY

0.99+

a year agoDATE

0.99+

350 400,000 square feetQUANTITY

0.99+

Horison Information StrategiesORGANIZATION

0.99+

2020DATE

0.98+

45%QUANTITY

0.98+

firstQUANTITY

0.98+

DellORGANIZATION

0.98+

50%QUANTITY

0.98+

five daysQUANTITY

0.98+

over a million piecesQUANTITY

0.98+

threeQUANTITY

0.98+

five toolQUANTITY

0.98+

first biteQUANTITY

0.98+

about 10%QUANTITY

0.97+

DODORGANIZATION

0.97+

FirstQUANTITY

0.97+

yearsDATE

0.97+

10QUANTITY

0.97+

each oneQUANTITY

0.97+

first personQUANTITY

0.96+

65QUANTITY

0.96+

one thingQUANTITY

0.96+

30 years agoDATE

0.96+

todayDATE

0.96+

dollar a terabyteQUANTITY

0.96+

sevenQUANTITY

0.96+

four tiersQUANTITY

0.95+

oneQUANTITY

0.95+

five different workloadsQUANTITY

0.95+

two generationsQUANTITY

0.95+

Storage TechORGANIZATION

0.95+

five terabyteQUANTITY

0.95+

three levelsQUANTITY

0.95+

CUBEORGANIZATION

0.95+

pandemicEVENT

0.94+

earthLOCATION

0.94+

under 400, 35,400QUANTITY

0.94+

Andrew Wilson & Mike Moore, Accenture | AWS Executive Summit 2018


 

>> Live from Las Vegas It's theCUBE covering the AWS Accenture Executive Summit. Brought to you by Accenture. >> Welcome back everyone to theCUBE's live coverage of the AWS Executive Summit here at the Venetian in Las Vegas. I'm your host, Rebecca Knight. We have two guests for this segment. We have Mike Moore, Senior Principal at Accenture Research, and Andrew Wilson, Chief Information Officer at Accenture. Thank you both so much for returning to theCUBE. >> Good to see you as ever, Rebecca, and to be back in Las Vegas as well. >> Exactly, back in Sin City, right, here we are. So our topic is innovation. A buzzword that is so buzzy it's almost boring. Let's start the conversation with just defining innovation. What does innovation mean? >> An objective, a behavior, a way of working. To me, innovation is what we need to do with modern technology to enable the enterprise and the business world and be creative humans and to use disciplines which we didn't typically bring to work before. >> And is it creativity, or is there sort of logic and rationale too? >> I think there's logic and rationale. But there's also entertainment, fun, modern consumer-like experimentation, risk-taking, things of that nature. >> I think that a big key is actually striking a balance between creativity and logic and rationale and that's the really tricky bit, because you need to give your employees the license to be creative but within a certain set of boundaries as well. >> The rules of work have definitely changed, and behaviors that we encourage, even the clothes we wear, how we work, when we work, those are all characteristic of a more innovative, accepting diverse world, and a world that can keep up with the modern technology and the advancements and the announcements like we're hearing about here at re:Invent. >> It's the ultimate right brain, left brain behavior and activity. So Mike, you've done some research recently about the hallmarks of innovative companies, what they do differently from the ones that are not innovative, that are failing here, so tell our viewers a little bit about what you've found in your research. >> We surveyed 840 executives from a variety of different companies, different industries, different geographies, to understand their approach to innovation, and those who were doing it particularly well, and those maybe not so well. And around about 14 percent of our respondents were turning their investments in innovation into accelerated growth, and there were lots of different reasons for their success but three things really stood out. So first of all their outcome lacked in terms of the way they approach innovation, so they put a clear set of processes around their innovation activities, and then linked those to operational and financial performance metrics. They're also disruption minded, so they're not just pursuing incremental tweaks to their products and services, but their investing in disruptive technologies that could actually create entirely new markets. And then finally they're change orientated. They're not just using innovation to change their products and services, but also to fundamentally change the nature of their own organizations as a whole. >> So 14 percent are knocking it out of the park. Does that mean the rest of them are all laggards or are sort of some in the middle? What is the state of innovation in industry today, would you say, Andrew? >> I would say it's hugely variable by industry, geography, type of company, and individual instance of leader and culture, but I am sure that the most successful companies, those that are pivoting to the new, those that are imaginative, those that have recently arrived, all have that DNA that we're describing, all have that way of working, all have that ability to operate cleverly, intelligently, humorously, and at speed. I think innovation is very much characterized by something that can be fast-failed, do, step, move sideways, do again. The way of working has changed in modern enterprises. We as CIO's have to accept that. We have to speed up. We have to create the environment in where that productivity, where that creation can occur, and I think all of that's key. >> You keep mentioning this, the way of working has changed, and I think we all sort of know what you mean but explain a little bit what you're seeing. >> Experimentation, the ability to get more done with the resources that you have. So here we are at AWS re:Invent, cloud-based operations. Cloud gives you, gives me as a CIO the means to do more, more quickly, more rapidly, on a greater scale, in more places that I ever could have imagined in my old old-fashioned data senses. So the services we can consume, the data we can connect together, the artificial intelligence we can bring to it, the consumer-like experience. All of those things, which by the way, are drawing on innovative behaviors in their own right, are absolutely what the game is about now. >> How does AWS figure into your cloud transformation? >> Well for our cloud transformation at Accenture, AWS is one of the core cloud platform providers who power Accenture. We are nearly 95 percent in cloud. So as an organization that's very pronounced, and typically ahead of most organizations. But we sort of have to be, don't we? I mean, we have to be our own North Star. I can't sit here and explain the virtues of what Accenture can bring to a client's cloud transformation if we haven't already done it to ourselves. And by the way, that drew on innovative approaches, risk-taking approaches because over the last three years we've moved Accenture to the cloud. >> So I love how you said it, we are our own North Star, and other people would say we eat our own dog food, I mean that's just kind of more gross, but in terms of having experienced this transformation yourselves, how do you use what you've learned to help your companies transform as well? And make these moves, take these risks, what would you say to that? >> Well I think we keep an eye on the research with our colleagues there, they're our own North Star. I think we look at the ecosystem, we assess readiness for enterprise, security compliance, scale, availability, and then we also look and say, and what's ready for prime time in terms of Accenture scale, half a million people nearly. You bring all of those things together and it's a recipe, and that's why we consult our business, that's why we guide and educate and experiment and innovate together. And that's very much how we adopted cloud, it's very much how we do a number of other things, and the creative services we have. >> In terms of, let's get back to the research. So how do you, I mean as you said, the research is, as Andrew said, it's something that executive leaders are looking at to figure out what's actually happening in the market as well as what's happening within the organization itself. So how do you set your research agenda in terms of figuring out where you want to focus your time and energy and resources. >> Well I think we do it in a very similar way to in which we consult with clients, we speak to them. We talk to them about some of the key issues that they're facing and we always interview a series of executives and also academics to get their perspective at the start of their project. And that's something that we did in this particular instance and what we heard from many executives was that, to the point that Andrew was making before, the speed and scale of innovation today is happening at a completely different pace than in the past. So product cycle times are just diminishing in every single industry and as a consequence, executives now need to build new innovation units to make sure that they can respond to that changing market. So that's we wanted to explore through the research. >> So in this research, with the 14 percent doing it well, the 86 percent sort of either, somewhere on the spectrum of doing terribly or figuring things out, getting better, what are their pain points, and what's your advice to those companies? >> Well I think, and we take the positive spin on it in terms of what the companies are doing well, one of the points that Andrew was making before was how Accenture works with other partners to become more innovative itself. And that's something that we saw many of the high performing companies doing. So many of them were what we call networks powers. Not just innovating using their own resources, their own people, but their drawing on a broader ecosystem of partners to bring the very best products and services to their customers, and their spending not just on R and D internally but also on accelerators, incubators, technology based M and A, and actually their spending as much on inorganic innovation as they are on organic innovation. >> At Accenture we actually help our clients look for trap value, and what we mean by that is if an organization with a history, with a set of business processes, a set of technologies, and a set of disciplines and employees that have been successful and worked possibly for decades in that model, then they're going to be in some pretty tight guide rails. How do you innovate out of that, to deal with all of the destruction that's now available, good healthy disruption, that actually reveals the next level of efficiency, customer satisfaction, product creativity, and innovation in it's own right, so that's innovation in action, if you like. >> I want to ask, here we are at AWS re:Invent, Andy Jassy on the main stage this morning announcing a dizzying number of new products, services, and AWS, this is Amazon, this is a huge company that really seems to know how to innovate, and do it constantly, but is that is that, can every company be Amazon? You know what I'm saying? I mean, is this really possible and attainable? >> Is such a thing as innovation fatigue perhaps? >> Well, exactly, right! >> My view is that you have to find a way to make innovation a constant and a norm. It doesn't mean that you always will have to operate with the same ridiculous pace, but creativity and pace do go hand in hand to a point, but to be ahead, to stay ahead, and to lead an organization of technologists, who can comprehend all of these announcements, so you have to innovate in both how you lead and operate as well. It's not just your product, it's your behaviors, because there's just so much coming all the time. >> Right, and we've seen a number of large companies, not necessarily technology companies, but I'm thinking of Sears and Toys-R-Us, that have really, you've seen what can happen, the cautionary tales. >> Look at the attrition in the Fortune 500, and you can see how companies have a, a half life now, which perhaps is very different to 20 or 30 years ago. >> Right, right, exactly. Well, Mike and Andrew, thank you so much for coming on theCUBE. This was a really fascinating discussion. >> Thanks. >> Thank you, good to see you again. >> I'm Rebecca Knight, stay tuned for more of theCUBE's live coverage of the AWS Executive Summit. (techno music)

Published Date : Nov 29 2018

SUMMARY :

Brought to you by Accenture. of the AWS Executive Summit here and to be back in Las Vegas as well. Exactly, back in Sin City, right, here we are. and to use disciplines which we didn't typically bring I think there's logic and rationale. and that's the really tricky bit, and behaviors that we encourage, It's the ultimate right brain, left brain behavior and then linked those to operational Does that mean the rest of them are all laggards all have that ability to operate cleverly, intelligently, and I think we all sort of know what you mean So the services we can consume, I can't sit here and explain the virtues and the creative services we have. in the market as well as and also academics to get their perspective of the high performing companies doing. and employees that have been successful and to lead an organization of technologists, Right, and we've seen a number of large companies, and you can see how companies have a, a half life now, Well, Mike and Andrew, of the AWS Executive Summit.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Rebecca KnightPERSON

0.99+

AndrewPERSON

0.99+

MikePERSON

0.99+

Mike MoorePERSON

0.99+

RebeccaPERSON

0.99+

Andrew WilsonPERSON

0.99+

AWSORGANIZATION

0.99+

AccentureORGANIZATION

0.99+

SearsORGANIZATION

0.99+

AmazonORGANIZATION

0.99+

Andy JassyPERSON

0.99+

Sin CityLOCATION

0.99+

Toys-R-UsORGANIZATION

0.99+

Las VegasLOCATION

0.99+

North StarORGANIZATION

0.99+

14 percentQUANTITY

0.99+

86 percentQUANTITY

0.99+

two guestsQUANTITY

0.99+

840 executivesQUANTITY

0.99+

20DATE

0.99+

VenetianLOCATION

0.99+

oneQUANTITY

0.98+

bothQUANTITY

0.98+

AWS Executive SummitEVENT

0.98+

Accenture ResearchORGANIZATION

0.98+

half a million peopleQUANTITY

0.96+

three thingsQUANTITY

0.96+

nearly 95 percentQUANTITY

0.95+

todayDATE

0.92+

theCUBEORGANIZATION

0.89+

AWS Executive Summit 2018EVENT

0.85+

last three yearsDATE

0.83+

30 years agoDATE

0.83+

single industryQUANTITY

0.82+

AWSEVENT

0.81+

firstQUANTITY

0.8+

Accenture Executive SummitEVENT

0.79+

about 14 percentQUANTITY

0.76+

this morningDATE

0.76+

Fortune 500TITLE

0.67+

pointsQUANTITY

0.65+

aroundQUANTITY

0.64+

ChiefPERSON

0.6+

decadesQUANTITY

0.59+

InventEVENT

0.54+

half lifeQUANTITY

0.5+

Mike Moore & Chris Wegmann, Accenture | AWS Executive Summit 2018


 

>> Live from Las Vegas, it's theCUBE, covering the AWS Accenture executive summit. Brought to you by Accenture. >> Welcome back everyone to theCUBE's live coverage of the AWS executive summit, I'm Rebecca Knight, your host, we're here at the Venetian in Las Vegas. We have two guests for this segment, we have Mike Moore, Senior Principal Accenture Research, and Chris Wegmann, Managing Director Accenture AWS business group, thank both you so much for coming on theCUBE, and you for returning to theCUBE. >> Thanks for having me back, it's good to be back. >> So Mike I want to start with you and talk about your recent research which is entitled Discover Where the Value's Hiding: How to Unlock the Value of Your Innovation Investments. I like it, 'cause it just makes me think that innovation's just hiding somewhere in the corner, maybe underneath this desk. So talk a little bit about why companies can't find the innovation, and how they're failing at this. We'll get to the rays of hope later, but talk a little bit about what the problem is as you see it. >> Well, it certainly seems to be hiding for a lot of companies. Based on the research that we did, we found that over the course of the past five years large income companies like our own, and then also start-ups, have spent a combined 3.2 trillion dollars on innovation, which is obviously a huge sum. But when we looked at the rate of return on that investment, incumbent companies, we found it was actually declining by 27% over the course of that five year period. So there's a clear disconnect there in terms of what companies are doing. >> So, why are companies, why are so many companies not good at this? >> Well, we asked 840 executives from around the world exactly that question. And what we found is that for the vast majority of them who have been increasing their investments in innovation, they weren't seeing a great return, and what we found is that many of them were focused on incremental innovation, just small tweaks and adjustments to their products and services. But that's really not enough to get new customers in this day and age. But there was some reason for optimism, around about 14% of our respondents said that actually they were translating their investments in innovation into accelerated growth, they were outperforming their peers in terms of profitability, in terms of their market growth and they actually expected to continue to do that over the course of the next four to five years. >> So, I want to talk about that 14%, that rarefied group, but I want to bring you into the conversation Chris and just talk a little bit about the relationship between Accenture and AWS, and how you approach innovation, and how you help clients think about their innovation and driving ingenuity and creativity in their businesses. >> So Accenture and AWS have been partners for over twelve years, even before the first AWS service hit the market, Accenture was starting to use it in our labs at that point, and looking at how we could leverage S3 to really innovate on, and we've carried that tradition on for a while now. A couple years ago we sat down and really looked at what our enterprise customers were struggling with as they moved to the Cloud, and at that point innovation wasn't quite the topic yet, it was really how do we use Cloud to get better returns on our investment, better TCOs, things like that, and now we've seen that turn, as AWS has created more and more capabilities and solutions and offerings, our customers are really wanting to figure out how they innovate. They go and ask AWS "How do you innovate?" And that's their number one, one of their biggest EBCs is how do they innovate. So, we looked at it and said, that's great, how do we take that to the next level? How do we fix these failures that're happening and what we've seen is most customers are in this stop and go innovation traffic, I like to call it. There's people that're whizzing by them, the 14% are whizzing by them in the fast lane, so the question is how do you get them out of that stop and go traffic, into that fast lane, and, There's no lack of ideas, they have tons of ideas on how they can innovate, how they can use drones, how they can use all this. The ideas are out there, but taking those and turning those into operationalized assets that're continuously working, continuously growing, continuously maturing is where they struggle. >> And the question, when companies would ask you how do you innovate, I mean it is this question, but as you're implying it sounds as though it's a very, you have to have some discipline around it, there has to be real processes around the innovation, it's not just throw a bunch of creative people in a room together. >> No, that's great, you can do the creative people and they come up with the great ideas and there's no lack of those, but then you've got to operationalize those and go through the disciplining to take those, pick which ones are going to drive value, invest in those, operationalize those, and take them from a proof of concept or a pilot or whatever you want to call it and actually turn it into something that gets used every day, and what we've been focusing on with AWS is how do you get out of that, take what's out of that ideation stage, operationalize it using the full set of AWS services, and then how do you continue to run that and prove it going forward. >> So Mike, the 14%, what are they doing? What makes them different? >> Well I think there's lots of things that stand out, but there are really three things that came of the research, so firstly that group of companies is outcome led, as you were just saying, they're not just relying on the method, the genius in the garage tinkering away, but they're putting a real set of processes around the innovation activities that they're pursuing. Then they're linking those activities to clear operational and financial metrics. And then secondly they're disruption minded, so unlike the other companies that aren't performing well, they're really focused on investing in disruptive technologies that have the potential to create entirely new markets. And then finally, they're change orientated, so they're not just using innovation to develop new products and services, but they're also using it to drive more fundamental change across their organizations. And one of the principle changes that they're making is that they're becoming what we call network powered, so they're not just relying on their own internal innovation but they're drawing on a wide ecosystem of partners, like AWS, to really supercharge the rate at which they innovate. >> So those are the characteristics, what are you seeing on their ground, can you give us some specific examples of how they're taking those characteristics and what they're actually doing? >> I think you see companies set up and grow these innovation pods, so what we see customers doing is expanding those beyond just one pod. So, not just focusing on one part of their organization to do this, bringing that into a central location, creating a hub of pods and capabilities using everything, AWS services, using DevOps, doing all the cool stuff that's out there, but operationalizing that and getting to that center of excellence where they're actually seeing it end to end and they're not just jumping from one problem to then next. And once that graduates out, they have an organization waiting to take that on and continuing that journey while the next set comes in. So it's this process, it's this ongoing kind of chain of different problems coming in, being solved, and graduating out the other end. >> Is this a technology issue or is it a culture issue? >> The technology is there, I don't see it as a technology issue, I see it as a cultural issue, a change issue, a organizational issue, a resource issue, you got to find the talent that does that, you got to have the operational discipline to lead this stuff, and you have to go through that change. And we're seeing I think a lot of our customers struggle with that, and they want to learn how Accenture's done it, they want to learn how AWS has done it because obviously they've been very successful at it. >> And in terms of the cultural, the change management challenges that you're talking about, those are harder to overcome. So, do you have any best practices from your own experiences with it? >> We've obviously, Accenture's been in this game for a long time, whether it's innovation or whether we called it solution integration, whatever we called it, change was always a big part of that. So, a lot of those same change principles that we've used for twenty, thirty years still apply here. We see, you need very top down ownership and sponsorship, so from the very top down, whether that's the CEO, the CDO, the CIO, whoever it is they have to be 100% behind this, and have to be the cheerleaders. They have to be the people that're going to go get on stage, at re:Invent or other conferences and be that, this is how we're going, so you need that lead, and then you need very strong leadership underneath it that have gone through the journey before, this isn't the first time they've done it, they know where the potholes are in that road, they know what the signs are when they're going down the wrong way and how to get out of that. So you got to have those two key levels of experience. >> And to bring the others on-board. >> Absolutely, and they have to be the visionaries, they have to be the people guiding them through that, and you know, if you've got those people, if they're very strong-willed, very luminaries, those people will follow, and they'll follow them through that journey. And then they also got to go sell that to the rest of the organization, 'cause it's a change for the rest of the organization, the business is now much more engaged in that process, they're not just sending the requirements over the fence, they're very much engaged, they've got to understand and go through that agile transformation and understand when they're getting capabilities, what those capabilities are, so they need to go through that new operational paradigm that we're running in. >> So, finally, we're talking about innovation and then in particular AWS and Accenture, almost as the use cases here, how would you describe the innovation engine at AWS, Accenture in terms of the report that you've just published? It's obviously, I mean AWS is the biggest part of Amazon, I mean the high-growth engine of Amazon, and obviously a huge growth engine for Accenture too, so how would you characterize it Mike? >> Well, I think if we look at the three factors I was talking about before, being outcome led, being disruption minded, and being change orientated, then the relationship between Accenture and AWS really exhibits all of those three things, so in terms of being outcome led, Accenture has always been an organization that's laser focused on delivering results, delivering high performance, delivering value for our clients, and so is AWS. In terms of being disruption minded, we're innovating on the Cloud, using AWS to bring genuinely new and groundbreaking products and services to our clients. One of my favorite ones of those that we worked on in the UK, is our partnership with AWS and Age UK, which is a charity that helps the elderly and we're developing products and services for the elderly that helps them feel more connected to their family, and that's really opening up a brand new market. And then finally in terms of being change orientated, well, it's a relationship that really personifies being network powered and bringing the power there to multiple organizations that we can develop great products and services for our clients. >> Great, well Mike, Chris, thank you both so much for coming on theCUBE, it was a really fun conversation. >> Thanks for having us. >> Thanks. >> I'm Rebecca Knight, we will have more of theCUBE's live coverage of the AWS executive summit coming up in just a little bit.

Published Date : Nov 28 2018

SUMMARY :

Brought to you by Accenture. and you for returning to theCUBE. can't find the innovation, and how they're failing at this. Based on the research that we did, and they actually expected to continue to do that and just talk a little bit about the relationship so the question is how do you get them out of that And the question, when companies would ask you and then how do you continue to run that disruptive technologies that have the potential to and they're not just jumping from one problem to then next. to lead this stuff, and you have to go through that change. And in terms of the cultural, the change management and then you need very strong leadership underneath it And to bring the others and they have to be the visionaries, for the elderly that helps them feel more connected Great, well Mike, Chris, thank you both so much live coverage of the AWS executive summit

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
AWSORGANIZATION

0.99+

Rebecca KnightPERSON

0.99+

Mike MoorePERSON

0.99+

Chris WegmannPERSON

0.99+

AccentureORGANIZATION

0.99+

AmazonORGANIZATION

0.99+

ChrisPERSON

0.99+

MikePERSON

0.99+

UKLOCATION

0.99+

27%QUANTITY

0.99+

twentyQUANTITY

0.99+

14%QUANTITY

0.99+

100%QUANTITY

0.99+

3.2 trillion dollarsQUANTITY

0.99+

840 executivesQUANTITY

0.99+

two guestsQUANTITY

0.99+

Discover Where the Value's Hiding: How to Unlock the Value of Your Innovation InvestmentsTITLE

0.99+

Las VegasLOCATION

0.99+

three thingsQUANTITY

0.99+

three factorsQUANTITY

0.98+

over twelve yearsQUANTITY

0.98+

bothQUANTITY

0.98+

one partQUANTITY

0.98+

VenetianLOCATION

0.98+

OneQUANTITY

0.98+

two key levelsQUANTITY

0.98+

five yearQUANTITY

0.98+

one podQUANTITY

0.97+

one problemQUANTITY

0.97+

oneQUANTITY

0.97+

firstQUANTITY

0.97+

five yearsQUANTITY

0.96+

firstlyQUANTITY

0.94+

secondlyQUANTITY

0.94+

Age UKORGANIZATION

0.94+

about 14%QUANTITY

0.93+

S3TITLE

0.9+

thirty yearsQUANTITY

0.9+

first timeQUANTITY

0.89+

CloudTITLE

0.89+

theCUBEORGANIZATION

0.87+

A couple years agoDATE

0.87+

fourQUANTITY

0.85+

AWS Executive Summit 2018EVENT

0.8+

InventEVENT

0.73+

aroundQUANTITY

0.73+

Accenture ResearchORGANIZATION

0.72+

every dayQUANTITY

0.68+

past five yearsDATE

0.67+

Accenture executiveEVENT

0.65+

DevOpsTITLE

0.6+

Ryan O’Connor, Splunk & Jon Moore, UConn | Splunk .conf18


 

you live from Orlando Florida it's the cube coverage conf 18 got to you by Splunk welcome back to comp 2018 this is the cube the leader in live tech coverage my name is Dave Volante I'm here with my co-host Stu minimun we're gonna start the day we're going to talk to some customers we love that John Morris here is the MIS program director at UConn the Huskies welcome to the cube good to see you and he's joined by Ryan O'Connor who's the senior advisory engineer at Splunk he's got the cool hat on gents welcome to the cube great to have you thank you thank you for having us so kind of a cool setting this morning is the Stu's first conf and I said you know when you see this it's kind of crazy we're all shaking our phones we had the horse race this morning we won so that was kind of orange yeah team are and team orange as well that's great you're on Team Orange so we're in the media section and the median guys were like sitting on their hands but Stu and I were getting into it good job nice and easy so Jon let's start with you start always left to start with the customer perspective maybe you describe your role and we'll get into it sure so as you mentioned I'm the director of our undergrad program Mis management information systems business technology we're in the school of business under the operations and information management department the acronym OPI M okay cool and gesture Ryan tell us about your role explain the Hat absolutely yeah so I'm a member of an honorary member of the Splunk trust now I recently joined Splunk about a month ago back in August and yeah and outside of my full-time job working at Splunk I'm also an adjunct professor at the University of Connecticut and so I helped John in teaching and you know that's that's kind of my role and where our worlds sort of meet so John we were to when I were talking about the sort of evolution of Splunk the company that was just you know okay log file analysis kind of on-prem perpetual license model and it's really evolved and its permanent permeating throughout you know many organizations but maybe you could take us through sort of the early days and it was UConn for a while what what was life like before Splunk what prompted you to start playing around with Splunk and where have you taken it what's your journey look like so about three years ago we started looking at it through kind of an educational lens started to think of how could we tie it into the curriculum we started talking to a lot of the recruiters and companies that many of our students go into saying what skillsets are you looking for and Splunk was definitely one of those so academia takes a while to change the curriculum make that pendulum swing so it was how can we get this into students hands as quickly as possible and also make it applicable so we developed this initiative in our department called OPI M innovate which was all based around bringing emerging technology skills to students outside of the general curriculum we built an innovation space a research lab and really focused in bringing students in classes and incorporating it that way we started kind of slowly different parts of some early classes about three years ago different data analytics predictive analytics courses and then that really built into we did a few workshops with our innovate initiative which Ryan taught and then from there it kind of exploded we started doing projects and our latest one was with the Splunk mobile team okay you guys had some hard news around now well today right yeah maybe take us through that absolutely wanted sure yeah I'll take that so we we teach a course on IOT industrial IOT at the University of Connecticut and so we heard about the mobile projects and you know the basically they were doing a beta of the mobile and application so we we partnered with them this summer and they came in you know we have a Splunk Enterprise license through Splunk for good so we're able to actually ingest Splunk data and so as part of that course we can ingest IOT data and use Splunk mobile to visualize it all right right right maybe you could explain to our audience that might not know spun for good absolutely yeah so spun for good is a great initiative they offer a Splunk pledge license they call it to higher education institutions and research initiatives so we're able to have a 10 gig license for free that we can you know run our own Splunk enterprise we can have students actually get hands-on experience with it and in addition to that they also get free training so they can take Splunk fundamentals one and two and actually come out of school with hands-on experience and certifications when they go into the job market that's John name you know we talk so much about them the important role of data and you know that the tools change a lot you know when we talk about kind of the next generation of jobs you're right at that intersection maybe you can give you know what what are what are the students what are they looking for what are the people that are looking for them hoping that they come out of school with you know yeah it's it's um you have two different types of students I would say those that know what they're looking for and those that don't that I really have the curiosity they want to learn and so we we try to build this initiative around both those that maybe they're afraid of the technology and the skills so how do we bring them in how do we make a very immersive environment kind of have that aha moment quickly so we have a series of services around that we have what's called tech kits the students come in they're able to do something applicable right away and it sparks an interest and then we also kind of developed another path for those that were more interested in doing projects or they had that higher level skill set but we also wanted to cultivate an environment where they could learn more so a lot of it is being able to scaffold the learning environment based off of the different student coming in so it's interesting my son's a junior in college at GW and he's very excited he's playing around with date he says I'm learning are I'm learning Tablo I'm like great what about Splunk and he said what's that yes so yeah then though it's a little off-center from some of the more traditional visualization tools for example so it's it's interesting and impressive that you guys sort of identified that need and actually brought it to two students how did that all how what was in an epiphany or was that demand from the students how'd that come about it was a combination of a lot of things you know we were lucky Ryan and I have known each other for a long time as the director of the program trying to figure out what classes we should bring in how to build out the curriculum and we have our core classes but then we have the liberty to build out special topics things that we think are irrelevant up-and-coming we can try it out once if it's good maybe teach it a few more times maybe it becomes a permanent class and that's kind of where we were able to pull Ryan in and he had been doing consulting for Splunk for a number of years I said I think you know this is our important skill set is it something that you could help bring to the students sure yeah yeah I mean one of the big courses we looked at was a data analytics course and we were already teaching with a separate piece of software not gonna name names but essentially I looked at it one for one like what key benefits does this piece of software have you know what are the students trying to get out of it and then just compared to one for one to Splunk like could Splunk actually give them the same learning components and all that and it could and and with this one for swum for a good license and all that stuff we could give them the hands-on experience and augment our teaching with that free training so and they come out of school they have something tangible they can say you know I have this and so that would kind of snowball once that course worked then we could integrate it into multiple other courses so you were able to essentially replicate the value to the students of the legacy software and but also have a modern platform exactly exactly yep yeah you know that and that was a what was like a Doug was talking about making jokes about MDM and codifying business processes and yeah it's a little bit more of an antiquated piece of software essentially you know and I mean it was nice it did a great job but there wasn't when we were talking to recruiters and stuff it wasn't a piece of software that recruiters were actually looking at so we said we were hearing Splunk over and over again so why not just bring it into the classroom and give them that so in the keynote this morning started to give a vision I believe they call it Splunk next and mobile things like augmented reality are fitting in you know how do you look at things like this what what how's the mobile going to impact you especially I would think yeah so when we kind of came up with our initiative we identified five tracks that both skill sets we believe the students needed and that and companies were kind of looking for a lot of that was our students would go into internships and say hey you know the the set skills that were learning you know they're asking us to do all this other work in AWS and drones and VR so as again it's part of this it was identifying let's start small five tracks so we started with 3d printing virtual reality microcontrollers IOT and then analytics kind of tying that all together so we had already been building an environment to try and incorporate that and when we kind of started working with the spunk mobile team there's all these different components we wanted to not only tie into the class but tied into the larger initiative so the goal of the class is not to just get these students the skills interesting interested in it but to spread that awareness the Augmented part is just kind of another feature was the next piece that we're looking in to build activities and it just had this great synergy of coming in at the right time saying hey look at this sensor that we built and now you can look at data in an AR it's a really powerful thing to most people so yeah they showed that screenshot of AR during the keynote and one of the challenges that we have at the farm so we're teaching that this is the latest course that we're talking about on an industrial IOT one of the challenges we have at that farm is we don't have a desk we don't have a laptop but we do have a phone in our pocket and we have we can put a QR code or NFC tag anywhere inside that facility so we can actually have we have students go around and you know they can put an iPhone upto a sensor or scan a QR code and see actual live real-time data of what those sensors are doing which is it's an invaluable tool inside the classroom and in an environment like that for sure so it's interesting able to do things we never would have been able to do before I want to ask you about come back to mobile yeah as you you just saying it was a something that people have wanted for a long time it took a while yeah presumably it's not trivial to take all this data and present it in a format and mobile that's simple number one and number two is a lot of spunky users are you know they're at the command center right and they're on the grid yep so maybe that worked to your advantage a little bit and that you know you look at how quickly mobile apps become obsolete hmm so is that why it took so long because it was so complicated and you had a user profile that was largely stationary yeah and how is that change yes honestly I'm not sure in the full history of the mobile app I know there previously was a new mobile app and I are there was an old mobile app and this new one though is you use it the new one yes oh so when we're talking about augmented reality that might be we may not been clear on that augmented reality is actually part of its features and then in addition we have the Apple TV app is in our lab we have a dashboard displayed on a monitor so not only can we teach this class and have students setting up sensors and all this but we can live display it for everyone to come in and look at all the time and we know that it's a secure connection to our back-end people walk into the lab and the first thing I see is this live dashboard Splunk data from the Apple TV based off of project we've been working on what's that well that's a live feed from a farm five miles off campus giving us all these data points and it's just a talking point people are like wow how did you do that and you know it kind of goes from there yeah and the farm managers are actively looking at it too so they can see when the doors are open and closed to the facility you know the temperature gets too high all these metrics are actually used by the you know that was the important part to actually solve a business problem for them you know we we built a proof of concept for the class so the students could see it then their students are kind of replicating another final project in the class class is still ongoing but where they have to build out a sensor for for plants to so it's kind of the same type of sensor kit but it's they're more stationary plant systems and then they have to figure out how to take that data put it into Splunk and make sense of it so there's all these different components and you get for the students get the glam factor you can put it in a fishbowl have the Apple TV up there exactly and that's I mean part of it when we when we started to think about in ishutin you know it was recruitment you know how do we get students beyond that fear of technology especially kind of coming into a business school but it really went well beyond that we aligned it with the launch of our analytics minor which was open to anyone so now we're getting students from outside the school a bit liberal arts students creating very diverse teams and even in the class itself we have engineers business psychology student history student that are all looking to understand data and platforms to be able to make decisions so there's essentially one Splunk class today instead of a Splunk 101 there this semester there's there's a couple classes that are actually using Splunk inside the classroom and I mean depends on the semester how many we have going on that are actually there's three the semester I sorry I misspoke there we have a another professor as well who's also utilizing it so so yeah we have three three classes that are essentially relying on Splunk to teach different components or you know is it helped us understand is it part of almost exclusively part of the analytics you know curriculum or is it sort of permeate into other Mis and computer science or right now it's within our kind of Mis purview trying to you know build all their partners within the university and the classes they're not it's not solely on spunk spunk is a component of you the tool so it's like for example the particular industrial IOT course it is understanding microcontrollers understanding aquaponics and sustainability understanding how to look at data clean data and then using Splunk as a tool to help bring that all together yeah it's kind of the backbone you know love it and then and I mean in addition to I just wanted to mention that we've had students already go out into the field which is great and come back and tell us hey we went out to a job and we mentioned that we knew Splunk and we were you know a shoo-in for certain things once it goes up on their LinkedIn profile and start getting yeah I mean I again I would think it's right up there with I mean even even more so I mean everybody says and right and our day it was SPSS now it's our yep tableau obviously for the VIS everybody's kind of playing around but spunk is a very you know specific capability that not everybody has except every IT department on the planet exactly coming out of school you take a little bit deeper you either you find you find that out yeah cool well great work guys really thank you guys coming on the cube it was great to meet you I appreciate it incoming all right you're welcome all right keep it right - everybody stew and I will be right back after this this is day one of cough 18 from Splunk this is the cube [Music]

Published Date : Oct 2 2018

**Summary and Sentiment Analysis are not been shown because of improper transcript**

ENTITIES

EntityCategoryConfidence
Ryan O'ConnorPERSON

0.99+

RyanPERSON

0.99+

Dave VolantePERSON

0.99+

JohnPERSON

0.99+

Jon MoorePERSON

0.99+

John MorrisPERSON

0.99+

Ryan O’ConnorPERSON

0.99+

iPhoneCOMMERCIAL_ITEM

0.99+

five milesQUANTITY

0.99+

10 gigQUANTITY

0.99+

SplunkORGANIZATION

0.99+

two studentsQUANTITY

0.99+

StuPERSON

0.99+

Orlando FloridaLOCATION

0.99+

DougPERSON

0.99+

AWSORGANIZATION

0.99+

five tracksQUANTITY

0.99+

Apple TVCOMMERCIAL_ITEM

0.99+

University of ConnecticutORGANIZATION

0.98+

oneQUANTITY

0.98+

todayDATE

0.98+

JonPERSON

0.97+

first thingQUANTITY

0.97+

threeQUANTITY

0.97+

five tracksQUANTITY

0.97+

bothQUANTITY

0.96+

two different typesQUANTITY

0.92+

Stu minimunPERSON

0.92+

three three classesQUANTITY

0.91+

number twoQUANTITY

0.91+

LinkedInORGANIZATION

0.9+

University of ConnecticutORGANIZATION

0.9+

twoQUANTITY

0.9+

this summerDATE

0.9+

about three years agoDATE

0.89+

this morningDATE

0.89+

GWORGANIZATION

0.89+

AugustDATE

0.88+

both skill setsQUANTITY

0.88+

Team OrangeORGANIZATION

0.87+

UConn the HuskiesORGANIZATION

0.87+

first confQUANTITY

0.86+

Splunk .conf18OTHER

0.86+

about three years agoDATE

0.85+

a month agoDATE

0.83+

SplunkPERSON

0.83+

AppleCOMMERCIAL_ITEM

0.81+

this morningDATE

0.8+

UConnLOCATION

0.8+

couple classesQUANTITY

0.79+

OPI M innovateORGANIZATION

0.77+

number oneQUANTITY

0.77+

SplunkTITLE

0.76+

UConnORGANIZATION

0.72+

TVTITLE

0.72+

Splunk 101TITLE

0.72+

one of the challengesQUANTITY

0.69+

few more timesQUANTITY

0.65+

stewPERSON

0.63+

comp 2018EVENT

0.63+

Patrick Osborne & Bob Moore, HPE | HPE Discover 2017 Madrid


 

(upbeat music) >> Announcer: Live from Madrid, Spain, it's theCUBE. Covering HPE Discover Madrid 2017. Brought to you by Hewlett Packard Enterprise. >> Hi everybody, welcome to Madrid, Spain. My name is Dave Vellante, and this is theCUBE, the leader in live tech coverage. We're here, this is day one of HPE Discover Madrid, the European version of the event that we cover in the summer, in the spring, in Las Vegas. I'm here with my cohost, Peter Burris, and Bob Moore is here, he's the director of server software and product security at HPE, and he's joined by good friend Patrick Osborne, who runs product marketing and management for the storage group at HPE. Gents, welcome to theCUBE. >> Good to be here, Dave, Peter. >> Yeah, very happy to be here. >> Dave: Always good to see you Did you bring your sax? >> Not this time, my friend. (laughing) >> We had a lot of fun. Where were we in New Orleans last year? >> Oh yeah, it was great. >> And you're an awesome sax player, we love it, big fan, and you're a bass player, we got more sax, more horns over there. So, I digress. >> Patrick: You need a CUBE band (laughing) >> We need a CUBE band. >> Bob, we talked this spring in Las Vegas, you guys made a big deal about the silicon-level security, you made some innovations there. Give us the update on why, again, that's so important, and how that's been received by customers. >> Yeah, well I think, answer the second part of the question first, it's really resonating pretty well with customers. Honestly, as we get to them, and we describe the level of cryptography we have, down right into the hardware, the firmware, down into our silicon, those customers that are concerned with security, and frankly, all customers are now, really does resonate with them pretty well. And the reason that it's important is because tying all of that security down into a bedrock foundation provides that ability to then leverage in or pull in other objects like storage and provide that security without any increase in latency but also the access and the shared access, being able to do that across multiple platforms, do it securely, and have that sharing capability like we all need to have to keep our IT infrastructure running. So it's really critically important, still, to this day, HPE is the only server manufacturer that's able to do that down into the silicon level that we're talking about here. So we're quite proud about that. And it's allowed us to claim the world's most secure industry standard servers and now, of course, today we're branching out with other technologies across our storage platform and including those into our security strategy. >> So, how does it, Patrick, relate to what you guys are doing on the storage side? >> Yeah, so I think it's a really good complementary solution and the fact that we can provide the silicon root of trust on the infrastructure level, and then on the storage side, we provide some similar capabilities at the infrastructure level, with encryption and other techniques that we have, and then, we assist customers in being able to, in a number of different cases, being able to take, for example, snapshots in backup, move those offsite, or even into the cloud, encrypt those, so you have essentially a silicon-rooted trust on the infrastructure side for your operating system and your firmware. And then you have essentially a golden image at a point in time of your data, which is a pretty valuable asset. So combine those two, we're able to help customers with a pretty aggressive RTO, and RPO, to be able to recover, if they'd been breached, or when they get breached, essentially. So we have some great examples here today in the show, of some customers that have used combinations of things like, the Gen10 servers, 3PAR, and StoreOnce, to achieve that level of recovery, in, not days, in, basically in hours, or even faster. And then we have some other technologies where you can set up a media break, essentially send all that data out to the cloud, and completely have a self-contained, encrypted copy of your data to recover from. So we're providing a number of different solutions, all the way up and down the stack for customers to be able to help to recover very quickly. >> So obviously security's been in the news lately, the huge Equifax breach, you go back to the spring, WannaCry, and ransomware, >> Patrick: Yep. >> So let's talk about ransomware specifically. How do you guys help a customer sort of address that. What's the, there's no silver bullet. You hear talk about air gaps, you guys are talking about >> Patrick: Right. >> silicon-level security, What's the prescription for customers? >> Well, I'm glad you asked that, because ransomware really is on every customer's mind these days. And it is, because it's gone up, ransomware is so lucrative and profitable, it's gone up by 15 full, 15 times in the last two years, to the point where it's cost companies five billion dollars in 2017, and by 2019, a company will be infected by ransomware every 14 seconds, so it's just really huge. And not only, and we don't encourage paying the ransom, but the ransom, if you paid it, would be expensive, but the downtime that you experience in recovering can be really expensive for companies as well. So this ability to recover from ransomware, or ransomware neutralizer, which is what we're talking about and announcing here today, is really new and a revolutionary way to recover in a systematic, orderly fashion, starting with the firmware that we talked about, that's anchored down in the silicon, so we recover that firmware, in case that ransomware malware virus has migrated. Because the hackers are getting so incredibly ingenious these days, that that malware can hide inside the firmware and will go everywhere, the tentacles will go everywhere, but we start the recovery with a firmware so you've got that firm foundation, routing out any remnants of the malware. And then on top of that, new today, we're announcing the fact that we can then recover the server settings that take days, sometimes weeks to set up initially, and that'll be recovered and restored automatically. Then we restore the operating system through an ISO site, along with the applications and then finally, we bring the data back, as Patrick was mentioning, we do that relatively quickly. We're demonstrating that here this week at Discover Madrid. And it really does allow customers to avoid having to pay the ransom, we want them to be able to recover, do it quickly and easily, without paying the ransom, and that's what we help. >> But you mention the word "trust," which is one of the most increasingly important words in the tech industry. We're in Madrid, GDPR is going to start moving in into a force in the first quarter of next year. >> Bob: May 2018. >> So, second quarter. And it's going to create some fair amount of attention, not just here in Europe, but on a global basis. I was talking to an expert who suggested that if the Equifax breach had occurred in Europe, under GDPR, it would not have been just embarrassment, it would have been about 60, 70 billion dollars worth of funds. >> Bob: Right. >> So we're talking not just about nice things to have, we're talking about, over the course of the next five years, you have to have this level of capability inside your infrastructure, or you will be out of business. >> I think it's true, absolutely. The GDPR, the penalties associated are so severe with that, up to 20 million dollars, or four percent of the annual revenue of the parent company, so it can just be massively impactful, financially impactful, hurtful to the companies. We're talking today, and this week, about GDPR, and how we help companies get ready for that, and you mention the Equifax breach, actually, we have, with our HP Gen9 and Gen10 solutions server networking and storage, applied the NIST 800-53 controls to that, and if they had applied those and used our solution, we believe that, after having looked at the Equifax breach, that would not have happened, had they followed the security controls that are in NIST. There's a lot of articles published about how NIST can help companies get ready for the GDPR in Europe, and so we've got the NIST controls, we went through all the time, energy, and funding to create the NIST security controls that will help a hundred percent of those applied to the ISO certification, ISO 27000-1, 27000-2, which then lends itself to being GDPR compliant. So, not only do we help customers through this great new technology that we have in the silicon-rooted trust, and that's helpful in getting ready for the GDPR, but also these NIST controls. >> But it's also that it's also that the well the conversations that we're having with CIOs is that GDPR, even though it's centered here in Europe, is likely to have an effect on global behavior. And so, one of the things that they're looking for is, they're looking for greater commonality in the base infrastructure about how it handles security, so that they can have greater commonality in how their people do things, so they can be better at targeting where the problem is, when the problem happens, and how to remediate the problem. Talk a little bit about how more commonality in the infrastructure, especially when you talk about storage, which is increasingly the value proposition, is how you share data is going to liberate resources elsewhere in the business to do new and better things faster. >> I think for, from the HPE perspective, you're not going to solve GDPR with any specific point product. Right? And that's not, it's not really our message to the market, that, you implement this and you're going to go satisfy those requirements. It's definitely part of a solution, but what we've been trying to do is, you see, we've got the silicon root of trust on the server side, and a number of security features, and we're talking about how we integrate that with the storage. We're starting to bring together more of a vertically oriented stack, that includes all those pieces and they work together. So instead of having a security or commonality layer at the server layer, at the networking layer, at the storage layer, thinking about it as a service that's more vertically oriented through the stack, where you're able to take a look at all aspects of the networking, what's going on with the firmware and the operating system and all the way down to essentially your secure and most important data. >> Peter: Securing the data >> Exactly. >> And not the device. >> Exactly. Exactly. And so for us, you see it in themes for for 3PAR, for SimpliVity on the hyperconverged area, and all the converged systems on the compute side, we're really providing integrated security and integrated data protection that is inherently secure with encyryption and a host of other techniques. So really, we're trying to provide it from the application level on down through the infrastructure, a set of capabilities within the products that work together to provide a little bit more of a secure infrastructure. >> One of the things we talked to Bill Philbin about on theCUBE recently was, and Patrick, I'm sure you've heard this, maybe you too as well, Bob, but boo-boos happen now, today, really fast. So they replicate very quickly. So how do you deal with fast boo-boo replication and sort of rolling back to the point where you can trust that data? >> There's a couple techniques and innovations that we brought within the storage realm, in terms of integrating that whole experience, so our big thing is, on the storage side, has been how can you provide an experience from all-flash on-prem out to the cloud, from a data perspective, and have all that integrated so we've got a number of things that we've actually announced here at Discover, in terms of 3PAR, all-flash, and Nimble, being able to federate that primary storage, with your secondary storage, on-prem, and then being able to have that experience go off-prem, into the cloud, so you do have a media break and a number of things. I think, from a solution perspective, integrating with some of our top-tier partners on the availability side, like Deem, for example, it gives you that really holistic application-level view, in the context of virtualization, it's something that helps do the very rich cataloging experience, and pieces. >> So I wonder if we could talk about a topic that's been discussed in our communities, which is the biggest threat within cyber is the weaponization of social media. You've sort of seen it with fake news, and Facebook, and I wonder if you guys are having similar conversations with customers and even ransomware. You look at WannaCry, it was sort of state-sponsored, and actually not a lot of money went back >> Patrick: Right. >> To the perpetrators, maybe it was a distraction to get other credentials. And you're seeing different signatures of Russians, very sophisticated hackers, they target pawns and make 'em feel like kings, and then grab their credentials, and then go in and get critical data. So when you think about things like the weaponization of social media, how can you guys help, sort of, detect what's going on, anomalous behavior, and address that? You've got silicon level >> Right. >> You've got the storage component. Do analytics come into play? Is there a whole house picture that you can help customers >> Yeah, I think that's the next level. It's almost an iterative process as soon as we've developed a protection, or the ability to detect a cybersecurity breach, is then the hackers try to outdo that, and so we're continually leapfrogging, and I think the next step is probably with machine learning. We're starting to actually deploy some of that at HPE, that artificial intelligence, and we have some of that now with our storage, our Nimble storage, as well as our Aruba Networking with the technologies that Aruba has with IntroSpect, can now look at the communication inside of a network and determine if there's nefarious behavior, and watch the behavior analytics, as well as the signatures that are going on inside the network, and actually, then communicates with ClearPass, and can proactively take some charge of that and rule out that user that's potentially a bad actor before any damage is really done. Same way on, with the storage side, >> Patrick: Yep. >> With the InfoSight that has great, in fact, so great of AI intelligence, that we're actually sharing as we look at ransomware viruses, they're looking at the signatures that those leave, and the trails that ransomware leaves behind, so that the storage systems can actually proactively route that out with machine learning and artificial intelligence. That's where we're headed with HPE. >> But it's, it's not only, it's not only finding ways to fix the boo-boos, it's acknowledging or recognizing that the boo-boos occurred. So how is this new capability facilitating, or increasing the speed with which problems are recognized? >> I think one of the important points that Bob made is that we are, we're announcing this week, on the storage side, some concepts around AI for the data center, and specifically, around our predictive analytics with InfoSight, and applying that from Nimble to the 3PAR systems, and then setting out a vision that is going to basically enable us to use that AI at the infrastructure layer, across other areas within the portfolio. Servers, networking, and for, at the speed at which this is moving, you can't solve this at the human level, right? So for us, to be able to whitelist and blacklist customers, based on our learning across a very large install base, if you think about the amount of compute nodes and the amount of storage that we sell as a infrastructure company, you can learn and be enabled to proactively help customers avoid those situations, that's something we're actually implementing today. >> And let me follow up with that, because it's a great lead-in or tie-back to GDPR that we were discussing. >> Yep. >> Because there's reporting requirements within 72 hours, right, >> Yep. >> That GDPR says that you've got to report that you had a breach, and how do you report that if you're not certain? Well, with our silicon-rooted trust and the Gen10 servers, we actually are monitoring all that server essential firmware every 24 hours. Now some of our competitors monitor, or check the firmware, one time when you boot up the server, and never again until you, maybe reboot the server, right? But we're doing, at HPE, that check every 24 hours, and that's an automated process. And so, you ask, how can be detected? Well, we can detect that, because you'll get an alert, coming back to the user of the server, that there's been a breach, and that can be reported. >> We got to go. I'm glad you mentioned automation, because that's a big factor, >> Bob: Yeah. >> Using false positives, because people just don't have time, they're drinking from the fire hose. Bob, Patrick, thanks very much for coming to theCUBE. >> Great, thanks so much for having us. >> Dave: Enjoy the week. >> Thank you so much, we appreciate it. >> All right, keep it right there everybody, we'll be back with our next guest. This is theCUBE. We're live, from HPE Discover in Madrid. We'll be right back. (upbeat music)

Published Date : Nov 28 2017

SUMMARY :

Brought to you by Hewlett Packard Enterprise. and Bob Moore is here, he's the director of server software Not this time, my friend. We had a lot of fun. and you're a bass player, we got more sax, and how that's been received by customers. and the shared access, being able to do that and the fact that we can provide the silicon root of trust How do you guys help a customer sort of address that. but the downtime that you experience of the most increasingly important words if the Equifax breach had occurred in Europe, you have to have this level of capability applied the NIST 800-53 controls to that, in the business to do new and better things faster. of the networking, what's going on with the firmware and all the converged systems on the compute side, One of the things we talked to Bill Philbin about in the context of virtualization, and I wonder if you guys are having similar conversations the weaponization of social media, You've got the storage component. or the ability to detect a cybersecurity breach, so that the storage systems can actually that the boo-boos occurred. and the amount of storage that we sell that we were discussing. that you had a breach, and how do you report that We got to go. Bob, Patrick, thanks very much for coming to theCUBE. we'll be back with our next guest.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
PatrickPERSON

0.99+

Peter BurrisPERSON

0.99+

Dave VellantePERSON

0.99+

EuropeLOCATION

0.99+

DavePERSON

0.99+

Patrick OsbornePERSON

0.99+

BobPERSON

0.99+

Bob MoorePERSON

0.99+

NISTORGANIZATION

0.99+

Bill PhilbinPERSON

0.99+

New OrleansLOCATION

0.99+

May 2018DATE

0.99+

MadridLOCATION

0.99+

2017DATE

0.99+

EquifaxORGANIZATION

0.99+

Las VegasLOCATION

0.99+

five billion dollarsQUANTITY

0.99+

four percentQUANTITY

0.99+

GDPRTITLE

0.99+

PeterPERSON

0.99+

this weekDATE

0.99+

ArubaORGANIZATION

0.99+

HPEORGANIZATION

0.99+

last yearDATE

0.99+

second quarterDATE

0.99+

2019DATE

0.99+

twoQUANTITY

0.99+

Madrid, SpainLOCATION

0.99+

todayDATE

0.98+

CUBEORGANIZATION

0.98+

15 timesQUANTITY

0.98+

Hewlett Packard EnterpriseORGANIZATION

0.98+

DiscoverORGANIZATION

0.98+

HPORGANIZATION

0.98+

Discover MadridORGANIZATION

0.98+

Gen10COMMERCIAL_ITEM

0.98+

up to 20 million dollarsQUANTITY

0.98+

one timeQUANTITY

0.98+

InfoSightORGANIZATION

0.98+

second partQUANTITY

0.98+

oneQUANTITY

0.97+

about 60, 70 billion dollarsQUANTITY

0.97+

FacebookORGANIZATION

0.97+

hundred percentQUANTITY

0.97+

Gen9COMMERCIAL_ITEM

0.96+

first quarter of next yearDATE

0.96+

OneQUANTITY

0.95+

Shaun Moore, Trueface.ai – When IoT Met AI: The Intelligence of Things - #theCUBE


 

>> Male Voice: From the Fairmont Hotel in the heart of Silicon Valley, it's the Cube covering when IoT Met AI: the Intelligence of Things brought to you by Western Digital. >> Hey welcome back here everybody. Jeff Frick with the Cube. We're in downtown San Jose at the Fairmont Hotel at a small event talking about data and really in IoT and the intersection of all those things and we're excited to have a little startup boutique here and one of the startups is great enough to take the time to sit down with us. This is Shaun Moore, he's the founder and CEO of the recently renamed Trueface.ai. Shaun, welcome. >> Thank you for having me. >> So you've got a really cool company, Trueface,ai. I looked at the site. You have facial recognition software so that's cool but what I think is really more interesting is you're really doing facial recognition as a service. >> Shaun: Yes. >> And you a have a freemium model so I can go in and connect to your API and basically integrate your facial recognition software into whatever application that I built. >> Right so we were thinking about what we wanted to do in terms of pricing structure. We wanted to focus on the developer community so we wanted tinkers, people that just want to play with technology to help us improve it and then go after the kind of bigger clients and so we'll be hosting hack-a-thons. We just actually had one this past week in San Francisco. We had great feedback. We're really trying to get a base of you know, almost outsource engineers to help us improve this technology and so we have to offer it to them for free so we can see what they build from there. >> Right but you don't have an opensource component yet so you haven't gone that route? >> Not quite yet, no. >> Okay. >> We're thinking about that though. >> Okay, and still really young company, angel-funded, haven't taken it the institutional route yet. >> Right, yeah, we've been around since 2013, end of 2013, early 2014, and we were building smart home hardware so we had built the technology around originally to be a smart doorbell that used facial recognition to customize the smart home. From the the trajectory went, we realized our clients were using it more for security purposes and access-control, not necessarily personalization. We made a quick pivot to a quick access control company and continue to learn about how people are using facial recognition in practice. Could it be a commercial technology that people are comfortable with? And throughout that thought process and going through and testing a bunch of other facial recognition technologies, we realized we could actually build our own platform and reach a larger audience with it and essentially be the core technology of a lot cooler and more innovative products. >> Right, and not get into the hardware business of doorbells >> Yeah, the hardware business is tough. >> That's a tough one. >> We were going to through manufacturing one and I'm glad we don't have to do that again. >> So what are some of the cool ways that people are using facial recognition that maybe we would never have thought about? >> Sure, so for face matching - The API is four components. It's face matching, face detection, face identification, and what we call spoof detection. Face matching is what it sounds like: one-to-one matching. Face detection is just detecting that someone is in the frame. The face identification is your one to act so your going into a database of people. And your spoof detection is if someone holds up a picture of me or of you and tries to get it, we'll identify that as an attack attempt and that's kind of where we differentiate our technology from most is not a lot of technology out there can do that piece and so we've packaged that all up into essentially the API for all these developers to use and some of the different ideas that people have come up with for us have been for banking logins, so for ATMs, you walk up to an ATM, you put your card in and set up a PIN so to prevent against fraud it actually scans your face and does a one-to-one match. For ship industries, so for things like cruise ships, when people get off and then come back on, instead of having them show ID, they use quick facial recognition scans. So we're seeing a lot of different ideas. One of the more funny ones is based off a company out in LA that is doing probation monitoring for drunk drivers and so we've built technology that's drunk or not drunk. >> Drunk or not drunk? >> Right so we can actually measure based on historical data if your face appears to be drunk and so you know, the possibilities are truly endless. And that's why I said we went after the development community first because >> Right right >> They're coming to use with these creative ideas. >> So it's interesting with this drunk or not drunk, of course, not to make fun of drunk driving, it's not a funny subject but obviously you've got an algorithm that determines anchor points on the eyes and the nose and certain biometric features but drunk, you're looking for much softer, more subtle clues, I would imagine because the fundamental structure of your face hasn't changed. >> Right so it's a lot of training data, so it's a lot of training data. >> Well a lot of training data, yeah. We don't want to go down that path. >> So a lot of research on our team's part. >> Well then the other thing too is the picture, is the fraud attempt. You must be looking around and shadowing and really more 3D-types of things to look over something as simple as holding up a 2D picture. >> Right so a lot of the technology that's tried to do it, that's tried to prevent against picture attacks has done so with extra hardware or extra sensors. We're actually all cloud-based right now so it isn't our software and that is what is special to us is that picture attack detection but we've a got a very very intelligent way to do it. Everything is powered by deep learning so we're constantly understanding the surroundings, the context, and making an analysis on that. >> So I'm curious from the data side, obviously you're pulling in kind of your anchor data and then for doing comparisons but then are you constantly updating that data? I mean, what's kind of your data flow look like in terms of your algorithms, are you constantly training them and adjusting those algorithms? How does that work kind of based on real time data versus your historical data? >> So we have to continue to innovate and that is how we do it, is we continue to train every single time someone shows up we train their profile once more and so if you decide to grow a beard, you're not going to grow a beard in one day, right? It's going to take you a week, two weeks. We're learning throughout those two weeks and so it's just a way for use to continue to get more data for us but also to ensure that we are identifying you properly. >> Right, do you use any external databases that you pull in as some type of you know, adding more detail or you know, kind of, other public sources or it's all your own? >> It's all our own. >> Okay and I'm curious too on the kind of opening up to the developer community, how has that kind of shaped your product roadmap and your product development? >> It - we've got to be very very conscious of not getting sidetracked because we get to hear cool ideas about what we could do but we've got our core focus of building this API for more people to use. So you know, we continue to reach out them and ask for help and you know if they find flaw or they find something cool that we want to continue to improve, we'll keep working on that so I think it's more of a - we're finding the developer community likes to really tinker and to play and because they're doing it out of passion, it helps us drive our product. >> Right right. Okay, so priorities for the rest of the year? What's at the top of the list? >> We'll be doing a bigger rollout with a couple of partners later on this year and those will be kind of our flagship partners. But again, like I said, we want to continue to support those development communities so we'll be hosting a lot of hack-a-thons and just really pushing the name out there. So we launched our product yesterday and that helped generate some awareness but we're going to have to continue to have to get the brand out there as it's now one day old. >> Right right, well good. Well it was Chui before and it's Trueface.ai so we look forward to keeping an eye on progress and congratulations on where you've gotten to date. >> Thank you very much. I appreciate that. >> Absolutely. Alrighty, Shaun Moore, it's Trueface.ai. Look at the cameras, smile, it will know it's you. You're watching Jeff Frick down at the Cube in downtown San Jose at the When IoT Met AI: The Intelligence of Things. Thanks for watching. We'll be right back after this short break.

Published Date : Jul 3 2017

SUMMARY :

in the heart of Silicon Valley, and really in IoT and the intersection of all those things I looked at the site. so I can go in and connect to your API and so we have to offer it to them for free angel-funded, haven't taken it the institutional route yet. the technology around originally to be a smart doorbell and I'm glad we don't have to do that again. and some of the different ideas and so you know, the possibilities are truly endless. anchor points on the eyes and the nose Right so it's a lot of training data, Well a lot of training data, yeah. the picture, is the fraud attempt. Right so a lot of the technology that's tried to do it, and so if you decide to grow a beard, and ask for help and you know Okay, so priorities for the rest of the year? and just really pushing the name out there. so we look forward to keeping an eye on progress Thank you very much. in downtown San Jose at the

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Jeff FrickPERSON

0.99+

Shaun MoorePERSON

0.99+

ShaunPERSON

0.99+

TruefaceORGANIZATION

0.99+

LALOCATION

0.99+

a weekQUANTITY

0.99+

San FranciscoLOCATION

0.99+

Silicon ValleyLOCATION

0.99+

two weeksQUANTITY

0.99+

one dayQUANTITY

0.99+

early 2014DATE

0.99+

Western DigitalORGANIZATION

0.99+

yesterdayDATE

0.99+

OneQUANTITY

0.98+

Trueface.aiORGANIZATION

0.97+

2013DATE

0.97+

end of 2013DATE

0.97+

The Intelligence of ThingsTITLE

0.96+

Fairmont HotelORGANIZATION

0.95+

San JoseLOCATION

0.93+

oneQUANTITY

0.93+

this yearDATE

0.88+

CubeCOMMERCIAL_ITEM

0.83+

four componentsQUANTITY

0.81+

2DQUANTITY

0.8+

firstQUANTITY

0.8+

past weekDATE

0.73+

When IoT MetTITLE

0.71+

of ThingsTITLE

0.69+

ChuiORGANIZATION

0.68+

single timeQUANTITY

0.67+

FairmontLOCATION

0.63+

ofTITLE

0.62+

Trueface.aiTITLE

0.61+

3DQUANTITY

0.6+

#theCUBEORGANIZATION

0.58+

HotelORGANIZATION

0.56+

CubeLOCATION

0.52+

Bob Moore & Jason Shropshire | HPE Discover 2017


 

>> Announcer: Live from Las Vegas it's theCUBE. Covering HPE Discover 2017 Brought to you by Hewlett-Packard Enterprise. >> Okay welcome back everyone, we're here live in Las Vegas it's theCUBE's exclusive coverage of HPE Discover 2017. HP Enterprises premier show, it's theCube on our third day I'm John Furrier, my co-host Dave Vallante. And our next guest Bob Moore returning back, Director of Server Software Private Security, he's got the hottest product, he's here on the show. We're going to go do a deeper dive. And Jason Shropshire SVP, CTO of InfusionPoints. Welcome back welcome to theCUBE. >> John, thank you Dave. You're the talk of the town here on the show with the simple messaging that is clean and tight. But outside of that, from a product stand point is really some of the security stuff you guys are doing in the Silicon. >> Bob: It is. >> In the server with Gen 10, pretty game changing, we've been curious, we want more information. >> Bob: Yeah. >> John: Give us some more update, what's the update? >> Glad to do that we're really proud of the announcement of course it's a big bold announcement this week. Claiming ourselves the world's most secure industry standard server. So that's big, that's huge, that's based on this new revolutionary security technology that we've been developing frankly over the past couple of years. So it's been two or three years in the making. A lot of hard work, we actually started to look at what type of security trends were happening, and what we might have to do to protect the servers. And we've come up with a game changing capability here. And it's one thing for us to say it internally at HPE, but we are so certain that we are in a great security position that we went external and found a security firm outside, that independently could look at it and do some compare, contrast testing with a competitive unit, so. >> So let's drill into that, actually I had some other questions on the industry in terms of what is going on at the chip level. Always on security is kind of a theme we've heard in the past from some of your competitors, but lets get into some of the competitive analysis. What do you guys see in the benchmarks. Jason, what do you guys discussing , because at the end of the day, claims are one thing. No offense to HP, you're kind of biased of course. We have folks on from the marketing team as well. Where's the proof in the pudding? >> Oh yeah, well one thing that we know for sure is that threat is real, right, with firmware. And it was great for us to analyze HP's new technology. We had on the bench two different beta units. >> John: You guys are the ones who did the benchmark. >> Jason: Yes the analysis. >> Independent. >> Independent yeah, FusionPoints is a cyber security firm independent from HP, they approached us to do the testing. >> John: Okay. >> We have head analysts that do this sort of thing all the time for our customers. >> So take us through what happened. >> Yeah so they procured for us three competitor servers. Sent them to our shop. We set them on a bench, all side by side. From what I can tell, no one's ever really done a test like that, you know, within the server industry. It was very exciting. There's been a lot of benchmarking done and performance, things like that. But from a black hat stand point, to actually look at the hardware, that hardware level testing, we couldn't find any examples of anyone doing it. I thought that alone was just evidence that HP was very serious about security and they knew what they had. So. >> John: You guys getting your answer, because you know the malware, and all the ransomware going on. People are going through elaborate lengths. >> Jason: Absolutely. >> Business model, organized teams, this is a really orchestrated security market now, with the black hat guys out there, really hacking away at every angle. >> Yeah well, you know we saw evidence that firmware issues and exploits are here to stay. The Vault7 release that happened recently showed us that there are exploit kits. Intel security released within a day a tool to let you do firmware validation. But to do that you have to take your server offline and build a gold image of what that firmware should look like. And then compare like a week later if you think you might have had a breech. You have to take your server down and compare it against that gold image. And who has the time to do that? But what we found in analyzing the Gen 10 server is HP has built this in, where this can be done in real time, while the server is running. No performance hit, no down time. It really is a revolutionary game changer I think for firmware security. >> So Bob, can you explain what IP you developed in Silicon that Intel, where do they leave off and you pick up? >> Sure, sure because Intel has some great security technology. And we actually support a lot of Intel technology. Their TXT, their Trusted Execution Technology as part of our Gen 10 servers. But what we've done at HPE is we've really taken it multiple steps further than that and we've developed. Because we're in a position where we develop our own custom HPE iLo-silicon chip, we're able to anchor what we actually do, imbed the cryptographic algorithms into that, and we anchor all the server's essential firmware. Right, think of it as anchoring it down into the bedrock. So there's really no way you can get in and breach that. And even if you did, instead of taking it offline like Jason was talking about here. We have the ability to not only provide that protection, but we would detect any type of malware or virus that gets in. And then frankly, we can recover that, almost immediately within a few minutes. In fact we're demonstrating that here during Discover this week. >> Is there anyplace online where people can get information, people watching, probably curious. >> Bob: Sure >> You can just give them the URL. >> Yeah just naturally it's our HPE.com/security. And that where there we've got some white papers there and other things there. >> So you say you can recover universally instantaneously. >> Bob: Yes. >> And you do that by what, fencing certain resources or... >> Yeah well what we've done, is we verify as the server is running, we're doing a runtime for more validation. So we're checking that firmware, making sure it's free of any malware, viruses, or compromised code. Completely perfect in original shape, like when we ship it from the factory. And we're storing in another location inside the server, a secure copy of that. Think of it as log box, inside the server, where it can't be found unless we need it to go into recovery mode. Then we draw from that, we've checked it daily, we've stored it there, we know it's authentic, and we can pull that back into recover in case something does happen to the server. >> And then asynchronously reclaim that wasted resource, clean it up and bring it back online. >> We can, we can recover the server, the firmware, toward the end of the year, we'll be recovering the operating system as well. Also we've got a really holistic way to get that recovered. When we talk to customers, a real big concern, and sometimes it's called bricking a server, you've got a bricked server, something that just won't operate. And it's important because 60% of small businesses that suffer a security breach, go out of business within six months, so it can be huge that lack of cashflow for customers. It's that denial of service, that disruption of business. Well we prevent all that, because we can not only protect the server, but then recover from a breach. >> So the anatomy of that breach, can we go through a common use case? So malware gets in, it gets into the server, it's hiding, typically you don't know about it. And in this new scenario with your Gen 10. You'll be able to identify that. >> Bob: That's right. >> To protect it, okay. And if I understand, the business impact of the problem you're solving is, not only are you sort of automating that protection, but you're also eliminating, a lot of wasted time, and downtime, and accelerating the response. >> Yeah I think that's what Jason was talking about earlier. Normally, if you're server gets infected, you completely take it off line and then do a manual recovery. And customers still have the choice to do that, but in our case we can recover immediately within a few minutes if something happens and gets a breach. >> Those types of exploits are typically in the data plane as well. With firmware you can't even really detect that you've been hacked. So down in the firmware virus scanners, those things don't work. So if you have a BIOS exploit, that is on either the iLO, that would be on the BMC the baseboard management controller. And undetectable by the operating system. >> That's crazy because it's a clean haven for hackers. I mean they know how to get in there, once you're in there, you're in. >> I don't know if a lot of customers realize this but the first thing when you turn a server on, there first thing that comes on is the firmware. In our case it's the iLO firmware. Over a million lines of the firmware code run before the operating system even starts. So that can be a cess pool for a trojan horse. And the research shows a virus, somewhat analogous to a human virus, it can stay there, hibernate in there for months, maybe even a year or more until it springs forth and opens up the passwords or bricks your servers, or does some nefarious thing. >> A cesspool from the customer standpoint, from a hacker is like going to the beach. Pina Coladas, you're clean you're down there having fun. >> Well what's your stats? The average time to detect an intrusion is over 200 days. >> Bob: That's right yeah. >> So essentially, you're detecting it instantaneously. >> We can, we run that runtime firmware validation on a regular basis, can be run as much as everyday, and so you'll know almost immediately. Which is really great because of a lot of regulatory bodies want to know if a breach has occurred. So this gives customers the ability to know somethings happened to them. >> Jason I want to challenge the claim here, because first of all I love the bravado. Yeah, we're bad ass, we're number one. >> We know that. >> What is the, how did the leaderboard come out? What was the results? Did HP come out number one? >> Oh absolutely. >> What's the lead, what's the gap, talk about the gap between HP and other servers. Did they send you the best servers? What was the benchmark, I'm sure you did your due diligence, take us to more of the results. >> Sure, sure, so yeah again we were comparing all the servers side by side. A test that had never been done from what I'd seen. When we looked at by feature, by feature, and started analyzing things. We sort of broke down and we saw we really had two different angles that we were looking at. The penetration test as aspect. What we were looking for vulnerabilities in the firmware, at the physical layer, at the network layer. They passed that with flying colors. We found a few minor issues that they jumped on and resolved for us in a matter of hours or days. And then the other aspect was a feature by feature comparison that we looked at. We looked at the silicon retruss obviously and we saw what the others were doing there. At best the other guys were using firmware to validate firmware. The obvious issue with that is if the firmware is compromised it's not trustworthy. >> Spoof, yeah, yeah. >> It's in no position to validate and verify. >> It's like Wallstreet policing itself. >> Jason: Yeah, can't trust that, They have a revolutionary intrusion detection switch on the Gen 10, that actually detects. If the lid is lifted on the server, anywhere from when it leaves the factory to the garage of the installation point, server doesn't have to be plugged in like the other guys. >> So if it's just a physical casing breach, >> Jason: Exactly. >> What happens there, flags the firmware, makes a note, does it shut it down? What happens? >> It makes a note, it puts it in the log entry so you can tell if that server has been tampered with in transit. >> So the insider threat potential should go away with that. >> Right, physical access, you don't have to worry about that because we can verify that server gets to the customer in it's unique, original, authentic condition. Because even though the power is off that is going to register and auto log an alert if that chassis has been opened. >> So I can't go to the vault of the Bellagio, like they did in Ocean's Eleven and put my little, and break into the server and you know go in there. >> Bob: Exactly. >> Okay, now back to the results. So the other guys, did they all pass or what. >> Well we did find some issues that we're looking at and doing some further testing on. >> So we're going to be polite and respect the confidentiality you have the ethos of security as you know sharing data is a huge deal, and it's for the integrity of the customer that you have to think about so props for that. For not digging into it, we'll wait for an official report if it does come out. Alright, so I got to ask you a personal question Jason. As someone who is in the front lines. You know every time there's a new kind of way, whether it's Bitcoin and Block chaining, you see a slew of underbelly hacking that goes mainstream that people are victimized. In this case firmware is now exposed, well known. >> Jason: Yeah. >> What as a professional, what gets you excited, and what gets you alarmed if anything about this? What new revelations have you walked away with from this? >> Well it's just how pervasive this issue is. You know the internet of things has exploded the number of IP devices that are out there. Most of them have, firmware issues, almost all of them have firmware issues. And we've just now seen bot nets being created by these devices. Cameras, IP cameras and things like that, that become attack platforms. So I just want, one of the things that impressed me very much about HP's approach here is that they're being a good corporate citizen by making a platform that's going to be implemented in tens of thousands IP addresses. Those systems I think will be much more secure. Again it can't become an attack platform for other people, for attackers to abuse. >> So the surface area, so your point about IOT. We always talk about the surface area of attack vectors. And that vector can then be minimized at the server level, because that's like the first mile in. >> Right we come and really refer to that as the attack vector or the attack surface. And so we narrow that attack surface way down. >> Can you even subjectively giVe us a sense as to how much of the problem this approach addresses? I mean is it 1%, 10%, 50% of the attacks that are out there? >> I think the important thing here is moving, shifting the bar. I've likened this, what HP is doing here to what Bill Gates did 15 years ago with the Microsoft memo. I mean that really revolutionized operating systems security within Microsoft and I think it had a ripple effect out into the industry as well. So I think HP is really pushing the bar in the same way but for firmware, instead of the operating system level that was the paradigm 15 years ago. >> And I think you'll find on our website we put some of the studies actually, and it's over half, I think it's 52% of the firms that responded have had a breach or malware virus in their firmware. So over half of those, and 17% had a catastrophic issue with that, it really is more pervasive. We've seen a lot of news about the data plane level, where thefts are taking place at the application level of the operati6ng system. And we've got to pay attention to the firmware layer now because that's like I said, a million lines of code in there running. And it could be an area where a trojan horse can sit, and we essentailly narrow that attack surface. We're also delivering with the Gen 10, the highest, the strongest set of security ciphers available in the world today. And that's a commercial national security algorithms. We're the only ones to support in our server, so we're proud of that. >> Well Bob and Jason thanks so much for sharing the insite. It's super exciting and relevant area, in the sense that it's super important for businesses and we're going to keep tracking this because the Wikibond team just put out new research around true private cloud, showing the on prim, cloudlike environments will be 260 billion dollar market. That's new research, that's groundbreaking, but points to the fact that the on pram server situation is going to be growing actually. >> Jason: For sure. >> So this is, and with cloud there's no perimeter so here you go, firmwares, potential exposure you solved that problem with good innovation. Thanks so much for sharing. >> Thanks you guys. >> Thank you. >> The inside Jason and Bob here on theCUBE talking about security servers, attack vectors, no perimeter, it's a bad world out there. Make sure you keep it protected of course. This is CUBE bringing you all the action here at HPE Discover. We'll be right back with more live coverage after this short break. I'm John Furrier, Dave Vellan6te. Be right back after this short break, stay with us.

Published Date : Jun 8 2017

SUMMARY :

Brought to you by Hewlett-Packard Enterprise. he's got the hottest product, he's here on the show. You're the talk of the town here on the show In the server with Gen 10, pretty game changing, been developing frankly over the past couple of years. We have folks on from the marketing team as well. We had on the bench two different beta units. independent from HP, they approached us to do the testing. all the time for our customers. at the hardware, that hardware level testing, the malware, and all the ransomware going on. orchestrated security market now, with the black hat guys But to do that you have to take your server offline We have the ability to not only provide that protection, Is there anyplace online where people can And that where there we've got Think of it as log box, inside the server, And then asynchronously reclaim that wasted resource, And it's important because 60% of small businesses that So the anatomy of that breach, of the problem you're solving is, not only are you And customers still have the choice to do that, So down in the firmware virus scanners, I mean they know how to get in there, but the first thing when you turn a server on, A cesspool from the customer standpoint, The average time to detect an intrusion is over 200 days. We can, we run that runtime firmware validation because first of all I love the bravado. What's the lead, what's the gap, talk about the gap We looked at the silicon retruss obviously of the installation point, It makes a note, it puts it in the log entry that is going to register and auto log and break into the server and you know go in there. So the other guys, did they all pass or what. Well we did find some issues that we're looking at and it's for the integrity of the customer You know the internet of things has exploded So the surface area, so your point about IOT. And so we narrow that attack surface way down. but for firmware, instead of the operating system level We're the only ones to support in our server, Well Bob and Jason thanks so much for sharing the insite. So this is, and with cloud there's no perimeter the action here at HPE Discover.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
JasonPERSON

0.99+

Dave VallantePERSON

0.99+

Bob MoorePERSON

0.99+

JohnPERSON

0.99+

BobPERSON

0.99+

HPORGANIZATION

0.99+

MicrosoftORGANIZATION

0.99+

John FurrierPERSON

0.99+

Dave Vellan6tePERSON

0.99+

Jason ShropshirePERSON

0.99+

60%QUANTITY

0.99+

Bill GatesPERSON

0.99+

DavePERSON

0.99+

52%QUANTITY

0.99+

twoQUANTITY

0.99+

17%QUANTITY

0.99+

Hewlett-Packard EnterpriseORGANIZATION

0.99+

50%QUANTITY

0.99+

three yearsQUANTITY

0.99+

1%QUANTITY

0.99+

a week laterDATE

0.99+

BMCORGANIZATION

0.99+

CUBEORGANIZATION

0.99+

firstQUANTITY

0.99+

10%QUANTITY

0.99+

FusionPointsORGANIZATION

0.98+

WikibondORGANIZATION

0.98+

first mileQUANTITY

0.98+

Las VegasLOCATION

0.98+

two different anglesQUANTITY

0.98+

over 200 daysQUANTITY

0.98+

first thingQUANTITY

0.98+

a yearQUANTITY

0.98+

oneQUANTITY

0.97+

two different beta unitsQUANTITY

0.97+

15 years agoDATE

0.97+

BellagioORGANIZATION

0.97+

InfusionPointsORGANIZATION

0.97+

Over a million linesQUANTITY

0.97+

this weekDATE

0.97+

six monthsQUANTITY

0.96+

over halfQUANTITY

0.96+

IntelORGANIZATION

0.96+

one thingQUANTITY

0.95+

HPE.com/securityOTHER

0.95+

260 billion dollarQUANTITY

0.95+

todayDATE

0.92+

Ocean's ElevenTITLE

0.92+

third dayQUANTITY

0.91+

tens of thousands IP addressesQUANTITY

0.9+

HPE DiscoverORGANIZATION

0.89+

HPE Discover 2017EVENT

0.89+

three competitor serversQUANTITY

0.87+

HPEORGANIZATION

0.82+

George Moore, Microsoft Azure Compute | Fortinet Accelerate 2017


 

>> Narrator: Live from Las Vegas, Nevada, it's theCUBE covering Accelerate 2017 brought to you by Fortinett. Now, here are your hosts, Lisa Martin and Peter Burris. >> Hi, welcome back to theCUBE. We are SiliconANGLE's flagship program where we go out to the events and extract the signal from the noise. Today, we are with Fortinet at their 2017 Accelerate event in Las Vegas. I'm your host, Lisa Martin, and I'm joined by my cohost, Peter Burris. We are fortunate right now to be joined by George Moore. George is the technology, excuse me, the CSO for Microsoft Azure who is a big technology alliance partner for Fortinet. George, welcome to theCUBE. >> Nice to have you, thank you. >> We are excited to have you on. You are, as you mentioned, the CSO at Azure, but you are the CSO for all of the Azure computer services. You are one of the founders of the Azure engineering team from back in 2006, and we were talking off-line. You hold over 40 patents in things like security deployment, interactive design, et cetera. You are a busy guy. >> I am, yes. (laughing) >> One of the things we have been talking about with our guests on the show today, and a great topic that was in the general session was about the value of data, and how do businesses transform to digital businesses. The value in that data has to be critical. I'd love to get your take on as businesses have to leverage that data to become more successful or to become successful as digital businesses, we know the security of the perimeter is not the only thing. It needs to be with the data. What is Azure doing to secure the cloud for your customers, and how do you help them mitigate or deal with the proliferation of mobile devices and IOT devices that they have that are connecting to their networks? >> Digital disruption is affecting everybody, and it is a huge thing that many companies are struggling to understand and to adopt to their business models, and to really leverage what digital can do for them, and certainly we are doing in the public cloud with Azure helps that significantly. As you mentioned, there is just a proliferation of devices, a proliferation of data, so how do you have defense in depth so you don't have perimeter-based security, but you actually have defense in depth at every level, and at its heart, it really falls down to how do you do encryption at rest, how do you secure the data encrypted? Who holds the keys for the data? What is the proliferation of the keys? How did the controls manage for that? Of course, of the data is encrypted, you really want to be able to do things upon it. You want to be able computer over it. You want to be able to queries, analytics, everything, so there's the question of how to securely exchange the keys? How do you make sure that the right virtual machines are running, the right computers running at the time to do the queries? That's the set of controls and security models and services that we provide in Azure that makes it super easy for customers to actually use that. >> Azure represent what's called the second big transformation for Microsoft where the first one might have been associated with Explorer, those amazing things that Microsoft did to transform itself in the 1990s and it seems to be going pretty well. How is security facilitating this transformation from a customer value proposition? >> Security is absolutely the number one question that every customer has whenever they start talking about the cloud, and so we take that very, very seriously. Microsoft spends over billion dollars a year on all of our security products all up. We have literally armies of people who do nothing every day but wake up and make sure that the product is secure, and that really boils down to two big pieces. One is how do we keep the platform secure from the security control that we have ourselves in the compliance ADA stations and everything to make sure that when customers bring their workloads to us, they are in fact kept secure. Second is a set of security controls that we provide the customers so they can actually secure their workloads, integrate their security models with whatever they're running on premise, and have the right security models, ADA stations, multifactor authentication, identity controls, et cetera for their own workloads. >> Security is very context specific. I'm not necessarily getting into a conversation about industry or whatnot, but in terms of the classifications of services that need to be provided, we were talking a little bit about how some of the services that you provide end being part of the architecture for other services within the Azure cloud. Talk a little bit about how you envision security over time evolving as a way of thinking about how different elements of the cloud are going to be integrated and come together in the role that security is going to play in making that possible and easy. >> You are absolutely right. Azure is composed of, right now, 80 some-odd different services and there's definitely a layering where for example, my components around the compute pieces are used by the higher order of services around HD insight and some of the analytic services and such, and so the security models we have in place internally for compute in turn are used by those higher order services, and the real value we can provide is having a common customer-facing security model for customers, so there is a common way by which they can access the control plane, do management operations upon these services, how they can access the endpoints of the services using a common identity model, a common security model, role-based access control, again, from a common perspective, logging, auditing, reporting, so all this has to be cohesive, correct, and unified so that customers aren't facing this tumultuous array of different services that speak different languages, so to speak. >> We are here at Fortinet Accelerate 2017. Tell us how long Microsoft Azure and Fortinet have been working together, and what are you most excited about with some of the announcements from Fortinet today? >> Microsoft and Fortinet partnership has been going on for quite some time. Specifically in Azure space we've been doing two different, two major thrusts around integration with the Azure Security Center which is a set of services that we have within Azure that provides turnkey access to many, many different vendors including Fortinet as one of our primary partners, and Fortinet also has all their products in Azure marketplace so that customers can readily in a turnkey manner use Fortinet next generation firewalls and such as virtual machines, incorporate those directly within their workloads, and have a very seamless billing model, a very seamless partnership model, a very seamless go-to-market strategy for how we jointly promote, jointly provide the services. >> One of the things that one of our guests was talking with us about today was really about it's an easy sell, if you will, at the C-level to sell the value of investing in the right infrastructure to secure environments. Looking at that in correlation to the fact that there's always historically been a challenge or concerned with security when it comes to enterprises moving workloads to the cloud, I'm curious about this easy-sell position that cyber security and the rise of attacks brings to seeing the adoption of more enterprise workloads. We are seeing numbers that are going to show, or predicting that north of 85% of enterprise workloads will be in the cloud by 2020. How much is Microsoft Azure seeing the fact that cyber security attacks are becoming more and more common, hitting some pretty big targets, affecting a lot of big names. How much are using that as an impetus to and maybe drive that adoption higher and higher from an enterprise perspective? >> Absolutely, I see that everyday. I give many, many talks to the C-level, to CSOs, CEOs, et cetera, and I can say in many industries like the banking industry, financial sector, 18 months ago banks did not have any interest in public cloud. Is just like, "Thank you, we have no interest in cloud," but recently there has been the dawning realization that Azure and the public cloud products are in fact, in many cases, more secure than what the banks and other financial industry sectors can actually provide themselves because we are providing huge amounts of investments from an ongoing basis that we can actually provide better security, better integrated security than what they can afford on premise, so as a result, we are seeing this now, literally, stampede of customers coming to us and saying, "Okay, I get it. "You can actually have a very, very "highly secure environment. "You can provide security controls "that can go well above and beyond "whatever I could do on premise, "and it's better integrated "than what I could ever pull together on premise." >> One of the reasons for that is because of the challenge of finding talent, and you guys can find a really talented person, bring them in, and that person can build security architectures for your cloud that then can be served, can be used by a lot of different customers, so what will be the role of or how will this need for talent in the future, what would be the role for how people engage your people, client's people engage your people to ensure that that people side and moves forward, and how do you keep scaling that is you scale the cloud? >> Certainly people are always the bottleneck in virtually every industry, and specifically within the computing space. The value that we are seeing from customers is that the people that they had previously on premise who were working to secure the base level common infrastructure are now freed because they don't have to do that work. They can do other interesting things at the application level and move their value added further up the stack which means I can innovate more rapidly, they can add more features more quickly, because they are not having to worry about the lower-level infrastructure pieces that are secured by Azure, so we are seeing the dawning realization that we are moving to this new golden age where there is higher degree of agility with respect to innovation happening at the application level, because remember, applications have to be, if you are having a compliant workload, if you are having PCI compliance within the credit card industry for example, you have to have the entire application and its infrastructure part of the compliance boundary, so that means when you are building that app, you have to give your auditors the complete stack for them to pass that. If you are only having to worry about this much as opposed to that much, then the amount of work that you can do, the amount of integration, the amount agility, the amount of innovation you can do at that level is many orders of magnitude higher, so you really see that the value that a lot of customers are having here is that their talented people can be put to use on more important higher order business-related problems as opposed to lower-level infrastructure level issues. >> Let's talk about that for second because one of the things that we see within our research is that the era of cloud as renting virtual machines is starting to transition as people start renting applications, or applications as services that they themselves can start putting together. Partly the reason why that's exciting is because it will liberate more developers. It brings more developers into the process of creating value in the cloud, but as they do that, they now have visibility, or they are going to be doing things that touch an enormous set of resources, so how do you make security easier to developers in Azure? >> The key is that we can do high degrees of integration at the low level between these very services. >> Peter: It goes back to that issue of a cascading of your stuff up into the other Azure services. >> Absolutely, I mean think about it, we sat on top a mountain of information. We have analytics and log files that know about virtually everything that's happening in the cloud, and we can have machine learning, we can have intelligence, we can have machine intelligence and such, that can extract signals from noise that would otherwise be impossible to discover from a single customer's perspective. If you have a low and slow attack by some sort of persistent individual, the fact that they are trying the slow and low attack means that we are able to pull that signal out and extract that information that would not be really physically possible, or economically possible for most companies to do on premise. >> Does this get embedded to some of the toolkits that we are going to use to build these next-generation cloud-based apps? >> It gets embedded into the toolkits, but it also gets embedded at the set of services like the Azure Security Center. A single pane of glass that's integrated with the products from Fortinet and others where the customer can go and have a single view across all their work was running within Azure and get comprehensive alerts and understanding about the analytics that we are able to pull out and provide to those customers. >> What's next? >> Security is an ever evolving field, and the bad guys are always trying new things, so the work that is really happening, a lot of the innovation that's happening is within the analytics, machine learning space around being able to pull more log files out, being able to refine the algorithms and basically being able to provide more AI to the logs themselves so that we can provide integrated alerts, like for example, if you have a kill chain of an individual coming in attacking one of your product, and then using that to the lateral mobility to other products, or other services within your product, we can pull this together in a common log. We can show to customers here's the sequence of this one individual that across three, or four, or five different services. You have top level disability, and we can give you then guidance to say if you insert separation of duties between these two individuals, then you could've broken that kill chain. We can do proactive guidance to customers to help them secure their own workloads even if they necessarily initially were not deployed in a necessarily most secure manner. >> George, we just have a couple of minutes left, but I'd like to get your perspective. You showed a tremendous amount of the accomplishments that Azure has made in public cloud and in security. What are the opportunities for partners to sell and resell Azure services? >> Absolutely. Microsoft has historically always worked incredibly well with partners. We have a very large partner ecosystem. >> Peter: It's the biggest. >> Is the biggest, exactly. Okay, I don't want to brag too much, yes. (laughing) >> That's what I'm here for, George. >> We see specifically in the security space that partners are increasingly, around 40% of their revenue increasingly is coming from cloud-based assets, cloud-based sales. We are setting up the necessary partner channels and partner models where we can make sure that the reseller channels and our partners are an integral part of our environment, and they can get the necessary revenue shares, and we can give them the leads on how the whole system evolves. Absolutely we believe that partners are first and foremost to our success, and we are making deep, deep, deep investments in the partner programs to make that possible. >> Well George, we wish you and Microsoft Azure continued success as well as your partnership with Fortinet. We thank you so much for taking the time to join us on theCUBE today. >> Thank you. >> And for my cohost, Peter Burris, I'm Lisa Martin. Stick around, we will be right back on theCUBE.

Published Date : Jan 11 2017

SUMMARY :

brought to you by Fortinett. and extract the signal from the noise. We are excited to have you on. I am, yes. One of the things we at the time to do the queries? and it seems to be going pretty well. and make sure that the product is secure, some of the services that you provide and the real value we can provide is and what are you most excited about that we have within Azure that are going to show, that Azure and the public is that the people that they because one of the things that we see The key is that we can do Peter: It goes back to that issue the fact that they are trying and provide to those customers. and we can give you then guidance to say amount of the accomplishments We have a very large partner ecosystem. Is the biggest, exactly. that the reseller to join us on theCUBE today. Stick around, we will be

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Peter BurrisPERSON

0.99+

George MoorePERSON

0.99+

Lisa MartinPERSON

0.99+

Peter BurrisPERSON

0.99+

GeorgePERSON

0.99+

FortinetORGANIZATION

0.99+

MicrosoftORGANIZATION

0.99+

PeterPERSON

0.99+

fourQUANTITY

0.99+

2006DATE

0.99+

threeQUANTITY

0.99+

Las VegasLOCATION

0.99+

oneQUANTITY

0.99+

2020DATE

0.99+

twoQUANTITY

0.99+

two individualsQUANTITY

0.99+

todayDATE

0.99+

SecondQUANTITY

0.99+

FortinettORGANIZATION

0.99+

OneQUANTITY

0.99+

two big piecesQUANTITY

0.99+

80QUANTITY

0.99+

secondQUANTITY

0.99+

1990sDATE

0.98+

18 months agoDATE

0.98+

AzureTITLE

0.98+

2017DATE

0.98+

around 40%QUANTITY

0.98+

Las Vegas, NevadaLOCATION

0.98+

AzureORGANIZATION

0.98+

over 40 patentsQUANTITY

0.98+

first oneQUANTITY

0.97+

SiliconANGLEORGANIZATION

0.97+

TodayDATE

0.97+

Azure Security CenterTITLE

0.95+

five different servicesQUANTITY

0.95+

firstQUANTITY

0.95+

singleQUANTITY

0.93+

single viewQUANTITY

0.91+

two major thrustsQUANTITY

0.9+

single paneQUANTITY

0.9+

over billion dollars a yearQUANTITY

0.86+

theCUBEORGANIZATION

0.85+

one of our guestsQUANTITY

0.81+

north of 85%QUANTITY

0.8+

ExplorerTITLE

0.8+

Accelerate 2017EVENT

0.79+

2017 AccelerateEVENT

0.77+

Microsoft AzureORGANIZATION

0.77+

one individualQUANTITY

0.77+

Geoff Moore | ServiceNow Knowledge 2014


 

but cute at servicenow knowledge 14 is sponsored by service now here are your hosts Dave vellante and Jeff Creek we're back hi everybody this is Dave vellante with Jeff Frick we're here live at knowledge 14 this is service now it's big customer event about 6,600 people up from about four thousand last year as we've been saying it's kind of tracking the growth of service now which has been pretty meteoric we heard from Mike scarpelli the CFO Frank's loot men they're really doubling down and it's exciting to see we're here in San Francisco where all the action is Jeffrey Moore is here author consultant pundits all-around smart guy cube alum greatly again thank you here so um so you're speaking at the CIO decisions i love the fact that they got so many CIOs here who real CIA a lot of times these conferences you get to you know the infrastructure guys but so what's the vibe like over there well you know it's kind of cool because if you think about service now and you go back to say 10 years this was all about how to make IT more productive around the ITIL model and you know and you'd use these automated services to do this stuff what's happening and Frank nailed it in the keynote he said look this infrastructure can be turned inside out and you can service enable the entire enterprise not just IT need a service enterprise you know HR you can decision a marketing eight-day any other shared service you can turn into a bunch of services that you can sort of call in and use service now as a platform so so the cios it was all about well that that's a different that's a different vision and so how do we map from the old way of sort of thinking about this is an internal productivity facility to this new way of saying no this is an enterprise enablement platform that's a big that's a big move a little bit like Salesforce going to force calm that same flavor yes sir frank's keynote was talking about how the CIO has to become you know more business savvy and of course we've heard that a lot for years and years and years but in fact a number of the folks that we've had on here at the service now are actually of that hill maybe they came from the business but most CIOs didn't necessarily come from the business they weren't P&L managers they weren't running sales do you see that changing yeah I think what happened in the 20th century was IT was sufficiently complex that frankly you had to be a technical person to do it it just it was just really hard and and yes you needed business consultants but the end of the day you needed ten percent business consultants and ninety percent technical people I think we've come a long way since then in the next generation of stuff is more around systems of engagement these things that that communicate with each other as opposed to systems of record and so the profile the winning IT strategy is migrating from help us run information about our business in the back office to help us actually re-engineer the dynamics of our business in the world in the present and that's like going from from data to behavior them it's a big we call it going from systems of record to systems of engagement it's a big show and is that that transition in your mind is very disruptive so what happens to all those purveyors and buyers of systems of engagement to they morph into obsessive record do they morph into systems of engagement do they just get blown away no it's interesting so so so first of all you're never going to get rid of your systems of record but at the margin we've probably extracted most of the lifetime value from that investment already so you need to maintain them and so the industry is consolidating a round of an anchor set of vendors who we trust to do that but the growth is going to be like if you look at systems of engagement we might have gotten five percent of the lifetime value there so at their margin if you have a dollar to spend people want to spend it in there so the challenge of being an incumbent is I'm not going to lose my base but man the growth is happening over here so the real challenge for that for the incumbent vendors is how can i participate in the new world and still maintain my relationships in the old world whereas the new guys are just coming and saying i don't i'll leave the old world of you guys i just want to play over here i can get your take on the structure of the IT business is you've observed as have i sort of these disruptions and these changes over time so obviously we went from being framed at pc you saw that the competitive line started to get more disintegrated yes i could use that that term a competition occurred on those I see that Intel's ascendancy in Microsoft and Oracle the best database companies the emc was the storage company and everything was sort of you know siloed and but leadership the leadership matrix has largely stayed intact I mean even IBM and okay HP said its ends up and down but it's largely stayed intact do you see the cloud changing that fundamentally changing the economic yes I think yes I think what happened is so in the client server error we did we built the stack what you're just described and every layer of the stack had a leader now I think since 2000 y2k that stack is being compressed meaning there are fewer and fewer vendors that are still in the in that in that leadership cadre and as we go to like cloud and computing the service you start saying well yeah i still have cisco in there i still have IBM in there but maybe i'm buying them as a service rather than as a set of equipment so you kind of can feel that world just I think compressing this look is the right word and where is the experimentation the opportunity to sort of find new places to go to it's very much in this world outboard of the IT data center where it it is about engaging engaging with your customer engaging with your employee engaging with your supply chain and using mobile things and social and you know analytics and cloud and all these new technologies the freedom to do that is is actually outboard of the of the old style I show you what you described as sort of an oligopoly and you've got these big whales and I've always asking you know guys who follow this it are we going to see somebody to disrupt that Amazon is the obvious you have to go to them a three billion dollar you know company growing at sixty percent a year with marginal economics of services that look like software yep but at the same time it's okay they've got this huge lead but it doesn't just make sense to me that it's sustainable I mean because hardware economics never will go to 0 so you would think that somebody was almost like the IBM early pc days remember IBM heavily yep we're domin to play that's kind of what kind of way amazon is now do you do you see that you see more competition from amazon why is it that they don't have direct competition so the less of the last book i wrote in the last the thing i've been working on most recently is around why is it so hard for the established incumbents to catch the next wave and the problem is so you look at why amazon's why is Amazon so unopposed in many of its initiatives well their business model in the economic model is completely divorced from the incumbent model and so you look at the incumbent in there going it's not that I don't see what the guys are doing I get what they're doing I just don't see how I can get my investors or my my whole infrastructure on to that new place in my example that was code at so you know Antonio Perez came from HP he knew what he was getting into he understood digital everybody at Kodak understood digital but they couldn't get to the other place so in this it would call it escape velocity how do you free yourself from your own paths and you you really do have to take a pretty dramatic approach to it and I think by the way i think i'm looking at microsoft in particular i think it I think Microsoft's going to give a very very big run at doing it and but I think that they're still more the exception than the rule you would wish that every one of those vendors would say look you know because every CIO here if any of those vendors came to him and said hey we're going to really try to play here will you help they'd say yes they don't want to change their relationships but but we get trapped in these business models and then you sort of grind and you grind and grind and after a while it's like well man you've just ground yourself to do I owe the classic label Christensen right individuals dilemma and it also makes a question is d said David's been the same characters kind of changing companies had not Jeff Bezos and Amazon come in with a completely different model to drive cloud with the other people who still has to transfer so they want to give credit to you want to bet it to be so so you want to give credit to Benioff by the way Benioff has been has been the kind of prow of a ship that brings in the illusory at work day brings in netsuite brings in service service now you know so the software-as-a-service thing is coming in at one level and remember if you were an on-premise guy it's very very how many years did did SI p commit an enormous amount of money to say we're going to have a great cloud offering and it just it's so hard so so it is so and then you're looking now at this sort of this next layer of collaborative IT and you're seeing box and octant hang all these cool thing and analytics and splunk consumer logic and all these companies going really I mean I you know I mean if your fear of my age is like okay you have a t-shirt they got love to you think I'm a teacher but but but the point is this free space and they're saying there's these cool problems to solve we're not encumbered by any of the legacy we're going to race ahead and so if you're a CIO well we spent most of our time with the cios today was ok i have established set of relationships here i'm not going to abandon them but at the margin i need them to help me think about the future I thought these really start sparkly new startups some i'm sure not going to exist next year but some are going to be the leaders so how play that game right now and and the pressure it's putting on the IT organization is the people I know that are good at this are not the people that are good at this and so how do I so we had to talk about talent and how do you manage and how do you create career paths and and is it or do you have a infrastructure officer vs an innovation office I'm it was all around that same prob right and then oh by the way there's Hadoop and mobile and big data and some of these other just open source innovations that are being just thrown all these guys played it is so from a technology plate from a technology play if you're technologists it's like bring it on right but I think the interesting thing is and most of my career aighty was about the business so you ran a business and you had IT systems which gave you information about your business what's happened in the last 15 years is that more and more sectors of the economy i T is becoming the business so you saw what happened the newspapers in facilitate with IT isn't about the newspaper business IT is displacing the newspaper business Google is displaced in the media business amazon is displacing retail you know mobile banking is displacing banking Airbnb uber I mean this so there we have the taxi guys are worried them it and so you start saying it isn't IT isn't about the business it's a digital world and and so all of us and that was it i think that was probably at the core of the discussion so which cio am i what do I have permission to be would do my colleagues get this you know am I competent to do it if they do I mean you've talked about this a lot and you've given a number of examples so so was nicked car just dead wrong in 2003 or just to a narrow it is to keep what he was saying I believe is that systems of record okay are dead I think at that time by the way it wasn't obvious there was anything else because it no serious i can remember to you know the whole venture community kind of abandoned itv4 about researcher ivan on 101 yeah it was and even in the end even in the physical infrastructure there's still the idea is the basis of the competitive and about the reporting system yeah and i think this issue about so i think there's still a few businesses we're really IT still is about the business and you know what you can kind of stick with whatever you were doing you'll be okay but if your business is under an existential threat meaning the new IT model eviscerates your business model which arguably you could say all those both those incumbent stack vendors you know I mean cloud does eviscerate the on-premise hardware data center business model which was the fundamental foundation of IT as I knew it for all my business career and now all this it's like holy how do i how do i how do I deal with it so we talk about Amazon as a potential you know new you know big whale Salesforce is obviously he's got it but they've been around since 99 there's going to be exception mm-hmm proves the rule I don't maybe a service now or a workday you know we'll see if this market is big enough it looks like it it might be what often happens is they these guys let's get gobbled up or Larry Ellison writes a check you say these to denigrate people who write write checks not code I think the biggest matter and they got such mass never was afraid to reinvent himself change the game change the dynamics of the industry so do you think we will see a another big player and where will that comfort will it be the SAS guys will it be the sum of the guys out of the hadoop world what I don't think it will so here here's what I don't think will work I don't think you can be an established incumbent vendor under this compression power and write a check and get yourself back I think what happens when you write a check if you just bring a hot property into cold molecules and it loses its exactly exactly so I don't think that will work I think if you want to be one of these incumbents and succeed over here you have to actually pull part of your own DNA and capability and we literally just jump and then I think you can acquire it to it to build a thing there but what Larry did was he consolidate he basically was the first guy to figure out Nick Carr is right I need to buy up all the properties yep and brother George ball and run a maintenance business which by the way came to read and Georgia computer associates had that play up in the eighties it's the same play with this is a different plan well I love what you say in emc is an interesting one to watch the way to chi is setting up this Federation with pivotal and VMware you know who see we'll see what happens with the quarry NC and I think VI 3 of 8 yeah I think that that is I mean VMware's one of the wonderful examples of think we're a company did not cause the hot molecules become the cold molecules the thing you wonder there though is it feels a little bit like a like a holding company if you will and so and by the way vmware is in a curious tweener right like they kind of were the most they made the old stack incredibly productive so in some sense they can feel like they're part of the old world right they're probably the newest kid on the old world but then you think well yeah but I want to look at their plan now they want to be into software-defined networks they wanted me to software-defined data centers they definitely want to play over here and what it's in this case so state partners Wow one could argue that that was it because of what big in the cloud virtualize computing absolutely absolutely so what're you working on these days that's exciting well so that I think this issue of working with management teams to say okay look this is a self-imposed exile that we're putting ourselves under you know we get it i'll call it the Kodak problem because I don't want to talk about anybody in high tech specifically at the moment but the point is every management team in the established vendor group puts itself on a self imposed discipline to make you know certain kinds of eps things certain kinds of growth you know whatever it is the expectations of their investors and you look at the situation you say guys that is a slope glide path to extinction we all know that and by the way off the record they know it's no it's not that that is this is not a failure of it like this is a failure of will so then the question is well so how do you negotiate a different path and part of it is you have to make you have you have to be able to tell a story of your investors part of it is you have to negotiate a different operating model inside the company and what they've done so far is they said well okay we've got our established businesses and we've got our innovative businesses and we know enough to keep them apart so that part is not the problem and they actually come up with cool stuff the the moment of truth is when can you scale any of these innovative businesses to compete to actually be a material part of your historical portfolio meaning in my terminology at least ten percent of your total revenue going to twenty percent in what happens in that journey is it a key point you have to draw on the resources of your established business and all the people that make their living and they're compensated on getting the next quarter in the next quarter go guys I can't make the quarter and do this and you've got it you've got to find a way to say you know if we don't figure out a way to pull some of that resource over here and play our next hand will invent everything in the world but we'll never get it to scale and so there's there's a bunch of stuff around business model planning and then Investor Relations organizational development it's all around saying and the key there's two key ideas idea number one is it's a go-to-market problem not an RD problem you do not have an innovation problem you can't get your thing to market and the second cool idea is you can only do one of the time and everybody says well but give have the risk to so high you got a three or four or five of these things maybe want to work it's like know the sacrifice is so great if you put two or more horses in the race people people won't even run so the other one that's a focus and don't it's ok not to make the quarter that's like on American looking like michael dunn right i mean that's obsessively what he's hoping to be able to do and i think one of the reasons you see people go private is to say i can't play this game bye-bye normal public company protocol i mean i like to but i can't get there from here now i actually don't think every company ought to have to go private to do this but i think they do have to change their playboys all right Jeff we have to leave it there hey great to see you thank you very much me feel smarter just hanging out with you right there buddy we'll be right back after this is the cube you

Published Date : Apr 30 2014

**Summary and Sentiment Analysis are not been shown because of improper transcript**

ENTITIES

EntityCategoryConfidence
FrankPERSON

0.99+

Mike scarpelliPERSON

0.99+

twenty percentQUANTITY

0.99+

Larry EllisonPERSON

0.99+

AmazonORGANIZATION

0.99+

2003DATE

0.99+

five percentQUANTITY

0.99+

San FranciscoLOCATION

0.99+

MicrosoftORGANIZATION

0.99+

IBMORGANIZATION

0.99+

Jeff BezosPERSON

0.99+

amazonORGANIZATION

0.99+

Jeff FrickPERSON

0.99+

Jeff CreekPERSON

0.99+

10 yearsQUANTITY

0.99+

Dave vellantePERSON

0.99+

LarryPERSON

0.99+

ninety percentQUANTITY

0.99+

DavidPERSON

0.99+

Jeffrey MoorePERSON

0.99+

Nick CarrPERSON

0.99+

ten percentQUANTITY

0.99+

twoQUANTITY

0.99+

VMwareORGANIZATION

0.99+

AirbnbORGANIZATION

0.99+

KodakORGANIZATION

0.99+

Antonio PerezPERSON

0.99+

frankPERSON

0.99+

microsoftORGANIZATION

0.99+

OracleORGANIZATION

0.99+

GoogleORGANIZATION

0.99+

eight-dayQUANTITY

0.99+

HPORGANIZATION

0.99+

Geoff MoorePERSON

0.99+

uberORGANIZATION

0.99+

fiveQUANTITY

0.99+

GeorgePERSON

0.99+

JeffPERSON

0.99+

next yearDATE

0.99+

three billion dollarQUANTITY

0.99+

20th centuryDATE

0.99+

2000DATE

0.99+

threeQUANTITY

0.98+

fourQUANTITY

0.98+

last yearDATE

0.98+

sixty percent a yearQUANTITY

0.98+

two key ideasQUANTITY

0.97+

oneQUANTITY

0.97+

knowledge 14ORGANIZATION

0.97+

about four thousandQUANTITY

0.96+

todayDATE

0.96+

bothQUANTITY

0.96+

about 6,600 peopleQUANTITY

0.96+

2014DATE

0.95+

first guyQUANTITY

0.95+

knowledge 14ORGANIZATION

0.94+

BenioffPERSON

0.94+

next quarterDATE

0.94+

franklyPERSON

0.92+

SASORGANIZATION

0.92+

next quarterDATE

0.92+

0QUANTITY

0.92+

IntelORGANIZATION

0.91+

GeorgiaLOCATION

0.91+

michaelPERSON

0.9+

last 15 yearsDATE

0.88+

ServiceNowORGANIZATION

0.84+

one levelQUANTITY

0.83+

8QUANTITY

0.82+

eightiesDATE

0.82+

SalesforceTITLE

0.8+

second coolQUANTITY

0.79+

ciscoORGANIZATION

0.78+

waveEVENT

0.75+

at least ten percentQUANTITY

0.74+

AmericanOTHER

0.74+

99DATE

0.74+

yearsQUANTITY

0.73+

vmwareORGANIZATION

0.72+

CFOPERSON

0.69+

ciosORGANIZATION

0.64+

CIAORGANIZATION

0.62+

Joseph Nelson, Roboflow | AWS Startup Showcase


 

(chill electronic music) >> Hello everyone, welcome to theCUBE's presentation of the AWS Startups Showcase, AI and machine learning, the top startups building generative AI on AWS. This is the season three, episode one of the ongoing series covering the exciting startups from the AWS ecosystem, talk about AI and machine learning. Can't believe it's three years and season one. I'm your host, John Furrier. Got a great guest today, we're joined by Joseph Nelson, the co-founder and CEO of Roboflow, doing some cutting edge stuff around computer vision and really at the front end of this massive wave coming around, large language models, computer vision. The next gen AI is here, and it's just getting started. We haven't even scratched a service. Thanks for joining us today. >> Thanks for having me. >> So you got to love the large language model, foundation models, really educating the mainstream world. ChatGPT has got everyone in the frenzy. This is educating the world around this next gen AI capabilities, enterprise, image and video data, all a big part of it. I mean the edge of the network, Mobile World Conference is happening right now, this month, and it's just ending up, it's just continue to explode. Video is huge. So take us through the company, do a quick explanation of what you guys are doing, when you were founded. Talk about what the company's mission is, and what's your North Star, why do you exist? >> Yeah, Roboflow exists to really kind of make the world programmable. I like to say make the world be read and write access. And our North Star is enabling developers, predominantly, to build that future. If you look around, anything that you see will have software related to it, and can kind of be turned into software. The limiting reactant though, is how to enable computers and machines to understand things as well as people can. And in a lot of ways, computer vision is that missing element that enables anything that you see to become software. So in the virtue of, if software is eating the world, computer vision kind of makes the aperture infinitely wide. It's something that I kind of like, the way I like to frame it. And the capabilities are there, the open source models are there, the amount of data is there, the computer capabilities are only improving annually, but there's a pretty big dearth of tooling, and an early but promising sign of the explosion of use cases, models, and data sets that companies, developers, hobbyists alike will need to bring these capabilities to bear. So Roboflow is in the game of building the community around that capability, building the use cases that allow developers and enterprises to use computer vision, and providing the tooling for companies and developers to be able to add computer vision, create better data sets, and deploy to production, quickly, easily, safely, invaluably. >> You know, Joseph, the word in production is actually real now. You're seeing a lot more people doing in production activities. That's a real hot one and usually it's slower, but it's gone faster, and I think that's going to be more the same. And I think the parallel between what we're seeing on the large language models coming into computer vision, and as you mentioned, video's data, right? I mean we're doing video right now, we're transcribing it into a transcript, linking up to your linguistics, times and the timestamp, I mean everything's data and that really kind of feeds. So this connection between what we're seeing, the large language and computer vision are coming together kind of cousins, brothers. I mean, how would you compare, how would you explain to someone, because everyone's like on this wave of watching people bang out their homework assignments, and you know, write some hacks on code with some of the open AI technologies, there is a corollary directly related to to the vision side. Can you explain? >> Yeah, the rise of large language models are showing what's possible, especially with text, and I think increasingly will get multimodal as the images and video become ingested. Though there's kind of this still core missing element of basically like understanding. So the rise of large language models kind of create this new area of generative AI, and generative AI in the context of computer vision is a lot of, you know, creating video and image assets and content. There's also this whole surface area to understanding what's already created. Basically digitizing physical, real world things. I mean the Metaverse can't be built if we don't know how to mirror or create or identify the objects that we want to interact with in our everyday lives. And where computer vision comes to play in, especially what we've seen at Roboflow is, you know, a little over a hundred thousand developers now have built with our tools. That's to the tune of a hundred million labeled open source images, over 10,000 pre-trained models. And they've kind of showcased to us all of the ways that computer vision is impacting and bringing the world to life. And these are things that, you know, even before large language models and generative AI, you had pretty impressive capabilities, and when you add the two together, it actually unlocks these kind of new capabilities. So for example, you know, one of our users actually powers the broadcast feeds at Wimbledon. So here we're talking about video, we're streaming, we're doing things live, we've got folks that are cropping and making sure we look good, and audio/visual all plugged in correctly. When you broadcast Wimbledon, you'll notice that the camera controllers need to do things like track the ball, which is moving at extremely high speeds and zoom crop, pan tilt, as well as determine if the ball bounced in or out. The very controversial but critical key to a lot of tennis matches. And a lot of that has been historically done with the trained, but fallible human eye and computer vision is, you know, well suited for this task to say, how do we track, pan, tilt, zoom, and see, track the tennis ball in real time, run at 30 plus frames per second, and do it all on the edge. And those are capabilities that, you know, were kind of like science fiction, maybe even a decade ago, and certainly five years ago. Now the interesting thing, is that with the advent of of generative AI, you can start to do things like create your own training data sets, or kind of create logic around once you have this visual input. And teams at Tesla have actually been speaking about, of course the autopilot team's focused on doing vision tasks, but they've combined large language models to add reasoning and logic. So given that you see, let's say the tennis ball, what do you want to do? And being able to combine the capabilities of what LLM's represent, which is really a lot of basically, core human reasoning and logic, with computer vision for the inputs of what's possible, creates these new capabilities, let alone multimodality, which I'm sure we'll talk more about. >> Yeah, and it's really, I mean it's almost intoxicating. It's amazing that this is so capable because the cloud scales here, you got the edge developing, you can decouple compute power, and let Moore's law and all the new silicone and the processors and the GPUs do their thing, and you got open source booming. You're kind of getting at this next segment I wanted to get into, which is the, how people should be thinking about these advances of the computer vision. So this is now a next wave, it's here. I mean I'd love to have that for baseball because I'm always like, "Oh, it should have been a strike." I'm sure that's going to be coming soon, but what is the computer vision capable of doing today? I guess that's my first question. You hit some of it, unpack that a little bit. What does general AI mean in computer vision? What's the new thing? Because there are old technology's been around, proprietary, bolted onto hardware, but hardware advances at a different pace, but now you got new capabilities, generative AI for vision, what does that mean? >> Yeah, so computer vision, you know, at its core is basically enabling machines, computers, to understand, process, and act on visual data as effective or more effective than people can. Traditionally this has been, you know, task types like classification, which you know, identifying if a given image belongs in a certain category of goods on maybe a retail site, is the shoes or is it clothing? Or object detection, which is, you know, creating bounding boxes, which allows you to do things like count how many things are present, or maybe measure the speed of something, or trigger an alert when something becomes visible in frame that wasn't previously visible in frame, or instant segmentation where you're creating pixel wise segmentations for both instance and semantic segmentation, where you often see these kind of beautiful visuals of the polygon surrounding objects that you see. Then you have key point detection, which is where you see, you know, athletes, and each of their joints are kind of outlined is another more traditional type problem in signal processing and computer vision. With generative AI, you kind of get a whole new class of problem types that are opened up. So in a lot of ways I think about generative AI in computer vision as some of the, you know, problems that you aimed to tackle, might still be better suited for one of the previous task types we were discussing. Some of those problem types may be better suited for using a generative technique, and some are problem types that just previously wouldn't have been possible absent generative AI. And so if you make that kind of Venn diagram in your head, you can think about, okay, you know, visual question answering is a task type where if I give you an image and I say, you know, "How many people are in this image?" We could either build an object detection model that might count all those people, or maybe a visual question answering system would sufficiently answer this type of problem. Let alone generative AI being able to create new training data for old systems. And that's something that we've seen be an increasingly prominent use case for our users, as much as things that we advise our customers and the community writ large to take advantage of. So ultimately those are kind of the traditional task types. I can give you some insight, maybe, into how I think about what's possible today, or five years or ten years as you sort go back. >> Yes, definitely. Let's get into that vision. >> So I kind of think about the types of use cases in terms of what's possible. If you just imagine a very simple bell curve, your normal distribution, for the longest time, the types of things that are in the center of that bell curve are identifying objects that are very common or common objects in context. Microsoft published the COCO Dataset in 2014 of common objects and contexts, of hundreds of thousands of images of chairs, forks, food, person, these sorts of things. And you know, the challenge of the day had always been, how do you identify just those 80 objects? So if we think about the bell curve, that'd be maybe the like dead center of the curve, where there's a lot of those objects present, and it's a very common thing that needs to be identified. But it's a very, very, very small sliver of the distribution. Now if you go out to the way long tail, let's go like deep into the tail of this imagined visual normal distribution, you're going to have a problem like one of our customers, Rivian, in tandem with AWS, is tackling, to do visual quality assurance and manufacturing in production processes. Now only Rivian knows what a Rivian is supposed to look like. Only they know the imagery of what their goods that are going to be produced are. And then between those long tails of proprietary data of highly specific things that need to be understood, in the center of the curve, you have a whole kind of messy middle, type of problems I like to say. The way I think about computer vision advancing, is it's basically you have larger and larger and more capable models that eat from the center out, right? So if you have a model that, you know, understands the 80 classes in COCO, well, pretty soon you have advances like Clip, which was trained on 400 million image text pairs, and has a greater understanding of a wider array of objects than just 80 classes in context. And over time you'll get more and more of these larger models that kind of eat outwards from that center of the distribution. And so the question becomes for companies, when can you rely on maybe a model that just already exists? How do you use your data to get what may be capable off the shelf, so to speak, into something that is usable for you? Or, if you're in those long tails and you have proprietary data, how do you take advantage of the greatest asset you have, which is observed visual information that you want to put to work for your customers, and you're kind of living in the long tails, and you need to adapt state of the art for your capabilities. So my mental model for like how computer vision advances is you have that bell curve, and you have increasingly powerful models that eat outward. And multimodality has a role to play in that, larger models have a role to play in that, more compute, more data generally has a role to play in that. But it will be a messy and I think long condition. >> Well, the thing I want to get, first of all, it's great, great mental model, I appreciate that, 'cause I think that makes a lot of sense. The question is, it seems now more than ever, with the scale and compute that's available, that not only can you eat out to the middle in your example, but there's other models you can integrate with. In the past there was siloed, static, almost bespoke. Now you're looking at larger models eating into the bell curve, as you said, but also integrating in with other stuff. So this seems to be part of that interaction. How does, first of all, is that really happening? Is that true? And then two, what does that mean for companies who want to take advantage of this? Because the old model was operational, you know? I have my cameras, they're watching stuff, whatever, and like now you're in this more of a, distributed computing, computer science mindset, not, you know, put the camera on the wall kind of- I'm oversimplifying, but you know what I'm saying. What's your take on that? >> Well, to the first point of, how are these advances happening? What I was kind of describing was, you know, almost uni-dimensional in that you have like, you're only thinking about vision, but the rise of generative techniques and multi-modality, like Clip is a multi-modal model, it has 400 million image text pairs. That will advance the generalizability at a faster rate than just treating everything as only vision. And that's kind of where LLMs and vision will intersect in a really nice and powerful way. Now in terms of like companies, how should they be thinking about taking advantage of these trends? The biggest thing that, and I think it's different, obviously, on the size of business, if you're an enterprise versus a startup. The biggest thing that I think if you're an enterprise, and you have an established scaled business model that is working for your customers, the question becomes, how do you take advantage of that established data moat, potentially, resource moats, and certainly, of course, establish a way of providing value to an end user. So for example, one of our customers, Walmart, has the advantage of one of the largest inventory and stock of any company in the world. And they also of course have substantial visual data, both from like their online catalogs, or understanding what's in stock or out of stock, or understanding, you know, the quality of things that they're going from the start of their supply chain to making it inside stores, for delivery of fulfillments. All these are are visual challenges. Now they already have a substantial trove of useful imagery to understand and teach and train large models to understand each of the individual SKUs and products that are in their stores. And so if I'm a Walmart, what I'm thinking is, how do I make sure that my petabytes of visual information is utilized in a way where I capture the proprietary benefit of the models that I can train to do tasks like, what item was this? Or maybe I'm going to create AmazonGo-like technology, or maybe I'm going to build like delivery robots, or I want to automatically know what's in and out of stock from visual input fees that I have across my in-store traffic. And that becomes the question and flavor of the day for enterprises. I've got this large amount of data, I've got an established way that I can provide more value to my own customers. How do I ensure I take advantage of the data advantage I'm already sitting on? If you're a startup, I think it's a pretty different question, and I'm happy to talk about. >> Yeah, what's startup angle on this? Because you know, they're going to want to take advantage. It's like cloud startups, cloud native startups, they were born in the cloud, they never had an IT department. So if you're a startup, is there a similar role here? And if I'm a computer vision startup, what's that mean? So can you share your your take on that, because there'll be a lot of people starting up from this. >> So the startup on the opposite advantage and disadvantage, right? Like a startup doesn't have an proven way of delivering repeatable value in the same way that a scaled enterprise does. But it does have the nimbleness to identify and take advantage of techniques that you can start from a blank slate. And I think the thing that startups need to be wary of in the generative AI enlarged language model, in multimodal world, is building what I like to call, kind of like sandcastles. A sandcastle is maybe a business model or a capability that's built on top of an assumption that is going to be pretty quickly wiped away by improving underlying model technology. So almost like if you imagine like the ocean, the waves are coming in, and they're going to wipe away your progress. You don't want to be in the position of building sandcastle business where, you don't want to bet on the fact that models aren't going to get good enough to solve the task type that you might be solving. In other words, don't take a screenshot of what's capable today. Assume that what's capable today is only going to continue to become possible. And so for a startup, what you can do, that like enterprises are quite comparatively less good at, is embedding these capabilities deeply within your products and delivering maybe a vertical based experience, where AI kind of exists in the background. >> Yeah. >> And we might not think of companies as, you know, even AI companies, it's just so embedded in the experience they provide, but that's like the vertical application example of taking AI and making it be immediately usable. Or, of course there's tons of picks and shovels businesses to be built like Roboflow, where you're enabling these enterprises to take advantage of something that they have, whether that's their data sets, their computes, or their intellect. >> Okay, so if I hear that right, by the way, I love, that's horizontally scalable, that's the large language models, go up and build them the apps, hence your developer focus. I'm sure that's probably the reason that the tsunami of developer's action. So you're saying picks and shovels tools, don't try to replicate the platform of what could be the platform. Oh, go to a VC, I'm going to build a platform. No, no, no, no, those are going to get wiped away by the large language models. Is there one large language model that will rule the world, or do you see many coming? >> Yeah, so to be clear, I think there will be useful platforms. I just think a lot of people think that they're building, let's say, you know, if we put this in the cloud context, you're building a specific type of EC2 instance. Well, it turns out that Amazon can offer that type of EC2 instance, and immediately distribute it to all of their customers. So you don't want to be in the position of just providing something that actually ends up looking like a feature, which in the context of AI, might be like a small incremental improvement on the model. If that's all you're doing, you're a sandcastle business. Now there's a lot of platform businesses that need to be built that enable businesses to get to value and do things like, how do I monitor my models? How do I create better models with my given data sets? How do I ensure that my models are doing what I want them to do? How do I find the right models to use? There's all these sorts of platform wide problems that certainly exist for businesses. I just think a lot of startups that I'm seeing right now are making the mistake of assuming the advances we're seeing are not going to accelerate or even get better. >> So if I'm a customer, if I'm a company, say I'm a startup or an enterprise, either one, same question. And I want to stand up, and I have developers working on stuff, I want to start standing up an environment to start doing stuff. Is that a service provider? Is that a managed service? Is that you guys? So how do you guys fit into your customers leaning in? Is it just for developers? Are you targeting with a specific like managed service? What's the product consumption? How do you talk to customers when they come to you? >> The thing that we do is enable, we give developers superpowers to build automated inventory tracking, self-checkout systems, identify if this image is malignant cancer or benign cancer, ensure that these products that I've produced are correct. Make sure that that the defect that might exist on this electric vehicle makes its way back for review. All these sorts of problems are immediately able to be solved and tackled. In terms of the managed services element, we have solutions as integrators that will often build on top of our tools, or we'll have companies that look to us for guidance, but ultimately the company is in control of developing and building and creating these capabilities in house. I really think the distinction is maybe less around managed service and tool, and more around ownership in the era of AI. So for example, if I'm using a managed service, in that managed service, part of their benefit is that they are learning across their customer sets, then it's a very different relationship than using a managed service where I'm developing some amount of proprietary advantages for my data sets. And I think that's a really important thing that companies are becoming attuned to, just the value of the data that they have. And so that's what we do. We tell companies that you have this proprietary, immense treasure trove of data, use that to your advantage, and think about us more like a set of tools that enable you to get value from that capability. You know, the HashiCorp's and GitLab's of the world have proven like what these businesses look like at scale. >> And you're targeting developers. When you go into a company, do you target developers with freemium, is there a paid service? Talk about the business model real quick. >> Sure, yeah. The tools are free to use and get started. When someone signs up for Roboflow, they may elect to make their work open source, in which case we're able to provide even more generous usage limits to basically move the computer vision community forward. If you elect to make your data private, you can use our hosted data set managing, data set training, model deployment, annotation tooling up to some limits. And then usually when someone validates that what they're doing gets them value, they purchase a subscription license to be able to scale up those capabilities. So like most developer centric products, it's free to get started, free to prove, free to poke around, develop what you think is possible. And then once you're getting to value, then we're able to capture the commercial upside in the value that's being provided. >> Love the business model. It's right in line with where the market is. There's kind of no standards bodies these days. The developers are the ones who are deciding kind of what the standards are by their adoption. I think making that easy for developers to get value as the model open sources continuing to grow, you can see more of that. Great perspective Joseph, thanks for sharing that. Put a plug in for the company. What are you guys doing right now? Where are you in your growth? What are you looking for? How should people engage? Give the quick commercial for the company. >> So as I mentioned, Roboflow is I think one of the largest, if not the largest collections of computer vision models and data sets that are open source, available on the web today, and have a private set of tools that over half the Fortune 100 now rely on those tools. So we're at the stage now where we know people want what we're working on, and we're continuing to drive that type of adoption. So companies that are looking to make better models, improve their data sets, train and deploy, often will get a lot of value from our tools, and certainly reach out to talk. I'm sure there's a lot of talented engineers that are tuning in too, we're aggressively hiring. So if you are interested in being a part of making the world programmable, and being at the ground floor of the company that's creating these capabilities to be writ large, we'd love to hear from you. >> Amazing, Joseph, thanks so much for coming on and being part of the AWS Startup Showcase. Man, if I was in my twenties, I'd be knocking on your door, because it's the hottest trend right now, it's super exciting. Generative AI is just the beginning of massive sea change. Congratulations on all your success, and we'll be following you guys. Thanks for spending the time, really appreciate it. >> Thanks for having me. >> Okay, this is season three, episode one of the ongoing series covering the exciting startups from the AWS ecosystem, talking about the hottest things in tech. I'm John Furrier, your host. Thanks for watching. (chill electronic music)

Published Date : Mar 9 2023

SUMMARY :

of the AWS Startups Showcase, of what you guys are doing, of the explosion of use and you know, write some hacks on code and do it all on the edge. and the processors and of the traditional task types. Let's get into that vision. the greatest asset you have, eating into the bell curve, as you said, and flavor of the day for enterprises. So can you share your your take on that, that you can start from a blank slate. but that's like the that right, by the way, How do I find the right models to use? Is that you guys? and GitLab's of the world Talk about the business model real quick. in the value that's being provided. The developers are the that over half the Fortune and being part of the of the ongoing series

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Joseph NelsonPERSON

0.99+

JosephPERSON

0.99+

WalmartORGANIZATION

0.99+

AmazonORGANIZATION

0.99+

John FurrierPERSON

0.99+

TeslaORGANIZATION

0.99+

400 millionQUANTITY

0.99+

2014DATE

0.99+

80 objectsQUANTITY

0.99+

AWSORGANIZATION

0.99+

three yearsQUANTITY

0.99+

ten yearsQUANTITY

0.99+

80 classesQUANTITY

0.99+

first questionQUANTITY

0.99+

five yearsQUANTITY

0.99+

MicrosoftORGANIZATION

0.99+

twoQUANTITY

0.99+

RoboflowORGANIZATION

0.99+

WimbledonEVENT

0.99+

todayDATE

0.98+

bothQUANTITY

0.98+

five years agoDATE

0.98+

GitLabORGANIZATION

0.98+

oneQUANTITY

0.98+

North StarORGANIZATION

0.98+

first pointQUANTITY

0.97+

eachQUANTITY

0.97+

over 10,000 pre-trained modelsQUANTITY

0.97+

a decade agoDATE

0.97+

RivianORGANIZATION

0.97+

Mobile World ConferenceEVENT

0.95+

over a hundred thousand developersQUANTITY

0.94+

EC2TITLE

0.94+

this monthDATE

0.93+

season oneQUANTITY

0.93+

30 plus frames per secondQUANTITY

0.93+

twentiesQUANTITY

0.93+

sandcastleORGANIZATION

0.9+

HashiCorpORGANIZATION

0.89+

theCUBEORGANIZATION

0.88+

hundreds of thousandsQUANTITY

0.87+

waveEVENT

0.87+

North StarORGANIZATION

0.86+

400 million image text pairsQUANTITY

0.78+

season threeQUANTITY

0.78+

episode oneQUANTITY

0.76+

AmazonGoORGANIZATION

0.76+

over halfQUANTITY

0.69+

a hundred millionQUANTITY

0.68+

Startup ShowcaseEVENT

0.66+

Fortune 100TITLE

0.66+

COCOTITLE

0.65+

RoboflowPERSON

0.6+

ChatGPTORGANIZATION

0.58+

DatasetTITLE

0.53+

MoorePERSON

0.5+

COCOORGANIZATION

0.39+

Luis Ceze & Anna Connolly, OctoML | AWS Startup Showcase S3 E1


 

(soft music) >> Hello, everyone. Welcome to theCUBE's presentation of the AWS Startup Showcase. AI and Machine Learning: Top Startups Building Foundational Model Infrastructure. This is season 3, episode 1 of the ongoing series covering the exciting stuff from the AWS ecosystem, talking about machine learning and AI. I'm your host, John Furrier and today we are excited to be joined by Luis Ceze who's the CEO of OctoML and Anna Connolly, VP of customer success and experience OctoML. Great to have you on again, Luis. Anna, thanks for coming on. Appreciate it. >> Thank you, John. It's great to be here. >> Thanks for having us. >> I love the company. We had a CUBE conversation about this. You guys are really addressing how to run foundational models faster for less. And this is like the key theme. But before we get into it, this is a hot trend, but let's explain what you guys do. Can you set the narrative of what the company's about, why it was founded, what's your North Star and your mission? >> Yeah, so John, our mission is to make AI sustainable and accessible for everyone. And what we offer customers is, you know, a way of taking their models into production in the most efficient way possible by automating the process of getting a model and optimizing it for a variety of hardware and making cost-effective. So better, faster, cheaper model deployment. >> You know, the big trend here is AI. Everyone's seeing the ChatGPT, kind of the shot heard around the world. The BingAI and this fiasco and the ongoing experimentation. People are into it, and I think the business impact is clear. I haven't seen this in all of my career in the technology industry of this kind of inflection point. And every senior leader I talk to is rethinking about how to rebuild their business with AI because now the large language models have come in, these foundational models are here, they can see value in their data. This is a 10 year journey in the big data world. Now it's impacting that, and everyone's rebuilding their company around this idea of being AI first 'cause they see ways to eliminate things and make things more efficient. And so now they telling 'em to go do it. And they're like, what do we do? So what do you guys think? Can you explain what is this wave of AI and why is it happening, why now, and what should people pay attention to? What does it mean to them? >> Yeah, I mean, it's pretty clear by now that AI can do amazing things that captures people's imaginations. And also now can show things that are really impactful in businesses, right? So what people have the opportunity to do today is to either train their own model that adds value to their business or find open models out there that can do very valuable things to them. So the next step really is how do you take that model and put it into production in a cost-effective way so that the business can actually get value out of it, right? >> Anna, what's your take? Because customers are there, you're there to make 'em successful, you got the new secret weapon for their business. >> Yeah, I think we just see a lot of companies struggle to get from a trained model into a model that is deployed in a cost-effective way that actually makes sense for the application they're building. I think that's a huge challenge we see today, kind of across the board across all of our customers. >> Well, I see this, everyone asking the same question. I have data, I want to get value out of it. I got to get these big models, I got to train it. What's it going to cost? So I think there's a reality of, okay, I got to do it. Then no one has any visibility on what it costs. When they get into it, this is going to break the bank. So I have to ask you guys, the cost of training these models is on everyone's mind. OctoML, your company's focus on the cost side of it as well as the efficiency side of running these models in production. Why are the production costs such a concern and where specifically are people looking at it and why did it get here? >> Yeah, so training costs get a lot of attention because normally a large number, but we shouldn't forget that it's a large, typically one time upfront cost that customers pay. But, you know, when the model is put into production, the cost grows directly with model usage and you actually want your model to be used because it's adding value, right? So, you know, the question that a customer faces is, you know, they have a model, they have a trained model and now what? So how much would it cost to run in production, right? And now without the big wave in generative AI, which rightfully is getting a lot of attention because of the amazing things that it can do. It's important for us to keep in mind that generative AI models like ChatGPT are huge, expensive energy hogs. They cost a lot to run, right? And given that model usage growth directly, model cost grows directly with usage, what you want to do is make sure that once you put a model into production, you have the best cost structure possible so that you're not surprised when it's gets popular, right? So let me give you an example. So if you have a model that costs, say 1 to $2 million to train, but then it costs about one to two cents per session to use it, right? So if you have a million active users, even if they use just once a day, it's 10 to $20,000 a day to operate that model in production. And that very, very quickly, you know, get beyond what you paid to train it. >> Anna, these aren't small numbers, and it's cost to train and cost to operate, it kind of reminds me of when the cloud came around and the data center versus cloud options. Like, wait a minute, one, it costs a ton of cash to deploy, and then running it. This is kind of a similar dynamic. What are you seeing? >> Yeah, absolutely. I think we are going to see increasingly the cost and production outpacing the costs and training by a lot. I mean, people talk about training costs now because that's what they're confronting now because people are so focused on getting models performant enough to even use in an application. And now that we have them and they're that capable, we're really going to start to see production costs go up a lot. >> Yeah, Luis, if you don't mind, I know this might be a little bit of a tangent, but, you know, training's super important. I get that. That's what people are doing now, but then there's the deployment side of production. Where do people get caught up and miss the boat or misconfigure? What's the gotcha? Where's the trip wire or so to speak? Where do people mess up on the cost side? What do they do? Is it they don't think about it, they tie it to proprietary hardware? What's the issue? >> Yeah, several things, right? So without getting really technical, which, you know, I might get into, you know, you have to understand relationship between performance, you know, both in terms of latency and throughput and cost, right? So reducing latency is important because you improve responsiveness of the model. But it's really important to keep in mind that it often leads diminishing returns. Below a certain latency, making it faster won't make a measurable difference in experience, but it's going to cost a lot more. So understanding that is important. Now, if you care more about throughputs, which is the time it takes for you to, you know, units per period of time, you care about time to solution, we should think about this throughput per dollar. And understand what you want is the highest throughput per dollar, which may come at the cost of higher latency, which you're not going to care about, right? So, and the reality here, John, is that, you know, humans and especially folks in this space want to have the latest and greatest hardware. And often they commit a lot of money to get access to them and have to commit upfront before they understand the needs that their models have, right? So common mistake here, one is not spending time to understand what you really need, and then two, over-committing and using more hardware than you actually need. And not giving yourself enough freedom to get your workload to move around to the more cost-effective choice, right? So this is just a metaphoric choice. And then another thing that's important here too is making a model run faster on the hardware directly translates to lower cost, right? So, but it takes a lot of engineers, you need to think of ways of producing very efficient versions of your model for the target hardware that you're going to use. >> Anna, what's the customer angle here? Because price performance has been around for a long time, people get that, but now latency and throughput, that's key because we're starting to see this in apps. I mean, there's an end user piece. I even seeing it on the infrastructure side where they're taking a heavy lifting away from operational costs. So you got, you know, application specific to the user and/or top of the stack, and then you got actually being used in operations where they want both. >> Yeah, absolutely. Maybe I can illustrate this with a quick story with the customer that we had recently been working with. So this customer is planning to run kind of a transformer based model for tech generation at super high scale on Nvidia T4 GPU, so kind of a commodity GPU. And the scale was so high that they would've been paying hundreds of thousands of dollars in cloud costs per year just to serve this model alone. You know, one of many models in their application stack. So we worked with this team to optimize our model and then benchmark across several possible targets. So that matching the hardware that Luis was just talking about, including the newer kind of Nvidia A10 GPUs. And what they found during this process was pretty interesting. First, the team was able to shave a quarter of their spend just by using better optimization techniques on the T4, the older hardware. But actually moving to a newer GPU would allow them to serve this model in a sub two milliseconds latency, so super fast, which was able to unlock an entirely new kind of user experience. So they were able to kind of change the value they're delivering in their application just because they were able to move to this new hardware easily. So they ultimately decided to plan their deployment on the more expensive A10 because of this, but because of the hardware specific optimizations that we helped them with, they managed to even, you know, bring costs down from what they had originally planned. And so if you extend this kind of example to everything that's happening with generative AI, I think the story we just talked about was super relevant, but the scale can be even higher, you know, it can be tenfold that. We were recently conducting kind of this internal study using GPT-J as a proxy to illustrate the experience of just a company trying to use one of these large language models with an example scenario of creating a chatbot to help job seekers prepare for interviews. So if you imagine kind of a conservative usage scenario where the model generates just 3000 words per user per day, which is, you know, pretty conservative for how people are interacting with these models. It costs 5 cents a session and if you're a company and your app goes viral, so from, you know, beginning of the year there's nobody, at the end of the year there's a million daily active active users in that year alone, going from zero to a million. You'll be spending about $6 million a year, which is pretty unmanageable. That's crazy, right? >> Yeah. >> For a company or a product that's just launching. So I think, you know, for us we see the real way to make these kind of advancements accessible and sustainable, as we said is to bring down cost to serve using these techniques. >> That's a great story and I think that illustrates this idea that deployment cost can vary from situation to situation, from model to model and that the efficiency is so strong with this new wave, it eliminates heavy lifting, creates more efficiency, automates intellect. I mean, this is the trend, this is radical, this is going to increase. So the cost could go from nominal to millions, literally, potentially. So, this is what customers are doing. Yeah, that's a great story. What makes sense on a financial, is there a cost of ownership? Is there a pattern for best practice for training? What do you guys advise cuz this is a lot of time and money involved in all potential, you know, good scenarios of upside. But you can get over your skis as they say, and be successful and be out of business if you don't manage it. I mean, that's what people are talking about, right? >> Yeah, absolutely. I think, you know, we see kind of three main vectors to reduce cost. I think one is make your deployment process easier overall, so that your engineering effort to even get your app running goes down. Two, would be get more from the compute you're already paying for, you're already paying, you know, for your instances in the cloud, but can you do more with that? And then three would be shop around for lower cost hardware to match your use case. So on the first one, I think making the deployment easier overall, there's a lot of manual work that goes into benchmarking, optimizing and packaging models for deployment. And because the performance of machine learning models can be really hardware dependent, you have to go through this process for each target you want to consider running your model on. And this is hard, you know, we see that every day. But for teams who want to incorporate some of these large language models into their applications, it might be desirable because licensing a model from a large vendor like OpenAI can leave you, you know, over provision, kind of paying for capabilities you don't need in your application or can lock you into them and you lose flexibility. So we have a customer whose team actually prepares models for deployment in a SaaS application that many of us use every day. And they told us recently that without kind of an automated benchmarking and experimentation platform, they were spending several days each to benchmark a single model on a single hardware type. So this is really, you know, manually intensive and then getting more from the compute you're already paying for. We do see customers who leave money on the table by running models that haven't been optimized specifically for the hardware target they're using, like Luis was mentioning. And for some teams they just don't have the time to go through an optimization process and for others they might lack kind of specialized expertise and this is something we can bring. And then on shopping around for different hardware types, we really see a huge variation in model performance across hardware, not just CPU vs. GPU, which is, you know, what people normally think of. But across CPU vendors themselves, high memory instances and across cloud providers even. So the best strategy here is for teams to really be able to, we say, look before you leap by running real world benchmarking and not just simulations or predictions to find the best software, hardware combination for their workload. >> Yeah. You guys sound like you have a very impressive customer base deploying large language models. Where would you categorize your current customer base? And as you look out, as you guys are growing, you have new customers coming in, take me through the progression. Take me through the profile of some of your customers you have now, size, are they hyperscalers, are they big app folks, are they kicking the tires? And then as people are out there scratching heads, I got to get in this game, what's their psychology like? Are they coming in with specific problems or do they have specific orientation point of view about what they want to do? Can you share some data around what you're seeing? >> Yeah, I think, you know, we have customers that kind of range across the spectrum of sophistication from teams that basically don't have MLOps expertise in their company at all. And so they're really looking for us to kind of give a full service, how should I do everything from, you know, optimization, find the hardware, prepare for deployment. And then we have teams that, you know, maybe already have their serving and hosting infrastructure up and ready and they already have models in production and they're really just looking to, you know, take the extra juice out of the hardware and just do really specific on that optimization piece. I think one place where we're doing a lot more work now is kind of in the developer tooling, you know, model selection space. And that's kind of an area that we're creating more tools for, particularly within the PyTorch ecosystem to bring kind of this power earlier in the development cycle so that as people are grabbing a model off the shelf, they can, you know, see how it might perform and use that to inform their development process. >> Luis, what's the big, I like this idea of picking the models because isn't that like going to the market and picking the best model for your data? It's like, you know, it's like, isn't there a certain approaches? What's your view on this? 'Cause this is where everyone, I think it's going to be a land rush for this and I want to get your thoughts. >> For sure, yeah. So, you know, I guess I'll start with saying the one main takeaway that we got from the GPT-J study is that, you know, having a different understanding of what your model's compute and memory requirements are, very quickly, early on helps with the much smarter AI model deployments, right? So, and in fact, you know, Anna just touched on this, but I want to, you know, make sure that it's clear that OctoML is putting that power into user's hands right now. So in partnership with AWS, we are launching this new PyTorch native profiler that allows you with a single, you know, one line, you know, code decorator allows you to see how your code runs on a variety of different hardware after accelerations. So it gives you very clear, you know, data on how you should think about your model deployments. And this ties back to choices of models. So like, if you have a set of choices that are equally good of models in terms of functionality and you want to understand after acceleration how are you going to deploy, how much they're going to cost or what are the options using a automated process of making a decision is really, really useful. And in fact, so I think these events can get early access to this by signing up for the Octopods, you know, this is exclusive group for insiders here, so you can go to OctoML.ai/pods to sign up. >> So that Octopod, is that a program? What is that, is that access to code? Is that a beta, what is that? Explain, take a minute and explain Octopod. >> I think the Octopod would be a group of people who is interested in experiencing this functionality. So it is the friends and users of OctoML that would be the Octopod. And then yes, after you sign up, we would provide you essentially the tool in code form for you to try out in your own. I mean, part of the benefit of this is that it happens in your own local environment and you're in control of everything kind of within the workflow that developers are already using to create and begin putting these models into their applications. So it would all be within your control. >> Got it. I think the big question I have for you is when do you, when does that one of your customers know they need to call you? What's their environment look like? What are they struggling with? What are the conversations they might be having on their side of the fence? If anyone's watching this, they're like, "Hey, you know what, I've got my team, we have a lot of data. Do we have our own language model or do I use someone else's?" There's a lot of this, I will say discovery going on around what to do, what path to take, what does that customer look like, if someone's listening, when do they know to call you guys, OctoML? >> Well, I mean the most obvious one is that you have a significant spend on AI/ML, come and talk to us, you know, putting AIML into production. So that's the clear one. In fact, just this morning I was talking to someone who is in life sciences space and is having, you know, 15 to $20 million a year cloud related to AI/ML deployment is a clear, it's a pretty clear match right there, right? So that's on the cost side. But I also want to emphasize something that Anna said earlier that, you know, the hardware and software complexity involved in putting model into production is really high. So we've been able to abstract that away, offering a clean automation flow enables one, to experiment early on, you know, how models would run and get them to production. And then two, once they are into production, gives you an automated flow to continuously updating your model and taking advantage of all this acceleration and ability to run the model on the right hardware. So anyways, let's say one then is cost, you know, you have significant cost and then two, you have an automation needs. And Anna please compliment that. >> Yeah, Anna you can please- >> Yeah, I think that's exactly right. Maybe the other time is when you are expecting a big scale up in serving your application, right? You're launching a new feature, you expect to get a lot of usage or, and you want to kind of anticipate maybe your CTO, your CIO, whoever pays your cloud bills is going to come after you, right? And so they want to know, you know, what's the return on putting this model essentially into my application stack? Am I going to, is the usage going to match what I'm paying for it? And then you can understand that. >> So you guys have a lot of the early adopters, they got big data teams, they're pushed in the production, they want to get a little QA, test the waters, understand, use your technology to figure it out. Is there any cases where people have gone into production, they have to pull it out? It's like the old lemon laws with your car, you buy a car and oh my god, it's not the way I wanted it. I mean, I can imagine the early people through the wall, so to speak, in the wave here are going to be bloody in the sense that they've gone in and tried stuff and get stuck with huge bills. Are you seeing that? Are people pulling stuff out of production and redeploying? Or I can imagine that if I had a bad deployment, I'd want to refactor that or actually replatform that. Do you see that too? >> Definitely after a sticker shock, yes, your customers will come and make sure that, you know, the sticker shock won't happen again. >> Yeah. >> But then there's another more thorough aspect here that I think we likely touched on, be worth elaborating a bit more is just how are you going to scale in a way that's feasible depending on the allocation that you get, right? So as we mentioned several times here, you know, model deployment is so hardware dependent and so complex that you tend to get a model for a hardware choice and then you want to scale that specific type of instance. But what if, when you want to scale because suddenly luckily got popular and, you know, you want to scale it up and then you don't have that instance anymore. So how do you live with whatever you have at that moment is something that we see customers needing as well. You know, so in fact, ideally what we want is customers to not think about what kind of specific instances they want. What they want is to know what their models need. Say, they know the SLA and then find a set of hybrid targets and instances that hit the SLA whenever they're also scaling, they're going to scale with more freedom, right? Instead of having to wait for AWS to give them more specific allocation for a specific instance. What if you could live with other types of hardware and scale up in a more free way, right? So that's another thing that we see customers, you know, like they need more freedom to be able to scale with whatever is available. >> Anna, you touched on this with the business model impact to that 6 million cost, if that goes out of control, there's a business model aspect and there's a technical operation aspect to the cost side too. You want to be mindful of riding the wave in a good way, but not getting over your skis. So that brings up the point around, you know, confidence, right? And teamwork. Because if you're in production, there's probably a team behind it. Talk about the team aspect of your customers. I mean, they're dedicated, they go put stuff into production, they're developers, there're data. What's in it for them? Are they getting better, are they in the beach, you know, reading the book. Are they, you know, are there easy street for them? What's the customer benefit to the teams? >> Yeah, absolutely. With just a few clicks of a button, you're in production, right? That's the dream. So yeah, I mean I think that, you know, we illustrated it before a little bit. I think the automated kind of benchmarking and optimization process, like when you think about the effort it takes to get that data by hand, which is what people are doing today, they just don't do it. So they're making decisions without the best information because it's, you know, there just isn't the bandwidth to get the information that they need to make the best decision and then know exactly how to deploy it. So I think it's actually bringing kind of a new insight and capability to these teams that they didn't have before. And then maybe another aspect on the team side is that it's making the hand-off of the models from the data science teams to the model deployment teams more seamless. So we have, you know, we have seen in the past that this kind of transition point is the place where there are a lot of hiccups, right? The data science team will give a model to the production team and it'll be too slow for the application or it'll be too expensive to run and it has to go back and be changed and kind of this loop. And so, you know, with the PyTorch profiler that Luis was talking about, and then also, you know, the other ways we do optimization that kind of prevents that hand-off problem from happening. >> Luis and Anna, you guys have a great company. Final couple minutes left. Talk about the company, the people there, what's the culture like, you know, if Intel has Moore's law, which is, you know, doubling the performance in few years, what's the culture like there? Is it, you know, more throughput, better pricing? Explain what's going on with the company and put a plug in. Luis, we'll start with you. >> Yeah, absolutely. I'm extremely proud of the team that we built here. You know, we have a people first culture, you know, very, very collaborative and folks, we all have a shared mission here of making AI more accessible and sustainable. We have a very diverse team in terms of backgrounds and life stories, you know, to do what we do here, we need a team that has expertise in software engineering, in machine learning, in computer architecture. Even though we don't build chips, we need to understand how they work, right? So, and then, you know, the fact that we have this, this very really, really varied set of backgrounds makes the environment, you know, it's say very exciting to learn more about, you know, assistance end-to-end. But also makes it for a very interesting, you know, work environment, right? So people have different backgrounds, different stories. Some of them went to grad school, others, you know, were in intelligence agencies and now are working here, you know. So we have a really interesting set of people and, you know, life is too short not to work with interesting humans. You know, that's something that I like to think about, you know. >> I'm sure your off-site meetings are a lot of fun, people talking about computer architectures, silicon advances, the next GPU, the big data models coming in. Anna, what's your take? What's the culture like? What's the company vibe and what are you guys looking to do? What's the customer success pattern? What's up? >> Yeah, absolutely. I mean, I, you know, second all of the great things that Luis just said about the team. I think one that I, an additional one that I'd really like to underscore is kind of this customer obsession, to use a term you all know well. And focus on the end users and really making the experiences that we're bringing to our user who are developers really, you know, useful and valuable for them. And so I think, you know, all of these tools that we're trying to put in the hands of users, the industry and the market is changing so rapidly that our products across the board, you know, all of the companies that, you know, are part of the showcase today, we're all evolving them so quickly and we can only do that kind of really hand in glove with our users. So that would be another thing I'd emphasize. >> I think the change dynamic, the power dynamics of this industry is just the beginning. I'm very bullish that this is going to be probably one of the biggest inflection points in history of the computer industry because of all the dynamics of the confluence of all the forces, which you mentioned some of them, I mean PC, you know, interoperability within internetworking and you got, you know, the web and then mobile. Now we have this, I mean, I wouldn't even put social media even in the close to this. Like, this is like, changes user experience, changes infrastructure. There's going to be massive accelerations in performance on the hardware side from AWS's of the world and cloud and you got the edge and more data. This is really what big data was going to look like. This is the beginning. Final question, what do you guys see going forward in the future? >> Well, it's undeniable that machine learning and AI models are becoming an integral part of an interesting application today, right? So, and the clear trends here are, you know, more and more competitional needs for these models because they're only getting more and more powerful. And then two, you know, seeing the complexity of the infrastructure where they run, you know, just considering the cloud, there's like a wide variety of choices there, right? So being able to live with that and making the most out of it in a way that does not require, you know, an impossible to find team is something that's pretty clear. So the need for automation, abstracting with the complexity is definitely here. And we are seeing this, you know, trends are that you also see models starting to move to the edge as well. So it's clear that we're seeing, we are going to live in a world where there's no large models living in the cloud. And then, you know, edge models that talk to these models in the cloud to form, you know, an end-to-end truly intelligent application. >> Anna? >> Yeah, I think, you know, our, Luis said it at the beginning. Our vision is to make AI sustainable and accessible. And I think as this technology just expands in every company and every team, that's going to happen kind of on its own. And we're here to help support that. And I think you can't do that without tools like those like OctoML. >> I think it's going to be an error of massive invention, creativity, a lot of the format heavy lifting is going to allow the talented people to automate their intellect. I mean, this is really kind of what we see going on. And Luis, thank you so much. Anna, thanks for coming on this segment. Thanks for coming on theCUBE and being part of the AWS Startup Showcase. I'm John Furrier, your host. Thanks for watching. (upbeat music)

Published Date : Mar 9 2023

SUMMARY :

Great to have you on again, Luis. It's great to be here. but let's explain what you guys do. And what we offer customers is, you know, So what do you guys think? so that the business you got the new secret kind of across the board So I have to ask you guys, And that very, very quickly, you know, and the data center versus cloud options. And now that we have them but, you know, training's super important. John, is that, you know, humans and then you got actually managed to even, you know, So I think, you know, for us we see in all potential, you know, And this is hard, you know, And as you look out, as And then we have teams that, you know, and picking the best model for your data? from the GPT-J study is that, you know, What is that, is that access to code? And then yes, after you sign up, to call you guys, OctoML? come and talk to us, you know, And so they want to know, you know, So you guys have a lot make sure that, you know, we see customers, you know, What's the customer benefit to the teams? and then also, you know, what's the culture like, you know, So, and then, you know, and what are you guys looking to do? all of the companies that, you know, I mean PC, you know, in the cloud to form, you know, And I think you can't And Luis, thank you so much.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
AnnaPERSON

0.99+

Anna ConnollyPERSON

0.99+

John FurrierPERSON

0.99+

LuisPERSON

0.99+

Luis CezePERSON

0.99+

JohnPERSON

0.99+

1QUANTITY

0.99+

10QUANTITY

0.99+

15QUANTITY

0.99+

AWSORGANIZATION

0.99+

10 yearQUANTITY

0.99+

6 millionQUANTITY

0.99+

zeroQUANTITY

0.99+

IntelORGANIZATION

0.99+

threeQUANTITY

0.99+

NvidiaORGANIZATION

0.99+

FirstQUANTITY

0.99+

OctoMLORGANIZATION

0.99+

twoQUANTITY

0.99+

millionsQUANTITY

0.99+

todayDATE

0.99+

TwoQUANTITY

0.99+

$2 millionQUANTITY

0.98+

3000 wordsQUANTITY

0.98+

one lineQUANTITY

0.98+

A10COMMERCIAL_ITEM

0.98+

OctoMLTITLE

0.98+

oneQUANTITY

0.98+

three main vectorsQUANTITY

0.97+

hundreds of thousands of dollarsQUANTITY

0.97+

bothQUANTITY

0.97+

CUBEORGANIZATION

0.97+

T4COMMERCIAL_ITEM

0.97+

one timeQUANTITY

0.97+

first oneQUANTITY

0.96+

two centsQUANTITY

0.96+

GPT-JORGANIZATION

0.96+

single modelQUANTITY

0.95+

a minuteQUANTITY

0.95+

about $6 million a yearQUANTITY

0.95+

once a dayQUANTITY

0.95+

$20,000 a dayQUANTITY

0.95+

a millionQUANTITY

0.94+

theCUBEORGANIZATION

0.93+

OctopodTITLE

0.93+

this morningDATE

0.93+

first cultureQUANTITY

0.92+

$20 million a yearQUANTITY

0.92+

AWS Startup ShowcaseEVENT

0.9+

North StarORGANIZATION

0.9+

Robert Nishihara, Anyscale | AWS Startup Showcase S3 E1


 

(upbeat music) >> Hello everyone. Welcome to theCube's presentation of the "AWS Startup Showcase." The topic this episode is AI and machine learning, top startups building foundational model infrastructure. This is season three, episode one of the ongoing series covering exciting startups from the AWS ecosystem. And this time we're talking about AI and machine learning. I'm your host, John Furrier. I'm excited I'm joined today by Robert Nishihara, who's the co-founder and CEO of a hot startup called Anyscale. He's here to talk about Ray, the open source project, Anyscale's infrastructure for foundation as well. Robert, thank you for joining us today. >> Yeah, thanks so much as well. >> I've been following your company since the founding pre pandemic and you guys really had a great vision scaled up and in a perfect position for this big wave that we all see with ChatGPT and OpenAI that's gone mainstream. Finally, AI has broken out through the ropes and now gone mainstream, so I think you guys are really well positioned. I'm looking forward to to talking with you today. But before we get into it, introduce the core mission for Anyscale. Why do you guys exist? What is the North Star for Anyscale? >> Yeah, like you mentioned, there's a tremendous amount of excitement about AI right now. You know, I think a lot of us believe that AI can transform just every different industry. So one of the things that was clear to us when we started this company was that the amount of compute needed to do AI was just exploding. Like to actually succeed with AI, companies like OpenAI or Google or you know, these companies getting a lot of value from AI, were not just running these machine learning models on their laptops or on a single machine. They were scaling these applications across hundreds or thousands or more machines and GPUs and other resources in the Cloud. And so to actually succeed with AI, and this has been one of the biggest trends in computing, maybe the biggest trend in computing in, you know, in recent history, the amount of compute has been exploding. And so to actually succeed with that AI, to actually build these scalable applications and scale the AI applications, there's a tremendous software engineering lift to build the infrastructure to actually run these scalable applications. And that's very hard to do. So one of the reasons many AI projects and initiatives fail is that, or don't make it to production, is the need for this scale, the infrastructure lift, to actually make it happen. So our goal here with Anyscale and Ray, is to make that easy, is to make scalable computing easy. So that as a developer or as a business, if you want to do AI, if you want to get value out of AI, all you need to know is how to program on your laptop. Like, all you need to know is how to program in Python. And if you can do that, then you're good to go. Then you can do what companies like OpenAI or Google do and get value out of machine learning. >> That programming example of how easy it is with Python reminds me of the early days of Cloud, when infrastructure as code was talked about was, it was just code the infrastructure programmable. That's super important. That's what AI people wanted, first program AI. That's the new trend. And I want to understand, if you don't mind explaining, the relationship that Anyscale has to these foundational models and particular the large language models, also called LLMs, was seen with like OpenAI and ChatGPT. Before you get into the relationship that you have with them, can you explain why the hype around foundational models? Why are people going crazy over foundational models? What is it and why is it so important? >> Yeah, so foundational models and foundation models are incredibly important because they enable businesses and developers to get value out of machine learning, to use machine learning off the shelf with these large models that have been trained on tons of data and that are useful out of the box. And then, of course, you know, as a business or as a developer, you can take those foundational models and repurpose them or fine tune them or adapt them to your specific use case and what you want to achieve. But it's much easier to do that than to train them from scratch. And I think there are three, for people to actually use foundation models, there are three main types of workloads or problems that need to be solved. One is training these foundation models in the first place, like actually creating them. The second is fine tuning them and adapting them to your use case. And the third is serving them and actually deploying them. Okay, so Ray and Anyscale are used for all of these three different workloads. Companies like OpenAI or Cohere that train large language models. Or open source versions like GPTJ are done on top of Ray. There are many startups and other businesses that fine tune, that, you know, don't want to train the large underlying foundation models, but that do want to fine tune them, do want to adapt them to their purposes, and build products around them and serve them, those are also using Ray and Anyscale for that fine tuning and that serving. And so the reason that Ray and Anyscale are important here is that, you know, building and using foundation models requires a huge scale. It requires a lot of data. It requires a lot of compute, GPUs, TPUs, other resources. And to actually take advantage of that and actually build these scalable applications, there's a lot of infrastructure that needs to happen under the hood. And so you can either use Ray and Anyscale to take care of that and manage the infrastructure and solve those infrastructure problems. Or you can build the infrastructure and manage the infrastructure yourself, which you can do, but it's going to slow your team down. It's going to, you know, many of the businesses we work with simply don't want to be in the business of managing infrastructure and building infrastructure. They want to focus on product development and move faster. >> I know you got a keynote presentation we're going to go to in a second, but I think you hit on something I think is the real tipping point, doing it yourself, hard to do. These are things where opportunities are and the Cloud did that with data centers. Turned a data center and made it an API. The heavy lifting went away and went to the Cloud so people could be more creative and build their product. In this case, build their creativity. Is that kind of what's the big deal? Is that kind of a big deal happening that you guys are taking the learnings and making that available so people don't have to do that? >> That's exactly right. So today, if you want to succeed with AI, if you want to use AI in your business, infrastructure work is on the critical path for doing that. To do AI, you have to build infrastructure. You have to figure out how to scale your applications. That's going to change. We're going to get to the point, and you know, with Ray and Anyscale, we're going to remove the infrastructure from the critical path so that as a developer or as a business, all you need to focus on is your application logic, what you want the the program to do, what you want your application to do, how you want the AI to actually interface with the rest of your product. Now the way that will happen is that Ray and Anyscale will still, the infrastructure work will still happen. It'll just be under the hood and taken care of by Ray in Anyscale. And so I think something like this is really necessary for AI to reach its potential, for AI to have the impact and the reach that we think it will, you have to make it easier to do. >> And just for clarification to point out, if you don't mind explaining the relationship of Ray and Anyscale real quick just before we get into the presentation. >> So Ray is an open source project. We created it. We were at Berkeley doing machine learning. We started Ray so that, in order to provide an easy, a simple open source tool for building and running scalable applications. And Anyscale is the managed version of Ray, basically we will run Ray for you in the Cloud, provide a lot of tools around the developer experience and managing the infrastructure and providing more performance and superior infrastructure. >> Awesome. I know you got a presentation on Ray and Anyscale and you guys are positioning as the infrastructure for foundational models. So I'll let you take it away and then when you're done presenting, we'll come back, I'll probably grill you with a few questions and then we'll close it out so take it away. >> Robert: Sounds great. So I'll say a little bit about how companies are using Ray and Anyscale for foundation models. The first thing I want to mention is just why we're doing this in the first place. And the underlying observation, the underlying trend here, and this is a plot from OpenAI, is that the amount of compute needed to do machine learning has been exploding. It's been growing at something like 35 times every 18 months. This is absolutely enormous. And other people have written papers measuring this trend and you get different numbers. But the point is, no matter how you slice and dice it, it' a astronomical rate. Now if you compare that to something we're all familiar with, like Moore's Law, which says that, you know, the processor performance doubles every roughly 18 months, you can see that there's just a tremendous gap between the needs, the compute needs of machine learning applications, and what you can do with a single chip, right. So even if Moore's Law were continuing strong and you know, doing what it used to be doing, even if that were the case, there would still be a tremendous gap between what you can do with the chip and what you need in order to do machine learning. And so given this graph, what we've seen, and what has been clear to us since we started this company, is that doing AI requires scaling. There's no way around it. It's not a nice to have, it's really a requirement. And so that led us to start Ray, which is the open source project that we started to make it easy to build these scalable Python applications and scalable machine learning applications. And since we started the project, it's been adopted by a tremendous number of companies. Companies like OpenAI, which use Ray to train their large models like ChatGPT, companies like Uber, which run all of their deep learning and classical machine learning on top of Ray, companies like Shopify or Spotify or Instacart or Lyft or Netflix, ByteDance, which use Ray for their machine learning infrastructure. Companies like Ant Group, which makes Alipay, you know, they use Ray across the board for fraud detection, for online learning, for detecting money laundering, you know, for graph processing, stream processing. Companies like Amazon, you know, run Ray at a tremendous scale and just petabytes of data every single day. And so the project has seen just enormous adoption since, over the past few years. And one of the most exciting use cases is really providing the infrastructure for building training, fine tuning, and serving foundation models. So I'll say a little bit about, you know, here are some examples of companies using Ray for foundation models. Cohere trains large language models. OpenAI also trains large language models. You can think about the workloads required there are things like supervised pre-training, also reinforcement learning from human feedback. So this is not only the regular supervised learning, but actually more complex reinforcement learning workloads that take human input about what response to a particular question, you know is better than a certain other response. And incorporating that into the learning. There's open source versions as well, like GPTJ also built on top of Ray as well as projects like Alpa coming out of UC Berkeley. So these are some of the examples of exciting projects in organizations, training and creating these large language models and serving them using Ray. Okay, so what actually is Ray? Well, there are two layers to Ray. At the lowest level, there's the core Ray system. This is essentially low level primitives for building scalable Python applications. Things like taking a Python function or a Python class and executing them in the cluster setting. So Ray core is extremely flexible and you can build arbitrary scalable applications on top of Ray. So on top of Ray, on top of the core system, what really gives Ray a lot of its power is this ecosystem of scalable libraries. So on top of the core system you have libraries, scalable libraries for ingesting and pre-processing data, for training your models, for fine tuning those models, for hyper parameter tuning, for doing batch processing and batch inference, for doing model serving and deployment, right. And a lot of the Ray users, the reason they like Ray is that they want to run multiple workloads. They want to train and serve their models, right. They want to load their data and feed that into training. And Ray provides common infrastructure for all of these different workloads. So this is a little overview of what Ray, the different components of Ray. So why do people choose to go with Ray? I think there are three main reasons. The first is the unified nature. The fact that it is common infrastructure for scaling arbitrary workloads, from data ingest to pre-processing to training to inference and serving, right. This also includes the fact that it's future proof. AI is incredibly fast moving. And so many people, many companies that have built their own machine learning infrastructure and standardized on particular workflows for doing machine learning have found that their workflows are too rigid to enable new capabilities. If they want to do reinforcement learning, if they want to use graph neural networks, they don't have a way of doing that with their standard tooling. And so Ray, being future proof and being flexible and general gives them that ability. Another reason people choose Ray in Anyscale is the scalability. This is really our bread and butter. This is the reason, the whole point of Ray, you know, making it easy to go from your laptop to running on thousands of GPUs, making it easy to scale your development workloads and run them in production, making it easy to scale, you know, training to scale data ingest, pre-processing and so on. So scalability and performance, you know, are critical for doing machine learning and that is something that Ray provides out of the box. And lastly, Ray is an open ecosystem. You can run it anywhere. You can run it on any Cloud provider. Google, you know, Google Cloud, AWS, Asure. You can run it on your Kubernetes cluster. You can run it on your laptop. It's extremely portable. And not only that, it's framework agnostic. You can use Ray to scale arbitrary Python workloads. You can use it to scale and it integrates with libraries like TensorFlow or PyTorch or JAX or XG Boost or Hugging Face or PyTorch Lightning, right, or Scikit-learn or just your own arbitrary Python code. It's open source. And in addition to integrating with the rest of the machine learning ecosystem and these machine learning frameworks, you can use Ray along with all of the other tooling in the machine learning ecosystem. That's things like weights and biases or ML flow, right. Or you know, different data platforms like Databricks, you know, Delta Lake or Snowflake or tools for model monitoring for feature stores, all of these integrate with Ray. And that's, you know, Ray provides that kind of flexibility so that you can integrate it into the rest of your workflow. And then Anyscale is the scalable compute platform that's built on top, you know, that provides Ray. So Anyscale is a managed Ray service that runs in the Cloud. And what Anyscale does is it offers the best way to run Ray. And if you think about what you get with Anyscale, there are fundamentally two things. One is about moving faster, accelerating the time to market. And you get that by having the managed service so that as a developer you don't have to worry about managing infrastructure, you don't have to worry about configuring infrastructure. You also, it provides, you know, optimized developer workflows. Things like easily moving from development to production, things like having the observability tooling, the debug ability to actually easily diagnose what's going wrong in a distributed application. So things like the dashboards and the other other kinds of tooling for collaboration, for monitoring and so on. And then on top of that, so that's the first bucket, developer productivity, moving faster, faster experimentation and iteration. The second reason that people choose Anyscale is superior infrastructure. So this is things like, you know, cost deficiency, being able to easily take advantage of spot instances, being able to get higher GPU utilization, things like faster cluster startup times and auto scaling. Things like just overall better performance and faster scheduling. And so these are the kinds of things that Anyscale provides on top of Ray. It's the managed infrastructure. It's fast, it's like the developer productivity and velocity as well as performance. So this is what I wanted to share about Ray in Anyscale. >> John: Awesome. >> Provide that context. But John, I'm curious what you think. >> I love it. I love the, so first of all, it's a platform because that's the platform architecture right there. So just to clarify, this is an Anyscale platform, not- >> That's right. >> Tools. So you got tools in the platform. Okay, that's key. Love that managed service. Just curious, you mentioned Python multiple times, is that because of PyTorch and TensorFlow or Python's the most friendly with machine learning or it's because it's very common amongst all developers? >> That's a great question. Python is the language that people are using to do machine learning. So it's the natural starting point. Now, of course, Ray is actually designed in a language agnostic way and there are companies out there that use Ray to build scalable Java applications. But for the most part right now we're focused on Python and being the best way to build these scalable Python and machine learning applications. But, of course, down the road there always is that potential. >> So if you're slinging Python code out there and you're watching that, you're watching this video, get on Anyscale bus quickly. Also, I just, while you were giving the presentation, I couldn't help, since you mentioned OpenAI, which by the way, congratulations 'cause they've had great scale, I've noticed in their rapid growth 'cause they were the fastest company to the number of users than anyone in the history of the computer industry, so major successor, OpenAI and ChatGPT, huge fan. I'm not a skeptic at all. I think it's just the beginning, so congratulations. But I actually typed into ChatGPT, what are the top three benefits of Anyscale and came up with scalability, flexibility, and ease of use. Obviously, scalability is what you guys are called. >> That's pretty good. >> So that's what they came up with. So they nailed it. Did you have an inside prompt training, buy it there? Only kidding. (Robert laughs) >> Yeah, we hard coded that one. >> But that's the kind of thing that came up really, really quickly if I asked it to write a sales document, it probably will, but this is the future interface. This is why people are getting excited about the foundational models and the large language models because it's allowing the interface with the user, the consumer, to be more human, more natural. And this is clearly will be in every application in the future. >> Absolutely. This is how people are going to interface with software, how they're going to interface with products in the future. It's not just something, you know, not just a chat bot that you talk to. This is going to be how you get things done, right. How you use your web browser or how you use, you know, how you use Photoshop or how you use other products. Like you're not going to spend hours learning all the APIs and how to use them. You're going to talk to it and tell it what you want it to do. And of course, you know, if it doesn't understand it, it's going to ask clarifying questions. You're going to have a conversation and then it'll figure it out. >> This is going to be one of those things, we're going to look back at this time Robert and saying, "Yeah, from that company, that was the beginning of that wave." And just like AWS and Cloud Computing, the folks who got in early really were in position when say the pandemic came. So getting in early is a good thing and that's what everyone's talking about is getting in early and playing around, maybe replatforming or even picking one or few apps to refactor with some staff and managed services. So people are definitely jumping in. So I have to ask you the ROI cost question. You mentioned some of those, Moore's Law versus what's going on in the industry. When you look at that kind of scale, the first thing that jumps out at people is, "Okay, I love it. Let's go play around." But what's it going to cost me? Am I going to be tied to certain GPUs? What's the landscape look like from an operational standpoint, from the customer? Are they locked in and the benefit was flexibility, are you flexible to handle any Cloud? What is the customers, what are they looking at? Basically, that's my question. What's the customer looking at? >> Cost is super important here and many of the companies, I mean, companies are spending a huge amount on their Cloud computing, on AWS, and on doing AI, right. And I think a lot of the advantage of Anyscale, what we can provide here is not only better performance, but cost efficiency. Because if we can run something faster and more efficiently, it can also use less resources and you can lower your Cloud spending, right. We've seen companies go from, you know, 20% GPU utilization with their current setup and the current tools they're using to running on Anyscale and getting more like 95, you know, 100% GPU utilization. That's something like a five x improvement right there. So depending on the kind of application you're running, you know, it's a significant cost savings. We've seen companies that have, you know, processing petabytes of data every single day with Ray going from, you know, getting order of magnitude cost savings by switching from what they were previously doing to running their application on Ray. And when you have applications that are spending, you know, potentially $100 million a year and getting a 10 X cost savings is just absolutely enormous. So these are some of the kinds of- >> Data infrastructure is super important. Again, if the customer, if you're a prospect to this and thinking about going in here, just like the Cloud, you got infrastructure, you got the platform, you got SaaS, same kind of thing's going to go on in AI. So I want to get into that, you know, ROI discussion and some of the impact with your customers that are leveraging the platform. But first I hear you got a demo. >> Robert: Yeah, so let me show you, let me give you a quick run through here. So what I have open here is the Anyscale UI. I've started a little Anyscale Workspace. So Workspaces are the Anyscale concept for interactive developments, right. So here, imagine I'm just, you want to have a familiar experience like you're developing on your laptop. And here I have a terminal. It's not on my laptop. It's actually in the cloud running on Anyscale. And I'm just going to kick this off. This is going to train a large language model, so OPT. And it's doing this on 32 GPUs. We've got a cluster here with a bunch of CPU cores, bunch of memory. And as that's running, and by the way, if I wanted to run this on instead of 32 GPUs, 64, 128, this is just a one line change when I launch the Workspace. And what I can do is I can pull up VS code, right. Remember this is the interactive development experience. I can look at the actual code. Here it's using Ray train to train the torch model. We've got the training loop and we're saying that each worker gets access to one GPU and four CPU cores. And, of course, as I make the model larger, this is using deep speed, as I make the model larger, I could increase the number of GPUs that each worker gets access to, right. And how that is distributed across the cluster. And if I wanted to run on CPUs instead of GPUs or a different, you know, accelerator type, again, this is just a one line change. And here we're using Ray train to train the models, just taking my vanilla PyTorch model using Hugging Face and then scaling that across a bunch of GPUs. And, of course, if I want to look at the dashboard, I can go to the Ray dashboard. There are a bunch of different visualizations I can look at. I can look at the GPU utilization. I can look at, you know, the CPU utilization here where I think we're currently loading the model and running that actual application to start the training. And some of the things that are really convenient here about Anyscale, both I can get that interactive development experience with VS code. You know, I can look at the dashboards. I can monitor what's going on. It feels, I have a terminal, it feels like my laptop, but it's actually running on a large cluster. And I can, with however many GPUs or other resources that I want. And so it's really trying to combine the best of having the familiar experience of programming on your laptop, but with the benefits, you know, being able to take advantage of all the resources in the Cloud to scale. And it's like when, you know, you're talking about cost efficiency. One of the biggest reasons that people waste money, one of the silly reasons for wasting money is just forgetting to turn off your GPUs. And what you can do here is, of course, things will auto terminate if they're idle. But imagine you go to sleep, I have this big cluster. You can turn it off, shut off the cluster, come back tomorrow, restart the Workspace, and you know, your big cluster is back up and all of your code changes are still there. All of your local file edits. It's like you just closed your laptop and came back and opened it up again. And so this is the kind of experience we want to provide for our users. So that's what I wanted to share with you. >> Well, I think that whole, couple of things, lines of code change, single line of code change, that's game changing. And then the cost thing, I mean human error is a big deal. People pass out at their computer. They've been coding all night or they just forget about it. I mean, and then it's just like leaving the lights on or your water running in your house. It's just, at the scale that it is, the numbers will add up. That's a huge deal. So I think, you know, compute back in the old days, there's no compute. Okay, it's just compute sitting there idle. But you know, data cranking the models is doing, that's a big point. >> Another thing I want to add there about cost efficiency is that we make it really easy to use, if you're running on Anyscale, to use spot instances and these preemptable instances that can just be significantly cheaper than the on-demand instances. And so when we see our customers go from what they're doing before to using Anyscale and they go from not using these spot instances 'cause they don't have the infrastructure around it, the fault tolerance to handle the preemption and things like that, to being able to just check a box and use spot instances and save a bunch of money. >> You know, this was my whole, my feature article at Reinvent last year when I met with Adam Selipsky, this next gen Cloud is here. I mean, it's not auto scale, it's infrastructure scale. It's agility. It's flexibility. I think this is where the world needs to go. Almost what DevOps did for Cloud and what you were showing me that demo had this whole SRE vibe. And remember Google had site reliability engines to manage all those servers. This is kind of like an SRE vibe for data at scale. I mean, a similar kind of order of magnitude. I mean, I might be a little bit off base there, but how would you explain it? >> It's a nice analogy. I mean, what we are trying to do here is get to the point where developers don't think about infrastructure. Where developers only think about their application logic. And where businesses can do AI, can succeed with AI, and build these scalable applications, but they don't have to build, you know, an infrastructure team. They don't have to develop that expertise. They don't have to invest years in building their internal machine learning infrastructure. They can just focus on the Python code, on their application logic, and run the stuff out of the box. >> Awesome. Well, I appreciate the time. Before we wrap up here, give a plug for the company. I know you got a couple websites. Again, go, Ray's got its own website. You got Anyscale. You got an event coming up. Give a plug for the company looking to hire. Put a plug in for the company. >> Yeah, absolutely. Thank you. So first of all, you know, we think AI is really going to transform every industry and the opportunity is there, right. We can be the infrastructure that enables all of that to happen, that makes it easy for companies to succeed with AI, and get value out of AI. Now we have, if you're interested in learning more about Ray, Ray has been emerging as the standard way to build scalable applications. Our adoption has been exploding. I mentioned companies like OpenAI using Ray to train their models. But really across the board companies like Netflix and Cruise and Instacart and Lyft and Uber, you know, just among tech companies. It's across every industry. You know, gaming companies, agriculture, you know, farming, robotics, drug discovery, you know, FinTech, we see it across the board. And all of these companies can get value out of AI, can really use AI to improve their businesses. So if you're interested in learning more about Ray and Anyscale, we have our Ray Summit coming up in September. This is going to highlight a lot of the most impressive use cases and stories across the industry. And if your business, if you want to use LLMs, you want to train these LLMs, these large language models, you want to fine tune them with your data, you want to deploy them, serve them, and build applications and products around them, give us a call, talk to us. You know, we can really take the infrastructure piece, you know, off the critical path and make that easy for you. So that's what I would say. And, you know, like you mentioned, we're hiring across the board, you know, engineering, product, go-to-market, and it's an exciting time. >> Robert Nishihara, co-founder and CEO of Anyscale, congratulations on a great company you've built and continuing to iterate on and you got growth ahead of you, you got a tailwind. I mean, the AI wave is here. I think OpenAI and ChatGPT, a customer of yours, have really opened up the mainstream visibility into this new generation of applications, user interface, roll of data, large scale, how to make that programmable so we're going to need that infrastructure. So thanks for coming on this season three, episode one of the ongoing series of the hot startups. In this case, this episode is the top startups building foundational model infrastructure for AI and ML. I'm John Furrier, your host. Thanks for watching. (upbeat music)

Published Date : Mar 9 2023

SUMMARY :

episode one of the ongoing and you guys really had and other resources in the Cloud. and particular the large language and what you want to achieve. and the Cloud did that with data centers. the point, and you know, if you don't mind explaining and managing the infrastructure and you guys are positioning is that the amount of compute needed to do But John, I'm curious what you think. because that's the platform So you got tools in the platform. and being the best way to of the computer industry, Did you have an inside prompt and the large language models and tell it what you want it to do. So I have to ask you and you can lower your So I want to get into that, you know, and you know, your big cluster is back up So I think, you know, the on-demand instances. and what you were showing me that demo and run the stuff out of the box. I know you got a couple websites. and the opportunity is there, right. and you got growth ahead

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Robert NishiharaPERSON

0.99+

JohnPERSON

0.99+

RobertPERSON

0.99+

John FurrierPERSON

0.99+

NetflixORGANIZATION

0.99+

35 timesQUANTITY

0.99+

AmazonORGANIZATION

0.99+

$100 millionQUANTITY

0.99+

UberORGANIZATION

0.99+

AWSORGANIZATION

0.99+

100%QUANTITY

0.99+

GoogleORGANIZATION

0.99+

Ant GroupORGANIZATION

0.99+

firstQUANTITY

0.99+

PythonTITLE

0.99+

20%QUANTITY

0.99+

32 GPUsQUANTITY

0.99+

LyftORGANIZATION

0.99+

hundredsQUANTITY

0.99+

tomorrowDATE

0.99+

AnyscaleORGANIZATION

0.99+

threeQUANTITY

0.99+

128QUANTITY

0.99+

SeptemberDATE

0.99+

todayDATE

0.99+

Moore's LawTITLE

0.99+

Adam SelipskyPERSON

0.99+

PyTorchTITLE

0.99+

RayORGANIZATION

0.99+

second reasonQUANTITY

0.99+

64QUANTITY

0.99+

each workerQUANTITY

0.99+

each workerQUANTITY

0.99+

PhotoshopTITLE

0.99+

UC BerkeleyORGANIZATION

0.99+

JavaTITLE

0.99+

ShopifyORGANIZATION

0.99+

OpenAIORGANIZATION

0.99+

AnyscalePERSON

0.99+

thirdQUANTITY

0.99+

two thingsQUANTITY

0.99+

ByteDanceORGANIZATION

0.99+

SpotifyORGANIZATION

0.99+

OneQUANTITY

0.99+

95QUANTITY

0.99+

AsureORGANIZATION

0.98+

one lineQUANTITY

0.98+

one GPUQUANTITY

0.98+

ChatGPTTITLE

0.98+

TensorFlowTITLE

0.98+

last yearDATE

0.98+

first bucketQUANTITY

0.98+

bothQUANTITY

0.98+

two layersQUANTITY

0.98+

CohereORGANIZATION

0.98+

AlipayORGANIZATION

0.98+

RayPERSON

0.97+

oneQUANTITY

0.97+

InstacartORGANIZATION

0.97+

Rachel Skaff, AWS | International Women's Day


 

(gentle music) >> Hello, and welcome to theCUBE's coverage of International Women's Day. I'm John Furrier, host of theCUBE. I've got a great guest here, CUBE alumni and very impressive, inspiring, Rachel Mushahwar Skaff, who's a managing director and general manager at AWS. Rachel, great to see you. Thanks for coming on. >> Thank you so much. It's always a pleasure to be here. You all make such a tremendous impact with reporting out what's happening in the tech space, and frankly, investing in topics like this, so thank you. >> It's our pleasure. Your career has been really impressive. You worked at Intel for almost a decade, and that company is very tech, very focused on Moore's law, cadence of technology power in the industry. Now at AWS, powering next-generation cloud. What inspired you to get into tech? How did you get here and how have you approached your career journey, because it's quite a track record? >> Wow, how long do we have? (Rachel and John laugh) >> John: We can go as long as you want. (laughs) It's great. >> You know, all joking aside, I think at the end of the day, it's about this simple statement. If you don't get goosebumps every single morning that you're waking up to do your job, it's not good enough. And that's a bit about how I've made all of the different career transitions that I have. You know, everything from building out data centers around the world, to leading network and engineering teams, to leading applications teams, to going and working for, you know, the largest semiconductor in the world, and now at AWS, every single one of those opportunities gave me goosebumps. And I was really focused on how do I surround myself with humans that are better than I am, smarter than I am, companies that plan in decades, but live in moments, companies that invest in their employees and create like artists? And frankly, for me, being part of a company where people know that life is finite, but they want to make an infinite impact, that's a bit about my career journey in a nutshell. >> Yeah. What's interesting is that, you know, over the years, a lot's changed, and a theme that we're hearing from leaders now that are heading up large teams and running companies, they have, you know, they have 20-plus years of experience under their belt and they look back and they say, "Wow, "things have changed and it's changing faster now, "hopefully faster to get change." But they all talk about confidence and they talk about curiosity and building. When did you know that this was going to be something that you got the goosebumps? And were there blockers in your way and how did you handle that? (Rachel laughs) >> There's always blockers in our way, and I think a lot of people don't actually talk about the blockers. I think they make it sound like, hey, I had this plan from day one, and every decision I've made has been perfect. And for me, I'll tell you, right, there are moments in your life that mark a differentiation and those moments that you realize nothing will be the same. And time is kind of divided into two parts, right, before this moment and after this moment. And that's everything from, before I had kids, that's a pretty big moment in people's lives, to after I had kids, and how do you work through some of those opportunities? Before I got married, before I got divorced. Before I went to this company, after I left this company. And I think the key for all of those is just having an insatiable curiosity around how do you continue to do better, create better and make better? And I'll tell you, those blockers, they exist. Coming back from maternity leave, hard. Coming back from a medical leave, hard. Coming back from caring for a sick parent or a sick friend, hard. But all of those things start to help craft who you are as a human being, not as a leader, but as a human being, and allows you to have some empathy with the people that you surround yourself with, right? And for me, it's, (sighs) you can think about these blockers in one of two ways. You can think about it as, you know, every single time that you're tempted to react in the same way to a blocker, you can be a prisoner of your past, or you can change how you react and be a pioneer of the future. It's not a blocker when you think about it in those terms. >> Mindset matters, and that's really a great point. You brought up something that's interesting, I want to bring this up. Some of the challenges in different stages of our lives. You know, one thing that's come out of this set of interviews, this, of day and in conversations is, that I haven't heard before, is the result of COVID, working at home brought empathy about people's personal lives to the table. That came up in a couple interviews. What's your reaction to that? Because that highlights that we're human, to your point of view. >> It does. It does. And I'm so thankful that you don't ask about balance because that is a pet peeve of mine, because there is no such thing as balance. If you're in perfect balance, you are not moving and you're not changing. But when you think about, you know, the impact of COVID and how the world has changed since that, it has allowed all of us to really think about, you know, what do we want to do versus what do we have to do? And I think so many times, in both our professional lives and our personal lives, we get caught up in doing what we think we have to do to get ahead versus taking a step back and saying, "Hey, what do I want to do? "And how do I become a, you know, "a better human?" And many times, John, I'm asked, "Hey, "how do you define success or achievement?" And, you know, my answer is really, for me, the greatest results that I've achieved, both personally and professionally, is when I eliminate the word success and balance from my vocabulary, and replace them with two words: What's my contribution and what's my impact? Those things make a difference, regardless of gender. And I'll tell you, none of it is easy, ever. I think all of us have been broken, we've been stretched, we've been burnt out. But I also think what we have to talk about as leaders in the industry is how we've also found endurance and resilience. And when we felt unsteady, we've continued to go forward, right? When we can't decide, the best answer is do what's uncomfortable. And all of those things really stemmed from a part of what happened with COVID. >> Yeah, yeah, I love the uncomfortable and the balance highlight. You mentioned being off balance. That means you're growing, you're not standing still. I want to get your thoughts on this because one thing that has come out again this year, and last year as well, is having a team with you when you do it. So if you're off balance and you're going to stretch, if you have a good team with you, that's where people help each other. Not just pick them up, but like maybe get 'em back on track again. So, but if you're solo, you fall, (laughs) you fall harder. So what's your reaction to that? 'Cause this has come up, and this comes up in team building, workforce formation, goal setting, contribution. What's your reaction to that? >> So my reaction to that that is pretty simple. Nobody gets there on their own at all, right? Passion and ambition can only take you so far. You've got to have people and teams that are supporting you. And here's the funny thing about people, and frankly, about being a leader that I think is really important: People don't follow for you. People follow for who you help them become. Think about that for a second. And when you think about all the amazing things that companies and teams are able to do, it's because of those people. And it's because you have leaders that are out there, inspiring them to take what they believe is impossible and turn it into the possible. That's the power of teams. >> Can you give an example of your approach on how you do that? How do you build your teams? How do you grow them? How do you lead them effectively and also make 'em inclusive, diverse and equitable? >> Whew. I'll give you a great example of some work that we're doing at AWS. This year at re:Invent, for the first time in its history, we've launched an initiative with theCUBE called Women of the Cloud. And part of Women of the Cloud is highlighting the business impact that so many of our partners, our customers and our employees have had on the social, on the economic and on the financials of many companies. They just haven't had the opportunity to tell their story. And at Amazon, right, it is absolutely integral to us to highlight those examples and continue to extend that ethos to our partners and our customers. And I think one of the things that I shared with you at re:Invent was, you know, as U2's Bono put it, (John laughs) "We'll build it better than we did before "and we are the people "that we've been waiting for." So if we're not out there, advocating and highlighting all the amazing things that other women are doing in the ecosystem, who will? >> Well, I've got to say, I want to give you props for that program. Not only was it groundbreaking, it's still running strong. And I saw some things on LinkedIn that were really impressive in its network effect. And I met at least half a dozen new people I never would have met before through some of that content interaction and engagement. And this is like the power of the current world. I mean, getting the voices out there creates momentum. And it's good for Amazon. It's not just personal brand building for my next job or whatever, you know, reason. It's sharing and it's attracting others, and it's causing people to connect and meet each other in that world. So it's still going strong. (laughs) And this program we did last year was part of Rachel Thornton, who's now at MessageBird, and Mary Camarata. They were the sponsors for this International Women's Day. They're not there anymore, so we decided we're going to do it again because the impact is so significant. We had the Amazon Education group on. It's amazing and it's free, and we've got to get the word out. I mean, talk about leveling up fast. You get in and you get trained and get certified, and there's a zillion jobs out (laughs) there in cloud, right, and partners. So this kind of leadership is really important. What was the key learnings that you've taken away and how do you extend this opportunity to nurture the talent out there in the field? Because when you throw the content out there from great leaders and practitioners and developers, it attracts other people. >> It does. It does. So look, I think there's two types of people, people that are focused on being and people who are focused on doing. And let me give you an example, right? When we think about labels of, hey, Rachel's a female executive who launched Women of the Cloud, that label really limits me. I'd rather just be a great executive. Or, hey, there's a great entrepreneur. Let's not be a great entrepreneur. Just go build something and sell it. And that's part of this whole Women of the cloud, is I don't want people focused on what their label is. I want people sharing their stories about what they're doing, and that's where the lasting impact happens, right? I think about something that my grandmother used to tell me, and she used to tell me, "Rachel, how successful "you are, doesn't matter. "The lasting impact that you have "is your legacy in this very finite time "that you have on Earth. "Leave a legacy." And that's what Women of the Cloud is about. So that people can start to say, "Oh, geez, "I didn't know that that was possible. "I didn't think about my career in that way." And, you know, all of those different types of stories that you're hearing out there. >> And I want to highlight something you said. We had another Amazonian on the program for this day earlier and she coined a term, 'cause inside Amazon, you have common language. One of them is bar raising. Raise the bar, that's an Amazonian (Rachel laughs) term. It means contribute and improve and raise the bar of capability. She said, "Bar raising is gender neutral. "The bar is a bar." And I'm like, wow, that was amazing. Now, that means your contribution angle there highlights that. What's the biggest challenge to get that mindset set in culture, in these- >> Oh. >> 'Cause it's that simple, contribution is neutral. >> It absolutely is neutral, but it's like I said earlier, I think so many times, people are focused on success and being a great leader versus what's the contribution I'm making and how am I doing as a leader, you know? And when it comes to a lot of the leadership principles that Amazon has, including bar raising, which means insisting on the highest standards, and then those standards continue to raise every single time. And what that is all about is having all of our employees figure out, how do I get better every single day, right? That's what it's about. It's not about being better than the peer next to you. It's about how do I become a better leader, a better human being than I was yesterday? >> Awesome. >> You know, I read this really cute quote and I think it really resonates. "You meditate to upgrade your software "and you work out to upgrade your hardware." And while it's important that we're all ourselves at work, we can't deny that a lot of times, ourselves still need that meditation or that workout. >> Well, I hope I don't have any zero days in my software out there, so, but I'm going to definitely work on that. I love that quote. I'm going to use that. Thank you very much. That was awesome. I got to ask you, I know you're really passionate about, and we've talked about this, around, so you're a great leader but you're also focused on what's behind you in the generation, pipelining women leaders, okay? Seats at the table, mentoring and sponsorship. What can we do to build a strong pipeline of leaders in technology and business? And where do you see the biggest opportunity to nurture the talent in these fields? >> Hmm, you know, that's great, great question. And, you know, I just read a "Forbes" article by another Amazonian, Tanuja Randery, who talked about, you know, some really interesting stats. And one of the stats that she shared was, you know, by 2030, less than 25% of tech specialists will be female, less than 25%. That's only a 6% growth from where we are in 2023, so in seven years. That's alarming. So we've really got to figure out what are the kinds of things that we're going to go do from an Amazon perspective to impact that? And one of the obvious starting points is showcasing tech careers to girls and young women, and talking openly about what a technology career looks like. So specifically at Amazon, we've got an AWS Git IT program that helps schools and educators bring in tech role models to show them what potential careers look like in tech. I think that's one great way that we can help build the pipeline, but once we get the pipeline, we also have to figure out how we don't let that pipeline leak. Meaning how do we keep women and, you know, young women on their tech career? And I think big part of that, John, is really talking about how hard it is, but it's also greater than you can ever imagine. And letting them see executives that are very authentic and will talk about, geez, you know, the challenges of COVID were a time of crisis and accelerated change, and here's what it meant to me personally and here's what we were able to solve professionally. These younger generations are all about social impact, they're about economic impact and they're about financial impact. And if we're not talking about all three of those, both from how AWS is leading from the front, but how its executives are also taking that into their personal lives, they're not going to want to go into tech. >> Yeah, and I think one of the things you mentioned there about getting people that get IT, good call out there, but also, Amazon's going to train 30 million people, put hundreds of millions of dollars into education. And not only are they making it easier to get in to get trained, but once you're in, even savvy folks that are in there still have to accelerate. And there's more ways to level up, more things are happening, but there's a big trend around people changing careers either in their late 20s, early 30s, or even those moments you talk about, where it's before and after, even later in the careers, 40s, 50s. Leaders like, well, good experience, good training, who were in another discipline who re-skilled. So you have, you know, more certifications coming in. So there's still other pivot points in the pipeline. It's not just down here. And that, I find that interesting. Are you seeing that same leadership opportunities coming in where someone can come into tech older? >> Absolutely. You know, we've got some amazing programs, like Amazon Returnity, that really focuses on how do we get other, you know, how do we get women that have taken some time off of work to get back into the workforce? And here's the other thing about switching careers. If I look back on my career, I started out as a civil engineer, heavy highway construction. And now I lead a sales team at the largest cloud company in the world. And there were, you know, twists and turns around there. I've always focused on how do we change and how do we continue to evolve? So it's not just focused on, you know, young women in the pipeline. It's focused on all gender and all diverse types throughout their career, and making sure that we're providing an inclusive environment for them to bring in their unique skillsets. >> Yeah, a building has good steel. It's well structured. Roads have great foundations. You know, you got the builder in you there. >> Yes. >> So I have to ask you, what's on your mind as a tech athlete, as an executive at AWS? You know, you got your huge team, big goals, the economy's got a little bit of a headwind, but still, cloud's transforming, edge is exploding. What's your outlook as you look out in the tech landscape these days and how are you thinking about it? What your plans? Can you share a little bit about what's on your mind? >> Sure. So, geez, there's so many trends that are top of mind right now. Everything from zero trust to artificial intelligence to security. We have more access to data now than ever before. So the opportunities are limitless when we think about how we can apply technology to solve some really difficult customer problems, right? Innovation sometimes feels like it's happening at a rapid pace. And I also say, you know, there are years when nothing happens, and then there's years when centuries happen. And I feel like we're kind of in those years where centuries are happening. Cloud technologies are refining sports as we know them now. There's a surge of innovation in smart energy. Everyone's supply chain is looking to transform. Custom silicon is going mainstream. And frankly, AWS's customers and partners are expecting us to come to them with a point of view on trends and on opportunities. And that's what differentiates us. (John laughs) That's what gives me goosebumps- >> I was just going to ask you that. Does that give you goosebumps? How could you not love technology with that excitement? I mean, AI, throw in AI, too. I just talked to Swami, who heads up the AI and database, and we just talked about the past 24 months, the change. And that is a century moment happening. The large language models, computer vision, more compute. Compute's booming than ever before. Who thought that was going to happen, is still happening? Massive change. So, I mean, if you're in tech, how can you not love tech? >> I know, even if you're not in tech, I think you've got to start to love tech because it gives you access to things you've never had before. And frankly, right, change is the only constant. And if you don't like change, you're going to like being irrelevant even less than you like change. So we've got to be nimble, we've got to adapt. And here's the great thing, once we figure it out, it changes all over again. And it's not something that's easy for any of us to operate. It's hard, right? It's hard learning new technology, it's hard figuring out what do I do next? But here's the secret. I think it's hard because we're doing it right. It's not hard because we're doing it wrong. It's just hard to be human and it's hard to figure out how we apply all this different technology in a way that positively impacts us, you know, economically, financially, environmentally and socially. >> And everyone's different, too. So you got to live those (mumbles). I want to get one more question in before we, my last question, which is about you and your impact. When you talk to your team, your sales, you got a large sales team, North America. And Tanuja, who you mentioned, is in EMEA, we're going to speak with her as well. You guys lead the front lines, helping customers, but also delivering the revenue to the company, which has been fantastic, by the way. So what's your message to the troops and the team out there? When you say, "Take that hill," like what is the motivational pitch, in a few sentences? What's the main North Star message in today's marketplace when you're doing that big team meeting? >> I don't know if it's just limited to a team meeting. I think this is a universal message, and the universal message for me is find your edge, whatever that may be. Whether it is the edge of what you know about artificial intelligence and neural networks or it's the edge of how do we migrate our applications to the cloud more quickly. Or it's the edge of, oh, my gosh, how do I be a better parent and still be great at work, right? Find your edge, and then sharpen it. Go to the brink of what you think is possible, and then force yourself to jump. Get involved. The world is run by the people that show up, professionally and personally. (John laughs) So show up and get started. >> Yeah as Steve Jobs once said, "The future "that everyone looks at was created "by people no smarter than you." And I love that quote. That's really there. Final question for you. I know we're tight on time, but I want to get this in. When you think about your impact on your company, AWS, and the industry, what's something you want people to remember? >> Oh, geez. I think what I want people to remember the most is it's not about what you've said, and this is a Maya Angelou quote. "It's not about what you've said to people "or what you've done, "it's about how you've made them feel." And we can all think back on leaders or we can all think back on personal moments in our lives where we felt like we belonged, where we felt like we did something amazing, where we felt loved. And those are the moments that sit with us for the rest of our lives. I want people to remember how they felt when they were part of something bigger. I want people to belong. It shouldn't be uncommon to talk about feelings at work. So I want people to feel. >> Rachel, thank you for your time. I know you're really busy and we stretched you a little bit there. Thank you so much for contributing to this wonderful day of great leaders sharing their stories. And you're an inspiration. Thanks for everything you do. We appreciate you. >> Thank you. And let's go do some more Women of the Cloud videos. >> We (laughs) got more coming. Bring those stories on. Back up the story truck. We're ready to go. Thanks so much. >> That's good. >> Thank you. >> Okay, this is theCUBE's coverage of International Women's Day. It's not just going to be March 8th. That's the big celebration day. It's going to be every quarter, more stories coming. Stay tuned at siliconangle.com and thecube.net here, with bringing all the stories. I'm John Furrier, your host. Thanks for watching. (gentle music)

Published Date : Mar 6 2023

SUMMARY :

and very impressive, inspiring, Thank you so much. and how have you approached long as you want. to going and working for, you know, and how did you handle that? and how do you work through Some of the challenges in And I'm so thankful that you don't ask and the balance highlight. And it's because you have leaders that I shared with you at re:Invent and how do you extend this opportunity And let me give you an example, right? and raise the bar of capability. contribution is neutral. than the peer next to you. "and you work out to And where do you see And one of the stats that she shared the things you mentioned there And there were, you know, twists You know, you got the and how are you thinking about it? And I also say, you know, I was just going to ask you that. And if you don't like change, And Tanuja, who you mentioned, is in EMEA, of what you know about And I love that quote. And we can all think back on leaders Rachel, thank you for your time. Women of the Cloud videos. We're ready to go. It's not just going to be March 8th.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
TelcoORGANIZATION

0.99+

RachelPERSON

0.99+

Tim CookPERSON

0.99+

Jeff FrickPERSON

0.99+

TelcosORGANIZATION

0.99+

Tanuja RanderyPERSON

0.99+

Rachel ThorntonPERSON

0.99+

AmazonORGANIZATION

0.99+

NayakiPERSON

0.99+

SanjayPERSON

0.99+

Peter BurrisPERSON

0.99+

2014DATE

0.99+

FordORGANIZATION

0.99+

TanujaPERSON

0.99+

Rachel SkaffPERSON

0.99+

Todd SkidmorePERSON

0.99+

NokiaORGANIZATION

0.99+

BarcelonaLOCATION

0.99+

JohnPERSON

0.99+

AustraliaLOCATION

0.99+

FacebookORGANIZATION

0.99+

Bob StefanskiPERSON

0.99+

Steve JobsPERSON

0.99+

Tom JoycePERSON

0.99+

Lisa MartinPERSON

0.99+

Laura CooneyPERSON

0.99+

John FurrierPERSON

0.99+

ToddPERSON

0.99+

AWSORGANIZATION

0.99+

2011DATE

0.99+

Mary CamarataPERSON

0.99+

Meg WhitmanPERSON

0.99+

IBMORGANIZATION

0.99+

TeslaORGANIZATION

0.99+

BlackberryORGANIZATION

0.99+

Coca-ColaORGANIZATION

0.99+

Palo AltoLOCATION

0.99+

Sanjay SrivastavaPERSON

0.99+

Silicon ValleyLOCATION

0.99+

BMC SoftwareORGANIZATION

0.99+

U.S.LOCATION

0.99+

SiriTITLE

0.99+

BMCORGANIZATION

0.99+

HPORGANIZATION

0.99+

MotorolaORGANIZATION

0.99+

JeffPERSON

0.99+

SamsungORGANIZATION

0.99+

Mihir ShuklaPERSON

0.99+

2023DATE

0.99+

Nayaki NayyarPERSON

0.99+

AppleORGANIZATION

0.99+

Rachel Mushahwar SkaffPERSON

0.99+

6%QUANTITY

0.99+

GoogleORGANIZATION

0.99+

Share A CokeORGANIZATION

0.99+

Joseph Nelson, Roboflow | Cube Conversation


 

(gentle music) >> Hello everyone. Welcome to this CUBE conversation here in Palo Alto, California. I'm John Furrier, host of theCUBE. We got a great remote guest coming in. Joseph Nelson, co-founder and CEO of RoboFlow hot startup in AI, computer vision. Really interesting topic in this wave of AI next gen hitting. Joseph, thanks for coming on this CUBE conversation. >> Thanks for having me. >> Yeah, I love the startup tsunami that's happening here in this wave. RoboFlow, you're in the middle of it. Exciting opportunities, you guys are in the cutting edge. I think computer vision's been talked about more as just as much as the large language models and these foundational models are merging. You're in the middle of it. What's it like right now as a startup and growing in this new wave hitting? >> It's kind of funny, it's, you know, I kind of describe it like sometimes you're in a garden of gnomes. It's like we feel like we've got this giant headstart with hundreds of thousands of people building with computer vision, training their own models, but that's a fraction of what it's going to be in six months, 12 months, 24 months. So, as you described it, a wave is a good way to think about it. And the wave is still building before it gets to its full size. So it's a ton of fun. >> Yeah, I think it's one of the most exciting areas in computer science. I wish I was in my twenties again, because I would be all over this. It's the intersection, there's so many disciplines, right? It's not just tech computer science, it's computer science, it's systems, it's software, it's data. There's so much aperture of things going on around your world. So, I mean, you got to be batting all the students away kind of trying to get hired in there, probably. I can only imagine you're hiring regiment. I'll ask that later, but first talk about what the company is that you're doing. How it's positioned, what's the market you're going after, and what's the origination story? How did you guys get here? How did you just say, hey, want to do this? What was the origination story? What do you do and how did you start the company? >> Yeah, yeah. I'll give you the what we do today and then I'll shift into the origin. RoboFlow builds tools for making the world programmable. Like anything that you see should be read write access if you think about it with a programmer's mind or legible. And computer vision is a technology that enables software to be added to these real world objects that we see. And so any sort of interface, any sort of object, any sort of scene, we can interact with it, we can make it more efficient, we can make it more entertaining by adding the ability for the tools that we use and the software that we write to understand those objects. And at RoboFlow, we've empowered a little over a hundred thousand developers, including those in half the Fortune 100 so far in that mission. Whether that's Walmart understanding the retail in their stores, Cardinal Health understanding the ways that they're helping their patients, or even electric vehicle manufacturers ensuring that they're making the right stuff at the right time. As you mentioned, it's early. Like I think maybe computer vision has touched one, maybe 2% of the whole economy and it'll be like everything in a very short period of time. And so we're focused on enabling that transformation. I think it's it, as far as I think about it, I've been fortunate to start companies before, start, sell these sorts of things. This is the last company I ever wanted to start and I think it will be, should we do it right, the world's largest in riding the wave of bringing together the disparate pieces of that technology. >> What was the motivating point of the formation? Was it, you know, you guys were hanging around? Was there some catalyst? What was the moment where it all kind of came together for you? >> You know what's funny is my co-founder, Brad and I, we were making computer vision apps for making board games more fun to play. So in 2017, Apple released AR kit, augmented reality kit for building augmented reality applications. And Brad and I are both sort of like hacker persona types. We feel like we don't really understand the technology until we build something with it and so we decided that we should make an app that if you point your phone at a Sudoku puzzle, it understands the state of the board and then it kind of magically fills in that experience with all the digits in real time, which totally ruins the game of Sudoku to be clear. But it also just creates this like aha moment of like, oh wow, like the ability for our pocket devices to understand and see the world as good or better than we can is possible. And so, you know, we actually did that as I mentioned in 2017, and the app went viral. It was, you know, top of some subreddits, top of Injure, Reddit, the hacker community as well as Product Hunt really liked it. So it actually won Product Hunt AR app of the year, which was the same year that the Tesla model three won the product of the year. So we joked that we share an award with Elon our shared (indistinct) But frankly, so that was 2017. RoboFlow wasn't incorporated as a business until 2019. And so, you know, when we made Magic Sudoku, I was running a different company at the time, Brad was running a different company at the time, and we kind of just put it out there and were excited by how many people liked it. And we assumed that other curious developers would see this inevitable future of, oh wow, you know. This is much more than just a pedestrian point your phone at a board game. This is everything can be seen and understood and rewritten in a different way. Things like, you know, maybe your fridge. Knowing what ingredients you have and suggesting recipes or auto ordering for you, or we were talking about some retail use cases of automated checkout. Like anything can be seen and observed and we presume that that would kick off a Cambrian explosion of applications. It didn't. So you fast forward to 2019, we said, well we might as well be the guys to start to tackle this sort of problem. And because of our success with board games before, we returned to making more board game solving applications. So we made one that solves Boggle, you know, the four by four word game, we made one that solves chess, you point your phone at a chess board and it understands the state of the board and then can make move recommendations. And each additional board game that we added, we realized that the tooling was really immature. The process of collecting images, knowing which images are actually going to be useful for improving model performance, training those models, deploying those models. And if we really wanted to make the world programmable, developers waiting for us to make an app for their thing of interest is a lot less efficient, less impactful than taking our tool chain and releasing that externally. And so, that's what RoboFlow became. RoboFlow became the internal tools that we used to make these game changing applications readily available. And as you know, when you give developers new tools, they create new billion dollar industries, let alone all sorts of fun hobbyist projects along the way. >> I love that story. Curious, inventive, little radical. Let's break the rules, see how we can push the envelope on the board games. That's how companies get started. It's a great story. I got to ask you, okay, what happens next? Now, okay, you realize this new tooling, but this is like how companies get built. Like they solve their own problem that they had 'cause they realized there's one, but then there has to be a market for it. So you actually guys knew that this was coming around the corner. So okay, you got your hacker mentality, you did that thing, you got the award and now you're like, okay, wow. Were you guys conscious of the wave coming? Was it one of those things where you said, look, if we do this, we solve our own problem, this will be big for everybody. Did you have that moment? Was that in 2019 or was that more of like, it kind of was obvious to you guys? >> Absolutely. I mean Brad puts this pretty effectively where he describes how we lived through the initial internet revolution, but we were kind of too young to really recognize and comprehend what was happening at the time. And then mobile happened and we were working on different companies that were not in the mobile space. And computer vision feels like the wave that we've caught. Like, this is a technology and capability that rewrites how we interact with the world, how everyone will interact with the world. And so we feel we've been kind of lucky this time, right place, right time of every enterprise will have the ability to improve their operations with computer vision. And so we've been very cognizant of the fact that computer vision is one of those groundbreaking technologies that every company will have as a part of their products and services and offerings, and we can provide the tooling to accelerate that future. >> Yeah, and the developer angle, by the way, I love that because I think, you know, as we've been saying in theCUBE all the time, developer's the new defacto standard bodies because what they adopt is pure, you know, meritocracy. And they pick the best. If it's sell service and it's good and it's got open source community around it, its all in. And they'll vote. They'll vote with their code and that is clear. Now I got to ask you, as you look at the market, we were just having this conversation on theCUBE in Barcelona at recent Mobile World Congress, now called MWC, around 5G versus wifi. And the debate was specifically computer vision, like facial recognition. We were talking about how the Cleveland Browns were using facial recognition for people coming into the stadium they were using it for ships in international ports. So the question was 5G versus wifi. My question is what infrastructure or what are the areas that need to be in place to make computer vision work? If you have developers building apps, apps got to run on stuff. So how do you sort that out in your mind? What's your reaction to that? >> A lot of the times when we see applications that need to run in real time and on video, they'll actually run at the edge without internet. And so a lot of our users will actually take their models and run it in a fully offline environment. Now to act on that information, you'll often need to have internet signal at some point 'cause you'll need to know how many people were in the stadium or what shipping crates are in my port at this point in time. You'll need to relay that information somewhere else, which will require connectivity. But actually using the model and creating the insights at the edge does not require internet. I mean we have users that deploy models on underwater submarines just as much as in outer space actually. And those are not very friendly environments to internet, let alone 5g. And so what you do is you use an edge device, like an Nvidia Jetson is common, mobile devices are common. Intel has some strong edge devices, the Movidius family of chips for example. And you use that compute that runs completely offline in real time to process those signals. Now again, what you do with those signals may require connectivity and that becomes a question of the problem you're solving of how soon you need to relay that information to another place. >> So, that's an architectural issue on the infrastructure. If you're a tactical edge war fighter for instance, you might want to have highly available and maybe high availability. I mean, these are words that mean something. You got storage, but it's not at the edge in real time. But you can trickle it back and pull it down. That's management. So that's more of a business by business decision or environment, right? >> That's right, that's right. Yeah. So I mean we can talk through some specifics. So for example, the RoboFlow actually powers the broadcaster that does the tennis ball tracking at Wimbledon. That runs completely at the edge in real time in, you know, technically to track the tennis ball and point the camera, you actually don't need internet. Now they do have internet of course to do the broadcasting and relay the signal and feeds and these sorts of things. And so that's a case where you have both edge deployment of running the model and high availability act on that model. We have other instances where customers will run their models on drones and the drone will go and do a flight and it'll say, you know, this many residential homes are in this given area, or this many cargo containers are in this given shipping yard. Or maybe we saw these environmental considerations of soil erosion along this riverbank. The model in that case can run on the drone during flight without internet, but then you only need internet once the drone lands and you're going to act on that information because for example, if you're doing like a study of soil erosion, you don't need to be real time. You just need to be able to process and make use of that information once the drone finishes its flight. >> Well I can imagine a zillion use cases. I heard of a use case interview at a company that does computer vision to help people see if anyone's jumping the fence on their company. Like, they know what a body looks like climbing a fence and they can spot it. Pretty easy use case compared to probably some of the other things, but this is the horizontal use cases, its so many use cases. So how do you guys talk to the marketplace when you say, hey, we have generative AI for commuter vision. You might know language models that's completely different animal because vision's like the world, right? So you got a lot more to do. What's the difference? How do you explain that to customers? What can I build and what's their reaction? >> Because we're such a developer centric company, developers are usually creative and show you the ways that they want to take advantage of new technologies. I mean, we've had people use things for identifying conveyor belt debris, doing gas leak detection, measuring the size of fish, airplane maintenance. We even had someone that like a hobby use case where they did like a specific sushi identifier. I dunno if you know this, but there's a specific type of whitefish that if you grew up in the western hemisphere and you eat it in the eastern hemisphere, you get very sick. And so there was someone that made an app that tells you if you happen to have that fish in the sushi that you're eating. But security camera analysis, transportation flows, plant disease detection, really, you know, smarter cities. We have people that are doing curb management identifying, and a lot of these use cases, the fantastic thing about building tools for developers is they're a creative bunch and they have these ideas that if you and I sat down for 15 minutes and said, let's guess every way computer vision can be used, we would need weeks to list all the example use cases. >> We'd miss everything. >> And we'd miss. And so having the community show us the ways that they're using computer vision is impactful. Now that said, there are of course commercial industries that have discovered the value and been able to be out of the gate. And that's where we have the Fortune 100 customers, like we do. Like the retail customers in the Walmart sector, healthcare providers like Medtronic, or vehicle manufacturers like Rivian who all have very difficult either supply chain, quality assurance, in stock, out of stock, anti-theft protection considerations that require successfully making sense of the real world. >> Let me ask you a question. This is maybe a little bit in the weeds, but it's more developer focused. What are some of the developer profiles that you're seeing right now in terms of low-hanging fruit applications? And can you talk about the academic impact? Because I imagine if I was in school right now, I'd be all over it. Are you seeing Master's thesis' being worked on with some of your stuff? Is the uptake in both areas of younger pre-graduates? And then inside the workforce, What are some of the devs like? Can you share just either what their makeup is, what they work on, give a little insight into the devs you're working with. >> Leading developers that want to be on state-of-the-art technology build with RoboFlow because they know they can use the best in class open source. They know that they can get the most out of their data. They know that they can deploy extremely quickly. That's true among students as you mentioned, just as much as as industries. So we welcome students and I mean, we have research grants that will regularly support for people to publish. I mean we actually have a channel inside our internal slack where every day, more student publications that cite building with RoboFlow pop up. And so, that helps inspire some of the use cases. Now what's interesting is that the use case is relatively, you know, useful or applicable for the business or the student. In other words, if a student does a thesis on how to do, we'll say like shingle damage detection from satellite imagery and they're just doing that as a master's thesis, in fact most insurance businesses would be interested in that sort of application. So, that's kind of how we see uptick and adoption both among researchers who want to be on the cutting edge and publish, both with RoboFlow and making use of open source tools in tandem with the tool that we provide, just as much as industry. And you know, I'm a big believer in the philosophy that kind of like what the hackers are doing nights and weekends, the Fortune 500 are doing in a pretty short order period of time and we're experiencing that transition. Computer vision used to be, you know, kind of like a PhD, multi-year investment endeavor. And now with some of the tooling that we're working on in open source technologies and the compute that's available, these science fiction ideas are possible in an afternoon. And so you have this idea of maybe doing asset management or the aerial observation of your shingles or things like this. You have a few hundred images and you can de-risk whether that's possible for your business today. So there's pretty broad-based adoption among both researchers that want to be on the state of the art, as much as companies that want to reduce the time to value. >> You know, Joseph, you guys and your partner have got a great front row seat, ground floor, presented creation wave here. I'm seeing a pattern emerging from all my conversations on theCUBE with founders that are successful, like yourselves, that there's two kind of real things going on. You got the enterprises grabbing the products and retrofitting into their legacy and rebuilding their business. And then you have startups coming out of the woodwork. Young, seeing greenfield or pick a specific niche or focus and making that the signature lever to move the market. >> That's right. >> So can you share your thoughts on the startup scene, other founders out there and talk about that? And then I have a couple questions for like the enterprises, the old school, the existing legacy. Little slower, but the startups are moving fast. What are some of the things you're seeing as startups are emerging in this field? >> I think you make a great point that independent of RoboFlow, very successful, especially developer focused businesses, kind of have three customer types. You have the startups and maybe like series A, series B startups that you're building a product as fast as you can to keep up with them, and they're really moving just as fast as as you are and pulling the product out at you for things that they need. The second segment that you have might be, call it SMB but not enterprise, who are able to purchase and aren't, you know, as fast of moving, but are stable and getting value and able to get to production. And then the third type is enterprise, and that's where you have typically larger contract value sizes, slower moving in terms of adoption and feedback for your product. And I think what you see is that successful companies balance having those three customer personas because you have the small startups, small fast moving upstarts that are discerning buyers who know the market and elect to build on tooling that is best in class. And so you basically kind of pass the smell test of companies who are quite discerning in their purchases, plus are moving so quick they're pulling their product out of you. Concurrently, you have a product that's enterprise ready to service the scalability, availability, and trust of enterprise buyers. And that's ultimately where a lot of companies will see tremendous commercial success. I mean I remember seeing the Twilio IPO, Uber being like a full 20% of their revenue, right? And so there's this very common pattern where you have the ability to find some of those upstarts that you make bets on, like the next Ubers of the world, the smaller companies that continue to get developed with the product and then the enterprise whom allows you to really fund the commercial success of the business, and validate the size of the opportunity in market that's being creative. >> It's interesting, there's so many things happening there. It's like, in a way it's a new category, but it's not a new category. It becomes a new category because of the capabilities, right? So, it's really interesting, 'cause that's what you're talking about is a category, creating. >> I think developer tools. So people often talk about B to B and B to C businesses. I think developer tools are in some ways a third way. I mean ultimately they're B to B, you're selling to other businesses and that's where your revenue's coming from. However, you look kind of like a B to C company in the ways that you measure product adoption and kind of go to market. In other words, you know, we're often tracking the leading indicators of commercial success in the form of usage, adoption, retention. Really consumer app, traditionally based metrics of how to know you're building the right stuff, and that's what product led growth companies do. And then you ultimately have commercial traction in a B to B way. And I think that that actually kind of looks like a third thing, right? Like you can do these sort of funny zany marketing examples that you might see historically from consumer businesses, but yet you ultimately make your money from the enterprise who has these de-risked high value problems you can solve for them. And I selfishly think that that's the best of both worlds because I don't have to be like Evan Spiegel, guessing the next consumer trend or maybe creating the next consumer trend and catching lightning in a bottle over and over again on the consumer side. But I still get to have fun in our marketing and make sort of fun, like we're launching the world's largest game of rock paper scissors being played with computer vision, right? Like that's sort of like a fun thing you can do, but then you can concurrently have the commercial validation and customers telling you the things that they need to be built for them next to solve commercial pain points for them. So I really do think that you're right by calling this a new category and it really is the best of both worlds. >> It's a great call out, it's a great call out. In fact, I always juggle with the VC. I'm like, it's so easy. Your job is so easy to pick the winners. What are you talking about its so easy? I go, just watch what the developers jump on. And it's not about who started, it could be someone in the dorm room to the boardroom person. You don't know because that B to C, the C, it's B to D you know? You know it's developer 'cause that's a human right? That's a consumer of the tool which influences the business that never was there before. So I think this direct business model evolution, whether it's media going direct or going direct to the developers rather than going to a gatekeeper, this is the reality. >> That's right. >> Well I got to ask you while we got some time left to describe, I want to get into this topic of multi-modality, okay? And can you describe what that means in computer vision? And what's the state of the growth of that portion of this piece? >> Multi modality refers to using multiple traditionally siloed problem types, meaning text, image, video, audio. So you could treat an audio problem as only processing audio signal. That is not multimodal, but you could use the audio signal at the same time as a video feed. Now you're talking about multi modality. In computer vision, multi modality is predominantly happening with images and text. And one of the biggest releases in this space is actually two years old now, was clip, contrastive language image pre-training, which took 400 million image text pairs and basically instead of previously when you do classification, you basically map every single image to a single class, right? Like here's a bunch of images of chairs, here's a bunch of images of dogs. What clip did is used, you can think about it like, the class for an image being the Instagram caption for the image. So it's not one single thing. And by training on understanding the corpora, you basically see which words, which concepts are associated with which pixels. And this opens up the aperture for the types of problems and generalizability of models. So what does this mean? This means that you can get to value more quickly from an existing trained model, or at least validate that what you want to tackle with a computer vision, you can get there more quickly. It also opens up the, I mean. Clip has been the bedrock of some of the generative image techniques that have come to bear, just as much as some of the LLMs. And increasingly we're going to see more and more of multi modality being a theme simply because at its core, you're including more context into what you're trying to understand about the world. I mean, in its most basic sense, you could ask yourself, if I have an image, can I know more about that image with just the pixels? Or if I have the image and the sound of when that image was captured or it had someone describe what they see in that image when the image was captured, which one's going to be able to get you more signal? And so multi modality helps expand the ability for us to understand signal processing. >> Awesome. And can you just real quick, define clip for the folks that don't know what that means? >> Yeah. Clip is a model architecture, it's an acronym for contrastive language image pre-training and like, you know, model architectures that have come before it captures the almost like, models are kind of like brands. So I guess it's a brand of a model where you've done these 400 million image text pairs to match up which visual concepts are associated with which text concepts. And there have been new releases of clip, just at bigger sizes of bigger encoding's, of longer strings of texture, or larger image windows. But it's been a really exciting advancement that OpenAI released in January, 2021. >> All right, well great stuff. We got a couple minutes left. Just I want to get into more of a company-specific question around culture. All startups have, you know, some sort of cultural vibe. You know, Intel has Moore's law doubles every whatever, six months. What's your culture like at RoboFlow? I mean, if you had to describe that culture, obviously love the hacking story, you and your partner with the games going number one on Product Hunt next to Elon and Tesla and then hey, we should start a company two years later. That's kind of like a curious, inventing, building, hard charging, but laid back. That's my take. How would you describe the culture? >> I think that you're right. The culture that we have is one of shipping, making things. So every week each team shares what they did for our customers on a weekly basis. And we have such a strong emphasis on being better week over week that those sorts of things compound. So one big emphasis in our culture is getting things done, shipping, doing things for our customers. The second is we're an incredibly transparent place to work. For example, how we think about giving decisions, where we're progressing against our goals, what problems are biggest and most important for the company is all open information for those that are inside the company to know and progress against. The third thing that I'd use to describe our culture is one that thrives with autonomy. So RoboFlow has a number of individuals who have founded companies before, some of which have sold their businesses for a hundred million plus upon exit. And the way that we've been able to attract talent like that is because the problems that we're tackling are so immense, yet individuals are able to charge at it with the way that they think is best. And this is what pairs well with transparency. If you have a strong sense of what the company's goals are, how we're progressing against it, and you have this ownership mentality of what can I do to change or drive progress against that given outcome, then you create a really healthy pairing of, okay cool, here's where the company's progressing. Here's where things are going really well, here's the places that we most need to improve and work on. And if you're inside that company as someone who has a preponderance to be a self-starter and even a history of building entire functions or companies yourself, then you're going to be a place where you can really thrive. You have the inputs of the things where we need to work on to progress the company's goals. And you have the background of someone that is just necessarily a fast moving and ambitious type of individual. So I think the best way to describe it is a transparent place with autonomy and an emphasis on getting things done. >> Getting shit done as they say. Getting stuff done. Great stuff. Hey, final question. Put a plug out there for the company. What are you going to hire? What's your pipeline look like for people? What jobs are open? I'm sure you got hiring all around. Give a quick plug for the company what you're looking for. >> I appreciate you asking. Basically you're either building the product or helping customers be successful with the product. So in the building product category, we have platform engineering roles, machine learning engineering roles, and we're solving some of the hardest and most impactful problems of bringing such a groundbreaking technology to the masses. And so it's a great place to be where you can kind of be your own user as an engineer. And then if you're enabling people to be successful with the products, I mean you're working in a place where there's already such a strong community around it and you can help shape, foster, cultivate, activate, and drive commercial success in that community. So those are roles that tend themselves to being those that build the product for developer advocacy, those that are account executives that are enabling our customers to realize commercial success, and even hybrid roles like we call it field engineering, where you are a technical resource to drive success within customer accounts. And so all this is listed on roboflow.com/careers. And one thing that I actually kind of want to mention John that's kind of novel about the thing that's working at RoboFlow. So there's been a lot of discussion around remote companies and there's been a lot of discussion around in-person companies and do you need to be in the office? And one thing that we've kind of recognized is you can actually chart a third way. You can create a third way which we call satellite, which basically means people can work from where they most like to work and there's clusters of people, regular onsite's. And at RoboFlow everyone gets, for example, $2,500 a year that they can use to spend on visiting coworkers. And so what's sort of organically happened is team numbers have started to pull together these resources and rent out like, lavish Airbnbs for like a week and then everyone kind of like descends in and works together for a week and makes and creates things. And we call this lighthouses because you know, a lighthouse kind of brings ships into harbor and we have an emphasis on shipping. >> Yeah, quality people that are creative and doers and builders. You give 'em some cash and let the self-governing begin, you know? And like, creativity goes through the roof. It's a great story. I think that sums up the culture right there, Joseph. Thanks for sharing that and thanks for this great conversation. I really appreciate it and it's very inspiring. Thanks for coming on. >> Yeah, thanks for having me, John. >> Joseph Nelson, co-founder and CEO of RoboFlow. Hot company, great culture in the right place in a hot area, computer vision. This is going to explode in value. The edge is exploding. More use cases, more development, and developers are driving the change. Check out RoboFlow. This is theCUBE. I'm John Furrier, your host. Thanks for watching. (gentle music)

Published Date : Mar 3 2023

SUMMARY :

Welcome to this CUBE conversation You're in the middle of it. And the wave is still building the company is that you're doing. maybe 2% of the whole economy And as you know, when you it kind of was obvious to you guys? cognizant of the fact that I love that because I think, you know, And so what you do is issue on the infrastructure. and the drone will go and the marketplace when you say, in the sushi that you're eating. And so having the And can you talk about the use case is relatively, you know, and making that the signature What are some of the things you're seeing and pulling the product out at you because of the capabilities, right? in the ways that you the C, it's B to D you know? And one of the biggest releases And can you just real quick, and like, you know, I mean, if you had to like that is because the problems Give a quick plug for the place to be where you can the self-governing begin, you know? and developers are driving the change.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
BradPERSON

0.99+

JosephPERSON

0.99+

Joseph NelsonPERSON

0.99+

January, 2021DATE

0.99+

John FurrierPERSON

0.99+

MedtronicORGANIZATION

0.99+

WalmartORGANIZATION

0.99+

2019DATE

0.99+

UberORGANIZATION

0.99+

AppleORGANIZATION

0.99+

JohnPERSON

0.99+

400 millionQUANTITY

0.99+

Evan SpiegelPERSON

0.99+

24 monthsQUANTITY

0.99+

2017DATE

0.99+

RoboFlowORGANIZATION

0.99+

15 minutesQUANTITY

0.99+

RivianORGANIZATION

0.99+

12 monthsQUANTITY

0.99+

20%QUANTITY

0.99+

Cardinal HealthORGANIZATION

0.99+

Palo Alto, CaliforniaLOCATION

0.99+

BarcelonaLOCATION

0.99+

WimbledonEVENT

0.99+

roboflow.com/careersOTHER

0.99+

firstQUANTITY

0.99+

second segmentQUANTITY

0.99+

each teamQUANTITY

0.99+

six monthsQUANTITY

0.99+

bothQUANTITY

0.99+

IntelORGANIZATION

0.99+

both worldsQUANTITY

0.99+

2%QUANTITY

0.99+

two years laterDATE

0.98+

Mobile World CongressEVENT

0.98+

UbersORGANIZATION

0.98+

third wayQUANTITY

0.98+

oneQUANTITY

0.98+

a weekQUANTITY

0.98+

Magic SudokuTITLE

0.98+

secondQUANTITY

0.98+

NvidiaORGANIZATION

0.98+

SudokuTITLE

0.98+

MWCEVENT

0.97+

todayDATE

0.97+

billion dollarQUANTITY

0.97+

one single thingQUANTITY

0.97+

over a hundred thousand developersQUANTITY

0.97+

fourQUANTITY

0.97+

thirdQUANTITY

0.96+

ElonORGANIZATION

0.96+

third thingQUANTITY

0.96+

TeslaORGANIZATION

0.96+

JetsonCOMMERCIAL_ITEM

0.96+

ElonPERSON

0.96+

RoboFlowTITLE

0.96+

InstagramORGANIZATION

0.95+

TwilioORGANIZATION

0.95+

twentiesQUANTITY

0.95+

Product Hunt ARTITLE

0.95+

MoorePERSON

0.95+

both researchersQUANTITY

0.95+

one thingQUANTITY

0.94+

Greg Manganello Fuijitsu, Fujitsu & Ryan McMeniman, Dell Technologies | MWC Barcelona 2023


 

>> Announcer: TheCUBE's live coverage is made possible by funding from Dell Technologies, creating technologies that drive human progress. (pleasant music) >> We're back. This is Dave Vellante for our live coverage of MWC '23 SiliconANGLE's wall to wall, four-day coverage. We're here with Greg Manganello, who's from Fuijitsu. He's the global head of network services business unit at the company. And Ryan McMeniman is the director of product management for the open telecom ecosystem. We've been talking about that all week, how this ecosystem has opened up. Ryan's with Dell Technologies. Gents, welcome to theCUBE. >> Thank you, Dave. >> Thank you. >> Good to be here. >> Greg, thanks for coming on. Let's hear Fuijitsu's story. We haven't heard much at this event from Fuijitsu. I'm sure you got a big presence, but welcome to theCUBE. Tell us your angle. >> Thanks very much. So Fuijitsu, we're big O-RAN advocates, open radio access network advocates. We're one of the leading founders of that open standard. We're also members of the Open RAN Policy Coalition. I'm a board member there. We're kind of all in on OpenRAN. The reason is it gives operators choices and much more vendor diversity and therefore a lot of innovation when they build out their 5G networks. >> And so as an entry point for Dell as well, I mean obviously you guys make a lot of hay with servers and storage and other sort of hardware, but O-RAN is just this disruptive change to this industry, but it's also compute intensive. So from Dell's perspective, what are the challenges of getting customers to the carriers to adopt O-RAN? How do you de-risk it for them? >> Right, I mean O-RAN really needs to be seen as a choice, right? And that choice comes with building out an ecosystem of partners, right? Working with people like Fuijitsu and others helps us build systems that the carriers can rely upon. Otherwise, it looks like another science experiment, a sandbox, and it's really anything but that. >> So what specifically are you guys doing together? Are you doing integrations, reference architectures engineered systems, all of the above? >> Yeah, so I think it's a little bit of all of the above. So we've announced our cooperation, so the engineering teams are linked, and that we're combining our both sweet spots together from Fuijitsu's virtual CU/DU, and our OpenRAN radios, and Dell's platforms and integration capabilities. And together we're offering a pre-integrated bundle to operators to reduce that risk and kind of help overcome some of the startup obstacles by shrinking the integration cost. >> So you've got Greenfield customers, that's pretty straightforward, white sheet of paper, go, go disrupt. And then there's traditional carriers, got 4G and 5G networks, and sort of hybrid if you will, and this integration there. Where do you see the action now? I presume it's Greenfield today, but isn't it inevitable that the traditional carriers have to go open? >> It is, a couple of different ways that they need to go and they want to go might be power consumption, it might be the cloudification of their network. They're going to have different reasons for doing it. And I think we have to make sure that when we work on collaborations like we do with Fuijitsu, we have to look at all of those vectors. What is it that somebody maybe here in Europe is dealing with high gas prices, high energy prices, in the U.S. or wherever it's expansion. They're going to be different justifications for it. >> Yeah, so power must be an increasing component of the operating expense, with energy costs up, and it's a power hungry environment. So how does OpenRAN solve that problem? >> So that's a great question. So by working together we can really optimize the configurations. So on the Fuijitsu side, our radios are multi-band and highly compact and super energy efficient so that the TCO for the carrier is much, much lower. And then we've also announced on the rApp side power savings, energy savings applications, which are really sophisticated AI enabled apps that can switch off the radio based upon traffic prediction models and we can save the operator 30% on their energy bill. That's a big number. >> And that intelligence that lives in the, does it live in the RIC, is it in the brain? >> In the app right above the RIC, absolutely. >> Okay, so it's a purpose-built app to deal with that. >> It's multi-vendor app, it can sit on anybody's O-RAN system. And one of the beauties of O-RAN is there is that open architecture, so that even if Dell and Fuijitsu only sell part of the, or none of the system, an app can be selected from any vendor including Fuijitsu. So that's one of the benefits of whoever's got the best idea, the best cost performance, the best energy performance, customers can really be enabled to make the choice and continue to make choices, not just way back at RFP time, but throughout their life cycle they can keep making choices. And so that's really meaning that, hey, if we miss the buying cycle then we're closed out for 5 or 10 years. No, it's constantly being reevaluated, and that's really exciting, the whole ecosystem. But what we really want to do is make sure we partner together with key partners, Dell and Fuijitsu, such that the customer, when they do select us they see a bundle, not just every person for themselves. It de-risks it. And we get a lot of that integration headache out of the way before we launch it. >> I think that's what's different. We've been talking about how we've kind of seen this move before, in the nineties we saw the move from the mainframe vertical stack to the horizontal stack. We talked about that, but there are real differences because back then you had, I don't know, five components of the stack and there was no integration, and even converged infrastructure was kind of bolts that brought that together. And then over time it's become engineered systems. When you talk to customers, Ryan, is the conversation today mostly TCO? Is it how to get the reliability and quality of service of traditional stacks? Where's the conversation today? >> Yeah, it's the flip side of choice, which is how do you make sure you have that reliability and that security to ensure that the full stack isn't just integrated, but it lives through that whole life cycle management. What are, if you're bringing in another piece, an rApp or an xApp, how do you actually make sure that it works together as a group? Because if you don't have that kind of assurance how can you actually guarantee that O-RAN in and of itself is going to perform better than a traditional RAN system? So overcoming that barrier requires partnerships and integration activity. That is an investment on the parts of our companies, but also the operators need to look back at us and say, yeah, that work has been done, and I trust as trusted advisors for the operators that that's been done. And then we can go validate it. >> Help our audience understand it. At what point in time do you feel that from a TCO perspective there'll be parity, or in my opinion it doesn't even have to be equal. It has to be close enough. And I don't know what that close enough is because the other benefits of openness, the innovation, so there's that piece of it as the cost piece and then there is the reliability. And I would say the same thing. It's got to be, well, maybe good enough is not good enough in this world, but maybe it is for some use cases. So really my question is around adoption and what are those factors that are going to affect adoption and when can we expect them to be? >> It's a good question, Dave, and what I would say is that the closed RAN vendors are making incremental improvements. And if you think in a snapshot there might be one answer, but if you think in kind of a flow model, a river over time, our O-RAN like-minded people are on a monster innovation curve. I mean the slope of the curve is huge. So in the OpenRAN policy coalition, 60 like-minded companies working together going north, and we're saying that let's bring all the innovation together, so you can say TCO, reliability, but we're bringing the innovation curve of software and integration curve from silicon and integration from system vendors all together to really out-innovate everybody else by working together. So that's the-- >> I like that curve analogy, Greg 'cause okay, you got the ogive or S curve, and you're saying that O-RAN is entering or maybe even before the steep part of the S curve, so you're going to go hyperbolic, whereas the traditional vendors are maybe trying to squeeze a little bit more out of the lemon. >> 1, 2%, and we're making 30% or more quantum leaps at a time every innovation. So what we tell customers is you can measure right now, but if you just do the time-based competition model, as an organization, as a group of us, we're going to be ahead. >> Is it a Moore's law innovation curve or is it actually faster because you've got the combinatorial factors of silicon, certain telco technologies, other integration software. Is it actually steeper than maybe historical Moore's law? >> I think it's steeper. I don't know Ryan's opinion, but I think it's steeper because Moore's law, well-known in silicon, and it's reaching five nanometers and more and more innovations. But now we're talking about AI software and machine learning as well as the system and device vendors. So when all that's combined, what is that? So that's why I think we're at an O-RAN conference today. I'm not sure we're at MWC. >> Well, it's true. It's funny they changed the name from Mobile World Congress and that was never really meant to be a consumer show, but these things change that, right? And so I think it's appropriate MWC because we're seeing really deep enterprise technology now enter, so that's your sweet spot, isn't it? >> It really is. But I think in some ways it's the path to that price performance parity, which we saw in IT a long time ago, making its way into telecom is there, but it doesn't work unless everybody is on board. And that involves players like this and even smaller companies and innovative startups, which we really haven't seen in this space for some time. And we've been having them at the Dell booth all week long. And there's really interesting stuff like Greg said, AI, ML, optimization and efficiency, which is exciting. And that's where O-RAN can also benefit the Industry. >> And as I say, there are other differences to your advantage. You've got engineered systems or you've been through that in enterprise IT, kind of learned how to do that. But you've also got the cloud, public cloud for experimentation, so you can fail cheaply, and you got AI, right, which is, really didn't have AI in the nineties. You had it, but nobody used it. And now you're like, everybody's using ChatGPT. >> Right, but now what's exciting, and the other thing that Ryan and we are working on together is linking our labs together because it's not about the first time system integration and connecting the hoses together, and okay, there it worked, but it's about the ongoing life cycle management of all the updates and upgrades. And by using Dell's OTEL Lab and Fuijitsu's MITC lab and linking them together, now we really have a way of giving operators confidence that as we bring out the new innovations it's battle tested by two organizations. And so two logos coming together and saying, we've looked at it from our different angles and then this is battle tested. There's a lot of value there. >> I think the labs are key. >> But it's interesting, the point there is by tying labs together, there's an acknowledged skills gap as we move into this O-RAN world that operators are looking to us and probably Fuijitsu saying, help our team understand how to thrive in this new environment because we're going from closed systems to open systems where they actually again, have more choice and more ability to be flexible. >> Yeah, if you could take away that plumbing, even though they're good plumbers. All right guys, we got to go. Thanks so much for coming on theCUBE. >> Thank you much. >> It's great to have you. >> Appreciate it, Dave. >> Okay, keep it right there. Dave Vellante, Lisa Martin, and Dave Nicholson will be back from the Fira in Barcelona on theCUBE. Keep it right there. (pleasant music)

Published Date : Feb 28 2023

SUMMARY :

that drive human progress. And Ryan McMeniman is the I'm sure you got a big presence, We're also members of the and other sort of hardware, the carriers can rely upon. and that we're combining our that the traditional it might be the cloudification of the operating expense, so that the TCO for the In the app right above app to deal with that. Dell and Fuijitsu, such that the customer, in the nineties we saw the move but also the operators of it as the cost piece that the closed RAN vendors or maybe even before the and we're making 30% or more quantum leaps combinatorial factors of silicon, and it's reaching five nanometers and that was never really And that involves players like this and you got AI, right, and connecting the hoses together, and more ability to be flexible. Yeah, if you could Martin, and Dave Nicholson

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Greg ManganelloPERSON

0.99+

Lisa MartinPERSON

0.99+

DavePERSON

0.99+

Dave ZeigenfussPERSON

0.99+

Dave VellantePERSON

0.99+

Ryan McMenimanPERSON

0.99+

BJ GardnerPERSON

0.99+

BJPERSON

0.99+

Dave NicholsonPERSON

0.99+

February of 2019DATE

0.99+

GregPERSON

0.99+

November of 2018DATE

0.99+

David ZeigenfussPERSON

0.99+

EuropeLOCATION

0.99+

FactionORGANIZATION

0.99+

AmazonORGANIZATION

0.99+

PhiladelphiaLOCATION

0.99+

AtlantaLOCATION

0.99+

New JerseyLOCATION

0.99+

AWSORGANIZATION

0.99+

Palo AltoLOCATION

0.99+

FuijitsuORGANIZATION

0.99+

September 17thDATE

0.99+

RyanPERSON

0.99+

5QUANTITY

0.99+

two floorsQUANTITY

0.99+

30%QUANTITY

0.99+

DellORGANIZATION

0.99+

DavidPERSON

0.99+

Stu MinimanPERSON

0.99+

Dell TechnologiesORGANIZATION

0.99+

one floorQUANTITY

0.99+

JuneDATE

0.99+

two guestsQUANTITY

0.99+

10 yearsQUANTITY

0.99+

summer of 2019DATE

0.99+

U.S.LOCATION

0.99+

FirstQUANTITY

0.99+

OTEL LabORGANIZATION

0.99+

two organizationsQUANTITY

0.99+

Pennsylvania LumbermensORGANIZATION

0.99+

125 yearsQUANTITY

0.99+

Atlanta, GeorgiaLOCATION

0.99+

one answerQUANTITY

0.99+

BarcelonaLOCATION

0.99+

Open RAN Policy CoalitionORGANIZATION

0.99+

two logosQUANTITY

0.99+

2007-2008DATE

0.99+

two foldQUANTITY

0.99+

four-dayQUANTITY

0.99+

100QUANTITY

0.99+

Evan Touger, Prowess | Prowess Benchmark Testing Results for AMD EPYC Genoa on Dell Servers


 

(upbeat music) >> Welcome to theCUBE's continuing coverage of AMD's fourth generation EPYC launch. I've got a special guest with me today from Prowess Consulting. His name is Evan Touger, he's a senior technical writer with Prowess. Evan, welcome. >> Hi, great to be here. Thanks. >> So tell us a little bit about Prowess, what does Prowess do? >> Yeah, we're a consulting firm. We've been around for quite a few years, based in Bellevue, Washington. And we do quite a few projects with folks from Dell to a lot of other companies, and dive in. We have engineers, writers, production folks, so pretty much end-to-end work, doing research testing and writing, and diving into different technical topics. >> So you- in this case what we're going to be talking about is some validation studies that you've done, looking at Dell PowerEdge servers that happened to be integrating in fourth-gen EPYC processors from AMD. What were the specific workloads that you were focused on in this study? >> Yeah, this particular one was honing in on virtualization, right? You know, obviously it's pretty much ubiquitous in the industry, everybody works with virtualization in one way or another. So just getting optimal performance for virtualization was critical, or is critical for most businesses. So we just wanted to look a little deeper into, you know, how do companies evaluate that? What are they going to use to make the determination for virtualization performance as it relates to their workloads? So that led us to this study, where we looked at some benchmarks, and then went a little deeper under the hood to see what led to the results that we saw from those benchmarks. >> So when you say virtualization, does that include virtual desktop infrastructure or are we just talking about virtual machines in general? >> No, it can include both. We looked at VMs, thinking in terms of what about database performance when you're working in VMs, all the way through to VDI and companies like healthcare organizations and so forth, where it's common to roll out lots of virtual desktops, and performance is critical there as well. >> Okay, you alluded to, sort of, looking under the covers to see, you know, where these performance results were coming from. I assume what you're referencing is the idea that it's not just all about the CPU when you talk about a system. Am I correct in that assumption and- >> Yeah, absolutely. >> What can you tell us? >> Well, you know, for companies evaluating, there's quite a bit to consider, obviously. So they're looking at not just raw performance but power performance. So that was part of it, and then what makes up that- those factors, right? So certainly CPU is critical to that, but then other things come into play, like the RAID controllers. So we looked a little bit there. And then networking, of course can be critical for configurations that are relying on good performance on their networks, both in terms of bandwidth and just reducing latency overall. So interconnects as well would be a big part of that. So with, with PCIe gen 5 or 5.0 pick your moniker. You know in this- in the infrastructure game, we're often playing a game of whack-a-mole, looking for the bottlenecks, you know, chasing the bottlenecks. PCIe 5 opens up a lot of bandwidth for memory and things like RAID controllers and NICs. I mean, is the bottleneck now just our imagination, Evan, have we reached a point where there are no bottlenecks? What did you see when you ran these tests? What, you know, what were you able to stress to a point where it was saturated, if anything? >> Yeah. Well, first of all, we didn't- these are particular tests were ones that we looked at industry benchmarks, and we were examining in particular to see where world records were set. And so we uncovered a few specific servers, PowerEdge servers that were pretty key there, or had a lot of- were leading in the category in a lot of areas. So that's what led us to then, okay, well why is that? What's in these servers, and what's responsible for that? So in a lot of cases they, we saw these results even with, you know, gen 4, PCIe gen 4. So there were situations where clearly there was benefit from faster interconnects and, and especially NVMe for RAID, you know, for supporting NVMe and SSDs. But all of that just leads you to the understanding that it means it can only get better, right? So going from gen 4 to- if you're seeing great results on gen 4, then gen 5 is probably going to be, you know, blow that away. >> And in this case, >> It'll be even better. >> In this case, gen 5 you're referencing PCIe >> PCIe right. Yeah, that's right. >> (indistinct) >> And then the same thing with EPYC actually holds true, some of the records, we saw records set for both 3rd and 4th gen, so- with EPYC, so the same thing there. Anywhere there's a record set on the 3rd gen, you know, makes us really- we're really looking forward to going back and seeing over the next few months, which of those records fall and are broken by newer generation versions of these servers, once they actually wrap to the newer generation processors. You know, based on, on what we're seeing for the- for what those processors can do, not only in. >> (indistinct) Go ahead. >> Sorry, just want to say, not only in terms of raw performance, but as I mentioned before, the power performance, 'cause they're very efficient, and that's a really critical consideration, right? I don't think you can overstate that for companies who are looking at, you know, have to consider expenditures and power and cooling and meeting sustainability goals and so forth. So that was really an important category in terms of what we looked at, was that power performance, not just raw performance. >> Yeah, I want to get back to that, that's a really good point. We should probably give credit where credit is due. Which Dell PowerEdge servers are we talking about that were tested and what did those interconnect components look like from a (indistinct) perspective? >> Yeah, so we focused primarily on a couple benchmarks that seemed most important for real world performance results for virtualization. TPCx-V and VMmark 3.x. the TPCx-V, that's where we saw PowerEdge R7525, R7515. They both had top scores in different categories there. That benchmark is great for looking at database workloads in particular, right? Running in virtualization settings. And then the VMmark 3.x was critical. We saw good, good results there for the 7525 and the R 7515 as well as the R 6525, in that one and that included, sorry, just checking notes to see what- >> Yeah, no, no, no, no, (indistinct) >> Included results for power performance, as I mentioned earlier, that's where we could see that. So we kind of, we saw this in a range of servers that included both 3rd gen AMD EPYC and newer 4th gen as well as I mentioned. The RAID controllers were critical in the TPCx-V. I don't think that came into play in the VM mark test, but they were definitely part of the TPCx-V benchmarks. So that's where the RAID controllers would make a difference, right? And in those tests, I think they're using PERC 11. So, you know, the newer PERC 12 controllers there, again we'd expect >> (indistinct) >> To see continued, you know, gains in newer benchmarks. That's what we'll be looking for over the next several months. >> Yeah. So I think if I've got my Dell nomenclature down, performance, no no, PowerEdge RAID Controller, is that right? >> Exactly, yeah, there you go. Right? >> With Broadcom, you know, powered by Broadcom. >> That's right. There you go. Yeah. Isn't the Dell naming scheme there PERC? >> Yeah, exactly, exactly. Back to your comment about power. So you've had a chance to take a pretty deep look at the latest stuff coming out. You're confident that- 'cause some of these servers are going to be more expensive than previous generation. Now a server is not a server is not a server, but some are awakening to the idea that there might be some sticker shock. You're confident that the bang for your buck, the bang for your kilowatt hour is actually going to be beneficial. We're actually making things better, faster, stronger, cheaper, more energy efficient. We're continuing on that curve? >> That's what I would expect to see, right. I mean, of course can't speak to to pricing without knowing, you know, where the dollars are going to land on the servers. But I would expect to see that because you're getting gains in a couple of ways. I mean, one, if the performance increases to the point where you can run more VMs, right? Get more performance out of your VMs and run more total VMs or more BDIs, then there's obviously a good, you know, payback on your investment there. And then as we were discussing earlier, just the power performance ratio, right? So if you're bringing down your power and cooling costs, if these machines are just more efficient overall, then you should see some gains there as well. So, you know, I think the key is looking at what's the total cost of ownership over, you know, a standard like a three-year period or something and what you're going to get out of it for your number of sessions, the performance for the sessions, and the overall efficiency of the machines. >> So just just to be clear with these Dell PowerEdge servers, you were able to validate world record performance. But this isn't, if you, if you look at CPU architecture, PCIe bus architecture, memory, you know, the class of memory, the class of RAID controller, the class of NIC. Those were not all state of the art in terms of at least what has been recently announced. Correct? >> Right. >> Because (indistinct) the PCI 4.0, So to your point- world records with that, you've got next-gen RAID controllers coming out, and NICs coming out. If the motherboard was PCIe 5, with commensurate memory, all of those things are getting better. >> Exactly, right. I mean you're, you're really you're just eliminating bandwidth constraints latency constraints, you know, all of that should be improved. NVMe, you know, just collectively all these things just open the doors, you know, letting more bandwidth through reducing all the latency. Those are, those are all pieces of the puzzle, right? That come together and it's all about finding the weakest link and eliminating it. And I think we're reaching the point where we're removing the biggest constraints from the systems. >> Okay. So I guess is it fair to summarize to say that with this infrastructure that you tested, you were able to set world records. This, during this year, I mean, over the next several months, things are just going to get faster and faster and faster and faster. >> That's what I would anticipate, exactly, right. If they're setting world records with these machines before some of the components are, you know, the absolute latest, it seems to me we're going to just see a continuing trend there, and more and more records should fall. So I'm really looking forward to seeing how that goes, 'cause it's already good and I think the return on investment is pretty good there. So I think it's only going to get better as these roll out. >> So let me ask you a question that's a little bit off topic. >> Okay. >> Kind of, you know, we see these gains, you know, we're all familiar with Moore's Law, we're familiar with, you know, the advancements in memory and bus architecture and everything else. We just covered SuperCompute 2022 in Dallas a couple of weeks ago. And it was fascinating talking to people about advances in AI that will be possible with new architectures. You know, most of these supercomputers that are running right now are n minus 1 or n minus 2 infrastructure, you know, they're, they're, they're PCI 3, right. And maybe two generations of processors old, because you don't just throw out a 100,000 CPU super computing environment every 18 months. It doesn't work that way. >> Exactly. >> Do you have an opinion on this question of the qualitative versus quantitative increase in computing moving forward? And, I mean, do you think that this new stuff that you're starting to do tests on is going to power a fundamental shift in computing? Or is it just going to be more consolidation, better power consumption? Do you think there's an inflection point coming? What do you think? >> That's a great question. That's a hard one to answer. I mean, it's probably a little bit of both, 'cause certainly there will be better consolidation, right? But I think that, you know, the systems, it works both ways. It just allows you to do more with less, right? And you can go either direction, you can do what you're doing now on fewer machines, you know, and get better value for it, or reduce your footprint. Or you can go the other way and say, wow, this lets us add more machines into the mix and take our our level of performance from here to here, right? So it just depends on what your focus is. Certainly with, with areas like, you know, HPC and AI and ML, having the ability to expand what you already are capable of by adding more machines that can do more is going to be your main concern. But if you're more like a small to medium sized business and the opportunity to do what you were doing on, on a much smaller footprint and for lower costs, that's really your goal, right? So I think you can use this in either direction and it should, should pay back in a lot of dividends. >> Yeah. Thanks for your thoughts. It's an interesting subject moving forward. You know, sometimes it's easy to get lost in the minutiae of the bits and bites and bobs of all the components we're studying, but they're powering something that that's going to effect effectively all of humanity as we move forward. So what else do we need to consider when it comes to what you've just validated in the virtualization testing? Anything else, anything we left out? >> I think we hit all the key points, or most of them it's, you know, really, it's just keeping in mind that it's all about the full system, the components not- you know, the processor is a obviously a key, but just removing blockages, right? Freeing up, getting rid of latency, improving bandwidth, all these things come to play. And then the power performance, as I said, I know I keep coming back to that but you know, we just, and a lot of what we work on, we just see that businesses, that's a really big concern for businesses and finding efficiency, right? And especially in an age of constrained budgets, that's a big deal. So, it's really important to have that power performance ratio. And that's one of the key things we saw that stood out to us in, in some of these benchmarks, so. >> Well, it's a big deal for me. >> It's all good. >> Yeah, I live in California and I know exactly how much I pay for a kilowatt hour of electricity. >> I bet, yeah. >> My friends in other places don't even know. So I totally understand the power constraint question. >> Yeah, it's not going to get better, so, anything you can do there, right? >> Yeah. Well Evan, this has been great. Thanks for sharing the results that Prowess has come up with, third party validation that, you know, even without the latest and greatest components in all categories, Dell PowerEdge servers are able to set world records. And I anticipate that those world records will be broken in 2023 and I expect that Prowess will be part of that process, So Thanks for that. For the rest of us- >> (indistinct) >> Here at theCUBE, I want to thank you for joining us. Stay tuned for continuing coverage of AMD's fourth generation EPYC launch, for myself and for Evan Touger. Thanks so much for joining us. (upbeat music)

Published Date : Dec 8 2022

SUMMARY :

Welcome to theCUBE's Hi, great to be here. to a lot of other companies, and dive in. that you were focused on in this study? you know, how do companies evaluate that? all the way through to VDI looking under the covers to see, you know, you know, chasing the bottlenecks. But all of that just leads you Yeah, that's right. you know, makes us really- (indistinct) are looking at, you know, and what did those interconnect and the R 7515 as well as So, you know, the newer To see continued, you know, is that right? Exactly, yeah, there you go. With Broadcom, you There you go. the bang for your buck, to pricing without knowing, you know, PCIe bus architecture, memory, you know, So to your point- world records with that, just open the doors, you know, with this infrastructure that you tested, components are, you know, So let me ask you a question that's we're familiar with, you know, and the opportunity to do in the minutiae of the or most of them it's, you know, really, it's a big deal for me. for a kilowatt hour of electricity. So I totally understand the third party validation that, you know, I want to thank you for joining us.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
EvanPERSON

0.99+

Evan TougerPERSON

0.99+

CaliforniaLOCATION

0.99+

DallasLOCATION

0.99+

DellORGANIZATION

0.99+

Prowess ConsultingORGANIZATION

0.99+

2023DATE

0.99+

three-yearQUANTITY

0.99+

AMDORGANIZATION

0.99+

R 6525COMMERCIAL_ITEM

0.99+

BroadcomORGANIZATION

0.99+

3rdQUANTITY

0.99+

R 7515COMMERCIAL_ITEM

0.99+

R7515COMMERCIAL_ITEM

0.99+

bothQUANTITY

0.99+

4th genQUANTITY

0.99+

3rd genQUANTITY

0.98+

both waysQUANTITY

0.98+

7525COMMERCIAL_ITEM

0.98+

ProwessORGANIZATION

0.98+

Bellevue, WashingtonLOCATION

0.98+

100,000 CPUQUANTITY

0.98+

PowerEdgeCOMMERCIAL_ITEM

0.97+

two generationsQUANTITY

0.97+

oneQUANTITY

0.96+

PCIe 5OTHER

0.96+

todayDATE

0.95+

theCUBEORGANIZATION

0.94+

this yearDATE

0.93+

PCI 4.0OTHER

0.92+

TPCx-VCOMMERCIAL_ITEM

0.92+

fourth-genQUANTITY

0.92+

gen 5QUANTITY

0.9+

MooreORGANIZATION

0.89+

fourth generationQUANTITY

0.88+

gen 4QUANTITY

0.87+

PCI 3OTHER

0.87+

couple of weeks agoDATE

0.85+

SuperCompute 2022TITLE

0.8+

PCIe gen 5OTHER

0.79+

VMmark 3.xCOMMERCIAL_ITEM

0.75+

minusQUANTITY

0.74+

one wayQUANTITY

0.74+

18 monthsQUANTITY

0.7+

PERC 12COMMERCIAL_ITEM

0.67+

5.0OTHER

0.67+

EPYCCOMMERCIAL_ITEM

0.65+

monthsDATE

0.64+

5QUANTITY

0.63+

PERC 11COMMERCIAL_ITEM

0.6+

next few monthsDATE

0.6+

firstQUANTITY

0.59+

VMmark 3.x.COMMERCIAL_ITEM

0.55+

EPYC GenoaCOMMERCIAL_ITEM

0.53+

genOTHER

0.52+

R7525COMMERCIAL_ITEM

0.52+

1QUANTITY

0.5+

2QUANTITY

0.47+

PowerEdgeORGANIZATION

0.47+

Kirk Bresniker, HPE | SuperComputing 22


 

>>Welcome back, everyone live here at Supercomputing 22 in Dallas, Texas. I'm John for host of the Queue here at Paul Gillin, editor of Silicon Angle, getting all the stories, bringing it to you live. Supercomputer TV is the queue right now. And bringing all the action Bresniker, chief architect of Hewlett Packard Labs with HP Cube alumnis here to talk about Supercomputing Road to Quantum. Kirk, great to see you. Thanks for coming on. >>Thanks for having me guys. Great to be >>Here. So Paul and I were talking and we've been covering, you know, computing as we get into the large scale cloud now on premises compute has been one of those things that just never stops. No one ever, I never heard someone say, I wanna run my application or workload on slower, slower hardware or processor or horsepower. Computing continues to go, but this, we're at a step function. It feels like we're at a level where we're gonna unleash new, new creativity, new use cases. You've been kind of working on this for many, many years at hp, Hewlett Packard Labs, I remember the machine and all the predecessor r and d. Where are we right now from your standpoint, HPE standpoint? Where are you in the computing? It's as a service, everything's changing. What's your view? >>So I think, you know, you capture so well. You think of the capabilities that you create. You create these systems and you engineer these amazing products and then you think, whew, it doesn't get any better than that. And then you remind yourself as an engineer. But wait, actually it has to, right? It has to because we need to continuously provide that next generation of scientists and engineer and artists and leader with the, with the tools that can do more and do more frankly with less. Because while we want want to run the program slower, we sure do wanna run them for less energy. And figuring out how we accomplish all of those things, I think is, is really where it's gonna be fascinating. And, and it's also, we think about that, we think about that now, scale data center billion, billion operations per second, the new science, arts and engineering that we'll create. And yet it's also what's beyond what's beyond that data center. How do we hook it up to those fantastic scientific instruments that are capable to generate so much information? We need to understand how we couple all of those things together. So I agree, we are at, at an amazing opportunity to raise the aspirations of the next generation. At the same time we have to think about what's coming next in terms of the technology. Is the silicon the only answer for us to continue to advance? >>You know, one of the big conversations is like refactoring, replatforming, we have a booth behind us that's doing energy. You can build it in data centers for compute. There's all kinds of new things. Is there anything in the paradigm of computing and now on the road to quantum, which I know you're involved, I saw you have on LinkedIn, you have an open rec for that. What paradigm elements are changing that weren't in play a few years ago that you're looking at right now as you look at the 20 mile stair into quantum? >>So I think for us it's fascinating because we've had a tailwind at our backs my whole career, 33 years at hp. And what I could count on was transistors got at first they got cheaper, faster and they use less energy. And then, you know, that slowed down a little bit. Now they're still cheaper and faster. As we look in that and that Moore's law continues to flatten out of it, there has to be something better to do than, you know, yet another copy of the prior design opening up that diversity of approach. And whether that is the amazing wafer scale accelerators, we see these application specific silicon and then broadening out even farther next to the next to the silicon. Here's the analog computational accelerator here is now the, the emergence of a potential quantum accelerator. So seeing that diversity of approaches, but what we have to happen is we need to harness all of those efficiencies and yet we still have to realize that there are human beings that need to create the application. So how do we bridge, how do we accommodate the physical of, of new kinds of accelerator? How do we imagine the cyber physical connection to the, to the rest of the supercomputer? And then finally, how do we bridge that productivity gap? Especially not for people who like me who have been around for a long time, we wanna think about that next generation cuz they're the ones that need to solve the problems and write the code that will do it. >>You mentioned what exists beyond silicon. In fact, are you looking at different kinds of materials that computers in the future will be built upon? >>Oh absolutely. You think of when, when we, we look at the quantum, the quantum modalities then, you know, whether it is a trapped ion or a superconducting, a piece of silicon or it is a neutral ion. There's just no, there's about half a dozen of these novel systems because really what we're doing when we're using a a quantum mechanical computer, we're creating a tiny universe. We're putting a little bit of material in there and we're manipulating at, at the subatomic level, harnessing the power of of, of quantum physics. That's an incredible challenge. And it will take novel materials, novel capabilities that we aren't just used to seeing. Not many people have a helium supplier in their data center today, but some of them might tomorrow. And understanding again, how do we incorporate industrialize and then scale all of these technologies. >>I wanna talk Turkey about quantum because we've been talking for, for five years. We've heard a lot of hyperbole about quantum. We've seen some of your competitors announcing quantum computers in the cloud. I don't know who's using these, these computers, what kind of work they're being used, how much of the, how real is quantum today? How close are we to having workable true quantum computers and what can you point to any examples of how it's being, how that technology is being used in the >>Field? So it, it remains nascent. We'll put it that way. I think part of the challenge is we see this low level technology and of course it was, you know, professor Richard Fineman who first pointed us in this direction, you know, more than 30 years ago. And you know, I I I trust his judgment. Yes. You know that there's probably some there there especially for what he was doing, which is how do we understand and engineer systems at the quantum mechanical level. Well he said a quantum mechanical system's probably the way to go. So understanding that, but still part of the challenge we see is that people have been working on the low level technology and they're reaching up to wondering will I eventually have a problem that that I can solve? And the challenge is you can improve something every single day and if you don't know where the bar is, then you don't ever know if you'll be good enough. >>I think part of the approach that we like to understand, can we start with the problem, the thing that we actually want to solve and then figure out what is the bespoke combination of classical supercomputing, advanced AI accelerators, novel quantum quantum capabilities. Can we simulate and design that? And we think there's probably nothing better to do that than than an next to scale supercomputer. Yeah. Can we simulate and design that bespoke environment, create that digital twin of this environment and if we, we've simulated it, we've designed it, we can analyze it, see is it actually advantageous? Cuz if it's not, then we probably should go back to the drawing board. And then finally that then becomes the way in which we actually run the quantum mechanical system in this hybrid environment. >>So it's na and you guys are feeling your way through, you get some moonshot, you work backwards from use cases as a, as a more of a discovery navigational kind of mission piece. I get that. And Exoscale has been a great role for you guys. Congratulations. Has there been strides though in quantum this year? Can you point to what's been the, has the needle moved a little bit a lot or, I mean it's moving I guess to some, there's been some talk but we haven't really been able to put our finger on what's moving, like what need, where's the needle moved I >>Guess in quantum. And I think, I think that's part of the conversation that we need to have is how do we measure ourselves. I know at the World Economic Forum, quantum Development Network, we had one of our global future councils on the future of quantum computing. And I brought in a scene I EEE fellow Par Gini who, you know, created the international technology roadmap for semiconductors. And I said, Paulo, could you come in and and give us examples, how was the semiconductor community so effective not only at developing the technology but predicting the development of technology so that whether it's an individual deciding if they should change careers or it's a nation state deciding if they should spend a couple billion dollars, we have that tool to predict the rate of change and improvement. And so I think that's part of what we're hoping by participating will bring some of that road mapping skill and technology and understanding so we can make those better reasoned investments. >>Well it's also fun to see super computing this year. Look at the bigger picture, obviously software cloud natives running modern applications, infrastructure as code that's happening. You're starting to see the integration of, of environments almost like a global distributed operating system. That's the way I call it. Silicon and advancements have been a big part of what we see now. Merchant silicon, but also dpu are on the scene. So the role role of silicon is there. And also we have supply chain problems. So how, how do you look at that as a a, a chief architect of h Hewlett Packard Labs? Because not only you have to invent the future and dream it up, but you gotta deal with the realities and you get the realities are silicon's great, we need more of that quantums around the corner, but supply chain, how do you solve that? What's your thoughts and how do you, how, how is HPE looking at silicon innovation and, and supply chain? >>And so for us it, it is really understanding that partnership model and understanding and contributing. And so I will do things like I happen to be the, the systems and architectures chapter editor for the I eee International Roadmap for devices and systems, that community that wants to come together and provide that guidance. You know, so I'm all about telling the semiconductor and the post semiconductor community, okay, this is where we need to compute. I have a partner in the applications and benchmark that says, this is what we need to compute. And when you can predict in the future about where you need to compute, what you need to compute, you can have a much richer set of conversations because you described it so well. And I think our, our senior fellow Nick Dubey would, he's coined the term internet of workflows where, you know, you need to harness everything from the edge device all the way through the extra scale computer and beyond. And it's not just one sort of static thing. It is a very interesting fluid topology. I'll use this compute at the edge, I'll do this information in the cloud, I want to have this in my exoscale data center and I still need to provide the tool so that an individual who's making that decision can craft that work flow across all of those different resources. >>And those workflows, by the way, are complicated. Now you got services being turned on and off. Observability is a hot area. You got a lot more data in in cycle inflow. I mean a lot more action. >>And I think you just hit on another key point for us and part of our research at labs, I have, as part of my other assignments, I help draft our AI ethics global policies and principles and not only tell getting advice about, about how we should live our lives, it also became the basis for our AI research lab at Shewl Packard Labs because they saw, here's a challenge and here's something where I can't actually believe, maintain my ethical compliance. I need to have engineer new ways of, of achieving artificial intelligence. And so much of that comes back to governance over that data and how can we actually create those governance systems and and do that out in the open >>That's a can of worms. We're gonna do a whole segment on that one, >>On that >>Technology, on that one >>Piece I wanna ask you, I mean, where rubber meets the road is where you're putting your dollars. So you've talked a lot, a lot of, a lot of areas of, of progress right now, where are you putting your dollars right now at Hewlett Packard Labs? >>Yeah, so I think when I draw, when I draw my 2030 vision slide, you know, I, for me the first column is about heterogeneous, right? How do we bring all of these novel computational approaches to be able to demonstrate their effectiveness, their sustainability, and also the productivity that we can drive from, from, from them. So that's my first column. My section column is that edge to exoscale workflow that I need to be able to harness all of those computational and data resources. I need to be aware of the energy consequence of moving data, of doing computation and find all of that while still maintaining and solving for security and privacy. But the last thing, and, and that's one was a, one was a how one was aware. The last thing is a who, right? And is is how do we take that subject matter expert? I think of a, a young engineer starting their career at hpe. It'll be very different than my 33 years. And part of it, you know, they will be undaunted by any, any scale. They will be cloud natives, maybe they metaverse natives, they will demand to design an open cooperative environment. So for me it's thinking about that individual and how do I take those capabilities, heterogeneous edge to exito scale workflows and then make them productive. And for me, that's, that's where we were putting our emphasis on those three. When, where and >>Who. Yeah. And making it compatible for the next generation. We see the student cluster competition going on over there. This is the only show that we cover that we've been to that is from the dorm room to the boardroom and this cuz Supercomputing now is elevating up into that workflow, into integration, multiple environments, cloud, premise, edge, metaverse. This is like a whole nother world. >>And, and, but I think it's, it's the way that regardless of which human pursuit you're in, you know, everyone is going to be demand simulation and modeling ai, ML and massive data m l and massive data analytics that's gonna be at heart of, of everything. And that's what you see. That's what I love about coming here. This isn't just the way we're gonna do science. This is the way we're gonna do everything. >>We're gonna come by your booth, check it out. We've talked to some of the folks, hpe obviously HPE Discover this year, GreenLake with center stage, it's now consumption is a service for technology. Whole nother ballgame. Congratulations on, on all this. I would say the massive, I won't say pivot, but you know, a change >>It >>Is and how you guys >>Operate. And you know, it's funny sometimes you think about the, the pivot to as a services benefiting the customer, but as someone who has supported designs over decades, you know, that ability to to to operate and at peak efficiency, to always keep in perfect operating order and to continuously change while still meeting the customer expectations that actually allows us to deliver innovation to our customers faster than when we are delivering warranted individual packaged products. >>Kirk, thanks for coming on Paul. Great conversation here. You know, the road to Quantum's gonna be paved through computing supercomputing software integrated workflows from the dorm room to the boardroom to Cube, bringing all the action here at Supercomputing 22. I'm Jacque Forer with Paul Gillin. Thanks for watching. We'll be right back.

Published Date : Nov 16 2022

SUMMARY :

bringing it to you live. Great to be I remember the machine and all the predecessor r and d. Where are we right now from At the same time we have to think about what's coming next in terms of the technology. You know, one of the big conversations is like refactoring, replatforming, we have a booth behind us that's And then, you know, that slowed down a little bit. that computers in the future will be built upon? And understanding again, how do we incorporate industrialize and true quantum computers and what can you point to any examples And the challenge is you can improve something every single day and if you don't know where the bar is, I think part of the approach that we like to understand, can we start with the problem, lot or, I mean it's moving I guess to some, there's been some talk but we haven't really been able to put And I think, I think that's part of the conversation that we need to have is how do we need more of that quantums around the corner, but supply chain, how do you solve that? in the future about where you need to compute, what you need to compute, you can have a much richer set of Now you got services being turned on and off. And so much of that comes back to governance over that data and how can we actually create That's a can of worms. a lot of, a lot of areas of, of progress right now, where are you putting your dollars right And part of it, you know, they will be undaunted by any, any scale. This is the only show that we cover that we've been to that And that's what you see. the massive, I won't say pivot, but you know, a change And you know, it's funny sometimes you think about the, the pivot to as a services benefiting the customer, You know, the road to Quantum's gonna be paved through

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Paul GillinPERSON

0.99+

Nick DubeyPERSON

0.99+

PaulPERSON

0.99+

BresnikerPERSON

0.99+

Richard FinemanPERSON

0.99+

20 mileQUANTITY

0.99+

Hewlett Packard LabsORGANIZATION

0.99+

KirkPERSON

0.99+

PauloPERSON

0.99+

tomorrowDATE

0.99+

33 yearsQUANTITY

0.99+

first columnQUANTITY

0.99+

Jacque ForerPERSON

0.99+

Dallas, TexasLOCATION

0.99+

Shewl Packard LabsORGANIZATION

0.99+

LinkedInORGANIZATION

0.99+

Kirk BresnikerPERSON

0.99+

JohnPERSON

0.99+

threeQUANTITY

0.99+

todayDATE

0.98+

hpORGANIZATION

0.98+

MoorePERSON

0.98+

five yearsQUANTITY

0.98+

HPEORGANIZATION

0.97+

firstQUANTITY

0.97+

2030DATE

0.97+

h Hewlett Packard LabsORGANIZATION

0.97+

this yearDATE

0.96+

oneQUANTITY

0.96+

HP CubeORGANIZATION

0.95+

GreenLakeORGANIZATION

0.93+

about half a dozenQUANTITY

0.91+

billion,QUANTITY

0.91+

World Economic ForumORGANIZATION

0.9+

quantum Development NetworkORGANIZATION

0.9+

few years agoDATE

0.88+

couple billion dollarsQUANTITY

0.84+

more than 30 years agoDATE

0.84+

GiniORGANIZATION

0.78+

Supercomputing Road to QuantumTITLE

0.68+

Supercomputing 22ORGANIZATION

0.68+

ParPERSON

0.67+

billion operations per secondQUANTITY

0.67+

Silicon AngleORGANIZATION

0.66+

EEEORGANIZATION

0.66+

singleQUANTITY

0.66+

TurkeyORGANIZATION

0.56+

SuperComputing 22ORGANIZATION

0.52+

CubeORGANIZATION

0.48+

ExoscaleTITLE

0.44+

InternationalTITLE

0.4+

Next Gen Servers Ready to Hit the Market


 

(upbeat music) >> The market for enterprise servers is large and it generates well north of $100 billion in annual revenue, and it's growing consistently in the mid to high single digit range. Right now, like many segments, the market for servers is, it's like slingshotting, right? Organizations, they've been replenishing their install bases and upgrading, especially at HQs coming out of the isolation economy. But the macro headwinds, as we've reported, are impacting all segments of the market. CIOs, you know, they're tapping the brakes a little bit, sometimes quite a bit and being cautious with both capital expenditures and discretionary opex, particularly in the cloud. They're dialing it down and just being a little bit more, you know, cautious. The market for enterprise servers, it's dominated as you know, by x86 based systems with an increasingly large contribution coming from alternatives like ARM and NVIDIA. Intel, of course, is the largest supplier, but AMD has been incredibly successful competing with Intel because of its focus, it's got an outsourced manufacturing model and its innovation and very solid execution. Intel's frequent delays with its next generation Sapphire Rapid CPUs, now slated for January 2023 have created an opportunity for AMD, specifically AMD's next generation EPYC CPUs codenamed Genoa will offer as many as 96 Zen 4 cores per CPU when it launches later on this month. Observers can expect really three classes of Genoa. There's a standard Zen 4 compute platform for general purpose workloads, there's a compute density optimized Zen 4 package and then a cache optimized version for data intensive workloads. Indeed, the makers of enterprise servers are responding to customer requirements for more diversity and server platforms to handle different workloads, especially those high performance data-oriented workloads that are being driven by AI and machine learning and high performance computing, HPC needs. OEMs like Dell, they're going to be tapping these innovations and try to get to the market early. Dell, in particular, will be using these systems as the basis for its next generation Gen 16 servers, which are going to bring new capabilities to the market. Now, of course, Dell is not alone, there's got other OEM, you've got HPE, Lenovo, you've got ODMs, you've got the cloud players, they're all going to be looking to keep pace with the market. Now, the other big trend that we've seen in the market is the way customers are thinking about or should be thinking about performance. No longer is the clock speed of the CPU the soul and most indicative performance metric. There's much more emphasis in innovation around all those supporting components in a system, specifically the parts of the system that take advantage, for example, of faster bus speeds. We're talking about things like network interface cards and RAID controllers and memories and other peripheral devices that in combination with microprocessors, determine how well systems can perform and those kind of things around compute operations, IO and other critical tasks. Now, the combinatorial factors ultimately determine the overall performance of the system and how well suited a particular server is to handling different workloads. So we're seeing OEMs like Dell, they're building flexibility into their offerings and putting out products in their portfolios that can meet the changing needs of their customers. Welcome to our ongoing series where we investigate the critical question, does hardware matter? My name is Dave Vellante, and with me today to discuss these trends and the things that you should know about for the next generation of server architectures is former CTO from Oracle and EMC and adjunct faculty and Wharton CTO Academy, David Nicholson. Dave, always great to have you on "theCUBE." Thanks for making some time with me. >> Yeah, of course, Dave, great to be here. >> All right, so you heard my little spiel in the intro, that summary, >> Yeah. >> Was it accurate? What would you add? What do people need to know? >> Yeah, no, no, no, 100% accurate, but you know, I'm a resident nerd, so just, you know, some kind of clarification. If we think of things like microprocessor release cycles, it's always going to be characterized as rolling thunder. I think 2023 in particular is going to be this constant release cycle that we're going to see. You mentioned the, (clears throat) excuse me, general processors with 96 cores, shortly after the 96 core release, we'll see that 128 core release that you referenced in terms of compute density. And then, we can talk about what it means in terms of, you know, nanometers and performance per core and everything else. But yeah, no, that's the main thing I would say, is just people shouldn't look at this like a new car's being released on Saturday. This is going to happen over the next 18 months, really. >> All right, so to that point, you think about Dell's next generation systems, they're going to be featuring these new AMD processes, but to your point, when you think about performance claims, in this industry, it's a moving target. It's that, you call it a rolling thunder. So what does that game of hopscotch, if you will, look like? How do you see it unfolding over the next 12 to 18 months? >> So out of the gate, you know, slated as of right now for a November 10th release, AMD's going to be first to market with, you know, everyone will argue, but first to market with five nanometer technology in production systems, 96 cores. What's important though is, those microprocessors are going to be resident on motherboards from Dell that feature things like PCIe 5.0 technology. So everything surrounding the microprocessor complex is faster. Again, going back to this idea of rolling thunder, we expect the Gen 16 PowerEdge servers from Dell to similarly be rolled out in stages with initial releases that will address certain specific kinds of workloads and follow on releases with a variety of systems configured in a variety of ways. >> So I appreciate you painting a picture. Let's kind of stay inside under the hood, if we can, >> Sure. >> And share with us what we should know about these kind of next generation CPUs. How are companies like Dell going to be configuring them? How important are clock speeds and core counts in these new systems? And what about, you mentioned motherboards, what about next gen motherboards? You mentioned PCIe Gen 5, where does that fit in? So take us inside deeper into the system, please. >> Yeah, so if you will, you know, if you will join me for a moment, let's crack open the box and look inside. It's not just microprocessors. Like I said, they're plugged into a bus architecture that interconnect. How quickly that interconnect performs is critical. Now, I'm going to give you a statistic that doesn't require a PhD to understand. When we go from PCIe Gen 4 to Gen 5, which is going to be featured in all of these systems, we double the performance. So just, you can write that down, two, 2X. The performance is doubled, but the numbers are pretty staggering in terms of giga transactions per second, 128 gigabytes per second of aggregate bandwidth on the motherboard. Again, doubling when going from 4th Gen to 5th Gen. But the reality is, most users of these systems are still on PCIe Gen 3 based systems. So for them, just from a bus architecture perspective, you're doing a 4X or 8X leap in performance, and then all of the peripherals that plug into that faster bus are faster, whether it's RAID control cards from RAID controllers or storage controllers or network interface cards. Companies like Broadcom come to mind. All of their components are leapfrogging their prior generation to fit into this ecosystem. >> So I wonder if we could stay with PCIe for a moment and, you know, just understand what Gen 5 brings. You said, you know, 2X, I think we're talking bandwidth here. Is there a latency impact? You know, why does this matter? And just, you know, this premise that these other components increasingly matter more, Which components of the system are we talking about that can actually take advantage of PCIe Gen 5? >> Pretty much all of them, Dave. So whether it's memory plugged in or network interface cards, so communication to the outside world, which computer servers tend to want to do in 2022, controllers that are attached to internal and external storage devices. All of them benefit from this enhancement and performance. And it's, you know, PCI express performance is measured in essentially bandwidth and throughput in the sense of the numbers of transactions per second that you can do. It's mind numbing, I want to say it's 32 giga transfers per second. And then in terms of bandwidth, again, across the lanes that are available, 128 gigabytes per second. I'm going to have to check if it's gigabits or gigabytes. It's a massive number. And again, it's double what PCIe 4 is before. So what does that mean? Just like the advances in microprocessor technology, you can consolidate massive amounts of work into a much smaller footprint. That's critical because everything in that server is consuming power. So when you look at next generation hardware that's driven by things like AMD Genoa or you know, the EPYC processors, the Zen with the Z4 microprocessors, for every dollar that you're spending on power and equipment and everything else, you're getting far greater return on your investment. Now, I need to say that we anticipate that these individual servers, if you're out shopping for a server, and that's a very nebulous term because they come in all sorts of shapes and sizes, I think there's going to be a little bit of sticker shock at first until you run the numbers. People will look at an individual server and they'll say, wow, this is expensive and the peripherals, the things that are going into those slots are more expensive, but you're getting more bang for your buck. You're getting much more consolidation, lower power usage and for every dollar, you're getting a greater amount of performance and transactions, which translates up the stack through the application layer and, you know, out to the end user's desire to get work done. >> So I want to come back to that, but let me stay on performance for a minute. You know, we all used to be, when you'd go buy a new PC, you'd be like, what's the clock speed of that? And so, when you think about performance of a system today and how measurements are changing, how should customers think about performance in these next gen systems? And where does that, again, where does that supporting ecosystem play? >> So if you are really into the speeds and feeds and what's under the covers, from an academic perspective, you can go in and you can look at the die size that was used to create the microprocessors, the clock speeds, how many cores there are, but really, the answer is look at the benchmarks that are created through testing, especially from third party organizations that test these things for workloads that you intend to use these servers for. So if you are looking to support something like a high performance environment for artificial intelligence or machine learning, look at the benchmarks as they're recorded, as they're delivered by the entire system. So it's not just about the core. So yeah, it's interesting to look at clock speeds to kind of compare where we are with regards to Moore's Law. Have we been able to continue to track along that path? We know there are physical limitations to Moore's Law from an individual microprocessor perspective, but none of that really matters. What really matters is what can this system that I'm buying deliver in terms of application performance and user requirement performance? So that's what I'd say you want to look for. >> So I presume we're going to see these benchmarks at some point, I'm hoping we can, I'm hoping we can have you back on to talk about them. Is that something that we can expect in the future? >> Yeah, 100%, 100%. Dell, and I'm sure other companies, are furiously working away to demonstrate the advantages of this next gen architecture. If I had to guess, I would say that we are going to see quite a few world records set because of the combination of things, like faster network interface cards, faster storage cards, faster memory, more memory, faster cache, more cache, along with the enhanced microprocessors that are going to be delivered. And you mentioned this is, you know, AMD is sort of starting off this season of rolling thunder and in a few months, we'll start getting the initial entries from Intel also, and we'll be able to compare where they fit in with what AMD is offering. I'd expect OEMs like Dell to have, you know, a portfolio of products that highlight the advantages of each processor's set. >> Yeah, I talked in my open Dave about the diversity of workloads. What are some of those emerging workloads and how will companies like Dell address them in your view? >> So a lot of the applications that are going to be supported are what we think of as legacy application environments. A lot of Oracle databases, workloads associated with ERP, all of those things are just going to get better bang for their buck from a compute perspective. But what we're going to be hearing a lot about and what the future really holds for us that's exciting is this arena of artificial intelligence and machine learning. These next gen platforms offer performance that allows us to do things in areas like natural language processing that we just couldn't do before cost effectively. So I think the next few years are going to see a lot of advances in AI and ML that will be debated in the larger culture and that will excite a lot of computer scientists. So that's it, AI/ML are going to be the big buzzwords moving forward. >> So Dave, you talked earlier about this, some people might have sticker shocks. So some of the infrastructure pros that are watching this might be, oh, okay, I'm going to have to pitch this, especially in this, you know, tough macro environment. I'm going to have to sell this to my CIO, my CFO. So what does this all mean? You know, if they're going to have to pay more, how is it going to affect TCO? How would you pitch that to your management? >> As long as you stay away from per unit cost, you're fine. And again, we don't have necessarily, or I don't have necessarily insider access to street pricing on next gen servers yet, but what I do know from examining what the component suppliers tell us is that, these systems are going to be significantly more expensive on a per unit basis. But what does that mean? If the server that you're used to buying for five bucks is now 10 bucks, but it's doing five times as much work, it's a great deal, and anyone who looks at it and says, 10 bucks? It used to only be five bucks, well, the ROI and the TCO, that's where all of this really needs to be measured and a huge part of that is going to be power consumption. And along with the performance tests that we expect to see coming out imminently, we should also be expecting to see some of those ROI metrics, especially around power consumption. So I don't think it's going to be a problem moving forward, but there will be some sticker shock. I imagine you're going to be able to go in and configure a very, very expensive, fully loaded system on some of these configurators online over the next year. >> So it's consolidation, which means you could do more with less. It's going to be, or more with the same, it's going to be lower power, less cooling, less floor space and lower management overhead, which is kind of now you get into staff, so you're going to have to sort of identify how the staff can be productive in other areas. You're probably not going to fire people hopefully. But yeah, it sounds like it's going to be a really consolidation play. I talked at the open about Intel and AMD and Intel coming out with Sapphire Rapids, you know, of course it's been well documented, it's late but they're now scheduled for January. Pat Gelsinger's talked about this, and of course they're going to try to leapfrog AMD and then AMD is going to respond, you talked about this earlier, so that game is going to continue. How long do you think this cycle will last? >> Forever. (laughs) It's just that, there will be periods of excitement like we're going to experience over at least the next year and then there will be a lull and then there will be a period of excitement. But along the way, we've got lurkers who are trying to disrupt this market completely. You know, specifically you think about ARM where the original design point was, okay, you're powered by a battery, you have to fit in someone's pocket. You can't catch on fire and burn their leg. That's sort of the requirement, as opposed to the, you know, the x86 model, which is okay, you have a data center with a raised floor and you have a nuclear power plant down the street. So don't worry about it. As long as an 18-wheeler can get it to where it needs to be, we'll be okay. And so, you would think that over time, ARM is going to creep up as all destructive technologies do, and we've seen that, we've definitely seen that. But I would argue that we haven't seen it happen as quickly as maybe some of us expected. And then you've got NVIDIA kind of off to the side starting out, you know, heavy in the GPU space saying, hey, you know what, you can use the stuff we build for a whole lot of really cool new stuff. So they're running in a different direction, sort of gnawing at the traditional x86 vendors certainly. >> Yes, so I'm glad- >> That's going to be forever. >> I'm glad you brought up ARM and NVIDIA, I think, but you know, maybe it hasn't happened as quickly as many thought, although there's clearly pockets and examples where it is taking shape. But this to me, Dave, talks to the supporting cast. It's not just about the microprocessor unit anymore, specifically, you know, generally, but specifically the x86. It's the supporting, it's the CPU, the NPU, the XPU, if you will, but also all those surrounding components that, to your earlier point, are taking advantage of the faster bus speeds. >> Yeah, no, 100%. You know, look at it this way. A server used to be measured, well, they still are, you know, how many U of rack space does it take up? You had pizza box servers with a physical enclosure. Increasingly, you have the concept of a server in quotes being the aggregation of components that are all plugged together that share maybe a bus architecture. But those things are all connected internally and externally, especially externally, whether it's external storage, certainly networks. You talk about HPC, it's just not one server. It's hundreds or thousands of servers. So you could argue that we are in the era of connectivity and the real critical changes that we're going to see with these next generation server platforms are really centered on the bus architecture, PCIe 5, and the things that get plugged into those slots. So if you're looking at 25 gig or 100 gig NICs and what that means from a performance and/or consolidation perspective, or things like RDMA over Converged Ethernet, what that means for connecting systems, those factors will be at least as important as the microprocessor complexes. I imagine IT professionals going out and making the decision, okay, we're going to buy these systems with these microprocessors, with this number of cores in memory. Okay, great. But the real work starts when you start talking about connecting all of them together. What does that look like? So yeah, the definition of what constitutes a server and what's critically important I think has definitely changed. >> Dave, let's wrap. What can our audience expect in the future? You talked earlier about you're going to be able to get benchmarks, so that we can quantify these innovations that we've been talking about, bring us home. >> Yeah, I'm looking forward to taking a solid look at some of the performance benchmarking that's going to come out, these legitimate attempts to set world records and those questions about ROI and TCO. I want solid information about what my dollar is getting me. I think it helps the server vendors to be able to express that in a concrete way because our understanding is these things on a per unit basis are going to be more expensive and you're going to have to justify them. So that's really what, it's the details that are going to come the day of the launch and in subsequent weeks. So I think we're going to be busy for the next year focusing on a lot of hardware that, yes, does matter. So, you know, hang on, it's going to be a fun ride. >> All right, Dave, we're going to leave it there. Thanks you so much, my friend. Appreciate you coming on. >> Thanks, Dave. >> Okay, and don't forget to check out the special website that we've set up for this ongoing series. Go to doeshardwarematter.com and you'll see commentary from industry leaders, we got analysts on there, technical experts from all over the world. Thanks for watching, and we'll see you next time. (upbeat music)

Published Date : Nov 10 2022

SUMMARY :

and the things that you should know about Dave, great to be here. I think 2023 in particular is going to be over the next 12 to 18 months? So out of the gate, you know, So I appreciate you painting a picture. going to be configuring them? So just, you can write that down, two, 2X. Which components of the and the peripherals, the And so, when you think about So it's not just about the core. can expect in the future? Dell to have, you know, about the diversity of workloads. So a lot of the applications that to your management? So I don't think it's going to and then AMD is going to respond, as opposed to the, you the XPU, if you will, and the things that get expect in the future? it's the details that are going to come going to leave it there. Okay, and don't forget to

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
DavePERSON

0.99+

Dave VellantePERSON

0.99+

David NicholsonPERSON

0.99+

January 2023DATE

0.99+

OracleORGANIZATION

0.99+

JanuaryDATE

0.99+

DellORGANIZATION

0.99+

hundredsQUANTITY

0.99+

November 10thDATE

0.99+

AMDORGANIZATION

0.99+

10 bucksQUANTITY

0.99+

five bucksQUANTITY

0.99+

NVIDIAORGANIZATION

0.99+

100 gigQUANTITY

0.99+

EMCORGANIZATION

0.99+

Pat GelsingerPERSON

0.99+

LenovoORGANIZATION

0.99+

100%QUANTITY

0.99+

SaturdayDATE

0.99+

128 coreQUANTITY

0.99+

25 gigQUANTITY

0.99+

96 coresQUANTITY

0.99+

five timesQUANTITY

0.99+

2XQUANTITY

0.99+

96 coreQUANTITY

0.99+

8XQUANTITY

0.99+

4XQUANTITY

0.99+

96QUANTITY

0.99+

next yearDATE

0.99+

twoQUANTITY

0.99+

todayDATE

0.99+

2022DATE

0.98+

bothQUANTITY

0.98+

doeshardwarematter.comOTHER

0.98+

5th Gen.QUANTITY

0.98+

4th GenQUANTITY

0.98+

ARMORGANIZATION

0.98+

18-wheelerQUANTITY

0.98+

Z4COMMERCIAL_ITEM

0.97+

firstQUANTITY

0.97+

IntelORGANIZATION

0.97+

2023DATE

0.97+

Zen 4COMMERCIAL_ITEM

0.97+

Sapphire RapidsCOMMERCIAL_ITEM

0.97+

thousandsQUANTITY

0.96+

one serverQUANTITY

0.96+

doubleQUANTITY

0.95+

PCIe Gen 4OTHER

0.95+

Sapphire Rapid CPUsCOMMERCIAL_ITEM

0.94+

PCIe Gen 3OTHER

0.93+

PCIe 4OTHER

0.93+

x86COMMERCIAL_ITEM

0.92+

Wharton CTO AcademyORGANIZATION

0.92+

SuperComputing Intro | SuperComputing22


 

>>Hello everyone. My name is Savannah Peterson, coming to you from the Cube Studios in Palo Alto, California. We're gonna be talking about super computing an event coming up in Dallas this November. I'm joined by the infamous John Furrier. John, thank you for joining me today. >>Great to see you. You look great. >>Thank you. You know, I don't know if anyone's checked out the conference colors for for supercomputing, but I happen to match the accent pink and you are rocking their blue. I got the so on >>There it is. >>We don't always tie our fashion to the tech ladies and gentlemen, but we're, we're a new crew here at, at the Cube and I think it should be a thing that we, that we do moving forward. So John, you are a veteran and I'm a newbie to Supercomputing. It'll be my first time in Dallas. What can I expect? >>Basically it's a hardware nerd fest, basically of the top >>Minds. So it's like ces, >>It's like CES for like, like hardware. It's like really the coolest show if you're into like high performance computing, I mean game changing kind of, you know, physics, laws of physics and hardware. This is the show. I mean this is like the confluence of it's, it's really old. It started when I graduated college, 1988. And back then it was servers, you know, super computing was a concept. It was usually a box and it was hardware, big machine. And it would crank out calculations, simulations and, and you know, you were limited to the processor and all the, the systems components, just the architecture system software, I mean it was technical, it was, it was, it was hardware, it was fun. Very cool back then. But you know, servers got bigger and you got grid computing, you got clusters and then it be really became high performance computing concept. But that's now multiple disciplines, hence it's been around for a while. It's evergreen in the sense it's always changing, attracting talent, students, mentors, scholarships. It's kind of big funding and big companies are behind it. Wl, look, Packard Enterprise, Dell computing startups and hardware matters more than ever. You look at the cloud, what Amazon and, and the cloud hyper skills, they're building the fastest chips down at the root level hardware's back. And I think this show's gonna show a lot of that. >>There isn't the cloud without hardware to support it. So I think it's important that we're all headed here. You, you touched on the evolution there from super computing in the beginning and complex calculations and processing to what we're now calling high performance computing. Can you go a little bit deeper? What is, what does that mean, What does that cover? >>Well, I mean high high performance computing and now is a range of different things. So the super computing needs to be like a thing now. You got clusters and grids that's distributed, you got a backbone, it's well architected and there's a lot involved. This network and security, there's system software. So now it's multiple disciplines in high performance computing and you can do a lot more. And now with cloud computing you can do simulations, say drug research or drug testing. You have, you can do all kinds of cal genome sequencing. I mean the, the, the ability to actually use compute right now is so awesome. The field's got, you know, is rebooting itself in real time, you know, pun intended. So it's like really, it's really good thing. More compute makes things go faster, especially with more data. So high encapsulates all the, the engineering behind it. A lot of robotics coming in the future. All this is gonna be about the edge. You're seeing a lot more hardware making noise around things that are new use cases. You know, your Apple watch that's, you know, very high functionality to a cell tower. Cars again, high performance computing hits all these new use cases. >>It yeah, it absolutely does. I mean high performance computing touches pretty much every aspect of our lives in some capacity at this point and including how we drive our cars to, to get to the studio here in Palo Alto. Do you think that we're entering an era when all of this is about to scale exponentially versus some of the linear growth that we've seen in the space due to the frustration of some of us in the hardware world the last five to 10 years? >>Well, it's a good question. I think everyone has, has seen Moore's law, right? They've seen, you know, that's been, been well documented. I think the world's changing. You're starting to see the trend of more hardware that's specialized like DPU are now out there. You got GPUs, you're seeing the, you know, Bolton hardware, accelerators, you got chi layer software abstraction. So essentially it's, it's a software industry that's in impacted the hardware. So hardware really is software too and it's a lot more software in there. Again, system software's a lot different. So it's, I think it's, it's boomerang back up. I think there's an inflection point because if you look at cyber security and physical devices, they all kind of play in this world where they need compute at the edge. Edge is gonna be a big use case. You can see Dell Technologies there. I think they have a really big opportunity to sell more hardware. H WL Packard Enterprise, others, these are old school >>Box companies. >>So I think the distributed nature of cloud and hybrid and multi-cloud coming on earth and in space means a lot more high performance computing will be sold and and implemented. So that's my take on it. I just think I'm very bullish on this space. >>Ah, yes. And you know me, I get really personally excited about the edge. So I can't wait to see what's in store. Thinking about the variety of vendors and companies, I know we see some of the biggest players in the space. Who are you most excited to see in Dallas coming up in November? >>You know, HP enter, you look back on enterprise has always been informally, HP huge on hpc, Dell and hpe. This is their bread and butter. They've been making servers from many computers to Intel based servers now to arm-based servers and and building their own stuff. So you're gonna start to see a lot more of those players kind of transforming. We're seeing both Dell and HPE transforming and you're gonna see a lot of chip companies there. I'm sure you're gonna see a lot more younger talent, a lot, a lot of young talent are coming, like I said, robotics and the new physical world we're living in is software and IP connected. So it's not like the old school operational technology systems. You have, you know, IP enabled devices that opens up all kinds of new challenges around security vulnerabilities and also capabilities. So it's, I think it's gonna be a lot younger crowd I think than we usually see this year. And you seeing a lot of students, and again universities participating. >>Yeah, I noticed that they have a student competition that's a, a big part of the event. I'm curious when you say younger, are you expecting to see new startups and some interesting players in the space that maybe we haven't heard of before? >>I think we might see more use cases that are different. When I say younger, I don't mean so much on the Democratic but young, younger i new ideas, right? So I think you're gonna see a lot of smart people coming in that might not have the, you know, the, the lens from when it started in 1988 and remember 1988 to now so much has changed. In fact we just did AEG a segment on the cube called does hardware matter because for many, many years, over the past decades, like hardware doesn't matter, it's all about the cloud and we're not a box company. Boxes are coming back. So you know, that's gonna be music for for into the years of Dell Technologies HPE the world. But like hardware does matter and this, you're starting to see that here. So I think you'll see a lot a younger thinking, a little bit different thinking. You're gonna start to see more conf confluence of like machine learning. You're gonna see security and again, I mentioned space. These are areas where you're starting to see where hardware and high performance is gonna be part of all the new systems. And so it's just gonna be industrial to i o is gonna be a big part too. >>Yeah, absolutely. I, I was thinking about some of these use cases, I don't know if you heard about the new drones they're sending up into hurricanes, but it takes literally what a, what an edge use case, how durable it has to be and the rapid processing that has to happen as a result of the software. So many exciting things we could dive down the rabbit hole with. What can folks expect to see here on the cube during supercomputing? >>Well we're gonna talk to a lot of the leaders on the cube from this community, mostly from the practitioner's side, expert side. We're gonna have, we're gonna hear from Dell Technologies, Hewlett Packer Enterprise and a lot of other executives who are investing wanna find out what they're investing in, how it ties into the cloud. Cuz the cloud has become a great environment for multi-cloud with more grid-like capability and what's the future? Where's the hardware going, what's the evolution of the components? How is it being designed? And then how does it fit into the overall software open source market that's booming right now that cloud technology has been doing. So I wanna, we wanna try to connect the dots on the cube. >>Great. So we have a very easy task ahead of us. Hopefully everyone will enjoy the content and the guests that we leaving to, to our table here from from the show floor. When we think about, do you think there's gonna be any trends that we've seen in the past that might not be there? Has anything phased out of the super computing world? You're someone who's been around this game for a while? >>Yeah, that's a good question. I think the game is still the same but the players might shift a little bit. So for example, a lot more with the supply chain challenges you might see that impact. We're gonna watch that very closely to find out what components are gonna be in what. But I'm thinking more about system architecture because the use case is interesting. You know, I was talking to Dell folks about this, you know they have standard machines but then they have use cases for how do you put the equivalent of a data center next to say a mobile cell tower because now you have the capability for wireless and 5g. You gotta put the data center like CAPA speed functionality and capacity for compute at these edges in a smaller form factor. How do you do that? How do you handle all the IO and that's gonna be all these, all these things are nerd again nerdy conversations but they're gonna be very relevant. So I like the new use cases of power more compute in places that they've never been before. So I think that to me is where the exciting part is. Like okay, who's got the, who's really got the real deal going on here? That's something be the fun part. >>I think it allows for a new era in innovation and I don't say that lightly, but when we can put processing power literally anywhere, it certainly thrills the minds of hardware nerds. Like me, my I'm OG hardware, I know you are too, I won't reveal your roots, but I got my, my start in in hardware product design back in the day. So I can't wait >>To, well you then, you know, you know hardware, when you talk about processing power and memory, you can never have enough compute and memory. It's like, it's like the internet bandwidth. You can't never have enough bandwidth. Bandwidth, right? Network power, compute power, you know, bring it on, you know, >>Even battery life, simple things like that when it comes to hardware, especially when we're talking about being on the edge. It's just like our cell phones. Our cell phones are an edge device >>And we get, well when you combine cloud on premises hybrid and then multi-cloud and edge, you now have the ability to get compute at capabilities that were never fathom in the past. And most of the creativity is limited to the hardware capability and now that's gonna be unleashed. I think a lot of creativity. That's again back to the use cases and yes, again, you're gonna start to see more industrial stuff come out edge and I, I, I love the edge. I think this is a great use case for the edge. >>Me too. A absolutely so bold claim. I don't know if you're ready to, to draw a line in the sand. Are we on the precipice of a hardware renaissance? >>Definitely no doubt about it. When we, when we did the does hardware matter segment, it was really kind of to test, you know, everyone's talking about the cloud, but cloud also runs hardware. You look at what AWS is doing, for instance, all the innovation, it's at robotics, it's at that at the physical level, pro, pro, you know you got physics, I mean they're working on so low level engineering and the speed difference. I think from a workload standpoint, whoever can get the best out of the physics and the materials will have a winning formula. Cause you can have a lot more processing specialized processors. That's a new system architecture. And so to me the hype, definitely the HPC high press computing fits perfectly into that construct because now you got more power so that software can be more capable. And I think at the end of the day, nobody wants to write a app on our workload to run on on bad hardware, not have enough compute. >>Amen to that. On that note, John, how can people get in touch with you and us here on the show in anticipation of supercomputing? >>Of course hit the cube handle at the cube at Furrier, my last name F U R R I E R. And of course my dms are always open for scoops and story ideas. And go to silicon angle.com and the cube.net. >>Fantastic. John, I look forward to joining you in Dallas and thank you for being here with me today. And thank you all for joining us for this super computing preview. My name is Savannah Peterson and we're here on the cube live. Well not live prerecorded from Palo Alto. And look forward to seeing you for some high performance computing excitement soon.

Published Date : Oct 22 2022

SUMMARY :

My name is Savannah Peterson, coming to you from the Cube Studios Great to see you. supercomputing, but I happen to match the accent pink and you are rocking their blue. So John, you are a veteran and I'm a newbie to Supercomputing. So it's like ces, And back then it was servers, you know, super computing was a So I think it's important that we're all headed here. So now it's multiple disciplines in high performance computing and you can do a lot more. Do you think that we're entering an era when all of this is about to scale exponentially I think there's an inflection point because if you look at cyber security and physical devices, So I think the distributed nature of cloud and hybrid and multi-cloud coming on And you know me, I get really personally excited about the edge. So it's not like the old school operational technology systems. I'm curious when you say younger, are you expecting to see new startups and some interesting players in the space that maybe So you know, that's gonna be music for I, I was thinking about some of these use cases, I don't know if you heard about the new Cuz the cloud has become a great environment for multi-cloud with more grid-like When we think about, do you think there's gonna be any So I like the new use cases of Like me, my I'm OG hardware, I know you are too, bring it on, you know, It's just like our cell phones. And most of the creativity is limited to the hardware capability and now that's gonna to draw a line in the sand. it's at that at the physical level, pro, pro, you know you got physics, On that note, John, how can people get in touch with you and us here on And go to silicon angle.com and the cube.net. And look forward to seeing you for some high performance computing excitement

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
JohnPERSON

0.99+

Savannah PetersonPERSON

0.99+

DallasLOCATION

0.99+

AmazonORGANIZATION

0.99+

1988DATE

0.99+

John FurrierPERSON

0.99+

DellORGANIZATION

0.99+

Dell TechnologiesORGANIZATION

0.99+

Palo AltoLOCATION

0.99+

Hewlett Packer EnterpriseORGANIZATION

0.99+

AWSORGANIZATION

0.99+

H WL Packard EnterpriseORGANIZATION

0.99+

NovemberDATE

0.99+

hpcORGANIZATION

0.99+

HPORGANIZATION

0.99+

Palo Alto, CaliforniaLOCATION

0.99+

todayDATE

0.99+

HPEORGANIZATION

0.99+

Packard EnterpriseORGANIZATION

0.99+

AppleORGANIZATION

0.99+

CubeORGANIZATION

0.98+

bothQUANTITY

0.98+

first timeQUANTITY

0.97+

hpeORGANIZATION

0.95+

this yearDATE

0.95+

CESEVENT

0.94+

10 yearsQUANTITY

0.92+

earthLOCATION

0.9+

BoltonORGANIZATION

0.87+

AEGORGANIZATION

0.85+

5gQUANTITY

0.85+

Cube StudiosORGANIZATION

0.81+

FurrierORGANIZATION

0.81+

fiveQUANTITY

0.81+

MoorePERSON

0.78+

IntelORGANIZATION

0.75+

cube.netOTHER

0.74+

this NovemberDATE

0.71+

silicon angle.comOTHER

0.71+

past decadesDATE

0.63+

DemocraticORGANIZATION

0.55+

Kit Colbert, VMware | VMware Explore 2022


 

>>Welcome back everyone to the cubes, live coverage here at VMware Explorer, 22. We're here on the ground on the floor of Mosco. I'm John for David ante. We're at kit Goldberg, CTO of VMware, the star of the show, the headliner@supercloud.world. The event we had just a few weeks ago, kit. Great to see you super excited to, to chat with you. Thanks for coming on. Oh >>Yeah. Happy to be here, man. It's been a wild week. Tons of excitement. We are jazzed. We're jacked, like to look at things >>For both, of course, jacked up and jazzed. Ready to go. So you got UN stage loved your keynote, you know, very CTO oriented, hit the, all your marks cloud native, the vSphere eight intro. Yep. More performance, more power. Yeah, more efficiency. And now the cloud native over the top, you shipped a white paper a few weeks ago, which we discussed at our super cloud event. Yep. You know, really laying out the narrative of cloud native. This is the priority for you. Is that true? Is that your only priority? What are the things going on right now for you that are your top priorities, >>Top priorities. So absolutely at a high level, it's flushing out this vision that, that we're talking about in terms of what we call cross cloud services. Other people call multi-cloud, you guys have super cloud, but the point is, I think what we see is that there's these different sort of vertical silos, the different public clouds they're on-prem data center edge. And what we're looking at is trying to create a new type of cloud something that's more horizontal in architecture. And I think this is something that we realize we've been doing at VMware for a while, and we gave it a name, we call it cross cloud. But what's important is that while we do bring a lot of value there, we can't possibly do everything. This has to be an industrywide movement. And so I think what we're really excited about is figuring out, okay, how do we actually build an architecture and a framework such that there's clear sort of lines of responsibility. Here's what one company does. Here's what another one does make sure that there's clean sort of APIs between that basically an overall architecture and structure. So that's probably one of the, the high level things that we're doing as an organization right now. >>What's been the feedback here at VMware Explorer, obviously the new name, Explorer rag laid that out in the keynote. Yep. It's about moving forward. Not replacing the community. Yep. Extending the world core and exploring new frontiers multicloud. Obviously one of them key. Yeah. Very clever actually names dig into it. It's nuanced. What's been the reaction. Yep. You're right. Yep. You're crazy. I love it. I need it. It's it's too early. It's perfect timing. No, it's a bit of, what's the feedback always a little >>Bit of everything, you know, I think one of us firstno people didn't really understand it. I think people were confused about what it was, but now that we're here in person, I think generally speaking, I'm hearing a lot of positive things about it. We've been gone or been apart for three years now, right? Since the last in person one, and this is an interesting opportunity for recreation sort of rebirth, right? We've certainly lost some traditions during the COVID pandemic, but also gives us the opportunity to build new ones. And to your point, world was always associated with virtualization. And of course, we're still doing that. We're still doing cloud infrastructure, but we're doing so much more. And given this focus on multi-cloud that I just mentioned and how it is the go forward focus for VMware, we wanted to evolve the conference to have that focus. And so I've been actually really pleased to see how many folks for it's their first time here. Right? They haven't been Tom worlds before and you know, this broader sort of conference that we're creating to, to apply to the support, more disciplines, different focus areas, you know, application development, developers, platform teams, you got cloud management things with aria, public cloud management, networking security, and user computing, all in addition to the core infrastructure bits. >>So John all week's been paying homage to, to Andy Grove talking about, let chaos rain and then rain in the chaos. Right. And so when you talk to customers, that chaos message cloud chaos, how is it resonating? Are they aware of that chaos? Are they saying, yes, we have cloud chaos or some saying, eh, yeah. It's okay. Everything's good. And they just maybe have some blind spots. What do >>You think? Yeah. I'm actually surprised at how strongly it's resonating. I mean, I think we knew that we were onto something, but people even love the specific term. They're like cloud chaos. I never thought about it that way, but you're like, you're absolutely right. It was a movie. It's a great, yeah. I know. Sounds like a thriller, but, but what we sort of, the picture we paint there about these silos across clouds, the duplication of technologies, duplication of teams and training, all this stuff. People realize that's where they're at. And it's one of those things where there's this headlong rush to cloud for good reasons. People wanted to be in the agility, but now they're dealing with some of that complexity that, that gets built up there and it absolutely is chaos. And while speed is great, you need to somehow balance that speed with control things like security compliance. These are sort of enterprise requirements that are sort of getting left out. And I think that's the realization, that's the sort of chaos that we're hitting on. >>It's almost like when in bus, in business school, you had the economic lines when break even hits, you know, cloud had a lot of great goodness to it. Yep. A lot of great value. It still does on the CapEx side, but as distributed computing architectures become reality. Yep. Private cloud instantiation of hybrid cloud operations. Now you've got edge and opening up all these new, new net new applications. Yep. What are you seeing there? And it's a question we've been asked some of the folks in the partner network, what are some of those new next gen apps that are gonna be enabled by, by this next wave edge specifically? Yeah. More performance, more application development, more software. Yeah. More faster, cheaper going on here. Kind of a Moore's law vibe there. What's next. >>Yeah. So, you know, when we look at edge, so, okay. Take today. Today. Edge is oftentimes highly customized software and hardware. It's not general purpose or to cloud technologies. And while edge is certainly gonna be limited. You can't just infinitely scale. Like you can in the cloud and the network bandwidth might be a little bit limited. You still wanna imagine it or manage it as if it were another cloud location, right. That like, I wanna be able to address it. Just like I addressed a certain availabilities done within AWS. I wanna be able to say the specific edge location at, you know, wherever somewhere here in San Francisco, let's say right now there's a few different things though. The first of which is that you got to manage at scale. Cause you don't have with cloud, you got a small number of very large locations with edge. >>You got a large number of very small locations. And so it's the scale is inverted there. So what this means is that you probably can't exactly specify which edge you want to go to. What instead you wanna say is more relational. Like I've got an IOT device out there. I want my app to be in data to be near it. And the system needs to figure out, okay, where do I put that thing? And how do I get it near it? And there may be some different constraints. You have cost security, privacy, it may be your edge or maybe telco edge location, you know, one, one of these sorts of things. Right? And so I think where we're going there is to enable the movement of applications and data to the right place. And this again goes back to the whole cross cloud architecture, right? >>You don't wanna be limited in terms of where you put an app, you wanna have that flexibility. This is the whole, you know, we use the term cloud smart. Right. And that's what it means. It's like put the, the app where it needs to be sort of the right tool for the right job. And so I think the innovation though, it's gonna be huge. You're gonna see new application architectures that the app can be placed near a user near a device near like a, an iPhone or near an IOT device, like a video camera. And the way that you manage that is gonna be much kind of infrastructure is code base. Yeah. So I think there's huge possibilities there. And it's really amazing to see just real quick on the telco side, what's happening there as well. The move to 5g, the move to open ran telco is now starting to adopt these data center and cloud technologies kinda standard building blocks that we use now out at the edge. So I think, you know, the amount of innovation that we're gonna see, >>It's really the first time on telco, they actually have a viable, scalable opportunity to, to put real gear data center, liked capabilities yep. At a location for specific purpose. Yeah. The edge function. >>Yeah. And well, and what we, without >>Building a, a monster >>Facility. Exactly. Yeah. It's like the base of a cell tower or something telephone closet. But what we've been able to do is improve these general purpose technologies. Like you look at vSphere in our hypervisor today. We are great at real time workloads, right? Like as a matter of fact, you look at performance on vSphere versus bare metal. Oftentimes an app runs faster on vSphere now because of all the efficiency and scale and so forth we can bring. So it means that these telecom applications that are very latency sensitive can now run fun on there. But Hey, guess what? Once you have a general purpose server that can run some of the telecom apps, well, Hey, you got extra space to run other apps. Maybe you could sell that space to customers or partners. And you know, then you have this new architecture >>Is the dev skill, a, a barrier for the, for the telcos, where are we at >>With that? It, it, it is. I think the barriers are really, how do you provide, I dunno if it's a skill set. I mean, there's probably some skill set aspects. I think in my mind, it's more about giving them the APIs to get access to that. Like, as I said, you're not gonna have developers knowing, okay, here are the specific geographic locations of all the cell towers in San Francisco and set what you're gonna say again, I need to be near this thing. And so you used geolocation and figure out, just put it some, put it in the right place. I don't really care. Right. So again, I think it's an evolution of management evolution of the APIs that developers use to access. Like today, I'm gonna say, okay, I know my app needs to be on the east coast so I can use us east one. I know the specific AZs at a, at a cloud level. That makes sense at an edge level. It doesn't, you're not gonna know. Okay. Like the specific cross streets or whatever, you gotta let the system figure that >>Out kid. I know you gotta go on. Times's tight, real quick. You got a session here on web three. Yeah. The Cube's got the, you know, the cube versus coming soon. We might be heavy. The cube versus coming powered by arm token, we had all kinds of stuff going on. Yep. You saw the preview a couple years ago. We did with the Cuban. Anyway, you did a session on web three and DM. VMware's rolling real quick. What was that about? Yeah, what's the purpose? >>What's the direction. That was a fascinating conversation. So I was talking about web three. It was talking about why enterprises haven't really started even to scratch the surface of the potential of web three. So part of it was like, okay, what is web three? It's a buzz words. We talked through that. We talked through the use of blockchain, how that sits with the core of a lot of web three. We talked about the use of cryptocurrency and how that makes sense. We talked about the consumerization, continuing consumerization of it. We've seen it with end user devices. We may well see it with some of the web three changes around ownership, individual ownership of data, of assets, et cetera. That's gonna have a downstream impact on enterprises, how they go to market their commercial models. So it was a fascinating discussion that unfortunately it's hard to summarize, but gotten to a lot of the nuances of this and some of the, are >>You bullish on >>It? Very bullish, a hundred percent. Like I think blockchain is a hugely enabling technology and not from a cryptocurrency standpoint, put that aside. All the enterprise use cases, we have customers like broad bridge financial today leveraging VMware blockchain, doing a hundred billion in transactions a day with the sort of repo market >>You think defi is booming >>Defi. So I, I think we're just starting to get there. But what you find is oftentimes these trends start on the consumer side and then all of a sudden they surprise enterprises. >>They call it a tri tried tread five traditional fi finance >>Versus okay. >>Any >>Other way around? No, no, no. But I'm saying is that it's, these consumer trends will start to impact enterprises. But what I'm saying is that enterprises need to be ready now or start preparing now for those comings. >>And what's the preparation for that? Just education learning. Yeah. >>Education learning, looking at blockchain, use cases, looking at what will this enable consumers to do that they couldn't do before there is gonna be a democratization of access to data. You're still gonna wanna have gatekeepers. You're still gonna wanna have enterprises or services that add value on top of that, but it's gonna be a bit more of an open ecosystem now, and that's gonna change some of the market dynamics in subtle ways. >>Okay. So we got one minute left. I want to ask you, what's your impression of the super cloud event we had also, you were headlining and you guys were a big part of bringing the, a large C of great people together. Are you happy with the outcome? What do you think's next for? >>Absolutely. No. I was super excited to see how much reception and engagement it got from across the industry. Right? So many different entry participants, so many different customers, partners, et cetera, viewing it online have had a lot of conversations here at explore already. As you know, you know, VMware, we put out a white paper, our point of view on what is a multi-cloud service. What is the taxonomy of those services? Again, as I mentioned before, we need to get as an industry to a place where we have alignment about this overall architecture to enable interoperability. And I think that's really the key thing. If we're gonna make this industry architectural shift, which is what I see coming, this is what we got. >>And you're gonna be jumping all in with this and helping out if we need you >>Hundred percent. All right. >>All in. I really love your transparency on the, on your white paper. Check out the white paper online on vmware.com. It's the cross cloud cloud native. I, I call the, the mission statement. It's not a Jerry McGuire memo. It's more me than that. It's the, it's the direction of cloud native. Yep. And multi-cloud thanks for coming on and, and thanks for doing that too. >>No, of course. And thanks for having me. Thanks. Love the discussion. >>Okay. More live coverage here at world Explorer, VMware Explorer, after the short break.

Published Date : Aug 31 2022

SUMMARY :

CTO of VMware, the star of the show, the headliner@supercloud.world. We're jacked, like to look at things And now the cloud native over the top, you shipped a white paper a few weeks ago, And I think this is something that we realize we've been doing at VMware for a while, What's been the feedback here at VMware Explorer, obviously the new name, Explorer rag laid that out Bit of everything, you know, I think one of us firstno people didn't really understand it. And so when you talk to customers, that chaos message cloud And while speed is great, you need to somehow balance that speed of the folks in the partner network, what are some of those new next gen apps that are gonna be enabled by, I wanna be able to say the specific edge location at, you know, wherever somewhere here in San Francisco, And the system needs to figure out, okay, where do I put that thing? And the way that you manage that is gonna be much kind It's really the first time on telco, they actually have a viable, scalable opportunity to, And you know, then you have this new architecture Like the specific cross streets or whatever, you gotta let the system figure The Cube's got the, you know, the cube versus coming soon. We talked about the use of cryptocurrency and how that makes sense. All the enterprise use cases, we have customers like broad But what you find is oftentimes But what I'm saying is that enterprises need to be ready now or start preparing now for those comings. And what's the preparation for that? but it's gonna be a bit more of an open ecosystem now, and that's gonna change some of the market dynamics in subtle ways. What do you think's next for? And I think that's really the key thing. All right. It's the cross cloud cloud native. Love the discussion.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Kit ColbertPERSON

0.99+

San FranciscoLOCATION

0.99+

three yearsQUANTITY

0.99+

Andy GrovePERSON

0.99+

VMwareORGANIZATION

0.99+

Jerry McGuirePERSON

0.99+

JohnPERSON

0.99+

telcoORGANIZATION

0.99+

TodayDATE

0.99+

AWSORGANIZATION

0.99+

todayDATE

0.99+

iPhoneCOMMERCIAL_ITEM

0.99+

Hundred percentQUANTITY

0.99+

one minuteQUANTITY

0.98+

DavidPERSON

0.98+

first timeQUANTITY

0.98+

COVID pandemicEVENT

0.98+

bothQUANTITY

0.98+

firstQUANTITY

0.97+

vSphereTITLE

0.96+

oneQUANTITY

0.96+

MoorePERSON

0.96+

three changesQUANTITY

0.95+

hundred percentQUANTITY

0.93+

few weeks agoDATE

0.87+

telco edgeORGANIZATION

0.87+

couple years agoDATE

0.86+

MoscoLOCATION

0.85+

kit GoldbergPERSON

0.84+

one companyQUANTITY

0.84+

CapExORGANIZATION

0.84+

a dayQUANTITY

0.83+

VMware ExploreTITLE

0.81+

VMware ExplorerORGANIZATION

0.78+

fiveQUANTITY

0.75+

threeQUANTITY

0.71+

vmware.comORGANIZATION

0.71+

AZsLOCATION

0.7+

waveEVENT

0.68+

headliner@supercloud.worldOTHER

0.68+

web threeOTHER

0.67+

east coastLOCATION

0.66+

VMware ExplorerTITLE

0.65+

CubanPERSON

0.65+

a hundred billionQUANTITY

0.64+

2022DATE

0.63+

CubeCOMMERCIAL_ITEM

0.59+

Tons of excitementQUANTITY

0.58+

east oneLOCATION

0.57+

ExplorerTITLE

0.56+

nextEVENT

0.53+

CTOPERSON

0.52+

VMwareTITLE

0.51+

threeOTHER

0.51+

cloudORGANIZATION

0.5+

22DATE

0.48+

webTITLE

0.47+

edgeTITLE

0.41+

5gORGANIZATION

0.32+

Sumit Dhawan, VMware | VMware Explore 2022


 

(upbeat music) >> Welcome back everyone to theCUBE's coverage of VMware Explore '22, formerly VMworld. This is our 12th year covering it. I'm John Furrier with Dave Vellente. Two sets, three days of wall-to-wall coverage. We're starting to get the execs rolling in from VMware. Sumit Dhawan, president of VMware's here. Great to see you. Great keynote, day one. >> Great to be here, John. Great to see you, Dave. Day one, super exciting. We're pumped. >> And you had no problem with the keynotes. We're back in person. Smooth as silk up there. >> We were talking about it. We had to like dust off a cobweb to make some of these inputs. >> It's not like riding a bike. >> No, it's not. We had about 40% of our agencies that we had to change out because they're no longer in business. So, I have to give kudos to the team who pulled it together. They did a fabulous job. >> You do a great check, great presentation. I know you had a lot to crack in there. Raghu set the table. I know this is for him, this was a big moment to lay out the narrative, address the Broadcom thing right out of the gate, wave from Hock Tan in the audience, and then got into the top big news. Still a lot of meat on the bone. You get up there, you got to talk about the use cases, vSphere 8, big release, a lot of stuff. Take us through the keynote. What was the important highlights for you to share, the folks watching that didn't see the keynote or wanted to get your perspective? >> Well, first of all, did any of you notice that Raghu was running on the stage? He did not do that in rehearsal. (John chuckles) I was a little bit worried, but he really did it. >> I said, I betcha that was real. (everyone chuckles) >> Anyways, the jokes aside, he did fabulous. Lays out the strategy. My thinking, as you said, was to first of all speak with their customers and explain how every enterprise is facing with this concept of cloud chaos that Raghu laid out and CVS Health story sort of exemplifies the situation that every customer is facing. They go in, they start with cloud first, which is needed, I think that's the absolutely right approach. Very quickly build out a model of getting a cloud ops team and a platform engineering team which oftentimes be a parallel work stream to a private cloud infrastructure. Great start. But as Roshan, the CIO at CVS Health laid out, there's an inflection point. And that's when you have to converge these because the use cases are where stakeholders, this is the lines of businesses, app developers, finance teams, and security teams, they don't need this stove piped information coming at 'em. And the converge model is how he opted to organize his team. So we called it a multi-cloud team, just like a workspace team. And listen, our commitment and innovations are to solve the problems of those teams so that the stakeholders get what they need. That's the rest of the keynote. >> Yeah, first of all, great point. I want to call out that inflection point comment because we've been reporting coming into VMworld with super cloud and other things across open source and down into the weeds and into the hood. The chaos is real. So, good call. I love how you guys brought that up there. But all industry inflection points, if you go back in history of the tech industry, at every single major inflection point, there was chaos, complexity, or an enemy proprietary. However you want to look at it, there was a situation where you needed to kind of reign in the chaos as Andy Grove would say. So we're at that inflection point, I think that's consistent. And also the ecosystem floor yesterday, the expo floor here in San Francisco with your partners, it was vibrant. They're all on this wave. There is a wave and an inflection point. So, okay. I buy that. So, if you buy the inflection point, what has to happen next? Because this is where we're at. People are feeling it. Some say, I don't have a problem but they're cut chaos such is the problem. So, where do you see that? How does VMware's team organizing in the industry and for customers specifically to solve the chaos, to reign it in and cross over? >> Yeah, you're a 100% right. Every inflection point is associated with some kind of a chaos that had to be reigned in. So we are focused on two major things right now which we have made progress in. And maybe third, we are still work in-progress. Number one is technology. Today's technology announcements are directly to address how that streamlining of chaos can be done through a cloud smart approach that we laid out. Our Aria, a brand new solution for management, significant enhancements to Tanzu, all of these for public cloud based workloads that also extend to private cloud. And then our cloud infrastructure with newer capabilities with AWS, Azure, as well as with new innovations on vSphere 8 and vSAN 8. And then last but not the least, our continuous automation to enable anywhere workspace. All these are simple innovation that have to address because without those innovations, the problem is that the chaos oftentimes is created because lack of technology and as a result structure has to be put in place because tooling and technology is not there. So, number one goal we see is providing that. Second is we have to be independent, provide support for every possible cloud but not without being a partner of theirs. That's not an easy thing to do but we have the DNA as a company, we have done that with data centers in the past, even though being part of Dell we did that in the data center in the past, we have done that in mobility. And so we have taken the challenge of doing that with the cloud. So we are continually building newer innovation and stronger and stronger partnerships with cloud provider which is the basis of our commercial relationships with Microsoft Azure too, where we have brought Azure VMware solution into VMware cloud universal. Again, that strengthens the value of us being neutral because it's very important to have a Switzerland party that can provide these multi-cloud solutions that doesn't have an agenda of a specific cloud, yet an ecosystem, or at least an influence with the ecosystem that can bring going forward. >> Okay, so technology, I get that. Open, not going to be too competitive, but more open. So the question I got to ask you is what is the disruptive enabler to make that happen? 'Cause you got customers, partners and team of VMware, what's the disruptive enabler that's going to get you to that level? >> Over the hump. I mean, listen, our value is this community. All this community has one of two paths to go. Either, they become stove piped into just the public-private cloud infrastructure or they step up as this convergence that's happening around them to say, "You know what? I have the solution to tame this multi-cloud complexity, to reign the chaos," as you mentioned because tooling and technologies are available. And I know they work with the ecosystem. And our objective is to bring this community to that point. And to me, that is the best path to overcome it. >> You are the connective tissue. I was able to sit into the analyst meeting today. You were sort of the proxy for CVS Health where you talked about the private that's where you started, the public cloud ops team, bringing that together. The platform is the glue. That is the connective tissue. That's where Tanzu comes in. That's where Aria comes in. And that is the disruptive technology which it's hard to build that. >> From a technology perspective, it's an enabler of something that has never been done before in that level of comprehensiveness, from a more of a infrastructure side thinking perspective. Yes, infrastructure teams have enabled self-service portals. Yes, infrastructure teams have given APIs to developers, but what we are enabling through Tanzu is completely next level where you have a lot richer experience for developers so that they never ever have to think about the infrastructure at all. Because even when you enable infrastructure as API, that's still an API of the infrastructure. We go straight to the application tier where they're just thinking about authorized set of microservices. Containers can be orchestrated and built automatically, shifting security left where we're truly checking them or enabling them to check the security vulnerabilities as they're developing the application, not going into the production when they have to touch the infrastructure. To me, that's an enabler of a special power that this new multi-cloud team can have across cloud which they haven't had in the past. >> Yeah, it's funny, John, I'd say very challenging technically. The challenge in 2010 was the software mainframe, remember the marketing people killed that term. >> Yeah, exactly. >> But you think about that. We're going to make virtualization and the overhead associated with that irrelevant. We're going to be able to run any workload and VMware achieved that. Now you're saying we run anything anywhere, any Kubernete, any container. >> That's the reality. That's the chaos. >> And the cloud and that's a new, real problem. Real challenging problem that requires serious engineering. >> Well, I mean it's aspirational, right? Let's get the reality, right? So true spanning cloud, not yet there. You guys, I think your vision is definitely right on in the sense that we'd like the chaos and multicloud's a reality. The question is AWS, Azure, Google Cloud, other clouds, they're not going to sit still. No one's going to let VMware just come up and take everything. You got to enable so the market- >> True, true. I don't think this is the case of us versus them because there is so much that they have to express in terms of the value of every cloud. And this happened in the case of, by the way, whether you go into infrastructure or even workspace solutions, as long as the richest of the experience and richest of the controls are provided, for their cloud to the developers that makes the adoption of their cloud simpler. It's a win-win for every party. >> That's the key. I think the simplest. So, I want to ask you, this comes up a lot and I love that you brought that up, simple and self-service has proven developers who are driving the change, cloud DevOps developers. They're driving the change. They're in charge more than ever. They want self-service, easier to deploy. I want a test, if I don't like it, I want to throw it away. But if I like something, I want to stick with it. So it's got to be self-service. Now that's antithetical to the old enterprise model of solve complexity with more complexity. >> Yeah, yeah. >> So the question for you is as the president of VMware, do you feel good that you guys are looking out over the landscape where you're riding into the valley of the future with the demand being automation, completely invisible, abstraction layer, new use case scenarios for IT and whatever IT becomes. Take us through your mindset there, because I think that's what I'm hearing here at this year, VMware Explorer is that you guys have recognized the shift in demographics on the developer side, but ops isn't going away either. They're connecting. >> They're connected. Yeah, so our vision is, if you think about the role of developers, they have a huge influence. And most importantly they're the ones who are driving innovation, just the amount of application development, the number of developers that have emerged, yet remains the scarcest resource for the enterprise are critical. So developers often time have taken control over decision on infrastructure and ops. Why? Because infrastructure and ops haven't shown up. Not because they like it. In fact, they hate it. (John chuckles) Developers like being developers. They like writing code. They don't really want to get into the day to day operations. In fact, here's what we see with almost all our customers. They start taking control of the ops until they go into production. And at that point in time, they start requesting one by one functions of ops, move to ops because they don't like it. So with our approach and this sort of, as we are driving into the beautiful valley of multi-cloud like you laid out, in our approach with the cross cloud services, what we are saying is that why don't we enable this new team which is a reformatted version of the traditional ops, it has the platform engineering in it, the key skill that enables the developer in it, through a platform that becomes an interface to the developers. It creates that secure workflows that developers need. So that developers think and do what they really love. And the infrastructure is seamless and invisible. It's bound to happen, John. Think about it this way. >> Infrastructure is code. >> Infrastructure has code, and even next year, it's invisible because they're just dealing with the services that they need. >> So it's self-service infrastructure. And then you've got to have that capability to simplified, I'll even say automated or computational governance and security. So Chris Wolf is coming on Thursday. >> Yeah. >> Unfortunately I won't be here. And he's going to talk about all the future projects. 'Cause you're not done yet. The project narrows, it's kind of one of these boring, but important. >> Yeah, there's a lot of stuff in the oven coming out. >> There's really critical projects coming down the pipeline that support this multi-cloud vision, is it's early days. >> Well, this is the thing that we were talking about. I want to get your thoughts on. And we were commenting on the keynote review, Hock Tan bought VMware. He's a lot more there than he thought. I mean, I got to imagine him sitting in the front row going there's some stuff coming out of the oven. I didn't even, might not have known. >> He'd be like, "Hmm, this extra value." (everyone chuckles) >> He's got to be pretty stoked, don't you think? >> He is, he is. >> There's a lot of headroom on the margin. >> I mean, independent to that, I think the strategy that he sees is something that's compelling to customers which is what, in my assessment, speaking with him, he bought VMware because it's strategic to customers and the strategic value of VMware becomes even higher as we take our multi-cloud portfolio. So it's all great. >> Well, plus the ecosystem is now re-energize. It's always been energized, but energized cuz it's sort of had to be, cuz it's such a strong- >> And there was the Dell history there too. >> But, yeah it was always EMC, and then Dell, and now it's like, wow, the ecosystem's- >> Really it's released almost. I like this new team, we've been calling this new ops kind of vibe going refactored ops, as you said, that's where the action's happening because the developers want to go faster. >> They want to go faster. >> They want to go fast cuz the velocity's paying off of them. They don't want to have to wait. They don't want security reviews. They want policy. They want some guardrails. Show me the track. >> That's it. >> And let me drive this car. >> That's it because I mean think about it, if you were a developer, listen, I've been a developer. I never really wanted to see how to operate the code in production because it took time away for developing. I like developing and I like to spend my time building the applications and that's the goal of Aria and Tanzu. >> And then I got to mention the props of seeing project Monterey actually come out to fruition is huge because that's the future of computing architecture. >> I mean at this stage, if a customer from here on is modernizing their infrastructure and they're not investing in a holistic new infrastructure from a hardware and software perspective, they're missing out an opportunity on leveraging the numbers that we were showing, 20% increase in calls. Why would you not just make that investment on both the hardware and the software layer now to get the benefits for the next five-six years. >> You would and if I don't have to make any changes and I get 20% automatically. And the other thing, I don't know if people really appreciate the new curve that the Silicon industry is on. It blows away the history of Moore's law which was whatever, 35-40% a year, we're talking about 100% a year price performance or performance improvements. >> I think when you have an inflection point as we said earlier, there's going to be some things that you know is going to happen, but I think there's going to be a lot that's going to surprise people. New brands will emerge, new startups, new talent, new functionality, new use cases. So, we're going to watch that carefully. And for the folks watching that know that theCUBE's been 12 years with covering VMware VMworld, now VMware Explore, we've kind of met everybody over the years, but I want to point out a little nuance, Raghu thing in the keynote. During the end, before the collective responsibility sustainment commitment he had, he made a comment, "As proud as we are," which is a word he used, there's a lot of pride here at VMware. Raghu kind of weaved that in there, I noticed that, I want to call that out there because Raghu's proud. He's a proud product guy. He said, "I'm a product guy." He's delivering keynote. >> Almost 20 years. >> As proud as we are, there's a lot of pride at VMware, Sumit, talk about that dynamic because you mentioned customers, your customer is not a lot of churn. They've been there for a long time. They're embedded in every single company out there, pretty much VMware is in every enterprise, if not all, I mean 99%, whatever percentage it is, it's huge penetration. >> We are proud of three things. It comes down to number one, we are proud of our innovations. You can see it, you can see the tone from Raghu or myself, or other executives changes with excitement when we're talking about our technologies, we're just proud. We're just proud of it. We are a technology and product centric company. The second thing that sort of gets us excited and be proud of is exactly what you mentioned, which is the customers. The customers like us. It's a pleasure when I bring Roshan on stage and he talks about how he's expecting certain relationship and what he's viewing VMware in this new world of multi-cloud, that makes us proud. And then third, we're proud of our talent. I mean, I was jokingly talking to just the events team alone. Of course our engineers do amazing job, our sellers do amazing job, our support teams do amazing job, but we brought this team and we said, "We are going to get you to run an event after three years from not they doing one, we're going to change the name on you, we're going to change the attendees you're going to invite, we're going to change the fact that it's going to be new speakers who have never been on the stage and done that kind of presentation. >> You're also going to serve a virtual audience. >> And we're going to have a virtual audience. And you know what? They embraced it and they surprised us and it looks beautiful. So I'm proud of the talent. >> The VMware team always steps up. You never slight it, you've got great talent over there. The big thing I want to highlight as we end this day, the segment, and I'll get your thoughts and reactions, Sumit, is again, you guys were early on hybrid. We have theCUBE tape to go back into the video data lake and find the word hybrid mentioned 2013, 2014, 2015. Even when nobody was talking about hybrid. >> Yeah, yeah. >> Multicloud, Raghu, I talked to Raghu in 2016 when he did the Pat Gelsinger, I mean Raghu, Pat and Andy Jassy. >> Yeah. >> When that cloud thing got cleared up, he cleared that up. He mentioned multicloud, even then 2016, so this is not new. >> Yeah. >> You had the vision, there's a lot of stuff in the oven. You guys make announcements directionally, and then start chipping away at it. Now you got Broadcom buys VMware, what's in the oven? How much goodness is coming out that's like just hitting the fruits are starting to bear on the tree. There's a lot of good stuff and just put that, contextualize and scale that for us. What's in the oven? >> First of all, I think the vision, you have to be early to be first and we believe in it. Okay, so that's number one. Now having said that what's in the oven, you would see us actually do more controls across cloud. We are not done on networking side. Okay, we announced something as project Northstar with networking portfolio, that's not generally available. That's in the oven. We are going to come up with more capability on supporting any Kubernetes on any cloud. We did some previews of supporting, for example, EKS. You're going to see more of those cluster controls across any Kubernetes. We have more work happening on our telco partners for enablement of O-RAN as well as our edge solutions, along with the ecosystem. So more to come on those fronts. But they're all aligned with enabling customers multi-cloud through these five cross cloud services. They're all really, some of them where we have put a big sort of a version one of solution out there such as Aria continuation, some of them where even the version one's not out and you're going to see that very soon. >> All right. Sumit, what's next for you as the president? You're proud of your team, we got that. Great oven description of what's coming out for the next meal. What's next for you guys, the team? >> I think for us, two things, first of all, this is our momentum season as we call it. So for the first time, after three years, we are now being in, I think we've expanded, explored to five cities. So getting this orchestrated properly, we are expecting nearly 50,000 customers to be engaging in person and maybe a same number virtually. So a significant touchpoint, cuz we have been missing. Our customers have departed their strategy formulation and we have departed our strategy formulation. Getting them connected together is our number one priority. And number two, we are focused on getting better and better at making customers successful. There is work needed for us. We learn, then we code it and then we repeat it. And to me, those are the two key things here in the next six months. >> Sumit, thank you for coming on theCUBE. Thanks for your valuable time, sharing what's going on. Appreciate it. >> Always great to have chatting. >> Here with the president, the CEO's coming up next in theCUBE. Of course, we're John and Dave. More coverage after the short breaks, stay with us. (upbeat music)

Published Date : Aug 30 2022

SUMMARY :

We're starting to get the Great to be here, John. And you had no problem We had to like dust off a cobweb So, I have to give kudos to the team Still a lot of meat on the bone. did any of you notice I said, I betcha that was real. so that the stakeholders and into the hood. Again, that strengthens the So the question I got to ask you is I have the solution to tame And that is the disruptive technology so that they never ever have to think the software mainframe, and the overhead associated That's the reality. And the cloud and in the sense that we'd like the chaos that makes the adoption and I love that you brought that up, So the question for you is the day to day operations. that they need. that capability to simplified, all the future projects. stuff in the oven coming out. coming down the pipeline on the keynote review, He'd be like, "Hmm, this extra value." headroom on the margin. and the strategic value of Well, plus the ecosystem And there was the because the developers want to go faster. cuz the velocity's paying off of them. and that's the goal of Aria and Tanzu. because that's the future on leveraging the numbers that the Silicon industry is on. And for the folks watching because you mentioned customers, to get you to run an event You're also going to So I'm proud of the talent. and find the word hybrid I talked to Raghu in 2016 he cleared that up. that's like just hitting the That's in the oven. for the next meal. So for the first time, after three years, Sumit, thank you for coming on theCUBE. the CEO's coming up next in theCUBE.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
JohnPERSON

0.99+

2016DATE

0.99+

DellORGANIZATION

0.99+

Sumit DhawanPERSON

0.99+

DavePERSON

0.99+

SumitPERSON

0.99+

2013DATE

0.99+

Chris WolfPERSON

0.99+

VMwareORGANIZATION

0.99+

San FranciscoLOCATION

0.99+

RoshanPERSON

0.99+

MicrosoftORGANIZATION

0.99+

2014DATE

0.99+

20%QUANTITY

0.99+

CVS HealthORGANIZATION

0.99+

2010DATE

0.99+

Dave VellentePERSON

0.99+

2015DATE

0.99+

Andy JassyPERSON

0.99+

PatPERSON

0.99+

100%QUANTITY

0.99+

ThursdayDATE

0.99+

Pat GelsingerPERSON

0.99+

12 yearsQUANTITY

0.99+

Andy GrovePERSON

0.99+

99%QUANTITY

0.99+

five citiesQUANTITY

0.99+

Hock TanPERSON

0.99+

three daysQUANTITY

0.99+

SecondQUANTITY

0.99+

AWSORGANIZATION

0.99+

John FurrierPERSON

0.99+

RaghuPERSON

0.99+

BroadcomORGANIZATION

0.99+

NorthstarORGANIZATION

0.99+

yesterdayDATE

0.99+

oneQUANTITY

0.99+

first timeQUANTITY

0.99+

12th yearQUANTITY

0.99+

thirdQUANTITY

0.99+

SumitORGANIZATION

0.99+

two thingsQUANTITY

0.99+

Two setsQUANTITY

0.99+

vSAN 8TITLE

0.99+

vSphere 8TITLE

0.98+

next yearDATE

0.98+

TanzuORGANIZATION

0.98+

todayDATE

0.98+

TodayDATE

0.98+

MulticloudORGANIZATION

0.98+

AriaORGANIZATION

0.98+

EMCORGANIZATION

0.98+

three thingsQUANTITY

0.97+

nearly 50,000 customersQUANTITY

0.97+

firstQUANTITY

0.97+

VMworldORGANIZATION

0.96+

bothQUANTITY

0.96+

about 40%QUANTITY

0.96+

two key thingsQUANTITY

0.96+

five cross cloud servicesQUANTITY

0.95+

two pathsQUANTITY

0.94+

35-40%QUANTITY

0.93+

next six monthsDATE

0.93+

Justin Murrill, AMD & John Frey, HPE | HPE Discover 2022


 

>> Announcer: theCUBE presents HPE Discover 2022. Brought to you by HPE. >> Okay, we're back here at HPE Discover 2022, theCUBE's continuous coverage. This is day two, Dave Vellante with John Furrier. John Frey's here. He is the chief technologist for sustainable transformation at Hewlett Packard Enterprise and Justin Murrill who's the director of corporate responsibility for AMD. Guys, welcome to theCUBE. Good to see you. >> Thank you. >> Thank you. It's great to be here. >> So again, I remember the days where, you know, CIOs didn't really care about the power budget. They didn't pay the power budget. You had, you know, facilities over here, IT over here and they didn't talk to each other. That's changed. Why is there so much discussion around sustainable IT today? >> It's exciting to see how much it's up leveled, as you say. I think there are a couple different trends happening but mainly, you know, the IT teams and IT leaders that are making decisions are seeing to your point how their decisions are affecting enterprise level, greenhouse gas emission reduction goals. So that connection is becoming very clear. Everything from the server processor to beyond it, those decisions have a key role. And importantly we're seeing, you know, 60% of the Fortune 500 now have climate or energy efficiency related goals. So there's a perfect storm of sorts happening where more companies setting goals, IT decision makers looking particularly at the data center because as the computational heart of an organization, it has a wealth of opportunity from an energy and a mission savings perspective. >> I'm surprised it's only 60%. I mean, that number really shocked me. So it's got to be a 100% within the next couple of years here. I would think, I mean, it's not trivial, right? You've got responsibilities in terms of reporting and you can't just mail it in, right? >> Yeah, absolutely. So there's a lot more disclosure happening but the goal setting is really upleveling as well. >> And the metrics involved too. Can you just scope the scale and challenge of like getting the right metrics, not when you have the goals. Does that factor in, how do you see there? What's your commentary on that? >> Yeah, I think there's, the aperture is continuing to open as metrics go, so to speak. So from an operations perspective, companies are reporting on what's referred to as scope one and scope two. And scope two is the big one from electricity, right? And then scope three is everything else. That's the supply chain and the outside of that. So a lot of implications there as well from IT decision making. >> Is there a business case for sustainable IT? I mean, you're probably not going to lower the power budget, right? But is it just, hey, it's the right thing to do. We have to do it, it's good for the brand. It'll allow us to attract people or is there a a more of a rich business case? >> So there really is a business case even just within inside the data center walls, for example. There's inefficiencies that are inherent in many of these data centers. There's really low utilization levels as well. And by reducing over provisioning and increasing utilization, there's real money to be saved in terms of equipment costs, maintenance agreement costs, software licensing costs. So actually the power consumption and the environmental piece is an added benefit but it's not the main reason. So we actually had IDC do a survey for us last year and we asked IT executives, 500 senior IT executives, were you implementing sustainable IT programs and why? My guess initially was about 40% of them would say yes. Actually the number was 96% of them. And when we asked them why they fell into three categories. The digital leaders, those that are the early adopters moving the quickest. They said we do it to attract and retain institutional investors. They've been hearing from their boards. They've been hearing from their investor relations teams and investors are starting to ask and even in a couple cases board seats are becoming contentious based on the environmental perspective of the person being nominated. This digital mainstream, the folks in the middle about 80% of the total pie, they're doing it to attract and retain customers because customers are asking them about their sustainable IT programs. If they're a non-manufacturing customer, their data center consumption is probably the largest part of their company. It's also by the way usually the most expensive real estate the company owns. So customers are asking and customers are not only asking, do you have basic programs in place? But they're asking, what are your goals to Justin's point? The customers are starting to realize that carbon goals have been vaguely defined historically. So they're asking for specificity, they're asking for transparency and by the way the science-based target initiative recently released their requirements for net zero science-based targets. And that requires significant reduction to your point before you start considering renewable energy in that balance. The third reason those digital followers, that slowest group or folks that are in industries that move the slowest, they said they were doing this to attract and retain employees. Because they recognize the data scientists, the computer science, computer engineering students that they're trying to attract want to work at a company where they can see how what they do directly contributes to purpose. And they vote with their feet. If they come on and they can't make that connection pretty quickly or if they spend a lot of their time chasing down inefficiencies in a technology infrastructure, they're not going to stay there very long. >> I mean, the mission-driven organization is definitely an employee factor. People are interested in that. The work for company is responsible, doing the right thing but that business case is interesting because I think there's recognition now more than ever before. You think you're right on. It used to be kind of like mailed it in before. Okay, we're doing some stuff. Now it's like, we all have to do it. And it's a board issue. It's a financing issue. It might be a filing issue as you guys mentioned. So that's all great. So I got to ask how you guys specifically are working together, AMD and HPE. What are you guys doing to make it more efficient? And then I'll see with Cloud and Cloud scale, there's more servers being shipped now than ever before. And more devices at the edge. What are you guys doing together specifically? >> Yeah, we've been working together, AMD and HPE on advancing sustainability for many years. I've had the opportunity to working directly with John for many years and I've learned a lot from him and your team. It's fantastic to see all the developments here. I mean, so most recently the top 500 and the green 500 list of supercomputers came out. And at the top of that list is AMD HPE systems. And it shows kind of the pinnacle of what can be possible for other data centers looking to modernize and scale. So the number one system, the fastest system in the world and the most energy efficient system in the world, the Frontier supercomputer has AMD HPE technology in it. And it just passed the exit scale barrier. I mean, I'm still just blown away by this. A billion, billion calculations per second. It's just amazing. And the research is doing around clean energy, alternative energy sources, scientific research is really exciting. So there's that. The other system that really jumps out is the LUMI system, the number three system because it's a 100% powered by renewable energy. So not only that, it takes the heat and it channels it to a nearby town and covers 20% of that town's heating needs thereby avoiding 12,400 metric tons of carbon emissions. So this system is carbon negative, right? And you just go down the list. I mean, AMD is in the top eight out of 10 most green... >> Rewind that second. So you have the heat and the power shifting to a town? >> Yes, the LUMI supercomputer has the heat from the system to an nearby town. It's like a closed loop, the idea of circular economy but with energy. And it takes that waste and it makes it an input, a resource. >> But this is the kind of innovation that's going on, right? This is the scale, this is where scale and efficiency kind of come together. That's huge. Where's that going to go? What's your perspective on where that goes next because that's a blueprint that could be replicated. >> You bet. So I think we're going to continue to see overall power consumption go up at the system level. But performance per wat is climbing much more dramatically. So I think that's going to continue to scale. It's going to require a new cooling technology. So direct liquid cooling is becoming more and more in use and customers really interested in that. There's shifting from industry standard architectures to lower end high performance computer architectures to get direct liquid cooling, higher core processors and get the performance they want in a smaller footprint. And at the same time, they're really thinking about how do we operate the infrastructure as a system not as individual piece parts. And one of the things that Frontier and LUMI do so well is they were designed from the start as a system, not as piece parts making up the system. So I think that happens. The other thing that's really critical is no one company is going to solve these challenges ourself. So one of the things I love about our partnership with AMD is we look at each other's sustainability goals before we launch 'em. We say, well, how can we help? One of AMD's goals that I'll let Justin talk about came about because HPE at the time of separation laid a really aggressive product, energy efficiency goal out, said but we're not sure how we're going to make this. And AMD said we can help. So that collaboration, we critique each other's programs, we push each other, but we work together. I like to say partnership is leadership in this. >> Well, that's a nuance point. Before you get to that solution there Justin, this system's thinking is really important. You're seeing that now with Cloud. Some of the things that GreenLake and the systems are pointing out, this holistic systems' thinking is applied to partnerships, not just the company. >> Yep. >> This is a really nuanced point but we're seeing that more and more. >> Yeah, absolutely. In fact, Justin mentioned the heat reuse, same way with the national renewable energy lab. They actually did snow removal and building heating with the heat reuse. So if you're designing for example, a liquid cold system from the start, how do you make it a symbiotic relationship? There's more and more interest in co-locating data centers and greenhouses in colder environments for example. Because the principle of the circular economy is nothing is waste. So if you think it's waste or you think it's a byproduct, think about how can that be an input to something else. >> Right, so you might put a data center so you can use ambient cooling or in somewhere in the Columbia River so you can, you know, take advantage of, you know, renewable energy. What are some goals that you guys can share with us? >> So we've got some great momentum and a track record coming off of, going back to 2014, we set a 25 by 20 goal to improve the energy efficiency for our mobile processors and mobile devices, right? So laptops. And we were able to achieve a 31.7x in that timeframe. So which was twice the industry trend to that. And then moving on, we've doubled down on data center and we've set a new goal of a 30x increase in energy efficiency for our server processors and accelerators to really focused on HPC and AI training. So that's a 30x goal over 2020 to 2025 focused on these really important workloads 'cause they're fast growing. We heard yesterday 150 billion devices connected by 2025 generating a lot of data, right? So that's one of the reasons why we focused on that. 'Cause these are demanding workloads. And this represents a 2.5x increase over the historical trend, right? And fundamentally speaking, that's a 97% reduction in energy use per computation in five years. So we're very pleased. We announced an update recently. We're at 6.8x. We're on track for this goal and making great progress and showing how these, you know, solutions at a processor level and an accelerator level can be amplified, taken into HPE technology. >> Generally tech companies, you know, that compete want to rip each other's faces off. And is that the case in this space or do you guys collaborate with your competitors to share best practice? Is that beginning? Is it already there? >> There's much more collaboration in this space. This is one of the safe places I think where collaboration does occur more. >> Yeah. And we've all got to work together. A great example that was in the supply chain. When HPE first set our supply chain expectations for our suppliers around things like worker rights and environment and worker protection from a health and safety perspective. We initially had our code of conduct asked their suppliers to comply with it. Started auditing in event. And we quickly got into the factories and saw they were doing it for our workloads. But if you looked around the factory, they weren't doing in other places. And we took a step back and said, well, wait a minute. Why is that? And they said that vendor doesn't require it. So we took a step back and said let's get the industry together. We share a common supply chain. How do we have a common set of expectations and push them out to our supply chain? How to now do third party audits so the same supplier doesn't get audited by each of the major vendors and then share those audit results. And what we found was that really had a large lever effect of moving the electronic supply chain much more rapidly towards our expectations in all those areas. Well then other industries looked and said, well, wait a minute, if that worked for electronics, it'll probably work broader. And so now, the output of that is what's called the responsible business alliance across many industries taking that same approach. So that's a pre-competitive. We all have the same challenge. In many cases we share a common supply chain. So that's a great example of electronic companies coming together, design standards for things. There's a green grid group at the moment looking at liquid cooling connects. You know, we don't want every vendor to have a different connection point for liquid cooling for example. So how do we standardize that to make our customers have a easier time about looking at the technologies they want from any vendor and having common connection points. >> Right. Okay. So a lot of collaboration. Last question. How much of a difference do you think it can make? In other words, what percent of the blame pie goes to information technology? And I think regardless, you got to do your part. Will it make a dent? >> I think the sector has done a really good job of keeping that increase from going up while exponentially increasing performance. So it's been a really amazing industry effort. And moving forward, I think this is more important than ever, right? And with the slowdown of Moore's law we're seeing more gains that need to come from beyond process architecture to include packaging innovations, to power management, to just the architecture here. So the challenge of mitigating and minimizing energy growth is important. And we believe like with that 30x energy efficiency goal that it is doable but it does take a lot of collaboration and focus. >> That's a great point. I mean, if you didn't pay attention to this, IT could really become a big piece of the pie. Guys thanks so much for coming on theCUBE. Really appreciate. >> People are watching. They're paying attention at all levels. Congratulations. >> Absolutely. >> All right, Dave Vellante, John Furrier and our guests. Don't forget to go to SiliconANGLE.com for all the news. Our YouTube channel, actually go to CUBE.net. You'll get all these videos in our YouTube channel, youtube.com/SiliconANGLE. You can check out everything on demand. Keep it right there. We'll be right back. HPE Discover 2022 from Las Vegas. You're watching theCUBE. (soft music)

Published Date : Jun 29 2022

SUMMARY :

Brought to you by HPE. He is the chief technologist It's great to be here. So again, I remember the days where, Everything from the server So it's got to be a 100% but the goal setting is And the metrics involved too. and the outside of that. the right thing to do. and by the way the science-based So I got to ask how you guys specifically I've had the opportunity to So you have the heat and the has the heat from the system This is the scale, and get the performance they and the systems are pointing out, a really nuanced point but a liquid cold system from the start, or in somewhere in the So that's one of the reasons And is that the case in this space This is one of the safe places And so now, the output of that of the blame pie goes So the challenge of mitigating a big piece of the pie. People are watching. SiliconANGLE.com for all the news.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Justin MurrillPERSON

0.99+

Dave VellantePERSON

0.99+

John FreyPERSON

0.99+

Dave VellantePERSON

0.99+

John FurrierPERSON

0.99+

JustinPERSON

0.99+

30xQUANTITY

0.99+

20%QUANTITY

0.99+

2014DATE

0.99+

100%QUANTITY

0.99+

2.5xQUANTITY

0.99+

AMDORGANIZATION

0.99+

JohnPERSON

0.99+

LUMIORGANIZATION

0.99+

12,400 metric tonsQUANTITY

0.99+

31.7xQUANTITY

0.99+

97%QUANTITY

0.99+

FrontierORGANIZATION

0.99+

last yearDATE

0.99+

96%QUANTITY

0.99+

25QUANTITY

0.99+

2025DATE

0.99+

Las VegasLOCATION

0.99+

Columbia RiverLOCATION

0.99+

HPEORGANIZATION

0.99+

yesterdayDATE

0.99+

oneQUANTITY

0.99+

eachQUANTITY

0.99+

five yearsQUANTITY

0.99+

500 senior IT executivesQUANTITY

0.99+

GreenLakeORGANIZATION

0.99+

twiceQUANTITY

0.99+

60%QUANTITY

0.99+

6.8x.QUANTITY

0.99+

10QUANTITY

0.99+

Hewlett Packard EnterpriseORGANIZATION

0.98+

third reasonQUANTITY

0.98+

2020DATE

0.98+

theCUBEORGANIZATION

0.97+

CUBE.netOTHER

0.97+

150 billion devicesQUANTITY

0.97+

about 40%QUANTITY

0.97+

about 80%QUANTITY

0.97+

SiliconANGLE.comOTHER

0.96+

YouTubeORGANIZATION

0.96+

OneQUANTITY

0.96+

firstQUANTITY

0.95+

LUMIOTHER

0.92+

zeroQUANTITY

0.92+

eightQUANTITY

0.91+

HPE Discover 2022TITLE

0.91+

IDCORGANIZATION

0.9+

20 goalQUANTITY

0.89+

todayDATE

0.88+

carbonQUANTITY

0.86+

day twoQUANTITY

0.8+

500QUANTITY

0.78+

next couple of yearsDATE

0.77+

billion, billion calculations per secondQUANTITY

0.74+

three categoriesQUANTITY

0.72+

coupleQUANTITY

0.7+

500 listQUANTITY

0.7+

youtube.com/SiliconANGLEOTHER

0.7+

three systemQUANTITY

0.66+

Discover 2022EVENT

0.66+

Breaking Analysis: Broadcom, Taming the VMware Beast


 

>> From theCUBE studios in Palo Alto in Boston, bringing you data driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> In the words of my colleague CTO David Nicholson, Broadcom buys old cars, not to restore them to their original luster and beauty. Nope. They buy classic cars to extract the platinum that's inside the catalytic converter and monetize that. Broadcom's planned 61 billion acquisition of VMware will mark yet another new era and chapter for the virtualization pioneer, a mere seven months after finally getting spun out as an independent company by Dell. For VMware, this means a dramatically different operating model with financial performance and shareholder value creation as the dominant and perhaps the sole agenda item. For customers, it will mean a more focused portfolio, less aspirational vision pitches, and most certainly higher prices. Hello and welcome to this week's Wikibon CUBE Insights powered by ETR. In this Breaking Analysis, we'll share data, opinions and customer insights about this blockbuster deal and forecast the future of VMware, Broadcom and the broader ecosystem. Let's first look at the key deal points, it's been well covered in the press. But just for the record, $61 billion in a 50/50 cash and stock deal, resulting in a blended price of $138 per share, which is a 44% premium to the unaffected price, i.e. prior to the news breaking. Broadcom will assume 8 billion of VMware debt and promises that the acquisition will be immediately accretive and will generate 8.5 billion in EBITDA by year three. That's more than 4 billion in EBITDA relative to VMware's current performance today. In a classic Broadcom M&A approach, the company promises to dilever debt and maintain investment grade ratings. They will rebrand their software business as VMware, which will now comprise about 50% of revenues. There's a 40 day go shop and importantly, Broadcom promises to continue to return 60% of its free cash flow to shareholders in the form of dividends and buybacks. Okay, with that out of the way, we're going to get to the money slide literally in a moment that Broadcom shared on its investor call. Broadcom has more than 20 business units. It's CEO Hock Tan makes it really easy for his business unit managers to understand. Rule number one, you agreed to an operating plan with targets for revenue, growth, EBITDA, et cetera, hit your numbers consistently and we're good. You'll be very well compensated and life will be wonderful for you and your family. Miss the number, and we're going to have a frank and uncomfortable bottom line discussion. You'll four, perhaps five quarters to turn your business around, if you don't, we'll kill it or sell it if we can. Rule number two, refer to rule number one. Hello, VMware, here's the money slide. I'll interpret the bullet points on the left for clarity. Your fiscal year 2022 EBITDA was 4.7 billion. By year three, it will be 8.5 billion. And we Broadcom have four knobs to turn with you, VMware to help you get there. First knob, if it ain't recurring revenue with rubber stamp renewals, we're going to convert that revenue or kill it. Knob number two, we're going to focus R&D in the most profitable areas of the business. AKA expect the R&D budget to be cut. Number three, we're going to spend less on sales and marketing by focusing on existing customers. We're not going to lose money today and try to make it up many years down the road. And number four, we run Broadcom with 1% GNA. You will too. Any questions? Good. Now, just to give you a little sense of how Broadcom runs its business and how well run a company it is, let's do a little simple comparison with this financial snapshot. All we're doing here is taking the most recent quarterly earnings reports from Broadcom and VMware respectively. We take the quarterly revenue and multiply by four X to get the revenue run rate and then we calculate the ratios off of the most recent quarters revenue. It's worth spending some time on this to get a sense of how profitable the Broadcom business actually is and what the spreadsheet gurus at Broadcom are seeing with respect to the possibilities for VMware. So combined, we're talking about a 40 plus billion dollar company. Broadcom is growing at more than 20% per year. Whereas VMware's latest quarter showed a very disappointing 3% growth. Broadcom is mostly a hardware company, but its gross margin is in the high seventies. As a software company of course VMware has higher gross margins, but FYI, Broadcom's software business, the remains of Symantec and what they purchased as CA has 90% gross margin. But the I popper is operating margin. This is all non gap. So it excludes things like stock based compensation, but Broadcom had 61% operating margin last quarter. This is insanely off the charts compared to VMware's 25%. Oracle's non gap operating margin is 47% and Oracle is an incredibly profitable company. Now the red box is where the cuts are going to take place. Broadcom doesn't spend much on marketing. It doesn't have to. It's SG&A is 3% of revenue versus 18% for VMware and R&D spend is almost certainly going to get cut. The other eye popper is free cash flow as a percentage of revenue at 51% for Broadcom and 29% for VMware. 51%. That's incredible. And that my dear friends is why Broadcom a company with just under 30 billion in revenue has a market cap of 230 billion. Let's dig into the VMware portfolio a bit more and identify the possible areas that will be placed under the microscope by Hock Tan and his managers. The data from ETR's latest survey shows the net score or spending momentum across VMware's portfolio in this chart, net score essentially measures the net percent of customers that are spending more on a specific product or vendor. The yellow bar is the most recent survey and compares the April 22 survey data to April 21 and January of 22. Everything is down in the yellow from January, not surprising given the economic outlook and the change in spending patterns that we've reported. VMware Cloud on AWS remains the product in the ETR survey with the most momentum. It's the only offering in the portfolio with spending momentum above the 40% line, a level that we consider highly elevated. Unified Endpoint Management looks more than respectable, but that business is a rock fight with Microsoft. VMware Cloud is things like VMware Cloud foundation, VCF and VMware's cross cloud offerings. NSX came from the Nicira acquisition. Tanzu is not yet pervasive and one wonders if VMware is making any money there. Server is ESX and vSphere and is the bread and butter. That is where Broadcom is going to focus. It's going to look at VSAN and NSX, which is software probably profitable. And of course the other products and see if the investments are paying off, if they are Broadcom will keep, if they are not, you can bet your socks, they will be sold off or killed. Carbon Black is at the far right. VMware paid $2.1 billion for Carbon Black. And it's the lowest performer on this list in terms of net score or spending momentum. And that doesn't mean it's not profitable. It just doesn't have the momentum you'd like to see, so you can bet that is going to get scrutiny. Remember VMware's growth has been under pressure for the last several years. So it's been buying companies, dozens of them. It bought AirWatch, bought Heptio, Carbon Black, Nicira, SaltStack, Datrium, Versedo, Bitnami, and on and on and on. Many of these were to pick up engineering teams. Some of them were to drive new revenue. Now this is definitely going to be scrutinized by Broadcom. So that helps explain why Michael Dell would sell VMware. And where does VMware go from here? It's got great core product. It's an iconic name. It's got an awesome ecosystem, fantastic distribution channel, but its growth is slowing. It's got limited developer chops in a world that developers and cloud native is all the rage. It's got a far flung R&D agenda going at war with a lot of different places. And it's increasingly fighting this multi front war with cloud companies, companies like Cisco, IBM Red Hat, et cetera. VMware's kind of becoming a heavy lift. It's a perfect acquisition target for Broadcom and why the street loves this deal. And we titled this Breaking Analysis taming the VMware beast because VMware is a beast. It's ubiquitous. It's an epic software platform. EMC couldn't control it. Dell used it as a piggy bank, but really didn't change its operating model. Broadcom 100% will. Now one of the things that we get excited about is the future of systems architectures. We published a breaking analysis about a year ago, talking about AWS's secret weapon with Nitro and it's Annapurna custom Silicon efforts. Remember it acquired Annapurna for a measly $350 million. And we talked about how there's a new architecture and a new price performance curve emerging in the enterprise, driven by AWS and being followed by Microsoft, Google, Alibaba, a trend toward custom Silicon with the arm based Nitro and which is AWS's hypervisor and Nick strategy, enabling processor diversity with things like Graviton and Trainium and other diverse processors, really diversifying away from x86 and how this leads to much faster product cycles, faster tape out, lower costs. And our premise was that everyone in the data center is going to competes, is going to need a Nitro to be competitive long term. And customers are going to gravitate toward the most economically favorable platform. And as we describe the landscape with this chart, we've updated this for this Breaking Analysis and we'll come back to nitro in a moment. This is a two dimensional graphic with net score or spending momentum on the vertical axis and overlap formally known as market share or presence within the survey, pervasiveness that's on the horizontal axis. And we plot various companies and products and we've inserted VMware's net score breakdown. The granularity in those colored bars on the bottom right. Net score is essentially the green minus the red and a couple points on that. VMware in the latest survey has 6% new adoption. That's that lime green. It's interesting. The question Broadcom is going to ask is, how much does it cost you to acquire that 6% new. 32% of VMware customers in the survey are increasing spending, meaning they're increasing spending by 6% or more. That's the forest green. And the question Broadcom will dig into is what percent of that increased spend (chuckles) you're capturing is profitable spend? Whatever isn't profitable is going to be cut. Now that 52% gray area flat spending that is ripe for the Broadcom picking, that is the fat middle, and those customers are locked and loaded for future rent extraction via perpetual renewals and price increases. Only 8% of customers are spending less, that's the pinkish color and only 3% are defecting, that's the bright red. So very, very sticky profile. Perfect for Broadcom. Now the rest of the chart lays out some of the other competitor names and we've plotted many of the VMware products so you can see where they fit. They're all pretty respectable on the vertical axis, that's spending momentum. But what Broadcom wants is that core ESX vSphere base where we've superimposed the Broadcom logo. Broadcom doesn't care so much about spending momentum. It cares about profitability potential and then momentum. AWS and Azure, they're setting the pace in this business, in the upper right corner. Cisco very huge presence in the data center, as does Intel, they're not in the ETR survey, but we've superimposed them. Now, Intel of course, is in a dog fight within Nvidia, the Arm ecosystem, AMD, don't forget China. You see a Google cloud platform is in there. Oracle is also on the chart as well, somewhat lower on the vertical axis, but it doesn't have that spending momentum, but it has a big presence. And it owns a cloud as we've talked about many times and it's highly differentiated. It's got a strategy that allows it to differentiate from the pack. It's very financially driven. It knows how to extract lifetime value. Safra Catz operates in many ways, similar to what we're seeing from Hock Tan and company, different from a portfolio standpoint. Oracle's got the full stack, et cetera. So it's a different strategy. But very, very financially savvy. You could see IBM and IBM Red Hat in the mix and then Dell and HP. I want to come back to that momentarily to talk about where value is flowing. And then we plotted Nutanix, which with Acropolis could suck up some V tax avoidance business. Now notice Symantec and CA, relatively speaking in the ETR survey, they have horrible spending momentum. As we said, Broadcom doesn't care. Hock Tan is not going for growth at the expense of profitability. So we fully expect VMware to come down on the vertical axis over time and go up on the profit scale. Of course, ETR doesn't measure the profitability here. Now back to Nitro, VMware has this thing called Project Monterey. It's essentially their version of Nitro and will serve as their future architecture diversifying off x86 and accommodating alternative processors. And a much more efficient performance, price in energy consumption curve. Now, one of the things that we've advocated for, we said this about Dell and others, including VMware to take a page out of AWS and start developing custom Silicon to better integrate hardware and software and accelerate multi-cloud or what we call supercloud. That layer above the cloud, not just running on individual clouds. So this is all about efficiency and simplicity to own this space. And we've challenged organizations to do that because otherwise we feel like the cloud guys are just going to have consistently better costs, not necessarily price, but better cost structures, but it begs the question. What happens to Project Monterey? Hock Tan and Broadcom, they don't invest in something that is unproven and doesn't throw off free cash flow. If it's not going to pay off for years to come, they're probably not going to invest in it. And yet Project Monterey could help secure VMware's future in not only the data center, but at the edge and compete more effectively with cloud economics. So we think either Project Monterey is toast or the VMware team will knock on the door of one of Broadcom's 20 plus business units and say, guys, what if we work together with you to develop a version of Monterey that we can use and sell to everyone, it'd be the arms dealer to everyone and be competitive with the cloud and other players out there and create the de facto standard for data center performance and supercloud. I mean, it's not outrageously expensive to develop custom Silicon. Tesla is doing it for example. And Broadcom obviously is capable of doing it. It's got good relationships with semiconductor fabs. But I think this is going to be a tough sell to Broadcom, unless VMware can hide this in plain site and make it profitable fast, like AWS most likely has with Nitro and Graviton. Then Project Monterey and our pipe dream of alternatives to Nitro in the data center could happen but if it can't, it's going to be toast. Or maybe Intel or Nvidia will take it over or maybe the Monterey team will spin out a VMware and do a Pensando like deal and demonstrate the viability of this concept and then Broadcom will buy it back in 10 years. Here's a double click on that previous data that we put in tabular form. It's how the data on that previous slide was plotted. I just want to give you the background data here. So net score spending momentum is the sorted on the left. So it's sorted by net score in the left hand chart, that was the y-axis in the previous data set and then shared and or presence in the data set is the right hand chart. In other words, it's sorted on the right hand chart, right hand table. That right most column is shared and you can see it's sorted top to bottom, and that was the x-axis on the previous chart. The point is not many on the left hand side are above the 40% line. VMware Cloud on AWS is, it's expensive, so it's probably profitable and it's probably a keeper. We'll see about the rest of VMware's portfolio. Like what happens to Tanzu for example. On the right, we drew a red line, just arbitrarily at those companies and products with more than a hundred mentions in the survey, everything but Tanzu from VMware makes that cut. Again, this is no indication of profitability here, and that's what's going to matter to Broadcom. Now let's take a moment to address the question of Broadcom as a software company. What the heck do they know about software, right. Well, they're not dumb over there and they know how to run a business, but there is a strategic rationale to this move beyond just doing portfolios and extracting rents and cutting R&D, et cetera, et cetera. Why, for example, isn't Broadcom going after coming back to Dell or HPE, it could pick up for a lot less than VMware, and they got way more revenue than VMware. Well, it's obvious, software's more profitable of course, and Broadcom wants to move up the stack, but there's a trend going on, which Broadcom is very much in touch with. First, it sells to Dell and HPE and Cisco and all the OEM. so it's not going to disrupt that. But this chart shows that the value is flowing away from traditional servers and storage and networking to two places, merchant Silicon, which itself is morphing. Broadcom... We focus on the left hand side of this chart. Broadcom correctly believes that the world is shifting from a CPU centric center of gravity to a connectivity centric world. We've talked about this on theCUBE a lot. You should listen to Broadcom COO Charlie Kawwas speak about this. It's all that supporting infrastructure around the CPU where value is flowing, including of course, alternative GPUs and XPUs, and NPUs et cetera, that are sucking the value out of the traditional x86 architecture, offloading some of the security and networking and storage functions that traditionally have been done in x86 which are part of the waste right now in the data center. This is that shifting dynamic of Moore's law. Moore's law, not keeping pace. It's slowing down. It's slower relative to some of the combinatorial factors. When you add up in all the CPU and GPU and NPU and accelerators, et cetera. So we've talked about this a lot in Breaking Analysis episodes. So the value is shifting left within that middle circle. And it's shifting left within that left circle toward components, other than CPU, many of which Broadcom supplies. And then you go back to the middle, value is shifting from that middle section, that traditional data center up into hyperscale clouds, and then to the right toward infrastructure software to manage all that equipment in the data center and across clouds. And look Broadcom is an arms dealer. They simply sell to everyone, locking up key vectors of the value chain, cutting costs and raising prices. It's a pretty straightforward strategy, but not for the fate of heart. And Broadcom has become pretty good at it. Let's close with the customer feedback. I spoke with ETRs Eric Bradley this morning. He and I both reached out to VMware customers that we know and got their input. And here's a little snapshot of what they said. I'll just read this. Broadcom will be looking to invest in the core and divest of any underperforming assets, right on. It's just what we were saying. This doesn't bode well for future innovation, this is a CTO at a large travel company. Next comment, we're a Carbon Black customer. VMware didn't seem to interfere with Carbon Black, but now that we're concerned about short term disruption to their tech roadmap and long term, are they going to split and be sold off like Symantec was, this is a CISO at a large hospitality organization. Third comment, I got directly from a VMware practitioner, an IT director at a manufacturing firm. This individual said, moving off VMware would be very difficult for us. We have over 500 applications running on VMware, and it's really easy to manage. We're not going to move those into the cloud and we're worried Broadcom will raise prices and just extract rents. Last comment, we'll share as, Broadcom sees the cloud data center and IoT is their next revenue source. The VMware acquisition provides them immediate virtualization capabilities to support a lightweight IoT offering. Big concern for customers is what technology they will invest in and innovate, and which will be stripped off and sold. Interesting. I asked David Floyer to give me a back of napkin estimate for the following question. I said, David, if you're running mission critical applications on VMware, how much would it increase your operating cost moving those applications into the cloud? Or how much would it save? And he said, Dave, VMware's really easy to run. It can run any application pretty much anywhere, and you don't need an army of people to manage it. All your processes are tied to VMware, you're locked and loaded. Move that into the cloud and your operating cost would double by his estimates. Well, there you have it. Broadcom will pinpoint the optimal profit maximization strategy and raise prices to the point where customers say, you know what, we're still better off staying with VMware. And sadly, for many practitioners there aren't a lot of choices. You could move to the cloud and increase your cost for a lot of your applications. You could do it yourself with say Zen or OpenStack. Good luck with that. You could tap Nutanix. That will definitely work for some applications, but are you going to move your entire estate, your application portfolio to Nutanix? It's not likely. So you're going to pay more for VMware and that's the price you're going to pay for two decades of better IT. So our advice is get out ahead of this, do an application portfolio assessment. If you can move apps to the cloud for less, and you haven't yet, do it, start immediately. Definitely give Nutanix a call, but going to have to be selective as to what you actually can move, forget porting to OpenStack, or do it yourself Hypervisor, don't even go there. And start building new cloud native apps where it makes sense and let the VMware stuff go into manage decline. Let certain apps just die through attrition, shift your development resources to innovation in the cloud and build a brick wall around the stable apps with VMware. As Paul Maritz, the former CEO of VMware said, "We are building the software mainframe". Now marketing guys got a hold of that and said, Paul, stop saying that, but it's true. And with Broadcom's help that day we'll soon be here. That's it for today. Thanks to Stephanie Chan who helps research our topics for Breaking Analysis. Alex Myerson does the production and he also manages the Breaking Analysis podcast. Kristen Martin and Cheryl Knight help get the word out on social and thanks to Rob Hof, who was our editor in chief at siliconangle.com. Remember, these episodes are all available as podcast, wherever you listen, just search Breaking Analysis podcast. Check out ETRs website at etr.ai for all the survey action. We publish a full report every week on wikibon.com and siliconangle.com. You can email me directly at david.vellante@siliconangle.com. You can DM me at DVellante or comment on our LinkedIn posts. This is Dave Vellante for theCUBE Insights powered by ETR. Have a great week, stay safe, be well. And we'll see you next time. (upbeat music)

Published Date : May 28 2022

SUMMARY :

This is Breaking Analysis and promises that the acquisition

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
DavidPERSON

0.99+

Stephanie ChanPERSON

0.99+

CiscoORGANIZATION

0.99+

Dave VellantePERSON

0.99+

SymantecORGANIZATION

0.99+

Rob HofPERSON

0.99+

Alex MyersonPERSON

0.99+

April 22DATE

0.99+

HPORGANIZATION

0.99+

David FloyerPERSON

0.99+

AWSORGANIZATION

0.99+

DellORGANIZATION

0.99+

OracleORGANIZATION

0.99+

HPEORGANIZATION

0.99+

Paul MaritzPERSON

0.99+

BroadcomORGANIZATION

0.99+

VMwareORGANIZATION

0.99+

NvidiaORGANIZATION

0.99+

Eric BradleyPERSON

0.99+

April 21DATE

0.99+

NSXORGANIZATION

0.99+

IBMORGANIZATION

0.99+

Cheryl KnightPERSON

0.99+

DavePERSON

0.99+

JanuaryDATE

0.99+

$61 billionQUANTITY

0.99+

8.5 billionQUANTITY

0.99+

$2.1 billionQUANTITY

0.99+

MicrosoftORGANIZATION

0.99+

Palo AltoLOCATION

0.99+

EMCORGANIZATION

0.99+

AcropolisORGANIZATION

0.99+

Kristen MartinPERSON

0.99+

90%QUANTITY

0.99+

6%QUANTITY

0.99+

4.7 billionQUANTITY

0.99+

GoogleORGANIZATION

0.99+

Hock TanORGANIZATION

0.99+

60%QUANTITY

0.99+

44%QUANTITY

0.99+

40 dayQUANTITY

0.99+

61%QUANTITY

0.99+

8 billionQUANTITY

0.99+

Michael DellPERSON

0.99+

52%QUANTITY

0.99+

47%QUANTITY

0.99+