Lena Smart & Tara Hernandez, MongoDB | International Women's Day
(upbeat music) >> Hello and welcome to theCube's coverage of International Women's Day. I'm John Furrier, your host of "theCUBE." We've got two great remote guests coming into our Palo Alto Studios, some tech athletes, as we say, people that've been in the trenches, years of experience, Lena Smart, CISO at MongoDB, Cube alumni, and Tara Hernandez, VP of Developer Productivity at MongoDB as well. Thanks for coming in to this program and supporting our efforts today. Thanks so much. >> Thanks for having us. >> Yeah, everyone talks about the journey in tech, where it all started. Before we get there, talk about what you guys are doing at MongoDB specifically. MongoDB has kind of gone to the next level as a platform. You have your own ecosystem, a lot of developers, very technical crowd, but it's changing the business transformation. What do you guys do at Mongo? We'll start with you, Lena. >> So I'm the CISO, so all security goes through me. I like to say, well, I don't like to say, I'm described as the one throat to choke. So anything to do with security basically starts and ends with me. We do have a fantastic Cloud engineering security team and a product security team, and they don't report directly to me, but obviously we have very close relationships. I like to keep that kind of church and state separate and I know I've spoken about that before. And we just recently set up a physical security team with an amazing gentleman who left the FBI and came to join us after 26 years at the agency. So, really starting to look at the physical aspects of what we offer as well. >> I interviewed a CISO the other day and she said, "Every day is day zero for me." Kind of goofing on the Amazon Day One thing, but Tara, go ahead. What's your role there, developer productivity? What are you focusing on? >> Sure. Developer productivity is kind of the latest description for things that we've described over the years as, you know, DevOps oriented engineering or platform engineering or build and release engineering, development infrastructure. It's all part and parcel, which is how do we actually get our code from developer to customer, you know, and all the mechanics that go into that. It's been something I discovered from my first job way back in the early '90s at Borland. And the art has just evolved enormously ever since, so. >> Yeah, this is a very great conversation, both of you guys, right in the middle of all the action, and data infrastructure is changing, exploding, and evolving, big time AI and data tsunami, and security never stops. Well, let's get into, we'll talk about that later, but let's get into what motivated you guys to pursue a career in tech and what were some of the challenges that you faced along the way? >> I'll go first. The fact of the matter was I intended to be a double major in history and literature when I went off to university, but I was informed that I had to do a math or a science degree or else the university would not be paid for. At the time, UC Santa Cruz had a policy called Open Access Computing. This is, you know, the late '80s, early '90s. And anybody at the university could get an email account, and that was unusual at the time. Those of us who remember, you used to have to pay for CompuServe or AOL or, there's another one, I forget what it was called, but a student at Santa Cruz could have an email account. And because of that email account, I met people who were computer science majors and I'm like, "Okay, I'll try that." 
That seems good. And it was a little bit of a struggle for me, a lot, I won't lie, but I can't complain with how it ended up. And certainly once I found my niche, which was development infrastructure, I found my true love and I've been doing it for almost 30 years now. >> Awesome. Great story. Can't wait to ask a few questions on that. We'll go back to that late '80s, early '90s. Lena, your journey, how you got into it. >> So slightly different start. I did not go to university. I had to leave school when I was 16, got a job, had to help support my family. Worked a bunch of various jobs till I was about 21 and then computers became more, I think, I wouldn't say they were ubiquitous, but they were certainly out there. And I'd also been saving up every penny I could earn to buy my own computer and bought an Amstrad 1640, 20 meg hard drive. It rocked. And I kind of took that apart, put it back together again, and thought there could be money in this. And so basically I was just teaching myself about computers in any job that I got. 'Cause most of my jobs were like clerical work and secretary at that point. But any job that had a computer in front of me, I would make it my business to go find the guy who did computing, 'cause it was always a guy. And I would say, you know, I want to learn how these work. You know, show me. And, you know, I would take my lunch hour and after work and anytime I could with these people, and they were very kind with their time, and I just kept learning, so yep. >> Yeah, those early days remind me of the inflection point we're going through now. This major sea change coming. Back then, if you had a computer, you had to kind of be your own internal engineer to fix things. Remember back on the systems revolution, late '80s, Tara, when, you know, your career started, those were major inflection points. Now we're seeing a similar wave right now, security, infrastructure. It feels like it's going to a whole nother level. At Mongo, you guys certainly see this as well, with this AI surge coming in. A lot more action is coming in. And so there's a lot of parallels between these inflection points. How do you guys see this next wave of change? Obviously, the AI stuff's blowing everyone away. Oh, new user interface. It's been called the browser moment, the mobile iPhone moment, kind of for this generation. There's a lot of people out there who are watching that are young in their careers, what's your take on this? How would you talk to those folks around how important this wave is? >> It, you know, it's funny, I've been having this conversation quite a bit recently in part because, you know, to me AI in a lot of ways is very similar to, you know, back in the '90s when we were talking about bringing the worldwide web to the forefront of the world, right. And we tended to think in terms of all the optimistic benefits that would come of it. You know, free passing of information, availability to anyone, anywhere. You just needed an internet connection, which back then of course meant a modem. >> John: Not everyone had one though. >> Exactly. But what we found in the subsequent years is that human beings are what they are and we bring ourselves to whatever platforms are there, right. And so, you know, as much as it was amazing to have this freely available HTML based internet experience, it also meant that the negatives came to the forefront quite quickly. And there were ramifications of that. And so to me, when I look at AI, we're already seeing the ramifications of that. 
Yes, are there these amazing, optimistic, wonderful things that can be done? Yes. >> Yeah. >> But we're also human and the bad stuff's going to come out too. And how do we- >> Yeah. >> How do we as an industry, as a community, you know, understand and mitigate those ramifications so that we can benefit more from the positive than the negative? So it is interesting that it comes kind of full circle in really interesting ways. >> Yeah. The underbelly takes place first, gets it in the early adopter mode. Normally industries with, you know, money involved, arbitrage, no standards. But we've seen this movie before. Is there hope, Lena, that we can have a more secure environment? >> I would hope so. (Lena laughs) Although depressingly, we've been in this, well, for 30 years now and we're, at the end of the day, still telling people not to click links in emails. So yeah, that kind of still keeps me awake at night a wee bit. The whole thing about AI, I mean, it's, obviously I am not an expert by any stretch of the imagination in AI. I did read (indistinct) book recently about AI and that was kind of interesting. And I'm just trying to teach myself as much as I can about it, to the extent of even buying the "Dummies Guide to AI." Just because, it's actually not a dummies guide. It's actually fairly interesting, but I'm always thinking about it from a security standpoint. So it's kind of my worst nightmare and the best thing that could ever happen in the same dream. You know, you've got this technology where I can ask it a question and, you know, it spits out generally a reasonable answer. And my team are working with Mark Porter, our CTO, and his team on almost like an incubation of AI, like, what would it look like from MongoDB? What are the legal ramifications? 'Cause there will be legal ramifications even though it's the wild, wild west just now, I think. Regulation's going to catch up to us pretty quickly, I would think. >> John: Yeah, yeah. >> And so I think, you know, as long as companies have a seat at the table and governments perhaps don't become too dictatorial over this, then hopefully we'll be in a good place. But we'll see. I think it's really interesting, there's that curse, we're living in interesting times. I think that's where we are. >> It's interesting just to stay on this tech trend for a minute. The standards bodies are different now. Back in the old days there were, you know, IEEE standards, IETF standards. >> Tara: TPC. >> The developers are the new standard. I mean, now you're seeing open source completely different from where it was in the '90s. At the beginning, that was gen one, some say gen two, but I say gen one, and now we're exploding with open source. You have kind of developers setting the standards. If developers like it in droves, it becomes de facto, which then kind of rolls into implementation. >> Yeah, I mean I think if you don't have developer input, and this is why I love working with Tara and her team so much, is 'cause they get it. If we don't have input from developers, it's not going to get used. There's going to be ways of working around it, especially when it comes to security. If they don't, you know, if you're a developer and you're sat at your screen and you don't want to do that particular thing, you're going to find a way around it. You're a smart person. >> Yeah. >> So. >> Developers are on the front lines now versus, even back in the '90s, they're like, "Okay, consider the devs, they've got a QA team." 
Everything was Waterfall, now it's Cloud, and developers are on the front lines of everything. Tara, I mean, this is where the standards are being met. What's your reaction to that? >> Well, I think it's outstanding. I mean, you know, like I was at Netscape and part of the crowd that released the browser as open source and we founded mozilla.org, right. And that was, you know, in many ways kind of the birth of the modern open source movement beyond what we used to have, where basically the Free Software Foundation was sort of the only game in town. And I think it is so incredibly valuable. I want to emphasize, you know, and pile onto what Lena was saying, it's not just that the developers are having input on a sort of company by company basis. Open source to me is like checks and balances, where it allows us as a broader community to be able to agree on and enforce certain standards in order to try and keep the technology platforms as accessible as possible. I think Kubernetes is a great example of that, right. If we didn't have Kubernetes, that would've really changed the nature of how we think about container orchestration. But even before that, Linux, right. Linux allowed us as an industry to end the Unix Wars, and as someone who was on the front lines of that as well and having to support 42 different operating systems with our product, you know, that was a huge win. And it allowed us to stop arguing about operating systems and start arguing about software, or not arguing, but developing it in positive ways. So with, you know, with Kubernetes, with container orchestration, we all agree, okay, that's just how we're going to orchestrate. Now we can build up this huge ecosystem, everybody gets taken along, right. And now it changes the game for what we're defining as business differentials, right. And so when we talk about crypto, that's a little bit harder, but certainly with AI, right, you know, what are the checks and balances that as an industry and as the developers around this, we can, you know, enforce to make sure that no one company or no one body is able to overly control how these things are managed, how it's defined. And I think that is only for the benefit of the industry as a whole, particularly when we think about the only other option, which is it gets regulated in ways that do not involve the people who actually know the details of what they're talking about. >> Regulated and/or thrown away or bankrupt or- >> Driven underground. >> Yeah. >> Which would be even worse actually. >> Yeah, that's a really interesting point, the checks and balances. I love that call out. And I was just talking in another interview, part of the series, around women being represented in the 51% ratio. Software is for everybody. So we believe that the open source movement, around the collective intelligence of the participants in the industry and independent of gender, is going to be the next wave. You're starting to see these videos really have impact because there are a lot more leaders now at the table in companies developing software systems, and with AI, the aperture increases for applications. And this is the new dynamic. What's your guys' view on this dynamic? How does this go forward in a positive way? Is there a certain trajectory you see? For women in the industry? 
>> I mean, I think some of the states are trying to, again, from the government angle, some of the states are trying to force women into the boardroom, for example, California, which can be no bad thing, but I don't know, sometimes I feel a bit iffy about all this kind of forced- >> John: Yeah. >> You know, making, I don't even know how to say it properly so you can cut this part of the interview. (John laughs) >> Tara: Well, and I think that they're- >> I'll say it's not organic. >> No, and I think they're already pulling it out, right. It's already been challenged so they're in the process- >> Well, this is the open source angle, Tara, you are getting at it. The change agent is open, right? So to me, the history of the proven model is openness drives transparency drives progress. >> No, it's- >> If you believe that to be true, this could have another impact. >> Yeah, it's so interesting, right. Because if you look at McKinsey Consulting or Boston Consulting or some of the others, I'm blocking on all of the names. There has been a decade or more of research that shows that a non-homogeneous employee base, be it gender or ethnicity or whatever, generates more revenue, right? There's dollar signs that can be attached to this, but it's not enough for all companies to want to invest in that way. And it's not enough for all, you know, venture firms or investment firms to grant that seed money or do those seed rounds. I think it's getting better very slowly, but socialization is a much harder thing to overcome over time. Particularly when you're not just talking about one country like the United States in our case, but around the world. You know, tech centers now exist all over the world, including places that even 10 years ago we might not have expected, like Nairobi, right. Which I think is amazing, but you have to factor in the cultural implications of that as well, right. So yes, the openness is important and we have, it's important that we have those voices, but I don't think it's a panacea solution, right. It's just one more piece. I think honestly that one of the most important opportunities has been with Cloud computing, and Cloud's been around for a while. So why would I say that? It's because if you think about it, like, everybody holds up the Steve Jobs, Steve Wozniak, back in the '70s, or Sergey and Larry for Google, you know, you had to have access to enough credit card limit to go to Fry's and buy your servers and then access to somebody like Susan Wojcicki to borrow the garage or whatever. But there was still a certain amount of upfrontness that you had to be able to commit to, whereas now, and we've, I think, seen really good evidence of this, being able to lease server resources by the second and have development platforms that you can do on your phone. I mean, for a while in Africa, I think, the majority of development happened on mobile devices because there wasn't a sufficient supply chain of laptops yet. And that's no longer true now as far as I know. But like the power that that enables for people who would otherwise be underrepresented in our industry instantly opens it up, right? And so to me that's, I think, probably the biggest opportunity that we've seen from an industry on how to make more availability for underrepresented representation in entrepreneurship. >> Yeah. >> Something like AI, I think that's actually going to take us backwards if we're not careful. >> Yeah. >> Because we're reinforcing that socialization. >> Well, also the bias. 
A lot of people commenting on the biases of the large language models, inherently built in, are also a problem. Lena, I want you to weigh in on this too, because I think the skills question comes up here and I've been advocating that you don't need the pedigree, college pedigree, to get into certain jobs, and you mentioned Cloud computing. I mean, it's been around, you think, a long time, but not really, if you really think about it. The ability to level up, okay, if you're going to join something new and half the jobs in cybersecurity were created in the past year, right? So, what used to be a barrier, your degree, your pedigree, your certification would take years, would be a blocker. Now that's gone. >> Lena: Yeah, it's the opposite. >> That's, in fact, psychology. >> I think so, but the people who I, by and large, who I interview for jobs, they have, I think security people and also I work with our compliance folks and I can't forget them, but let's talk about security just now. I've always found a particular kind of mindset with security folks. We're very curious, not very good at following rules a lot of the time, and we love to teach others. I mean, that's one of the big things stemming from the start of my career. People were always interested in teaching and I was interested in learning. So it was perfect. And I think also having, you know, strong women leaders at MongoDB allows other underrepresented groups to actually apply to the company 'cause they see that we're kind of talking the talk. And that's been important. I think it's really important. You know, you've got Tara and I on here today. There's obviously other senior women at MongoDB that you can talk to as well. There's a bunch of us. There's not a whole ton of us, but there's a bunch of us. And it's good. It's definitely growing. I've been there for four years now and I've seen a growth in women in senior leadership positions. And I think having that kind of track record of getting really good quality underrepresented candidates to not just interview, but come and join us, it's seen. And it's seen in the industry and people take notice and they're like, "Oh, okay, well if that person's working, you know, if Tara Hernandez is working there, I'm going to apply for that." And that in itself I think can really, you know, reap the rewards. But it's getting started. It's like how do you get your first strong female into that position or your first strong underrepresented person into that position? It's hard. I get it. If it was easy, we would've solved it already. >> It's like anything. I want to see people like me, my friends in there. Am I going to be alone? Am I going to be part of a group? It's group psychology. Why wouldn't it be? So getting it out there is key. Are there skills that you think people should pay attention to? Ones that come up are curiosity, learning. What are some of the best practices for folks trying to get into the tech field, or that are in the tech field and advancing through? What advice are you guys- >> I mean, yeah, definitely, what I say to my team is, within my budget, we try and give everyone at least one training course a year. And there's so much free stuff out there as well. But, you know, keep learning. And even if it's not right in your wheelhouse, don't be picky about it. You know, take a look at what else could be out there that could interest you and then go for it. You know, what does it take you, a few minutes each night, to read a book on something that might change your entire career? 
You know, be enthusiastic about the opportunities out there. And there's so many opportunities in security. Just so many. >> Tara, what's your advice for folks out there? Tons of stuff to taste, taste test, try things. >> Absolutely. I mean, I always say, you know, my primary qualifications for people, I'm looking for them to be smart and motivated, right. Because the industry changes so quickly. What we're doing now versus what we did even last year versus five years ago, you know, is completely different, though the themes are certainly the same. You know, we still have to code and we still have to compile that code or package the code and ship the code, so, you know, how well can we adapt to these new things instead of creating floppy disks, which was my first job. Five and a quarters, even. The big ones. >> That's old school, OG. There it is. Well done. >> And now it's, you know, containers, you know, (indistinct) image containers. And so, you know, I've gotten a lot of really great success hiring boot campers, you know, career transitioners. Because they bring a lot of experience in addition to the technical skills. I think the most important thing is to experiment and figure out what you like, because, you know, maybe you are really into security, or maybe you're really into like deep level coding and you want to go back, you know, try to go to school to get a degree where you would actually want that level of learning. Or maybe you're a front end engineer, you want to be full stack. Like there's so many different things, data science, right. Maybe you want to go learn R, right. You know, I think it's like figure out what you like, because once you find that, that in turn is going to energize you 'cause you're going to feel motivated. I think the worst thing you could do is try to force yourself to learn something that you really could not care less about. That's just the worst. You're going in handicapped. >> Yeah, and there's choices now versus when we were breaking into the business. It was like, okay, you're a software engineer. They called it software engineering, that's all it was. You were that or you were in sales. Like, you know, some sort of systems engineer or sales, and now it's- >> I had never heard of my job when I was in school, right. I didn't even know it was a possibility. But there's so many different types of technical roles, you know, absolutely. >> It's so exciting. I wish I was young again. >> One of the- >> Me too. (Lena laughs) >> I don't. I like the age I am. So one of the things that I did to kind of harness that curiosity is we've set up a security champions program. About 120, I guess, volunteers globally. And these are people from all different backgrounds and all genders, diversity groups, underrepresented groups, we feel, are now represented within this champions program. And people basically give up about an hour or two of their time each week, with their supervisor's permission, and we basically teach them different things about security. And we've now had seven full-time people move from different areas within MongoDB into my team as a result of that program. So, you know, monetarily and time-wise, yeah, it saved us both. But also we're showing people that there is a path, you know, if you start off in Tara's team, for example, doing X, you join the champions program, you're like, "You know, I'd really like to get into red teaming. That would be so cool." If it fits, then we make that happen. 
And that has been really important for me, especially to give, you know, the women and the underrepresented groups within MongoDB just that window into something they might never have seen otherwise. >> That's a great comment, fit matters. Also, getting access to what you fit is also access to either mentoring or sponsorship or some sort of, at least some navigation. Like what's out there, and not being afraid to, like, you know, just ask. >> Yeah, we just actually kicked off our big mentor program last week, so I'm the executive sponsor of that. I know Tara is part of it, which is fantastic. >> We'll put a plug in for it. Go ahead. >> Yeah, no, it's amazing. There's, gosh, I don't even know the numbers anymore, but there's a lot of people involved in this, and so much so that we've had to set up mentoring groups rather than one-on-one. And I think it was 45% of the mentors are actually male, which is quite incredible for a program called Mentor Her. And then what we want to do in the future is actually create a program called Mentor Them so that it's not, you know, not just on the female side, and so that we can have other groups represented and, you know, kind of break down those groups a wee bit more and have some more granularity in the offering. >> Tara, talk about mentoring and sponsorship. Open source has been there for a long time. People help each other. It's community-oriented. What's your view of how to work with mentors and sponsors if someone's moving through the ranks? >> You know, one of the things that was really interesting, unfortunately, in some of the earliest open source communities is there was a lot of pervasive misogyny, to be perfectly honest. >> Yeah. >> And one of the important adaptations that we made as an open source community was the introduction of codes of conduct. And so when I'm talking to women who are thinking about expanding their skills, I encourage them to join open source communities to have opportunity, even if they're not getting paid for it, you know, to develop their skills, to work with people, to get those code reviews, right. I'm like, "Whatever you join, make sure they have a code of conduct and a good leadership team. It's very important." And there are plenty, right. And then that idea has come into, you know, conferences now. So now conferences have codes of conduct, if they're any good, and maybe not all of them, but most of them, right. And it's the idea of expanding that idea of intentional, healthy culture. >> John: Yeah. >> As a business goal and business differentiator. I mean, I won't lie, when I was recruited to come to MongoDB, the culture that I was able to discern through talking to people, in addition to seeing that there were actually women in senior leadership roles like Lena, like Kayla Nelson, that was a huge win. And so it just builds on momentum. And so now, you know, those of us who are in that are now representing. And so that kind of reinforces, but it all ties together, right. As the open source world goes, particularly for a company like MongoDB, which has an open source product, you know, and our community builds. You know, it's a good thing to be mindful of for us, how we interact with the community, and, you know, because that could also become an opportunity for recruiting. >> John: Yeah. >> Right. So we, in addition to people who might become advocates on Mongo's behalf in their own company as a solution for themselves, so. >> You guys have a great, successful company and great leadership there. 
I mean, I can't tell you how many times someone's told me "MongoDB doesn't scale. It's going to be dead next year." I mean, I'm going back 10 years. It's like, it just keeps getting better and better. You guys do a great job. So it's so fun to see the success of developers. Really appreciate you guys coming on the program. Final question, what are you guys excited about, to end the segment? We'll give you guys the last word. Lena, we'll start with you, and Tara, you can wrap us up. What are you excited about? >> I'm excited to see what this year brings. I think with ChatGPT and its copycats, I think it'll be a very interesting year when it comes to AI, and I'm always on the lookout for the authentic deep fakes that we see coming out. So just trying to make people aware that this is a real thing. It's not just pretend. And then of course, our old friend ransomware, let's see where that's going to go. >> John: Yeah. >> And let's see where we get to, and just genuine hygiene and housekeeping when it comes to security. >> Excellent. Tara. >> Ah, well for us, you know, we're always constantly trying to up our game from a security perspective in the software development life cycle. But also, you know, what can we do? You know, one interesting application of AI that maybe Google doesn't like to talk about is it is really cool as an addendum to search, and, you know, how we might incorporate that as far as our learning environment and developer productivity, and how can we enable our developers to be more efficient, productive in their day-to-day work. So, I don't know, there's all kinds of opportunities that we're looking at for how we might improve that process here at MongoDB and then maybe be able to share it with the world. One of the things I love about working at MongoDB is we get to use our own products, right. And so being able to have this interesting document database in order to put information in and then maybe apply some sort of AI to get it out again is something that we may well be looking at, if not this year, then certainly in the coming year. >> Awesome. Lena Smart, the chief information security officer. Tara Hernandez, vice president of developer productivity, from MongoDB. Thank you so much for sharing here on International Women's Day. We're going to do this quarterly every year. We're going to do it and then we're going to do quarterly updates. Thank you so much for being part of this program. >> Thank you. >> Thanks for having us. >> Okay, this is theCube's coverage of International Women's Day. I'm John Furrier, your host. Thanks for watching. (upbeat music)
Krista Satterthwaite | International Women's Day
(upbeat music) >> Hello, welcome to the Cube's coverage of International Women's Day 2023. I'm John Furrier, host of the CUBE series of profiles around leaders in the tech industry sharing their stories, advice, best practices, what they're doing in their jobs, their vision of the future, and more importantly, passing it on and encouraging more and more networking and telling the stories that matter. Our next guest is a great executive leader talking about how to lead in challenging times. Krista Satterthwaite, who is Senior Vice President and GM of Mainstream Compute. Krista, great to see you, you're a Cube alumni. We've had you on before talking about compute power. And by the way, congratulations on your BPTN, the Black Professional Tech Network, 2023 Black Tech Exec of the Year Award. >> Thank you very much. Appreciate it. And thanks for having me. >> I knew I liked you the first time we were doing interviews together. You were so smart and so on top of it. Thanks for coming on. >> No problem. >> All kidding aside, let's get into it. You know, one of the things that's coming out on these interviews is leadership is being showcased and there's a network effect happening in the industry and you're starting to see people look and hear stories that they may or may not have heard before, or news stories are coming out. So, one of the things that's interesting is that also in the backdrop of post pandemic, there's been a turn in the industry a little bit, there's a little bit of headwind in certain areas, some tailwinds in cloud and other areas. Compute, your area, is doing very well. It could be challenging. And as a leader, has the conversation changed? And where are you at right now in the network of folks you're working with? What's the mood? >> Yeah, so actually I, things are much better. Obviously we had a chip shortage last year. Things are much, much better. But I learned a lot when it came to going through challenging times and leadership. And I think when we talk to customers, a lot of 'em are in challenging situations. Sometimes it's budget, sometimes it's attracting and retaining talent, and sometimes it's just demands because, it's really exciting that technology is behind everything. But that means the demands on IT are bigger than ever before. So what I find when it comes to challenging times is that there's really three qualities that are game changers when it comes to leading in challenging times. And the first one is positivity. People have to feel like there's a light at the end of the tunnel to make sure that their attitudes stay up, that they stay working really really hard, and they look to the leader for that. The second one is communication. And I read somewhere that communication is leadership. And we had a great example from our CEO Antonio Neri when the pandemic hit and everything shut down. He had an all employee meeting every week for a month, and we have tens of thousands of employees. And then even after that month, we had 'em very regularly. But he wanted to make sure that everybody heard from him, his thoughts, had all the updates, knew how their peers were doing, how we were helping customers. And I really learned a lot from that in terms of communicating and communicating more during tough times. And then I would say the third one is making sure that they are informed and they feel empowered. So I would say a leader who is able to do that really, really stands out in a challenging time. >> So how do you get yourself together? 
Obviously, the chip shortage, everyone knows it in the industry, and for the folks not in the tech industry, it was a potential economic disaster, because you don't get the chips you need. You guys make servers and technology, chips power everything. If you miss a shipment, it could cause a lot of backlash. So Cisco had an earnings impact. It has an impact on the business. When do you have that code red moment where it's like, okay, we have to kind of hit pause and go into emergency mode? And how do you handle that? >> Well, you know, it is funny 'cause when it, when we have challenges, I've come to learn that people can look at challenges and hard work as a burden or a mission, and they behave totally differently. If they see it as a burden, then they're doing the bare minimum and they're pointing fingers and they're complaining and they're probably not getting a whole lot done. If they see it as a mission, then all of a sudden they're going above and beyond. They're working really hard, they're really partnering. And if it affects customers for HPE, obviously we, HPE is a very customer centric company, so everyone pays attention and tries to pitch in. But when it comes to a mission, I started thinking, what are the real ingredients for a mission? And I think it's important. I think it's that people feel like they can make an impact. And then I think the third one is that the goal is clear, even if the path isn't, 'cause you may have to pivot a lot if it's a challenge. And so when it came to the chip shortage, it was a mission. We wanted to make sure that we could ship to customers as quickly as possible. And it was a mission. Everybody pulled together. I learned how much our team could pull off and pull together through that challenge. >> And the consequences can be quantified in economics. So it's like the burn the boats example, you got to burn the boats, you're stuck. You got to figure out a solution. How does that change the demands on people? Because this is, okay, there's a mission, it's not normal. What are some of those new demands that arise during those times and how do you manage that? How do you be a leader? >> Yeah, so it's funny, I was reading this statement from James White, who used to be the CEO of Jamba Juice. And he was talking about how he got that job. He said, "I think it was one thing I said that really convinced them that I was the right person." And what he said was something like, "I will get more out of people than nine out of 10 leaders on the planet." He said, "Because I will look at their strengths and their capabilities and I will play to their passions." And getting the most out of people in difficult times, it is all about how much you can get out of people, for their own sake and for the company's sake. >> That's great feedback. And to people watching who are early in their careers, leading is getting the best out of your team, attitude. Some of the things you mentioned. What advice would you give folks that are starting to get into the workforce, that are starting to get into that leadership track, or might have a trajectory or even might have an innate ability that they know they have and they want to pursue that dream? >> Yeah so. >> What advice would you give them? >> Yeah, what I would say, I say this all the time that, for the first half of my career I was very job conscious, but I wasn't very career conscious. 
So I'd get in a role and I'd stay in that role for long periods of time and I'd do a good job, but I wasn't really very career conscious. And what I would say is, everybody says how important risk taking is. Well, risk taking can be a little bit of a scary word, right? Or term. And the way I see it is, give it a shot and see what happens. You're interested in something, give it a shot and see what happens. It's kind of a less intimidating way of looking at risk, because even though I was job conscious and not career conscious, one thing I did when people asked me to take something on, hey Krista, would you like to take on more responsibility here? The answer was always yes, yes, yes, yes. So I said yes because I said, hey, I'll give it a shot and see what happens. And that helped me tremendously because I felt like I am giving it a try. And the more you do that, the better it is. >> It's great. >> And actually the less scary it is, because you do that a few times and it goes well. It's like a muscle that builds. >> It's funny, a woman executive was on the program. I said, the word balance comes up a lot. And she stopped and said, "Let's just talk about balance for a second." And then she went contrarian and said, "It's about not being unbalanced. It's about being, taking a chance and being a little bit off balance to put yourself outside your comfort zone to try new things." And then she also came up and followed and said, "If you do that alone, you increase your risk. But if you do it with people, a team that you trust, and you're authentic and you're vulnerable and you're communicating, that is the chemistry." And that was a really good point. What's your reaction? 'Cause you were talking about authentic conversations, good communications with Antonio. How does someone get, feel, find that team, and do you agree with it? And what was your, how would you react to that? >> Yes, I agree with that. And when it comes to being authentic, that's the magic, and when someone isn't, if someone's not really being themselves, it's really funny because you can feel it, you can sense it. There's kind of a wall between you and them. And over time people won't be able to put their finger on it, but they'll feel a distance from you. But when you're authentic and you share who you are, what you find is you find things in common with other people. 'Cause you're sharing more of who you are and it's like, oh, I do that too. Oh, I'm interested in that too. And that builds the bonds between people, and the authenticity. And that's what people crave. They want people to be authentic, and people can tell when you're authentic and when you're not. >> Is managing and leading through a crisis a born talent or can you learn it? >> Oh, definitely learned. I think that we're born knowing nothing, and I once read people are nurtured into greatness, and I think that's true. So yeah, definitely learned. >> What are some examples that can come out of a tough time, as folks may look at a crisis and shy away from it? How do they lean into it? What advice would you give folks? How do you handle it? I mean, everyone's got a different personality. Okay, they get to a position, but stepping through that door. >> Yeah, well, I do this presentation called "10 Things I Wish I Knew Earlier in My Career." And one of those things is about the growth mindset. 
There's a book called "Mindset" by Carol Dweck, and the growth mindset is all about learning and not always having to know everything, but really the winning is in the learning. And so if you have a growth mindset, it makes you feel better about everything because you can't lose. You're winning because you're learning. So when I learned that, I started looking at things much differently. And when it comes to going through tough times, what I find is you're exercising muscles that you didn't even know you had, which makes you stronger when the crisis is over, obviously. And I also feel like you become a lot more creative when you're in challenging times. You're forced to do things that you hadn't had to do before. And it also bonds the team. It's almost like going through bootcamp together. When you go through a challenge together it bonds you for life. >> I mean, you could have bonding, could be trauma bonding or success bonding. People love to be on the success side because that's positive, and that's really the key mindset. You're always winning if you have that attitude. And learning is also positive. So it's not, it's never a failure unless you make it one. >> That's right, exactly. As long as you learn from it. And that's the name of the game. So, learning is the goal. >> So I have to ask you, on your job now, you have a really big responsibility, HPE compute, a big division. What's the current mindset that you have right now in your career, where you're at? What are some of the things on your mind that you think about? We had other senior leaders say, hey, you know, I got the software as my brain and the hardware's my body. I like to keep software and hardware working together. What is the current state of your career and how are you looking at it, what's next and what's going on in your mind right now? >> Yeah, so for me, I really want to make sure that for my team we're nurturing the next generation of leadership and that we're helping with career development and career growth. And people feel like they can grow their careers here. Luckily at HPE, we have a lot of people stay at HPE a long time, and even people who leave HPE, a lot of times they come back because the culture's fantastic. So I just want to make sure I'm contributing to that culture and I'm bringing up the next generation of leaders. >> What's next for you? What are you looking at from a career, personal standpoint? >> You know, it's funny, I, I love what I'm doing right now. I'm actually on a joint venture board with H3C, which is an HPE joint venture company. And so I'm really enjoying that and exploring more board service opportunities. >> You have a focus on a good growth mindset, challenging through, managing through tough times. How do you stay focused on that North Star? How do you keep the reinforcement of the mission? How do you nurture the team to greatness? >> Yeah, so I think it's a lot of clarity, providing a lot of clarity about what's important right now. And it goes back to some of the communication that I mentioned earlier, making sure that everybody knows where the North Star is, so everybody's focused on the same thing, because I always felt like throughout my career I was set up for success if I had the right information, the right guidance and the right goals. And I try to make sure that I do that with my team. 
>> What are some of the things that you could share as we wrap up here for the folks watching, as the networks increase, as the stories start to unfold more and more on digital like we're doing here, what do you hope people walk away with? What's working, what needs work, and what are some things that people aren't talking about that should be discussed publicly? >> Do you mean from a career standpoint or? >> For career? For growing into tech and into leadership positions. >> Okay. >> Big migration, tech is now a wide field. I mean, when I grew up, broke in, in the eighties, it was computer science, software engineering, and three degrees in engineering, right? >> I see a huge swath of AI coming. So many technical careers. There's a lot more women. >> Yeah. And that's what's so exciting about being in a technical career, technical company, is that everything's always changing. There's always opportunity to learn something new. And frankly, you know, every company is in the business of technology right now, because they want to be closer to their customers. Typically, they're using technology to do that. Everyone's digitally transforming. And so what I would say is that there's so much opportunity, keep your mind open, explore what interests you and keep learning, because it's changing all the time. >> You know, I was talking with Sue, former HP, she's on a lot of boards. The balance at the board level still needs a lot of work, and the leadership ranks are getting better, but the board, the seats at the table, needs work. Where do you see that transition for you in the future? Is that something on your mind? Maybe a board seat? You mentioned you're on a board with HPE, but maybe sitting on some other boards? Any, any? >> Yes, actually, actually, we actually have a program here at HPE called the Board Ready Now program that I'm a part of. And so HPE is very supportive of me exploring an independent board seat. And so they have some education and programming around that. And I know Sue well, she's awesome. And so yes, I'm looking into those opportunities right now. >> She advises do one, no more than two, with the day job. >> Yeah, I would only be doing one, with the current job that I have. >> Well, Krista, it was great to chat with you about these topics and leadership and challenging times. Great masterclass, great advice. As SVP and GM of mainstream compute for HPE, what's going on in your job these days? What's the most exciting thing happening? Share some of your work situations. >> Sure, so the most exciting thing happening right now is HPE Gen 11, which we just announced and started shipping. It brings tremendous performance benefit, has an intuitive operating experience, trusted security by design, and it's optimized to run workloads so much faster. So if anybody is interested, they should go check it out on hpe.com. >> And of course the CUBE will be at HPE Discover. We'll see you there. Any final wisdom you'd like to share as we wrap up the last minute here? >> Yeah, so I think the last thing I'll say is that when it comes to setting your sights, I think expecting good things to happen usually happens when you believe you deserve it. So what happens is you believe you deserve it, then you expect it and you get it. And so sometimes that's about making sure you raise your thermostat to expect more. And I always talk about how you don't have to raise it all up at once. 
You could do that incrementally, and other people can set your thermostat too when they say, hey, you should be, you should get to a level this high or that high, but raise your thermostat, because what you expect is what you get. >> Krista, thank you so much for contributing to this program. We're going to do it quarterly. We're going to be getting more stories out there, so we'll have you back, and if you know anyone with good stories, send them our way. And congratulations on your BPTN Tech Executive of the Year award for 2023. Congratulations, great prize there and great recognition for your hard work. >> Thank you so much, John, I appreciate it. >> Okay, this is the Cube's coverage of International Women's Day. I'm John Furrier, stories from the front lines, management ranks, developers, all there, global coverage of international events with theCUBE. Thanks for watching. (soft music)
Joseph Nelson, Roboflow | Cube Conversation
(gentle music) >> Hello everyone. Welcome to this CUBE conversation here in Palo Alto, California. I'm John Furrier, host of theCUBE. We got a great remote guest coming in. Joseph Nelson, co-founder and CEO of RoboFlow, a hot startup in AI, computer vision. Really interesting topic in this wave of AI next gen hitting. Joseph, thanks for coming on this CUBE conversation. >> Thanks for having me. >> Yeah, I love the startup tsunami that's happening here in this wave. RoboFlow, you're in the middle of it. Exciting opportunities, you guys are on the cutting edge. I think computer vision's been talked about just as much as the large language models, and these foundational models are merging. You're in the middle of it. What's it like right now as a startup and growing in this new wave hitting? >> It's kind of funny, it's, you know, I kind of describe it like sometimes you're in a garden of gnomes. It's like we feel like we've got this giant headstart with hundreds of thousands of people building with computer vision, training their own models, but that's a fraction of what it's going to be in six months, 12 months, 24 months. So, as you described it, a wave is a good way to think about it. And the wave is still building before it gets to its full size. So it's a ton of fun. >> Yeah, I think it's one of the most exciting areas in computer science. I wish I was in my twenties again, because I would be all over this. It's the intersection, there's so many disciplines, right? It's not just tech or computer science, it's computer science, it's systems, it's software, it's data. There's so much aperture of things going on around your world. So, I mean, you got to be batting all the students away kind of trying to get hired in there, probably. I can only imagine your hiring regimen. I'll ask that later, but first talk about what the company is that you're doing. How it's positioned, what's the market you're going after, and what's the origination story? How did you guys get here? How did you just say, hey, want to do this? What was the origination story? What do you do and how did you start the company? >> Yeah, yeah. I'll give you the what we do today and then I'll shift into the origin. RoboFlow builds tools for making the world programmable. Like anything that you see should be read/write access if you think about it with a programmer's mind, or legible. And computer vision is a technology that enables software to be added to these real world objects that we see. And so any sort of interface, any sort of object, any sort of scene, we can interact with it, we can make it more efficient, we can make it more entertaining by adding the ability for the tools that we use and the software that we write to understand those objects. And at RoboFlow, we've empowered a little over a hundred thousand developers, including those in half the Fortune 100, so far in that mission. Whether that's Walmart understanding the retail in their stores, Cardinal Health understanding the ways that they're helping their patients, or even electric vehicle manufacturers ensuring that they're making the right stuff at the right time. As you mentioned, it's early. Like I think maybe computer vision has touched one, maybe 2% of the whole economy and it'll be like everything in a very short period of time. And so we're focused on enabling that transformation. I think it's it. As far as I think about it, I've been fortunate to start companies before, start, sell, these sorts of things. 
This is the last company I ever wanted to start and I think it will be, should we do it right, the world's largest in riding the wave of bringing together the disparate pieces of that technology. >> What was the motivating point of the formation? Was it, you know, you guys were hanging around? Was there some catalyst? What was the moment where it all kind of came together for you? >> You know what's funny is my co-founder, Brad and I, we were making computer vision apps for making board games more fun to play. So in 2017, Apple released ARKit, the augmented reality kit for building augmented reality applications. And Brad and I are both sort of like hacker persona types. We feel like we don't really understand the technology until we build something with it and so we decided that we should make an app that if you point your phone at a Sudoku puzzle, it understands the state of the board and then it kind of magically fills in that experience with all the digits in real time, which totally ruins the game of Sudoku to be clear. But it also just creates this like aha moment of like, oh wow, like the ability for our pocket devices to understand and see the world as good or better than we can is possible. And so, you know, we actually did that as I mentioned in 2017, and the app went viral. It was, you know, top of some subreddits, top of Imgur, Reddit, the hacker community, as well as Product Hunt, which really liked it. So it actually won Product Hunt AR app of the year, which was the same year that the Tesla Model 3 won the product of the year. So we joked that we share an award with Elon, our shared (indistinct) But frankly, so that was 2017. RoboFlow wasn't incorporated as a business until 2019. And so, you know, when we made Magic Sudoku, I was running a different company at the time, Brad was running a different company at the time, and we kind of just put it out there and were excited by how many people liked it. And we assumed that other curious developers would see this inevitable future of, oh wow, you know. This is much more than just a pedestrian point your phone at a board game. This is everything can be seen and understood and rewritten in a different way. Things like, you know, maybe your fridge. Knowing what ingredients you have and suggesting recipes or auto ordering for you, or we were talking about some retail use cases of automated checkout. Like anything can be seen and observed and we presumed that that would kick off a Cambrian explosion of applications. It didn't. So you fast forward to 2019, we said, well we might as well be the guys to start to tackle this sort of problem. And because of our success with board games before, we returned to making more board game solving applications. So we made one that solves Boggle, you know, the four by four word game, we made one that solves chess, you point your phone at a chess board and it understands the state of the board and then can make move recommendations. And with each additional board game that we added, we realized that the tooling was really immature. The process of collecting images, knowing which images are actually going to be useful for improving model performance, training those models, deploying those models. And if we really wanted to make the world programmable, developers waiting for us to make an app for their thing of interest is a lot less efficient, less impactful than taking our tool chain and releasing that externally. And so, that's what RoboFlow became.
RoboFlow became the internal tools that we used to make these game changing applications readily available. And as you know, when you give developers new tools, they create new billion dollar industries, let alone all sorts of fun hobbyist projects along the way. >> I love that story. Curious, inventive, a little radical. Let's break the rules, see how we can push the envelope on the board games. That's how companies get started. It's a great story. I got to ask you, okay, what happens next? Now, okay, you realize this new tooling, but this is like how companies get built. Like they solve their own problem that they had 'cause they realized there's one, but then there has to be a market for it. So you guys actually knew that this was coming around the corner. So okay, you got your hacker mentality, you did that thing, you got the award and now you're like, okay, wow. Were you guys conscious of the wave coming? Was it one of those things where you said, look, if we do this, we solve our own problem, this will be big for everybody. Did you have that moment? Was that in 2019 or was that more of like, it kind of was obvious to you guys? >> Absolutely. I mean Brad puts this pretty effectively where he describes how we lived through the initial internet revolution, but we were kind of too young to really recognize and comprehend what was happening at the time. And then mobile happened and we were working on different companies that were not in the mobile space. And computer vision feels like the wave that we've caught. Like, this is a technology and capability that rewrites how we interact with the world, how everyone will interact with the world. And so we feel we've been kind of lucky this time, right place, right time of every enterprise will have the ability to improve their operations with computer vision. And so we've been very cognizant of the fact that computer vision is one of those groundbreaking technologies that every company will have as a part of their products and services and offerings, and we can provide the tooling to accelerate that future. >> Yeah, and the developer angle, by the way, I love that because I think, you know, as we've been saying in theCUBE all the time, developers are the new de facto standard bodies because what they adopt is pure, you know, meritocracy. And they pick the best. If it's self-service and it's good and it's got an open source community around it, it's all in. And they'll vote. They'll vote with their code and that is clear. Now I got to ask you, as you look at the market, we were just having this conversation on theCUBE in Barcelona at the recent Mobile World Congress, now called MWC, around 5G versus wifi. And the debate was specifically computer vision, like facial recognition. We were talking about how the Cleveland Browns were using facial recognition for people coming into the stadium, and they were using it for ships in international ports. So the question was 5G versus wifi. My question is what infrastructure or what are the areas that need to be in place to make computer vision work? If you have developers building apps, apps got to run on stuff. So how do you sort that out in your mind? What's your reaction to that? >> A lot of the times when we see applications that need to run in real time and on video, they'll actually run at the edge without internet. And so a lot of our users will actually take their models and run it in a fully offline environment.
Now to act on that information, you'll often need to have internet signal at some point 'cause you'll need to know how many people were in the stadium or what shipping crates are in my port at this point in time. You'll need to relay that information somewhere else, which will require connectivity. But actually using the model and creating the insights at the edge does not require internet. I mean we have users that deploy models on underwater submarines just as much as in outer space actually. And those are not very friendly environments to internet, let alone 5G. And so what you do is you use an edge device, like an Nvidia Jetson is common, mobile devices are common. Intel has some strong edge devices, the Movidius family of chips for example. And you use that compute that runs completely offline in real time to process those signals. Now again, what you do with those signals may require connectivity and that becomes a question of the problem you're solving of how soon you need to relay that information to another place. >> So, that's an architectural issue on the infrastructure. If you're a tactical edge war fighter for instance, you might want to have it highly available and maybe high availability. I mean, these are words that mean something. You got storage, but it's not at the edge in real time. But you can trickle it back and pull it down. That's management. So that's more of a business by business decision or environment, right? >> That's right, that's right. Yeah. So I mean we can talk through some specifics. So for example, RoboFlow actually powers the broadcaster that does the tennis ball tracking at Wimbledon. That runs completely at the edge in real time in, you know, technically to track the tennis ball and point the camera, you actually don't need internet. Now they do have internet of course to do the broadcasting and relay the signal and feeds and these sorts of things. And so that's a case where you have both edge deployment of running the model and high availability to act on that model. We have other instances where customers will run their models on drones and the drone will go and do a flight and it'll say, you know, this many residential homes are in this given area, or this many cargo containers are in this given shipping yard. Or maybe we saw these environmental considerations of soil erosion along this riverbank. The model in that case can run on the drone during flight without internet, but then you only need internet once the drone lands and you're going to act on that information because for example, if you're doing like a study of soil erosion, you don't need to be real time. You just need to be able to process and make use of that information once the drone finishes its flight. >> Well I can imagine a zillion use cases. I heard of a use case in an interview at a company that does computer vision to help people see if anyone's jumping the fence at their company. Like, they know what a body looks like climbing a fence and they can spot it. Pretty easy use case compared to probably some of the other things, but this is the horizontal use case, there's so many use cases. So how do you guys talk to the marketplace when you say, hey, we have generative AI for computer vision. You might know language models, that's a completely different animal, because vision's like the world, right? So you got a lot more to do. What's the difference? How do you explain that to customers? What can I build and what's their reaction?
>> Because we're such a developer-centric company, developers are usually creative and show you the ways that they want to take advantage of new technologies. I mean, we've had people use things for identifying conveyor belt debris, doing gas leak detection, measuring the size of fish, airplane maintenance. We even had someone with, like, a hobby use case where they did like a specific sushi identifier. I dunno if you know this, but there's a specific type of whitefish that if you grew up in the western hemisphere and you eat it in the eastern hemisphere, you get very sick. And so there was someone that made an app that tells you if you happen to have that fish in the sushi that you're eating. But security camera analysis, transportation flows, plant disease detection, really, you know, smarter cities. We have people that are doing curb management identification, and with a lot of these use cases, the fantastic thing about building tools for developers is they're a creative bunch and they have these ideas that if you and I sat down for 15 minutes and said, let's guess every way computer vision can be used, we would need weeks to list all the example use cases. >> We'd miss everything. >> And we'd miss. And so having the community show us the ways that they're using computer vision is impactful. Now that said, there are of course commercial industries that have discovered the value and been able to be out of the gate. And that's where we have the Fortune 100 customers, like we do. Like the retail customers in the Walmart sector, healthcare providers like Medtronic, or vehicle manufacturers like Rivian who all have very difficult either supply chain, quality assurance, in stock, out of stock, anti-theft protection considerations that require successfully making sense of the real world. >> Let me ask you a question. This is maybe a little bit in the weeds, but it's more developer focused. What are some of the developer profiles that you're seeing right now in terms of low-hanging fruit applications? And can you talk about the academic impact? Because I imagine if I was in school right now, I'd be all over it. Are you seeing master's theses being worked on with some of your stuff? Is the uptake in both areas, with younger undergraduates? And then inside the workforce, what are some of the devs like? Can you share just either what their makeup is, what they work on, give a little insight into the devs you're working with. >> Leading developers that want to be on state-of-the-art technology build with RoboFlow because they know they can use the best in class open source. They know that they can get the most out of their data. They know that they can deploy extremely quickly. That's true among students, as you mentioned, just as much as industries. So we welcome students, and I mean, we have research grants that regularly support people to publish. I mean we actually have a channel inside our internal Slack where every day, more student publications that cite building with RoboFlow pop up. And so, that helps inspire some of the use cases. Now what's interesting is that the use case is often, you know, just as useful or applicable for the business as for the student. In other words, if a student does a thesis on how to do, we'll say like shingle damage detection from satellite imagery, and they're just doing that as a master's thesis, in fact most insurance businesses would be interested in that sort of application.
So, that's kind of how we see uptake and adoption both among researchers who want to be on the cutting edge and publish, both with RoboFlow and making use of open source tools in tandem with the tool that we provide, just as much as industry. And you know, I'm a big believer in the philosophy that kind of like what the hackers are doing nights and weekends, the Fortune 500 are doing in a pretty short period of time, and we're experiencing that transition. Computer vision used to be, you know, kind of like a PhD-level, multi-year investment endeavor. And now with some of the tooling that we're working on in open source technologies and the compute that's available, these science fiction ideas are possible in an afternoon. And so you have this idea of maybe doing asset management or the aerial observation of your shingles or things like this. You have a few hundred images and you can de-risk whether that's possible for your business today. So there's pretty broad-based adoption among both researchers that want to be on the state of the art, as much as companies that want to reduce the time to value. >> You know, Joseph, you guys and your partner have got a great front row seat, ground floor, present at the creation of this wave here. I'm seeing a pattern emerging from all my conversations on theCUBE with founders that are successful, like yourselves, that there's two kinds of real things going on. You got the enterprises grabbing the products and retrofitting into their legacy and rebuilding their business. And then you have startups coming out of the woodwork. Young, seeing greenfield, or picking a specific niche or focus and making that the signature lever to move the market. >> That's right. >> So can you share your thoughts on the startup scene, other founders out there and talk about that? And then I have a couple questions for like the enterprises, the old school, the existing legacy. Little slower, but the startups are moving fast. What are some of the things you're seeing as startups are emerging in this field? >> I think you make a great point that independent of RoboFlow, very successful, especially developer focused businesses, kind of have three customer types. You have the startups, and maybe like series A, series B startups, that you're building a product as fast as you can to keep up with them, and they're really moving just as fast as you are and pulling the product out of you for things that they need. The second segment that you have might be, call it SMB but not enterprise, who are able to purchase and aren't, you know, as fast moving, but are stable and getting value and able to get to production. And then the third type is enterprise, and that's where you have typically larger contract value sizes, slower moving in terms of adoption and feedback for your product. And I think what you see is that successful companies balance having those three customer personas because you have the small startups, small fast moving upstarts that are discerning buyers who know the market and elect to build on tooling that is best in class. And so you basically kind of pass the smell test of companies who are quite discerning in their purchases, plus are moving so quickly they're pulling their product out of you. Concurrently, you have a product that's enterprise-ready to service the scalability, availability, and trust needs of enterprise buyers. And that's ultimately where a lot of companies will see tremendous commercial success.
I mean I remember seeing the Twilio IPO, Uber being like a full 20% of their revenue, right? And so there's this very common pattern where you have the ability to find some of those upstarts that you make bets on, like the next Ubers of the world, the smaller companies that continue to get developed with the product, and then the enterprise, who allows you to really fund the commercial success of the business, and validate the size of the opportunity in the market that's being created. >> It's interesting, there's so many things happening there. It's like, in a way it's a new category, but it's not a new category. It becomes a new category because of the capabilities, right? So, it's really interesting, 'cause what you're talking about is category creation. >> I think developer tools. So people often talk about B to B and B to C businesses. I think developer tools are in some ways a third way. I mean ultimately they're B to B, you're selling to other businesses and that's where your revenue's coming from. However, you look kind of like a B to C company in the ways that you measure product adoption and kind of go to market. In other words, you know, we're often tracking the leading indicators of commercial success in the form of usage, adoption, retention. Really consumer app, traditionally based metrics of how to know you're building the right stuff, and that's what product led growth companies do. And then you ultimately have commercial traction in a B to B way. And I think that that actually kind of looks like a third thing, right? Like you can do these sort of funny zany marketing examples that you might see historically from consumer businesses, but yet you ultimately make your money from the enterprise who has these de-risked high value problems you can solve for them. And I selfishly think that that's the best of both worlds because I don't have to be like Evan Spiegel, guessing the next consumer trend or maybe creating the next consumer trend and catching lightning in a bottle over and over again on the consumer side. But I still get to have fun in our marketing and make sort of fun, like we're launching the world's largest game of rock paper scissors being played with computer vision, right? Like that's sort of like a fun thing you can do, but then you can concurrently have the commercial validation and customers telling you the things that they need to be built for them next to solve commercial pain points for them. So I really do think that you're right by calling this a new category and it really is the best of both worlds. >> It's a great call out, it's a great call out. In fact, I always joke with the VCs. I'm like, it's so easy. Your job is so easy, to pick the winners. What are you talking about, it's so easy? I go, just watch what the developers jump on. And it's not about who started, it could be someone in the dorm room to the boardroom person. You don't know, because that B to C, the C, it's B to D, you know? You know it's developer, 'cause that's a human, right? That's a consumer of the tool which influences the business that never was there before. So I think this direct business model evolution, whether it's media going direct or going direct to the developers rather than going to a gatekeeper, this is the reality. >> That's right. >> Well I got to ask you while we got some time left to describe, I want to get into this topic of multi-modality, okay? And can you describe what that means in computer vision?
And what's the state of the growth of that portion of this piece? >> Multi-modality refers to using multiple traditionally siloed problem types, meaning text, image, video, audio. So you could treat an audio problem as only processing audio signal. That is not multimodal, but you could use the audio signal at the same time as a video feed. Now you're talking about multi-modality. In computer vision, multi-modality is predominantly happening with images and text. And one of the biggest releases in this space is actually two years old now, was CLIP, contrastive language-image pre-training, which took 400 million image-text pairs and basically, instead of previously when you do classification, you basically map every single image to a single class, right? Like here's a bunch of images of chairs, here's a bunch of images of dogs. What CLIP did is, you can think about it like, the class for an image being the Instagram caption for the image. So it's not one single thing. And by training on understanding the corpora, you basically see which words, which concepts are associated with which pixels. And this opens up the aperture for the types of problems and generalizability of models. So what does this mean? This means that you can get to value more quickly from an existing trained model, or at least validate that what you want to tackle with computer vision, you can get there more quickly. It also opens up the, I mean, CLIP has been the bedrock of some of the generative image techniques that have come to bear, just as much as some of the LLMs. And increasingly we're going to see more and more of multi-modality being a theme, simply because at its core, you're including more context into what you're trying to understand about the world. I mean, in its most basic sense, you could ask yourself, if I have an image, can I know more about that image with just the pixels? Or if I have the image and the sound of when that image was captured, or I had someone describe what they see in that image when the image was captured, which one's going to be able to get you more signal? And so multi-modality helps expand the ability for us to understand signal processing. >> Awesome. And can you just real quick, define CLIP for the folks that don't know what that means? >> Yeah. CLIP is a model architecture, it's an acronym for contrastive language-image pre-training, and like, you know, model architectures that have come before it, it captures the, almost like, models are kind of like brands. So I guess it's a brand of a model where you've done these 400 million image-text pairs to match up which visual concepts are associated with which text concepts. And there have been new releases of CLIP, just at bigger sizes, with bigger encodings, longer strings of text, or larger image windows. But it's been a really exciting advancement that OpenAI released in January, 2021. >> All right, well great stuff. We got a couple minutes left. Just I want to get into more of a company-specific question around culture. All startups have, you know, some sort of cultural vibe. You know, Intel has Moore's Law, doubles every whatever, six months. What's your culture like at RoboFlow? I mean, if you had to describe that culture, obviously love the hacking story, you and your partner with the games going number one on Product Hunt next to Elon and Tesla and then hey, we should start a company two years later. That's kind of like a curious, inventing, building, hard charging, but laid back. That's my take.
How would you describe the culture? >> I think that you're right. The culture that we have is one of shipping, making things. So every week each team shares what they did for our customers on a weekly basis. And we have such a strong emphasis on being better week over week that those sorts of things compound. So one big emphasis in our culture is getting things done, shipping, doing things for our customers. The second is we're an incredibly transparent place to work. For example, how we think about making decisions, where we're progressing against our goals, what problems are biggest and most important for the company is all open information for those that are inside the company to know and progress against. The third thing that I'd use to describe our culture is one that thrives with autonomy. So RoboFlow has a number of individuals who have founded companies before, some of which have sold their businesses for a hundred million plus upon exit. And the way that we've been able to attract talent like that is because the problems that we're tackling are so immense, yet individuals are able to charge at it in the way that they think is best. And this is what pairs well with transparency. If you have a strong sense of what the company's goals are, how we're progressing against it, and you have this ownership mentality of what can I do to change or drive progress against that given outcome, then you create a really healthy pairing of, okay cool, here's where the company's progressing. Here's where things are going really well, here's the places that we most need to improve and work on. And if you're inside that company as someone who has a propensity to be a self-starter and even a history of building entire functions or companies yourself, then you're going to be in a place where you can really thrive. You have the inputs of the things where we need to work on to progress the company's goals. And you have the background of someone that is just naturally a fast moving and ambitious type of individual. So I think the best way to describe it is a transparent place with autonomy and an emphasis on getting things done. >> Getting shit done as they say. Getting stuff done. Great stuff. Hey, final question. Put a plug out there for the company. Who are you going to hire? What's your pipeline look like for people? What jobs are open? I'm sure you got hiring going on all around. Give a quick plug for the company, what you're looking for. >> I appreciate you asking. Basically you're either building the product or helping customers be successful with the product. So in the building product category, we have platform engineering roles, machine learning engineering roles, and we're solving some of the hardest and most impactful problems of bringing such a groundbreaking technology to the masses. And so it's a great place to be where you can kind of be your own user as an engineer. And then if you're enabling people to be successful with the products, I mean you're working in a place where there's already such a strong community around it and you can help shape, foster, cultivate, activate, and drive commercial success in that community. So those are roles that lend themselves to being those that build the product, or developer advocacy, those that are account executives that are enabling our customers to realize commercial success, and even hybrid roles, like what we call field engineering, where you are a technical resource to drive success within customer accounts.
And so all this is listed on roboflow.com/careers. And one thing that I actually kind of want to mention, John, that's kind of novel about working at RoboFlow. So there's been a lot of discussion around remote companies and there's been a lot of discussion around in-person companies and do you need to be in the office? And one thing that we've kind of recognized is you can actually chart a third way. You can create a third way which we call satellite, which basically means people can work from where they most like to work, and there's clusters of people, regular onsites. And at RoboFlow everyone gets, for example, $2,500 a year that they can use to spend on visiting coworkers. And so what's sort of organically happened is team members have started to pool together these resources and rent out, like, lavish Airbnbs for like a week and then everyone kind of like descends in and works together for a week and makes and creates things. And we call this lighthouses because, you know, a lighthouse kind of brings ships into harbor and we have an emphasis on shipping. >> Yeah, quality people that are creative and doers and builders. You give 'em some cash and let the self-governing begin, you know? And like, creativity goes through the roof. It's a great story. I think that sums up the culture right there, Joseph. Thanks for sharing that and thanks for this great conversation. I really appreciate it and it's very inspiring. Thanks for coming on. >> Yeah, thanks for having me, John. >> Joseph Nelson, co-founder and CEO of RoboFlow. Hot company, great culture, in the right place in a hot area, computer vision. This is going to explode in value. The edge is exploding. More use cases, more development, and developers are driving the change. Check out RoboFlow. This is theCUBE. I'm John Furrier, your host. Thanks for watching. (gentle music)
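To make the contrastive language-image idea Joseph describes above a little more concrete, here is a minimal sketch of how a CLIP-style model scores one image against a handful of candidate captions. It is only an illustration: the random tensors stand in for the outputs of real, trained image and text encoders, the caption strings are invented, and this is not Roboflow's or OpenAI's actual code.

```python
# Minimal sketch of CLIP-style zero-shot scoring: compare an image embedding
# against several caption embeddings in a shared space and pick the best match.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
EMBED_DIM = 512  # CLIP commonly uses a 512-dimensional shared embedding space

# Placeholders: in a real system these come from trained image/text encoders.
image_embedding = torch.randn(1, EMBED_DIM)
captions = ["a photo of a chair", "a photo of a dog", "a sudoku puzzle on paper"]
text_embeddings = torch.randn(len(captions), EMBED_DIM)

# Contrastive models compare embeddings by cosine similarity, so normalize first.
image_embedding = F.normalize(image_embedding, dim=-1)
text_embeddings = F.normalize(text_embeddings, dim=-1)

# Similarity of the image to every caption, turned into probabilities.
logits = image_embedding @ text_embeddings.T      # shape: (1, num_captions)
probs = logits.softmax(dim=-1).squeeze(0)

for caption, p in zip(captions, probs.tolist()):
    print(f"{p:.2f}  {caption}")
```

The same scoring is what makes the zero-shot behavior Joseph mentions possible: because the "classes" are just captions, you can swap in a new caption list without retraining anything.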
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Brad | PERSON | 0.99+ |
Joseph | PERSON | 0.99+ |
Joseph Nelson | PERSON | 0.99+ |
January, 2021 | DATE | 0.99+ |
John Furrier | PERSON | 0.99+ |
Medtronic | ORGANIZATION | 0.99+ |
Walmart | ORGANIZATION | 0.99+ |
2019 | DATE | 0.99+ |
Uber | ORGANIZATION | 0.99+ |
Apple | ORGANIZATION | 0.99+ |
John | PERSON | 0.99+ |
400 million | QUANTITY | 0.99+ |
Evan Spiegel | PERSON | 0.99+ |
24 months | QUANTITY | 0.99+ |
2017 | DATE | 0.99+ |
RoboFlow | ORGANIZATION | 0.99+ |
15 minutes | QUANTITY | 0.99+ |
Rivian | ORGANIZATION | 0.99+ |
12 months | QUANTITY | 0.99+ |
20% | QUANTITY | 0.99+ |
Cardinal Health | ORGANIZATION | 0.99+ |
Palo Alto, California | LOCATION | 0.99+ |
Barcelona | LOCATION | 0.99+ |
Wimbledon | EVENT | 0.99+ |
roboflow.com/careers | OTHER | 0.99+ |
first | QUANTITY | 0.99+ |
second segment | QUANTITY | 0.99+ |
each team | QUANTITY | 0.99+ |
six months | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
both worlds | QUANTITY | 0.99+ |
2% | QUANTITY | 0.99+ |
two years later | DATE | 0.98+ |
Mobile World Congress | EVENT | 0.98+ |
Ubers | ORGANIZATION | 0.98+ |
third way | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
a week | QUANTITY | 0.98+ |
Magic Sudoku | TITLE | 0.98+ |
second | QUANTITY | 0.98+ |
Nvidia | ORGANIZATION | 0.98+ |
Sudoku | TITLE | 0.98+ |
MWC | EVENT | 0.97+ |
today | DATE | 0.97+ |
billion dollar | QUANTITY | 0.97+ |
one single thing | QUANTITY | 0.97+ |
over a hundred thousand developers | QUANTITY | 0.97+ |
four | QUANTITY | 0.97+ |
third | QUANTITY | 0.96+ |
Elon | ORGANIZATION | 0.96+ |
third thing | QUANTITY | 0.96+ |
Tesla | ORGANIZATION | 0.96+ |
Jetson | COMMERCIAL_ITEM | 0.96+ |
Elon | PERSON | 0.96+ |
RoboFlow | TITLE | 0.96+ |
ORGANIZATION | 0.95+ | |
Twilio | ORGANIZATION | 0.95+ |
twenties | QUANTITY | 0.95+ |
Product Hunt AR | TITLE | 0.95+ |
Moore | PERSON | 0.95+ |
both researchers | QUANTITY | 0.95+ |
one thing | QUANTITY | 0.94+ |
Andy Sheahen, Dell Technologies & Marc Rouanne, DISH Wireless | MWC Barcelona 2023
>> (Narrator) The CUBE's live coverage is made possible by funding from Dell Technologies. Creating technologies that drive human progress. (upbeat music) >> Welcome back to Fira Barcelona. It's theCUBE live at MWC23; our third day of coverage of this great, huge event continues. Lisa Martin and Dave Nicholson here. We've got Dell and Dish here, we are going to be talking about what they're doing together. Andy Sheahen joins as global director of Telecom Cloud Core and Next Gen Ops at Dell. And Marc Rouanne, one of our alumni, is back, EVP and Chief Network Officer at Dish Wireless. Welcome guys. >> Great to be here. >> (Both) Thank you. >> (Lisa) Great to have you. Marc, talk to us about what's going on at Dish Wireless. Give us the update. >> Yeah, so we've built a network from scratch in the US, that covered the US. We use a cloud-based, cloud-native approach, so from the bottom of the tower all the way to the internet it uses cloud, distributed cloud, emits it, so there are a lot of things about that. But it's unique, and now it's working, so we're starting to play with it and that's pretty cool. >> What's some of the proof points, proof in the pudding? >> Well, for us, first of all it was to do basic voice and data on a smartphone, and for me the success would be that you won't see the difference for a smartphone. That's baseline. The next step is bringing this to the enterprise for their use case. So we've covered- now we have services for smartphones. We use our brand, the Boost brand, and we are distributing that across the US. But as I said, the real good stuff is when you start making, you know, the machines and all the data and the applications for the enterprise. >> Andy, how is Dell a facilitator of what Marc just described and the use cases and what they're able to deliver? >> We're providing a number of the servers that are being used out in their radio access network. The virtual DU servers, we're also providing some bare metal orchestration capabilities to help automate the process of deploying all these hundreds and thousands of nodes out in the field. Both of these, the servers and the bare metal orchestration product, are things that we developed in concert with Dish, working together to understand the way, the best way to automate, based on the tooling they're using in other parts of their network, and we've been with you guys since day one, really. >> (Marc) Absolutely, yeah. >> Making each other's solutions better the whole way. >> Marc, why Dell? >> So, the way the networks work is you have a cloud, and you have a distributed edge. You need someone who understands the diversity of the edge in order to bring the cloud software to the edge, and Dell is the best there, you know, you can, we can ask them to mix and match accelerators, processors, memory, it's very diverse distributed edge. We are building twenty thousand sites, so you imagine the size and the complexity, and Dell was the right partner for that. >> (Andy) Thank you. >> So you mentioned addressing enterprise needs, which is interesting because there's nothing that would prevent you from going after consumer wireless technically, right, but it sounds like you have taken a look at the market and said "we're going to go after this segment of the market." >> (Marc) Yeah. >> At least for now. Are there significant differences between what an enterprise expects from a 5G network versus a consumer? >> Yeah.
>> (Dave) They have higher expectations, maybe, number one I guess is, if my bill is 150 dollars a month I can have certain levels of expectations, whereas a large enterprise that may be making a much more significant investment, are their expectations greater? >> (Marc) Yeah. >> Do you have a higher bar to get over? >> So first, I mean first we use our network for consumers, but for us it's an enterprise. That is, the consumer segment is an enterprise. So we expose the network like we would to a car manufacturer, or to a distributor of goods, of food and beverage. But what you expect when you are an enterprise, you expect managed services. You expect to control the goodness of your services, and for this you need to observe what's happening. Are you delivering the right service? What is the feedback from the enterprise users? And that's what we call the observability. We have a data-centric network, so our enterprises are saying "Yeah, connecting is enough, but show us how it works, and show us how we can learn from the data, improve, improve, and become more competitive." That's the big difference. >> So what would you say, Marc, are some of the outcomes you've achieved working with Dell? TCO, ROI, CapEx, OpEx, what are some of the outcomes so far that you've been able to accomplish? >> Yeah, so obviously we don't share our numbers, but we're very competitive. Both on the CapEx and the OpEx. And the second thing is that we are much faster in terms of innovation, you know, one of the things that telcos would not do was to tap into the IT industry. So we have access to the silicon and we have access to the software, and at a scale that none of the telcos could ever do, and for us it's like "wow" and it's a very powerful industry, and we've been driving the consist- it's a bit technical, but all the silicon, the accelerators, the processors, the GPUs, the TPUs, and it's like wow. It's really a transformation. >> Andy, is there anything analogous that you've dealt with in the past to the situation where you have this true core edge environment, where you have to instrument the devices that you provide to give that level of observation or observability, whatever the new word is, that we've invented for that. >> Yeah, yeah. >> I mean has there, is there anything- >> Yeah absolutely. >> Is this unprecedented? >> No, no not at all. I mean Dell's been really working at the edge since before the edge was called the edge, right, we've been selling our hardware and infrastructure out to retail shops, branch office locations, you know, just smaller form factors outside of data centers for a very long time, and so that's sort of the consistency from what we've been doing for 30 years to now; the difference is the volume, the different number of permutations as Marc was saying. The different types of accelerator cards, the different SKUs of different server types, the sheer volume of nodes that you have in a nationwide wireless network. So the volumes are much different, the amount of data is much different, but the process is really the same. It's about having the infrastructure in the right place at the right time and being able to understand if it's working well or if it's not, and it's not just about a red light or a green light but healthy and unhealthy conditions, and predicting when the red light's going to come on. And we've been doing that for a while, it's just a different scale, and a different level of complexity when you're trying to piece together all these different components from different vendors.
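As an aside on Andy's point that monitoring is "not just about a red light or a green light": the short sketch below shows one simplified way telemetry thresholds plus a trend check could classify a node as healthy, degraded, or unhealthy and flag it before the red light comes on. The metric names, thresholds, and numbers are invented for illustration; this is not Dell's actual tooling.

```python
# Hypothetical sketch: classify edge-node health from telemetry and flag nodes
# that are trending toward failure even while they still look acceptable.
from dataclasses import dataclass
from typing import List

@dataclass
class Telemetry:
    cpu_util: float     # 0.0-1.0 utilization
    temp_c: float       # component temperature in Celsius
    packet_loss: float  # 0.0-1.0 fraction of packets lost

def classify(sample: Telemetry) -> str:
    if sample.temp_c > 90 or sample.packet_loss > 0.05:
        return "unhealthy"
    if sample.cpu_util > 0.85 or sample.temp_c > 80 or sample.packet_loss > 0.01:
        return "degraded"
    return "healthy"

def trending_toward_failure(history: List[Telemetry]) -> bool:
    # Naive prediction: flag a node whose temperature rose across recent samples.
    temps = [t.temp_c for t in history[-4:]]
    return len(temps) >= 3 and all(b > a for a, b in zip(temps, temps[1:]))

history = [
    Telemetry(0.55, 71, 0.001),
    Telemetry(0.60, 75, 0.002),
    Telemetry(0.62, 79, 0.002),
    Telemetry(0.64, 83, 0.003),
]

print(classify(history[-1]))             # degraded: temperature is above 80 C
print(trending_toward_failure(history))  # True: temperature keeps climbing
```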
>> So we talk a lot about ecosystem, and sometimes, because of the desire to talk about the outcomes and what the end users, the customers, really care about, we will stop at the layer where, say, a Dell lives, and we'll see that as the sum total of the component when really, when you talk about a server that Dish is using, that in and of itself is an ecosystem. >> Yep, yeah >> (Dave) or there's an ecosystem behind it, you just mentioned it, the kinds of components and the choices that you make when you optimize these devices determine how much value Dish, >> (Andy) Absolutely. >> Can get out of that. How deep are you on that hardware? I'm a knuckle-dragging hardware guy. >> Deep, very deep, I mean just the number of permutations that we're working through with Dish and other operators as well, different accelerator cards that we talked about, different techniques for timing, obviously there's different SKUs with the silicon itself, different chipsets, different chips from different providers, all those things have to come together, and we build the basic foundation and then we also started working with our cloud partners, Red Hat, Wind River, all these guys, VMware, of course, and that's the next layer up, so you've got all the different hardware components, you've got the abstraction layer, with your virtualization layer and/or Kubernetes layer, and all of that stuff together has to be managed, compatibility matrices that get very deep and very big, very quickly, and that's really the foundational challenge, we think, of open RAN: making sure all these different pieces are going to fit together and not just work today but work every day as everything gets updated much more frequently than in the legacy world. >> So you care about those things, so we don't have to. >> That's right. >> That's the beauty of it. >> Yes. >> Well thank you. (laughter) >> You're welcome. >> I want to understand, you know, some of the things that we've been talking about, every company is a data company, regardless of whether it's telco, it's a retailer, if it's my bank, it's my grocery store, and they have to be able to use data as quickly as possible to make decisions. One of the things they've been talking about here is the monetization of data, the monetization of the network. How do you, how does Dell help, like, a Dish be able to achieve the monetization of their data? >> Well, as Marc was saying before, the enterprise use cases are what we are all kind of betting on for 5G, right? And enterprises expect to have access to data and to telemetry to do whatever use cases they want to execute in their particular industry, so you know, if it's a health care provider, if it's a factory, an agricultural provider that's leveraging this network, they need to get the data from the network, from the devices, they need to correlate it, in order to do things like automatically turn on a watering system at a certain time, right, they need to know the weather around, make sure it's not too windy and you're going to waste a lot of water. All that has data, it's going to leverage data from the network, it's going to leverage data from devices, it's going to leverage data from applications, and that's data that can be monetized. When you have all that data and it's all correlated there's value inherent to it, and you can even go onto a forward-looking state where you can intelligently move workloads around, based on the data.
Based on the clarity of the traffic of the network, where is the right place to put it, and even based on current pricing for things like on-demand instances from cloud providers. So having all that data correlated allows any enterprise to make an intelligent decision about how to move a workload around a network and get the most efficient placing of that workload. >> Marc, Andy mentions things like data and networks and moving data across the networks. You have on your business card, Chief Network Officer, what potentially either keeps you up at night in terror or gets you very excited about the future of your network? What's out there in the frontier and what are those key obstacles that have to be overcome that you work with? >> Yeah, I think we have the network, we have the baseline, but we don't yet have the consumption that is easy by the enterprise, you know, an enterprise likes to say "I have a 4K camera, I connect it to my software." Click, click, right? And that's where we need to be, so we're talking about APIs that are so simple that they become a click, and we engineers, we have a tendency to want to explain, but we should not, it should become a click. You know, and the phone revolution with the apps became those clicks, we have to do the same for the enterprise, for video, for surveillance, for analytics, it has to be clicks. >> While balancing flexibility, and agility of course, because you know the folks who are fans of CLIs, command line interfaces, who hate GUIs, it's because they feel they have the ability to go down to another level, so obviously that's a balancing act. >> But that's our job. >> Yeah. >> Our job is to hide the complexity, but of course there is complexity. It's like in the cloud, a hyperscaler, they manage complex things but it's successful if they hide it. >> (Dave) Yeah. >> It's the same. You know, we have to be a hyperscaler of connectivity but hide it. >> Yeah. >> So that people connect everything, right? >> Well it's Andy's servers, we're all magicians hiding it all. >> Yeah. >> It really is. >> It's like don't worry about it, just know, >> Let us do it. >> Sit down, we will serve you the meal. Don't worry how it's cooked. >> That's right, the enterprises want the outcome. >> (Dave) Yeah. >> They don't want to deal with that bottom layer. But it is tremendously complex and we want to take that on and make it better for the industry. >> That's critical. Marc, I'd love to go back to you, and just, I know that you've been in telco for such a long time, and here we are day three of MWC, the name changed this year from Mobile World Congress, reflecting that mobile isn't the only thing, obviously it was the catalyst, but what are some of the things that you've heard at the event, maybe seen at the event, that give you the confidence that the right players are here to help move Dish Wireless forward, for example. >> You know, this is the first, I've been here for decades, it's the first time, and I'm a Chief Network Officer, first time we don't talk about the network. >> (Andy) Yeah. >> Isn't that surprising? People don't tell me about speed, or latency, they talk about consumption. Apps, you know, video surveillance, or analytics, so I love that, because now we're starting to talk about how we can consume and monetize, but that's the first time. We used to talk about gigabytes and this and that; none of that, not once. >> What does that signify to you, in terms of the evolution?
>> Well you know, we've seen that the demand for the healthcare, for the smart cities, has been here for a decade, proof of concepts for a decade, but the consumption has been behind, and for me this is the whole ecosystem waking up to: we are going to make it easy, so that the consumption can take off. The demand is there, we have to serve it. And the fact that people are starting to say, we hide the complexity, that's our problem, don't even mention it, I love it. >> Yep. Drop the mic. >> (Andy and Marc) Yeah, yeah. >> Andy, last question for you, some of the things we know, Dell has a big and emerging presence in telco, we've had a chance to see the booth, see the cool things you guys are featuring there, Dave did a great tour of it, talk about some of the things you've heard, and maybe even from customers at this event, that demonstrate to you that Dell is going in the right direction with its telco strategy. >> Yeah, I mean personally for me this has been an unbelievable event for Dell, we've had tons and tons of customer meetings of course, and the feedback we're getting is that the things we're bringing to market, whether it's infrastructure blocks, or purpose-built servers that are designed for the telecom network, are what our customers need and have always wanted. We get a lot of wows, right? >> (Lisa) That's nice. >> "Wow, we didn't know Dell was doing this, we had no idea." And the other part of it is that not everybody was sure that we were going to move as fast as we have, so the speed with which we've been able to bring some of these things to market, and part of that was working with Dish, you know, a pioneer, to make sure we were building the right things, and I think a lot of the customers that we talked to really appreciate the fact that we're doing it with the industry, >> (Lisa) Yeah. >> You know, not at the industry, and that comes across in the way they are responding and what they're talking to us about now. >> And that came across in the interview that you just did. Thank you both for joining Dave and me. >> Thank you >> Talking about what Dell and Dish are doing together, the proof is in the pudding, and you did a great job at explaining that, thanks guys, we appreciate it. >> Thank you. >> All right, our pleasure. For our guests and for Dave Nicholson, I'm Lisa Martin, you're watching theCUBE live from MWC 23 day three. We will be back with our next guest, so don't go anywhere. (upbeat music)
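Andy's earlier description of correlated data driving workload placement, based on network conditions and on-demand pricing, can be pictured with a very small scoring function. The sketch below is hypothetical: the candidate sites, weights, and numbers are invented, and a real placement engine would pull these values from live telemetry and cloud pricing rather than hard-coding them.

```python
# Hypothetical sketch: score candidate locations for a workload by combining
# latency, current load, and instance price, then pick the best-scoring site.
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    latency_ms: float      # observed latency to the workload's users
    utilization: float     # 0.0-1.0 current load at the site
    price_per_hour: float  # on-demand price for the required instance

def placement_score(site: Site) -> float:
    # Lower is better: weight latency and cost, and penalize busy sites.
    return site.latency_ms * 0.5 + site.price_per_hour * 10 + site.utilization * 20

candidates = [
    Site("edge-stadium", latency_ms=4, utilization=0.80, price_per_hour=1.20),
    Site("regional-dc", latency_ms=18, utilization=0.40, price_per_hour=0.60),
    Site("public-cloud", latency_ms=45, utilization=0.10, price_per_hour=0.35),
]

best = min(candidates, key=placement_score)
print(f"Place workload at: {best.name}")
```

The weights are the interesting design choice: a latency-critical video workload would weight latency far more heavily, while a batch analytics job would mostly follow price.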
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Nicholson | PERSON | 0.99+ |
Marc Rouanne | PERSON | 0.99+ |
Marc | PERSON | 0.99+ |
Andy Sheahen | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
Andy | PERSON | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Telecorp | ORGANIZATION | 0.99+ |
US | LOCATION | 0.99+ |
Wind River | ORGANIZATION | 0.99+ |
Mark | PERSON | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
30 years | QUANTITY | 0.99+ |
Dish | ORGANIZATION | 0.99+ |
Dell Technologies | ORGANIZATION | 0.99+ |
DISH Wireless | ORGANIZATION | 0.99+ |
second thing | QUANTITY | 0.99+ |
first time | QUANTITY | 0.99+ |
hundreds | QUANTITY | 0.99+ |
first time | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
Both | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
first | QUANTITY | 0.98+ |
One | QUANTITY | 0.98+ |
Dish wireless | ORGANIZATION | 0.98+ |
Lisa | PERSON | 0.98+ |
MWC | EVENT | 0.98+ |
third day | QUANTITY | 0.98+ |
telco | ORGANIZATION | 0.98+ |
Mobile World Congress | EVENT | 0.98+ |
Next Gen Ops | ORGANIZATION | 0.97+ |
TCO | ORGANIZATION | 0.97+ |
Dish Wireless | ORGANIZATION | 0.97+ |
CapX | ORGANIZATION | 0.97+ |
this year | DATE | 0.96+ |
Boost | ORGANIZATION | 0.95+ |
150 dollars a month | QUANTITY | 0.94+ |
OpX | ORGANIZATION | 0.92+ |
Telecom Cloud Core | ORGANIZATION | 0.91+ |
thousands | QUANTITY | 0.9+ |
ROI | ORGANIZATION | 0.9+ |
tons and tons of customer | QUANTITY | 0.86+ |
SiliconANGLE News | Intel Accelerates 5G Network Virtualization
(energetic music) >> Welcome to the SiliconANGLE News update. Mobile World Congress theCUBE coverage is live on the floor for four days. I'm John Furrier, in the studio here. Dave Vellante, Lisa Martin onsite. Intel in the news: Intel accelerates 5G network virtualization with a radio access network boost for Xeon processors. Intel, well known for power and computing, today announced the integration of virtual radio access network acceleration into its latest fourth gen Intel Xeon system on a chip. This move will help network operators gear up their efforts to deliver Cloud native features for next generation 5G core and edge networks. This announcement came today at MWC, formerly known as Mobile World Congress, in Barcelona. Intel is taking the latest step in its mission to virtualize the world's networks, including Core, Open RAN and Edge. Network virtualization is the key capability for communication service providers as they migrate from fixed-function hardware to programmable, software-defined platforms. This provides greater agility and greater cost efficiency. According to Intel, the demand for agile, high-performance, scalable networks requires the adoption of fully virtualized, software-based platforms that run on general-purpose processors. Intel believes that network operators need to accelerate network virtualization to get the most out of these new architectures, and that's where it can make its mark. With Intel vRAN Boost, it delivers twice the capacity gains over its previous generation of silicon within the same power envelope, with 20% power savings that result from the integrated acceleration. In addition, Intel announced new infrastructure power manager for 5G core reference software that's designed to work with vRAN Boost. Intel also showcased its new Intel Converged Edge media platform, designed to deliver multiple video services from a shared multi-tenant architecture. The platform leverages Cloud native scalability to respond to shifting demands. Lastly, Intel announced a range of Agilex 7 Field Programmable Gate Arrays and eASIC N5X structured application-specific integrated circuits designed for cloud, communications, and embedded applications. Intel is targeting power consumption, meaning less energy and more horsepower from its chips, which is going to power the industrial internet edge. That's going to be Cloud native. Big news happening at Mobile World Congress. theCUBE is there. Go to siliconangle.com for all the news, the special report, and the live feed on theCUBE.net. (energetic music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Vellante | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
20% | QUANTITY | 0.99+ |
Barcelona | LOCATION | 0.99+ |
siliconangle.com | OTHER | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
Mobile World Congress | EVENT | 0.98+ |
twice | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
four days | QUANTITY | 0.98+ |
fourth gen | QUANTITY | 0.96+ |
theCUBE.net | OTHER | 0.9+ |
Xeon | COMMERCIAL_ITEM | 0.86+ |
MWC | EVENT | 0.84+ |
vRAN Boost | TITLE | 0.82+ |
Agilex | TITLE | 0.78+ |
Silicon Angle | ORGANIZATION | 0.77+ |
7 Field Programmable | COMMERCIAL_ITEM | 0.76+ |
SiliconANGLE News | ORGANIZATION | 0.76+ |
eASIC | TITLE | 0.75+ |
theCUBE | ORGANIZATION | 0.63+ |
N5X | COMMERCIAL_ITEM | 0.62+ |
5G | QUANTITY | 0.55+ |
Gate Arrays | OTHER | 0.41+ |
Humphreys & Ferron-Jones | Trusted security by design, Compute Engineered for your Hybrid World
(upbeat music) >> Welcome back, everyone, to our Cube special programming on "Securing Compute, Engineered for the Hybrid World." We got Cole Humphreys who's with HPE, global server security product manager, and Mike Ferron-Jones with Intel. He's the product manager for data security technology. Gentlemen, thank you for coming on this special presentation. >> All right, thanks for having us. >> So, securing compute, I mean, compute, everyone wants more compute. You can't have enough compute as far as we're concerned. You know, more bits are flying around the internet. Hardware's mattering more than ever. Performance market's hot right now for next-gen solutions. When you're talking about security, it's at the center of every single conversation. And Gen11 for HPE has been a big-time focus here. So let's get into the story. What's the market for Gen11, Cole, on the security piece? What's going on? How do you see this impacting the marketplace? >> Hey, you know, thanks. I think this is, again, just a moment in time where we're all working towards solving a problem that doesn't stop. You know, because we are looking at data protection. You know, in compute, you're looking out there, there's international impacts, there's federal impacts, there's state-level impacts, and even regulation to protect the data. So, you know, how do we do this stuff in an environment that keeps changing? >> And on the Intel side, you guys are a Tier 1 combination partner, Better Together. HPE has a deep bench on security, Intel, we know what your history is. You guys have a real root of trust with your code, down to the silicon level, and you're on the 4th Gen Xeon here. Mike, take us through Intel's relationship with HPE. Super important. You guys have been working together for many, many years. Data security, chips, HPE, Gen11. Take us through the relationship. What's the update? >> Yeah, thanks, and I mean, HPE and Intel have been partners in delivering technology and delivering security for decades. And when a customer invests in an HPE server, like one of the new Gen11s, they're getting the benefit of the combined investment that these two great companies are putting into product security. On the Intel side, for example, we invest heavily in the way that we develop our products for security from the ground up, and also continue to support them once they're in the market. You know, launching a product isn't the end of our security investment. You know, our Intel Red Teams continue to hammer on Intel products looking for any kind of security vulnerability for a platform that's in the field. As well, we invest heavily in the external research community through our bug bounty programs to harness the entire creativity of the security community to find those vulnerabilities, because that allows us to patch them and make sure our customers are staying safe throughout that platform's deployed lifecycle. You know, in 2021, between Intel's internal red teams and our investments in external research, we found 93% of our own vulnerabilities. Only a small percentage were found by unaffiliated external entities. >> Cole, HPE has a great track record and long history serving customers around security, actually, with the solutions you guys have had. With Gen11, it's more important than ever. Can you share your thoughts on the talent gap out there? People want to move faster, breaches are happening at a higher velocity. They need more protection now than ever before.
Can you share your thoughts on why these breaches are happening, and what you guys are doing, and how you guys see this happening from a customer standpoint? What do you guys fill in with Gen11 as a solution? >> You bet, you know, because when you hear about the relentless pursuit of innovation from our partners, and we in our engineering organizations in India, and Taiwan, and the Americas all collaborating together years in advance, are about delivering solutions that help protect our customers' environments. But what you hear Mike talking about is it's also about keeping 'em safe. Because you look to the market, right? What you see in, at least from our data from 2021, is that breaches are still happening, and a lot of it has to do with the fact that there is just a lack of adequate security staff with the necessary skills to protect the customer's applications and ultimately the workloads. And then that's how these breaches are happening. Because ultimately you need to see some sort of control and visibility of what's going on out there. And what we were talking about earlier is time. From the time you see some incident happen, the blast radius can be tremendous in today's technical, advanced world. And so you have to identify it and then correct it quickly, and that's why this continued innovation and partnership is so important, to help work together to keep up. >> You guys have had a great track record with Intel-based platforms with HPE. Gen11's a really big part of the story. Where do you see that impacting customers? Can you explain the benefits of what's going on with Gen11? What's the key story? What's the most important thing we should be paying attention to here? >> I think there's probably three areas as we look into this generation. And again, this is a point in time, we will continue to evolve. But at this particular point it's about, you know, a fundamental approach to our security enablement, right? Partnering as a Tier 1 OEM with one of the best in the industry, right? We can deliver systems that help protect some of the most critical infrastructure on earth, right? I know of some things that are required to have a non-disclosure because it is some of the most important jobs that you would see out there. And working together with Intel to protect those specific compute workloads, that's a serious deal that protects not only state, and local, and federal interests, but, really, a global one. >> This is a really- >> And then there's another one- Oh sorry. >> No, go ahead. Finish your thought. >> And then there's another one that I would call our uncompromising focus. We work in the industry, we lead and partner with those on, I would say, the good side. And we want to focus on enablement through a specific capability set, let's call it our global operations, and that ability to protect our supply chain and deliver infrastructure that can be trusted into an operating environment. You put all those together and you see very significant and meaningful solutions together. >> The operating benefits are significant. I just want to go back to something you just said before about the joint NDAs and kind of the relationship you kind of unpacked, that to me, you know, I heard you guys say from sand to server, I love that phrase, because, you know, silicon into the server. But this is a combination you guys have with HPE and Intel supply-chain security. I mean, it's not just like you're getting chips and sticking them into a machine.
This is, like, there's an in-depth relationship on the supply chain that has a very intricate piece to it. Can you guys just double down on that and share how that works and why it's important? >> Sure, so why don't I go ahead and start on that one. So, you know, as you mentioned, the supply chain that ultimately results in an end user pulling, you know, a new Gen11 HPE server out of the box, you know, started way, way back. And we, you know, Intel, for our part, you know, invest heavily in making sure that our entire supply chain to deliver all of the Intel components that are inside that HPE platform has been protected and monitored ever since, you know, their inception at any one of our 14,000, you know, Intel vendors that we monitor as part of our supply-chain assurance program. I mean we, you know, Intel, you know, invests heavily in compliance with guidelines from places like NIST and ISO, as well as, you know, doing best practices under things like the Transported Asset Protection Alliance, TAPA. You know, we have been intensely invested in making sure that when a customer gets an Intel processor, or any other Intel silicon product, that it has not been tampered with or altered during its trip through the supply chain. HPE then is able to pick up those components that we deliver, and add onto that their own supply-chain assurance when it comes down to delivering, you know, the final product to the customer. >> Cole, do you want to- >> That's exactly right. Yeah, I feel like that integration point is a really good segue into why we're talking today, right? Because that then comes into a global operations network that is pulling together these servers and able to deploy 'em all over the world. And as part of the Gen11 launch, we have security services that allow 'em to be hardened from our factories to that next stage into that trusted partner ecosystem for system integration, or directly to customers, right? So that ability to have that chain of trust. And it's not only about attestation and knowing what, you know, came from whom, because, obviously, you want to trust and make sure you're getting the parts from Intel to build your technical solutions. But it's also about some of the provisioning we're doing in our global operations where we're putting cryptographic identities and manifests of the server and its components and moving it through that supply chain. So you talked about this common challenge we have of assuring no tampering of that device through the supply chain, and that's why this partnering is so important. We deliver secure solutions, we move them, you're able to see and control that information to verify they've not been tampered with, and you move on to your next stage of this very complicated and necessary chain of trust to build, you know, what some people are calling zero-trust type ecosystems. >> Yeah, it's interesting. You know, a lot goes on under the covers. That's good though, right? You want to have greater security and platform integrity, and if you can abstract away the complexity, that's key. Now one of the things I like about this conversation is that you mentioned this idea of a hardware-root-of-trust set of technologies. Can you guys just quickly touch on that, because that's one of the major benefits we see from this combination of the partnership, is that it's not just one, each party doing something, it's the combination.
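[Editor's note: to make the supply-chain piece above more concrete, here is a minimal Python sketch of the "cryptographic identities and manifests" idea Cole describes, i.e., signing a list of component identities at the factory and re-verifying it on arrival. The field names, the HMAC shared key, and the demo values are illustrative assumptions; a real server attestation flow uses platform certificates and asymmetric signatures, not a shared secret.]

```python
import hashlib
import hmac
import json

# Hypothetical symmetric key standing in for the factory's signing identity.
FACTORY_KEY = b"example-factory-provisioning-key"

def fingerprint(component: dict) -> str:
    """Stable hash of a component's identifying fields (vendor, serial, firmware, ...)."""
    return hashlib.sha256(json.dumps(component, sort_keys=True).encode()).hexdigest()

def sign_manifest(components: list) -> dict:
    """Factory side: record every component and sign the list before the server ships."""
    body = json.dumps(components, sort_keys=True).encode()
    return {
        "components": components,
        "signature": hmac.new(FACTORY_KEY, body, hashlib.sha256).hexdigest(),
    }

def verify_manifest(manifest: dict, observed: list) -> bool:
    """Receiving side: check the signature, then compare the manifest to the parts present."""
    body = json.dumps(manifest["components"], sort_keys=True).encode()
    expected_sig = hmac.new(FACTORY_KEY, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected_sig, manifest["signature"]):
        return False  # the manifest itself was altered in transit
    return {fingerprint(c) for c in manifest["components"]} == {fingerprint(c) for c in observed}

# Demo: a firmware swap on the NIC during shipping fails verification on arrival.
parts = [{"type": "NIC", "vendor": "ExampleCo", "serial": "SN-0001", "firmware": "1.2.3"}]
manifest = sign_manifest(parts)
print(verify_manifest(manifest, parts))                               # True: untouched
print(verify_manifest(manifest, [dict(parts[0], firmware="9.9.9")]))  # False: tampered
```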
But this notion of hardware-root-of-trust technologies, what is that? >> Yeah, well let me, why don't I go ahead and start on that, and then, you know, Cole can take it from there. Because we provide some of the foundational technologies that underlie a root of trust. Now the idea behind a root of trust, of course, is that you want your platform, from the moment that first electron hits it from the power supply, to have a chain of trust in which all of the software, firmware, and BIOS it is loading to bring that platform up into an operational state is trusted. If you have a breach in one of those lower-level code bases, like in the BIOS or in the system firmware, that can be a huge problem. It can undermine every other software-based security protection that you may have implemented up the stack. So, you know, Intel and HPE work together to coordinate our trusted boot and root-of-trust technologies to make sure that when a customer, you know, boots that platform up, it boots up into a known good state so that it is ready for the customer's workload. So on the Intel side, we've got technologies like our Trusted Execution Technology, or Intel Boot Guard, that then feed into the HPE iLO system to help, you know, create that chain of trust that's rooted in silicon, to be able to deliver that known good state to the customer so it's ready for workloads. >> All right, Cole, I got to ask you, with Gen11 HPE platforms that have 4th Gen Intel Xeon, what are the customers really getting? >> So, you know, what a great setup. I'm smiling because it's, like, it has a good answer, because one, this, you know, to be clear, this isn't the first time we've worked on this root-of-trust problem. You know, we have a construct that we call the HPE Silicon Root of Trust. You know, it's an industry-standard construct, it's not a proprietary solution to HPE, but it does follow some differentiated steps that we like to say make a little difference in how it's best implemented. And where you see that is that tight, you know, Intel Trusted Execution exchange. The Intel Trusted Execution exchange is a very important step to assuring that root of trust in that HPE Silicon Root of Trust construct, right? So they're not different things, right? We just have an umbrella that we pull under our ProLiant, because there's iLO, our BIOS team, CPLDs, firmware, but I'll tell you this, Gen11, you know, while keeping all that moving forward would be good enough, we are not holding to that. We are moving forward. Our uncompromising focus, we want to drive more visibility into that Gen11 server, specifically into the PCIe lanes. And now you're going to be able to see, and measure, and make policies to have control and visibility of the PCIe devices, like storage controllers, NICs, direct connect, NVMe drives, et cetera. You know, if you follow the trends of where the industry would like to go, all the components in a server would be able to be seen and attested for full infrastructure integrity, right? So, but this is a meaningful step forward between not only the greatness we do together, but, I would say, a little uncompromising focus on this problem and doing a little bit more to make the Gen11 Intel server just a little better for the challenges of the future. >> Yeah, the Tier 1 partnership is really kind of highlighted there. Great, great point. I got to ask you, Mike, on the 4th Gen Xeon Scalable capabilities, what does it do for the customer with Gen11 now that they have these breaches?
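[Editor's note: the chain of trust Mike describes, where each stage is measured before it is allowed to run, can be illustrated with a toy measured-boot loop. This is a conceptual sketch only; the real mechanism lives in silicon, microcode, and firmware (Intel Boot Guard, Intel Trusted Execution Technology, HPE iLO), and the stage names and "golden" values below are made up for the example.]

```python
import hashlib

def measure(image: bytes) -> str:
    """Measurement is just a cryptographic hash of the next stage's image."""
    return hashlib.sha256(image).hexdigest()

# Known-good measurements, conceptually anchored in the hardware root of trust
# (in reality: fuses, protected flash, and silicon-rooted verification, not a dict).
GOLDEN = {
    "bios":       measure(b"bios-image-v1"),
    "firmware":   measure(b"system-firmware-v1"),
    "bootloader": measure(b"bootloader-v1"),
    "kernel":     measure(b"kernel-v1"),
}

def boot(images: dict) -> None:
    """Hand control up the stack only if every stage measures to its known-good value."""
    for stage in ("bios", "firmware", "bootloader", "kernel"):
        if measure(images[stage]) != GOLDEN[stage]:
            raise RuntimeError(f"{stage}: measurement mismatch, halting boot")
        print(f"{stage}: verified, handing off")
    print("platform is in a known good state, ready for workloads")

clean = {"bios": b"bios-image-v1", "firmware": b"system-firmware-v1",
         "bootloader": b"bootloader-v1", "kernel": b"kernel-v1"}
boot(clean)

# A tampered firmware image breaks the chain before anything above it ever runs.
tampered = dict(clean, firmware=b"system-firmware-v1-with-implant")
try:
    boot(tampered)
except RuntimeError as err:
    print("blocked:", err)
```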
Does it eliminate stuff? What's in it for the customer? What are some of the new things coming out with the Xeon? You're at Gen4, Gen11 for HP, but you guys have new stuff. What does it do for the customer? Does it help eliminate breaches? Are there things that are inherent in the product that HP is jointly working with you on or you were contributing in to the relationship that we should know about? What's new? >> Yeah, well there's so much great new stuff in our new 4th Gen Xeon Scalable processor. This is the one that was codenamed Sapphire Rapids. I mean, you know, more cores, more performance, AI acceleration, crypto acceleration, it's all in there. But one of my favorite security features, and it is one that's called Intel Control-Flow Enforcement Technology, or Intel CET. And why I like CET is because I find the attack that it is designed to mitigate is just evil genius. This type of attack, which is called a return, a jump, or a call-oriented programming attack, is designed to not bring a whole bunch of new identifiable malware into the system, you know, which could be picked up by security software. What it is designed to do is to look for little bits of existing, little bits of existing code already on the server. So if you're running, say, a web server, it's looking for little bits of that web-server code that it can then execute in a particular order to achieve a malicious outcome, something like open a command prompt, or escalate its privileges. Now in order to get those little code bits to execute in an order, it has a control mechanism. And there are different, each of the different types of attacks uses a different control mechanism. But what CET does is it gets in there and it disrupts those control mechanisms, uses hardware to prevent those particular techniques from being able to dig in and take effect. So CET can, you know, disrupt it and make sure that software behaves safely and as the programmer intended, rather than picking off these little arbitrary bits in one of these return, or jump, or call-oriented programming attacks. Now it is a technology that is included in every single one of the new 4th Gen Xeon Scalable processors. And so it's going to be an inherent characteristic the customers can benefit from when they buy a new Gen11 HPE server. >> Cole, more goodness from Intel there impacting Gen11 on the HPE side. What's your reaction to that? >> I mean, I feel like this is exactly why you do business with the big Tier 1 partners, because you can put, you know, trust in from where it comes from, through the global operations, literally, having it hardened from the factory it's finished in, moving into your operating environment, and then now protecting against attacks in your web hosting services, right? I mean, this is great. I mean, you'll always have an attack on data, you know, as you're seeing in the data. But the more contained, the more information, and the more control and trust we can give to our customers, it's going to make their job a little easier in protecting whatever job they're trying to do. >> Yeah, and enterprise customers, as you know, they're always trying to keep up to date on the skills and battle the threats. Having that built in under the covers is a real good way to kind of help them free up their time, and also protect them is really killer. This is a big, big part of the Gen11 story here. Securing the data, securing compute, that's the topic here for this special cube conversation, engineering for a hybrid world. 
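[Editor's note: Mike's description of return-, jump-, and call-oriented programming attacks and how Intel CET disrupts them can be made concrete with a toy shadow-stack model. Real CET is enforced by the processor and also includes indirect branch tracking, which is not modeled here; this Python sketch only shows the core idea of keeping a protected second copy of return addresses and faulting on a mismatch, with all addresses made up for the example.]

```python
class ControlFlowViolation(Exception):
    pass

class ToyShadowStack:
    """Toy model of CET's shadow stack: every call pushes to both stacks, returns must agree."""

    def __init__(self):
        self.program_stack = []  # attacker-reachable via a classic buffer overflow
        self.shadow_stack = []   # hardware-protected copy, not writable by normal stores

    def call(self, return_address: int) -> None:
        self.program_stack.append(return_address)
        self.shadow_stack.append(return_address)

    def ret(self) -> int:
        addr = self.program_stack.pop()
        if addr != self.shadow_stack.pop():
            # The on-stack return address was overwritten, e.g. to point at a ROP
            # gadget; the hardware raises a fault instead of jumping there.
            raise ControlFlowViolation(f"return to {hex(addr)} blocked")
        return addr

cpu = ToyShadowStack()
cpu.call(0x401000)                 # legitimate call site
cpu.program_stack[-1] = 0x404DEAD  # simulate stack corruption redirecting the return
try:
    cpu.ret()
except ControlFlowViolation as err:
    print("CET-style check:", err)
```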
Cole, I'll give you the final word. What should people pay attention to, Gen11 from HPE, bottom line, what's the story? >> You know, it's, you know, it's not the first time, it's not the last time, but it's our fundamental security approach to just helping customers through their digital transformation, defending with an uncompromising focus to help protect our infrastructure in these technical solutions. >> Cole Humphreys is the global server security product manager at HPE. He's got his finger on the pulse, keeping everyone secure on the platform integrity there. Mike Ferron-Jones is the Intel product manager for data security technology. Gentlemen, thank you for this great conversation, getting into the weeds a little bit with Gen11, which is great. Love the hardware root-of-trust technologies, Better Together. Congratulations on Gen11 and your 4th Gen Xeon Scalable. Thanks for coming on. >> All right, thanks, John. >> Thank you very much, guys, appreciate it. Okay, you're watching theCUBE's special presentation, "Securing Compute, Engineered for the Hybrid World." I'm John Furrier, your host. Thanks for watching. (upbeat music)
Breaking Analysis: ChatGPT Won't Give OpenAI First Mover Advantage
>> From theCUBE Studios in Palo Alto in Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> OpenAI The company, and ChatGPT have taken the world by storm. Microsoft reportedly is investing an additional 10 billion dollars into the company. But in our view, while the hype around ChatGPT is justified, we don't believe OpenAI will lock up the market with its first mover advantage. Rather, we believe that success in this market will be directly proportional to the quality and quantity of data that a technology company has at its disposal, and the compute power that it could deploy to run its system. Hello and welcome to this week's Wikibon CUBE insights, powered by ETR. In this Breaking Analysis, we unpack the excitement around ChatGPT, and debate the premise that the company's early entry into the space may not confer winner take all advantage to OpenAI. And to do so, we welcome CUBE collaborator, alum, Sarbjeet Johal, (chuckles) and John Furrier, co-host of the Cube. Great to see you Sarbjeet, John. Really appreciate you guys coming to the program. >> Great to be on. >> Okay, so what is ChatGPT? Well, actually we asked ChatGPT, what is ChatGPT? So here's what it said. ChatGPT is a state-of-the-art language model developed by OpenAI that can generate human-like text. It could be fine tuned for a variety of language tasks, such as conversation, summarization, and language translation. So I asked it, give it to me in 50 words or less. How did it do? Anything to add? >> Yeah, think it did good. It's large language model, like previous models, but it started applying the transformers sort of mechanism to focus on what prompt you have given it to itself. And then also the what answer it gave you in the first, sort of, one sentence or two sentences, and then introspect on itself, like what I have already said to you. And so just work on that. So it it's self sort of focus if you will. It does, the transformers help the large language models to do that. >> So to your point, it's a large language model, and GPT stands for generative pre-trained transformer. >> And if you put the definition back up there again, if you put it back up on the screen, let's see it back up. Okay, it actually missed the large, word large. So one of the problems with ChatGPT, it's not always accurate. It's actually a large language model, and it says state of the art language model. And if you look at Google, Google has dominated AI for many times and they're well known as being the best at this. And apparently Google has their own large language model, LLM, in play and have been holding it back to release because of backlash on the accuracy. Like just in that example you showed is a great point. They got almost right, but they missed the key word. >> You know what's funny about that John, is I had previously asked it in my prompt to give me it in less than a hundred words, and it was too long, I said I was too long for Breaking Analysis, and there it went into the fact that it's a large language model. So it largely, it gave me a really different answer the, for both times. So, but it's still pretty amazing for those of you who haven't played with it yet. And one of the best examples that I saw was Ben Charrington from This Week In ML AI podcast. And I stumbled on this thanks to Brian Gracely, who was listening to one of his Cloudcasts. 
Basically what Ben did is he took, he prompted ChatGPT to interview ChatGPT, and he simply gave the system the prompts, and then he ran the questions and answers into this avatar builder and sped it up 2X so it didn't sound like a machine. And voila, it was amazing. So John is ChatGPT going to take over as a cube host? >> Well, I was thinking, we get the questions in advance sometimes from PR people. We should actually just plug it in ChatGPT, add it to our notes, and saying, "Is this good enough for you? Let's ask the real question." So I think, you know, I think there's a lot of heavy lifting that gets done. I think the ChatGPT is a phenomenal revolution. I think it highlights the use case. Like that example we showed earlier. It gets most of it right. So it's directionally correct and it feels like it's an answer, but it's not a hundred percent accurate. And I think that's where people are seeing value in it. Writing marketing, copy, brainstorming, guest list, gift list for somebody. Write me some lyrics to a song. Give me a thesis about healthcare policy in the United States. It'll do a bang up job, and then you got to go in and you can massage it. So we're going to do three quarters of the work. That's why plagiarism and schools are kind of freaking out. And that's why Microsoft put 10 billion in, because why wouldn't this be a feature of Word, or the OS to help it do stuff on behalf of the user. So linguistically it's a beautiful thing. You can input a string and get a good answer. It's not a search result. >> And we're going to get your take on on Microsoft and, but it kind of levels the playing- but ChatGPT writes better than I do, Sarbjeet, and I know you have some good examples too. You mentioned the Reed Hastings example. >> Yeah, I was listening to Reed Hastings fireside chat with ChatGPT, and the answers were coming as sort of voice, in the voice format. And it was amazing what, he was having very sort of philosophy kind of talk with the ChatGPT, the longer sentences, like he was going on, like, just like we are talking, he was talking for like almost two minutes and then ChatGPT was answering. It was not one sentence question, and then a lot of answers from ChatGPT and yeah, you're right. I, this is our ability. I've been thinking deep about this since yesterday, we talked about, like, we want to do this segment. The data is fed into the data model. It can be the current data as well, but I think that, like, models like ChatGPT, other companies will have those too. They can, they're democratizing the intelligence, but they're not creating intelligence yet, definitely yet I can say that. They will give you all the finite answers. Like, okay, how do you do this for loop in Java, versus, you know, C sharp, and as a programmer you can do that, in, but they can't tell you that, how to write a new algorithm or write a new search algorithm for you. They cannot create a secretive code for you to- >> Not yet. >> Have competitive advantage. >> Not yet, not yet. >> but you- >> Can Google do that today? >> No one really can. The reasoning side of the data is, we talked about at our Supercloud event, with Zhamak Dehghani who's was CEO of, now of Nextdata. This next wave of data intelligence is going to come from entrepreneurs that are probably cross discipline, computer science and some other discipline. But they're going to be new things, for example, data, metadata, and data. It's hard to do reasoning like a human being, so that needs more data to train itself. 
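[Editor's note: the "ChatGPT interviews ChatGPT" trick described above was done through the chat interface, but the same prompt chaining can be scripted against the model's API, which is also the business-model angle that comes up later in the conversation. A rough sketch follows; it assumes the pre-1.0 `openai` Python client and the `gpt-3.5-turbo` chat model, both of which have changed since this conversation aired, so treat the details as illustrative rather than definitive.]

```python
# pip install "openai<1.0"  (the legacy client interface shown here)
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def ask(prompt: str) -> str:
    """Send one user prompt to the chat model and return the text of its reply."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response["choices"][0]["message"]["content"]

# Chain the model against itself: have it write interview questions, then answer them.
questions = ask("Write three short interview questions about large language models, one per line.")
for question in questions.splitlines():
    if question.strip():
        print("Q:", question.strip())
        print("A:", ask(question), "\n")
```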
So I think the first gen of this training module for the large language model they have is a corpus of text. Lot of that's why blog posts are, but the facts are wrong and sometimes out of context, because that contextual reasoning takes time, it takes intelligence. So machines need to become intelligent, and so therefore they need to be trained. So you're going to start to see, I think, a lot of acceleration on training the data sets. And again, it's only as good as the data you can get. And again, proprietary data sets will be a huge winner. Anyone who's got a large corpus of content, proprietary content like theCUBE or SiliconANGLE as a publisher will benefit from this. Large FinTech companies, anyone with large proprietary data will probably be a big winner on this generative AI wave, because it just, it will eat that up, and turn that back into something better. So I think there's going to be a lot of interesting things to look at here. And certainly productivity's going to be off the charts for vanilla and the internet is going to get swarmed with vanilla content. So if you're in the content business, and you're an original content producer of any kind, you're going to be not vanilla, so you're going to be better. So I think there's so much at play Dave (indistinct). >> I think the playing field has been risen, so we- >> Risen and leveled? >> Yeah, and leveled to certain extent. So it's now like that few people as consumers, as consumers of AI, we will have a advantage and others cannot have that advantage. So it will be democratized. That's, I'm sure about that. But if you take the example of calculator, when the calculator came in, and a lot of people are, "Oh, people can't do math anymore because calculator is there." right? So it's a similar sort of moment, just like a calculator for the next level. But, again- >> I see it more like open source, Sarbjeet, because like if you think about what ChatGPT's doing, you do a query and it comes from somewhere the value of a post from ChatGPT is just a reuse of AI. The original content accent will be come from a human. So if I lay out a paragraph from ChatGPT, did some heavy lifting on some facts, I check the facts, save me about maybe- >> Yeah, it's productive. >> An hour writing, and then I write a killer two, three sentences of, like, sharp original thinking or critical analysis. I then took that body of work, open source content, and then laid something on top of it. >> And Sarbjeet's example is a good one, because like if the calculator kids don't do math as well anymore, the slide rule, remember we had slide rules as kids, remember we first started using Waze, you know, we were this minority and you had an advantage over other drivers. Now Waze is like, you know, social traffic, you know, navigation, everybody had, you know- >> All the back roads are crowded. >> They're car crowded. (group laughs) Exactly. All right, let's, let's move on. What about this notion that futurist Ray Amara put forth and really Amara's Law that we're showing here, it's, the law is we, you know, "We tend to overestimate the effect of technology in the short run and underestimate it in the long run." Is that the case, do you think, with ChatGPT? What do you think Sarbjeet? >> I think that's true actually. There's a lot of, >> We don't debate this. >> There's a lot of awe, like when people see the results from ChatGPT, they say what, what the heck? Like, it can do this? 
But then if you use it more and more and more, and I ask the set of similar question, not the same question, and it gives you like same answer. It's like reading from the same bucket of text in, the interior read (indistinct) where the ChatGPT, you will see that in some couple of segments. It's very, it sounds so boring that the ChatGPT is coming out the same two sentences every time. So it is kind of good, but it's not as good as people think it is right now. But we will have, go through this, you know, hype sort of cycle and get realistic with it. And then in the long term, I think it's a great thing in the short term, it's not something which will (indistinct) >> What's your counter point? You're saying it's not. >> I, no I think the question was, it's hyped up in the short term and not it's underestimated long term. That's what I think what he said, quote. >> Yes, yeah. That's what he said. >> Okay, I think that's wrong with this, because this is a unique, ChatGPT is a unique kind of impact and it's very generational. People have been comparing it, I have been comparing to the internet, like the web, web browser Mosaic and Netscape, right, Navigator. I mean, I clearly still remember the days seeing Navigator for the first time, wow. And there weren't not many sites you could go to, everyone typed in, you know, cars.com, you know. >> That (indistinct) wasn't that overestimated, the overhyped at the beginning and underestimated. >> No, it was, it was underestimated long run, people thought. >> But that Amara's law. >> That's what is. >> No, they said overestimated? >> Overestimated near term underestimated- overhyped near term, underestimated long term. I got, right I mean? >> Well, I, yeah okay, so I would then agree, okay then- >> We were off the charts about the internet in the early days, and it actually exceeded our expectations. >> Well there were people who were, like, poo-pooing it early on. So when the browser came out, people were like, "Oh, the web's a toy for kids." I mean, in 1995 the web was a joke, right? So '96, you had online populations growing, so you had structural changes going on around the browser, internet population. And then that replaced other things, direct mail, other business activities that were once analog then went to the web, kind of read only as you, as we always talk about. So I think that's a moment where the hype long term, the smart money, and the smart industry experts all get the long term. And in this case, there's more poo-pooing in the short term. "Ah, it's not a big deal, it's just AI." I've heard many people poo-pooing ChatGPT, and a lot of smart people saying, "No this is next gen, this is different and it's only going to get better." So I think people are estimating a big long game on this one. >> So you're saying it's bifurcated. There's those who say- >> Yes. >> Okay, all right, let's get to the heart of the premise, and possibly the debate for today's episode. Will OpenAI's early entry into the market confer sustainable competitive advantage for the company. And if you look at the history of tech, the technology industry, it's kind of littered with first mover failures. Altair, IBM, Tandy, Commodore, they and Apple even, they were really early in the PC game. They took a backseat to Dell who came in the scene years later with a better business model. Netscape, you were just talking about, was all the rage in Silicon Valley, with the first browser, drove up all the housing prices out here. 
AltaVista was the first search engine to really, you know, index full text. >> Owned by Dell, I mean DEC. >> Owned by Digital. >> Yeah, Digital Equipment >> Compaq bought it. And of course as an aside, Digital, they wanted to showcase their hardware, right? Their super computer stuff. And then so Friendster and MySpace, they came before Facebook. The iPhone certainly wasn't the first mobile device. So lots of failed examples, but there are some recent successes like AWS and cloud. >> You could say smartphone. So I mean. >> Well I know, and you can, we can parse this so we'll debate it. Now Twitter, you could argue, had first mover advantage. You kind of gave me that one John. Bitcoin and crypto clearly had first mover advantage, and sustaining that. Guys, will OpenAI make it to the list on the right with ChatGPT, what do you think? >> I think categorically as a company, it probably won't, but as a category, I think what they're doing will, so OpenAI as a company, they get funding, there's power dynamics involved. Microsoft put a billion dollars in early on, then they just pony it up. Now they're reporting 10 billion more. So, like, if the browsers, Microsoft had competitive advantage over Netscape, and used monopoly power, and convicted by the Department of Justice for killing Netscape with their monopoly, Netscape should have had won that battle, but Microsoft killed it. In this case, Microsoft's not killing it, they're buying into it. So I think the embrace extend Microsoft power here makes OpenAI vulnerable for that one vendor solution. So the AI as a company might not make the list, but the category of what this is, large language model AI, is probably will be on the right hand side. >> Okay, we're going to come back to the government intervention and maybe do some comparisons, but what are your thoughts on this premise here? That, it will basically set- put forth the premise that it, that ChatGPT, its early entry into the market will not confer competitive advantage to >> For OpenAI. >> To Open- Yeah, do you agree with that? >> I agree with that actually. It, because Google has been at it, and they have been holding back, as John said because of the scrutiny from the Fed, right, so- >> And privacy too. >> And the privacy and the accuracy as well. But I think Sam Altman and the company on those guys, right? They have put this in a hasty way out there, you know, because it makes mistakes, and there are a lot of questions around the, sort of, where the content is coming from. You saw that as your example, it just stole the content, and without your permission, you know? >> Yeah. So as quick this aside- >> And it codes on people's behalf and the, those codes are wrong. So there's a lot of, sort of, false information it's putting out there. So it's a very vulnerable thing to do what Sam Altman- >> So even though it'll get better, others will compete. >> So look, just side note, a term which Reid Hoffman used a little bit. Like he said, it's experimental launch, like, you know, it's- >> It's pretty damn good. >> It is clever because according to Sam- >> It's more than clever. It's good. >> It's awesome, if you haven't used it. I mean you write- you read what it writes and you go, "This thing writes so well, it writes so much better than you." >> The human emotion drives that too. I think that's a big thing. But- >> I Want to add one more- >> Make your last point. >> Last one. Okay. So, but he's still holding back. He's conducting quite a few interviews. 
If you want to get the gist of it, there's an interview with StrictlyVC interview from yesterday with Sam Altman. Listen to that one it's an eye opening what they want- where they want to take it. But my last one I want to make it on this point is that Satya Nadella yesterday did an interview with Wall Street Journal. I think he was doing- >> You were not impressed. >> I was not impressed because he was pushing it too much. So Sam Altman's holding back so there's less backlash. >> Got 10 billion reasons to push. >> I think he's almost- >> Microsoft just laid off 10000 people. Hey ChatGPT, find me a job. You know like. (group laughs) >> He's overselling it to an extent that I think it will backfire on Microsoft. And he's over promising a lot of stuff right now, I think. I don't know why he's very jittery about all these things. And he did the same thing during Ignite as well. So he said, "Oh, this AI will write code for you and this and that." Like you called him out- >> The hyperbole- >> During your- >> from Satya Nadella, he's got a lot of hyperbole. (group talks over each other) >> All right, Let's, go ahead. >> Well, can I weigh in on the whole- >> Yeah, sure. >> Microsoft thing on whether OpenAI, here's the take on this. I think it's more like the browser moment to me, because I could relate to that experience with ChatG, personally, emotionally, when I saw that, and I remember vividly- >> You mean that aha moment (indistinct). >> Like this is obviously the future. Anything else in the old world is dead, website's going to be everywhere. It was just instant dot connection for me. And a lot of other smart people who saw this. Lot of people by the way, didn't see it. Someone said the web's a toy. At the company I was worked for at the time, Hewlett Packard, they like, they could have been in, they had invented HTML, and so like all this stuff was, like, they just passed, the web was just being passed over. But at that time, the browser got better, more websites came on board. So the structural advantage there was online web usage was growing, online user population. So that was growing exponentially with the rise of the Netscape browser. So OpenAI could stay on the right side of your list as durable, if they leverage the category that they're creating, can get the scale. And if they can get the scale, just like Twitter, that failed so many times that they still hung around. So it was a product that was always successful, right? So I mean, it should have- >> You're right, it was terrible, we kept coming back. >> The fail whale, but it still grew. So OpenAI has that moment. They could do it if Microsoft doesn't meddle too much with too much power as a vendor. They could be the Netscape Navigator, without the anti-competitive behavior of somebody else. So to me, they have the pole position. So they have an opportunity. So if not, if they don't execute, then there's opportunity. There's not a lot of barriers to entry, vis-a-vis say the CapEx of say a cloud company like AWS. You can't replicate that, Many have tried, but I think you can replicate OpenAI. >> And we're going to talk about that. Okay, so real quick, I want to bring in some ETR data. This isn't an ETR heavy segment, only because this so new, you know, they haven't coverage yet, but they do cover AI. So basically what we're seeing here is a slide on the vertical axis's net score, which is a measure of spending momentum, and in the horizontal axis's is presence in the dataset. Think of it as, like, market presence. 
And in the insert right there, you can see how the dots are plotted, the two columns. And so, but the key point here that we want to make, there's a bunch of companies on the left, is he like, you know, DataRobot and C3 AI and some others, but the big whales, Google, AWS, Microsoft, are really dominant in this market. So that's really the key takeaway that, can we- >> I notice IBM is way low. >> Yeah, IBM's low, and actually bring that back up and you, but then you see Oracle who actually is injecting. So I guess that's the other point is, you're not necessarily going to go buy AI, and you know, build your own AI, you're going to, it's going to be there and, it, Salesforce is going to embed it into its platform, the SaaS companies, and you're going to purchase AI. You're not necessarily going to build it. But some companies obviously are. >> I mean to quote IBM's general manager Rob Thomas, "You can't have AI with IA." information architecture and David Flynn- >> You can't Have AI without IA >> without, you can't have AI without IA. You can't have, if you have an Information Architecture, you then can power AI. Yesterday David Flynn, with Hammersmith, was on our Supercloud. He was pointing out that the relationship of storage, where you store things, also impacts the data and stressablity, and Zhamak from Nextdata, she was pointing out that same thing. So the data problem factors into all this too, Dave. >> So you got the big cloud and internet giants, they're all poised to go after this opportunity. Microsoft is investing up to 10 billion. Google's code red, which was, you know, the headline in the New York Times. Of course Apple is there and several alternatives in the market today. Guys like Chinchilla, Bloom, and there's a company Jasper and several others, and then Lena Khan looms large and the government's around the world, EU, US, China, all taking notice before the market really is coalesced around a single player. You know, John, you mentioned Netscape, they kind of really, the US government was way late to that game. It was kind of game over. And Netscape, I remember Barksdale was like, "Eh, we're going to be selling software in the enterprise anyway." and then, pshew, the company just dissipated. So, but it looks like the US government, especially with Lena Khan, they're changing the definition of antitrust and what the cause is to go after people, and they're really much more aggressive. It's only what, two years ago that (indistinct). >> Yeah, the problem I have with the federal oversight is this, they're always like late to the game, and they're slow to catch up. So in other words, they're working on stuff that should have been solved a year and a half, two years ago around some of the social networks hiding behind some of the rules around open web back in the days, and I think- >> But they're like 15 years late to that. >> Yeah, and now they got this new thing on top of it. So like, I just worry about them getting their fingers. >> But there's only two years, you know, OpenAI. >> No, but the thing (indistinct). >> No, they're still fighting other battles. But the problem with government is that they're going to label Big Tech as like a evil thing like Pharma, it's like smoke- >> You know Lena Khan wants to kill Big Tech, there's no question. >> So I think Big Tech is getting a very seriously bad rap. And I think anything that the government does that shades darkness on tech, is politically motivated in most cases. 
You can almost look at everything, and my 80 20 rule is in play here. 80% of the government activity around tech is bullshit, it's politically motivated, and the 20% is probably relevant, but off the mark and not organized. >> Well market forces have always been the determining factor of success. The governments, you know, have been pretty much failed. I mean you look at IBM's antitrust, that, what did that do? The market ultimately beat them. You look at Microsoft back in the day, right? Windows 95 was peaking, the government came in. But you know, like you said, they missed the web, right, and >> so they were hanging on- >> There's nobody in government >> to Windows. >> that actually knows- >> And so, you, I think you're right. It's market forces that are going to determine this. But Sarbjeet, what do you make of Microsoft's big bet here, you weren't impressed with with Nadella. How do you think, where are they going to apply it? Is this going to be a Hail Mary for Bing, or is it going to be applied elsewhere? What do you think. >> They are saying that they will, sort of, weave this into their products, office products, productivity and also to write code as well, developer productivity as well. That's a big play for them. But coming back to your antitrust sort of comments, right? I believe the, your comment was like, oh, fed was late 10 years or 15 years earlier, but now they're two years. But things are moving very fast now as compared to they used to move. >> So two years is like 10 Years. >> Yeah, two years is like 10 years. Just want to make that point. (Dave laughs) This thing is going like wildfire. Any new tech which comes in that I think they're going against distribution channels. Lina Khan has commented time and again that the marketplace model is that she wants to have some grip on. Cloud marketplaces are a kind of monopolistic kind of way. >> I don't, I don't see this, I don't see a Chat AI. >> You told me it's not Bing, you had an interesting comment. >> No, no. First of all, this is great from Microsoft. If you're Microsoft- >> Why? >> Because Microsoft doesn't have the AI chops that Google has, right? Google is got so much core competency on how they run their search, how they run their backends, their cloud, even though they don't get a lot of cloud market share in the enterprise, they got a kick ass cloud cause they needed one. >> Totally. >> They've invented SRE. I mean Google's development and engineering chops are off the scales, right? Amazon's got some good chops, but Google's got like 10 times more chops than AWS in my opinion. Cloud's a whole different story. Microsoft gets AI, they get a playbook, they get a product they can render into, the not only Bing, productivity software, helping people write papers, PowerPoint, also don't forget the cloud AI can super help. We had this conversation on our Supercloud event, where AI's going to do a lot of the heavy lifting around understanding observability and managing service meshes, to managing microservices, to turning on and off applications, and or maybe writing code in real time. So there's a plethora of use cases for Microsoft to deploy this. combined with their R and D budgets, they can then turbocharge more research, build on it. So I think this gives them a car in the game, Google may have pole position with AI, but this puts Microsoft right in the game, and they already have a lot of stuff going on. But this just, I mean everything gets lifted up. Security, cloud, productivity suite, everything. 
>> What's under the hood at Google, and why aren't they talking about it? I mean they got to be freaked out about this. No? Or do they have kind of a magic bullet? >> I think they have the, they have the chops definitely. Magic bullet, I don't know where they are, as compared to the ChatGPT 3 or 4 models. Like they, but if you look at the online sort of activity and the videos put out there from Google folks, Google technology folks, that's account you should look at if you are looking there, they have put all these distinctions what ChatGPT 3 has used, they have been talking about for a while as well. So it's not like it's a secret thing that you cannot replicate. As you said earlier, like in the beginning of this segment, that anybody who has more data and the capacity to process that data, which Google has both, I think they will win this. >> Obviously living in Palo Alto where the Google founders are, and Google's headquarters next town over we have- >> We're so close to them. We have inside information on some of the thinking and that hasn't been reported by any outlet yet. And that is, is that, from what I'm hearing from my sources, is Google has it, they don't want to release it for many reasons. One is it might screw up their search monopoly, one, two, they're worried about the accuracy, 'cause Google will get sued. 'Cause a lot of people are jamming on this ChatGPT as, "Oh it does everything for me." when it's clearly not a hundred percent accurate all the time. >> So Lina Kahn is looming, and so Google's like be careful. >> Yeah so Google's just like, this is the third, could be a third rail. >> But the first thing you said is a concern. >> Well no. >> The disruptive (indistinct) >> What they will do is do a Waymo kind of thing, where they spin out a separate company. >> They're doing that. >> The discussions happening, they're going to spin out the separate company and put it over there, and saying, "This is AI, got search over there, don't touch that search, 'cause that's where all the revenue is." (chuckles) >> So, okay, so that's how they deal with the Clay Christensen dilemma. What's the business model here? I mean it's not advertising, right? Is it to charge you for a query? What, how do you make money at this? >> It's a good question, I mean my thinking is, first of all, it's cool to type stuff in and see a paper get written, or write a blog post, or gimme a marketing slogan for this or that or write some code. I think the API side of the business will be critical. And I think Howie Xu, I know you're going to reference some of his comments yesterday on Supercloud, I think this brings a whole 'nother user interface into technology consumption. I think the business model, not yet clear, but it will probably be some sort of either API and developer environment or just a straight up free consumer product, with some sort of freemium backend thing for business. >> And he was saying too, it's natural language is the way in which you're going to interact with these systems. >> I think it's APIs, it's APIs, APIs, APIs, because these people who are cooking up these models, and it takes a lot of compute power to train these and to, for inference as well. Somebody did the analysis on the how many cents a Google search costs to Google, and how many cents the ChatGPT query costs. It's, you know, 100x or something on that. You can take a look at that. >> A 100x on which side? >> You're saying two orders of magnitude more expensive for ChatGPT >> Much more, yeah. >> Than for Google. 
>> It's very expensive. >> So Google's got the data, they got the infrastructure and they got, you're saying they got the cost (indistinct) >> No actually it's a simple query as well, but they are trying to put together the answers, and they're going through a lot more data versus index data already, you know. >> Let me clarify, you're saying that Google's version of ChatGPT is more efficient? >> No, I'm, I'm saying Google search results. >> Ah, search results. >> What are used to today, but cheaper. >> But that, does that, is that going to confer advantage to Google's large language (indistinct)? >> It will, because there were deep science (indistinct). >> Google, I don't think Google search is doing a large language model on their search, it's keyword search. You know, what's the weather in Santa Cruz? Or how, what's the weather going to be? Or you know, how do I find this? Now they have done a smart job of doing some things with those queries, auto complete, re direct navigation. But it's, it's not entity. It's not like, "Hey, what's Dave Vellante thinking this week in Breaking Analysis?" ChatGPT might get that, because it'll get your Breaking Analysis, it'll synthesize it. There'll be some, maybe some clips. It'll be like, you know, I mean. >> Well I got to tell you, I asked ChatGPT to, like, I said, I'm going to enter a transcript of a discussion I had with Nir Zuk, the CTO of Palo Alto Networks, And I want you to write a 750 word blog. I never input the transcript. It wrote a 750 word blog. It attributed quotes to him, and it just pulled a bunch of stuff that, and said, okay, here it is. It talked about Supercloud, it defined Supercloud. >> It's made, it makes you- >> Wow, But it was a big lie. It was fraudulent, but still, blew me away. >> Again, vanilla content and non accurate content. So we are going to see a surge of misinformation on steroids, but I call it the vanilla content. Wow, that's just so boring, (indistinct). >> There's so many dangers. >> Make your point, cause we got to, almost out of time. >> Okay, so the consumption, like how do you consume this thing. As humans, we are consuming it and we are, like, getting a nicely, like, surprisingly shocked, you know, wow, that's cool. It's going to increase productivity and all that stuff, right? And on the danger side as well, the bad actors can take hold of it and create fake content and we have the fake sort of intelligence, if you go out there. So that's one thing. The second thing is, we are as humans are consuming this as language. Like we read that, we listen to it, whatever format we consume that is, but the ultimate usage of that will be when the machines can take that output from likes of ChatGPT, and do actions based on that. The robots can work, the robot can paint your house, we were talking about, right? Right now we can't do that. >> Data apps. >> So the data has to be ingested by the machines. It has to be digestible by the machines. And the machines cannot digest unorganized data right now, we will get better on the ingestion side as well. So we are getting better. >> Data, reasoning, insights, and action. >> I like that mall, paint my house. >> So, okay- >> By the way, that means drones that'll come in. Spray painting your house. >> Hey, it wasn't too long ago that robots couldn't climb stairs, as I like to point out. Okay, and of course it's no surprise the venture capitalists are lining up to eat at the trough, as I'd like to say. 
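[Editor's note: the "100x" cost comparison a few exchanges back is easiest to see as back-of-the-envelope arithmetic. The per-query figures below are illustrative placeholders, not measured costs; the only point is what two orders of magnitude does to the economics at search-scale query volume.]

```python
# Hypothetical per-query costs in dollars; only the ~100x ratio matters for the argument.
KEYWORD_SEARCH_COST = 0.0003
LLM_QUERY_COST = 0.03

DAILY_QUERIES = 500_000_000  # hypothetical daily volume for a large search property

ratio = LLM_QUERY_COST / KEYWORD_SEARCH_COST
print(f"LLM query is ~{ratio:.0f}x the cost of a keyword search")
print(f"Keyword search: ${DAILY_QUERIES * KEYWORD_SEARCH_COST:,.0f} per day")
print(f"LLM answers:    ${DAILY_QUERIES * LLM_QUERY_COST:,.0f} per day")
```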
Let's hear, you'd referenced this earlier, John, let's hear what AI expert Howie Xu said at the Supercloud event, about what it takes to clone ChatGPT. Please, play the clip. >> So one of the VCs actually asked me the other day, right? "Hey, how much money do I need to spend, invest to get a, you know, another shot to the openAI sort of the level." You know, I did a (indistinct) >> Line up. >> A hundred million dollar is the order of magnitude that I came up with, right? You know, not a billion, not 10 million, right? So a hundred- >> Guys a hundred million dollars, that's an astoundingly low figure. What do you make of it? >> I was in an interview with, I was interviewing, I think he said hundred million or so, but in the hundreds of millions, not a billion right? >> You were trying to get him up, you were like "Hundreds of millions." >> Well I think, I- >> He's like, eh, not 10, not a billion. >> Well first of all, Howie Xu's an expert machine learning. He's at Zscaler, he's a machine learning AI guy. But he comes from VMware, he's got his technology pedigrees really off the chart. Great friend of theCUBE and kind of like a CUBE analyst for us. And he's smart. He's right. I think the barriers to entry from a dollar standpoint are lower than say the CapEx required to compete with AWS. Clearly, the CapEx spending to build all the tech for the run a cloud. >> And you don't need a huge sales force. >> And in some case apps too, it's the same thing. But I think it's not that hard. >> But am I right about that? You don't need a huge sales force either. It's, what, you know >> If the product's good, it will sell, this is a new era. The better mouse trap will win. This is the new economics in software, right? So- >> Because you look at the amount of money Lacework, and Snyk, Snowflake, Databrooks. Look at the amount of money they've raised. I mean it's like a billion dollars before they get to IPO or more. 'Cause they need promotion, they need go to market. You don't need (indistinct) >> OpenAI's been working on this for multiple five years plus it's, hasn't, wasn't born yesterday. Took a lot of years to get going. And Sam is depositioning all the success, because he's trying to manage expectations, To your point Sarbjeet, earlier. It's like, yeah, he's trying to "Whoa, whoa, settle down everybody, (Dave laughs) it's not that great." because he doesn't want to fall into that, you know, hero and then get taken down, so. >> It may take a 100 million or 150 or 200 million to train the model. But to, for the inference to, yeah to for the inference machine, It will take a lot more, I believe. >> Give it, so imagine, >> Because- >> Go ahead, sorry. >> Go ahead. But because it consumes a lot more compute cycles and it's certain level of storage and everything, right, which they already have. So I think to compute is different. To frame the model is a different cost. But to run the business is different, because I think 100 million can go into just fighting the Fed. >> Well there's a flywheel too. >> Oh that's (indistinct) >> (indistinct) >> We are running the business, right? >> It's an interesting number, but it's also kind of, like, context to it. So here, a hundred million spend it, you get there, but you got to factor in the fact that the ways companies win these days is critical mass scale, hitting a flywheel. If they can keep that flywheel of the value that they got going on and get better, you can almost imagine a marketplace where, hey, we have proprietary data, we're SiliconANGLE in theCUBE. 
We have proprietary content, CUBE videos, transcripts. Well wouldn't it be great if someone in a marketplace could sell a module for us, right? We buy that, Amazon's thing and things like that. So if they can get a marketplace going where you can apply to data sets that may be proprietary, you can start to see this become bigger. And so I think the key barriers to entry is going to be success. I'll give you an example, Reddit. Reddit is successful and it's hard to copy, not because of the software. >> They built the moat. >> Because you can, buy Reddit open source software and try To compete. >> They built the moat with their community. >> Their community, their scale, their user expectation. Twitter, we referenced earlier, that thing should have gone under the first two years, but there was such a great emotional product. People would tolerate the fail whale. And then, you know, well that was a whole 'nother thing. >> Then a plane landed in (John laughs) the Hudson and it was over. >> I think verticals, a lot of verticals will build applications using these models like for lawyers, for doctors, for scientists, for content creators, for- >> So you'll have many hundreds of millions of dollars investments that are going to be seeping out. If, all right, we got to wrap, if you had to put odds on it that that OpenAI is going to be the leader, maybe not a winner take all leader, but like you look at like Amazon and cloud, they're not winner take all, these aren't necessarily winner take all markets. It's not necessarily a zero sum game, but let's call it winner take most. What odds would you give that open AI 10 years from now will be in that position. >> If I'm 0 to 10 kind of thing? >> Yeah, it's like horse race, 3 to 1, 2 to 1, even money, 10 to 1, 50 to 1. >> Maybe 2 to 1, >> 2 to 1, that's pretty low odds. That's basically saying they're the favorite, they're the front runner. Would you agree with that? >> I'd say 4 to 1. >> Yeah, I was going to say I'm like a 5 to 1, 7 to 1 type of person, 'cause I'm a skeptic with, you know, there's so much competition, but- >> I think they're definitely the leader. I mean you got to say, I mean. >> Oh there's no question. There's no question about it. >> The question is can they execute? >> They're not Friendster, is what you're saying. >> They're not Friendster and they're more like Twitter and Reddit where they have momentum. If they can execute on the product side, and if they don't stumble on that, they will continue to have the lead. >> If they say stay neutral, as Sam is, has been saying, that, hey, Microsoft is one of our partners, if you look at their company model, how they have structured the company, then they're going to pay back to the investors, like Microsoft is the biggest one, up to certain, like by certain number of years, they're going to pay back from all the money they make, and after that, they're going to give the money back to the public, to the, I don't know who they give it to, like non-profit or something. (indistinct) >> Okay, the odds are dropping. (group talks over each other) That's a good point though >> Actually they might have done that to fend off the criticism of this. But it's really interesting to see the model they have adopted. 
>> The wild card in all this, my last word on this is that, if there's a developer shift in how developers and data can come together again, we have conferences around the future of data, Supercloud and meshes versus, you know, how the data world, coding with data, how that evolves will also dictate, 'cause a wild card could be a shift in the landscape around how developers are using either machine learning or AI-like techniques to code into their apps, so. >> That's fantastic insight. I can't thank you enough for your time, on the heels of Supercloud 2, really appreciate it. All right, thanks to John and Sarbjeet for the outstanding conversation today. Special thanks to the Palo Alto studio team. My goodness, Anderson, this is a great backdrop. You guys got it all out here, I'm jealous. And Noah, really appreciate it, Chuck, Andrew Frick and Cameron, Andrew Frick switching, Cameron on the video lake, great job. And Alex Myerson, he's on production, manages the podcast for us, Ken Schiffman as well. Kristen Martin and Cheryl Knight help get the word out on social media and our newsletters. Rob Hof is our editor-in-chief over at SiliconANGLE, does some great editing, thanks to all. Remember, all these episodes are available as podcasts. All you got to do is search Breaking Analysis podcast, wherever you listen. We publish each week on wikibon.com and siliconangle.com. Want to get in touch, email me directly, david.vellante@siliconangle.com or DM me at dvellante, or comment on our LinkedIn post. And by all means, check out etr.ai. They've got really great survey data in the enterprise tech business. This is Dave Vellante for theCUBE Insights powered by ETR. Thanks for watching. We'll see you next time on Breaking Analysis. (electronic music)
Meet the new HPE ProLiant Gen11 Servers
>> Hello, everyone. Welcome to theCUBE's coverage of Compute Engineered For Your Hybrid World, sponsored by HPE and Intel. I'm John Furrier, host of theCUBE. I'm pleased to be joined by Krista Satterthwaite, SVP and general manager for HPE Mainstream Compute, and Lisa Spelman, corporate vice president and general manager of Intel Xeon Products, here to discuss the major announcement. Thanks for joining us today. Thanks for coming on theCUBE. >> Thanks for having us. >> Great to be here. >> Great to see you guys. And exciting announcement. Krista, Compute continues to evolve to meet the challenges of businesses. We're seeing more and more high performance, more Compute, I mean, it's getting more Compute every day. You guys officially announced this next generation of ProLiant Gen11s in November. Can you share and talk about what this means? >> Yeah, so first of all, thanks so much for having me. I'm really excited about this announcement. And yeah, in November we announced our HPE ProLiant NextGen, and it really was about one thing. It's about engineering Compute for customers' hybrid world. And we have three different design principles when we designed this generation. First is an intuitive cloud operating experience, and that's with our HPE GreenLake for Compute Ops Management. And that's all about management that is simple, unified, and automated. So it's all about seeing everything from one console. We have a customer that's using this, and they were so surprised at how much they could see, and they were excited because they had servers in multiple locations. This was a hotel, so they had servers everywhere, and they can now see all their different firmware levels. And with that type of visibility, they thought their planning was going to be much, much easier. And then when it comes to updates, they're much quicker and much easier, so it's an exciting thing. Whether you have servers just in the data center, or you have them distributed, you could see and do more than you ever could before with HPE GreenLake for Compute Ops Management. So that's number one. Number two is trusted security by design. Now, when we launched our HPE ProLiant Gen10 servers years ago, we launched groundbreaking, innovative security features, and we haven't stopped, we've continued to enhance that ever since then. And this generation's no exception. So we have new innovations around security. Security is a huge focus area for us, and so we're excited about delivering those. And then lastly, performance for every workload. We have a huge increase in performance with HPE ProLiant Gen11, and we have customers that are clamoring for this additional performance right now. And what's great about this is that it doesn't matter where the bottleneck is, whether it's CPU, memory or IO, we have advancements across the board that are going to make real differences in what customers are going to be able to get out of their workloads. And then we have customers that are trying to build headroom in. So even if they don't need it today, what they put in their environment today, they know it needs to last and needs to be built for the future. >> That's awesome. Thanks for the recap. And that's great news for folks looking to power those workloads, more and more optimizations needed. I got to ask though, how is what you guys are announcing today meeting these customer needs for the future, and what are your customers looking for and what are HPE and Intel announcing today?
>> Yeah, so customers are doing more than ever before with their servers. So they're really pushing things to the max. I'll give you an example. There's a retail customer that is waiting to get their hands on our ProLiant Gen11 servers, because they want to do video streaming in every one of their retail stores. When we started talking to 'em about what they're building and what their needs were today, they were like, "Forget about what my needs are today. We're buying for headroom. We don't want to touch these servers for a while." So they're maxing things out, because they know the needs are coming. And so what you'll see with this generation is that we've built all of that in so that customers can deploy with confidence and know they have the headroom for all the things they want to do. The applications that we see and what people are trying to do with their servers is light years different than the last big announcement we had, which was our ProLiant Gen10 servers. People are trying to do more than ever before and they're trying to do that at the Edge as well as in the data center. So I'll tell you a little bit about the servers we have. So in partnership with Intel, we're really excited to announce a new batch of servers. And these servers feature the 4th Gen Intel Xeon scalable processors, bringing a lot more performance and efficiency. And I'll talk about the servers, one, the first one is an HPE ProLiant DL320 Gen11. Now, I told you about that retail customer that's trying to do video streaming in their stores. This is the server they were looking at. This server is a new server, we didn't have a Gen10 or a Gen10+ version of the server. This is a new server and it's optimized for Edge use cases. It's a rack-based server and it's very, very flexible. So different types of storage, different types of GPU configurations, really designed to take care of many, many use cases at the Edge and doing more at the Edge than ever before. So I mentioned video streaming, but also VDI and analytics at the Edge. The next two servers are some of our most popular servers, our HPE ProLiant DL360 Gen11, and that's our density-optimized server for enterprise. And that is getting an upgrade across the board as well, big, big improvements in terms of performance and expansion. And for those customers that need even more expansion when it comes to, let's say, storage or accelerators, then the DL380 Gen11 is a server that's new as well. And that's really for folks that need more expandability than the DL360, which is a 1U server. And then lastly, our ML350, which is a tower server. These tower servers are typically used at remote sites, branch offices, and this particular server holds a world record for energy efficiency for tower servers. So those are some of the servers we have today that we're announcing. I also want to talk a little bit about our Cray portfolio. So we're announcing two new servers with our HPE Cray portfolio. And what's great about this is that these servers make supercomputing more accessible to more enterprise customers. These servers are going to be smaller, they're going to come in at lower price points, and deliver tremendous energy efficiency. So these are the Cray XD servers, and there's more servers to come, but these are the ones that we're announcing with this first iteration. >> Great stuff. I can talk about servers all day long, I love server innovation. I've been following it for many, many years, as you guys know.
Lisa, we'll bring you in. Servers have been powered by Intel Xeon, we've been talking a lot about the scalable processors. This is your 4th Gen, they're in Gen11 and you're at 4th Gen. Krista mentioned this generation's about Security Edge, which is essentially becoming like a data center model now, the Edges are exploding. What are some of the design principles that went into the 4th Gen this time around the scalable processor? Can you share the Intel role here? >> Sure. I love what Krista said about headroom. If there's anything we've learned in these past few years, it's that you can plan for today, and you can even plan for tomorrow, but your tomorrow might look a lot different than what you thought it was going to. So to meet these business challenges, as we think about the underlying processor that powers all that amazing server lineup that Krista just went through, we are really looking at delivering that increased performance, the power efficient compute and then strong security. And of course, attention to the overall operating cost of the customer environment. Intel's focused on a very workload-first approach to solving our customers' real problems. So this is the applications that they're running every day to drive their digital transformation, and we really like to focus our innovation, and leadership for those highest value, and also the highest growth workloads. Some of those that we've uniquely focused on in 4th Gen Xeon, our artificial intelligence, high performance computing, network, storage, and as well as the deployments, like you were mentioning, ranging from the cloud all the way out to the Edge. And those are all satisfied by 4th Gen Xeon scalable. So our strategy for architecting is based off of all of that. And in addition to doing things like adding core count, improving the platform, updating the memory and the IO, all those standard things that you do, we've invested deeply in delivering the industry's CPU with the most built-in accelerators. And I'll just give an example, in artificial intelligence with built-in AMX acceleration, plus the framework optimizations, customers can see a 10X performance improvement gen over gen, that's on both training and inference. So it further cements Xeon as the world's foundation for inference, and it now delivers performance equivalent of a modern GPU, but all within your CPU. The flexibility that, that opens up for customers is tremendous and it's so many new ways to utilize their infrastructure. And like Krista said, I just want to say that, that best-in-class security, and security solutions are an absolute requirement. We believe that starts at the hardware level, and we continue to invest in our security features with that full ecosystem support so that our customers, like HPE, can deliver that full stacked solution to really deliver on that promise. >> I love that scalable processor messaging too around the silicon and all those advanced features, the accelerators. AI's certainly seeing a lot of that in demand now. Krista, similar question to you on your end. How do you guys look at these, your core design principles around the ProLiant Gen11, and how that helps solve the challenges for your customers that are living in this hybrid world today? >> Yeah, so we see how fast things are changing and we kept that in mind when we decided to design this generation. We talked all already about distributed environments. 
We see the intensity of the requirements that are at the Edge, and that's part of what we're trying to address with the new platform that I mentioned. It's also part of what we're trying to address with our management, making sure that people can manage no matter where a server is and get a great experience. The other thing we're realizing when it comes to what's happening is customers are looking at how they operate. Many want to buy as a service and with HPE GreenLake, we see that becoming more and more popular. With HPE GreenLake, we can offer that to customers, which is really helpful, especially when they're trying to get new technology like this. Sometimes they don't have it in the budget. With something like HP GreenLake, there's no upfront costs so they can enjoy this technology without having to come up with a big capital outlay for it. So that's great. Another one is around, I liked what Lisa said about security starting at the hardware. And that's exactly, the foundation has to be secure, or you're starting at the wrong place. So that's also something that we feel like we've advanced this time around. This secure root of trust that we started in Gen10, we've extended that to additional partners, so we're excited about that as well. >> That's great, Krista. We're seeing and hearing a lot about customers challenges at the Edge. Lisa, I want to bring you back in on this one. What are the needs that you see at the Edge from an Intel perspective? How is Intel addressing the Edge? >> Yeah, thanks, John. You know, one of the best things about Xeon is that it can span workloads and environments all the way from the Edge back to the core data center all within the same software environment. Customers really love that portability. For the Edge, we have seen an explosion of use cases coming from all industries and I think Krista would say the same. Where we're focused on delivering is that performant-enough compute that can fit into a constrained environment, and those constraints can be physical space, they can be the thermal environment. The Network Edge has been a big focus for us. Not only adding features and integrating acceleration, but investing deeply in that software environment so that more and more critical applications can be ported to Xeon and HPE industry standard servers versus requiring expensive, proprietary systems that were quite frankly not designed for this explosion of use cases that we're seeing. Across a variety of Edge to cloud use cases, we have identified ways to provide step function improvements in both performance and that power efficiency. For example, in this generation, we're delivering an up to 2.9X average improvement in performance per watt versus not using accelerators, and up to 70 watt power savings per CPU opportunity with some unique power management features, and improve total cost of ownership, and just overall power- >> What's the closing thoughts? What should people take away from this announcement around scalable processors, 4th Gen Intel, and then Gen11 ProLiant? What's the walkaway? What's the main super thought here? >> So I can go first. I think the main thought is that, obviously, we have partnered with Intel for many, many years. We continue to partner this generation with years in the making. In fact, we've been working on this for years, so we're both very excited that it's finally here. 
But we're laser focused on making sure that customers get the most out of their workloads, the most out of their infrastructure, and that they can meet those challenges that people are throwing at 'em. I think IT is under more pressure than ever before and the demands are there. They're critical to the business success with digital transformation and our job is to make sure they have everything they need, and they could do and meet the business needs as they come at 'em. >> Lisa, your thoughts on this reflection point we're in right now? >> Well, I agree with everything that Krista said. It's just a really exciting time right now. There's a ton of challenges in front of us, but the opportunity to bring technology solutions to our customers' digital transformation is tremendous right now. I think I would also like our customers to take away that between the work that Intel and HPE have done together for generations, they have a community that they can trust. We are committed to delivering customer-led solutions that do solve these business transformation challenges that we know are in front of everyone, and we're pretty excited for this launch. >> Yeah, I'm super enthusiastic right now. I think you guys are on the right track. This title Compute Engineered for Hybrid World really kind of highlights the word, "Engineered." You're starting to see this distributed computing architecture take shape with the Edge. Cloud on-premise computing is everywhere. This is real relevant to your customers, and it's a great announcement. Thanks for taking the time and joining us today. >> Thank you. >> Yeah, thank you. >> This is the first episode of theCUBE's coverage of Compute Engineered For Your Hybrid World. Please continue to check out thecube.net, our site, for the future episodes where we'll discuss how to build high performance AI applications, transforming compute management experiences, and accelerating VDI at the Edge. Also, to learn more about the new HPE ProLiant servers with the 4th Gen Intel Xeon processors, you can go to hpe.com. And check out the URL below, click on it. I'm John Furrier at theCUBE. You're watching theCUBE, the leader in high tech, enterprise coverage. (bright music)
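Lisa's point earlier in this segment about built-in AMX acceleration is easiest to picture with a small sketch. The snippet below is illustrative only: it simply runs a model in bfloat16 on the CPU; whether the matrix multiplies are actually dispatched to AMX tile instructions depends on the processor and on the PyTorch/oneDNN build, so treat it as the shape of the workflow rather than a benchmark recipe.

```python
# Illustrative sketch: CPU inference in bfloat16.
# On a 4th Gen Xeon with a recent PyTorch/oneDNN build, bf16 matmuls like these
# can be executed with AMX; on other CPUs the same code simply falls back to
# slower kernels, so the snippet stays portable either way.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(1024, 4096),
    nn.ReLU(),
    nn.Linear(4096, 1024),
).eval()

x = torch.randn(64, 1024)

with torch.inference_mode(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    y = model(x)

print(y.shape, y.dtype)  # expected: torch.Size([64, 1024]) torch.bfloat16
```

The practical appeal, as discussed above, is that the same CPU-only code path picks up the accelerator where it exists, without requiring a separate GPU deployment for lighter inference workloads.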
HPE Compute Engineered for your Hybrid World - Transform Your Compute Management Experience
>> Welcome everyone to "theCUBE's" coverage of "Compute engineered for your hybrid world," sponsored by HP and Intel. Today we're going to going to discuss how to transform your compute management experience with the new 4th Gen Intel Xeon scalable processors. Hello, I'm John Furrier, host of "theCUBE," and my guests today are Chinmay Ashok, director cloud engineering at Intel, and Koichiro Nakajima, principal product manager, compute at cloud services with HPE. Gentlemen, thanks for coming on this segment, "Transform your compute management experience." >> Thanks for having us. >> Great topic. A lot of people want to see that system management one pane of glass and want to manage everything. This is a really important topic and they started getting into distributed computing and cloud and hybrid. This is a major discussion point. What are some of the major trends you guys see in the system management space? >> Yeah, so system management is trying to help user manage their IT infrastructure effectively and efficiently. So, the system management is evolving along with the IT infrastructures which is trying to accommodate market trends. We have been observing the continuous trends like digital transformation, edge computing, and exponential data growth never stops. AI, machine learning, deep learning, cloud native applications, hybrid cloud, multi-cloud strategies. There's a lot of things going on. Also, COVID-19 pandemic has changed the way we live and work. These are all the things that, given a profound implication to the system design architectures that system management has to consider. Also, security has always been the very important topic, but it has become more important than ever before. Some of the research is saying that the cyber criminals becoming like a $10.5 trillion per year. We all do our efforts on the solution provider size and on the user side, but still cyber criminals are growing 15% year by year. So, with all this kind of thing in the mind, system management really have to evolve in a way to help user efficiently and effectively manage their more and more distributed IT infrastructure. >> Chinmay, what's your thoughts on the major trends in system management space? >> Thanks, John, Yeah, to add to what Koichiro said, I think especially with the view of the system or the service provider, as he was saying, is changing, is evolving over the last few years, especially with the advent of the cloud and the different types of cloud usage models like platform as a service, on-premises, of course, infrastructure is a service, but the traditional software as a service implies that the service provider needs a different view of the system and the context in which we need the CPU vendor, or the platform vendor needs to provide that, is changing. That includes both in-band telemetry being able to monitor what is going on on the system through traditional in-band methods, but also the advent of the out-of-band methods to do this without end user disruption is a key element to the enhancements that our customers are expecting from us as we deploy CPUs and platforms. >> That's great. You know what I love about this discussion is we had multiple generation enhancements, 4th Gen Xeon, 11th Gen ProLiant, iLOs going to come up with got another generation increase on that one. We'll get into that on the next segment, but while we're here, what is iLO? Can you guys define what that is and why it's important? >> Yeah, great question. 
Real quick, so HPE Integrated Lights-Out is the formal name of the product and we tend to call it iLO for short. iLO is HPE's BMC. If you're familiar with this topic, it's a Baseboard Management Controller. If not, this is a small computer on the server motherboard and it runs independently from the host CPU and the operating system. So, that's why it's named Lights-Out. Now what can you do with the iLO? iLO really helps a user manage, use, and monitor the server remotely and securely throughout its life, from deployment to retirement. So, you can really do things like, you know, turning server power on and off, installing an operating system, access to IT, firmware updates, and when you decide to retire a server, you can completely wipe the data off that server so then it's ready to trash. iLO is really the best solution to manage a single server, but when you try to manage hundreds or thousands of servers in a larger scale environment, then managing servers one by one through the iLO is not practical. So, HPE has two options. One of them is HPE OneView. OneView is the best solution to manage a very complex, on-prem IT infrastructure that involves thousands of servers as well as the other IT elements like Fibre Channel storage, the storage area network, and so on. Another option that we have is HPE GreenLake for Compute Ops Management. This is our latest, greatest product that we recently launched and this is the best solution to manage a distributed IT environment with multiple edge points or multiple clouds. And I was recently involved in a customer conversation about Compute Ops Management with a global hotel chain with 9,000 locations worldwide, and each of the locations only has like a couple of servers to manage, but combined it's, you know, 27,000 servers over the 9,000 locations. We didn't really have a great answer for that kind of environment before, but now HPE has GreenLake for Compute Ops Management to also deal with, you know, that kind of environment. >> Awesome. We're going to do a big dive on iLO in the next segment, but Chinmay, before we end this segment, what is PMT? >> Sure, so yeah, with the introduction of the 4th Gen Intel Xeon scalable processor, we of course introduced many new technologies like PCI Gen 5, DDR5, et cetera. And these are very key to general system provision, if you will. But with all of these new technologies come new sources of telemetry that the service provider now has to manage, right? So, PMT is a technology called Platform Monitoring Technology. That is a capability that we introduced with the Intel 4th Gen Xeon scalable processor that allows the service provider to monitor all of these sources of telemetry within the system, within the system on chip, the CPU SoC, in all of these contexts that we talked about, like the hybrid cloud and cloud infrastructure as a service or platform as a service, both in the traditional in-band telemetry collection models and also in out-of-band collection models such as the ones that Koichiro was talking about through the BMC, et cetera. So, this is a key enhancement that we believe takes the Intel product line closer to what the service providers require for managing their end user experience. >> Awesome, well thanks so much for spending the time in this segment. We're going to take a quick break, we're going to come back and we're going to discuss more what's new with Gen 11 and iLO 6.
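To make that remote lights-out control concrete, here is a minimal sketch of talking to a BMC such as iLO over its Redfish REST interface, the DMTF standard that comes up later in this conversation. The hostname, credentials, and system ID "1" are placeholders for illustration, and verify=False is only acceptable in a lab; adapt all of them to your environment.

```python
# Minimal sketch: query and control a server through its BMC's Redfish REST API.
# The iLO hostname, credentials, and system ID ("1") are placeholders; prefer a
# real CA bundle over verify=False outside of a lab.
import requests

ILO_HOST = "https://ilo.example.internal"        # hypothetical iLO address
AUTH = ("admin", "password")                     # placeholder credentials
SYSTEM_URL = f"{ILO_HOST}/redfish/v1/Systems/1"  # standard Redfish system resource

# Read the current model and power state.
resp = requests.get(SYSTEM_URL, auth=AUTH, verify=False, timeout=10)
resp.raise_for_status()
system = resp.json()
print("Model:     ", system.get("Model"))
print("PowerState:", system.get("PowerState"))

# Power the server on if it is off (Redfish ComputerSystem.Reset action).
if system.get("PowerState") == "Off":
    action_url = f"{SYSTEM_URL}/Actions/ComputerSystem.Reset"
    r = requests.post(action_url, json={"ResetType": "On"},
                      auth=AUTH, verify=False, timeout=10)
    r.raise_for_status()
    print("Power-on request accepted with status", r.status_code)
```

At the scale Koichiro describes, hundreds or thousands of servers, the same Redfish calls are normally driven by tooling such as OneView or Compute Ops Management rather than ad hoc scripts, which is exactly the point being made above.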
You're watching "theCUBE," the leader in high tech enterprise coverage. We'll be right back. (light music) Welcome back. We're continuing the coverage of "theCUBE's" coverage of compute engineered for your hybrid world. I'm John Furrier, I'm joined by Chinmay Ashok who's from Intel and Koichiro Nakajima with HPE. We're going to dive deeper into transforming your compute management experience with 4th Gen Intel Xeon scalable processors and HP ProLiant Gen11. Okay, let's get into it. We want to talk about Gen11. What's new with Gen11? What's new with iLO 6? So, NexGen increases in performance capabilities. What's new, what's new at Gen11 and iLO 6 let's go. >> Yeah, iLO 6 accommodates a lot of new features and the latest, greatest technology advancements like a new generation CPUs, DDR5 memories, PCI Gen 5, GPGPUs, SmartNICs. There's a lot of great feature functions. So, it's an iLO, make sure that supports all the use cases that associate with those latest, greatest advancements. For instance, like you know, some of the higher thermal design point CPU SKUs that requires a liquid cooling. We all support those kind of things. And also iLO6 accommodates latest, greatest industry standard system management, standard specifications, for instance, like an DMTF, TLDN, DMTF, RDE, SPDM. And what are these means for the iLO6 and Gen11? iLO6 really offers the greatest manageability and monitoring user experiences as well as the greatest automation through the refresh APIs. >> Chinmay, what's your thoughts on the Gen11 and iLO6? You're at Intel, you're enabling all this innovation. >> Yeah. >> What's the new features? >> Yeah, thanks John. Yeah, so yeah, to add to what Koichiro said, I think with the introduction of Gen11, 4th Gen Intel Xeon scalable processor, we have all of these rich new feature sets, right? With the DDR5, PCI Gen5, liquid cooling, et cetera. And then all of these new accelerators for various specific workloads that customers can use using this processor. So, as we were discussing previously, what this brings is all of these different sources of telemetry, right? So, our sources of data that the system provider or the service provider then needs to utilize to manage the compute experience for their end user. And so, what's new from that perspective is Intel realized that these new different sources of telemetry and the new mechanisms by which the service provider has to extract this telemetry required us to fundamentally think about how we provide the telemetry experience to the service provider. And that meant extending our existing best-in-class, in-band telemetry capabilities that we have today already built into in market Intel processors. But now, extending that with the introduction of the PMT, the Platform Monitoring Technology, that allows us to expand on that in-band telemetry, but also include all of these new sources of telemetry data through all of these new accelerators through the new features like PCI Gen5, DDR5, et cetera, but also bring in that out-of-band telemetry management experience. And so, I think that's a key innovation here, helping prepare for the world that the cloud is enabling. >> It's interesting, you know, Koichiro you had mentioned on the previous segment, COVID-19, we all know the impact of how that changed, how IT at the managed, you know, all of a sudden remote work, right? So, as you have cloud go to hybrid, now we got the edge coming, we're talking about a distributed computing environment, we got telemetry, you got management. 
This is a huge shift and it's happening super fast. What do Gen11 and iLO6 mean for architects as they start to look at going beyond hybrid and going to the edge? You're going to need all this telemetry. What's the impact? Can you guys just riff and share your thoughts on what this means for that kind of next-gen cloud that we see coming on, which is essentially distributed computing. >> Yeah, that's a great topic to discuss. So, there's a couple of things. Really, to manage those remote environments and also distributed IT environments, the system management has to reach across remote locations, across internet connections and connectivity. So, the system management protocols, for instance, like traditionally IPMI or SNMP, or those things, have got to be modernized into RESTful APIs that are integration-friendly with modern toolchains. So, we're investing in those, like the Redfish APIs, and also, again, security becomes of paramount importance because those are exposed for bad people to snoop on and try to do bad things like man-in-the-middle attacks, things like that. So we really, you know, focus on the security side on those two aspects in iLO6 and Gen11. One other thing is we continue our industry-unique silicon root of trust technology. That one is for the platform, making sure that only an authentic and legitimate image of the platform firmware can run on an HPE server. And when you're checking and validating the firmware images, the root of trust resides in the silicon. So, no one can change it. Even if bad people try to change the root of trust, it's bound in the chip, so you cannot really change it. And that's why, even if bad people try to compromise, you know, install a compromised firmware image on HPE servers, they cannot do that. Another thing is we're making a lot of enhancements to make sure you can securely onboard an HPE server into your network or onto services like GreenLake. I'll give you a couple of examples, for instance, IDevID, Initial Device ID. That one conforms to IEEE 802.1AR and it's immutable, so no one can change it. And by using the IDevID, you can really verify you are not onboarding a rogue or unknown server, but the server that you want to onboard, right? It's absolutely important. Another thing is the platform certificate. The platform certificate really is a measurement of the configuration. So again, this is a great feature that makes sure you receive a server from the factory and no one touched the server or altered the configuration during transportation.
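The silicon root of trust idea Koichiro describes, only an authentic, signed firmware image gets to run, can be pictured with a toy signature check. This is purely conceptual: HPE's implementation lives in silicon and firmware, not in Python, and the key handling is collapsed into one process only so the sketch stays self-contained and runnable.

```python
# Toy illustration of the "verify before you trust" idea behind a root of trust,
# using an Ed25519 signature check. In a real chain of trust the signing key
# never leaves the vendor, and the public key (the trust anchor) is immutable,
# e.g. anchored in silicon; both are generated here only for demonstration.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

signing_key = Ed25519PrivateKey.generate()
trusted_public_key = signing_key.public_key()

firmware_image = b"\x7fELF...pretend-firmware-bytes..."
signature = signing_key.sign(firmware_image)

def boot_if_authentic(image: bytes, sig: bytes) -> bool:
    """Refuse to 'boot' unless the image verifies against the trusted key."""
    try:
        trusted_public_key.verify(sig, image)
        return True
    except InvalidSignature:
        return False

print(boot_if_authentic(firmware_image, signature))                # True
print(boot_if_authentic(firmware_image + b"tampered", signature))  # False
```

The point of anchoring the check in silicon, as described above, is that the trust anchor itself cannot be swapped out by the same attacker who tampered with the firmware.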
But again, going back to the point Koichiro was making where if you go to the edge, you go to the cloud and then have the edge connect to the cloud you have independent networks for system management, independent networks for user data, et cetera. So, you need the ability to create that isolation. All of this telemetry data that needs to be isolated from the user, but used by the service provider to provide the best experience. All of these are built on the foundations of technologies such as TDX, PMT, iLO6, et cetera. >> Great stuff, gentlemen. Well, we have a lot more to discuss on our next segment. We're going to take a break here before wrapping up. We'll be right back with more. You're watching "theCUBE," the leader in high tech coverage. (light music) Okay, welcome back here, on "theCUBE's" coverage of "Compute engineered for your hybrid world." I'm John Furrier, host of the Cube. We're wrapping up our discussion here on transforming compute management experience with 4th Gen Intel Xeon scalable processors and obviously HPE ProLiant Gen11. Gentlemen, welcome back. Let's get into the takeaways for this discussion. Obviously, systems management has been around for a while, but transforming that experience on the management side is super important as the environment just radically changing for the better. What are some of the key takeaways for the audience watching here that they should put into their kind of tickler file and/or put on their to-do list to keep an eye on? >> Yeah, so Gen11 and iLO6 offers the latest, greatest technologies with new generation CPUs, DDR5, PCI Gen5, and so on and on. There's a lot of things in there and also iLO6 is the most mature version of iLO and it offers the best manageability and security. On top of iLO, HP offers the best of read management options like HP OneView and Compute Ops Management. It's really a lot of the things that help user achieve a lot of the things regardless of the use case like edge computing, or distributed IT, or hybrid strategy and so on and on. And you could also have a great system management that you can unleash all the full potential of latest, greatest technology. >> Chinmay, what's your thoughts on the key takeaways? Obviously as the world's changing, more gen chips are coming out, specialized workloads, performance. I mean, I've never met anyone that says they want to run on slower infrastructure. I mean, come on, performance matters. >> Yes, no, it definitely, I think one of the key things I would say is yes, with Gen11 Intel for gen scalable we're introducing all of these technologies, but I think one of the key things that has grown over the last few years is the view of the system provider, the abstraction that's needed, right? Like the end user today is migrating a lot of what they're traditionally used to from a physical compute perspective to the cloud. Everything goes to the cloud and when that happens there's a lot of just the experience that the end user sees, but everything underneath is abstracted away and then managed by the system provider, right? So we at Intel, and of course, our partners at HP, we have spent a lot of time figuring out what are the best sets of features that provide that best system management experience that allow for that abstraction to work seamlessly without the end user noticing? And I think from that perspective, the 4th Gen Intel Xeon scalable processors is so far the best Intel product that we have introduced that is prepared for that type of abstraction. 
>> So, I'm going to put my customer hat on for a second. I'll ask you both. What's in it for me? I'm the customer. What's in it for me? What's the benefit to me? What does this all mean to me? What's my win? >> Yeah, I can start there. I think the key thing here is that when we create capabilities that allow you to build the best cloud, at the end of the day that efficiency, that performance, all of that translates to a better experience for the consumer, right? So, as the service provider is able to have all of these myriad capabilities to use and choose from and then manage the system experience, what that implies is that the end user sees a seamless experience as they go from one application to another as they go about their daily lives. >> Koichiro, what's your thoughts on what's in it for me? You guys got a lot of engineering going on in Gen11, every gen increase always is a step function and increase of value. What's in it for me? What do I care? What's in it for me? I'm the customer. >> Alright. Yeah, so I fully agree with Chinmay's point. You know, he lays out the all the good points, right? Again, you know what the Gen11 and iLO6 offer all the latest, greatest features and all the technology and advancements are packed in the Gen11 platform and iLO6 unleash all full potentials for those benefits. And things are really dynamic in today's world and IT system also going to be agile and the system management get really far, to the point like we never imagine what the system management can do in the past. For instance, the managing on-prem devices across multiple locations from a single point, like a single pane of glass on the cloud management system, management on the cloud, that's what really the compute office management that HP offers. It's all new and it's really help customers unleash full potential of the gear and their investment and provide the best TCO and ROIs, right? I'm very excited that all the things that all the teams have worked for the multiple years have finally come to their life and to the public. And I can't really wait to see our customers start putting their hands on and enjoy the benefit of the latest, greatest offerings. >> Yeah, 4th Gen Xeon, Gen11 ProLiant, I mean, all the things coming together, accelerators, more cores. You got data, you got compute, and you got now this idea of security, I mean, you got hitting all the points, data and security big features here, right? Data being computed in a way with Gen4 and Gen11. This is like the big theme, data security, kind of the the big part of the core here in this announcement, in this relationship. >> Absolutely. I believe, I think the key things as these new generations of processors enable is new types of compute which imply is more types of data, more types of and hence, with more types of data, more types of compute. You have more types of system management more differentiation that the service provider has to then deal with, the disaggregation that they have to deal with. So yes, absolutely this is, I think exciting times for end users, but also for new frontiers for service providers to go tackle. And we believe that the features that we're introducing with this CPU and this platform will enable them to do so. >> Well Chinmay thank you so much for sharing your Intel perspective, Koichiro with HPE. Congratulations on all that hard work and engineering coming together. Bearing fruit, as you said, Koichiro, this is an exciting time. And again, keep moving the needle. 
This is an important inflection point in the industry and now more than ever this compute is needed and this kind of specialization's all awesome. So, congratulations and participating in the "Transforming your compute management experience" segment. >> Thank you very much. >> Okay. I'm John Furrier with "theCUBE." You're watching the "Compute Engineered for your Hybrid World Series" sponsored by HP and Intel. Thanks for watching. (light music)
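For readers who want to poke at the Platform Monitoring Technology telemetry Chinmay describes earlier in this segment, the sketch below enumerates PMT regions through the Linux intel_pmt driver. The sysfs layout shown is an assumption based on the upstream driver and may differ by kernel version, and decoding the raw blob requires per-GUID metadata that Intel publishes separately, so this only lists what the platform exposes.

```python
# Rough sketch: enumerate Intel PMT telemetry regions exposed by the Linux
# intel_pmt driver and report their raw sizes. Paths are assumptions based on
# the upstream driver; reading the raw "telem" file typically requires root.
import glob
import os

regions = sorted(glob.glob("/sys/class/intel_pmt/telem*"))
if not regions:
    print("No PMT telemetry regions found (no driver, or unsupported platform).")

for region in regions:
    def read_attr(name: str) -> str:
        with open(os.path.join(region, name)) as f:
            return f.read().strip()

    guid = read_attr("guid")
    size = int(read_attr("size"), 0)
    with open(os.path.join(region, "telem"), "rb") as f:
        blob = f.read(size)
    print(f"{os.path.basename(region)}: guid={guid} size={size} bytes_read={len(blob)}")
```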
HPE Compute Engineered for your Hybrid World - Next Gen Enhanced Scalable processors
>> Welcome to "theCUBE's" coverage of "Compute Engineered for Your Hybrid World" sponsored by HPE and Intel. I'm John Furrier, host of "theCUBE" with the new fourth gen Intel Z on scalable process being announced, HPE is releasing four new HPE ProLiant Gen 11 servers and here to talk about the feature of those servers as well as the partnership between HPE and Intel, we have Darren Anthony, director compute server product manager with HPE, and Suzi Jewett, general manager of the Zion products with Intel. Thanks for joining us folks. Appreciate you coming on. >> Thanks for having us. (Suzi's speech drowned out) >> This segment is about NextGen enhanced scale of process. Obviously the Zion fourth gen. This is really cool stuff. What's the most exciting element of the new Intel fourth gen Zion processor? >> Yeah, John, thanks for asking. Of course, I'm very excited about the fourth gen Intel Zion processor. I think the best thing that we'll be delivering is our new ong package accelerators, which you know allows us to service the majority of the server market, which still is buying in that mid core count range and provide workload acceleration that matters for every one of the products that we sell. And that workload acceleration allows us to drive better efficiency and allows us to really dive into improved sustainability and workload optimizations for the data center. >> It's about al the rage about the cores. Now we got the acceleration continued to innovate with Zion. Congratulations. Darren what does the new Intel fourth Gen Zion processes mean for HPE from the ProLiant perspective? You're on Gen 11 servers. What's in it? What's it mean for you guys and for your customers? >> Well, John, first we got to talk about the great partnership. HPE and Intel have been partners delivering innovation for our server products for over 30 years, and we're continuing that partnership with HP ProLiant Gen 11 servers to deliver compelling business outcomes for our customers. Customers are on a digital transformation journey, and they need the right compute to power applications, accelerate analytics, and turn data into value. HP ProLiant Compute is engineered for your hybrid world and delivers optimized performance for your workloads. With HP ProLiant Gen 11 servers and Intel fourth gen Zion processors, you can have the performance to accelerate workloads from the data center to the edge. With Gen 11, we have more. More performance to meet new workload demands. With PCI Gen five which delivers increased bandwidth with room for more data and graphics accelerators for workloads like VDI, our new demands at the edge. DDR5 memory springs greater bandwidth and performance increases for low latency and memory solutions for database and analytics workloads and higher clock speed CPU chipset combinations for processor intensive AI and machine learning applications. >> Got to love the low latency. Got to love the more performance. Got to love the engineered for the hybrid world. You mentioned that. Can you elaborate more on engineered for the hybrid world? What does that mean? Can you elaborate? >> Well, HP ProLiant Compute is based on three pillars. First, an intuitive cloud operating experience with HPE GreenLake compute ops management. Second, trusted security by design with a zero trust approach from silicone to cloud. 
And third, optimized performance for your workloads, whether you deploy as a traditional infrastructure or a pay-as-you-go model with HPE GreenLake, on-premises, at the edge, in a colo, and in the public cloud. >> Well, thanks Suzi and Darren, we'll be right back. We're going to take a quick break. We're going to come back and do a deep dive and get into the ProLiant Gen 11 servers. We're going to dig into it. You're watching "theCUBE," the leader in high tech enterprise coverage. We'll be right back. (upbeat music) >> Hello everyone. Welcome back, continuing coverage of "theCUBE's" "Compute Engineered for Your Hybrid World" with HP and Intel. I'm John Furrier, host of "theCUBE," joined again by Darren Anthony from HPE and Suzi Jewett from Intel, as we continue our conversation on the fourth gen Xeon scalable processor and HP Gen 11 servers. Suzi, we'll start with you first. Can you give us some use cases around the new fourth gen Intel Xeon scalable processors? >> Yeah, I'd love to. What we're really seeing with an ever-changing market, and, you know, adapting to that, is we're leading with that workload-focused approach. Some examples, you know, that we see are with vRAN. For vRAN, we estimate the 2021 market size was about 150 million, and we expect a CAGR of almost 30% all the way through 2030. So we're really focused on that, on, you know, deployed edge use cases, growing about 10% to over 50% in 2026. And HPC use cases, of course, continue to grow at a steady CAGR of, you know, about 7%. Then last but not least is cloud. So we're, you know, targeting a growth rate of almost 20% over a five year CAGR. And the fourth gen Xeon is targeted to all of those workloads, both through our architectural improvements that, you know, deliver node level performance, as well as our operational improvements that deliver data center performance. And wrapping that all around with the accelerators that I talked about earlier that provide those workload-specific improvements that get us to where our customers need to operationalize in their data center. >> I love the focus solutions around seeing compute used that way and the processors. Great stuff. Darren, how do you see the new ProLiant Gen 11 servers being used on your side? I mean obviously, you've got the customers deploying the servers. What are you seeing on those workloads? Those targeted workloads? (John chuckling) >> Well, you know, very much in line with what Suzi was talking about. The generational improvements that we're seeing in performance for Gen 11 are outstanding for many different use cases. You know, obviously VDI. What we're seeing a lot is around the analytics. You know, with moving to the edge, there's a lot more data. Customers need to convert that data into something tangible. Something that's actionable. And so we're really seeing the strong use cases around analytics in order to mine that data and to make better, faster decisions for the customers. >> You know what I love about this market is people really want to hear about performance. They love speed, they love the power, and low power, by the way, on the other side. So, you know, this has really been a big part of the focus now this year. We're seeing a lot more discussion. Suzi, can you tell us more about the key performance improvements on the processors? And Darren, if you don't mind, if you can follow up on the benefits of the new servers relative to the performance. Suzi?
>> Sure, so, you know, at a standard expectation we're looking at, you know, 60% gen over gen from our previous third gen Xeon, but more importantly, as we've been mentioning, is the performance improvement we get with the accelerators. As an example, an average accelerator proof point that we have is 2.9 times improvement in performance per watt for accelerated workloads versus non-accelerated workloads. Additionally, we're seeing really great performance improvement in low jitter, so almost 20 to 50 times improvement versus previous gen in jitter on particular workloads, which is really important, you know, to our cloud service providers. >> Darren, what's your follow up on this? This obviously translates into the Gen 11 servers. >> Well, you know, this generation. Huge improvements across the board. And what we're seeing is that not only are customers prepared for what they need now, you know, workloads are evolving and transitioning. Customers need more. They're doing more. They're doing more analytics. And so not only do you have the performance you need now, but it's actually built for the future. We know that customers are looking to take in that data and do something and work with the data wherever it resides within their infrastructure. We also see customers that are beginning to move servers out of a centralized data center more to the edge, closer to where the data resides. And so this new generation is really tremendous for that. Seeing a lot of benefits for the customers from that perspective. >> Okay, Suzi, Darren, I want to get your thoughts on one of the hottest trends happening right now. Obviously machine learning and AI has always been hot, but recently more and more focus has been on AI. As you start to see this kind of next gen kind of AI coming on, and the younger generation of developers, you know, they're all into this. This is really one of the hottest trends, AI. We've seen the momentum and accelerations kind of going next level. Can you guys comment on how Xeon here and Gen 11 are tying into that? What's that mean for AI? >> So, exactly. With the fourth gen Intel Xeon, one of our key, you know, on-package accelerators in every core is our AMX. It delivers up to 10 times improvement on inference and training versus previous gens, and, you know, throws the competition out of the water. So we are really excited for our AI performance leading with Xeon. >> And- >> And John, what we're seeing is that this next generation, you know, you're absolutely right, you know. Workloads are a lot more focused, taking a lot more advantage of AI and machine learning capabilities. And with this generation, together with the Intel Xeon fourth gen, you know, what we're seeing is the opportunity with that increase in IO bandwidth that now we have an opportunity for those applications and those use cases and those workloads to take advantage of this capability. We haven't had that before, but now more than ever, we've actually, you know, opened the throttle with the performance and with the capabilities to support those workloads. >> That's great stuff. And you know, the AI stuff also does a lot of differentiated heavy lifting, and it needs processing power. It needs the servers. This is just, (John chuckling) it creates more and more value. This is right in line. Congratulations. Super excited by that call out. Really appreciate it. Thanks Suzi and Darren. Really appreciate it. A lot more to discuss with you guys as we go a little bit deeper.
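A quick editorial aside to make the sustainability math behind that 2.9x figure concrete: performance per watt is throughput divided by power, so if the accelerated path really delivers 2.9 times the performance per watt on a fixed workload, the energy used per unit of work falls to roughly a third.

$$\frac{E_{\text{accelerated}}}{E_{\text{baseline}}} = \frac{(\text{perf/watt})_{\text{baseline}}}{(\text{perf/watt})_{\text{accelerated}}} = \frac{1}{2.9} \approx 0.34$$

That is, about 66% less energy for the same accelerated work, under the stated assumption that the workload itself is unchanged.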
We're going to talk about security and wrap things up after this short break. I'm John Furrier, "theCUBE," the leader in enterprise tech coverage. (upbeat music) >> Welcome back to "theCUBE's" coverage of "Compute Engineered for Your Hybrid World." I'm John Furrier, host of "theCUBE," joined by Darren Anthony from HPE and Suzi Jewett from Intel as we turn our discussion to security. A lot of great features with the new fourth gen Xeon Scalable processors and the ProLiant Gen 11. Let's get into it. Suzi, what are some of the cool features of the fourth gen Intel Xeon Scalable processors? >> Sure, John, I'd love to talk about it. With fourth gen, Intel offers the most comprehensive confidential computing portfolio to really enhance data security and address regulatory compliance and sovereignty concerns. A couple of examples of the features and technologies we've included: a larger baseline enclave with SGX, our application isolation technology, and Intel CET, which substantially reduces the risk of a whole class of software-based attacks. Wrapped around at a platform level, that really allows us to secure workload acceleration software and ensure platform integrity. >> Darren, this is a great enablement for HPE. Can you tell us about the security with the new HPE ProLiant Gen 11 servers? >> Absolutely, John. HPE ProLiant is engineered with a fundamental security approach to defend against increasingly complex threats and an uncompromising focus on state-of-the-art security innovations that are built right into our DNA, from silicon to software, from the factory to the cloud. It's our goal to protect the customer's infrastructure, workloads, and data from threats to hardware and risk from third-party software and devices. Gen 11 is a continuation of the great technological innovations we've had around providing a zero trust architecture. We're extending our Silicon Root of Trust, and it's a motion forward in innovating on the Silicon Root of Trust that we've had. With Silicon Root of Trust, we protect millions of lines of firmware code from malware and ransomware with a digital fingerprint that's unique to the server. With this Silicon Root of Trust, we're securing over 4 million HPE servers around the world. Beyond that silicon, we're extending this to our partner ecosystem: the authentication of platform components such as network interface cards and storage controllers gives us protection against additional entry points of security threats that could compromise the entire server infrastructure. With this latest version, we're also doing authentication integrity with those components using the Security Protocol and Data Model, or SPDM. But we know that trusted and protected infrastructure begins with a secure supply chain, a layer of protection that starts at the manufacturing floor. HPE provides optimized protection for ProLiant servers from trusted suppliers, to the factories, and in transit to the customer. >> Any final messages, Darren, you'd like to share with your audience on engineering for the hybrid world, security overall, and the new Gen 11 servers with the fourth gen Xeon Scalable processors? >> Well, it's really about choice. Having the right choice for your compute, and we know HPE ProLiant Gen 11 servers together with the new Xeon processors are the right choice.
Delivering the capabilities, performance, and efficiency that customers need to run their most complex and most performance-hungry workloads. We're really excited about this next generation of platforms. >> ProLiant Gen 11. Suzi, HPE is a great customer for Intel, and you've got the fourth generation Xeon Scalable processors. We've been tracking multiple generations from both of you for many years now, the past decade. A lot of growth, a lot of innovation. I'll give you the last word on the series here on this segment. Can you share the collaboration between Intel and HPE? What does it mean, and what does it mean for customers? Can you give your thoughts and share your views on the relationship with HPE? >> Yeah, we obviously value HPE as one of our key customers. We partner with them from the beginning, when we are defining the product, all the way through development and validation. HPE has been a great partner in making sure that we deliver collaboratively to the needs of their customers and our customers all together, so that we get the best product in the market, one that meets our customers' needs and allows for the flexibility, the operational efficiency, and the security that our markets demand. >> Darren, Suzi, thank you so much. You know, "Compute Engineered for Your Hybrid World" is really important. Compute is... (John stuttering) We need more compute. (John chuckling) Give us more power and less power on the sustainability side. So a lot of great advances. Thank you so much for spending the time and giving us an overview of the innovation around Xeon and the ProLiant Gen 11. Appreciate your time. Appreciate it. >> You're welcome. Thanks for having us. >> You're watching "theCUBE's" coverage of "Compute Engineered for Your Hybrid World," sponsored by HPE and Intel. I'm John Furrier with "theCUBE." Thanks for watching. (upbeat music)
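The firmware protections Darren described are anchored in iLO, and ProLiant servers expose a DMTF Redfish REST API through iLO that can be used to audit exactly which firmware components are present on a system. The sketch below assumes the standard Redfish firmware inventory path and placeholder host and credentials; property names can vary by iLO firmware version, so treat it as a starting point rather than HPE-supplied tooling.

```python
"""List firmware inventory from a ProLiant BMC (iLO) over Redfish.

Sketch only: assumes the standard Redfish paths (/redfish/v1/...);
host, credentials, and exact properties are placeholders/assumptions.
verify=False is a lab shortcut; use proper TLS verification in production.
"""
import requests

ILO_HOST = "https://ilo.example.internal"   # placeholder BMC address
AUTH = ("admin", "password")                 # placeholder credentials

def get(path):
    r = requests.get(ILO_HOST + path, auth=AUTH, verify=False, timeout=10)
    r.raise_for_status()
    return r.json()

def firmware_inventory():
    """Print each firmware component and its version."""
    index = get("/redfish/v1/UpdateService/FirmwareInventory")
    for member in index.get("Members", []):
        item = get(member["@odata.id"])
        print(f'{item.get("Name", "?")}: version {item.get("Version", "?")}')

if __name__ == "__main__":
    firmware_inventory()
```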
HPE Compute Engineered for your Hybrid World - Containers to Deploy Higher Performance AI Applications
>> Hello, everyone. Welcome to theCUBE's coverage of "Compute Engineered for your Hybrid World," sponsored by HPE and Intel. Today we're going to discuss the new 4th Gen Intel Xeon Scalable process impact on containers and AI. I'm John Furrier, your host of theCUBE, and I'm joined by three experts to guide us along. We have Jordan Plum, Senior Director of AI and products for Intel, Bradley Sweeney, Big Data and AI Product Manager, Mainstream Compute Workloads at HPE, and Gary Wang, Containers Product Manager, Mainstream Compute Workloads at HPE. Welcome to the program gentlemen. Thanks for coming on. >> Thanks John. >> Thank you for having us. >> This segment is going to be talking about containers to deploy high performance AI applications. This is a really important area right now. We're seeing a lot more AI deployed, kind of next gen AI coming. How is HPE supporting and testing and delivering containers for AI? >> Yeah, so what we're doing from HPE's perspective is we're taking these container platforms, combining with the next generation Intel servers to fully validate the deployment of the containers. So what we're doing is we're publishing the reference architectures. We're creating these automation scripts, and also creating a monitoring and security strategy for these container platforms. So for customers to easily deploy these Kubernete clusters and to easily secure their community environments. >> Gary, give us a quick overview of the new Proliant DL 360 and 380 Gen 11 servers. >> Yeah, the load, for example, for container platforms what we're seeing mostly is the DL 360 and DL 380 for matching really well for container use cases, especially for AI. The DL 360, with the expended now the DDR five memory and the new PCI five slots really, really helps the speeds to deploy these container environments and also to grow the data that's required to store it within these container environments. So for example, like the DL 380 if you want to deploy a data fabric whether it's the Ezmeral data fabric or different vendors data fabric software you can do so with the DL 360 and DL 380 with the new Intel Xeon processors. >> How does HP help customers with Kubernetes deployments? >> Yeah, like I mentioned earlier so we do a full validation to ensure the container deployment is easy and it's fast. So we create these automation scripts and then we publish them on GitHub for customers to use and to reference. So they can take that and then they can adjust as they need to. But following the deployment guide that we provide will make the, deploy the community deployment much easier, much faster. So we also have demo videos that's also published and then for reference architecture document that's published to guide the customer step by step through the process. >> Great stuff. Thanks everyone. We'll be going to take a quick break here and come back. We're going to do a deep dive on the fourth gen Intel Xeon scalable process and the impact on AI and containers. You're watching theCUBE, the leader in tech coverage. We'll be right back. (intense music) Hey, welcome back to theCUBE's continuing coverage of "Compute Engineered for your Hybrid World" series. I'm John Furrier with the Cube, joined by Jordan Plum with Intel, Bradley Sweeney with HPE, and Gary Wang from HPE. We're going to do a drill down and do a deeper dive into the AI containers with the fourth gen Intel Xeon scalable processors we appreciate your time coming in. Jordan, great to see you. 
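Picking up on the validated Kubernetes deployments Gary described, once the published automation scripts have stood a cluster up, a quick health gate is to confirm every node reports Ready. The sketch below uses the official Python kubernetes client and assumes a valid kubeconfig is already in place; it is illustrative, not part of HPE's published scripts.

```python
"""Verify all nodes in a newly deployed Kubernetes cluster are Ready.

Sketch using the official `kubernetes` Python client; assumes a valid
kubeconfig (e.g. produced by the cluster deployment automation).
"""
from kubernetes import client, config

def node_report():
    config.load_kube_config()          # reads ~/.kube/config by default
    v1 = client.CoreV1Api()
    not_ready = []
    for node in v1.list_node().items:
        ready = any(c.type == "Ready" and c.status == "True"
                    for c in node.status.conditions)
        print(f"{node.metadata.name}: {'Ready' if ready else 'NotReady'}")
        if not ready:
            not_ready.append(node.metadata.name)
    return not_ready

if __name__ == "__main__":
    bad = node_report()
    raise SystemExit(1 if bad else 0)
```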
I got to ask you right out of the gate, what is the view right now in terms of Intel's approach to containers for AI? It's hot right now. AI is booming. You're seeing kind of next gen use cases. What's your approach to containers relative to AI? >> Thanks John and thanks for the question. With the fourth generation Xeon scalable processor launch we have tested and validated this platform with over 400 deep learning and machine learning models and workloads. These models and workloads are publicly available in the framework repositories and they can be downloaded by anybody. Yet customers are not only looking for model validation they're looking for model performance and performance is usually a combination of a given throughput at a target latency. And to do that in the data center all the way to the factory floor, this is not always delivered from these generic proxy models that are publicly available in the industry. >> You know, performance is critical. We're seeing more and more developers saying, "Hey, I want to go faster on a better platform, faster all the time." No one wants to run slower stuff, that's for sure. Can you talk more about the different container approaches Intel is pursuing? >> Sure. First our approach is to meet the customers where they are and help them build and deploy AI everywhere. Some customers just want to focus on deployment they have more mature use cases, and they just want to download a model that works that's high performing and run. Others are really focused more on development and innovation. They want to build and train models from scratch or at least highly customize them. Therefore we have several container approaches to accelerate the customer's time to solution and help them meet their business SLA along their AI journey. >> So what developers can just download these containers and just go? >> Yeah, so let me talk about the different kinds of containers we have. We start off with pre-trained containers. We'll have about 55 or more of these containers where the model is actually pre-trained, highly performant, some are optimized for low latency, others are optimized for throughput and the customers can just download these from Intel's website or from HPE and they can just go into production right away. >> That's great. A lot of choice. People can just get jump right in. That's awesome. Good, good choice for developers. They want more faster velocity. We know that. What else does Intel provide? Can you share some thoughts there? What you guys else provide developers? >> Yeah, so we talked about how hey some are just focused on deployment and they maybe they have more mature use cases. Other customers really want to do some more customization or optimization. So we have another class of containers called development containers and this includes not just the kind of a model itself but it's integrated with the framework and some other capabilities and techniques like model serving. So now that customers can download just not only the model but an entire AI stack and they can be sort of do some optimizations but they can also be sure that Intel has optimized that specific stack on top of the HPE servers. >> So it sounds simple to just get started using the DL model and containers. Is that it? Where, what else are customers looking for? What can you take a little bit deeper? >> Yeah, not quite. Well, while the customer customer's ability to reproduce performance on their site that HPE and Intel have measured in our own labs is fantastic. 
That's not actually what the customer is only trying to do. They're actually building very complex end-to-end AI pipelines, okay? And a lot of data scientists are really good at building models, really good at building algorithms but they're less experienced in building end-to-end pipelines especially 'cause the number of use cases end-to-end are kind of infinite. So we are building end-to-end pipeline containers for use cases like media analytics and sentiment analysis, anomaly detection. Therefore a customer can download these end-to-end containers, right? They can either use them as a reference, just like, see how we built them and maybe they have some changes in their own data center where they like to use different tools, but they can just see, "Okay this is what's possible with an end-to-end container on top of an HPE server." And other cases they could actually, if the overlap in the use case is pretty close, they can just take our containers and go directly into production. So this provides developers, all three types of containers that I discussed provide developers an easy starting point to get them up and running quickly and make them productive. And that's a really important point. You talked a lot about performance, John. But really when we talk to data scientists what they really want to be is productive, right? They're under pressure to change the business to transform the business and containers is a great way to get started fast >> People take product productivity, you know, seriously now with developer productivity is the hottest trend obviously they want performance. Totally nailed it. Where can customers get these containers? >> Right. Great, thank you John. Our pre-trained model containers, our developmental containers, and our end-to-end containers are available at intel.com at the developer catalog. But we'd also post these on many third party marketplaces that other people like to pull containers from. And they're frequently updated. >> Love the developer productivity angle. Great stuff. We've still got more to discuss with Jordan, Bradley, and Gary. We're going to take a short break here. You're watching theCUBE, the leader in high tech coverage. We'll be right back. (intense music) Welcome back to theCUBE's coverage of "Compute Engineered for your Hybrid World." I'm John Furrier with theCUBE and we'll be discussing and wrapping up our discussion on containers to deploy high performance AI. This is a great segment on really a lot of demand for AI and the applications involved. And we got the fourth gen Intel Xeon scalable processors with HP Gen 11 servers. Bradley, what is the top AI use case that Gen 11 HP Proliant servers are optimized for? >> Yeah, thanks John. I would have to say intelligent video analytics. It's a use case that's supplied across industries and verticals. For example, a smart hospital solution that we conducted with Nvidia and Artisight in our previous customer success we've seen 5% more hospital procedures, a 16 times return on investment using operating room coordination. With that IVA, so with the Gen 11 DL 380 that we provide using the the Intel four gen Xeon processors it can really support workloads at scale. Whether that is a smart hospital solution whether that's manufacturing at the edge security camera integration, we can do it all with Intel. 
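The pre-trained, development, and end-to-end containers Jordan describes are consumed like any other OCI image. As an illustration, the sketch below pulls and starts a container with the Docker SDK for Python; the image tag and data path are placeholders rather than actual catalog names, so substitute the tag you download from Intel's developer catalog or a third-party marketplace.

```python
"""Pull and run a pre-trained inference container (sketch).

Uses the Docker SDK for Python. The image name and mount path below are
placeholders; replace them with the real tag from the catalog you use.
"""
import docker

IMAGE = "intel/example-pretrained-model:latest"   # placeholder image tag

def run_inference_container():
    client = docker.from_env()
    print(f"Pulling {IMAGE} ...")
    client.images.pull(IMAGE)
    # Run detached, mounting a local folder with input data (placeholder path).
    container = client.containers.run(
        IMAGE,
        detach=True,
        volumes={"/data/inference": {"bind": "/workspace/data", "mode": "ro"}},
    )
    print(f"Started container {container.short_id}")
    return container

if __name__ == "__main__":
    run_inference_container()
```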
>> You know what's really great about AI right now you're starting to see people starting to figure out kind of where the value is does a lot of the heavy lifting on setting things up to make humans more productive. This has been clearly now kind of going neck level. You're seeing it all in the media now and all these new tools coming out. How does HPE make it easier for customers to manage their AI workloads? I imagine there's going to be a surge in demand. How are you guys making it easier to manage their AI workloads? >> Well, I would say the biggest way we do this is through GreenLake, which is our IT as a service model. So customers deploying AI workloads can get fully-managed services to optimize not only their operations but also their spending and the cost that they're putting towards it. In addition to that we have our Gen 11 reliance servers equipped with iLO 6 technology. What this does is allows customers to securely manage their server complete environment from anywhere in the world remotely. >> Any last thoughts or message on the overall fourth gen intel Xeon based Proliant Gen 11 servers? How they will improve workload performance? >> You know, with this generation, obviously the performance is only getting ramped up as the needs and requirements for customers grow. We partner with Intel to support that. >> Jordan, gimme the last word on the container's effect on AI applications. Your thoughts as we close out. >> Yeah, great. I think it's important to remember that containers themselves don't deliver performance, right? The AI stack is a very complex set of software that's compiled together and what we're doing together is to make it easier for customers to get access to that software, to make sure it all works well together and that it can be easily installed and run on sort of a cloud native infrastructure that's hosted by HPE Proliant servers. Hence the title of this talk. How to use Containers to Deploy High Performance AI Applications. Thank you. >> Gentlemen. Thank you for your time on the Compute Engineered for your Hybrid World sponsored by HPE and Intel. Again, I love this segment for AI applications Containers to Deploy Higher Performance. This is a great topic. Thanks for your time. >> Thank you. >> Thanks John. >> Okay, I'm John. We'll be back with more coverage. See you soon. (soft music)
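Bradley's point about iLO 6 letting operators manage Gen 11 systems remotely rests on the same Redfish API shown earlier. Below is a short sketch of a remote health and power check; the endpoint path is the standard Redfish ComputerSystem resource, and the host and credentials are placeholders.

```python
"""Remote power/health check against an iLO-managed server (sketch).

Standard Redfish ComputerSystem properties (PowerState, Status.Health);
host and credentials are placeholders, and TLS verification is skipped
here only for brevity.
"""
import requests

ILO = "https://ilo.example.internal"      # placeholder BMC address
AUTH = ("admin", "password")              # placeholder credentials

def system_summary(path="/redfish/v1/Systems/1"):
    """Fetch power state and rolled-up health for the first system."""
    r = requests.get(ILO + path, auth=AUTH, verify=False, timeout=10)
    r.raise_for_status()
    body = r.json()
    return body.get("PowerState"), body.get("Status", {}).get("Health")

if __name__ == "__main__":
    power, health = system_summary()
    print(f"PowerState={power}  Health={health}")
```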
HPE Compute Engineered for your Hybrid World - Accelerate VDI at the Edge
>> Hello everyone. Welcome to theCUBEs coverage of Compute Engineered for your Hybrid World sponsored by HPE and Intel. Today we're going to dive into advanced performance of VDI with the fourth gen Intel Zion scalable processors. Hello I'm John Furrier, the host of theCUBE. My guests today are Alan Chu, Director of Data Center Performance and Competition for Intel as well as Denis Kondakov who's the VDI product manager at HPE, and also joining us is Cynthia Sustiva, CAD/CAM product manager at HPE. Thanks for coming on, really appreciate you guys taking the time. >> Thank you. >> So accelerating VDI to the Edge. That's the topic of this topic here today. Let's get into it, Dennis, tell us about the new HPE ProLiant DL321 Gen 11 server. >> Okay, absolutely. Hello everybody. So HP ProLiant DL320 Gen 11 server is the new age center CCO and density optimized compact server, compact form factor server. It enables to modernize and power at the next generation of workloads in the diverse rec environment at the Edge in an industry standard designed with flexible scale for advanced graphics and compute. So it is one unit, one processor rec optimized server that can be deployed in the enterprise data center as well as at the remote office at end age. >> Cynthia HPE has announced another server, the ProLiant ML350. What can you tell us about that? >> Yeah, so the HPE ProLiant ML350 Gen 11 server is a powerful tower solution for a wide range of workloads. It is ideal for remote office compute with NextGen performance and expandability with two processors in tower form factor. This enables the server to be used not only in the data center environment, but also in the open office space as a powerful workstation use case. >> Dennis mentioned both servers are empowered by the fourth gen Intel Zion scale of process. Can you talk about the relationship between Intel HPE to get this done? How do you guys come together, what's behind the scenes? Share as much as you can. >> Yeah, thanks a lot John. So without a doubt it takes a lot to put all this together and I think the partnership that HPE and Intel bring together is a little bit of a critical point for us to be able to deliver to our customers. And I'm really thrilled to say that these leading Edge solutions that Dennis and Cynthia just talked about, they're built on the foundation of our fourth Gen Z on scalable platform that's trying to meet a wide variety of deployments for today and into the future. So I think the key point of it is we're together trying to drive leading performance with built-in acceleration and in order to deliver a lot of the business values to our customers, both HP and Intels, look to scale, drive down costs and deliver new services. >> You got the fourth Gen Z on, you got the Gen 11 and multiple ProLiants, a lot of action going on. Again, I love when these next gens come out. Can each of you guys comment and share what are the use cases for each of the systems? Because I think what we're looking at here is the next level innovation. What are some of the use cases on the systems? >> Yeah, so for the ML350, in the modern world where more and more data are generated at the Edge, we need to deploy computer infrastructure where the data is generated. So smaller form factor service will satisfy the requirements of S&B customers or remote and branch offices to deliver required performance redundancy where we're needed. 
This type of locations can be lacking dedicated facilities with strict humidity, temperature and noise isolation control. The server, the ML350 Gen 11 can be used as a powerful workstation sitting under a desk in the office or open space as well as the server for visualized workloads. It is a productivity workhorse with the ability to scale and adapt to any environment. One of the use cases can be for hosting digital workplace for manufacturing CAD/CAM engineering or oil and gas customers industry. So this server can be used as a high end bare metal workstation for local end users or it can be virtualized desktop solution environments for local and remote users. And talk about the DL320 Gen 11, I will pass it on to Dennis. >> Okay. >> Sure. So when we are talking about age of location we are talking about very specific requirements. So we need to provide solution building blocks that will empower and performance efficient, secure available for scaling up and down in a smaller increments than compared to the enterprise data center and of course redundant. So DL 320 Gen 11 server is the perfect server to satisfy all of those requirements. So for example, S&B customers can build a video solution, for example starting with just two HP ProLiant TL320 Gen 11 servers that will provide sufficient performance for high density video solution and at the same time be redundant and enable it for scaling up as required. So for VGI use cases it can be used for high density general VDI without GP acceleration or for a high performance VDI with virtual VGPU. So thanks to the modern modular architecture that is used on the server, it can be tailored for GPU or high density storage deployment with software defined compute and storage environment and to provide greater details on your Intel view I'm going to pass to Alan. >> Thanks a lot Dennis and I loved how you're both seeing the importance of how we scale and the applicability of the use cases of both the ML350 and DL320 solutions. So scalability is certainly a key tenant towards how we're delivering Intel's Zion scalable platform. It is called Zion scalable after all. And we know that deployments are happening in all different sorts of environments. And I think Cynthia you talked a little bit about kind of a environmental factors that go into how we're designing and I think a lot of people think of a traditional data center with all the bells and whistles and cooling technology where it sometimes might just be a dusty closet in the Edge. So we're defining fortunes you see on scalable to kind of tackle all those different environments and keep that in mind. Our SKUs range from low to high power, general purpose to segment optimize. We're supporting long life use cases so that all goes into account in delivering value to our customers. A lot of the latency sensitive nature of these Edge deployments also benefit greatly from monolithic architectures. And with our latest CPUs we do maintain quite a bit of that with many of our SKUs and delivering higher frequencies along with those SKUs optimized for those specific workloads in networking. So in the end we're looking to drive scalability. We're looking to drive value in a lot of our end users most important KPIs, whether it's latency throughput or efficiency and 4th Gen Z on scalable is looking to deliver that with 60 cores up to 60 cores, the most builtin accelerators of any CPUs in the market. And really the true technology transitions of the platform with DDR5, PCIE, Gen five and CXL. 
>> Love the scalability story, love the performance. We're going to take a break. Thanks Cynthia, Dennis. Now we're going to come back on our next segment after a quick break to discuss the performance and the benefits of the fourth Gen Intel Zion Scalable. You're watching theCUBE, the leader in high tech coverage, be right back. Welcome back around. We're continuing theCUBE's coverage of compute engineer for your hybrid world. I'm John Furrier, I'm joined by Alan Chu from Intel and Denis Konikoff and Cynthia Sistia from HPE. Welcome back. Cynthia, let's start with you. Can you tell us the benefits of the fourth Gen Intel Zion scale process for the HP Gen 11 server? >> Yeah, so HP ProLiant Gen 11 servers support DDR five memory which delivers increased bandwidth and lower power consumption. There are 32 DDR five dim slots with up to eight terabyte total on ML350 and 16 DDR five dim slots with up to two terabytes total on DL320. So we deliver more memory at a greater bandwidth. Also PCIE 5.0 delivers an increased bandwidth and greater number of lanes. So when we say increased number of lanes we need to remember that each lane delivers more bandwidth than lanes of the previous generation plus. Also a flexible storage configuration on HPDO 320 Gen 11 makes it an ideal server for establishing software defined compute and storage solution at the Edge. When we consider a server for VDI workloads, we need to keep the right balance between the number of cords and CPU frequency in order to deliver the desire environment density and noncompromised user experience. So the new server generation supports a greater number of single wide and global wide GPU use to deliver more graphic accelerated virtual desktops per server unit than ever before. HPE ProLiant ML 350 Gen 11 server supports up to four double wide GPUs or up to eight single wide GPUs. When the signing GPU accelerated solutions the number of GPUs available in the system and consistently the number of BGPUs that can be provisioned for VMs in the binding factor rather than CPU course or memory. So HPE ProLiant Gen 11 servers with Intel fourth generation science scalable processors enable us to deliver more virtual desktops per server than ever before. And with that I will pass it on to Alan to provide more details on the new Gen CPU performance. >> Thanks Cynthia. So you brought up I think a really great point earlier about the importance of achieving the right balance. So between the both of us, Intel and HPE, I'm sure we've heard countless feedback about how we should be optimizing efficiency for our customers and with four Gen Z and scalable in HP ProLiant Gen 11 servers I think we achieved just that with our built-in accelerator. So built-in acceleration delivers not only the revolutionary performance, but enables significant offload from valuable core execution. That offload unlocks a lot of previously unrealized execution efficiency. So for example, with quick assist technology built in, running engine X, TLS encryption to drive 65,000 connections per second we can offload up to 47% of the course that do other work. Accelerating AI inferences with AMX, that's 10X higher performance and we're now unlocking realtime inferencing. It's becoming an element in every workload from the data center to the Edge. And lastly, so with faster and more efficient database performance with RocksDB, we're executing with Intel in-memory analytics accelerator we're able to deliver 2X the performance per watt than prior gen. 
So I'll say it's that kind of offload that is really going to enable more and more virtualized desktops or users for any given deployment. >> Thanks everyone. We still got a lot more to discuss with Cynthia, Dennis and Allen, but we're going to take a break. Quick break before wrapping things up. You're watching theCUBE, the leader in tech coverage. We'll be right back. Okay, welcome back everyone to theCUBEs coverage of Compute Engineered for your Hybrid World. I'm John Furrier. We'll be wrapping up our discussion on advanced performance of VDI with the fourth gen Intel Zion scalable processers. Welcome back everyone. Dennis, we'll start with you. Let's continue our conversation and turn our attention to security. Obviously security is baked in from day zero as they say. What are some of the new security features or the key security features for the HP ProLiant Gen 11 server? >> Sure, I would like to start with the balance, right? We were talking about performance, we were talking about density, but Alan mentioned about the balance. So what about the security? The security is really important aspect especially if we're talking about solutions deployed at the H. When the security is not active but other aspects of the environment become non-important. And HP is uniquely positioned to deliver the best in class security solution on the market starting with the trusted supply chain and factories and silicon route of trust implemented from the factory. So the new ISO6 supports added protection leveraging SPDM for component authorization and not only enabled for the embedded server management, but also it is integrated with HP GreenLake compute ops manager that enables environment for secure and optimized configuration deployment and even lifecycle management starting from the single server deployed on the Edge and all the way up to the full scale distributed data center. So it brings uncompromised and trusted solution to customers fully protected at all tiers, hardware, firmware, hypervisor, operational system application and data. And the new intel CPUs play an important role in the securing of the platform. So Alan- >> Yeah, thanks. So Intel, I think our zero trust strategy toward security is a really great and a really strong parallel to all the focus that HPE is also bringing to that segment and market. We have even invested in a lot of hardware enabled security technologies like SGX designed to enhance data protection at rest in motion and in use. SGX'S application isolation is the most deployed, researched and battle tested confidential computing technology for the data center market and with the smallest trust boundary of any solution in market. So as we've talked about a little bit about virtualized use cases a lot of virtualized applications rely also on encryption whether bulk or specific ciphers. And this is again an area where we've seen the opportunity for offload to Intel's quick assist technology to encrypt within a single data flow. I think Intel and HP together, we are really providing security at all facets of execution today. >> I love that Software Guard Extension, SGX, also silicon root of trust. We've heard a lot about great stuff. Congratulations, security's very critical as we see more and more. Got to be embedded, got to be completely zero trust. Final question for you guys. Can you share any messages you'd like to share with the audience each of you, what should they walk away from this? What's in it for them? What does all this mean? >> Yeah, so I'll start. 
Yes, so to wrap it up, HPR Proliant Gen 11 servers are built on four generation science scalable processors to enable high density and extreme performance with high performance CDR five memory and PCI 5.0 plus HP engine engineered and validated workload solutions provide better ROI in any consumption model and prefer by a customer from Edge to Cloud. >> Dennis? >> And yeah, so you are talking about all of the great features that the new generation servers are bringing to our customers, but at the same time, customer IT organization should be ready to enable, configure, support, and fine tune all of these great features for the new server generation. And this is not an obvious task. It requires investments, skills, knowledge and experience. And HP is ready to step up and help customers at any desired skill with the HP Greenlake H2 cloud platform that enables customers for cloud like experience and convenience and the flexibility with the security of the infrastructure deployed in the private data center or in the Edge. So while consuming all of the HP solutions, customer have flexibility to choose the right level of the service delivered from HP GreenLake, starting from hardwares as a service and scale up or down is required to consume the full stack of the hardwares and software as a service with an option to paper use. >> Awesome. Alan, final word. >> Yeah. What should we walk away with? >> Yeah, thanks. So I'd say that we've talked a lot about the systems here in question with HP ProLiant Gen 11 and they're delivering on a lot of the business outcomes that our customers require in order to optimize for operational efficiency or to optimize for just to, well maybe just to enable what they want to do in, with their customers enabling new features, enabling new capabilities. Underpinning all of that is our fourth Gen Zion scalable platform. Whether it's the technology transitions that we're driving with DDR5 PCIA Gen 5 or the raw performance efficiency and scalability of the platform in CPU, I think we're here for our customers in delivering to it. >> That's great stuff. Alan, Dennis, Cynthia, thank you so much for taking the time to do a deep dive in the advanced performance of VDI with the fourth Gen Intel Zion scalable process. And congratulations on Gen 11 ProLiant. You get some great servers there and again next Gen's here. Thanks for taking the time. >> Thank you so much for having us here. >> Okay, this is theCUBEs keeps coverage of Compute Engineered for your Hybrid World sponsored by HP and Intel. I'm John Furrier for theCUBE. Accelerate VDI at the Edge. Thanks for watching.
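One practical takeaway from Cynthia's point that vGPU capacity, not cores or memory, is usually the binding factor: density planning starts with how many vGPU profiles fit on the installed GPUs. The sketch below is back-of-the-envelope arithmetic with hypothetical numbers (GPU count, framebuffer, and profile size are placeholders, not ML350 or DL320 specifications); real sizing should follow the GPU vendor's supported profile tables.

```python
"""Back-of-the-envelope VDI density estimate for a GPU-accelerated host.

All numbers below are hypothetical placeholders for illustration only.
"""

def desktops_per_server(num_gpus, framebuffer_gb_per_gpu, profile_gb):
    """vGPU-bound estimate: how many desktops fit on the installed GPUs."""
    vgpus_per_gpu = framebuffer_gb_per_gpu // profile_gb
    return num_gpus * vgpus_per_gpu

if __name__ == "__main__":
    # Example: 8 single-wide GPUs, 24 GB framebuffer each, 2 GB vGPU profile.
    estimate = desktops_per_server(num_gpus=8,
                                   framebuffer_gb_per_gpu=24,
                                   profile_gb=2)
    print(f"Estimated GPU-bound desktop count: {estimate}")  # 96
```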
HPE Compute Security - Kevin Depew, HPE & David Chang, AMD
>>Hey everyone, welcome to this event, HPE Compute Security. I'm your host, Lisa Martin. Kevin Depew joins me next, Senior Director, Future Server Architecture at HPE. Kevin, it's great to have you back on the program. >>Thanks, Lisa. I'm glad to be here. >>One of the topics that we're going to unpack in this segment is all about cybersecurity. If we think of how dramatically the landscape has changed in the last couple of years, I was looking at some numbers that HPE had provided: cybercrime will reach $10.5 trillion by 2025, and that's only a couple of years away. The average total cost of a data breach is now over $4 million, with 15% year-over-year cybercrime growth predicted over the next five years. It's no longer if we get hit, it's when, how often, and what's the severity. Talk to me about the current situation with the cybersecurity landscape that you're seeing. >>Yeah, the numbers you're talking about are just staggering, and that's exactly what we're seeing and exactly what we're hearing from our customers. Customers have too much to lose. The dollar cost is, like I said, staggering. And here at HPE we know we have a huge part to play, but we also know that we need partnerships across the industry to solve these problems. So we have partnered with our various partners to deliver these Gen 11 products, whether we're talking about partners like AMD or partners like our NIC vendors and storage card vendors. We know we can't solve the problem alone. We know the issue is huge, and like you said, the numbers are staggering. So we're really partnering with all the right players to ensure we have a secure solution, so we can stay ahead of the bad guys and try to limit the attacks on our customers. >>Right. Limit the damage. What are some of the things that you've seen particularly change in the last 18 months or so? Anything you can share with us that's eye-opening, more eye-opening than some of the stats we already shared? >>Well, there's been a massive number of attacks just in the last 12 months, but I wouldn't really say it's so much changed, because the number of attacks has been increasing dramatically for many, many years. It's just a very lucrative area for the bad guys, whether it's ransomware or stealing personal data, whatever it is. There's unfortunately a lot of money to be made from it, and a lot of money to be lost by the good guys, the good guys being our customers. So it's not so much that it's changed; it's that it's accelerating even faster, because it's becoming even more lucrative. So we have to stay ahead of these bad guys. One statistic for Microsoft operating environments: the number of attacks in the last year is up 50% year over year. That's a huge acceleration, and we've got to stay ahead of that. We have to make sure our customers don't get impacted at the level that this staggering number of attacks implies. The bad guys are out there. We've got to protect our customers from the bad guys. >>Absolutely. The acceleration that you talked about is kind of frightening. It's very eye-opening. We do know that security, we've talked about it for so long as a C-suite priority, a board-level priority.
We know that as some of the data that HPE e also sent over organizations are risking are, are listing cyber risks as a top five concern in their organization. IT budgets spend is going up where security is concerned. And so security security's on everyone's mind. In fact, the cube did, I guess in the middle part of last, I did a series on this really focusing on cybersecurity as a board issue and they went into how companies are structuring security teams changing their assumptions about the right security model, offense versus defense. But security's gone beyond the board, it's top of mind and it's on, it's in an integral part of every conversation. So my question for you is, when you're talking to customers, what are some of the key challenges that they're saying, Kevin, these are some of the things the landscape is accelerating, we know it's a matter of time. What are some of those challenges and that they're key pain points that they're coming to you to help solve? >>Yeah, at the highest level it's simply that security is incredibly important to them. We talked about the numbers. There's so much money to be lost that what they come to us and say, is security's important for us? What can you do to protect us? What can you do to prevent us from being one of those statistics? So at a high level, that's kind of what we're seeing at a, with a little more detail. We know that there's customers doing digital transformations. We know that there's customers going hybrid cloud, they've got a lot of initiatives on their own. They've gotta spend a lot of time and a lot of bandwidth tackling things that are important to their business. They just don't have the bandwidth to worry about yet. Another thing which is security. So we are doing everything we can and partnering with everyone we can to help solve those problems for customers. >>Cuz we're hearing, hey, this is huge, this is too big of a risk. How do you protect us? And by the way, we only have limited bandwidth, so what can we do? What we can do is make them assured that that platform is secure, that we're, we are creating a foundation for a very secure platform and that we've worked with our partners to secure all the pieces. So yes, they still have to worry about security, but there's pieces that we've taken care of that they don't have to worry about and there's capabilities that we've provided that they can use and we've made that easy so they can build su secure solutions on top of it. >>What are some of the things when you're in customer conversations, Kevin, that you talk about with customers in terms of what makes HPE E'S approach to security really unique? >>Well, I think a big thing is security is part of our, our dna. It's part of everything we do. Whether we're designing our own asics for our bmc, the ilo ASIC ILO six used on Gen 11, or whether it's our firmware stack, the ILO firmware, our our system, UFI firmware, all those pieces in everything we do. We're thinking about security. When we're building products in our factory, we're thinking about security. When we're think designing our supply chain, we're thinking about security. When we make requirements on our suppliers, we're driving security to be a key part of those components. So security is in our D N a security's top of mind. Security is something we think about in everything we do. We have to think like the bad guys, what could the bad guy take advantage of? What could the bad guy exploit? So we try to think like them so that we can protect our customers. 
>>And so security is something that really is pervasive across all of our development organizations, our supply chain organizations, our factories, and our partners. That's what we think is unique about HPE: because security is so important, and there are a whole lot of pieces of our ProLiant servers that we do ourselves that many others don't do themselves. And since we do it ourselves, we can make sure that security is in the design from the start and that those pieces work together in a secure manner. So we think that gives us an advantage from a security standpoint. >>Security is very much intentional at HPE. I was reading in some notes, and you just did a great job of talking about this, that fundamental security approach: security is fundamental to defend against threats that are increasingly complex, through what you also call an uncompromising focus on state-of-the-art security and innovations built into your DNA. And then organizations can protect their infrastructure, their workloads, and their data from the bad guys. Talk to us briefly in our final few minutes here, Kevin, about fundamental, uncompromising, protect, and the value in it for me as an HPE customer. >>Yeah, when we talk about fundamental, we're talking about those fundamental technologies that are part of our platform. Things like: we've integrated TPMs and soldered them down in our platforms; we now have platform certificates as a standard part of the platform; we have IDevID; and probably most importantly, our platforms continue to support what we really believe was a groundbreaking technology, Silicon Root of Trust, and what that's able to do. We have millions of lines of firmware code in our platforms, and with Silicon Root of Trust we can authenticate all of those lines of firmware, whether we're talking about the iLO 6 firmware, our UEFI firmware, or the CPLD in the system; there are other pieces of firmware too. We authenticate all of those to make sure that not a single line of code, not a single bit, has been changed by a bad guy, even if the bad guy has physical access to the platform. >>So that Silicon Root of Trust technology is making sure that when that system boots and hands off to the operating system and eventually the customer's application stack, it's starting with a solid foundation, a system that hasn't been compromised. And then we build other things into that Silicon Root of Trust, such as the ability to do the scans and the authentications at runtime, and the ability to automatically recover if we detect something has been compromised: we can automatically update that compromised piece of firmware to a good image before we've run it, because we never want to run firmware that's been compromised. That's all part of the Silicon Root of Trust solution, and that's a fundamental piece of the platform. And then when we talk about uncompromising, what we're really talking about there is how we don't compromise security. >>And one of the ways we do that is through an extension of our Silicon Root of Trust with a capability called SPDM. This is a technology we saw the need for: the need to authenticate our option cards and the firmware in those option cards. Silicon Root of Trust protects against many attacks, but one piece it didn't do is verify the actual option card firmware and the option cards.
So we knew that to solve that problem we would have to partner with others in the industry: our NIC vendors, our storage controller vendors, our GPU vendors. We worked with industry standards bodies and those other partners to design a capability that allows us to authenticate all of those devices, and we worked with those vendors to get the support both on their side and on our platform side, so that now Silicon Root of Trust has been extended to where we protect and trust those option cards as well. >>So with Uncompromising and with Protect, what we're talking about there is our capabilities around protecting against, for example, supply chain attacks. We have our trusted supply chain solution, which allows us to guarantee that what our server is when it leaves our factory will be what it is when it arrives at the customer. And if a bad guy does anything in that transit from our factory to the customer, they'll be able to detect that. So we enable certain capabilities by default, like a capability called server configuration lock, which can ensure that nothing in the server changed, whether it's firmware, hardware, configurations, swapping out processors, whatever it is. We'll detect if a bad guy did any of that, and the customer will know it before they deploy the system. That gets enabled by default. >>We have an intrusion detection technology option that is included by default when you use the trusted supply chain. That lets you know whether anybody opened that system up, even if the system's not plugged in: did somebody take the hood off and potentially do something malicious to it? We also enable a capability called UEFI Secure Boot, which can authenticate some of the drivers that are located on the option card itself. Those kinds of capabilities, and also iLO high security mode, get enabled by default. So all these things are enabled in the platform to ensure that if it's attacked going from our factory to the customer, it will be detected and the customer won't deploy a system that's been maliciously attacked. So that's- >>Got it. >>That's how we protect the customer through those capabilities. >>Outstanding. You mentioned partners. My last question for you, we've got about a minute left, Kevin, is to bring AMD into the conversation. Where do they fit in this? >>AMD is an absolutely crucial partner. No one company, even HPE, can do it all themselves. There are a lot of partnerships and a lot of synergies working with AMD. We've been working with AMD for almost 20 years, since we delivered our first AMD-based ProLiant back in 2004, the HP ProLiant DL585. So we've been working with them a long time. We work with them years ahead of when a processor is announced, and we benefit each other. We look at their designs and help them make their designs better, and they let us know about their technology so we can take advantage of it in our designs. They have a lot of security capabilities, like their memory encryption technologies, their AMD Secure Processor, and their Secure Encrypted Virtualization, which is an absolutely unique and breakthrough technology to protect virtual machines and hypervisor environments and protect them from malicious hypervisors. So they have some really great capabilities built into their processor, and we take advantage of the capabilities they have and ensure those are used in our solutions and in securing the platform.
So a really such >>A great, great partnership. Great synergies there. Kevin, thank you so much for joining me on the program, talking about compute security, what HPE is doing to ensure that security is fundamental, that it is unpromised and that your customers are protected end to end. We appreciate your insights, we appreciate your time. >>Thank you very much, Lisa. >>We've just had a great conversation with Kevin Depu. Now I get to talk with David Chang, data center solutions marketing lead at a md. David, welcome to the program. >>Thank, thank you. And thank you for having me. >>So one of the hot topics of conversation that we can't avoid is security. Talk to me about some of the things that AMD is seeing from the customer's perspective, why security is so important for businesses across industries. >>Yeah, sure. Yeah. Security is, is top of mind for, for almost every, every customer I'm talking to right now. You know, there's several key market drivers and, and trends, you know, in, out there today that's really needing a better and innovative solution for, for security, right? So, you know, the high cost of data breaches, for example, will cost enterprises in downtime of, of the data center. And that time is time that you're not making money, right? And potentially even leading to your, to the loss of customer confidence in your, in your cust in your company's offerings. So there's real costs that you, you know, our customers are facing every day not being prepared and not having proper security measures set up in the data center. In fact, according to to one report, over 400 high-tech threats are being introduced every minute. So every day, numerous new threats are popping up and they're just, you know, the, you know, the bad guys are just getting more and more sophisticated. So you have to take, you know, measures today and you have to protect yourself, you know, end to end with solutions like what a AM MD and HPE has to offer. >>Yeah, you talked about some of the costs there. They're exorbitant. I've seen recent figures about the average, you know, cost of data breacher ransomware is, is close to, is over $4 million, the cost of, of brand reputation you brought up. That's a great point because nobody wants to be the next headline and security, I'm sure in your experiences. It's a board level conversation. It's, it's absolutely table stakes for every organization. Let's talk a little bit about some of the specific things now that A M D and HPE E are doing. I know that you have a really solid focus on building security features into the EPIC processors. Talk to me a little bit about that focus and some of the great things that you're doing there. >>Yeah, so, you know, we partner with H P E for a long time now. I think it's almost 20 years that we've been in business together. And, and you know, we, we help, you know, we, we work together design in security features even before the silicons even, you know, even born. So, you know, we have a great relationship with, with, with all our partners, including hpe and you know, HPE has, you know, an end really great end to end security story and AMD fits really well into that. You know, if you kind of think about how security all started, you know, in, in the data center, you, you've had strategies around encryption of the, you know, the data in, in flight, the network security, you know, you know, VPNs and, and, and security on the NS. And, and even on the, on the hard drives, you know, data that's at rest. 
>>Encryption has been part of that strategy for a long time, but for ages nobody really thought about the actual data in use, which is the information being passed from the CPU to memory, and in virtualized environments to the virtual machines everybody uses now. For a long time nobody really thought about that third leg of encryption. So AMD comes in and says, hey, as the bad guys get more sophisticated, you have to start worrying about that too. For example, people tend to think of memory as non-persistent: after a certain time, the data in memory just goes away. >>But that's not true anymore, because a lot of memory modules can still retain data for up to 90 minutes after power loss, and with something as simple as compressed air or liquid nitrogen you can freeze memory DIMMs long enough to extract the data from a module for up to two or three hours. That's more than enough time to read valuable data, and even encryption keys, off of that memory module. Our world is getting more complex, and with more data out there and an insatiable need for compute and storage, data management and securing against those threats become all the more important, especially in virtualized environments like hyperconverged infrastructure or virtual desktops, where it's really hard to keep up with all the different attack surfaces. >>It sounds like what you were just talking about is that AMD has been able to identify yet another vulnerability, another attack surface in memory, and plug that hole for organizations that weren't able to do that before. >>Yeah. We started out with the belief that security needed to be scalable and able to adapt to changing environments, so our design philosophy is to keep building on those security features generation over generation and stay ahead of evolving attacks. A great example is in the third-gen EPYC CPU family, where we created a feature called SEV-SNP, which stands for Secure Encrypted Virtualization with Secure Nested Paging. It's aimed at hypervisor-based attacks, where bad actors write into memory to corrupt the data a virtual machine depends on. SEV-SNP was put in place to help secure against that before it became a problem, and you've heard in the news recently that it's becoming a bigger and bigger issue. The great news is that we had that feature built in before it became a big problem. >>And now you're on the fourth gen of those EPYC processors. Talk to me a little bit about some of the innovations in fourth gen. >>In fourth gen we added on top of that. The base of what we call Infinity Guard is the secure boot and the secure root of trust that we work with HPE on, the strong memory encryption, and SEV, the secure encrypted virtualization, with the SNP capabilities I talked about earlier. In the fourth gen we've added two times the number of SEV-SNP guests, for an even higher number of confidential VMs to support more customers than before. We've also added more guest protection from simultaneous multithreading (SMT) side-channel attacks. And while it's not officially part of Infinity Guard, we've added further acceleration that benefits the security of those confidential VMs with larger numbers of vCPUs, which basically means you can build larger VMs and still be secure. Lastly, we added even stronger AES encryption, going from 128-bit to 256-bit, which is military-grade encryption. That's really the de facto cryptography used for most applications by customers like the US federal government, and it's an essential element for memory security and HPC applications. I always say, if it's good enough for the US government, it's good enough for you. >>Exactly. Well, it's got to be. Talk a little bit about how AMD is doing this together with HPE, a little bit about the partnership, as we round out our conversation. >>Sure, absolutely. Security is only as strong as the layer below it, which is why modern security must be built in rather than bolted on or added after the fact. HPE and AMD developed this layered approach to protecting critical data together. Through our leadership in security features and innovations, we deliver a set of hardware-based features that help decrease potential attack surfaces, with a holistic approach that safeguards critical information across the entire system lifecycle. We provide the confidence of built-in silicon authentication on the world's most secure industry-standard servers, and a 360-degree approach that brings high availability to critical workloads while helping to defend against internal and external threats. Things like the HPE silicon root of trust with the trusted supply chain, which AMD is obviously part of, combined with AMD's Infinity Guard technology, really help provide that end-to-end data protection in today's business.
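To make the AES-128-to-AES-256 step above a bit more tangible, here is a minimal Python sketch using the third-party cryptography package. It only illustrates the key-size difference in software; EPYC's hardware memory encryption (SME/SEV) is transparent to the operating system and keeps its keys inside the AMD Secure Processor, so treat this as an illustration of the concept rather than of the feature itself.

```python
# Minimal illustration of the AES-128 -> AES-256 point above, using the
# third-party "cryptography" package. Hardware memory encryption (SME/SEV) is
# transparent and keyed inside the AMD Secure Processor; this only shows the
# difference in key sizes, not how the hardware feature works.
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
import os

plaintext = b"data an attacker might hope to lift from a frozen DIMM"

for bits in (128, 256):
    key = AESGCM.generate_key(bit_length=bits)
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    # Without the key the ciphertext is useless; brute force is ~2**bits trials.
    print(f"AES-{bits}: key={len(key) * 8} bits, keyspace=2**{bits}, "
          f"ciphertext={len(ciphertext)} bytes")
```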
And that is so critical for businesses in every industry. As you mentioned, the attackers are getting more and more sophisticated and the vulnerabilities are increasing, so the ability to have a partnership like HPE and AMD's to deliver that end-to-end data protection is table stakes for businesses. David, thank you so much for joining me on the program and really walking us through what AMD is doing with the fourth-gen EPYC processors and how you're working together with HPE to enable security to be successfully accomplished by businesses across industries. We appreciate your insights. >>Well, thank you again for having me, and we appreciate the partnership with HPE. >>And thank you for watching our special program, HPE Compute Security. I do have a call to action for you: go visit hpe.com/security/compute. Thanks for watching.
SUMMARY :
Lisa Martin talks with HPE's Kevin Depew about why compute security is a board-level concern, how HPE tries to think like attackers in order to protect customers, and how the silicon root of trust verifies firmware down to a single bit, automatically recovers if a compromise is detected, and has been extended to authenticate option cards. Kevin also covers the trusted supply chain, Server Configuration Lock, intrusion detection, UEFI Secure Boot, iLO high security mode, and HPE's nearly 20-year partnership with AMD. Lisa then speaks with AMD's David Chang about the rising cost of breaches, protecting data in use against memory attacks, SEV-SNP, and the fourth-gen EPYC Infinity Guard enhancements that, combined with HPE's silicon root of trust and trusted supply chain, deliver end-to-end protection.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Lisa Martin | PERSON | 0.99+ |
David Chang | PERSON | 0.99+ |
Kevin | PERSON | 0.99+ |
David | PERSON | 0.99+ |
Kevin Dee | PERSON | 0.99+ |
AMD | ORGANIZATION | 0.99+ |
Kevin Depew | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Lisa | PERSON | 0.99+ |
2004 | DATE | 0.99+ |
15% | QUANTITY | 0.99+ |
HP | ORGANIZATION | 0.99+ |
10.5 trillion | QUANTITY | 0.99+ |
HPE E | ORGANIZATION | 0.99+ |
H P E | ORGANIZATION | 0.99+ |
360 degree | QUANTITY | 0.99+ |
over $4 million | QUANTITY | 0.99+ |
2025 | DATE | 0.99+ |
fourth gen. | QUANTITY | 0.99+ |
fourth gen | QUANTITY | 0.99+ |
over 4 million | QUANTITY | 0.99+ |
DL 5 85 | COMMERCIAL_ITEM | 0.99+ |
256 bit | QUANTITY | 0.99+ |
last year | DATE | 0.99+ |
three hours | QUANTITY | 0.98+ |
amd | ORGANIZATION | 0.98+ |
128 bit | QUANTITY | 0.98+ |
over 400 high-tech threats | QUANTITY | 0.98+ |
HPE | ORGANIZATION | 0.98+ |
Infinity Guard | ORGANIZATION | 0.98+ |
one piece | QUANTITY | 0.98+ |
almost 20 years | QUANTITY | 0.98+ |
one | QUANTITY | 0.97+ |
millions of lines | QUANTITY | 0.97+ |
single bit | QUANTITY | 0.97+ |
50% | QUANTITY | 0.97+ |
one report | QUANTITY | 0.97+ |
One | QUANTITY | 0.97+ |
hpe | ORGANIZATION | 0.96+ |
third gen | QUANTITY | 0.96+ |
today | DATE | 0.96+ |
both | QUANTITY | 0.96+ |
H P V E | ORGANIZATION | 0.96+ |
first | QUANTITY | 0.95+ |
two | QUANTITY | 0.95+ |
third leg | QUANTITY | 0.94+ |
last couple of years | DATE | 0.93+ |
Silicon Rivers | ORGANIZATION | 0.92+ |
up to 90 minutes | QUANTITY | 0.92+ |
S Spdm | ORGANIZATION | 0.9+ |
ILO | ORGANIZATION | 0.88+ |
AM | ORGANIZATION | 0.88+ |
US government | ORGANIZATION | 0.86+ |
single line | QUANTITY | 0.85+ |
last 18 months | DATE | 0.82+ |
Gen 11 | QUANTITY | 0.81+ |
last 12 months | DATE | 0.81+ |
AM MD base ProLiant | COMMERCIAL_ITEM | 0.8+ |
next five years | DATE | 0.8+ |
up to two | QUANTITY | 0.8+ |
Protect | ORGANIZATION | 0.79+ |
couple years | QUANTITY | 0.79+ |
Mohan Rokkam & Greg Gibby | 4th Gen AMD EPYC on Dell PowerEdge: Virtualization
(cheerful music) >> Welcome to theCUBE's continuing coverage of AMD's 4th Generation EPYC launch. I'm Dave Nicholson, and I'm here in our Palo Alto studios talking to Greg Gibby, senior product manager for data center products at AMD, and Mohan Rokkam, technical marketing engineer at Dell. Welcome, gentlemen. >> Mohan: Hello, hello. >> Greg: Thank you. Glad to be here. >> Good to see each of you. Just really quickly, I want to start out. Let us know a little bit about yourselves. Mohan, let's start with you. What do you do at Dell exactly? >> I'm a technical marketing engineer at Dell. I've been with Dell for around 15 years now, and my goal is really to look at Dell PowerEdge servers and see how customers can take advantage of some of the features we have, especially with the AMD EPYC processors that have just come out. >> Greg, and what do you do at AMD? >> Yeah, I manage our software-defined infrastructure solutions team, and it's really cradle to grave: we work with the ISVs in the market, so VMware, Nutanix, Microsoft, et cetera, to integrate the features we're putting into our processors and make sure they're ready to go and enabled. Then we work with our valued partners like Dell on putting those into actual solutions that customers can buy, and we work with them to sell those solutions into the market. >> Before we get into the details on the 4th Generation EPYC launch and what that means and why people should care, Mohan, maybe you can tell us a little about the relationship between Dell and AMD, how that works, and then Greg, if you've got commentary on that afterwards, that'd be great. Yeah, Mohan. >> Absolutely. Dell and AMD have a long-standing partnership, especially now with the EPYC series. We have had products since the first-generation EPYC, and we have been doing solutions across the whole range of the Dell ecosystem. We have integrated AMD quite thoroughly and effectively, and we really love how performant these systems are. >> Dave: Greg, what are your thoughts? >> Yeah, the other thing to point out is that we both have really strong relationships across the entire ecosystem: the memory vendors, the software providers, et cetera. We have technical relationships and work with them to optimize solutions, so that ultimately when the customer buys them, they get a great user experience right out of the box. >> So, Mohan, I know that you and your team do a lot of performance validation testing as time goes by, and I suspect you had early releases of the 4th Gen EPYC processor technology. What have you been seeing so far? What can you tell us? >> AMD has definitely knocked it out of the park. Time and again over the past four generations, in the past five years alone, we have done database work where we have seen 5x the performance. Across the board, AMD is the leader in benchmarks. We have done virtualization where we consolidated from five systems into one. We have world records in AI, world records in databases, world records in virtualization. The AMD EPYC solutions have been absolutely performant. I'll leave you with one number here: when we went from top-of-stack Milan to top-of-stack Genoa, we saw a performance bump of 120%, and that number just blew my mind. >> That prompts a question for Greg. Often we industry insiders think in terms of performance gains over the last generation or the current generation.
A lot of customers in the real world, however, are N-2; they're a ways back. So two points on that. First, the kinds of increases the average person is going to see when they move to this architecture, and correct me if I'm wrong, are even more significant than a lot of the headline numbers, because they're moving two generations. And then the other thing, Greg, and I like very long, complicated questions as you can tell: is it okay for people to skip generations, or make the case for upgrades, I guess, is the question. >> Well, yeah, a couple of thoughts on that. Mohan talked about the 5x improvement over the generations that we've seen. The other key point is that we've made significant process improvements along the way, moving from seven nanometer to now five nanometer, and that's really improving the performance per watt that customers can realize as well. And when we look at why a customer would want to upgrade, I want to rephrase that as: why aren't you? There is a real cost to not upgrading. When you look at infrastructure, the average age of a server in the data center is over five years old, and if you look at the most popular processors sold in that timeframe, it's 8, 10, 12 cores. So now you've got a bunch of servers that you need in order to deliver your applications and meet your SLAs to your end users, and all of those servers pull power, require maintenance, and have the opportunity to go down. You've got to pay licensing and service and support costs, and when all of those costs roll up, even though the hardware is paid for, just keeping the lights on is very expensive, and that's not even counting the soft costs of unplanned downtime and missed SLAs. Now, if you refresh with processors that have 32, 64, 96 cores, you can consolidate that infrastructure and reduce your total power bill, reduce your CapEx, reduce your ongoing OpEx, improve your performance, and improve your security profile. So it really is more cost-effective to refresh than not to refresh. >> So, Mohan, what has your experience been, double-clicking on this topic of consolidation? I know we're going to talk about virtualization and some of the results that you've seen. What have you seen in that regard? Does this favor better consolidation in virtualized environments? And are you both assuring us that the ROI and TCO pencil out on these new big, bad machines? >> Greg definitely hit the nail on the head. We are seeing tremendous savings, really, if you're consolidating from two generations old. We went from, as I said, five down to one: you're going from five full servers, probably paid off, down to one single server. If you look at licensing costs, which with things like VMware do get pretty expensive, that matters. Yes, we are at 32, 64, 96 cores, but compared to the licensing costs of 10 cores across two sockets, the savings are still pretty significant. That's one huge thing.
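As a back-of-the-envelope illustration of that refresh math, the sketch below totals power and licensing for a legacy fleet versus a consolidated replacement. Every figure in it (server counts, wattages, electricity rate, license cost) is an assumption chosen for illustration, not Dell, AMD, or VMware pricing; substitute your own numbers.

```python
# Back-of-the-envelope refresh math for the consolidation argument above.
# Every number here is an assumption for illustration only.
OLD_SERVERS = 5                  # legacy boxes being retired (assumed)
NEW_SERVERS = 1                  # one new-generation server replacing them (assumed)
OLD_WATTS, NEW_WATTS = 450, 700  # average draw per server, watts (assumed)
KWH_RATE = 0.15                  # $ per kWh (assumed)
LICENSE_PER_SERVER = 7000        # $ per server per year, e.g. virtualization licensing (assumed)
YEARS = 3

def power_cost(servers: int, watts: int) -> float:
    """Electricity cost over the evaluation period."""
    kwh = servers * watts / 1000 * 24 * 365 * YEARS
    return kwh * KWH_RATE

old_cost = power_cost(OLD_SERVERS, OLD_WATTS) + OLD_SERVERS * LICENSE_PER_SERVER * YEARS
new_cost = power_cost(NEW_SERVERS, NEW_WATTS) + NEW_SERVERS * LICENSE_PER_SERVER * YEARS

print(f"3-year power + license, legacy fleet:  ${old_cost:,.0f}")
print(f"3-year power + license, consolidated:  ${new_cost:,.0f}")
print(f"estimated savings (before hardware cost): ${old_cost - new_cost:,.0f}")
```

With these particular assumptions the licensing line dominates, which matches the point being made about per-socket licensing; the shape of the answer changes with your own rates and ratios.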
Another thing that really drives upgrades is security; in today's environment, security becomes a major driving factor for refreshes. Dell has its own cyber-resilient architecture, as we call it, and that really is integrated from the processor all the way up into the OS. Those are some of the features customers can take advantage of to help protect their ecosystems. >> So what kinds of virtualized environments did you test? >> We have done virtualization across the primary platforms: VMware, the Azure Stack, and we have looked at Nutanix. PowerFlex is another one within Dell, and we have vSAN Ready Nodes and OpenShift. We have a broad variety of solutions from Dell, and AMD fits into almost every one of them very well. >> So where does hyperconverged infrastructure fit into this puzzle? We can think of a server as something that contains not only AMD's latest architecture but also the latest PCIe bus technology, faster memory, faster storage cards, faster NICs; all of that comes together. But how does that play out in Dell's hyperconverged infrastructure, or HCI, strategy? >> Dell is a leader in hyperconverged infrastructure. We have the very popular VxRail line, we have PowerFlex, which is now going into the AWS ecosystem as well, Nutanix, and of course Azure Stack. With all of these, when you look at AMD, we have up to 96 cores coming in, and we have PCIe Gen 5, which means you can now connect dual-port 100 and 200 gig NICs and get line rate on those, so you can connect to your ecosystem. And I don't know if you've seen the news, but 200 and 400 gig routers and switches are selling out; the network infrastructure is booming. If you look at the AI/ML side of things and the VDI side of things, accelerator cards are becoming more powerful and more popular, and they need the higher-end data path that PCIe Gen 5 brings to the table. DDR5 is another huge improvement in terms of performance and latency. So when we take all of this together and talk about hyperconverged, it all adds up to making sure that, A, with hyperconverged you get ease of management, but B, just because you have ease of management doesn't mean you need to compromise on anything. The AMD servers are effectively a no-compromise offering that we at Dell are able to offer to our customers. >> So Greg, I've got a question a little bit from left field for you. We covered the Supercomputing Conference 2022 in Dallas a couple of weeks ago, and there was a lot of discussion of the current processor manufacturer battles and a lot of buzz around 4th Gen EPYC being launched and what's coming over the next year. Do you have any thoughts on what this architecture can deliver for us in terms of things like AI? We talk about virtualization, but if you look out over the next year, do you see this kind of architecture driving significant change in the world? >> Yeah, it has real potential to do that just from the building blocks. We have what we call our chiplet architecture: you have an IO die and then the core complexes that go around it, and we integrate it all with our Infinity Fabric. That architecture allows us, if we wanted to, to replace some of those CCDs with specific accelerators. So when we look two, three, four years down the road, that architecture and that capability are already built into what we're delivering and can easily be moved in.
We just need to make sure that when we do that, the power required and the software and those accelerators actually deliver better performance as a dedicated engine versus just using standard CPUs. The other thing I would say is to look at emerging workloads. Data center modernization and cloud native are the buzzwords, right? In these container environments, AMD's architecture really just screams support for that type of environment, especially when you get into these larger core counts and the consolidation that Mohan talked about. A lot of customers have concerns around having a single point of failure with more than X number of cores; in a container environment that blast radius becomes less of a concern. So when you look at cloud native, containerized applications, and data center modernization, AMD is extremely well positioned to take advantage of those use cases as well. >> Yeah, Mohan, and when we talk about virtualization, I think sometimes we have to remind everyone that we're talking not only about virtualization that has a full-blown operating system in the bucket, but also virtualization where the containers have microservices and things like that. I think you had something to add, Mohan. >> I did, and I think going back to the accelerator side of the business: when we look at the current technology, AMD has done a fantastic job of adding in features like AVX-512, bfloat16, and INT8 support. Some of what these do is act, effectively, as built-in accelerators for certain workloads, especially in the AI and media spaces. One of the use cases we look at, for example, is inference. Traditionally we have used external accelerator cards, but for some of the entry-level and mid-level use cases the CPU is going to work just fine, especially with the fantastic performance we're seeing from these newer CPUs. The built-in acceleration helps get us to the point where, at the edge and in certain use cases, I don't need an accelerator card in there; I can run most of my inference workloads right on the CPU.
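A quick, hedged illustration of that point about running inference directly on the CPU in reduced precision: the PyTorch sketch below requests bfloat16 autocast on the CPU. It assumes PyTorch is installed, the tiny model is purely illustrative, and whether the math actually lands on AVX-512 or other accelerated paths depends on the specific processor and PyTorch build.

```python
# Minimal sketch of CPU-only inference in reduced precision, as discussed above.
# The model is a throwaway example; whether bfloat16 hardware paths are used
# depends on the CPU and the PyTorch build.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10)).eval()
batch = torch.randn(32, 512)  # a batch of 32 synthetic feature vectors

with torch.no_grad(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    logits = model(batch)

print(logits.dtype, logits.shape)  # torch.bfloat16, torch.Size([32, 10])
```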
>> Yeah, you know the game; it's an endless chase to find the bottleneck, and once we've solved one puzzle, we've created a bottleneck somewhere else. Back to the supercomputing conversations we had, specifically about some of the AMD EPYC processor technology and the way Dell is packaging it up and leveraging things like connectivity. One of the things highlighted there was the idea that connectivity is increasingly critical, not just for supercomputing but for high-performance computing that's finding its way out of the realm of Los Alamos and down to the enterprise level. Gentlemen, any more thoughts about the partnership, or maybe a hint at what's coming in the future? I know the original AMD announcement previewed some things that are rolling out over the next several months. Let me toss it to Greg: what are we going to see in 2023 in terms of rollouts that you can share with us? >> What I can share with you is to look forward to more advancements in the technology at the core level. We've already announced our product code-named Bergamo, where we'll have up to 128 cores per socket. And as we look at how we continually address this demand for data and for immediate, actionable insights, look for us to continue to drive performance leadership in the products that are coming out, and to address specific workloads and accelerators where appropriate and where we see a growing market. >> Mohan, final thoughts. >> On the Dell side, of course, we have four very rich and configurable platform options with AMD EPYC servers. Beyond that, you'll see a lot more solutions. Some of what Greg has been talking about around the next generation of processors, you'll start seeing from us, and you'll definitely see more use cases and how customers can implement them and take advantage of the features. It's just exciting stuff. >> Exciting stuff indeed. Gentlemen, we have a great year ahead of us. As we approach the holiday season, I wish both of you well. Thank you for joining us. From here in the Palo Alto studios, again, Dave Nicholson here. Stay tuned for our continuing coverage of AMD's 4th Generation EPYC launch. Thanks for joining us. (cheerful music)
SUMMARY :
Dave Nicholson talks with AMD's Greg Gibby and Dell's Mohan Rokkam about the long-standing Dell-AMD partnership and what 4th Gen EPYC brings to Dell PowerEdge. They cover the performance gains Mohan's team has measured, including a 120% jump from top-of-stack Milan to top-of-stack Genoa and five-to-one server consolidation, why the cost of not refreshing aging servers is so high, security features such as Dell's cyber-resilient architecture, the breadth of virtualization and HCI platforms tested, built-in acceleration for CPU-based inference, and what to expect in 2023, including Bergamo with up to 128 cores per socket.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Greg | PERSON | 0.99+ |
Dave Nicholson | PERSON | 0.99+ |
AMD | ORGANIZATION | 0.99+ |
Greg Gibby | PERSON | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Dave | PERSON | 0.99+ |
8 | QUANTITY | 0.99+ |
Mohan | PERSON | 0.99+ |
32 | QUANTITY | 0.99+ |
Mohan Rokkam | PERSON | 0.99+ |
100 | QUANTITY | 0.99+ |
200 | QUANTITY | 0.99+ |
10 cores | QUANTITY | 0.99+ |
10 | QUANTITY | 0.99+ |
Dallas | LOCATION | 0.99+ |
120% | QUANTITY | 0.99+ |
two sockets | QUANTITY | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
12 cores | QUANTITY | 0.99+ |
two generations | QUANTITY | 0.99+ |
2023 | DATE | 0.99+ |
five | QUANTITY | 0.99+ |
64 | QUANTITY | 0.99+ |
200 gig | QUANTITY | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
one | QUANTITY | 0.99+ |
five full servers | QUANTITY | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
two points | QUANTITY | 0.99+ |
400 gig | QUANTITY | 0.99+ |
EPYC | ORGANIZATION | 0.99+ |
two | QUANTITY | 0.99+ |
five years | QUANTITY | 0.99+ |
one system | QUANTITY | 0.99+ |
three | QUANTITY | 0.99+ |
Los Alamos | LOCATION | 0.99+ |
next year | DATE | 0.99+ |
Nutanix | ORGANIZATION | 0.99+ |
two generations | QUANTITY | 0.99+ |
four years | QUANTITY | 0.98+ |
both | QUANTITY | 0.98+ |
Azure Stack | TITLE | 0.98+ |
five nanocomputer | QUANTITY | 0.98+ |
Seamus Jones & Milind Damle
>>Welcome to theCUBE's continuing coverage of AMD's fourth-generation EPYC launch. I'm Dave Nicholson, and I'm joining you here in our Palo Alto studios. We have two very interesting guests to dive into some of the announcements that have been made, and maybe take a look at this from an AI and ML perspective. Our first guest is Milind Damle. He's a senior director for software and solutions at AMD, and we're also joined by Seamus Jones, who's a director of server engineering at Dell Technologies. Welcome, gentlemen. How are you? >>Very good, thank you. >>Welcome to theCUBE. So let's start out really quickly. Seamus, give us a thumbnail sketch of what you do at Dell. >>Yeah, I'm the director of technical marketing engineering here at Dell, and our team really looks at the technical server portfolio and solutions and makes sure we can look at the performance metrics, benchmarks, and performance characteristics, so that we can give customers a good idea of what they can expect from the server portfolio when they're looking to buy PowerEdge from Dell. >>Milind, how about you? What's new at AMD? What do you do there? >>Great to be here. Thank you for having me. At AMD, I'm the senior director of performance engineering and ISV ecosystem enablement, which is a long-winded way of saying we do a lot of benchmarks, improve performance, and demonstrate, with wonderful partners such as Seamus and Dell, the combined leverage that AMD fourth-generation processors and Dell systems can bring to bear on a multitude of applications across the industry spectrum. >>Seamus, talk about that relationship a little bit more, the relationship between AMD and Dell. How far back does it go? What does it look like in practical terms? >>Absolutely. Ever since AMD re-entered the server space, we've had a very close relationship. We offer a portfolio of solutions to our customers no matter which generation they're demanding, whether from a competitor or from AMD, and what we're finding is that with each generational improvement, they're just getting better and better. There are really exciting things happening at AMD at the moment, and as we engineer those CPU stacks into our server portfolio, we're really seeing unprecedented performance across the board. So I'm excited about the history. My team and Milind's team work very closely together, so much so that we're communicating almost daily around portfolio platforms and updates, benchmark testing, and validation efforts. >>So Milind, are you happy with these PowerEdge boxes that Seamus is building to house your baby? >>We are delighted. It's hard to find stronger partners than Seamus and Dell. With AMD's second-generation EPYC server CPUs we already had undisputable industry performance leadership, and then with the third and now the fourth generation CPUs we've just increased our lead over the competition. We've got so many outstanding features at the platform and CPU level. Everybody focuses on the high core counts, but there's also DDR5 memory, the IO, and the storage subsystem.
So we believe we have a fantastic performance, performance-per-dollar, and performance-per-watt edge over the competition, and we look to partners such as Dell to help us showcase that leadership. >>What I'd add, Dave, is that through the partnership we've had, we've been able to develop subsystem and platform features that historically we couldn't have, particularly around thermals and power efficiency within the platform. That means customers can get the most out of their compute infrastructure. >>So this is going to be a big question moving forward as next-generation platforms are rolled out: there's the potential for people to have sticker shock. You talk about something that has eight or 12 cores in a physical enclosure versus 96 cores, and I guess the question is, do the ROI and TCO numbers look good for someone to make that upgrade? Seamus, do you want to hit that first, or are you guys integrated? >>Absolutely. I'll tell you what: at the moment, customers really can't afford not to upgrade. We've taken a look at the cost basis of keeping older infrastructure in place, say five- or seven-year-old servers that are drawing more power, may be poorly utilized within the infrastructure, and take more and more effort and time to manage, maintain, and keep in production. So as customers look to refresh their platforms, what we're finding is that they can get a dramatic consolidation, sometimes 5-, 7-, or 8-to-1, depending on which platform they have historically and which one they're moving to. Within AI and machine learning frameworks specifically, we're seeing really unprecedented performance. Milind's team partnered with us to deliver multiple benchmarks for the launch, some of which we're still continuing to see the goodness from, things like TPCx-AI as a framework, and I'm talking here specifically about the CPU-based performance. >>Even though in a lot of those AI frameworks you would also expect to have GPUs, all four of the platforms we're offering in the AMD portfolio today offer multiple GPU options. So we're seeing a balance between a huge amount of CPU gain in performance as well as more and more GPU offerings within the platform. That was a real challenge for us because of the thermals: GPUs are going up to 300, 400 watts, and these CPUs at 96 cores are quite demanding thermally. But through some unique smart-cooling engineering within the PowerEdge portfolio, we can make the most efficient use of those platforms by having things like telemetry within the platform, so we can dynamically change fan speeds to get customers the best performance without throttling, based on their need. >>Milind, theCUBE was at the Supercomputing Conference in Dallas this year, Supercomputing 2022, and a lot of the discussion was around not only advances in microprocessor technology but also advances in interconnect technology. How do you manage that sort of research partnership with Dell when you aren't strictly just focusing on the piece that you are bringing to the party?
It's kind of a potluck: we mentioned PCIe Gen 5, or 5.0 if you prefer, new DDR5 memory, storage cards, NICs, accelerators, all of those things. How do you keep that straight when those aren't things that you actually build? >>Excellent question, Dave. As we develop the next platform, the ongoing relationship with Dell is obviously there, but we start way before launch, sometimes multiple years before launch. So we're not just focusing on the super-high core counts at the CPU level and the platform configurations, whether single-socket or dual-socket; we're looking at it from the memory subsystem, the IO subsystem, and PCIe lanes for storage, which are a big deal in this generation, for example. It's really a holistic approach. And look, core counts matter more at the higher end for some customers, in the HPC space and some of the AI applications, but at the lower end you have database applications or other ISV applications that care a lot about other things. Different things matter to different folks across verticals. So we partnered with Dell very early in the cycle, and it's really joint co-engineering. Seamus talked about the focus on AI with TPCx-AI; we set five world records in that space, just on that one benchmark, with AMD and Dell, a fantastic kickoff across a multitude of scale factors. But TPCx-AI is not the only thing we're focusing on. We also collaborated with Dell on some transformer-based natural language processing models, for example. So it's not just a CPU story; it's CPU, platform, subsystems, and software, the whole thing delivering goodness across the board to solve end-user problems in AI and other verticals. >>Yeah, the two of you are at the tip of the spear from a performance perspective, so I know it's easy to get excited about world records, and they're fantastic. I know, Seamus, that end-user customers might immediately have the reaction, "Well, I don't need a Ferrari in my data center; what I need is to be able to do more with less." Well, aren't we delivering that also? And Milind, you mentioned natural language processing. Seamus, are you thinking in 2023 that a lot more enterprises are going to be able to afford to do things like that? What are you hearing from customers on this front? >>While adoption of the top-bin CPU stack is definitely the exception, not the rule, today we're seeing marked performance gains even with the mid-bin CPU offerings from AMD, which are the most commonly sold SKUs. When we look at customer implementations, what we're really seeing is that they're trying to make the most not just of dollars spent but of the whole subsystem that Milind was talking about. The fact is that balanced memory configurations can give you marked performance improvements, not just at the CPU level but all the way through to the application performance.
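For a rough sense of why balanced memory configurations matter, the sketch below computes theoretical peak memory bandwidth per socket. The DDR5-4800 speed and 12-channel count reflect what is publicly stated for fourth-gen EPYC, but treat the results as approximations; delivered bandwidth depends on the platform, DIMM population, and workload.

```python
# Rough theoretical peak memory bandwidth per socket, to show why populating
# all channels ("balanced" configs) matters. Assumes DDR5-4800 and 12 channels
# per socket, as publicly stated for 4th-gen EPYC; real results will be lower
# and depend on DIMM population and workload.
MEGATRANSFERS_PER_SEC = 4800   # DDR5-4800
BYTES_PER_TRANSFER = 8         # 64-bit data bus per channel
CHANNELS_PER_SOCKET = 12

per_channel_gbs = MEGATRANSFERS_PER_SEC * BYTES_PER_TRANSFER / 1000  # GB/s
for populated in (4, 8, 12):
    peak = per_channel_gbs * populated
    print(f"{populated:>2} channels populated: ~{peak:,.1f} GB/s theoretical peak")
# Populating only 4 of 12 channels leaves roughly two-thirds of the available
# bandwidth on the table, which shows up as lost application performance.
```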
So it's about finding the correct balance between the application needs, your budget, power draw, and the infrastructure within the data center. Because not only could you purchase and look to deploy the most powerful systems, but if you don't have an infrastructure with the right power, which is a big challenge right now, and the right cooling to deal with the thermal differences of these systems, you want to ensure you can accommodate them, not just today but in the future. >>So it's planning that balance. >>If I may just add onto that: when we launched, not just the fourth generation but any generation in the past, there's a natural tendency to zero in on the top bin and say, wow, we've got so many cores. But as Seamus correctly said, it's not just that one core-count OPN, it's the whole stack. And we believe with our fourth-gen CPU stack we've simplified things so much. We don't have dozens and dozens of offerings; we have a fairly simple SKU stack, but also a very efficient one. Even though at the top end we've got 96 cores, the thermal budget we require is fairly reasonable. And look, with the energy crisis going on, especially in Europe, this is a big deal. Not only do customers want performance, they're also super focused on performance per watt. So we believe with this generation we really delivered not just on raw performance but also on performance per dollar and performance per watt. >>Yeah, and it's not just Europe. We're here in Palo Alto right now, in California, where we all know the cost of an individual kilowatt-hour of electricity, because it's quite high. So thermals, power, cooling, all of that goes together, and that drives cost. It's a question of how much you can get done per dollar. Seamus, you made the point that you don't just have a one-size-fits-all solution, that it's fit for function. I'm curious to hear from the two of you what your thoughts are from a general AI and ML perspective. We're starting to see, if you hang out on any kind of social media, the rise of these experimental AI programs being presented to the public. Some will write stories for you based on a prompt, some will create images for you; one of the more popular ones will create a superhero alter ego for you. I can't wait to do it; I just got the app on my phone. Those are all fun, and they're trivial, but they get us used to the idea that these systems can do things, that they can think on their own in a certain way. What do you see the future of that looking like over the next year in terms of enterprises, and what they're going to do with it? Milind? >>Yeah, I can go first. The couple of examples you mentioned, Dave, are I guess a blend of novelty and curiosity: people using AI to write stories or poems, or even carve out little jokes, check grammar and spelling. Very useful, but still kind of in the realm of novelty. In the mainstream, in the enterprise, look, in my opinion AI is not just going to be a vertical, it's going to be a horizontal capability.
We are seeing AI deployed across the board, once the models have been suitably trained, for disparate functions ranging from fraud detection or anomaly detection, both in the financial markets and in manufacturing, to things like image classification or object detection in the core AI space itself. So we don't think of AI necessarily as a vertical, although we're showcasing it with a specific benchmark for launch; we really look at AI emerging as a horizontal capability, and frankly, companies that don't adopt AI on a massive scale run the risk of being left behind. >>Yeah, absolutely. AI as an outcome is really something companies are adopting, and the frameworks behind the novelty pieces Milind was talking about are indicative of the under-the-covers activity that's been happening within infrastructures and enterprises for the past, let's say, five, six, seven years. You have object detection within manufacturing to do defect detection on manufacturing lines, and now that can be done on edge platforms all the way out at the device. So you're no longer only having things done in the data center; you can bring it right out to the edge and have high-performance inferencing there, not necessarily training at the edge, but the inferencing models especially. That opens up more and better use cases, things like smart cities with video detection. Especially during COVID, we saw a lot of hospitals and customers using image and spatial detection in their video feeds to determine which employees were at risk. So there are a lot of different use cases that have been coming around. The novelty aspect is really interesting, and I know my daughters love that portion of it, but what's been happening in the enterprise space has been exciting for quite a period of time; we're just now starting to see it come to light in more consumer-relevant use cases. The technology developed in the data center around all of these use cases is now starting to feed in, because we have more powerful compute at our fingertips and the ability to push more of the framework and infrastructure right out to the edge. Dave, in the past you've said things like the data center of 20 years ago is now in my hand as my cell phone. That's a fact, and it's exciting to think where it's going to be in the next 10 or 20 years. >>One terabyte, baby. One terabyte. It's mind-boggling. And it makes me feel old. >>Yeah, me too. >>And Seamus, that all sounded great, but all I want is a picture of me as a superhero, so you guys are already way ahead of the curve. With that, on that note, Seamus, wrap us up with a summary of the highlights of what we just went through in terms of the performance you're seeing out of this latest-gen architecture from AMD. >>Absolutely.
Within the TPCx-AI framework that Milind's team and my team worked on together, we're seeing unprecedented price performance. The fact that you can get a 220% uplift generation over generation on some of these benchmarks, and that you can get a five-to-one consolidation, means that if you're looking to refresh platforms that are historically legacy, you get a huge benefit, both in reducing the number of units you need to deploy and in the amount of performance you get per unit. Milind mentioned CPU performance and performance per watt earlier: specifically, on the two-socket, 2U platform using fourth-generation AMD EPYC, we're seeing 55% higher CPU performance per watt. For people who aren't necessarily looking at these statistics every server generation, that is a huge leap forward. That, combined with 121% higher SPEC benchmark scores, is huge: normally we see something like a 40 to 60% performance improvement on the SPEC benchmarks, and we're seeing 121%. And while that's really impressive at the top bin, we're actually seeing large improvements across the mid bins as well, in the range of 70 to 90% performance improvements in those standard bins. So it's a huge performance improvement and power-efficiency gain, which means customers are able to save energy, space, and time based on their deployment size. >>Thanks for that, Seamus. Sadly, gentlemen, our time has expired. With that, I want to thank both of you; it's been a very interesting conversation. Thanks for being with us, both of you. Thanks for joining us here on theCUBE for our coverage of AMD's fourth-generation EPYC launch. Additional information, including white papers and benchmarks, plus editorial coverage, can be found on doeshardwarematter.com.
SUMMARY :
Dave Nicholson talks with Dell's Seamus Jones and AMD's Milind Damle about fourth-generation EPYC in Dell PowerEdge servers from an AI and ML perspective. They discuss the close co-engineering relationship between the two companies, the case for refreshing five- to seven-year-old infrastructure with consolidation ratios as high as 8-to-1, CPU-based inference with features like AVX-512 and bfloat16, balanced memory configurations, power and cooling considerations, and headline results such as five TPCx-AI world records, a 220% generation-over-generation uplift, 55% higher CPU performance per watt, and 121% higher SPEC scores.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Nicholson | PERSON | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Europe | LOCATION | 0.99+ |
70 | QUANTITY | 0.99+ |
40 | QUANTITY | 0.99+ |
55% | QUANTITY | 0.99+ |
five | QUANTITY | 0.99+ |
Dave | PERSON | 0.99+ |
220% | QUANTITY | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
121% | QUANTITY | 0.99+ |
96 cores | QUANTITY | 0.99+ |
California | LOCATION | 0.99+ |
AMD | ORGANIZATION | 0.99+ |
Shamus Jones | PERSON | 0.99+ |
12 cores | QUANTITY | 0.99+ |
Shamus | ORGANIZATION | 0.99+ |
Shamus | PERSON | 0.99+ |
2023 | DATE | 0.99+ |
eight | QUANTITY | 0.99+ |
96 core | QUANTITY | 0.99+ |
300 | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
dozens | QUANTITY | 0.99+ |
seven year | QUANTITY | 0.99+ |
5 | QUANTITY | 0.99+ |
Ferrari | ORGANIZATION | 0.99+ |
96 scores | QUANTITY | 0.99+ |
60% | QUANTITY | 0.99+ |
90% | QUANTITY | 0.99+ |
Milland Doley | PERSON | 0.99+ |
first guest | QUANTITY | 0.99+ |
third | QUANTITY | 0.99+ |
Dell Technologies | ORGANIZATION | 0.99+ |
amd | ORGANIZATION | 0.99+ |
today | DATE | 0.98+ |
Lin | PERSON | 0.98+ |
20 years ago | DATE | 0.98+ |
Melinda | PERSON | 0.98+ |
One terabyte | QUANTITY | 0.98+ |
Seamus | ORGANIZATION | 0.98+ |
one core | QUANTITY | 0.98+ |
Melind | PERSON | 0.98+ |
fourth generation | QUANTITY | 0.98+ |
this year | DATE | 0.97+ |
7 years | QUANTITY | 0.97+ |
Seamus Jones | PERSON | 0.97+ |
Dallas | LOCATION | 0.97+ |
One | QUANTITY | 0.97+ |
Melin | PERSON | 0.97+ |
one | QUANTITY | 0.97+ |
6 | QUANTITY | 0.96+ |
Milind Damle | PERSON | 0.96+ |
Melan | PERSON | 0.96+ |
first | QUANTITY | 0.95+ |
8 | QUANTITY | 0.94+ |
second generation | QUANTITY | 0.94+ |
Seamus | PERSON | 0.94+ |
TP C X | TITLE | 0.93+ |
Evan Touger, Prowess | Prowess Benchmark Testing Results for AMD EPYC Genoa on Dell Servers
(upbeat music) >> Welcome to theCUBE's continuing coverage of AMD's fourth-generation EPYC launch. I've got a special guest with me today from Prowess Consulting. His name is Evan Touger, and he's a senior technical writer with Prowess. Evan, welcome. >> Hi, great to be here. Thanks. >> So tell us a little bit about Prowess. What does Prowess do? >> Yeah, we're a consulting firm. We've been around for quite a few years, based in Bellevue, Washington, and we do quite a few projects with folks from Dell and a lot of other companies. We have engineers, writers, and production folks, so it's pretty much end-to-end work: research, testing, and writing, diving into different technical topics. >> So in this case, what we're going to be talking about is some validation studies that you've done, looking at Dell PowerEdge servers that happen to be integrating fourth-gen EPYC processors from AMD. What were the specific workloads that you focused on in this study? >> Yeah, this particular one was honing in on virtualization. It's pretty much ubiquitous in the industry; everybody works with virtualization in one way or another, so getting optimal performance for virtualization is critical for most businesses. We wanted to look a little deeper into how companies evaluate that: what are they going to use to make the determination of virtualization performance as it relates to their workloads? That led us to this study, where we looked at some benchmarks and then went a little deeper under the hood to see what led to the results we saw from those benchmarks. >> So when you say virtualization, does that include virtual desktop infrastructure, or are we just talking about virtual machines in general? >> No, it can include both. We looked at VMs, thinking in terms of database performance when you're working in VMs, all the way through to VDI and organizations like healthcare companies, where it's common to roll out lots of virtual desktops and performance is critical there as well. >> Okay. You alluded to looking under the covers to see where those performance results were coming from. I assume what you're referencing is the idea that it's not just all about the CPU when you talk about a system. Am I correct in that assumption? >> Yeah, absolutely. >> What can you tell us? >> Well, for companies evaluating this, there's quite a bit to consider. They're looking not just at raw performance but at power performance, and then at what makes up those factors. CPU is certainly critical, but other things come into play, like the RAID controllers, so we looked a little bit there. And then networking can be critical for configurations that rely on good network performance, both in terms of bandwidth and just reducing latency overall, so interconnects would be a big part of that as well. >> So with PCIe Gen 5, or 5.0, pick your moniker: in the infrastructure game we're often playing whack-a-mole, chasing the bottlenecks. PCIe 5 opens up a lot of bandwidth for memory and things like RAID controllers and NICs. I mean, is the bottleneck now just our imagination, Evan? Have we reached a point where there are no bottlenecks? What did you see when you ran these tests?
What were you able to stress to the point of saturation, if anything? >> Yeah. Well, first of all, these were tests where we looked at industry benchmarks and examined in particular where world records were set. We uncovered a few specific PowerEdge servers that were pretty key there, leading the category in a lot of areas, and that's what led us to ask: why is that? What's in these servers, and what's responsible for that? In a lot of cases we saw these results even with PCIe Gen 4, so there were situations where there was clearly a benefit from faster interconnects, and especially from NVMe support for RAID and SSDs. But all of that just leads you to the understanding that it can only get better: if you're seeing great results on Gen 4, then Gen 5 is probably going to blow that away. >> And in this case, Gen 5, you're referencing PCIe. >> PCIe, right, that's right. And the same thing actually holds true with EPYC: we saw records set for both 3rd and 4th gen. Anywhere there's a record set on the 3rd gen, we're really looking forward to going back over the next few months and seeing which of those records fall and are broken by newer-generation versions of these servers once they refresh to the newer processors, based on what we're seeing those processors can do. >> Go ahead. >> Sorry, I just want to say: not only in terms of raw performance, but, as I mentioned before, power performance, because they're very efficient, and that's a really critical consideration for companies that have to weigh power and cooling expenditures and meet sustainability goals. So that was an important category in what we looked at: power performance, not just raw performance. >> Yeah, I want to get back to that; that's a really good point. We should probably give credit where credit is due. Which Dell PowerEdge servers are we talking about that were tested, and what did those interconnect components look like? >> Yeah, we focused primarily on a couple of benchmarks that seemed most important for real-world virtualization results: TPCx-V and VMmark 3.x. With TPCx-V, that's where we saw the PowerEdge R7525 and R7515; they both had top scores in different categories. That benchmark is great for looking at database workloads running in virtualized settings. And then VMmark 3.x was critical; we saw good results there for the R7525 and R7515, as well as the R6525, and that one included results for power performance, as I mentioned earlier, so that's where we could see that. We saw this across a range of servers that included both 3rd-gen AMD EPYC and the newer 4th gen, as I mentioned. The RAID controllers were critical in TPCx-V; I don't think they came into play in the VMmark test, but they were definitely part of the TPCx-V benchmarks.
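For a ballpark sense of the interconnect headroom Evan describes, the sketch below compares approximate PCIe 4.0 and 5.0 throughput at common slot widths. The per-lane figures are the commonly cited roughly 2 GB/s and 4 GB/s per direction and ignore protocol overhead, so they are rough estimates rather than measured results.

```python
# Ballpark PCIe throughput comparison for the Gen 4 -> Gen 5 discussion above.
# Uses the commonly cited ~2 GB/s (Gen 4) and ~4 GB/s (Gen 5) per lane per
# direction and ignores encoding/protocol overhead -- approximate, not measured.
PER_LANE_GBPS = {"PCIe 4.0": 2.0, "PCIe 5.0": 4.0}  # GB/s per lane, one direction

for gen, per_lane in PER_LANE_GBPS.items():
    for lanes in (4, 8, 16):  # typical widths for NVMe drives, NICs, RAID/HBA cards
        print(f"{gen} x{lanes}: ~{per_lane * lanes:.0f} GB/s per direction")
# A Gen 5 x16 slot (~64 GB/s) offers roughly the bandwidth of two Gen 4 x16
# slots, which is why high-speed NICs and NVMe RAID controllers benefit from it.
```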
So that's where the RAID controllers would make a difference, right? And in those tests, I think they're using PERC 11. So, you know, the newer PERC 12 controllers there, again we'd expect >> (indistinct) >> To see continued, you know, gains in newer benchmarks. That's what we'll be looking for over the next several months. >> Yeah. So I think if I've got my Dell nomenclature down, performance, no no, PowerEdge RAID Controller, is that right? >> Exactly, yeah, there you go. Right? >> With Broadcom, you know, powered by Broadcom. >> That's right. There you go. Yeah. Isn't the Dell naming scheme there PERC? >> Yeah, exactly, exactly. Back to your comment about power. So you've had a chance to take a pretty deep look at the latest stuff coming out. You're confident that- 'cause some of these servers are going to be more expensive than the previous generation. Now a server is not a server is not a server, but some are awakening to the idea that there might be some sticker shock. You're confident that the bang for your buck, the bang for your kilowatt hour is actually going to be beneficial. We're actually making things better, faster, stronger, cheaper, more energy efficient. We're continuing on that curve? >> That's what I would expect to see, right. I mean, of course I can't speak to pricing without knowing, you know, where the dollars are going to land on the servers. But I would expect to see that because you're getting gains in a couple of ways. I mean, one, if the performance increases to the point where you can run more VMs, right? Get more performance out of your VMs and run more total VMs or more VDIs, then there's obviously a good, you know, payback on your investment there. And then as we were discussing earlier, just the power performance ratio, right? So if you're bringing down your power and cooling costs, if these machines are just more efficient overall, then you should see some gains there as well. So, you know, I think the key is looking at what's the total cost of ownership over, you know, a standard like a three-year period or something and what you're going to get out of it for your number of sessions, the performance for the sessions, and the overall efficiency of the machines. >> So just to be clear, with these Dell PowerEdge servers, you were able to validate world record performance. But this isn't, if you, if you look at CPU architecture, PCIe bus architecture, memory, you know, the class of memory, the class of RAID controller, the class of NIC. Those were not all state of the art in terms of at least what has been recently announced. Correct? >> Right. >> Because (indistinct) the PCIe 4.0, so to your point- world records with that, you've got next-gen RAID controllers coming out, and NICs coming out. If the motherboard was PCIe 5, with commensurate memory, all of those things are getting better. >> Exactly, right. I mean you're, you're really you're just eliminating bandwidth constraints, latency constraints, you know, all of that should be improved. NVMe, you know, just collectively all these things just open the doors, you know, letting more bandwidth through, reducing all the latency. Those are, those are all pieces of the puzzle, right? That come together and it's all about finding the weakest link and eliminating it. And I think we're reaching the point where we're removing the biggest constraints from the systems. >> Okay. So I guess is it fair to summarize to say that with this infrastructure that you tested, you were able to set world records.
This, during this year, I mean, over the next several months, things are just going to get faster and faster and faster and faster. >> That's what I would anticipate, exactly, right. If they're setting world records with these machines before some of the components are, you know, the absolute latest, it seems to me we're going to just see a continuing trend there, and more and more records should fall. So I'm really looking forward to seeing how that goes, 'cause it's already good and I think the return on investment is pretty good there. So I think it's only going to get better as these roll out. >> So let me ask you a question that's a little bit off topic. >> Okay. >> Kind of, you know, we see these gains, you know, we're all familiar with Moore's Law, we're familiar with, you know, the advancements in memory and bus architecture and everything else. We just covered SuperCompute 2022 in Dallas a couple of weeks ago. And it was fascinating talking to people about advances in AI that will be possible with new architectures. You know, most of these supercomputers that are running right now are n minus 1 or n minus 2 infrastructure, you know, they're, they're, they're PCI 3, right. And maybe two generations of processors old, because you don't just throw out a 100,000 CPU super computing environment every 18 months. It doesn't work that way. >> Exactly. >> Do you have an opinion on this question of the qualitative versus quantitative increase in computing moving forward? And, I mean, do you think that this new stuff that you're starting to do tests on is going to power a fundamental shift in computing? Or is it just going to be more consolidation, better power consumption? Do you think there's an inflection point coming? What do you think? >> That's a great question. That's a hard one to answer. I mean, it's probably a little bit of both, 'cause certainly there will be better consolidation, right? But I think that, you know, the systems, it works both ways. It just allows you to do more with less, right? And you can go either direction, you can do what you're doing now on fewer machines, you know, and get better value for it, or reduce your footprint. Or you can go the other way and say, wow, this lets us add more machines into the mix and take our our level of performance from here to here, right? So it just depends on what your focus is. Certainly with, with areas like, you know, HPC and AI and ML, having the ability to expand what you already are capable of by adding more machines that can do more is going to be your main concern. But if you're more like a small to medium sized business and the opportunity to do what you were doing on, on a much smaller footprint and for lower costs, that's really your goal, right? So I think you can use this in either direction and it should, should pay back in a lot of dividends. >> Yeah. Thanks for your thoughts. It's an interesting subject moving forward. You know, sometimes it's easy to get lost in the minutiae of the bits and bites and bobs of all the components we're studying, but they're powering something that that's going to effect effectively all of humanity as we move forward. So what else do we need to consider when it comes to what you've just validated in the virtualization testing? Anything else, anything we left out? 
>> I think we hit all the key points, or most of them it's, you know, really, it's just keeping in mind that it's all about the full system, the components not- you know, the processor is a obviously a key, but just removing blockages, right? Freeing up, getting rid of latency, improving bandwidth, all these things come to play. And then the power performance, as I said, I know I keep coming back to that but you know, we just, and a lot of what we work on, we just see that businesses, that's a really big concern for businesses and finding efficiency, right? And especially in an age of constrained budgets, that's a big deal. So, it's really important to have that power performance ratio. And that's one of the key things we saw that stood out to us in, in some of these benchmarks, so. >> Well, it's a big deal for me. >> It's all good. >> Yeah, I live in California and I know exactly how much I pay for a kilowatt hour of electricity. >> I bet, yeah. >> My friends in other places don't even know. So I totally understand the power constraint question. >> Yeah, it's not going to get better, so, anything you can do there, right? >> Yeah. Well Evan, this has been great. Thanks for sharing the results that Prowess has come up with, third party validation that, you know, even without the latest and greatest components in all categories, Dell PowerEdge servers are able to set world records. And I anticipate that those world records will be broken in 2023 and I expect that Prowess will be part of that process, So Thanks for that. For the rest of us- >> (indistinct) >> Here at theCUBE, I want to thank you for joining us. Stay tuned for continuing coverage of AMD's fourth generation EPYC launch, for myself and for Evan Touger. Thanks so much for joining us. (upbeat music)
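The PCIe 4.0-versus-5.0 point in the exchange above comes down to per-lane arithmetic: PCIe 5.0 doubles the per-lane transfer rate, so an x16 slot roughly doubles its usable bandwidth. The sketch below is an illustrative back-of-the-envelope calculation using only the published per-lane rates and 128b/130b encoding; the numbers are generic to the PCIe specifications, not results from the Prowess study.

```python
# Rough PCIe bandwidth comparison -- illustrative arithmetic, not Prowess test data.
# PCIe 4.0 signals at 16 GT/s per lane, PCIe 5.0 at 32 GT/s per lane;
# both use 128b/130b encoding, so ~98.5% of the raw rate is usable.

def pcie_bandwidth_gb_s(transfer_rate_gt_s, lanes=16, encoding=128 / 130):
    """Approximate usable bandwidth, per direction, in GB/s."""
    # 1 GT/s per lane carries ~1 Gb/s of raw signalling; divide by 8 for bytes.
    return transfer_rate_gt_s * lanes * encoding / 8

gen4_x16 = pcie_bandwidth_gb_s(16)  # ~31.5 GB/s per direction
gen5_x16 = pcie_bandwidth_gb_s(32)  # ~63.0 GB/s per direction

print(f"PCIe 4.0 x16: ~{gen4_x16:.1f} GB/s per direction")
print(f"PCIe 5.0 x16: ~{gen5_x16:.1f} GB/s per direction")
```

In other words, moving a RAID controller or NIC from a Gen 4 to a Gen 5 slot roughly doubles the ceiling before the interconnect itself becomes the bottleneck being chased.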
Dilip Ramachandran and Juergen Zimmermann
(bright upbeat music) >> Welcome to theCUBE's continuing coverage of AMD's fourth generation EPYC launch, along with the way that Dell has integrated this technology into its PowerEdge server lines. We're in for an interesting conversation today. Today, I'm joined by Dilip Ramachandran, Senior Director of Marketing at AMD, and Juergen Zimmermann. Juergen is Principal SAP Solutions Performance Benchmarking Engineer at Dell. Welcome, gentlemen. >> Welcome. >> Thank you David, nice to be here. >> Nice to meet you too, welcome to theCUBE. You will officially be CUBE alumni after this. Dilip, let's start with you. What's this all about? Tell us about AMD's recent launch and the importance of it. >> Thanks, David. I'm excited to actually talk to you today about AMD and our fourth generation EPYC launch last month in November. And as part of that fourth generation EPYC launch, we announced industry-leading performance based on 96 cores, based on Zen 4 architecture. And new interfaces, PCIe Gen 5, as well as DDR5. Incredible amount of memory bandwidth, memory capacity supported, and a whole lot of other features as well. So we announced this product, we launched it in November last month. And we've been closely working with Dell on a number of benchmarks that we'd love to talk to you more about today. >> So just for some context, when was the last release of this scale? So when was the third generation released? How long ago? >> The third generation EPYC was launched in Q1 of 2021. So it was almost 18 to 24 months ago. And since then we've made a tremendous jump, the fourth generation EPYC, in terms of number of cores. So third generation EPYC supported 64 cores, fourth generation EPYC supports 96 cores. And these are new cores, the Zen 4 cores, the fourth generation of Zen cores. So very high performance, new interfaces, and really world-class performance. >> Excellent. Well, we'll go into greater detail in a moment, but let's go to Juergen. Tell us about the testing that you've been involved with to kind of prove out the benefits of this new AMD architecture. >> Yeah, well, the testing is the SAP Standard Performance benchmark, the SAP SD two-tier. And this is more or less an industry standard benchmark that is used to size your servers for the needs of SAP. Actually, SAP customers always ask the vendors about the SAP benchmark and the SAPS values of their servers. >> And I should have asked you before, but give us a little bit of your background working with SAP. Have you been doing this for longer than a week? >> Yeah, yeah, definitely, I've been doing this for about 20 years now. Started with Sun Microsystems, and interestingly in the year 2003, 2004, I started working with AMD servers on SAP with Linux, and afterwards ported the SAP application to Solaris on AMD, also with AMD. So I have a lot of tradition with SAP and AMD benchmarks, and have been doing this ever since then. >> So give us some more detail on the results of the recent testing, and if you can, tell us why we should care? >> (laughs) Okay, the recent results actually also surprised myself, they were so good. So I initially installed the benchmark kit, and couldn't believe that the server was just getting, or hitting idle by the numbers I saw. So I cranked up the numbers and reached results that are most likely double the last generation, the Zen 3 generation, and that even passed almost all 8-socket systems out there. So if you want to have the same SAP performance, you can just use a 2-socket AMD server instead of any four or 8-socket servers out there.
And this is a tremendous saving in energy. >> So you just mentioned savings in terms of power consumption, which is a huge consideration. What are the sort of end user results that this delivers in terms of real world performance? How is a human being at the end of a computer going to notice something like this? >> So actually the results are such that you get almost 150,000 users concurrently accessing the system, and they get their results back from SAP within one second response time. >> 150,000 users, you said? >> 150,000 users in parallel. >> (laughs) Okay, that's amazing. And I think it's interesting to note that, and I'll probably say this a couple of times. You just referenced third generation EPYC architecture, and there are a lot of folks out there who are two generations back. Not everyone is religiously updating every 18 months, and so for a fair number of SAP environments, this is an even more dramatic increase. Is that a fair thing to say? >> Yeah, I just looked up yesterday the numbers from generation one of EPYC, and this was at about 28,000 users. So we are five times the performance now, within four years. Yeah, great. >> So Dilip, let's dig a little more into the EPYC architecture, and I'm specifically also curious about... You mentioned PCIe Gen five, or 5.0, and all of the components that plug into that. You mentioned I think faster DDR. Talk about that. Talk about how all of the components work together to make, when Dell comes out with a PowerEdge server, to make it so much more powerful. >> Absolutely. So just to spend a little bit more time on this particular benchmark, the SAP Sales and Distribution benchmark. It's a widely used benchmark in the industry to basically look at how do I get the most performance out of my system for a variety of SAP business suite applications. And we touched upon it earlier, right, we are able to beat the performance of 4-socket and 8-socket servers out there. And you know, it saves energy, it saves cost, better TCO for the data center. So we're really excited to be able to support more users in a single server and beat all the other dual-socket and 4-socket combinations out there. Now, how did we get there, right, is more the important question. So as part of our fourth generation EPYC, we obviously upgraded our CPU core to provide much better single-thread performance per core. And at the socket level, you know, when you're packing 96 cores, you need to be able to feed these cores, you know, from a memory standpoint. So what we did was we went to 12 channels of memory, and these are DDR5 memory channels. So obviously you get much better bandwidth, higher speed of the memory with DDR5, you know, starting at 4,800 megahertz. And you're also now able to have more channels to be able to send the data from the memory into the CPU subsystem, which is very critical to keep the CPUs busy and active, and get the performance out. So that's on the memory side. On the data side, you know, we do have PCIe Gen five, and any data oriented applications that take data either from the PCIe drives or the network cards that utilize Gen five that are available in the industry today, you can actually really get data into the system through the PCIe I/O, either again, through the disk, or through the network card as well. So those are other ways to actually also feed the CPU subsystem with data to be processed by the CPU complex. So we are, again, very excited to see all of this coming together, and as they say, proof's in the pudding.
You know, Juergen talked about it. How over generation after generation we've increased the performance, and now with our fourth generation EPYC, we are absolutely leading world-class performance on the SAP Sales and Distribution benchmark. >> Dilip, I have another question for you, and this may be, it may be a bit of a PowerEdge and beyond question. What are you seeing, or what are you anticipating in terms of end user perception when they go to buy a new server? Obviously server is a very loose term, and they can be configured in a bunch of different ways. But is there a discussion about ROI and TCO that's particularly critical? Because people are going to ask, "Well, wait a minute. If it's more expensive than the last one that I bought, am I getting enough bang for my buck?" Is that going to be part of the conversation, especially around power and cooling and things like that? >> Yeah, absolutely. You know, every data center decision maker has to ask the question, "Why should I upgrade? Should I stay with legacy hardware, or should I go into the latest and greatest that AMD offers?" And the advantages that the new generation products bring is much better performance at much better energy consumption levels, as well as much better performance per dollar levels. So when you do the upgrade, you are actually getting, you know, savings in terms of performance per dollar, as well as saving in space because you can consolidate your work into fewer servers 'cause you have more cores. As we talked about, you have eight, you know. Typically you might do it on a four or 8-socket server which is really expensive. You can consolidate down to a 2-socket server which is much cheaper. As also for maintenance costs, it's much lower maintenance costs as well. All of this, performance, power, maintenance costs, all of that translate into better TCO, right. So lower all of these, high performance, lower power, and then lower maintenance costs, translate to much better TCO for the end user. And that's an important equation that all customers pay attention to. and you know, we love to work with them and demonstrate those TCO benefits to them. >> Juergen, talk to us more in general about what Dell does from a PowerEdge perspective to make sure that Dell is delivering the best infrastructure possible for SAP. In general, I mean, I assume that this is a big responsibility of yours, is making sure that the stuff runs properly and if not, fixing it. So tell us about that relationship between Dell and a SAP. >> Yeah, for Dell and SAP actually, we're more or less partners with SAP. We have people sitting in SAP's Linux lab, and working in cooperative with SAP, also with Linux partners like SUSE and Red Hat. And we are in constant exchange about what's new in Linux, what's new on our side. And we're all a big family here. >> So when the new architecture comes out and they send it to Juergen, the boys back at the plant as they say, or the factory to use Formula One terms, are are waiting with baited breath to hear what Juergen says about the results. So just kind of kind of recap again, you know, the specific benchmarks that you were running. Tell us about that again. >> Yeah, the specific benchmark is the SAP Sales and Distribution benchmark. And for SAP, this is the benchmark that needs to be tested, and it shows the performance of the whole system. 
So in contrast to benchmarks that only check if the CPU is running, very good, this test the whole system up from the network stack, from the storage stack, the memory, subsystem, and the OS running on the CPUs. >> Okay, which makes perfect sense, since Dell is delivering an integrated system and not just CPU technology. You know, on that subject, Dilip, do you have any insights into performance numbers that you're hearing about with Gen four EPYC for other database environments? >> Yeah, we have actually worked together with Dell on a variety of benchmarks, both on the latest fourth generation EPYC processors as well as the preceding one, the third generation EPYC processors. And published a bunch of world records on database, particularly I would say TPC-H, TPCx-V, as well as TPCx-HS and TPCx-IoT. So a number of TPC related benchmarks that really showcase performance for database and related applications. And we've collaborated very closely with Dell on these benchmarks and published a number of them already, and you know, a number of them are world records as well. So again, we're very excited to collaborate with Dell on the SAP Sales and Distribution benchmark, as well as other benchmarks that are related to database. >> Well, speaking of other benchmarks, here at theCUBE we're going to be talking to actually quite a few people, looking at this fourth generation EPYC launch from a whole bunch of different angles. You two gentlemen have shed light on some really good pieces of that puzzle. I want to thank you for being on theCUBE today. With that, I'd like to thank all of you for joining us here on theCUBE. Stay tuned for continuing CUBE coverage of AMD's fourth generation EPYC launch, and Dell PowerEdge strategy to leverage it.
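A quick sanity check on the memory subsystem Dilip describes: twelve channels of DDR5-4800 put the theoretical peak at roughly 460 GB/s per socket. The sketch below is back-of-the-envelope arithmetic from those two figures alone; it is not a measured Dell or AMD benchmark result.

```python
# Theoretical peak memory bandwidth for a 12-channel DDR5-4800 socket.
# Illustrative arithmetic only -- not a measured SAP SD or TPC result.

channels = 12              # DDR5 memory channels per socket, as described above
transfer_rate_mt_s = 4800  # DDR5-4800: 4,800 mega-transfers per second
bus_width_bytes = 8        # 64-bit data bus per channel

peak_gb_s = channels * transfer_rate_mt_s * bus_width_bytes / 1000
print(f"Peak per socket: ~{peak_gb_s:.0f} GB/s")  # ~461 GB/s
```

That headroom, rather than core count alone, is what keeps 96 cores fed under a benchmark like SAP SD.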
Kevin Depew | HPE ProLiant Gen11 – Trusted Security by Design
>>Hey everyone, welcome to theCUBE. Lisa Martin here with Kevin Depew, Senior Director of Future Server Architecture at HPE. Kevin, it's great to have you on the program. You're gonna be breaking down everything that's exciting and compelling about Gen 11. How are you today? >>Thanks Lisa, and I'm doing great. >>Good, good, good. So let's talk about ProLiant Gen 11, the next generation of compute. I read some great stats on hpe.com. I saw that Gen 11 added 28 new world records while delivering up to 99% higher performance and 43% more energy efficiency than the previous version. That's amazing. Talk to me about Gen 11. What makes this update so compelling? >>Well, you talked about some of the stats regarding the performance and the power efficiency, and those are excellent. We partnered with AMD, we've got excellent performance on these platforms. We have excellent power efficiency, but the advantages of this platform go beyond that. Today we're gonna talk a lot about cybersecurity and we've got a lot of security capabilities in these platforms. We've built on top of the security capabilities that we've had, generation over generation, and we've got some new exciting capabilities we'll be talking about. So whether it's the performance, whether it's power efficiency, whether it's security, all those capabilities are in this platform. Security is part of our DNA. We put it into the design from the very beginning, and we've partnered with AMD to deliver what we think is a very compelling story. >>The security piece is absolutely critical. We could have a, you know, an entire separate conversation on the cybersecurity landscape and the changes there. But one of the things I also noticed in the material on Gen 11 is that HPE says it's fundamental. What do you mean by that and what's new that makes it so fundamental? >>Well, by saying it's fundamental, security is a fundamental part of the platform. You need systems that are reliable. You need systems that have excellent performance. You need systems that are, have very good power efficiency, those things you talked about before, those are all very important to have a good server, but security's a part that's absolutely critical as well. So security is one of the fundamental capabilities of the platform. As I had mentioned, we built on top of capabilities, capabilities like our Silicon Root of Trust, which ensures that the firmware stack on these platforms is not compromised. Those are continuing in this platform and have been expanded on. We have our trusted supply chain and we've expanded on that as well. We have a lot of security capabilities, our platform certificates, our iDevIDs. There's just a lot of security capabilities that are absolutely fundamental to these being a good solution because as we said, security is fundamental. It's an absolutely critical part of these platforms. >>Absolutely. For companies in every industry. I wanna talk a little bit about one of the other things that HPE describes Gen 11 as being: uncompromising. And I wanted to understand what that means and what's the value add in it for customers? >>Yeah. Well, by uncompromising we mean we can't compromise on security. Security, to what I said before, it's fundamental. It can't be compromised. You have to have security be strong on these platforms. So one of the capabilities, which we're specifically talking about when we talk about uncompromising, is a capability called SPDM.
We've extended our Silicon Root of Trust, which is one of our key technologies we've had since our Gen 10 platforms. We've extended that through something called SPDM. We saw a problem in the industry with the ability to authenticate option cards and other devices in the system. Silicon Root of Trust verified many pieces of firmware in the platform, but one piece that it wasn't verifying was the option cards. And we needed, we knew we needed to solve this problem and we knew we couldn't do it a hundred percent on our own because we needed to work with our partners, whether it's a storage option card, a NIC, or even devices in the future, we needed to make sure that we could verify that those were what they were meant to be. >>They weren't compromised, they weren't maliciously compromised and that we could authenticate them. So we worked with industry standards bodies to create the SPDM specification. And what that allows us to do is authenticate the option cards in the systems. So that's one of our new capabilities that we've added in these platforms. So we've gone beyond securing all of the things that Silicon Root of Trust secured in the past to extending that to the option cards and their firmware as well. So when we boot up one of these platforms, when we hand off to the OS and to the customer's software solution, they can be, they can rest assured that all the things that have run, that that platform is not compromised. A bad guy has not gone in and changed things, and that includes a bad guy with physical access to the platform. So that's why we have uncompromised security in these platforms. >>Outstanding. That sounds like great work that's been done there, and giving customers that peace of mind where security is concerned is table stakes for everybody across the organization. Kevin, you mentioned partners. I know HPE is extending protection to the partner ecosystem. I wanted to get a little bit more info on that from you. >>Yeah, we've worked with our option card vendors, numerous partners across the industry to support SPDM. We were the ones who kind of went to the, the industry standards bodies and said, we need to solve this problem. And we had agreement from everybody. Everybody agrees this is a problem that had to be solved. So, but to solve it, you've gotta have a partnership. We can't just do it on our own. There's a lot of things that we at HPE can solve on our own. This is not one of them. To be able to get a method where we could authenticate and trust the option cards in the system, we needed to work with our option card vendors. So that's something that we, we did. And we also use some capabilities that we work with some of our processor vendor partners as well. So working with partners across the industry, we were able to deliver SPDM. >>So we know that option card, whether it's a storage card or a NIC card or, or GPUs in the future, those, those may not be there from day one, but we know that those option cards are what they're intended to be, because you could do an attack where you compromise the option card, you compromise the firmware in that option card, and option cards have the ability to read and write to memory using something called DMA. And if those cards are running firmware that's been created by a bad guy, they can do a lot of, of very costly attacks. I mean we, there's a lot of statistics that show just how, how costly cybersecurity attacks are. If option cards have been compromised, you can do some really bad things.
So this is how we can trust those option cards. And we had to partner with those, those partners in the industry to both define the spec and both sides had to implement to that specification so that we could deliver the solution we're delivering. >>HPE has such a strong partner ecosystem. You did a great job of articulating the value in this for customers. From a security perspective, I know that you're also doing a lot of collaboration and work with AMD. Talk to me a little bit about that and the value in it for your joint customers. >>Yeah, absolutely. AMD is a longstanding partner. We actually started working with AMD about 20 years ago when we delivered our first AMD Opteron-based platform, the HP ProLiant DL585. So we've got a long engineering relationship with AMD and we've been making products with AMD since they introduced their EPYC generation processor in 2017. That's when AMD really upped their security game. They created capabilities with their AMD Secure Processor, their secure encryption virtualization, their memory encryption technologies. And we work with AMD long before platforms actually release. So they come to us with their ideas, their designs, we collaborate with them on things we think are valuable, and when we see areas where they can do things better, we provide feedback. So we really have a partnership to make these processors better. And it's not something where we just work with them for a short amount of time and deliver a product. >>We're working with them for years before those products come out. So that partnership allows both parties to create better platforms cuz we understand what they're capable of, they understand what our needs are as a, as a server provider. And so we help them make their processors better and they help us make our products better. And that extends in all areas, whether it's performance, power efficiency, but very importantly in what we're talking about here, security. So they have got an excellent security story with all of their technologies. Again, memory encryption. They, they've got some exceptional technologies there. All their secure encryption virtualization to secure virtualized environments, those are all things that they excel at. And we take advantage of those in our designs. We make sure that those work with our servers as part of a solution. >>Sounds like a very deeply technically integrated and longstanding relationship that's really symbiotic for both sides. I wanted to get some information from you on the HPE Server Security Optimized Service. Talk to me about what that is. How does that help HPE help its customers get around some of those supply chain challenges that are persistent? >>Yeah, what that is, is with our previous generation of products, we announced something called our HPE Trusted Supply Chain, but that was focused on the US market. With the solution for Gen 11, we've expanded that to other markets. It's, it's available from factories other than the ones in the US, and it's available for shipping products to other geographies. So what that really was is taking the HPE Trusted Supply Chain and expanding it to additional geographies throughout the world, which provides a big, big benefit for our non-US based customers. And what that is, is we're trying to make sure that the server that we ship out of our factories is indeed exactly what that customer is getting. So we try to prevent any possibility of attack in the supply chain going from our factories to the customer.
And if there is an attack, we can detect it and the customer knows about it. >>So they won't deploy a system that's been compromised, cuz there, there have been high profile cases of supply chain attacks. We don't want to have that with our, our customers buying our ProLiant products. So we do things like enable UEFI Secure Boot, which is an ability to authenticate the, what's called a UEFI option ROM driver on option cards. That's enabled by default. Normally that's not enabled by default. We enable our high security mode in our iLO product. We include our intrusion detection technology option, which is an optional feature, but it's there as standard when you buy one of the boxes with this, this capability, this trusted supply chain capability. So there's a lot of capabilities that get enabled at the factory. We also enable server configuration lock, which allows a customer to detect if a bad guy modified anything in the platform when it transits from our factory to them. So what it allows a customer to do is get that platform and know that it is indeed what it is intended to be and that it hasn't been attacked, and we've now expanded that to many geographies throughout the world. >>Excellent. So much more coverage across the world, which is so incredibly important. As cyber attacks continue to rise year over year, ransomware becomes a household word, the ransoms get even more expensive, especially considering the cybersecurity skills gap. I'm just wondering, what are some of the, the ways in which everything that you've described with Gen 11 and the HPE partner ecosystem with AMD, for example, how does that help customers to get around that security skills gap that is present? >>Well, the key thing there is we care about our customers' security. So as I mentioned, security is in our DNA. We do, we consider security in everything we do. Every update to firmware we make, when we do the hardware design, whatever we're doing, we're always considering what could a bad guy do? What could a bad guy take advantage of, and attempt to prevent it. And AMD does the same thing. You can look at all the technologies they have in their AMD processor. They're, they're making sure their processor is secure. We're making sure our platform is secure so the customer doesn't have to worry about it. So that's something where the customer can trust us. They can trust AMD, so they know that that's not the area where they, they have to expend their bandwidth. They can expend their bandwidth on the security of other parts of the, the solution, versus knowing that the platform and the CPU is secure. >>And beyond that, we create features and capabilities that they can take advantage of. In the, in the case of AMD, a lot of their capabilities are things that the software stack and the OS can take advantage of. We have capabilities on the client side that the software and that they can take advantage of, whether it's server configuration lock or whatever. We try to create features that are easy for them to use to make their environments more secure. So we're making it so they can trust the platform, they can trust the processor, they don't have to worry about that. And then we have features and capabilities that let them solve some of the problems easier. So we're, we're trying to, to help them with that skills gap by making certain things easier and making certain things that they don't even have to worry about. >>Right.
It sounds like allowing them to be much more strategic about the security skills that they do have. My last question for you, Kevin, is Gen 11 available now? Where can folks go to get their hands on it? >>So Gen 11 was announced earlier this month. The products will actually be shipping before the end of this year, before the end of 2022. And you can go to our website and find all about our compute security. So it all that information's available on our website. >>Awesome. Kevin, it's been a pleasure talking to you, unpacking Gen 11, the value in it, why security is fundamental to the uncompromising nature with which HPE and partners have really updated the system and the rest of world coverage that you guys are enabling. We appreciate your insights on your time, Kevin. >>Thank you very much, Lisa. Appreciate >>It. And we want to let you and the audience know, check out hpe.com/info/compute for more info on 11. Thanks for watching.
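Kevin's description of SPDM amounts to a certificate-based challenge-response between the platform's root of trust and each option card before control is handed to the OS. The sketch below is a loose, hypothetical illustration of that idea; the `card` object and its methods are invented for the example, and the code is neither HPE firmware nor the full DMTF SPDM protocol (it omits version and capability negotiation, measurement blocks, and session establishment).

```python
# Loose illustration of SPDM-style option-card attestation.
# Hypothetical interfaces -- not HPE's implementation and not the full DMTF SPDM flow.
import os
from dataclasses import dataclass

@dataclass
class AttestationResult:
    slot: int
    trusted: bool
    reason: str

def attest_option_card(card, trusted_roots) -> AttestationResult:
    # 1. Pull the device certificate chain and check it against trusted vendor roots.
    chain = card.get_certificate_chain()
    if not chain.verify(trusted_roots):
        return AttestationResult(card.slot, False, "untrusted certificate chain")

    # 2. Challenge the card with a fresh nonce; the card signs the nonce plus a
    #    digest of its running firmware with the key bound to its certificate.
    nonce = os.urandom(32)
    response = card.challenge(nonce)
    if not chain.leaf_public_key().verify(response.signature,
                                          nonce + response.firmware_digest):
        return AttestationResult(card.slot, False, "challenge signature did not verify")

    # 3. Compare the reported firmware measurement against known-good values.
    if response.firmware_digest not in card.known_good_digests():
        return AttestationResult(card.slot, False, "unexpected firmware measurement")

    return AttestationResult(card.slot, True, "verified")
```

The point of the flow is the one Kevin makes: even a device with DMA access only gets handed off to the OS after its identity and firmware measurements check out.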
Sean Knapp, Ascend io | AWS re:Invent 2022 - Global Startup Program
>>And welcome back to theCUBE everyone. I'm John Walls to continue our coverage here of AWS re:Invent 22. We're part of the AWS Startup Showcase, the global startup program that AWS so proudly sponsors, and with us to talk about what they're doing now in the AWS space, Sean Knapp, the CEO of Ascend.io. Sean, good to have you here with us. We appreciate >>It. Thanks for having me, >>John. Yeah, thanks for the time. First off, gotta show the t-shirt. You caught my attention. Big data is a cluster. I don't think you get a lot of argument from some folks, right? But it's your job to make some sense of it, is it not? Yeah. Tell us about Ascend.io. >>Sure. Ascend.io is a data automation platform. What we do is connect a lot of the, the disparate parts of what data teams do when they create ETL and ELT data pipelines. And we use advanced levels of automation to make it easier and faster for them to build these complex systems and have their world be a little bit less of a, a cluster. >>All right. So let's get into automation a little bit then. Again, your definition of automation and how you're applying it to your business case. >>Absolutely. You know, what we see oftentimes is as spaces mature and evolve, the number of repetitive and repeatable tasks that actually become far less differentiating, but far more taxing if you will, right, to the business, start to accumulate as those common patterns emerge. And, and, you know, as we see standardization around tech stacks, like on Amazon and on Snowflake and on Databricks, and as you see those patterns really start to, to formalize and standardize, it opens up the door to basically not have your team have to do all those things anymore and write code or perform the same actions that they used to always have to, and you can lean more on technology to properly automate and remove the, the monotony of those tasks and give your teams greater leverage. >>All right. So, so let's talk about at least maybe your, the journey, say in the past 18 months in terms of automation and, and what have you seen from a trend perspective and how are you trying to address that in order to, to meet that need? >>Yeah, I think the last 18 months have become, you know, really exciting as we've seen both that, you know, a very exciting boom and bust cycle that are driving a lot of other macro behaviors. You know, what we've seen over the last 18 months is far greater adoption of the, the standard, what we call the data planes, the, the architectures around Snowflake and Databricks and, and Amazon. And what that's created as a result is the emergence of what I would call is the next problem. You know, as you start to solve that category of how >>You, that's it always works too, isn't >>It? Yeah, exactly. Always >>Works that >>This is the wonderful thing about technology is the job security. There's always the next problem to go solve. And that's what we see is, you know, as we we go into cloud, we get that infinite scale, infinite capacity, infinite flexibility. And you know, with these modern now data platforms, we get that infinite ability to store and process data incredibly quickly with incredible ease. And so what, what do most organizations do? You take a ton of new bodies, like all the people who wanted to do those like really cool things with data, you're like, okay, now you can. And so you start throwing a lot more use cases, you start creating a lot more data products, you start doing a lot more things with data.
And this is really where that third category starts to emerge, which is you get this data mess, not mesh, but the data mess. >>You get a cluster cluster, you get a cluster exactly where the complexity skyrockets. And as a result that that rapid innovation that, that you are all looking for and, and promised just comes to a screeching halt as you're just, just like trying to swim through molasses. And as a result, this is where that, that new awareness around automation starts really heightened. You know, we, we did a really interesting survey at the start of this year, did it as a blind survey, independent third party surveyed, 500 chief data officers, data scientists, data architects, and asked them a plethora of questions. But one of the questions we asked them was, do you currently or do you intend on investing in data automation to increase your team's productivity? And what was shocking, and I was very surprised by this, okay, what was shocking was only three and a half percent said they do today. Which is really interesting because it really hones in on this notion of automation is beyond what a lot of a think of, you know, tooling and enhancements today, only three and a half percent today had it, but 88.5% said they intend on making data automation investments in the next 12 months. And that stark contrast of how many people have a thing and how many people want that benefit of automation, right? I think it is incredibly critical as we look to 2023 and beyond. >>I mean, this seems like a no-brainer, does it not? I mean, know it is your business, of course you agree with me, but, but of course, of course what brilliant statement. But it is, it seems like, you know, the more you're, you're able to automate certain processes and then free up your resources and your dollars to be spent elsewhere and your, and your human capital, you know, to be invested elsewhere. That just seems to be a layup. I'm really, I'm very surprised by that three and a half percent figure >>I was too. I actually was expecting it to be higher. I was expecting five to 10%. Yeah. As there's other tools in the, the marketplace around ETL tools or orchestration tools that, that some would argue fit in the automation category. And I think the, what, what the market is telling us based on, on that research is that those themselves are, don't qualify as automation. That, that the market has a, a larger vision for automation. Something that is more metadata driven, more AI back, that takes us a greater leap and of leverage for the teams than than what the, the existing capabilities in the industry today can >>Afford. Okay. So if you got this big leap that you can make, but, but, but maybe, you know, should sites be set a little lower, are you, are you in danger of creating too much of an expectation or too much of a false hope? Because you know, I mean sometimes incremental increases are okay. I >>Agree. I I I think the, you know, I think you wanna do a little bit of both. I think you, you want to have a plan for, for reaching for the stars and you gotta be really pragmatic as well. Even inside of a a suni, we actually have a core value, which is build for 10 x plan for a hundred x and so know where you're going, right? But, but solve the problems that are right in front of you today as, as you get to that next scale. 
And I think the, the really important part for a lot of companies is how do you think about what that trajectory is and be really smart around where you choose to invest as you, one of the, the scenes that we have is last year's innovation is next year's anchor around your neck. And that's because we, we were in this very fortunately, so this really exciting, rapidly moving innovative space, but the thing that was your advantage not too long ago is everybody can move so quickly now becomes commonplace and a year or two later, if you don't jump on whatever that next innovation is that the industry start to standardize on, you're now on hook paying massive debt and, and paying, you know, you thought you had, you know, home mortgage debt and now you're paying the worst of credit card debt trying to pay that down and maintain your velocity. >>It's >>A whole different kind of fomo, right? I'm fair, miss, I'm gonna miss out. What am I missing out on? What the next big thing exactly been missing out >>On that? And so we encourage a lot of folks, you know, as you think about this as it pertains to automation too, is you solve for some of the problems right in front of you, but really make sure that you're, you're designing the right approach that as you stack on, you know, five times, 10 times as many people building data products and, and you, you're, you're your volume and library of, of data weaving throughout your, your business, make sure you're making those right investments. And that's one of the reasons why we do think automation is so important and, and really this, this next generation of automation, which is a, a metadata and AI back to level of automation that can just achieve and accomplish so much more than, than sort of traditional norms. >>Yeah. On that, like, as far as Dex Gen goes, what do you think is gonna be possible that cloud sets the stage for that maybe, you know, not too long ago seem really outta reach, like, like what's gonna give somebody to work on that 88% in there that's gonna make their spin come your way? >>Ah, good question. So I, I think there's a couple fold. I, you know, I think the, right now we see two things happening. You know, we see large movements going to the, the, the dominant data platforms today. And, and you know, frankly, one of the, the biggest challenges we see people having today is just how do you get data in which is insanity to me because that's not even the value extraction, that is the cost center piece of it. Just get data in so you can start to do something with it. And so I think that becomes a, a huge hurdle, but the access to new technologies, the ability to start to unify more of your data and, and in rapid fashion, I think is, is really important. I think as we start to, to invest more in this metadata backed layer that can connect that those notions of how do you ingest your data, how do you transform it, how do you orchestrate it, how do you observe it? One of the really compelling parts of this is metadata does become the new big data itself. And so to do these really advanced things to give these data teams greater levels of automation and leverage, we actually need cloud capabilities to process large volumes of not the data, but the metadata around the data itself to deliver on these really powerful capabilities. And so I think that's why the, this new world that we see of the, the developer platforms for modern data cloud applications actually benefit from being a cloud native application themselves. 
>>So before you take off, talk about the AWS relationship part of the startup showcase part of the growth program. And we've talked a lot about the cloud, what it's doing for your business, but let's just talk about again, how integral they have been to your success and, and likewise what you're thinking maybe you bring to their table too. Yeah, >>Well we bring a lot to the table. >>Absolutely. I had no doubt about that. >>I mean, honestly, it, working with with AWS has been truly fantastic. Yep. You know, I think, you know, as a, a startup that's really growing and expanding your footprint, having access to the resources in AWS to drive adoption, drive best practices, drive awareness is incredibly impactful. I think, you know, conversely too, the, the value that Ascend provides to the, the AWS ecosystem is tremendous leverage on onboarding and driving faster use cases, faster adoption of all the really great cool, exciting technologies that we get to hear about by bringing more advanced layers of automation to the existing product stack, we can make it easier for more people to build more powerful things faster and safely. Which I think is what most businesses at reinvent really are looking for. >>It's win-win, win-win. Yeah. That's for sure. Sean, thanks for the time. Thank you John. Good job on the t-shirt and keep up the good work. Thank you very much. I appreciate that. Sean Na, joining us here on the AWS startup program, part of their of the Startup Showcase. We are of course on the Cube, I'm John Walls. We're at the Venetian in Las Vegas, and the cube, as you well know, is the leader in high tech coverage.
SUMMARY :
We're part of the AWS Startup Showcase is the global startup program I don't think you get a lot of argument from some folks, And we use advanced levels of automation to make it easier and faster for them to build automation and how you're applying it to your business case. And, and, you know, as we see standardization around tech stacks, the journey, say in the past 18 months in terms of automation and, and what have you seen from a Yeah, I think the last 18 months have become, you know, really exciting as we've Yeah, exactly. And that's what we see is, you know, as we we go into cloud, But one of the questions we asked them was, do you currently or you know, the more you're, you're able to automate certain processes and then free up your resources and your and of leverage for the teams than than what the, the existing capabilities Because you know, I mean sometimes incremental increases But, but solve the problems that are right in front of you today as, as you get to that next scale. What the next big thing exactly been And so we encourage a lot of folks, you know, as you think about this as it pertains to automation too, cloud sets the stage for that maybe, you know, not too long ago seem And, and you know, frankly, one of the, the biggest challenges we see people having today is just how do So before you take off, talk about the AWS relationship part of the startup showcase I had no doubt about that. You know, I think, you know, as a, a startup that's really growing and expanding your footprint, We're at the Venetian in Las Vegas, and the cube, as you well know,
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
five | QUANTITY | 0.99+ |
Shaun Knapps | PERSON | 0.99+ |
John Walls | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Sean Knapp | PERSON | 0.99+ |
John | PERSON | 0.99+ |
Sean | PERSON | 0.99+ |
10 times | QUANTITY | 0.99+ |
Sean Na | PERSON | 0.99+ |
88.5% | QUANTITY | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
five times | QUANTITY | 0.99+ |
next year | DATE | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
today | DATE | 0.99+ |
2023 | DATE | 0.99+ |
last year | DATE | 0.99+ |
88% | QUANTITY | 0.99+ |
500 chief data officers | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
10% | QUANTITY | 0.99+ |
One | QUANTITY | 0.99+ |
third category | QUANTITY | 0.99+ |
both | QUANTITY | 0.98+ |
Venetian | LOCATION | 0.97+ |
three and a half percent | QUANTITY | 0.97+ |
First | QUANTITY | 0.96+ |
this year | DATE | 0.96+ |
a year | DATE | 0.96+ |
Ascend | ORGANIZATION | 0.96+ |
two things | QUANTITY | 0.95+ |
Send IO | TITLE | 0.9+ |
last 18 months | DATE | 0.85+ |
10 x | QUANTITY | 0.83+ |
next 12 months | DATE | 0.83+ |
hundred | QUANTITY | 0.8+ |
22 | TITLE | 0.78+ |
one of the questions | QUANTITY | 0.77+ |
AS Send IO | ORGANIZATION | 0.76+ |
past 18 months | DATE | 0.73+ |
two later | DATE | 0.72+ |
Snowflake | ORGANIZATION | 0.71+ |
three | QUANTITY | 0.71+ |
Startup Showcase | EVENT | 0.7+ |
half percent | QUANTITY | 0.67+ |
Send io | TITLE | 0.65+ |
couple fold | QUANTITY | 0.62+ |
2022 - Global Startup Program | TITLE | 0.59+ |
Dex Gen | COMMERCIAL_ITEM | 0.44+ |
Reinvent | EVENT | 0.38+ |
Cube | PERSON | 0.35+ |
Alan Bivens & Becky Carroll, IBM | AWS re:Invent 2022
(upbeat music) (logo shimmers) >> Good afternoon everyone, and welcome back to AWS re Invent 2022. We are live here from the show floor in Las Vegas, Nevada, we're theCUBE, my name is Savannah Peterson, joined by John Furrier, John, are you excited for the next segment? >> I love the innovation story, this next segment's going to be really interesting, an example of ecosystem innovation in action, it'll be great. >> Yeah, our next guests are actually award-winning, I am very excited about that, please welcome Alan and Becky from IBM. Thank you both so much for being here, how's the show going for ya? Becky you got a, just a platinum smile, I'm going to go to you first, how's the show so far? >> No, it's going great. There's lots of buzz, lots of excitement this year, of course, three times the number of people, but it's fantastic. >> Three times the number of people- >> (indistinct) for last year. >> That is so exciting, so what is that... Do you know what the total is then? >> I think it's over 55,000. >> Ooh, loving that. >> John: A lot. >> It's a lot, you can tell by the hallways- >> Becky: It's a lot. >> John: It's crowded, right. >> Yeah, you can tell by just the energy and the, honestly the heat in here right now is pretty good. Alan, how are you feeling on the show floor this year? >> Awesome, awesome, we're meeting a lot of partners, talking to a lot of clients. We're really kind of showing them what the new IBM, AWS relationship is all about, so, beautiful time to be here. >> Well Alan, why don't you tell us what that partnership is about, to start us off? >> Sure, sure. So the partnership started with the relationship in our consulting services, and Becky's going to talk more about that, right? And it grew, this year it grew into the IBM software realm where we signed an agreement with AWS around May timeframe this year. >> I love it, so, like you said, you're just getting started- >> Just getting started. >> This is the beginning of something magic. >> We're just scratching the surface with this right? >> Savannah: Yeah. >> But it represents a huge move for IBM to meet our clients where they are, right? Meet 'em where they are with IBM technology, enterprise technology they're used to, but with the look and feel and usage model that they're used to with AWS. >> Absolutely and so to build on that, you know, we're really excited to be an AWS Premier Consulting Partner. We've had this relationship for a little over five years with AWS, I'd say it's really gone up a notch over the last year or two as we've been working more and more closely, doubling down on our investments, doubling down on our certifications, we've got over 15,000 people certified now, almost 16,000 actually- >> Savannah: Wow. >> 14 competencies, 16 service deliveries and counting. We cover a mass of information and services from Data Analytics, IoT, AI, all the way to Modernization, SAP, Security Services, right. So it's pretty comprehensive relationship, but in addition to the fantastic clients that we both share, we're doing some really great things around joint industry solutions, which I'll talk about in a few minutes and some of those are being launched at the conference this year, so that's even better. But the most exciting thing to me right now is that we just found out that we won the Global Innovator Partner of the Year award, and a LATAM Partner of the Year award. >> Savannah: Wow. 
>> John: That's (indistinct) >> So, super excited for IBM Consulting to win this, we're honored and it's just a great, exciting part to the conference. >> The news coming out of this event, we know tomorrow's going to be the big keynote for the new Head of the ecosystem, Ruba. We're hearing that it's going to be all about the ecosystem, enabling value creation, enabling new kinds of solutions. We heard from the CEO of AWS, this nextGen environment's upon us, it's very solution-oriented- >> Becky: Absolutely. >> A lot of technology, it's not an either or, it's an and equation, this is a huge new shift, I won't say shift, a continuation for AWS, and you guys, we've been covering, so you got the and situation going on... Innovation solutions and innovation technology and customers can choose, build a foundation or have it out of the box. What's your reaction to that? Do you think it's going to go well for AWS and IBM? >> I think it fits well into our partnership, right? The the thing you mentioned that I gravitate to the most is the customer gets to choose and the thing that's been most amazing about the partnership, both of these companies are maniacally focused on the customer, right? And so we've seen that come about as we work on ways the customer to access our technology, consume the technology, right? We've sold software on-prem to customers before, right, now we're going to be selling SaaS on AWS because we had customers that were on AWS, we're making it so that they can more easily purchase it by being in the marketplace, making it so they can draw down their committed spin with AWS, their customers like that a lot- [John] Yeah. >> Right. We've even gone further to enable our distributor network and our resellers, 'cause a lot of our customers have those relationships, so they can buy through them. And recently we've enabled the customer to leverage their EDP, their committed spend with AWS against IBM's ELA and structure, right, so you kind of get a double commit value from a customer point of view, so the amazing part is just been all about the customers. >> Well, that's interesting, you got the technology relationship with AWS, you mentioned how they're engaging with the software consumption in marketplace, licensed deals, there's all kinds of new business model innovations on top of the consumption and building. Then you got the consulting piece, which is again, a big part of, Adam calls it "Business transformation," which is the result of digital transformation. So digital transformation is the process, the outcome is the business transformation, that's kind of where it all kind of connects. Becky, what's your thoughts on the Amazon consulting relationships? Obviously the awards are great but- >> They are, no- >> What's the next step? Where does it go from here? >> I think the best way for me to describe it is to give you some rapid flyer client examples, you know, real customer stories and I think that's where it really, rubber meets the road, right? So one of the most recent examples are IBM CEO Arvind Krishna, in his three key results actually mentioned one of our big clients with AWS which is the Department of Veterans Affairs in the US and is an AI solution that's helped automate claims processing. 
So the veterans are trying to get their benefits, they submit the claims, snail mail, phone calls, you know, some in person, some over email- >> Savannah: Oh, it gives me all the feels hearing you talk about this- >> It's a process that used to take 25 to 30 days depending on the complexity of the claims, we've gotten it down with AWS down to within 24 hours we can get the veterans what they need really quickly so, I mean, that's just huge. And it's an exciting story that includes data analytics, AI and automation, so that's just one example. You know, we've got examples around SAP where we've developed a next generation SAP for HANA Platform for Phillips Carbon Black hosted on AWS, right? For them, it created an integrated, scalable, digital business, that cut out a hundred percent the capital cost from on-prem solutions. We've got security solutions around architectures for telecommunications advisors and of course we have lots of examples of migration and modernization and moving workloads using Red Hat to do that. So there's a lot of great client examples, so to me, this is the heart of what we do, like you said, both companies are really focused on clients, Amazon's customer-obsessed, and doing what we can for our clients together is where we get the impact. >> Yeah, that's one of the things that, it sounds kind of cliche, "Oh we're going to work backwards from the customer," I know Amazon says that, they do, you guys are also very customer-focused but the customers are changing. So I'd love to get your reaction because we're now in that cloud 2.0, I call that 2.0 or you got the Amazon Classic, my word, and then Next Gen Cloud coming, the customers are different, they're transforming because IT's not a department anymore, it's in the DevOps pipeline. The developers are driving a lot of IT but security and on DataOps, it's the structural change happening at the customer, how do you guys see that at IBM? I know we cover a lot of Red Hat and Arvind talks to us all the time, meeting the customer where they are, where are they? Where are the customers? Can you share your perspective on where they are? >> It's an astute observation, right, the customer is changing. We have both of those sets of customers, right, we still have the traditional customer, our relationship with Central IT, right, and driving governance and all of those things. But the folks that are innovating many times they're in the line of business, they're discovering solutions, they're building new things. And so we need our offerings to be available to them. We need them to understand how to use them and be convenient for these guys and take them through that process. So that change in the customer is one that we are embracing by making our offerings easy to consume, easy to use, and easy to build into solutions and then easy to parlay into what central IT needs to do for governance, compliance, and these types of things, it's becoming our new bread and butter. >> And what's really cool is- >> Is that easy button- >> We've been talking about- >> It's the easy button. >> The easy button a lot on the show this week and if you just, you just described it it's exactly what people want, go on Becky. >> Sorry about that, I was going to say, the cool part is that we're co-creating these things with our clients. 
So we're using things like the Amazon Working Backward that you just mentioned.` We're using the IBM garage methodology to get innovative to do design working, design thinking workshops, and think about where is that end user?, Where is that stakeholder? Where are they, they thinking, feeling, doing, saying how do we make the easier? How do we get the easy button for them so that they can have the right solutions for their businesses. We work mostly with lines of business in my part of the organization, and they're hungry for that. >> You know, we had a quote on theCUBE yesterday, Savannah remember one of our guests said, you know, back in the, you know, 1990s or two 2000s, if you had four production apps, it was considered complex >> Savannah: Yeah. >> You know, now you got hundreds of workloads, thousands of workloads, so, you know, this end-to-end vision that we heard that's playing out is getting more complex, but the easy button is where these abstraction layers and technology could come in. So it's getting more complex because there's more stuff but it's getting easier because- >> Savannah: What is the magnitude? >> You can make it easier. This is a dynamic, share your thoughts on that. >> It's getting more complex because our clients need to move faster, right, they need to be more agile, right, so not only are there thousands of applications there are hundreds of thousands microservices that are composing those applications. So they need capabilities that help them not just build but govern that structure and put the right compliance over that structure. So this relationship- >> Savannah: Lines of governance, yeah- >> This relationship we built with AWS is in our key areas, it's a strategic move, not a small thing for us, it covers things like automation and integration where you need to build that way. It covers things like data and AI where you need to do the analytics, even things like sustainability where we're totally aligned with what AWS is talking about and trying to do, right, so it's really a good match made there. >> John: It really sounds awesome. >> Yeah, it's clear. I want to dig in a little bit, I love the term, and I saw it in my, it stuck out to me in the notes right away, getting ready for you all, "maniacal", maniacal about the customer, maniacal about the community, I think that's really clear when we're talking about 24 days to 24 hours, like the veteran example that you gave right there, which I genuinely felt in my heart. These are the types of collaborations that really impact people's lives, tell me about some of the other trends or maybe a couple other examples you might have because I think sometimes when our head's in the clouds, we talk a lot about the tech and the functionality, we forget it's touching every single person walking around us, probably in a different way right now than we may even be aware- >> I think one of the things that's been, and our clients have been asking us for, is to help coming into this new era, right, so we've come out of a pandemic where a lot of them had to do some really, really basic quick decisions. Okay, "Contact Center, everyone work from home now." Okay, how do we do that? Okay, so we cobbled something together, now we're back, so what do we do? 
How do we create digital transformation around that so that we are going forward in a really positive way that works for our clients or for our contact center reps who are maybe used to working from home now versus what our clients need, the response times they need, and AWS has all the technology that we're working with like Amazon Connect to be able to pull those things together with some of our software like Watson Assistant. So those types of solutions are coming together out of that need and now we're moving into the trend where economy's getting tougher, right? More cost cutting potentially is coming, right, better efficiencies, how do we leverage our solutions and help our clients and customers do that? So I think that's what the customer obsession's about, is making sure we really understand where their pain points are, and not just solve them but maybe get rid of 'em. >> John: Yeah, great one. >> Yeah. And not developing in a silo, I mean, it's a classic subway problem, you got to be communicating with your community if you want to continue to serve them. And IBM's been serving their community for a very long time, which is super impressive, do you think they're ready for the challenge? >> Let's do it. >> So we have a new thing on theCUBE. >> Becky: Oh boy. >> We didn't warn you about this, but here we go. Although you told, Alan, you've mentioned you're feeling very cool with the microphone on, so I feel like, I'm going to put you in the hot seat first on this one. Not that I don't think Becky's going to smash it, but I feel like you're channeling the power of the microphone. New challenges, treat it like a 32nd Instagram reel-style story, a hot take, your thought leadership, money clip, you know, this is your moment. What is the biggest takeaway, most important thing happening at the show this year? >> Most important thing happening at the show? Well, I'm glad you mentioned it that way, because earlier you said we may have to sing (presenters and guests all laughing) >> So this is much better than- >> That's actually part of the close. >> John: Hey, hey. >> Don't worry, don't worry, I haven't forgotten that, it's your Instagram reel, go. (Savannah laughs) >> Original audio happening here on theCUBE, courtesy of Alan and IBM, I am so here for it. >> So what my takeaway and what I would like for the audience to take away, out of this conversation especially, but even broadly, the IBM AWS relationship is really like a landmark type of relationship, right? It's one of the biggest that we've established on both sides, right- >> Savannah: It seems huge, okay you are too monolith in the world of companies, like, yeah- >> Becky: Totally. >> It's huge. And it represents a strategic change on both sides, right? With that customer- >> Savannah: Fundamentally- >> In the middle right? >> Savannah: Yeah. >> So we're seeing things like, you know, AWS is working with us to make sure we're building products the way that a AWS client likes to consume them, right, so that we have the right integration, so they get that right look and feel, but they still get the enterprise level capabilities they're used to from IBM, right? So the big takeaway I like for people to take, is this is a new IBM, it's a new AWS and IBM relationship, and so expect more of that goodness, more of those new things coming out of it. [John] Excellent, wow. >> That was great, well done, you nailed it. and you're going to finish with some acapella, right? (Alan laughs) >> You got a pitch pipe ready? 
(everyone laughs) >> All right Becky, what about you? Give us your hot take. >> Well, so for me, the biggest takeaway is just the way this relationship has grown so much, so, like you said, it's the new IBM it's the new AWS, we were here last year, we had some good things, this year we're back at the show with joint solutions, have been jointly funded and co-created by AWS and IBM. This is huge, this is a really big opportunity and a really big deal that these two companies have come together, identified joint customer needs and we're going after 'em together and we're putting 'em in the booth. >> Savannah: So cool. And there's things like smart edge for welding solutions that are out there. >> Savannah: Yes. >> You know, I talked about, and it's, you know you wouldn't think, "Okay, well what's that?" There's a lot to that, a lot of saving when you look at how you do welding and if you apply things like visual AI and auditory AI to make sure a weld is good. I mean, I think these are, these things are cool, I geek out on these things- >> John: Every vertical. >> I'm geeking out with you right now, just geeking- >> Yeah, yeah, yeah, so- >> Every vertical is infected. >> They are and it's so impactful to have AWS just in lockstep with us, doing these solutions, it's so different from, you know, you kind of create something that you think your customers like and then you put it out there. >> Yeah, versus this moment. >> Yeah, they're better together. >> It's strategic partnership- >> It's truly a strategic partnership. and we're really bringing that this year to reinvent and so I'm super excited about that. >> Congratulations. >> Wow, well, congratulations again on your awards, on your new partnership, I can't wait to hear, I mean, we're seven months in, eight months in to this this SaaS side of the partnership, can't wait to see what we're going to be talking about next year when we have you back on theCUBE. >> I know. >> and maybe again in between now and then. Alan, Becky, thank you both so much for being here, this was truly a joy and I'm sure you gave folks a taste of the new IBM, practicing what you preach. >> John: Great momentum. >> And I'm just, I'm so impressed with the two companies collaborating, for those of us OGs in tech, the big companies never collaborated before- >> Yeah. >> John: Yeah. Joint, co-created solutions. >> And you have friction between products and everything else. I mean's it's really, co-collaboration is, it's a big theme for us at all the shows we've been doing this year but it's just nice to see it in practice too, it's an entirely different thing, so well done. >> Well it's what gets me out of the bed in the morning. >> All right, congratulations. >> Very clearly, your energy is contagious and I love it and yeah, this has been great. Thank all of you at home or at work or on the International Space Station or wherever you might be tuning in from today for joining us, here in Las Vegas at AWS re Invent where we are live from the show floor, wall-to-wall coverage for three days with John Furrier. My name is Savannah Peterson, we're theCUBE, the source for high tech coverage. (cheerful upbeat music)
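Becky mentions stitching Amazon Connect together with IBM software such as Watson Assistant for contact-center work. As a rough illustration of what that hand-off can look like, here is a minimal AWS Lambda sketch of the kind an Amazon Connect contact flow can invoke: it forwards the caller's utterance to an assistant service and hands flat key/value attributes back to the flow. The assistant endpoint, payload shape, and parameter names are assumptions for illustration, not a documented IBM integration.

```python
# Hedged sketch: a Lambda function an Amazon Connect contact flow might invoke.
# The assistant call below is a placeholder -- the endpoint, auth, and payload
# shape for Watson Assistant are assumptions, not a documented integration.
import json
import os
import urllib.request

ASSISTANT_URL = os.environ.get("ASSISTANT_URL", "https://example.com/assistant")  # hypothetical

def lambda_handler(event, context):
    # Amazon Connect passes contact data and flow parameters under "Details".
    contact = event.get("Details", {}).get("ContactData", {})
    params = event.get("Details", {}).get("Parameters", {})
    utterance = params.get("customer_utterance", "")

    req = urllib.request.Request(
        ASSISTANT_URL,
        data=json.dumps({"text": utterance,
                         "contact_id": contact.get("ContactId")}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        answer = json.loads(resp.read().decode())

    # Connect expects a flat map of string keys to string values back.
    return {
        "assistant_reply": str(answer.get("reply", "")),
        "suggested_queue": str(answer.get("queue", "general")),
    }
```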
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
AWS | ORGANIZATION | 0.99+ |
Alan | PERSON | 0.99+ |
25 | QUANTITY | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Savannah | PERSON | 0.99+ |
Savannah Peterson | PERSON | 0.99+ |
John | PERSON | 0.99+ |
Savannah Peterson | PERSON | 0.99+ |
Becky | PERSON | 0.99+ |
Adam | PERSON | 0.99+ |
Arvind Krishna | PERSON | 0.99+ |
Ruba | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
24 hours | QUANTITY | 0.99+ |
last year | DATE | 0.99+ |
32nd | QUANTITY | 0.99+ |
seven months | QUANTITY | 0.99+ |
Department of Veterans Affairs | ORGANIZATION | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
eight months | QUANTITY | 0.99+ |
two companies | QUANTITY | 0.99+ |
next year | DATE | 0.99+ |
Three times | QUANTITY | 0.99+ |
yesterday | DATE | 0.99+ |
Scott Castle, Sisense | AWS re:Invent 2022
>>Good morning fellow nerds and welcome back to AWS re:Invent. We are live from the show floor here in Las Vegas, Nevada. My name is Savannah Peterson, joined with my fabulous co-host John Furrier. Day two keynotes are rolling. >>Yeah. What are you thinking about this? This is the day where everything comes, so the cork gets popped off the bottle, all the announcements start flowing out tomorrow. You hear machine learning from Swami, a lot more in depth around AI probably. And then developers with Werner Vogels, the CTO who wrote the seminal paper in the early two thousands around web services. So again, just another great year of next level cloud. Big discussion of data in the keynote, bulk of the time was talking about data and business intelligence, making business transformation easier. Is that what people want? They want the easy button and we're gonna talk a lot about that in this segment. I'm really looking forward to this interview. >>Easy button. We all want the >>Easy, we want the easy button. >>I love that you brought up champagne. It really feels like a champagne moment for the AWS community as a whole. Being here on the floor feels a bit like the before times. I don't want to jinx it. Our next guest, Scott Castle, from Sisense. Thank you so much for joining us. How are you feeling? How's the show for you going so far? Oh, >>This is exciting. It's really great to see the changes that are coming in AWS. It's great to see the excitement and the activity around how we can do so much more with data, with compute, with visualization, with reporting. It's fun. >>It is very fun. I just got a note. I think you have the coolest last name of anyone we've had on the show so far, Castle. Oh, thank you. I'm here for it. I'm sure no one's ever said that before, but just in case our audience isn't familiar, tell us about >>So, Sisense is an embedded analytics platform. It's used to take the queries and the analysis that you can power off of Aurora and Redshift and everything else and bring them to the end user in the applications they already know how to use. So it's all about embedding insights into tools. >>Embedded has been a real theme. Nobody wants to, I keep using the analogy of multiple tabs, nobody wants to have to leave where they are. They want it all to come in there. Yep. Now this space is older than I think everyone at this table, BI's been around since 1958. Yep. How do you see Sisense playing a role in the evolution there, now that we're in a different generation of analytics? >>Yeah, I mean, BI started, as you said, in '58 with Peter Luhn's paper that he wrote for IBM, and it kind of became popular in the late eighties and early nineties. And that was Gen one BI, that was Cognos and Business Objects and Lotus 1-2-3, think like green and black screen days. And the way things worked back then is if you ran a business and you wanted to get insights about that business, you went to IT with a big check in your hand and said, Hey, can I have a report? And they'd come back and here's a report. And it wasn't quite right. You'd go back and cycle, cycle, cycle and eventually you'd get something. And it wasn't great. It wasn't all that accurate, but it's what we had. And then that whole thing changed in about 2004 when self-service BI became a thing. And the whole idea was instead of going to IT with a big check in your hand, how about you make your own charts? >>And that was totally transformative. Everybody started doing this and it was great.
And it was all built on semantic modeling and having very fast databases and data warehouses. Here's the problem: the tools to get to those insights needed to serve both business users like you and me and also power users who could do a lot more complex analysis and transformation. And as the tools got more complicated, the barrier to entry for everyday users got higher and higher and higher, to the point where now you look at Gartner and Forrester and IDC this year, they're all reporting the same statistic. Between 10 and 20% of knowledge workers have learned business intelligence, and everybody else is just waiting in line for a data analyst or a BI analyst to get a report for them. And that's why the focus on embedded is suddenly showing up so strong, because little startups have been putting analytics into their products. People are seeing, oh my, this doesn't have to be hard. It can be easy, it can be intuitive, it can be native. Well why don't I have that for my whole business? So suddenly there's a lot of focus on how do we embed analytics seamlessly? How do we embed the investments people make in machine learning and data science? How do we bring those back to the users who can actually operationalize that? Yeah. And that's what Sisense does. Yeah. >>Yeah. It's interesting. Savannah, you know, data processing used to be what the IT department used to be called back in the day, data processing. Now data processing is what everyone wants to do. There's a ton of data we got, we saw the keynote this morning from Adam Selipsky. There was almost a standing ovation, big applause for his announcement around ML powered forecasting with QuickSight Q. My point is people want automation. They want to have this embedded semantic layer in where they are, not having all the process of ETL or all the muck that goes on with aligning the data. All this like a lot of stuff that goes on. How do you make it easier? >>Well, to be honest, I would argue that they don't want that. I think they think they want that, cuz that feels easier. But what users actually want is they want the insight, right? When they are about to make a decision. If you have an ML powered forecast, and Sisense has had that built in for years, now you have an ML powered forecast. You don't need it two weeks before or a week after in a report somewhere. You need it when you're about to decide, do I hire more salespeople or do I put a hundred grand into a marketing program? It's putting that insight at the point of decision that's important. And you don't wanna be waiting to dig through a lot of infrastructure to find it. You just want it when you need it. What's >>The alternative from a time standpoint? So real time insight, which is what you're saying. Yep. What's the alternative? If they don't have that, what's >>The alternative? Is what we are currently seeing in the market. You hire a bunch of BI analysts and data analysts to do the work for you, and you hire enough that your business users can ask questions and get answers in a timely fashion. And by the way, if you're paying attention, there's not enough data analysts in the whole world to do that. Good luck. I am
And it would take weeks and I mean this was only in 2012. We're not talking 1958 here. We're talking, we're talking, well, a decade in, in startup years is, is a hundred years in the rest of the world life. But I think it's really interesting. So talk to us a little bit about infused and composable analytics. Sure. And how does this relate to embedded? Yeah. >>So embedded analytics for a long time was I want to take a dashboard I built in a BI environment. I wanna lift it and shift it into some other application so it's close to the user and that is the right direction to go. But going back to that statistic about how, hey, 10 to 20% of users know how to do something with that dashboard. Well how do you reach the rest of users? Yeah. When you think about breaking that up and making it more personalized so that instead of getting a dashboard embedded in a tool, you get individual insights, you get data visualizations, you get controls, maybe it's not even actually a visualization at all. Maybe it's just a query result that influences the ordering of a list. So like if you're a csm, you have a list of accounts in your book of business, you wanna rank those by who's priorities the most likely to churn. >>Yeah. You get that. How do you get that most likely to churn? You get it from your BI system. So how, but then the question is, how do I insert that back into the application that CSM is using? So that's what we talk about when we talk about Infusion. And SI started the infusion term about two years ago and now it's being used everywhere. We see it in marketing from Click and Tableau and from Looker just recently did a whole launch on infusion. The idea is you break this up into very small digestible pieces. You put those pieces into user experiences where they're relevant and when you need them. And to do that, you need a set of APIs, SDKs, to program it. But you also need a lot of very solid building blocks so that you're not building this from scratch, you're, you're assembling it from big pieces. >>And so what we do aty sense is we've got machine learning built in. We have an LQ built in. We have a whole bunch of AI powered features, including a knowledge graph that helps users find what else they need to know. And we, we provide those to our customers as building blocks so that they can put those into their own products, make them look and feel native and get that experience. In fact, one of the things that was most interesting this last couple of couple of quarters is that we built a technology demo. We integrated SI sensee with Office 365 with Google apps for business with Slack and MS teams. We literally just threw an Nlq box into Excel and now users can go in and say, Hey, which of my sales people in the northwest region are on track to meet their quota? And they just get the table back in Excel. They can build charts of it and PowerPoint. And then when they go to their q do their QBR next week or week after that, they just hit refresh to get live data. It makes it so much more digestible. And that's the whole point of infusion. It's bigger than just, yeah. The iframe based embedding or the JavaScript embedding we used to talk about four or five years >>Ago. APIs are very key. You brought that up. That's gonna be more of the integration piece. How does embedable and composable work as more people start getting on board? It's kind of like a Yeah. A flywheel. Yes. What, how do you guys see that progression? Cause everyone's copying you. 
We see that, but this is a, this means it's standard. People want this. Yeah. What's next? What's the, what's that next flywheel benefit that you guys coming out with >>Composability, fundamentally, if you read the Gartner analysis, right, they, when they talk about composable, they're talking about building pre-built analytics pieces in different business units for, for different purposes. And being able to plug those together. Think of like containers and services that can, that can talk to each other. You have a composition platform that can pull it into a presentation layer. Well, the presentation layer is where I focus. And so the, so for us, composable means I'm gonna have formulas and queries and widgets and charts and everything else that my, that my end users are gonna wanna say almost minority report style. If I'm not dating myself with that, I can put this card here, I can put that chart here. I can set these filters here and I get my own personalized view. But based on all the investments my organization's made in data and governance and quality so that all that infrastructure is supporting me without me worrying much about it. >>Well that's productivity on the user side. Talk about the software angle development. Yeah. Is your low code, no code? Is there coding involved? APIs are certainly the connective tissue. What's the impact to Yeah, the >>Developer. Oh. So if you were working on a traditional legacy BI platform, it's virtually impossible because this is an architectural thing that you have to be able to do. Every single tool that can make a chart has an API to embed that chart somewhere. But that's not the point. You need the life cycle automation to create models, to modify models, to create new dashboards and charts and queries on the fly. And be able to manage the whole life cycle of that. So that in your composable application, when you say, well I want chart and I want it to go here and I want it to do this and I want it to be filtered this way you can interact with the underlying platform. And most importantly, when you want to use big pieces like, Hey, I wanna forecast revenue for the next six months. You don't want it popping down into Python and writing that yourself. >>You wanna be able to say, okay, here's my forecasting algorithm. Here are the inputs, here's the dimensions, and then go and just put it somewhere for me. And so that's what you get withy sense. And there aren't any other analytics platforms that were built to do that. We were built that way because of our architecture. We're an API first product. But more importantly, most of the legacy BI tools are legacy. They're coming from that desktop single user, self-service, BI environment. And it's a small use case for them to go embedding. And so composable is kind of out of reach without a complete rebuild. Right? But with SI senses, because our bread and butter has always been embedding, it's all architected to be API first. It's integrated for software developers with gi, but it also has all those low code and no code capabilities for business users to do the minority report style thing. And it's assemble endless components into a workable digital workspace application. >>Talk about the strategy with aws. You're here at the ecosystem, you're in the ecosystem, you're leading product and they have a strategy. We know their strategy, they have some stuff, but then the ecosystem goes faster and ends up making a better product in most of the cases. 
If you compare, I know they'll take me to school on that, but that's pretty much what we report on. Mongo's doing a great job. They have databases. So you kind of see this balance. How are you guys playing in the ecosystem? What's the feedback? What's it like? What's going on? >>AWS is actually really our best partner. And the reason why is because AWS has been clear for many, many years. They build componentry, they build services, they build infrastructure, they build Redshift, they build all these different things, but they need vendors to pull it all together into something usable. And fundamentally, that's what Sisense does. I mean, we didn't invent SQL, right? We didn't invent jackal or dle. These are underlying analytics technologies, but we're taking the bricks out of the briefcase. We're assembling it into something that users can actually deploy for their use cases. And so for us, AWS is perfect because they focus on the hard bits, the underlying technologies. We assemble those, make them usable for customers. And we get the distribution. And of course AWS loves that. Cause it drives more compute and it drives more consumption. >>How much do they pay you to say that >>Keynote, >>That was a wonderful pitch. That's >>Absolutely, we always say, hey, they got a lot of great goodness in the cloud, but they're not always the best at the solutions that they're trying to bring out, and you guys are making these solutions for customers. Yeah. That resonates with what they got with Amazon. For >>Example, last year we did a technology demo with Comprehend where we put Comprehend inside of a semantic model and we would compile it and then send it back to Redshift. And it takes Comprehend, which is a very cool service, but you kind of gotta be a coder to use it. >>I've been hearing a lot of hype about the semantic layer. What is, what is going on with that? >>The semantic layer is what connects the actual data, the tables in your database, with how they're connected and what they mean, so that a user like you or me who's saying I wanna bar chart with revenue over time can just work with revenue and time. And the semantic layer translates between what we said and what the database knows >>About. So it speaks English and then it converts it to data language. It's >>Exactly >>Right. >>Yeah. It's facilitating the exchange of information. And I love this. So I like that you actually talked about it in the beginning, the knowledge graph and helping people figure out what they might not know. Yeah. I am not a BI analyst by trade and I don't always know what's possible to know. Yeah. And I think it's really great that you're doing that education piece. I'm sure, especially working with AWS companies, depending on their scale, that's gotta be a big part of it. How much does the community play a role in your product development? >>It's huge because I'll tell you, one of the challenges in embedding is someone sees an amazing experience in Outreach or in Seismic and says, I want that. And I want it to be exactly the way my product is built, but I don't wanna learn a lot. And so what you want to do is you want to have a community of people who have already built things who can help lead the way. And our community, we launched a new version of the Sisense community in early 2022 and we've seen 450% growth in that community. And we've gone from an average of one response, >>450%.
I just wanna put a little exclamation point on that. Yeah, yeah. That's awesome. We, >>We've tripled our organic activity. So now if you post this Tysons community, it used to be, you'd get one response maybe from us, maybe from from a customer. Now it's up to three. And it's continuing to trend up. So we're, it's >>Amazing how much people are willing to help each other. If you just get in the platform, >>Do it. It's great. I mean, business is so >>Competitive. I think it's time for the, it's time. I think it's time. Instagram challenge. The reels on John. So we have a new thing. We're gonna run by you. Okay. We just call it the bumper sticker for reinvent. Instead of calling it the Instagram reels. If we're gonna do an Instagram reel for 30 seconds, what would be your take on what's going on this year at Reinvent? What you guys are doing? What's the most important story that you would share with folks on Instagram? >>You know, I think it's really what, what's been interesting to me is the, the story with Redshift composable, sorry. No, composable, Redshift Serverless. Yeah. One of the things I've been >>Seeing, we know you're thinking about composable a lot. Yes. Right? It's, it's just, it's in there, it's in your mouth. Yeah. >>So the fact that Redshift Serverless is now kind becoming the defacto standard, it changes something for, for my customers. Cuz one of the challenges with Redshift that I've seen in, in production is if as people use it more, you gotta get more boxes. You have to manage that. The fact that serverless is now available, it's, it's the default means it now people are just seeing Redshift as a very fast, very responsive repository. And that plays right into the story I'm telling cuz I'm telling them it's not that hard to put some analysis on top of things. So for me it's, it's a, maybe it's a narrow Instagram reel, but it's an >>Important one. Yeah. And that makes it better for you because you get to embed that. Yeah. And you get access to better data. Faster data. Yeah. Higher quality, relevant, updated. >>Yep. Awesome. As it goes into that 80% of knowledge workers, they have a consumer great expectation of experience. They're expecting that five ms response time. They're not waiting 2, 3, 4, 5, 10 seconds. They're not trained on theola expectations. And so it's, it matters a lot. >>Final question for you. Five years out from now, if things progress the way they're going with more innovation around data, this front end being very usable, semantic layer kicks in, you got the Lambda and you got serverless kind of coming in, helping out along the way. What's the experience gonna look like for a user? What's it in your mind's eye? What's that user look like? What's their experience? >>I, I think it shifts almost every role in a business towards being a quantitative one. Talking about, Hey, this is what I saw. This is my hypothesis and this is what came out of it. So here's what we should do next. I, I'm really excited to see that sort of scientific method move into more functions in the business. Cuz for decades it's been the domain of a few people like me doing strategy, but now I'm seeing it in CSMs, in support people and sales engineers and line engineers. That's gonna be a big shift. Awesome. >>Thank >>You Scott. Thank you so much. This has been a fantastic session. We wish you the best at si sense. John, always pleasure to share the, the stage with you. Thank you to everybody who's attuning in, tell us your thoughts. 
We're always eager to hear what features have got you most excited. And as you know, we will be live here from Las Vegas at re:Invent, from the show floor, 10 to six all week except for Friday. We'll give you Friday off. With John Furrier, my name's Savannah Peterson. We're theCUBE, the leader in high tech coverage.
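Castle's description of the semantic layer, translating business terms like "revenue" and "time" into what the database actually knows about, is easier to picture with a small sketch. The toy model below illustrates only that translation step; it is not Sisense's modeling language, and the table, column, and join names are made up for the example.

```python
# Toy semantic layer: business terms mapped to SQL so a user can ask for
# "revenue over time" without knowing the underlying tables. Names are
# illustrative assumptions, not a real Sisense model.
SEMANTIC_MODEL = {
    "measures": {
        "revenue": "SUM(orders.amount_usd)",
    },
    "dimensions": {
        "time": "DATE_TRUNC('month', orders.created_at)",
        "region": "customers.region",
    },
    "joins": {
        "customers": "orders.customer_id = customers.id",
    },
    "base_table": "orders",
}

def compile_query(measure: str, dimension: str, model: dict = SEMANTIC_MODEL) -> str:
    """Translate a (measure, dimension) request into SQL using the model."""
    m = model["measures"][measure]
    d = model["dimensions"][dimension]
    parts = [
        f"SELECT {d} AS {dimension}, {m} AS {measure}",
        f"FROM {model['base_table']}",
    ]
    # Pull in a join only when the chosen dimension references another table.
    for table, condition in model["joins"].items():
        if f"{table}." in d:
            parts.append(f"JOIN {table} ON {condition}")
    parts.append("GROUP BY 1 ORDER BY 1")
    return "\n".join(parts)

# "I want a bar chart of revenue over time" compiles down to plain SQL:
print(compile_query("revenue", "time"))
```

The design point is that "revenue" is defined once in the model, so every chart, NLQ question, or embedded widget that asks for it gets the same definition back.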
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Scott | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Savannah Peterson | PERSON | 0.99+ |
2012 | DATE | 0.99+ |
Peter Lu | PERSON | 0.99+ |
Friday | DATE | 0.99+ |
80% | QUANTITY | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
30 seconds | QUANTITY | 0.99+ |
John | PERSON | 0.99+ |
450% | QUANTITY | 0.99+ |
Excel | TITLE | 0.99+ |
10 | QUANTITY | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Savannah Peterson | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
Office 365 | TITLE | 0.99+ |
IDC | ORGANIZATION | 0.99+ |
1958 | DATE | 0.99+ |
PowerPoint | TITLE | 0.99+ |
20% | QUANTITY | 0.99+ |
Forester | ORGANIZATION | 0.99+ |
Python | TITLE | 0.99+ |
Verner Vos | PERSON | 0.99+ |
early 2022 | DATE | 0.99+ |
Gartner | ORGANIZATION | 0.99+ |
last year | DATE | 0.99+ |
10 seconds | QUANTITY | 0.99+ |
five ms | QUANTITY | 0.99+ |
Las Vegas, Nevada | LOCATION | 0.99+ |
this year | DATE | 0.99+ |
first product | QUANTITY | 0.99+ |
aws | ORGANIZATION | 0.98+ |
one response | QUANTITY | 0.98+ |
late eighties | DATE | 0.98+ |
Five years | QUANTITY | 0.98+ |
2 | QUANTITY | 0.98+ |
tomorrow | DATE | 0.98+ |
Savannah | PERSON | 0.98+ |
Scott Castle | PERSON | 0.98+ |
one | QUANTITY | 0.98+ |
Sisense | PERSON | 0.97+ |
5 | QUANTITY | 0.97+ |
English | OTHER | 0.96+ |
Click and Tableau | ORGANIZATION | 0.96+ |
Andy Sense | PERSON | 0.96+ |
Looker | ORGANIZATION | 0.96+ |
two weeks | DATE | 0.96+ |
next week | DATE | 0.96+ |
early nineties | DATE | 0.95+ |
ORGANIZATION | 0.95+ | |
serverless | TITLE | 0.94+ |
AWS Reinvent | ORGANIZATION | 0.94+ |
Mongo | ORGANIZATION | 0.93+ |
single | QUANTITY | 0.93+ |
Aurora | TITLE | 0.92+ |
Lotus 1 23 | TITLE | 0.92+ |
One | QUANTITY | 0.92+ |
JavaScript | TITLE | 0.92+ |
SES | ORGANIZATION | 0.92+ |
next six months | DATE | 0.91+ |
MS | ORGANIZATION | 0.91+ |
five years | QUANTITY | 0.89+ |
six | QUANTITY | 0.89+ |
a week | DATE | 0.89+ |
Soy Sense | TITLE | 0.89+ |
hundred grand | QUANTITY | 0.88+ |
Redshift | TITLE | 0.88+ |
Adam Lesky | PERSON | 0.88+ |
Day two keynotes | QUANTITY | 0.87+ |
floor 10 | QUANTITY | 0.86+ |
two thousands | QUANTITY | 0.85+ |
Redshift Serverless | TITLE | 0.85+ |
both business | QUANTITY | 0.84+ |
3 | QUANTITY | 0.84+ |
Noor Faraby & Brian Brunner, Stripe Data Pipeline | AWS re:Invent 2022
>>Hello, fabulous cloud community and welcome to Las Vegas. We are theCUBE and we will be broadcasting live from the AWS re:Invent show floor for the next four days. This is our first opening segment. I am joined by the infamous John Furrier. John, it is your 10th year being here at re:Invent. How does >>It feel? It's been great to see you. It feels great. I mean, just getting ready for the next four days. This is the marathon of all tech shows. It's busy, it's crowded, it's loud, and the content and the people here are really kind of changing the game, and the stories are always plentiful and deep, and it really is one of those shows, you kind of get intoxicated on the show floor and in the event, and after hours people are partying. I mean, it is like the big show, and 10 years has been an amazing run, and it keeps getting bigger. You're seeing the changing ecosystem, next gen cloud, and you got the classics still kind of doing their thing. So we're getting a lot of data, a lot of data stories. And our guests here are gonna talk more about that. This is the year the cloud kind of goes next gen and you start to see the successful Gen One cloud players go on to the next level. It's gonna be really fun. Fun week. >>Yes, I'm absolutely thrilled and you can certainly feel the excitement. The show floor doors just opened, people pouring in, the drinks are getting stacked behind us. As you mentioned, it is gonna be a marathon and very exciting. On that note, fantastic interview to kick us off here. We're starting the day with Stripe. Please welcome Noor and Brian, how are you both doing today? Excited to be here. >>Really happy to be here. Nice to meet you guys. Yeah, >>Definitely excited to be here. Nice to meet you. >>Yeah, you know, you were mentioning you could feel the temperature and the energy in here. It is hot, it's a hot show. We're a hot crew. Let's just be honest about that. No shame in that. No shame in that game. But I wanna open us up. You know, Stripe is serving 2 million customers according to the internet. AWS with 1 million customers of their own, both leading companies in your industries. Just in case there's someone in the audience who hasn't heard of Stripe, what is Stripe and how can companies use it along with AWS? Noor, why don't you start us off?
Importantly on all of those things, which is what Brian and I focus on at Stripe. So yeah, since since 2010 Stripes really grown to serve millions of customers, as you said, from your small startups to your large multinational companies, be able to not only run their payments but also run complex financial operations online. >>Interesting. Even the Cube, the customer of Stripe, it's so easy to integrate. You guys got your roots there, but now as you guys got bigger, I mean you guys have massive traction and people are doing more, you guys are gonna talk here on the data pipeline in front you, the engineering manager. What has it grown to, I mean, what are some of the challenges and opportunities your customers are facing as they look at that data pipeline that you guys are talking about here at Reinvent? >>Yeah, so Stripe Data Pipeline really helps our customers get their data out of Stripe and into, you know, their data warehouse into Amazon Redshift. And that has been something that for our customers it's super important. They have a lot of other data sets that they want to join our Stripe data with to kind of get to more complex, more enriched insights. And Stripe data pipeline is just a really seamless way to do that. It lets you, without any engineering, without any coding, with pretty minimal setup, just connect your Stripe account to your Amazon Redshift data warehouse, really secure. It's encrypted, you know, it's scalable, it's gonna meet all of the needs of kind of a big enterprise and it gets you all of your Stripe data. So anything in our api, a lot of our reports are just like there for you to take and this just overcomes a big hurdle. I mean this is something that would take, you know, multiple engineers months to build if you wanted to do this in house. Yeah, we give it to you, you know, with a couple clicks. So it's kind of a, a step change for getting data out of Stripe into your data work. >>Yeah, the topic of this chat is getting more data outta your data from Stripe with the pipelining, this is kind of an interesting point, I want to get your thoughts. You guys are in the, in the front lines with customers, you know, stripes started out with their roots line of code, get up and running, payment gateway, whatever you wanna call it. Developers just want to get cash on the door. Thank you very much. Now you're kind of turning in growing up and continue to grow. Are you guys like a financial cloud? I mean, would you categorize yourself as a, cuz you're on top of aws? >>Yeah, financial infrastructure of the internet was a, was a claim I definitely wanna touch on from your, earlier today it was >>Powerful. You guys are super financial cloud basically. >>Yeah, super cloud basically the way that AWS kind of is the superstar in cloud computing. That's how we feel at Stripe that we want to put forth as financial infrastructure for the internet. So yeah, a lot of similarities. Actually it's funny, we're, we're really glad to be at aws. I think this is the first time that we've participated in a conference like this. But just to be able to participate and you know, be around AWS where we have a lot of synergies both as companies. Stripe is a customer of AWS and you know, for AWS users they can easily process payments through Stripe. So a lot of synergies there. And yeah, at a company level as well, we find ourselves really aligned with AWS in terms of the goals that we have for our users, helping them scale, expand globally, all of those good things. 
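For readers who want to picture what Brian is describing, here is a hedged sketch of querying Stripe data once Stripe Data Pipeline has delivered it into Amazon Redshift, using the Redshift Data API from Python. The schema and table names (a `stripe.charges` table landed by the pipeline) and the workgroup and database identifiers are assumptions for illustration; the real names depend on your own setup.

```python
# Hedged sketch: query Stripe data in Redshift after Stripe Data Pipeline
# has landed it, using the Redshift Data API (boto3). Identifiers and the
# stripe schema/table names are assumptions for illustration.
import time
import boto3

client = boto3.client("redshift-data")

SQL = """
SELECT date_trunc('month', c.created) AS month,
       SUM(c.amount) / 100.0          AS gross_volume_usd
FROM   stripe.charges c               -- assumed schema/table from the pipeline
WHERE  c.status = 'succeeded'
GROUP  BY 1
ORDER  BY 1;
"""

resp = client.execute_statement(
    WorkgroupName="analytics-wg",     # or ClusterIdentifier= for provisioned Redshift
    Database="dev",
    Sql=SQL,
)

# Poll until the statement finishes, then fetch the result rows.
status = None
while status not in ("FINISHED", "FAILED", "ABORTED"):
    time.sleep(1)
    status = client.describe_statement(Id=resp["Id"])["Status"]

if status == "FINISHED":
    rows = client.get_statement_result(Id=resp["Id"])["Records"]
    print(f"{len(rows)} months of gross Stripe volume, queried next to the rest of the warehouse")
```

The draw of the no-code setup Brian mentions is that everything before this query, the ingestion and refresh of the Stripe tables, is handled for you; the analyst's job starts at the SELECT.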
>>Let's dig in there a little bit more. It sounds like a wonderful collaboration, and we love to hear of technology partnerships like that. Brian, talk to us a little bit about the challenges that the data pipeline from Stripe solves for Redshift users. >>Yeah, for sure. Stripe Data Pipeline uses Amazon Redshift's built-in data sharing capabilities, which gives you an instant view into your Stripe data. If you weren't using Stripe Data Pipeline, you would have to ingest the data out of our API and pull it yourself manually. So a big part of it really is just the simplicity with which you can pull the data. >>Yeah, absolutely. And the complexity of data and the volume of it is only gonna get bigger, so tools that can make things a lot easier are what we're all looking for. >>What's the machine learning angle? Because I know that's a big topic here this year: more machine learning, more AI, a lot more solutions on top of the basic building blocks and the primitives at AWS, and you guys fit right into that, because developers are doing more, either building their own or rolling out solutions. How do you see yourselves connecting into that with the pipeline? Data pipelining feels like a heavy lift, and when people roll their own or try to get in, it can be a lot of muck, as they say. What's the real pain point that you solve? >>In terms of AI and machine learning, what Stripe Data Pipeline gives you is a lot of signals around your payments that you can incorporate into your models. We actually have a number of customers that use Stripe Radar data, from our fraud product, and integrate it with the in-house data they get from other sources to have a really good understanding of fraud across their whole business. So it's a way to get that data without having to go through the process of ingesting it. Your team doesn't have to think about the ingestion piece; they can just think about building models, enriching the data, and getting insights on top. >>And Adam and I called ETL the nasty three-letter word in my interview with him. That's what we're getting to, where data is actually connecting via APIs and pipelines, seamlessly, into other data. So the data mashup: it feels like we're back in the old mashup days, except now you've got data mashing up together. This integration is now a big practice, and it's becoming an industry standard. What are some of the patterns and mashups you see around how people are integrating their data? Because we all know machine learning works better when there's more data available, and people want to connect their data and integrate it without the hassle. What are some of the use cases? >>Yeah, totally. As Brian mentioned, there are a ton of use cases for engineering teams in being able to get that data reported efficiently and correctly, and exactly like you touched on, what we're seeing nowadays is that simply having access to the data isn't enough. It's all about consolidating it correctly, accurately, and effectively so that you can draw the best insights from it. So we're seeing a lot of use cases for teams across companies, and a big example is finance teams.
We had one of our largest users actually report that they were able to close their books faster than ever from integrating all of their Stripe revenue data for their business with their, the rest of their data in their data warehouse, which was traditionally something that would've taken them days, weeks, you know, having to do the manual aspect. But they were able to, to >>Simplify that, Savannah, you know, we were talking at the last event we were at Supercomputing where it's more speeds and feeds as people get more compute power, right? They can do more at the application level with developers. And one of the things we've been noticing I'd love to get your reaction to is as you guys have customers, millions of customers, are you seeing customers doing more with Stripe that's not just customers where they're more of an ecosystem partner of Stripe as people see that Stripe is not just a, a >>More comprehensive solution. >>Yeah. What's going on with the customer base? I can see the developers embedding it in, but once you get Stripe, you're like a, you're the plumbing, you're the financial bloodline if you will for the all the applications. Are your customers turning into partners, ecosystem partners? How do you see that? >>Yeah, so we definitely, that's what we're hoping to do. We're really hoping to be everything that a user needs when they wanna run an online business, be able to come in and maybe initially they're just using payments or they're just using billing to set up subscriptions but down the line, like as they grow, as they might go public, we wanna be able to scale with them and be able to offer them all of the products that they need to do. So Data Pipeline being a really important one for, you know, if you're a smaller company you might not be needing to leverage all of this big data and making important product decisions that you know, might come down to the very details, but as you scale, it's really something that we've seen a lot of our larger users benefit from. >>Oh and people don't wanna have to factor in too many different variables. There's enough complexity scaling a business, especially if you're headed towards IPO or something like that. Anyway, I love that the Stripe data pipeline is a no code solution as well. So people can do more faster. I wanna talk about it cuz it struck me right away on our lineup that we have engineering and product marketing on the stage with us. Now for those who haven't worked in a very high growth, massive company before, these teams can have a tiny bit of tension only because both teams want a lot of great things for the end user and their community. Tell me a little bit about the culture at Stripe and what it's like collaborating on the data pipeline. >>Yeah, I mean I, I can kick it off, you know, from, from the standpoint like we're on the same team, like we want to grow Stripe data pipeline, that is the goal. So whatever it takes to kind of get that job done is what we're gonna do. And I think that is something that is just really core to all of Stripe is like high collaboration, high trust, you know, this is something where we can all win if we work together. You don't need to, you know, compete with like products for like resourcing or to get your stuff done. It's like no, what's the, what's the, the team goal here, right? Like we're looking for team wins, not, you know, individual wins. >>Awesome. Yeah. 
And at the end of the day we have the same goal of connecting the product and the user in a way that makes sense, and delivering the best product to that target user. So it's really a great collaboration, and as Brian mentioned, the culture at Stripe really aligns with that as well. >>So you've got the engineering teams that get value out of what you're doing; that's your customer. But the security angle really becomes a big catalyst, because it's not just engineering. They've gotta build stuff in, so they're always building, but the security angle is interesting because now you've got that data feeding security teams. This is becoming very security-ops oriented. >>Yeah, we are really tight partners with our internal security folks. They review everything that we do; we have a really robust security team. But tying back to the Amazon side, Amazon Redshift is a very secure product, and the way that we share data is really secure. The sharing mechanism only works between encrypted clusters, so your data is encrypted at rest and encrypted in transit... excuse me. >>You're allowed to breathe, and to swallow. The audience, as well as your team at Stripe and all of us here at theCUBE, would like your survival first and foremost; the knowledge will get to the people. >>Yeah, for sure. Where else was I gonna go? So the other thing, like you mentioned, is that there are these ETLs out there, but they require you to trust your data to a third party. That's another thing here: your data only goes from Stripe to your cluster. There's no one in the middle, no one else sees what you're doing, and there are no other security risks. So security is a big focus, and it runs through the whole process on both our side and the Amazon side. >>What's the most important story for Stripe at this event? If you're on the elevator, what's going on with Stripe? Why now? What's so important at re:Invent for Stripe? >>Yeah, I'm gonna use this as an opportunity to plug Data Pipeline. That's what we focus on. We're here representing the product, which is the easiest way for any user of AWS, of Amazon Redshift, and of Stripe to connect the dots and get their data in the best way possible so that they can draw important business insights from it. >>Right? >>Yeah, I would double what Noor said: really grow Stripe Data Pipeline, get it to more customers, and get more value for our customers by connecting them with their data and with reporting. My goal here is to talk to folks, understand what they want to see out of their data, and get them onto Stripe Data Pipeline. >>And Mike Clayville, a former AWS executive, is now over at Stripe leading the charge, and he knows a lot about Amazon here at AWS. The theme tomorrow in Adam Selipsky's keynote is gonna be a lot about data: data integration and the end-to-end data lifecycle. You see more of what we call data as code. Where engineering had infrastructure as code as cloud took off, we're starting to see a big trend toward data as code, where it's more of an engineering opportunity and a solution for insights. This data as code is kind of like the next evolution. What do you guys think about that? >>Yeah, definitely there is a ton that you can get out of your data if it's in the right place and you can analyze it in the correct ways.
You know, you look at Redshift and you can pull data from Redshift into a ton of other products to visualize it and get machine learning insights, and you need the data there to be able to do that. So again, Stripe Data Pipeline is a great way to take your data and integrate it into the larger data picture that you're building within your company. >>I love that you are supporting businesses of all sizes, and millions of them. Noor and Brian, thank you so much for being here and telling us more about the financial infrastructure of the internet that is Stripe. John Furrier, thanks as always for your questions and your commentary. And thank you to all of you for tuning in to theCUBE's coverage of AWS re:Invent, live here from Las Vegas, Nevada. I'm Savannah Peterson, and we look forward to seeing you all week.
SUMMARY :
John Furrier and Savannah Peterson open theCUBE's live coverage of AWS re:Invent from Las Vegas with Noor Faraby and Brian Brunner of Stripe. The conversation covers Stripe's evolution from a simple payments API into financial infrastructure for the internet, and how Stripe Data Pipeline uses Amazon Redshift data sharing to deliver Stripe data into customers' warehouses securely, with no ETL code, for use cases ranging from fraud modeling to faster financial close.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Brian | PERSON | 0.99+ |
Mike Clayville | PERSON | 0.99+ |
2010 | DATE | 0.99+ |
Brian Brunner | PERSON | 0.99+ |
Stripe | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Savannah Peterson | PERSON | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
Adam | PERSON | 0.99+ |
John | PERSON | 0.99+ |
10th year | QUANTITY | 0.99+ |
Stripes | ORGANIZATION | 0.99+ |
Savannah | PERSON | 0.99+ |
Noor Faraby | PERSON | 0.99+ |
1 million customers | QUANTITY | 0.99+ |
10 years | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
Redshift | ORGANIZATION | 0.99+ |
stripes | ORGANIZATION | 0.99+ |
2 million customers | QUANTITY | 0.99+ |
Las Vegas, Nevada | LOCATION | 0.99+ |
both teams | QUANTITY | 0.98+ |
first time | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
First | QUANTITY | 0.98+ |
aws | ORGANIZATION | 0.98+ |
millions | QUANTITY | 0.98+ |
Stripe Data Pipeline | ORGANIZATION | 0.97+ |
this year | DATE | 0.97+ |
one | QUANTITY | 0.97+ |
eight executive | QUANTITY | 0.96+ |
tomorrow | DATE | 0.96+ |
first opening segment | QUANTITY | 0.96+ |
millions of customers | QUANTITY | 0.96+ |
stripe | ORGANIZATION | 0.91+ |
Adam Selipsky | PERSON | 0.9+ |
Brad Smith & Simon Ponsford | AWS re:Invent 2022
>>Welcome to our continued coverage of AWS re:Invent. My name is Savannah Peterson and I am very excited to be joined by two brilliant blokes in the space of efficiency and performance, whether you're on prem or in the cloud. Today's discussion is going to be fascinating. Please welcome Brad and Simon to the show. How are you, Simon, coming in from the UK? How are you feeling? >>Well, thank you. >>Excellent. And Brad, we have you coming in from Seattle. How are you this morning? >>Doing fine, thank you. >>Excellent, and feeling bookish given your background. Love that. I know that you both really care about efficiency and performance. It's a very hot topic, both at the show and in the industry right now, and I'm curious. I'm going to open it up with you, Simon. What challenges, and I think you've continued to tackle these throughout the course of your career, were you facing and wanting to solve when you started YellowDog? >>Really, we were just looking at cloud, and coming from an on-premise environment we really wanted to make accessing cloud, particularly at volume, simple and straightforward. If you look today at the number of instance types available from the major cloud providers, there are more than seven thousand different instance types, whereas on-prem you go along, you select your processors, you select your systems, and it's all relatively easy. When you hit the cloud, you've just got this amazing amount of choice. So really it was all about how you can make intelligent decisions about where you're going to run your workload and how to match it with what you've got on premise, and that was really the inspiration for the platform. >>So, staying there for just a second, what does YellowDog provide customers? >>It's a SaaS system, so you get to it through the YellowDog platform, and what it allows people to do is make intelligent decisions about where to run their workload, whether that be on premise or in the cloud. It has a wealth of information: it understands the costs, the performance, the latency, and the availability of every different instance type in all the different clouds. It really allows people to make use of that information, provision exactly what they need, and run their workloads. It also includes a provisioner and a scheduler, which is a cloud-native scheduler, so it's designed to cope with cloud in terms of things like spot instances and interruptions, and to reschedule and fail over between clouds if there's ever a need to do so. >>That sounds incredible, and I know this means a lot for partners like AMD. Brad, talk to me about the partnership and what this means for AMD and for your customers. >>Yeah, absolutely. We're excited to be aligned with a company like YellowDog. The importance of compute is becoming more and more prevalent every day. It's always been top of mind, but especially now, when you think about what the economy and the rest of the world are facing over the next year or probably longer, it's so important that you're able to maximize your dollars and your spend, and do that with absolute certainty that you've got the right people behind you, ensuring that your dollars are being spent very wisely. The great thing about YellowDog is that they have tremendous insight into cost optimization and compute optimization across the entire globe. Their index is quite remarkable, and what it does is allow customers to see just how performant and cost-efficient AMD is. So it allows us to really put our best foot forward, and it gives customers a chance to understand something they probably weren't as familiar with: the fact that AMD is a tremendous value in the marketplace. >>Yeah. And Simon, can you tell us a little bit more about the YellowDog index? >>I'm glad you brought that up, Brad. Yes, the YellowDog index is live and available for anyone to access. You can just go to index.yam.tech and you'll be able to see pretty much every single instance type that's available from all the major cloud providers and make your selection: are you looking for GPU-type nodes, are you looking for AMD processors, are you looking just for performance? Essentially what you're able to do is create a live view of what's available in different data centers around the world and the price at this moment in time. Also, as Brad mentioned, in terms of cost efficiency and taking green values seriously, as we should do, the YellowDog index also has the ability to see, at that point in time, where the best place to run a job is based on the lowest carbon impact of running it at that moment. For many organizations that gives amazing insight, not just into finding the most efficient place to run, but into ensuring that the greenest energy possible is powering that process when you want to run your workload. >>It's so powerful, what you just said, and exactly: it's not just about power, it's about place when we're looking at global computing at scale. I know that there are ESG advantages, and ESG is a very hot topic. When we're talking about AMD on AWS and leveraging tools like YellowDog, what other sorts of advantages, beyond being least carbon impactful, can your mutual customers benefit from? >>So, like I say, there are many other features. A very important thing when you're running a high performance computing workload is being able to match the instruction set that you're running on premise and then use that in the cloud as well, and also to make intelligent decisions about where something should run: would something be more efficient to run on premise, should we always try to maximize our on-premise resources before going into the cloud? A lot of it is about being able to make decisions, and YellowDog itself makes thousands of decisions per second to work out the best and most optimized places to run your workload. >>Yeah. So Brad, you work with a lot of companies at scale. What type of scale is possible when leveraging technologies like AMD and YellowDog combined? >>Well, I love the fact that you mentioned HPC, and it's one of the areas that is actually most exciting for me personally and for AMD with the combination of YellowDog and AWS. AWS launched the very first HPC instance type last year, and to be honest, we haven't even gotten to see the full-scale capability in the cloud when it comes to these very coordinated and very refined workloads running at massive scale. We've got some products that will be launched in the near future as well that are incredibly performant, and I don't think we have even come close to seeing the scale, relative to some of these very optimized workloads in HPC, that we're capable of. So we're excited for the next few years to see how we can take some of the tremendous success that AMD has had on-prem in these massive compute centers and replicate that same success inside AWS with companies like YellowDog. We're excited to see what's going to come forward. >>Can you give us a preview of anything, on the record, that gets you really excited about the future? I was going to ask you what has you looking forward to 2023 and beyond. >>Nothing, well, nothing official of course, but I will say this: AMD recently had the successful launch of Genoa, our next-gen release, and it is proving to be, at this point, absolutely the dominant compute engine that exists. When you start to couple that with the prowess of AWS, you could see that over time becoming something that can really start to change the compute landscape quite a bit. So we're hopeful that in the future we'll have something along those lines with AWS and others, and we're very bullish in that area. >>Love it. Simon, what about you? You've been passionate about low-carbon IT for a long time. Is carbon-neutral tech in our future? I realize that's a bold and lofty claim for you, but feel free to give us any of your future predictions. >>Yeah, so I started trying to build solutions many years ago. In 2006 I was part of a team that launched the world's lowest-powered Windows PC, which was actually based on AMD technology back then, so you can tell that AMD has been working on low power for a long time. In terms of carbon neutral, yes, I think there are certainly a few data centers around the world now that are getting very close to carbon neutral, some of which may have already achieved it, and that's really interesting. The second part of that is really the manufacture of everything that goes into those servers and systems, and being able to get to net zero on those over a period of time. When we do that, which is not without challenges but certainly possible, then we really will have carbon-neutral IT, which will be a benefit to everyone, to mankind itself. >>A casual statement, and I have to say that I wholeheartedly agree. I think it's one of the greater challenges of our generation, especially as what we're able to do in HPC in particular, since we're talking about it, is only going to grow in scale and magnitude, and the amount of data that we have to organize and process is wild even today. So I love that. I'm curious, is there anything you can share with us that's in the pipeline for YellowDog, anything coming up in the future that's very exciting? >>So, coming up very soon we're going to release version 4 of YellowDog, which contains what we call a resource framework. That's all about making sure you've got everything you need before you run a job, either on-prem or in the cloud. That might be anything from making sure you've got the right licenses, to making sure that your data is all in the right location, to making sure you've got all aspects of your workflow ready before you start launching compute and really start burning through dollars with compute that could potentially sit there not doing anything until other tasks catch up. So we're really excited about this new V4 release, which will come out very soon. >>Awesome, we can't wait to learn more about that, hopefully here again on theCUBE. Brad, what do partnerships with companies like YellowDog mean for you and for the customers that you're able to serve? >>It's incredibly important. One of the difficulties in compute that we have today, especially in cloud compute, is that there's so much available at this point. There was a point in time when it was very simple and straightforward; it's not even close to being that anymore. So one of the things I love about YellowDog is that they do a great job of making very complex situations and environments fairly simple to understand, especially from a business perspective. One of the things that we love about it is that it actually helps our customers, the AMD direct customers, better understand how to properly use our technology and get the most out of it. It's difficult for us to articulate that message, because we are a semiconductor company, so sometimes it's a little tough to articulate workloads and applications in a way that our customer base will understand. It's so critical to have companies like YellowDog in the middle that can make that translation for us directly to the customer, especially when you start thinking about ESG and environmental relationships. I'd like to make a comment: one of the things that is fantastic about AMD, AWS, and YellowDog is that we all share the same mission, and we're very public about those missions, about just being better to the planet. AMD has taken some very aggressive targets through 2025, much beyond anything the industry has expected, and because of that we are the most power-efficient x86 product on the marketplace, and it's not even close. I look forward to the day when you start looking at instance types inside these public cloud providers, in conjunction with YellowDog, and you can actually start to see what that carbon footprint is based on the decisions you make on compute. Considering that more than half the spend for everybody is generally compute in these environments, it's critical to really know what your true impact on the world is, and it's just one of the best parts about a partnership like this. >>Oh, what a wonderful note to close on. I love the synergy between all the partners on a technology level, but most importantly on a mission level, because none of it matters if we don't have a planet that we can continue to innovate on. I'm really grateful that you're both here fighting the good fight, working together, and making a lot of information available for companies of all different sizes as they navigate very complex decision trees in operating their stack. So thank you both, Simon and Brad, I really appreciate your time; it's been incredibly insightful. And thank you to our audience for tuning in to our continuing coverage of AWS re:Invent here on theCUBE. My name is Savannah Peterson and I look forward to learning more with you soon. [Music]
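For readers who want a feel for the kind of decision the YellowDog index automates, the toy sketch below ranks a handful of hypothetical instance options by a blend of hourly price and grid carbon intensity. It is not YellowDog's API, data, or algorithm, just a minimal Python illustration of cost- and carbon-aware selection with made-up catalog values.

```python
# Toy illustration of cost- and carbon-aware instance selection, in the spirit of
# the YellowDog index discussion above. The catalog below is invented sample data,
# not real pricing or carbon figures, and this is not YellowDog's API.
from dataclasses import dataclass

@dataclass
class InstanceOption:
    name: str
    region: str
    vcpus: int
    price_per_hour: float      # USD, hypothetical
    grams_co2_per_kwh: float   # grid carbon intensity, hypothetical

CATALOG = [
    InstanceOption("hpc-a", "us-east-1", 96, 2.88, 410.0),
    InstanceOption("hpc-a", "eu-north-1", 96, 3.05, 30.0),
    InstanceOption("gp-b", "us-west-2", 64, 1.92, 290.0),
]

def pick(catalog, min_vcpus, carbon_weight=0.5):
    """Rank options that meet the vCPU requirement by a blended price/carbon score."""
    viable = [o for o in catalog if o.vcpus >= min_vcpus]
    max_price = max(o.price_per_hour for o in viable)
    max_carbon = max(o.grams_co2_per_kwh for o in viable)
    def score(o):
        return ((1 - carbon_weight) * o.price_per_hour / max_price
                + carbon_weight * o.grams_co2_per_kwh / max_carbon)
    return min(viable, key=score)

print(pick(CATALOG, min_vcpus=90, carbon_weight=0.7))
```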
SUMMARY :
Savannah Peterson talks with Brad Smith of AMD and Simon Ponsford of YellowDog about making cloud and HPC capacity easier to consume: the YellowDog index's live view of instance pricing, performance, and carbon impact across cloud providers, AMD's power-efficiency targets and the Genoa launch, and the upcoming version 4 release of the YellowDog platform.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Brad | PERSON | 0.99+ |
Simon | PERSON | 0.99+ |
Seattle | LOCATION | 0.99+ |
Savannah Peterson | PERSON | 0.99+ |
Savannah Peterson | PERSON | 0.99+ |
UK | LOCATION | 0.99+ |
2025 | DATE | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
more than half | QUANTITY | 0.99+ |
AMD | ORGANIZATION | 0.99+ |
both | QUANTITY | 0.99+ |
2006 | DATE | 0.98+ |
today | DATE | 0.98+ |
Brad Smith | PERSON | 0.98+ |
Simon Ponsford | PERSON | 0.98+ |
second part | QUANTITY | 0.97+ |
ESG | TITLE | 0.97+ |
last year | DATE | 0.97+ |
Yellow Dog | ORGANIZATION | 0.96+ |
index.yam.tech | OTHER | 0.96+ |
2023 | DATE | 0.95+ |
a year | QUANTITY | 0.95+ |
one | QUANTITY | 0.95+ |
yellowdog | ORGANIZATION | 0.94+ |
many years ago | DATE | 0.93+ |
Rafael | PERSON | 0.93+ |
yellow dog | ORGANIZATION | 0.92+ |
more than seven thousand different instance types | QUANTITY | 0.91+ |
thousands of decisions per second | QUANTITY | 0.9+ |
two brilliant blokes | QUANTITY | 0.9+ |
first | QUANTITY | 0.89+ |
AMD AWS | ORGANIZATION | 0.88+ |
Windows | TITLE | 0.85+ |
every | QUANTITY | 0.85+ |
this morning | DATE | 0.84+ |
every single instance | QUANTITY | 0.83+ |
one of the difficulties | QUANTITY | 0.83+ |
best parts | QUANTITY | 0.78+ |
lot | QUANTITY | 0.77+ |
every day | QUANTITY | 0.76+ |
one of | QUANTITY | 0.75+ |
one of the things | QUANTITY | 0.75+ |
next few years | DATE | 0.72+ |
HPC | ORGANIZATION | 0.71+ |
Genoa | LOCATION | 0.7+ |
HPC | TITLE | 0.67+ |
things | QUANTITY | 0.65+ |
few data centers | QUANTITY | 0.64+ |
instance | QUANTITY | 0.64+ |
2022 | DATE | 0.63+ |
AMD Brad | ORGANIZATION | 0.63+ |
dog | TITLE | 0.59+ |
V4 | EVENT | 0.57+ |
yellow | ORGANIZATION | 0.57+ |
areas | QUANTITY | 0.56+ |
xa6 | COMMERCIAL_ITEM | 0.56+ |
a second | QUANTITY | 0.56+ |
zero | QUANTITY | 0.56+ |
Jay Boisseau, Dell Technologies | SuperComputing 22
>>We are back in the final stretch at Supercomputing 22 here in Dallas. I'm your host, Paul Gillin, with my co-host Dave Nicholson, and we've been talking to so many smart people this week that it just boggles my mind. Our next guest, Jay Boisseau, is the HPC and AI technology strategist at Dell. Jay also has a PhD in astronomy from the University of Texas, and I'm guessing you were up watching the Artemis launch the other night? >>I wasn't. I really should have been, but I wasn't. I was in full supercomputing conference mode, and that means discussions at various venues with people into the wee hours. >>How did you make the transition from a PhD in astronomy to an HPC expert? >>It was actually really straightforward. I did theoretical astrophysics, and I was modeling what white dwarfs look like when they accrete matter and then explode as Type Ia supernovae, which is a class of stars that blow up. It's a very important class because they blow up almost exactly the same way. So if you know how bright they are physically, not just how bright they appear in the sky, if you can determine from first principles how bright they are, then you have a standard ruler for the universe: when one goes off in a galaxy, you know how far away the galaxy is by how faint it appears. To model these, though, you had to understand equations of physics including electron degeneracy pressure as well as normal fluid dynamics kinds of things, and you were solving for an explosive burning front ripping through something. That required a supercomputer to have anywhere close to the fidelity to get a reasonable answer and, hopefully, some understanding. >>So I've always said electrons are degenerate. I've always said it, and I mentioned to Paul earlier, I said, finally we're gonna get a guest to sort through this whole dark energy, dark matter thing for us. We'll do that after the segment. >>That's a whole different conversation. >>So, well, I guess supercomputing is a natural tool that you would use. What do you do in your role as a strategist? >>I'm in the product management team. I spend a lot of time talking to customers about what they want to do next. HPC customers are always trying to be maximally productive with what they've got, but always wanting to know what's coming next. Because if you think about it, we can't simulate the entire human body cell for cell on any supercomputer today. We can simulate parts of it cell for cell, or the whole body with macroscopic physics, but not the entire organism at the atomic level. So we're always trying to build more powerful computers to solve larger problems with more fidelity and fewer approximations. I help people try to understand which technologies for their next system might give them the best advance in capabilities for their simulation work, their data analytics work, their AI work, et cetera. Another part of it is talking to our great technology partner ecosystem and learning about which technologies they have, because that feeds the first thing. Dell is very proud of our large partner ecosystem; we embrace many different partners with different capabilities, and understanding those helps you understand what your future systems might be. Those are two of the major roles in it: strategic customers and strategic technologies.
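To put a number on the standard-candle idea Jay describes, here is a small worked sketch of the distance-modulus relation m - M = 5 * log10(d / 10 pc): given the roughly known peak absolute magnitude of a Type Ia supernova (about -19.3, a commonly quoted approximate value, not a figure from the interview) and an observed apparent magnitude, you can solve directly for distance.

```python
# The "standard candle" calculation in miniature: if you know how intrinsically
# bright a Type Ia supernova is (absolute magnitude M) and how bright it appears
# (apparent magnitude m), the distance modulus m - M = 5*log10(d / 10 pc) gives
# the distance d. M ~ -19.3 is the commonly quoted approximate peak value.
def distance_parsecs(apparent_mag: float, absolute_mag: float = -19.3) -> float:
    """Distance in parsecs from the distance-modulus relation."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# A supernova observed at apparent magnitude 14 (an illustrative number):
d_pc = distance_parsecs(14.0)
print(f"{d_pc:.3e} pc  (~{d_pc * 3.26156 / 1e6:.0f} million light-years)")
```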
>>So you've had four days to wander this massive floor here, with lots of startups and lots of established companies doing interesting things. What have you seen this week that really excites you? >>So I'm gonna tell you a dirty little secret here. If you are working for someone who makes supercomputers, you don't get as much time to wander the floor as you would think, because you get lots of meetings with people who really want to understand, in an NDA way, not just the public way that's on the floor: what are you not telling us on the floor? What's coming next? So I've been in a large number of customer meetings as well as being on the floor. And while I obviously can't share everything that's a non-disclosure topic in those, some things we're hearing a lot about: people are really concerned with power, because they see the TDP on the roadmaps for all the silicon providers going way up. And with power comes heat as waste, and that means cooling. >>So power and cooling has been a big topic here. Obviously accelerators are increasing in importance in HPC, not just for AI calculations but now also for simulation calculations, and we are very proud of the three new accelerator platforms we launched here at the show that are coming out in a quarter or so. Those are two of the big topics we've seen. As you walk the floor here, you also see lots of interesting storage vendors. The HPC community has been doing storage the same way for roughly 20 years, but now we see a lot of interesting players in that space. We have some great things in storage now and some great things coming in a year or two as well, so it's interesting to see the diversity of that space. And then there's always the fun, exciting topics like quantum computing. We unveiled our first hybrid classical-quantum computing system here with IonQ, and I can't say what the future holds in this format, but I can say we believe strongly in the future of quantum computing, that that future will be integrated with the kind of classical computing infrastructure we make, and that this will help make quantum computing more powerful downstream. >>Well, let's go down that rabbit hole, because, oh boy, quantum computing has been talked about for a long time. There was a lot of excitement about it four or five years ago, when some of the major vendors were announcing quantum computers in the cloud. The excitement has kind of died down; we don't see a lot of talk around commercial quantum computers, yet you're deep into this. How close are we to having a true quantum computer, or is a hybrid more likely? >>So there are probably more than 20, and I think close to 40, companies trying different approaches to make quantum computers. Microsoft is pursuing a topological approach, others a photonics-based approach, IonQ an ion-trap approach. These are all different ways of trying to leverage the quantum properties of nature. We know the properties exist, we use them in other technologies, we know the physics, but the engineering is very difficult, just like it was difficult at one point to split the atom. It's very difficult to build technologies that leverage quantum properties of nature in a consistent, reliable, and durable way. So I wouldn't wanna make a prediction, but I will tell you I'm an optimist. I believe that when a tremendous capability with tremendous monetary gain potential lines up with another incentive, national security, engineering seems to evolve faster. When those things line up, when there's plenty of investment and plenty of incentive, things happen.
I believe that when a tremendous capability with, with tremendous monetary gain potential lines up with another incentive, national security engineering seems to evolve faster when those things line up, when there's plenty of investment and plenty of incentive things happen. >>So I think a lot of my, my friends in the office of the CTO at Dell Technologies, when they're really leading this effort for us, you know, they would say a few to several years probably I'm an optimist, so I believe that, you know, I, I believe that we will sell some of the solution we announced here in the next year for people that are trying to get their feet wet with quantum. And I believe we'll be selling multiple quantum hybrid classical Dell quantum computing systems multiple a year in a year or two. And then of course you hope it goes to tens and hundreds of, you know, by the end of the decade >>When people talk about, I'm talking about people writ large, super leaders in supercomputing, I would say Dell's name doesn't come up in conversations I have. What would you like them to know that they don't know? >>You know, I, I hope that's not true, but I, I, I guess I understand it. We are so good at making the products from which people make clusters that we're number one in servers, we're number one in enterprise storage. We're number one in so many areas of enterprise technology that I, I think in some ways being number one in those things detracts a little bit from a subset of the market that is a solution subset as opposed to a product subset. But, you know, depending on which analyst you talk to and how they count, we're number one or number two in the world in supercomputing revenue. We don't always do the biggest splashy systems. We do the, the frontier system at t, the HPC five system at ENI in Europe. That's the largest academic supercomputer in the world and the largest industrial super >>That's based the world on Dell. Dell >>On Dell hardware. Yep. But we, I think our vision is really that we want to help more people use HPC to solve more problems than any vendor in the world. And those problems are various scales. So we are really concerned about the more we're democratizing HPC to make it easier for more people to get in at any scale that their budget and workloads require, we're optimizing it to make sure that it's not just some parts they're getting, that they are validated to work together with maximum scalability and performance. And we have a great HPC and AI innovation lab that does this engineering work. Cuz you know, one of the myths is, oh, I can just go buy a bunch of servers from company X and a network from company Y and a storage system from company Z and then it'll all work as an equivalent cluster. Right? Not true. It'll probably work, but it won't be the highest performance, highest scalability, highest reliability. So we spend a lot of time optimizing and then we are doing things to try to advance the state of HPC as well. What our future systems look like in the second half of this decade might be very different than what they look like right. Now. >>You mentioned a great example of a limitation that we're running up against right now. You mentioned an entire human body as a, as a, as an organism >>Or any large system that you try to model at the atomic level, but it's a huge macro system, >>Right? 
So will we be able to reach milestones where we can get our arms entirely around something like an entire human organism with simply quantitative advances as opposed to qualitative advances? Right now, as an example, let's just, let's go down to the basics from a Dell perspective. You're in a season where microprocessor vendors are coming out with next gen stuff and those next NextGen microprocessors, GPUs and CPUs are gonna be plugged into NextGen motherboards, PCI e gen five, gen six coming faster memory, bigger memory, faster networking, whether it's NS or InfiniBand storage controllers, all bigger, better, faster, stronger. And I suspect that systems like Frontera, I don't know, but I suspect that a lot of the systems that are out there are not on necessarily what we would think of as current generation technology, but maybe they're n minus one as a practical matter. So, >>But yeah, I mean they have a lifetime, so Exactly. >>The >>Lifetime is longer than the evolution. >>That's the normal technologies. Yeah. So, so what some people miss is this is, this is the reality that when, when we move forward with the latest things that are being talked about here, it's often a two generation move for an individual, for an individual organization. Yep. >>So now some organizations will have multiple systems and they, the system's leapfrog and technology generations, even if one is their real large system, their next one might be newer technology, but smaller, the next one might be a larger one with newer technology and such. Yeah. So the, the biggest super computing sites are, are often running more than one HPC system that have been specifically designed with the latest technologies and, and designed and configured for maybe a different subset of their >>Workloads. Yeah. So, so the, the, to go back to kinda the, the core question, in your opinion, do we need that qualitative leap to something like quantum computing in order to get to the point, or is it simply a question of scale and power at the, at the, at the individual node level to get us to the point where we can in fact gain insight from a digital model of an entire human body, not just looking at a, not, not just looking at an at, at an organ. And to your point, it's not just about human body, any system that we would characterize as being chaotic today, so a weather system, whatever. Do you, are there any milestones that you're thinking of where you're like, wow, you know, I have, I, I understand everything that's going on, and I think we're, we're a year away. We're a, we're, we're a, we're a compute generation away from being able to gain insight out of systems that right now we can't simply because of scale. It's a very, very long question that I just asked you, but I think I, but hopefully, hopefully you're tracking it. What, what are your, what are your thoughts? What are these, what are these inflection points that we, that you've, in your mind? >>So I, I'll I'll start simple. Remember when we used to buy laptops and we worried about what gigahertz the clock speed was Exactly. Everybody knew the gigahertz of it, right? There's some tasks at which we're so good at making the hardware that now the primary issues are how great is the screen? How light is it, what's the battery life like, et cetera. Because for the set of applications on there, we we have enough compute power. We don't, you don't really need your laptop. 
Most people don't need their laptop to have twice as powerful a processor that actually rather up twice the battery life on it or whatnot, right? We make great laptops. We, we design for all of those, configure those parameters now. And what, you know, we, we see some customers want more of x, somewhat more of y but the, the general point is that the amazing progress in, in microprocessors, it's sufficient for most of the workloads at that level. Now let's go to HPC level or scientific and technical level. And when it needs hpc, if you're trying to model the orbit of the moon around the earth, you don't really need a super computer for that. You can get a highly accurate model on a, on a workstation, on a server, no problem. It won't even really make it break a sweat. >>I had to do it with a slide rule >>That, >>That >>Might make you break a sweat. Yeah. But to do it with a, you know, a single body orbiting with another body, I say orbiting around, but we both know it's really, they're, they're both ordering the center of mass. It's just that if one is much larger, it seems like one's going entirely around the other. So that's, that's not a super computing problem. What about the stars in a galaxy trying to understand how galaxies form spiral arms and how they spur star formation. Right now you're talking a hundred billion stars plus a massive amount of inter stellar medium in there. So can you solve that on that server? Absolutely not. Not even close. Can you solve it on the largest super computer in the world today? Yes and no. You can solve it with approximations on the largest super computer in the world today. But there's a lot of approximations that go into even that. >>The good news is the simulations produce things that we see through our great telescopes. So we know the approximations are sufficient to get good fidelity, but until you really are doing direct numerical simulation of every particle, right? Right. Which is impossible to do. You need a computer as big as the universe to do that. But the approximations and the science in the science as well as the known parts of the science are good enough to give fidelity. So, and answer your question, there's tremendous number of problem scales. There are problems in every field of science and study that exceed the der direct numerical simulation capabilities of systems today. And so we always want more computing power. It's not macho flops, it's real, we need it, we need exo flops and we will need zeta flops beyond that and yada flops beyond that. But an increasing number of problems will be solved as we keep working to solve problems that are farther out there. So in terms of qualitative steps, I do think technologies like quantum computing, to be clear as part of a hybrid classical quantum system, because they're really just accelerators for certain kinds of algorithms, not for general purpose algorithms. I do think advances like that are gonna be necessary to solve some of the very hardest problem. It's easy to actually formulate an optimization problem that is absolutely intractable by the larger systems in the world today, but quantum systems happen to be in theory when they're big and stable enough, great at that kind of problem. >>I, that should be understood. Quantum is not a cure all for absolutely. For the, for the shortage of computing power. It's very good for certain, certain >>Problems. And as you said at this super computing, we see some quantum, but it's a little bit quieter than I probably expected. 
I think we're in a period now of everybody saying, okay, there's been a lot of buzz, we know it's gonna be real, but let's calm down a little bit and figure out what the right solutions are. And I'm very proud that we offered one of those at the show. >>We have barely scratched the surface of what we could talk about as we get into intergalactic space, but unfortunately we only have so many minutes, and we're out of them. Jay Boisseau, HPC and AI technology strategist at Dell, thanks for a fascinating conversation. >>Thanks for having me. Happy to do it anytime. >>We'll be back with our last interview of Supercomputing 22 in Dallas. This is Paul Gillin with Dave Nicholson. Stay with us.
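To make concrete Jay's earlier point that a two-body problem like the Moon orbiting the Earth is workstation-scale, here is a minimal velocity-Verlet sketch that integrates the Earth-Moon relative motion for roughly a month in well under a second on a laptop. The physical constants are standard published values; the step size and duration are arbitrary choices for the sketch, and nothing like this scales to a hundred-billion-star galaxy without the approximations he describes.

```python
# A minimal sketch of the two-body point above: the Moon's orbit around the Earth
# is a laptop-scale problem. This integrates the Earth-Moon relative motion with
# velocity Verlet. Constants are standard values; the one-minute step size and
# one-month duration are arbitrary illustrative choices.
import math

G = 6.674e-11                      # m^3 kg^-1 s^-2
M = 5.972e24 + 7.342e22            # combined Earth + Moon mass, kg
r = [3.844e8, 0.0]                 # initial separation, m
v = [0.0, 1022.0]                  # initial relative velocity, m/s
dt = 60.0                          # one-minute steps

def accel(pos):
    d = math.hypot(pos[0], pos[1])
    return [-G * M * pos[0] / d**3, -G * M * pos[1] / d**3]

a = accel(r)
for _ in range(int(27.3 * 24 * 3600 / dt)):   # roughly one sidereal month
    r = [r[i] + v[i] * dt + 0.5 * a[i] * dt * dt for i in range(2)]
    a_new = accel(r)
    v = [v[i] + 0.5 * (a[i] + a_new[i]) * dt for i in range(2)]
    a = a_new

print(f"separation after ~1 month: {math.hypot(r[0], r[1]) / 1e8:.3f} x 1e8 m")
```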
SUMMARY :
Paul Gillin and Dave Nicholson close out Supercomputing 22 with Jay Boisseau, HPC and AI technology strategist at Dell. The conversation covers his path from modeling Type Ia supernovae to HPC strategy, the show's focus on power, cooling, accelerators, and storage, Dell's hybrid classical-quantum system with IonQ, and why ever-larger simulations, and eventually quantum acceleration for certain problems, will keep demand for computing power growing.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Nicholson | PERSON | 0.99+ |
Paul Gillum | PERSON | 0.99+ |
Jay Boisseau | PERSON | 0.99+ |
Paul | PERSON | 0.99+ |
Jay | PERSON | 0.99+ |
Dallas | LOCATION | 0.99+ |
Europe | LOCATION | 0.99+ |
Jay Boisseau | PERSON | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
tens | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
Paul Gillen | PERSON | 0.99+ |
Dell Technologies | ORGANIZATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
University of Texas | ORGANIZATION | 0.99+ |
first | QUANTITY | 0.99+ |
four | DATE | 0.99+ |
first principles | QUANTITY | 0.99+ |
next year | DATE | 0.99+ |
more than 20 | QUANTITY | 0.99+ |
two generation | QUANTITY | 0.98+ |
Supercomputing 22 | TITLE | 0.98+ |
one point | QUANTITY | 0.98+ |
twice | QUANTITY | 0.98+ |
hundreds | QUANTITY | 0.98+ |
today | DATE | 0.97+ |
five years ago | DATE | 0.97+ |
both | QUANTITY | 0.97+ |
earth | LOCATION | 0.96+ |
more than one | QUANTITY | 0.96+ |
one | QUANTITY | 0.96+ |
a year | QUANTITY | 0.96+ |
this week | DATE | 0.96+ |
first thing | QUANTITY | 0.95+ |
20 years | QUANTITY | 0.94+ |
four days | QUANTITY | 0.93+ |
second half of this decade | DATE | 0.93+ |
ENI | ORGANIZATION | 0.91+ |
Z | ORGANIZATION | 0.9+ |
40 companies | QUANTITY | 0.9+ |
e gen five | COMMERCIAL_ITEM | 0.86+ |
a year | QUANTITY | 0.84+ |
hundred billion stars | QUANTITY | 0.83+ |
HPC | ORGANIZATION | 0.83+ |
three new accelerator platforms | QUANTITY | 0.81+ |
end of the decade | DATE | 0.8+ |
hpc | ORGANIZATION | 0.8+ |
Frontera | ORGANIZATION | 0.8+ |
single body | QUANTITY | 0.79+ |
X | ORGANIZATION | 0.76+ |
NextGen | ORGANIZATION | 0.73+ |
Supercomputing 22 | ORGANIZATION | 0.69+ |
five system | QUANTITY | 0.62+ |
gen six | QUANTITY | 0.61+ |
number one | QUANTITY | 0.57+ |
approximations | QUANTITY | 0.53+ |
particle | QUANTITY | 0.53+ |
a quarter | QUANTITY | 0.52+ |
Y | ORGANIZATION | 0.49+ |
type | OTHER | 0.49+ |
22 | OTHER | 0.49+ |
Satish Iyer, Dell Technologies | SuperComputing 22
>>We're back at Super Computing, 22 in Dallas, winding down the final day here. A big show floor behind me. Lots of excitement out there, wouldn't you say, Dave? Just >>Oh, it's crazy. I mean, any, any time you have NASA presentations going on and, and steampunk iterations of cooling systems that the, you know, it's, it's >>The greatest. I've been to hundreds of trade shows. I don't think I've ever seen NASA exhibiting at one like they are here. Dave Nicholson, my co-host. I'm Paul Gell, in which with us is Satish Ier. He is the vice president of emerging services at Dell Technologies and Satit, thanks for joining us on the cube. >>Thank you. Paul, >>What are emerging services? >>Emerging services are actually the growth areas for Dell. So it's telecom, it's cloud, it's edge. So we, we especially focus on all the growth vectors for, for the companies. >>And, and one of the key areas that comes under your jurisdiction is called apex. Now I'm sure there are people who don't know what Apex is. Can you just give us a quick definition? >>Absolutely. So Apex is actually Dells for a into cloud, and I manage the Apex services business. So this is our way of actually bringing cloud experience to our customers, OnPrem and in color. >>But, but it's not a cloud. I mean, you don't, you don't have a Dell cloud, right? It's, it's of infrastructure as >>A service. It's infrastructure and platform and solutions as a service. Yes, we don't have our own e of a public cloud, but we want to, you know, this is a multi-cloud world, so technically customers want to consume where they want to consume. So this is Dell's way of actually, you know, supporting a multi-cloud strategy for our customers. >>You, you mentioned something just ahead of us going on air. A great way to describe Apex, to contrast Apex with CapEx. There's no c there's no cash up front necessary. Yeah, I thought that was great. Explain that, explain that a little more. Well, >>I mean, you know, one, one of the main things about cloud is the consumption model, right? So customers would like to pay for what they consume, they would like to pay in a subscription. They would like to not prepay CapEx ahead of time. They want that economic option, right? So I think that's one of the key tenets for anything in cloud. So I think it's important for us to recognize that and think Apex is basically a way by which customers pay for what they consume, right? So that's a absolutely a key tenant for how, how we want to design Apex. So it's absolutely right. >>And, and among those services are high performance computing services. Now I was not familiar with that as an offering in the Apex line. What constitutes a high performance computing Apex service? >>Yeah, I mean, you know, I mean, this conference is great, like you said, you know, I, there's so many HPC and high performance computing folks here, but one of the things is, you know, fundamentally, if you look at high performance computing ecosystem, it is quite complex, right? And when you call it as an Apex HPC or Apex offering offer, it brings a lot of the cloud economics and cloud, you know, experience to the HPC offer. So fundamentally, it's about our ability for customers to pay for what they consume. It's where Dell takes a lot of the day to day management of the infrastructure on our own so that customers don't need to do the grunge work of managing it, and they can really focus on the actual workload, which actually they run on the CHPC ecosystem. 
So it is a high performance computing offer, but instead of them buying the infrastructure and running all of that by themselves, we make it super easy for customers to consume and manage it across proven designs, which Dell implements across these verticals. >>So what makes it a high performance computing offering, as opposed to a rack of PowerEdge servers? What do you add in to make it HPC? >>Ah, that's a great question. So this is a platform, right? We are not just selling infrastructure by the drink. Fundamentally, it's based on two validated designs that we launched, one for life sciences and one for manufacturing. So we actually know how these pieces work together; they are validated, tested design solutions. And because it's a platform, we actually integrate the software on top; it's not just the infrastructure. We integrate a cluster manager, we integrate a job scheduler, we integrate a container orchestration layer. A lot of these things customers would have to do by themselves if they bought the infrastructure. So basically we are giving a platform, an ecosystem, for our customers to run their workloads, and making it easy for them to consume. >>Now, is this available on premises for customers? >>Yeah, we make it available to customers both ways. We make it available on-prem for customers who want to take advantage of that economics, and we also make it available in a colo environment if the customers want to extend colo as their on-prem environment. So we do both. >>What are the requirements for a customer before you roll that equipment in? How do they have to set the groundwork for it? >>Well, I think fundamentally it starts off with what the actual use case is, right? If you really look at the two validated designs we talked about, one for healthcare and life sciences and the other for manufacturing, they do have fundamentally different requirements in terms of what you need from those infrastructure systems. So the customers initially figure out, okay, do they require something that can handle very memory-intensive loads, or do they require something with a lot of compute power? It all depends on what the workloads require, and then we do have T-shirt sizing. We have small, medium, and large; we have multiple infrastructure options and CPU core options. Sometimes the customer will also want to say, you know what, along with the regular CPUs, I also want some GPU power on top of that. So those are determinations a customer typically makes as part of the ecosystem, and those are things they would talk to us about: okay, what is my best option in terms of the kind of workloads I want to run? And then they can make a determination in terms of how they would actually go about it.
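To put that sizing conversation in concrete terms, here is a minimal sketch of the kind of selection logic being described: a workload profile (memory-intensive versus compute-heavy, with or without GPUs) maps to the smallest configuration that fits. The size names, thresholds, and example numbers are hypothetical illustrations of the idea, not Dell's actual Apex HPC catalog.

```python
from dataclasses import dataclass

# Hypothetical workload profile a customer might describe during sizing.
@dataclass
class WorkloadProfile:
    memory_gb_per_core: float   # memory intensity of the workload
    total_cores: int            # aggregate compute requirement
    needs_gpu: bool             # e.g., accelerated simulation or ML steps

# Illustrative T-shirt sizes; real validated designs would define their own limits.
SIZES = {
    "small":  {"max_cores": 512,  "max_mem_gb_per_core": 4},
    "medium": {"max_cores": 2048, "max_mem_gb_per_core": 8},
    "large":  {"max_cores": 8192, "max_mem_gb_per_core": 16},
}

def recommend_size(profile: WorkloadProfile) -> str:
    """Pick the smallest size that satisfies both compute and memory needs."""
    for name, limits in SIZES.items():
        if (profile.total_cores <= limits["max_cores"]
                and profile.memory_gb_per_core <= limits["max_mem_gb_per_core"]):
            return name + (" + GPU option" if profile.needs_gpu else "")
    return "custom engagement"

# Example: a memory-hungry genomics pipeline spread across 1,500 cores.
print(recommend_size(WorkloadProfile(memory_gb_per_core=6, total_cores=1500,
                                     needs_gpu=False)))   # -> "medium"
```

The numbers are not the point; the shape of the decision is: match workload characteristics to a pre-validated configuration rather than assembling components piecemeal.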
>>So this is probably a particularly interesting time to be looking at something like HPC via Apex, with this season of rolling thunder from the various partners that you have. We're all expecting Intel to be rolling out new CPU sets from a PowerEdge perspective; you have your 16th generation of PowerEdge servers coming out, PCIe Gen 5, and all of the components from partners like NVIDIA and Broadcom, et cetera, plugging into them. What does that look like from your perch, in terms of talking to customers who maybe are doing things traditionally and are likely to be not 15G, not generation-15 servers, but probably more like 14? You're offering a pretty huge uplift. What do those conversations look like? >>I mean, talking about partners, of course Dell doesn't bring any solutions to the market without really working with all of our partners, whether that's at the infrastructure level, like you talked about, Intel, AMD, Broadcom, all the chip vendors, all the way up to the software layer. So we have cluster managers, we have Kubernetes orchestrators. What we usually do is bring the best in class, whether it's a software player or a hardware player, and bring it together as a solution. So we do give the customers a choice, and the customers always want to pick what they know actually works, so we do that. And one of the main aspects, especially when you bring these things as a service, is that we take a lot of the guesswork away from our customer. A good example in HPC is capacity. These are very intensive, very complex systems, so customers would like to buy a certain amount of capacity, grow, and then come back down. Giving them the flexibility to consume more if they want, giving them the buffer, and letting them come back down, all of those things are very important as we design these things. Customers are given a choice, but they don't need to worry about, oh, what happens if I have a spike, right? There's already buffer capacity built in. Those are awesome things when we talk about things as a service. >>When customers are doing their ROI analysis, buying CapEx on-prem versus using Apex, is there a crossover point, typically, at which it's probably a better deal for them to go on-prem? >>Yeah, specifically talking about HPC, we do have a lot of customers who consume high performance compute in the public cloud, and that's not going to go away. But there are certain reasons why they would look at on-prem or, for example, a colo environment. One of the main reasons is purely to do with cost. These are pretty expensive systems, and there is a lot of ingress and egress, a lot of data going back and forth; in the public cloud, it costs money to put data in or to pull data back. The second one is data residency and security requirements. A lot of this is probably proprietary information; we talked about life sciences, where there's a lot of research. In manufacturing, a lot of this is just-in-time decision making: you are on a factory floor, you've got to be able to act on it, so there is a latency requirement. So I think a lot of things play into this outside of just cost, but data residency requirements and ingress and egress are big factors. When you're talking about massive amounts of data that you want to push in and pull back, customers would like to keep it close, keep it local, and get a price point.
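To illustrate the crossover question, here is a minimal break-even sketch comparing an up-front CapEx purchase with a consumption-based subscription, including a rough egress charge on the consumption side. Every figure is an assumption chosen for illustration only; actual Apex pricing, cloud rates, and utilization patterns will differ.

```python
# Hypothetical break-even sketch: CapEx purchase vs. pay-per-use subscription.
# All figures are illustrative assumptions, not actual Dell or cloud pricing.

CAPEX_PURCHASE = 1_200_000          # up-front cluster purchase ($)
CAPEX_ANNUAL_OPS = 150_000          # power, cooling, admin per year ($)

SUBSCRIPTION_PER_NODE_HOUR = 2.50   # consumption rate ($/node-hour)
EGRESS_PER_TB = 90.0                # data egress charge ($/TB), cloud-style

def capex_cost(years: float) -> float:
    """Total cost of owning and operating the cluster for a given number of years."""
    return CAPEX_PURCHASE + CAPEX_ANNUAL_OPS * years

def subscription_cost(years: float, nodes: int, utilization: float,
                      egress_tb_per_year: float) -> float:
    """Total cost of consuming equivalent capacity as a service."""
    node_hours = years * 365 * 24 * nodes * utilization
    return node_hours * SUBSCRIPTION_PER_NODE_HOUR + egress_tb_per_year * years * EGRESS_PER_TB

# Example: 32 nodes at 60% average utilization, moving 50 TB out per year.
for years in (1, 2, 3, 4, 5):
    c = capex_cost(years)
    s = subscription_cost(years, nodes=32, utilization=0.6, egress_tb_per_year=50)
    print(f"{years} yr: CapEx ${c:,.0f} vs. subscription ${s:,.0f}")
```

The shape of the answer is what matters: heavy, steady utilization and large data movement pull the crossover toward owning or colocating, while spiky or modest utilization favors the consumption model.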
>>Nevertheless, I mean, we were just talking to Ian Coley from AWS, and he was talking about how customers have the need to move workloads back and forth between the cloud and on-prem; that's something they're addressing with Outposts. You are very much in the on-prem world. Do you have, or will you have, facilities for customers to move workloads back and forth? >>I wouldn't necessarily frame it that way; Dell's cloud strategy is multi-cloud, right? Some workloads are always suited for public cloud because it's easier to consume, customers also consume on-prem, and customers are also consuming in colo. And we also have Dell's amazing pieces of software, like our storage software, and we make some of that software IP available for customers to consume in the public cloud. So that is our multi-cloud strategy. We announced Project Alpine, for example. If you look at that, customers are basically saying, I love your Dell IP in this storage product, can you make it available in this public environment, whichever of the hyperscale players it is? If we do all of that, it shows that this is not always tied to an infrastructure. Customers want to consume the best of it, and if it needs to be consumed in a hyperscaler, we can make it available. >>Do you support containers? >>Yeah, we do support containers on HPC. We have two container orchestration options that we support, so customers have both options.
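As a concrete illustration of what running a containerized HPC workload through an orchestrator can look like, here is a minimal sketch using the Kubernetes Python client to submit a batch job with CPU, memory, and GPU requests. Kubernetes is used only as a familiar example of a container orchestrator; the image name, namespace, and resource figures are hypothetical, and this is not a statement of which orchestrators the Apex HPC validated designs actually ship.

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (assumes access to a cluster).
config.load_kube_config()

# Hypothetical containerized simulation job with CPU, memory, and GPU requests.
job = client.V1Job(
    metadata=client.V1ObjectMeta(name="crash-sim-001"),
    spec=client.V1JobSpec(
        backoff_limit=0,
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(
                restart_policy="Never",
                containers=[
                    client.V1Container(
                        name="solver",
                        image="registry.example.com/fea-solver:latest",  # hypothetical image
                        command=["solver", "--input", "/data/model.inp"],
                        resources=client.V1ResourceRequirements(
                            requests={"cpu": "32", "memory": "128Gi"},
                            limits={"cpu": "32", "memory": "128Gi", "nvidia.com/gpu": "1"},
                        ),
                    )
                ],
            )
        ),
    ),
)

# Submit the job to a (hypothetical) namespace reserved for HPC workloads.
client.BatchV1Api().create_namespaced_job(namespace="hpc-jobs", body=job)
```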
>>What kind of customers are you signing up for the HPC offerings? Are they university research centers, or do they tend to be smaller companies? >>You know, the last three days, this conference has been great. We probably had many, many customers talking to us about HPC, somewhere in the range of 40 to 50. I would say a lot of interest from educational institutions, universities, and research, to your point; a lot of interest from manufacturing and factory-floor automation, where customers want to do dynamic simulations on the factory floor. There is also quite a bit of interest from life sciences and pharma because, like I said, we have two designs, one for life sciences and one for manufacturing, both with different dynamics on the infrastructure. We also see a lot of financials, big banks who want to simulate a lot of brokerage and financial data, because we announced some really optimized hardware at Dell especially for financial services. So there's quite a bit of interest from financial services as well. >>That's great. We often think of Dell as the organization that eventually democratizes all things in IT, and in that context, you know, this is Supercomputing 22; HPC is like the little sibling trailing behind the supercomputing trend, but we have definitely seen it move out of purely academia into the business world, and Dell is clearly a leader in that space. How has Apex overall been doing since you rolled out that strategy? It's been a couple of years now, hasn't it? >>Yeah, it's been less than two years. >>How are mainstream Dell customers embracing Apex versus the traditional, you know, maybe 18-month to three-year CapEx upgrade cycle? >>Yeah, I mean, look, I think there is absolutely strong momentum for Apex, and like Paul pointed out earlier, we started with making the infrastructure and the platforms available to customers to consume as a service. We have options where Dell can fully manage everything end to end and take a lot of the pain points away, like we talked about, because we are running a cloud-scale environment for the customers. We also have options where customers say, you know what, I actually have a pretty sophisticated IT organization; I want Dell to manage the infrastructure up to this layer, up to the guest operating system, and I'll take care of the rest. So we are seeing customers come to us with various requirements: I can do up to here, but you take all of this pain away from me, or you do everything for me. It all depends on the customer. So we do have wide interest, and I would say our products and the portfolio set in Apex are expanding. We are also learning: we are getting a lot of feedback from customers about what they would like to see in some of these offers, like the example we just talked about of making some of the software IP available in a public cloud, where they look at Dell as a software player. That is also absolutely critical. So I think we are giving customers a lot of choices; the choice factor and, like you said, democratizing, expanding in terms of customer choices. >>And I think it's... we're almost out of our time, but I do want to be sure we get to Dell validated designs, which you've mentioned a couple of times. What's the purpose of these designs, and how specific are they? >>Again, we look at these industries and we look at understanding exactly how customers use this. We have a huge embedded base of customers utilizing HPC across our ecosystem at Dell, and a lot of them are CapEx customers, so we do have an active customer profile. These validated designs take into account a lot of customer feedback and a lot of partner feedback in terms of how they utilize this. And when you build these solutions, which are end to end and integrated, you need to start anchoring on something, right? A lot of these workloads have different characteristics, so these validated designs basically give customers a very good jumping-off point. That's the way I look at it. A lot of them come to the table, and they don't come with a blank sheet of paper; they say, you know what, these are the characteristics of what I want, and I think this is a great point for me to start from.
So I think that gives them that, and plus it's the power of validation, really. We test, validate, and integrate, so they know it works. All of those are hyper-critical when you talk to customers. >>And you mentioned healthcare, you mentioned manufacturing. Are there other designs? >>We just announced a validated design for financial services as well, I think a couple of days ago at the event. So yep, we are expanding all those Dell Validated Designs so that we can give our customers a choice. >>We're out of time. Satish Iyer, thank you so much for joining us. >>Thank you. >>You are at the center of the move to subscription, to everything as a service, where everything is on a subscription basis; you really are on the leading edge of where your industry is going. Thanks for joining us. >>Thank you, Paul. Thank you, Dave. >>Paul Gillin with Dave Nicholson here from Supercomputing 22 in Dallas, wrapping up the show this afternoon. Stay with us; we'll have more for you shortly.