Java Power Panel
(upbeat music) >> Facilitator: From theCUBE studios in Palo Alto and in Boston, connecting with thought leaders all around the world, this is a CUBE Conversation. >> Java is the world's most popular programming language, and it remains the leading application development platform. But what's the status of Java? What are customers doing? And very importantly, what is Oracle's and the community's strategy with respect to Java? Welcome, everybody, to this Java power panel on theCUBE. I'm your host, Dave Vellante. Manish Gupta is here, he's the Vice President of Global Marketing for Java at Oracle; Donald Smith is also on the panel, and he's the Senior Director of Product Management at Oracle; and we're joined by David Floyer, who is the CTO of Wikibon Research and has done a number of research activities on this very topic. Gentlemen, welcome to theCUBE, great to see you. >> Thank you. >> Thank you. >> Manish, I want to start with you. Can you help us understand, really dig into, Oracle's strategy with respect to Java: the technology, the licensing, the support. How has that evolved over time? Take us through that. >> Dave, with 51 billion JVMs deployed worldwide, Java has truly cemented its position as the language of innovation in the technology world. There's no question about that. In fact, I like to say it's really the language of empowerment, given the impact it has had on numerous applications, ranging from the Mars Rover to genomics and everything in between. When Oracle acquired Sun over 10 years ago, it really kept front of mind two aspects of what we wanted to do with the technology and the platform. The first one was to ensure there was broad accessibility to the technology and the platform for anybody that wanted to benefit from it. And the second one was to ensure that the ecosystem remained vibrant and thriving throughout. We've managed to do both. And underpinning these two objectives were really three pillars of our strategy. The first one was around trust: ensuring that the openness and transparency of the technology, as was the case before, continued to be the case going forward. The second element within the trust pillar was to ensure that as enterprises invested in the technology, that investment was protected; it was not a case of you invest and you lose it over a period of time. Backward compatibility, interoperability, and certifications were all foundational to the platform itself, to the features, and to the innovation moving forward. And more recently, as we have rethought the support, the licensing, and the overall structure of the pricing, we have ensured that ultimately the trust comes along in those dimensions as well. So the launch of the Java subscription came along with a pay-as-you-go model; it's a transparent pricing structure, published on the website, so you can go and see what it would cost for desktop, server, or cloud deployments. So those were the things that made the first pillar happen. The second one was around innovation. Over the last 25 years, Java has stood the test of time. It has delivered the needs of today while preparing for the future, and that remains the case. It is not something that has focused on the fad of the day and the hot thing of the moment; more important, it is prepared to deal with the mission-critical, massive-scale deployments that can run for years, for decades in some cases.
And keeping that in mind, Oracle has continued to put more and more technology into the open source world; with every release that comes out, you can see 80-plus percent of the contributions come from Oracle. So that's the second pillar, around innovation. And the third piece of the strategy has been around predictability: ensuring that Java, the technology and platform, performs as advertised. That goes into the feature releases, it goes into the release process, it goes into the fact that we work broadly within the OpenJDK environment for developing and executing the roadmap. From a CIO standpoint, it's important to know that the technology used to develop your applications has talent around it. If you're going to develop something in Java, you'll find the right Java engineers to do the job. That is not a question, right? And so that's part of predictability. And finally, with the change to the six-month release cadence that came about three years ago with the release of Java 10, we've really made sure that it's not a case where a bunch of things come about and you don't know when they're going to be released. Like clockwork, you'll have a new Java release every six months. And that's been the case every March and September since Java 10; you've had a new release of Java with certain features that come up, and we just launched Java 15. So trust, innovation, and predictability have really been the three pillars on which we've executed the strategy for Java. >> Excellent, thank you for that intro, and we're going to get into it now. I'm glad you mentioned the Sun acquisition. I said at the time that Java was the linchpin of that acquisition; many people, of course, looked at the integration piece with the hardware, but it was really Java and the capabilities that it brings. And of course, a lot of Oracle software is written in Java, not the least of which is Fusion. But now let's get into the components of this. I want to talk a little bit about the methodology, and I'm going to call on you, David Floyer. Essentially my understanding is that Wikibon went through, and David, you led this: you did a technical deep dive, which you always do, did a number of in-depth interviews with Java customers, and then of course you also did a web survey, and then you built from that data an economic model, so you could try to understand the dimensions of the financials, if you will. So what were your key findings there? >> So the key findings were that Java was in a good state, that people were happy with Java. The second key finding is that the business case itself for using the Oracle services, the subscription services, was good. That didn't mean to say it was the right way to do it for every company, but there was a very good return on it. And the third area was that there was a degree of confidence that the new way of doing things, the six-month cycle as opposed to the three-year cycle, was overall a benefit to the rate of change, the ability for them to introduce new features quickly. >> Okay, well, you know, I read that research, and to me the takeaways were the continued relevance of Java, which kind of goes without saying, but a lot of times it gets lost in the headlines. The subscription piece is key; we're going to get into some of the economics as to how that affects customers and saves them money. And the other piece was the roadmap becoming more transparent.
And I do want to dig into that a little bit, but before we do, let's get into that innovation component. Manish mentioned it several times, but Don, I want to go to you. Guys, we have a slide on the various components of the innovation, if you would bring this up. And Don, I wonder if you could talk to this and give us some examples, if you would. >> Yeah, sure. So we've been the number one development platform for the last 25 years, and we want to be the number one development platform for the next 25 years. And in order to do that, we have to be constantly innovating, and constantly innovating not only on the business side, in terms of the subscription and the support offerings and commercial features like Manish was talking about, but also on the platform in general. And so the way we like to talk about innovation is we break it down by these pillars that you can see on the slide. The first pillar is continuous improvements to the language. So this is watching developers trying to write the same piece of code over and over again, and us asking, can we make you more efficient? Can we give you more language features that reduce the amount of boilerplate that you have to write? The second pillar is a project that we just announced a few months ago called Leyden. The idea with Leyden is addressing the long-term pain points of Java: slow startup time and time to peak performance. If you go back 10 years, everybody knows about Java as an enterprise platform, Java EE application servers. They all had the notion of being very long-lived, and so Java at that time was optimized towards long-lived applications, startup, and performance, where if it took a little while to get there, it didn't matter, as long as when it got there, it was super fast. And so we're trying to get to that peak performance faster in the world of microservices. In a similar vein, with Project Loom, we're looking at making concurrency simple again, looking at how developers are doing more reactive-style programming and realizing that the threading model needs to be rethought from the ground up. That project is looking really, really good. Then we have Project Panama. Project Panama is all about making it easier to connect Java with native libraries. Valhalla, there's a couple of benefits, but it's all about improving memory density and being able to access and iterate and operate over primitive data types at super fast speeds by better optimizing how that information is stored in memory. And then the final pillar that we have been working on from an innovation perspective is ZGC. We introduced this new garbage collector technology a few years ago, following G1 GC, the generational garbage collector, with an eye towards making garbage collection in Java pauseless. Again, if you go back in time and look at the history of Java, memory management is awesome, but there's always that cost and risk of a garbage collection cycle taking a bit of time away from a critical application. And ZGC is all about getting rid of that. So lots of innovation, lots of different pillars going on right now. >> Awesome, I'm impressed. There's something after Valhalla; I thought that was Nirvana. (laughing) But now, these are all open source projects, right? And you guys obviously provide committers, and there are other people in the open source world who contribute as well, is that correct, Don? >> Yeah, that's correct. We have about 80% of the contributions in OpenJDK.
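To make the boilerplate point from that first pillar concrete, here is a minimal, illustrative Java sketch. It assumes a JDK from roughly the era being discussed or later; records were a preview feature in the Java 14/15 timeframe and became standard in Java 16, and the class and field names are purely hypothetical.

    // A hand-written data carrier: constructor, accessors, equals, hashCode
    // and toString all have to be spelled out (or generated by an IDE).
    final class PointClassic {
        private final int x;
        private final int y;
        PointClassic(int x, int y) { this.x = x; this.y = y; }
        int x() { return x; }
        int y() { return y; }
        // equals, hashCode and toString omitted here for brevity.
    }

    // The same data carrier as a record: the compiler generates the canonical
    // constructor, accessors, equals, hashCode and toString.
    record Point(int x, int y) { }

    class BoilerplateDemo {
        public static void main(String[] args) {
            Point p = new Point(3, 4);
            System.out.println(p);              // prints: Point[x=3, y=4]
            System.out.println(p.x() + p.y());  // prints: 7
        }
    }

Compiled with Java 16 or later (or Java 15 with preview features enabled), the demo should print the record's generated toString output; that generated code is exactly the repetitive boilerplate this language pillar aims to take off developers' plates.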
We are the stewards of OpenJDK and lead the project. Most of the pillars I talked about here are, you know, Oracle folks working on them. >> Awesome. Okay, let's get into some of the data. David, I want to come back to you and talk about some of the survey results. Guys, if you'd bring up that next slide. David, why do people upgrade? What are the drivers? It really talks to the large companies, and what's different for the small or mid-size companies? What are the takeaways here? >> David: Well, so this is interesting, and as you might expect, large enterprises are very concerned about application stability, whereas mid-size enterprises are much more concerned about the performance, making sure that the performance is good. They are both concerned about reliable performance and security, but it's interesting that from a regulation point of view, mid-size companies really want to make sure that they are obeying the regulations, that they are meeting those, whereas larger organizations usually have their own security and regulation functions looking very hard at these things. So they're looking less to the platform to provide those than to their own people. >> Yeah, I think you're right. I think the mid-size organizations don't have as many people running around taking care of security, and it's harder for them to keep up with the edicts of the organization, so they want to stay more current. Don, I wonder if you can add anything to this data from an innovation standpoint. >> Yeah, well, from a product management standpoint, what we see here is that when you look at just going from the Fortune 500 to the Global 2000, you see things that are important to one and less so to the other. You can extrapolate that all the way down to a small company or a startup. And that's why providing the most flexibility in terms of an offering, allowing people to decide what, when, where, and how they're going to upgrade their software so they can do it when they want and on their own terms, you can see that that becomes really important. And also making sure that we're providing innovation in a broad way so that it'll appeal to the enterprise and, again extrapolating that forward, down to even very small startups. >> You know, David, the other thing that struck me in the data, if we bring up that other piece, is the upgrade strategy, and there was a stark difference between large enterprises and mid-size organizations. Talk to this data, if you would. >> Yes, this is again a pretty stark difference between them. When you're looking at large enterprises, they really want stability and they don't want to upgrade so often, whereas mid-size enterprises are much more willing to both upgrade on a regular cadence and really be much more up to date, always have the latest software. They're driving smaller applications, but they're much more agile about their approach to it. Again, emphasizing what Don was saying about the smaller enterprises wanting a different strategy and a different way of doing things than large enterprises. >> So Manish, this says to me that you got it right from a strategy standpoint. I mean, any color you can add here? >> Yeah, it's very intuitive that whether you're a large organization, a mid-sized enterprise, or a small business, you face competitive pressures and your dynamics are unique.
What you're able to do with the resources, what you desire to do at the pace that is appropriate for your environment, those are really unique to you, and to try to force one model across any one size or across any set of dynamics is just not appropriate. So we've always felt that giving enterprises and organizations the ability to move at the pace of their business is the right approach. And so when we designed the Oracle Java SE subscription, we truly had that front and center in our thought process. And that structure seems to be working well. >> David, what I like about the way you do research is you actually build an economic model. A lot of these business value projects, and I know this well, having been in the business a long time, they'll go out and ask the customer what they got, and the customer says, "Well, I got a 111% ROI," and boom, that's what it is. You actually construct an economic model, you bring in rules of thumb, it allows you to do what-ifs, and you can test that model and calibrate it against the real world. So I commend you on that, you've done a lot of hard work there, but bottom line it for us. Let's bring up the economics; that's what people ultimately want to know. Does this save me money? What's the bottom line here? >> Yes, that's a very important question. And the way we go about it is to ask questions so that we can extract from them how much effort it took, for example, to upgrade things, how much effort it took for important applications and not-so-important applications. So we have a very detailed model, driven by the survey itself and laid out in the back of the research; I'm a great believer that you should be able to follow exactly what the research said, what the survey said, and how it was applied to the model. And what we focused on was, what was the return of using the Java subscription service versus taking an upgrade every six months? Those were the two ways that we looked at it. And for large enterprises, the four-year cost for the enterprise was $11 million, but the cost of taking the additional subscription service was well covered; the payback is within a year, covered by the lower costs of managing a lot of systems and environments. And we found a very similar result for those mid-size enterprises. There, it was $3 million, and again, they got that back within the year in terms of payback. But that's one alternative. There is another alternative that may be worth the extra money if you really want to be up to date, or if you want to drive a much more aggressive strategy for your organization. >> So these are huge numbers. I mean, he's talking about 30% savings on average for large and mid-sized enterprises in percentage terms, but the absolute dollars are actually enormous. You know, large companies here, we're talking about $20 billion enterprises with 500 or more Java applications, and mid-size, you're talking about a couple of two, $3 billion companies. Manish, what are you seeing in the customer base in terms of the economics? >> Yeah, you know, anytime an organization is looking at an offering and a solution, they want to make sure it's giving them value. And we all know the priorities that businesses have; they want to focus on those. Managing the Java estate is important, but is it the thing where they want to invest the dollars? And if they are investing the dollars, are they getting the return?
We find that if you can give the enterprises an offering where they can see the return and the cost is right for them, and if you can pair that with reduced risk, then you've got the right formula. And with the subscription, they're able to not only see the cost savings that the model indicates clearly, but they're also able to reduce the risk in terms of security protection and other things. So it's a really, really good combination for the enterprises. >> Well, thank you. I wonder, Manish, if you could bring us home here and just kind of summarize your thoughts. From everything you've heard today, what are the key takeaways? >> You know, Java has been around for 25 years, and we certainly believe it's positioned well for what's required today, and perhaps more importantly, for what is needed for the next decade and for the next 25 years. Having now served thousands of customers with the Java subscription, it's clear that it is meeting the needs of Fortune 10 organizations all the way down to a five-person development house, for example. What we're hearing from across the board is that Java has been the go-to platform and it continues to be the go-to platform for mission-critical development and deployment. However, as the Java estate becomes large, when you've got tens to hundreds, in some cases over a thousand, applications running across the enterprise, that complexity can be daunting. And the Java subscription is really serving those needs in three ways. One, it's getting them best-in-class support from Oracle, which is the steward of Java, the company that is generating over 80% of the innovation with every single release. The second thing is they're getting the business flexibility, so they can move at the pace that works for them. And the third piece is, as the business model has indicated, they're getting it at a lower cost while lowering risk. So the combination of these things is the reason why we're seeing very high renewal rates, why we're seeing thousands of organizations take it up. And I want to wrap up by saying one final thing: you can count on Oracle to be transparent, to be the right steward of both technology innovation as well as ensuring support for the vast ecosystem, whether it's libraries, frameworks, user groups, educational services, and so on. So Java is here, has been here for the enterprise, large and small, and it's ready for the next generation as well. >> Great, thank you for that. Well, one more question. What's the call to action? If I'm a mid-sized company or a large company and I've made investments in Java, what should I do next? >> I would say, take a look at the Oracle Java subscription. It'll save you cost and it'll give you a lower risk profile for your organization. >> Great, nice and crisp, I like it. If you guys don't object, I'm going to give you my summary. I've been taking notes this whole time, and so, we've explored two options: customers can do it themselves, or go with the subscription on a regular cadence. It's very clear to me that Java remains relevant, as we said up top. It's the world's most popular programming language; we know all about that. The ecosystem is really moving fast, of course, with the stewardship of Oracle: cloud, microservices, the development of modern applications. I think that the directional changes that Manish, you guys, and Don and Oracle have made were really the right call.
The research that you did, David, shows that it's serving customers better. It lowers cost, it cuts down risk, particularly for the mid-sized companies that maybe don't have the security infrastructure and the talent to go chase those problems. And I love the roadmap piece. The more transparent roadmap really is going to give the industry and the community much more confidence to invest and move forward. So guys, thanks very much for coming on this CUBE Java power panel. It was great to have you. >> Thank you. >> Thank you. >> Thank you. >> All right, thank you for watching, everybody. This is Dave Vellante for theCUBE, and we'll see you next time. (soft music)
Breaking Analysis: Databricks faces critical strategic decisions…here’s why
>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> Spark became a top-level Apache project in 2014, and then shortly thereafter burst onto the big data scene. Spark, along with the cloud, transformed and in many ways disrupted the big data market. Databricks optimized its tech stack for Spark and took advantage of the cloud to really cleverly deliver a managed service that has become a leading AI and data platform among data scientists and data engineers. However, emerging customer data requirements are shifting in a direction that will cause modern data platform players generally, and Databricks specifically, we think, to make some key directional decisions and perhaps even reinvent themselves. Hello and welcome to this week's Wikibon theCUBE Insights, powered by ETR. In this Breaking Analysis, we're going to do a deep dive into Databricks. We'll explore its current impressive market momentum, using some ETR survey data to show that, and then we'll lay out how customer data requirements are changing and what the ideal data platform will look like in the midterm future. We'll then evaluate core elements of the Databricks portfolio against that vision, and then we'll close with some strategic decisions that we think the company faces. And to do so, we welcome in our good friend, George Gilbert, former equities analyst, market analyst, and current Principal at TechAlpha Partners. George, good to see you. Thanks for coming on. >> Good to see you, Dave. >> All right, let me set this up. We're going to start by taking a look at where Databricks sits in the market in terms of how customers perceive the company and what its momentum looks like. And this chart that we're showing here is data from ETS, the Emerging Technology Survey of private companies. The N is 1,421. What we did is we cut the data on three sectors: analytics, database/data warehouse, and AI/ML. The vertical axis is a measure of customer sentiment, which evaluates an IT decision maker's awareness of the firm and the likelihood of engaging and/or purchase intent. The horizontal axis shows mindshare in the dataset, and we've highlighted Databricks, which has been a consistent high performer in this survey over the last several quarters. And by the way, just as an aside, as we previously reported, OpenAI, which burst onto the scene this past quarter, leads all names, but Databricks is still prominent. You can see that ETR shows some open source tools for reference, but as far as firms go, Databricks is very impressively positioned. Now, let's see how they stack up to some mainstream cohorts in the data space, against some bigger companies and sometimes public companies. This chart shows net score on the vertical axis, which is a measure of spending momentum, and pervasiveness in the data set on the horizontal axis. You can see that chart insert in the upper right; that informs how the dots are plotted: net score against shared N. And that red dotted line at 40% indicates a highly elevated net score; anything above that we think is really, really impressive. And here we're just comparing Databricks with Snowflake, Cloudera, and Oracle. And that squiggly line leading to Databricks shows their path since 2021 by quarter. And you can see it's performing extremely well, maintaining an elevated net score in that range.
Now, it's comparable on the vertical axis to Snowflake, and it's consistently moving to the right and gaining share. Now, why did we choose to show Cloudera and Oracle? The reason is that Cloudera got the whole big data era started and was disrupted by Spark, and of course the cloud, Spark, and Databricks. And Oracle, in many ways, was the target of early big data players like Cloudera. Take a listen to Cloudera's CEO at the time, Mike Olson. This is back in 2010, the first year of theCUBE; play the clip. >> Look, back in the day, if you had a data problem, if you needed to run business analytics, you wrote the biggest check you could to Sun Microsystems, and you bought a great big, single-box central server, and any money that was left over you handed to Oracle for database licenses, and you installed that database on that box, and that was where you went for data. That was your temple of information. >> Okay? So Mike Olson implied that monolithic model was too expensive and inflexible, and Cloudera set out to fix that. But the best-laid plans, as they say. George, what do you make of the data that we just shared? >> So where Databricks has really come up, out of sort of Cloudera's tailpipe, was they took big data processing, made it coherent, made it a managed service so it could run in the cloud. So it relieved customers of the operational burden. Where they're really strong, their traditional meat and potatoes or bread and butter, is the predictive and prescriptive analytics: building and training and serving machine learning models. They've tried to move into traditional business intelligence, the more traditional descriptive and diagnostic analytics, but they're less mature there. So what that means is, the reason you see Databricks and Snowflake kind of side by side is there are many, many accounts that have both: Snowflake for business intelligence and Databricks for AI and machine learning. Where Databricks also did really well was in core data engineering: refining the data, the old ETL process, which kind of turned into ELT, where you load into the analytic repository in raw form and refine it. And so people have really used both, and each is trying to get into the other. >> Yeah, absolutely. We've reported on this quite a bit: Snowflake kind of moving into the domain of Databricks, and vice versa. And the last bit of ETR evidence that we want to share in terms of the company's momentum comes from ETR's Round Tables. They're run by Erik Bradley and former Gartner analyst and George, your colleague back at Gartner, Daren Brabham. And what we're going to show here are some direct quotes from IT pros in those Round Tables. There's a data science head and a CIO as well. I'll just make a few call-outs here; we won't spend too much time on it. But starting at the top: like all of us, we can't talk about Databricks without mentioning Snowflake; those two get us excited. The second comment zeros in on the flexibility and the robustness of Databricks from a data warehouse perspective. And then the last point is, despite competition from cloud players, Databricks has reinvented itself a couple of times over the years. And George, we're going to lay out today a scenario that perhaps calls for Databricks to do that once again. >> The big opportunity, and the big challenge for every tech company, is managing a technology transition. The transition that we're talking about is something that's been bubbling up, but it's really epochal.
For the first time in 60 years, we're moving from an application-centric view of the world to a data-centric view, because decisions are becoming more important than automating processes. So let me let you sort of develop that. >> Yeah, so let's talk about that here. We're going to put up some bullets on precisely that point and the changing sort of customer environment. So you've got IT stacks shifting, as George just said, from application-centric silos to data-centric stacks, where the priority is shifting from automating processes to automating decisions. You know, look at RPA, there's still a lot of automation going on, but that focus on application centricity and the data locked into those apps, that's changing. Data has historically been on the outskirts, in silos, but organizations, think of Amazon, think Uber, Airbnb, they're putting data at the core, and logic is increasingly being embedded in the data instead of the reverse. In other words, today the data's locked inside the app, which is why you need to extract that data and stick it in a data warehouse. The point, George, is we're putting forth this new vision for how data is going to be used, and you've used this Uber example to underscore the future state. Please explain. >> Okay, so this is hopefully an example everyone can relate to. The idea is, first, you're automating things that are happening in the real world and decisions that make those things happen autonomously, without humans in the loop all the time. So to use the Uber example: on your phone, you call a car, you call a driver. Automatically, the Uber app then looks at what drivers are in the vicinity, which drivers are free, matches one, calculates an ETA to you, calculates a price, calculates an ETA to your destination, and then directs the driver once they're there. The point of this is that that cannot happen very easily in an application-centric world, because all these little apps, the drivers, the riders, the routes, the fares, call on data locked up in many different apps, but they have to sit on a layer that makes it all coherent. >> But George, if Uber's doing this, doesn't this tech already exist? Isn't there a tech platform that does this already? >> Yes, and the mission of the entire tech industry is to build services that make it possible to compose and operate similar platforms and tools, but with the skills of mainstream developers in mainstream corporations, not the rocket scientists at Uber and Amazon. >> Okay, so we're talking about horizontally scaling across the industry, and actually giving a lot more organizations access to this technology. So by way of review, let's summarize the trend that's going on today in terms of the modern data stack that is propelling the likes of Databricks and Snowflake, which we just showed you in the ETR data, and is really a tailwind for them. The trend is toward a common repository for analytic data. That could be multiple virtual data warehouses inside of Snowflake, but you're in that Snowflake environment, or Lakehouses from Databricks, or multiple data lakes. And we've talked about what JP Morgan Chase is doing with the data mesh and gluing data lakes together; you've got various public clouds playing in this game. And then the data is annotated to have a common meaning. In other words, there's a semantic layer that enables applications to talk to the data elements and know that they have common and coherent meaning.
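As a rough illustration of what that common, coherent meaning can look like in code, here is a small, purely hypothetical Java sketch: the types, the matching rule, and the fare formula are stand-ins for the Uber-style elements discussed above, not anyone's actual platform or API.

    import java.util.Comparator;
    import java.util.List;
    import java.util.Optional;

    public class RideMatchingSketch {

        // Shared, commonly understood data elements: every service that touches
        // "drivers", "riders" and "trips" means the same thing by them.
        record Location(double lat, double lon) {
            double distanceTo(Location other) {
                double dLat = lat - other.lat;
                double dLon = lon - other.lon;
                return Math.sqrt(dLat * dLat + dLon * dLon); // rough, illustrative
            }
        }
        record Driver(String id, Location location, boolean free) {}
        record Rider(String id, Location location) {}
        record Trip(String driverId, String riderId, double estimatedFare) {}

        // Logic embedded with the data rather than locked inside separate apps:
        // match the nearest free driver and derive a fare from the same elements.
        static Optional<Trip> requestRide(Rider rider, List<Driver> drivers) {
            return drivers.stream()
                    .filter(Driver::free)
                    .min(Comparator.comparingDouble(
                            (Driver d) -> d.location().distanceTo(rider.location())))
                    .map(d -> new Trip(d.id(), rider.id(),
                            2.50 + 1.75 * d.location().distanceTo(rider.location())));
        }

        public static void main(String[] args) {
            List<Driver> drivers = List.of(
                    new Driver("d1", new Location(37.44, -122.16), true),
                    new Driver("d2", new Location(37.48, -122.20), false));
            Rider rider = new Rider("r1", new Location(37.45, -122.17));
            requestRide(rider, drivers).ifPresent(System.out::println);
        }
    }

The point of the sketch is only that matching, pricing, and routing all read the same shared definitions; when those definitions live in a semantic layer rather than inside each app, any application, dashboard, or model can reuse them with the same meaning, which is the coherence being described here.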
So George, the good news is this approach is more effective than the legacy monolithic models that Mike Olson was talking about. So what's the problem with this, in your view? >> So today's data platforms added immense value, because they connected the data that was previously locked up in these monolithic apps or in all these different microservices, and that supported traditional BI and AI/ML use cases. But now we want to build apps like Uber or Amazon.com, where they've got essentially an autonomously running supply chain and e-commerce app where humans only care for and feed it, but the thing itself figures out what to buy, when to buy, where to deploy it, when to ship it. For that, we need a semantic layer on top of the data, so that, as you were saying, the data coming from all those different apps is integrated, not just connected, and it means the same thing. And the issue is, whenever you add a new layer to a stack to support new applications, there are implications for the already existing layers, like can they support the new layer and its use cases? So for instance, if you add a semantic layer that embeds app logic with the data rather than vice versa, which is what we've been talking about and what's been the case for 60 years, then the new data layer faces challenges: the way you manage that data, the way you analyze that data, is not supported by today's tools. >> Okay, so actually, Alex, bring up that last slide if you would. I mean, you're basically saying at the bottom here, today's repositories don't really do joins at scale. In the future, you're talking about hundreds or thousands or millions of data connections, and with today's systems, we're talking about, I don't know, 6, 8, 10 joins. That is the fundamental problem, you're saying: a new data era is coming and existing systems won't be able to handle it? >> Yeah, one way of thinking about it is that even though we call them relational databases, when we actually want to do lots of joins, or when we want to analyze data from lots of different tables, we created a whole new industry of analytic databases where you sort of munge the data together into fewer tables, so you didn't have to do as many joins, because the joins are difficult and slow. And when you're going to arbitrarily join thousands, hundreds of thousands, or millions of elements, you need a new type of database. We have them, they're called graph databases, but to query them, you go back to the pre-relational era in terms of their usability. >> Okay, so we're going to come back to that and talk about how you get around that problem. But let's first lay out what we think the ideal data platform of the future looks like, and again, we're going to come back to this Uber example. In this graphic that George put together, awesome job by the way, we've got three layers. The application layer is where the data products reside; the example here is drivers, rides, maps, routes, ETA, et cetera, the digital version of what we were talking about in the previous slide: people, places, and things. The next layer is the data layer; that breaks down the silos and connects the data elements through semantics, and everything is coherent. And then the bottom layer, the legacy operational systems, feeds that data layer. George, explain what's different here, the graph database element, you talk about the relational query capabilities, and why can't I just throw memory at solving this problem?
>> Some of the graph databases do throw memory at the problem, and maybe without naming names, some of them live entirely in memory. And what you're dealing with is a pre-relational, in-memory database system where you navigate between elements. The issue with that is we've had SQL for 50 years, so we don't have to navigate; we can say what we want without saying how to get it. That's the core of the problem. >> Okay, so if I may, I just want to drill into this a little bit. You're talking about the expressiveness of a graph. Alex, if you'd bring that back out, the fourth bullet: the expressiveness of a graph database with the relational ease of query. Can you explain what you mean by that? >> Yeah, so graphs are great because you can describe anything with a graph; that's why they're becoming so popular. Expressive means you can represent anything easily. They're conducive to, you might say, a world where we now want something like the metaverse, like a 3D world, and I don't mean the Facebook metaverse, I mean the business metaverse, where we want to capture data about everything, but we want it in context. We want to build a set of digital twins that represent everything going on in the world, and Uber is a tiny example of that. Uber built a graph to represent all the drivers and riders and maps and routes. But what you need out of a database isn't just a way to store stuff and update stuff; you need to be able to ask questions of it, you need to be able to query it. And if you go back to pre-relational days, you had to know how to find your way to the data. It's sort of like giving directions to someone who doesn't have a GPS and a mapping system: you have to give them turn-by-turn directions. Whereas when you have a GPS and a mapping system, which is like the relational thing, you just say where you want to go, and it spits out the turn-by-turn directions, which, let's say, the car might follow or whoever you're directing would follow. The point is, it's much easier in a relational database to say, "I just want these results, you figure out how to get them." Graph databases have not taken over the world because, in some ways, querying them is taking a 50-year leap backwards. >> All right, got it. Okay, let's take a look at how the current Databricks offerings map to that ideal state that we just laid out. To do that, we put together this chart that looks at the key elements of the Databricks portfolio: the core capability, the weakness, and the threat that may loom. Start with Delta Lake, that's the storage layer, which is great for files and tables. It's got true separation of compute and storage as independent elements, I want you to double-click on that, George, but it's weaker for the type of low-latency ingest that we see coming in the future. And some of the threats are highlighted here: AWS could add transactional tables to S3, Iceberg adoption is picking up and could accelerate, and that could disrupt Databricks. George, add some color here, please? >> Okay, so this is the sort of classic competitive-forces analysis where you want to look at: what are customers demanding? What's the competitive pressure? What are the substitutes? Even what your suppliers might be pushing. Here, Delta Lake is, at its core, a set of transactional tables that sit on an object store. So think of it as, in a database system, this is the storage engine. And since S3 has been getting stronger for 15 years, you could see a scenario where they add transactional tables.
We have an open source alternative in Iceberg, which Snowflake and others support. But at the same time, Databricks has built an ecosystem out of tools, their own and others', that read and write to Delta tables, and that's what makes the Delta Lake an ecosystem. So they have a catalog, and the whole machine learning tool chain talks directly to the data here. That was their great advantage, because in the past with Snowflake, you had to pull all the data out of the database before the machine learning tools could work with it; that was a major shortcoming. They fixed that. But the point here is that even before we get to the semantic layer, the core foundation is under threat. >> Yep, got it. Okay, we've got a lot of ground to cover, so we're going to take a look at the Spark execution engine next. Think of that as the refinery that runs really efficient batch processing. That's kind of what disrupted Hadoop in a large way, but it's not Python friendly, and that's an issue because the data science and data engineering crowd are moving in that direction, and/or they're using dbt. George, we had Tristan Handy on at Supercloud, a really interesting discussion that you and I did. Explain why this is an issue for Databricks. >> So once the data lake was in place, what people did was refine their data in batch, and Spark has always had streaming support, which has gotten better. The underlying storage, as we've talked about, is an issue. But basically they took raw data, then they refined it into tables that were like customers and products and partners, and then they refined that again into what were like gold artifacts, which might be business intelligence metrics, or dashboards, which were collections of metrics. But they were running it on the Spark execution engine, which is a Java-based engine, or it's running on a Java-based virtual machine, which means all the data scientists and data engineers who want to work with Python are really working in sort of oil and water. Like, if you get an error in Python, you can't tell whether the problem is in Python or whether it's in Spark; there's just an impedance mismatch between the two. And at the same time, the whole world is now gravitating towards dbt, because it's a very nice and simple way to compose these data processing pipelines, and people are using either SQL in dbt or Python in dbt, and that is kind of a substitute for doing it all in Spark. So it's under threat even before we get to that semantic layer; it so happens that dbt itself is becoming the authoring environment for the semantic layer, with business intelligence metrics. So again, this is the second element that's under direct substitution and competitive threat. >> Okay, let's now move down to the third element, which is Photon. Photon is Databricks' BI lakehouse engine, which has integration with the Databricks tooling, which is very rich, but it's newer. And it's also not well suited for high-concurrency and low-latency use cases, which we think are going to increasingly become the norm over time. George, the call-out threat here is that customers want to connect everything to a semantic layer. Explain your thinking here and why this is a potential threat to Databricks. >> Okay, so two issues here. What you were touching on, the high concurrency and low latency when people are running like thousands of dashboards and data is streaming in, that's a problem, because a SQL data warehouse, the query engine, something like that matures over five to 10 years.
It's one of these things, like the joke that Andy Jassy makes, in general, he's really talking about Azure: there's no compression algorithm for experience. The Snowflake guys started more than five years earlier, and for a bunch of reasons, that lead is not something that Databricks can shrink; they'll always be behind. And that's why Snowflake has transactional tables now, and we can get into that in another show. But the key point is, near term, it's struggling to keep up with the use cases that are core to business intelligence, which is highly concurrent, lots of users doing interactive query. And then when you get to a semantic layer, that's when you need to be able to query data that might have thousands or tens of thousands or hundreds of thousands of joins, and a traditional SQL query engine is just not built for that. That's the core problem of traditional relational databases. >> Now, this is a quick aside. We always talk about Snowflake and Databricks in sort of the same context. We're not necessarily saying that Snowflake is in a position to tackle all these problems; we'll deal with that separately. So we don't mean to imply that, but we're just sort of laying out some of the things that we think Databricks customers need to be thinking about and having conversations with Databricks about, and we hope to have them as well. We'll come back to that in terms of strategic options. But finally, coming back to the table, we have Databricks' AI/ML tool chain, which has been an awesome capability for the data science crowd. It's comprehensive, it's a one-stop-shop solution, but the kicker here is that it's optimized for supervised model building. And the concern is that foundation models like GPT could cannibalize the current Databricks tooling. But George, can't Databricks, like other software companies, integrate foundation model capabilities into its platform? >> Okay, so the sound-bite answer to that is: sure, IBM 3270 terminals could call out to a graphical user interface when they're running on an XT terminal, but they're not exactly good citizens in that world. The core issue is Databricks has this wonderful end-to-end tool chain for training, deploying, monitoring, and running inference on supervised models. But the paradigm there is that the customer builds and trains and deploys each model for each feature or application. In a world of foundation models, which are pre-trained and unsupervised, the entire tool chain is different. So it's not like Databricks can junk everything they've done and start over with all their engineers; they have to keep maintaining what they've done in the old world, but they have to build something new that's optimized for the new world. It's a classic technology transition, and their mentality appears to be, "Oh, we'll support the new stuff from our old stuff," which is suboptimal. And as we'll talk about, their biggest patron and the company that put them on the map, Microsoft, really stopped working on their old stuff three years ago so that they could build a new tool chain optimized for this new world. >> Yeah, and so let's sort of close with what we think the options are and the decisions that Databricks has for its future architecture. They're smart people. I mean, we've had Ali Ghodsi on many times, super impressive. I think they've got to be keenly aware of the limitations and what's going on with foundation models. But at any rate, here in this chart, we lay out sort of three scenarios.
One is to re-architect the platform by incrementally adopting new technologies; an example might be to layer a graph query engine on top of its stack. They could license key technologies, like a graph database. They could get aggressive on M&A and buy in relational knowledge graphs, semantic technologies, vector database technologies. George, as David Floyer always says, there are a lot of ways to skin a cat. We've seen companies, even think about EMC, maintain their relevance through M&A for many, many years. George, give us your thoughts on each of these strategic options. >> Okay, I find this question the most challenging, because remember, I used to be an equity research analyst. I worked for Frank Quattrone; we were one of the top tech shops in the banking industry, although this was 20 years ago. But the M&A team was the top team in the industry and everyone wanted them on their side. And I remember going to meetings with these CEOs, where Frank and the bankers would say, "You want us for your M&A work because we can do better," and they really could do better. But in software, it's not like with EMC in hardware, because with hardware it's easier to connect different boxes. With software, the whole point of a software company is to integrate and architect the components so they fit together and reinforce each other, and that makes M&A harder. You can do it, but it takes a long time to fit the pieces together. Let me give you examples. If they put a graph query engine, let's say something like TinkerPop, on top of, I don't even know if it's possible, but let's say they put it on top of Delta Lake, then you have this graph query engine talking to their storage layer, Delta Lake. But if you want to do analysis, you've got to put the data in Photon, which is not really ideal for highly connected data. If you license a graph database, then most of your data is in the Delta Lake, and how do you sync it with the graph database? If you do sync it, you've got data in two places, which kind of defeats the purpose of having a unified repository. I find the semantic layer option, in number three, actually more promising, because that's something that you can layer on top of the storage layer that you have already. You just have to figure out then how to have your query engines talk to that. What I'm trying to highlight is, it's easy as an analyst to say, "You can buy this company or license that technology," but the really hard work is making it all work together, and that is where the challenge is. >> Yeah, and well, look, I thank you for laying that out. We've seen it, certainly with Microsoft and Oracle. I guess you might argue that, well, Microsoft had a monopoly in its desktop software and was able to throw off cash for a decade-plus while its stock was going sideways, and Oracle had won the database wars and had amazing margins and cash flow to be able to do that. Databricks hasn't even gone public yet. But I want to close with some of the players to watch. Alex, if you'd bring that back up, number four here. AWS, we talked about some of their options with S3, and it's not just AWS, it's blob storage, object storage generally. Microsoft, as you sort of alluded to, was an early go-to-market channel for Databricks; we didn't really address that, so maybe in the closing comments we can. Google obviously; Snowflake of course, we're going to dissect their options in a future Breaking Analysis; dbt Labs, where do they fit? Bob Muglia's company, Relational.ai. Why are these players to watch, George, in your opinion?
>> So everyone is trying to assemble and integrate the pieces that would make building data applications and data products easy. And the critical part isn't just assembling a bunch of pieces, which is traditionally what AWS did; that's a Unix ethos, which is, we give you the tools, you put them together, because you then have the maximum choice and maximum power. So what the hyperscalers are doing is they're taking their key-value stores, in the case of AWS it's DynamoDB, in the case of Azure it's Cosmos DB, and each is putting a graph query engine on top of those. So they have a unified storage and graph database engine: all the data would be collected in the key-value store, then you have a graph database on top, and that's how they're going to present a foundation for building these data apps. dbt Labs is putting a semantic layer on top of data lakes and data warehouses, and as we'll talk about, I'm sure, in the future, that makes it easier to swap out the underlying data platform or swap in new ones for specialized use cases. Snowflake, what they're doing, they're so strong in data management, and with their transactional tables, what they're trying to do is take in the operational data that used to be in the province of many state stores like MongoDB and say, "If you manage that data with us, it'll be connected to your analytic data without having to send it through a pipeline," and that's hugely valuable. Relational.ai is the wildcard, because what they're trying to do is almost like a holy grail, where you're trying to take the expressiveness of connecting all your data in a graph but make it as easy to query as you've always had it in a SQL database, or I should say, in a relational database. And if they do that, it'll be as easy to program these data apps as a spreadsheet was compared to procedural languages like BASIC or Pascal. That's the implication of Relational.ai. >> Yeah, and again, we talked before about why you can't just throw this all in memory. We're talking, in that example, about really getting down to differences in how you lay the data out on disk, a really new database architecture, correct? >> Yes, and that's why it's not clear that you could take a data lake, or even a Snowflake, and put a relational knowledge graph on those. You could potentially put a graph database on them, but it'll be compromised, because to really do what Relational.ai has done, which is the ease of relational on top of the power of graph, you actually need to change how you're storing your data on disk or even in memory. In other words, it's not like, "Oh, we can add graph support to Snowflake," because if you did that, in Snowflake or in your data lake, you'd have to change how the data is physically laid out, and then that would break all the tools that talk to it currently. >> What, in your estimation, is the timeframe where this becomes critical for a Databricks, and potentially Snowflake and others? I mentioned earlier midterm; are we talking three to five years here? Are we talking end of decade? What does your radar say? >> I think something surprising is going on that's going to sort of come up the tailpipe and take everyone by storm.
All the hype around business intelligence metrics: these are what we used to put in our dashboards, bookings, billings, revenue, customers, those things. Those were the key artifacts that used to live as definitions in your BI tools, and dbt has basically created a standard for defining those so they live in your data pipeline, or rather they're defined in your data pipeline and executed in the data warehouse or data lake in a shared way, so that all tools can use them. This sounds like a digression; it's not. All this stuff about data mesh, data fabric, what's really going on is we need a semantic layer, and the business intelligence metrics are defining common semantics for your data. And I think we're going to find, by the end of this year, that metrics are how we annotate all our analytic data to start adding common semantics to it. And we're going to find this semantic layer, it's not three to five years off, it's going to be staring us in the face by the end of this year. >> Interesting. And of course, SVB was shut down today. We're seeing serious tech headwinds, and oftentimes in these sorts of downturns or flat periods, which feels like this could be going on for a while, we emerge with a lot of new players and a lot of new technology. George, we've got to leave it there. Thank you to George Gilbert for excellent insights and input for today's episode. I want to thank Alex Myerson, who's on production and manages the podcast, and of course Ken Schiffman as well. Kristin Martin and Cheryl Knight help get the word out on social media and in our newsletters, and Rob Hof is our EIC over at SiliconANGLE.com; he does some great editing. Remember, all these episodes are available as podcasts; wherever you listen, all you've got to do is search "Breaking Analysis Podcast." We publish each week on wikibon.com and siliconangle.com, or you can email me at David.Vellante@siliconangle.com, or DM me @DVellante, or comment on our LinkedIn posts. And please do check out ETR.ai: great survey data, enterprise tech focus, phenomenal. This is Dave Vellante for theCUBE Insights, powered by ETR. Thanks for watching, and we'll see you next time on Breaking Analysis.
Jay Marshall, Neural Magic | AWS Startup Showcase S3E1
(upbeat music) >> Hello, everyone, and welcome to theCUBE's presentation of the "AWS Startup Showcase." This is season three, episode one. The focus of this episode is AI/ML: Top Startups Building Foundational Models, Infrastructure, and AI. It's great topics, super-relevant, and it's part of our ongoing coverage of startups in the AWS ecosystem. I'm your host, John Furrier, with theCUBE. Today, we're excited to be joined by Jay Marshall, VP of Business Development at Neural Magic. Jay, thanks for coming on theCUBE. >> Hey, John, thanks so much. Thanks for having us. >> We had a great CUBE conversation with you guys. This is very much about the company focuses. It's a feature presentation for the "Startup Showcase," and the machine learning at scale is the topic, but in general, it's more, (laughs) and we should call it "Machine Learning and AI: How to Get Started," because everybody is retooling their business. Companies that aren't retooling their business right now with AI first will be out of business, in my opinion. You're seeing massive shift. This is really truly the beginning of the next-gen machine learning AI trend. It's really seeing ChatGPT. Everyone sees that. That went mainstream. But this is just the beginning. This is scratching the surface of this next-generation AI with machine learning powering it, and with all the goodness of cloud, cloud scale, and how horizontally scalable it is. The resources are there. You got the Edge. Everything's perfect for AI 'cause data infrastructure's exploding in value. AI is just the applications. This is a super topic, so what do you guys see in this general area of opportunities right now in the headlines? And I'm sure you guys' phone must be ringing off the hook, metaphorically speaking, or emails and meetings and Zooms. What's going on over there at Neural Magic? >> No, absolutely, and you pretty much nailed most of it. I think that, you know, my background, we've seen for the last 20-plus years. Even just getting enterprise applications kind of built and delivered at scale, obviously, amazing things with AWS and the cloud to help accelerate that. And we just kind of figured out in the last five or so years how to do that productively and efficiently, kind of from an operations perspective. Got development and operations teams. We even came up with DevOps, right? But now, we kind of have this new kind of persona and new workload that developers have to talk to, and then it has to be deployed on those ITOps solutions. And so you pretty much nailed it. Folks are saying, "Well, how do I do this?" These big, generational models or foundational models, as we're calling them, they're great, but enterprises want to do that with their data, on their infrastructure, at scale, at the edge. So for us, yeah, we're helping enterprises accelerate that through optimizing models and then delivering them at scale in a more cost-effective fashion. >> Yeah, and I think one of the things, the benefits of OpenAI we saw, was not only is it open source, then you got also other models that are more proprietary, is that it shows the world that this is really happening, right? It's a whole nother level, and there's also new landscape kind of maps coming out. You got the generative AI, and you got the foundational models, large LLMs. Where do you guys fit into the landscape? Because you guys are in the middle of this. How do you talk to customers when they say, "I'm going down this road. I need help. I'm going to stand this up." 
This new AI infrastructure and applications, where do you guys fit in the landscape? >> Right, and really, the answer is both. I think today, when it comes to a lot of what for some folks would still be considered kind of cutting edge around computer vision and natural language processing, a lot of our optimization tools and our runtime are based around most of the common computer vision and natural language processing models. So your YOLOs, your BERTs, you know, your DistilBERTs and what have you, so we work to help optimize those, again, which have gotten great performance and great value for customers trying to get those into production. But when you get into the LLMs, and you mentioned some of the open source components there, our research teams have kind of been right in the trenches with those. So kind of the GPT open source equivalent being OPT, being able to actually take, you know, a multi-hundred-billion parameter model and sparsify that or optimize that down, shaving away a ton of parameters, and being able to run it on smaller infrastructure. So I think the evolution here, you know, all this stuff came out in the last six months in terms of being turned loose into the wild, but we're staying in the trenches with folks so that we can help optimize those as well and not require, again, the heavy compute, the heavy cost, the heavy power consumption as those models evolve as well. So we're staying right in with everybody while they're being built, but trying to get folks into production today with things that help with business value today. >> Jay, I really appreciate you coming on theCUBE, and before we came on camera, you said you just were on a customer call. I know you got a lot of activity. What specific things are you helping enterprises solve? What kind of problems? Take us through the spectrum from the beginning, people jumping in the deep end of the pool, some people kind of coming in, starting out slow. What is the scale? Can you scope the kind of use cases and problems that are emerging that people are calling you for? >> Absolutely, so I think if I break it down to kind of, like, your startup, or I maybe call 'em AI native to kind of steal from cloud native years ago, that group, it's pretty much, you know, part and parcel for how that group already runs. So if you have a data science team and an ML engineering team, you're building models, you're training models, you're deploying models. You're seeing firsthand the expense of starting to try to do that at scale. So it's really just a pure operational efficiency play. They kind of speak natively to our tools, which we're doing in the open source. So it's really helping, again, with the optimization of the models they've built, and then, again, giving them an alternative to expensive proprietary hardware accelerators to have to run them. Now, on the enterprise side, it varies, right? You have some kind of AI native folks there that already have these teams, but you also have kind of, like, AI curious, right? Like, they want to do it, but they don't really know where to start, and so for there, we actually have an open source toolkit that can help you get into this optimization, and then again, that runtime, that inferencing runtime, purpose-built for CPUs. It allows you to not have to worry, again, about do I have a hardware accelerator available? How do I integrate that into my application stack?
If I don't already know how to build this into my infrastructure, does my ITOps teams, do they know how to do this, and what does that runway look like? How do I cost for this? How do I plan for this? When it's just x86 compute, we've been doing that for a while, right? So it obviously still requires more, but at least it's a little bit more predictable. >> It's funny you mentioned AI native. You know, born in the cloud was a phrase that was out there. Now, you have startups that are born in AI companies. So I think you have this kind of cloud kind of vibe going on. You have lift and shift was a big discussion. Then you had cloud native, kind of in the cloud, kind of making it all work. Is there a existing set of things? People will throw on this hat, and then what's the difference between AI native and kind of providing it to existing stuff? 'Cause we're a lot of people take some of these tools and apply it to either existing stuff almost, and it's not really a lift and shift, but it's kind of like bolting on AI to something else, and then starting with AI first or native AI. >> Absolutely. It's a- >> How would you- >> It's a great question. I think that probably, where I'd probably pull back to kind of allow kind of retail-type scenarios where, you know, for five, seven, nine years or more even, a lot of these folks already have data science teams, you know? I mean, they've been doing this for quite some time. The difference is the introduction of these neural networks and deep learning, right? Those kinds of models are just a little bit of a paradigm shift. So, you know, I obviously was trying to be fun with the term AI native, but I think it's more folks that kind of came up in that neural network world, so it's a little bit more second nature, whereas I think for maybe some traditional data scientists starting to get into neural networks, you have the complexity there and the training overhead, and a lot of the aspects of getting a model finely tuned and hyperparameterization and all of these aspects of it. It just adds a layer of complexity that they're just not as used to dealing with. And so our goal is to help make that easy, and then of course, make it easier to run anywhere that you have just kind of standard infrastructure. >> Well, the other point I'd bring out, and I'd love to get your reaction to, is not only is that a neural network team, people who have been focused on that, but also, if you look at some of the DataOps lately, AIOps markets, a lot of data engineering, a lot of scale, folks who have been kind of, like, in that data tsunami cloud world are seeing, they kind of been in this, right? They're, like, been experiencing that. >> No doubt. I think it's funny the data lake concept, right? And you got data oceans now. Like, the metaphors just keep growing on us, but where it is valuable in terms of trying to shift the mindset, I've always kind of been a fan of some of the naming shift. I know with AWS, they always talk about purpose-built databases. And I always liked that because, you know, you don't have one database that can do everything. Even ones that say they can, like, you still have to do implementation detail differences. So sitting back and saying, "What is my use case, and then which database will I use it for?" I think it's kind of similar here. 
And when you're building those data teams, if you don't have folks that are doing data engineering, kind of that data harvesting, pre-processing, you got to do all that before a model's even going to care about it. So yeah, it's definitely a central piece of this as well, and again, whether or not you're going to be AI native as you're making your way, kind of, you know, on that journey, you know, data's definitely a huge component of it. >> Yeah, you would have loved our Supercloud event we had. Talk about naming and, you know, data meshes were talked about a lot. You're starting to see the control plane layers of data. I think that was the beginning of what I saw as that data infrastructure shift, to be horizontally scalable. So I have to ask you, with Neural Magic, when your customers and the people that are prospects for you guys, they're probably asking a lot of questions because I think the general thing that we see is, "How do I get started? Which GPU do I use?" I mean, there's a lot of things that are kind of, I won't say technical or targeted towards people who are living in that world, but, like, as the mainstream enterprises come in, they're going to need a playbook. What do you guys see, what do you guys offer your clients when they come in, and what do you recommend? >> Absolutely, and I think where we hook in specifically tends to be on the training side. So again, I've built a model. Now, I want to really optimize that model. And then on the runtime side when you want to deploy it, you know, we run that optimized model. And so that's where we're able to provide value. We even have a labs offering in terms of being able to pair up our engineering teams with a customer's engineering teams, and we can actually help with most of that pipeline. So even if it is something where you have a dataset and you want some help in picking a model, you want some help training it, you want some help deploying that, we can actually help there as well. You know, there's also a great partner ecosystem out there, like a lot of folks even in the "Startup Showcase" here, that extend beyond into kind of your earlier comment around data engineering or downstream ITOps or the all-up MLOps umbrella. So we can absolutely engage with our labs, and then, of course, you know, again, partners, which are always kind of key to this. So you are spot on. I think what's happened with the kind of this, they talk about a hockey stick. This is almost like a flat wall now with the rate of innovation right now in this space. And so we do have a lot of folks wanting to go straight from curious to native. And so that's definitely where the partner ecosystem comes in so hard 'cause there just isn't anybody or any teams out there that literally do everything from, "Here's my blank database, and I want an API that does all the stuff," right? Like, that's a big chunk, but we can definitely help with the model to delivery piece.
So he's a 20-year professor at MIT. Actually, he was doing a lot of work on kind of multicore processing before there were even physical multicores, and actually even did a stint in computational neurobiology in the 2010s, and the impetus for this whole technology, has a great talk on YouTube about it, where he talks about the fact that his work there, he kind of realized that the way neural networks encode and how they're executed by kind of ramming data layer by layer through these kind of HPC-style platforms, actually was not analogous to how the human brain actually works. So we're on one side, we're building neural networks, and we're trying to emulate neurons. We're not really executing them that way. So our team, which one of the co-founders, also an ex-MIT, that was kind of the birth of why can't we leverage this super-performance CPU platform, which has those really fat, fast caches attached to each core, and actually start to find a way to break that model down in a way that I can execute things in parallel, not having to do them sequentially? So it is a lot of amazing, like, talks and stuff that show kind of the magic, if you will, a part of the pun of Neural Magic, but that's kind of the foundational layer of all the engineering that we do here. And in terms of how we're able to bring it to reality for customers, I'll give one customer quote where it's a large retailer, and it's a people-counting application. So a very common application. And that customer's actually been able to show literally double the amount of cameras being run with the same amount of compute. So for a one-to-one perspective, two-to-one, business leaders usually like that math, right? So we're able to show pure cost savings, but even performance-wise, you know, we have some of the common models like your ResNets and your YOLOs, where we can actually even perform better than hardware-accelerated solutions. So we're trying to do, I need to just dumb it down to better, faster, cheaper, but from a commodity perspective, that's where we're accelerating. >> That's not a bad business model. Make things easier to use, faster, and reduce the steps it takes to do stuff. So, you know, that's always going to be a good market. Now, you guys have DeepSparse, which we've talked about on our CUBE conversation prior to this interview, delivers ML models through the software so the hardware allows for a decoupling, right? >> Yep. >> Which is going to drive probably a cost advantage. Also, it's also probably from a deployment standpoint it must be easier. Can you share the benefits? Is it a cost side? Is it more of a deployment? What are the benefits of the DeepSparse when you guys decouple the software from the hardware on the ML models? >> No you actually, you hit 'em both 'cause that really is primarily the value. Because ultimately, again, we're so early. And I came from this world in a prior life where I'm doing Java development, WebSphere, WebLogic, Tomcat open source, right? When we were trying to do innovation, we had innovation buckets, 'cause everybody wanted to be on the web and have their app and a browser, right? We got all the money we needed to build something and show, hey, look at the thing on the web, right? But when you had to get in production, that was the challenge. So to what you're speaking to here, in this situation, we're able to show we're just a Python package. 
So whether you just install it on the operating system itself, or we also have a containerized version you can drop on any container orchestration platform, so ECS or EKS on AWS. And so you get all the auto-scaling features. So when you think about that kind of a world where you have everything from real-time inferencing to kind of after hours batch processing inferencing, the fact that you can auto scale that hardware up and down and it's CPU based, so you're paying by the minute instead of maybe paying by the hour at a lower cost shelf, it does everything from pure cost to, again, I can have my standard IT team say, "Hey, here's the Kubernetes in the container," and it just runs on the infrastructure we're already managing. So yeah, operational, cost and again, and many times even performance. (audio warbles) CPUs if I want to. >> Yeah, so that's easier on the deployment too. And you don't have this kind of, you know, blank check kind of situation where you don't know what's on the backend on the cost side. >> Exactly. >> And you control the actual hardware and you can manage that supply chain. >> And keep in mind, exactly. Because the other thing that sometimes gets lost in the conversation, depending on where a customer is, some of these workloads, like, you know, you and I remember a world where even like the roundtrip to the cloud and back was a problem for folks, right? We're used to extremely low latency. And some of these workloads absolutely also adhere to that. But there's some workloads where the latency isn't as important. And we actually even provide the tuning. Now, if we're giving you five milliseconds of latency and you don't need that, you can tune that back. So less CPU, lower cost. Now, throughput and other things come into play. But that's the kind of configurability and flexibility we give for operations. >> All right, so why should I call you if I'm a customer or prospect Neural Magic, what problem do I have or when do I know I need you guys? When do I call you in and what does my environment look like? When do I know? What are some of the signals that would tell me that I need Neural Magic? >> No, absolutely. So I think in general, any neural network, you know, the process I mentioned before called sparcification, it's, you know, an optimization process that we specialize in. Any neural network, you know, can be sparcified. So I think if it's a deep-learning neural network type model. If you're trying to get AI into production, you have cost concerns even performance-wise. I certainly hate to be too generic and say, "Hey, we'll talk to everybody." But really in this world right now, if it's a neural network, it's something where you're trying to get into production, you know, we are definitely offering, you know, kind of an at-scale performant deployable solution for deep learning models. >> So neural network you would define as what? Just devices that are connected that need to know about each other? What's the state-of-the-art current definition of neural network for customers that may think they have a neural network or might not know they have a neural network architecture? What is that definition for neural network? >> That's a great question. So basically, machine learning models that fall under this kind of category, you hear about transformers a lot, or I mentioned about YOLO, the YOLO family of computer vision models, or natural language processing models like BERT. 
If you have a data science team or even developers, some even regular, I used to call myself a nine to five developer 'cause I worked in the enterprise, right? So like, hey, we found a new open source framework, you know, I used to use Spring back in the day and I had to go figure it out. There's developers that are pulling these models down and they're figuring out how to get 'em into production, okay? So I think all of those kinds of situations, you know, if it's a machine learning model of the deep learning variety, that's, you know, really specifically where we shine. >> Okay, so let me pretend I'm a customer for a minute. I have all these videos, like all these transcripts, I have all these people that we've interviewed, CUBE alumni, and I say to my team, "Let's AI-ify, sparcify theCUBE." >> Yep. >> What do I do? I mean, do I just like, my developers got to get involved and they're going to be like, "Well, how do I upload it to the cloud? Do I use a GPU?" So there's a thought process. And I think a lot of companies are going through that example of let's get on this AI, how can it help our business? >> Absolutely. >> What does that progression look like? Take me through that example. I mean, I made theCUBE example up, but we do have a lot of data. We have large data models and we have people and connect to the internet and so we kind of seem like there's a neural network. I think every company might have a neural network in place. >> Well, and I was going to say, I think in general, you all probably do represent even the standard enterprise more than most. 'Cause even the enterprise is going to have a ton of video content, a ton of text content. So I think it's a great example. So I think that that kind of sea or I'll even go ahead and use that term data lake again, of data that you have, you're probably going to want to be setting up kind of machine learning pipelines that are going to be doing all of the pre-processing from kind of the raw data to kind of prepare it into the format that say a YOLO would actually use or let's say BERT for natural language processing. So you have all these transcripts, right? So we would do a pre-processing path where we would create that into the file format that BERT, the machine learning model would know how to train off of. So that's kind of all the pre-processing steps. And then for training itself, we actually enable what's called sparse transfer learning. So transfer learning is a very popular method of doing training with existing models. So we would be able to retrain that BERT model with your transcript data that we have now done the pre-processing with to get it into the proper format. And now we have a BERT natural language processing model that's been trained on your data. And now we can deploy that onto DeepSparse runtime so that now you can ask that model whatever questions, or I should say pass it text; you're not going to ask it those kinds of questions like ChatGPT, although we can do that too. But you're going to pass text through the BERT model and it's going to give you answers back. It could be things like sentiment analysis or text classification. You just call the model, and now when you pass text through it, you get the answers better, faster or cheaper. I'll use that reference again. >> Okay, we can create a CUBE bot to give us questions on the fly from the AI bot, you know, from our previous guests. >> Well, and I will tell you using that as an example.
So I had mentioned OPT before, kind of the open source version of ChatGPT. So, you know, typically that requires multiple GPUs to run. So our research team, I may have mentioned earlier, we've been able to sparcify that over 50% already and run it on only a single GPU. And so in that situation, you could train OPT with that corpus of data and do exactly what you say. Actually we could use Alexa, we could use Alexa to actually respond back with voice. How about that? We'll do an API call and we'll actually have an interactive Alexa-enabled bot. >> Okay, we're going to be a customer, let's put it on the list. But this is a great example of what you guys call software delivered AI, a topic we chatted about on theCUBE conversation. This really means this is a developer opportunity. This really is the convergence of the data growth, the restructuring, how data is going to be horizontally scalable, meets developers. So this is an AI developer model going on right now, which is kind of unique. >> It is, John, I will tell you what's interesting. And again, folks don't always think of it this way, you know, the AI magical goodness is now getting pushed in the middle where the developers and IT are operating. And so it again, that paradigm, although for some folks seem obvious, again, if you've been around for 20 years, that whole all that plumbing is a thing, right? And so what we basically help with is when you deploy the DeepSparse runtime, we have a very rich API footprint. And so the developers can call the API, ITOps can run it, or to your point, it's developer friendly enough that you could actually deploy our off-the-shelf models. We have something called the SparseZoo where we actually publish pre-optimized or pre-sparcified models. And so developers could literally grab those right off the shelf with the training they've already had and just put 'em right into their applications and deploy them as containers. So yeah, we enable that for sure as well. >> It's interesting, DevOps was infrastructure as code and we had a last season, a series on data as code, which we kind of coined. This is data as code. This is a whole nother level of opportunity where developers just want to have programmable data and apps with AI. This is a whole new- >> Absolutely. >> Well, absolutely great, great stuff. Our news team at SiliconANGLE and theCUBE said you guys had a little bit of a launch announcement you wanted to make here on the "AWS Startup Showcase." So Jay, you have something that you want to launch here? >> Yes, and thank you John for teeing me up. So I'm going to try to put this in like, you know, the vein of like an AWS, like main stage keynote launch, okay? So we're going to try this out. So, you know, a lot of our product has obviously been built on top of x86. I've been sharing that the past 15 minutes or so. And with that, you know, we're seeing a lot of acceleration for folks wanting to run on commodity infrastructure. But we've had customers and prospects and partners tell us that, you know, ARM and all of its kind of variance are very compelling, both cost performance-wise and also obviously with Edge. And wanted to know if there was anything we could do from a runtime perspective with ARM. And so we got the work and, you know, it's a hard problem to solve 'cause the instructions set for ARM is very different than the instruction set for x86, and our deep tensor column technology has to be able to work with that lower level instruction spec. 
But working really hard, the engineering team's been at it and we are happy to announce here at the "AWS Startup Showcase," that DeepSparse inference now has, or inference runtime now has support for AWS Graviton instances. So it's no longer just x86, it is also ARM and that obviously also opens up the door to Edge and further out the stack, so that optimize once, run anywhere, we're now going to open up. So it is an early access. So if you go to neuralmagic.com/graviton, you can sign up for early access, but we're excited to now get into the ARM side of the fence as well on top of Graviton. >> That's awesome. Our news team is going to jump on that news. We'll get it right up. We get a little scoop here on the "Startup Showcase." Jay Marshall, great job. That really highlights the flexibility that you guys have when you decouple the software from the hardware. And again, we're seeing open source driving a lot more in AI ops now with machine learning and AI. So to me, that makes a lot of sense. And congratulations on that announcement. Final minute or so we have left, give a summary of what you guys are all about. Put a plug in for the company, what you guys are looking to do. I'm sure you're probably hiring like crazy. Take the last few minutes to give a plug for the company and give a summary. >> No, I appreciate that so much. So yeah, join us at neuralmagic.com, you know, part of what we didn't spend a lot of time on here, our optimization tools, we are doing all of that in the open source. It's called SparseML and I mentioned SparseZoo briefly. So we really want the data science community and ML engineering community to join us out there. And again, the DeepSparse runtime, it's actually free to use for trial purposes and for personal use. So you can actually run all this on your own laptop or on an AWS instance of your choice. We are now live in the AWS marketplace. So push button, deploy, come try us out and reach out to us on neuralmagic.com. And again, sign up for the Graviton early access. >> All right, Jay Marshall, Vice President of Business Development at Neural Magic here, talking about performant, cost effective machine learning at scale. This is season three, episode one, focusing on foundational models as far as building data infrastructure and AI, AI native. I'm John Furrier with theCUBE. Thanks for watching. (bright upbeat music)
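For readers who want to try the workflow Jay describes, here is a rough sketch of serving a sparsified model on commodity CPUs with the DeepSparse runtime. It assumes the `deepsparse` Python package's `Pipeline` interface; the task name and the SparseZoo model stub below are placeholders, so check Neural Magic's current documentation for the exact identifiers.

```python
# Minimal sketch: run an optimized (sparsified) sentiment-analysis model on CPU
# with DeepSparse, as described in the interview.
# Install with: pip install deepsparse
from deepsparse import Pipeline

# The model_path is a placeholder SparseZoo stub; substitute a real stub from
# the SparseZoo, or a local ONNX file exported with SparseML.
sentiment = Pipeline.create(
    task="sentiment-analysis",  # assumed task name; see the deepsparse docs
    model_path="zoo:<pick-a-pruned-quantized-bert-stub-from-sparsezoo>",
)

# Pass raw text through the model; the runtime executes it on x86 (and, per
# the announcement above, AWS Graviton/ARM in early access).
print(sentiment(["theCUBE interviews make great training data for a CUBE bot."]))
```

The same pattern, packaged in a container, is what makes the ECS/EKS deployment and auto-scaling story Jay mentions possible: the optimized model is just another CPU-bound Python service.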
Robert Nishihara, Anyscale | AWS Startup Showcase S3 E1
(upbeat music) >> Hello everyone. Welcome to theCUBE's presentation of the "AWS Startup Showcase." The topic this episode is AI and machine learning, top startups building foundational model infrastructure. This is season three, episode one of the ongoing series covering exciting startups from the AWS ecosystem. And this time we're talking about AI and machine learning. I'm your host, John Furrier. I'm excited to be joined today by Robert Nishihara, who's the co-founder and CEO of a hot startup called Anyscale. He's here to talk about Ray, the open source project, and Anyscale's infrastructure for foundation models as well. Robert, thank you for joining us today. >> Yeah, thanks so much as well. >> I've been following your company since the founding pre pandemic and you guys really had a great vision, scaled up, and are in a perfect position for this big wave that we all see with ChatGPT and OpenAI that's gone mainstream. Finally, AI has broken out through the ropes and now gone mainstream, so I think you guys are really well positioned. I'm looking forward to talking with you today. But before we get into it, introduce the core mission for Anyscale. Why do you guys exist? What is the North Star for Anyscale? >> Yeah, like you mentioned, there's a tremendous amount of excitement about AI right now. You know, I think a lot of us believe that AI can transform just every different industry. So one of the things that was clear to us when we started this company was that the amount of compute needed to do AI was just exploding. Like to actually succeed with AI, companies like OpenAI or Google or you know, these companies getting a lot of value from AI, were not just running these machine learning models on their laptops or on a single machine. They were scaling these applications across hundreds or thousands or more machines and GPUs and other resources in the Cloud. And so to actually succeed with AI, and this has been one of the biggest trends in computing, maybe the biggest trend in computing in, you know, in recent history, the amount of compute has been exploding. And so to actually succeed with that AI, to actually build these scalable applications and scale the AI applications, there's a tremendous software engineering lift to build the infrastructure to actually run these scalable applications. And that's very hard to do. So one of the reasons many AI projects and initiatives fail is that, or don't make it to production, is the need for this scale, the infrastructure lift, to actually make it happen. So our goal here with Anyscale and Ray, is to make that easy, is to make scalable computing easy. So that as a developer or as a business, if you want to do AI, if you want to get value out of AI, all you need to know is how to program on your laptop. Like, all you need to know is how to program in Python. And if you can do that, then you're good to go. Then you can do what companies like OpenAI or Google do and get value out of machine learning. >> That programming example of how easy it is with Python reminds me of the early days of Cloud, when infrastructure as code was talked about, it was just code making the infrastructure programmable. That's super important. That's what AI people wanted, first program AI. That's the new trend. And I want to understand, if you don't mind explaining, the relationship that Anyscale has to these foundational models and in particular the large language models, also called LLMs, as seen with OpenAI and ChatGPT.
Before you get into the relationship that you have with them, can you explain why the hype around foundational models? Why are people going crazy over foundational models? What is it and why is it so important? >> Yeah, so foundational models and foundation models are incredibly important because they enable businesses and developers to get value out of machine learning, to use machine learning off the shelf with these large models that have been trained on tons of data and that are useful out of the box. And then, of course, you know, as a business or as a developer, you can take those foundational models and repurpose them or fine tune them or adapt them to your specific use case and what you want to achieve. But it's much easier to do that than to train them from scratch. And I think there are three, for people to actually use foundation models, there are three main types of workloads or problems that need to be solved. One is training these foundation models in the first place, like actually creating them. The second is fine tuning them and adapting them to your use case. And the third is serving them and actually deploying them. Okay, so Ray and Anyscale are used for all of these three different workloads. Companies like OpenAI or Cohere that train large language models. Or open source versions like GPTJ are done on top of Ray. There are many startups and other businesses that fine tune, that, you know, don't want to train the large underlying foundation models, but that do want to fine tune them, do want to adapt them to their purposes, and build products around them and serve them, those are also using Ray and Anyscale for that fine tuning and that serving. And so the reason that Ray and Anyscale are important here is that, you know, building and using foundation models requires a huge scale. It requires a lot of data. It requires a lot of compute, GPUs, TPUs, other resources. And to actually take advantage of that and actually build these scalable applications, there's a lot of infrastructure that needs to happen under the hood. And so you can either use Ray and Anyscale to take care of that and manage the infrastructure and solve those infrastructure problems. Or you can build the infrastructure and manage the infrastructure yourself, which you can do, but it's going to slow your team down. It's going to, you know, many of the businesses we work with simply don't want to be in the business of managing infrastructure and building infrastructure. They want to focus on product development and move faster. >> I know you got a keynote presentation we're going to go to in a second, but I think you hit on something I think is the real tipping point, doing it yourself, hard to do. These are things where opportunities are and the Cloud did that with data centers. Turned a data center and made it an API. The heavy lifting went away and went to the Cloud so people could be more creative and build their product. In this case, build their creativity. Is that kind of what's the big deal? Is that kind of a big deal happening that you guys are taking the learnings and making that available so people don't have to do that? >> That's exactly right. So today, if you want to succeed with AI, if you want to use AI in your business, infrastructure work is on the critical path for doing that. To do AI, you have to build infrastructure. You have to figure out how to scale your applications. That's going to change. 
We're going to get to the point, and you know, with Ray and Anyscale, we're going to remove the infrastructure from the critical path so that as a developer or as a business, all you need to focus on is your application logic, what you want the the program to do, what you want your application to do, how you want the AI to actually interface with the rest of your product. Now the way that will happen is that Ray and Anyscale will still, the infrastructure work will still happen. It'll just be under the hood and taken care of by Ray in Anyscale. And so I think something like this is really necessary for AI to reach its potential, for AI to have the impact and the reach that we think it will, you have to make it easier to do. >> And just for clarification to point out, if you don't mind explaining the relationship of Ray and Anyscale real quick just before we get into the presentation. >> So Ray is an open source project. We created it. We were at Berkeley doing machine learning. We started Ray so that, in order to provide an easy, a simple open source tool for building and running scalable applications. And Anyscale is the managed version of Ray, basically we will run Ray for you in the Cloud, provide a lot of tools around the developer experience and managing the infrastructure and providing more performance and superior infrastructure. >> Awesome. I know you got a presentation on Ray and Anyscale and you guys are positioning as the infrastructure for foundational models. So I'll let you take it away and then when you're done presenting, we'll come back, I'll probably grill you with a few questions and then we'll close it out so take it away. >> Robert: Sounds great. So I'll say a little bit about how companies are using Ray and Anyscale for foundation models. The first thing I want to mention is just why we're doing this in the first place. And the underlying observation, the underlying trend here, and this is a plot from OpenAI, is that the amount of compute needed to do machine learning has been exploding. It's been growing at something like 35 times every 18 months. This is absolutely enormous. And other people have written papers measuring this trend and you get different numbers. But the point is, no matter how you slice and dice it, it' a astronomical rate. Now if you compare that to something we're all familiar with, like Moore's Law, which says that, you know, the processor performance doubles every roughly 18 months, you can see that there's just a tremendous gap between the needs, the compute needs of machine learning applications, and what you can do with a single chip, right. So even if Moore's Law were continuing strong and you know, doing what it used to be doing, even if that were the case, there would still be a tremendous gap between what you can do with the chip and what you need in order to do machine learning. And so given this graph, what we've seen, and what has been clear to us since we started this company, is that doing AI requires scaling. There's no way around it. It's not a nice to have, it's really a requirement. And so that led us to start Ray, which is the open source project that we started to make it easy to build these scalable Python applications and scalable machine learning applications. And since we started the project, it's been adopted by a tremendous number of companies. 
Companies like OpenAI, which use Ray to train their large models like ChatGPT, companies like Uber, which run all of their deep learning and classical machine learning on top of Ray, companies like Shopify or Spotify or Instacart or Lyft or Netflix, ByteDance, which use Ray for their machine learning infrastructure. Companies like Ant Group, which makes Alipay, you know, they use Ray across the board for fraud detection, for online learning, for detecting money laundering, you know, for graph processing, stream processing. Companies like Amazon, you know, run Ray at a tremendous scale and just petabytes of data every single day. And so the project has seen just enormous adoption since, over the past few years. And one of the most exciting use cases is really providing the infrastructure for building training, fine tuning, and serving foundation models. So I'll say a little bit about, you know, here are some examples of companies using Ray for foundation models. Cohere trains large language models. OpenAI also trains large language models. You can think about the workloads required there are things like supervised pre-training, also reinforcement learning from human feedback. So this is not only the regular supervised learning, but actually more complex reinforcement learning workloads that take human input about what response to a particular question, you know is better than a certain other response. And incorporating that into the learning. There's open source versions as well, like GPTJ also built on top of Ray as well as projects like Alpa coming out of UC Berkeley. So these are some of the examples of exciting projects in organizations, training and creating these large language models and serving them using Ray. Okay, so what actually is Ray? Well, there are two layers to Ray. At the lowest level, there's the core Ray system. This is essentially low level primitives for building scalable Python applications. Things like taking a Python function or a Python class and executing them in the cluster setting. So Ray core is extremely flexible and you can build arbitrary scalable applications on top of Ray. So on top of Ray, on top of the core system, what really gives Ray a lot of its power is this ecosystem of scalable libraries. So on top of the core system you have libraries, scalable libraries for ingesting and pre-processing data, for training your models, for fine tuning those models, for hyper parameter tuning, for doing batch processing and batch inference, for doing model serving and deployment, right. And a lot of the Ray users, the reason they like Ray is that they want to run multiple workloads. They want to train and serve their models, right. They want to load their data and feed that into training. And Ray provides common infrastructure for all of these different workloads. So this is a little overview of what Ray, the different components of Ray. So why do people choose to go with Ray? I think there are three main reasons. The first is the unified nature. The fact that it is common infrastructure for scaling arbitrary workloads, from data ingest to pre-processing to training to inference and serving, right. This also includes the fact that it's future proof. AI is incredibly fast moving. And so many people, many companies that have built their own machine learning infrastructure and standardized on particular workflows for doing machine learning have found that their workflows are too rigid to enable new capabilities. 
If they want to do reinforcement learning, if they want to use graph neural networks, they don't have a way of doing that with their standard tooling. And so Ray, being future proof and being flexible and general gives them that ability. Another reason people choose Ray and Anyscale is the scalability. This is really our bread and butter. This is the reason, the whole point of Ray, you know, making it easy to go from your laptop to running on thousands of GPUs, making it easy to scale your development workloads and run them in production, making it easy to scale, you know, training, to scale data ingest, pre-processing and so on. So scalability and performance, you know, are critical for doing machine learning and that is something that Ray provides out of the box. And lastly, Ray is an open ecosystem. You can run it anywhere. You can run it on any Cloud provider. Google, you know, Google Cloud, AWS, Azure. You can run it on your Kubernetes cluster. You can run it on your laptop. It's extremely portable. And not only that, it's framework agnostic. You can use Ray to scale arbitrary Python workloads. You can use it to scale and it integrates with libraries like TensorFlow or PyTorch or JAX or XGBoost or Hugging Face or PyTorch Lightning, right, or Scikit-learn or just your own arbitrary Python code. It's open source. And in addition to integrating with the rest of the machine learning ecosystem and these machine learning frameworks, you can use Ray along with all of the other tooling in the machine learning ecosystem. That's things like Weights & Biases or MLflow, right. Or you know, different data platforms like Databricks, you know, Delta Lake or Snowflake, or tools for model monitoring, for feature stores, all of these integrate with Ray. And that's, you know, Ray provides that kind of flexibility so that you can integrate it into the rest of your workflow. And then Anyscale is the scalable compute platform that's built on top, you know, that provides Ray. So Anyscale is a managed Ray service that runs in the Cloud. And what Anyscale does is it offers the best way to run Ray. And if you think about what you get with Anyscale, there are fundamentally two things. One is about moving faster, accelerating the time to market. And you get that by having the managed service so that as a developer you don't have to worry about managing infrastructure, you don't have to worry about configuring infrastructure. You also, it provides, you know, optimized developer workflows. Things like easily moving from development to production, things like having the observability tooling, the debuggability to actually easily diagnose what's going wrong in a distributed application. So things like the dashboards and the other kinds of tooling for collaboration, for monitoring and so on. And then on top of that, so that's the first bucket, developer productivity, moving faster, faster experimentation and iteration. The second reason that people choose Anyscale is superior infrastructure. So this is things like, you know, cost efficiency, being able to easily take advantage of spot instances, being able to get higher GPU utilization, things like faster cluster startup times and auto scaling. Things like just overall better performance and faster scheduling. And so these are the kinds of things that Anyscale provides on top of Ray. It's the managed infrastructure. It's fast, it's like the developer productivity and velocity as well as performance. So this is what I wanted to share about Ray and Anyscale.
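To make the "low level primitives" concrete, here is a minimal sketch of the Ray core API Robert describes: an ordinary Python function and an ordinary Python class turned into a task and an actor that run across a cluster. The function and class are made-up stand-ins for illustration only.

```python
# Minimal sketch of Ray core: tasks (remote functions) and actors (remote classes).
# Install with: pip install ray
import ray

ray.init()  # connects to a configured cluster if one exists, otherwise starts a local one

@ray.remote
def preprocess(record: str) -> str:
    # Stand-in for a data pre-processing step that Ray schedules somewhere on the cluster.
    return record.lower()

@ray.remote
class Counter:
    # A stateful actor: a Python class pinned to a worker process on the cluster.
    def __init__(self):
        self.count = 0

    def add(self, n: int) -> int:
        self.count += n
        return self.count

# Launch tasks in parallel and gather the results.
futures = [preprocess.remote(x) for x in ["Ray", "Anyscale", "theCUBE"]]
print(ray.get(futures))

counter = Counter.remote()
print(ray.get(counter.add.remote(3)))
```

The scalable libraries Robert lists, for data ingest, training, tuning, and serving, are built on these same task and actor primitives.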
>> John: Awesome. >> Provide that context. But John, I'm curious what you think. >> I love it. I love the, so first of all, it's a platform because that's the platform architecture right there. So just to clarify, this is an Anyscale platform, not- >> That's right. >> Tools. So you got tools in the platform. Okay, that's key. Love that managed service. Just curious, you mentioned Python multiple times, is that because of PyTorch and TensorFlow or Python's the most friendly with machine learning or it's because it's very common amongst all developers? >> That's a great question. Python is the language that people are using to do machine learning. So it's the natural starting point. Now, of course, Ray is actually designed in a language agnostic way and there are companies out there that use Ray to build scalable Java applications. But for the most part right now we're focused on Python and being the best way to build these scalable Python and machine learning applications. But, of course, down the road there always is that potential. >> So if you're slinging Python code out there and you're watching that, you're watching this video, get on Anyscale bus quickly. Also, I just, while you were giving the presentation, I couldn't help, since you mentioned OpenAI, which by the way, congratulations 'cause they've had great scale, I've noticed in their rapid growth 'cause they were the fastest company to the number of users than anyone in the history of the computer industry, so major successor, OpenAI and ChatGPT, huge fan. I'm not a skeptic at all. I think it's just the beginning, so congratulations. But I actually typed into ChatGPT, what are the top three benefits of Anyscale and came up with scalability, flexibility, and ease of use. Obviously, scalability is what you guys are called. >> That's pretty good. >> So that's what they came up with. So they nailed it. Did you have an inside prompt training, buy it there? Only kidding. (Robert laughs) >> Yeah, we hard coded that one. >> But that's the kind of thing that came up really, really quickly if I asked it to write a sales document, it probably will, but this is the future interface. This is why people are getting excited about the foundational models and the large language models because it's allowing the interface with the user, the consumer, to be more human, more natural. And this is clearly will be in every application in the future. >> Absolutely. This is how people are going to interface with software, how they're going to interface with products in the future. It's not just something, you know, not just a chat bot that you talk to. This is going to be how you get things done, right. How you use your web browser or how you use, you know, how you use Photoshop or how you use other products. Like you're not going to spend hours learning all the APIs and how to use them. You're going to talk to it and tell it what you want it to do. And of course, you know, if it doesn't understand it, it's going to ask clarifying questions. You're going to have a conversation and then it'll figure it out. >> This is going to be one of those things, we're going to look back at this time Robert and saying, "Yeah, from that company, that was the beginning of that wave." And just like AWS and Cloud Computing, the folks who got in early really were in position when say the pandemic came. 
So getting in early is a good thing and that's what everyone's talking about is getting in early and playing around, maybe replatforming or even picking one or few apps to refactor with some staff and managed services. So people are definitely jumping in. So I have to ask you the ROI cost question. You mentioned some of those, Moore's Law versus what's going on in the industry. When you look at that kind of scale, the first thing that jumps out at people is, "Okay, I love it. Let's go play around." But what's it going to cost me? Am I going to be tied to certain GPUs? What's the landscape look like from an operational standpoint, from the customer? Are they locked in and the benefit was flexibility, are you flexible to handle any Cloud? What is the customers, what are they looking at? Basically, that's my question. What's the customer looking at? >> Cost is super important here and many of the companies, I mean, companies are spending a huge amount on their Cloud computing, on AWS, and on doing AI, right. And I think a lot of the advantage of Anyscale, what we can provide here is not only better performance, but cost efficiency. Because if we can run something faster and more efficiently, it can also use less resources and you can lower your Cloud spending, right. We've seen companies go from, you know, 20% GPU utilization with their current setup and the current tools they're using to running on Anyscale and getting more like 95, you know, 100% GPU utilization. That's something like a five x improvement right there. So depending on the kind of application you're running, you know, it's a significant cost savings. We've seen companies that have, you know, processing petabytes of data every single day with Ray going from, you know, getting order of magnitude cost savings by switching from what they were previously doing to running their application on Ray. And when you have applications that are spending, you know, potentially $100 million a year and getting a 10 X cost savings is just absolutely enormous. So these are some of the kinds of- >> Data infrastructure is super important. Again, if the customer, if you're a prospect to this and thinking about going in here, just like the Cloud, you got infrastructure, you got the platform, you got SaaS, same kind of thing's going to go on in AI. So I want to get into that, you know, ROI discussion and some of the impact with your customers that are leveraging the platform. But first I hear you got a demo. >> Robert: Yeah, so let me show you, let me give you a quick run through here. So what I have open here is the Anyscale UI. I've started a little Anyscale Workspace. So Workspaces are the Anyscale concept for interactive developments, right. So here, imagine I'm just, you want to have a familiar experience like you're developing on your laptop. And here I have a terminal. It's not on my laptop. It's actually in the cloud running on Anyscale. And I'm just going to kick this off. This is going to train a large language model, so OPT. And it's doing this on 32 GPUs. We've got a cluster here with a bunch of CPU cores, bunch of memory. And as that's running, and by the way, if I wanted to run this on instead of 32 GPUs, 64, 128, this is just a one line change when I launch the Workspace. And what I can do is I can pull up VS code, right. Remember this is the interactive development experience. I can look at the actual code. Here it's using Ray train to train the torch model. 
We've got the training loop and we're saying that each worker gets access to one GPU and four CPU cores. And, of course, as I make the model larger, this is using deep speed, as I make the model larger, I could increase the number of GPUs that each worker gets access to, right. And how that is distributed across the cluster. And if I wanted to run on CPUs instead of GPUs or a different, you know, accelerator type, again, this is just a one line change. And here we're using Ray train to train the models, just taking my vanilla PyTorch model using Hugging Face and then scaling that across a bunch of GPUs. And, of course, if I want to look at the dashboard, I can go to the Ray dashboard. There are a bunch of different visualizations I can look at. I can look at the GPU utilization. I can look at, you know, the CPU utilization here where I think we're currently loading the model and running that actual application to start the training. And some of the things that are really convenient here about Anyscale, both I can get that interactive development experience with VS code. You know, I can look at the dashboards. I can monitor what's going on. It feels, I have a terminal, it feels like my laptop, but it's actually running on a large cluster. And I can, with however many GPUs or other resources that I want. And so it's really trying to combine the best of having the familiar experience of programming on your laptop, but with the benefits, you know, being able to take advantage of all the resources in the Cloud to scale. And it's like when, you know, you're talking about cost efficiency. One of the biggest reasons that people waste money, one of the silly reasons for wasting money is just forgetting to turn off your GPUs. And what you can do here is, of course, things will auto terminate if they're idle. But imagine you go to sleep, I have this big cluster. You can turn it off, shut off the cluster, come back tomorrow, restart the Workspace, and you know, your big cluster is back up and all of your code changes are still there. All of your local file edits. It's like you just closed your laptop and came back and opened it up again. And so this is the kind of experience we want to provide for our users. So that's what I wanted to share with you. >> Well, I think that whole, couple of things, lines of code change, single line of code change, that's game changing. And then the cost thing, I mean human error is a big deal. People pass out at their computer. They've been coding all night or they just forget about it. I mean, and then it's just like leaving the lights on or your water running in your house. It's just, at the scale that it is, the numbers will add up. That's a huge deal. So I think, you know, compute back in the old days, there's no compute. Okay, it's just compute sitting there idle. But you know, data cranking the models is doing, that's a big point. >> Another thing I want to add there about cost efficiency is that we make it really easy to use, if you're running on Anyscale, to use spot instances and these preemptable instances that can just be significantly cheaper than the on-demand instances. And so when we see our customers go from what they're doing before to using Anyscale and they go from not using these spot instances 'cause they don't have the infrastructure around it, the fault tolerance to handle the preemption and things like that, to being able to just check a box and use spot instances and save a bunch of money. 
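The demo itself isn't reproduced here, but the scaling mechanics described above, each worker getting one GPU and four CPU cores, and scale being a one-line change, map onto Ray Train's configuration object. The sketch below is a hedged illustration against the open source Ray Train API, not the actual Workspace code; the training loop body and the worker counts are placeholders.

```python
# Hedged sketch of the Ray Train setup described in the walkthrough.
# The per-worker PyTorch / Hugging Face training loop is omitted; only the
# scaling configuration, which carries the "one line change", is shown.
from ray.train import ScalingConfig
from ray.train.torch import TorchTrainer

def train_loop_per_worker():
    # ordinary PyTorch (optionally DeepSpeed / Hugging Face) training code
    # goes here; Ray Train distributes it across the workers
    pass

trainer = TorchTrainer(
    train_loop_per_worker,
    scaling_config=ScalingConfig(
        num_workers=32,    # change to 64 or 128 to scale out
        use_gpu=True,      # flip this (and drop the GPU entry below) to target CPUs
        resources_per_worker={"GPU": 1, "CPU": 4},
    ),
)
result = trainer.fit()
```

The design point is that the distributed concerns live in the scaling configuration, so moving from 32 workers to 64 or 128, or from GPUs to CPUs, does not touch the training loop itself.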
>> You know, this was my whole, my feature article at Reinvent last year when I met with Adam Selipsky, this next gen Cloud is here. I mean, it's not auto scale, it's infrastructure scale. It's agility. It's flexibility. I think this is where the world needs to go. Almost like what DevOps did for Cloud, and what you were showing me in that demo had this whole SRE vibe. And remember Google had site reliability engineers to manage all those servers. This is kind of like an SRE vibe for data at scale. I mean, a similar kind of order of magnitude. I mean, I might be a little bit off base there, but how would you explain it? >> It's a nice analogy. I mean, what we are trying to do here is get to the point where developers don't think about infrastructure. Where developers only think about their application logic. And where businesses can do AI, can succeed with AI, and build these scalable applications, but they don't have to build, you know, an infrastructure team. They don't have to develop that expertise. They don't have to invest years in building their internal machine learning infrastructure. They can just focus on the Python code, on their application logic, and run the stuff out of the box. >> Awesome. Well, I appreciate the time. Before we wrap up here, give a plug for the company. I know you got a couple websites. Again, Ray's got its own website. You got Anyscale. You got an event coming up. Give a plug for the company, looking to hire. Put a plug in for the company. >> Yeah, absolutely. Thank you. So first of all, you know, we think AI is really going to transform every industry and the opportunity is there, right. We can be the infrastructure that enables all of that to happen, that makes it easy for companies to succeed with AI, and get value out of AI. Now, if you're interested in learning more about Ray, Ray has been emerging as the standard way to build scalable applications. Our adoption has been exploding. I mentioned companies like OpenAI using Ray to train their models. But really across the board, companies like Netflix and Cruise and Instacart and Lyft and Uber, you know, just among tech companies. It's across every industry. You know, gaming companies, agriculture, you know, farming, robotics, drug discovery, you know, FinTech, we see it across the board. And all of these companies can get value out of AI, can really use AI to improve their businesses. So if you're interested in learning more about Ray and Anyscale, we have our Ray Summit coming up in September. This is going to highlight a lot of the most impressive use cases and stories across the industry. And if your business, if you want to use LLMs, you want to train these LLMs, these large language models, you want to fine tune them with your data, you want to deploy them, serve them, and build applications and products around them, give us a call, talk to us. You know, we can really take the infrastructure piece, you know, off the critical path and make that easy for you. So that's what I would say. And, you know, like you mentioned, we're hiring across the board, you know, engineering, product, go-to-market, and it's an exciting time. >> Robert Nishihara, co-founder and CEO of Anyscale, congratulations on a great company you've built and continuing to iterate on, and you got growth ahead of you, you got a tailwind. I mean, the AI wave is here. 
I think OpenAI and ChatGPT, a customer of yours, have really opened up the mainstream visibility into this new generation of applications, user interface, role of data, large scale, how to make that programmable, so we're going to need that infrastructure. So thanks for coming on this season three, episode one of the ongoing series of the hot startups. In this case, this episode is the top startups building foundational model infrastructure for AI and ML. I'm John Furrier, your host. Thanks for watching. (upbeat music)
Madhura Maskasky, Platform9 | International Women's Day
(bright upbeat music) >> Hello and welcome to theCUBE's coverage of International Women's Day. I'm your host, John Furrier here in Palo Alto, California Studio and remoting is a great guest CUBE alumni, co-founder, technical co-founder and she's also the VP of Product at Platform9 Systems. It's a company pioneering Kubernetes infrastructure, been doing it for a long, long time. Madhura Maskasky, thanks for coming on theCUBE. Appreciate you. Thanks for coming on. >> Thank you for having me. Always exciting. >> So I always... I love interviewing you for many reasons. One, you're super smart, but also you're a co-founder, a technical co-founder, so entrepreneur, VP of product. It's hard to do startups. (John laughs) Okay, so everyone who started a company knows how hard it is. It really is and the rewarding too when you're successful. So I want to get your thoughts on what's it like being an entrepreneur, women in tech, some things you've done along the way. Let's get started. How did you get into your career in tech and what made you want to start a company? >> Yeah, so , you know, I got into tech long, long before I decided to start a company. And back when I got in tech it was very clear to me as a direction for my career that I'm never going to start a business. I was very explicit about that because my father was an entrepreneur and I'd seen how rough the journey can be. And then my brother was also and is an entrepreneur. And I think with both of them I'd seen the ups and downs and I had decided to myself and shared with my family that I really want a very well-structured sort of job at a large company type of path for my career. I think the tech path, tech was interesting to me, not because I was interested in programming, et cetera at that time, to be honest. When I picked computer science as a major for myself, it was because most of what you would consider, I guess most of the cool students were picking that as a major, let's just say that. And it sounded very interesting and cool. A lot of people were doing it and that was sort of the top, top choice for people and I decided to follow along. But I did discover after I picked computer science as my major, I remember when I started learning C++ the first time when I got exposure to it, it was just like a light bulb clicking in my head. I just absolutely loved the language, the lower level nature, the power of it, and what you can do with it, the algorithms. So I think it ended up being a really good fit for me. >> Yeah, so it clicked for you. You tried it, it was all the cool kids were doing it. I mean, I can relate, I did the same thing. Next big thing is computer science, you got to be in there, got to be smart. And then you get hooked on it. >> Yeah, exactly. >> What was the next level? Did you find any blockers in your way? Obviously male dominated, it must have been a lot of... How many females were in your class? What was the ratio at that time? >> Yeah, so the ratio was was pretty, pretty, I would say bleak when it comes to women to men. I think computer science at that time was still probably better compared to some of the other majors like mechanical engineering where I remember I had one friend, she was the single girl in an entire class of about at least 120, 130 students or so. So ratio was better for us. I think there were maybe 20, 25 girls in our class. It was a large class and maybe the number of men were maybe three X or four X number of women. So relatively better. Yeah. 
>> How about the job when you got into the structured big company? How did that go? >> Yeah, so, you know, I think that was a pretty smooth path I would say after, you know, you graduated from undergrad to grad school and then when I got into Oracle first and VMware, I think both companies had the ratios were still, you know, pretty off. And I think they still are to a very large extent in this industry, but I think this industry in my experience does a fantastic job of, you know, bringing everybody and kind of embracing them and treating them at the same level. That was definitely my experience. And so that makes it very easy for self-confidence, for setting up a path for yourself to thrive. So that was it. >> Okay, so you got an undergraduate degree, okay, in computer science and a master's from Stanford in databases and distributed systems. >> That's right. >> So two degrees. Was that part of your pathway or you just decided, "I want to go right into school?" Did it go right after each other? How did that work out? >> Yeah, so when I went into school, undergrad there was no special major and I didn't quite know if I liked a particular subject or set of subjects or not. Even through grad school, first year it wasn't clear to me, but I think in second year I did start realizing that in general I was a fan of backend systems. I was never a front-end person. The backend distributed systems really were of interest to me because there's a lot of complex problems to solve, and especially databases and large scale distributed systems design in the context of database systems, you know, really started becoming a topic of interest for me. And I think luckily enough at Stanford there were just fantastic professors like Mendel Rosenblum who offered operating system class there, then started VMware and later on I was able to join the company and I took his class while at school and it was one of the most fantastic classes I've ever taken. So they really had and probably I think still do a fantastic curriculum when it comes to distributor systems. And I think that probably helped stoke that interest. >> How do you talk to the younger girls out there in elementary school and through? What's the advice as they start to get into computer science, which is changing and still evolving? There's backend, there's front-end, there's AI, there's data science, there's no code, low code, there's cloud. What's your advice when they say what's the playbook? >> Yeah, so I think two things I always say, and I share this with anybody who's looking to get into computer science or engineering for that matter, right? I think one is that it's, you know, it's important to not worry about what that end specialization's going to be, whether it's AI or databases or backend or front-end. It does naturally evolve and you lend yourself to a path where you will understand, you know, which systems, which aspect you like better. But it's very critical to start with getting the fundamentals well, right? Meaning all of the key coursework around algorithm, systems design, architecture, networking, operating system. I think it is just so crucial to understand those well, even though at times you make question is this ever going to be relevant and useful to me later on in my career? It really does end up helping in ways beyond, you know, you can describe. It makes you a much better engineer. So I think that is the most important aspect of, you know, I would think any engineering stream, but definitely true for computer science. 
Because there's also been a trend more recently, I think, which I'm not a big fan of, of sort of limited-scope learning, which is you decide early on that you're going to be, let's say a front-end engineer, which is fine, you know. Understanding that is great, but I don't think it is ideal to let that limit the scope of your learning when you are in the undergrad phase or grad school. Because later on it comes back to sort of bite you in terms of you not being able to completely understand how the systems work. >> It's a systems kind of thinking. You got to have that mindset of, especially now with cloud, you got distributed systems paradigm going to the edge. You got 5G, Mobile World Congress recently happened, you got now all kinds of IOT devices out there, IP devices at the edge. Distributed computing is only getting more distributed. >> That's right. Yeah, that's exactly right. But the other thing that also happens in computer science is that the abstraction layers keep raising things up and up and up. Where even if you're operating at a language like Java, which you know, during some of my times of programming there was a period when it was popular, it already abstracts you so far away from the underlying system. So it can become very easy, if you're doing, you know, JavaScript or UI programming, that you really have no understanding of what's happening behind the scenes. And I think that can be pretty difficult. >> Yeah. It's easy to lean in and rely too heavily on the abstractions. I want to get your thoughts on blockers. In your career, have you had situations where it's like, "Oh, you're a woman, okay seat at the table, sit on the side." Or maybe people misunderstood your role. How did you deal with that? Did you have any of that? >> Yeah. So, you know, I think... So there's something really kind of personal to me, which I like to share a few times, which I think I believe in pretty strongly. And which is, for me, sort of my personal growth began at a very early phase because of my dad, and he passed away in 2012, but throughout the time when I was growing up, I was his special little girl. And every little thing that I did could be a simple test. You know, not very meaningful, but the genuine pride and pleasure that he felt out of me getting great scores in those tests, sort of et cetera, and that I could see that in him, and then I wanted to please him. And through him, I think I built that confidence in myself that I am good at things and I can do good. And I think that just set the building blocks for me for the rest of my life, right? So, I believe very strongly that, you know, yes, there are occasions of unfair treatment and et cetera, but for the most part, it comes from within. And if you are able to be a confident person who is kind of leveled and understands and believes in your capabilities, then for the most part, the right things happen around you. So, I believe very strongly in that kind of grounding and in finding a source to get that for yourself. And I think that many women suffer from the biggest challenge, which is not having enough self-confidence. And I've even, you know, with everything that I said, I've myself felt that, experienced that a few times. And then there's a methodical way to get around it. There's processes to, you know, explain to yourself that that's actually not true. That's a fake feeling. So, you know, I think that is the most important aspect for women. >> I love that. Get the confidence. Find the source for the confidence. 
We've also been hearing about curiosity and building, you mentioned engineering earlier, love that term. Engineering something, like building something. Curiosity, engineering, confidence. This brings me to my next question for you. What do you think the key skills and qualities are needed to succeed in a technical role? And how do you develop to maintain those skills over time? >> Yeah, so I think that it is so critical that you love that technology that you are part of. It is just so important. I mean, I remember as an example, at one point with one of my buddies before we started Platform9, one of my buddies, he's also a fantastic computer scientists from VMware and he loves video games. And so he said, "Hey, why don't we try to, you know, hack up a video game and see if we can take it somewhere?" And so, it sounded cool to me. And then so we started doing things, but you know, something I realized very quickly is that I as a person, I absolutely hate video games. I've never liked them. I don't think that's ever going to change. And so I was miserable. You know, I was trying to understand what's going on, how to build these systems, but I was not enjoying it. So, I'm glad that I decided to not pursue that. So it is just so important that you enjoy whatever aspect of technology that you decide to associate yourself with. I think that takes away 80, 90% of the work. And then I think it's important to inculcate a level of discipline that you are not going to get sort of... You're not going to get jaded or, you know, continue with happy path when doing the same things over and over again, but you're not necessarily challenging yourself, or pushing yourself, or putting yourself in uncomfortable situation. I think a combination of those typically I think works pretty well in any technical career. >> That's a great advice there. I think trying things when you're younger, or even just for play to understand whether you abandon that path is just as important as finding a good path because at least you know that skews the value in favor of the choices. Kind of like math probability. So, great call out there. So I have to ask you the next question, which is, how do you keep up to date given all the changes? You're in the middle of a world where you've seen personal change in the past 10 years from OpenStack to now. Remember those days when I first interviewed you at OpenStack, I think it was 2012 or something like that. Maybe 10 years ago. So much changed. How do you keep up with technologies in your field and resources that you rely on for personal development? >> Yeah, so I think when it comes to, you know, the field and what we are doing for example, I think one of the most important aspect and you know I am product manager and this is something I insist that all the other product managers in our team also do, is that you have to spend 50% of your time talking to prospects, customers, leads, and through those conversations they do a huge favor to you in that they make you aware of the other things that they're keeping an eye on as long as you're doing the right job of asking the right questions and not just, you know, listening in. So I think that to me ends up being one of the biggest sources where you get tidbits of information, new things, et cetera, and then you pursue. To me, that has worked to be a very effective source. And then the second is, you know, reading and keeping up with all of the publications. 
You guys, you know, create a lot of great material, you interview a lot of people, making sure you are watching those for us you know, and see there's a ton of activities, new projects keeps coming along every few months. So keeping up with that, listening to podcasts around those topics, all of that helps. But I think the first one I think goes in a big way in terms of being aware of what matters to your customers. >> Awesome. Let me ask you a question. What's the most rewarding aspect of your job right now? >> So, I think there are many. So I think I love... I've come to realize that I love, you know, the high that you get out of being an entrepreneur independent of, you know, there's... In terms of success and failure, there's always ups and downs as an entrepreneur, right? But there is this... There's something really alluring about being able to, you know, define, you know, path of your products and in a way that can potentially impact, you know, a number of companies that'll consume your products, employees that work with you. So that is, I think to me, always been the most satisfying path, is what kept me going. I think that is probably first and foremost. And then the projects. You know, there's always new exciting things that we are working on. Even just today, there are certain projects we are working on that I'm super excited about. So I think it's those two things. >> So now we didn't get into how you started. You said you didn't want to do a startup and you got the big company. Your dad, your brother were entrepreneurs. How did you get into it? >> Yeah, so, you know, it was kind of surprising to me as well, but I think I reached a point of VMware after spending about eight years or so where I definitely packed hold and I could have pushed myself by switching to a completely different company or a different organization within VMware. And I was trying all of those paths, interviewed at different companies, et cetera, but nothing felt different enough. And then I think I was very, very fortunate in that my co-founders, Sirish Raghuram, Roopak Parikh, you know, Bich, you've met them, they were kind of all at the same journey in their careers independently at the same time. And so we would all eat lunch together at VMware 'cause we were on the same team and then we just started brainstorming on different ideas during lunchtime. And that's kind of how... And we did that almost for a year. So by the time that the year long period went by, at the end it felt like the most logical, natural next step to leave our job and to, you know, to start off something together. But I think I wouldn't have done that had it not been for my co-founders. >> So you had comfort with the team as you knew each other at VMware, but you were kind of a little early, (laughing) you had a vision. It's kind of playing out now. How do you feel right now as the wave is hitting? Distributed computing, microservices, Kubernetes, I mean, stuff you guys did and were doing. I mean, it didn't play out exactly, but directionally you were right on the line there. How do you feel? >> Yeah. You know, I think that's kind of the challenge and the fun part with the startup journey, right? Which is you can never predict how things are going to go. When we kicked off we thought that OpenStack is going to really take over infrastructure management space and things kind of went differently, but things are going that way now with Kubernetes and distributed infrastructure. 
And so I think it's been interesting and in every path that you take that does end up not being successful teaches you so much more, right? So I think it's been a very interesting journey. >> Yeah, and I think the cloud, certainly AWS hit that growth right at 2013 through '17, kind of sucked all the oxygen out. But now as it reverts back to this abstraction layer essentially makes things look like private clouds, but they're just essentially DevOps. It's cloud operations, kind of the same thing. >> Yeah, absolutely. And then with the edge things are becoming way more distributed where having a single large cloud provider is becoming even less relevant in that space and having kind of the central SaaS based management model, which is what we pioneered, like you said, we were ahead of the game at that time, is becoming sort of the most obvious choice now. >> Now you look back at your role at Stanford, distributed systems, again, they have world class program there, neural networks, you name it. It's really, really awesome. As well as Cal Berkeley, there was in debates with each other, who's better? But that's a separate interview. Now you got the edge, what are some of the distributed computing challenges right now with now the distributed edge coming online, industrial 5G, data? What do you see as some of the key areas to solve from a problem statement standpoint with edge and as cloud goes on-premises to essentially data center at the edge, apps coming over the top AI enabled. What's your take on that? >> Yeah, so I think... And there's different flavors of edge and the one that we focus on is, you know, what we call thick edge, which is you have this problem of managing thousands of as we call it micro data centers, rather than managing maybe few tens or hundreds of large data centers where the problem just completely shifts on its head, right? And I think it is still an unsolved problem today where whether you are a retailer or a telecommunications vendor, et cetera, managing your footprints of tens of thousands of stores as a retailer is solved in a very archaic way today because the tool set, the traditional management tooling that's designed to manage, let's say your data centers is not quite, you know, it gets retrofitted to manage these environments and it's kind of (indistinct), you know, round hole kind of situation. So I think the top most challenges are being able to manage this large footprint of micro data centers in the most effective way, right? Where you have latency solved, you have the issue of a small footprint of resources at thousands of locations, and how do you fit in your containerized or virtualized or other workloads in the most effective way? To have that solved, you know, you need to have the security aspects around these environments. So there's a number of challenges that kind of go hand-in-hand, like what is the most effective storage which, you know, can still be deployed in that compact environment? And then cost becomes a related point. >> Costs are huge 'cause if you move data, you're going to have cost. If you move compute, it's not as much. If you have an operating system concept, is the data and state or stateless? These are huge problems. This is an operating system, don't you think? >> Yeah, yeah, absolutely. 
It's a distributed operating system where it's multiple layers, you know, of ways of solving that problem just in the context of data like you said having an intermediate caching layer so that you know, you still do just in time processing at those edge locations and then send some data back and that's where you can incorporate some AI or other technologies, et cetera. So, you know, just data itself is a multi-layer problem there. >> Well, it's great to have you on this program. Advice final question for you, for the folks watching technical degrees, most people are finding out in elementary school, in middle school, a lot more robotics programs, a lot more tech exposure, you know, not just in Silicon Valley, but all around, you're starting to see that. What's your advice for young girls and people who are getting either coming into the workforce re-skilled as they get enter, it's easy to enter now as they stay in and how do they stay in? What's your advice? >> Yeah, so, you know, I think it's the same goal. I have two little daughters and it's the same principle I try to follow with them, which is I want to give them as much exposure as possible without me having any predefined ideas about what you know, they should pursue. But it's I think that exposure that you need to find for yourself one way or the other, because you really never know. Like, you know, my husband landed into computer science through a very, very meandering path, and then he discovered later in his career that it's the absolute calling for him. It's something he's very good at, right? But so... You know, it's... You know, the reason why he thinks he didn't pick that path early is because he didn't quite have that exposure. So it's that exposure to various things, even things you think that you may not be interested in is the most important aspect. And then things just naturally lend themselves. >> Find your calling, superpower, strengths. Know what you don't want to do. (John chuckles) >> Yeah, exactly. >> Great advice. Thank you so much for coming on and contributing to our program for International Women's Day. Great to see you in this context. We'll see you on theCUBE. We'll talk more about Platform9 when we go KubeCon or some other time. But thank you for sharing your personal perspective and experiences for our audience. Thank you. >> Fantastic. Thanks for having me, John. Always great. >> This is theCUBE's coverage of International Women's Day, I'm John Furrier. We're talking to the leaders in the industry, from developers to the boardroom and everything in between and getting the stories out there making an impact. Thanks for watching. (bright upbeat music)
John Kreisa, Couchbase | MWC Barcelona 2023
>> Narrator: TheCUBE's live coverage is made possible by funding from Dell Technologies, creating technologies that drive human progress. (upbeat music intro) (logo background tingles) >> Hi everybody, welcome back to day three of MWC23, my name is Dave Vellante and we're here live at the Theater of Barcelona, Lisa Martin, David Nicholson, John Furrier's in our studio in Palo Alto. Lot of buzz at the show, the Mobile World Daily Today, front page, Netflix chief hits back in fair share row, Greg Peters, the co-CEO of Netflix, talking about how, "Hey, you guys want to tax us, the telcos want to tax us, well, maybe you should help us pay for some of the content. Your margins are higher, you have a monopoly, you know, we're delivering all this value, you're bundling Netflix in, from a lot of ISPs so hold on, you know, pump the brakes on that tax," so that's the big news. Lockheed Martin, FOSS issues, AI guidelines, says, "AI's not going to take over your job anytime soon." Although I would say, your job's going to be AI-powered for the next five years. We're going to talk about data, we've been talking about the disaggregation of the telco stack, part of that stack is a data layer. John Kreisa is here, the CMO of Couchbase, John, you know, we've talked about all week, the disaggregation of the telco stacks, they got, you know, Silicon and operating systems that are, you know, real time OS, highly reliable, you know, compute infrastructure all the way up through a telemetry stack, et cetera. And that's a proprietary block that's really exploding, it's like the big bang, like we saw in the enterprise 20 years ago and we haven't had much discussion about that data layer, sort of that horizontal data layer, that's the market you play in. You know, Couchbase obviously has a lot of telco customers- >> John: That's right. >> We've seen, you know, Snowflake and others launch telco businesses. What are you seeing when you talk to customers at the show? What are they doing with that data layer? >> Yeah, so they're building applications to drive and power unique experiences for their users, but of course, it all starts with where the data is. So they're building mobile applications where they're stretching it out to the edge and you have to move the data to the edge, you have to have that capability to deliver that highly interactive experience to their customers or for their own internal use cases out to that edge, so seeing a lot of that with Couchbase and with our customers in telco. >> So what do the telcos want to do with data? I mean, they've got the telemetry data- >> John: Yeah. >> Now they frequently complain about the over-the-top providers that have used that data, again like Netflix, to identify customer demand for content and they're mopping that up in a big way, you know, certainly Amazon and shopping Google and ads, you know, they're all using that network. But what do the telcos do today and what do they want to do in the future? They're all talking about monetization, how do they monetize that data? >> Yeah, well, by taking that data, there's insight to be had, right? So by usage patterns and what's happening, just as you said, so they can deliver a better experience. It's all about getting that edge, if you will, on their competition and so taking that data, using it in a smart way, gives them that edge to deliver a better service and then grow their business. 
>> We're seeing a lot of action at the edge and, you know, the edge can be a Home Depot or a Lowe's store, but it also could be the far edge, could be a, you know, an oil drilling, an oil rig, it could be a racetrack, you know, certainly hospitals and certain, you know, situations. So let's think about that edge, where there's maybe not a lot of connectivity, there might be private networks going in, in the future- >> John: That's right. >> Private 5G networks. What's the data flow look like there? Do you guys have any customers doing those types of use cases? >> Yeah, absolutely. >> And what are they doing with the data? >> Yeah, absolutely, we've got customers all across, so telco and transportation, all kinds of service delivery and healthcare, for example, we've got customers who are delivering healthcare out at the edge where they have a remote location, they're able to deliver healthcare, but as you said, there's not always connectivity, so they need to have the applications, need to continue to run and then sync back once they have that connectivity. So it's really having the ability to deliver a service, reliably and then know that that will be synced back to some central server when they have connectivity- >> So the processing might occur where the data- >> Compute at the edge. >> How do you sync back? What is that technology? >> Yeah, so there's, so within, so Couchbase and Couchbase's case, we have an autonomous sync capability that brings it back to the cloud once they get back to whether it's a private network that they want to run over, or if they're doing it over a public, you know, wifi network, once it determines that there's connectivity and, it can be peer-to-peer sync, so different edge apps communicating with each other and then ultimately communicating back to a central server. >> I mean, the other theme here, of course, I call it the software-defined telco, right? But you got to have, you got to run on something, got to have hardware. So you see companies like AWS putting Outposts, out to the edge, Outposts, you know, doesn't really run a lot of database to mind, I mean, it runs RDS, you know, maybe they're going to eventually work with companies like... I mean, you're a partner of AWS- >> John: We are. >> Right? So do you see that kind of cloud infrastructure that's moving to the edge? Do you see that as an opportunity for companies like Couchbase? >> Yeah, we do. We see customers wanting to push more and more of that compute out to the edge and so partnering with AWS gives us that opportunity and we are certified on Outpost and- >> Oh, you are? >> We are, yeah. >> Okay. >> Absolutely. >> When did that, go down? >> That was last year, but probably early last year- >> So I can run Couchbase at the edge, on Outpost? >> Yeah, that's right. >> I mean, you know, Outpost adoption has been slow, we've reported on that, but are you seeing any traction there? Are you seeing any nibbles? >> Starting to see some interest, yeah, absolutely. And again, it has to be for the right use case, but again, for service delivery, things like healthcare and in transportation, you know, they're starting to see where they want to have that compute, be very close to where the actions happen. >> And you can run on, in the data center, right? >> That's right. >> You can run in the cloud, you know, you see HPE with GreenLake, you see Dell with Apex, that's essentially their Outposts. >> Yeah. >> They're saying, "Hey, we're going to take our whole infrastructure and make it as a service." 
>> Yeah, yeah. >> Right? And so you can participate in those environments- >> We do. >> And then so you've got now, you know, we call it supercloud, you've got the on-prem, you've got the, you can run in the public cloud, you can run at the edge and you want that consistent experience- >> That's right. >> You know, from a data layer- >> That's right. >> So is that really the strategy for a data company is taking or should be taking, that horizontal layer across all those use cases? >> You do need to think holistically about it, because you need to be able to deliver as a, you know, as a provider, wherever the customer wants to be able to consume that application. So you do have to think about any of the public clouds or private networks and all the way to the edge. >> What's different John, about the telco business versus the traditional enterprise? >> Well, I mean, there's scale, I mean, one thing they're dealing with, particularly for end user-facing apps, you're dealing at a very very high scale and the expectation that you're going to deliver a very interactive experience. So I'd say one thing in particular that we are focusing on, is making sure we deliver that highly interactive experience but it's the scale of the number of users and customers that they have, and the expectation that your application's always going to work. >> Speaking of applications, I mean, it seems like that's where the innovation is going to come from. We saw yesterday, GSMA announced, I think eight APIs telco APIs, you know, we were talking on theCUBE, one of the analysts was like, "Eight, that's nothing," you know, "What do these guys know about developers?" But you know, as Daniel Royston said, "Eight's better than zero." >> Right? >> So okay, so we're starting there, but the point being, it's all about the apps, that's where the innovation's going to come from- >> That's right. >> So what are you seeing there, in terms of building on top of the data app? >> Right, well you have to provide, I mean, have to provide the APIs and the access because it is really, the rubber meets the road, with the developers and giving them the ability to create those really rich applications where they want and create the experiences and innovate and change the way that they're giving those experiences. >> Yeah, so what's your relationship with developers at Couchbase? >> John: Yeah. >> I mean, talk about that a little bit- >> Yeah, yeah, so we have a great relationship with developers, something we've been investing more and more in, in terms of things like developer relations teams and community, Couchbase started in open source, continue to be based on open source projects and of course, those are very developer centric. So we provide all the consistent APIs for developers to create those applications, whether it's something on Couchbase Lite, which is our kind of edge-based database, or how they can sync that data back and we actually automate a lot of that syncing which is a very difficult developer task which lends them to one of the developer- >> What I'm trying to figure out is, what's the telco developer look like? Is that a developer that comes from the enterprise and somebody comes from the blockchain world, or AI or, you know, there really doesn't seem to be a lot of developer talk here, but there's a huge opportunity. >> Yeah, yeah. 
>> And, you know, I feel like, the telcos kind of remind me of, you know, a traditional legacy company trying to get into the developer world, you know, even Oracle, okay, they bought Sun, they got Java, so I guess they have developers, but you know, IBM for years tried with Bluemix, they had to end up buying Red Hat, really, and that gave them the developer community. >> Yep. >> EMC used to have a thing called EMC Code, which was a, you know, good effort, but eh. And then, you know, VMware always trying to do that, but, so as you move up the stack obviously, you have greater developer affinity. Where do you think the telco developer's going to come from? How's that going to evolve? >> Yeah, it's interesting, and I think they're... To kind of get to your first question, I think they're fairly traditional enterprise developers and when we break that down, we look at it in terms of what the developer persona is, are they a front-end developer? Like they're writing that front-end app, they don't care so much about the infrastructure behind or are they a full stack developer and they're really involved in the entire application development lifecycle? Or are they living at the backend and they're really wanting to just focus in on that data layer? So we lend towards all of those different personas and we think about them in terms of the APIs that we create, so that's really what the developers are for telcos is, there's a combination of those front-end and full stack developers and so for them to continue to innovate they need to appeal to those developers and that's technology, like Couchbase, is what helps them do that. >> Yeah and you think about the Apples, you know, the app store model or Apple sort of says, "Okay, here's a developer kit, go create." >> John: Yeah. >> "And then if it's successful, you're going to be successful and we're going to take a vig," okay, good model. >> John: Yeah. >> I think I'm hearing, and maybe I misunderstood this, but I think it was the CEO or chairman of Ericsson on the day one keynotes, was saying, "We are going to monetize the, essentially the telemetry data, you know, through APIs, we're going to charge for that," you know, maybe that's not the best approach, I don't know, I think there's got to be some innovation on top. >> John: Yeah. >> Now maybe some of these greenfield telcos are going to do like, you take like a dish networks, what they're doing, they're really trying to drive development layers. So I think it's like this wild west open, you know, community that's got to be formed and right now it's very unclear to me, do you have any insights there? >> I think it is more, like you said, Wild West, I think there's no emerging standard per se for across those different company types and sort of different pieces of the industry. So consequently, it does need to form some more standards in order to really help it grow and I think you're right, you have to have the right APIs and the right access in order to properly monetize, you have to attract those developers or you're not going to be able to monetize properly. >> Do you think that if, in thinking about your business and you know, you've always sold to telcos, but now it's like there's this transformation going on in telcos, will that become an increasingly larger piece of your business or maybe even a more important piece of your business? Or it's kind of be steady state because it's such a slow moving industry? 
>> No, it is a big and increasing piece of our business, I think telcos, like other enterprises, want to continue to innovate, and so they look to, you know, technologies like Couchbase's document database that allows them to have more flexibility and deliver the speed that they need to deliver those kinds of applications. So we see a lot of migration off of traditional legacy infrastructure in order to build that new age interface and new age experience that they want to deliver. >> A lot of buzz in Silicon Valley about OpenAI and Chat GPT- >> Yeah. >> You know, what's your take on all that? >> Yeah, we're looking at it, I think it's exciting technology, I think there's a lot of applications that kind of, a little, sort of innovate on traditional interfaces, so for example, you can train Chat GPT to create code, sample code for Couchbase, right? You can go and get it to give you that sample app which gets you a head start, or you can actually get it to do a better job of, you know, sorting through your documentation, like Chat GPT can do a better job of helping you get access. So it improves the experience overall for developers, so we're excited about, you know, what the prospect of that is. >> So you're playing around with it, like everybody is- >> Yeah. >> And potentially- >> Looking at use cases- >> Ways to integrate, yeah. >> Hundred percent. >> So are we. John, thanks for coming on theCUBE. Always great to see you, my friend. >> Great, thanks very much. >> All right, you're welcome. All right, keep it right there, theCUBE will be back live from Barcelona at the theater. SiliconANGLE's continuous coverage of MWC23. Go to siliconangle.com for all the news, theCUBE.net is where all the videos are, keep it right there. (cheerful upbeat music outro)
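As a rough illustration of the developer-facing API layer discussed earlier in this segment, a minimal document read and write with Couchbase's Python SDK might look like the sketch below. This is an assumption-laden example, not anything shown in the interview: the endpoint, credentials, bucket name, and document are made up, and the mobile and edge sync pieces (Couchbase Lite and its replication) use separate SDKs that aren't shown here.

```python
# Hedged sketch of basic document operations with the Couchbase Python SDK
# (assumes `pip install couchbase` and a reachable Couchbase Server cluster).
from couchbase.auth import PasswordAuthenticator
from couchbase.cluster import Cluster
from couchbase.options import ClusterOptions

cluster = Cluster(
    "couchbase://db.example.com",  # hypothetical endpoint
    ClusterOptions(PasswordAuthenticator("app_user", "app_password")),
)
collection = cluster.bucket("telemetry").default_collection()

# Store and read back a JSON document by key.
collection.upsert("device::1001", {"status": "online", "latency_ms": 12})
print(collection.get("device::1001").content_as[dict])
```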
Dave Duggal, EnterpriseWeb & Azhar Sayeed, Red Hat | MWC Barcelona 2023
>> theCUBE's live coverage is made possible by funding from Dell Technologies. Creating technologies that drive human progress. (ambient music) >> Lisa: Hey everyone, welcome back to Barcelona, Spain. It's theCUBE Live at MWC 23. Lisa Martin with Dave Vellante. This is day two of four days of cube coverage, but you know that, because you've already been watching yesterday and today. We're going to have a great conversation next with EnterpriseWeb and Red Hat. We've had great conversations the last day and a half about the Telco industry, the challenges, the opportunities. We're going to unpack that from this lens. Please welcome Dave Duggal, founder and CEO of EnterpriseWeb, and Azhar Sayeed is here, Senior Director Solution Architecture at Red Hat. >> Guys, it's great to have you on the program. >> Yes. >> Thank you Lisa, >> Great being here with you. >> Dave, let's go ahead and start with you. Give the audience an overview of EnterpriseWeb. What kind of business is it? What's the business model? What do you guys do? >> Okay so, EnterpriseWeb is reinventing middleware, right? So the historic middleware was to build vertically integrated stacks, right? And those stacks are now becoming the rate limiters for interoperability, for the end-to-end solutions that everybody's looking for, right? Red Hat's talking about the unified platform. You guys are talking about Supercloud. EnterpriseWeb addresses that: we've built middleware based on serverless architecture, so lightweight, low latency, high performance middleware. And we're working with the world's biggest, we sell through channels and we work through partners like Red Hat, Intel, Fortnet, Keysight, Tech Mahindra. So working with some of the biggest players that have recognized the value of our innovation, to deliver transformation to the Telecom industry. >> So what are you guys doing together? Is this, is this an OpenShift play? >> Is it? >> Yeah. >> Yeah, so we've got two projects right here on the floor at MWC throughout the various partners, where EnterpriseWeb is actually providing an application layer, sorry, application middleware over Red Hat's OpenShift, and we're essentially generating operators, so Red Hat operators, so that all the vendors that we onboard into our catalog can be deployed easily through the OpenShift platform. And we allow those, those vendors to be flexibly composed into network services. So the real challenge for operators historically is that they, they have challenges onboarding the vendors. It takes a long time. Each one of them is a snowflake. They, you know, even though there's standards, they don't all observe or follow the same standards. So we make it easier using models, right? In a model-driven process to onboard or streamline that onboarding process, compose functions into services, deploy those services seamlessly through Red Hat's OpenShift, and then manage the, the lifecycle, like the quality of service and the SLAs for those services. >> So Red Hat obviously has a pretty prominent Telco business, has for a while. Red Hat OpenStack actually is pretty popular within the Telco business. People thought, "Oh, OpenStack, that's dead." Actually, no, it's actually doing quite well. We see it all over the place where for whatever reason people want to build their own cloud. 
And so, what's happening in the industry? Because you have the traditional Telcos, we heard in the keynotes that kind of typical narrative about, you know, we can't let the over-the-top vendors do this again. We're going to API-ify everything, we're going to monetize this time around, not just with connectivity, but the fact is they really don't have a developer community. >> Yes. >> Yet, anyway. >> Then you have these disruptors over here that are saying "Yeah, we're going to enable ISVs." How do you see it? What's the landscape look like? Help us understand, you know, what the horses on the track are doing. >> Sure. I think what has happened, Dave, is that the conversation has moved a little bit from where they were just looking at IaaS, infrastructure as a service, with virtual machines and OpenStack, as you mentioned, to how do we move up the value chain and look at different applications. And therein comes the rub, right? You have applications with different requirements, IT and network applications that have various different requirements that are there. So as you start to build those cloud platforms, as you start to modernize those sets of applications, you then start to look at microservices and how you build them. You need the ability to orchestrate them. So some of those problem statements have moved from not just refactoring those applications, but actually now to how do you reliably deploy and manage in a multicloud, multi-cluster way. So this conversation around Supercloud or this conversation around multicloud is very- >> You could say Supercloud. That's okay. >> (Dave Duggal and Azhar laugh) >> It's absolutely very real though. The reason why it's very real is, if you look at transformations around Telco, there are two things that are happening. One, Telco IT, they're looking at partnerships with hybrid cloud, I mean with public cloud players to build a hybrid environment. They're also building their own Telco Cloud environment for their network functions. Now, in both of those spaces, they end up operating two to three different environments themselves. Now how do you create a level of abstraction across those? How do you manage that particular infrastructure? And then how do you orchestrate all of those different workloads? Those are the type of problems that they're actually beginning to solve. So they've moved on from really just virtualizing their applications and putting them on OpenStack to now really seriously looking at "How do I build a service?" "How do I leverage the catalog that's available both in my private and public clouds and build an overall service process?" >> And by the way, what you just described as hybrid cloud and multicloud is, you know, Supercloud is what multicloud should have been. And what it originally became is "I run on this cloud and I run on this cloud" and "I run on this cloud and I have a hybrid." And Supercloud is meant to create a common experience across those clouds. >> Dave Duggal: Right? >> Thanks to, you know, Supercloud middleware. >> Yeah. >> Right? And so that's what you guys do. >> Yeah, exactly. Exactly. Dave, I mean, even the name EnterpriseWeb, you know, we started from looking from the application layer down. If you look at it, the last 10 years we've looked from the infrastructure up, right? And now everybody's looking northbound saying "You know what, actually, if I look from the infrastructure up, the only thing I'll ever build is silos, right?"
And those silos get in the way of the interoperability and the agility the businesses want. So we take the perspective of high-level abstractions, common tools, so that if I'm a CXO, I can look down on my environments, right? Honestly, if I'm a CEO or a CXO, I don't really care so much about my infrastructure, to be honest. I care about my applications and their behavior. I care about my SLAs and my quality of service, right? Those are the things I care about. So I really want an EnterpriseWeb, right? Something that helps me connect all my distributed applications all across all of the environments, so I can have one place, a consistency layer that speaks a common language. We know that there's a lot of heterogeneity down all those layers and a lot of complexity down those layers. But the business doesn't care. They don't want to care, right? They want to actually take their applications, deploy them where they're the most performant, where they're getting the best cost, right? The lowest, and maybe sustainability concerns, all those. They want to address those problems, meet their SLAs, meet their quality of service. And you know what, if it's running on Amazon, great. If it's running on Google Cloud Platform, great. You know, one project that we're demonstrating here is with Amazon, Tech Mahindra and OpenShift, where we took a disaggregated 5G core, right? So this is like sort of the latest telecom, you know, networking software, right? We're deploying, pulling elements of that network across the core, across Amazon EKS, OpenShift on Red Hat ROSA, as well as just OpenShift for cloud. And we, through a single pane of deployment and management, we deployed the elements of the 5G core across them and then connected them in an end-to-end process. That's Telco Supercloud. >> Dave Vellante: So that's an O-RAN deployment. >> Yeah, that's- >> So the big advantage of that, pardon me, Dave, but the big advantage of that is the customer really doesn't care where the components are being served from; for them, it's a 5G capability. It happens to sit in different locations. And it's about how do you abstract and how do you manage all those different workloads in a cohesive way? And that's exactly what EnterpriseWeb is bringing to the table. And what we do is we abstract the underlying infrastructure, which is the cloud layer. Because the AWS operating environment is different from the private cloud operating environment and the Azure environment, the way the networking is set up is different in each one of them. If there is a way you can abstract all of that and present it in a common operating model, it becomes a lot easier then for anybody to be able to consume. >> And what a lot of customers tell me is the way they deal with multicloud complexity is they go with mono cloud, right? And so they'll lose out on some of the best services >> Absolutely >> If best of breed, so that's not- >> That's not ideal, but at the end of the day, agreed, developers don't want to muck with all the plumbing. >> Dave Duggal: Yep. >> They want to write code. >> Azhar: Correct. >> So I come back to: are the traditional Telcos leaning in in a way that they're going to enable ISVs and developers to write on top of those platforms? Or are there sort of new entrants and disruptors? And I know, I know the answer is both. >> Dave Duggal: Yep.
>> But I feel as though the Telcos still haven't, the traditional Telcos haven't tuned in to that developer affinity, but you guys sell to them. >> What are you seeing? >> Yeah, so what we have seen is that Telcos fall into several categories there. If you look at the most mature ones, you know, they are very eager to move up the value chain. There are some smaller, very nimble ones that are actually doing something really interesting. For example, they've provided sandbox environments to developers to say "Go develop your applications in the sandbox environment. We'll use that to build a network service with you." I can give you some interesting examples across the globe where that is happening, right? In AsiaPac, particularly in Australia, the ANZ region, there are a couple of providers who have done this, but in a very interesting way. But the challenge for them, why it's not completely open or public yet, is primarily because they haven't figured out how to exactly monetize that. And that's the reason why. So in the absence of that, what will happen is they have to rely on the ISV ecosystem to be able to build those capabilities, which they can then bring on as part of the catalog. But in Latin America, I was talking to one of the providers and they said, "Well look, we have a public cloud, we have our own public cloud, right? What we want to do is use that to offer localized services, not just bring everything in from the top." >> But we heard from Ericsson's CEO they're basically going to monetize it by what I call "gouging" the developers >> (Azhar laughs) >> for access to the network telemetry, as opposed to saying, "Hey, here's an open platform, develop on top of it and it will maybe create something like an app store and we'll take a piece of the action." >> So ours- >> Seems to me to be a better model. >> Yeah. So that's perfect. Our second project that we're showing here is with Intel, right? So Intel came to us 'cause they have a reputation for doing advanced automation solutions. They gave us carte blanche in their labs. So this is Intel Network Builders; they said pick your partners. And we went with Red Hat, Fortinet, Keysight, this company KX doing AI/ML. But to address your DevX point here, Intel explicitly wants to get closer to the developers by exposing their APIs, open APIs over their infrastructure. Just like Red Hat has APIs, right? And so they can expose them northbound to developers so developers can leverage and tune their applications, right? But the challenge there is what Intel is doing at the low-level network infrastructure, right? Is fundamentally complex, right? What you want is an abstraction layer where, and this gets to your point, Dave, where you just said like "The developers just want to get their job done," or really they want to focus on the business logic and accelerate that service delivery, right? So the idea here is, in EnterpriseWeb they can literally declaratively compose their services, express their intent. "I want this to run optimized for low latency. I want this to run optimized for energy consumption." Right? And that's all they say, right? That's a very high-level statement. And then the runtime translates it between all the elements that are participating in that service to realize the developer's intent, right? No hands, right? Zero touch, right? So that's now a movement in telecom. So you're right, it's taking a while because these are pretty fundamental shifts, right?
But it's intent-based networking, right? So it's almost two parts, right? One is you have to have the open APIs, right? So the infrastructure has to expose its capabilities. Then you need abstractions over the top that make it simple for developers to, you know, make use of them. >> See, one of the demonstrations we are doing is around AIOps. And I've had, literally here on this floor, two conversations around what I call network as a platform. Although it sounds like a cliche term, that's exactly what Dave was describing in terms of exposing APIs from the infrastructure and utilizing them. So once you get that data, now you can do analytics and do machine learning to be able to build models and figure out how you can orchestrate better, how you can monetize better, how you can utilize better, right? So all of those things become important. It's not just about internal optimization, but it's also about how do you expose it to the third-party ecosystem to translate that into better delivery mechanisms or IoT capability and so on. >> But if they're going to charge me for every API call in the network, I'm going to go broke (team laughs) >> And I'm going to get really pissed. I mean, I feel like I'm just running down the list, Oracle, IBM tried it. Oracle, okay, they got Java, but they don't have the developer chops. VMware, okay? They got Aria. EMC used to have a thing called Code. IBM had to buy Red Hat to get to the developer community. (Lisa laughs) >> So I feel like the telcos don't today have those developer shops. So they have to partner. >> Azhar: Yes. >> With guys like you, and then be more open and let a zillion flowers bloom, or else they're going to get disrupted in a big way, and it's going to be a repeat of the over-the-top, in a different model that I can't predict. >> Yeah. >> Absolutely true. I mean, look, they cannot be in the connectivity business. Telcos cannot be just in the connectivity business. It's, I think so, you know- >> Dave Vellante: You'd have to pry a frozen hand (Dave Duggal laughs) >> off that, you know. >> Well, you know, think about it, they almost have to go become over the top on themselves, right? That's what the cloud guys are doing, right? >> Yeah. >> They're riding over their backbone; by creating a high-level abstraction, they in turn abstract away the infrastructure underneath them, right? And that's really the end game >> Right? >> Dave Vellante: Yeah. >> Is because now >> they're over the top, it's their network, it's their infrastructure, right? They don't want to become bit pipes. >> Yep. >> Now they can take OpenShift, run that in any cloud. >> Yep. >> Right? >> You can run that in hybrid cloud, EnterpriseWeb can do the application layer configuration and management. And together we're running, you know, OSI layers one through seven, east to west, north to south. We're running across the RAN, the core and the transport. And that is Telco Supercloud, my friend. >> Yeah. Well- >> (Dave Duggal laughs) >> I'm dominating the conversation 'cause I love talking Supercloud. >> I knew you would. >> So speaking of super, superpowers, when you're in customer or prospective customer conversations with providers, and obviously they're in this transformative state right now, what do you describe as the superpower between Red Hat and EnterpriseWeb in terms of really helping these Telcos transform?
But at the end of the day, the connectivity's there, the end user gets what they want, which is, I want this to work wherever I am. >> Yeah, yeah. That's a great question, Lisa. So I think the way you could look at it is most software has evolved to be specialized, right? So in Telcos it's no different, right? We have this in the enterprise, right? All these specialized stacks, all these components that they wire together. And you can think of Telco as sort of a superset of enterprise problems, right? They have all those problems, like, magnified manyfold, right? And so you have specialized, let's say, orchestrators and other tools for every Telco domain, for every Telco layer. Now you have a zoo of orchestrators, right? None of them were designed to work together, right? They all speak a specific language, let's say, quote unquote, for doing a specific purpose. But everything that's interesting in the 21st century is across layers and across domains, right? Siloed, static applications, those are dead, right? Nobody's doing those anymore. Even developers don't do those; developers are doing composition today. They're not doing, nobody wants to hear about 6 million lines of code, right? They want to hear, "How did you take these five things and bring 'em together for productive use?" >> Lisa: Right. How did you deliver faster for my enterprise? How did you save me money? How did you create business value? And that's what we're doing together. >> I mean, just to add on to Dave, I was talking to one of the providers, they have more than 30,000 nodes in their infrastructure. When I say nodes, I mean your servers running, you know, Kubernetes, running OpenStack, running different components. If you try managing that as one single entity, if you will? Not possible. You got to fragment, you got to segment in some way. Now the question is, if you are not exposing that particular infrastructure and the appropriate KPIs and appropriate things, you will not be able to efficiently utilize that across the board. So you need almost a construct that creates like a manager of managers, a hierarchical structure, which would allow you to be more intelligent in terms of how you place those, how you manage that. And so when you ask the question about what's the secret sauce between the two, well this is exactly where EnterpriseWeb brings in that capability to analyze information, be more intelligent about it. And what we do is provide an abstraction of the cloud layer so that they can, you know, then do the right job in terms of making sure that it's appropriate and it's consistent. >> Consistency is key. Guys, thank you so much. It's been a pleasure really digging through EnterpriseWeb. >> Thank you. >> What you're doing with Red Hat, how you're helping the organization transform, and Supercloud, we can't forget Supercloud. (Dave Vellante laughs) >> Right, Supercloud. Guys, thank you so much for your time. >> Thank you so much, Lisa. >> Thank you. >> Thank you guys. >> Very nice. >> Lisa: We really appreciate it. >> For our guests and for Dave Vellante, I'm Lisa Martin. You're watching theCUBE, the leader in live tech coverage, coming to you live from MWC 23. We'll be back after a short break.
Chris Jones, Platform9 | Finding your "Just Right” path to Cloud Native
(upbeat music) >> Hi everyone. Welcome back to this Cube conversation here in Palo Alto, California. I'm John Furrier, host of "theCUBE." Got a great conversation around Cloud Native, Cloud Native Journey, how enterprises are looking at Cloud Native and putting it all together. And it comes down to operations, developer productivity, and security. It's the hottest topic in technology. We got Chris Jones here in the studio, director of Product Management for Platform9. Chris, thanks for coming in. >> Hey, thanks. >> So when we always chat about, when we're at KubeCon. KubeConEU is coming up and in a few, in a few months, the number one conversation is developer productivity. And the developers are driving all the standards. It's interesting to see how they just throw everything out there and whatever gets adopted ends up becoming the standard, not the old school way of kind of getting stuff done. So that's cool. Security Kubernetes and Containers are all kind of now that next level. So you're starting to see the early adopters moving to the mainstream. Enterprises, a variety of different approaches. You guys are at the center of this. We've had a couple conversations with your CEO and your tech team over there. What are you seeing? You're building the products. What's the core product focus right now for Platform9? What are you guys aiming for? >> The core is that blend of enabling your infrastructure and PlatformOps or DevOps teams to be able to go fast and run in a stable environment, but at the same time enable developers. We don't want people going back to what I've been calling Shadow IT 2.0. It's, hey, I've been told to do something. I kicked off this Container initiative. I need to run my software somewhere. I'm just going to go figure it out. We want to keep those people productive. At the same time we want to enable velocity for our operations teams, be it PlatformOps or DevOps. >> Take us through in your mind and how you see the industry rolling out this Cloud Native journey. Where do you see customers out there? Because DevOps have been around, DevSecOps is rocking, you're seeing AI, hot trend now. Developers are still in charge. Is there a change to the infrastructure of how developers get their coding done and the infrastructure, setting up the DevOps is key, but when you add the Cloud Native journey for an enterprise, what changes? What is the, what is the, I guess what is the Cloud Native journey for an enterprise these days? >> The Cloud Native journey or the change? When- >> Let's start with the, let's start with what they want to do. What's the goal and then how does that happen? >> I think the goal is that promise land. Increased resiliency, better scalability, and overall reduced costs. I've gone from physical to virtual that gave me a higher level of density, packing of resources. I'm moving to Containers. I'm removing that OS layer again. I'm getting a better density again, but all of a sudden I'm running Kubernetes. What does that, what does that fundamentally do to my operations? Does it magically give me scalability and resiliency? Or do I need to change what I'm running and how it's running so it fits that infrastructure? And that's the reality, is you can't just take a Container and drop it into Kubernetes and say, hey, I'm now Cloud Native. I've got reduced cost, or I've got better resiliency. There's things that your engineering teams need to do to make sure that application is a Cloud Native. 
And then there's what I think is one of the largest shifts of virtual machines to containers. When I was in the world of application performance monitoring, we would see customers saying, well, my engineering team have this Java app, and they said it needs a VM with 12 gig of RAM and eight cores, and that's what we gave it. But it's running slow. I'm working with the application team and you can see it's running slow. And they're like, well, it's got all of its resources. One of those nice features of virtualization is over provisioning. So the infrastructure team would say, well, we gave it, we gave it all a RAM it needed. And what's wrong with that being over provisioned? It's like, well, Java expects that RAM to be there. Now all of a sudden, when you move to the world of containers, what we've got is that's not a set resource limit, really is like it used to be in a VM, right? When you set it for a container, your application teams really need to be paying attention to your resource limits and constraints within the world of Kubernetes. So instead of just being able to say, hey, I'm throwing over the fence and now it's just going to run on a VM, and that VMs got everything it needs. It's now really running on more, much more of a shared infrastructure where limits and constraints are going to impact the neighbors. They are going to impact who's making that decision around resourcing. Because that Kubernetes concept of over provisioning and the virtualization concept of over provisioning are not the same. So when I look at this problem, it's like, well, what changed? Well, I'll do my scale tests as an application developer and tester, and I'd see what resources it needs. I asked for that in the VM, that sets the high watermark, job's done. Well, Kubernetes, it's no longer a VM, it's a Kubernetes manifest. And well, who owns that? Who's writing it? Who's setting those limits? To me, that should be the application team. But then when it goes into operations world, they're like, well, that's now us. Can we change those? So it's that amalgamation of the two that is saying, I'm a developer. I used to pay attention, but now I need to pay attention. And an infrastructure person saying, I used to just give 'em what they wanted, but now I really need to know what they've wanted, because it's going to potentially have a catastrophic impact on what I'm running. >> So what's the impact for the developer? Because, infrastructure's code is what everybody wants. The developer just wants to get the code going and they got to pay attention to all these things, or don't they? Is that where you guys come in? How do you guys see the problem? Actually scope the problem that you guys solve? 'Cause I think you're getting at I think the core issue here, which is, I've got Kubernetes, I've got containers, I've got developer productivity that I want to focus on. What's the problem that you guys solve? >> Platform operation teams that are adopting Cloud Native in their environment, they've got that steep learning curve of Kubernetes plus this fundamental change of how an app runs. What we're doing is taking away the burden of needing to operate and run Kubernetes and giving them the choice of the flexibility of infrastructure and location. Be that an air gap environment like a, let's say a telco provider that needs to run a containerized network function and containerized workloads for 5G. 
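Picking up the point Chris made a moment ago about resource limits: in Kubernetes it is the manifest, not the hypervisor, that says what a workload gets, and the application team has to own those numbers. A minimal sketch of what that looks like is below; the service name, image and sizes are illustrative assumptions, not anything from the interview:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: orders-api                  # hypothetical Java service
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: orders-api
      template:
        metadata:
          labels:
            app: orders-api
        spec:
          containers:
          - name: orders-api
            image: registry.example.com/orders-api:1.4.2   # placeholder image
            resources:
              requests:                 # what the scheduler reserves on a node
                cpu: "1"
                memory: 2Gi
              limits:                   # hard ceiling; exceeding memory gets the container OOM-killed
                cpu: "2"
                memory: 2Gi
            env:
            - name: JAVA_TOOL_OPTIONS   # size the JVM heap to the container limit rather than the node
              value: "-XX:MaxRAMPercentage=75.0"

Unlike an over-provisioned VM, those limits are enforced per container and protect the neighbours on the same node, which is why the scale-test numbers the application team produces have to end up in this file rather than in a ticket to the infrastructure team.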
That's one thing that we can deploy and achieve in a completely inaccessible environment all the way through to Platform9 running traditionally as SaaS, as we were born, that's remotely managing and controlling your Kubernetes environments on-premise AWS. That hybrid cloud experience that could be also Bare Metal, but it's our platform running your environments with our support there, 24 by seven, that's proactively reaching out. So it's removing a lot of that burden and the complications that come along with operating the environment and standing it up, which means all of a sudden your DevOps and platform operations teams can go and work with your engineers and application developers and say, hey, let's get, let's focus on the stuff that, that we need to be focused on, which is running our business and providing a service to our customers. Not figuring out how to upgrade a Kubernetes cluster, add new nodes, and configure all of the low level. >> I mean there are, that's operations that just needs to work. And sounds like as they get into the Cloud Native kind of ops, there's a lot of stuff that kind of goes wrong. Or you go, oops, what do we buy into? Because the CIOs, let's go, let's go Cloud Native. We want to, we got to get set up for the future. We're going to be Cloud Native, not just lift and shift and we're going to actually build it out right. Okay, that sounds good. And when we have to actually get done. >> Chris: Yeah. >> You got to spin things up and stand up the infrastructure. What specifically use case do you guys see that emerges for Platform9 when people call you up and you go talk to customers and prospects? What's the one thing or use case or cases that you guys see that you guys solve the best? >> So I think one of the, one of the, I guess new use cases that are coming up now, everyone's talking about economic pressures. I think the, the tap blows open, just get it done. CIO is saying let's modernize, let's use the cloud. Now all of a sudden they're recognizing, well wait, we're spending a lot of money now. We've opened that tap all the way, what do we do? So now they're looking at ways to control that spend. So we're seeing that as a big emerging trend. What we're also sort of seeing is people looking at their data centers and saying, well, I've got this huge legacy environment that's running a hypervisor. It's running VMs. Can we still actually do what we need to do? Can we modernize? Can we start this Cloud Native journey without leaving our data centers, our co-locations? Or if I do want to reduce costs, is that that thing that says maybe I'm repatriating or doing a reverse migration? Do I have to go back to my data center or are there other alternatives? And we're seeing that trend a lot. And our roadmap and what we have in the product today was specifically built to handle those, those occurrences. So we brought in KubeVirt in terms of virtualization. We have a long legacy doing OpenStack and private clouds. And we've worked with a lot of those users and customers that we have and asked the questions, what's important? And today, when we look at the world of Cloud Native, you can run virtualization within Kubernetes. So you can, instead of running two separate platforms, you can have one. So all of a sudden, if you're looking to modernize, you can start on that new infrastructure stack that can run anywhere, Kubernetes, and you can start bringing VMs over there as you are containerizing at the same time. 
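For readers who want to picture the consolidation Chris describes, running a virtual machine inside the same Kubernetes cluster as your containers is done with a KubeVirt VirtualMachine object along these lines. This is a rough sketch only; the name, sizing and disk image are assumptions, and exact fields vary by KubeVirt version:

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: legacy-billing-vm           # hypothetical lift-and-shift workload
    spec:
      running: true
      template:
        spec:
          domain:
            cpu:
              cores: 2
            resources:
              requests:
                memory: 4Gi
            devices:
              disks:
              - name: rootdisk
                disk:
                  bus: virtio
          volumes:
          - name: rootdisk
            containerDisk:
              image: quay.io/containerdisks/centos-stream:9   # example container-disk image

The VM is then scheduled, observed and lifecycle-managed by the same control plane as the containerized services around it, which is the "two platforms into one" argument.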
So now you can keep your application operations in one environment. And this also helps if you're trying to reduce costs. If you really are saying, we put that Dev environment in AWS, we've got a huge amount of velocity out of it now, can we do that elsewhere? Is there a co-location we can go to? Is there a provider that we can go to where we can run that infrastructure or run the Kubernetes, but not have to run the infrastructure? >> It's going to be interesting too, when you see the Edge come online, you start, we've got Mobile World Congress coming up, KubeCon events we're going to be at, the conversation is not just about public cloud. And you guys obviously solve a lot of do-it-yourself implementation hassles that emerge when people try to kind of stand up their own environment. And we hear from developers consistency between code, managing new updates, making sure everything is all solid so they can go fast. That's the goal. And that, and then people can get standardized on that. But as you get public cloud and do it yourself, kind of brings up like, okay, there's some gaps there as the architecture changes to be more distributed computing, Edge, on-premises cloud, it's cloud operations. So that's cool for DevOps and Cloud Native. How do you guys differentiate from say, some the public cloud opportunities and the folks who are doing it themselves? How do you guys fit in that world and what's the pitch or what's the story? >> The fit that we look at is that third alternative. Let's get your team focused on what's high value to your business and let us deliver that public cloud experience on your infrastructure or in the public cloud, which gives you that ability to still be flexible if you want to make choices to run consistently for your developers in two different locations. So as I touched on earlier, instead of saying go figure out Kubernetes, how do you upgrade a hundred worker nodes in place upgrade. We've solved that problem. That's what we do every single day of the week. Don't go and try to figure out how to upgrade a cluster and then upgrade all of the, what I call Kubernetes friends, your core DNSs, your metrics server, your Kubernetes dashboard. These are all things that we package, we test, we version. So when you click upgrade, we've already handled that entire process. So it's saying don't have your team focused on that lower level piece of work. Get them focused on what is important, which is your business services. >> Yeah, the infrastructure and getting that stood up. I mean, I think the thing that's interesting, if you look at the market right now, you mentioned cost savings and recovery, obviously kind of a recession. I mean, people are tightening their belts for sure. I don't think the digital transformation and Cloud Native spend is going to plummet. It's going to probably be on hold and be squeezed a little bit. But to your point, people are refactoring looking at how to get the best out of what they got. It's not just open the tap of spend the cash like it used to be. Yeah, a couple months, even a couple years ago. So okay, I get that. But then you look at the what's coming, AI. You're seeing all the new data infrastructure that's coming. The containers, Kubernetes stuff, got to get stood up pretty quickly and it's got to be reliable. So to your point, the teams need to get done with this and move on to the next thing. >> Chris: Yeah, yeah, yeah. >> 'Cause there's more coming. 
I mean, there's a lot coming for the apps that are building in Data Native, AI-Native, Cloud Native. So it seems that this Kubernetes thing needs to get solved. Is that kind of what you guys are focused on right now? >> So, I mean to use a customer, we have a customer that's in AI/ML and they run their platform at customer sites and that's hardware bound. You can't run AI machine learning on anything anywhere. Well, with Platform9 they can. So we're enabling them to deliver services into their customers that's running their AI/ML platform in their customer's data centers anywhere in the world on hardware that is purpose-built for running that workload. They're not Kubernetes experts. That's what we are. We're bringing them that ability to focus on what's important and just delivering their business services whilst they're enabling our team. And our 24 by seven proactive management are always on assurance to keep that up and running for them. So when something goes bump at the night at 2:00am, our guys get woken up. They're the ones that are reaching out to the customer saying, your environments have a problem, we're taking these actions to fix it. Obviously sometimes, especially if it is running on Bare Metal, there's things you can't do remotely. So you might need someone to go and do that. But even when that happens, you're not by yourself. You're not sitting there like I did when I worked for a bank in one of my first jobs, three o'clock in the morning saying, wow, our end of day processing is stuck. Who else am I waking up? Right? >> Exactly, yeah. Got to get that cash going. But this is a great use case. I want to get to the customer. What do some of the successful customers say to you for the folks watching that aren't yet a customer of Platform9, what are some of the accolades and comments or anecdotes that you guys hear from customers that you have? >> It just works, which I think is probably one of the best ones you can get. Customers coming back and being able to show to their business that they've delivered growth, like business growth and productivity growth and keeping their organization size the same. So we started on our containerization journey. We went to Kubernetes. We've deployed all these new workloads and our operations team is still six people. We're doing way more with growth less, and I think that's also talking to the strength that we're bringing, 'cause we're, we're augmenting that team. They're spending less time on the really low level stuff and automating a lot of the growth activity that's involved. So when it comes to being able to grow their business, they can just focus on that, not- >> Well you guys do the heavy lifting, keep on top of the Kubernetes, make sure that all the versions are all done. Everything's stable and consistent so they can go on and do the build out and provide their services. That seems to be what you guys are best at. >> Correct, correct. >> And so what's on the roadmap? You have the product, direct product management, you get the keys to the kingdom. What is, what is the focus? What's your focus right now? Obviously Kubernetes is growing up, Containers. We've been hearing a lot at the last KubeCon about the security containers is getting better. You've seen verification, a lot more standards around some things. What are you focused on right now for at a product over there? >> Edge is a really big focus for us. And I think in Edge you can look at it in two ways. The mantra that I drive is Edge must be remote. 
If you can't do something remotely at the Edge, you are using a human being; that's not Edge. Our Edge management capabilities, and we've been in the market for over two years, are a hundred percent remote. You want to stand up a store, you just ship the server in there, it gets racked, the rest of it's remote. Imagine a store manager in, I don't know, KFC, just plugging in the server, putting in the Ethernet cable, pressing the power button. The rest of all that provisioning for that Cloud Native stack, Kubernetes, KubeVirt for virtualization, is done remotely. So we're continuing to focus on that. The next piece that is related to that is allowing people to run Platform9 SaaS in their data centers. So we do air gap today, and we've had a really strong focus on telecommunications and the containerized network functions that come along with that. So this next piece is saying, we're bringing what we run as SaaS into your data center, so then you can run it. 'Cause there are many people out there that are saying, we want these capabilities and we want everything that the Platform9 control plane brings and simplifies. But unfortunately, regulatory compliance reasons mean that we can't leverage SaaS. So they might be using a cloud, but they're saying that's still our infrastructure. We still close that network down, or they're still on-prem. So those are two big priorities for us this year. And that on-premise experience is paramount, even to the point that we will be delivering a way that when you run on-premise, you can still say, wait a second, well I can send outbound alerts to Platform9. So their support team can still be proactively helping me as much as they could, even though I'm running Platform9's control plane. So it's sort of giving that blend of two experiences. They're big priorities. And the third pillar is all around virtualization. It's saying if you have economic pressures, then I think it's important to look at what you're spending today and realistically say, can that be reduced? And I think hypervisors and virtualization is something that should be looked at, because if you can actually reduce that spend, you can bring in some modernization at the same time. Let's take some of those nodes that exist that are two years into their five-year hardware life cycle. Let's turn that into a Cloud Native environment, which is enabling your modernization in place. It's giving your engineers and application developers the new toys, the new experiences, and then you can start running some of those virtualized workloads with KubeVirt there. So you're reducing cost and you're modernizing at the same time with your existing infrastructure. >> You know Chris, the topic of this content series that we're doing with you guys is finding the right path, trusting the right path to Cloud Native. What does that mean? I mean, if you had to kind of summarize that phrase, trusting the right path to Cloud Native, what does that mean? Does it mean in terms of architecture, is it deployment? Is it operations? What's the underlying main theme of that quote? What's the, what's? How would you talk to a customer and say, what does that mean if someone said, "Hey, what does that right path mean?" >> I think the right path means focusing on what you should be focusing on.
I know I've said it a hundred times, but if your entire operations team is trying to figure out the nuts and bolts of Kubernetes and getting three months into a journey and discovering, ah, I need Metrics Server to make something function. I want to use Horizontal Pod Autoscaler or Vertical Pod Autoscaler and I need this other thing, now I need to manage that. That's not the right path. That's literally learning what other people have been learning for the last five, seven years that have been focused on Kubernetes solely. So the why- >> There's been a lot of grind. People have been grinding it out. I mean, that's what you're talking about here. They've been standing up the, when Kubernetes started, it was all the promise. >> Chris: Yep. >> And essentially manually kind of getting in in the weeds and configuring it. Now it's matured up. They want stability. >> Chris: Yeah. >> Not everyone can get down and dirty with Kubernetes. It's not something that people want to generally do unless you're totally into it, right? Like I mean, I mean ops teams, I mean, yeah. You know what I mean? It's not like it's heavy lifting. Yeah, it's important. Just got to get it going. >> Yeah, I mean if you're deploying with Platform9, your Ops teams can tinker to their hearts content. We're completely compliant upstream Kubernetes. You can go and change an API server flag, let's go and mess with the scheduler, because we want to. You can still do that, but don't, don't have your team investing in all this time to figure it out. It's been figured out. >> John: Got it. >> Get them focused on enabling velocity for your business. >> So it's not build, but run. >> Chris: Correct? >> Or run Kubernetes, not necessarily figure out how to kind of get it all, consume it out. >> You know we've talked to a lot of customers out there that are saying, "I want to be able to deliver a service to my users." Our response is, "Cool, let us run it. You consume it, therefore deliver it." And we're solving that in one hit versus figuring out how to first run it, then operate it, then turn that into a consumable service. >> So the alternative Platform9 is what? They got to do it themselves or use the Cloud or what's the, what's the alternative for the customer for not using Platform9? Hiring more people to kind of work on it? What's the? >> People, building that kind of PaaS experience? Something that I've been very passionate about for the past year is looking at that world of sort of GitOps and what that means. And if you go out there and you sort of start asking the question what's happening? Just generally with Kubernetes as well and GitOps in that scope, then you'll hear some people saying, well, I'm making it PaaS, because Kubernetes is too complicated for my developers and we need to give them something. There's some great material out there from the likes of Intuit and Adobe where for two big contributors to Argo and the Argo projects, they almost have, well they do have, different experiences. One is saying, we went down the PaaS route and it failed. The other one is saying, well we've built a really stable PaaS and it's working. What are they trying to do? They're trying to deliver an outcome to make it easy to use and consume Kubernetes. So you could go out there and say, hey, I'm going to build a Kubernetes cluster. Sounds like Argo CD is a great way to expose that to my developers so they can use Kubernetes without having to use Kubernetes and start automating things. 
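To make the Argo CD option Chris is weighing concrete, exposing a cluster to developers that way typically means declaring an Application object that points at a Git repository and lets the controller keep the cluster in sync. The repository, paths and namespaces below are made up for illustration:

    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: orders-api
      namespace: argocd
    spec:
      project: default
      source:
        repoURL: https://git.example.com/platform/apps.git   # hypothetical repo
        targetRevision: main
        path: orders-api/overlays/production                 # e.g. a Kustomize overlay
      destination:
        server: https://kubernetes.default.svc
        namespace: orders
      syncPolicy:
        automated:
          prune: true      # remove resources that were deleted from Git
          selfHeal: true   # revert changes made outside of Git

Every deployment then has a commit, an author and a diff behind it, which is the auditability Chris comes back to later in the conversation.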
That is an approach, but you're going to be going completely open source and you're going to have to bring in all the individual components, or you could just lay it down and consume it as a service and not have to- >> And you mentioned Intuit. They were the ones who kind of brought that into the open. >> They did. Intuit is the primary contributor to the Argo set of products. >> How has that been received in the market? I mean, they had the event at the Computer History Museum last fall. What's the momentum there? What's the big takeaway from that project? >> Growth. To me, growth. I mean, go and track the stars on that one. It's just, it's growth. It's unlocking machine learning. Argo Workflows can do more than just make things happen. Argo CD, I think the approach they're taking is, hey, let's make this simple to use, which I think can be lost. And I think credit where credit's due, they're really pushing to bring in a lot of capabilities to make it easier to work with applications and microservices on Kubernetes. It's not just that, hey, here's a GitOps tool. It can take something from a Git repo and deploy it and maybe prioritize it and help you scale your operations from that perspective. It's taking a step back and saying, well how did we get to production in the first place? And what can be done down there to help as well? I think it's growth, expansion of features. They had a huge release just come out in, I think it was 2.6, that brought in things that as a product manager I don't often look at, like really deep technical things, and say wow, that's powerful. But they have, they've got some great features in that release that really do solve real problems. >> And as the product person, who's the target buyer for you? Who's the customer? Who's making that? And you got decision maker, influencer, and recommender. Take us through the customer persona for you guys. >> So that Platform Ops, DevOps space, right, the people that need to be delivering Containers as a service out to their organization. But then it's also important to say, well who else are our primary users? And that's developers, engineers, right? They shouldn't have to say, oh well I have access to a Kubernetes cluster. Do I have to use kubectl or do I need to go find some other tool? No, they can just log in to Platform9. It's integrated with your enterprise ID. >> They're the end customer at the end of the day, they're the user. >> Yeah, yeah. They can log in. And they can see the clusters you've given them access to as a Platform Ops Administrator. >> So job well done for you guys. And in your mind the developers are moving 'em fast, coding and happy. >> Chris: Yeah, yeah. >> And from a customer standpoint, you reduce the maintenance cost, because you keep the Ops smoother, so you got efficiency and maintenance costs kind of reduced, or is that kind of the benefits? >> Yeah, yep, yeah. And at two o'clock in the morning when things go inevitably wrong, they're not there by themselves, and we're proactively working with them. >> And that's the uptime issue. >> That is the uptime issue. And Cloud doesn't solve that, right? Everyone's experienced that Clouds can go down, entire regions can go offline. That's happened to all Cloud providers. And what do you do then? Kubernetes isn't your recovery plan. It's part of it, right, but it's that piece. >> You know Chris, to wrap up this interview, I will say that "theCUBE" is 12 years old now. We've been to OpenStack early days.
We had you guys on when we were covering OpenStack and now Cloud has just been booming. You got AI around the corner, AI Ops, now you got all this new data infrastructure, it's just amazing Cloud growth, Cloud Native, Security Native, Cloud Native, Data Native, AI Native. It's going to be all, this is the new app environment, but there's also existing infrastructure. So going back to OpenStack, rolling our own cloud, building your own cloud, building infrastructure cloud, in a cloud way, is what the pioneers have done. I mean this is what we're at. Now we're at this scale next level, abstracted away and make it operational. It seems to be the key focus. We look at CNCF at KubeCon and what they're doing with the cloud SecurityCon, it's all about operations. >> Chris: Yep, right. >> Ops and you know, that's going to sound counterintuitive 'cause it's a developer open source environment, but you're starting to see that Ops focus in a good way. >> Chris: Yeah, yeah, yeah. >> Infrastructure as code way. >> Chris: Yep. >> What's your reaction to that? How would you summarize where we are in the industry relative to, am I getting, am I getting it right there? Is that the right view? What am I missing? What's the current state of the next level, NextGen infrastructure? >> It's a good question. When I think back to sort of late 2019, I sort of had this aha moment as I saw what really truly is delivering infrastructure as code happening at Platform9. There's an open source project Ironic, which is now also available within Kubernetes that is Metal Kubed that automates Bare Metal as code, which means you can go from an empty server, lay down your operating system, lay down Kubernetes, and you've just done everything delivered to your customer as code with a Cloud Native platform. That to me was sort of the biggest realization that I had as I was moving into this industry was, wait, it's there. This can be done. And the evolution of tooling and operations is getting to the point where that can be achieved and it's focused on by a number of different open source projects. Not just Ironic and and Metal Kubed, but that's a huge win. That is truly getting your infrastructure. >> John: That's an inflection point, really. >> Yeah. >> If you think about it, 'cause that's one of the problems. We had with the Bare Metal piece was the automation and also making it Cloud Ops, cloud operations. >> Right, yeah. I mean, one of the things that I think Ironic did really well was saying let's just treat that piece of Bare Metal like a Cloud VM or an instance. If you got a problem with it, just give the person using it or whatever's using it, a new one and reimage it. Just tell it to reimage itself and it'll just (snaps fingers) go. You can do self-service with it. In Platform9, if you log in to our SaaS Ironic, you can go and say, I want that physical server to myself, because I've got a giant workload, or let's turn it into a Kubernetes cluster. That whole thing is automated. To me that's infrastructure as code. I think one of the other important things that's happening at the same time is we're seeing GitOps, we're seeing things like Terraform. I think it's important for organizations to look at what they have and ask, am I using tools that are fit for tomorrow or am I using tools that are yesterday's tools to solve tomorrow's problems? And when especially it comes to modernizing infrastructure as code, I think that's a big piece to look at. >> Do you see Terraform as old or new? >> I see Terraform as old. 
It's a fantastic tool, capable of many great things, and it can work with basically every single provider out there on the planet. It is able to do things. Is it best fit to run in a GitOps methodology? I don't think it is quite at that point. In fact, if you went and looked at Flux, Flux has ways that make Terraform GitOps compliant, which is absolutely fantastic. It's using two tools, the best of breeds, which is solving that tomorrow problem with tomorrow solutions. >> Is it new solutions, old versus new? I like this old way, new way. I mean, Terraform is not that old and it's been around for about eight years or so, whatever. But HashiCorp is doing a great job with that. I mean, so okay, with Terraform, what's the new address? Is it more complex environments? Because Terraform made sense when you had basic DevOps, but now it sounds like there's a whole other level of complexity. >> I got to say. >> New tools. >> That kind of amalgamation of that application into infrastructure. Now my app team is paying way more attention to that manifest file, which is what GitOps is trying to solve. Let's templatize things. Let's version control our manifests, be it Helm, Kustomize, or just a straight-up Kubernetes manifest file, plain and boring. Let's get that version controlled. Let's make sure that we know what is there, why it was changed. Let's get some auditability and things like that. And then let's get that deployment all automated. So that's predicated on the cluster existing. Well why can't we do the same thing with the cluster, the inception problem. So even if you're in public cloud, the question is like, well what's calling that API to call that thing to happen? Where is that file living? How well can I manage that in a large team? Oh my God, something just changed. Who changed it? Where is that file? And I think that's one of the big pieces to be solved. >> Yeah, and you talk about Edge too and on-premises. I think one of the things I'm observing, and certainly when DevOps was rocking and rolling and infrastructure as code was like the real push, it was pretty much the public cloud, right? >> Chris: Yep. >> And you did Cloud Native and you had stuff on-premises. Yeah, you did some lifting and shifting in the cloud, but the cool stuff was going in the public cloud and you ran DevOps. Okay, now you got on-premise cloud operation and Edge. Is that the new DevOps? I mean 'cause what you're kind of getting at with old new, old new Terraform example is an interesting point, because you're pointing out potentially that that was good DevOps back in the day or it still is. >> Chris: It is, I was going to say. >> But depending on how you define what DevOps is. So if you say, I got the new DevOps with public, on-premise and Edge, that's just not all public cloud, that's essentially distributed Cloud Native. >> Correct. Is that the new DevOps in your mind or is that? How would you, or is that oversimplifying it? >> Or is that that term where everyone's saying Platform Ops, right? Has it shifted? >> Well, you bring up a good point about Terraform. I mean Terraform is well proven. People love it. It's got great use cases and now there seems to be new things happening. We call things like Supercloud emerging, which is multicloud and abstraction layers. So you're starting to see stuff being abstracted away for the benefits of moving to the next level, so teams don't get stuck doing the same old thing. They can move on.
Like what you guys are doing with Platform9 is providing a service so that teams don't have to do it. >> Correct, yeah. >> That makes a lot of sense, So you just, now it's running and then they move on to the next thing. >> Chris: Yeah, right. >> So what is that next thing? >> I think Edge is a big part of that next thing. The propensity for someone to put up with a delay, I think it's gone. For some reason, we've all become fairly short-tempered, Short fused. You know, I click the button, it should happen now, type people. And for better or worse, hopefully it gets better and we all become a bit more patient. But how do I get more effective and efficient at delivering that to that really demanding- >> I think you bring up a great point. I mean, it's not just people are getting short-tempered. I think it's more of applications are being deployed faster, security is more exposed if they don't see things quicker. You got data now infrastructure scaling up massively. So, there's a double-edged swords to scale. >> Chris: Yeah, yeah. I mean, maintenance, downtime, uptime, security. So yeah, I think there's a tension around, and one hand enthusiasm around pushing a lot of code and new apps. But is the confidence truly there? It's interesting one little, (snaps finger) supply chain software, look at Container Security for instance. >> Yeah, yeah. It's big. I mean it was codified. >> Do you agree that people, that's kind of an issue right now. >> Yeah, and it was, I mean even the supply chain has been codified by the US federal government saying there's things we need to improve. We don't want to see software being a point of vulnerability, and software includes that whole process of getting it to a running point. >> It's funny you mentioned remote and one of the thing things that you're passionate about, certainly Edge has to be remote. You don't want to roll a truck or labor at the Edge. But I was doing a conversation with, at Rebars last year about space. It's hard to do brake fix on space. It's hard to do a, to roll a someone to configure satellite, right? Right? >> Chris: Yeah. >> So Kubernetes is in space. We're seeing a lot of Cloud Native stuff in apps, in space, so just an example. This highlights the fact that it's got to be automated. Is there a machine learning AI angle with all this ChatGPT talk going on? You see all the AI going the next level. Some pretty cool stuff and it's only, I know it's the beginning, but I've heard people using some of the new machine learning, large language models, large foundational models in areas I've never heard of. Machine learning and data centers, machine learning and configuration management, a lot of different ways. How do you see as the product person, you incorporating the AI piece into the products for Platform9? >> I think that's a lot about looking at the telemetry and the information that we get back and to use one of those like old idle terms, that continuous improvement loop to feed it back in. And I think that's really where machine learning to start with comes into effect. As we run across all these customers, our system that helps at two o'clock in the morning has that telemetry, it's got that data. We can see what's changing and what's happening. So it's writing the right algorithms, creating the right machine learning to- >> So training will work for you guys. You have enough data and the telemetry to do get that training data. 
>> Yeah, obviously there's a lot of investment required to get there, but that is something that ultimately could be achieved with what we see in operating people's environments. >> Great. Chris, great to have you here in the studio. Good wide-ranging conversation on Kubernetes and Platform9. I guess my final question would be how do you look at the next five years out there? Because you got to run the product management, you got to have that 20 mile stare, you got to look at the customers, you got to look at what's going on in the engineering and you got to kind of have that arc. This is the right path kind of view. What's the five year arc look like for you guys? How do you see this playing out? 'Cause KubeCon is coming up, and are you seeing Kubernetes kind of break away with security? They didn't call it KubeCon Security, they called it CloudNativeSecurityCon, and they just had the inaugural event in Seattle, which seemed to go well. So security is kind of breaking out and you got Kubernetes. It's getting bigger. Certainly not going away, but what's your five year arc of how Platform9 and Kubernetes and Ops evolve? >> It's, to stay on that theme, it's focusing on what is most important to our users and getting them to a point where they can just consume it, so they're not having to operate it. So it's finding those big items and bringing that into our platform. It's something that's consumable, that's just taken care of, that's tested with each release. So it's simplifying operations more and more. We've always said freedom in cloud computing. Well we started on OpenStack and made that simple. Stable, easy, you just have it, it works. We're doing that with Kubernetes. We're expanding out that user base, right, we're saying bring your developers in, they can download their kubeconfig. They can see those containers that are running there. They can access the events, the log files. They can log in and build a VM using KubeVirt. They're self servicing. So it's alleviating pressures off of the Ops team, removing the help desk systems that people still seem to rely on. So it's like, what comes into that field that is the next biggest issue? Is it things like CI/CD? Is it simplifying GitOps? Is it bringing in security capabilities to talk to that? Or is that a piece that is a best of breed? Is there a reason that it's been spun out to its own conference? Is this something that deserves a focus, that should be a specialized capability instead of tooling, and vendors that we work with, that we partner with, that could be brought in as a service? I think it's looking at those trends and making sure that what we bring in has the biggest impact to our users. >> That's awesome. Thanks for coming in. I'll give you the last word. Put a plug in for Platform9 for the people who are watching. What should they know about Platform9 that they might not know about it? When should they call you guys and when should they engage? Take a minute to give the plug. >> The plug. I think it's, if your operations team is focused on building Kubernetes, stop. That shouldn't be the cloud. That shouldn't be in the Edge, that shouldn't be at the data center. They should be consuming it. If your engineering teams are all trying different ways and doing different things to use and consume Cloud Native services and Kubernetes, they shouldn't be. You want consistency. That's how you get economies of scale. 
Provide them with a simple platform that's integrated with all of your enterprise identity where they can just start consuming instead of having to solve these problems themselves. It's those two personas, right? Where the problems manifest. What are my operations teams doing, and are they delivering to my company or are they building infrastructure again? And are my engineers sprinting or crawling? 'Cause if they're not sprinting, you should be asking the question, do I have the right Cloud Native tooling in my environment and how can I get them back? >> I think it's developer productivity, uptime, security are the tell signs. You get that done. That's the goal of what you guys are doing, your mission. >> Chris: Yep. >> Great to have you on, Chris. Thanks for coming on. Appreciate it. >> Chris: Thanks very much. >> Okay, this is "theCUBE" here, finding the right path to Cloud Native. I'm John Furrier, host of "theCUBE." Thanks for watching. (upbeat music)
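As a brief aside on the self-service model Chris describes, where a developer downloads a kubeconfig and inspects their own workloads instead of opening a help-desk ticket, here is a sketch using the official Kubernetes Python client. The namespace name is a hypothetical placeholder, and the cluster credentials are assumed to already be in ~/.kube/config.

```python
# Sketch of developer self-service with the official `kubernetes` client:
# list your own pods and recent events using the kubeconfig you downloaded.
from kubernetes import client, config

config.load_kube_config()        # reads ~/.kube/config by default
v1 = client.CoreV1Api()

namespace = "team-payments"      # hypothetical developer namespace
for pod in v1.list_namespaced_pod(namespace).items:
    print(pod.metadata.name, pod.status.phase)

for event in v1.list_namespaced_event(namespace).items:
    print(event.last_timestamp, event.reason, event.message)
```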
Kesha Williams, Slalom | Special Program Series: Women of the Cloud
(bright upbeat music) >> Hey everyone. Welcome to theCUBE Special Program series: Women of the Cloud brought to you by AWS. I'm your host for the series, Lisa Martin. Very pleased to welcome Kesha Williams, senior principal at Slalom who joins me next. Kesha, great to have you. Thank you so much for your time today. >> Thank you for having me Lisa. >> Tell me a little bit about you and your role at Slalom. >> Hi everyone. I've been in tech for 26 years working across several industries like the airline industry, healthcare, hospitality and several government agencies. I really built a solid foundation in the Java software engineering space. A few years ago I added on AWS in the cloud and I really haven't looked back since. Throughout my career, I realized that I had a heart to teach and mentor, and that's what really brought me to Slalom. I currently serve as a program director in our AWS Cloud Residency program, which is a career accelerator for cloud engineers. >> 26 years. So you've had some great experiences and talk along that journey. You've grown your career as well. I love that you have that heart for teaching and mentoring. I think that's fantastic. Talk about, for the audience, some of the tactical recommendations that you have for those watching to be able to follow in your footsteps and grow their careers in tech. >> Well, tech is a very broad category. I always recommend that people really figure out what they enjoy doing to help narrow that focus into a specific domain in technology. For example, do you enjoy coding? Then you would look to be a software engineer. Do you enjoy telling people what to do? Then you may enjoy technical project management, and there are so many disciplines. I also recommend for people just getting started in tech to really consider the cloud. There is a huge demand for cloud engineers and people that are cloud-literate and not enough people to fill that demand. If you're looking to start a career in the cloud, I always recommend starting with learning the foundations, so going after your AWS Certified Cloud Practitioner exam. And once you understand the foundations, then start to build that hands on experience and build that portfolio so that you can speak to what you've developed in the past. And once you have that understanding, start to think about your specialty area. Do you want to specialize in machine learning or security or networking, and then continue to go after those more advanced certifications? >> That is brilliant advice that you really walked the audience through very strategically. I love how you think about it in that sense. I'd love to get into now you've grown your career over 26 years, as you said, some of the success stories that you've had in cloud. Can you share a few of those with us that you think really demonstrate the value of that foundation that you've built? >> Sure. I think a lot about success stories that really hit home and the first one that comes to mind is Georgia State University. That hits home because I'm from Georgia. It also hits home because my son attended Georgia State University. And Slalom joined Georgia State to really help them adopt this serverless approach and implement DevOps practices, and what that brings with serverless, you're able to really think less about the infrastructure management, and focus on building solutions and capabilities in Georgia State's example, really helping students achieve what they're trying to achieve. 
And I think that just the serverless model helps organizations move faster and deliver faster and innovate faster, and that's what we saw at Georgia State University. I'm happy when I think about that project because now Georgia State is ranked as the fourth most innovative university in the country, and I believe it's because we were able to help them shift and move some of their key applications to the cloud and really realize the benefits of what the cloud brings. >> And so, I love that. The fourth most innovative university in the country. That's a pretty impressive pedigree to be able to have there and you've shown the value of that. There's value across the organization, right? Across the staff, the educators, the students, the prospective students, and of course they have such great technology foundation with which they can use to learn and grow. You've got a second great example at Securian. I'd love to hear that success story and how you really helped that organization transform itself. >> Right. Securian, that case study really speaks to me because I'm all about teaching and mentoring, and empowering people to really realize the benefits of the cloud, and we were able to do that at Securian. We came in and really helped them define their cloud strategy, define that adoption strategy, define how they're going to migrate their applications to the cloud, and then we worked right alongside them to help them do that migration. But as a part of that, we talked about talent development and really help them up level their skills to be able to maintain what we've developed from an ongoing long-term perspective. >> The talent focus, the demand for talent, your focus on that is it can be such a flywheel for organizations in terms of innovation, evolution, that in upskilling is something that every organization I think regardless of industry should be focused on. Talk to me a little bit more about the heart that you have for helping organizations to attract that talent, to retain that talent by being able to be embracing of technology in emerging technologies in their organization, and how does that help them attract talent? >> Well, when you think about the mindset of engineers and the people in tech, we always have this goal to be at the leading edge and keep our skills current and have an opportunity to experiment with the latest and greatest technologies. And there is a huge appetite for cloud engineering skills from an engineer perspective and just from a demand perspective in the industry. So when companies are utilizing these really leading edge technologies that have shifted how we build applications, how we support applications, it really attracts top talent. >> Absolutely, and that should be a focus of every organization. Speaking of talent, one of the things that is talked about tremendously in organizations is diversity. But talk to me about some of the things that you see from a diversity lens through your eyes and what are some of the challenges today? There's so much talk about it, but yet dot dot dot to be continued. >> Right, Right. I am super excited that there is a huge focus on diversity in tech. Like I mentioned before, I've been in tech for 26 years, and I remember when a lot of organizations didn't care about diversity. So I'm appreciative that now there's a huge focus. But with that, there's also a need and a desire to focus on what we call inclusion and equity. 
So we're seeing organizations hire diverse candidates, but when those people come in, they're not in an environment that's welcoming. They're not in an environment where they feel included. And so there can be a retention problem if there isn't a focus on also inclusion and equity, which I call the other side of diversity. >> Yeah, the other side of the coin there. That's a great point that inclusion and equity are so critical to that diversity piece. In fact, they're really kind of engines to help make it successful so that organizations can attract diverse talent, but also retain them, make them feel welcome. Talk to me about some of the commitments that Slalom has to really a DEI approach. >> Right. At Slalom, we work really hard to build a culture where employees can bring their authentic selves to work and be authentic, and really enjoy equitable opportunities in a welcoming environment that celebrates authenticity. For example, our employees have access to a multitude of employee resource groups. Those types of groups, we call them ERGs, they really help with a sense of inclusion and a sense of belonging. When I think about the cloud residency, we do the same thing. We have a focus on diversity, so our leadership team is diverse, the residents in the program are diverse. So we have diversity from the bottom to the top. We also practice equity and inclusion in how we staff our residents on projects and how we make sure, really, I call it an even playing field for everyone, and really think about and understand some of the barriers that people face. And like I said, try to make it an even playing field. >> Wouldn't that be nice one day if there actually is an even playing field and we don't have to focus on this so much? That's kind of a nirvana, I think, for us to get to, but so much productivity comes when people are treated fairly. And to your point, I love that you said getting to be their authentic selves. I think that's what everybody wants in every walk of life, in every aspect of life. Let me be my authentic self and, employer, I'm going to be far more productive as a result for you. I just think they're linked like this. >> I totally agree. Like you mentioned, it helps bring retention. And when people have that sense of belonging, that sense of inclusion, and they know that the organization they work for really cares about and values those things. >> Speaking of authenticity, the organization needs to be authentic. That's a whole other conversation we could have, Kesha, I'm sure. But I want to ask you a final question. I can't believe you have 26 years of experience in tech. You don't look it, for one, but you have had- >> I appreciate that- >> such opportunities to grow and expand your career. You've left our audience with some fantastic strategic advice, tactical recommendations for how they can really climb that ladder. What do you see as next for the evolution in the cloud and where do you think your role is going to go? >> I definitely see this growing demand and need for machine learning. How we're applying machine learning in really every area of life is just exploding. And I see just next this supercharged focus on truly democratizing machine learning and putting it in the hands of everyone: technical people, business people, non-technical people. And when I think about AWS and some of their newer services, it really seeks to do just that. 
And when I think about my role in the Cloud Residency and how that role will evolve, it's just very important for me to lead the team to be intentional in building cloud engineers that can quickly jumpstart their machine learning journey to help fill that demand and better serve our clients. I also see my role really evolving into one that truly stays in line with the trends that we're seeing in the tech industry, and bringing those trends back and really preparing our cloud engineers to succeed. >> It's all about being intentional, intentional in DEI, intentional in cloud engineering, intentional in democratizing machine learning. Kesha, it's been such a pleasure to have you on the program, Women of the Cloud. Thank you so much for sharing your insights and your advice with the audience. I know they appreciate it. >> Thank you for having me. >> My pleasure. For Kesha Williams, I'm Lisa Martin. You're watching this special CUBE program series, Women of the Cloud brought to you by AWS. We thank you so much for watching and we'll see you soon. (bright upbeat music)
Breaking Analysis: ChatGPT Won't Give OpenAI First Mover Advantage
>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> OpenAI, the company, and ChatGPT have taken the world by storm. Microsoft reportedly is investing an additional 10 billion dollars into the company. But in our view, while the hype around ChatGPT is justified, we don't believe OpenAI will lock up the market with its first mover advantage. Rather, we believe that success in this market will be directly proportional to the quality and quantity of data that a technology company has at its disposal, and the compute power that it can deploy to run its system. Hello and welcome to this week's Wikibon CUBE insights, powered by ETR. In this Breaking Analysis, we unpack the excitement around ChatGPT, and debate the premise that the company's early entry into the space may not confer winner take all advantage to OpenAI. And to do so, we welcome CUBE collaborator, alum, Sarbjeet Johal, (chuckles) and John Furrier, co-host of theCUBE. Great to see you Sarbjeet, John. Really appreciate you guys coming to the program. >> Great to be on. >> Okay, so what is ChatGPT? Well, actually we asked ChatGPT, what is ChatGPT? So here's what it said. ChatGPT is a state-of-the-art language model developed by OpenAI that can generate human-like text. It could be fine tuned for a variety of language tasks, such as conversation, summarization, and language translation. So I asked it, give it to me in 50 words or less. How did it do? Anything to add? >> Yeah, I think it did good. It's a large language model, like previous models, but it started applying the transformers sort of mechanism to focus on what prompt you have given it, and then also on what answer it gave you in the first, sort of, one sentence or two sentences, and then introspect on itself, like what I have already said to you, and just work on that. So it's a self sort of focus, if you will. The transformers help the large language models to do that. >> So to your point, it's a large language model, and GPT stands for generative pre-trained transformer. >> And if you put the definition back up there again, if you put it back up on the screen, let's see it back up. Okay, it actually missed the word large. So one of the problems with ChatGPT, it's not always accurate. It's actually a large language model, and it says state of the art language model. And if you look at Google, Google has dominated AI for a long time and they're well known as being the best at this. And apparently Google has their own large language model, LLM, in play and has been holding it back from release because of backlash on the accuracy. Just that example you showed is a great point. They got it almost right, but they missed the key word. >> You know what's funny about that John, is I had previously asked it in my prompt to give it to me in less than a hundred words, and it was too long, I said it was too long for Breaking Analysis, and there it went into the fact that it's a large language model. So it gave me a really different answer both times. So, but it's still pretty amazing for those of you who haven't played with it yet. And one of the best examples that I saw was Ben Charrington from This Week In ML AI podcast. And I stumbled on this thanks to Brian Gracely, who was listening to one of his Cloudcasts. 
Basically what Ben did is he took, he prompted ChatGPT to interview ChatGPT, and he simply gave the system the prompts, and then he ran the questions and answers into this avatar builder and sped it up 2X so it didn't sound like a machine. And voila, it was amazing. So John is ChatGPT going to take over as a cube host? >> Well, I was thinking, we get the questions in advance sometimes from PR people. We should actually just plug it in ChatGPT, add it to our notes, and saying, "Is this good enough for you? Let's ask the real question." So I think, you know, I think there's a lot of heavy lifting that gets done. I think the ChatGPT is a phenomenal revolution. I think it highlights the use case. Like that example we showed earlier. It gets most of it right. So it's directionally correct and it feels like it's an answer, but it's not a hundred percent accurate. And I think that's where people are seeing value in it. Writing marketing, copy, brainstorming, guest list, gift list for somebody. Write me some lyrics to a song. Give me a thesis about healthcare policy in the United States. It'll do a bang up job, and then you got to go in and you can massage it. So we're going to do three quarters of the work. That's why plagiarism and schools are kind of freaking out. And that's why Microsoft put 10 billion in, because why wouldn't this be a feature of Word, or the OS to help it do stuff on behalf of the user. So linguistically it's a beautiful thing. You can input a string and get a good answer. It's not a search result. >> And we're going to get your take on on Microsoft and, but it kind of levels the playing- but ChatGPT writes better than I do, Sarbjeet, and I know you have some good examples too. You mentioned the Reed Hastings example. >> Yeah, I was listening to Reed Hastings fireside chat with ChatGPT, and the answers were coming as sort of voice, in the voice format. And it was amazing what, he was having very sort of philosophy kind of talk with the ChatGPT, the longer sentences, like he was going on, like, just like we are talking, he was talking for like almost two minutes and then ChatGPT was answering. It was not one sentence question, and then a lot of answers from ChatGPT and yeah, you're right. I, this is our ability. I've been thinking deep about this since yesterday, we talked about, like, we want to do this segment. The data is fed into the data model. It can be the current data as well, but I think that, like, models like ChatGPT, other companies will have those too. They can, they're democratizing the intelligence, but they're not creating intelligence yet, definitely yet I can say that. They will give you all the finite answers. Like, okay, how do you do this for loop in Java, versus, you know, C sharp, and as a programmer you can do that, in, but they can't tell you that, how to write a new algorithm or write a new search algorithm for you. They cannot create a secretive code for you to- >> Not yet. >> Have competitive advantage. >> Not yet, not yet. >> but you- >> Can Google do that today? >> No one really can. The reasoning side of the data is, we talked about at our Supercloud event, with Zhamak Dehghani who's was CEO of, now of Nextdata. This next wave of data intelligence is going to come from entrepreneurs that are probably cross discipline, computer science and some other discipline. But they're going to be new things, for example, data, metadata, and data. It's hard to do reasoning like a human being, so that needs more data to train itself. 
So I think the first gen of this training module for the large language model they have is a corpus of text. Lot of that's why blog posts are, but the facts are wrong and sometimes out of context, because that contextual reasoning takes time, it takes intelligence. So machines need to become intelligent, and so therefore they need to be trained. So you're going to start to see, I think, a lot of acceleration on training the data sets. And again, it's only as good as the data you can get. And again, proprietary data sets will be a huge winner. Anyone who's got a large corpus of content, proprietary content like theCUBE or SiliconANGLE as a publisher will benefit from this. Large FinTech companies, anyone with large proprietary data will probably be a big winner on this generative AI wave, because it just, it will eat that up, and turn that back into something better. So I think there's going to be a lot of interesting things to look at here. And certainly productivity's going to be off the charts for vanilla and the internet is going to get swarmed with vanilla content. So if you're in the content business, and you're an original content producer of any kind, you're going to be not vanilla, so you're going to be better. So I think there's so much at play Dave (indistinct). >> I think the playing field has been risen, so we- >> Risen and leveled? >> Yeah, and leveled to certain extent. So it's now like that few people as consumers, as consumers of AI, we will have a advantage and others cannot have that advantage. So it will be democratized. That's, I'm sure about that. But if you take the example of calculator, when the calculator came in, and a lot of people are, "Oh, people can't do math anymore because calculator is there." right? So it's a similar sort of moment, just like a calculator for the next level. But, again- >> I see it more like open source, Sarbjeet, because like if you think about what ChatGPT's doing, you do a query and it comes from somewhere the value of a post from ChatGPT is just a reuse of AI. The original content accent will be come from a human. So if I lay out a paragraph from ChatGPT, did some heavy lifting on some facts, I check the facts, save me about maybe- >> Yeah, it's productive. >> An hour writing, and then I write a killer two, three sentences of, like, sharp original thinking or critical analysis. I then took that body of work, open source content, and then laid something on top of it. >> And Sarbjeet's example is a good one, because like if the calculator kids don't do math as well anymore, the slide rule, remember we had slide rules as kids, remember we first started using Waze, you know, we were this minority and you had an advantage over other drivers. Now Waze is like, you know, social traffic, you know, navigation, everybody had, you know- >> All the back roads are crowded. >> They're car crowded. (group laughs) Exactly. All right, let's, let's move on. What about this notion that futurist Ray Amara put forth and really Amara's Law that we're showing here, it's, the law is we, you know, "We tend to overestimate the effect of technology in the short run and underestimate it in the long run." Is that the case, do you think, with ChatGPT? What do you think Sarbjeet? >> I think that's true actually. There's a lot of, >> We don't debate this. >> There's a lot of awe, like when people see the results from ChatGPT, they say what, what the heck? Like, it can do this? 
But then if you use it more and more and more, and I ask the set of similar question, not the same question, and it gives you like same answer. It's like reading from the same bucket of text in, the interior read (indistinct) where the ChatGPT, you will see that in some couple of segments. It's very, it sounds so boring that the ChatGPT is coming out the same two sentences every time. So it is kind of good, but it's not as good as people think it is right now. But we will have, go through this, you know, hype sort of cycle and get realistic with it. And then in the long term, I think it's a great thing in the short term, it's not something which will (indistinct) >> What's your counter point? You're saying it's not. >> I, no I think the question was, it's hyped up in the short term and not it's underestimated long term. That's what I think what he said, quote. >> Yes, yeah. That's what he said. >> Okay, I think that's wrong with this, because this is a unique, ChatGPT is a unique kind of impact and it's very generational. People have been comparing it, I have been comparing to the internet, like the web, web browser Mosaic and Netscape, right, Navigator. I mean, I clearly still remember the days seeing Navigator for the first time, wow. And there weren't not many sites you could go to, everyone typed in, you know, cars.com, you know. >> That (indistinct) wasn't that overestimated, the overhyped at the beginning and underestimated. >> No, it was, it was underestimated long run, people thought. >> But that Amara's law. >> That's what is. >> No, they said overestimated? >> Overestimated near term underestimated- overhyped near term, underestimated long term. I got, right I mean? >> Well, I, yeah okay, so I would then agree, okay then- >> We were off the charts about the internet in the early days, and it actually exceeded our expectations. >> Well there were people who were, like, poo-pooing it early on. So when the browser came out, people were like, "Oh, the web's a toy for kids." I mean, in 1995 the web was a joke, right? So '96, you had online populations growing, so you had structural changes going on around the browser, internet population. And then that replaced other things, direct mail, other business activities that were once analog then went to the web, kind of read only as you, as we always talk about. So I think that's a moment where the hype long term, the smart money, and the smart industry experts all get the long term. And in this case, there's more poo-pooing in the short term. "Ah, it's not a big deal, it's just AI." I've heard many people poo-pooing ChatGPT, and a lot of smart people saying, "No this is next gen, this is different and it's only going to get better." So I think people are estimating a big long game on this one. >> So you're saying it's bifurcated. There's those who say- >> Yes. >> Okay, all right, let's get to the heart of the premise, and possibly the debate for today's episode. Will OpenAI's early entry into the market confer sustainable competitive advantage for the company. And if you look at the history of tech, the technology industry, it's kind of littered with first mover failures. Altair, IBM, Tandy, Commodore, they and Apple even, they were really early in the PC game. They took a backseat to Dell who came in the scene years later with a better business model. Netscape, you were just talking about, was all the rage in Silicon Valley, with the first browser, drove up all the housing prices out here. 
AltaVista was the first search engine to really, you know, index full text. >> Owned by Dell, I mean DEC. >> Owned by Digital. >> Yeah, Digital Equipment >> Compaq bought it. And of course as an aside, Digital, they wanted to showcase their hardware, right? Their super computer stuff. And then so Friendster and MySpace, they came before Facebook. The iPhone certainly wasn't the first mobile device. So lots of failed examples, but there are some recent successes like AWS and cloud. >> You could say smartphone. So I mean. >> Well I know, and you can, we can parse this so we'll debate it. Now Twitter, you could argue, had first mover advantage. You kind of gave me that one John. Bitcoin and crypto clearly had first mover advantage, and sustaining that. Guys, will OpenAI make it to the list on the right with ChatGPT, what do you think? >> I think categorically as a company, it probably won't, but as a category, I think what they're doing will, so OpenAI as a company, they get funding, there's power dynamics involved. Microsoft put a billion dollars in early on, then they just pony it up. Now they're reporting 10 billion more. So, like, if the browsers, Microsoft had competitive advantage over Netscape, and used monopoly power, and convicted by the Department of Justice for killing Netscape with their monopoly, Netscape should have had won that battle, but Microsoft killed it. In this case, Microsoft's not killing it, they're buying into it. So I think the embrace extend Microsoft power here makes OpenAI vulnerable for that one vendor solution. So the AI as a company might not make the list, but the category of what this is, large language model AI, is probably will be on the right hand side. >> Okay, we're going to come back to the government intervention and maybe do some comparisons, but what are your thoughts on this premise here? That, it will basically set- put forth the premise that it, that ChatGPT, its early entry into the market will not confer competitive advantage to >> For OpenAI. >> To Open- Yeah, do you agree with that? >> I agree with that actually. It, because Google has been at it, and they have been holding back, as John said because of the scrutiny from the Fed, right, so- >> And privacy too. >> And the privacy and the accuracy as well. But I think Sam Altman and the company on those guys, right? They have put this in a hasty way out there, you know, because it makes mistakes, and there are a lot of questions around the, sort of, where the content is coming from. You saw that as your example, it just stole the content, and without your permission, you know? >> Yeah. So as quick this aside- >> And it codes on people's behalf and the, those codes are wrong. So there's a lot of, sort of, false information it's putting out there. So it's a very vulnerable thing to do what Sam Altman- >> So even though it'll get better, others will compete. >> So look, just side note, a term which Reid Hoffman used a little bit. Like he said, it's experimental launch, like, you know, it's- >> It's pretty damn good. >> It is clever because according to Sam- >> It's more than clever. It's good. >> It's awesome, if you haven't used it. I mean you write- you read what it writes and you go, "This thing writes so well, it writes so much better than you." >> The human emotion drives that too. I think that's a big thing. But- >> I Want to add one more- >> Make your last point. >> Last one. Okay. So, but he's still holding back. He's conducting quite a few interviews. 
If you want to get the gist of it, there's an interview with StrictlyVC interview from yesterday with Sam Altman. Listen to that one it's an eye opening what they want- where they want to take it. But my last one I want to make it on this point is that Satya Nadella yesterday did an interview with Wall Street Journal. I think he was doing- >> You were not impressed. >> I was not impressed because he was pushing it too much. So Sam Altman's holding back so there's less backlash. >> Got 10 billion reasons to push. >> I think he's almost- >> Microsoft just laid off 10000 people. Hey ChatGPT, find me a job. You know like. (group laughs) >> He's overselling it to an extent that I think it will backfire on Microsoft. And he's over promising a lot of stuff right now, I think. I don't know why he's very jittery about all these things. And he did the same thing during Ignite as well. So he said, "Oh, this AI will write code for you and this and that." Like you called him out- >> The hyperbole- >> During your- >> from Satya Nadella, he's got a lot of hyperbole. (group talks over each other) >> All right, Let's, go ahead. >> Well, can I weigh in on the whole- >> Yeah, sure. >> Microsoft thing on whether OpenAI, here's the take on this. I think it's more like the browser moment to me, because I could relate to that experience with ChatG, personally, emotionally, when I saw that, and I remember vividly- >> You mean that aha moment (indistinct). >> Like this is obviously the future. Anything else in the old world is dead, website's going to be everywhere. It was just instant dot connection for me. And a lot of other smart people who saw this. Lot of people by the way, didn't see it. Someone said the web's a toy. At the company I was worked for at the time, Hewlett Packard, they like, they could have been in, they had invented HTML, and so like all this stuff was, like, they just passed, the web was just being passed over. But at that time, the browser got better, more websites came on board. So the structural advantage there was online web usage was growing, online user population. So that was growing exponentially with the rise of the Netscape browser. So OpenAI could stay on the right side of your list as durable, if they leverage the category that they're creating, can get the scale. And if they can get the scale, just like Twitter, that failed so many times that they still hung around. So it was a product that was always successful, right? So I mean, it should have- >> You're right, it was terrible, we kept coming back. >> The fail whale, but it still grew. So OpenAI has that moment. They could do it if Microsoft doesn't meddle too much with too much power as a vendor. They could be the Netscape Navigator, without the anti-competitive behavior of somebody else. So to me, they have the pole position. So they have an opportunity. So if not, if they don't execute, then there's opportunity. There's not a lot of barriers to entry, vis-a-vis say the CapEx of say a cloud company like AWS. You can't replicate that, Many have tried, but I think you can replicate OpenAI. >> And we're going to talk about that. Okay, so real quick, I want to bring in some ETR data. This isn't an ETR heavy segment, only because this so new, you know, they haven't coverage yet, but they do cover AI. So basically what we're seeing here is a slide on the vertical axis's net score, which is a measure of spending momentum, and in the horizontal axis's is presence in the dataset. Think of it as, like, market presence. 
And in the insert right there, you can see how the dots are plotted, the two columns. And so, but the key point here that we want to make, there's a bunch of companies on the left, is he like, you know, DataRobot and C3 AI and some others, but the big whales, Google, AWS, Microsoft, are really dominant in this market. So that's really the key takeaway that, can we- >> I notice IBM is way low. >> Yeah, IBM's low, and actually bring that back up and you, but then you see Oracle who actually is injecting. So I guess that's the other point is, you're not necessarily going to go buy AI, and you know, build your own AI, you're going to, it's going to be there and, it, Salesforce is going to embed it into its platform, the SaaS companies, and you're going to purchase AI. You're not necessarily going to build it. But some companies obviously are. >> I mean to quote IBM's general manager Rob Thomas, "You can't have AI with IA." information architecture and David Flynn- >> You can't Have AI without IA >> without, you can't have AI without IA. You can't have, if you have an Information Architecture, you then can power AI. Yesterday David Flynn, with Hammersmith, was on our Supercloud. He was pointing out that the relationship of storage, where you store things, also impacts the data and stressablity, and Zhamak from Nextdata, she was pointing out that same thing. So the data problem factors into all this too, Dave. >> So you got the big cloud and internet giants, they're all poised to go after this opportunity. Microsoft is investing up to 10 billion. Google's code red, which was, you know, the headline in the New York Times. Of course Apple is there and several alternatives in the market today. Guys like Chinchilla, Bloom, and there's a company Jasper and several others, and then Lena Khan looms large and the government's around the world, EU, US, China, all taking notice before the market really is coalesced around a single player. You know, John, you mentioned Netscape, they kind of really, the US government was way late to that game. It was kind of game over. And Netscape, I remember Barksdale was like, "Eh, we're going to be selling software in the enterprise anyway." and then, pshew, the company just dissipated. So, but it looks like the US government, especially with Lena Khan, they're changing the definition of antitrust and what the cause is to go after people, and they're really much more aggressive. It's only what, two years ago that (indistinct). >> Yeah, the problem I have with the federal oversight is this, they're always like late to the game, and they're slow to catch up. So in other words, they're working on stuff that should have been solved a year and a half, two years ago around some of the social networks hiding behind some of the rules around open web back in the days, and I think- >> But they're like 15 years late to that. >> Yeah, and now they got this new thing on top of it. So like, I just worry about them getting their fingers. >> But there's only two years, you know, OpenAI. >> No, but the thing (indistinct). >> No, they're still fighting other battles. But the problem with government is that they're going to label Big Tech as like a evil thing like Pharma, it's like smoke- >> You know Lena Khan wants to kill Big Tech, there's no question. >> So I think Big Tech is getting a very seriously bad rap. And I think anything that the government does that shades darkness on tech, is politically motivated in most cases. 
You can almost look at everything, and my 80 20 rule is in play here. 80% of the government activity around tech is bullshit, it's politically motivated, and the 20% is probably relevant, but off the mark and not organized. >> Well market forces have always been the determining factor of success. The governments, you know, have been pretty much failed. I mean you look at IBM's antitrust, that, what did that do? The market ultimately beat them. You look at Microsoft back in the day, right? Windows 95 was peaking, the government came in. But you know, like you said, they missed the web, right, and >> so they were hanging on- >> There's nobody in government >> to Windows. >> that actually knows- >> And so, you, I think you're right. It's market forces that are going to determine this. But Sarbjeet, what do you make of Microsoft's big bet here, you weren't impressed with with Nadella. How do you think, where are they going to apply it? Is this going to be a Hail Mary for Bing, or is it going to be applied elsewhere? What do you think. >> They are saying that they will, sort of, weave this into their products, office products, productivity and also to write code as well, developer productivity as well. That's a big play for them. But coming back to your antitrust sort of comments, right? I believe the, your comment was like, oh, fed was late 10 years or 15 years earlier, but now they're two years. But things are moving very fast now as compared to they used to move. >> So two years is like 10 Years. >> Yeah, two years is like 10 years. Just want to make that point. (Dave laughs) This thing is going like wildfire. Any new tech which comes in that I think they're going against distribution channels. Lina Khan has commented time and again that the marketplace model is that she wants to have some grip on. Cloud marketplaces are a kind of monopolistic kind of way. >> I don't, I don't see this, I don't see a Chat AI. >> You told me it's not Bing, you had an interesting comment. >> No, no. First of all, this is great from Microsoft. If you're Microsoft- >> Why? >> Because Microsoft doesn't have the AI chops that Google has, right? Google is got so much core competency on how they run their search, how they run their backends, their cloud, even though they don't get a lot of cloud market share in the enterprise, they got a kick ass cloud cause they needed one. >> Totally. >> They've invented SRE. I mean Google's development and engineering chops are off the scales, right? Amazon's got some good chops, but Google's got like 10 times more chops than AWS in my opinion. Cloud's a whole different story. Microsoft gets AI, they get a playbook, they get a product they can render into, the not only Bing, productivity software, helping people write papers, PowerPoint, also don't forget the cloud AI can super help. We had this conversation on our Supercloud event, where AI's going to do a lot of the heavy lifting around understanding observability and managing service meshes, to managing microservices, to turning on and off applications, and or maybe writing code in real time. So there's a plethora of use cases for Microsoft to deploy this. combined with their R and D budgets, they can then turbocharge more research, build on it. So I think this gives them a car in the game, Google may have pole position with AI, but this puts Microsoft right in the game, and they already have a lot of stuff going on. But this just, I mean everything gets lifted up. Security, cloud, productivity suite, everything. 
>> What's under the hood at Google, and why aren't they talking about it? I mean they got to be freaked out about this. No? Or do they have kind of a magic bullet? >> I think they have the, they have the chops definitely. Magic bullet, I don't know where they are, as compared to the ChatGPT 3 or 4 models. Like they, but if you look at the online sort of activity and the videos put out there from Google folks, Google technology folks, that's account you should look at if you are looking there, they have put all these distinctions what ChatGPT 3 has used, they have been talking about for a while as well. So it's not like it's a secret thing that you cannot replicate. As you said earlier, like in the beginning of this segment, that anybody who has more data and the capacity to process that data, which Google has both, I think they will win this. >> Obviously living in Palo Alto where the Google founders are, and Google's headquarters next town over we have- >> We're so close to them. We have inside information on some of the thinking and that hasn't been reported by any outlet yet. And that is, is that, from what I'm hearing from my sources, is Google has it, they don't want to release it for many reasons. One is it might screw up their search monopoly, one, two, they're worried about the accuracy, 'cause Google will get sued. 'Cause a lot of people are jamming on this ChatGPT as, "Oh it does everything for me." when it's clearly not a hundred percent accurate all the time. >> So Lina Kahn is looming, and so Google's like be careful. >> Yeah so Google's just like, this is the third, could be a third rail. >> But the first thing you said is a concern. >> Well no. >> The disruptive (indistinct) >> What they will do is do a Waymo kind of thing, where they spin out a separate company. >> They're doing that. >> The discussions happening, they're going to spin out the separate company and put it over there, and saying, "This is AI, got search over there, don't touch that search, 'cause that's where all the revenue is." (chuckles) >> So, okay, so that's how they deal with the Clay Christensen dilemma. What's the business model here? I mean it's not advertising, right? Is it to charge you for a query? What, how do you make money at this? >> It's a good question, I mean my thinking is, first of all, it's cool to type stuff in and see a paper get written, or write a blog post, or gimme a marketing slogan for this or that or write some code. I think the API side of the business will be critical. And I think Howie Xu, I know you're going to reference some of his comments yesterday on Supercloud, I think this brings a whole 'nother user interface into technology consumption. I think the business model, not yet clear, but it will probably be some sort of either API and developer environment or just a straight up free consumer product, with some sort of freemium backend thing for business. >> And he was saying too, it's natural language is the way in which you're going to interact with these systems. >> I think it's APIs, it's APIs, APIs, APIs, because these people who are cooking up these models, and it takes a lot of compute power to train these and to, for inference as well. Somebody did the analysis on the how many cents a Google search costs to Google, and how many cents the ChatGPT query costs. It's, you know, 100x or something on that. You can take a look at that. >> A 100x on which side? >> You're saying two orders of magnitude more expensive for ChatGPT >> Much more, yeah. >> Than for Google. 
>> It's very expensive. >> So Google's got the data, they got the infrastructure and they got, you're saying they got the cost (indistinct) >> No actually it's a simple query as well, but they are trying to put together the answers, and they're going through a lot more data versus index data already, you know. >> Let me clarify, you're saying that Google's version of ChatGPT is more efficient? >> No, I'm, I'm saying Google search results. >> Ah, search results. >> What are used to today, but cheaper. >> But that, does that, is that going to confer advantage to Google's large language (indistinct)? >> It will, because there were deep science (indistinct). >> Google, I don't think Google search is doing a large language model on their search, it's keyword search. You know, what's the weather in Santa Cruz? Or how, what's the weather going to be? Or you know, how do I find this? Now they have done a smart job of doing some things with those queries, auto complete, re direct navigation. But it's, it's not entity. It's not like, "Hey, what's Dave Vellante thinking this week in Breaking Analysis?" ChatGPT might get that, because it'll get your Breaking Analysis, it'll synthesize it. There'll be some, maybe some clips. It'll be like, you know, I mean. >> Well I got to tell you, I asked ChatGPT to, like, I said, I'm going to enter a transcript of a discussion I had with Nir Zuk, the CTO of Palo Alto Networks, And I want you to write a 750 word blog. I never input the transcript. It wrote a 750 word blog. It attributed quotes to him, and it just pulled a bunch of stuff that, and said, okay, here it is. It talked about Supercloud, it defined Supercloud. >> It's made, it makes you- >> Wow, But it was a big lie. It was fraudulent, but still, blew me away. >> Again, vanilla content and non accurate content. So we are going to see a surge of misinformation on steroids, but I call it the vanilla content. Wow, that's just so boring, (indistinct). >> There's so many dangers. >> Make your point, cause we got to, almost out of time. >> Okay, so the consumption, like how do you consume this thing. As humans, we are consuming it and we are, like, getting a nicely, like, surprisingly shocked, you know, wow, that's cool. It's going to increase productivity and all that stuff, right? And on the danger side as well, the bad actors can take hold of it and create fake content and we have the fake sort of intelligence, if you go out there. So that's one thing. The second thing is, we are as humans are consuming this as language. Like we read that, we listen to it, whatever format we consume that is, but the ultimate usage of that will be when the machines can take that output from likes of ChatGPT, and do actions based on that. The robots can work, the robot can paint your house, we were talking about, right? Right now we can't do that. >> Data apps. >> So the data has to be ingested by the machines. It has to be digestible by the machines. And the machines cannot digest unorganized data right now, we will get better on the ingestion side as well. So we are getting better. >> Data, reasoning, insights, and action. >> I like that mall, paint my house. >> So, okay- >> By the way, that means drones that'll come in. Spray painting your house. >> Hey, it wasn't too long ago that robots couldn't climb stairs, as I like to point out. Okay, and of course it's no surprise the venture capitalists are lining up to eat at the trough, as I'd like to say. 
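To make the "API side of the business" point concrete, here is a hedged sketch of what a metered, per-query integration looks like from the developer's side, along the lines of the 750-word blog prompt mentioned above. It assumes the openai Python package and a davinci-style completion model; model names, parameters, and per-token pricing change over time, so treat it as illustrative rather than a reference implementation.

```python
# Hedged sketch: send a prompt to a hosted completion endpoint and print the
# generated text. Each call is metered per token, which is where the
# per-query cost comparison with conventional search comes from.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]   # never hard-code credentials

prompt = (
    "Write a roughly 750-word blog post summarizing a panel discussion about "
    "whether first mover advantage matters for large language model vendors."
)

response = openai.Completion.create(
    model="text-davinci-003",   # illustrative model choice; subject to change
    prompt=prompt,
    max_tokens=1024,
    temperature=0.7,
)

print(response["choices"][0]["text"].strip())
```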
Let's hear, you'd referenced this earlier, John, let's hear what AI expert Howie Xu said at the Supercloud event about what it takes to clone ChatGPT. Please, play the clip. >> So one of the VCs actually asked me the other day, right? "Hey, how much money do I need to spend, invest to get a, you know, another shot at the OpenAI sort of level." You know, I did a (indistinct) >> Line up. >> A hundred million dollars is the order of magnitude that I came up with, right? You know, not a billion, not 10 million, right? So a hundred- >> Guys, a hundred million dollars, that's an astoundingly low figure. What do you make of it? >> I was in an interview with, I was interviewing, I think he said a hundred million or so, but in the hundreds of millions, not a billion, right? >> You were trying to get him up, you were like "Hundreds of millions." >> Well I think, I- >> He's like, eh, not 10, not a billion. >> Well, first of all, Howie Xu's an expert in machine learning. He's at Zscaler, he's a machine learning AI guy. But he comes from VMware, his technology pedigree is really off the chart. Great friend of theCUBE and kind of like a CUBE analyst for us. And he's smart. He's right. I think the barriers to entry from a dollar standpoint are lower than, say, the CapEx required to compete with AWS. Clearly, the CapEx spending to build all the tech to run a cloud. >> And you don't need a huge sales force. >> And in some case apps too, it's the same thing. But I think it's not that hard. >> But am I right about that? You don't need a huge sales force either. It's, what, you know >> If the product's good, it will sell, this is a new era. The better mousetrap will win. This is the new economics in software, right? So- >> Because you look at the amount of money Lacework, and Snyk, Snowflake, Databricks. Look at the amount of money they've raised. I mean it's like a billion dollars before they get to IPO or more. 'Cause they need promotion, they need go-to-market. You don't need (indistinct) >> OpenAI's been working on this for multiple years, five years plus; it wasn't born yesterday. Took a lot of years to get going. And Sam is depositioning all the success, because he's trying to manage expectations, to your point, Sarbjeet, earlier. It's like, yeah, he's trying to, "Whoa, whoa, settle down everybody, (Dave laughs) it's not that great," because he doesn't want to fall into that, you know, hero and then get taken down, so. >> It may take 100 million or 150 or 200 million to train the model. But for the inference, yeah, for the inference machine, it will take a lot more, I believe. >> Give it, so imagine, >> Because- >> Go ahead, sorry. >> Go ahead. But because it consumes a lot more compute cycles and a certain level of storage and everything, right, which they already have. So I think the compute is different. To train the model is a different cost. But to run the business is different, because I think 100 million can go into just fighting the Fed. >> Well, there's a flywheel too. >> Oh that's (indistinct) >> (indistinct) >> We are running the business, right? >> It's an interesting number, but there's also kind of, like, context to it. So here, a hundred million, spend it, you get there, but you got to factor in the fact that the way companies win these days is critical mass, scale, hitting a flywheel. If they can keep that flywheel of the value that they got going on and get better, you can almost imagine a marketplace where, hey, we have proprietary data, we're SiliconANGLE and theCUBE. 
We have proprietary content, CUBE videos, transcripts. Well wouldn't it be great if someone in a marketplace could sell a module for us, right? We buy that, Amazon's thing and things like that. So if they can get a marketplace going where you can apply to data sets that may be proprietary, you can start to see this become bigger. And so I think the key barriers to entry is going to be success. I'll give you an example, Reddit. Reddit is successful and it's hard to copy, not because of the software. >> They built the moat. >> Because you can, buy Reddit open source software and try To compete. >> They built the moat with their community. >> Their community, their scale, their user expectation. Twitter, we referenced earlier, that thing should have gone under the first two years, but there was such a great emotional product. People would tolerate the fail whale. And then, you know, well that was a whole 'nother thing. >> Then a plane landed in (John laughs) the Hudson and it was over. >> I think verticals, a lot of verticals will build applications using these models like for lawyers, for doctors, for scientists, for content creators, for- >> So you'll have many hundreds of millions of dollars investments that are going to be seeping out. If, all right, we got to wrap, if you had to put odds on it that that OpenAI is going to be the leader, maybe not a winner take all leader, but like you look at like Amazon and cloud, they're not winner take all, these aren't necessarily winner take all markets. It's not necessarily a zero sum game, but let's call it winner take most. What odds would you give that open AI 10 years from now will be in that position. >> If I'm 0 to 10 kind of thing? >> Yeah, it's like horse race, 3 to 1, 2 to 1, even money, 10 to 1, 50 to 1. >> Maybe 2 to 1, >> 2 to 1, that's pretty low odds. That's basically saying they're the favorite, they're the front runner. Would you agree with that? >> I'd say 4 to 1. >> Yeah, I was going to say I'm like a 5 to 1, 7 to 1 type of person, 'cause I'm a skeptic with, you know, there's so much competition, but- >> I think they're definitely the leader. I mean you got to say, I mean. >> Oh there's no question. There's no question about it. >> The question is can they execute? >> They're not Friendster, is what you're saying. >> They're not Friendster and they're more like Twitter and Reddit where they have momentum. If they can execute on the product side, and if they don't stumble on that, they will continue to have the lead. >> If they say stay neutral, as Sam is, has been saying, that, hey, Microsoft is one of our partners, if you look at their company model, how they have structured the company, then they're going to pay back to the investors, like Microsoft is the biggest one, up to certain, like by certain number of years, they're going to pay back from all the money they make, and after that, they're going to give the money back to the public, to the, I don't know who they give it to, like non-profit or something. (indistinct) >> Okay, the odds are dropping. (group talks over each other) That's a good point though >> Actually they might have done that to fend off the criticism of this. But it's really interesting to see the model they have adopted. 
>> The wildcard in all this, my last word on this is that, if there's a developer shift in how developers and data can come together again, we have conferences around the future of data, Supercloud and meshes versus, you know, how the data world, coding with data, how that evolves will also dictate, 'cause a wild card could be a shift in the landscape around how developers are using either machine learning or AI-like techniques to code into their apps, so. >> That's fantastic insight. I can't thank you enough for your time, on the heels of Supercloud 2, really appreciate it. All right, thanks to John and Sarbjeet for the outstanding conversation today. Special thanks to the Palo Alto studio team. My goodness, Anderson, this great backdrop. You guys got it all out here, I'm jealous. And Noah, really appreciate it, Chuck, Andrew Frick and Cameron, Andrew Frick switching, Cameron on the video lake, great job. And Alex Myerson, he's on production, manages the podcast for us, Ken Schiffman as well. Kristen Martin and Cheryl Knight help get the word out on social media and our newsletters. Rob Hof is our editor-in-chief over at SiliconANGLE, does some great editing, thanks to all. Remember, all these episodes are available as podcasts. All you got to do is search Breaking Analysis podcast, wherever you listen. We publish each week on wikibon.com and siliconangle.com. Want to get in touch, email me directly, david.vellante@siliconangle.com or DM me at dvellante, or comment on our LinkedIn post. And by all means, check out etr.ai. They got really great survey data in the enterprise tech business. This is Dave Vellante for theCUBE Insights powered by ETR. Thanks for watching. We'll see you next time on Breaking Analysis. (electronic music)
Bob Muglia, George Gilbert & Tristan Handy | How Supercloud will Support a new Class of Data Apps
(upbeat music) >> Hello, everybody. This is Dave Vellante. Welcome back to Supercloud2, where we're exploring the intersection of data analytics and the future of cloud. In this segment, we're going to look at how the Supercloud will support a new class of applications, not just work that runs on multiple clouds, but rather a new breed of apps that can orchestrate things in the real world. Think Uber for many types of businesses. These applications, they're not about codifying forms or business processes. They're about orchestrating people, places, and things in a business ecosystem. And I'm pleased to welcome my colleague and friend, George Gilbert, former Gartner analyst, Wikibon market analyst, former equities analyst, as my co-host. And we're thrilled to have Tristan Handy, who's the founder and CEO of DBT Labs, and Bob Muglia, who's the former President of Microsoft's Enterprise business and former CEO of Snowflake. Welcome all, gentlemen. Thank you for coming on the program. >> Good to be here. >> Thanks for having us. >> Hey, look, I'm going to start actually with the SuperCloud because both Tristan and Bob, you've read the definition. Thank you for doing that. And Bob, you have some really good input, some thoughts on maybe some of the drawbacks and how we can advance this. So what are your thoughts in reading that definition around SuperCloud? >> Well, I thought first of all that you did a very good job of laying out all of the characteristics of it and helping to define it overall. But I do think it can be tightened a bit, and I think it's helpful to do it in as short a way as possible. And so in the last day I've spent a little time thinking about how to take it and write a crisp definition. And here's my go at it. This is one day old, so gimme a break if it's going to change. And of course we have to follow the industry, and so that, and whatever the industry decides, but let's give this a try. So in the way I think you're defining it, what I would say is a SuperCloud is a platform that provides programmatically consistent services hosted on heterogeneous cloud providers. >> Boom. Nice. Okay, great. I'm going to go back and read the script on that one and tighten that up a bit. Thank you for spending the time thinking about that. Tristan, would you add anything to that, or what are your thoughts on the whole SuperCloud concept? >> So as I read through this, I fully realize that we need a word for this thing, because I have experienced the inability to talk about it as well. But for many of us who have been living in the Confluent, Snowflake, you know, this world of like new infrastructure, this seems fairly uncontroversial. Like I read through this, and I'm just like, yeah, this is like the world I've been living in for years now. And I noticed that you called out Snowflake for being an example of this, but I think that there are like many folks, myself included, for whom this world like fully exists today. >> Yeah, I think that's a fair, I dunno if it's criticism, but people observe, well, what's the big deal here? It's just kind of what we're living in today. It reminds me of, you know, Tim Berners-Lee saying, well, this is what the internet was supposed to be. It was supposed to be Web 2.0, so maybe this is what multi-cloud was supposed to be. Let's turn our attention to apps. Bob first and then go to Tristan. Bob, what are data apps to you? When people talk about data products, is that what they mean? Are we talking about something more, different? What are data apps to you? 
>> Well, to understand data apps, it's useful to contrast them to something, and I just use the simple term people apps. I know that's a little bit awkward, but it's clear. And almost everything we work with, almost every application that we're familiar with, be it email or Salesforce or any consumer app, those are applications that are targeted at responding to people. You know, in contrast, a data application reacts to changes in data and uses some set of analytic services to autonomously take action. So where applications that we're familiar with respond to people, data apps respond to changes in data. And they both do something, but they do it for different reasons. >> Got it. You know, George, you and I were talking about, you know, it comes back to SuperCloud, broad definition, narrow definition. Tristan, how do you see it? Do you see it the same way? Do you have a different take on data apps? >> Oh, geez. This is like a conversation that I don't know has an end. I write a Substack, and there's like this little community of people who all write Substacks. We argue with each other about these kinds of things. Like, you know, there are as many different takes on this question as you can find, but the way that I think about it is that data products are atomic units of functionality that are fundamentally data driven in nature. So a data product can be as simple as an interactive dashboard that has like actually had design thinking put into it and serves a particular user group and has like actually gone through kind of a product development life cycle. And then a data app or data application is a kind of cohesive end-to-end experience that often encompasses like many different data products. So from my perspective there, this is very, very related to the way that these things are produced, the kinds of experiences that they provide, that like data innovates every product that we've been building in, you know, software engineering for, you know, as long as there have been computers. >> You know, Zhamak Dehghani oftentimes uses the, you know, she doesn't name Spotify, but I think it's Spotify as that kind of example she uses. But I wonder if we can maybe try to take some examples. If you take, like George, if you take a CRM system today, you're inputting leads, you got opportunities, it's driven by humans, they're really inputting the data, and then you got this system that kind of orchestrates the business process, like runs a forecast. But in this data driven future, are we talking about the app itself pulling data in and automatically looking at data from the transaction systems, the call center, the supply chain, and then actually building a plan? George, is that how you see it? >> I go back to the example of Uber, which may not be the most sophisticated data app that we build now, but it was like one of the first where you do have users interacting with their devices as riders trying to call a car or driver. But the app then looks at the location of all the drivers in proximity, and it matches a driver to a rider. It calculates an ETA to the rider. It calculates an ETA then to the destination, and it calculates a price. Those are all activities that are done sort of autonomously that don't require a human to type something into a form. The application is using changes in data to calculate an analytic product and then to operationalize that, to assign the driver, to, you know, calculate a price. Those are, that's an example of what I would think of as a data app. 
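To make that concrete, here is a toy sketch of a data app in the sense George describes: nothing is typed into a form; the app reacts to a change in data, a new ride request event, and autonomously matches a driver, estimates ETAs, and computes a price. All of the names, constants, and formulas below are illustrative assumptions, not any real ride-hailing system.

```python
import math

AVG_SPEED_KMH = 30              # assumed average city driving speed
BASE_FARE, PER_KM = 2.50, 1.20  # assumed pricing parameters

def haversine_km(a, b):
    """Approximate distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def on_ride_requested(request, available_drivers):
    """Triggered by a data change (a new request), not by a person filling out a form."""
    driver = min(available_drivers, key=lambda d: haversine_km(d["loc"], request["pickup"]))
    pickup_km = haversine_km(driver["loc"], request["pickup"])
    trip_km = haversine_km(request["pickup"], request["dropoff"])
    return {
        "driver_id": driver["id"],
        "eta_to_rider_min": round(60 * pickup_km / AVG_SPEED_KMH, 1),
        "eta_to_destination_min": round(60 * (pickup_km + trip_km) / AVG_SPEED_KMH, 1),
        "price": round(BASE_FARE + PER_KM * trip_km, 2),
    }

# A new request event arrives and the app acts on it autonomously.
drivers = [{"id": "d1", "loc": (37.44, -122.16)}, {"id": "d2", "loc": (37.48, -122.23)}]
print(on_ride_requested({"pickup": (37.45, -122.18), "dropoff": (37.39, -122.08)}, drivers))
```

In a production system the trigger would come from a stream of location and request events rather than a direct function call, but the shape is the same: data changes in, autonomous action out.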
And my question then I guess for Tristan is if we don't have all the pieces in place for sort of mainstream companies to build those sorts of apps easily yet, like how would we get started? What's the role of a semantic layer in making that easier for mainstream companies to build? And how do we get started, you know, say with metrics? How does that, how does that take us down that path? >> So what we've seen in the past, I dunno, decade or so, is that one of the most successful business models in infrastructure is taking hard things and rolling 'em up behind APIs. You take messaging, you take payments, and you all of a sudden increase the capability of kind of your median application developer. And you say, you know, previously you were spending all your time being focused on how do you accept credit cards, how do you send SMS payments, and now you can focus on your business logic, and just create the thing. One of, interestingly, one of the things that we still don't know how to API-ify is concepts that live inside of your data warehouse, inside of your data lake. These are core concepts that, you know, you would imagine that the business would be able to create applications around very easily, but in fact that's not the case. It's actually quite challenging to, and involves a lot of data engineering pipeline and all this work to make these available. And so if you really want to make it very easy to create some of these data experiences for users, you need to have an ability to describe these metrics and then to turn them into APIs to make them accessible to application developers who have literally no idea how they're calculated behind the scenes, and they don't need to. >> So how rich can that API layer grow if you start with metric definitions that you've defined? And DBT has, you know, the metric, the dimensions, the time grain, things like that, that's a well scoped sort of API that people can work within. How much can you extend that to say non-calculated business rules or governance information like data reliability rules, things like that, or even, you know, features for an AIML feature store. In other words, it starts, you started pragmatically, but how far can you grow? >> Bob is waiting with bated breath to answer this question. I'm, just really quickly, I think that we as a company and DBT as a product tend to be very pragmatic. We try to release the simplest possible version of a thing, get it out there, and see if people use it. But the idea that, the concept of a metric is really just a first landing pad. The really, there is a physical manifestation of the data and then there's a logical manifestation of the data. And what we're trying to do here is make it very easy to access the logical manifestation of the data, and metric is a way to look at that. Maybe an entity, a customer, a user is another way to look at that. And I'm sure that there will be more kind of logical structures as well. >> So, Bob, chime in on this. You know, what's your thoughts on the right architecture behind this, and how do we get there? >> Yeah, well first of all, I think one of the ways we get there is by what companies like DBT Labs and Tristan is doing, which is incrementally taking and building on the modern data stack and extending that to add a semantic layer that describes the data. Now the way I tend to think about this is a fairly major shift in the way we think about writing applications, which is today a code first approach to moving to a world that is model driven. 
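One way to picture the "metrics behind an API" idea Tristan describes is a layer that holds the logical definition of a metric and compiles requests into warehouse SQL, so an application developer never sees how it is calculated. This is only a sketch of the concept; the definition format and generated SQL below are assumptions, not dbt's actual semantic layer syntax.

```python
from dataclasses import dataclass, field

@dataclass
class Metric:
    name: str
    sql_expression: str   # how the warehouse computes the measure
    source_table: str     # physical location, hidden from callers
    time_dimension: str
    dimensions: list = field(default_factory=list)

REVENUE = Metric(
    name="revenue",
    sql_expression="sum(order_total)",
    source_table="analytics.orders",
    time_dimension="ordered_at",
    dimensions=["region", "channel"],
)

def compile_metric_query(metric: Metric, grain: str, group_by: list) -> str:
    """Turn a logical request ('revenue by month and region') into SQL the caller never writes."""
    select_dims = [f"date_trunc('{grain}', {metric.time_dimension}) as period", *group_by]
    group_cols = ", ".join(str(i + 1) for i in range(len(select_dims)))
    return (
        f"select {', '.join(select_dims)}, {metric.sql_expression} as {metric.name} "
        f"from {metric.source_table} group by {group_cols}"
    )

# An app developer asks for the metric by name; the computation stays behind the API.
print(compile_metric_query(REVENUE, grain="month", group_by=["region"]))
```

A real semantic layer would validate the requested dimensions, handle joins across models, and return result sets rather than SQL strings, but the division of labor is the point: the definition lives in one place, and consumers just ask for it.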
And I think that's what the big change will be is that where today we think about data, we think about writing code, and we use that to produce APIs as Tristan said, which encapsulates those things together in some form of services that are useful for organizations. And that idea of that encapsulation is never going to go away. It's very, that concept of an API is incredibly useful and will exist well into the future. But what I think will happen is that in the next 10 years, we're going to move to a world where organizations are defining models first of their data, but then ultimately of their business process, their entire business process. Now the concept of a model driven world is a very old concept. I mean, I first started thinking about this and playing around with some early model driven tools, probably before Tristan was born in the early 1980s. And those tools didn't work because the semantics associated with executing the model were too complex to be written in anything other than a procedural language. We're now reaching a time where that is changing, and you see it everywhere. You see it first of all in the world of machine learning and machine learning models, which are taking over more and more of what applications are doing. And I think that's an incredibly important step. And learned models are an important part of what people will do. But if you look at the world today, I will claim that we've always been modeling. Modeling has existed in computers since there have been integrated circuits and any form of computers. But what we do is what I would call implicit modeling, which means that it's the model is written on a whiteboard. It's in a bunch of Slack messages. It's on a set of napkins in conversations that happen and during Zoom. That's where the model gets defined today. It's implicit. There is one in the system. It is hard coded inside application logic that exists across many applications with humans being the glue that connects those models together. And really there is no central place you can go to understand the full attributes of the business, all of the business rules, all of the business logic, the business data. That's going to change in the next 10 years. And we'll start to have a world where we can define models about what we're doing. Now in the short run, the most important models to build are data models and to describe all of the attributes of the data and their relationships. And that's work that DBT Labs is doing. A number of other companies are doing that. We're taking steps along that way with catalogs. People are trying to build more complete ontologies associated with that. The underlying infrastructure is still super, super nascent. But what I think we'll see is this infrastructure that exists today that's building learned models in the form of machine learning programs. You know, some of these incredible machine learning programs in foundation models like GPT and DALL-E and all of the things that are happening in these global scale models, but also all of that needs to get applied to the domains that are appropriate for a business. And I think we'll see the infrastructure developing for that, that can take this concept of learned models and put it together with more explicitly defined models. And this is where the concept of knowledge graphs come in and then the technology that underlies that to actually implement and execute that, which I believe are relational knowledge graphs. >> Oh, oh wow. There's a lot to unpack there. 
So let me ask the Columbo question, Tristan, we've been making fun of your youth. We're just, we're just jealous. Columbo, I'll explain it offline maybe. >> I watch Columbo. >> Okay. All right, good. So but today, if you think about the application stack and the data stack, which is largely an analytics pipeline, they're separate. Do they, those worlds, do they have to come together in order to achieve Bob's vision? When I talk to practitioners about that, they're like, well, I don't want to complexify the application stack, 'cause the data stack today is so, you know, hard to manage. But do those worlds have to come together? And you know, through that model, I guess abstraction or translation that Bob was just describing, how do you guys think about that? Who wants to take that? >> I think it's inevitable that data and AI are going to become closer together. I think that the infrastructure there has been moving in that direction for a long time, whether you want to use the Lakehouse portmanteau or not. There's also, there's a next generation of data tech that is still in the like early stage of being developed. There's a company that I love that is essentially Cross Cloud Lambda, and it's just a wonderful abstraction for computing. So I think that, you know, people have been predicting that these worlds are going to come together for a while. A16Z wrote a great post on this back in I think 2020, predicting this, and I've been predicting this since 2020. But what's not clear is the timeline, but I think that this is still just as inevitable as it's been. >> Who's that that does Cross Cloud? >> Let me follow up on. >> Who's that, Tristan, that does Cross Cloud Lambda? Can you name names? >> Oh, they're called Modal Labs. >> Modal Labs, yeah, of course. All right, go ahead, George.
Certainly Google, Microsoft and Amazon are doing very, very similar things in terms of building complete solutions that bring together an analytics stack that typically supports languages like Python together with the data stack and the data warehouse. I mean, all of those things are going to evolve, and they're not going to go away because that infrastructure is relatively new. It's just being deployed by companies, and it solves the problem of working with petabytes of data if you need to work with petabytes of data, and nothing will do that for a long time. What's missing is a layer that understands and can model the semantics of all of this. And if you need to, if you want to model all, if you want to talk about all the semantics of even data, you need to think about all of the relationships. You need to think about how these things connect together. And unfortunately, there really is no platform today. None of our existing platforms are ultimately sufficient for this. It was interesting, I was just talking to a customer yesterday, you know, a large financial organization that is building out these semantic layers. They're further along than many companies are. And you know, I asked what they're building it on, and you know, it's not surprising they're using a, they're using combinations of some form of search together with, you know, textual based search together with a document oriented database. In this case it was Cosmos. And that really is kind of the state of the art right now. And yet those products were not built for this. They don't really, they can't manage the complicated relationships that are required. They can't issue the queries that are required. And so a new generation of database needs to be developed. And fortunately, you know, that is happening. The world is developing a new set of relational algorithms that will be able to work with hundreds of different relations. If you look at a SQL database like Snowflake or a big query, you know, you get tens of different joins coming together, and that query is going to take a really long time. Well, fortunately, technology is evolving, and it's possible with new join algorithms, worst case, optimal join algorithms they're called, where you can join hundreds of different relations together and run semantic queries that you simply couldn't run. Now that technology is nascent, but it's really important, and I think that will be a requirement to have this semantically reach its full potential. In the meantime, Tristan can do a lot of great things by building up on what he's got today and solve some problems that are very real. But in the long run I think we'll see a new set of databases to support these models. >> So Tristan, you got to respond to that, right? You got to, so take the example of Snowflake. We know it doesn't deal well with complex joins, but they're, they've got big aspirations. They're building an ecosystem to really solve some of these problems. Tristan, you guys are part of that ecosystem, and others, but please, your thoughts on what Bob just shared. >> Bob, I'm curious if, I would have no idea what you were talking about except that you introduced me to somebody who gave me a demo of a thing and do you not want to go there right now? >> No, I can talk about it. I mean, we can talk about it. 
Look, the company I've been working with is Relational AI, and they're doing this work to actually first of all work across the industry with academics and research, you know, across many, many different, over 20 different research institutions across the world to develop this new set of algorithms. They're all fully published, just like SQL, the underlying algorithms that are used by SQL databases are. If you look today, every single SQL database uses a similar set of relational algorithms underneath that. And those algorithms actually go back to system R and what IBM developed in the 1970s. We're just, there's an opportunity for us to build something new that allows you to take, for example, instead of taking data and grouping it together in tables, treat all data as individual relations, you know, a key and a set of values and then be able to perform purely relational operations on it. If you go back to what, to Codd, and what he wrote, he defined two things. He defined a relational calculus and relational algebra. And essentially SQL is a query language that is translated by the query processor into relational algebra. But however, the calculus of SQL is not even close to the full semantics of the relational mathematics. And it's possible to have systems that can do everything and that can store all of the attributes of the data model or ultimately the business model in a form that is much more natural to work with. >> So here's like my short answer to this. I think that we're dealing in different time scales. I think that there is actually a tremendous amount of work to do in the semantic layer using the kind of technology that we have on the ground today. And I think that there's, I don't know, let's say five years of like really solid work that there is to do for the entire industry, if not more. But the wonderful thing about DBT is that it's independent of what the compute substrate is beneath it. And so if we develop new platforms, new capabilities to describe semantic models in more fine grain detail, more procedural, then we're going to support that too. And so I'm excited about all of it. >> Yeah, so interpreting that short answer, you're basically saying, cause Bob was just kind of pointing to you as incremental, but you're saying, yeah, okay, we're applying it for incremental use cases today, but we can accommodate a much broader set of examples in the future. Is that correct, Tristan? >> I think you're using the word incremental as if it's not good, but I think that incremental is great. We have always been about applying incremental improvement on top of what exists today, but allowing practitioners to like use different workflows to actually make use of that technology. So yeah, yeah, we are a very incremental company. We're going to continue being that way. >> Well, I think Bob was using incremental as a pejorative. I mean, I, but to your point, a lot. >> No, I don't think so. I want to stop that. No, I don't think it's pejorative at all. I think incremental, incremental is usually the most successful path. >> Yes, of course. >> In my experience. >> We agree, we agree on that. >> Having tried many, many moonshot things in my Microsoft days, I can tell you that being incremental is a good thing. And I'm a very big believer that that's the way the world's going to go. I just think that there is a need for us to build something new and that ultimately that will be the solution. 
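As a concrete, deliberately naive illustration of Bob's "treat all data as individual relations" framing earlier in this exchange, the sketch below represents each relation as a small list of records and answers a question by natural-joining several of them. This is a toy pairwise join, not the worst-case optimal join algorithms he refers to, and the relations and attribute names are made up for illustration.

```python
def natural_join(r1, r2):
    """Join two relations (lists of dicts) on whatever attribute names they share."""
    shared = set(r1[0]) & set(r2[0]) if r1 and r2 else set()
    return [
        {**a, **b}
        for a in r1
        for b in r2
        if all(a[k] == b[k] for k in shared)
    ]

def join_all(relations):
    """Fold a natural join across many small relations."""
    result = relations[0]
    for rel in relations[1:]:
        result = natural_join(result, rel)
    return result

# Three narrow relations instead of one wide table.
works_at = [{"person": "ada", "employer": "acme"}, {"person": "grace", "employer": "globex"}]
located = [{"employer": "acme", "city": "palo alto"}, {"employer": "globex", "city": "boston"}]
knows = [{"person": "ada", "skill": "sql"}, {"person": "grace", "skill": "python"}]

# "Which skills are known in which city?" emerges from joining the relations.
print(join_all([works_at, located, knows]))
```

The pairwise approach blows up combinatorially as the number of relations grows, which is exactly the problem the newer join algorithms Bob mentions are designed to avoid.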
Now you can argue whether it's two years, three years, five years, or 10 years, but I'd be shocked if it didn't happen in 10 years. >> Yeah, so we all agree that incremental is less disruptive. Boom, but Tristan, you're, I think I'm inferring that you believe you have the architecture to accommodate Bob's vision, and then Bob, and I'm inferring from Bob's comments that maybe you don't think that's the case, but please. >> No, no, no. I think that, so Bob, let me put words into your mouth and you tell me if you disagree, DBT is completely useless in a world where a large scale cloud data warehouse doesn't exist. We were not able to bring the power of Python to our users until these platforms started supporting Python. Like DBT is a layer on top of large scale computing platforms. And to the extent that those platforms extend their functionality to bring more capabilities, we will also service those capabilities. >> Let me try and bridge the two. >> Yeah, yeah, so Bob, Bob, Bob, do you concur with what Tristan just said? >> Absolutely, I mean there's nothing to argue with in what Tristan just said. >> I wanted. >> And it's what he's doing. It'll continue to, I believe he'll continue to do it, and I think it's a very good thing for the industry. You know, I'm just simply saying that on top of that, I would like to provide Tristan and all of those who are following similar paths to him with a new type of database that can actually solve these problems in a much more architected way. And when I talk about Cosmos with something like Mongo or Cosmos together with Elastic, you're using Elastic as the join engine, okay. That's the purpose of it. It becomes a poor man's join engine. And I kind of go, I know there's a better answer than that. I know there is, but that's kind of where we are state of the art right now. >> George, we got to wrap it. So give us the last word here. Go ahead, George. >> Okay, I just, I think there's a way to tie together what Tristan and Bob are both talking about, and I want them to validate it, which is for five years we're going to be adding or some number of years more and more semantics to the operational and analytic data that we have, starting with metric definitions. My question is for Bob, as DBT accumulates more and more of those semantics for different enterprises, can that layer not run on top of a relational knowledge graph? And what would we lose by not having, by having the knowledge graph store sort of the joins, all the complex relationships among the data, but having the semantics in the DBT layer? >> Well, I think this, okay, I think first of all that DBT will be an environment where many of these semantics are defined. The question we're asking is how are they stored and how are they processed? And what I predict will happen is that over time, as companies like DBT begin to build more and more richness into their semantic layer, they will begin to experience challenges that customers want to run queries, they want to ask questions, they want to use this for things where the underlying infrastructure becomes an obstacle. I mean, this has happened in always in the history, right? I mean, you see major advances in computer science when the data model changes. And I think we're on the verge of a very significant change in the way data is stored and structured, or at least metadata is stored and structured. Again, I'm not saying that anytime in the next 10 years, SQL is going to go away. 
In fact, more SQL will be written in the future than has been written in the past. And those platforms will mature to become the engines, the slicer dicers of data. I mean that's what they are today. They're incredibly powerful at working with large amounts of data, and that infrastructure is maturing very rapidly. What is not maturing is the infrastructure to handle all of the metadata and the semantics that that requires. And that's where I say knowledge graphs are what I believe will be the solution to that. >> But Tristan, bring us home here. It sounds like, let me put pause at this, is that whatever happens in the future, we're going to leverage the vast system that has become cloud that we're talking about a supercloud, sort of where data lives irrespective of physical location. We're going to have to tap that data. It's not necessarily going to be in one place, but give us your final thoughts, please. >> 100% agree. I think that the data is going to live everywhere. It is the responsibility for both the metadata systems and the data processing engines themselves to make sure that we can join data across cloud providers, that we can join data across different physical regions and that we as practitioners are going to kind of start forgetting about details like that. And we're going to start thinking more about how we want to arrange our teams, how does the tooling that we use support our team structures? And that's when data mesh I think really starts to get very, very critical as a concept. >> Guys, great conversation. It was really awesome to have you. I can't thank you enough for spending time with us. Really appreciate it. >> Thanks a lot. >> All right. This is Dave Vellante for George Gilbert, John Furrier, and the entire Cube community. Keep it right there for more content. You're watching SuperCloud2. (upbeat music)
Glen Kurisingal & Nicholas Criss, T-Mobile | AWS re:Invent 2022
>> Good morning, friends. Live from Las Vegas, it's theCUBE. Day four of our coverage of AWS re:Invent continues. Lisa Martin here with Dave Vellante. You >> Can tell it's day four. Yeah. >> You can tell, you >> Get punchy. >> Did you? Yes. Did you know that the Vegas rodeo is coming into town? I'm kind of bummed I'm leaving tonight. >> Really? You a rodeo >> Fan this weekend? No, but to see a bunch of cowboys in Vegas, >> I'd like to see the Raiders. I'd like to see the Raiders get tickets. >> Yeah. And the hockey team. Yeah. We have had an amazing event, Dave. theCUBE's 10th year covering re:Invent, the 11th re:Invent. >> Our 10th year here. Yeah. Yes. Yeah. I mean we covered remotely during Covid, but >> Yes, yes, yes. Awesome content. Anything jump out at you that we really, we, we love talking to AWS, the ecosystem. We got a customer next. Anything jump out at you that's really a kind of a key takeaway? >> Big story. The maturity of AWS, you know, I mean people ask me what's different under Adam than under Andy. And I'm like, really? It's the maturity of AWS is what's different, you know, ecosystem, connecting the dots, moving towards solutions, you know, that's, that's the big thing. And it's, you know, in a way it's kind of boring relative to other re:Invents, which are like, oh wow, oh my god, they announced Outposts. So you don't see anything like that. It's more taking the platform to the next level, which is a good >> Thing. The next level, it is a good thing. Speaking of next level, we have a couple of next level guests from T-Mobile joining us. We're gonna be talking through their customer story, their business transformation with AWS. Glen Kurisingal joins us, the director of product and technology, and Nicholas Criss, senior manager, product and technology. Guys, welcome. Great to have you. On brand, you're in T-Mobile brand. I love it. >> Yeah, >> I mean we are always T-Mobile. >> I love it. So, so everyone knows T-Mobile. Glen, you guys are in the digital commerce domain. Talk to us about what that is, what functions it delivers for T-Mobile. Yeah, >> So the digital commerce domain operates and runs a platform called the Digital Commerce Platform. What this essentially does, it's a set of APIs that are headless that power the shopping experiences. When you talk about shopping experiences at T-Mobile, a customer comes to either a T-Mobile website or goes to a store. And what they do is they start with the discovery process of a phone. They take it through the process, they decide to purchase the phone, they add, add the phone to cart, and then eventually they decide to, you know, basically pull the trigger and, and buy the phone, at, at which point they submit the order. So that whole experience, essentially from start to finish, is powered by the digital commerce platform. Just this year we have processed well over three and a half million orders, amounting to a billion and a half dollars worth of business for T-Mobile. >> Wow. Big outcomes. Nick, talk about the before stage, obviously the, the customer experience is absolutely critical, because if, if it goes awry, people churn. We know that, and nobody wants that, you know, brand reputation is at stake. Yep. Talk about some of the challenges before that you guys faced and how did you work with AWS and its partner ecosystem to address those challenges? >> Sure. Yeah. So actually before I started working with Glen on the commerce domain, I was part of T-Mobile's cloud team. 
So we were the team that kind of brought in AWS, and the commerce platform was really the first tier-one system to go a hundred percent cloud native. And so for us it was very much a learning experience and a journey to learn how to operate on the cloud, which was fundamentally different from how we were doing things in the old on-prem days. >> When you talk about headless APIs, you talk, I dunno if you saw Werner Vogels' keynote this morning, but you're talking about loosely coupled, a loosely coupled system that you can evolve without ripping out the whole system or without bringing the whole system down. Can you explain that in a little bit more detail? >> Absolutely. So the concept of headless API exactly opens up that possibility. What it allows us to do is to build and operate a platform that runs sort of loosely coupled from the user experiences. So when you think about this from a simplistic standpoint, you have a set of APIs that are headless, and you've got the website that connects to it, the retail store applications that connect to it, as well as the customer care applications that connect to it. And essentially what that does is it allows us to basically operate all these platforms without being sort of tightly coupled to >> Each other. Yeah, he was talking about this morning when, when AWS announced S3, you know, there was just a handful of services, maybe just two or three. I think now there's 200, and you know, it's never gone down, it's never been, you know, replaced essentially. And so, you know, the whole thing was it's an asynchronous system that's loosely coupled, and then you create that illusion of synchronicity for the customer. >> Exactly. >> Which was, I thought, you know, really well described, but maybe you guys could talk about what the genesis was for this system. Take us kind of to the, from the before and after, you know, the classic as-was and the, and as-is. Can you talk about that? >> Yeah, I can start and then hand it off to Nick for some more details. So we started this journey back in 2016, and at that point T-Mobile had seven or eight different commerce platforms. Obviously you can think about the complexity involved in running and operating those platforms. We've all talked about T-Mobile being the Un-carrier. It's a brand that we have basically popularized in the telco industry. We would come out with these massive Un-carrier moves, and every time that announcement was made, teams had to scramble because you've got seven systems, seven teams, every single system needs to be updated, right? So that's where we started when we kicked off this transformational journey. Over time, essentially we have brought it down to one platform that supports all these experiences, and what that allows us to do is not only time to market gets reduced immensely, but it also allows us to basically reduce our operational cost, cuz we don't have to have teams running seven, eight systems. It's just one system with one team that can focus on making it a world class, you know, platform. >> Yeah, I think one of the strategies that definitely paid off for us, cuz going all the way back to the beginning, our little platform was powering just a tiny little corner of the, of the webspace, right? But even in those days we approached it from, we're gonna build functions in a way that is sort of agnostic to what the experience is gonna be. So over time, as we would build a capability that one particular channel needed primarily, we were still thinking about all the other channels that needed it. 
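For readers who want a picture of what a headless commerce API looks like in practice, here is a minimal sketch: a handful of UI-free endpoints that a website, a retail store app, or a care tool could all call in exactly the same way. The routes, payload fields, and in-memory store are assumptions for illustration only, not T-Mobile's actual Digital Commerce Platform.

```python
from uuid import uuid4
from flask import Flask, jsonify, request

app = Flask(__name__)
carts = {}  # cart_id -> list of line items; a real platform would use a durable store

@app.route("/carts", methods=["POST"])
def create_cart():
    cart_id = str(uuid4())
    carts[cart_id] = []
    return jsonify({"cart_id": cart_id}), 201

@app.route("/carts/<cart_id>/items", methods=["POST"])
def add_item(cart_id):
    item = request.get_json()  # e.g. {"sku": "PHONE-128GB", "qty": 1}
    carts[cart_id].append(item)
    return jsonify({"cart_id": cart_id, "items": carts[cart_id]})

@app.route("/carts/<cart_id>/submit", methods=["POST"])
def submit_order(cart_id):
    # Web, retail, and care channels all submit the same way; the experience
    # layer on top can change freely without touching this service.
    return jsonify({"order_id": str(uuid4()), "items": carts.pop(cart_id, [])})

if __name__ == "__main__":
    app.run(port=8080)
```

The point of the sketch is the decoupling: because no user interface is baked in, any channel that speaks HTTP gets the same cart and order behavior.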
So now over a few years that investment pays off, and you have basically the same capabilities working in the same way across all the channels. >> When did the journey start? >> 2016. >> 2016, yeah. It's been, it's been six years. >> What are some of the game changers in, in this business transformation that you would say, these are some of the things that really ignited our transformation? >> Yeah, there's particularly one thing that we feel pretty proud about, which is the fact that we now operate what we call active-active stacks. And what that means is you've got a single stack of the e-commerce platform, start to finish, that can run in an independent manner, but we can also start adding additional stacks that are basically loosely coupled from each other but can, but can run to support the business. What that basically enables is it allows us to run in active-active mode, which itself is a big deal from a system uptime perspective. It really changes the game. It allows us to push releases without worrying about any kind of downtime. We've done canary releases, we are in the middle of retail season and we can introduce changes without worrying about it. And more importantly, I think what it has also allowed us to do is essentially practice disaster recovery while doing a release, cuz that's exactly what we do: every time we do a release, we are switching between these separate stacks and essentially are practicing our DR strategy. >> So you do this, it's, it's, you're separate across regions, I presume? Yes. Is that right? Yes. This is a really interesting conversation, because as you well know, in the on-prem world, you never tested that; disaster recovery was too risky, because you were afraid you were gonna take your whole business down, and you're essentially saying that the testing is fundamental to the implementation. >> Absolutely. >> It, it is the thing that you do for every release. So you know, at least every week or so you are doing this, and you know, in the old world, the active-passive world, on paper you had a bunch of capabilities, and in incidents that are even less than, say, a full disaster recovery scenario, you would end up making the choice not to use that capability because there was too much complexity or risk. When we put this in place, now, I tell people everything we do got easier after that. >> Is it a challenge for you, or how do you deal with the challenge, correct me if it's not a, a challenge, that sometimes Amazon services are not available in both regions? I think for instance, the observability thing that they just announced this week, it's not cross-region, or maybe I'm getting that wrong, but there are services where, you know, you might not be able to do data sharing across regions. How do you manage that? Or maybe there's different, you know, levels of certifications. How do you manage that discontinuity, or is that not an issue for you? >> Yeah, I mean it, it is certainly a concern, and so the stacks, like Glen said, they are largely decoupled, and what that means is practically every component, and there's a lot of components in there, has redundancy from an availability zone point of view. But then where the real magic happens is when you come in as a user to the stack, we're gonna initially kind of lock you on one stack. And then the key thing that we do is we, we understand the difference between what, what we would call the critical data. 
So think of like your shopping carts, and then contextual data that we can relatively easily reload if we need to. And so that critical data is constantly, in an async fashion, so it's not interrupting your performance, being broadcast out to a place where we can recover it if we need to, if we need to send you to another stack, and we call that dehydration. And if you end up getting bumped to a new stack, we rehydrate you on that stack and reload that, that contextual data. So to make that whole thing happen, we rely on something we call the global cart store and that's basically powered by Dynamo. So Dynamo is highly, highly reliable and multi-region. >>So, and, and I presume you're doing some form of serverless for the stateless stuff and, and maybe taking control of the runtime for the stateful things. Are you leaning in to serverless and Lambda, or not yet, cuz you want control over the, the, the EC2 and the memory configs? What, what's, I mean, I know we're going inside the plumbing a little bit, but it's kind of fun. >>That's always fun. >>Yeah, and, and it has been a journey. Back in 2016 when we started, we were all on EC2s, and, you know, over the last three or four years we have kind of gone through that journey where we went from EC2 to, to containers, and at some point we'll get to where we will be serverless, we've got a few functions running. But you know, in that journey, I think when you look at the full end of the spectrum, we are somewhere towards the, the process of sort of going from, you know, containers to, to serverless. >>Yeah. So today your team is setting up the containers, they're fencing 'em off, fencing off the app and doing all that sort of semi heavy lifting. Yeah. How do you deal with the, you know, this is one of the things Lisa, you and I were talking about, is the skill sets. We always talk about this. What's your team look like and what are the skill sets that you've got that you're deploying? >>Yeah, I mean, as you can imagine, it's a challenge and it's a, a highly specialized skill set that you need. And you talk about cloud, you know, I, I tell developers when we bring new folks in, in the old days, you could just be like really good at Java and study that and be good at that for decades. But in the cloud world, you have to be wide in, in your breadth. And so you have to understand those 200 services, right? And so one of the things that really has helped us is we've had a partner. So UST Global is a digital services company and they've really kind of been on the journey up the same timeline that we were. And I had worked with them on the cloud team, you know, before I came to commerce. And when I came to, to the commerce team, we were really struggling, especially from that operational perspective. >>The, the team was just not adapting to that new cloud reality. They were used to the on-prem world, but we brought these folks in because not only were they really able to understand the stuff, but they had built a lot of the platforms that we were gonna be leveraging for commerce with us on the cloud team. So for example, T-Mobile operates our own customized Kubernetes platform. We've done some stuff for serverless development, CI/CD, cloud security. And so not only did these folks have the right skill sets, but they knew how we were approaching it from a T-Mobile cloud perspective.
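Going back to the dehydration and rehydration flow described above, the shape of a global cart store on DynamoDB is roughly the following. The table name, key, and attributes are hypothetical placeholders rather than T-Mobile's actual schema, and the real write path would run asynchronously (for example behind a queue) rather than inline as shown here.

```python
# Sketch of the "dehydrate / rehydrate" idea: critical cart data is written out
# to a DynamoDB "global cart store" so a user can be moved to another stack and
# picked up where they left off. Table and attribute names are hypothetical.
import json
import boto3

dynamodb = boto3.resource("dynamodb")
cart_table = dynamodb.Table("global-cart-store")  # placeholder table name

def dehydrate_cart(customer_id: str, cart: dict) -> None:
    """Persist the critical data (the cart) outside the stack that built it."""
    cart_table.put_item(Item={
        "customer_id": customer_id,
        "cart": json.dumps(cart),
    })

def rehydrate_cart(customer_id: str) -> dict:
    """On the new stack, reload the cart; contextual data is re-fetched separately."""
    resp = cart_table.get_item(Key={"customer_id": customer_id})
    item = resp.get("Item")
    return json.loads(item["cart"]) if item else {}
```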
And so it's kind of, kind of fun to see, you know, when they came on board with this journey with us, we were both, both companies were relatively new and, and learning. Now I look and, you know, I, I think that they're like a, a platinum sponsor these days here of AWS, and so it's kind of cool to see how we've all grown together. >>A lot of evolution, a lot of maturation. Glen, I wanna know from you, we're almost out of time here, but tell me about the digital commerce domain. You kind of talked about this in the beginning, but I wanna know what's the value in it for me as a customer? All of this under the hood plumbing, yeah, the maturation, the transformation. How does it benefit me? >>Great question. So as a customer, all they care about is coming in, going to the website, walking into a store, and without spending too much time, completing that transaction and walking out. They don't care about what's under the hood, right? So this transformational journey, from, you know, like I talked about, we started with EC2s back in the day, it was what we call the wild west, on a cloud native platform, to where we have reached today. You know, the journey we have collectively traversed with UST has allowed us to basically build a system that allows a customer to walk into a store and not spend a whole hour dealing with a sales rep that's trying to sell them things. They can walk in and out quickly, they go to the website, literally within a couple minutes they can complete the transaction and leave. That's what customers want. It is. And that has really sort of helped us when you think about T-Mobile and the fact that we are now poised to be a leader in the US in telco. This whole concept of systems that really empower the customers to quickly complete their transaction has been one of the key components of allowing us to kind of make that growth. Right. So >>Right. And a big driver of revenue. >>Exactly. >>I have one final question for each of you. We're making an Instagram reel, so think about if you had 30 seconds to describe T-Mobile as a technology company that sells phones or a technology company that delights people, what, what would you say if you had a billboard, what would it say about that? Glen, what do you think? >>So T-Mobile, from a technology company perspective, the, the whole purpose of setting up T-Mobile's, you know, shopping experience is about bringing customers in, surprising and delighting them with the frictionless shopping experiences that basically allow them to come in and complete the transaction and move on with their lives. It's not about keeping them in the store for too long when they don't want to do it. And essentially the idea is to just basically surprise and delight our customers. >>Perfect. Nick, what would you say, what's your billboard about T-Mobile as a technology company that's delivering great services to its customers? >>Yeah, I think, you know, Glen really covered it well. What I would just add to that is I think the way that we are approaching it these days, really starting from that 2016 period, is we like to say we don't think of ourselves as a telco company anymore. We think of ourselves as a technology company that happens to do telco among other things, right? And so we've approached this from a point of view of we're here to provide the best possible experience we can to our customers, and we take it personally when, when we don't reach that high bar.
And so what we've done in the last few years as a transformation is really given us the toolbox that we need to be able to meet that promise. >>Awesome. Guys, it's been a pleasure having you on the program, talking about the transformation of T-Mobile. Great to hear what you're doing with AWS, the maturation, and we look forward to having you back on to see what's next. Thank you. >>Awesome. Thank you so much. >>All right, for our guests and Dave Vellante, I'm Lisa Martin. You're watching theCUBE, the leader in live enterprise and emerging tech coverage.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Lisa Martin | PERSON | 0.99+ |
Dave Valante | PERSON | 0.99+ |
Glen Kurisingal | PERSON | 0.99+ |
Nicholas Criss | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Dave Ante | PERSON | 0.99+ |
T-Mobile | ORGANIZATION | 0.99+ |
Glen | PERSON | 0.99+ |
30 seconds | QUANTITY | 0.99+ |
2016 | DATE | 0.99+ |
Glenn Curing | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
UST Global | ORGANIZATION | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
seven | QUANTITY | 0.99+ |
Nick Chris | PERSON | 0.99+ |
Vegas | LOCATION | 0.99+ |
Lisa | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
one system | QUANTITY | 0.99+ |
200 services | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
one team | QUANTITY | 0.99+ |
Raiders | ORGANIZATION | 0.99+ |
one platform | QUANTITY | 0.99+ |
six years | QUANTITY | 0.99+ |
Dynamo | ORGANIZATION | 0.99+ |
three | QUANTITY | 0.99+ |
Nick | PERSON | 0.99+ |
seven systems | QUANTITY | 0.99+ |
T-mobile | ORGANIZATION | 0.99+ |
10th year | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
seven teams | QUANTITY | 0.99+ |
both companies | QUANTITY | 0.99+ |
tonight | DATE | 0.99+ |
US | LOCATION | 0.99+ |
Andy | PERSON | 0.99+ |
this week | DATE | 0.98+ |
The Cube | TITLE | 0.98+ |
Adam | PERSON | 0.98+ |
T-Mobile Blend | ORGANIZATION | 0.98+ |
hundred percent | QUANTITY | 0.98+ |
telco | ORGANIZATION | 0.98+ |
200 | QUANTITY | 0.98+ |
one thing | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
eight systems | QUANTITY | 0.98+ |
each | QUANTITY | 0.98+ |
today | DATE | 0.97+ |
both regions | QUANTITY | 0.97+ |
Java | TITLE | 0.97+ |
Covid | TITLE | 0.96+ |
this year | DATE | 0.96+ |
Day four | QUANTITY | 0.95+ |
ORGANIZATION | 0.95+ | |
a billion and a half dollars | QUANTITY | 0.95+ |
one final question | QUANTITY | 0.93+ |
day four | QUANTITY | 0.93+ |
Chuck Svoboda, Red Hat & Ted Stanton, AWS | AWS re:Invent 2022
>>Hey everyone, it's Vegas. Welcome back. We know you've been watching all day. We appreciate that. We always love being able to bring you some great content on theCUBE Live from AWS re:Invent 22. Lisa Martin here with Paul Gillin. And Paul, we've had such a great event. We've, I think we've done nearly 70 interviews since we started on theCUBE on >>Monday night. I believe we just hit 70. Yeah, we just hit 70. You must feel like you've done half of >>Them. I really do. But we've been having great conversations. There's so much innovation going on at AWS. Nothing slowed them down during the pandemic. We love also talking about the innovation, the flywheel that is their partner ecosystem. We're gonna have a great conversation about that >>Next. And as we've said, going back to day one, the energy of the show is remarkable. And here we are, we're getting late in the afternoon on day two, and there's just as much activity, just as much energy out there as, as the beginning of the first day. I have no doubt day three will be the >>Same. I agree. There's been no slowdown. We've got two guests here. We're gonna have a great conversation. Chuck Svoboda joins us, Senior Director of Cloud Services, GTM at Red Hat. Great to have you on the program. And Ted Stanton, Global Head of Sales for Red Hat and IBM at AWS. Welcome. >>Thanks for having us. >>How's the show going so far for you guys? >>It's a blur. >>Is it? Oh my gosh. >>Don't they all blur? Well, yes, yes. I actually liked last year a bit better. It was half the size. Yeah. And a lot easier to get around, but this is back to normal, so >>It is back to normal. Yeah. And, and Ted, we're hearing north of 50,000 in-person attendees. I heard, something I think was published, I heard it secondhand, over like 300,000 online attendees. This is maybe the biggest one we've ever had. >>Yeah, yeah, I would agree. And frankly, it's my first time here, so I am massively impressed with the overall show, the meeting with partners, the meeting with customers, the announcements that were made, just fantastic. And >>If you remember back to two years ago, there were a lot of questions about whether in-person conferences would ever return and the volume that we used to see them. And that appears to be >>The case. I think we, I think we've answered, I think AWS has answered that for us, which I'm very pleased to see. Talk about some of those announcements, Ted. There's been so much, that's always one of the things we know and love about re:Invent, is there's a slew of announcements. You were saying this morning, Paul, in the keynote, you lost, you stopped counting after >>15, I lost count after 15. I think it was over 30 announcements this morning alone >>Where IBM and Red Hat are concerned. What are some of the things that you are excited about in terms of some of the news, the innovation, and where the partnership is going? >>Well, definitely where the partnership is going, and I think even as we're speaking right now, there is a keynote going on with Ruba, talking about some of the partners and the way in which we support partners, and the new technologies and the new abilities for partners to take advantage of these technologies to frankly delight our customers is really what most excites me. >>Chuck, what about you? What's going on with Red Hat? You've been there a long time. Sales, everything, picking up customers, massively transforming. What are some of the things that you're seeing and that you're excited about?
>>Yeah, I mean, first of all, you know, as customers have, you know, years ago discovered, it's not competitively advantageous to manage their own data centers in most cases. So they would like to, you know, give that responsibility to Amazon. We're seeing them move further up the stack, right? So that would be more beyond the operating system, the application platforms like OpenShift. And now we have a managed application platform built on OpenShift called Red Hat OpenShift Service on AWS, or ROSA. And then we're even further going up the stack with that with, we just announced this week that Red Hat OpenShift Data Science is available in the AWS marketplace, runs on ROSA, helps break the land speed record to getting those data models out there that are so important to, you know, help organizations become much more data driven to remain competitive themselves. >>So talk about ROSA and how it differs from previous iterations of, of OpenShift. I mean, you had, you had an online version of OpenShift several years ago. What's different about ROSA? >>Yeah, so the old OpenShift Online, that was several years old, right? For one thing, it wasn't a joint partnership between Amazon and Red Hat. So we work together, right? Very closely on this, which is great. Also, the awesome thing about ROSA, you know, if you think about like OpenShift, for, for, as a matter of fact, Amazon is the number one cloud that OpenShift runs on, right? So a lot of those customers want to take advantage of their committed spend, their EDPs, they want one bill. And so ROSA comes through the one bill, comes through the marketplace, right? Which is, which is totally awesome. Not only that, we're financially backing OpenShift with a 99.95% financially backed SLA, right? We didn't have that before either, right? >>When you say financially backed SLA, what do you mean? >>That means that if we drop below 99.95% of availability, we're gonna give you some money back, right? So we're really, you know, for lack of better words, putting our money where our mouth is. Absolutely right. >>And, and some of the key reasons that we even worked together to build ROSA was frankly we've had a myriad of customers, in virtually every single region, every single industry, been using OpenShift on AWS for years, right? And we listened to them, they wanted a more managed version of it, and we worked very closely together. And what's really great about ROSA too is we built some really fantastic integrations with some of the AWS native services like API Gateway, Amazon RDS, PrivateLink, right? To make it very simple and easy for customers to get started. We talked a little bit about the marketplace, but it's also available just on the AWS console, right? So customers can get started in a pay as you go fashion, start to use it. And if they wanna move into a more commitment, more of a set schedule of payments, they can move into a marketplace private offer. >>Chuck, talk about, how is ROSA unlocking the power of technologies like containers and Kubernetes for customers while dialing down some of the complexity that's there? >>Yeah, I mean, if you think about, you know, kind of what we did, you know, earlier on, right? If you think about like virtualization, how it dialed down the complexity of having to get something racked, get a blade racked, stacked, cabled and cooled every time you wanted to deploy a new application, right? So what we do is we, our message is this, we want developers to focus on what matters most.
And that's building, deploying, and running applications. Most of our customers are not in the business of building app platforms. They're not in the business of building platforms, like banks, you know, financials, right? Government, et cetera. Right? So what we do is we allow those developers, enable those developers that know Java and Node and Spring and what have you, just to keep writing what they know. And then, you know, I don't wanna get too technical here, but do a git push, and, and OpenShift takes care of the rest, builds it for them, runs it through a pipeline, a CI/CD pipeline, goes through all the testing and quality gates and things like that, deploys it, auto wires it up, you know, to monitoring, which is what you need. >>And we have all kinds of other, you know, higher order services and an ecosystem around that. And oh, by the way, also plugging into and taking advantage of services like RDS, right? If you're gonna write an application, a traditional, a cloud native application on Amazon, you're probably going to wanna run it in ROSA and consume one of those databases, right? Like RDS or Aurora, what have you. >>And I, and I would say it's not even just the customers. We have a variety of ecosystem partners, both of our partners, leveraging it as well. We have Celonis, who built their execution management system that they go ahead and turn and sell to their customers, which streamlines data and collects data from a variety of different sources. They decided, you know, it's better to run that on top of ROSA than manage OpenShift themselves. We've seen IBM restack a lot of their software, you know, to run on top of ROSA, take advantage of those capabilities. So lots of partners as well as customers are taking advantage of the fully managed stack, that turnkey capability that it provides. >>For, for OpenShift customers who wanna move to ROSA, is that gonna be a one button migration? Is that gonna be, can they run both environments simultaneously and migrate over time? What kind of tools are you giving them? >>We have quite, we have quite a few migration tools, such as Konveyor, right? That's one of our projects, part of our migration toolkit for applications, right? And you know, with those, there's also partners like Trilio, right? Who can help move, you know, applications, back 'em up. In fact, we're working on a pretty cool joint go to market with that right now. But generally speaking, the OpenShift experience that the customers we have know and love, and those who have never used OpenShift either and are coming to it as well via ROSA, right? The experience is primarily the same. You don't have to really retrain your people, right? If anything, there's a reduction in operational cost. We increase developer productivity cuz we manage so much of the stack for you. We have SREs, site reliability engineers, that are backing the platform, that proactively get ahead of anything that may go wrong, so maybe you don't even notice if something went wrong, and then also reactively fixing it if it comes to that, right? So, you know, all those kinds of things that your customers are having to do on their own or hire a contractor, a consultant, what have you, to do, now they benefit from a managed offering in the cloud, right? In Amazon, right? And your developers still have that great experience too, like to say, you know, again, break the land speed record to prod. >>I like that. >>And, and I would actually say migrations from on-premise
OpenShift to ROSA maybe only represent about a third of the customers we have. About another third of the customers is frankly existing AWS customers. Maybe they're doing Kubernetes, you know, do-it-yourself. They were struggling with some of the management of that, and so actually started to lean on using ROSA as a better platform to actually build their applications upon. And another third, we have quite a few customers that were frankly new OpenShift customers, new Red Hat customers and new AWS customers, that were looking to build that next cloud native application. Lots in the startup space have actually chosen to go with ROSA. >>It's funny you mention that, because the largest ROSA consumer is new to OpenShift. >>Oh wow. >>Right. That's pretty, that's pretty powerful, right? It's not just for existing OpenShift customers. If you're running OpenShift, you know, on EC2, right, self-managed, there's really no better way to run it than ROSA. You know, I think about whether this is the 10th year, 10 year anniversary of re:Invent, right? Right. Yep. This is also the 10 year anniversary of OpenShift. Yeah, right. I think 1.0 came out sometime around this week, 10 years ago, right? When I came over to Red Hat in 2015, you know, if you, if you know your Kubernetes history, July 25th, I think, was when Kubernetes GA'd. July 25th, 2015 is when it went GA. >>You have a good memory. >>Well, I remember those days back then, right? Those were fun, right? We had a, a large customer rollout on OpenShift 3, which is our OpenShift re-based on Kubernetes. And where do you think they ran it? Amazon, right? Naturally. So, you know, as you move forward and, and, and OpenShift 4 came out, which reduces the operational complexity and becomes even more powerful through our operator framework and things like that, now they've evolved up to ROSA, right? And again, to help those customers focus on what matters most. And that's the applications, not the containers, not those underlying implementation and technical details, which, while critically important, are not necessarily core to the business of most of our customers.
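To make the developer flow described a little earlier concrete, write your Java or Node code, push it, and let OpenShift build and deploy it, here is a minimal sketch driving the oc CLI from a Python script. The repository URL and application name are placeholders, and a real team would usually wire this to a Git webhook or pipeline rather than a script like this.

```python
# Minimal sketch of the "git push and the platform does the rest" flow on
# OpenShift/ROSA, driven here through the oc CLI for illustration. The repo URL
# and app name are placeholders.
import subprocess

APP = "storefront"                                   # placeholder app name
REPO = "https://github.com/example/storefront.git"   # placeholder Git repo

def run(*args):
    subprocess.run(args, check=True)

# Source-to-image build: OpenShift detects the language, builds the image,
# and creates the deployment and service.
run("oc", "new-app", REPO, "--name", APP)

# Follow the build, then expose the service with a route.
run("oc", "logs", "-f", f"buildconfig/{APP}")
run("oc", "expose", f"service/{APP}")
```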
Not only that, it opens up other ways to buy, you know, Ted mentioned earlier, you know, pay as you go buy the drink pricing using exactly what you need right now. Right? You know, AWS pioneered that, right? That provides that elasticity, you know, one of the core tenants at aws, AWS cloud, right? And we weren't able to get that with the traditional self-managed on Red Hat paper subscriptions. >>Talk a little bit about the go to market, what's, you talked about Ted, the kind of the three tenants of, of customer types. But talk a little bit about the gtm, the joint go to market, the joint engineering, so we get an understanding of how customers engage multiple options. >>Yeah, I mean, so if you think about go to market, you know, and the way I think of it is it's the intersection of a few areas, right? So the product and the product experience that we work together has to be so good that a customer or user, actually many start talk, talking about users now cuz it's self-service has a more than likely chance of getting their application to prod without ever talking to a person. Which is historically not what a lot of enterprise software companies are able to do, right? So that's one of those biggest things we do. We want customers to just be successful, turn it on, get going, be productive, right? At the same time we wanna to position the product in such a way that's differentiating that you can't get that experience anywhere else. And then part of that is ensuring that the education and enablement of our customers and our partners as such that they use the platform the right way to get as much value out of as possible. >>All backed by, you know, a very smart field that ensures that the customer get is making the right decision. A customer success org, this is attached to my org now that we can go on site and team with our customers to make sure that they get their first workloads up as quickly as possible, by the way, on our date, our, our dime. And then SRE and CEA backing that up with support and operational integrity to ensure that the service is always up and available so you can sleep, sleep, sleep well at night. Right? Right. One of our PMs of, of of Rosa, he says, what does he say? He says, Rosa allows organizations, enables organizations to go from 24 7 operations to nine to five innovation. Right? And that's powerful. That's how our customers remain more competitive running on Rosa with aws, >>When you're in customer conversations and you have 30 seconds, what are the key differentiators of the solution that you go boom, boom, boom, and they just go, I get it. >>Well, I mean, my 32nd elevator pitch, I think I've already said, I'll say it again. And that is OpenShift allows you to focus on your applications, build, deploy, and run applications while unlocking the power of the technologies like containers and Kubernetes and hiding or minimizing those complexities. So you can do as fast as possible. >>Mic drop Ted, question for you? Sure. Here we are at the, this is the, I leave the 11th reinvent, 10th anniversary, 11th event. You've been in the industry a long time. What is your biggest takeaway from what's been announced and discussed so far at Reinvent 22, where the AWS and and its partner ecosystem is concerned? If you had 30 seconds or if you had a bumper sticker to put on your DeLorean, what would you say? >>I would say we're continuing to innovate on behalf of our customers, but making sure we bring all of our partners and ecosystems along in that innovation. >>Yeah. 
I love the customer obsession on both sides there. Great work, guys. Congrats on the 10th anniversary of OpenShift, and so much evolution. The customer obsession is really clear for both of you guys. We appreciate your time. You're gonna have to come back now. Absolutely. Absolutely. Thank you. All right. Thank you so much for joining us. For our guests and for Paul Gillin, I'm Lisa Martin. You're watching theCUBE, the leader in live enterprise and emerging tech coverage.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Ted Stanton | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Paul Gill | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Paul Gillin | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Paul | PERSON | 0.99+ |
Chuck Kubota | PERSON | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
2015 | DATE | 0.99+ |
Ted | PERSON | 0.99+ |
Chuck Svoboda | PERSON | 0.99+ |
July 25th | DATE | 0.99+ |
30 seconds | QUANTITY | 0.99+ |
red Hat | ORGANIZATION | 0.99+ |
two guests | QUANTITY | 0.99+ |
99.95% | QUANTITY | 0.99+ |
July 25th, 2015 | DATE | 0.99+ |
nine | QUANTITY | 0.99+ |
Chuck | PERSON | 0.99+ |
SRE | ORGANIZATION | 0.99+ |
two years ago | DATE | 0.99+ |
OpenShift | TITLE | 0.99+ |
Monday night | DATE | 0.99+ |
15 | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
Java | TITLE | 0.99+ |
last year | DATE | 0.98+ |
Red Hat | TITLE | 0.98+ |
one bill | QUANTITY | 0.98+ |
both sides | QUANTITY | 0.98+ |
10th year | QUANTITY | 0.98+ |
Vegas | LOCATION | 0.98+ |
One | QUANTITY | 0.98+ |
three tenants | QUANTITY | 0.98+ |
CEA | ORGANIZATION | 0.98+ |
The Cube | TITLE | 0.98+ |
Rosa | ORGANIZATION | 0.98+ |
one | QUANTITY | 0.98+ |
Node | TITLE | 0.98+ |
first time | QUANTITY | 0.98+ |
one button | QUANTITY | 0.97+ |
first day | QUANTITY | 0.97+ |
10th anniversary | QUANTITY | 0.97+ |
second check | QUANTITY | 0.97+ |
pandemic | EVENT | 0.97+ |
10 years ago | DATE | 0.97+ |
Reinvent 22 | EVENT | 0.97+ |
this week | DATE | 0.96+ |
The Truth About MySQL HeatWave
>>When Oracle acquired MySQL via the Sun acquisition, nobody really thought the company would put much effort into the platform, preferring to put all the wood behind its arrow, the leading Oracle database, pun intended. But two years ago, Oracle surprised many folks by announcing MySQL HeatWave, a new database as a service with a massively parallel hybrid columnar in-memory architecture that brings together transactional and analytic data in a single platform. Welcome to our latest database power panel on theCUBE. My name is Dave Vellante, and today we're gonna discuss Oracle's MySQL HeatWave with a who's who of cloud database industry analysts. Holger Mueller is with Constellation Research. Marc Staimer is the Dragon Slayer and a Wikibon contributor. And Ron Westfall is with Futurum Research. Gentlemen, welcome back to theCUBE. Always a pleasure to have you on. >>Thank you. >>Thank you. >>So we've had a number of deep dive interviews on theCUBE with Nipun Agarwal. You guys know him? He's the Senior Vice President of MySQL HeatWave Development at Oracle. I think you just saw him at Oracle Cloud World, and he's come on to describe what I'll call shock and awe feature additions to HeatWave. You know, the company's clearly putting R&D into the platform, and I think at Cloud World we saw like the fifth major release since 2020 when they first announced MySQL HeatWave. So just listing a few, they, they've brought in analytics, machine learning, they got Autopilot for machine learning, which is automation, onto the basic OLTP functionality of the database. And it's been interesting to watch Oracle's converged database strategy. We've contrasted that amongst ourselves. Love to get your thoughts on Amazon's get the right tool for the right job approach.
There's GCP, Google Cloud Platform, which is further down the list, and then Oracle and IBM and Alibaba. So when you look at AWS and Azure saying, hey, these are the market leaders in the cloud, then you start looking at it and saying, if I am going to provide a service that competes with the service they have, if I can make it available in their cloud, it means that I can be more competitive. And if I'm compelling, and compelling means at least twice the performance or functionality or both at half the price, I should be able to gain market share. And that's what Oracle's done. They've taken a superior product in MySQL HeatWave, which is faster, lower cost, does more for a lot less at the end of the day, and they make it available to the users of those clouds. You avoid this little thing called egress fees, you avoid the issue of having to migrate from one cloud to another, and suddenly you have a very compelling offer. So I look at what Oracle's doing with MySQL and it feels like, I'm gonna use a war term, a flanking maneuver on their competition. They're offering a better service on their platforms. >>All right, so thank you for that. Holger, we've seen this sort of cadence, I sort of referenced it up front a little bit, and they sat on MySQL for a decade, then all of a sudden we see this rush of announcements. Why did it take so long? And, and more importantly, is Oracle, are they developing the right features that cloud database customers are looking for, in your view? >>Yeah, great question. But first of all, in your intro you said it's the added analytics, right? Analytics is kind of like a marketing buzzword. Reports can be analytics, right? The interesting thing which they did, the first thing, they, they, they crossed the chasm between OLTP and OLAP, right? In the same database, right? So a major engineering feat, very much what customers want, and it's all about creating value for customers, which, which I think is part of why they go into the multi-cloud and why they add these capabilities. And certainly with the AI capabilities, it's kind of like getting it into an autonomous field, a self-driving field, now with the lakehouse capabilities and meeting customers where they are, like Mark has talked about the egress costs in the cloud. So that, that's a significant advantage, creating value for customers, and that's what at the end of the day matters. >>And I believe strongly that long term it's gonna be the ones who create better value for customers who will get more of their money. From that perspective, why did it take them so long? I think it's a great question. I think largely, you mentioned the gentleman, Nipun, it's largely due to who leads a product. I used to build products too, so maybe I'm a little fooling myself here, but that made the difference in my view, right? So since he's been in charge, he's been building things faster than the rest of the competition in the MySQL space, which in hindsight we thought was in a hot and smoking innovation phase. It kind of like was a little self-complacent when it comes to the traditional borders of where, where people think, where things are separated, between OLTP and OLAP, or, as an example, JSON support, right? Structured documents versus unstructured documents or databases, and all of that has been collapsed and brought together for building a more powerful database for customers.
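For readers who want to see what "OLTP and OLAP in the same database" looks like in practice, here is a hedged sketch using mysql-connector-python. The connection details and schema are placeholders; the SECONDARY_ENGINE and SECONDARY_LOAD statements reflect the commonly documented way to enable HeatWave offload, but the exact DDL should be checked against the current MySQL HeatWave documentation for your version.

```python
# Sketch of the "transactions and analytics in one database" idea: the same
# MySQL schema serves transactional writes, and tables loaded into the HeatWave
# secondary engine serve analytic queries. Connection details and the schema
# are placeholders; consult the HeatWave docs for the exact DDL on your version.
import mysql.connector

conn = mysql.connector.connect(
    host="heatwave-endpoint.example.com",  # placeholder endpoint
    user="app", password="secret", database="shop",
)
cur = conn.cursor()

# Transactional write path (ordinary InnoDB / OLTP work).
cur.execute("INSERT INTO orders (customer_id, total) VALUES (%s, %s)", (42, 99.90))
conn.commit()

# One-time setup: mark the table for HeatWave and load it into the cluster.
cur.execute("ALTER TABLE orders SECONDARY_ENGINE = RAPID")
cur.execute("ALTER TABLE orders SECONDARY_LOAD")

# Analytic read path: eligible queries are offloaded to HeatWave, with no ETL
# into a separate warehouse.
cur.execute("""
    SELECT customer_id, COUNT(*) AS orders, SUM(total) AS revenue
    FROM orders GROUP BY customer_id ORDER BY revenue DESC LIMIT 10
""")
for row in cur.fetchall():
    print(row)
```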
>>So I mean it's certainly, you know, when, when Oracle talks about the competitors, you know, the competitors are in the, I always say they're, if the Oracle talks about you and knows you're doing well, so they talk a lot about aws, talk a little bit about Snowflake, you know, sort of Google, they have partnerships with Azure, but, but in, so I'm presuming that the response in MySQL heatwave was really in, in response to what they were seeing from those big competitors. But then you had Maria DB coming out, you know, the day that that Oracle acquired Sun and, and launching and going after the MySQL base. So it's, I'm, I'm interested and we'll talk about this later and what you guys think AWS and Google and Azure and Snowflake and how they're gonna respond. But, but before I do that, Ron, I want to ask you, you, you, you can get, you know, pretty technical and you've probably seen the benchmarks. >>I know you have Oracle makes a big deal out of it, publishes its benchmarks, makes some transparent on on GI GitHub. Larry Ellison talked about this in his keynote at Cloud World. What are the benchmarks show in general? I mean, when you, when you're new to the market, you gotta have a story like Mark was saying, you gotta be two x you know, the performance at half the cost or you better be or you're not gonna get any market share. So, and, and you know, oftentimes companies don't publish market benchmarks when they're leading. They do it when they, they need to gain share. So what do you make of the benchmarks? Have their, any results that were surprising to you? Have, you know, they been challenged by the competitors. Is it just a bunch of kind of desperate bench marketing to make some noise in the market or you know, are they real? What's your view? >>Well, from my perspective, I think they have the validity. And to your point, I believe that when it comes to competitor responses, that has not really happened. Nobody has like pulled down the information that's on GitHub and said, Oh, here are our price performance results. And they counter oracles. In fact, I think part of the reason why that hasn't happened is that there's the risk if Oracle's coming out and saying, Hey, we can deliver 17 times better query performance using our capabilities versus say, Snowflake when it comes to, you know, the Lakehouse platform and Snowflake turns around and says it's actually only 15 times better during performance, that's not exactly an effective maneuver. And so I think this is really to oracle's credit and I think it's refreshing because these differentiators are significant. We're not talking, you know, like 1.2% differences. We're talking 17 fold differences, we're talking six fold differences depending on, you know, where the spotlight is being shined and so forth. >>And so I think this is actually something that is actually too good to believe initially at first blush. If I'm a cloud database decision maker, I really have to prioritize this. I really would know, pay a lot more attention to this. And that's why I posed the question to Oracle and others like, okay, if these differentiators are so significant, why isn't the needle moving a bit more? And it's for, you know, some of the usual reasons. One is really deep discounting coming from, you know, the other players that's really kind of, you know, marketing 1 0 1, this is something you need to do when there's a real competitive threat to keep, you know, a customer in your own customer base. 
Plus there is the usual fear and uncertainty about moving from one platform to another. But I think, you know, the traction, the momentum is, is shifting in Oracle's favor. I think we saw that in the Q1 results, for example, where Oracle Cloud grew 44% and generated, you know, 4.8 billion in revenue, if I recall correctly. And so, so all these are demonstrating that Oracle is making, I think, many of the right moves. Publishing these figures for anybody to look at from their own perspective is something that is, I think, good for the market, and I think it's just gonna continue to pay dividends for Oracle down the horizon as, you know, competition intensifies. So if I were in, >>Dave, can I, Dave, can I interject something on, on what Ron just said there? >>Yeah, please go ahead. >>A couple things here. One, discounting, which is a common practice when you have a real threat, as Ron pointed out, isn't going to help much in this situation, simply because you can't discount to the point where you improve your performance, and the performance is a huge differentiator. You may be able to get your price down, but the problem that most of them have is they don't have an integrated product service. They don't have an integrated OLTP, OLAP, ML and data lake. Even if you cut out two of them, they don't have any of them integrated. They have multiple services that require separate integration, and that can't be overcome with discounting. And they, you have to pay for each one of these. And oh, by the way, as you grow, the discounts go away. So that's a, it's a minor but important detail. >>So, so that's a TCO question, Mark, right? And I know you look at this a lot. If I had that kind of price performance advantage, I would be pounding TCO, especially if I need two separate databases to do the job that one can do. That's gonna be, the TCO numbers are gonna be off the chart, or maybe down the chart, which you want. Have you looked at this and how does it compare with, you know, the big cloud guys, for example? >>I've looked at it in depth. In fact, I'm working on another TCO in this arena, but you can find it on Wikibon, in which I compared TCO for MySQL HeatWave versus Aurora plus Redshift plus ML plus Glue. I've compared it against GCP's services, Azure services, Snowflake with other services. And there's just no comparison. The, the TCO differences are huge. More importantly, therefore, the, the TCO per performance is huge. We're talking in some cases multiple orders of magnitude, but at least an order of magnitude difference. So discounting isn't gonna help you much at the end of the day. It's only going to lower your cost a little, but it doesn't improve the automation, it doesn't improve the performance, it doesn't improve the time to insight, it doesn't improve all those things that you want out of a database, or multiple databases, because you >>Can't discount yourself to a higher value proposition. >>So what about, I wonder, Holger, if you could chime in on the developer angle. You, you followed that, that market. How do these innovations from HeatWave, I think you used the term developer velocity, I've heard you use that before. Yeah, I mean, look, Oracle owns Java, okay, so it's, you know, the most popular, you know, programming language in the world, blah, blah, blah. But does it have the, the minds and hearts of, of developers, and where does HeatWave fit into that equation? >>I think HeatWave is gaining mindshare quickly on the developer side, right?
It's not the traditional NoSQL database which grew up; there's a traditional mistrust of Oracle among developers as to what happens to open source when it gets acquired, like in the case of Oracle with Java and with MySQL, right? But we know it's not a good competitive strategy to, to bank on Oracle screwing up, because it hasn't worked, not on Java nor on MySQL, right? And for developers it's, once you get to know a technology product and you can do more, it becomes kind of like a Swiss army knife, and you can build more use cases, you can build more powerful applications. That's super, super important, because you don't have to get certified in multiple databases. You, you are fast at getting things done, you achieve higher developer velocity, and the managers are happy because they don't have to license more things, send you to more trainings, have more risk of something not being delivered, right? So it's really the, we see the suite versus best of breed play happening here, which in general was happening before already with Oracle's flagship database, versus, say, Amazon as an example, right? And now the interesting thing is, Oracle was always a one database company, there can be only one, and they're now generally talking about HeatWave, and that's a two database company, with different market spaces, but the same value proposition of integrating more things very, very quickly to have a universal database, what I call, they call, the converged database, for all the needs of an enterprise to run certain application use cases. And that's what's attractive to developers. >>It's, it's ironic, isn't it? I mean, I, you know, the rumor was that TK, Thomas Kurian, left Oracle cuz he wanted to put Oracle database on other clouds and other places. And maybe that was the rift. Maybe there was, I'm sure there were other things, but, but Oracle clearly is now trying to expand its TAM, Ron, with, with HeatWave into AWS, into Azure. How do you think Oracle's gonna do? You were at Cloud World, what was the sentiment from customers and the independent analysts? Is this just Oracle trying to screw with the competition, create a little diversion? Or is this, you know, serious business for Oracle? What do you think? >>No, I think it has legs. I think it's definitely, again, a testament to Oracle's overall ability to differentiate not only MySQL HeatWave, but its overall portfolio. And I think the fact that they do have the alliance with Azure in place, that this is definitely demonstrating their commitment to meeting the multi-cloud needs of its customers, as well as what we pointed to in terms of the fact that they're now offering, you know, MySQL capabilities within AWS natively, and that it can now outperform AWS's own offering. And I think this is all demonstrating that Oracle is, you know, not letting up, they're not resting on their laurels. Clearly we are living in a multi-cloud world, so why not just make it more easy for customers to be able to use cloud databases according to their own specific, specific needs. And I think, you know, to Holger's point, I think that definitely aligns with being able to bring on more application developers to leverage these capabilities.
>>I think one important announcement that's related to all this was the JSON relational duality capabilities where now it's a lot easier for application developers to use a language that they're very familiar with a JS O and not have to worry about going into relational databases to store their J S O N application coding. So this is, I think an example of the innovation that's enhancing the overall Oracle portfolio and certainly all the work with machine learning is definitely paying dividends as well. And as a result, I see Oracle continue to make these inroads that we pointed to. But I agree with Mark, you know, the short term discounting is just a stall tag. This is not denying the fact that Oracle is being able to not only deliver price performance differentiators that are dramatic, but also meeting a wide range of needs for customers out there that aren't just limited device performance consideration. >>Being able to support multi-cloud according to customer needs. Being able to reach out to the application developer community and address a very specific challenge that has plagued them for many years now. So bring it all together. Yeah, I see this as just enabling Oracles who ring true with customers. That the customers that were there were basically all of them, even though not all of them are going to be saying the same things, they're all basically saying positive feedback. And likewise, I think the analyst community is seeing this. It's always refreshing to be able to talk to customers directly and at Oracle cloud there was a litany of them and so this is just a difference maker as well as being able to talk to strategic partners. The nvidia, I think partnerships also testament to Oracle's ongoing ability to, you know, make the ecosystem more user friendly for the customers out there. >>Yeah, it's interesting when you get these all in one tools, you know, the Swiss Army knife, you expect that it's not able to be best of breed. That's the kind of surprising thing that I'm hearing about, about heatwave. I want to, I want to talk about Lake House because when I think of Lake House, I think data bricks, and to my knowledge data bricks hasn't been in the sites of Oracle yet. Maybe they're next, but, but Oracle claims that MySQL, heatwave, Lakehouse is a breakthrough in terms of capacity and performance. Mark, what are your thoughts on that? Can you double click on, on Lakehouse Oracle's claims for things like query performance and data loading? What does it mean for the market? Is Oracle really leading in, in the lake house competitive landscape? What are your thoughts? >>Well, but name in the game is what are the problems you're solving for the customer? More importantly, are those problems urgent or important? If they're urgent, customers wanna solve 'em. Now if they're important, they might get around to them. So you look at what they're doing with Lake House or previous to that machine learning or previous to that automation or previous to that O L A with O ltp and they're merging all this capability together. If you look at Snowflake or data bricks, they're tacking one problem. You look at MyQ heat wave, they're tacking multiple problems. So when you say, yeah, their queries are much better against the lake house in combination with other analytics in combination with O ltp and the fact that there are no ETLs. So you're getting all this done in real time. So it's, it's doing the query cross, cross everything in real time. 
>>You're solving multiple user and developer problems, you're increasing their ability to get insight faster, you're having shorter response times. So yeah, they really are solving urgent problems for customers. And by putting it where the customer lives, this is the brilliance of actually being multi-cloud. And I know I'm backing up here a second, but by making it work in AWS and Azure, where people already live, where they already have applications, what they're saying is, we're bringing it to you. You don't have to come to us to get these, these benefits, this value. Overall, I think it's a brilliant strategy. I give Nipun Agarwal huge, huge kudos for what he's doing there. So yes, what they're doing with the lakehouse is going to put notice on Databricks and Snowflake and everyone else for that matter. >>Well, those are the guys, Holger, that you, you and I have talked about this. Those are, those are the guys that are doing sort of the best of breed. You know, they're really focused and they, you know, tend to do well at least out of the gate. Now you got Oracle's converged philosophy, obviously with Oracle database, we've seen that, now it's kicking into gear with, with HeatWave. You know, this whole thing of suites versus best of breed. I mean, the long term, you know, customers tend to migrate towards suites, but the new shiny toy tends to get the growth. How do you think this is gonna play out in cloud database? >>Well, it's the forever never ending story, right? And in software, right, suites versus best of breed, and so far in the long run suites have always won, right? So, and sometimes they struggle, again, because the inherent problem of suites is you build something larger, it has more complexity, and that means your cycles to get everything working together, to integrate, to test, to roll it out, to certify, whatever it is, take you longer, right? And that's not the case here. It's a fascinating part of the effort around MySQL HeatWave, that the team is out-executing the previous best of breed players, bringing something together. Now, if they can maintain that pace, that's something to, to, to be seen. But the strategy, like what Mark was saying, bring the software to the data, is of course interesting and unique, and totally an Oracle thing in the past, right? >>Yeah. But it had to be in your database on OCI. And, but, that's an interesting part. The interesting thing on the lakehouse side is, right, there's three key benefits of a lakehouse. The first one is better reporting and analytics, bring more rich information together, like take the case of SiliconANGLE, right? We want to see engagement for this video, we want to know what's happening. That's a mixed transactional, video, media use case, right? A typical lakehouse use case. The next one is to build more rich applications, transactional applications which have video and these elements in there, which are the engaging ones. And the third one, and that's where I'm a little critical and concerned, is it's really the base platform for artificial intelligence, right? To run deep learning, to run things automatically, because they have all the data in one place and can create it in one way.
So for a start, and this is a first release, they can make that bigger. You don't want your lakehouse to be limited in the terabyte sizes, or even petabyte sizes, because you want to have the certainty that I can put everything in there that I think might be relevant, without knowing what questions to ask, and then query it later. >> Yeah. And you know, in the early days of schema-on-read, it just became a mess. But now technology has evolved to allow us to actually get more value out of that data lake; the data swamp is, you know, now much more logical. But, and I want to get into this in a moment, I want to come back to how you think the competitors are gonna respond. Are they gonna have to do more of a converged approach? AWS in particular. But before I do, Ron, I want to ask you a question about Autopilot, because I heard Larry Ellison's keynote and he was talking about how, you know, most security issues are human errors, and with autonomy and autonomous database and things like Autopilot, we take care of that. It's like autonomous vehicles, they're gonna be safer. And I went, well maybe, maybe someday. So Oracle really tries to emphasize this; every time you see an announcement from Oracle, they talk about new, you know, autonomous capabilities. How legit is it? Do people care? What about, you know, what's new for HeatWave Lakehouse? How much of a differentiator, Ron, do you really think Autopilot is in this cloud database space? >> Yeah, I think it will definitely enhance the overall proposition. I don't think people are gonna buy, you know, Lakehouse exclusively because of Autopilot capabilities, but when they look at the overall picture, I think it will be an added capability bonus to Oracle's benefit. And yeah, I think it's kind of one of these age-old questions, how much do you automate and what is the balance to strike? And I think we all understand, with the autonomous car analogy, that there are limitations to being able to use that. However, I think it's a tool that basically every organization out there needs to at least have or at least evaluate, because it goes to the point that it helps with ease of use, it helps make automation more balanced in terms of, you know, being able to test: all right, let's automate this process and see if it works well, then we can go on and switch on Autopilot for other processes. >> And then, you know, that allows, for example, the specialists to spend more time on business use cases versus, you know, manual maintenance of the cloud database and so forth. So I think that actually is a legitimate value proposition. I think it's just gonna be a case-by-case basis. Some organizations are gonna be more aggressive with putting automation throughout their processes, throughout their organization. Others are gonna be more cautious. But it's gonna be, again, something that will help the overall Oracle proposition, and something that I think will be used with caution by many organizations, but other organizations are gonna say, hey, great, this is something that is really answering a real problem, and that is just easing the use of these databases, but also being able to better handle the automation capabilities and benefits that come with it without having, you know, a major screwup happen in the process of transitioning to more automated capabilities. >> Now, I didn't attend CloudWorld, it's just too many red-eyes, you know, recently, so I passed.
But one of the things I like to do at those events is talk to customers, you know, in the spirit of the truth. You'd have the hallway track, and you talk to customers and they say, hey, you know, here's the good, the bad and the ugly. So did you guys talk to any MySQL HeatWave customers at CloudWorld? And what did you learn? I don't know, Mark, did you have any luck having some private conversations? >> Yeah, I had quite a few private conversations. The one thing before I get to that, I want to disagree with one point Ron made. I do believe there are customers out there buying the HeatWave service, the MySQL HeatWave service, because of Autopilot. Because Autopilot is really revolutionary in many ways for the MySQL developer, in that it auto-provisions, it auto parallel loads, it auto places data, it does auto shape prediction. It can tell you which machine learning models are gonna give you your best results. And candidly, I've yet to meet a DBA who didn't wanna give up pedantic tasks that are a pain in the kahoo, which they'd rather not do, as long as it was done right for them. So yes, I do think people are buying it because of Autopilot, and that's based on some of the conversations I had with customers at Oracle CloudWorld. >> In fact, it was like, yeah, that's great, yeah, we get fantastic performance, but this really makes my life easier, and I've yet to meet a DBA who didn't want to make their life easier. And it does. So yeah, I've talked to a few of them. They were excited. I asked them if they ran into any bugs, were there any difficulties in moving to it? And the answer was no in both cases. It's interesting to note, MySQL is the most popular database on the planet. Well, some will argue that it's neck and neck with SQL Server, but if you add in MariaDB and Percona, which are forks of MySQL, then yeah, by far and away it's the most popular. And as a result of that, everybody for the most part typically has a MySQL database somewhere in their organization. So this is a brilliant situation for anybody going after MySQL, but especially for HeatWave. And the customers I talked to love it. I didn't find anybody complaining about it. And >> What about the migration? We talked about TCO earlier. Does your TCO analysis include the migration cost, or do you kind of conveniently leave that out, or what? >> Well, when you look at migration costs, there are different kinds of migration costs. By the way, the worst job in the data center is the data migration manager. Forget it, no other job is as bad as that one. You get no attaboys for doing it right, and when you screw up, oh boy. So in real terms, anything that can limit data migration is a good thing. And when you look at the data lake, that limits data migration. So if you're already a MySQL user, this is pure MySQL as far as you're concerned. It's just a simple transition from one to the other. You may wanna make sure nothing broke and all your tables are correct and your schemas are okay, but it's all the same. So it's a simple migration, so it's pretty much a non-event, right? When you migrate data from an OLTP to an OLAP, that's an ETL and that's gonna take time. >> But you don't have to do that with MySQL HeatWave.
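As a concrete illustration of the Autopilot-style loading described above, here is a minimal sketch, assuming a MySQL HeatWave instance and an existing `sales` schema; the host, credentials, schema name, and exact procedure options are illustrative and may vary by HeatWave release. The auto parallel load call asks the service to analyze the tables and load them into the HeatWave cluster, after which the live MySQL data can be queried analytically without a separate ETL pipeline.

```python
# Minimal sketch of Autopilot-style auto parallel load, assuming a MySQL
# HeatWave instance with an existing `sales` schema. Credentials, names and
# the exact procedure signature are illustrative and may vary by release.
import mysql.connector

conn = mysql.connector.connect(host="heatwave-host", user="admin",
                               password="secret", database="sales")
cur = conn.cursor()

# Ask the service to analyze the schema and load its tables into the
# HeatWave cluster in parallel; sys.heatwave_load takes a JSON array of
# schema names plus an optional JSON options document.
cur.execute("CALL sys.heatwave_load(JSON_ARRAY('sales'), NULL)")

# The procedure produces a load report (estimated memory, per-table
# status); depending on the driver you may need to step through the extra
# result sets it returns rather than doing a single fetchall().

conn.close()
```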
So that's gone. When you start talking about machine learning, again, you may have an ETL, you may not, depending on the circumstances, but again, with MySQL HeatWave, you don't, and you don't have duplicate storage. You don't have to copy it from one storage container to another to be able to use it in a different database, which, by the way, ultimately adds much more cost than just the other service. So yeah, I looked at the migration, and again, the users I talked to said it was a non-event. It was literally moving from one physical machine to another. If they had a new version of MySQL running on something else and just wanted to migrate it over, or just hook it up, or just connect it to the data, it worked just fine. >> Okay, so it sounds like you guys feel, and we've certainly heard this, my colleague David Floyer, the semi-retired David Floyer, was always very high on HeatWave. So I think you know it's got some real legitimacy here coming from a standing start. But I wanna talk about the competition, how they're likely to respond. I mean, if you're AWS and HeatWave is now in your cloud, there's some good aspects of that. The database guys might not like that, but the infrastructure guys probably love it. Hey, more ways to sell, you know, EC2 and Graviton. But the database guys in AWS are gonna respond. They're gonna say, hey, we got Redshift, we got AQUA. What are your thoughts on not only how that's gonna resonate with customers, but I'm interested in what you guys think, and I never say never about AWS, you know, are they gonna try to build, in your view, a converged OLAP and OLTP database? You know, Snowflake is taking an ecosystem approach. They've added transactional capabilities to the portfolio, so they're not standing still. What do you guys see in the competitive landscape in that regard going forward? Maybe Holger, you could start us off and anybody else who wants to can chime in. >> Happy to. You mentioned Snowflake last, we'll start there. I think Snowflake is imitating that strategy, right? Building out from the original data warehouse in the cloud and adding projects to really have other data available there, because AI is relevant for everybody. Ultimately people keep data in the cloud for ultimately running AI. So you see the same suite-level kind of strategy. It's gonna be a little harder because of the original positioning: how much would people know that you're doing other stuff? And I just, as a former developer and manager of developers, I just don't see the speed at the moment happening at Snowflake to become really competitive to Oracle. On the flip side, putting my Oracle hat on for a moment, back to you, Mark and Ron, right? What could Oracle still add? Because the big, big things, right, the traditional chasms in the database world, they have built everything, right? >> So I really scratched my head and gave Nipun a hard time at CloudWorld, saying, what could you be building next? The answer was very conservative: let's get the Lakehouse thing done, it's coming in the spring next year, right? And for AWS it's really hard, because the AWS value proposition is these small innovation teams, right? They build two-pizza teams, which can be fed by two pizzas, not large teams, right? And you need large teams to build these suites, with lots of functionality, to make sure they work together.
They're consistent, they have the same UX on the administration side, they can be consumed the same way, they have the same API registry; I can't even list all the places where the synergy comes into play with a suite. So it's gonna be really, really hard for them to change that. But AWS is super pragmatic. They pride themselves that they'll listen to customers, and if they learn from customers that a suite is the proposition, I would not be surprised if AWS tries to bring things closer together and make them work more closely together. >> Yeah. Well, how about, can we talk about multicloud? Again, Oracle is very much on Oracle, as you said before, but let's look forward, you know, half a year or a year. What do you think about Oracle's moves in multicloud in terms of what kind of penetration they're gonna have in the marketplace? You saw a lot of presentations at CloudWorld, and, you know, we've looked pretty closely at the Microsoft Azure deal. I think that's really interesting. I've called it a little bit of early days of a supercloud. What impact do you think this is gonna have on the marketplace? And think about it both within Oracle's customer base, I have no doubt they'll do great there, but what about beyond its existing install base? What do you guys think? >> Ron, do you wanna jump on that? Go ahead. Go ahead, Ron. No, no, no. >> That's an excellent point. I think it aligns with what we've been talking about in terms of Lakehouse. I think Lakehouse will enable Oracle to pull more customers, more MySQL customers, onto the Oracle platforms. And I think we're seeing all the signs pointing toward Oracle being able to make more inroads into the overall market. And that includes garnering customers from the leaders, in other words, because they are, you know, coming in as an innovator, an alternative to, you know, the AWS proposition, the Google Cloud proposition; they have less to lose, and as a result they can really drive the multi-cloud messaging to resonate with not only their existing customers, but also, to the question Dave's posing, actually garner customers onto their platform. And that includes naturally MySQL but also OCI and so forth. So that's how I'm seeing this playing out. I think, you know, again, Oracle's reporting is indicating that, and I think what we saw at Oracle CloudWorld is definitely validating the idea that Oracle can make more waves in the overall market in this regard. >> You know, I've floated this idea of supercloud, it's kind of tongue in cheek, but I think there is some merit to it in terms of building on top of hyperscale infrastructure and abstracting some of that complexity. And one of the things that I'm most interested in is industry clouds, and Oracle's acquisition of Cerner. I was struck by Larry Ellison's keynote; it was like, I don't know, an hour and a half, and an hour and 15 minutes of it was focused on healthcare transformation. Well, >> So vertical, >> Right? And so, yeah, you've got Oracle's, you know, got some industry chops, and then you think about what they're building with not only OCI, but then you got, you know, MySQL you can now run in dedicated regions. You got ADB on Exadata Cloud@Customer, you can put that on-prem in your data center, and you look at what the other hyperscalers are doing. I say other hyperscalers, I've always said Oracle's not really a hyperscaler, but they got a cloud so they're in the game.
But you can't get, you know, BigQuery on-prem. You look at Outposts, it's very limited in terms of, you know, the database support, and again, that will evolve. But now Oracle's got, they announced Alloy, where partners can white-label their cloud. So I'm interested in what you guys think about these moves, especially the industry cloud. We see, you know, Walmart is doing sort of their own cloud, you got Goldman Sachs doing a cloud. What do you guys think about that, and what role does Oracle play? Any thoughts? >> Yeah, let me jump on that for a moment. Now, especially with MySQL, by making that available in multiple clouds, what they're doing follows the philosophy they've had in the past with Cloud@Customer: taking the application and the data and putting it where the customer lives. If it's on premise, it's on premise. If it's in the cloud, it's in the cloud. By making MySQL HeatWave essentially plug-compatible with any other MySQL as far as your database is concerned, and then giving you that integration with OLAP and ML and data lake and everything else, what you've got is a compelling offering. You're making it easier for the customer to use. So when I look at the difference between MySQL and the Oracle database, MySQL is going to capture more market share for them. You're not gonna find a lot of new users for the Oracle database. Yeah, there are always gonna be new users, don't get me wrong, but it's not gonna be huge growth. Whereas MySQL HeatWave is probably gonna be a major growth engine for Oracle going forward. Not just in their own cloud, but in AWS and in Azure and on premise over time; eventually it'll get there. It's not there now, but it will. They're doing the right thing on that basis. They're taking the services, and when you talk about multicloud, making them available where the customer wants them, not forcing them to go where you want them, if that makes sense. And as far as where they're going in the future, I think they're gonna take a page out of what they've done with the Oracle database. They'll add things like JSON and XML and time series and spatial; over time they'll make it a complete converged database, like they did with the Oracle database. The difference being the Oracle database will scale bigger, will have more transactions, and be somewhat faster. And MySQL will be for anyone who's not on the Oracle database; they're not stupid, that's for sure. >> They've done JSON already, right? But I give you that they could add graph and time series, right, with HeatWave. Right, right. Yeah, that's something, absolutely right. That's >> Sort of a logical move, right? >> Right. But let's not kid ourselves, right? History has worked in Oracle's favor, right? Ten times, 20 times the amount of R&D that is in the MySQL space has been poured into trying to snatch workloads away from Oracle, starting with IBM 30 years ago, 20 years ago Microsoft, and it didn't work, right? Database applications are extremely sticky; when they run, you don't want to touch them, you just grow them, right? So that doesn't mean that HeatWave is not an attractive offering, but it will be net new things, right?
And what works in MySQL HeatWave's favor a little bit is that it's not the massive enterprise applications, where you might be running only 30% on Oracle but the connections and the interfaces into that are like 70, 80% of your enterprise. >> You take it out and it's like the spaghetti ball, where you say, ah, no, I really don't want to do all that, right? You don't have that massive part with the MySQL HeatWave kind of databases, which are smaller, more tactical in comparison. But still, I don't see them taking so much share. They will be growing because of an attractive value proposition, quickly, on the multi-cloud, right? I think it's not really multi-cloud if you just give people the chance to run your offering on different clouds, right? You can run it there. The multi-cloud advantage is when the uber offering comes out, which allows you to do things across those installations, right? I can migrate data, I can create data across clouds, something like Google has done with BigQuery Omni; I can run predictive models or even train models in different places and distribute them, right? And Oracle is paving the road for that by being available on these clouds. But the multi-cloud capability of a database which knows I'm running on different clouds, that is still yet to be built there. >> Yeah. And >> The problem with that >> That's the supercloud concept that I floated, and I've always said kind of Snowflake, with a single global instance, is sort of, you know, headed in that direction and maybe has a lead. What's the issue with that, Mark? >> Yeah, the problem with that version of multi-cloud is clouds charge egress fees. As long as they charge egress fees to move data between clouds, it's gonna make it very difficult to do a real multi-cloud implementation. Even Snowflake, which runs multi-cloud, has to pass on the egress fees to their customer when data moves between clouds. And that's really expensive. I mean, there is one customer I talked to who is beta testing MySQL HeatWave on AWS for them; the only reason they didn't want to do it until it was running on AWS is the egress fees to move it to OCI were so great that they couldn't afford it. Yeah, egress fees are the big issue, but >> But Mark, the point might be you might wanna run the query remotely and only get the result set back, which is much tinier; that's been the answer before for the low-latency-between-clouds problem, which we sometimes still have, but mostly don't have, right? And I think in general, with fees coming down, based on the Oracle egress fee move, it's very hard to justify those, right? But it's not about moving data as the multi-cloud high-value use case. It's about doing intelligent things with that data, right? Putting it into other places, replicating it, and, saying the same thing you said before, running remote queries on it, analyzing it, running AI on it, running AI models on it. That's the interesting thing. Administering it across clouds in the same way. Taking things out, making sure compliance happens, making sure, when Ron says I don't want to be in America anymore, I want to be in the European cloud, that it gets migrated, right? So those are the interesting value use cases which are really, really hard for an enterprise to program hand by hand by developers, and they would love to have them out of the box, and that's the innovation yet to come; we'll have to see.
But the first step to get there is that your software runs in multiple clouds, and that's what Oracle's doing so well with MySQL. >> Guys, amazing. >> Go ahead. Yeah. >> Yeah. >> For example, >> Amazing amount of data, knowledge, and brain power in this market. Guys, I really want to thank you for coming on to theCUBE. Ron, Holger, Mark, always a pleasure to have you on. Really appreciate your time. >> Well, on behalf of all the last names, we're very happy. Thanks, Dave, for moderating us. All right, >> We'll see. We'll see you guys around. Safe travels to all, and thank you for watching this power panel, The Truth About MySQL HeatWave, on theCUBE, your leader in enterprise and emerging tech coverage.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Mark | PERSON | 0.99+ |
Ron Holger | PERSON | 0.99+ |
Ron | PERSON | 0.99+ |
Marc Staimer | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Ron Westfall | PERSON | 0.99+ |
Ryan | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Dave | PERSON | 0.99+ |
Walmart | ORGANIZATION | 0.99+ |
Larry Ellison | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Alibaba | ORGANIZATION | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
Holger Mueller | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Constellation Research | ORGANIZATION | 0.99+ |
Goldman Sachs | ORGANIZATION | 0.99+ |
17 times | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
David Foyer | PERSON | 0.99+ |
44% | QUANTITY | 0.99+ |
1.2% | QUANTITY | 0.99+ |
4.8 billion | QUANTITY | 0.99+ |
Jason | PERSON | 0.99+ |
Uber | ORGANIZATION | 0.99+ |
Futurum Research | ORGANIZATION | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Matt Butcher, Fermyon | KubeCon + Cloud NativeCon NA 2022
(upbeat music) >> Hello, brilliant humans and welcome back to theCUBE. We're live from Detroit, Michigan. My name is Savannah Peterson. Joined here with John Furrier, John, so exciting, day three. >> Day three, cranking along, doing great, final day of KubeCon, it wraps up. This next segment's going to be great. It's about WebAssembly, the hottest trend here, at KubeCon that nobody knows about cause they just got some funding and it's got some great traction. Multiple players in here. People are really interested in this and they're really discovering it. They're digging into it. So, we're going to hear from one of the founders of the company that's involved. So, it'll be great. >> Yeah, I think we're right at the tip of the iceberg really. We started off the show with Scott from Docker talking about this, but we have a thought leader in this space. Please welcome Matt Butcher the CEO and co-founder of Fermyon Thank you for being here. Welcome. >> Yeah, thanks so much for having me. Favorite thing to talk about is WebAssembly after that is coffee but WebAssembly first. >> Hey, it's the morning. We can talk about both those on the show. (all chuckles) >> It might get confusing, but I'm willing to try. >> If you can use coffee as a metaphor to teach everyone about WebAssembly throughout the rest of the show. >> All right. That would be awesome. >> All right I'll keep that in mind. >> So when we were talking before we got on here I thought it was really fun because I think the hype is just starting in the WebAssembly space. Very excited about it. Where do you think we're at, set the stage? >> Honestly, we were really excited to come here and see that kind of first wave of hype. We came here expecting to have to answer the question you know, what is WebAssembly and why is anybody looking at it in the cloud space, and instead people have been coming up to us and saying, you know this WebAssembly thing, we're hearing about it. What are the problems it's solving? >> Savannah: Yeah. >> We're really excited to hear about it. So, people literally have been stopping us in restaurants and walking down the street, hey, "You're at KubeCon, you're the WebAssembly people. Tell us more about what's going on." >> You're like awesome celeb. I love this. >> Yeah, and I, >> This is great >> You know the, the description I used was I expected to come here shouting into the void. Hey, you know anybody, somebody, let me tell you about WebAssembly. Instead it's been people coming to us and saying "We've heard about it. Get us excited about it," and I think that's a great place to be. >> You know, one of the things that's exciting too is that this kind of big trend with this whole extraction layer conversation, multicloud, it reminds me of the old app server days where, you know there was a separation between the back end and front end, and then we're kind of seeing that now with this WebAssembly Wasm trend where the developers just want to have the apps run everywhere and the coding to kind of fall in, take a minute to explain what this is, why it's important, why are people jazzed about there's other companies like Cosmonic is in there. There's a lot of open source movement behind it. You guys are out there, >> Savannah: Docker. >> 20 million in fresh funding. Why is this important? What is it and why is it relevant right now? Why are people talking about it? >> I mean, we can't... There is no penasia in the tech world much for the good of all of us, right? To keep us employed. 
But WebAssembly seems to be that technology that just sort of arose at the right time to solve a number of problems that were really feeling intractable not very long ago. You know, at the core of what is WebAssembly? Well it's a binary format, right? But there's, you know, built on the same, strain of development that Java was built on in the 90's and then the .net run time. But with a couple of little fundamental changes that are what have made it compelling today. So when we think about the cloud world, we think about, okay well security's a big deal to us. Virtual machines are a way for us to run other people's untrusted operating systems on our hardware. Containers come along, they're a... The virtual machine is really the heavyweight class. This is the big thing. The workhorse of the cloud. Then along come Containers, they're a little slimmer. They're kind of the middleweight class. They provide us this great way to sort of package up just the application, not the entire operating system just the application and the bits we care about and then be able to execute those in a trusted environment. Well you know, serverless was the buzzword a few years ago. But one thing that serverless really identified for us is that we didn't actually have the kind of cloud side architecture that was the compute layer that was going to be able to fulfill the promise of serverless. >> Yeah. >> And you know, at that time I was at Microsoft we got to see behind the curtain and see how Azure operates and see the frustration with going, okay how do we get this faster? How do we get this startup time down from seconds to hundreds of milliseconds, WebAssembly comes along and we're able to execute these things in sub one millisecond, which means there is almost no cost to starting up one of these. >> Sub one millisecond. I just want to let everyone rest on that for a second. We've talked a lot about velocity and scale on the show. I mean everyone here is trying to do things faster >> Yep >> Obviously, but that is a real linchpin that makes a very big difference when we're talking about deploying things. Yeah. >> Yeah, and I mean when you think about the ecological and the cost impact of what we're building with the cloud. When we leave a bunch of things running in idle we're consuming electricity if nothing else. The electricity bill keeps going up and we're paying for it via cloud service charges. If you can start something in sub one millisecond then there's no reason you have to leave it running when nobody's using it. >> Savannah: Doesn't need to be in the background. >> That's right. >> So the lightweight is awesome. So, this new class comes up. So, like Java was a great metaphor there. This is kind of like that for the modern era of apps. >> Yeah. >> Where is this going to apply most, do you think? Where's it going to impact most? >> Well, you know, I think there are really four big categories. I think there's the kind of thing I was just talking about I think serverless and edge computing and kind of the server class of problem space. I think IOT is going to benefit, Amazon, Disney Plus, >> Savannah: Yes, edge. >> And PBS, sorry BBC, they all use WebAssembly for the players because they need to run the same player on thousands of different devices. >> I didn't even think about that use case. What a good example. >> It's a brilliant way to apply it. IOT is a hard space period and to be able to have that kind of layer of abstraction. So, that's another good use case >> Savannah: Yeah. 
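For readers who want to see that startup cost first-hand, here is a minimal sketch using the wasmtime Python bindings; the inline WAT module is a toy example, and exact API details can vary between wasmtime releases. The module is compiled once, and each "request" then only pays for instantiation, which is the part that is typically well under a millisecond and underpins the serverless pitch.

```python
# Minimal sketch, assuming the `wasmtime` Python package is installed
# (pip install wasmtime); the WAT module below is a toy example.
import time
from wasmtime import Engine, Module, Store, Instance

engine = Engine()

# Compile once, ahead of time...
module = Module(engine, """
(module
  (func (export "add") (param i32 i32) (result i32)
    local.get 0
    local.get 1
    i32.add))
""")

# ...then instantiate per "request"; this is the part that is typically
# sub-millisecond and is what makes per-invocation startup viable.
start = time.perf_counter()
store = Store(engine)
instance = Instance(store, module, [])
add = instance.exports(store)["add"]
elapsed_ms = (time.perf_counter() - start) * 1000

print(add(store, 2, 3))                      # -> 5
print(f"instantiation took ~{elapsed_ms:.3f} ms")
```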
>> And then I think this kind of plugin model is another one. You see it was Envoy proxy using this as a way to extend the core features. And I think that one's going to be very, very promising as well. I'm forgetting one, but you know. (all chuckles) I think you end up with these kind of discreet compartments where you can easily fit WebAssembly in here and it's solving a problem that we didn't have the technology that was really adequately solving it before. >> No, I love that. One of the things I thought was interesting we were all at dinner, we were together on Tuesday. I was chatting with Paris who runs Deliveroo at Apple and I can't say I've heard this about too many tools but when we were talking about WebAssembly she said "This is good for everybody" And, it's really nice when technologies come along that will raise the water level across the board. And I love that you're leading this. Speaking of you just announced a huge series aid, 20 million dollars just a few days ago. What does that mean for you and the team? >> I mean there's a little bit of economic uncertainty and it's always nice, >> Savannah: Just a little bit. >> Little bit. >> Savannah: It's come up on the show a little bit this week >> Just smidge. and it's nice to know that we're at a critical time developing this kind of infrastructure layer developing this kind of developer experience where they can go from, you know, blinking cursor to deployed application in two minutes or less. It would be a tragedy if that got forestalled merely because you can't achieve the velocity you need to carry it out. So, what's very exciting about being able to raise around like that at this critical time is that gives us the ability to grow strategically, be able to continue releasing products, building a community around WebAssembly as a whole and of course around our products at Fermyon is a little smaller circle in the bigger circle, and that's why we are so excited about having closed around, that's the perfect one to extend a runway like that. >> Well I'm super excited by this because one I love the concept. I think it's very relevant, like how you progress heavyweight, middleweight, maybe this is lightweight class. >> I know, I'm here for the analogy. No, it's great, its great. >> Maybe it's a lightweight class. >> And we're slimming, which not many of us can say in these times so that's awesome. >> Maybe it's more like the tractor trailer, the van, now you got the sports car. >> Matt: Yeah, I can go.. >> Now you're getting Detroit on us. >> I was trying for a coffee, when I just couldn't figure it out. (all chuckles) >> So, you got 20 million. I noticed the investors amplify very good technical VC and early stage firm. >> Amazing, yeah. >> Insight, they do early stage, big early stage like this. Also they're on the board of Docker. Docker was intent to put a tool out there. There's other competition out there. Cosmonic is out there. They're funded. So you got VC funded companies like yourselves and Cosmonic and others. What's that mean? Different tool chains, is it going to create fragmentation? Is there a common mission? How do you look at the competition as you get into the market >> When you see an ecosystem form. So, here we are at KubeCon, the cloud native ecosystem at this point I like to think of them as like concentric rings. You have the kind of core and then networking and storage and you build these rings out and the farther out you get then the easier it is to begin talking about competition and differentiation. 
But, when you're looking at that core piece everybody's got to be in there together working on the same stuff, because we want interoperability, we want standards based solutions. We want common ways of building things. More than anything, we want the developers and operators and users who come into the ecosystem to be able to like instantly feel like, okay I don't have to learn. Like you said, you know, 50 different tools for 50 different companies. "I see how this works", and they're doing this and they're doing this. >> Are you guys all contributing into the same open source? >> Yep, yeah, so... >> All the funding happens. >> Both CNCF and the ByteCode Alliance are organizations that are really kind of pushing forward that core technology. You know, you mentioned Cosmonic, Microsoft, SOSA, Red Hat, VMware, they're all in here too. All contributing and again, with all of us knowing this is that nascent stage where we got to execute it. >> How? >> Do it together. >> How are you guys differentiating? Because you know, open source is a great thing. Rising Tide floats all boats. This is a hot area. Is there a differentiation discussion or is it more let's see how it goes, kind of thing? >> Well for us, we came into it knowing very specifically what the problem was we wanted to solve. We wanted this serverless architecture that executed in sub one millisecond to solve, to really create a new wave of microservices. >> KubeCon loves performance. They want to run their stuff on the fastest platform possible. >> Yeah, and it shouldn't be a roadblock, you know, yeah. >> And you look at someone like SingleStore who's a database company and they're in it because they want to be able to run web assemblies close to the data. Instead of doing a sequel select and pulling it way out here and munging it and then pushing it back in. They move the code in there and it's executing in there. So everybody's kind of finding a neat little niche. You know, Cosmonic has really gone more for an enterprise play where they're able to provide a lot of high level security guarantees. Whereas we've been more interested in saying, "Hey, this your first foray into WebAssembly and you're interested in serverless we'll get you going in like a couple of minutes". >> I want to ask you because we had Scott Johnston on earlier opening keynote so we kind of chatted one-on-one and I went off form cause I really wanted to talk to him because Docker is one of the most important companies since their pivot, when they did their little reset after the first Docker kind of then they sold the enterprise off to Mirantis they've been doing really, really well. What's your relationship to Docker? He was very bullish with you guys. Insights, joint investor. Is there a relationship? You guys talk, what's going on there? >> I mean, I'm going to have to admit a little bit of hero worship on my part. I think Scott is brilliant. I just do, and having come from the Kubernetes world the Fermyon team, we've always kind of kept an eye on Docker communicated with a lot of them. We've known Justin Cormack for years. Chris Cornett. (indistinct) I mean yeah, and so it has been a very natural >> Probably have been accused of every Docker Con and we've did the last three years on the virtual side with them. So, we know them really well. >> You've always got your finger on the pulse for them. >> Do you have a relationship besides a formal relationship or is it more of pass shoot score together in the industry? >> Yeah. 
No, I think it is kind of the multi-level one. You come in knowing people. You've worked together before and you like working with each other and then it sort of naturally extends onto saying, "Hey, what can we do together?" And also how do we start building this ecosystem around us with Docker? They've done an excellent job of articulating why WebAssembly is a complimentary technology with Containers. Which is something I believe very wholeheartedly. You need all three of the heavyweight, middleweight, lightweight. You can't do all the with just one, and to have someone like that sort of with a voice profoundly be able to express, look we're going to start integrating it to show you how it works this way and prevent this sort of like needless drama where people are going, oh Dockers dead, now everything's WebAssembly, and that's been a great.. >> This fight that's been going on. I mean, Docker, Kubernetes, WebAssembly, Containers. >> Yeah. >> We've seen on the show and we both know this hybrid is the future. We're all going to be using a variety of different tools to achieve our goals and I think that you are obviously one of them. I'm curious because just as we were going on you mentioned that you have a PhD in philosophy. (Matt chuckles) >> Matt: Yeah. >> Which is a wild card. You're actually our second PhD in philosophy working in a very technical role on the show this week, which is kind of cool. So, how does that translate into the culture at Fermyon? What's it like on the team? >> Well, you know, a philosophy degree if nothing else teaches you to think in systems and both human systems and formal systems. So that helps and when you approach the process of building a company, you need to be thinking both in terms of how are we organizing this? How are we organizing the product? How do we organize the team? We have really learned that culture is a major deal and culture philosophy, >> Savannah: Why I'm bringing it up. >> We like that, you know, we've been very forward. We have our chip values, curiosity, humility inclusivity and passion, and those are kind of the four things that we feel like that each of us every day should strive to be exhibiting these kinds of things. Curiosity, because you can't push the envelope if you don't ask the hard questions. Humility, because you know, it's easy to get cocky and talk about things as if you knew all the answers. We know we don't and that means we can learn from Docker and Microsoft >> Savannah: That's why you're curious. >> And the person who stops by the booth that we've never met before and says, "hey" and inclusivity, of course, building a community if you don't execute on that well you can't build a good community. The diversity of the community is what makes it stronger than a singular.. >> You have to come in and be cohesive with the community. >> Matt: Yeah. >> The app focus is a really, I think, relevant right now. The timing of this is right online. I think Scott had a good answer I thought on the relationship and how he sees it. I think it's going to be a nice extension to not a extension that way, but like. >> It probably will be as well. >> Almost a pun there John, almost a pun. >> There actually might be an extension, but evolution what we're going to get to which I think is going to be pure application server, like. >> Yep, yep. Like performance for new class of developer. Then now the question comes up and we've been watching developer productivity. 
That is a big theme and our belief is that if you take digital transformation to its conclusion IT and developers aren't a department serving the business they are the business. That means the developer workflows will have to be radically rebuilt to handle the velocity and new tech for just coding. I call it architectural list. >> I like that. I might steal that. >> It's a pun, but it's also brings up the provocative question. You shouldn't have to need an architecture to code. I mean, Java was great for that reason in many ways. So, if that happens if the developers are running the business that means more apps. The apps is the business. You got to have tool chains and productivity. You can't have fragmentation. Some people are saying WebAssembly might, fork tool chains, might challenge the developer productivity. what's your answer to that? How would you address that objection? >> I mean the threat of forking is always lurking in the corner in open source. In a way it's probably a positive threat because it keeps us honest it keeps us wanting to be inclusive again and keep people involved. Honestly though, I'm not particularly worried about it. I know that the W-3 as a standards body, of course, one of the most respected standards bodies on the planet. They do html, they do cascading style sheets. WebAssembly is in that camp and those of us in the core are really very interested in saying, you know, come on in, let's build something that's going to be where the core is solid and you know what you got and then you can go into the resurgence of the application server. I mean, I wholeheartedly agree with you on that, and we can only get there if we say, all right, here are the common paradigms that we're all going to agree to use, now let's go build stuff. >> And as we've been saying, developers are setting, I think are going to set the standards and they're going to vote with their code and their feet, if you will. >> Savannah: A hundred percent. >> They will decide if you're not aligning with what they want to do. okay. On how they want to self-serve and or work, you'll figure that out. >> Yep, yep. >> You'll get instant feedback. >> Yeah. >> Well, you know, again, I tell you a huge fan of Docker. One of the things that Docker understood at the very outset, is that they had an infrastructure tool and developers were the way to get adoption, and if you look at how fast they got adoption versus many, many other technologies that are profoundly impacted. >> Savannah: Wild. >> Yeah. >> Savannah: It's a cool story. >> It's because they got the developers to go, "This is amazing, hey infrastructure folks, here's an infrastructure tool that we like" and the infrastructure folks are used to code being tossed over the wall are going, "Are you for real?" I mean, and that was a brilliant way to do it and I think that what.. >> John: Yeah, yeah. >> We want to replay in the WebAssembly world is making it developer friendly and you know the kind of infrastructure that we can actually operate. >> Well congratulations to the entire community. We're huge fans of the concept. I kind of see where it's going with connect the dots. You guys getting a lot of buzz. I have to ask you, my final question is the hype is beyond all recognition at this point. People are super pumped and enthusiastic about it and people are looking at it maybe some challenging it, but that's all good things. How do you get to the next level where people are confident that this is actually going to go the next step? Hype to confidence. 
We've seen great hype. Envoy was hyped up big time before it came in, then it became great. That was one of my favorite examples. Hype is okay, but now you got to put some meat on the bone. The sizzle on the stake so to speak. So what's going to be the stake for you guys as you see this going forward? What's the need? >> Yeah, you know, I talk about our first guiding story was, you know, blinking cursor to deployed application in two minutes. That's what you need to win developers initially. So, what's the next story after that? It's got to be, Fermyon can run real world applications that solve real world problems. That's where hype often fails. If you can build something that's neat but nobody's quite sure what to do with it, to use it, maybe somebody will discover a good use. But, if you take that gambling asset, >> Savannah: It's that ending answer that makes the difference. >> Yeah, yeah. So we say, all right, what are developers trying to build with our platform and then relentlessly focus on making that easier and solving the real world problem that way. That's the crucial thing that's going to drive us out of that sort of early hype stage into a well adopted technology and I talk from Fermyon point of view but really that's for all of us in the WebAssembly. >> John: Absolutely. >> Very well stated Matt, just to wrap us up when we're interviewing you here on theCUBE next year, what do you hope to be able to say then that you can't say today? >> All this stuff about coffee we didn't cover today, but also.. (all chuckles) >> Savannah: Here for the coffee show. Only analogies, that's a great analogy. >> I want to walk here and say, you know last time we talked about being able to achieve density in servers that was, you know, 10 times Kubernetes. Next year I want to say no, we're actually thousands of times beyond Kubernetes that we're lowering people's electricity bill by making these servers more efficient and the developers love it. >> That your commitment to the environment is something I want to do an entirely different show on. We learned that 7-8% of all the world's powers actually used on data centers through the show this week which is jarring quite frankly. >> Yeah, yeah. Tragic would be a better way of saying that. >> Yeah, I'm holding back so that we don't go over time here quite frankly. But anyways, Matt Butcher thank you so much for being here with us. >> Thank you so much for having me it was pleasure.. >> You are worth the hype you are getting. I am grateful to have you as our WebAssembly thought leader. In addition to Scott today from Docker earlier in the show. John Furrier, thanks for being my co-host and thank all of you for tuning into theCUBE here, live from Detroit. I'm Savannah Peterson and we'll be back with more soon. (ambient music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Chris Cornett | PERSON | 0.99+ |
John | PERSON | 0.99+ |
Matt Butcher | PERSON | 0.99+ |
Cosmonic | ORGANIZATION | 0.99+ |
PBS | ORGANIZATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
Savannah | PERSON | 0.99+ |
Scott | PERSON | 0.99+ |
BBC | ORGANIZATION | 0.99+ |
Savannah Peterson | PERSON | 0.99+ |
Justin Cormack | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Matt | PERSON | 0.99+ |
20 million | QUANTITY | 0.99+ |
Tuesday | DATE | 0.99+ |
Deliveroo | ORGANIZATION | 0.99+ |
Next year | DATE | 0.99+ |
SOSA | ORGANIZATION | 0.99+ |
20 million dollars | QUANTITY | 0.99+ |
next year | DATE | 0.99+ |
two minutes | QUANTITY | 0.99+ |
thousands | QUANTITY | 0.99+ |
CNCF | ORGANIZATION | 0.99+ |
Detroit | LOCATION | 0.99+ |
Scott Johnston | PERSON | 0.99+ |
Java | TITLE | 0.99+ |
Detroit, Michigan | LOCATION | 0.99+ |
Disney Plus | ORGANIZATION | 0.99+ |
KubeCon | EVENT | 0.99+ |
Docker | ORGANIZATION | 0.99+ |
Fermyon | PERSON | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
this week | DATE | 0.99+ |
both | QUANTITY | 0.99+ |
VMware | ORGANIZATION | 0.99+ |
one | QUANTITY | 0.99+ |
50 different companies | QUANTITY | 0.99+ |
hundreds of milliseconds | QUANTITY | 0.99+ |
Fermyon | ORGANIZATION | 0.99+ |
50 different tools | QUANTITY | 0.99+ |
WebAssembly | ORGANIZATION | 0.99+ |
today | DATE | 0.99+ |
ByteCode Alliance | ORGANIZATION | 0.98+ |
10 times | QUANTITY | 0.98+ |
90's | DATE | 0.98+ |
Apple | ORGANIZATION | 0.98+ |
four things | QUANTITY | 0.98+ |
first | QUANTITY | 0.98+ |
day three | QUANTITY | 0.97+ |
Kubernetes | ORGANIZATION | 0.97+ |
Both | QUANTITY | 0.97+ |
each | QUANTITY | 0.97+ |
Day three | QUANTITY | 0.97+ |
Docker | PERSON | 0.97+ |
Ruchir Puri, IBM and Tom Anderson, Red Hat | AnsibleFest 2022
>> Good morning live from Chicago. It's theCUBE on the floor at AnsibleFest 2022. This is day two of our wall-to-wall coverage. Lisa Martin here with John Furrier. John, we're gonna be talking next in this segment with two alumni about what Red Hat and IBM are doing to give Ansible users AI superpowers. As one of our alumni guests said, just off the keynote stage, we're nearing an inflection point in AI. >> The power of AI with Ansible is really gonna be innovative, I think an inflection point for a long time, because Ansible does such great things. This segment's gonna explore that innovation, bringing AI in and making people more productive, and more importantly, you know, this whole low-code, no-code thing is kind of right in the sweet spot of the skills gap. So it should be a great segment. >> Great segment. Please welcome back two of our alumni. Ruchir Puri is here, the Chief Scientist of IBM Research and an IBM Fellow. And Tom Anderson joins us once again, VP and General Manager at Red Hat. Gentlemen, great to have you on the program. Great to have you back. >> Thank you for having us. >> And thanks for joining us, fresh off the keynote stage. Really enjoyed your keynote this morning. Very exciting news. You have a project called Project Wisdom. We're talking about this inflection point in AI. Tell the audience, the viewers, what is Project Wisdom, and how does Wisdom differ from intelligence? >> I think Project Wisdom is really about, as I said, sort of combining two major forces that are in many ways disrupting and really reconstructing many aspects of our society, which are software and AI together. Yeah. And I truly believe it's gonna result in a sea change in how not just enterprises, but society, carries forward. And as I said, intelligence, at least artificial intelligence, I would argue, is in some ways more mechanical, if I may say it: it's about algorithms, it's about data, it's about compute. Wisdom is all about what is truly important to bring out. It's not just about bringing out an insight or a decision, but being able to explain that decision as well. It's almost like humans have wisdom, machines have intelligence, and it's about Project Wisdom. That's why we called it Wisdom. >> Because it is about being an assistant, augmenting humans. Just being there with the humans, and almost think of it as behaving and interacting with them as another colleague would, versus intelligence, which is, you know, as I said, more mechanical: data, compute, and algorithms crunched together. And we wanna bring the power of Project Wisdom and artificial intelligence to developers to, as you said, close the skills gap, to be able to really make them more productive and have Wisdom for Ansible be their assistant. Yeah. To be able to get things done for them that they would find in many ways mundane, in many ways hard to find, and again, be an assistant and augment them. >> You know, what's interesting, I want to get into the origin, how it all happened, but it's interesting, IBM Research is well known for the deep tech, big engineering, and you guys have been doing this for a long time, so congratulations. But it's interesting here at this event, even on stage here, you're starting to see the automation come in. So the question comes up, scale. So what happens? IBM buys Red Hat, you go raid the IP treasure trove of AI. I mean, this is kind of like bringing two killer apps together.
The Ansible configuration automation layer with AI is just kind of a, >>Yeah, it's an amazing relationship. I was gonna say marriage, but I don't wanna say marriage cause I may be >>Last. I didn't mean to say raid the treasure trove, but the kind of >>Like, oh my God. An amazing relationship where we bring all this expertise around automation, obviously around IT and application infrastructure automation, and IBM Research, Ruchir and his team, bring this amazing capacity and experience around AI. Bring those two things together, and applying AI to automation for our teams is so incredibly fantastic. I just can't contain my enthusiasm about it. And you could feel it in the keynote this morning that Ruchir was doing, the energy in the room when folks saw that, it's just amazing. >>The geeks are gonna love it for sure. But here I wanna get into the whole evolution. Computers programming computers, remember the old days, Thinking Machines was a company generations ago that I think was sold or went outta business, but self-learning machines, computers programming computers. It was actually on your slide, you kind of pieced out this next wave of AI and machine learning, starting with expert systems, really kind of, I'd almost say static, but like okay programs. Yeah, yeah. And then now with machine learning, and that big debate was unsupervised versus supervised, which is not really perfect. Deep learning, which now explores some things. But now we're at another wave. Take us through the thought there, explaining what this transition looks like and why. >>I think we are, as I said, really at an inflection point in the journey of AI. And I think it's fair to say data is the fuel of AI: without data, AI doesn't exist. But if I were to train AI with what is known as supervised learning, on data that is labeled, you are almost limited, because there are only so many people who have that expertise. And interestingly, they all have day jobs, so they're not just gonna sit around and label this for you. Some people may be available, but as Tom said, we are really trying to apply it to some very key domains which require subject matter expertise. This is not like labeling cats and dogs that everybody in the world knows, where the community is very large; here the skills to go around are not that many. >>And I truly believe to apply AI to the world of enterprise information technology automation, you have to have unsupervised learning, and that's the only way to scale. And these two trends, information technology percolating across every enterprise, and unsupervised learning, which is learning on this very large amount of data, with of course very large compute and some very powerful algorithms like transformer architectures and others which have been disrupting the domain of natural language as well, are coming together with what I described as foundation models. Anybody who plays with it, you'll be blown away. Literally blown away. >>And you call that self supervision at scale, which is kind of the foundation. So I have to ask you, cuz this comes up a lot with cloud, cloud scale: everyone talks horizontally scalable cloud, but vertically specialized applications are where domain expertise and data play. So the better the data, the better the self supervision, the better the learning. But if it's horizontally scalable, there is a lot to learn. 
So how do you create that data ops where the machines are gonna be tuned to maximize what's addressable, but also what's in the domain too? You gotta have that kind of diversity. Can you share your thoughts on that? >>Absolutely. So in the domain of foundation models, there are two main stages, I would say. One is what I'll describe as pre-training, which is, think of it as the machine in this particular case being knowledgeable about the domain of code in general. It knows the syntax of Python, JavaScript, Go, C, Java and so on, and also YAML as well, which one would argue is the language of information technology. And once you get to that level, it's almost like having a developer who knows all of this but may not be an expert at Ansible just yet. He or she can become an expert at Ansible but is not there yet. That's what I'll call background knowledge. And also, in the case of foundation models, they are very adept at natural language as well. So they can connect natural language to code, but they are not yet expert at the domain of Ansible. >>Now the second stage of learning is called fine tuning, which is about this data ops where I take data, which is the SME data in this particular case. And it's curated. So this is not just generic data you pick off GitHub, where you don't know what exists out there. This is data which is governed, which we know is of high quality as well. And you think of it as specializing the generic, pre-trained AI with that data. And those two stages, including the governance of the data that goes into it, result in this really breakthrough technology that we've been calling Project Wisdom. Our first application is Ansible, but just watch that area. There are many more to come, and I'm really excited about this partnership with Red Hat, because across IBM and Research, if there is one place where we can find an excited, open source, open developer community, it is Red Hat. >>Yeah. >>Tom, talk about the role of open source and Project Wisdom, the involvement of the community, and maybe Ruchir, any feedback that you've gotten since coming off stage? I'm sure you were mobbed. >>Yeah, so for us, it's called Project Wisdom, not Product Wisdom. Right? And so, no, you didn't say that, but I wanna just emphasize that it is a project, and for us that is a key word in the upstream community: this is where we're inviting the community to jump on board with us and bring their expertise. All these people that are here will start to participate. They're excited about it. They'll bring their expertise and experience, and that fine tuning of the model will just get better and better. So we're really excited about introducing this now and involving the community, because everything that Red Hat does is around the community and this is no different. And so we're really excited about Project Wisdom. >>That's interesting, the project piece, because if you see in today's world the innovation strategy, before where we are now, go back to say 15 years ago, it was all standards, it's gotta have standards bodies. You can still innovate and differentiate, but yet with open source and community, it's a blending of research and practitioners. I think that to me is a big story here, is that what you guys are demonstrating is the combination of research and practitioners in the project. 
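As a rough illustration of the two-stage approach Puri describes, pre-training on general code and then fine tuning on curated, governed Ansible content, here is a minimal sketch using the open source Hugging Face libraries. The base model name, data file, and hyperparameters are placeholders for illustration, not details published for Project Wisdom itself.

```python
# Illustrative only: stage-two "fine tuning" of a generic code model on curated
# Ansible content. Model name, file path, and hyperparameters are placeholders,
# not anything from Project Wisdom.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

BASE_MODEL = "example-org/code-pretrained-base"   # stage 1: already knows Python, Go, Java, YAML...
CURATED_FILE = "curated_ansible_pairs.jsonl"      # stage 2: governed, high-quality SME data

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.pad_token or tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Each record pairs a natural-language request with the Ansible task YAML it should yield.
data = load_dataset("json", data_files=CURATED_FILE, split="train")

def tokenize(record):
    text = record["prompt"] + "\n" + record["ansible_task"]
    return tokenizer(text, truncation=True, max_length=1024)

tokenized = data.map(tokenize, remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ansible-finetune",
                           num_train_epochs=3,
                           per_device_train_batch_size=8),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),  # causal LM objective
)
trainer.train()
```

The point of the sketch is simply that the second stage is ordinary supervised training on a small, curated set, which is why the governance of that data matters as much as the pre-training itself.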
Yes. So how does this play out? Cuz this is kind of like how things are gonna get done in the cloud, cuz Amazon's not gonna just standardize their stack at higher level services, nor is Azure, and they might get some plumbing commonalities below, but for Project Wisdom to be successful, it doesn't need to have standards. If I get this right, if I'm on point here, what do you guys think about that? React to that. >>So I definitely think standardization in terms of what we will call the MLOps pipeline, for models to be deployed and managed and operated. Models are like any other code: there's standardization on the DevOps pipeline, there's standardization on the machine learning pipeline. And these models will be deployed in the cloud because they need to scale. The only way to scale to thousands of users is through cloud. And there are standard pipelines that we are working on and architecting together with the Red Hat community, leveraging open source packages, really to help scale out the AI models of Wisdom together. And another point I wanted to pick up on, just what Tom said, I've been in the area of productizing AI for long now, having experience with Watson as well. The only scenario where I've seen AI being successful is where it meets the criteria of what I describe as the flywheel of AI. >>What do I mean by flywheel of AI? It cannot be some research people build a model. It may be wowing, but you roll it out and there's no feedback, and okay, we are done. Actually, the more people use these models, the more they give you feedback, the better it gets, because it learns what is right and what is not right. It will never be right the first time. Actually, the data it is trained on is a depiction of reality. It is not reality in itself. Reality is a constantly moving target, and the only way to make AI successful is to close that loop with the community. And that's why I just wanted to reemphasize the point on why community is that important. >>And what's interesting, Tom, is this is the difference between standards bodies, old school, and communities. Because developers are very efficient in their feedback. They jump to patterns that serve their needs, whether it's self-service or whatever. You can kind of see what's going on. It's either working or not. >>Yeah. We get immediate feedback from the community and we know real fast when something isn't working and when something is working. There are no problems with the flow of data between the members of the community and the developers themselves. So yeah, it's great. It's gonna be fantastic. The energy around Project Wisdom already, I bet. We're gonna go down to the Project Wisdom breakout session, and I bet you the room will be overflowing. >>How do people get involved real quick? Take a minute to explain how I would get involved. I'm a community member. I'm watching this video, I'm intrigued. This has got me enthusiastic. How do I get more confident with this opportunity? >>So first of all, you go to redhat.com/project Wisdom and you register your interest that you wanna participate. We're gonna start growing this process, bringing people in, getting ready to make the service available to people to start using and to experiment with, and start getting their feedback. So this is the beginning of, of a journey. 
This isn't the, you know, this isn't the midpoint of a journey, this is the beginning. You know, even though the work has been going on for a year, this is the beginning of the community journey now. And so we're gonna start working together through channels like Discord and whatnot to be able to exchange information and bring people in. >>What are some of the key use cases, maybe Ruchir, starting with you, that you think, maybe dream use cases, that you think the community will help to really uncover as we're looking at Project Wisdom really helping in this transformation of AI? >>So if I focus on, let's say, Ansible itself, there are much wider use cases, but on Ansible itself, I would say I've been working on AI for long, but I had not realized the excitement and the power of the Ansible community itself. It's very large, it's very bottoms-up, which I love actually. But as I went to a lot of the CTOs and CIOs of a lot of our customers as well, it was becoming clear the use cases of: I've got a thousand Ansible developers or IT or automation experts, they write code all the time, and I don't know what all of this code is about. So the system administrators, the managers, they're trying to figure out how to organize all of this together, think of it as a Google for finding all of this automation code, automation content. >>And I'm very excited about not just the use cases that we demonstrated today, that is the beginning of the journey, but to be able to help enterprises in finding the right code through natural language interfaces, generating the code, helping developers debug their code as well, giving them predictive insights into what may happen: just watch out for it when you deploy this, something like that happened before, just watch out for it as well. So I'm excited about the entire life cycle of IT automation, not just at build time, but also at the time of deployment, at the time of management. This is just the start of a journey, but there are many exciting use cases abound for Ansible and beyond. >>It's gonna be great to watch this as it unfolds. Obviously just announcing this today. We thank you both so much for joining us on the program, talking about Project Wisdom and sharing how the community can get involved. So you're gonna have to come back next year. We're gonna have to talk about what's going on, cause I imagine with the excitement of the community and the volume of the community, this is just the tip of the iceberg. Absolutely. >>This is absolutely, exactly it. We're excited about it. >>Excellent. And you should be. Congratulations. Thanks again for joining us. We really appreciate your insights. Thank you. >>Thank you for having >>Us. For our guests and John Furrier, I'm Lisa Martin and you're watching theCUBE live from Chicago at AnsibleFest 2022. This is day two of wall to wall coverage on theCUBE. Stick around. Our next guest joins us in just a minute.
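A brief aside on the "flywheel of AI" Puri described a little earlier: closing the loop means capturing whether users accept, edit, or reject each generated suggestion, and folding those signals back into the next fine-tuning round. The sketch below is only a minimal illustration of that idea; the field names, file, and categories are hypothetical, not anything Red Hat or IBM have specified.

```python
# Minimal sketch of a feedback "flywheel": log what users did with each suggestion,
# then turn accepted or user-edited suggestions into new curated training pairs.
# Field names and storage are hypothetical placeholders.
import json
import time

FEEDBACK_LOG = "wisdom_feedback.jsonl"

def record_feedback(prompt, suggestion, outcome, final_text=""):
    """outcome is 'accepted', 'edited', or 'rejected'."""
    event = {"ts": time.time(), "prompt": prompt, "suggestion": suggestion,
             "outcome": outcome, "final_text": final_text}
    with open(FEEDBACK_LOG, "a") as f:
        f.write(json.dumps(event) + "\n")

def next_training_examples():
    """Yield accepted or user-edited suggestions as candidate fine-tuning records."""
    with open(FEEDBACK_LOG) as f:
        for line in f:
            event = json.loads(line)
            if event["outcome"] in ("accepted", "edited"):
                yield {"prompt": event["prompt"],
                       "ansible_task": event["final_text"] or event["suggestion"]}
```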
Architecting SaaS Superclouds | Supercloud22
>>Welcome back to Supercloud 22, our inaugural event. It's a pilot event here in theCUBE studios, we're live and streaming virtually until we do it in person, maybe next year. I'm John Furrier, host of theCUBE, with Dave Vellante, and two great guests, distinguished engineers, managers, CTOs, investors. Mariana Tessel is the CTO of Intuit, and In Sik Rhee is a founder of Vertex Ventures. Both have a lot of DNA here: a founder of Loudcloud with Marc Andreessen and Ben Horowitz, and a variety of other great ventures you've done. And now you're an investor. Yep. Mariana, you've been a seasoned CTO, VP of engineering, VMware, Docker, Intuit. Thanks for joining us. >>Absolutely. >>So super cloud is a thing. And apparently it's got a lot of momentum, and you guys got stats over there at Intuit, and you're investing, and we were challenged on super cloud. Our initial thesis was you build on the clouds, get all that leverage like Snowflake, you get a good differentiation, and then you compete and then move to other clouds. Now it's becoming a thing where I can do this, every enterprise could possibly do it. So I want to get your guys' thoughts on what you think of the super cloud concept and where the holes in it are, what needs to be defined. And so we'll start with you. You've done a lot of cloud things in your day. What >>Do you think? Yeah, the whole cloud journey started with a desire to consolidate and a desire to actually provide uniformity and standards driven ways of doing things. And I think Amazon was a leader there. They helped kind of teach everybody else. You know, when I was at Loudcloud, we were trying to do it with proprietary stacks, it just wouldn't work. But once everyone standardized upon Unix, the chip sets no longer became as relevant. They did a lot of good things there, but what's happened since then is now you've got competing standards at the API layer, at the interface layer, no longer at the chip set layer, no longer at the operating system layer. Right? So in the evolution, the battles are still there. When you talk about multicloud and super cloud, though, one of the big things you have to keep in mind is latency is not free. Latency is very expensive and it's getting even more expensive now with multi-cloud. So you have to really understand where the separations of boundaries are between your data, your compute, and the network is just there as a facilitator to help bind compute and data. Right? And I think there's a lot of bets being made across different vendors like Cloudflare, Akamai, as well as Amazon, Google, Microsoft, in terms of how they think we should take computing either to the edge, from the core, or back and forth. >>These, this is structural change. I mean, this is structural, >>It's desired by incumbents, but it's not something that I'm seeing from the consumption side. I'd love to hear from the end user's perspective, from a consumption point of view, like how much edge computing really matters. Right. >>Mariana. >>So I think there's kind of a story of like two, it kind of cuts both ways, no pun intended. On one end, it is really simplifying to actually go into a single cloud and standardize on it and just have everything there. But I think what over time companies find is that they end up in multiple clouds, whether through acquisitions or through needing to use a service in another cloud. 
So you do find yourself in a situation where you have multi-cloud and you have to kind of work through it and understand how to make it all work, and latency is an issue, but also for many, many workloads you can work around it, and you can make it work where you have workloads that actually span multiple vendors and clouds. Having said that, I would say the world is such that it is still a simplifying assumption: if you go to a single cloud, it's much easier to just go and bet on that. >>Easier in terms of everything's integrated, IaaS works with SaaS, they solve a lot of problems. >>Correct. And for your developers, you can actually provide an environment that's super homogenous, simple. You can use services easily up and down the stack. And we actually made that deliberate decision. When we started migrating to the cloud, at the beginning it was like, oh, let's do hybrid, we'll make it so it works anywhere. It was so complicated. It was not worth it. >>When did you give up, what was the moment? Was there a flash point where you said, oh, this is terrible. This is >>Dead. Yeah. When we started to try to make it interoperable and you just see what it requires to do that and the complexity of the architecture, it just became not worth it for the gains you have. >>So speaking obviously as a SaaS provider, right. So it just didn't make business case sense for you guys to do that. So is super cloud then an infrastructure thing? We just heard from Benoit Dageville that they're going beyond instantiating their data cloud. They're actually running their own little snow grid, they called it. And then when I asked him, well, what about latency? He said, well, we copy data over, so, okay, that's what you have to do, but that's a singular experience with the same governance and the same security. Just wasn't worth it for you guys is what I'm hearing. >>Correct. But again, for some workloads or for some services that we want to use, we are gonna go there and we are gonna then figure out what the workaround is for the latency issue, whether it's like copying or, you know, redundancy. >>Well, the question I have, Dave, on Snowflake, and maybe the question for you and the panel, is Snowflake a TAM expansion opportunity, or is there a technical reason to go to other clouds? >>I think they wanted to leverage the hyperscale infrastructure globally. And they said it's out there, it's a free gift, we're gonna go take it. I think it started with we're on AWS, and then we're on Azure, and then we're on Google. And then they said, why don't we just connect all these and make it a singular experience? And yeah, I guess it's a TAM expansion as a differentiator, and it adds value, right? If I can share data across that global network, >>We have customers on Azure now, >>Right? Yeah. Yeah. Of course. >>You guys don't need to go to GCP. What do you think about that? >>Well, I think Snowflake's in a good position cuz they work mostly with analytical workloads, and you have capacity that's always gonna increase, like no one subtracts their analytical workload, like ever, right? So there's just compounded growth, like 50% or 80% for many enterprises, despite their best intentions not to collect more data, they just can't stop doing it. 
So it's different than if you're like an Oracle or a transactional database where you don't have those kind of infinite growth paths. So Snowflake's gonna continue to expand footprint with their customers. They don't mind, as long as they can figure out the lowest common denominator cost for that. >>Yeah. So it makes sense to be in all the clouds >>For them, for, for them, for sure. Yeah. >>But, but, but Oracle just announced with Microsoft what I would call super cloud, a cross cloud database service running on OCI and Azure with very low latency and a database that looks like the singular experience. Yeah. With a PaaS layer >>That lost me after OCI, that's >>Okay. You know, but that's the, that's the BS answer for all you VCs, that nobody develops on Oracle? Well, it's a 240 billion market cap company. Show me who you all want to be. >>We're gonna talk about SRDF and EMC next, you >>All want Oracle. So there we go. You throw that into, you all want Oracle to buy your companies, your funding, you know, cause, cause we all wanna be like Oracle with that kinda cash flow. But, but anyway, >>Here's, here's one thing that I'm noticing that is gonna be really practical, I think, for companies that do run SaaS, because you have all these solutions, whether it's like analytics or monitoring or logging or whatever, and each one of them is very data hungry, and all of them have like SaaS solutions that end up copying the data, moving data to their cloud, and then they might charge you by the size of your data. It does become kind of overwhelming for companies to use that many tools and basically have that data kind of charged for in multiple places, because you use it for different purposes, or just in general, if you have a lot of data, that that is becoming an issue. So that's something that I've noticed in our own kind of world, but it's just something that I think companies need to think about how they solve, because eventually a lot of companies will say, I cannot have all these solutions, so there's no way I'm gonna be willing to have so many copies of the data and actually pay for that so many times. Just something to think about. >>But one of the criticisms of the super cloud concept is that it's just SaaS. If I'm running workloads on prem and I've got, you know, a connection to the cloud, which you probably do, that's SaaS, what's the big deal, and that's not anything new or different. So I'd love to get your thoughts on that. But Goldman Sachs, for instance, just announced a service last re:Invent with AWS, connecting their tools, their data, and their software from on-prem to AWS, they're offering it as a service. I'm like, Hmm. Kind of looking like Supercloud, but maybe it's just SaaS. >>It could be. And what I'm talking about is not so much what you wanna connect your data to. But the idea is a lot of the providers of different services, like in the past and at higher layers, they're actually copying the data. They need the data in their cloud or their solution. And it just becomes complicated and expensive, is kind of my point. So yes, connecting it, for you to have the data in one place and then be able to connect to it, I think that is valid, if that's kinda what you think about as a super cloud, that is a valid need, I think, that companies will >>Have, where developers actually want access to tools that might exist. 
>>Also the key is developers, right? Yeah. Developers make all the decisions, not database administrators, not, you know, a hundred percent security engineers, not admins. So what's really interesting is where are the developers going next? If you look at the current winners in the current ecosystem, companies like MongoDB, I mean, they captured the minds of the JavaScript, the Node.js developers very early on. And I started Couchbase, and I can tell you the difference was that capture motion was so important. So developers are basically used to this game-like experience now where they want to see tools that are free, whether it's open source or not, they actually don't care. They just want it, and they want it SaaS, they want it SaaS delivered on demand, right, and pay as you go. And so there's a lot of these different frameworks coming out, next generation, no code, low code, whether it's Java, JavaScript, Rust, whatever, you know, Golang. And there's a lot of people fighting religious wars about how to develop the next modern design pattern. Okay. And that's where a lot of the excitement is, and how we look at investment opportunities. Like where are those big bets, who are the frustrated developers, why are they frustrated, what's wrong with their current environment? You know, do they really enjoy using Kubernetes, or trying to use Kubernetes? Yeah. Right. Like developers have a very different view than operators, >>But you mentioned Couchbase. I mean, I look at Couchbase, what they're doing with Capella, as a form of Supercloud. I mean, I think that's excellent, they're bringing that out to the edge. We're gonna hear later on from someone from Couchbase that's gonna talk about that. Now, it's kind of a lightweight, you know, sort of, it's gonna be a synchronization, but it's the beginning >>A cool new venture deal that I'm not in, but was like DuckDB. I'm like, what's DuckDB? Like, well, it's an in-memory database that has like this remote store thing. I'm like, okay, that sounds interesting. Like, let's call Mike Olson, cuz that sounds like Sleepycat redone for a distributed world. But like, there's a lot of people refactoring design patterns that we all grew up with since the early days. Right? >>Yeah. That refactoring, I think that's the big pattern. So I have to ask you guys, what are you guys investing in? We've got a couple minutes left to chat about that. What are you investing in at Intuit from a CTO engineering perspective, and what are you investing in that feels super cloud like to you? >>Well, the thing that I'm focused on is to make sure that we have the absolute best in the world development environment for our engineers, where it's modern, it's easy to use, and it incorporates as many things as we can into that environment so the engineers don't have to think about it. Like, one big example would be security and how we incorporated that into the development environment, so again, the engineers don't have to bother with trying to think through how they secure their workloads at every step of the way. There are other things that we incorporated, whether it's like rollbacks or monitoring or a bunch of other things. But I think that's really an investment that has paid off for us. We actually started investing in the development environment several years ago. 
We started measuring our development velocity and it actually went up by six X just by investing in that. So >>User experience, developer experience and productivity, pretty much, right? >>Yeah. Absolutely. Yeah. That's like a big investment area for us, that, you know, cloud, cloud >>Sounds like a super cloud like factor, and I'm assuming you're on AWS. >>We are mostly on AWS. Yes. >>And so what are you investing in, from a VC money doling out standpoint, that feels super cloud like? >>So very similar to what we just touched on, a lot of developer tool experiences. We have a company that we've invested in called OpsLevel that does service catalogs, it's helping you understand where your services live and how they can be accessed, and the enterprise controls that come with that. And then we have a company called Lugo that helps you do serverless debugging, container debugging, cuz it turns out debugging distributed applications is a real problem right now, you can only do so much by log tracing, right? We have a company we haven't announced yet that's in the WebAssembly space. So we're looking at modernizing the next generation PaaS stack and throwing everything out the window, including Java and all of the current prebuilt components, because it turns out 90% of enterprise workloads are actually not used. They're just boilerplate code you compiled with, sitting there as vulnerabilities that no one's actually accessing, but you still have to ship all of it. So we have a lot of bloatware happening in the enterprise. So we're thinking about how do you skinny that up with a next generation PaaS that's enterprise capable, with security context and frameworks >>Super PaaS. >>Well, yeah, super PaaS. That's a kind of good way to, well, is >>It, is it a consistent developer experience across clouds? >>It is. And WebAssembly is a very raw standard, if you can call it that, I mean, but it's supported by every modern browser, every major platform vendor and cloud, and Adobe and others are using it for their own uses. And it's not just about your edge browser compute. You can take the same framework and compile it down to server side as well as client side, just like JavaScript was a client side tool before it became Node. Right. So we're looking at that as a very interesting opportunity. It's very nascent. Yeah. >>Great patterns. Yeah. Well, thanks so much for spending the time out of your busy day, Mariana. Thanks for your commentary. Appreciate you coming on theCUBE's first inaugural Supercloud event, pilot. Thanks for sharing. Thanks for having, thanks for having us. Okay. More coverage here, Supercloud 22. I'm John Furrier with Dave Vellante, stay with us. We got our cloud panel coming up next.
Matt LeBlanc & Tom Leyden, Kasten by Veeam | VMware Explore 2022
(upbeat music) >> Hey everyone and welcome back to The Cube. We are covering VMware Explore live in San Francisco. This is our third day of wall to wall coverage. And John Furrier is here with me, Lisa Martin. We are excited to welcome two guests from Kasten by Veeam, please welcome Tom Leyden, VP of marketing, and Matt LeBlanc, not Joey from Friends, Matt LeBlanc, the systems engineer for North America at Kasten by Veeam. Welcome guys, great to have you. >> Thank you. >> Thank you for having us. >> Tom-- >> Great, go ahead. >> Oh, I was going to say, Tom, talk to us about some of the key challenges customers are coming to you with. >> Key challenges that they have at this point is getting up to speed with Kubernetes. So everybody has it on their list. We want to do Kubernetes, but where are they going to start? Back when VMware came on the market, I was switching from Windows to Mac and I needed to run a Windows application on my Mac and someone told me, "Run a VM." Went to the internet, I downloaded it. And in a half hour I was done. That's not how it works with Kubernetes. So that's a bit of a challenge. >> I mean, Kubernetes, Lisa, remember the early days of The Cube, OpenStack was kind of transitioning, Cloud was booming and then Kubernetes was the paper that became the thing that pulled everybody together. It's now de facto in my mind. So that's clear, but there's a lot of different versions of it, and you hear VMware, they call it the dial tone. Remember, Pat Gelsinger: it's a dial tone. Turns out that came from Kit Colbert, or no, I think AJ kind of coined the term here, but it's since been there, it's been adopted by everyone. There's different versions. It's open source. AWS is involved. How do you guys look at the relationship with Kubernetes here at VMware Explore, with Kubernetes and the customers, because they have choices. They can go do it on their own. They can add a little bit with Lambda, Serverless. They can do more here. It's not easy. It's not as easy as people think it is. And then this is a skills gap problem too. We're seeing a lot of these problems out there. What's your take? >> I'll let Matt talk to that. But what I want to say first is this is also the power of the cloud native ecosystem. The days are gone where companies were selecting one enterprise application and they were building their stack with that. Today they're building applications using dozens, if not hundreds of different components from different vendors or open source platforms. And that is really what creates opportunities for those cloud native developers. So maybe you want to... >> Yeah, we're seeing a lot of hybrid solutions out there. So it's not just choosing one vendor, AKS, EKS, or Tanzu. We're seeing all of the above. I had a call this morning with a large healthcare provider and they have a hundred clusters and that's spread across AKS, EKS and GKE. So it is covering everything. Plus the need to have an on-prem solution to manage it all. >> I got a stat, I got to share that I want to get your reactions to, and you can laugh or comment, whatever you want to say. Talked to a big CSO, CXO, executive, big company, I won't say the name. We got a thousand developers, a hundred of them have heard of Kubernetes, okay, 10 have touched it and used it, and one's good at it. And so his point is that there's a lot of Kubernetes need that people are getting aware of. So it shows that there's more and more adoption around. You see a lot of managed services out there. 
So it's clear it's happening and I'm over exaggerating the ratio probably. But the point is the numbers kind of make sense as a thousand developers. You start to see people getting adoption to it. They're aware of the value, but being good at it is what we're hearing is one of those things. Can you guys share your reaction to that? Is that, I mean, it's hyperbole at some level, but it does point to the fact of adoption trends. You got to get good at it, you got to know how to use it. >> It's very accurate, actually. It's what we're seeing in the market. We've been doing some research of our own, and we have some interesting numbers that we're going to be sharing soon. Analysts don't have a whole lot of numbers these days. So where we're trying to run our own surveys to get a grasp of the market. One simple survey or research element that I've done myself is I used Google trends. And in Google trends, if you go back to 2004 and you compare VMware against Kubernetes, you get a very interesting graph. What you're going to see is that VMware, the adoption curve is practically complete and Kubernetes is clearly taking off. And the volume of searches for Kubernetes today is almost as big as VMware. So that's a big sign that this is starting to happen. But in this process, we have to get those companies to have all of their engineers to be up to speed on Kubernetes. And that's one of the community efforts that we're helping with. We built a website called learning.kasten.io We're going to rebrand it soon at CubeCon, so stay tuned, but we're offering hands on labs there for people to actually come learn Kubernetes with us. Because for us, the faster the adoption goes, the better for our business. >> I was just going to ask you about the learning. So there's a big focus here on educating customers to help dial down the complexity and really get them, these numbers up as John was mentioning. >> And we're really breaking it down to the very beginning. So at this point we have almost 10 labs as we call them up and they start really from install a Kubernetes Cluster and people really hands on are going to install a Kubernetes Cluster. They learn to build an application. They learn obviously to back up the application in the safest way. And then there is how to tune storage, how to implement security, and we're really building it up so that people can step by step in a hands on way learn Kubernetes. >> It's interesting, this VMware Explore, their first new name change, but VMWorld prior, big community, a lot of customers, loyal customers, but they're classic and they're foundational in enterprises and let's face it. Some of 'em aren't going to rip out VMware anytime soon because the workloads are running on it. So in Broadcom we'll have some good action to maybe increase prices or whatnot. So we'll see how that goes. But the personas here are definitely going cloud native. They did with Tanzu, was a great thing. Some stuff was coming off, the fruit's coming off the tree now, you're starting to see it. CNCF has been on this for a long, long time, CubeCon's coming up in Detroit. And so that's just always been great, 'cause you had the day zero event and you got all kinds of community activity, tons of developer action. So here they're talking, let's connect to the developer. There the developers are at CubeCon. So the personas are kind of connecting or overlapping. I'd love to get your thoughts, Matt on? 
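Tom's Google Trends comparison above is easy to reproduce. A rough sketch follows, using the unofficial pytrends package; the tooling choice and the exact timeframe string are ours, not anything Kasten specified.

```python
# Rough reproduction of the comparison described above: relative Google search
# interest for "VMware" vs "Kubernetes" since 2004. pytrends is an unofficial
# Google Trends client; the library choice is ours, not Kasten's.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=0)
pytrends.build_payload(kw_list=["VMware", "Kubernetes"],
                       timeframe="2004-01-01 2022-08-31")

interest = pytrends.interest_over_time()          # pandas DataFrame, one column per term
print(interest[["VMware", "Kubernetes"]].tail())  # recent months: the two curves converge
```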
>> So from the personnel that we're talking to, there really is a split between the traditional IT ops and a lot of the people that are here today at VMWare Explore, but we're also talking with the SREs and the dev ops folks. What really needs to happen is we need to get a little bit more experience, some more training and we need to get these two groups to really start to coordinate and work together 'cause you're basically moving from that traditional on-prem environment to a lot of these traditional workloads and the only way to get that experience is to get your hands dirty. >> Right. >> So how would you describe the persona specifically here versus say CubeCon? IT ops? >> Very, very different, well-- >> They still go ahead. Explain. >> Well, I mean, from this perspective, this is all about VMware and everything that they have to offer. So we're dealing with a lot of administrators from that regard. On the Kubernetes side, we have site reliability engineers and their goal is exactly as their title describes. They want to architect arch applications that are very resilient and reliable and it is a different way of working. >> I was on a Twitter spaces about SREs and dev ops and there was people saying their title's called dev ops. Like, no, no, you do dev ops, you don't really, you're not the dev ops person-- >> Right, right. >> But they become the dev ops person because you're the developer running operations. So it's been weird how dev ops been co-opted as a position. >> And that is really interesting. One person told me earlier when I started Kasten, we have this new persona. It's the dev ops person. That is the person that we're going after. But then talking to a few other people who were like, "They're not falling from space." It's people who used to do other jobs who now have a more dev ops approach to what they're doing. It's not a new-- >> And then the SRE conversation was in site, reliable engineer comes from Google, from one person managing multiple clusters to how that's evolved into being the dev ops. So it's been interesting and this is really the growth of scale, the 10X developer going to more of the cloud native, which is okay, you got to run ops and make the developer go faster. If you look at the stuff we've been covering on The Cube, the trends have been cloud native developers, which I call dev ops like developers. They want to go faster. They want self-service and they don't want to slow down. They don't want to deal with BS, which is go checking security code, wait for the ops team to do something. So data and security seem to be the new ops. Not so much IT ops 'cause that's now cloud. So how do you guys see that in, because Kubernetes is rationalizing this, certainly on the compute side, not so much on storage yet but it seems to be making things better in that grinding area between dev and these complicated ops areas like security data, where it's constantly changing. What do you think about that? >> Well there are still a lot of specialty folks in that area in regards to security operations. The whole idea is be able to script and automate as much as possible and not have to create a ticket to request a VM to be billed or an operating system or an application deployed. They're really empowered to automatically deploy those applications and keep them up. >> And that was the old dev ops role or person. That was what dev ops was called. So again, that is standard. I think at CubeCon, that is something that's expected. >> Yes. >> You would agree with that. >> Yeah. 
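Matt's point about scripting and automating instead of filing a ticket is worth a concrete illustration. A minimal sketch with the official Kubernetes Python client is below; the namespace, names, and image are placeholders, and a real self-service setup would layer policy and RBAC on top of this.

```python
# Minimal sketch of the self-service automation described above: creating a
# Deployment programmatically instead of requesting one through a ticket.
# Namespace, names, and image are placeholders.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in-cluster

container = client.V1Container(
    name="web",
    image="registry.example.com/team/web:1.0",   # placeholder image
    ports=[client.V1ContainerPort(container_port=8080)],
)
deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="web", labels={"app": "web"}),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

apps = client.AppsV1Api()
apps.create_namespaced_deployment(namespace="team-self-service", body=deployment)
```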
>> Okay. So now translating VM World, VMware Explore to CubeCon, what do you guys see as happening between now and then? Obviously got re:Invent right at the end in that first week of December coming. So that's going to be two major shows coming in now back to back that're going to be super interesting for this ecosystem. >> Quite frankly, if you compare the persona, maybe you have to step away from comparing the personas, but really compare the conversations that we're having. The conversations that you're having at a CubeCon are really deep dives. We will have people coming into our booth and taking 45 minutes, one hour of the time of the people who are supposed to do 10 minute demos because they're asking more and more questions 'cause they want to know every little detail, how things work. The conversations here are more like, why should I learn Kubernetes? Why should I start using Kubernetes? So it's really early day. Now, I'm not saying that in a bad way. This is really exciting 'cause when you hear CNCF say that 97% of enterprises are using Kubernetes, that's obviously that small part of their world. Those are their members. We now want to see that grow to the entire ecosystem, the larger ecosystem. >> Well, it's actually a great thing, actually. It's not a bad thing, but I will counter that by saying I am hearing the conversation here, you guys'll like this on the Veeam side, the other side of the Veeam, there's deep dives on ransomware and air gap and configuration errors on backup and recovery and it's all about Veeam on the other side. Those are the guys here talking deep dive on, making sure that they don't get screwed up on ransomware, not Kubernete, but they're going to Kub, but they're now leaning into Kubernetes. They're crossing into the new era because that's the apps'll end up writing the code for that. >> So the funny part is all of those concepts, ransomware and recovery, they're all, there are similar concepts in the world of Kubernetes and both on the Veeam side as well as the Kasten side, we are supporting a lot of those air gap solutions and providing a ransomware recovery solution and from a air gap perspective, there are a many use cases where you do need to live. It's not just the government entity, but we have customers that are cruise lines in Europe, for example, and they're disconnected. So they need to live in that disconnected world or military as well. >> Well, let's talk about the adoption of customers. I mean this is the customer side. What's accelerating their, what's the conversation with the customer at base, not just here but in the industry with Kubernetes, how would you guys categorize that? And how does that get accelerated? What's the customer situation? >> A big drive to Kubernetes is really about the automation, self-service and reliability. We're seeing the drive to and reduction of resources, being able to do more with less, right? This is ongoing the way it's always been. But I was talking to a large university in Western Canada and they're a huge Veeam customer worth 7000 VMs and three months ago, they said, "Over the next few years, we plan on moving all those workloads to Kubernetes." And the reason for it is really to reduce their workload, both from administration side, cost perspective as well as on-prem resources as well. So there's a lot of good business reasons to do that in addition to the technical reliability concerns. >> So what is those specific reasons? 
This is where now you start to see the rubber hit the road on acceleration. >> So I would say scale and flexibility, that ecosystem, that opportunity to choose any application or any tool from that cloud native ecosystem, is a big driver. I wanted to add to the adoption point. Another area where I see a lot of interest is everything AI, machine learning. One example is also a customer coming from Veeam. We're seeing a lot of that and that's a great thing. It's an AI company that is doing software for automated driving. They decided that VMs alone were not going to be good enough for all of their workloads. And then for select workloads, the more scalable ones where scalability was more of a topic, they would move to Kubernetes. I think at this point they have like 20% of their workloads on Kubernetes and they're not planning to do away with VMs. VMs are always going to be there, just like mainframes still exist. >> Yeah, oh yeah. They're accelerating actually. >> We're projecting over the next few years that we're going to go to a 50/50 and eventually lean towards more Kubernetes than VMs, but it is going to be a mix. >> Do you have a favorite customer example, Tom, that you think really articulates the value of what Kubernetes can deliver to customers, where you guys are really coming in and helping to demystify it? >> I would think Sopra Steria is a really great example, and you know the details about it. >> I love the Sopra Steria story. They were an AWS customer and they were running OpenShift version three and they needed to move to OpenShift version four. There is no in-place upgrade. You have to migrate all your apps. Now Sopra Steria is a large French IT firm. They have over 700 developers in their environment and it was by their estimation that this was going to take a few months to get that migration done. We were able to go in there and help them with the automation of that migration, and Kasten was able to help them architect that migration, and we did it in the course of a weekend with two people. >> A weekend? >> A weekend. >> That's a hackathon. I mean, that's not real, come on. >> Compared to thousands of man hours and a few months, not to mention, since they were able to retire that old OpenShift cluster, the OpenShift three, they were able to stop paying Jeff Bezos for a couple of those months, which is tens of thousands of dollars per month. >> Don't tell anyone, keep that down low. You're going to get shot when you leave this place. No, seriously. This is why I think the multi-cloud hybrid is interesting, because these kinds of examples are going to be more than less coming down the road. You're going to hear more of these stories than not hear them, because what containerization and now Kubernetes is doing, what Docker's doing now, and the role of containers not being such a land grab, is allowing Kubernetes to be more versatile in its approach. So I got to ask you, you can almost apply that concept to agility, to other scenarios like spanning data across clouds. >> Yes, and that is what we're seeing. So the call I had this morning with a large insurance provider, you may have that insurance provider, healthcare provider, they're across three of the major hyperscaler clouds and they do that for reliability. Last year, AWS went down, I think three times in Q4, and to have a plan of being able to recover somewhere else, you can actually plan your, it's DR, it's a planned migration. You can do that in a few hours. >> It's interesting, just the sidebar here for a second. 
We had a couple chats earlier today. We had the influencers on and all the super cloud conversations, trying to get more data to share with the audience across multiple areas. One of them was Amazon and the hyper clouds like Amazon, Azure, Google and the rest that are out there, Oracle, IBM and everyone else. There's almost a consensus that maybe it's time for some peace amongst the cloud vendors. Like, "Hey, you've already won." (Tom laughs) Everyone's won, now let's just, like, we know where everyone is, let's go to peace time, 'cause the relationship's not going to change between public cloud and the new world. So there's a consensus, like what does peace look like? I mean, first of all, the pie's getting bigger. You're seeing ecosystems forming around all the big new areas and that's a good thing. The tide's rising and the pie's getting bigger, there's a bigger market out there now, so people can share and share. >> I've never worked for any of these big players. So I would have to agree with you, but peace would not drive innovation. And my heart is with tech innovation. I love it when vendors come up with new solutions that will make things better for customers, and if that means that we're moving from on-prem to cloud and back to on-prem, I'm fine with that. >> What excites me is really having the flexibility of being able to choose any provider you want, because you do have open standards, being cloud native in the world of Kubernetes. I've recently discovered that the Canadian federal government had mandated to their financial institutions that, "Yes, you may have started all of your cloud presence in Azure, you need to have an option to be elsewhere." So it's not like-- >> Well, the sovereign cloud is one of those big initiatives, but also going back to Java, we heard another guest earlier, we were thinking about Java, write once, run anywhere, right? So you can't do that today in a cloud, but now with containers-- >> You can. >> Again, this is, again, this is the point that's happening. Explain. >> So when you have Kubernetes, it's a strict standard and all of the applications are written to that. So whether you are deploying MongoDB or Postgres or Cassandra or any of the other cloud native apps, you can deploy them pretty much the same, whether they're in AKS, EKS or on Tanzu, and it makes it much easier. The world just became a lot less proprietary. >> So that's the story that everybody wants to hear. How does that happen in a way that doesn't stall the innovation and the developer growth, 'cause the developers are driving a lot of change. I mean, for all the talk in the industry, the developers are doing pretty good right now. They've got a lot of open source, plentiful, open source growing like crazy. You got shifting left in the CICD pipeline. You got tools coming out with Kubernetes. Infrastructure as code is almost a 100% reality right now. So there's a lot of good things going on for developers. That's not an issue. The issue is just underneath. >> It's a skillset, and that is really one of the biggest challenges I see in our deployments: a lack of experience. And it's not everyone. There are some folks that have been playing around for the last couple of years with it and they do have that experience, but there are many people that are still young at this. >> Okay, let's do, as we wrap up, let's do a lead into CubeCon, it's coming up and obviously re:Invent's right behind it. Lisa, we're going to have a lot of pre CubeCon interviews. 
We'll interview all the committee chairs, program chairs. We'll get the scoop on that, we do that every year. But while we got you guys here, let's do a little pre-pre-preview of CubeCon. What can we expect? What do you guys think is going to happen this year? What does CubeCon look? You guys our big sponsor of CubeCon. You guys do a great job there. Thanks for doing that. The community really recognizes that. But as Kubernetes comes in now for this year, you're looking at probably the what third year now that I would say Kubernetes has been on the front burner, where do you see it on the hockey stick growth? Have we kicked the curve yet? What's going to be the level of intensity for Kubernetes this year? How's that going to impact CubeCon in a way that people may or may not think it will? >> So I think first of all, CubeCon is going to be back at the level where it was before the pandemic, because the show, as many other shows, has been suffering from, I mean, virtual events are not like the in-person events. CubeCon LA was super exciting for all the vendors last year, but the attendees were not really there yet. Valencia was a huge bump already and I think Detroit, it's a very exciting city I heard. So it's going to be a blast and it's going to be a huge attendance, that's what I'm expecting. Second I can, so this is going to be my third personally, in-person CubeCon, comparing how vendors evolved between the previous two. There's going to be a lot of interesting stories from vendors, a lot of new innovation coming onto the market. And I think the conversations that we're going to be having will yet, again, be much more about live applications and people using Kubernetes in production rather than those at the first in-person CubeCon for me in LA where it was a lot about learning still, we're going to continue to help people learn 'cause it's really important for us but the exciting part about CubeCon is you're talking to people who are using Kubernetes in production and that's really cool. >> And users contributing projects too. >> Also. >> I mean Lyft is a poster child there and you've got a lot more. Of course you got the stealth recruiting going on there, Apple, all the big guys are there. They have a booth and no one's attending you like, "Oh come on." Matt, what's your take on CubeCon? Going in, what do you see? And obviously a lot of dynamic new projects. >> I'm going to see much, much deeper tech conversations. As experience increases, the more you learn, the more you realize you have to learn more. >> And the sharing's going to increase too. >> And the sharing, yeah. So I see a lot of deep conversations. It's no longer the, "Why do I need Kubernetes?" It's more, "How do I architect this for my solution or for my environment?" And yeah, I think there's a lot more depth involved and the size of CubeCon is going to be much larger than we've seen in the past. >> And to finish off what I think from the vendor's point of view, what we're going to see is a lot of applications that will be a lot more enterprise-ready because that is the part that was missing so far. It was a lot about the what's new and enabling Kubernetes. But now that adoption is going up, a lot of features for different components still need to be added to have them enterprise-ready. >> And what can the audience expect from you guys at CubeCon? Any teasers you can give us from a marketing perspective? >> Yes. We have a rebranding sitting ready for learning website. It's going to be bigger and better. 
So we're not no longer going to call it, learning.kasten.io but I'll be happy to come back with you guys and present a new name at CubeCon. >> All right. >> All right. That sounds like a deal. Guys, thank you so much for joining John and me breaking down all things Kubernetes, talking about customer adoption, the challenges, but also what you're doing to demystify it. We appreciate your insights and your time. >> Thank you so much. >> Thank you very much. >> Our pleasure. >> Thanks Matt. >> For our guests and John Furrier, I'm Lisa Martin. You've been watching The Cube's live coverage of VMware Explore 2022. Thanks for joining us. Stay safe. (gentle music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Matt LeBlanc | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
Europe | LOCATION | 0.99+ |
John | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Pat Gelter | PERSON | 0.99+ |
Tom Leyden | PERSON | 0.99+ |
Matt | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
Tom Laden | PERSON | 0.99+ |
Lisa | PERSON | 0.99+ |
Tom | PERSON | 0.99+ |
Veeam | ORGANIZATION | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
one hour | QUANTITY | 0.99+ |
San Francisco | LOCATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
LA | LOCATION | 0.99+ |
Detroit | LOCATION | 0.99+ |
Joey | PERSON | 0.99+ |
Apple | ORGANIZATION | 0.99+ |
10 minute | QUANTITY | 0.99+ |
two people | QUANTITY | 0.99+ |
Last year | DATE | 0.99+ |
Jeff Bezos | PERSON | 0.99+ |
45 minutes | QUANTITY | 0.99+ |
John Furrier | PERSON | 0.99+ |
2004 | DATE | 0.99+ |
two guests | QUANTITY | 0.99+ |
Western Canada | LOCATION | 0.99+ |
ORGANIZATION | 0.99+ | |
7000 VMs | QUANTITY | 0.99+ |
Java | TITLE | 0.99+ |
97% | QUANTITY | 0.99+ |
hundreds | QUANTITY | 0.99+ |
last year | DATE | 0.99+ |
third | QUANTITY | 0.99+ |
Kit Colbert | PERSON | 0.99+ |
Second | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
20% | QUANTITY | 0.99+ |
CNCF | ORGANIZATION | 0.99+ |
two groups | QUANTITY | 0.99+ |
first | QUANTITY | 0.99+ |
Tanzu | ORGANIZATION | 0.99+ |
Windows | TITLE | 0.99+ |
third day | QUANTITY | 0.99+ |
North America | LOCATION | 0.99+ |
dozens | QUANTITY | 0.99+ |
One | QUANTITY | 0.99+ |
over 700 developers | QUANTITY | 0.99+ |
learning.kasten.io | OTHER | 0.98+ |
AKS | ORGANIZATION | 0.98+ |
one | QUANTITY | 0.98+ |
Veeam | PERSON | 0.98+ |
VMware Explore 2022 | TITLE | 0.98+ |
VMWare Explore | ORGANIZATION | 0.98+ |
CubeCon | EVENT | 0.98+ |
One example | QUANTITY | 0.98+ |
Kubernetes | TITLE | 0.98+ |
three months ago | DATE | 0.98+ |
both | QUANTITY | 0.98+ |
EKS | ORGANIZATION | 0.97+ |
Lyft | ORGANIZATION | 0.97+ |
Today | DATE | 0.97+ |
Kasten | ORGANIZATION | 0.97+ |
this year | DATE | 0.97+ |
three times | QUANTITY | 0.97+ |
SuperStereo | TITLE | 0.97+ |
third year | QUANTITY | 0.96+ |
Muddu Sudhakar, Aisera | VMware Explore 2022
(upbeat music) >> Good morning, everyone. Welcome back to "theCUBE." Lisa Martin here with John Furrier. This is day three of our wall-to-wall coverage of VMware Explore. John and I are pleased to welcome back one of our alumni, Muddu Sudhakar, the CEO of AISERA. Welcome to the program, Muddu. It's great to meet you. >> Thank you, Lisa. Thanks for having me. Thank you, John. >> Great to see you again. You're like an industry analyst coming on "theCUBE". You should be like a guest analyst, breaking down. I know you got your own company to run, and by the way, the recent funding you had, congratulations. >> Thank you. >> In a market that's not getting a lot of funding. You get an up around. Congratulations on that. >> Thank you. >> Business is good? >> Very good, thank you. Look, Goldman Sachs Investing, along with Zoom and Thoma Bravo, it was great for us. >> Great stuff. Well, I'm glad we could get you in. This day three, Lisa and I and Dave Vellante and Dave Nicholson have all been talking to everyone for two days here at VMware Explore, formerly VMworld, our 12th year covering their annual conference, as you know, and we've been telling the executives, but day three is more of, we're going to mix it up. We're going to bring people in and get their opinions about Supercloud, does VMware go post-Broadcom? Obviously, that's going to happen. Looks like nothing's going to stop that from happening. What's next? What's the impact? Who wins? Who loses? VMware certainly not acting like they're going to get gutted. They're all full throttle ahead. They're laying down some announcements, vSphere 8, you got vSAN 8, they got cloud-native, they're talking multi-cloud. VMware's not looking like they're flinching. What's going on, in your view, outside of the bubble that we're here in San Francisco, out in the real world, in the trenches. What are people talking about? What do you see? >> Lot to unpack. (all laugh) >> Start at wherever you want. >> Yes. You know, I was a VMware alumni too. >> Yes >> You sold the company to VMware. You know the inside. Okay, So then, even then- >> I worked with Paul and Pat and Raghu. It's great to be back at VMware now. I think there's a lot going on in VMware. VMware is here to stay. The brand will stay. The VMware customers will stay for years to come. I think Broadcom and VMware, I think it's a great industry consolidation, the way in which I see it. And it is going to help all the customers too, right? Broadcom, having such a large foot play into both CA, the software business, the hardware business. I think what will happen is that Broadcom will try to create a hybrid cloud of their own with VMware. So there'll be a fourth player in the cloud industry. And then back to John, your Supercloud. The Supercloud by definition, there'll be private clouds, public clouds, hybrid clouds. I think Broadcom with VMware will help your vision of the Supercloud and what your customers are asking. >> Yeah, one of the things I want to get your thoughts on, Lisa and I were talking yesterday with the executives, AJ Patel in particular, he's a middleware guy. >> Right. >> So what he did was Oracle. He did a lot of the fusion stuff at Oracle. He now runs Modern Apps. And you came in at the time, I think, when they were just getting that app vision going, and Paul Moritz actually had it early with his 2010 vision, but too early on the app side. But that ended up happening too. So the question is, is Broadcom going to be this middleware layer, and treat the cloud like hardware. 
And then, apps are apps. Companies are apps. In a digital transformation, technology is the company. >> Right >> So the company is the app. >> That's right, >> It's an application. So apps and hardware, and in the middle, a middleware model emerging. Do you think they're going for that? Or am I just making this up in my head? >> No, I think to me, I see Broadcom as much more, they're like a PE company at the high level. So they're funded by- >> Like a private equity company. >> Private equity company. >> You mean from a dollar standpoint. >> From a dollar standpoint. So Broadcom is going to fund companies. They're going to buy companies. They bought CA, they bought all the other assets. So Broadcom will always have hardware. The middle level could be VMware, but they also have CA, right? They have a bunch of apps here. So I see Broadcom also using VMware to run applications. So the consolidation will be, they'll create a Supercloud using VMware. They're going to own their own apps. I don't think Broadcom's story has stopped. There's more of its journey to come. They're going to make more acquisitions, more apps companies. I won't be surprised, in the future, they buy Zendesk. I won't be surprised, in the future, they buy other apps companies, SaaS companies and cloud enterprise companies. Right? So that's where the PE comes in. So the Broadcom vision is, I need a base middleware, like you're saying. There's no other middleware on top of hardware better than VMware. >> So do you think that they'll keep the stuff that's coming out of the other? 'Cause we've been speculating on "theCUBE" this week. They have the core business, but there's all this stuff that's kind of coming out of the oven that's not EBITDA-oriented yet. Do you think they keep that or they let it go? >> I think that's a great question to ask the CEO of Broadcom. But to me, I think, knowing them, they're going to keep it, and if you look at Symantec, they kept parts of Symantec, not the whole of it. So I think all options are on the table for them, right? They'll do whatever it is. But I think it has to be about which ones are the high growth companies; it all goes back to whether there is profitability to it or not. But his vision is very good. I want to own the middleware, right? He will own the middleware, using VMware, to your vision, create a Supercloud and own the apps. So I think you'll see Broadcom as the fourth vendor in the cloud race. You have Microsoft, AWS, Google, and Broadcom is actually going to compete with those three. >> So you think they'll be a hyperscaler? They'll be in the top three or four. >> There'll be top four. >> Okay. >> Along with Oracle. So now, we are talking about the five vendors will be Amazon, Azure, Google, Oracle, and Broadcom. >> We had the Amazon guy on, Steve Jones. I should have asked him that question. I just don't see that happening yet. They have to have the full hardware side. How do you see that coming in? 'Cause Amazon's innovating at the atom level and they're working on stuff that's physical, transit, physics stuff, like down to the root level. 
So I think, John, I think Broadcom VMware will be a force to reckon with and I think these guys are going to get into healthcare space though. So if you see the way they battle, you and me are talking Lisa, like Microsoft bought new ones, Oracle bought Cerner. So they all paid 30 billion each. So the next battle ground will be, they'll start in the healthcare industry. Somebody's going to go look at the healthcare apps like Epic, right? They're going to look at how we can do the hospitals. They're going to look at hospital healthcare professionals. That area will be disrupted a lot in the same. >> What other industries do you think, besides healthcare, are ripe for disruption with Broadcom VMware? >> I think endpoint management, like remember VMware bought AirWatch when I was there back then, right? That whole area is called digital experience management. So that endpoint mainly will be disrupted. So Broadcom with VMware will go again into endpoint. I'm talking endpoint could be the servers, desktops, VMware Max, right? Virtual Desktop VDI. So that whole management of mobile devices to desktop, that whole industry will be disrupted. A lot of players are there trying to do more consulting services. I think VMware is a great assets and tools. If I'm Broadcom, my chip sets are going into the endpoint. So that area will be disrupted a lot with Broadcom in VMware. >> Yeah, one of the things that VMware, people have been talking about, is that the CA acquisition that Broadcom did was the playbooks public. Everyone saw what they did. They killed sales and market and they killed all the execs, metaphorically speaking. They fired them. VMware's got a different vibe here. I'm feeling like it could go one way or the other. I think they should keep them, personally. But you don't know. If they're a PE company, they EBIDA driven, maybe it's just simply numbers. >> Right. >> If that's the case, then I'm worried. But VMware's got pride, they got mojo, and they've got expertise in software. Maybe a little bit different circumstance? What's take on this? Or do you think it's going to be black and white to the numbers? >> I think, knowing Hank's playbook, if he knows what he's going to do, right? His playbook will be consistent with Symantec. >> You think he already knows what he wants to do? >> I think so. I think at that level, both with Simulink and Broadcom, they already know the playbook. At this stage the games, people already know their game. It's like a chess move. They already know. They'll look at VMware and see which assets to keep, which one not to keep, which organization, but I think Hank is a master at this one. To me, I'm personally excited with the VMware Broadcom combination. It's a great thing for the industry. It's great for VMware and VMware customers and partners. >> Well, John, you and Dave had a chance to sit down with Raghu. What were some of the things that he unpacked about the Broadcom acquisition? >> He was on talking points. He was on message. He was saying the things that any CEO was going to make a lot of cash on this deal. And he's proud. I think it wasn't about the money for him. I sensed that he's certainly going to make a lot of cash on this deal as an executive, but he's a long time VMware employee and a well loved and revered person. He's done a lot of great work, technically set the agenda. 
So I think their mindset is we're going to just continue to do an amazing job as VMware as we are and then let Broadcom, let the chips fall where they may, and hopefully, if they do a good job, maybe they'll either refactor some of their base plans or they laid it all out in the field, so to speak. So that's my vibe. Now specifically, he made some comments, like, "Yeah, we're really proud." And he staying technical. He's still like, "This is really happening." So I think he's going to, essentially, to the very end, be like, "Cross cloud and hybrid cloud. This is our third generation." So there he's hanging onto the VMware third act that they're saying, and he hopes that it comes home. And I think he's going to just deal with it. He didn't seem flustered and he didn't seem overly confident. >> Okay. >> I guess that's my opinion. What do you think? >> Personally worked with Raghu, worked for Raghu, so I think of him as the greatest CEO for VMware ever could have, right? It's a journey. It was Paul Maritz, then Pat Gelsinger, now Raghu. I think he's in the right place, right time to lead VMware, and Raghu's doing a fantastic job. And personally, getting these two companies married, I think Raghu did the right partnership with Broadcom. >> Well, I think if this event's any indication if they're just sitting back and waiting, they're not, and this event was well done, it was pulled off. The branding's amazing. I thought they did a good job with the name change. And then in light of all the Broadcom issues, the execution was great. It was not a bad show here. It was a good show. It wasn't terrible at all. People were excited. I think the ecosystem also felt that Broadcom, like an electronic shock to the system, like something's going to happen. Let's wait and see. I'm going to go to the event to see if it's going to be around and kind of getting a feel first party, in person, what's happening. Again, remember VMware didn't have an event since 2019. This is a community that thrives on physical, face to face camaraderie, community. And so, I think the show was a success. And I think that's a result of Raghu and his team. >> Because we have a booth there for AISERA, my company, we have a booth. We are offering coffee and donuts. You guys should come by and tell people. You'll get a free coffee and a donut, but it's one of the best shows I've seen. Well, I think people after pandemic are back, people are interacting. We have 500 people in one day at our booth. So for a startup company like us, getting that much crowd is unheard of. So it's great. We're very excited. >> The vibe from the partner community, I had a chance to talk with a lot of partners, AWS, NetApp, Rackspace, really seems like the partnerships side of VMware is very, very strong and the partners are excited about what's next for VMware. Did you have a chance to talk with any of the partners? >> Actually, look. I'm actually meeting with Karen. So Karen Egan is my contact at VMware too, and Sumit, (indistinct) a bunch of the customer success organization. We talk to people in their digital experience management team. We are very excited to be partner with both VMware's customer, partner, and all experts, right? I'll need the VMware ecosystem for my company to thrive. So for us, VMware customers are my customers and leveraging VMware APIs into VMware, that's that's important for us. 
>> Lisa, that's a great question because that brings us to the question of, okay, clearly this show also proves to us from our conversations and exploring the floor, the wave is coming. This next cloud wave is here. We're calling it Supercloud, whatever you want to call it, it's coming and it's real, and people know it. And also the lines of sight into economics around where people can fit in this next level ecosystem is becoming clear. So I think people kind of know what's the right side of the street to be on in this next shift. So that's coming. That's independent of Broadcom. So the floor represents to me the excitement for not only the VMware workload powering software, with or without Broadcom, but the next wave. So the question is if Broadcom goes down their path and Hank does what he does, who wins and who loses on where things flow? Because this energy is going to flow somewhere. Is it going to flow to AWS? Is it going to flow to Microsoft? Is it going to flow to HPE with Green Lake getting some great traction? NetApp's doing great. We just heard from them. So the partners aren't hurting. It's only going to get better. re:Invent's right around the corner. That's a packed house. Their ecosystem's growing like a weed. Who wins? 'Cause the customers at VMware are enterprise customers. They're used to being serviced. They have sales reps from Microsoft, they got sales reps from Hewlett Packard Enterprise, real senior enterprise stakeholders there. So someone's going to end up filling in as VMware settles into their broad composition. Who wins and who loses, in your mind? >> A Very good question. So my thing is, I think it's... Well, I put Microsoft and Amazon the winners. In that way, actually mean Microsoft will win because in a true Supercloud, your vision, back to hybrid cloud on-prem and public cloud, VMware disruption with Broadcom, as if there's any bridge in the market, Microsoft will take advantage of it. Azure, right? Amazon VMware is there. Then, you have Google and VMware. So I think Azure will probably try to take advantage of this, but very next will be Amazon, right away there. That leaves you with Google Cloud, right? Google Cloud is the one. So they're the people that are able to figure out what to do in this equation. And then, obviously, the other one is Oracle. Oracle has no hearts in this game. So to me, the people who are going to probably lose impact model will be Oracle if the Broadcom and VMware will happen. So it's Azure, Amazon winning the race, probably Google is right behind them. Oracle will be distinct. Other side is Dell. Actually, Dell has no game in this. Our Broadcom and VMware, Dell should be the one. >> Dell might have a little secret sauce on the table with Michael Dell. >> That's true. >> If he convert his shares, he might be the largest shareholder at Broadcom. >> That's true. >> He could end up owning all the back. >> So he may be the winner all the time. (all laugh) >> Don't count him out. Well, this is a good question. I want to just double click on this. So you get customer dynamic. Where do they go? You get the community, which is a big force multiplier in this world, and if you had to bet on community between Microsoft and Amazon Web Services, Amazon trumps Microsoft on force multiplier community. Ecosystem, AWS beats Microsoft on that one. So it's interesting because it's now multiple dimensions we're talking about here. It's customers. That's the top order, right? The customers. 
But also, you got community, the people who put on sessions, the people in the community that are the influencers that are leading the trends, and developers are very trending, relative to what kind of code they use, what's their environments? So the developers is changing that landscape and, ultimately, the ecosystem of partners, right? 'Cause there's a lot more overlap between AWS and VMware's ecosystem than there is between Microsoft and that. And HPE is just starting an ecosystem. So it's going to be very interesting. >> It is. It is. I think Broadcom and VMware cannot be any best time for the industry, right? As you said. HP is coming in. Oracle is coming in. And to your point, VMware and AWS are another best partners. Now, this going to create any gap for Microsoft to enter for Azure? I think that's where the market is saying that it's going to open up a hybrid cloud player for Microsoft to enter what is to be a tight relationship with VMware and Amazon. Right? So people will rethink through their apps. And more importantly, the end point to me. See, the key is, like you talk about with Supercloud, nobody's talking about Supercloud for the endpoint. >> You mean Edge or security? >> Not an Edge endpoint. Endpoint could be your devices, laptop, desktop. >> Or a building or a light bulb or whatever. >> Desktop or VDI desktop services servers, right? So we call it endpoint cloud. There's no endpoint Supercloud. John, that's an area that you should double click on. Super cloud for the servers is different from Supercloud for endpoint. >> Well, SuperCloud.World is the URL out there. If you're interested in Supercloud, we are adding tracks to that body of work. So we had our event on August 9th. It was virtual event, where Dave and I are going to add a data track, we're going to add a security track, and we should add, maybe, an endpoint workspace, work. >> That's a VMware brand, Workspace and Horizon. So that whole workspace endpoint for Supercloud is going to happen. >> Yes. >> Right. That kind of deviates from- >> Do you like Supercloud? Are you bullish on Supercloud? >> I'm very bullish on Supercloud because I, myself, is running on-prem in VPCs, public clouds, private clouds. Supercloud kind of composites it so app should be designed. 'Cause I don't want to design an app for one cloud. It's not going to work. So it's like how Java came and I can run it on any platform. The ideas you build it on Supercloud, run it, whatever you want. Right? >> That's exactly it. So what would you want to see in Supercloud as it evolves? And we were part of this open conversation. This is our point for today. We're going to have a great panel come up later today. We're going to have the influencers come on to debate what Supercloud should or shouldn't be. If you want to add to the contribution, we'll add this into the work, what should what's needed in Supercloud? What's table stakes. >> I think we need a Java compiler that will happen for Supercloud. I build it once, execute in any place I want, right? Using the Terraform, HashiCorp (indistinct) So what I don't want is keep building this thing for every cloud. I want to abstract that out. The whole idea of Supercloud is how Java gave me the abstraction for hardware 20 years back or 30 years back, we need the same abstraction for the cloud today. Otherwise, I'm customizing for VM Cloud, I'm customizing for AWS, Azure, Google Cloud. We, as an application vendor, it's too hard to keep doing it. I have now thousand tuners. 
I don't need a thousand DevOps people. I need maybe 10 DevOps people. So there's a clear abstraction complexity that the industry should develop, and your concept of Supercloud, with everybody thinking that, it has to start from the grassroots with the ecosystem. >> What do you think about the participants in this abstraction layer? Because someone said on "theCUBE" here this week, the people in the abstraction layer shouldn't be participants in the below or above the abstraction. >> I think it should be everybody, right? It's all inclusive. You need the apps guys to come in. You need the OS players to come in. You need the cloud vendors to come in, infrastructure. So you need everybody. >> Okay, let's just say that you were the spokesperson for the Supercloud organization, Supercloud.World. How would you sell AWS on why it's important for them? >> It's because they can build it and sell it in AWS and multiple AWS Gov Cloud, AWS On-prem, VPCs. It's even important for them, their expansion, their time to market. If I'm (indistinct), if I'm built on Supercloud, I can increase my share. Otherwise I'm bringing it only to public cloud. >> Okay, so I'll say, I'm Amazon and we have a concept called "One Way Doors." We don't want to go through a one way door. Is Supercloud a one way door for them? What's in it for them? Do they make more? Does it help their ecosystem? And the same question from Microsoft Azure and Google Cloud. >> They'll make more money. They're making their apps run in multiple places. It's a natural expansion. You are solving your customer problems for Amazon and GCP, right? My job is to give people choices. I give choice to Lisa. Lisa can run it on public cloud. John, you can run it on VPC, AWS. >> So you're saying, so you think customers are asking for this right now? >> Everybody's asking. >> But don't really know how to say it? >> Customers are asking. Partners are asking. All of us are asking. >> Okay, what's the ask? >> The ask is give me one place to build applications and run it anywhere without adding the complexity. >> Okay. Done. That's Supercloud. It'll ship tomorrow. (Lisa laughs) Well done. (John laughs) All right, well done. Final question for you. Lisa and I have been talking with folks here. What advice would you give the folks that are in here? 'Cause we have a lot of activity, people marketing their solutions and products. They're trying to put a voice out there around thought leadership and trying to figure out what side of the street they should be on relative to the next 10 years as they're here at VMware Explore, as the next gen cloud comes around. What's the right narrative? What's the right positioning for companies to be on right now to be the most relevant and in the flow? >> I don't know about 10 years, but right now we are in difficult economic times, right? Markets are down. Inflation is up. So I think the fastest thing, people should focus on cost. How can I take out cost? Automation is the key, right? Whether you use AI or automation, like you and me were talking, John, last week, right? That's important. Every CEO I talk to is focused on cost. How do I cut my cost? How can I do with fewer resources? How can I do with fewer people, right? So the new budget right now is cut your budget in half. So every company, every exec should think about how can you be a good citizen? How can I get growth and scale? How can I do more with less? And that should be the next 12 months. 
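Muddu's earlier "build it once, execute any place I want" point, the Java-compiler analogy for Supercloud with HashiCorp-style tooling, can be sketched roughly as follows. This is illustrative only: the resource names, image, and contexts are hypothetical, and the Terraform Kubernetes provider stands in for whatever abstraction a real supercloud layer would offer.

```hcl
# Illustrative sketch of "build once, run on any cloud": the application is
# declared once and applied to whichever cluster (EKS, AKS, GKE, on-prem) the
# chosen kubeconfig context points at. Names and the image are hypothetical.
variable "kubeconfig_context" {
  description = "Target cluster context: an EKS, AKS, GKE, or on-prem cluster"
  type        = string
}

provider "kubernetes" {
  config_path    = "~/.kube/config"
  config_context = var.kubeconfig_context
}

resource "kubernetes_deployment" "orders" {
  metadata {
    name = "orders-service"
  }
  spec {
    replicas = 3
    selector {
      match_labels = {
        app = "orders-service"
      }
    }
    template {
      metadata {
        labels = {
          app = "orders-service"
        }
      }
      spec {
        container {
          name  = "orders-service"
          image = "registry.example.com/orders-service:1.4.2"
        }
      }
    }
  }
}
```

Applied once per target, for example `terraform apply -var 'kubeconfig_context=eks-prod'` and again for `aks-prod`, the same definition lands on every cloud, which is the 10-DevOps-people-instead-of-a-thousand idea.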
>> That was a lot of the theme of conversations that I had with the VMware ecosystem, doing more with less. So that's definitely on everyone's minds. >> Right, and that's what my company is fully focused on. AISERA is all about AI automation. How can we solve your thing? We want to be solving customer problem. We are like your automation engine for your enterprise, right? We are a platform of platform. That's why I like the Supercloud. I can run AISERA as a platform on top of Supercloud. >> Excellent. >> Wow! If only we had more time! I know that you guys could really dig into Supercloud and take it even further. So you have to come back, Muddu. >> I will. >> He always wants to come back. >> I will be back. >> He's on the team. He's has contributed to the open source effort of Supercloud. Thank you. >> Yes. >> All right, thank you so much for joining John and me and kind of breaking down your vision on VMware Broadcom and the future. Next step, we've got to get some customers on here. I really want to understand what the customer experience is going to be like, but we'll have to another segment on that one. >> We will do that. Thank you, Lisa, for having me. >> My pleasure. >> John. >> Thank you very much. Thank you. >> For our guest and John Furrier, I'm Lisa Martin. You're watching "theCUBE" live on day three of our coverage of VMware Explore. We'll be back after a short break. (upbeat corporate music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Karen | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
John | PERSON | 0.99+ |
Lisa | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Pat Gelsinger | PERSON | 0.99+ |
Paul Maritz | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
Steve Jones | PERSON | 0.99+ |
Dave Nicholson | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
AJ Patel | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
VMware | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
Muddu Sudhakar | PERSON | 0.99+ |
Symantec | ORGANIZATION | 0.99+ |
Muddu Sudhakkar | PERSON | 0.99+ |
Hewlett Packard Enterprise | ORGANIZATION | 0.99+ |
Paul Moritz | PERSON | 0.99+ |
Broadcom | ORGANIZATION | 0.99+ |
Karen Egan | PERSON | 0.99+ |
AISERA | ORGANIZATION | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
San Francisco | LOCATION | 0.99+ |
August 9th | DATE | 0.99+ |
Opening Keynote | Supercloud22
(bright music) >> Welcome back to Supercloud 22. I'm John Furrier, host of "theCUBE" with Dave Vellante, with the opening keynote conversation with Vittorio Viarengo. He's the Vice President of Cross-Cloud at VMware, Cube Alumni. Vittorio, great to see you. Thanks for coming on. >> Ah, my pleasure. >> So you're kicking off the Supercloud event. Again, a pilot. Again, we were texting just a few months ago around some of the momentum. You identified this right away. You saw it, you saw the momentum. What's the reality around supercloud? What's your perspective? >> Well, I think that we have to go back to the history of IT, over the last, you know, forever. I feel like in IT, we're always running after the developers. The developers, they're smart. They go for the path of least resistance, and they create innovations, and then the entire stack moves around, and if you look at developers over the last, you know, 15 years, they've been going to the cloud, right? And the reason they're going for the cloud is, you know, they say software is eating the world. But really, who builds software? Developers, so I think it's developers that are eating the world, and so initially, there was one game in town, so they went with AWS, but eventually, we got to multiple clouds, and now, the reality is that the applications are there, it's how we make money, how we save money. They're running on multiple clouds, 75% of the companies are running on multiple clouds today, and so, I think that creates the new computing platform for the next, you know, 10 years, 15 years, and I think that that multi-cloud world brings tremendous advantages, as we just talked about, but also some challenges, and it's prime for a simplification, and that's where we're trying. >> One of the things we observe is this abstraction layer across clouds to create a consistent experience for customers, and very importantly, as you point out, developers. So when you think about the history of abstractions, we see another one sort of forming in the 2020s, which is really different, as you pointed out, from what we had in the 2010s, where there was really, you know, one main cloud. Now, you have all these clouds. What are your thoughts on the history of abstractions? >> Well, if you look at IT, we always needed abstraction to unleash the next level of growth, right? I grew up as a... I started my career as a C++ developer. So initially, you know, on Windows, if you wanted to open a window on the screen, you had to write 200 lines of code. Then the MFC library came in, and now, you still have to be a C++ developer, but now, with one line of code, you can, you know, open the hello world and start to build your applications, but it's only when Visual Basic comes along that now we get five million developers building applications that, 20 years later, we're still using, okay? And then the list goes on and on, and in application integration, we used to look at the bytes on the bus and say, "Okay, this is the customers, and we're going to map it to SAP," and then we went one level higher with SOA and web services and the rest is history, and that unleashed tremendous, you know, growth, and look at, you know, how we now, you know, are able to, through APIs, integrate anything, and so then the ultimate example of abstraction is virtualization. We made all these different servers and networking and storage look like one, and now, you know, the business never cares if you're running SAP back on-prem on HP or some other piece of hardware. They care that it runs, right? 
And so I think that now, we need to bring a level of abstraction in the cloud that not only abstracts the low level APIs at the highest level, but also uniforms and unifies the APIs and the way you do management and security across multiple clouds. >> Let's unpack that because I think the virtualization angle is interesting, 'cause virtualization enabled AWS. If you look at AWS' success, virtualization, the Hypervisor, got them going, and that established that value. Now, the new structural change is happening. How do you define that specifically? What is supercloud in your mind? >> So in our mind, supercloud is a set of cloud native services that, first of all... Let's unpack that and go back to the virtualization. Virtualization was a great way to do it on-prem, and it's no wonder that AWS and Azure, they did it on their cloud, right? But the lingua franca of the cloud is not the virtualization layer. That's taken, it's hidden. It's down there, it just does its thing. The lingua franca of cloud is microservices, APIs, Kubernetes as the orchestration layer, and one would think, "Okay, now, we have Kubernetes, life is good. I just, you know, deploy on-" Well, there are six, seven, eight Kubernetes distributions, and so to us, the supercloud is the ability to factor out the common things that you can do across clouds and give you a single pane of glass to manage your applications and a single pipeline so you can build your application once and deploy it consistently across multiple clouds, and then, basically, factor out the other two important things, the security and observability of the application. >> One of the trade-offs of abstraction, you go back to the mainframe. They had to squeeze out the performance overheads. VMware had to do the same and did a tremendous job of it. So are we going to see that across clouds with multi-cloud or what we call supercloud? Are you going to see a trade-off? What trade-off do you see that the industry, technically, has to attack? >> Abstractions are always about trade-offs, right? You're trading off speed. You know, the C++ code I'm writing goes really fast, for scale. You know, now, I have five million developers writing applications, but I think, eventually, what happens is that you're trading off specialized skills for, you know, more valuable skills, and if I had a dollar every time I heard, "Oh, we cannot run Oracle Databases on virtualization," well, or "the JVM is too slow," but guess what? How many Java developers are there, how many Java applications are running on the JVM? So I think, eventually, there will be trade-offs, but the technology catches up, and it's a matter of how much value you are getting in terms of scale and saving cost versus maybe the performance trade-off you were making at the lower level. >> On the evolution of hybrid cloud, 'cause right now, hybrid cloud is a steady state. People see that clearly, you know, on-premise and Edge, right around the corner. Public native cloud, there's benefits to be in the native cloud. How does multi-cloud fit? 'Cause by default, people have multiple clouds. If they run on Azure, they probably have some sort of productivity software with Microsoft or other Microsoft products, but it's best of breed. It's not yet connected. So multi-cloud has kind of become a default kind of thing. It's not yet a strategy in some people's minds, yet some people are thinking about it. 
So we think, and I think you might agree, that multi-cloud will happen, multiple clouds in the sense of workloads running seamlessly. Is that a pipe dream or is that near in our future? (men laugh) >> So there is a lot of unpack there. First of all, our definition of multi-cloud is that because most customers are operating their on-prem as the cloud, so the moment you have your on-prem cloud and AWS, your multi-cloud, so 75%, 85% going to 85%- >> You mean Private Cloud on-premise cloud operations? >> Yeah, and then you have another cloud, you're already multi-cloud. >> I'm assuming the experiences is identical, right? That's the assumption you- >> Well, initially, it's not identical, right? That's why you need a supercloud, right? >> Yeah, exactly. >> And most customers though are in denial, meaning that I see them being in five stages of acceptance or adoption of the multi-cloud. One is denial. We are on-prem and maybe we have one cloud. We're standardized. The second one is euphoria. Oh, look, you know, look how fast we go. All these developers are happy to do whatever they want, and then the third one is like, holy crap. They got the first bill. They realize that the security share responsibility model to deal with. They realize that somebody is to deploy this application and manage the application. Nobody does it for them, and then they go into like, (indistinct). Okay, now, we need to do something about this, right? It's a new normal, and then you end up with the enlightment, right? Now, we're really being productive and strategic about how we use multi-cloud. Very, very few customers are in that stage. Most customers are still within the denial and the new normal, and within the spectrum, you see multi-cloud as, "Okay, I have an application here, an application there. Okay, great, big deal." The next level is, "Okay, I have an application here that uses a pieces of a service of an application over there. Okay, now, I'm coordinating application. I'm using microservices," and then the third stage is like, "Okay, I am designing my application to use multiple services or multiple cloud because each uses differentiated features of that particular cloud." >> Is it part of the problem too, Vittorio, that the industry, the technology industry, you guys have not caught up. The cloud vendors aren't solving that problem. What's VMware doing to solve that problem? >> So we have seen this coming four or five years ago, right? That's why we acquired Pivotal, and then we made a number of acquisition around it because we saw that... Well, let's go back. What is VMware DNA? If you look, I've been running engineering, product management in the company then I moved to the dark side, more on the marketing side, but I've seen, and I sweat with those engineers, and when I look at those engineers, these people know how to make stuff that was not designed to work together work together and deliver value, and so if we go back to, you know, on-prem, we did it with virtualization. In the cloud, we did a new level of abstraction, which is, you know, at the APIs at the... And so over the last five years, we built what we believe is very comprehensive portfolio that unified how you build, you run, manage, secure, and access any application across any cloud. No Hypervisor required. >> So that's the game changer right there. So let me ask you a question. How does the choice factor come in because can VMware do all this or do they need to rely on partners? 
Because most customers have HashiCorp and other companies in there doing services for them as well. So how do you see the multi-partner strategy approach? Can you do it alone or are you going to need help from the ecosystem? >> First of all, if you look at the success of your event today, look how many vendors from multiple backgrounds and multiple levels of the stack are coming together to talk about the supercloud. So that to me is success already, and, of course, there are tremendous companies that are going to deliver fantastic value for, you know, management like HashiCorp, or security, and the development experience. Our approach is to bring them together as an integrated platform, and I think VMware has both the DNA and the muscles, the investment, to be able to pull that off. >> Okay, you saw Keith Townsend. He had that very cool blackboard, and he called, this was maybe eight or nine months ago, he called the supercloud and VMware's multi-cloud vision aspirational. When is this going to be real? >> I think it's absolutely real today in some of the pieces. Right, there's always an aspiration. You have to look at a company like VMware as a company that looks out five, 10 years, right? You know, we have Raghu as our CEO, you know, who is a technical visionary, and so he saw, five years ago, the advent of multi-cloud, and we invested in the first part of the stack. What is it? How to build applications natively in the cloud using Tanzu. So with Tanzu, you can build applications, manage Kubernetes clusters, secure them, creating this service mesh, and so that's the reality today. Then the next step is security. We recently announced our security approach. We have a very peculiar position in the stack to be able to see security, not just on the endpoint, not just, you know, in the application, but in between, right? By looking at the Hypervisor, if you're using a Hypervisor. You're looking at East-West traffic with NSX and cross-cloud networks, and so these are the three main pieces that are in place today, right? And then I cannot spoil our user conference coming in a couple of weeks, where we're going to make more announcements around the supercloud, which we call cross-cloud services. >> Vittorio, I remember in 2016, I interviewed Andy Jassy and Raghu when they announced the deal with VMware. VMware and AWS had the relationship, and you're running VMware on the cloud on AWS, and you look at what's happened since, and this is where the supercloud conversation starts to kick in, where Amazon's really good at moving bits around and optimizing the power and the silicon of the infrastructure, which means that the higher level services are going to be much more open for people to innovate around. So Dave calls it the super PaaS. This platform-as-a-service area is going to change the SaaS game. So I have to ask you, how do you see the SaaS game changing with supercloud? Because if you have a Private Cloud or Edge, you're now multiple clouds, technically, as you pointed out. How has that changed the SaaS configuration? Because SaaS and IaaS and PaaS had great relationships in native clouds to solve problems. Now, you have the multi-cloud. How do you see this platform as a service area changing or maybe enabling? >> So I think that that's where the innovation, the ability to aggregate common... Because look, there is a reason why people use multiple clouds, right? They choose it because they have differentiated features. 
So we don't want to ever hide those features, like if you're using Google, because you need AI capabilities, absolutely. We don't want to prevent that, right? But at the PaaS level, you know, when you are orchestrated these microservices, you don't want to do it in five different ways, right? So those are the areas where I think are prime for aggregation and simplification. How you, you know, look at all this Kubernetes environment and being able to monitor your application and force security policies, both from a resource consumption, this group of developers can only use this many resources, but also a run time that you don't run out of like, you know, you get that bill shock, and so those are the areas where I think there's this more ability for us to innovate and deliver value, not at the lower level which is taken by the- >> So you try to have your cake and eat it too, which is if you can pull that off it's game over, right? You have a specific set of cross-cloud services that are unique and value added that are differentiable in the industry, but at the same time, you're trying to give access to developers, if in fact, they want access to those primitives, right? >> Yeah. >> That's a bold aspiration. >> Well, we want to have the cake, eat it, and lose weight. (men laugh) But seriously, I think, going back to your point about the ecosystem, of course, we're not going to do it alone, right? If we were doing it alone, there is not a market, right? And so I think that the market is so big and the area of challenges for IT is so large that there's room for many companies to add value, and I think that, as I said, our approach is to, you know, we're a platform company, right? So you're going to find tremendous companies that will solve one problem for multiple clouds. You're going to find the hyperscaler that have a platform approach for one cloud. We like to think that we can position ourself in that two by two as the company that has a platform approach across multiple clouds. >> You know, it's great. That's where we've known each other for a long time. It's 12 years of "CUBE" coverage. Watching things like the CNCF emerge and do great work, watching cloud native kind of go that next level's been fun to watch, and the developers have had a great run. I mean, open sources booming, developer goodness is out there. People are shifting left, a lot of great stuff going with containers and Kubernetes. So looking good on the developer experience front right now, and I think it's only going to get better, but developers don't think about locking. They just want to get the job done. Move on to the next line of code. It's the ops teams that we're hearing from that are saying, "Hey, we love this, too, but we got to align with the developer." Level up, so to speak. So ops and security teams are saying, "Hey, I got to run this with automation with the higher level services." So there seems to be a focus around the supercloud conversation around ops teams. This is your wheelhouse, VMware. You guys do a lot of IT operations and things of that nature. How do you see that and what's the message cross-cloud brings to and supercloud brings to the development teams and the ops teams who are really going to be doing DevOps together and/or faster? >> I think if you go back to what where we started, right? Developers run the show, and I think there's been a little bit of inertia in IT organization on the op side and the security side in catching up to see how to catch up to where developers are, right? 
And with the DevOps revolution, if operators don't really understand what the developers need and get ahead of that, they're going to be left behind. So I'll give you an example, like SMB Global, one of our customers, their band runs their operation. Basically, told me I had to sit down and figure out what these developers were doing because I was being left behind and then or Cerner, one of our partners and customers, same thing they say, okay, we sat down. We realized that we needed to get ahead of the developers and set those guard rails, right? These are the Kubernetes environment you want to use? Okay, this is how we're going to set them up. This is want to make sure that we shift left security, that we have a single pipeline that feeds that, and Cerner, using our technology was able to... They made a business decision to move from one hyperscaler, was going to go unnamed to another hyperscaler, It was going to go unnamed, and they managed to change all the deployments in four hours. So that's the power of the supercloud, being able to say, "Hey, developers, do whatever you want, but these are the guard rails, and we're going to be able to like stay ahead of you and give you the flexibility, but also, make sure that operation and security, as a saying." >> Shift left shield right, basically. >> Awesome, awesome stuff. We've got 15 seconds. What is supercloud? What's the bumper sticker? >> The supercloud is a level of abstraction across any of the public clouds that allows developers to go fast, operators to make sense of what's happening, security to enforce security, and end users to access any application with a great user experience and security. >> And it's inclusive of on-prem. I'll just throw that in. (John laughs) >> All right, great stuff. Thanks for coming on. We're going to have a industry panel to talk about and debate Supercloud 22. We'll be right back after this break.
Christian Hernandez, Codefresh | CUBE Conversation
>> And welcome to this CUBE conversation here in Palo Alto, California. I'm John Furrier, host of theCUBE. We have a great guest coming in remotely from LA, Christian Hernandez, developer experience lead at Codefresh, codefresh.io. Recently they were featured in our startup showcase series, season two episode one, cloud data innovations, open source innovations, all good stuff. Christian, thanks for coming on this CUBE conversation. >> Thank you. Thank you, John. Thank you for having me on. >> You know, I was really impressed with Codefresh. I met with the founders here on theCUBE, because GitOps, AI, everything is something-ops now: dev, DevSecOps, you've got AIOps, and you've got now GitOps, essentially operationalizing software. The future is here, and "software's eating the world" was written many years ago, but now it's open source all the way. All things software are open source, and that's kind of a done deal. It's only getting better and better, mainstream companies are contributing. You guys are on this wave of this open source tsunami, and you've got cloud scale. Automation's right there, machine learning, all this stuff is now the next gen of code, right? So you're at Codefresh, and your title is developer experience lead. What does that mean right now? What does it mean to be a developer experience lead? Like, you make sure people are having a good experience? Are you developing, are you figuring out the product? What does that mean? >> Yeah, and it's also part of the whole DevRel explosion that's happening right now. Everyone's always asking, well, what is a developer advocate, what does developer experience mean? So you kind of hit the nail on the head a little bit there in the beginning: it's the experience of the developer when using a particular platform, especially the Codefresh platform. That is my responsibility there at Codefresh, to enable end users, to enable partners, to enable anyone that wants to use the Codefresh platform for their CI/CD and GitOps workflows. So that's really my corner of the world, to make sure their experience is great. That's really what I'm here to do. >> At Codefresh. You know, one of the things I can say of my career, you kind of become a historian over time. When I was a developer back in the old days, it was simple: you compiled stuff, you did QA on it, you packaged it up, you wanted it out the door, and that was the workflow. Now, with the cloud, and I was talking with your founders, you've got new abstraction layers. The cloud has changed again and again, open source. So newer things are coming, right? Like Kubernetes, for instance, is a great example that came out of the open source innovations. But Hadoop, as we were mentioning before you came on camera, from a storage standpoint, kind of didn't make it because it was just too hard, right? It made the developer's job harder, and then it required the developers to be specialized.
So what's different now? Can you talk about that specific point, because no one wants to do hard work and have to redo things. Shift left and all that good stuff. What's hard now, what do you guys solve, what's the friction that you're taking out, what becomes frictionless? >> Yeah, and you mentioned a very interesting point about how the things that are coming out almost make it seem harder nowadays to develop an application. You used to have kind of a waterfall sort of workflow where you develop your code, you compile it, right? Back in the day, Java was king, and I think Java still has a large footprint out there, where you would just compile it, deploy it, and if it works, it works, all right, cool, and you kind of just move it along in its process. Whereas with the whole idea that I think Netflix came out with, the fail often, fail fast, release often, the whole Atlassian CI/CD thing, the agile thing, coming into play, now it's a little bit more complex to get your code out there and delivered, to get your code from one environment to the other environment, especially with the advent of Kubernetes and cloud native architecture, where you can deploy onto this immutable infrastructure and automate so quickly and so often that there needs to be some sort of new process in place, a new process like GitOps, where it's frictionless, meaning it makes that complex process of deploying onto a cloud native architecture easier. So that way, as you said before, it returns the developers back to what they care about most, which is just code. I just want to code. >> Yeah. You know, the other cool thing, Christian, I wanna bring up, and we'll get into some of the specifics around Argo, specifically Argo CD, is that the community is responding with a kind of it-takes-a-village mindset. People are getting into this just saying, hey, if we can get our act together around some de facto workflows and de facto capabilities, everyone wins. It's a rising tide floats all boats kind of concept. CNCF certainly has been a big part of that, and we've even seen some of the big hyperscalers getting behind it. And you guys are part of the founding members of the OpenGitOps working group, with Amazon, Azure, GitHub, Red Hat, Weaveworks and then a ton of contributors. Okay, so this is kind of cool. This means that there are people behind this thing saying, look, we gotta get here faster. What happened at KubeCon this year? You guys had some news around Argo and you had some news around the hosted solution. Can you take a minute to explain two things, one, the open community vibe, and two, what you guys announced at KubeCon in Spain? >> Yeah, so as far as OpenGitOps, that was, as you said before, Codefresh was part of that founding committee, of a group of people trying to figure out and define what GitOps is, right? We're trying to bring it beyond the hype word, beyond just a marketing term, to where we actually define what it actually is, because it is actually something that's out there that people are doing, right?
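To make that definition a bit more concrete, here is a minimal sketch of what the desired state in a GitOps workflow can look like. It builds an Argo CD Application manifest as a plain Python dictionary and prints it as YAML; the repository URL, path, namespace, and application name are hypothetical placeholders, and the sketch assumes the PyYAML package is available. The point is simply that the deployment is described as data kept in Git, and a controller such as Argo CD continuously reconciles the cluster to match it.

```python
# Sketch of the declarative "desired state" a GitOps controller reconciles.
# All names and URLs below are illustrative placeholders.
import yaml  # assumes PyYAML is installed (pip install pyyaml)

application = {
    "apiVersion": "argoproj.io/v1alpha1",
    "kind": "Application",
    "metadata": {"name": "payments-service", "namespace": "argocd"},
    "spec": {
        "project": "default",
        # Where the desired state lives: a Git repo the team already reviews.
        "source": {
            "repoURL": "https://github.com/example-org/payments-manifests.git",
            "targetRevision": "main",
            "path": "overlays/production",
        },
        # Where it should run: any cluster the controller is connected to.
        "destination": {
            "server": "https://kubernetes.default.svc",
            "namespace": "payments",
        },
        # Keep the cluster continuously in sync with Git (prune and self-heal).
        "syncPolicy": {"automated": {"prune": True, "selfHeal": True}},
    },
}

print(yaml.safe_dump(application, sort_keys=False))
```

Because the whole deployment is captured as reviewable data like this, changing where or how it runs becomes an edit in Git rather than a rebuild of the delivery process, which is the frictionless quality described above.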
A lot of people remember the Chick-fil-A story, where it's like, they are completely doing this GitOps thing, we're just now putting a definition around it. So that was just amazing to see out there at KubeCon. And, like you said, at KubeCon we took some of that acceleration that we see in the community, and we announced our hosted GitOps offering. >> Right. So hosted GitOps is something that our customers have been asking for for a while. Many times when someone wants to use something like Argo CD, they install it on their cluster and they get up and running, but with all that comes the care and feeding of that platform, not only just keeping the lights on, but also management, security, general maintenance, all the things that come along with managing a system. And on top of that comes the scale aspect of it, right? And with scale, a lot of people go with a hub and spoke design, others go with a fleet design, and in either case there's a challenge in the care and feeding of it. And so with Codefresh hosted GitOps, we take that management headache away. >> Right? So we take the management of Argo CD, the management of all of that, and just offer Argo CD as a service, right? Which allows users to let us take care of all of the GitOps runtime, so they can concentrate on their application deployments. And you also get things like DORA metrics integrated with the platform, you have the ability to integrate multiple CI providers, like GitHub Actions or existing Jenkins pipelines, and really the Codefresh platform becomes your GitOps platform, your central view of the world for your GitOps processes. >> Yeah, I mean, that whole single source of truth concept is really kind of needed. I gotta ask you though, with the popularity of Argo CD and GitOps internally, right, that's been clear, right? Kubernetes, the way that's going, it's accelerating fast, people want simple, it's scaling, you've got automation built in, all that good stuff. What was the driver behind the hosted GitOps solution? Was it customer needs, was it efficiency, all of the above? What was it specifically, and why would someone want to have the hosted versus, say, internal? >> Yeah, so it was really driven by customer need, it's been something that the customers have been asking for. And it's also that you have a process of developing an application out to a fleet of clusters in a traditional, and I keep saying traditional GitOps practice as if GitOps were so old, but when people first start out, they'll start installing Argo CD on all these clusters and trying to manage that at scale, and it seemed like it would be nice if we could just consume this as a service, so we don't have to worry about best practices, we don't have to worry about security, all of that is taken care of and managed by us at Codefresh.
So this is something that has been asked for, and something that we believe will accelerate developers into actually developing their applications. They don't have to worry about managing >> The platform. So just getting this right, it's hosted, a managed service by you guys on this one? >> Correct, yes. >> Okay, got it. All right, so let me get into Argo real quick, just to level set for the folks that are leaning into this and kicking the tires. Where are we with Argo? Why was it so popular? What did it do specifically? Did it just make it easier for developers to manage and monitor Kubernetes, keep 'em updated? What was the specific value behind Argo, where did it come from and why is it so popular? >> Yeah, so the Argo project is made up of a few tools. Usually when people say Argo, they mean they're talking about Argo CD, but there's also Argo Workflows, Argo Events, Argo Notifications, and, like I said before, Argo CD with that. And that is something that was developed internally at Intuit, right? So for those of you who don't know, Intuit is the company behind TurboTax. For those of us in the US, we know that season all too well, the tax season. And so that was a tool that was developed internally. >> And by the way, Intuit, we've covered them for many years. They're very big cloud adopters, they've been on that train from day one, and they've been driving a lot of cloud scale too. Sorry to interrupt. >> Yeah, and also, they were always open source first, right? So they developed something internally, but they always had the intention of open-sourcing it. And so it was really a tool that was born internally, and it was a tool that helped them get stuff done with Kubernetes. And that's kind of the tagline they use for the Argo project, you need to get stuff done. They wanted their developers to focus less on deploying the application and more on writing the application itself. And so the Argo project is a suite of tools, essentially, that helps deploy onto Kubernetes, using GitOps as that cornerstone in the design philosophy. It's so popular because of the ease of use and developer friendliness of it. It's meant to be simple, and simple in a good sense of getting up and running, which attracted developers from all around the world. Other companies like Red Hat got into it as well, BlackRock also is a big contributor, and there are thousands of other independent contributors to the Argo project. >> Yeah. Christian, you bring up a good point, and I'm gonna go on a little tangent here, but I wanna get your reaction to something that Dave Vellante and I and our CUBE team have been kind of riffing on lately. You mentioned Netflix earlier, you mentioned Intuit. There's a story that's been developing, with traction and momentum and trajectory over the past, say, 10 years: the companies that went on the cloud, like Netflix, Intuit, Snowflake, Lyft (Snowflake maybe not so much now), but in terms of open source, they're all contributing. They're all contributing back to open source, but they're not cloud providers, right?
So you're seeing that kind of first generation make a massive contribution to open source. Open source has been around for a while, remember the early days, and we'd all participate on projects, but now you have real companies building IP and going open source first because they're on a hyperscale cloud, but they're not the cloud themselves. They took advantage of that. So there's kind of this flywheel cycle of cloud to open source, not from the vendors themselves, like Amazon Web Services or Azure, but from the people who rode their CapEx and built on that scale, feeding into open source and then coming back. This is kind of an interesting dynamic. What's your reaction to that? Do you see that? Yeah, supercloud kind of vibe there. >> Yeah, well, and also I think it's indicative that open source is not only a way to develop applications, a way to engineer your project, but also kind of a strategic advantage in a way, right? You see companies like, even Microsoft has been going into open source, right? They've been going open source first. They made a huge pivot to using open source as a strategic direction for the company. And I think that goes back a little bit to my roots, I always talk about Red Hat, right? I was at Red Hat previously, and Red Hat being the first billion dollar open source company. >> Right. We always joke, it's like, well, internally we know you were a billion dollar company that sold free software. How does that happen? But it's really built into being able to tap into those expert resources. People love using software, people love the software they love using, and they wanna improve it. Companies are now just getting out of their way. Companies now, essentially, it's just like, let's just get out of the way, let's let people work on what they wanna work on. They love the software, they wanna improve it, let's let them. >> It's interesting. A lot of people love that the clouds have all this power. If you think about what we were just riffing on and what you just said, the economics and the organic self-governance have always been the open source way, where commercial value is enabled if you play ball, right? Like, oh, Red Hat, for instance. And now you're seeing the community kind of be that arbiter of the cloud. So, hey, if everyone can create value on, say, AWS or Azure and bring it to open source, everyone benefits across all clouds, hopefully, eventually. So the choice aspect comes in. So this community angle is huge, and I think it's changing a lot for the better, and I think this is where we're seeing a lot of that growth. And you guys have been in the middle of it with the Argo project and GitOps specifically in that sector. How have you seen that growth? What dynamics have you seen, power dynamics, organic growth, is it governed well? What are some of the successes, what are some of the challenges? Can you share your thoughts on the community's growth around GitOps and the Argo project? >> Yeah, yeah. So I've been part of some of these communities, right?
Like the OpenGitOps community and the Argo community, pretty much from the beginning, and seeing it develop from an idea to having all these contributors, having the buzzword come out of it, GitOps being all over social media, all over LinkedIn, all over all these different channels. I've seen things like GitOpsCon, right? So, being part of the OpenGitOps community, one of the things we did was GitOpsCon. It started as a meetup a couple of years ago, and now we had an actual event at KubeCon in Los Angeles. >> You know, we had about 50 people there, but then at KubeCon in Valencia, this past KubeCon, we had over 200 people. It was the second largest co-located event at KubeCon. So just seeing that community, and from a personal standpoint, being part of that community, being the event chair, right, being one of the co-chairs, was a moment of pride for me, being able to stand up there and just see a sea of people, like, wow, we just started with a handful of people at a meetup, and now we're actually having conferences. And speaking of conferences, the Argo community as well, we put on a virtual-only event for ArgoCon last year, and we're going to do it in person this year. >> Do you have a date on that? Do you have a date on that, ArgoCon '22? >> Yeah, yeah. ArgoCon, September 19th, 2022, so mark your calendars. It's a multi-day event, and it's part of something else that I've seen in the community, where first we were talking about these meetups, and now we're doing multi-day events. We're also in talks in the OpenGitOps community about making GitOpsCon a multi-day event. There are just so many talks and so many people that want to be involved and network that we're saying, well, we're going to need more days, because there are just so many people coming to these events. And seeing these communities grow, not just from the engineering standpoint, but also from the end user standpoint, from the people that are actually doing these things, and seeing some of these use cases, seeing some of the successes, seeing some of the failures, right? People love listening to those talks about postmortems, those are some of my favorite talks as well. So seeing that community grow, on a personal level, it's a point of pride. >> It's like CSI for software developers. You want to be curious about >> Exactly. >> what happened. You know, it's interesting, you mentioned the multiple events at KubeCon. The vibe that's going on is a very festival vibe, right? You have organic groups coming together. I remember when they had just started doing the day zero programs, and now you have almost multiple stages of content at these events. It feels like a Coachella vibe or some sort of festival vibe, a lot of things going on, and you pick your kind of area, but you can move around. I find that the format du jour, I think, is going well these days. What do you think about that? >> Yeah, yeah. No, for sure.
And I love that analogy of Coachella. It does feel like there's something for everyone, and you can find what you like, and you'll find a little group, right, a little clique of people, that's probably the wrong term to use, but you find like-minded people who are passionate about the same thing, right? Like the security guys, you see them all clump together, right? Like the developer CI/CD GitOps guys, we all kind of clump together and start talking about everything that we're doing. And I think that's really something special about KubeCon. It's gotten so big that it's almost impossible to fit everything in a week, because there's just so much to do and there's so much that interests someone, but it's >> A code party is what we call it. It's a code party. Yeah. >> It's a code party for sure. >> For sure. Nerd Fest on steroids. Hey, I wanna wrap this up and give you the final word, Christian. Thanks for coming on, great insight, great conversation. You guys are in the middle of a hot area, obviously: large scale data growth, Kubernetes is scaling beautifully and making it easier with managed services, machine learning is kicking in, and you get automation building in, all favoring the developer and the CI/CD pipeline and all that good stuff. People want to learn more. Can you take a minute to put the plug in for Codefresh on the certification? How do I get involved, where are you, are there levels if I want to jump in and get trained and get fluent on Codefresh? Can you share commentary on what the status is? >> Yeah, for sure. So Codefresh is offering a free certification, right, for GitOps, for Argo CD and GitOps. It's the first of its kind for Argo CD, first of its kind for GitOps, where you can actually go get certified with Argo CD and GitOps. Level one is out right now. You can go take that at codefresh.io/certification. It's out there, sign up, you don't need to pay anything, right, it's a free course you can take. Level two is coming soon, in the next few months, I believe. I don't wanna quote a specific day, but soon, as in months, right? So we're counting that down, where you can get not only the level one certification, but a level two, more advanced certification for those who have been using Argo for a while. They can still take that and be able to get another level of certification. So also, ArgoCon will be there. We're part of the programming committee for ArgoCon, right? This is a community driven event, but Codefresh is a proud diamond sponsor, so we'll be there. >> Where's it located? You gave us the date, September 19th. Is it multi-day or one day? >> It's a multi-day event. So ArgoCon runs from the 19th, 20th and 21st in Mountain View, so it'll be in Mountain View in the Bay Area. For those of you who are local, you can just drive in. Great. >> I'll write that down, I'll plug it, I'll put it in the show notes. >> Awesome. Awesome. Yeah.
And you will be there, so you can talk to me, you can talk to anyone else at Codefresh about Argo CD, and find out more about hosted GitOps at codefresh.io. You can find us in the Argo project and the OpenGitOps community, we're deep in the community for both Argo and GitOps, so you can find us there as well. >> Well, let's do a follow up when you're in town, it's only a couple months away, and we're getting through the summer. I can't believe events are back. It's really great to see the community face to face, and they were responding. I mean, KubeCon in October, I think that was kind of a tough call, and then the one in Spain, I couldn't make it. Unfortunately, I came down with COVID, but our team was there. Open source is booming, it continues to go to the next level, and new power dynamics are developing in a great way. Christian, thanks for coming on and sharing your insights as the developer experience lead at Codefresh. Thanks so much. >> Thank you, John. I appreciate it. >> Okay, this is a CUBE conversation. I'm John Furrier, host of theCUBE. Thanks for watching.
Benoit Dageville, Snowflake | Snowflake Summit 2022
(upbeat music) >> Welcome back everyone, theCUBE's three days of wall to wall coverage of Snowflake Summit '22 is coming to an end, but Dave Vellante and I, Lisa Martin are so pleased to have our final guest as none other than the co-founder and president of products at Snowflake, Benoit Dageville. Benoit, thank you so much for joining us on the program. Welcome. >> Thank you. Thank you, thank you. >> So this is day four, 'cause you guys started on Monday. This is Thursday. The amount of people that are still here speaks volumes. We've had close to 10,000 people here. >> Yeah. >> Could you ever have imagined back in the day, 10 years ago that it would come to something like this in such a short period of time? >> Absolutely not. And I always say if I had imagined that I might not have started Snowflake, right. This is somehow scary. I mean and yeah, it's huge. And you can feel the excitement of everyone. It is like mind boggling and the fact that so many people are still there after four days is great. >> Your keynote on Tuesday was fantastic. Your energy was off the charts. It was standing room only. There were overflow rooms. Like we just mentioned, a lot of people are still here. Talk about the evolution of Snowflake, this week's announcements and what it means for the future of the data cloud. >> Yeah, so evolution, I mean, I will start with the evolution. It's true that that's what we have announced. This week is not where we started necessarily. So we started really very quickly with big data combined with data warehouse as one thing. We saw that the world was moving into fragmented siloing data and we thought with Thierry, we are going to combine big data and data warehouse in one system for the cloud with this elasticity and this service simplicity. So simplicity, amazing elasticity, which is this multi workload architecture that I was explaining during the keynotes and really extreme simplicity with the service. Then we realized that there is one other attribute in the cloud, which is unique, which doesn't exist on-premise, which is collaboration. How you can connect different tenets of the platform together. And Google showed that with Google Docs. I always say to me, it was amazing that you could share document and have direct access to document that you didn't produce and you can collaborate on this document. So we wanted to do the same thing for data and this is where we created the data cloud and the marketplace where you can have all these data sets available and really the next evolution I would say is really about applications that are (indistinct) by that data, but are way simpler to use for all the tenets of the data cloud. And this is the way you can share expertise also, including, ML model, everyone talks about ML and the democratization of ML. How are you going to democratize ML? It's not by making necessary training super easy. Such that everyone can train their ML for themselves. It's by having very specialized application where data and ML is at the core, which are shared, through the marketplace and we shall leverage by many tenets of this marketplace that have no necessary knowledge about building this ML models. So that's where, yeah. >> When you and Thierry started the company, I go back to the improbable rise of Kubernetes and there were other more sophisticated container management systems back then, but they chose to focus on simplicity. And you've told me before, that was our main tenet. We are not going to worry about all the complex database stuff. 
You knew how to do that, but you chose not to. So my question is, did you envision solving those complex problems over time yourselves or through an ecosystem? Was this by design or did you... As you started to get into it, say let's not even try to go there let's partner to go there. >> Yeah, I mean, it's both. It's a combination of both. Snowflake, the simplicity of the platform is really important because if our partners are struggling to put their solution and build solution on top of Snowflake they will not build it. So it's very important that number one, our platform is really easy to use from day one. And that really has to be built inside the platform. You cannot build simplicity on top. You cannot have a complex solution and all of a sudden realize that, oh, this is complex. I need to build another layer on top of it to make it simpler, that will not work. So it had to be built from day one, but you're right. What is going to be Snowflake? I always say in 10 years from now, we just turn 10 years old or we are going to turn 10 years old in few months. Actually a few months, yes. >> Right. >> So for the next 10 years I really believe that most of Snowflake will not be built by Snowflake. And that's the power of the partners and these applications. When you are going to say I'm using Snowflake, actually, probably you are not going to use directly code developed by Snowflake. That code will leverage our platform, but you will use a solution that has been built on top of Snowflake. And this is the way we are going to decouple, the effort of Snowflake and multiply it. >> It's an interesting balance, isn't it? When I think of what you did with Apache Iceberg, if I use Iceberg and I'm not going to get as much functionality, but I may want that openness, but I'm going to get more functionality inside of the data cloud. And I don't know, but if you know the answer to what's going to happen. >> No, that's a super good question. So to explain what we did with Apache Iceberg, and the fact that now it's a native format for us. So everything that you can do with our internal formats, you can do it with Apache Iceberg, including security, defining masking, data masking all the governors that we have, fine grain security aspects, the replications you can define you can use (indistinct) on top of... >> But there's a but, right? But if I do that with native Snowflake tools, I'm going to get an even greater advantage, am I not? >> Yes. So that's what I'm saying. So that's why we embraced Iceberg, because I think we can bring all the benefit of Snowflake to people who have decided to use Iceberg, I mean open formats. Iceberg is a table format. So and why it was important because people had massive investments in open source in Hadoop. And we had a lot of companies saying, we love Snowflake. We want to be a Snowflake customer, but we cannot really migrate all our data. I mean, it will be really costly. And we have a lot of tools that need access, direct access. So this is why we created Iceberg because we can really... I mean, we really think that we can bring the benefit of Snowflake to this data. >> Gives customers optionality. Okay. I use this term super cloud. You don't use the term, but that's okay. And I get a lot of heat for it. But to me, what you're doing is quite a bit different than multicloud because you're creating that abstraction layer. You're bringing value above it. My question to you is, the most of the heat I get is, oh, that's just SaaS. Are you just SaaS? >> No. 
I mean, no, absolutely not. I mean, you're right we are a super cloud. I mean it's a much better word than saying we are multicloud. Multicloud is often viewed as oh, I have my system and now I can run this system in the different cloud providers. Snowflake is different. We have one single platform for the world, which happens to have some regions are AWS region, some regions are Azure, some regions are GCP, Google and we merge them together. We have this Snowgrid technology that connects all our regions together so that we have really one platform for the world. And that's very important because when you talk about connections of data and expertise applications you want to have global reach, right. It doesn't exist. We are not siloed by region of the world, right? You have a lot of companies which are multinational that have presence everywhere. And you want to have this global reach. The world is not a independent set of regions and countries, right. And that's the realization. So we had to create this global platform for our customers. >> And now you have people building clouds on top of your data cloud, well that to me is the next signal. In your keynote, you talked about seven pillars, all data, all workloads, global architecture, self-managed, programmable, marketplace, governance, which ones are the most important? >> All of them. It's like when you have kids, you don't want to pick and say, this one is my preferred one, so they are really important. All of them, as I said without data, there is no Snowflake, right? So all data is so important that we can reach every data, wherever it is. And Iceberg is a part of that, but all workload is really important because you don't want to put your data in one platform, if you cannot run all your workloads and workloads are much broader than just data warehousing, there is data engineering, data science, ML engineering, (indistinct) all these workloads applications. So that's critical. Programmable is where we are moving, right. We want to be the place where data applications are built. And we think we have a lot of advantages because data application needs to use many workloads at once, right? It's not that that application will do only data warehousing, they need to store their states, they need to use this new workload that we define, which is Unistore. They need to do data engineering because they need to get data, right. They have to save this data. So they need to combine many workload and if they have to stitch this workload, because the platform was not designed as one single product where everything is consistent and works together, that you have to stitch, it's complicated for this application to make it work. So Snowflake is we believe an ideal platform to run these data applications. So all workloads, programmable, obviously, so that you can program. And programmable has two aspects, which is big part of our announcement. Is both data programmability, which is running Python against petabyte, terabytes of data at scale and doing it scale out. So that's what we call data programmability. So both Java, Python and (indistinct), but also running applications like UI. And we had this acquisition of Streamlit. Streamlit now has been fully integrated in Snowflake. We announced that such that not only you can have this data programmability, but you can expose your data through this nice UIs, interactive UI to business users potentially. So it goes all the way there. Global is super important. 
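To give a rough feel for the data programmability Benoit describes, here is a minimal Snowpark for Python sketch. The connection parameters and the ORDERS table are hypothetical placeholders, and this is an editor's illustration rather than anything from the conversation itself. The DataFrame operations are translated into SQL and pushed down to run on Snowflake's compute, so only the small result comes back to the client, and per the earlier Iceberg discussion the same pattern applies whether the table is in Snowflake's native format or an Iceberg table.

```python
# Minimal Snowpark for Python sketch; all names and credentials are placeholders.
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col, sum as sum_

connection_parameters = {
    "account": "<account_identifier>",
    "user": "<user>",
    "password": "<password>",
    "warehouse": "ANALYTICS_WH",   # hypothetical warehouse
    "database": "SALES",           # hypothetical database
    "schema": "PUBLIC",
}

session = Session.builder.configs(connection_parameters).create()

orders = session.table("ORDERS")   # hypothetical table

top_regions = (
    orders.filter(col("ORDER_DATE") >= "2022-01-01")
          .group_by("REGION")
          .agg(sum_(col("AMOUNT")).alias("TOTAL_AMOUNT"))
          .sort(col("TOTAL_AMOUNT").desc())
          .limit(10)
)

top_regions.show()   # the heavy lifting happens on Snowflake, not on the client
session.close()
```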
As we say, we want to be one platform for the world. And of course, as I said, the last pillar, which is somehow critical for us because we are a cloud, is that we need to have governance, we need to have security of our data. And why it took us so long to do Python is not because it's hard to run Python, right? Everyone can run Python. It's because we had to secure it. And I talked about creating this amazing sandboxing technology, such that when you include third party libraries and third party code, you are guaranteed that this third party code cannot reach out and infiltrate your data, right? We control the environment that Snowflake provides. >> Can you share some of the feedback from the customers? You probably had many customer conversations over the last four days. >> Look at that smile. (interviewer laughing) (Lisa laughing) >> Actually not, because I was so busy everywhere. Unfortunately, I didn't speak to many customers. Saying that, I had everyone stopping me and talking about what they heard, and yeah, there is a huge excitement about all of this. >> What's been the feedback around the theme of the event, the world of data collaboration? Data collaboration is so critical, as every company these days must be a data company to compete, to win. What's been some of the feedback you've had, customers really embracing data collaboration and what Snowflake is enabling? >> Yeah, I mean, almost every company which is using Snowflake is collaborating with data. You have heard the number of stable edges that we have, and there is a real need for that, because your data alone... You cannot make sense of your data if it is just alone. It needs to be connected with other data that you have not generated. So all data, when you say the first pillar of Snowflake is all data, is not only about your data, but is about all the data that's created around you, which puts perspective on your own data. And that's critical, and it's so painful to get. I mean, even your own data is difficult to have access to, but imagine data that you didn't produce. And so yes, data collaboration is critical, and now we have expanded it to applications and expertise, sharing models, for example. That's going to have a huge impact. >> All data includes now transaction data, right? >> Yes. >> That's a big part of the announcements that you guys made. >> Yeah. So the motivation for that was really, if we want to run applications, full applications, we announced native applications, which are fully executed and run inside the (indistinct) data cloud, right? They need all the services that an application needs, and in particular managing their state. And so we created Unistore, which is a new workload that allows you to combine transactional data, which is generated by these applications, and at the same time be able to do analytics directly on this data. So we call it Hybrid Table, because it has this hybrid aspect. You can do both transactional access to this data and at the same time analytics here, without having data pipelines moving data and transforming it from the transactional system to the analytical system, right? Snowflake is one system. Again, in the spirit of simplifying everything, this is the Snowflake (indistinct). >> I can ask the same question I asked at first, (indistinct) when was the aha moment that you and Thierry had that said, this is not just a better data warehouse, it's actually more than that?
You probably didn't call it a data cloud until later on, but did you know that from the beginning, or was that something you kind of stumbled into? >> No. So as I said, we founded Snowflake in 2012, and Thierry and I, we locked ourselves in my apartment and we were doing the blueprint of Snowflake, trying to find what is the revolution with the cloud for this data warehouse system and analytical system, both big data and data warehouse. And the aha moment was, of course, cloud. Okay, what is cloud? It's elasticity, it's the service, and later collaboration. So on the elasticity aspect, when you ask database people what elasticity is, they will tell you, oh, you have a cluster of nodes, like if it is Oracle, it would be a (indistinct) cluster, and the elasticity is that you can add one node, two nodes to this cluster without having too much impact on the existing workload, because you need to shuffle data, right? It's hard, and doing it online, right, that's elasticity. If you can do that, you are elastic. We thought that that was not very interesting to do. What is interesting with elasticity is to plug in new workloads. You can plug in a workload like that, and that workload is running without having any impact on other workloads which are running on the platform. So elasticity for us was having dedicated compute resources for workloads. And these compute resources could start as soon as the workload starts and would shut down when the workload finishes, and they would be sized exactly for the demand of that workload. And we thought the aha moment was, okay, if we can do that, now we can run a workload with, let's say, 10X more compute resources than what you would have used, or 100X more. Okay, let's say 100X more, because we parallelize things. Now this workload can run 100X faster, right? That's assuming we do a good job in the scale-out, which is our IP. And if we can do that, now the compute resources that you have used, you have used them for 100 times less time. So you have used 100 times more resources, because you have more nodes, but because you go fast, you use them for less time, right? So if you multiply the two, it's constant. So you can run and accelerate a workload dramatically, 10X, 100X, for the same price. Even if we are not better in efficiency than the competition, just having that was the magic, right? >> You know how the Google founders originally had trouble raising money, because who needs another search engine? Did you get that originally, like when you started going to raise money, Amazon's got a database, so who needs another cloud database? Did you get that early on, or was it just obvious to Speiser and company as well? >> Speiser is a little bit on the crazy side and ambitious, and so Speiser is Speiser, and of course he had no doubt. But even he was saying, Benoit, Thierry, Hadoop, right? Everyone is saying Hadoop is going to be the revolution, and you guys are actually betting against Hadoop, because we told Speiser, Hadoop is a bad system, it's going to fail. But at the time everyone was so bullish about Hadoop, everyone was implementing Hadoop, that it didn't look like it was going to fail, and it looked like we were probably the ones who were wrong. So there was a lot of skepticism about not leveraging Hadoop and not being on Hadoop, okay, something built on top of Hadoop. That was number one. There was no cloud data warehouse at the time we started. Redshift had not started; it appeared somewhere around when Snowflake was founded. So creating a data warehouse in the cloud sounded crazy to people.
How am I going to move my data over there? And security, what about security, the cloud is not secure. So that was another... >> So you guys predated that ParAccel move by... >> Yes. >> Okay, so that's interesting. And I thought when Redshift... I mean, when Amazon announced Redshift, I was sure that Mike Speiser would come and say, guys, it's too bad, they beat you, they built something, and actually it was the reverse. Mike Speiser was super excited, and so it was interesting to me. >> Wow, that's amazing. 'Cause John Furrier and I, we were early with theCUBE. When theCUBE started, it was like the beginning of Hadoop, and so we brought theCUBE to, I think it was the second Hadoop World, and we were rubbing nickels together at the time. And I was so excited, bring compute to storage, it made so much sense. But I remember, and I won't say who it was, but an early Hadoop committer told me, this is going to fail. And I'm like, what? And he started going, HBase is crap, and all this stuff. And I was sad because I was so excited, but it turned out that you had the same (indistinct). >> Because of complexity. Okay, Hadoop failed for two reasons. One is because they decided that, oh, a lot of this database stuff, you don't need transactions, you don't need SQL, you don't necessarily need to go fast. It'll be batch, no real time interaction with data, no one needs that. >> Cheap storage. >> So a lot of compromises on very important technology. And at the same time, extreme complexity, and the complexity for me was where I knew that it was going to fail big time, and we bet Snowflake on the failure of Hadoop, indeed. >> And there was no cloud early on in Hadoop. >> And there was no cloud too. >> And that was what killed it. That was like... >> You're right. And the model that Hadoop had for data didn't work on block storage. Block storage is not as efficient as HDFS. So that was also another factor. >> Do you ever sit back and think about... So you think about how much money has poured into separating compute from storage and cloud databases, and you started it all. (interviewer laughing) >> Yeah. No, this is... >> Pretty amazing. >> Yeah. >> Right, so that's good. That means that you're onto a good idea, but a lot of people get confused, again, they think that you're a cloud data warehouse, and you're not, I mean, you're much more than that. >> Yeah, I hate that, I have to say, because from day one we were not a cloud data warehouse. As I said, it was all about combining big data, massive amounts of unstructured data, petabytes stored as files, okay, that's very important, stored as files, where it's very easy to drop data into the system without... very low cost, combined with data warehouse, full multi-statement transactions. When people tell you today, oh, now we are a data warehouse, they don't have multi-statement transactions, right? So we had, from day one, multi-statement transactions, really efficient SQL, you could run your dashboards. So combining these two worlds was, I think, the crazy thing, that's the crazy innovation that Snowflake did initially. >> Yeah. >> And I know it's really easy to build a data warehouse somewhere, because if you don't think about big data, petabytes, extremely structured data, you remove a lot of complexity. >> This is why, Lisa, when you get excited about technology, you always have to have somebody who really deeply understands the technology to sniff test it, all right, so awesome. Thank you for sharing that story. >> Yeah. >> Fantastic.
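Benoit's elasticity argument a couple of turns back reduces to simple arithmetic, and a tiny worked example may help make it concrete. The numbers below are made up purely for illustration: cost is nodes times hours times a per-node-hour price, so if adding nodes shortens the runtime proportionally, the bill stays roughly flat while the workload finishes far sooner. The caveat he mentions, doing a good job at scale-out, shows up here as the efficiency factor.

```python
# Illustrative arithmetic only; the price and runtimes are hypothetical.
PRICE_PER_NODE_HOUR = 2.0  # made-up unit price

def job_cost(nodes: int, single_node_hours: float, efficiency: float = 1.0) -> float:
    """Cost of a job that takes single_node_hours on one node, run on `nodes` nodes."""
    runtime_hours = single_node_hours / (nodes * efficiency)
    return nodes * runtime_hours * PRICE_PER_NODE_HOUR

SINGLE_NODE_HOURS = 100.0  # hypothetical workload

for n in (1, 10, 100):
    hours = SINGLE_NODE_HOURS / n
    print(f"{n:>3} nodes -> {hours:6.2f} hours, cost ${job_cost(n, SINGLE_NODE_HOURS):.2f}")

# With perfect scaling all three runs cost the same; the 100-node run just
# finishes 100x sooner. With efficiency below 1.0 the cost creeps up, which
# is exactly where the scale-out engineering matters.
```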
So over 5,900 customers now. I saw over 500 in the Forbes G2K, over almost 10,000 people here this year. If we think back to 2019, there was about what? Less than 2000 people. >> Yeah. >> What do you think is going to happen next year? >> I don't know. I don't like to think about next year. I mean, I always say, Snowflake is so exciting to me because it is like a TV show, right. Where you wait the next season and we have one season every year. So I'm really excited to know what is going to happen next year. And I don't want to project what I think will happen, but all these movements to the Snowflake being the platform for data application. I want to see what people are going to build on our platform. I mean, that's the excitement. >> Season 11 coming up. >> Yes. Season 11. Yes. >> No binge watching here. Benoit, it's been a pleasure to have you on the program. >> Thank you. >> Congratulations on incredible success, the momentum, the energy is contagious. We love it. (Benoit laughing) >> Thank you so much. >> Thank you. >> Bye bye. >> For Benoit Dageville and Dave Vellante, I'm Lisa Martin. You're watching theCUBE's coverage of Snowflake Summit '22. Dave and I will be right back with a wrap. (upbeat music)
Joe Nolte, Allegis Group & Torsten Grabs, Snowflake | Snowflake Summit 2022
>>Hey everyone. Welcome back to theCUBE. Lisa Martin, with Dave Vellante. We're here in Las Vegas with Snowflake at the Snowflake Summit 22. This is the fourth annual; there's close to 10,000 people here. Lots going on. Customers, partners, analysts, media, everyone talking about all of this news. We've got a couple of guests joining us. We're gonna unpack Snowpark. Torsten Grabs, the director of product management at Snowflake, and Joe Nolte, AI and MDM architect at Allegis Group. Guys, welcome to the program. >>Thank you so much for having >>Us. Isn't it great to be back in person? It is. >>Oh, wonderful. Yes, it >>Is. Indeed. Joe, talk to us a little bit about Allegis Group. What do you do? And then tell us a little bit about your role specifically. >>Well, Allegis Group is a collection of OpCos, operating companies, that do staffing. We're one of the biggest staffing companies in North America. We have a presence in EMEA and in the APAC region. So we work to find people jobs, and we help get 'em staffed, and we help companies find people, and we help individuals find >>People. Incredibly important these days, excuse me, incredibly important these days. It is >>Very, it very much is, right >>There. Tell me a little bit about your role. You are the AI and MDM architect. You wear a lot of hats. >>Okay. So I'm an architect and I support both of those verticals within the company. So I work, I have a set of engineers and data scientists that work with me on the AI side, and we build data science models and solutions that help support what the company wants to do, right? So we build it to make business processes faster and more streamlined. And we really see Snowpark and Python helping us to accelerate that and accelerate that delivery. So we're very excited about it. >>Explain Snowpark for, for people. I mean, I look at it as this, this wonderful sandbox. You can bring your own developer tools in, but, but explain in your words what it >>Is. Yeah. So we got interested in, in Snowpark because increasingly the feedback was that not everybody wants to interact with Snowflake through SQL. There are other languages that they would prefer to use, including Java, Scala, and of course, Python. Right? So then this led down to our, our work on Snowpark, where we're building an infrastructure that allows us to host other languages natively on the Snowflake compute platform. And now, here, what we just announced is Snowpark for Python in public preview. So now you have the ability to natively run Python code on Snowflake and benefit from the thousands of packages and libraries that the open source community around Python has contributed over the years. And that's a huge benefit for data scientists, ML practitioners, and data engineers, because those are the, the languages and packages that are popular with them. So yeah, we very much look forward to working with the likes of you and other data scientists and, and data engineers around the Python ecosystem. >>Yeah. And, and Snowpark helps reduce the architectural footprint, and it makes the data pipelines a little easier and less complex. We have a, we had a pipeline and it works on DMV data. And we converted that entire pipeline from Python running on a VM to running directly down on Snowflake. Right. We were able to eliminate code because you don't have to worry about multithreading, right? Because we can just set the warehouse size through a task, no more multithreading, throw that code away. Don't need to do it anymore. Right. 
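Torsten's description of hosting Python natively next to the data, and the pipeline conversion Joe is describing here, are easiest to picture with a small Snowpark for Python sketch. This is a hypothetical example, not the Allegis pipeline: the connection details, table, and column names are invented. The point it illustrates is that the DataFrame calls below are translated into SQL and executed inside Snowflake's engine, so the data never has to be pulled out to a client machine or a VM.

```python
# Hypothetical Snowpark for Python sketch: the transformations below are
# compiled to SQL and run inside Snowflake, not on the client.
from snowflake.snowpark import Session
from snowflake.snowpark.functions import avg, col

connection_parameters = {
    "account": "my_account", "user": "my_user", "password": "...",
    "warehouse": "ANALYTICS_WH", "database": "DEMO", "schema": "PUBLIC",
}
session = Session.builder.configs(connection_parameters).create()

# Lazily reference a table; nothing executes yet.
events = session.table("DMV_EVENTS")

# Build up transformations; Snowpark turns these calls into a SQL plan.
daily = (
    events.filter(col("EVENT_TYPE") == "RENEWAL")
          .group_by(col("EVENT_DATE"))
          .agg(avg(col("PROCESSING_MINUTES")).alias("AVG_MINUTES"))
)

# Execution happens here, inside Snowflake; only the results are materialized.
daily.write.save_as_table("DAILY_RENEWAL_METRICS", mode="overwrite")
session.close()
```

Nothing about the pipeline logic has to change as the data volume grows; the same plan simply runs on a larger warehouse.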
We get the same results, but the architecture to run that pipeline gets immensely easier because it's a stored procedure that's already there. And implementing the calling of that stored procedure is very easy. The architecture that we use today uses six different components just to be able to run that Python code on a VM within our ecosystem, to make sure that it runs on time and is scheduled and all of that. Right. But with Snowflake, with Snowpark and Python on Snowflake, it's two components. It's the stored procedure and our ETL tool calling it. >>Okay. So you've simplified that, that stack. Yes. And, and eliminated all the other stuff that you had to do that now Snowflake's doing, am I correct? That you're actually taking the application development stack and the analytics stack and bringing them together? Are they merging? >>I don't know. I think in a way I'm not real sure how I would answer that question, to be quite honest. I think with Streamlit, there's a little bit of application that's gonna be down there. So you could maybe start to say that. I'd have to see how that carries out and what we do and what we produce to really give you an answer to that. But yeah, maybe in a >>Little bit. Well, the reason I asked you is because we always talk about injecting data into apps, injecting machine intelligence and ML and AI into apps, but there are two separate stacks today. Aren't they >>Certainly the two are getting closer >>To Python, Python. It gets a little better. Explain that, >>Explain, explain how >>That, I, just like in the keynote, right, the other day, when she showed her sample application, you can start to see that, 'cause you can do some data pipelining and data building and then throw that into a training module within Python, right down inside of Snowflake, and have it sitting there. Then you can use something like Streamlit to, to expose it to your users. Right? We were talking about that the other day, about how do you get ML and AI, after you have it running, in front of people. We have a model right now that is a predictive and prescriptive model of one of our top KPIs. Right. And right now we can show it to everybody in the company, but it's through a Jupyter notebook. How do I deliver it? How do I get it in front of people so they can use it? Well, what we saw was Streamlit, right? It's a perfect match. And then we can compile it. It's right down there on Snowflake. And it's a completely easier time to delivery to production, because since it's already part of Snowflake, there's no architectural review, right. As long as the code passes code review, and it's not poorly written code and isn't using a library that's dangerous, right, it's a simple deployment to production. So because it's encapsulated inside of that Snowflake environment, we have approval to just use it however we see fit. >>So that code delivery, that code review, has to occur irrespective of, you know, whatever you're running it on. Okay. So I get that. And, and, but you, it's a frictionless environment you're saying, right. What would you have had to do prior to Snowflake that you don't have to do now? >>Well, one, it's a longer review process to allow me to push the solution into production, right. Because I have to explain to my InfoSec people, right? My other... it's not >>Trusted. >>Well, well, don't use that word. No. Right? There are checks and balances in everything that we do, >>It has to be verified. 
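Joe's "two components" framing, a stored procedure in Snowflake plus something that calls it, corresponds to a pattern like the sketch below. It is a hypothetical illustration rather than the actual Allegis pipeline: the procedure body, warehouse, stage, and schedule are all invented. A Python function is registered as a Snowpark stored procedure, and a task then calls it on a schedule; the warehouse size is declared on the task, which is what replaces tuning threads on a VM.

```python
# Hypothetical sketch: register a Python pipeline as a Snowpark stored
# procedure, then have a task call it on a schedule with a chosen warehouse.
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col

def refresh_metrics(session: Session) -> str:
    # Pipeline body; it runs inside Snowflake whenever the procedure is called.
    renewals = session.table("DMV_EVENTS").filter(col("EVENT_TYPE") == "RENEWAL")
    (renewals.group_by("EVENT_DATE")
             .count()
             .write.save_as_table("RENEWALS_BY_DAY", mode="overwrite"))
    return "ok"

connection_parameters = {
    "account": "my_account", "user": "my_user", "password": "...",
    "warehouse": "ANALYTICS_WH", "database": "DEMO", "schema": "PUBLIC",
}
session = Session.builder.configs(connection_parameters).create()

# Component 1: the stored procedure, persisted in Snowflake.
# (Assumes an internal stage named @sproc_stage already exists.)
session.sproc.register(
    func=refresh_metrics,
    name="REFRESH_METRICS",
    packages=["snowflake-snowpark-python"],
    is_permanent=True,
    stage_location="@sproc_stage",
    replace=True,
)

# Component 2: the caller. A task stands in for the ETL tool here; the
# warehouse size is declared on the task, so there is no thread tuning.
session.sql("""
    CREATE OR REPLACE TASK REFRESH_METRICS_TASK
      WAREHOUSE = ANALYTICS_WH
      SCHEDULE = 'USING CRON 0 2 * * * UTC'
    AS CALL REFRESH_METRICS()
""").collect()
session.sql("ALTER TASK REFRESH_METRICS_TASK RESUME").collect()
```

In practice an external ETL or orchestration tool could issue the CALL instead of a task; either way the structure stays the same two pieces.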
And >>That's all, it's, it's part of the, the, what I like to call the good bureaucracy, right? Those processes are in place to help all of us stay protected. >>It's the checklist. Yeah. That you >>Gotta go to. >>That's all it is. It's like fly on a plane. You, >>But that checklist gets smaller. And sometimes it's just one box now with, with Python through snow park, running down on the snowflake platform. And that's, that's the real advantage because we can do things faster. Right? We can do things easier, right? We're doing some mathematical data science right now and we're doing it through SQL, but Python will open that up much easier and allow us to deliver faster and more accurate results and easier not to mention, we're gonna try to bolt on the hybrid tables to that afterwards. >>Oh, we had talk about that. So can you, and I don't, I don't need an exact metric, but when you say faster talking 10% faster, 20% faster, 50% path >>Faster, it really depends on the solution. >>Well, gimme a range of, of the worst case, best case. >>I, I really don't have that. I don't, I wish I did. I wish I had that for you, but I really don't have >>It. I mean, obviously it's meaningful. I mean, if >>It is meaningful, it >>Has a business impact. It'll >>Be FA I think what it will do is it will speed up our work inside of our iterations. So we can then, you know, look at the code sooner. Right. And evaluate it sooner, measure it sooner, measure it faster. >>So is it fair to say that as a result, you can do more. Yeah. That's to, >>We be able do more well, and it will enable more of our people because they're used to working in Python. >>Can you talk a little bit about, from an enablement perspective, let's go up the stack to the folks at Allegis who are on the front lines, helping people get jobs. What are some of the benefits that having snow park for Python under the hood, how does it facilitate them being able to get access to data, to deliver what they need to, to their clients? >>Well, I think what we would use snowflake for a Python for there is when we're building them tools to let them know whether or not a user or a piece of talent is already within our system. Right. Things like that. Right. That's how we would leverage that. But again, it's also new. We're still figuring out what solutions we would move to Python. We are, we have some targeted, like we're, I have developers that are waiting for this and they're, and they're in private preview. Now they're playing around with it. They're ready to start using it. They're ready to start doing some analytical work on it, to get some of our analytical work out of, out of GCP. Right. Because that's where it is right now. Right. But all the data's in snowflake and it just, but we need to move that down now and take the data outta the data wasn't in snowflake before. So there, so the dashboards are up in GCP, but now that we've moved all of that data down in, down in the snowflake, the team that did that, those analytical dashboards, they want to use Python because that's the way it's written right now. So it's an easier transformation, an easier migration off of GCP and get us into snow, doing everything in snowflake, which is what we want. >>So you're saying you're doing the visualization in GCP. Is that righting? >>It's just some dashboarding. That's all, >>Not even visualization. You won't even give for. You won't even give me that. Okay. Okay. But >>Cause it's not visualization. 
It's just some D boardings of numbers and percentages and things like that. It's no graphic >>And it doesn't make sense to run that in snowflake, in GCP, you could just move it into AWS or, or >>No, we, what we'll be able to do now is all that data before was in GCP and all that Python code was running in GCP. We've moved all that data outta GCP, and now it's in snowflake and now we're gonna work on taking those Python scripts that we thought we were gonna have to rewrite differently. Right. Because Python, wasn't available now that Python's available, we have an easier way of getting those dashboards back out to our people. >>Okay. But you're taking it outta GCP, putting it to snowflake where anywhere, >>Well, the, so we'll build the, we'll build those, those, those dashboards. And they'll actually be, they'll be displayed through Tableau, which is our enterprise >>Tool for that. Yeah. Sure. Okay. And then when you operationalize it it'll go. >>But the idea is it's an easier pathway for us to migrate our code, our existing code it's in Python, down into snowflake, have it run against snowflake. Right. And because all the data's there >>Because it's not a, not a going out and coming back in, it's all integrated. >>We want, we, we want our people working on the data in snowflake. We want, that's our data platform. That's where we want our analytics done. Right. We don't want, we don't want, 'em done in other places. We when get all that data down and we've, we've over our data cloud journey, we've worked really hard to move all of that data. We use out of existing systems on prem, and now we're attacking our, the data that's in GCP and making sure it's down. And it's not a lot of data. And we, we fixed it with one data. Pipeline exposes all that data down on, down in snowflake now. And we're just migrating our code down to work against the snowflake platform, which is what we want. >>Why are you excited about hybrid tables? What's what, what, what's the >>Potential hybrid tables I'm excited about? Because we, so some of the data science that we do inside of snowflake produces a set of results and there recommendations, well, we have to get those recommendations back to our people back into our, our talent management system. And there's just some delays. There's about an hour delay of delivering that data back to that team. Well, with hybrid tables, I can just write it to the hybrid table. And that hybrid table can be directly accessed from our talent management system, be for the recruiters and for the hiring managers, to be able to see those recommendations and near real time. And that that's the value. >>Yep. We learned that access to real time. Data it in recent years is no longer a nice to have. It's like a huge competitive differentiator for every industry, including yours guys. Thank you for joining David me on the program, talking about snow park for Python. What that announcement means, how Allegis is leveraging the technology. We look forward to hearing what comes when it's GA >>Yeah. We're looking forward to, to it. Nice >>Guys. Great. All right guys. Thank you for our guests and Dave ante. I'm Lisa Martin. You're watching the cubes coverage of snowflake summit 22 stick around. We'll be right back with our next guest.
Shishir Shrivastava, TEKsystems & Devang Pandya, TEKsystems | Snowflake Summit 2022
>>Welcome back everyone to theCUBE's live coverage of Snowflake Summit 22. We are live in Las Vegas at Caesars Forum. Lisa Martin, Dave Vellante. Dave, this is day one of a lot of wall-to-wall action. >>Yeah. A lot of content on day one. It, it feels like, you know, the, the re:Invent fire hose, yes, of announcements; it feels like a little mini version of that. >>It does. That's a good, that's a good way of putting it. We've been unpacking a lot of the news that's come out; stick around, lots more coming. We've got two guests joining us from TEKsystems Global Services. Please welcome Devang Pandya, managing director, and Shishir Shrivastava, senior manager. Guys, great to have you on theCUBE. >>Thank you so much. Good to see you. And it's great to be in person finally. It's been a long while, so excited to be here. >>Agreed. The keynote this morning was not only standing room only, but there was an overflow area. >>Oh my goodness. We had a hard time getting in, and it is unbelievable, the announcements that we have heard. Looking forward to an exciting time the next two days here. >>Absolutely exciting. The, the cannon shot of announcements this morning was amazing. The innovation that has been happening at Snowflake, and you know this clearly as a partner, it just seems like the innovation flywheel is getting faster and faster and faster. Talk to us a little bit, Devang, about TEKsystems. Give us, the audience, a little bit of an overview of the company, and then talk to us about the partnership with Snowflake. >>Sure. Thank you, Lisa. TEKsystems Global Services is a full-stack global system integrator working with 80% of Fortune 500 customers, helping in accelerating their business as well as technology modernization journey. We have been a Snowflake partner since 2019, and we have one of the highest numbers of accredited sales and technical certifications with Snowflake. And that's what we have earned as an elite partner, or sorry, emerging partner, with Snowflake last year. And we are one of the top elite partners as well. >>Yeah. So since 2019, I mean, in the keynote this morning, Frank showed it, I think Christian showed it as well, in terms of the amount of, of change and innovation that's happened since 2019. And we were talking before we went live about the, the last two years, the acceleration of innovation, cloud adoption, digital transformation. The last two years kind of knock your head back. You need a, yeah, a whiplash collar to deal with that. Talk about what you've seen in the last three years, particularly with the partnership, and how quickly they are moving and listening to their customers. >>Yeah, yeah. I think the last two years really have given pretty much every organization, including us and our customers, a completely different perspective. And that's, that's the exact thing which Christian was talking about, you know, disruption; that's, that has been the core message, which we have seen and we have gotten from the customers. And we have worked on that right from the get-go. We have, you know, all our tools and technology. We are working hand in hand with Snowflake in terms of our offerings. Working with customers, we have tools, we talk about, you know, accelerators, quote unquote, that help our customers, you know, take it from on-prem systems all the way to the Snowflake Data Cloud, and that too in, you know, a fraction of the time. You talk about data, you talk about, you know, code conversion, you talk about data validation. 
So, you know, there are an ample amount of things, you know, in terms of, you know, innovation, all workloads. I've heard, you know, those are the buzzwords today, and it is such an exciting time out here. >>So before the pandemic, you know, digital transformation, it was, it was sort of a thing, but there was, there was also a lot of complacency around it. And then of course, if you weren't in a digital business, you were out of the business, and boom. So you talked, Devang, about the stack. You guys obviously do a lot in cloud migration. What's changed in cloud migration? And how is the stack evolving to accommodate that? >>That's a great question, Dave. The last two years have absolutely been a game changer in terms of the digital transformation. Can we believe that 90% of the world's data that we have produced and captured was produced in the last two years? Isn't that amazing? Right. And what IDC is predicting is that by 2025, 200 zettabytes of data is going to be generated, and most of it is going to be unstructured. And what we are fascinated about is that only 0.5% of unstructured data is currently analyzed by organizations. So look at the immense opportunity in front of us. And with Snowflake's Data Cloud, as well as some of the retail data cloud, finance, and healthcare data clouds launching, it's going to immensely help in processing that unstructured data and really bring life to the data, making organizations market leaders. >>Quick, quick follow-up, if I could: why is, is such a small, why is so much data dark and not accessible to organizations? >>That's a, that's a great question. I think it's a legacy; we have been trained in such a way that data has to be structured, it needs to be modeled. But over the last decade or so we have seen that it isn't required to be that way. And all the social media data being generated, how we communicate in the world, is all unstructured, right? We don't create structured data and put it into a CSV and things like that. It's just natural human behavior. And I think that's where we see a lot of potential in mining that data set and bringing, you know, AI and ML capabilities from descriptive to diagnostic analysis, moving forward with prescriptive and predictive analytics. And that's what we heard from Snowflake in Christian's announcement: hey, machine learning workloads are going to be the key. A lot of investment has happened in the last 10 years; now it's going to, you know, capitalize on that ROI in making quick decisions. >>Shishir, talk to me about those customer conversations. Obviously they have, they've transformed and evolved considerably. Yeah. But for customers that have this tremendous amount of unstructured data, a lot of potential, as you talked about, Devang, there's gotta be, it's gotta be a daunting task. Oh yeah. But these days, every company has to be a data company to be successful, to be competitive, and to deliver the experience that the demanding consumers expect. Yeah. How do you start with customers? Where do they start? What's that conversation like, and how can TEKsystems help them get rid of that kind of daunting iceberg, if you will, and get around >>It. Yeah, yeah, yeah, exactly. And I think you got the right point there. Unstructured data is just the tip of the iceberg we are talking about, and we have just scratched a little of the surface of it, you know. And as was mentioned earlier, gone are those days, you know, where we are talking about, you know, gigabytes of data or, you know, terabytes. 
Now we are talking about petabytes and Zab bytes of data, and there are so many, and that's, that's the data insight we are looking for and what else, you know, what best platform you can get better than, you know, snowflake data cloud. You have everything in there. You talk about programmability today. You know, Christian was talking about snow park, you know, that, that gives you all the cutting edge languages. You talk about Java, you talk about scale, you talk about Python, you know, all those languages. >>I mean, there were days when these languages, you need to bring that data to a separate platform, process it and then connect it. Now it is right there. You can connect it and just process it. So I think that's, that's the beginning. And to start the conversation, we always, you know, go ahead and talk to the customers and, you know, understand their perspective, know where they want to start, you know, what are their pain points and where they, they want to go, you know, what's their end goal, you know, how they want to pro proceed, you know, how they want to mature in terms of, you know, data agility and flexibility and you know, how do they want to offer their customers? So that's, that's the basically, you know, that's our, the path forward and that's how we see it. >>And just, >>Just to add on top of that, Dave, sorry about that. What we have seen with our customers, the legacy mindset of creating the data silos, primarily because it's not that they wanted it that way, but there were limitations in terms of either the infrastructure or the unlimited scalability and flexibility and accent extensibility, right? That's why those kind of, you know, work around has been built. But with snowflake unified data cloud platform, you have everything in unified platform and what we are telling our customers, we need to eliminate the Datalog. Yes, data is a new oil, but we need to make sure that you eliminate the Datalog within the enterprise, as well as outside the enterprise to really combine then and get a, you know, valuable insight to be the market leader. >>You know, when the cube started, it was 2010. And I remember we went to Hadoop world and it was a lot of excitement around big data and yes, and it turned out, it didn't quite live up to the expectations. That's an understatement, but we, we learned a lot and we made some strides and, and now we're sort of entering this, this new era, but you know, the, the, the last era was largely this big batch job right now, today. You're seeing real time, you know, we've, we've projected out real time in, is gonna become more and more of a thing. How do you guys see the, the sort of data patterns changing and again, where do you see snowflake fitting in? >>Yeah. Great question. And they, what I would have to say, just in a one word is removing the complexity and moving towards the simplicity. Why the legacy solutions such as big data didn't really work out well, it had all the capabilities, but it was a complex environment. You need to really be, you know, knowing a lot of technical aspect of it. And your data analyst were struggling with that kind of a tool set. So with snowflake simplicity, you can bring citizen data scientists, you can bring your data scientists, you can bring your data analysts, all of them under one platform, and they can all mine the data because it's all sitting in the one environment, are >>You seeing organizations change the way they architect their data teams? 
And specifically, are you seeing a decentralization of data teams or you see, you mentioned citizen data scientists, are you seeing lines of business take more ownership of the data or is it still cuz again, that big data era created this data science role, the data engineering role, the data pipeline, and it was sort of an extension of the sort of EDW. We had a, a few people, maybe one or two experts who knew how to use the system and you build cubes. And it was sort of a, you know, in order of magnitude more complex than that could maybe do more, but are you seeing it being pushed out to the lines of business? >>That's a great question. And I think what we are seeing in the organization today is this time is absolutely both it and business coming together, hand in hand. It's not that, Hey, it, you do this data pipeline work. And then I will analyze this data. And then we'll, you know, share the dashboards to the CEO. We are seeing more and more cohesiveness within the organization in making a path forward in making the decision intelligence very, very rapid. So I think that's a great change. We don't need to operate in silos. I think it's coming together. And I think it's going to create a win-win combination for our >>Customers. Just to add one more point, what the one has mentioned. I think it's the world of data democratization we are talking about, you know, data is available there, insights. We need to pull it out and you know, just give it to every consumer of the organization and they're ready to consume it. They are, they are hungry. They are ready to take it. You know, that's, that's, that's something, you know, we need to look forward for. >>Well, absolutely look forward to it. And as you talked about, there's so much potential it's we see the tip of the iceberg, right? There's so much underneath that guys. I wish we had more time to continue unpacking this, but thank you so much for joining Dave and me on the program, talking about tech systems and snowflake, what you guys are doing together and what you're enabling those end customers to achieve. We appreciate your insights. >>Yeah. Thank you so much. It's an exciting time for us. And we have been, you know, partnering with snowflake on retail data cloud launch, as well as some upcoming opportunity with manufacturing and also the financial competency that we have earned. So I think it's a great time for us ahead in future. So >>Excellent. Lots to come from Texas systems guys. Thank you. We appreciate your time. Thank you. >>Appreciate it. Thank you. Let it snow. I would say let >>It snow, snow. Let it snow. I like that. You're heard of your life from hot Las Vegas for our guests and Dave ante. I'm Lisa Martin. We are live in Las Vegas. It's not snowing. It's very hot here. We're at the snowflake summit, 22 covering that stick around Dave and I will be joined where next guests in just a moment.
Ahmad Khan, Snowflake & Kurt Muehmel, Dataiku | Snowflake Summit 2022
>>Hey everyone. Welcome back to theCUBE's live coverage of Snowflake Summit 22, live from Las Vegas, Caesars Forum. Lisa Martin here with Dave Vellante. We've got a couple of guests here. We're gonna be talking about everyday AI. You wanna know what that means? You're in the right spot. Kurt Muehmel joins us, the chief customer officer at Dataiku, and Ahmad Khan, the head of AI and ML strategy at Snowflake. Guys, great to have you on the program. >>It's wonderful to be here. Thank you so much. >>So we wanna understand, Kurt, what everyday AI means, but before we do that, for the audience who might not be familiar with Dataiku, give them a little bit of an overview. Talk about what you guys do, your mission, and maybe a little bit about the partnership? >>Yeah, great. Uh, very happy to do so. And thanks so much for this opportunity. Um, well, Dataiku, we are a collaborative platform, uh, for enterprise AI. And what that means is it's software, you know, that sits on top of incredible infrastructure, notably Snowflake, that allows people from different backgrounds of data, analysts, data scientists, data engineers, all to come together, to work together, to build out machine learning models and ultimately the AI that's gonna be the future, uh, of their business. Um, and so we're very excited to, uh, to be here, uh, and, you know, very proud to be a, a, a very close partner of Snowflake. >>So Ahmad, what is Snowflake's AI strategy? Is it to, is it to partner? Where do, where do you pick up? And Frank said today, we, we're not doing it all. Yeah. The ecosystem by design. >>Yeah, absolutely. So we believe in best of breed. Look, um, I think, um, we, we think that we're the best data platform for data science and machine learning, and we want our customers to really use the best tool for their use cases. Right. And, you know, Dataiku is, is our leading partner in that space. And so, you know, when, when you talk about, uh, machine learning and data science, people talk about training a model, but really the difficult part and the challenges are, before you train the model, how do you get access to the right data? And then after you train the model, how do you then run the model? And then how do you manage the model? Uh, that's very, very important. And that's where our partnership with, with Dataiku comes into place. Snowflake provides the platform that can process data at scale for the pre-processing bit, and, and Dataiku comes in and really, uh, simplifies the process for deploying the models and managing the models. >>Got it. Thank
And we can't do that without partners like snowflake and, uh, because they're bringing together all of that data and ensuring that there is the, uh, the computational horsepower behind that to drive that we heard that this morning in some of the keynote talking about that broad democratization and the, um, let's call it the, uh, you know, the pressure that that's going to put on the underlying infrastructure. Um, and so ultimately everyday AI for us is where companies own that AI capability. They're building it themselves very broad, uh, participation in the development of that. And all that work then is being pushed down into best of breed, uh, infrastructure, notably of course, snowflake. Well, >>You said push down, you, you guys, you there's a term in the industry push down optimization. What does that mean? How is it evolving? Why is it so important? >>So Amma, do you want to take a first step at that? >>Yeah, absolutely. So, I mean, when, when you're, you know, processing data, so saying data, um, before you train a, uh, a model, you have to do it at scale, that that, that data is, is coming from all different sources. It's human generated machine generated data, we're talking millions and billions of rows of data. Uh, and you have to make sense of it. You have to transform that data into the right kind of features into the right kind of signals that inform the machine learning model that you're trying to, uh, train. Uh, and so that's where, you know, any kind of large scale data processing is automatically pushed down by data IQ, into snowflakes, scalable infrastructure. Um, so you don't get into like memory issues. You don't get into, um, uh, situations where you're where your pipeline is running overnight, and it doesn't finish in time. Right? And so, uh, you can really take advantage of the scalable nature of cloud computing, uh, using Snowflake's infrastructure. So a lot of that processing is actually getting pushed down from data I could down into the scalable snowflake compute engine. How >>Does this affect the life of a data scientist? You always hear a data scientist spend 80% of the time wrangling data. Uh, I presume there's an infrastructure component around that you trying, we heard this morning, you're making infrastructure, my words, infrastructure, self serve, uh, does this directly address that problem and, and talk about that. And what else are you doing to address that 80% problem? >>It, it certainly does, right? Uh, that's how you solve for, uh, data scientists needing to have on demand access to computing resources, or of course, to the, uh, to the underlying data, um, is by ensuring that that work doesn't have to run on their laptop, doesn't have to run on some, you know, constrained, uh, physical machines, uh, in, in a data center somewhere. Instead it gets pushed down into snowflake and can be executed at scale with incredible parallelization. Now what's really, uh, I important is the ongoing development, uh, between the two products, uh, and within that technology. And so today snowflake, uh, announced the introduction of Python within snow park, um, which is really, really exciting, uh, because that really opens up this capability to a much wider audience. Now DataCo provides that both through a visual interface, um, in historically, uh, since last year through Java UDFs, but that's kind of the, the two extremes, right? You have people who don't code on one side, you know, very no code or a low code, uh, population, and then a very high code population. 
On the other side, this Python, uh, integration really allows us to, to touch really kind the, the fat center of the data science population, who, uh, who, for whom, you know, Python really is the lingua franca that they've been learning for, uh, for decades now. Sure. So >>Talking about the data scientist, I wanna elevate that a little bit because you both are enterprise customers, data ICO, and snowflake Kurt as the chief customer officer, obviously you're with customers all the time. If we look at the macro environment of all the challenges, companies have to be a data company these days, if you're not, you're not gonna be successful. It's how do we do that? Extract insights, value, action, take it. But I'm just curious if your customer conversations are elevating up to the C-suite or, or the board in terms of being able to get democratize access to data, to be competitive, new products, new services, we've seen tremendous momentum, um, on, on the, the part of customer's growth on the snowflake side. But what are you hearing from customers as they're dealing with some of these current macro pains? >>Yeah, no, I, I think it is the conversation today, uh, at that sea level is not only how do we, you know, leverage, uh, new infrastructure, right. You know, they they're, you know, most of them now are starting to have snowflake. I think Frank said, uh, you know, 50% of the, uh, fortune 500, so we can say most, um, have that in place. Um, but now the question is, how do we, how do we ensure that we're getting access to that data, to that, to that computational horsepower, to a broader group of people so that it becomes truly a transformational initiative and not just an it initiative, not just a technology initiative, but really a core business initiative. And that, that really has been a pivot. You know, I've been, you know, with my company now for almost eight years, right. Uh, and we've really seen a change in that discussion going from, you know, much more niche discussions at the team or departmental level now to truly corporate strategic level. How do we build AI into our corporate strategy? How do we really do that in practice? And >>We hear a lot about, Hey, I want to inject data into apps, AI, and machine intelligence into applications. And we've talked about, those are separate stacks. You got the data stack and analytics stack over here. You got the application development, stack the databases off in the corner. And so we see you guys bringing those worlds together. And my question is, what does that stack look like? I took a snapshot. I think it was Frank's presentation today. He had infrastructure at the lowest level live data. So infrastructure's cloud live data. That's multiple data sources coming in workload execution. You made some announcements there. Mm-hmm, <affirmative>, uh, to expend expand that application development. That's the tooling that is needed. Uh, and then marketplace, that's how you bring together this ecosystem. Yes. Monetization is how you turn data into data products and make money. Is that the stack, is that the new stack that's emerging here? Are you guys defining that? >>Absolutely. Absolutely. You talked about like the 80% of the time being spent by data scientists and part of that is actually discovering the right data. Right. Um, being able to give the right access to the right people and being able to go and discover that data. And so you, you, you go from that angle all the way to processing, training a model. 
And then all those predictions that are insights that are coming out of the model are being consumed downstream by data applications. And so the two major announcements I'm super excited about today is, is the ability to run Python, which is snow park, uh, in, in snowflake. Um, that will do, you know, you can now as a Python developer come and bring the processing to where the data lives rather than move the data out to where the processing lives. Right. Um, so both SQL developers, Python developers, fully enabled. Um, and then the predictions that are coming out of models that are being trained by data ICU are then being used downstream by these data applications for most of our customers. And so that's where number, the second announcement with streamlet is super exciting. I can write a complete data application without writing a single line of JavaScript CSS or HTML. I can write it completely in Python. It's it makes me super excited as, as a Python developer, myself >>And you guys have joint customers that are headed in this direction, doing this today. Where, where can you talk about >>That? Yeah, we do. Uh, you know, there's a few that we're very proud of. Um, you know, company, well known companies like, uh, like REI or emeritus. Um, but one that was mentioned today, uh, this morning by Frank again, uh, Novartis, uh, pharmaceutical company, you know, they have been extremely successful, uh, in accelerating their AI and ML development by expanding access to their data. And that's a combination of, uh, both the data ICU, uh, layer, you know, allowing for that work to be developed in that, uh, in that workspace. Um, but of course, without, you know, the, the underlying, uh, uh, platform of snowflake, right, they, they would not have been able to, to have re realized those, uh, those gains. And they were talking about, you know, very, very significant increases in inefficiency everything from data access to the actual model development to the deployment. Um, it's just really, really honestly inspiring to see. >>And it was great to see Novartis mentioned on the main stage, massive time to value there. We've actually got them on the program later this week. So that was great. Another joint customer, you mentioned re I we'll let you go, cuz you're off to do a, a session with re I, is that right? >>Yes, that's exactly right. So, uh, so we're going to be doing a fireside chat, uh, talking about, in fact, you know, much of the same, all of the success that they've had in accelerating their, uh, analytics, workflow development, uh, the actual development of AI capabilities within, uh, of course that, uh, that beloved brand. >>Excellent guys, thank you so much for joining Dave and me talking about everyday AI, what you're doing together, data ICO, and snowflake to empower organizations to actually achieve that and live it. We appreciate your insights. Thank you both. You guys. Thank you for having us for our guests and Dave ante. I'm Lisa Martin. You're watching the Cube's live coverage of snowflake summit 22 from Las Vegas. Stick around our next guest joins us momentarily.