Breaking Analysis: Databricks faces critical strategic decisions…here’s why
>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> Spark became a top level Apache project in 2014, and then shortly thereafter, burst onto the big data scene. Spark, along with the cloud, transformed and in many ways disrupted the big data market. Databricks optimized its tech stack for Spark and took advantage of the cloud to really cleverly deliver a managed service that has become a leading AI and data platform among data scientists and data engineers. However, emerging customer data requirements are shifting in a direction that will cause modern data platform players generally, and Databricks specifically, we think, to make some key directional decisions and perhaps even reinvent themselves. Hello and welcome to this week's Wikibon theCUBE Insights, powered by ETR. In this Breaking Analysis, we're going to do a deep dive into Databricks. We'll explore its current impressive market momentum, we're going to use some ETR survey data to show that, and then we'll lay out how customer data requirements are changing and what the ideal data platform will look like in the midterm future. We'll then evaluate core elements of the Databricks portfolio against that vision, and then we'll close with some strategic decisions that we think the company faces. And to do so, we welcome in our good friend, George Gilbert, former equities analyst, market analyst, and current Principal at TechAlpha Partners. George, good to see you. Thanks for coming on. >> Good to see you, Dave. >> All right, let me set this up. We're going to start by taking a look at where Databricks sits in the market in terms of how customers perceive the company and what its momentum looks like. And this chart that we're showing here is data from ETS, the Emerging Technology Survey of private companies. The N is 1,421.
What we did is we cut the data on three sectors: analytics, database-data warehouse, and AI/ML. The vertical axis is a measure of customer sentiment, which evaluates an IT decision maker's awareness of the firm and the likelihood of engaging and/or purchase intent. The horizontal axis shows mindshare in the dataset, and we've highlighted Databricks, which has been a consistent high performer in this survey over the last several quarters. And by the way, just as an aside, as we previously reported, OpenAI, which burst onto the scene this past quarter, leads all names, but Databricks is still prominent. You can see that ETR shows some open source tools for reference, but as far as firms go, Databricks is very impressively positioned. Now, let's see how they stack up to some mainstream cohorts in the data space, against some bigger companies and sometimes public companies. This chart shows net score on the vertical axis, which is a measure of spending momentum, and pervasiveness in the data set on the horizontal axis. You can see that chart insert in the upper right, that informs how the dots are plotted: net score against shared N. And that red dotted line at 40% indicates a highly elevated net score; anything above that we think is really, really impressive. And here we're just comparing Databricks with Snowflake, Cloudera, and Oracle. And that squiggly line leading to Databricks shows their path since 2021 by quarter, and you can see it's performing extremely well, maintaining a consistently elevated net score. Now it's comparable on the vertical axis to Snowflake, and it consistently is moving to the right and gaining share. Now, why did we choose to show Cloudera and Oracle? The reason is that Cloudera got the whole big data era started and was then disrupted by Spark, the cloud, and Databricks. And Oracle, in many ways, was the target of early big data players like Cloudera. Take a listen to Cloudera's CEO at the time, Mike Olson.
This is back in 2010, the first year of theCUBE, play the clip. >> Look, back in the day, if you had a data problem, if you needed to run business analytics, you wrote the biggest check you could to Sun Microsystems, and you bought a great big, single box, central server, and any money that was left over, you handed to Oracle for database licenses and you installed that database on that box, and that was where you went for data. That was your temple of information. >> Okay? So Mike Olson implied that monolithic model was too expensive and inflexible, and Cloudera set out to fix that. But the best laid plans, as they say. George, what do you make of the data that we just shared? >> So where Databricks really came up out of, sort of, Cloudera's tailpipe was that they took big data processing, made it coherent, and made it a managed service so it could run in the cloud. So it relieved customers of the operational burden. Where they're really strong, where their traditional meat and potatoes or bread and butter is, is predictive and prescriptive analytics, that is, building and training and serving machine learning models. They've tried to move into traditional business intelligence, the more traditional descriptive and diagnostic analytics, but they're less mature there. So what that means is, the reason you see Databricks and Snowflake kind of side by side is there are many, many accounts that have both: Snowflake for business intelligence, Databricks for AI machine learning. Where Databricks also did really well was in core data engineering, refining the data, the old ETL process, which kind of turned into ELT, where you load data into the analytic repository in raw form and refine it there. And so people have really used both, and each is trying to get into the other. >> Yeah, absolutely. We've reported on this quite a bit. Snowflake, kind of moving into the domain of Databricks and vice versa.
And the last bit of ETR evidence that we want to share in terms of the company's momentum comes from ETR's Round Tables. They're run by Erik Bradley and Daren Brabham, a former Gartner analyst and, George, your colleague back at Gartner. And what we're going to show here are some direct quotes from IT pros in those Round Tables. There's a data science head and a CIO as well. I'll just make a few call outs here, we won't spend too much time on it, but starting at the top: like all of us, we can't talk about Databricks without mentioning Snowflake. Those two get us excited. The second comment zeroes in on the flexibility and the robustness of Databricks from a data warehouse perspective. And then the last point is, despite competition from cloud players, Databricks has reinvented itself a couple of times over the years. And George, we're going to lay out today a scenario that perhaps calls for Databricks to do that once again. >> Their big opportunity, and the big challenge for every tech company, is managing a technology transition. The transition that we're talking about is something that's been bubbling up, but it's really epochal. For the first time in 60 years, we're moving from an application-centric view of the world to a data-centric view, because decisions are becoming more important than automating processes. So let me let you sort of develop that. >> Yeah, so let's talk about that here. We're going to put up some bullets on precisely that point and the changing customer environment. So you've got IT stacks shifting, as George just said, from application-centric silos to data-centric stacks, where the priority is shifting from automating processes to automating decisions. You know, look at RPA, there's still a lot of automation going on, but that focus on application centricity, with the data locked into those apps, that's changing.
Data has historically been on the outskirts in silos, but organizations, think of Amazon, think Uber, Airbnb, they're putting data at the core, and logic is increasingly being embedded in the data instead of the reverse. In other words, today the data's locked inside the app, which is why you need to extract that data and stick it in a data warehouse. The point, George, is we're putting forth this new vision for how data is going to be used. And you've used this Uber example to underscore the future state. Please explain. >> Okay, so this is hopefully an example everyone can relate to. The idea is, first, you're automating things that are happening in the real world, and decisions that make those things happen autonomously, without humans in the loop all the time. So to use the Uber example: on your phone, you call a car, you call a driver. Automatically, the Uber app then looks at what drivers are in the vicinity, which drivers are free, matches one, calculates an ETA to you, calculates a price, calculates an ETA to your destination, and then directs the driver once they're there. The point of this is that that cannot happen in an application-centric world very easily, because all these little apps, the drivers, the riders, the routes, the fares, those call on data locked up in many different apps, but they have to sit on a layer that makes it all coherent. >> But George, if Uber's doing this, doesn't this tech already exist? Isn't there a tech platform that does this already? >> Yes, and the mission of the entire tech industry is to build services that make it possible to compose and operate similar platforms and tools, but with the skills of mainstream developers in mainstream corporations, not the rocket scientists at Uber and Amazon. >> Okay, so we're talking about horizontally scaling this across the industry, and actually giving a lot more organizations access to this technology.
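The dispatch flow George walks through, match a rider to a free driver, compute an ETA and a price, can be sketched in a few lines. This is a toy illustration only: the straight-line distances, fixed average speed, and fare constants are assumptions for the example, not Uber's actual matching or pricing logic.

```python
import math

# Toy dispatch: match a rider to the nearest free driver, estimate ETA and fare.
# Speed and pricing constants are illustrative assumptions.
AVG_SPEED_KMH = 30.0             # assumed average city speed
BASE_FARE, PER_KM = 2.50, 1.20   # assumed pricing

def dist_km(a, b):
    # Straight-line distance; a real system would route over a road network.
    return math.dist(a, b)

def dispatch(rider_loc, dest, drivers):
    free = [d for d in drivers if d["free"]]
    if not free:
        return None
    driver = min(free, key=lambda d: dist_km(d["loc"], rider_loc))
    trip_km = dist_km(rider_loc, dest)
    return {
        "driver": driver["id"],
        "pickup_eta_min": round(dist_km(driver["loc"], rider_loc) / AVG_SPEED_KMH * 60, 1),
        "trip_eta_min": round(trip_km / AVG_SPEED_KMH * 60, 1),
        "fare": round(BASE_FARE + PER_KM * trip_km, 2),
    }

drivers = [
    {"id": "d1", "loc": (0.0, 0.0), "free": True},
    {"id": "d2", "loc": (5.0, 5.0), "free": True},
    {"id": "d3", "loc": (0.5, 0.5), "free": False},  # busy, skipped
]
print(dispatch(rider_loc=(1.0, 1.0), dest=(4.0, 5.0), drivers=drivers))
```

The point of the sketch is George's point: drivers, riders, routes, and fares all have to live on one coherent layer for this single function to be writable at all.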
So by way of review, let's summarize the trend that's going on today in terms of the modern data stack that is propelling the likes of Databricks and Snowflake, which we just showed you in the ETR data, and is really a tailwind for them. So the trend is toward this common repository for analytic data. That could be multiple virtual data warehouses inside of Snowflake, but you're in that Snowflake environment, or Lakehouses from Databricks, or multiple data lakes. And we've talked about what JPMorgan Chase is doing with the data mesh and gluing data lakes together. You've got various public clouds playing in this game, and then the data is annotated to have a common meaning. In other words, there's a semantic layer that enables applications to talk to the data elements and know that they have common and coherent meaning. So George, the good news is this approach is more effective than the legacy monolithic models that Mike Olson was talking about, so what's the problem with this in your view? >> So today's data platforms added immense value 'cause they connected the data that was previously locked up in these monolithic apps or on all these different microservices, and that supported traditional BI and AI/ML use cases. But now we want to build apps like Uber or Amazon.com, where they've got essentially an autonomously running supply chain and e-commerce app that humans only care for and feed, but the thing itself figures out what to buy, when to buy, where to deploy it, when to ship it. We need a semantic layer on top of the data so that, as you were saying, the data coming from all those different apps is integrated, not just connected, it means the same thing. And the issue is, whenever you add a new layer to a stack to support new applications, there are implications for the already existing layers, like can they support the new layer and its use cases?
So for instance, if you add a semantic layer that embeds app logic with the data, rather than vice versa, which is what we've been talking about and what's been the case for 60 years, then the new data layer faces challenges: the way you manage that data, the way you analyze that data, is not supported by today's tools. >> Okay, so actually, Alex, bring me up that last slide if you would. I mean, you're basically saying at the bottom here, today's repositories don't really do joins at scale. The future you're talking about is hundreds or thousands or millions of data connections, and today's systems, we're talking about, I don't know, 6, 8, 10 joins. That is the fundamental problem, you're saying: a new data era is coming and existing systems won't be able to handle it? >> Yeah, one way of thinking about it is that even though we call them relational databases, when we actually want to do lots of joins, or when we want to analyze data from lots of different tables, we created a whole new industry of analytic databases where you sort of munge the data together into fewer tables. So you didn't have to do as many joins, because the joins are difficult and slow. And when you're going to arbitrarily join thousands, hundreds of thousands, or even millions of elements, you need a new type of database. We have them, they're called graph databases, but to query them, you go back to the prerelational era in terms of their usability. >> Okay, so we're going to come back to that and talk about how you get around that problem. But let's first lay out what we think the ideal data platform of the future looks like. And again, we're going to come back to this Uber example. This graphic that George put together is awesome. We've got three layers. The application layer is where the data products reside. The example here is drivers, rides, maps, routes, ETA, et cetera. The digital version of what we were talking about in the previous slide: people, places and things.
The next layer is the data layer that breaks down the silos and connects the data elements through semantics, and everything is coherent. And then at the bottom layer, the legacy operational systems feed that data layer. George, explain what's different here, the graph database element, what you mean about relational query capabilities, and why can't I just throw memory at solving this problem? >> Some of the graph databases do throw memory at the problem, and maybe without naming names, some of them live entirely in memory. And what you're dealing with is a prerelational, in-memory database system where you navigate between elements. And the issue with that is we've had SQL for 50 years, so we don't have to navigate, we can say what we want without specifying how to get it. That's the core of the problem. >> Okay. So if I may, I just want to drill into this a little bit. So you're talking about the expressiveness of a graph. Alex, if you'd bring that back out, the fourth bullet: expressiveness of a graph database with the relational ease of query. Can you explain what you mean by that? >> Yeah, so graphs are great because you can describe anything with a graph, that's why they're becoming so popular. Expressive means you can represent anything easily. They're conducive to a world where we now want something like the metaverse, and I don't mean the Facebook metaverse, I mean the business metaverse, where we want to capture data about everything, but we want it in context, we want to build a set of digital twins that represent everything going on in the world. And Uber is a tiny example of that. Uber built a graph to represent all the drivers and riders and maps and routes. But what you need out of a database isn't just a way to store stuff and update stuff. You need to be able to ask questions of it, you need to be able to query it. And if you go back to prerelational days, you had to know how to find your way to the data.
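The navigate-versus-declare distinction George is drawing can be shown concretely: the same two-hop question answered once with hand-written pointer chasing (prerelational in spirit) and once with a single SQL statement where the engine picks the access path. The tiny "knows" graph here is made up for illustration.

```python
import sqlite3

# A made-up "knows" graph: who is two hops away from person 1?
people = {1: "Ann", 2: "Bob", 3: "Cal", 4: "Dee"}
knows = [(1, 2), (2, 3), (3, 4)]  # (person, person they know)

# Navigational style: you spell out HOW to walk the records.
def friends_of_friends(pid):
    out = set()
    for a, b in knows:
        if a == pid:
            for c, d in knows:
                if c == b:
                    out.add(people[d])
    return sorted(out)

# Declarative style: you state WHAT you want; the optimizer plans the walk.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE knows (a INT, b INT)")
db.executemany("INSERT INTO knows VALUES (?, ?)", knows)
rows = db.execute(
    "SELECT k2.b FROM knows k1 JOIN knows k2 ON k1.b = k2.a WHERE k1.a = ?", (1,)
).fetchall()

print(friends_of_friends(1))               # ['Cal']
print(sorted(people[r[0]] for r in rows))  # ['Cal']
```

At two hops the navigational version is merely tedious; at thousands of arbitrary hops, which is the semantic-layer workload George describes, hand-written navigation stops being writable at all, which is the usability gap he attributes to today's graph databases.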
It's sort of like when you give directions to someone who doesn't have a GPS and a mapping system: you have to give them turn-by-turn directions. Whereas when you have a GPS and a mapping system, which is like the relational thing, you just say where you want to go, and it spits out the turn-by-turn directions, which, let's say, the car might follow, or whoever you're directing would follow. But the point is, it's much easier in a relational database to say, "I just want to get these results. You figure out how to get it." Graph databases have not taken over the world because, in some ways, querying them is taking a 50-year leap backwards. >> All right, got it. Okay. Let's take a look at how the current Databricks offerings map to that ideal state that we just laid out. So to do that, we put together this chart that looks at the key elements of the Databricks portfolio: the core capability, the weakness, and the threat that may loom. Start with Delta Lake, that's the storage layer, which is great for files and tables. It's got true separation of compute and storage, I want you to double-click on that, George, as independent elements, but it's weaker for the type of low-latency ingest that we see coming in the future. And some of the threats are highlighted here: AWS could add transactional tables to S3, Iceberg adoption is picking up and could accelerate, and that could disrupt Databricks. George, add some color here, please. >> Okay, so this is the sort of classic competitive forces lens where you want to look at, what are customers demanding? What's competitive pressure? What are substitutes? Even what your suppliers might be pushing. Here, Delta Lake is, at its core, a set of transactional tables that sit on an object store. So think of it as, in a database system, this is the storage engine. And since S3 has been getting stronger for 15 years, you could see a scenario where they add transactional tables.
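The "transactional tables on an object store" idea George describes can be modeled as a toy: the table is just data files plus an ordered commit log, and readers rebuild a consistent snapshot by replaying that log. This mimics the general shape of a Delta-style transaction log (Delta Lake's `_delta_log`), not its actual format or protocol; the in-memory dict stands in for the object store.

```python
import json

store = {}  # stand-in for an object store: key -> bytes/str

def commit(version, actions):
    # One commit = one immutable, sequentially numbered log object.
    key = f"_log/{version:020d}.json"
    if key in store:  # models put-if-absent: a concurrent writer lost the race
        raise RuntimeError("version already committed; reread log and retry")
    store[key] = json.dumps(actions)

def snapshot():
    # Readers replay the log in order to learn which data files are live.
    live = set()
    for key in sorted(k for k in store if k.startswith("_log/")):
        for action in json.loads(store[key]):
            if action["op"] == "add":
                live.add(action["file"])
            elif action["op"] == "remove":
                live.discard(action["file"])
    return sorted(live)

store["part-0.parquet"] = "..."
store["part-1.parquet"] = "..."
commit(0, [{"op": "add", "file": "part-0.parquet"}])
commit(1, [{"op": "add", "file": "part-1.parquet"},
           {"op": "remove", "file": "part-0.parquet"}])  # update/compaction
print(snapshot())  # ['part-1.parquet']
```

The sketch shows why this layer is credible on S3-class storage, the only primitives needed are immutable objects and an atomic "create if absent", and therefore why a cloud provider adding such a log natively is the substitution threat under discussion.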
We have an open source alternative in Iceberg, which Snowflake and others support. But at the same time, Databricks has built an ecosystem of tools, their own and others', that read and write to Delta tables, and that's what makes the Delta Lake an ecosystem. So they have a catalog, and the whole machine learning tool chain talks directly to the data here. That was their great advantage, because in the past with Snowflake, you had to pull all the data out of the database before the machine learning tools could work with it, and that was a major shortcoming. They fixed that. But the point here is that even before we get to the semantic layer, the core foundation is under threat. >> Yep. Got it. Okay. We've got a lot of ground to cover, so we're going to take a look at the Spark Execution Engine next. Think of that as the refinery that runs really efficient batch processing. That's kind of what disrupted Hadoop in a large way, but it's not Python friendly, and that's an issue because the data science and the data engineering crowd are moving in that direction, and/or they're using dbt. George, we had Tristan Handy on at Supercloud, a really interesting discussion that you and I did. Explain why this is an issue for Databricks. >> So once the data lake was in place, what people did was refine their data in batch, and Spark has always had streaming support, and it's gotten better. The underlying storage, as we've talked about, is an issue. But basically, they took raw data, then they refined it into tables that were like customers and products and partners. And then they refined that again into what were like gold artifacts, which might be business intelligence metrics or dashboards, which were collections of metrics. But they were running it on the Spark Execution Engine, which is a Java-based engine, or it's running on a Java-based virtual machine, which means for all the data scientists and data engineers who want to work with Python, it's sort of oil and water.
If you get an error in Python, you can't tell whether the problem's in Python or in Spark. There's just an impedance mismatch between the two. And then at the same time, the whole world is now gravitating towards dbt, because it's a very nice and simple way to compose these data processing pipelines, and people are using either SQL in dbt or Python in dbt, and that is kind of a substitute for doing it all in Spark. So it's under threat even before we get to that semantic layer. It so happens that dbt itself is becoming the authoring environment for the semantic layer with business intelligence metrics. But again, this is the second element that's under direct substitution and competitive threat. >> Okay, let's now move down to the third element, which is Photon. Photon is Databricks' BI Lakehouse engine, which has integration with the Databricks tooling, which is very rich, but it's newer. And it's also not well suited for the high-concurrency and low-latency use cases, which we think are going to increasingly become the norm over time. George, the call-out threat here is customers want to connect everything to a semantic layer. Explain your thinking here and why this is a potential threat to Databricks. >> Okay, so two issues here. What you were touching on is the high-concurrency, low-latency case, when people are running thousands of dashboards and data is streaming in. That's a problem, because a SQL data warehouse query engine is something that matures over five to 10 years. It's one of these things, like the joke that Andy Jassy makes, he's really talking about Azure, but there's no compression algorithm for experience. The Snowflake guys started more than five years earlier, and for a bunch of reasons, that lead is not something that Databricks can shrink. They'll always be behind. So that's why Snowflake has transactional tables now, and we can get into that in another show.
But the key point is, near term, it's struggling to keep up with the use cases that are core to business intelligence, which is highly concurrent, lots of users doing interactive query. But then when you get to a semantic layer, that's when you need to be able to query data that might have thousands or tens of thousands or hundreds of thousands of joins, and a traditional SQL query engine is just not built for that. That's the core problem of traditional relational databases. >> Now, a quick aside. We always talk about Snowflake and Databricks in sort of the same context. We're not necessarily saying that Snowflake is in a position to tackle all these problems, we'll deal with that separately. So we don't mean to imply that, but we're just sort of laying out some of the things that Databricks customers, we think, need to be thinking about and having conversations with Databricks about, and we hope to have them as well. We'll come back to that in terms of strategic options. But finally, coming back to the table, we have Databricks' AI/ML tool chain, which has been an awesome capability for the data science crowd. It's comprehensive, it's a one-stop-shop solution, but the kicker here is that it's optimized for supervised model building. And the concern is that foundation models like GPT could cannibalize the current Databricks tooling. But George, can't Databricks, like other software companies, integrate foundation model capabilities into its platform? >> Okay, so the sound bite answer to that is, sure, IBM 3270 terminals could call out to a graphical user interface when they were running on an XT terminal, but they're not exactly good citizens in that world. The core issue is Databricks has this wonderful end-to-end tool chain for training, deploying, monitoring, and running inference on supervised models.
But the paradigm there is that the customer builds and trains and deploys each model for each feature or application. In a world of foundation models, which are pre-trained and unsupervised, the entire tool chain is different. So it's not like Databricks can junk everything they've done and start over with all their engineers. They have to keep maintaining what they've done in the old world, but they have to build something new that's optimized for the new world. It's a classic technology transition, and their mentality appears to be, "Oh, we'll support the new stuff from our old stuff," which is suboptimal. And as we'll talk about, their biggest patron and the company that put them on the map, Microsoft, really stopped working on its old stuff three years ago so that it could build a new tool chain optimized for this new world. >> Yeah, and so let's sort of close with what we think the options are and the decisions that Databricks faces for its future architecture. They're smart people. I mean, we've had Ali Ghodsi on many times, super impressive. I think they've got to be keenly aware of the limitations and what's going on with foundation models. But at any rate, here in this chart, we lay out three scenarios. One is to re-architect the platform by incrementally adopting new technologies; an example might be to layer a graph query engine on top of its stack. They could license key technologies like a graph database. They could get aggressive on M&A and buy in relational knowledge graph, semantic, and vector database technologies. George, as David Floyer always says, "A lot of ways to skin a cat." We've seen companies, even think about EMC, maintain their relevance through M&A for many, many years. George, give us your thoughts on each of these strategic options. >> Okay, I find this question the most challenging, 'cause remember, I used to be an equity research analyst.
I worked for Frank Quattrone, and we were one of the top tech shops in the banking industry, although this was 20 years ago. But the M&A team was the top team in the industry, and everyone wanted them on their side. And I remember going to meetings with these CEOs, where Frank and the bankers would say, "You want us for your M&A work because we can do better." And they really could do better. But in software, it's not like with EMC in hardware, because with hardware it's easier to connect different boxes. With software, the whole point of a software company is to integrate and architect the components so they fit together and reinforce each other, and that makes M&A harder. You can do it, but it takes a long time to fit the pieces together. Let me give you examples. If they put a graph query engine, let's say something like TinkerPop, on top of, I don't even know if it's possible, but let's say they put it on top of Delta Lake, then you have this graph query engine talking to their storage layer, Delta Lake. But if you want to do analysis, you've got to put the data in Photon, which is not really ideal for highly connected data. If you license a graph database, then most of your data is in the Delta Lake, and how do you sync it with the graph database? If you do sync it, you've got data in two places, which kind of defeats the purpose of having a unified repository. I find the semantic layer option, number three, actually more promising, because that's something you can layer on top of the storage layer you already have. You just have to figure out then how to have your query engines talk to it. What I'm trying to highlight is, it's easy as an analyst to say, "You can buy this company or license that technology," but the really hard work is making it all work together, and that is where the challenge is. >> Yeah, well look, I thank you for laying that out. We've seen it, certainly with Microsoft and Oracle.
I guess you might argue that, well, Microsoft had a monopoly in its desktop software and was able to throw off cash for a decade-plus while its stock was going sideways, and Oracle had won the database wars and had amazing margins and cash flow to be able to do that. Databricks hasn't even gone public yet. But I want to close with some of the players to watch. Alex, if you'd bring that back up, number four here. AWS, we talked about some of their options with S3, and it's not just AWS, it's Blob Storage, object storage generally. Microsoft, as you sort of alluded to, was an early go-to-market channel for Databricks. We didn't address that really, so maybe in the closing comments we can. Google obviously, and Snowflake of course, we're going to dissect their options in a future Breaking Analysis. dbt Labs, where do they fit? Bob Muglia's company, Relational.ai. Why are these players to watch, George, in your opinion? >> So everyone is trying to assemble and integrate the pieces that would make building data applications, data products, easy. And the critical part isn't just assembling a bunch of pieces, which is traditionally what AWS did. That's a Unix ethos, which is, we give you the tools, you put 'em together, 'cause then you have the maximum choice and maximum power. So what the hyperscalers are doing is taking their key-value stores, in the case of AWS it's DynamoDB, in the case of Azure it's Cosmos DB, and each is putting a graph query engine on top of those. So they have unified storage and a graph database engine: all the data would be collected in the key-value store, then you have a graph database on top. That's how they're going to present a foundation for building these data apps. dbt Labs is putting a semantic layer on top of data lakes and data warehouses, and as we'll talk about, I'm sure, in the future, that makes it easier to swap out the underlying data platform or swap in new ones for specialized use cases.
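The semantic-layer idea George credits to dbt Labs, define a metric once as data, then compile and execute it on whatever engine sits underneath, can be sketched as a toy. The schema and compiler here are hypothetical simplifications for illustration; dbt's actual metric specification and SQL generation differ, and sqlite stands in for the warehouse.

```python
import sqlite3

# Hypothetical metric definitions: declared once, engine-agnostic.
METRICS = {
    "revenue": {"table": "orders", "expr": "SUM(amount)"},
    "order_count": {"table": "orders", "expr": "COUNT(*)"},
}

def compile_metric(name, group_by=None):
    # Compile a declared metric into plain SQL for the underlying engine.
    m = METRICS[name]
    dims = f"{group_by}, " if group_by else ""
    grp = f" GROUP BY {group_by}" if group_by else ""
    return f"SELECT {dims}{m['expr']} AS {name} FROM {m['table']}{grp}"

db = sqlite3.connect(":memory:")  # stand-in for any warehouse or lakehouse
db.execute("CREATE TABLE orders (region TEXT, amount REAL)")
db.executemany("INSERT INTO orders VALUES (?, ?)",
               [("east", 100.0), ("east", 50.0), ("west", 75.0)])

print(compile_metric("revenue", group_by="region"))
print(sorted(db.execute(compile_metric("revenue", group_by="region")).fetchall()))
# [('east', 150.0), ('west', 75.0)]
```

Because every BI tool asks for "revenue by region" through the same definition, the definition, not each dashboard, owns the semantics, and the engine underneath becomes swappable, which is exactly the leverage being described.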
Snowflake, what they're doing, they're so strong in data management, and with their transactional tables, what they're trying to do is take in the operational data that used to be the province of state stores like MongoDB and say, "If you manage that data with us, it'll be connected to your analytic data without having to send it through a pipeline." And that's hugely valuable. Relational.ai is the wild card, 'cause what they're trying to do is almost like a holy grail, where you're trying to take the expressiveness of connecting all your data in a graph but make it as easy to query as you've always had it in a SQL database, or I should say, in a relational database. And if they do that, it'll be as easy to program these data apps as a spreadsheet was compared to procedural languages like BASIC or Pascal. Those are the implications of Relational.ai. >> Yeah, and again, we talked before about why you can't just throw this all in memory. In that example, we're really getting down to differences in how you lay the data out on disk, a really new database architecture, correct? >> Yes. And that's why it's not clear that you could take a data lake, or even Snowflake, and put a relational knowledge graph on those. You could potentially put a graph database on them, but it'll be compromised, because to really do what Relational.ai has done, which is the ease of relational on top of the power of graph, you actually need to change how you're storing your data on disk, or even in memory. In other words, it's not like, "Oh, we can add graph support to Snowflake," because if you did that in Snowflake, or in your data lake, you'd have to change how the data is physically laid out, and then that would break all the tools that currently talk to it. >> What, in your estimation, is the timeframe where this becomes critical for Databricks, and potentially Snowflake and others?
I mentioned earlier midterm. Are we talking three to five years here? Are we talking end of decade? What does your radar say? >> I think something surprising is going on that's going to sort of come up the tailpipe and take everyone by storm: all the hype around business intelligence metrics, which is what we used to put in our dashboards, bookings, billings, revenue, customers, those things. Those were the key artifacts that used to live as definitions in your BI tools, and dbt has basically created a standard for defining those so they live in your data pipeline, they're defined in the pipeline and executed in the data warehouse or data lake in a shared way, so that all tools can use them. This sounds like a digression, but it's not. All this stuff about data mesh, data fabric, what's going on is we need a semantic layer, and the business intelligence metrics are defining common semantics for your data. And I think we're going to find, by the end of this year, that metrics are how we annotate all our analytic data to start adding common semantics to it. And we're going to find this semantic layer is not three to five years off, it's going to be staring us in the face by the end of this year. >> Interesting. And of course SVB today was shut down. We're seeing serious tech headwinds, and oftentimes in these sorts of downturns or flat stretches, which it feels like this could go on for a while, we emerge with a lot of new players and a lot of new technology. George, we've got to leave it there. Thank you to George Gilbert for excellent insights and input for today's episode. I want to thank Alex Myerson, who's on production and manages the podcast, and of course Ken Schiffman as well. Kristin Martin and Cheryl Knight help get the word out on social media and in our newsletters. And Rob Hof is our EIC over at SiliconANGLE.com, he does some great editing. Remember, all these episodes are available as podcasts.
Wherever you listen, all you got to do is search Breaking Analysis Podcast, we publish each week on wikibon.com and siliconangle.com, or you can email me at David.Vellante@siliconangle.com, or DM me @DVellante. Comment on our LinkedIn post, and please do check out ETR.ai, great survey data, enterprise tech focus, phenomenal. This is Dave Vellante for theCUBE Insights powered by ETR. Thanks for watching, and we'll see you next time on Breaking Analysis.
Dominique Bastos, Persistent Systems | International Women's Day 2023
(gentle upbeat music) >> Hello, everyone, welcome to theCUBE's coverage of International Women's Day. I'm John Furrier host here in Palo Alto, California. theCUBE's second year covering International Women's Day. It's been a great celebration of all the smart leaders in the world who are making a difference from all kinds of backgrounds, from technology to business and everything in between. Today we've got a great guest, Dominique Bastos, who's the senior Vice President of Cloud at Persistent Systems, formerly with AWS. That's where we first met at re:Invent. Dominique, great to have you on the program here for International Women's Day. Thanks for coming on. >> Thank you John, for having me back on theCUBE. This is an honor, especially given the theme. >> Well, I'm excited to have you on, I consider you one of those typecast personas where you've kind of done a lot of things. You're powerful, you've got great business acumen you're technical, and we're in a world where, you know the world's coming completely digital and 50% of the world is women, 51%, some say. So you got mostly male dominated industry and you have a dual engineering background and that's super impressive as well. Again, technical world, male dominated you're in there in the mix. What inspires you to get these engineering degrees? >> I think even it was more so shifted towards males. When I had the inspiration to go to engineering school I was accused as a young girl of being a tomboy and fiddling around with all my brother's toys versus focusing on my dolls and other kind of stereotypical toys that you would give a girl. I really had a curiosity for building, a curiosity for just breaking things apart and putting them back together. I was very lucky in that my I guess you call it primary school, maybe middle school, had a program for, it was like electronics, that was the class electronics. So building circuit boards and things like that. And I really enjoyed that aspect of building. 
I think it was more actually going into engineering school. Picking that as a discipline was a little bit, my mom's reaction to when I announced that I wanted to do engineering which was, "No, that's for boys." >> Really. >> And that really, you know, I think she, it came from a good place in trying to protect me from what she has experienced herself in terms of how women are received in those spaces. So I kind of shrugged it off and thought "Okay, well I'm definitely now going to do this." >> (laughs) If I was told not to, you're going to do it. >> I was told not to, that's all I needed to hear. And also, I think my passion was to design cars and I figured if I enroll in an industrial engineering program I could focus on ergonomic design and ultimately, you know have a career doing something that I'm passionate about. So yeah, so my inspiration was kind of a little bit of don't do this, a lot of curiosity. I'm also a very analytical person. I've been, and I don't know what the science is around left right brain to be honest, but been told that I'm a very much a logical person versus a feeler. So I don't know if that's good or bad. >> Straight shooter. What were your engineering degrees if you don't mind sharing? >> So I did industrial engineering and so I did a dual degree, industrial engineering and robotics. At the time it was like a manufacturing robotics program. It was very, very cool because we got to, I mean now looking back, the evolution of robotics is just insane. But you, you know, programmed a robotic arm to pick things up. I actually crashed the Civil Engineering School's Concrete Canoe Building Competition where you literally have to design a concrete canoe and do all the load testing and the strength testing of the materials and basically then, you know you go against other universities to race the canoe in a body of water. We did that at, in Alabama and in Georgia. So I was lucky to experience that two times. It was a lot of fun. 
>> But you knew, so you knew, deep down, you were technical, you had a nerd vibe, you were geeking out on math, tech, robotics. What happened next? I mean, what were some of the challenges you faced? How did you progress forward? Did you have any blockers and roadblocks in front of you, and how did you handle those? >> Yeah, I mean I had, I had a very eye-opening experience in my freshman year of engineering school. I kind of went in gung-ho with zero hesitation, all the confidence in the world, 'cause I was always a very big nerd academically. I hate admitting this, but myself and somebody else got voted most intellectual by the students in high school. It's like, you don't want to be voted most intellectual when you're in high school. >> Now it's a big deal. (laughs) >> Yeah, you want to be voted like popular or anything like that? No, I was a nerd, but in engineering school, it's a, it was very humbling. That whole confidence that I had. I experienced prof, ooh, I don't want to name the school. Everybody can google it though, but, so anyway, I had experience with some professors that actually looked at me and said, "You're in the wrong program. This is difficult." And I think I've shared this before in other forums where, you know, my thermodynamics teacher basically told me "Cheerleading's down the hall," and it was a very shocking thing to hear because it really made me wonder, like, what am I up against here? Is this what it's going to be like going forward? And I decided not to pay attention to that. I think at the moment when you hear something like that, you just, you absorb it and you also don't know how to react. And I decided immediately to just walk right past him and sit down front and center in the class. In my head I was cursing him, of course, 'cause I mean, let's be real. And I was like, I'm going to show this bleep bleep. And proceeded to basically set the curve, crushed the class, and went back to be the teacher's assistant.
So I think that was one. >> But you became his teacher's assistant after, or another one? >> Yeah, I gave him a mini speech. I said, do not do this. You, you could, you could have broken me, and if you would've done this to somebody who wasn't as steadfast in her goals or whatever, I was really focused, like, I'm doing this, I would've backed out potentially and said, you know, this isn't something I want to experience on the daily. So I think that was actually a good experience because it gave me an opportunity to understand what I was up against but also double down on how I was going to deal with it. >> Nice to slay the misogynistic teachers who typecast people. Now you had a very technical career, but also you had a great career at AWS on the business side. You've handled all of the big accounts, I won't say the names, but like we're talking about monster accounts, sales, and now basically it's not really selling, you're managing a big account, it's like a big business. It's a business development thing. Technical to business transition, how do you handle that? Was that something you were natural for? Obviously you, you stared down the naysayers out of the gate in college and then in business, did that continue and how did you drive through that?
So I think that helped me when I transitioned, well when I applied, I was asked to come apply at AWS and I kind of went, no I'm going to, the DNA is going to be rejected. >> You thought, you thought you'd be rejected from AWS. >> I thought I'd be, yeah, because I have very much a startup founder kind of disruptive personality. And to me, when I first saw AWS at the stage early 2016 I saw it as a corporation. Even though from a techie standpoint, I was like, these people are insane. This is amazing what they're building. But I didn't know what the cultural vibe would feel like. I had been with GE at the beginning of my career for almost three years. So I kind of equated AWS Amazon to GE given the size because in between, I had done startups. So when I went to AWS I think initially, and I do have to kind of shout out, you know Todd Weatherby basically was the worldwide leader for ProServe and it was being built, he built it and I went into ProServe to help from that standpoint. >> John: ProServe, Professional services >> Professional services, right. To help these big enterprise customers. And specifically my first customer was an amazing experience in taking, basically the company revolves around strategic selling, right? It's not like you take a salesperson with a conventional schooling that salespeople would have and plug them into AWS in 2016. It was very much a consultative strategic approach. And for me, having a technical background and loving to solve problems for customers, working with the team, I would say, it was a dream team that I joined. And also the ability to come to the table with a technical background, knowing how to interact with senior executives to help them envision where they want to go, and then to bring a team along with you to make that happen. I mean, that was like magical for me. I loved that experience. >> So you like the culture, I mean, Andy Jassy, I've interviewed many times, always talked about builders and been a builder mentality. 
You mentioned that earlier at the top of this interview, you've always been building things, been curious, and you mentioned potentially your confidence might have been shaken. So you, you had the confidence. So being a builder, you know, being curious and having confidence seems to be what your superpower is. A lot of people talk about the confidence angle. How important is that, and how important is that for encouraging more women to get into tech? Because I still hear that all the time. Not that they don't have confidence, but there's so many signals that potentially could shake confidence in the industry. >> Yeah, that's actually a really good point that you're making. A lot of signals that women get could shake their confidence, and that needs to be, I mean, it's easy to say that it should be innate. I mean, that's kind of like textbook, "Oh, it has to come from within." Of course it does. But also, you know, we need to understand that in a population where 50% of the population is women, only 7% of the positions in tech, and I don't know the most current number in tech leadership, are held by women, and probably a smaller percentage in the C-suite. When you're looking at a woman who's wanting to go up the trajectory in a tech company and there's a subconscious understanding that there's a limit to how far you'll go, your confidence, you know, even subconsciously gets shaken a little bit because despite your best efforts, you're already seeing the cap. I would say that we need to coach girls to speak confidently, to navigate conflict versus running away from it, to own your own success, and to be secure in what you bring to the table. And then I think a very important thing is to celebrate each other and the wins that we see for women in tech, in the industry. >> That's awesome. In your opinion, when you look at the challenges for this next generation of women, and women in general, what are some of the challenges they need to overcome today?
I mean, obviously the world's changed for the better. Still not there. I mean, the numbers, one in four women. Rachel Thornton came on, former CMO of AWS, she's at MessageBird now. They had a study where only one in four women go to the executive board level. So the numbers are still bad, and they've still got to come up big time. The industry's working on that, but it's changed. But today, what are some of the challenges for this current generation and the next generation of women, and how can we and the industry, we being us, women in the industry, be strong role models for them? >> Well, I think the challenge is one of how many women are there in the pipeline, what are we doing to retain them, and how are we offering up the opportunities to fill. As you know, as Rachel said, and I haven't had an opportunity to see her, how are we giving them this opportunity to take up those seats in the C-suite, right, in these leadership roles. And I think this is a little bit exacerbated by the pandemic, in that, you know, when everything shut down, when people were going back to deal with family and work at the same time, for better or for worse, the brunt of it fell on probably, you know, the maternal-type caregiver within the family unit. You know, I've been, I raised my daughter alone, and for me, even without the pandemic, it was a struggle constantly to balance the risk that I was willing to take to show up for those positions versus investing even more of that time raising a child, right? Never mind the unconscious bias or cultural kind of expectations that you get from the male counterparts, where there's zero understanding of what a mom might go through at home to then show up to a meeting, you know, fully fresh and ready to kind of spit out some wisdom. It's like, you know, your kid just freaking lost their whatever, and you know, they, so you have to sort a bunch of things out.
I think the challenge that women are still facing, and that we'll have to keep working at, is making sure that there's a good pipeline. A good amount of young ladies, of people taking interest in tech. And then as they're, you know, going through the funnel at stages in their career, we're providing the mentoring, there's representation, right? To what they're aspiring to. We're celebrating their interest in the field, right? And, and I think also we're doing things to retain them, because again, the pandemic affected everybody. I think women specifically, and I don't know the statistics but I was reading something about this, were the ones who tended to kind of pull back and say, well, now I need to be home with, you know, you name how many kids and pets and the aging parents, people that got sick, to take on that position. In addition to the career aspirations that they might have. We need to make it easier, basically. >> I think that's a great call out, and I appreciate you bringing that up about family and being a single mom. And by the way, you're a savage warrior for doing that. It's amazing. You got to, I know you have a daughter in computer science at Stanford, I want to get to that in a second. But that empathy, and I mentioned Rachel Thornton, who's the CMO of MessageBird and former CMO of AWS. Her thing right now, to your point, is that mentoring and sponsorship are very key. And her company and the video that's on the site here, people should look at that and reference that. They talk a lot about that empathy for people's situations, whether it's a single mom, family life, men and women, but mainly women, because they're the ones who people aren't having a lot of empathy for in that situation, as you called it out. This is huge. And I think remote work has opened up this whole aperture of everyone has to have a view into how people are coming to the table at work. So, you know, props for bringing that up, and I recommend everyone check out Rachel Thornton.
So how do you balance that, that home life, and talk about your daughter's journey, because it sounds like she's nerding out at Stanford, 'cause you know Stanford's called Nerd Nation, that's their motto, so you must be proud. >> I am so proud, I'm so proud. And I will say, I have to admit, because I did encounter so many obstacles and so many hurdles in my journey, it's almost like I forgot that I should set that aside and not worry about my daughter. My hope for her was for her to kind of be artistic and a painter or go into something more lighthearted and fun, because I just wanted to think, I guess my mom had the same idea, right? She's always been very driven. I want to say that I got very lucky that she picked me to be her mom. Biologically I'm her mom, but I told her she was like a little star that fell from the sky and ended up with me. I think for me, balancing being a single mom and a career where I'm leading and mentoring and making big decisions that affect people's lives as well, you have to take the best of everything you get from each of those roles. And I think that the best way is to play to your strengths, right? So having been kind of a nerd and a very organized person and all about, you know, systems for effectiveness, I mean, industrial engineering, parenting for me was, I'm going to make it sound super annoying and horrible, but (laughs) >> It's funny, you know, Dave Vellante and I, when we started SiliconANGLE and theCUBE years ago, one of the things was we were all sports lovers. So we liked sports, and we looked at the people in tech as tech athletes, except there's no men's and women's teams, it's one team. It's all one thing. So, you know, I consider you a tech athlete, you're hard-charging, strong and professional and smart and beautiful and brilliant, all those good things. >> Thank you. >> Now this game is changing, and okay, you've done startups, and you've done corporate jobs, now you're in a new role.
What's the current tech landscape from a, you know, I won't say athletic standpoint per se, but as people who are smart? You have all kinds of different skill sets. You have the startup warriors, you have the folks who like to be in the middle of the corporate world, grow up through corporate, climb the corporate ladder. You have investors, you have, you know, creatives. What have you enjoyed most, and where do you see all the action? >> I mean, I think what I've enjoyed the most has been being able to bring all of the things that I feel I'm strong at and bring it together to apply that to whatever the problem is at hand, right? So kind of like, you know, if you look at a renaissance man who can kind of pop in anywhere and, oh, he's good at, you know, sports and he's good at reading, or she's good at this, take all of those strengths and somehow bring them together to deal with the issue at hand, versus breaking up your mindset into this is textbook what I learned and this is how business should be done, and I'm going to draw these hard lines between personal life and work life, or between how you do selling and how you do engineering. So I think the thing that I loved, really loved, about AWS was a lot of leaders saw something in me that I potentially didn't see, which was, yeah, you might be great at running that big account, but we need help over here doing go-to-market for a new product launch, and boom, there you go. Now I'm in a different org helping solve that problem and getting something launched. And I think if you don't box yourself into I'm only good at this, or, you know, put a label on yourself as being the rockstar in that, it leaves room for opportunities to present themselves, but also it leaves room within your own mind to see yourself as somebody capable of doing anything. Right, I don't know if I answered the question accurately. >> No, that's good, no, that's awesome. I love the sharing. Yeah, great, great share there.
Question is, what do you see, what are you currently doing now that you're building a business at Persistent for the cloud? Obviously AWS, and Persistent's a leading global system integrator around the world, thousands and thousands of customers from what we know and have been reporting on theCUBE. What's next for you? Where do you see yourself going? Obviously you're going to knock this out of the park. Where do you see yourself as you kind of look at the continuing journey of your mission, personal, professional, what's on your mind? Where do you see yourself going next? >> Well, I think, you know, again, going back to not boxing yourself in, this role is an amazing one where I have an opportunity to take all the pieces of my career in tech and apply them to building a business within a business. And that involves all the goodness of coaching and mentoring and strategizing. And I'm loving it. I'm loving the opportunity to work with such great leaders. Persistent itself is very, very good at providing opportunities, very diverse opportunities. We just had a huge Semicolon Hackathon. Some of the winners were females. The turnout was amazing in the CTO's office. We have very strong women leading the charge for innovation. I think to answer your question about the future and where I may see myself going next, I think now that my job, well, they say the job is never done, but now that Chloe's kind of settled into Stanford and kind of doing her own thing, I have always had a passion to continue leading in a way that brings me into the fold a lot more. So maybe, you know, maybe in a VC firm partner mode, or another, you know, CEO role in a startup, or my own startup. I mean, I don't know, right now I'm super happy, but you never know, you know, where your drive might go. And I also want to be able to very deliberately be in a role where I can continue to mentor and support up-and-coming women in tech.
>> Well, you got the smarts, but you really got the building mentality; the curiosity and the confidence really set you up nicely. Dominique, great story, great inspiration. You're a role model for many women and young girls out there, and women in tech, in celebration. It's a great day, and thank you for sharing that story and all the good nuggets there. Appreciate you coming on theCUBE, and it's been my pleasure. Thanks for coming on. >> Thank you, John. Thank you so much for having me.
Closing Panel | Generative AI: Riding the Wave | AWS Startup Showcase S3 E1
(mellow music) >> Hello everyone, welcome to theCUBE's coverage of AWS Startup Showcase. This is the closing panel session on AI machine learning, the top startups generating generative AI on AWS. It's a great panel. This is going to be the experts talking about riding the wave in generative AI. We got Ankur Mehrotra, who's the director and general manager of AI and machine learning at AWS, and Clem Delangue, co-founder and CEO of Hugging Face, and Ori Goshen, who's the co-founder and CEO of AI21 Labs. Ori from Tel Aviv dialing in, and rest coming in here on theCUBE. Appreciate you coming on for this closing session for the Startup Showcase. >> Thanks for having us. >> Thank you for having us. >> Thank you. >> I'm super excited to have you all on. Hugging Face was recently in the news with the AWS relationship, so congratulations. Open source, open science, really driving the machine learning. And we got the AI21 Labs access to the LLMs, generating huge scale live applications, commercial applications, coming to the market, all powered by AWS. So everyone, congratulations on all your success, and thank you for headlining this panel. Let's get right into it. AWS is powering this wave here. We're seeing a lot of push here from applications. Ankur, set the table for us on the AI machine learning. It's not new, it's been goin' on for a while. Past three years have been significant advancements, but there's been a lot of work done in AI machine learning. Now it's released to the public. Everybody's super excited and now says, "Oh, the future's here!" It's kind of been going on for a while and baking. Now it's kind of coming out. What's your view here? Let's get it started. >> Yes, thank you. So, yeah, as you may be aware, Amazon has been in investing in machine learning research and development since quite some time now. And we've used machine learning to innovate and improve user experiences across different Amazon products, whether it's Alexa or Amazon.com. 
But we've also brought in our expertise to extend what we are doing in the space and add more generative AI technology to our AWS products and services, starting with CodeWhisperer, which is an AWS service that we announced a few months ago, which is, you can think of it as a coding companion as a service, which uses generative AI models underneath. And so this is a service that customers who have no machine learning expertise can just use. And we also are talking to customers, and we see a lot of excitement about generative AI, and customers who want to build these models themselves, who have the talent and the expertise and resources. For them, AWS has a number of different options and capabilities they can leverage, such as our custom silicon, such as Trainium and Inferentia, as well as distributed machine learning capabilities that we offer as part of SageMaker, which is an end-to-end machine learning development service. At the same time, many of our customers tell us that they're interested in not training and building these generative AI models from scratch, given they can be expensive and can require specialized talent and skills to build. And so for those customers, we are also making it super easy to bring in existing generative AI models into their machine learning development environment within SageMaker for them to use. So we recently announced our partnership with Hugging Face, where we are making it super easy for customers to bring in those models into their SageMaker development environment for fine tuning and deployment. And then we are also partnering with other proprietary model providers such as AI21 and others, where we making these generative AI models available within SageMaker for our customers to use. So our approach here is to really provide customers options and choices and help them accelerate their generative AI journey. >> Ankur, thank you for setting the table there. 
Clem and Ori, I want to get your take, because riding the wave is the theme of this session, and to me, being in California, I imagine the big surf, the big waves, the big talent out there. This is like alpha geeks, alpha coders, developers are really leaning into this. You're seeing massive uptake from the smartest people. Whether they're young or have been around, they're coming in with their kind of surfboards, (chuckles) if you will. These early adopters, they've been on this for a while; now the waves are hitting. This is a big wave, everyone sees it. What are some of those early adopter devs doing? What are some of the use cases you're seeing right out of the gate? And what does this mean for the folks that are going to come in and get on this wave? Can you guys share your perspective on this? Because you're seeing the best talent now leaning into this. >> Yeah, absolutely. I mean, from Hugging Face's vantage point, it's not even a wave, it's a tidal wave, or maybe even the tide itself. Because actually what we are seeing is that AI and machine learning is not something that you add to your products. It's very much a new paradigm for doing all technology. For the past 15, 20 years, we had one way to build software and to build technology, which was writing a million lines of code, very rule-based, and then you get your product. Now what we are seeing is that every single product, every single feature, every single company is starting to adopt AI to build the next generation of technology. And that works both to make the existing use cases better, if you think of search, if you think of social networks, if you think of SaaS, but also it's creating completely new capabilities that weren't possible with the previous paradigm. Now AI can generate text, it can generate images, it can describe your image, it can do so many new things that weren't possible before. >> It's going to really make the developers really productive, right?
I mean, you're seeing the developer uptake strong, right? >> Yes, we have over 15,000 companies using Hugging Face now, and it keeps accelerating. I really think that maybe in like three, five years, there's not going to be any company not using AI. It's going to be really kind of the default way to build all technology. >> Ori, weigh in on this. APIs, the cloud. Now I'm a developer, I want to have live applications, I want the commercial applications on this. What's your take? Weigh in here. >> Yeah, first, I absolutely agree. I mean, we're in the midst of a technology shift here. I think not a lot of people realize how big this is going to be. Just the number of possibilities is endless, and I think hard to imagine. And I don't think it's just the use cases. I think we can think of it as two separate categories. We'll see companies and products enhancing their offerings with these new AI capabilities, but we'll also see new companies that are AI first, that kind of reimagine certain experiences. They build something that wasn't possible before. And that's why I think these are actually extremely exciting times. And maybe more philosophically, I think now these large language models and large transformer-based models are helping us as people to express our thoughts, kind of making the bridge from our thinking to a creative digital asset at a speed we've never imagined before. I can write something down and get a piece of text, or an image, or code. So I'll start by saying it's hard to imagine all the possibilities right now, but it's certainly big. And if I had to bet, I would say it's probably at least as big as the mobile revolution we've seen in the last 20 years. >> Yeah, this is the biggest. I mean, it's been compared to the Enlightenment Age. I saw the Wall Street Journal had a recent story on this. We've been saying that this is probably going to be bigger than all inflection points combined in the tech industry, given what transformation is coming.
I guess I want to ask you guys, on the early adopters, we've been hearing on these interviews and throughout the industry that there's already a set of big companies, a set of companies out there that have a lot of data and they're already there, they're kind of tinkering. Kind of reminds me of the old hyperscaler days where they were building their own scale, and they're eatin' glass, spittin' nails out, you know, they're hardcore. Then you got everybody else kind of saying at the board level, "Hey team, how do I leverage this?" How do you see those two things coming together? You got the fast followers coming in behind the early adopters. What's it like for the second wave coming in? What are those conversations for those developers like? >> I mean, I think for me, the important switch for companies is to change their mindset from being kind of like a traditional software company to being an AI or machine learning company. And that means investing, hiring machine learning engineers, machine learning scientists, infrastructure team members who are working on how to put these models in production, team members who are able to optimize models, specialized models, customized models for the company's specific use cases. So it's really changing this mindset of how you build technology and optimize your company building around that. Things are moving so fast that I think now it's kind of like too late for low-hanging fruit or small adjustments. I think it's important to realize that if you want to be good at that, and if you really want to surf this wave, you need massive investments. If there are some surfers listening, with this analogy of the wave, right, when there are waves, it's not enough just to stand there and make small adjustments. You need to position yourself aggressively, paddle like crazy, and that's how you get into the waves. So that's what companies, in my opinion, need to do right now.
>> Ori, what's your take on the generative models out there? We hear a lot about foundation models. What's your experience running end-to-end applications for large foundation models? Any insights you can share with the app developers out there who are looking to get in? >> Yeah, I think, first of all, it starts to create an economy where it probably doesn't make sense for every company to create their own foundation models. You can basically start by using an existing foundation model, either open source or a proprietary one, and start deploying it for your needs. And then comes the second round, when you are starting the optimization process. You bootstrap, whether it's a demo, or a small feature, or introducing a new capability within your product, and then start collecting data. That data, and particularly the human feedback data, helps you to constantly improve the model, so you create this data flywheel. And I think we're now entering an era where customers have a lot of different choices of how they want to start their generative AI endeavor. And it's a good thing that there's a variety of choices. And the really amazing thing here is that every industry, any company you speak with, it could be something very traditional like industrial or financial, medical, really any company. I think people are now starting to imagine what the possibilities are, and seriously think about what their strategy is for adopting this generative AI technology. And I think in that sense, the foundation models actually enabled this to become scalable. So the barrier to entry became lower; now the adoption can actually accelerate. >> There's a lot of integration aspects here in this new wave that's a little bit different. Before it was like very monolithic, hardcore, very brittle. A lot more integration, you see a lot more data coming together.
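The data flywheel Ori describes can be illustrated with a small, purely hypothetical sketch (the record fields and the thumbs-up rating convention here are assumptions for illustration, not any specific product's API): log each generation alongside the human feedback it received, then filter the approved examples into the next round's fine-tuning set.

```python
from dataclasses import dataclass, field

@dataclass
class FeedbackLog:
    """Collects (prompt, output, rating) records from a deployed model."""
    records: list = field(default_factory=list)

    def log(self, prompt: str, output: str, rating: int) -> None:
        # Assumed convention: 1 = thumbs up, 0 = thumbs down.
        self.records.append({"prompt": prompt, "output": output, "rating": rating})

    def fine_tuning_set(self) -> list:
        # Keep only the outputs users approved of; these become the
        # supervised examples for the next round of model improvement.
        return [
            {"prompt": r["prompt"], "completion": r["output"]}
            for r in self.records
            if r["rating"] == 1
        ]

log = FeedbackLog()
log.log("Summarize the invoice.", "Total due: $120 by March 1.", 1)
log.log("Summarize the invoice.", "Lorem ipsum dolor sit amet.", 0)
print(len(log.fine_tuning_set()))  # → 1
```

Each deployment round then trains on a slightly better data set than the last, which is the flywheel effect: product usage produces feedback, feedback produces training data, training data produces a better model.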
I have to ask you guys, as developers come in and grow, I mean, when I went to college, if you were a software engineer, I mean, I got a degree in computer science, and in software engineering, all you did was code, (chuckles) you coded. Now, isn't it like everyone's a machine learning engineer at this point? Because that will be ultimately the science. So, (chuckles) you got open source, you got open software, you got the communities. Swami called you guys the GitHub of machine learning, Hugging Face is the GitHub of machine learning, mainly because that's where people are going to code. So this is essentially, machine learning is computer science. What's your reaction to that? >> Yes, my co-founder Julien at Hugging Face has been saying this for quite a while now, for over three years: that actually software engineering as we know it today is a subset of machine learning, instead of the other way around. People would call us crazy a few years ago when we were saying that. But now we are realizing that you can actually code with machine learning. So machine learning is generating code. And we are starting to see that every software engineer can leverage machine learning through open models, through APIs, through different technology stacks. So yeah, it's not crazy anymore to think that maybe in a few years, there's going to be more people doing AI and machine learning. However you call it, right? Maybe you'll still call them software engineers, maybe you'll call them machine learning engineers. But there might be more of these people in a couple of years than there are software engineers today. >> I bring this up more tongue in cheek as well, because Ankur, infrastructure as code is what made the cloud great, right? That's kind of the DevOps movement. But here the shift is so massive, there will be a game-changing philosophy around coding.
Machine learning as code, you're starting to see CodeWhisperer, you guys have had coding companions for a while on AWS. So this is a paradigm shift. How is the cloud playing into this for you guys? Because to me, I've been riffing on some interviews where it's like, okay, you got the cloud going next level. This is an example of that, where there is a DevOps-like moment happening with machine learning, whether you call it coding or whatever. It's writing code on its own. Can you guys comment on what this means on top of the cloud? What comes out of the scale? What comes out of the benefit here? >> Absolutely, so- >> Well first- >> Oh, go ahead. >> Yeah, so I think as far as scale is concerned, I think customers are really relying on cloud to make sure that the applications that they build can scale along with the needs of their business. But there's another aspect to it, which is that until a few years ago, John, what we saw was that machine learning was a data-scientist-heavy activity. There were data scientists who were taking the data and training models. And then as machine learning found its way more and more into production and actual usage, we saw MLOps become a thing, and MLOps engineers become more involved in the process. And then we now are seeing, as machine learning is being used to solve more business-critical problems, we're seeing even legal and compliance teams get involved. We are seeing business stakeholders more engaged. So, more and more, machine learning is becoming an activity that's not just performed by data scientists, but is performed by a team and a group of people with different skills. And for them, we as AWS are focused on providing the best tools and services for these different personas to be able to do their job and really complete that end-to-end machine learning story. So that's where, whether it's tools related to MLOps or even for folks who cannot code or don't know any machine learning.
For example, we launched SageMaker Canvas as a tool last year, which is a UI-based tool that data analysts and business analysts can use to build machine learning models. So overall, the spectrum in terms of persona and who can get involved in the machine learning process is expanding, and the cloud is playing a big role in that process. >> Ori, Clem, can you guys weigh in too? 'Cause this is just another abstraction layer of scale. What's it mean for you guys as you look forward to your customers and the use cases that you're enabling? >> Yes, I think what's important is that the AI companies and providers and the cloud kind of work together. That's how you make a seamless experience and you actually reduce the barrier to entry for this technology. So that's what we've been super happy to do with AWS for the past few years. We actually announced not too long ago that we are doubling down on our partnership with AWS. We're excited to have many, many customers on our shared product, the Hugging Face deep learning container on SageMaker. And we are working really closely with the Inferentia team and the Trainium team to release some more exciting stuff in the coming weeks and coming months. So I think when you have an ecosystem and a system where AWS and the AI providers and AI startups can work hand in hand, it's to the benefit of the customers and the companies, because it makes it orders of magnitude easier for them to adopt this new paradigm of building technology with AI. >> Ori, this is a scale on reasoning too. The data's out there and making sense out of it, making it reason, getting comprehension, having it make decisions is next, isn't it? And you need scale for that. >> Yes. Just a comment about the infrastructure side. So I think really the purpose is to streamline and make these technologies much more accessible. And I think we'll see, I predict that we'll see in the next few years more and more tooling that makes this technology much simpler to consume.
And I think it plays a very important role. There are so many aspects, like monitoring the models and the kind of outputs they produce, and kind of containing and running them in a production environment. There's so much there to build on; the infrastructure side will play a very significant role. >> All right, that's awesome stuff. I'd love to change gears a little bit and get a little philosophy here around AI and how it's going to transform, if you guys don't mind. There have been a lot of conversations around, on theCUBE here as well as in some industry areas, where it's like, okay, all the heavy lifting is automated away with machine learning and AI, the complexity, there's some efficiencies, it's horizontal and scalable across all industries. Ankur, good point there. Everyone's going to use it for something. And a lot of stuff gets brought to the table with large language models and other things. But the key ingredient will be proprietary data or human input, or some sort of AI whisperer kind of role, or prompt engineering, people are saying. So with that being said, some are saying it's automating intelligence. And that creativity will be unleashed from this. If the heavy lifting goes away and AI can fill the void, that shifts the value to the intellect or the input. And so that means data's got to come together, interact, fuse, and understand each other. This is kind of new. I mean, old school AI was, okay, got a big model, I provisioned it for a long time, very expensive. Now it's all free-flowing. Can you guys comment on where you see this going with this freeform, data flowing everywhere, heavy lifting, and then specialization? >> Yeah, I think- >> Go ahead. >> Yeah, I think, so what we are seeing with these large language models or generative models is that they're really good at creating stuff. But I think it's also important to recognize their limitations. They're not as good at reasoning and logic.
And I think now we're seeing great enthusiasm, I think, which is justified. And the next phase would be how to make these systems more reliable. How to inject more reasoning capabilities into these models, or augment them with other mechanisms that actually perform more reasoning so we can achieve more reliable results. And we can count on these models to perform for critical tasks, whether it's medical tasks, legal tasks. We really want to kind of offload a lot of the intelligence to these systems. And then we'll have to get back, we'll have to make sure these are reliable, we'll have to make sure we get some sort of explainability so that we can understand the process behind the generated results that we received. So I think this is kind of the next phase of systems that are based on these generative models. >> Clem, what's your view on this? Obviously you're an open community, open source has been around, it's got a great track record, a proven model. I'm assuming creativity's going to come out of the woodwork, and if we can automate open source contribution, and relationships, and onboarding more developers, there's going to be an unleashing of creativity. >> Yes, it's been so exciting on the open source front. We all know BERT, BLOOM, GPT-J, T5, Stable Diffusion, that whole lineup, the previous and current generation of open source models that are on Hugging Face. It has been accelerating in the past few months. So I'm super excited about ControlNet right now, which is really having a lot of impact, which is kind of like a way to control the generation of images. Super excited about Flan-UL2, which is like a new model that has been recently released and is open source. So yeah, it's really fun to see the ecosystem coming together. Open source has been the basis for traditional software, with like open source programming languages, of course, but also all the great open source that we've gotten over the years.
So we're happy to see that the same thing is happening for machine learning and AI, and hopefully it can help a lot of companies reduce the barrier to entry a little bit. So yeah, it's going to be exciting to see how it evolves in the next few years in that respect. >> I think the developer productivity angle that's been talked about a lot in the industry will be accelerated significantly. I think security will be enhanced by this. I think in general, applications are going to transform at a radical rate, an accelerated, incredible rate. So I think it's not a big wave, it's the water, right? I mean, (chuckles) it's the new thing. My final question for you guys, if you don't mind, I'd love to get each of you to answer the question I'm going to ask you, which is, a lot of conversations around data. Data infrastructure's obviously involved in this. And the common thread that I'm hearing is that every company that looks at this is asking themselves, if we don't rebuild our company, start thinking about rebuilding our business model around AI, we might be dinosaurs, we might be extinct. And it reminds me of that scene in Moneyball when, at the end, it's like, if we're not building the model around your model, every company will be out of business. What's your advice to companies out there that are having those kinds of moments where it's like, okay, this is real, this is next gen, this is happening. I better start thinking and putting into motion plans to refactor my business, 'cause it's happening, business transformation is happening on the cloud. This kind of puts an exclamation point on it, with AI as the next step function. Big increase in value. So it's an opportunity for leaders. Ankur, we'll start with you. What's your advice for folks out there thinking about this? Do they put their toe in the water? Do they jump right into the deep end? What's your advice?
>> Yeah, John, so we talk to a lot of customers, and customers are excited about what's happening in the space, but they often ask us like, "Hey, where do we start?" So we always advise our customers to do a lot of proofs of concept, understand where they can drive the biggest ROI. And then also leverage existing tools and services to move fast and scale, and try not to reinvent the wheel where it doesn't need to be. That's basically our advice to customers. >> Get it. Ori, what's your advice to folks who are scratching their head going, "I better jump in here. "How do I get started?" What's your advice? >> So I actually think that you need to think about it really economically. Both on the opportunity side and the challenges. So there are a lot of opportunities for many companies to actually gain revenue upside by building these new generative features and capabilities. On the other hand, of course, incorporating these capabilities could probably affect the COGS. So I think we really need to think carefully about both of these sides, and also understand clearly if this is a project or an effort towards cost reduction, then the ROI is pretty clear, or a revenue amplifier, where there's, again, a lot of different opportunities. So I think once you think about this in a structured way, and map the different initiatives, then it's probably a good way to start and a good way to start thinking about these endeavors. >> Awesome. Clem, what's your take on this? What's your advice, folks out there? >> Yes, all of this is very good advice already. Something that you said before, John, that I disagreed with a little bit: a lot of people are talking about the data moat and proprietary data. Actually, when you look at some of the organizations that have been building the best models, they don't have specialized or unique access to data. So I'm not sure that's so important today.
I think what's important for companies, and it's been the same for the previous generation of technology, is their ability to build better technology faster than others. And in this new paradigm, that means being able to build machine learning faster than others, and better. So that's how, in my opinion, you should approach this. And it's kind of like, how can you evolve your company, your teams, your products, so that you are able in the long run to build machine learning better and faster than your competitors. And if you manage to put yourself in that situation, then that's when you'll be able to differentiate yourself, to really kind of be impactful and get results. That's really hard to do. It's something really different, because machine learning and AI is a different paradigm than traditional software. So this is going to be challenging, but I think if you manage to nail that, then the future is going to be very interesting for your company. >> That's a great point. Thanks for calling that out. I think this all reminds me of the cloud days early on. If you went to the cloud early, you took advantage of it when the pandemic hit. If you weren't native in the cloud, you got hamstrung by that, you were flatfooted. So just get in there. (laughs) Get in the cloud, get into AI, you're going to be good. Thanks again for calling that out. Final parting comments, what's your most exciting thing going on right now for you guys? Ori, Clem, what's the most exciting thing on your plate right now that you'd like to share with folks? >> I mean, for me it's just the diversity of use cases and really creative ways of companies leveraging this technology. Every day I speak with about two, three customers, and I'm continuously being surprised by the creative ideas. And the future of what can be achieved here is really exciting. And also I'm amazed by the pace at which things move in this industry. It's just, there's not a dull moment. So, definitely exciting times.
>> Clem, what are you most excited about right now? >> For me, it's all the new open source models that have been released in the past few weeks, and that will keep being released in the next few weeks. I'm also super excited about more and more companies getting into this capability of chaining different models and different APIs. I think that's a very, very interesting development, because it creates new capabilities, new possibilities, new functionalities that weren't possible before. You can plug an API with an open source embedding model, with, like, an audio transcription model. So that's also very exciting. This capability of having more interoperable machine learning will also, I think, open a lot of interesting things in the future. >> Clem, congratulations on your success at Hugging Face. Please pass that on to your team. Ori, congratulations on your success, and keep it going; it's just day one. I mean, it's just the beginning. It's not even scratching the surface. Ankur, I'll give you the last word. What are you excited for at AWS? More cloud goodness coming here with AI. Give you the final word. >> Yeah, so as both Clem and Ori said, I think the research in the space is moving really, really fast, so we are excited about that. But we are also excited to see the speed at which enterprises and other AWS customers are applying machine learning to solve real business problems, and the kind of results they're seeing. So when they come back to us and tell us the kind of improvement in their business metrics and overall customer experience that they're driving and they're seeing real business results, that's what keeps us going and inspires us to continue inventing on their behalf. >> Gentlemen, thank you so much for this awesome high impact panel. Ankur, Clem, Ori, congratulations on all your success. We'll see you around. Thanks for coming on. Generative AI, riding the wave, it's a tidal wave, it's the water, it's all happening. All great stuff.
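The model chaining Clem mentioned can be sketched abstractly; every function below is a hypothetical stub standing in for a real model or API (no real model is being called), chosen only to show how one stage's output becomes the next stage's input.

```python
def transcribe(audio: bytes) -> str:
    # Stub standing in for a speech-to-text model.
    return "book a table for two at seven"

def embed(text: str) -> list:
    # Stub standing in for a text-embedding model; real ones
    # return dense float vectors, not character codes.
    return [float(ord(c)) for c in text[:4]]

def route(vector: list) -> str:
    # Stub "downstream API": choose an action from the embedding.
    return "restaurant_booking" if sum(vector) > 0 else "fallback"

# Chain the three stages: audio -> text -> vector -> action.
action = route(embed(transcribe(b"\x00\x01")))
print(action)  # → restaurant_booking
```

The interoperability point is exactly this: because each stage consumes and produces plain data, models from different providers, open source or proprietary, can be composed freely.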
This is season three, episode one of the AWS Startup Showcase closing panel. This is the AI/ML episode, the top startups building generative AI on AWS. I'm John Furrier, your host. Thanks for watching. (mellow music)
Joseph Nelson, Roboflow | AWS Startup Showcase
(chill electronic music) >> Hello everyone, welcome to theCUBE's presentation of the AWS Startups Showcase, AI and machine learning, the top startups building generative AI on AWS. This is season three, episode one of the ongoing series covering the exciting startups from the AWS ecosystem, talking about AI and machine learning. Can't believe it's been three years since season one. I'm your host, John Furrier. Got a great guest today, we're joined by Joseph Nelson, the co-founder and CEO of Roboflow, doing some cutting-edge stuff around computer vision and really at the front end of this massive wave coming around, large language models, computer vision. The next gen AI is here, and it's just getting started. We haven't even scratched the surface. Thanks for joining us today. >> Thanks for having me. >> So you got to love the large language models, foundation models, really educating the mainstream world. ChatGPT has got everyone in a frenzy. This is educating the world around these next gen AI capabilities, enterprise, image and video data, all a big part of it. I mean the edge of the network, Mobile World Congress is happening right now, this month, and it's just wrapping up, and it's just continuing to explode. Video is huge. So take us through the company, do a quick explanation of what you guys are doing, when you were founded. Talk about what the company's mission is, and what's your North Star, why do you exist? >> Yeah, Roboflow exists to really kind of make the world programmable. I like to say make the world be read and write access. And our North Star is enabling developers, predominantly, to build that future. If you look around, anything that you see will have software related to it, and can kind of be turned into software. The limiting reactant, though, is how to enable computers and machines to understand things as well as people can. And in a lot of ways, computer vision is that missing element that enables anything that you see to become software.
So in the spirit of "if software is eating the world," computer vision kind of makes the aperture infinitely wide. That's kind of the way I like to frame it. And the capabilities are there, the open source models are there, the amount of data is there, the compute capabilities are only improving annually, but there's a pretty big dearth of tooling, and an early but promising sign of the explosion of use cases, models, and data sets that companies, developers, and hobbyists alike will need to bring these capabilities to bear. So Roboflow is in the game of building the community around that capability, building the use cases that allow developers and enterprises to use computer vision, and providing the tooling for companies and developers to be able to add computer vision, create better data sets, and deploy to production, quickly, easily, safely, invaluably. >> You know, Joseph, the phrase "in production" is actually real now. You're seeing a lot more people doing in-production activities. That's a real hot one, and usually it's slower, but it's gone faster, and I think that's going to be more the same. And I think the parallel between what we're seeing on the large language models coming into computer vision, and as you mentioned, video's data, right? I mean, we're doing video right now, we're transcribing it into a transcript, linking it up to the linguistics, the times and the timestamps, I mean everything's data and that really kind of feeds. So this connection between what we're seeing, large language models and computer vision are coming together, kind of like cousins, brothers. I mean, how would you compare, how would you explain to someone, because everyone's like on this wave of watching people bang out their homework assignments, and you know, write some hacks on code with some of the open AI technologies, there is a corollary directly related to the vision side. Can you explain?
>> Yeah, the rise of large language models is showing what's possible, especially with text, and I think it will increasingly get multimodal as images and video become ingested. Though there's still kind of this core missing element of, basically, understanding. So the rise of large language models kind of creates this new area of generative AI, and generative AI in the context of computer vision is a lot of, you know, creating video and image assets and content. There's also this whole surface area of understanding what's already created. Basically digitizing physical, real world things. I mean the Metaverse can't be built if we don't know how to mirror or create or identify the objects that we want to interact with in our everyday lives. And where computer vision comes into play, especially from what we've seen at Roboflow is, you know, a little over a hundred thousand developers now have built with our tools. That's to the tune of a hundred million labeled open source images, over 10,000 pre-trained models. And they've kind of showcased to us all of the ways that computer vision is impacting and bringing the world to life. And these are things that, you know, even before large language models and generative AI, you had pretty impressive capabilities, and when you add the two together, it actually unlocks these kind of new capabilities. So for example, you know, one of our users actually powers the broadcast feeds at Wimbledon. So here we're talking about video, we're streaming, we're doing things live, we've got folks that are cropping and making sure we look good, and audio/visual all plugged in correctly. When you broadcast Wimbledon, you'll notice that the camera controllers need to do things like track the ball, which is moving at extremely high speeds, and zoom, crop, pan, tilt, as well as determine if the ball bounced in or out. The very controversial but critical call in a lot of tennis matches.
And a lot of that has historically been done with the trained but fallible human eye, and computer vision is, you know, well suited for this task: how do we track, pan, tilt, zoom, and see the tennis ball in real time, run at 30-plus frames per second, and do it all on the edge? Those are capabilities that, you know, were kind of like science fiction maybe even a decade ago, and certainly five years ago. Now the interesting thing is that with the advent of generative AI, you can start to do things like create your own training data sets, or kind of create logic around once you have this visual input. And teams at Tesla have actually been speaking about this. Of course the autopilot team's focused on doing vision tasks, but they've combined large language models to add reasoning and logic. So given that you see, let's say, the tennis ball, what do you want to do? And being able to combine the capabilities of what LLMs represent, which is really a lot of, basically, core human reasoning and logic, with computer vision for the inputs of what's possible, creates these new capabilities, let alone multimodality, which I'm sure we'll talk more about. >> Yeah, and it's really, I mean it's almost intoxicating. It's amazing that this is so capable, because the cloud scales here, you got the edge developing, you can decouple compute power, and let Moore's law and all the new silicon and the processors and the GPUs do their thing, and you got open source booming. You're kind of getting at this next segment I wanted to get into, which is how people should be thinking about these advances in computer vision. So this is now a next wave, it's here. I mean I'd love to have that for baseball, because I'm always like, "Oh, it should have been a strike." I'm sure that's going to be coming soon, but what is computer vision capable of doing today? I guess that's my first question. You hit some of it, unpack that a little bit.
What does generative AI mean in computer vision? What's the new thing? Because there are old technologies that have been around, proprietary, bolted onto hardware, and hardware advances at a different pace, but now you've got new capabilities, generative AI for vision. What does that mean? >> Yeah, so computer vision, you know, at its core is basically enabling machines, computers, to understand, process, and act on visual data as effectively or more effectively than people can. Traditionally this has been, you know, task types like classification, which is, you know, identifying if a given image belongs in a certain category of goods on maybe a retail site: is it shoes or is it clothing? Or object detection, which is, you know, creating bounding boxes, which allows you to do things like count how many things are present, or maybe measure the speed of something, or trigger an alert when something becomes visible in frame that wasn't previously visible in frame. Or instance segmentation, where you're creating pixel-wise segmentations for both instance and semantic segmentation, where you often see these kind of beautiful visuals of the polygon surrounding objects that you see. Then you have keypoint detection, which is where you see, you know, athletes, and each of their joints is kind of outlined, which is another more traditional problem type in signal processing and computer vision. With generative AI, you kind of get a whole new class of problem types that are opened up. So in a lot of ways I think about generative AI in computer vision as: some of the, you know, problems that you aim to tackle might still be better suited for one of the previous task types we were discussing. Some of those problem types may be better suited for using a generative technique, and some are problem types that just previously wouldn't have been possible absent generative AI.
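The traditional task types Joseph walks through here can be made concrete with a small sketch. Everything below is a hypothetical illustration of the output shapes involved, not Roboflow's API or any real model: classification picks one label per image, object detection produces labeled boxes you can count, and comparing detections between frames gives you the "alert when something newly appears" pattern he describes.

```python
# Illustrative sketch of traditional computer vision task outputs.
# All structures and helper names here are hypothetical, not a real API.

def classify(scores):
    """Classification: pick the most likely category for a whole image."""
    return max(scores, key=scores.get)

def count_objects(detections, label):
    """Object detection output: a list of {label, box} dicts; count a label."""
    return sum(1 for d in detections if d["label"] == label)

def newly_visible(prev_frame, curr_frame):
    """Alert pattern: labels present now that weren't in the prior frame."""
    prev_labels = {d["label"] for d in prev_frame}
    return sorted({d["label"] for d in curr_frame} - prev_labels)

scores = {"shoes": 0.91, "clothing": 0.09}
frame1 = [{"label": "person", "box": (10, 10, 50, 120)}]
frame2 = [{"label": "person", "box": (12, 11, 52, 121)},
          {"label": "tennis_ball", "box": (200, 40, 210, 50)}]

print(classify(scores))                 # shoes
print(count_objects(frame2, "person"))  # 1
print(newly_visible(frame1, frame2))    # ['tennis_ball']
```

A real detector would of course produce the boxes from pixels; the point of the sketch is only that each task type reduces to a simple, countable data shape downstream code can act on.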
And so if you make that kind of Venn diagram in your head, you can think about, okay, you know, visual question answering is a task type where if I give you an image and I say, you know, "How many people are in this image?", we could either build an object detection model that might count all those people, or maybe a visual question answering system would sufficiently answer this type of problem. Let alone generative AI being able to create new training data for old systems. And that's something that we've seen become an increasingly prominent use case for our users, as well as something we advise our customers and the community writ large to take advantage of. So ultimately those are kind of the traditional task types. I can give you some insight, maybe, into how I think about what's possible today, or in five or ten years as you sort of go forward. >> Yes, definitely. Let's get into that vision. >> So I kind of think about the types of use cases in terms of what's possible. If you just imagine a very simple bell curve, your normal distribution: for the longest time, the types of things that are in the center of that bell curve are identifying objects that are very common, or common objects in context. Microsoft published the COCO dataset, Common Objects in Context, in 2014, with hundreds of thousands of images of chairs, forks, food, people, these sorts of things. And you know, the challenge of the day had always been, how do you identify just those 80 objects? So if we think about the bell curve, that'd be maybe like the dead center of the curve, where there's a lot of those objects present, and it's a very common thing that needs to be identified. But it's a very, very, very small sliver of the distribution.
Now if you go way out into the long tail, let's go deep into the tail of this imagined visual normal distribution, you're going to have a problem like one of our customers, Rivian, in tandem with AWS, is tackling: doing visual quality assurance in manufacturing and production processes. Now, only Rivian knows what a Rivian is supposed to look like. Only they have the imagery of the goods they're going to produce. And then between those long tails of proprietary data of highly specific things that need to be understood and the center of the curve, you have a whole kind of "messy middle" type of problem, as I like to say. The way I think about computer vision advancing is that you basically have larger and larger and more capable models that eat from the center out, right? So if you have a model that, you know, understands the 80 classes in COCO, well, pretty soon you have advances like CLIP, which was trained on 400 million image-text pairs, and has a greater understanding of a wider array of objects than just 80 classes in context. And over time you'll get more and more of these larger models that kind of eat outwards from the center of the distribution. And so the question becomes, for companies: when can you rely on a model that maybe already exists? How do you use your data to get what may be capable off the shelf, so to speak, into something that is usable for you? Or, if you're in those long tails and you have proprietary data, how do you take advantage of the greatest asset you have, which is observed visual information that you want to put to work for your customers, when you're kind of living in the long tails and you need to adapt state of the art for your capabilities? So my mental model for how computer vision advances is: you have that bell curve, and you have increasingly powerful models that eat outward.
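Joseph's bell-curve mental model can be put in toy numbers (this is just an illustration of the metaphor, nothing Roboflow ships): if object types are spread along a normal distribution of commonness, a model general enough to cover everything within k standard deviations of the center covers erf(k/√2) of the distribution. Each step outward eats a smaller slice, which is one way to see why the proprietary long tails stay out of reach of general models for so long.

```python
import math

def coverage(k):
    """Fraction of a standard normal distribution within +/- k sigma."""
    return math.erf(k / math.sqrt(2))

# A hypothetical model "eating outward" from the center of the curve:
# generality grows with k, but with sharply diminishing returns.
for k in (1, 2, 3):
    print(f"within +/-{k} sigma: {coverage(k):.4f}")
# within +/-1 sigma: 0.6827
# within +/-2 sigma: 0.9545
# within +/-3 sigma: 0.9973
```

The last fractions of a percent, the Rivian-style tails, are exactly where a company's own data is the only way in.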
And multimodality has a role to play in that, larger models have a role to play in that, more compute and more data generally have a role to play in that. But it will be a messy and, I think, long transition. >> Well, the thing I want to get at, first of all, that's a great mental model, I appreciate that, 'cause I think that makes a lot of sense. The question is, it seems now more than ever, with the scale and compute that's available, that not only can you eat out to the middle, in your example, but there are other models you can integrate with. In the past it was siloed, static, almost bespoke. Now you're looking at larger models eating into the bell curve, as you said, but also integrating with other stuff. So this seems to be part of that interaction. First of all, is that really happening? Is that true? And then two, what does that mean for companies who want to take advantage of this? Because the old model was operational, you know? I have my cameras, they're watching stuff, whatever, and now you're in this more distributed computing, computer science mindset, not, you know, the put-the-camera-on-the-wall kind of thing. I'm oversimplifying, but you know what I'm saying. What's your take on that? >> Well, to the first point of how these advances are happening: what I was describing was, you know, almost uni-dimensional, in that you're only thinking about vision. But the rise of generative techniques and multimodality (CLIP, for example, is a multimodal model trained on 400 million image-text pairs) will advance the generalizability at a faster rate than treating everything as vision only. And that's where LLMs and vision will intersect in a really nice and powerful way. Now, in terms of companies, how should they be thinking about taking advantage of these trends? The biggest thing, and I think it's different, obviously, depending on the size of the business, if you're an enterprise versus a startup.
The biggest thing, I think, if you're an enterprise, and you have an established, scaled business model that is working for your customers, the question becomes: how do you take advantage of that established data moat, potentially resource moats, and certainly, of course, an established way of providing value to an end user? So for example, one of our customers, Walmart, has the advantage of one of the largest inventories and stocks of any company in the world. And they also of course have substantial visual data, both from their online catalogs, or understanding what's in stock or out of stock, or understanding, you know, the quality of the things that go from the start of their supply chain to making it inside stores, to delivery and fulfillment. All of these are visual challenges. Now, they already have a substantial trove of useful imagery to understand and teach and train large models to understand each of the individual SKUs and products that are in their stores. And so if I'm Walmart, what I'm thinking is: how do I make sure that my petabytes of visual information are utilized in a way where I capture the proprietary benefit of the models that I can train to do tasks like, what item was this? Or maybe I'm going to create Amazon Go-like technology, or maybe I'm going to build delivery robots, or I want to automatically know what's in and out of stock from the visual input feeds that I have across my in-store traffic. And that becomes the question and flavor of the day for enterprises: I've got this large amount of data, I've got an established way that I can provide more value to my own customers, how do I ensure I take advantage of the data advantage I'm already sitting on? If you're a startup, I think it's a pretty different question, and I'm happy to talk about that. >> Yeah, what's the startup angle on this? Because you know, they're going to want to take advantage.
It's like cloud startups, cloud native startups: they were born in the cloud, they never had an IT department. So if you're a startup, is there a similar role here? And if I'm a computer vision startup, what does that mean? So can you share your take on that? Because there'll be a lot of people starting up from this. >> So the startup has the opposite advantage and disadvantage, right? A startup doesn't have a proven way of delivering repeatable value in the same way that a scaled enterprise does. But it does have the nimbleness to identify and take advantage of techniques, because you can start from a blank slate. And I think the thing that startups need to be wary of in the generative AI, large language model, multimodal world is building what I like to call sandcastles. A sandcastle is maybe a business model or a capability that's built on top of an assumption that is going to be pretty quickly wiped away by improving underlying model technology. So almost like, if you imagine the ocean, the waves are coming in, and they're going to wipe away your progress. You don't want to be in the position of building a sandcastle business; you don't want to bet on the fact that models aren't going to get good enough to solve the task type that you might be solving. In other words, don't take a screenshot of what's capable today. Assume that what's capable today is only going to continue to become possible. And so for a startup, what you can do, which enterprises are comparatively less good at, is embedding these capabilities deeply within your products and delivering maybe a vertical-based experience, where AI kind of exists in the background. >> Yeah. >> And we might not think of those companies as, you know, even AI companies, it's just so embedded in the experience they provide, but that's the vertical application example of taking AI and making it immediately usable.
Or, of course, there's tons of picks-and-shovels businesses to be built, like Roboflow, where you're enabling these enterprises to take advantage of something that they have, whether that's their data sets, their compute, or their intellect. >> Okay, so if I hear that right, by the way, I love that, that's horizontally scalable. That's the large language models; go out and build the apps on top, hence your developer focus. I'm sure that's probably the reason for the tsunami of developer action. So you're saying picks-and-shovels tools; don't try to replicate the platform, or what could be the platform. "Oh, go to a VC, I'm going to build a platform." No, no, no, no, those are going to get wiped away by the large language models. Is there one large language model that will rule the world, or do you see many coming? >> Yeah, so to be clear, I think there will be useful platforms. I just think a lot of people think that they're building, let's say, you know, if we put this in the cloud context, a specific type of EC2 instance. Well, it turns out that Amazon can offer that type of EC2 instance and immediately distribute it to all of their customers. So you don't want to be in the position of providing something that actually ends up looking like a feature, which in the context of AI might be like a small incremental improvement on the model. If that's all you're doing, you're a sandcastle business. Now, there's a lot of platform businesses that need to be built, that enable businesses to get to value and do things like: how do I monitor my models? How do I create better models with my given data sets? How do I ensure that my models are doing what I want them to do? How do I find the right models to use? There are all these sorts of platform-wide problems that certainly exist for businesses. I just think a lot of startups that I'm seeing right now are making the mistake of assuming the advances we're seeing are not going to accelerate or even get better.
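One of the platform problems Joseph names, "how do I monitor my models?", reduces to a surprisingly small core. This is a hypothetical toy monitor, not any particular product's method: compare the distribution of labels a model is currently producing against a baseline window, and flag drift when the two distributions diverge past a threshold.

```python
from collections import Counter

def label_distribution(predictions):
    """Normalize a list of predicted labels into a probability distribution."""
    total = len(predictions)
    return {label: n / total for label, n in Counter(predictions).items()}

def total_variation(p, q):
    """Total variation distance between two label distributions (0 to 1)."""
    labels = set(p) | set(q)
    return 0.5 * sum(abs(p.get(l, 0) - q.get(l, 0)) for l in labels)

def drifted(baseline, recent, threshold=0.2):
    """Flag drift when the recent window diverges from the baseline window."""
    return total_variation(label_distribution(baseline),
                           label_distribution(recent)) > threshold

baseline = ["in_stock"] * 90 + ["out_of_stock"] * 10
recent = ["in_stock"] * 60 + ["out_of_stock"] * 40
print(drifted(baseline, recent))  # True: out_of_stock rate jumped 10% to 40%
```

Production monitoring tools layer a lot on top of this (feature drift, segment analysis, alert routing), but the comparison-against-baseline loop is the kernel of the problem.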
>> So if I'm a customer, if I'm a company, say I'm a startup or an enterprise, either one, same question. And I want to stand up, and I have developers working on stuff, I want to start standing up an environment to start doing stuff. Is that a service provider? Is that a managed service? Is that you guys? So how do you guys fit in when your customers lean in? Is it just for developers? Are you targeting them with a specific managed service? What's the product consumption? How do you talk to customers when they come to you? >> The thing that we do is enable, we give developers superpowers to build automated inventory tracking, self-checkout systems, identify if this image is malignant cancer or benign, ensure that these products that I've produced are correct, make sure that the defect that might exist on this electric vehicle makes its way back for review. All these sorts of problems are immediately able to be solved and tackled. In terms of the managed services element, we have solutions integrators that will often build on top of our tools, or we'll have companies that look to us for guidance, but ultimately the company is in control of developing and building and creating these capabilities in house. I really think the distinction is maybe less around managed service versus tool, and more around ownership in the era of AI. So for example, if I'm using a managed service, and in that managed service part of their benefit is that they are learning across their customer sets, then it's a very different relationship than using a managed service where I'm developing some amount of proprietary advantage from my data sets. And I think that's a really important thing that companies are becoming attuned to: just the value of the data that they have. And so that's what we do.
We tell companies: you have this proprietary, immense treasure trove of data; use that to your advantage, and think about us more like a set of tools that enable you to get value from that capability. You know, the HashiCorps and GitLabs of the world have proven what these businesses look like at scale. >> And you're targeting developers. When you go into a company, do you target developers with freemium? Is there a paid service? Talk about the business model real quick. >> Sure, yeah. The tools are free to use and get started. When someone signs up for Roboflow, they may elect to make their work open source, in which case we're able to provide even more generous usage limits, to basically move the computer vision community forward. If you elect to make your data private, you can use our hosted dataset management, dataset training, model deployment, and annotation tooling up to some limits. And then usually when someone validates that what they're doing gets them value, they purchase a subscription license to be able to scale up those capabilities. So like most developer-centric products, it's free to get started, free to prove, free to poke around and develop what you think is possible. And then once you're getting to value, then we're able to capture the commercial upside in the value that's being provided. >> Love the business model. It's right in line with where the market is. There are kind of no standards bodies these days. The developers are the ones who are deciding kind of what the standards are by their adoption. I think making it easy for developers to get value, as the open source models continue to grow, you can see more of that. Great perspective, Joseph, thanks for sharing that. Put a plug in for the company. What are you guys doing right now? Where are you in your growth? What are you looking for? How should people engage? Give the quick commercial for the company.
>> So as I mentioned, Roboflow is, I think, one of the largest, if not the largest, collections of computer vision models and data sets that are open source and available on the web today, and we have a private set of tools that over half the Fortune 100 now rely on. So we're at the stage now where we know people want what we're working on, and we're continuing to drive that type of adoption. So companies that are looking to make better models, improve their data sets, train and deploy, often will get a lot of value from our tools, and should certainly reach out to talk. I'm sure there's a lot of talented engineers tuning in too; we're aggressively hiring. So if you are interested in being a part of making the world programmable, and being at the ground floor of the company that's creating these capabilities writ large, we'd love to hear from you. >> Amazing, Joseph, thanks so much for coming on and being part of the AWS Startup Showcase. Man, if I was in my twenties, I'd be knocking on your door, because it's the hottest trend right now, it's super exciting. Generative AI is just the beginning of a massive sea change. Congratulations on all your success, and we'll be following you guys. Thanks for spending the time, really appreciate it. >> Thanks for having me. >> Okay, this is season three, episode one of the ongoing series covering the exciting startups from the AWS ecosystem, talking about the hottest things in tech. I'm John Furrier, your host. Thanks for watching. (chill electronic music)
Adam Wenchel & John Dickerson, Arthur | AWS Startup Showcase S3 E1
(upbeat music) >> Welcome everyone to theCUBE's presentation of the AWS Startup Showcase, AI Machine Learning Top Startups Building Generative AI on AWS. This is season 3, episode 1 of the ongoing series covering the exciting startups from the AWS ecosystem, talking about AI and machine learning. I'm your host, John Furrier. I'm joined by two great guests here, Adam Wenchel, who's the CEO of Arthur, and the Chief Scientist of Arthur, John Dickerson. We'll talk about how they help people build better LLM AI systems and get them into the market faster. Gentlemen, thank you for coming on. >> Yeah, thanks for having us, John. >> Well, I got to say I've got to temper my enthusiasm, because the last few months' explosion of interest in LLMs, with ChatGPT, has opened everybody's eyes to the reality that this is going next gen. This is it, this is the moment, this is the point we're going to look back on and say, this is the time when AI really hit the scene for real applications. So, a lot of Large Language Models, also known as LLMs, foundational models, and generative AI is all booming. This is where all the alpha developers are going. This is where everyone's focusing their business model transformations. This is where developers are seeing action. So it's all happening, the wave is here. So I've got to ask you guys, what are you seeing right now? You're in the middle of it, it's hitting you guys right on. You're at the front end of this massive wave. >> Yeah, John, I don't think you have to temper your enthusiasm at all. I mean, what we're seeing every single day is everything from existing enterprise customers coming in with new ways that they're rethinking, like, business things that they've been doing for many years that they can now do in an entirely different way, as well as all manner of new companies popping up, applying LLMs to everything from generating code and SQL statements to generating health transcripts and legal briefs. Everything you can imagine.
And when you actually sit down and look at these systems and the demos we get of them, the hype is definitely justified. It's pretty amazing what they're going to do. And even just internally, about a month ago in January, we built an Arthur chatbot so customers could ask technical questions; rather than read our product documentation, they could just ask this LLM a particular question and get an answer. And at the time it was state of the art, but then just last week we decided to rebuild it, because the tooling has changed so much. Last week we completely rebuilt it, and it's now way better, built on an entirely different stack. The tooling has undergone a full generation's worth of change in six weeks, which is crazy. So it just tells you how much energy is going into this and how fast it's evolving right now. >> John, weigh in as the chief scientist. I mean, you must be blown away. Talk about a kid in the candy store. You must be super busy to begin with, but the change, the acceleration, can you scope the kind of change you're seeing, and be specific around the areas where you're seeing movement and highly accelerated change? >> Yeah, definitely. And it is very, very exciting actually. Thinking back to when ChatGPT was announced, that was the night our company was throwing an event at NeurIPS, which is maybe the biggest machine learning conference out there. And the hype when that happened was palpable, and it was just shocking to see how well it performed. And then obviously over the last few months since then, as LLMs have continued to enter the market, we've seen use cases for them, like Adam mentioned, all over the place. And so, some things I'm excited about in this space are the use of LLMs and, more generally, foundation models to redesign traditional operations research style problems, logistics problems, like auctions, decisioning problems.
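The Arthur docs chatbot Adam describes is a good concrete anchor for all of this. The transcript doesn't say how it was built, but that kind of "ask the documentation a question" assistant is commonly implemented as retrieval-augmented generation: fetch the most relevant doc chunk, then hand it to an LLM as grounding context. A minimal, hypothetical sketch, with a toy keyword retriever and the LLM call stubbed out (none of these names are Arthur's actual stack):

```python
import re

# Hypothetical documentation chunks, standing in for a real docs corpus.
DOCS = [
    "Arthur monitors model performance and alerts on data drift.",
    "To create an API key, open the settings page and click New Key.",
    "Dashboards visualize inference volume over time.",
]

def tokens(text):
    """Lowercase word tokens, ignoring punctuation."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(question, docs):
    """Toy keyword retriever: return the chunk with the most word overlap."""
    q = tokens(question)
    return max(docs, key=lambda d: len(q & tokens(d)))

def answer(question, docs, llm=None):
    """Build a grounded prompt; a real system would send it to a hosted LLM."""
    context = retrieve(question, docs)
    prompt = f"Answer using only this documentation:\n{context}\n\nQ: {question}"
    return llm(prompt) if llm is not None else context  # stub: raw chunk

print(retrieve("How do I create an API key?", DOCS))
# To create an API key, open the settings page and click New Key.
```

A production version would swap the keyword overlap for embedding similarity and wire `llm` to an actual model API, which is exactly the layer of the stack Adam says churned a full generation in six weeks.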
So moving beyond the already amazing use cases, like creating marketing content, into more core integration with a lot of the bread-and-butter companies and tasks that drive the American economy. And I think we're just starting to see some of that. And in the next 12 months, I think we're going to see a lot more. If I had to make other predictions, I think we're going to continue seeing a lot of work being done on managing inference-time costs via shrinking models, or distillation. And I don't know how to time this prediction, but at some point we're going to be seeing lots of these very large scale models operating on the edge as well. So the time scales are extremely compressed; like Adam mentioned, 12 months from now is hard to say. >> We were talking on theCUBE prior to this session here. We had theCUBE conversation here, and then the Wall Street Journal just picked up on the same theme, which is that the printing press moment created the Enlightenment stage of history. Here we're in a whole nother stage: automating intellect, efficiency, doing the heavy lifting, the creative class coming back, a whole nother level of reality around the corner that's being hyped up. The question is, is this justified? Is there really a breakthrough here, or is this just another result of continued progress with AI? Can you guys weigh in? Because there's two schools of thought. There's the "Oh my God, we're entering a new enlightenment tech phase, the equivalent of the printing press in all areas." Then there's, "Ah, it's just AI (indistinct) inch by inch." What's your guys' opinion? >> Yeah, I think on the one hand, when you're down in the weeds of building AI systems all day, every day, like we are, it's easy to look at this as incremental progress. Like, we have customers who've been building on foundation models since we started the company four years ago, particularly in computer vision for classification tasks, starting with pre-trained models, things like that.
So that part of it doesn't feel really new. But what does feel new is, when you apply these things to language, with all the breakthroughs in computational efficiency, algorithmic improvements, things like that, and you actually sit down and interact with ChatGPT or one of the other systems out there that's building on top of LLMs, it really is breathtaking: the level of understanding that they have, and how quickly you can accelerate your development efforts and get an actual working system in place that solves a really important real world problem and makes people way faster, way more efficient. So I do think there's definitely something there. It's more than just incremental improvement. This feels like a real trajectory inflection point for the adoption of AI. >> John, what's your take on this? As people come into the field, I'm seeing a lot of people move from, hey, I've been coding in Python, I've been doing some development, I've been a software engineer, I'm a computer science student, I'm coding in C++, old school, OG systems person. Where do they come in? Where's the focus, where's the action? Where are the breakthroughs? Where are people jumping in and rolling up their sleeves and getting dirty with this stuff? >> Yeah, all over the place. And it's funny you mention students; in a different life, I wore a university professor hat, and so I'm very, very familiar with the teaching aspects of this. And I will say, toward Adam's point, this really is a leap forward, in that techniques like a copilot, for example, everybody's using them right now, and they really do accelerate the way that we develop. When I think about the areas where people are really, really focusing right now, tooling is certainly one of them. Like, you and I were chatting about LangChain right before this interview started; two or three people can sit down and create an amazing set of pipes that connect different aspects of the LLM ecosystem.
Two, I would say, is in engineering. So distributed training might be one, or just understanding better ways to train large models, understanding better ways to then distill them or run them. So there's this heavy interaction now between engineering and what I might call traditional machine learning from 10 years ago, where you had to know a lot of math, you had to know calculus very well, things like that. Now you also need to be, again, a very strong engineer, which is exciting. >> I interviewed Swami, who's the head of Amazon's machine learning and AI, when they made the Hugging Face announcement. And I reminded him how Amazon was easy to get into if you were developing a startup back in 2007, 2008, and that the language models had a similar problem: it took a lot of setup and a lot of expense to get provisioned up. Now it's easy. So this is the next wave of innovation. So how do you guys see that from where we are right now? Are we at that point, that moment where it's a cloud-like experience for LLMs and large language models? >> Yeah, go ahead, John. >> I think the answer is yes. We see a number of large companies that are training these and serving these, some of which are being co-interviewed in this episode. I think we're at that point. Like, you can hit one of these with a simple, single line of Python, hitting an API. You can boot this up in seconds if you want. It's easy. >> Got it. >> So I (audio cuts out). >> Well, let's take a step back and talk about the company. You guys are being featured here on the Showcase, Arthur. What drove you to start the company? How'd this all come together? What's the origination story? Obviously you've got big customers. How'd it get started? What are you guys doing? How do you make money? Give a quick overview. >> Yeah, I think John and I come at it from slightly different angles, but for myself, I have been a part of a number of technology companies.
I joined Capital One when they acquired my last company, and shortly after I joined, they asked me to start their AI team. And so even though I've been doing AI for a long time (I started my career back at DARPA), it was the first time I was really working at scale in AI at an organization where there were hundreds of millions of dollars in revenue at stake with the operation of these models, and where they were impacting millions of people's financial livelihoods. And so it just got me hyper-focused on these issues around making sure that your AI worked well, and it worked well for your company, and it worked well for the people who were being affected by it. At the time when I was doing this, 2016, 2017, 2018, there just wasn't any tooling out there to support this production management, model monitoring phase of the life cycle. And so we basically left to start the company that I wanted. And John has his own story. I'll let you share that one, John. >> Go ahead, John, you're up. >> Yeah, so I'm coming at this from a different world. I'm on leave now from a tenured role in academia, where I was leading a large lab focusing on the intersection of machine learning and economics. And so questions like fairness, or the response to the dynamism of the underlying environment, have been around for quite a long time in that space. And so I've been thinking very deeply about some of those more R&D-style questions, as well as having deployed some automation code across a couple of different industries, some in online advertising, some in the healthcare space and so on, where concerns of, again, fairness come to bear. And so Adam and I connected to understand the space of what that might look like in the 2018-2019 timeframe, from a quantitative and from a human-centered point of view. And so we booted things up from there. >> Yeah, bringing that applied engineering R&D into the Capital One DNA that he had at scale. I could see that fit.
I've got to ask you now, next step: as you guys move out and think about LLMs and the recent AI news around the generative models and the foundational models like ChatGPT, how should we be looking at that news? Everyone watching might be thinking the same thing. I know at the board level, companies are like, we should refactor our business, this is the future. It's that kind of moment, and the tech team's like, okay, boss, how do we do this again? Or are they prepared? How should we be thinking? How should people watching be thinking about LLMs? >> Yeah, I think they really are transformative. And so, I mean, we're seeing companies all over the place, everything from large tech companies to a lot of our large enterprise customers, launching significant projects at core parts of their business. And so, yeah, if you're serious about becoming an AI-native company, which most leading companies are, then this is a trend that you need to be taking seriously. And we're seeing the adoption rate. It's funny, I would say AI adoption in the broader business world really started, let's call it four or five years ago, and it was a relatively slow adoption rate, but I think all that investment in scaling the maturity curve has paid off, because the rate at which people are adopting and deploying systems based on this is tremendous. I mean, this has all just happened in the last few months, and we're already seeing people get systems into production. So now there's a lot of things you have to guarantee in order to put these in production in a way that adds value to your business and doesn't cause more headaches than it solves. And that's where we help customers: how do you put these out there in a way that they're going to represent your company well, they're going to perform well, they're going to do their job and do it properly. >> So in the use case, as a customer, as I think about this, there's workflows.
They might have had an ML/AIOps team that's around IT. Their inference engines are out there. They probably don't have visibility on, say, how much it costs; they're kicking the tires. When you look at the deployment, there's a cost piece, there's a workflow piece, there's fairness, you mentioned, John. What should I be thinking about if I'm going to be deploying stuff into production? I've got to think about those things. What's your opinion? >> Yeah, I'm happy to dive in on that one. So monitoring in general is extremely important once you have one of these LLMs in production, and there have been some changes versus traditional monitoring, which we can dive deeper into, that LLMs have really accelerated. But a lot of the bread and butter things you should be looking out for remain just as important as they are for what you might call traditional machine learning models. So the underlying environment of data streams, the way users interact with these models, these are all changing over time. And so any performance metrics that you care about need to be tracked: traditional ones like accuracy, if you can define that for an LLM; ones around, for example, fairness or bias, if that is a concern for your particular use case; and so on. Now, there are some interesting changes that LLMs are bringing along as well. So most ML models in production that we see are relatively static, in the sense that they're not getting swapped out more than maybe once a day or once a week, or they're just set once and then not changed ever again. With LLMs, there's this ongoing value alignment, or collection of preferences from users, that is often constantly updating the model. And so that opens up all sorts of vectors for, I won't say attack, but for problems to arise in production. Like, users might learn to use your system in a different way, and thus change the way those preferences are getting collected, and thus change your system in ways that you never intended.
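As a concrete illustration of the input-drift monitoring John is describing, here is a minimal sketch. To be clear, this is not Arthur's product or API; it only shows the idea of comparing a production sample against a baseline with the population stability index, a common drift statistic. The prompt-length feature, the bucket edges, and the 0.2 alert threshold are all assumptions made up for this example.

```python
import math

def psi(baseline, production, edges):
    """Population stability index between two samples, given bucket edges.
    Values above roughly 0.2 are commonly read as significant drift."""
    def frac(sample):
        counts = [0] * (len(edges) + 1)
        for x in sample:
            i = sum(1 for e in edges if x > e)  # bucket index for x
            counts[i] += 1
        # small floor avoids log(0) for empty buckets
        return [max(c / len(sample), 1e-6) for c in counts]
    b, p = frac(baseline), frac(production)
    return sum((pi - bi) * math.log(pi / bi) for bi, pi in zip(b, p))

# Toy example: prompts got much longer after users "learned" the system.
baseline_lengths = [12, 15, 18, 20, 22, 25, 28, 30]
drifted_lengths = [80, 95, 110, 120, 130, 150, 160, 200]
edges = [25, 50, 100]

assert psi(baseline_lengths, baseline_lengths, edges) < 0.01  # no drift vs itself
assert psi(baseline_lengths, drifted_lengths, edges) > 0.2    # flags drift
```

In practice you would track a statistic like this over many features of the prompts and responses, not just length, and alert when it crosses a threshold.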
So maybe that went through governance already, internally at the company, and now it's totally, totally changed, and it's through no fault of your own, but you need to be watching over that for sure. >> Talk about reinforcement learning from human feedback. How's that factoring into the LLMs? Is that part of it? Should people be thinking about that? Is that a component that's important? >> It certainly is, yeah. So this is one of the big tweaks that happened with InstructGPT, which is the base model behind ChatGPT and has since gone on to be used all over the place. So value alignment through RLHF, like you mentioned, I think is a very interesting space to get into, and it's one that you need to watch over. Like, you're asking humans for feedback on outputs from a model, and then you're updating the model with respect to that human feedback. And now you've thrown humans into the loop here in a way that is just going to complicate things. And it certainly helps in many ways. Let's say that you're deploying an internal chatbot at an enterprise: you could ask humans to align the LLM behind that chatbot to, say, company values. And so you're eliciting feedback about those company values, and that's going to scoot the chatbot that you're running internally more toward the kind of language that you'd like to use internally, on a Slack channel or something like that. Watching over that model, I think in that specific case, is a compliance and HR issue as well. So while it is part of the greater LLM stack, you can also view that as an independent bit to watch over. >> Got it, and these are important factors. When people see the Bing news, they freak out. It's doing great, then it goes off the rails; it goes big, fails big. (laughing) So these models, people see that. Is that human interaction, or is that feedback, is that not accepting it? How do people understand how to take that input in and how to build the right apps around LLMs?
This is a tough question. >> Yeah, for sure. So some of the examples that you'll see online where these chatbots go off the rails are obviously humans trying to break the system, but some of them clearly aren't. And that's because these are large statistical models, and we don't know what's going to pop out of them all the time. And even if you're doing as much in-house testing as the big companies like the Coheres and the OpenAIs of the world, to try to prevent things like toxicity or racism or other sorts of bad content that might lead to bad PR, you're never going to catch all of the possible holes in the model itself. And so, again, it's very, very important to keep watching over that while it's in production. >> On the business model side, how are you guys doing? What's the approach? How do you guys engage with customers? Take a minute to explain the customer engagement. What do they need? What do you need? How's that work? >> Yeah, I can talk a little bit about that. So it's really easy to get started. It's literally a matter of just handing out an API key, and people can get started. We also offer versions that can be installed on-prem, because we find a lot of our customers have models that deal with very sensitive data. So you can run it in your cloud account or use our cloud version. And so yeah, it's pretty easy to get started with this stuff. We find people start using it a lot of times during the validation phase, 'cause that way they can start baselining the performance of models, they can do champion/challenger, maybe they're considering different foundation models. And so it's a really helpful tool for understanding differences in the way these models perform.
And then from there, they can just flow that into their production inferencing, so that as these systems are out there, you have real-time monitoring for anomalies and all sorts of weird behaviors, as well as that continuous feedback loop that helps you make your product better, and observability, and you can run all sorts of aggregated reports to really understand what's going on with these models when they're out there making decisions. I should also add that, as of today, there's another way to adopt Arthur: we are in the AWS Marketplace, and so we are available there, just to make it that much easier to use your cloud credits, skip the procurement process, and get up and running really quickly. >> And that's great, 'cause Amazon's got SageMaker, which handles a lot of privacy stuff, all kinds of cool things, or you can get down and dirty. So I've got to ask on the next one: production is a big deal, getting stuff into production. What have you guys learned that you could share with folks watching? Is there a cost issue? I've got to monitor, obviously, you brought that up. We talked about even the reinforcement issues. All these things are happening. What are the big learnings that you could share for people that are going to put these into production, to watch out for, to plan for, or be prepared for? Hope for the best, plan for the worst. What's your advice? >> I can give a couple opinions there, and I'm sure Adam has his own. Well, yeah, the big one from my side is, again, I had mentioned this earlier, it's just the input data streams, because humans are also exploring how they can use these systems to begin with. It's really, really hard to predict the type of inputs you're going to be seeing in production.
Especially, we always talk about chatbots, but for any generative text task like this, let's say you're taking in news articles and summarizing them or something like that, it's very hard to get a good sampling, even of the set of news articles, in such a way that you can really predict what's going to pop out of that model. So to me, adversarial maybe isn't the word that I would use, but it's an unnatural, shifting input distribution of prompts that you might see for these models. That's certainly one. And then the second one that I would talk about is, it can be hard to understand the inference-time costs behind these LLMs. So the pricing on these is always changing. As the models change size, it might go up, it might go down, based on model size, based on energy cost and so on, but you're pricing per token or per thousand tokens, and that, I think, can be difficult for some clients to wrap their heads around. Again, you don't know how these systems are going to be used, after all, so it can be tough. And so, again, that's another metric that really should be tracked. >> Yeah, and there's a lot of trade-off choices in there, like how many tokens do you want at each step in the sequence, and based on, you have (indistinct) and you reject these tokens, and so based on how your system's operating, that can make the cost highly variable. And that's if you're using an API version where you're paying per token. A lot of people also choose to run these internally, and as John mentioned, the inference time on these is significantly higher than a traditional classifi-, even an NLP classification model or tabular data model, like orders of magnitude higher. And so you really need to understand, as you're constantly iterating on these models and putting out new versions and new features, how that's affecting the overall scale of that inference cost, because you can use a lot of computing power very quickly with these prompts.
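The per-token pricing point is easy to make concrete with a back-of-the-envelope calculation. The sketch below is just that: the request volumes and per-thousand-token prices are hypothetical numbers invented for the example, not any vendor's actual rates.

```python
def monthly_token_cost(requests_per_day, avg_prompt_tokens, avg_completion_tokens,
                       price_per_1k_prompt, price_per_1k_completion, days=30):
    """Rough monthly spend for a pay-per-token API, priced per 1,000 tokens."""
    daily = requests_per_day * (
        avg_prompt_tokens / 1000 * price_per_1k_prompt
        + avg_completion_tokens / 1000 * price_per_1k_completion
    )
    return daily * days

# Hypothetical workload: 50k requests/day, 400 prompt + 300 completion tokens,
# at $0.0015 per 1k prompt tokens and $0.002 per 1k completion tokens.
cost = monthly_token_cost(50_000, 400, 300, 0.0015, 0.002)
assert round(cost) == 1800  # about $1,800/month under these assumptions
```

Note how sensitive the result is to average completion length, which is exactly the "you don't know how these systems are going to be used" problem: users who elicit longer outputs move the bill directly.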
>> Yeah, scale, performance, price all come together. I've got to ask, while we're here, about the secret sauce of the company. If you had to describe it to people out there watching, what's the secret sauce of the company? What's the key to your success? >> Yeah, so John leads our research team, and they've had a number of really cool results. I think AI, as much as it's been hyped for a while, commercial AI at least is really still in its infancy. And so the way we're able to pioneer new ways to think about performance for computer vision, NLP, and LLMs is probably the thing that I'm proudest about. John and his team publish papers all the time at NeurIPS and other places. But I think it's really being able to define what performance means for basically any kind of model type, and give people really powerful tools to understand that on an ongoing basis. >> John, secret sauce, how would you describe it? You've got all the action happening all around you. >> Yeah, well, I've got to appreciate Adam talking me up like that. No, I. (all laughing) >> Furrier: Props to you. >> I would also say a couple of other things here. So we have a very strong engineering team, and I think some early hires there really set the standard at a very high bar that we've maintained as we've grown. And I think that's really paid dividends as scalability has become even more of a challenge in these spaces, right? And that's not just scalability when it comes to LLMs; that's scalability when it comes to millions of inferences per day, that kind of thing, in traditional ML models as well. And I think, compared to potential competitors, that's really... Well, it's made us able to just operate more efficiently and pass that along to the client. >> Yeah, and I think the infancy comment is really important, because it's the beginning. There really is a long journey ahead. A lot of change coming. Like I said, it's a huge wave.
So I'm sure you guys have got a lot of planning at the foundation, even for your own company, so I appreciate the candid response there. Final question for you guys: what should the top things be for a company in 2023? If I'm going to set the agenda and I'm a customer moving forward, putting the pedal to the metal, so to speak, what are the top things I should be prioritizing, or that I need to do, to be successful with AI in 2023? >> Yeah, I think, so number one, as we've been talking about this entire episode, things are changing so quickly, and the opportunities for business transformation, and for really disrupting different applications, different use cases, are almost, I don't think we've even fully comprehended how big they are. And so really digging into your business and understanding where you can apply these new sets of foundation models, that's a top priority. The interesting thing is, I think there's another force at play, which is the macroeconomic conditions, and a lot of places are having to work harder to justify budgets. So in the past, a couple years ago maybe, they had a blank check to spend on AI and AI development at a lot of large enterprises, limited primarily by the amount of talent they could scoop up. Nowadays these expenditures are getting scrutinized more. And so one of the things that we really help our customers with is calculating the ROI on these things. And so if you have models out there performing, and you have a new version that you can put out that lifts the performance by 3%, how many tens of millions of dollars does that mean in business benefit? Or if I want to get approval from the CFO to spend a few million dollars on this new project, how can I bake in, from the beginning, the tools to really show the ROI along the way? Because I think with these systems, when done well, for a software project the ROI can be pretty spectacular.
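That ROI framing amounts to a one-line calculation. The sketch below is purely illustrative: the baseline business value, the 3% lift, and the project cost are assumptions invented for the example, not customer data, and real ROI models would of course account for ongoing inference and maintenance costs too.

```python
def model_roi(baseline_annual_value, lift, project_cost):
    """Simple first-year ROI for a model improvement: incremental business
    value from a performance lift, divided by what the project cost, minus 1."""
    incremental_value = baseline_annual_value * lift
    return incremental_value / project_cost - 1.0

# Hypothetical: a model influencing $500M of decisions, a 3% lift, a $5M project.
roi = model_roi(500_000_000, 0.03, 5_000_000)
assert abs(roi - 2.0) < 1e-9  # roughly 200% first-year ROI under these assumptions
```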
Like, we see over a hundred percent ROI in the first year on some of these projects. And so, I think in 2023, you just need to be able to show what you're getting for that spend. >> It's a needle-moving moment. You see it all the time with some of these aha moments, or like, whoa, blown away. John, I want to get your thoughts on this, because one of the things that comes up a lot for companies that I talk to, that are on what I would call the second wave coming in, maybe not the front wave of adopters, is talent and team building. You mentioned some of the hires you got were game-changing for you guys and set the bar high. As you move the needle, new developers are going to need to come in. What's your advice, given that you've been a professor and you've seen students? I know a lot of computer science people want to shift; they might not yet be skilled in AI, but they're proficient in programming, and that's going to be another opportunity with open source and everything that's happening. How do you talk to that next level of talent that wants to come into this market, to supplement teams, be on teams, lead teams? Any advice you have for people who want to build their teams, and for people who are out there and want to be a coder in AI? >> Yeah, I have advice, and this actually works for what it would take to be a successful AI company in 2023 as well, which is: just don't be afraid to iterate really quickly with these tools. The space is still being explored in terms of what they can be used for. A lot of the tasks that they're used for now, right? Like, creating marketing content using machine learning is not a new thing to do. It just works really well now. And so I'm excited to see what the next year brings in terms of folks from outside of core computer science, other engineers or physicists or chemists or whoever, who are learning how to use these increasingly easy-to-use tools to leverage LLMs for tasks that I think none of us have really thought about before.
So that's really, really exciting. And so, toward that, I would say iterate quickly. Build things on your own, build demos, show them to friends, host them online, and you'll learn along the way and you'll have something to show for it. And also, you'll help us explore that space. >> Guys, congratulations with Arthur. Great company, great picks-and-shovels opportunities out there for everybody. Iterate fast, get in quickly, and don't be afraid to iterate. Great advice, and thank you for coming on and being part of the AWS Showcase. Thanks. >> Yeah, thanks for having us on, John. Always a pleasure. >> Yeah, great stuff. Adam Wenchel and John Dickerson with Arthur. Thanks for coming on theCUBE. I'm John Furrier, your host. Generative AI and AWS. Keep it right there for more action with theCUBE. Thanks for watching. (upbeat music)
Jay Marshall, Neural Magic | AWS Startup Showcase S3E1
(upbeat music) >> Hello, everyone, and welcome to theCUBE's presentation of the "AWS Startup Showcase." This is season three, episode one. The focus of this episode is AI/ML: Top Startups Building Foundational Models, Infrastructure, and AI. These are great topics, super-relevant, and it's part of our ongoing coverage of startups in the AWS ecosystem. I'm your host, John Furrier, with theCUBE. Today, we're excited to be joined by Jay Marshall, VP of Business Development at Neural Magic. Jay, thanks for coming on theCUBE. >> Hey, John, thanks so much. Thanks for having us. >> We had a great CUBE conversation with you guys. This is very much about the company focus. It's a feature presentation for the "Startup Showcase," and machine learning at scale is the topic, but in general, it's more, (laughs) and we should call it "Machine Learning and AI: How to Get Started," because everybody is retooling their business. Companies that aren't retooling their business right now with AI first will be out of business, in my opinion. You're seeing a massive shift. This is really, truly the beginning of the next-gen machine learning and AI trend. You're seeing it with ChatGPT. Everyone sees that. That went mainstream. But this is just the beginning. This is scratching the surface of this next-generation AI, with machine learning powering it, and with all the goodness of cloud, cloud scale, and how horizontally scalable it is. The resources are there. You've got the edge. Everything's perfect for AI, 'cause data infrastructure's exploding in value. AI is just the applications. This is a super topic, so what do you guys see in this general area of opportunities right now in the headlines? And I'm sure you guys' phone must be ringing off the hook, metaphorically speaking, or emails and meetings and Zooms. What's going on over there at Neural Magic? >> No, absolutely, and you pretty much nailed most of it. I think that, you know, with my background, we've seen this for the last 20-plus years.
Even just getting enterprise applications built and delivered at scale: obviously, amazing things with AWS and the cloud have helped accelerate that. And we just figured out in the last five or so years how to do that productively and efficiently, kind of from an operations perspective. We've got development and operations teams; we even came up with DevOps, right? But now we have this new persona and new workload that developers have to talk to, and then it has to be deployed on those ITOps solutions. And so you pretty much nailed it. Folks are saying, "Well, how do I do this?" These big generative models, or foundational models as we're calling them, are great, but enterprises want to do that with their data, on their infrastructure, at scale, at the edge. So for us, yeah, we're helping enterprises accelerate that through optimizing models and then delivering them at scale in a more cost-effective fashion. >> Yeah, and I think one of the benefits of OpenAI we saw was that, not only is it open source, and you've also got other models that are more proprietary, it shows the world that this is really happening, right? It's a whole nother level, and there are also new landscape kind of maps coming out. You've got the generative AI, and you've got the foundational models, large LLMs. Where do you guys fit into the landscape? Because you guys are in the middle of this. How do you talk to customers when they say, "I'm going down this road. I need help. I'm going to stand this up," this new AI infrastructure and applications? Where do you guys fit in the landscape? >> Right, and really, the answer is both. I think today, when it comes to a lot of what for some folks would still be considered cutting edge around computer vision and natural language processing, a lot of our optimization tools and our runtime are based around most of the common computer vision and natural language processing models.
So your YOLOs, your BERTs, you know, your DistilBERTs and what have you; we work to help optimize those, which have gotten great performance and great value for customers trying to get them into production. But when you get into the LLMs, and you mentioned some of the open source components there, our research teams have been right in the trenches with those. So with the GPT open source equivalent being OPT, we're able to actually take a multi-hundred-billion-parameter model and sparsify it, optimize it down, shaving away a ton of parameters, and be able to run it on smaller infrastructure. So I think the evolution here, you know, all this stuff came out in the last six months in terms of being turned loose into the wild, but we're staying in the trenches with folks so that we can help optimize those as well, and not require, again, the heavy compute, the heavy cost, the heavy power consumption as those models evolve. So we're staying right in with everybody while they're being built, but trying to get folks into production today with things that help with business value today. >> Jay, I really appreciate you coming on theCUBE, and before we came on camera, you said you were just on a customer call. I know you've got a lot of activity. What specific things are you helping enterprises solve? What kind of problems? Take us through the spectrum from the beginning: people jumping in the deep end of the pool, some people kind of coming in, starting out slow. What's the scale? Can you scope the kinds of use cases and problems that are emerging, that people are calling you for? >> Absolutely, so I think if I break it down to, like, your startups, or I'll maybe call them AI native, to kind of steal from cloud native years ago, that group, it's pretty much part and parcel for how that group already runs.
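The sparsification Jay mentions can be illustrated with simple magnitude pruning: zero out the smallest-magnitude weights and keep the rest. This toy sketch is not Neural Magic's actual tooling (their approach involves structured, accuracy-aware pruning during training); it only shows the basic idea, and the weight values and 50% sparsity level are made-up examples.

```python
def magnitude_prune(weights, sparsity):
    """Zero out the `sparsity` fraction of weights with smallest magnitude."""
    n_prune = int(len(weights) * sparsity)
    # indices of the weights with the smallest absolute values
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    pruned = list(weights)
    for i in order[:n_prune]:
        pruned[i] = 0.0
    return pruned

w = [0.9, -0.05, 0.4, 0.01, -0.7, 0.002, 0.3, -0.08]
p = magnitude_prune(w, 0.5)  # prune half the weights

assert p.count(0.0) == 4
assert p[0] == 0.9 and p[4] == -0.7  # largest-magnitude weights survive
```

The payoff on CPUs comes from skipping the zeroed weights entirely at inference time: a kernel that multiplies only the surviving nonzero weights does a fraction of the work of the dense model.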
So if you have a data science team and an ML engineering team, you're building models, you're training models, you're deploying models. You're seeing firsthand the expense of starting to try to do that at scale. So it's really just a pure operational efficiency play. They kind of speak natively to our tools, which we're doing in the open source. So it's really helping, again, with the optimization of the models they've built, and then giving them an alternative to the expensive, proprietary hardware accelerators they'd otherwise have to run them on. Now, on the enterprise side, it varies, right? You have some AI native folks there that already have these teams, but you also have the AI curious, right? Like, they want to do it, but they don't really know where to start. And so for them, we actually have an open source toolkit that can help you get into this optimization, and then again, that inferencing runtime, purpose-built for CPUs. It allows you to not have to worry, again, about: do I have a hardware accelerator available? How do I integrate that into my application stack? If I don't already know how to build this into my infrastructure, do my ITOps teams know how to do this, and what does that runway look like? How do I cost for this? How do I plan for this? When it's just x86 compute, we've been doing that for a while, right? So it obviously still requires more, but at least it's a little bit more predictable. >> It's funny you mentioned AI native. You know, born in the cloud was a phrase that was out there. Now you have startups that are born-in-AI companies. So I think you have this kind of cloud vibe going on. Lift and shift was a big discussion; then you had cloud native, kind of in the cloud, kind of making it all work. Is there an existing set of things? People will throw on this hat, and then what's the difference between AI native and providing it to existing stuff?
'Cause a lot of people take some of these tools and apply them to existing stuff, and it's not really a lift and shift, but it's kind of like bolting on AI to something else, versus starting with AI first or native AI. >> Absolutely. It's a- >> How would you- >> It's a great question. I think where I'd probably pull back to is kind of retail-type scenarios where, you know, for five, seven, nine years or more even, a lot of these folks have already had data science teams, you know? I mean, they've been doing this for quite some time. The difference is the introduction of these neural networks and deep learning, right? Those kinds of models are just a little bit of a paradigm shift. So, you know, I obviously was trying to be fun with the term AI native, but I think it's more folks that kind of came up in that neural network world, so it's a little bit more second nature, whereas I think for maybe some traditional data scientists starting to get into neural networks, you have the complexity there and the training overhead, and a lot of the aspects of getting a model finely tuned and hyperparameterization and all of these aspects of it. It just adds a layer of complexity that they're just not as used to dealing with. And so our goal is to help make that easy, and then of course, make it easier to run anywhere that you have just kind of standard infrastructure. >> Well, the other point I'd bring out, and I'd love to get your reaction to, is not only is that a neural network team, people who have been focused on that, but also, if you look at some of the DataOps and, lately, AIOps markets, a lot of data engineering, a lot of scale, folks who have been kind of, like, in that data tsunami cloud world are seeing, they've kind of been in this, right? They've, like, been experiencing that. >> No doubt. I think it's funny the data lake concept, right? And you got data oceans now.
Like, the metaphors just keep growing on us, but where it is valuable in terms of trying to shift the mindset, I've always kind of been a fan of some of the naming shift. I know with AWS, they always talk about purpose-built databases. And I always liked that because, you know, you don't have one database that can do everything. Even ones that say they can, like, you still have to do implementation detail differences. So sitting back and saying, "What is my use case, and then which database will I use it for?" I think it's kind of similar here. And when you're building those data teams, if you don't have folks that are doing data engineering, kind of that data harvesting, pre-processing, you got to do all that before a model's even going to care about it. So yeah, it's definitely a central piece of this as well, and again, whether or not you're going to be AI native as you're making your way on that journey, you know, data's definitely a huge component of it. >> Yeah, you would have loved our Supercloud event we had. Talk about naming, you know, data meshes were talked about a lot. You're starting to see the control plane layers of data. I think that was the beginning of what I saw as that data infrastructure shift, to be horizontally scalable. So I have to ask you, with Neural Magic, when your customers and the people that are prospects for you guys, they're probably asking a lot of questions because I think the general thing that we see is, "How do I get started? Which GPU do I use?" I mean, there's a lot of things that are kind of, I won't say technical or targeted towards people who are living in that world, but, like, as the mainstream enterprises come in, they're going to need a playbook. What do you guys see, what do you guys offer your clients when they come in, and what do you recommend? >> Absolutely, and I think where we hook in specifically tends to be on the training side. So again, I've built a model.
Now, I want to really optimize that model. And then on the runtime side when you want to deploy it, you know, we run that optimized model. And so that's where we're able to provide value. We even have a labs offering in terms of being able to pair up our engineering teams with a customer's engineering teams, and we can actually help with most of that pipeline. So even if it is something where you have a dataset and you want some help in picking a model, you want some help training it, you want some help deploying that, we can actually help there as well. You know, there's also a great partner ecosystem out there, like a lot of folks even in the "Startup Showcase" here, that extends beyond into kind of your earlier comment around data engineering or downstream ITOps or the all-up MLOps umbrella. So we can absolutely engage with our labs, and then, of course, you know, again, partners, which are always kind of key to this. So you are spot on. I think what's happened here, they talk about a hockey stick. This is almost like a flat wall now with the rate of innovation right now in this space. And so we do have a lot of folks wanting to go straight from curious to native. And so that's definitely where the partner ecosystem comes in so hard 'cause there just isn't anybody or any teams out there that literally do everything from, "Here's my blank database, and I want an API that does all the stuff," right? Like, that's a big chunk, but we can definitely help with the model to delivery piece. >> Well, you guys are obviously a featured company in this space. Talk about the expertise. A lot of companies are like, I won't say faking it till they make it. You can't really fake security. You can't really fake AI, right? So there's going to be a learning curve. There'll be a few startups who'll come out of the gate early. You guys are one of 'em. Talk about what you guys have as expertise as a company, why you're successful, and what problems do you solve for customers?
>> No, appreciate that. Yeah, we actually, we love to tell the story of our founder, Nir Shavit. So he's a 20-year professor at MIT. Actually, he was doing a lot of work on kind of multicore processing before there were even physical multicores, and actually even did a stint in computational neurobiology in the 2010s, and the impetus for this whole technology, he has a great talk on YouTube about it, where he talks about the fact that through his work there, he kind of realized that the way neural networks encode and how they're executed by kind of ramming data layer by layer through these kind of HPC-style platforms actually was not analogous to how the human brain actually works. So on one side, we're building neural networks, and we're trying to emulate neurons. We're not really executing them that way. So our team, one of the co-founders also being ex-MIT, that was kind of the birth of: why can't we leverage this super-performant CPU platform, which has those really fat, fast caches attached to each core, and actually start to find a way to break that model down in a way that I can execute things in parallel, not having to do them sequentially? So there are a lot of amazing talks and stuff that show kind of the magic, if you will, pardon the pun of Neural Magic, but that's kind of the foundational layer of all the engineering that we do here. And in terms of how we're able to bring it to reality for customers, I'll give one customer quote where it's a large retailer, and it's a people-counting application. So a very common application. And that customer's actually been able to show literally double the amount of cameras being run with the same amount of compute. So from a one-to-one perspective to two-to-one, business leaders usually like that math, right?
So we're able to show pure cost savings, but even performance-wise, you know, we have some of the common models like your ResNets and your YOLOs, where we can actually even perform better than hardware-accelerated solutions. So we're trying to do, I need to just dumb it down to better, faster, cheaper, but from a commodity perspective, that's where we're accelerating. >> That's not a bad business model. Make things easier to use, faster, and reduce the steps it takes to do stuff. So, you know, that's always going to be a good market. Now, you guys have DeepSparse, which we've talked about on our CUBE conversation prior to this interview, delivers ML models through the software so the hardware allows for a decoupling, right? >> Yep. >> Which is going to drive probably a cost advantage. Also, it's also probably from a deployment standpoint it must be easier. Can you share the benefits? Is it a cost side? Is it more of a deployment? What are the benefits of the DeepSparse when you guys decouple the software from the hardware on the ML models? >> No you actually, you hit 'em both 'cause that really is primarily the value. Because ultimately, again, we're so early. And I came from this world in a prior life where I'm doing Java development, WebSphere, WebLogic, Tomcat open source, right? When we were trying to do innovation, we had innovation buckets, 'cause everybody wanted to be on the web and have their app and a browser, right? We got all the money we needed to build something and show, hey, look at the thing on the web, right? But when you had to get in production, that was the challenge. So to what you're speaking to here, in this situation, we're able to show we're just a Python package. So whether you just install it on the operating system itself, or we also have a containerized version you can drop on any container orchestration platform, so ECS or EKS on AWS. And so you get all the auto-scaling features. 
So when you think about that kind of a world where you have everything from real-time inferencing to kind of after-hours batch processing inferencing, the fact that you can auto-scale that hardware up and down and it's CPU-based, so you're paying by the minute instead of maybe paying by the hour at a lower cost shelf, it does everything from pure cost to, again, I can have my standard IT team say, "Hey, here's the Kubernetes in the container," and it just runs on the infrastructure we're already managing. So yeah, operational, cost, and many times even performance. (audio warbles) CPUs if I want to. >> Yeah, so that's easier on the deployment too. And you don't have this kind of, you know, blank check kind of situation where you don't know what's on the backend on the cost side. >> Exactly. >> And you control the actual hardware and you can manage that supply chain. >> And keep in mind, exactly. Because the other thing that sometimes gets lost in the conversation, depending on where a customer is, some of these workloads, like, you know, you and I remember a world where even, like, the roundtrip to the cloud and back was a problem for folks, right? We're used to extremely low latency. And some of these workloads absolutely also adhere to that. But there's some workloads where the latency isn't as important. And we actually even provide the tuning. Now, if we're giving you five milliseconds of latency and you don't need that, you can tune that back. So less CPU, lower cost. Now, throughput and other things come into play. But that's the kind of configurability and flexibility we give for operations. >> All right, so why should I call you if I'm a customer or prospect, Neural Magic? What problem do I have, or when do I know I need you guys? When do I call you in and what does my environment look like? When do I know? What are some of the signals that would tell me that I need Neural Magic? >> No, absolutely.
So I think in general, any neural network, you know, the process I mentioned before called sparsification, it's, you know, an optimization process that we specialize in. Any neural network, you know, can be sparsified. So I think if it's a deep-learning neural network type model, if you're trying to get AI into production, and you have cost concerns, even performance-wise. I certainly hate to be too generic and say, "Hey, we'll talk to everybody." But really in this world right now, if it's a neural network you're trying to get into production, you know, we are definitely offering, you know, kind of an at-scale, performant, deployable solution for deep learning models. >> So neural network you would define as what? Just devices that are connected that need to know about each other? What's the state-of-the-art current definition of neural network for customers that may think they have a neural network or might not know they have a neural network architecture? What is that definition for neural network? >> That's a great question. So basically, machine learning models that fall under this kind of category, you hear about transformers a lot, or I mentioned YOLO, the YOLO family of computer vision models, or natural language processing models like BERT. If you have a data science team or even developers, some even regular, I used to call myself a nine-to-five developer 'cause I worked in the enterprise, right? So like, hey, we found a new open source framework, you know, I used to use Spring back in the day and I had to go figure it out. There's developers that are pulling these models down and they're figuring out how to get 'em into production, okay? So I think all of those kinds of situations, you know, if it's a machine learning model of the deep learning variety, that's, you know, really specifically where we shine. >> Okay, so let me pretend I'm a customer for a minute.
I have all these videos, like all these transcripts, I have all these people that we've interviewed, CUBE alumni, and I say to my team, "Let's AI-ify, sparsify theCUBE." >> Yep. >> What do I do? I mean, do I just like, my developers got to get involved and they're going to be like, "Well, how do I upload it to the cloud? Do I use a GPU?" So there's a thought process. And I think a lot of companies are going through that example of let's get on this AI, how can it help our business? >> Absolutely. >> What does that progression look like? Take me through that example. I mean, I made theCUBE example up, but we do have a lot of data. We have large data models and we have people and connect to the internet and so we kind of seem like there's a neural network. I think every company might have a neural network in place. >> Well, and I was going to say, I think in general, you all probably do represent even the standard enterprise more than most. 'Cause even the enterprise is going to have a ton of video content, a ton of text content. So I think it's a great example. So I think that kind of sea, or I'll even go ahead and use that term data lake again, of data that you have, you're probably going to want to be setting up kind of machine learning pipelines that are going to be doing all of the pre-processing from kind of the raw data to kind of prepare it into the format that say a YOLO would actually use, or let's say BERT for natural language processing. So you have all these transcripts, right? So we would do a pre-processing path where we would create that into the file format that BERT, the machine learning model, would know how to train off of. So that's kind of all the pre-processing steps. And then for training itself, we actually enable what's called sparse transfer learning. Transfer learning is a very popular method of doing training with existing models.
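The flow Jay lays out, pre-process raw transcripts into a training format and then transfer-learn from an existing model, can be sketched as a pipeline skeleton. Every function below is a hypothetical stub standing in for real tooling (tokenizers, SparseML-style training, a DeepSparse-style runtime); none of the names are Neural Magic's actual API:

```python
# Pipeline skeleton for the transcript -> trained-model -> deployed-runtime flow
# described above. All helpers are illustrative stubs, not a real library.

def preprocess(transcripts):
    # Convert raw transcripts into training examples (stub: normalize text).
    return [(t.strip().lower(), None) for t in transcripts]

def sparse_transfer_learn(base_model, examples):
    # Retrain an already-sparsified base model on the new examples,
    # keeping its sparsity structure intact (stub: records the data size).
    return {"base": base_model, "trained_on": len(examples)}

def deploy(model):
    # Return a callable "runtime" that answers inference requests (stub).
    return lambda text: {"model": model["base"], "input": text, "sentiment": "positive"}

transcripts = ["Spark became a top level Apache project in 2014...",
               "We're going to do a deep dive into Databricks..."]
runtime = deploy(sparse_transfer_learn("sparse-bert", preprocess(transcripts)))
print(runtime("great interview")["sentiment"])  # positive
```

The shape is the point: pre-processing, training, and serving are separate stages, and the serving stage is just a function call once the optimized model is deployed.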
So we would be able to retrain that BERT model with your transcript data that we have now done the pre-processing with to get it into the proper format. And now we have a BERT natural language processing model that's been trained on your data. And now we can deploy that onto the DeepSparse runtime so that now you can ask that model whatever questions, or I should say, you're not going to ask it those kinds of ChatGPT questions, although we can do that too. But you're going to pass text through the BERT model and it's going to give you answers back. It could be things like sentiment analysis or text classification. You just call the model, and now when you pass text through it, you get the answers better, faster or cheaper. I'll use that reference again. >> Okay, we can create a CUBE bot to give us questions on the fly from the AI bot, you know, from our previous guests. >> Well, and I will tell you using that as an example. So I had mentioned OPT before, kind of the open source version of ChatGPT. So, you know, typically that requires multiple GPUs to run. So our research team, I may have mentioned earlier, we've been able to sparsify that over 50% already and run it on only a single GPU. And so in that situation, you could train OPT with that corpus of data and do exactly what you say. Actually we could use Alexa, we could use Alexa to actually respond back with voice. How about that? We'll do an API call and we'll actually have an interactive Alexa-enabled bot. >> Okay, we're going to be a customer, let's put it on the list. But this is a great example of what you guys call software-delivered AI, a topic we chatted about in theCUBE conversation. This really means this is a developer opportunity. This really is the convergence of the data growth, the restructuring, how data is going to be horizontally scalable, meets developers. So this is an AI developer model going on right now, which is kind of unique. >> It is, John, I will tell you what's interesting.
And again, folks don't always think of it this way, you know, the AI magical goodness is now getting pushed in the middle where the developers and IT are operating. And so again, that paradigm, although for some folks it may seem obvious, again, if you've been around for 20 years, all that plumbing is a thing, right? And so what we basically help with is when you deploy the DeepSparse runtime, we have a very rich API footprint. And so the developers can call the API, ITOps can run it, or to your point, it's developer-friendly enough that you could actually deploy our off-the-shelf models. We have something called the SparseZoo where we actually publish pre-optimized or pre-sparsified models. And so developers could literally grab those right off the shelf with the training they've already had and just put 'em right into their applications and deploy them as containers. So yeah, we enable that for sure as well. >> It's interesting, DevOps was infrastructure as code and we had, last season, a series on data as code, which we kind of coined. This is data as code. This is a whole nother level of opportunity where developers just want to have programmable data and apps with AI. This is a whole new- >> Absolutely. >> Well, absolutely great, great stuff. Our news team at SiliconANGLE and theCUBE said you guys had a little bit of a launch announcement you wanted to make here on the "AWS Startup Showcase." So Jay, you have something that you want to launch here? >> Yes, and thank you John for teeing me up. So I'm going to try to put this in, like, you know, the vein of an AWS main stage keynote launch, okay? So we're going to try this out. So, you know, a lot of our product has obviously been built on top of x86. I've been sharing that the past 15 minutes or so. And with that, you know, we're seeing a lot of acceleration for folks wanting to run on commodity infrastructure.
But we've had customers and prospects and partners tell us that, you know, ARM and all of its kind of variants are very compelling, both cost performance-wise and also obviously with Edge. And they wanted to know if there was anything we could do from a runtime perspective with ARM. And so we got to work, and, you know, it's a hard problem to solve 'cause the instruction set for ARM is very different than the instruction set for x86, and our deep tensor column technology has to be able to work with that lower-level instruction spec. But the engineering team's been working really hard at it, and we are happy to announce here at the "AWS Startup Showcase" that the DeepSparse inference runtime now has support for AWS Graviton instances. So it's no longer just x86, it is also ARM, and that obviously also opens up the door to Edge and further out the stack, so that optimize once, run anywhere story, we're now going to open up. So it is an early access. So if you go to neuralmagic.com/graviton, you can sign up for early access, but we're excited to now get into the ARM side of the fence as well on top of Graviton. >> That's awesome. Our news team is going to jump on that news. We'll get it right up. We get a little scoop here on the "Startup Showcase." Jay Marshall, great job. That really highlights the flexibility that you guys have when you decouple the software from the hardware. And again, we're seeing open source driving a lot more in AIOps now with machine learning and AI. So to me, that makes a lot of sense. And congratulations on that announcement. Final minute or so we have left, give a summary of what you guys are all about. Put a plug in for the company, what you guys are looking to do. I'm sure you're probably hiring like crazy. Take the last few minutes to give a plug for the company and give a summary. >> No, I appreciate that so much.
So yeah, join us at neuralmagic.com, you know, part of what we didn't spend a lot of time on here, our optimization tools, we are doing all of that in the open source. It's called SparseML, and I mentioned SparseZoo briefly. So we really want the data science community and ML engineering community to join us out there. And again, the DeepSparse runtime, it's actually free to use for trial purposes and for personal use. So you can actually run all this on your own laptop or on an AWS instance of your choice. We are now live in the AWS Marketplace. So push-button deploy, come try us out and reach out to us at neuralmagic.com. And again, sign up for the Graviton early access. >> All right, Jay Marshall, Vice President of Business Development at Neural Magic here, talking about performant, cost-effective machine learning at scale. This is season three, episode one, focusing on foundational models as far as building data infrastructure and AI, AI native. I'm John Furrier with theCUBE. Thanks for watching. (bright upbeat music)
Luis Ceze & Anna Connolly, OctoML | AWS Startup Showcase S3 E1
(soft music) >> Hello, everyone. Welcome to theCUBE's presentation of the AWS Startup Showcase. AI and Machine Learning: Top Startups Building Foundational Model Infrastructure. This is season 3, episode 1 of the ongoing series covering the exciting stuff from the AWS ecosystem, talking about machine learning and AI. I'm your host, John Furrier, and today we are excited to be joined by Luis Ceze, who's the CEO of OctoML, and Anna Connolly, VP of customer success and experience at OctoML. Great to have you on again, Luis. Anna, thanks for coming on. Appreciate it. >> Thank you, John. It's great to be here. >> Thanks for having us. >> I love the company. We had a CUBE conversation about this. You guys are really addressing how to run foundational models faster for less. And this is like the key theme. But before we get into it, this is a hot trend, but let's explain what you guys do. Can you set the narrative of what the company's about, why it was founded, what's your North Star and your mission? >> Yeah, so John, our mission is to make AI sustainable and accessible for everyone. And what we offer customers is, you know, a way of taking their models into production in the most efficient way possible by automating the process of getting a model, optimizing it for a variety of hardware, and making it cost-effective. So better, faster, cheaper model deployment. >> You know, the big trend here is AI. Everyone's seeing ChatGPT, kind of the shot heard around the world. The Bing AI fiasco and the ongoing experimentation. People are into it, and I think the business impact is clear. In all my career in the technology industry, I haven't seen this kind of inflection point. And every senior leader I talk to is rethinking how to rebuild their business with AI because now the large language models have come in, these foundational models are here, they can see value in their data. This is a 10-year journey in the big data world.
Now it's impacting that, and everyone's rebuilding their company around this idea of being AI first 'cause they see ways to eliminate things and make things more efficient. And so now they're telling 'em to go do it. And they're like, what do we do? So what do you guys think? Can you explain what is this wave of AI and why is it happening, why now, and what should people pay attention to? What does it mean to them? >> Yeah, I mean, it's pretty clear by now that AI can do amazing things that capture people's imaginations. And also now can show things that are really impactful in businesses, right? So what people have the opportunity to do today is to either train their own model that adds value to their business or find open models out there that can do very valuable things for them. So the next step really is how do you take that model and put it into production in a cost-effective way so that the business can actually get value out of it, right? >> Anna, what's your take? Because customers are there, you're there to make 'em successful, you got the new secret weapon for their business. >> Yeah, I think we just see a lot of companies struggle to get from a trained model to a model that is deployed in a cost-effective way that actually makes sense for the application they're building. I think that's a huge challenge we see today, kind of across the board across all of our customers. >> Well, I see this, everyone asking the same question. I have data, I want to get value out of it. I got to get these big models, I got to train them. What's it going to cost? So I think there's a reality of, okay, I got to do it. Then no one has any visibility on what it costs. When they get into it, this is going to break the bank. So I have to ask you guys, the cost of training these models is on everyone's mind. OctoML, your company's focused on the cost side of it as well as the efficiency side of running these models in production.
Why are the production costs such a concern, and where specifically are people looking at it, and why did it get here? >> Yeah, so training costs get a lot of attention because it's normally a large number, but we shouldn't forget that it's typically a large, one-time upfront cost that customers pay. But, you know, when the model is put into production, the cost grows directly with model usage, and you actually want your model to be used because it's adding value, right? So, you know, the question that a customer faces is, you know, they have a trained model, and now what? So how much would it cost to run in production, right? And now with the big wave in generative AI, which rightfully is getting a lot of attention because of the amazing things that it can do, it's important for us to keep in mind that generative AI models like ChatGPT are huge, expensive energy hogs. They cost a lot to run, right? And given that model cost grows directly with usage, what you want to do is make sure that once you put a model into production, you have the best cost structure possible so that you're not surprised when it gets popular, right? So let me give you an example. So if you have a model that costs, say, $1 to $2 million to train, but then it costs about one to two cents per session to use it, right? So if you have a million active users, even if they use it just once a day, it's $10,000 to $20,000 a day to operate that model in production. And that very, very quickly, you know, gets beyond what you paid to train it. >> Anna, these aren't small numbers, and it's cost to train and cost to operate, it kind of reminds me of when the cloud came around and the data center versus cloud options. Like, wait a minute, one, it costs a ton of cash to deploy, and then running it. This is kind of a similar dynamic. What are you seeing? >> Yeah, absolutely.
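Luis's arithmetic is worth making concrete. Under exactly the assumptions he states, one to two cents per session, a million users, one session a day, the daily serving bill overtakes a one-time $2M training bill in about three months:

```python
def daily_serving_cost(cost_per_session, daily_users, sessions_per_user=1):
    # Serving cost scales linearly with usage, unlike the one-time training cost.
    return cost_per_session * daily_users * sessions_per_user

low = daily_serving_cost(0.01, 1_000_000)   # one cent per session
high = daily_serving_cost(0.02, 1_000_000)  # two cents per session
print(low, high)  # 10000.0 20000.0 dollars per day

# Days until cumulative serving spend equals a one-time $2M training cost:
days_to_match = 2_000_000 / high
print(days_to_match)  # 100.0
```

At the high end that's 100 days; even at the low end, serving spend passes training spend well within the first year, which is why the cost structure in production matters more than the training bill.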
I think we are going to see increasingly the costs in production outpacing the costs in training by a lot. I mean, people talk about training costs now because that's what they're confronting now, because people are so focused on getting models performant enough to even use in an application. And now that we have them and they're that capable, we're really going to start to see production costs go up a lot. >> Yeah, Luis, if you don't mind, I know this might be a little bit of a tangent, but, you know, training's super important. I get that. That's what people are doing now, but then there's the deployment side of production. Where do people get caught up and miss the boat or misconfigure? What's the gotcha? Where's the trip wire, so to speak? Where do people mess up on the cost side? What do they do? Is it that they don't think about it, they tie it to proprietary hardware? What's the issue? >> Yeah, several things, right? So without getting really technical, which, you know, I might get into, you know, you have to understand the relationship between performance, you know, both in terms of latency and throughput, and cost, right? So reducing latency is important because you improve responsiveness of the model. But it's really important to keep in mind that it often leads to diminishing returns. Below a certain latency, making it faster won't make a measurable difference in experience, but it's going to cost a lot more. So understanding that is important. Now, if you care more about throughput, which is, you know, units processed per period of time, you care about time to solution, and we should think about this as throughput per dollar. And understand what you want is the highest throughput per dollar, which may come at the cost of higher latency, which you're not going to care about, right? So, and the reality here, John, is that, you know, humans and especially folks in this space want to have the latest and greatest hardware.
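The throughput-per-dollar framing Luis describes can be made concrete with made-up numbers (the instance figures below are illustrative assumptions, not benchmarks): a pricier accelerator can still lose on cost-efficiency if its throughput advantage is smaller than its price premium.

```python
def throughput_per_dollar(requests_per_sec, hourly_price):
    # Requests served per dollar of compute spend (3600 seconds per billed hour).
    return requests_per_sec * 3600 / hourly_price

# Hypothetical instances: a cheap CPU box vs. a faster but pricier GPU box.
cpu = throughput_per_dollar(requests_per_sec=200, hourly_price=0.40)
gpu = throughput_per_dollar(requests_per_sec=800, hourly_price=2.50)
print(round(cpu), round(gpu))  # 1800000 1152000

# The GPU is 4x faster but over 6x the price, so the CPU wins per dollar here.
assert cpu > gpu
```

This is exactly the trade he describes: if latency is already below the threshold users can perceive, the higher-latency but higher-throughput-per-dollar option is the better buy.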
And often they commit a lot of money to get access to them, and have to commit upfront, before they understand the needs that their models have, right? So a common mistake here: one is not spending time to understand what you really need, and then two, over-committing and using more hardware than you actually need, and not giving yourself enough freedom to move your workload around to the more cost-effective choice, right? So this is just a matter of choice. And then another thing that's important here too is that making a model run faster on the hardware directly translates to lower cost, right? But it takes a lot of engineering; you need to think of ways of producing very efficient versions of your model for the target hardware that you're going to use. >> Anna, what's the customer angle here? Because price performance has been around for a long time, people get that, but now latency and throughput, that's key, because we're starting to see this in apps. I mean, there's an end user piece. I'm even seeing it on the infrastructure side, where they're taking heavy lifting away from operational costs. So you got, you know, applications specific to the user and/or top of the stack, and then you got models actually being used in operations, where they want both. >> Yeah, absolutely. Maybe I can illustrate this with a quick story about a customer that we had recently been working with. So this customer was planning to run kind of a transformer-based model for text generation at super high scale on Nvidia T4 GPUs, so kind of a commodity GPU. And the scale was so high that they would've been paying hundreds of thousands of dollars in cloud costs per year just to serve this model alone. You know, one of many models in their application stack. So we worked with this team to optimize their model and then benchmark it across several possible targets, so that matching of the hardware that Luis was just talking about, including the newer kind of Nvidia A10 GPUs. 
And what they found during this process was pretty interesting. First, the team was able to shave a quarter off their spend just by using better optimization techniques on the T4, the older hardware. But actually moving to a newer GPU would allow them to serve this model at sub-two-millisecond latency, so super fast, which was able to unlock an entirely new kind of user experience. So they were able to kind of change the value they're delivering in their application just because they were able to move to this new hardware easily. So they ultimately decided to plan their deployment on the more expensive A10 because of this, but because of the hardware-specific optimizations that we helped them with, they managed to even, you know, bring costs down from what they had originally planned. And so if you extend this kind of example to everything that's happening with generative AI, I think the story we just talked about was super relevant, but the scale can be even higher, you know, it can be tenfold that. We were recently conducting kind of an internal study using GPT-J as a proxy to illustrate the experience of a company trying to use one of these large language models, with an example scenario of creating a chatbot to help job seekers prepare for interviews. So imagine kind of a conservative usage scenario where the model generates just 3000 words per user per day, which is, you know, pretty conservative for how people are interacting with these models. It costs 5 cents a session, and if you're a company and your app goes viral, so from, you know, the beginning of the year there's nobody, and at the end of the year there's a million daily active users, in that year alone, going from zero to a million, you'll be spending about $6 million a year, which is pretty unmanageable. That's crazy, right? >> Yeah. >> For a company or a product that's just launching. 
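A quick sketch of the viral-growth scenario described above. The linear user ramp is my own assumption; a slower early growth curve yields lower totals, in the neighborhood of the ~$6 million figure cited:

```python
# Annual serving cost for a chatbot whose daily active users grow from
# zero to one million over a year, at the 5-cents-per-session figure above.
COST_PER_SESSION = 0.05
PEAK_DAU = 1_000_000
DAYS = 365

annual_cost = sum(
    (PEAK_DAU * day / DAYS) * COST_PER_SESSION  # DAU ramps linearly, day by day
    for day in range(1, DAYS + 1)
)
print(f"${annual_cost / 1e6:.2f}M per year")  # roughly $9.15M with this ramp
```

Either way, a product that starts the year free to run ends it with a seven-figure annual serving bill, which is the "pretty unmanageable" point being made.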
So I think, you know, for us, we see the real way to make these kinds of advancements accessible and sustainable, as we said, is to bring down the cost to serve using these techniques. >> That's a great story, and I think it illustrates this idea that deployment cost can vary from situation to situation, from model to model, and that the efficiency is so strong with this new wave; it eliminates heavy lifting, creates more efficiency, automates intellect. I mean, this is the trend, this is radical, this is going to increase. So the cost could go from nominal to millions, literally, potentially. So, this is what customers are doing. Yeah, that's a great story. What makes sense on the financial side? Is there a cost of ownership? Is there a pattern for best practice for training? What do you guys advise? 'Cause this is a lot of time and money involved in all potential, you know, good scenarios of upside. But you can get over your skis, as they say, and be successful, or be out of business if you don't manage it. I mean, that's what people are talking about, right? >> Yeah, absolutely. I think, you know, we see kind of three main vectors to reduce cost. I think one is make your deployment process easier overall, so that your engineering effort to even get your app running goes down. Two would be to get more from the compute you're already paying for; you're already paying, you know, for your instances in the cloud, but can you do more with that? And then three would be to shop around for lower-cost hardware to match your use case. So on the first one, making the deployment easier overall: there's a lot of manual work that goes into benchmarking, optimizing and packaging models for deployment. And because the performance of machine learning models can be really hardware dependent, you have to go through this process for each target you want to consider running your model on. And this is hard, you know, we see that every day. 
But for teams who want to incorporate some of these large language models into their applications, it might be desirable, because licensing a model from a large vendor like OpenAI can leave you, you know, over-provisioned, kind of paying for capabilities you don't need in your application, or can lock you into them and you lose flexibility. So we have a customer whose team actually prepares models for deployment in a SaaS application that many of us use every day. And they told us recently that without kind of an automated benchmarking and experimentation platform, they were spending several days each time to benchmark a single model on a single hardware type. So this is really, you know, manually intensive. And then there's getting more from the compute you're already paying for. We do see customers who leave money on the table by running models that haven't been optimized specifically for the hardware target they're using, like Luis was mentioning. And for some teams, they just don't have the time to go through an optimization process, and for others, they might lack kind of specialized expertise, and this is something we can bring. And then on shopping around for different hardware types, we really see a huge variation in model performance across hardware, not just CPU vs. GPU, which is, you know, what people normally think of, but across CPU vendors themselves, high-memory instances, and even across cloud providers. So the best strategy here is for teams to really be able to, as we say, look before you leap, by running real-world benchmarking, and not just simulations or predictions, to find the best software-hardware combination for their workload. >> Yeah. You guys sound like you have a very impressive customer base deploying large language models. Where would you categorize your current customer base? And as you look out, as you guys are growing, you have new customers coming in, take me through the progression. 
Take me through the profile of some of the customers you have now: size, are they hyperscalers, are they big app folks, are they kicking the tires? And then, as people are out there scratching their heads thinking, "I've got to get in this game," what's their psychology like? Are they coming in with specific problems, or do they have a specific orientation or point of view about what they want to do? Can you share some data around what you're seeing? >> Yeah, I think, you know, we have customers that kind of range across the spectrum of sophistication, from teams that basically don't have MLOps expertise in their company at all. And so they're really looking for us to kind of give a full service: how should I do everything, from, you know, optimization, to finding the hardware, to preparing for deployment. And then we have teams that, you know, maybe already have their serving and hosting infrastructure up and ready, and they already have models in production, and they're really just looking to, you know, take the extra juice out of the hardware and work really specifically on that optimization piece. I think one place where we're doing a lot more work now is kind of in the developer tooling, you know, model selection space. And that's kind of an area that we're creating more tools for, particularly within the PyTorch ecosystem, to bring kind of this power earlier in the development cycle, so that as people are grabbing a model off the shelf, they can, you know, see how it might perform and use that to inform their development process. >> Luis, what's the big, I like this idea of picking the models, because isn't that like going to the market and picking the best model for your data? It's like, you know, isn't there a certain approach? What's your view on this? 'Cause this is where everyone, I think it's going to be a land rush for this, and I want to get your thoughts. >> For sure, yeah. 
So, you know, I guess I'll start by saying the one main takeaway that we got from the GPT-J study is that, you know, having an understanding of what your model's compute and memory requirements are, very quickly, early on, helps with much smarter AI model deployments, right? And in fact, you know, Anna just touched on this, but I want to, you know, make sure that it's clear that OctoML is putting that power into users' hands right now. So in partnership with AWS, we are launching this new PyTorch-native profiler that, with a single, you know, one-line code decorator, allows you to see how your code runs on a variety of different hardware after accelerations. So it gives you very clear, you know, data on how you should think about your model deployments. And this ties back to choices of models. So like, if you have a set of models that are equally good in terms of functionality and you want to understand, after acceleration, how you are going to deploy, how much they're going to cost, or what the options are, using an automated process of making a decision is really, really useful. And in fact, I think viewers of these events can get early access to this by signing up for the Octopods, you know, this is an exclusive group for insiders here, so you can go to OctoML.ai/pods to sign up. >> So that Octopod, is that a program? What is that, is that access to code? Is that a beta, what is that? Explain, take a minute and explain Octopod. >> I think the Octopod would be a group of people who are interested in experiencing this functionality. So it is the friends and users of OctoML that would be the Octopod. And then yes, after you sign up, we would provide you essentially the tool in code form for you to try out on your own. 
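To illustrate the decorator pattern being described, here is a hypothetical stand-in, not the actual OctoML/AWS profiler API, which is not shown in this conversation. It only records wall-clock time per call, whereas the product described reports per-hardware performance after acceleration:

```python
import time
from functools import wraps

def profile(fn):
    """Hypothetical one-line profiling decorator (illustrative only):
    records the wall-clock time of the most recent call on the wrapped
    function itself."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        wrapper.last_ms = (time.perf_counter() - start) * 1000
        return result
    wrapper.last_ms = None
    return wrapper

@profile
def run_model(batch):
    return [x * 2 for x in batch]  # placeholder for real model inference

run_model(list(range(1000)))
print(f"last call took {run_model.last_ms:.3f} ms")
```

The appeal of the decorator approach is exactly what the speaker notes: one added line, no change to the function body, and the measurement travels with the code.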
I mean, part of the benefit of this is that it happens in your own local environment and you're in control of everything, kind of within the workflow that developers are already using to create and begin putting these models into their applications. So it would all be within your control. >> Got it. I think the big question I have for you is, when does one of your customers know they need to call you? What does their environment look like? What are they struggling with? What are the conversations they might be having on their side of the fence? If anyone's watching this, they're like, "Hey, you know what, I've got my team, we have a lot of data. Do we have our own language model or do I use someone else's?" There's a lot of this, I will say, discovery going on around what to do, what path to take. What does that customer look like? If someone's listening, when do they know to call you guys, OctoML? >> Well, I mean the most obvious one is if you have a significant spend on AI/ML, come and talk to us, you know, about putting AI/ML into production. So that's the clear one. In fact, just this morning I was talking to someone who is in the life sciences space and has, you know, 15 to $20 million a year in cloud costs related to AI/ML deployment; it's a pretty clear match right there, right? So that's on the cost side. But I also want to emphasize something that Anna said earlier, that, you know, the hardware and software complexity involved in putting a model into production is really high. So we've been able to abstract that away; offering a clean automation flow enables one to experiment early on with, you know, how models would run, and get them to production. And then two, once they are in production, it gives you an automated flow to continuously update your model and take advantage of all this acceleration and the ability to run the model on the right hardware. 
So anyway, let's say one then is cost, you know, you have significant cost, and then two, you have automation needs. And Anna, please complement that. >> Yeah, Anna you can please- >> Yeah, I think that's exactly right. Maybe the other time is when you are expecting a big scale-up in serving your application, right? You're launching a new feature, you expect to get a lot of usage, and you want to kind of anticipate that maybe your CTO, your CIO, whoever pays your cloud bills, is going to come after you, right? And so they want to know, you know, what's the return on putting this model essentially into my application stack? Is the usage going to match what I'm paying for it? And then you can understand that. >> So you guys have a lot of the early adopters; they got big data teams, they're pushing into production, they want to get a little QA, test the waters, understand, use your technology to figure it out. Are there any cases where people have gone into production and they have to pull it out? It's like the old lemon laws with your car; you buy a car and, oh my god, it's not the way I wanted it. I mean, I can imagine the early people through the wall, so to speak, in the wave here are going to be bloody, in the sense that they've gone in and tried stuff and got stuck with huge bills. Are you seeing that? Are people pulling stuff out of production and redeploying? Or I can imagine that if I had a bad deployment, I'd want to refactor that or actually replatform that. Do you see that too? >> Definitely after a sticker shock, yes, our customers will come and make sure that, you know, the sticker shock won't happen again. >> Yeah. >> But then there's another, more thorough aspect here that I think we likely touched on, and it's worth elaborating a bit more: just how are you going to scale in a way that's feasible depending on the allocation that you get, right? 
So as we mentioned several times here, you know, model deployment is so hardware dependent and so complex that you tend to get a model for a hardware choice and then you want to scale that specific type of instance. But what if, when you want to scale because it suddenly, luckily, got popular and, you know, you want to scale it up, you don't have that instance anymore? So how you live with whatever you have at that moment is something that we see customers needing as well. You know, so in fact, ideally what we want is for customers to not think about what kind of specific instances they want. What they want is to know what their models need. Say they know the SLA; then they find a set of hardware targets and instances that hit the SLA, and whenever they're scaling, they're going to scale with more freedom, right? Instead of having to wait for AWS to give them more allocation for a specific instance, what if you could live with other types of hardware and scale up in a more free way, right? So that's another thing that we see customers, you know, like, they need more freedom to be able to scale with whatever is available. >> Anna, you touched on this with the business model impact of that 6 million cost; if that goes out of control, there's a business model aspect and there's a technical operation aspect to the cost side too. You want to be mindful of riding the wave in a good way, but not getting over your skis. So that brings up the point around, you know, confidence, right? And teamwork. Because if you're in production, there's probably a team behind it. Talk about the team aspect of your customers. I mean, they're dedicated, they go put stuff into production, they're developers, they're data people. What's in it for them? Are they getting better? Are they on the beach, you know, reading a book? Are they, you know, is it easy street for them? What's the customer benefit to the teams? >> Yeah, absolutely. 
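The idea of knowing the SLA rather than pinning one instance type can be sketched as a small selection routine. Instance names, latencies, and prices below are invented for illustration:

```python
# Instead of pinning one instance type, keep every hardware target that
# meets the model's SLA, then scale onto whichever is currently available.
SLA_LATENCY_MS = 50.0

# (instance type, measured p95 latency ms, $/hour) -- illustrative numbers
targets = [
    ("a10-gpu",      8.0, 1.20),
    ("t4-gpu",      22.0, 0.55),
    ("cpu-highmem", 48.0, 0.30),
    ("cpu-small",  130.0, 0.10),  # misses the SLA, never eligible
]

# SLA-meeting targets, cheapest first
eligible = sorted(
    (t for t in targets if t[1] <= SLA_LATENCY_MS),
    key=lambda t: t[2],
)

def pick_target(available):
    """Return the cheapest SLA-meeting target the cloud can allocate, else None."""
    return next((name for name, _, _ in eligible if name in available), None)

print(pick_target({"t4-gpu", "a10-gpu"}))  # t4-gpu: cheaper of the two on offer
print(pick_target({"cpu-small"}))          # None: nothing available meets the SLA
```

The freedom being described is exactly this: when the preferred instance type is unavailable, scaling continues on any other target that still hits the SLA.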
With just a few clicks of a button, you're in production, right? That's the dream. So yeah, I mean, I think that, you know, we illustrated it before a little bit. I think with the automated kind of benchmarking and optimization process, like, when you think about the effort it takes to get that data by hand, which is what people are doing today, they just don't do it. So they're making decisions without the best information, because, you know, there just isn't the bandwidth to get the information that they need to make the best decision and then know exactly how to deploy it. So I think it's actually bringing kind of a new insight and capability to these teams that they didn't have before. And then maybe another aspect on the team side is that it's making the hand-off of the models from the data science teams to the model deployment teams more seamless. So we have, you know, we have seen in the past that this kind of transition point is the place where there are a lot of hiccups, right? The data science team will give a model to the production team, and it'll be too slow for the application, or it'll be too expensive to run, and it has to go back and be changed, and kind of this loop. And so, you know, the PyTorch profiler that Luis was talking about, and then also, you know, the other ways we do optimization, kind of prevent that hand-off problem from happening. >> Luis and Anna, you guys have a great company. Final couple minutes left. Talk about the company, the people there, what's the culture like? You know, if Intel has Moore's law, which is, you know, doubling the performance every couple of years, what's the culture like there? Is it, you know, more throughput, better pricing? Explain what's going on with the company and put a plug in. Luis, we'll start with you. >> Yeah, absolutely. I'm extremely proud of the team that we built here. 
You know, we have a people-first culture, you know, very, very collaborative, and folks, we all have a shared mission here of making AI more accessible and sustainable. We have a very diverse team in terms of backgrounds and life stories; you know, to do what we do here, we need a team that has expertise in software engineering, in machine learning, in computer architecture. Even though we don't build chips, we need to understand how they work, right? And then, you know, the fact that we have this really, really varied set of backgrounds makes the environment, you know, let's say very exciting, to learn more about, you know, systems end-to-end. But it also makes for a very interesting, you know, work environment, right? So people have different backgrounds, different stories. Some of them went to grad school; others, you know, were in intelligence agencies and now are working here, you know. So we have a really interesting set of people, and, you know, life is too short not to work with interesting humans. You know, that's something that I like to think about. >> I'm sure your off-site meetings are a lot of fun, people talking about computer architectures, silicon advances, the next GPU, the big data models coming in. Anna, what's your take? What's the culture like? What's the company vibe, and what are you guys looking to do? What's the customer success pattern? What's up? >> Yeah, absolutely. I mean, you know, I second all of the great things that Luis just said about the team. I think an additional one that I'd really like to underscore is kind of this customer obsession, to use a term you all know well, and focus on the end users, really making the experiences that we're bringing to our users, who are developers, you know, useful and valuable for them. 
And so I think, you know, with all of these tools that we're trying to put in the hands of users, the industry and the market are changing so rapidly that our products across the board, you know, all of the companies that, you know, are part of the showcase today, we're all evolving them so quickly, and we can only do that kind of really hand in glove with our users. So that would be another thing I'd emphasize. >> I think the change dynamic, the power dynamics of this industry, is just the beginning. I'm very bullish that this is going to be probably one of the biggest inflection points in the history of the computer industry, because of all the dynamics of the confluence of all the forces, and you mentioned some of them; I mean, the PC, you know, interoperability with internetworking, and you got, you know, the web and then mobile. Now we have this; I mean, I wouldn't even put social media even close to this. Like, this changes the user experience, changes infrastructure. There are going to be massive accelerations in performance on the hardware side from the AWSs of the world and cloud, and you got the edge and more data. This is really what big data was going to look like. This is the beginning. Final question, what do you guys see going forward in the future? >> Well, it's undeniable that machine learning and AI models are becoming an integral part of any interesting application today, right? And the clear trends here are, you know, more and more computational needs for these models, because they're only getting more and more powerful. And then two, you know, the complexity of the infrastructure where they run; you know, just considering the cloud, there's like a wide variety of choices there, right? So being able to live with that and make the most out of it in a way that does not require, you know, an impossible-to-find team is something that's pretty clear. So the need for automation, abstracting away the complexity, is definitely here. 
And we are seeing this, you know; the trends are that you also see models starting to move to the edge as well. So it's clear that we are going to live in a world where there are large models living in the cloud and then, you know, edge models that talk to these models in the cloud to form, you know, an end-to-end, truly intelligent application. >> Anna? >> Yeah, I think, you know, Luis said it at the beginning. Our vision is to make AI sustainable and accessible. And I think as this technology just expands in every company and every team, that's going to happen kind of on its own. And we're here to help support that. And I think you can't do that without tools like those of OctoML. >> I think it's going to be an era of massive invention, creativity; a lot of the heavy lifting is going to go away and allow the talented people to automate their intellect. I mean, this is really kind of what we see going on. And Luis, thank you so much. Anna, thanks for coming on this segment. Thanks for coming on theCUBE and being part of the AWS Startup Showcase. I'm John Furrier, your host. Thanks for watching. (upbeat music)
Steven Hillion & Jeff Fletcher, Astronomer | AWS Startup Showcase S3E1
(upbeat music) >> Welcome everyone to theCUBE's presentation of the AWS Startup Showcase AI/ML Top Startups Building Foundation Model Infrastructure. This is season three, episode one of our ongoing series covering exciting startups from the AWS ecosystem to talk about data and analytics. I'm your host, Lisa Martin, and today we're excited to be joined by two guests from Astronomer: Steven Hillion, its Chief Data Officer, and Jeff Fletcher, its Director of ML. They're here to talk about machine learning and data orchestration. Guys, thank you so much for joining us today. >> Thank you. >> It's great to be here. 
Yeah, I mean, at the heart of it is, quite simply, scheduling and managing data pipelines. And so if you have some enormous retailer who's managing the flow of information throughout their organization, they may literally have thousands or even tens of thousands of data pipelines that need to execute every day, to do things as simple as delivering metrics for the executives to consume at the end of the day, to producing, on a weekly basis, new machine learning models that can be used to drive product recommendations. One of our customers, for example, is a British food delivery service. And you get those recommendations in your application that say, "Well, maybe you want to have samosas with your curry." That sort of thing is powered by machine learning models that they train on a regular basis to reflect changing conditions in the market. And those are produced through Airflow and through the Astronomer platform, which is essentially a managed platform for running Airflow. So at its simplest it really is just scheduling and managing those workflows. But that's easier said than done, of course. I mean, if you have 10,000 of those things, then you need to make sure that they all run, that they all have sufficient compute resources. If things fail, how do you track those down across those 10,000 workflows? How easy is it for an average data scientist or data engineer to contribute their code, their Python notebooks or their SQL code, into a production environment? And then you've got reproducibility, governance, auditing; managing data flows across an organization, which we think of as orchestrating them, is much more than just scheduling. It becomes really complicated pretty quickly. 
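The core of what any such scheduler does, run each pipeline's tasks in dependency order, can be sketched in pure Python. This is a minimal illustration of the concept, not the Airflow API, and the task names are invented:

```python
from graphlib import TopologicalSorter  # stdlib since Python 3.9

# A toy "pipeline": each task names the tasks it depends on, mirroring
# how a DAG of operators is wired together in a workflow engine.
dag = {
    "extract": set(),
    "transform": {"extract"},
    "train_model": {"transform"},
    "publish_metrics": {"transform"},
    "deploy": {"train_model"},
}

executed = []
for task in TopologicalSorter(dag).static_order():
    executed.append(task)  # a real scheduler would submit the task's work here

print(executed)  # dependencies always run before the tasks that need them
```

Everything described above, retries, resource allocation, tracking failures across 10,000 workflows, is what turns this simple core into a hard orchestration problem.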
So I come from a machine learning background, and for me the interesting part is that machine learning requires the expansion into orchestration. A lot of the same things that you're using to go and develop and build pipelines in a standard data orchestration space apply equally well in a machine learning orchestration space. What you're doing is you're moving data between different locations, between different tools, and then tasking different types of tools to act on that data. So extending it made logical sense from an implementation perspective. And a lot of my focus at Astronomer is really to explain how Airflow can be used well in a machine learning context. It is being used well, it is being used a lot, by the customers that we have and also by users of the open source version. But it's really about being able to explain to people why it's a natural extension and how well it fits into that. And a lot of it is also extending some of the infrastructure capabilities that Astronomer provides to those customers, for them to be able to run some of the more platform-specific requirements that come with doing machine learning pipelines. 
So it's the ability to run it wherever you need to run it and also our ability to help you, the customer, better implement and understand those workflows that I think are two of the primary differentiators that we have. >> Lisa: Got it. >> I'll add another one if you don't mind. >> You can go ahead, Steven. >> It's lineage and dependencies between workflows. One thing we've done is to augment core Airflow with lineage services. Using the OpenLineage framework, another open source framework, tracking datasets as they move from one workflow to another, one team to another, one data source to another is a really key component of what we do, and we bundle that within the service so that as a developer or as a production engineer, you really don't have to worry about lineage, it just happens. Jeff may show us some of this later: you can actually see, as data flows from source through to a data warehouse, out through a Python notebook, to produce a predictive model or a dashboard, how those data products relate to each other. And when something goes wrong, figure out what upstream maybe caused the problem, or if you're about to change something, figure out what the impact is going to be on the rest of the organization. So lineage is a big deal for us. >> Got it. >> And just to add on to that, the other thing to think about is that traditional Airflow is actually a complicated implementation. It required quite a lot of time spent understanding what was almost a bespoke language that you needed to be able to develop in to write these DAGs, which are the fundamental pipelines. So part of what we are focusing on is tooling that makes it more accessible to, say, a data analyst or a data scientist who doesn't have, and doesn't really want to gain, the necessary background in how the semantics of Airflow DAGs work, so they can still get the benefit of what Airflow can do.
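(Editorial aside: for readers unfamiliar with the term, a DAG is a directed acyclic graph of tasks. A minimal sketch of the idea, in plain standard-library Python rather than Airflow's own operator API, shows how tasks with upstream dependencies get executed in order; the task names are illustrative, not from the Astronomer platform.)

```python
# A toy illustration of a pipeline-as-a-DAG: tasks plus their upstream
# dependencies, executed in dependency order. Real Airflow DAGs declare
# this with operators and the `>>` syntax; this sketch only shows the idea.
from graphlib import TopologicalSorter

# task -> set of upstream tasks that must finish first (illustrative names)
pipeline = {
    "extract_orders": set(),
    "train_recommender": {"extract_orders"},
    "publish_metrics": {"extract_orders"},
    "deploy_model": {"train_recommender"},
}

def run(pipeline):
    """Execute every task once, respecting dependencies."""
    order = list(TopologicalSorter(pipeline).static_order())
    for task in order:
        print(f"running {task}")  # a real orchestrator would invoke the task here
    return order

execution_order = run(pipeline)
# "extract_orders" always runs before anything that depends on it
```

The point of the sketch is the data structure, not the scheduler: once pipelines are expressed as a graph, ordering, retries, and lineage all become graph problems, which is what the managed platform takes care of at scale.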
So there are new features and capabilities built into the Astronomer cloud platform that effectively abstract away the need to understand some of the deep work that goes on. But you can still do it, you still have that capability, but we are expanding it to make orchestrated and repeatable processes accessible to more teams within the business. >> In terms of accessibility to more teams in the business, you talked about data scientists, data analysts, developers. Steven, I want to talk to you, as the chief data officer: are you having more and more conversations with that role, and how is it emerging and evolving within your customer base? >> Hmm. That's a good question, and it is evolving, because I think if you look historically at the way that Airflow has been used, it's often from the ground up. You have individual data engineers or maybe single data engineering teams who adopt Airflow 'cause it's very popular. Lots of people know how to use it and they bring it into an organization and say, "Hey, let's use this to run our data pipelines." But then increasingly, as you turn from pure workflow management and job scheduling to the larger topic of orchestration, you realize it gets pretty complicated. You want to have coordination across teams, and you want to have standardization for the way that you manage your data pipelines. And so having a managed service for Airflow that exists in the cloud is easy to spin up as you expand usage across the organization. And thinking long term about that in the context of orchestration, that's where I think the chief data officer or the head of analytics tends to get involved, because they really want to think of this as a strategic investment that they're making. Not just per-team individual Airflow deployments, but a network of data orchestrators. >> That network is key. Every company these days has to be a data company. We talk about companies being data driven. It's a common phrase, but it's true.
Whether it is a grocer or a bank or a hospital, they've got to be data companies. So talk to me a little bit about Astronomer's business model. How is this available? How do customers get their hands on it? >> Jeff, go ahead. >> Yeah, yeah. So we have a managed cloud service and we have two modes of operation. One, you can bring your own cloud infrastructure. So you can say here is an account in, say, AWS or Azure, and we can go and deploy the necessary infrastructure into that, or alternatively we can host everything for you, so it becomes a full SaaS offering. But we then provide a platform that connects at the backend to your internal identity provider. So however you are authenticating users, we make sure that the correct people are accessing the services that they need, with role-based access control. From there we are deploying, through Kubernetes, the different services and capabilities into either your cloud account or into an account that we host. And from there Airflow does what Airflow does, which is its ability to then reach out to different data systems and data platforms and to then run the orchestration. We make sure we do it securely, we have all the necessary compliance certifications required for GDPR in Europe and HIPAA in the US, and a whole host of others. So it is a secure platform that can run in the place that you need it to run, but it is a managed Airflow that includes a lot of the extra capabilities, like the cloud developer environment and the OpenLineage services, to enhance the overall Airflow experience. >> Enhance the overall experience. So Steven, going back to you, if I'm a Conde Nast or another organization, what are some of the key business outcomes that I can expect? One of the things I think we've learned during the pandemic is that access to realtime data is no longer a nice to have for organizations. It's really an imperative.
It's that demanding consumer that wants to have that personalized, customized, instant access to a product or a service. So if I'm a Conde Nast or I'm one of your customers, what can I expect my business to be able to achieve as a result of data orchestration? >> Yeah, I think in a nutshell it's about providing a reliable, scalable, and easy to use service for developing and running data workflows. And talking of demanding customers, I mean, I'm actually a customer myself. As you mentioned, I'm the head of data for Astronomer, and you won't be surprised to hear that we actually use Astronomer and Airflow to run all of our data pipelines. And so I can actually talk about my experience. When I started I was of course familiar with Airflow, but it always seemed a little bit unapproachable to me if I was introducing it to a new team of data scientists. They don't necessarily want to have to think about learning something new. But I think because of the layers that Astronomer has provided with our Astro service around Airflow, it was pretty easy for me to get up and running. Of course I've got an incentive for doing that, I work for the Airflow company, but we went from about 500 data tasks that we were running on a daily basis at the beginning of last year to about 15,000 every day. We run something like a million data operations every month within my team. And so as one outcome, just the ability to spin up new production workflows essentially in a single day, you go from an idea in the morning to a new dashboard or a new model in the afternoon. That's really the business outcome: just removing that friction to operationalizing your machine learning and data workflows. >> And I imagine too, oh, go ahead, Jeff. >> Yeah, I think to add to that, one of the things that becomes part of the business cycle is repeatable capabilities for things like reporting, for things like new machine learning models.
And the impediment that has existed is that it's difficult to take that from the analyst team or the data science team that provides it to the data engineering team, who then have to work the workflow all the way through. What we're trying to unlock is the ability for those teams to directly get access to scheduling and orchestrating capabilities, so that a business analyst can have a new report for C-suite execs that needs to be done once a week, and the time to repeatability for that report is much shorter. So it is then immediately in the hands of the person that needs to see it. It doesn't have to go into a long list of to-dos for a data engineering team that's already overworked, where they eventually get to it in a month's time. So that is also part of it: orchestration, I think, is fairly well understood, and a lot of people get the benefit of being able to orchestrate things within a business, but having more people be able to do it, and shortening the time to that repeatability, is one of the main benefits of good managed orchestration. >> So a lot of workforce productivity improvements in what you're doing to simplify things, giving more people access to data to be able to make those faster decisions, which ultimately helps the end user on the other end to get that product or the service that they're expecting. Jeff, I understand you have a demo that you can share so we can kind of dig into this. >> Yeah, let me take you through a quick look of how the whole thing works. So our starting point is our cloud infrastructure. This is the login. You go to the portal. You can see there's a bunch of workspaces that are available. Workspaces are like individual places for people to operate in.
I'm not going to delve into all the deep technical details here, but the starting point for a lot of our data science customers is what we call our Cloud IDE, which is a web-based development environment for writing and building out DAGs without actually having to know how the underpinnings of Airflow work. This is an internal one, something that we use. You have a notebook-like interface that lets you write Python code and SQL code, and a bunch of bespoke block types if you want. They all get pulled together to create a workflow. So this is a workflow, which gets compiled to something that looks like a complicated set of Python code, which is the DAG. I then have a CI/CD pipeline where I commit this through to my GitHub repo. So this comes to a repo here, which is where these DAGs that I created in the previous step exist. I can then go and say, all right, I want to see how those particular DAGs have been running. We then get to the actual Airflow part. So this is the managed Airflow component. We add the ability for teams to fairly easily bring up an Airflow instance and write code inside our notebook-like environment to get it into that instance. So you can see it's been running. That same process that we built here, that graph, ends up here inside this, but you don't need to know how the fundamentals of Airflow work in order to get this going. Then we can run one of these, it runs in the background and we can manage how it goes. And every time this runs, it's emitting to a process underneath, which is the OpenLineage service, the lineage integration that allows me to come in here and have a look and see this is that same graph that we built, but now it's the historic version. So I know where things started, where things are going, and how it ran. And then I can also do a comparison.
So if I want to see how this particular run worked compared to one historically, I can grab one from a previous date and it will show me the comparison between the two. So that combination of managed Airflow, getting Airflow up and running very quickly, plus the Cloud IDE that lets you write code without knowing how to get it into a repeatable format, get that into Airflow, and have it attached to the lineage process, adds up to a complete end-to-end orchestration process for any business looking to get the benefit from orchestration. >> Outstanding. Thank you so much Jeff for digging into that. So one of my last questions, Steven, is for you. This is exciting. There's a lot that you guys are enabling organizations to achieve here to really become data-driven companies. So where can folks go to get their hands on this? >> Yeah, just go to astronomer.io and we have plenty of resources. If you're new to Airflow, you can read our documentation, our guides to getting started. We have a CLI that you can download that is really, I think, the easiest way to get started with Airflow. But you can actually sign up for a trial. You can sign up for a guided trial where our teams, we have a team of experts, really the world experts on getting Airflow up and running, will take you through that trial and allow you to actually kick the tires and see how this works with your data. And I think you'll see pretty quickly that it's very easy to get started with Airflow, whether you're doing that from the command line or doing that in our cloud service. And all of that is available on our website. >> astronomer.io. Jeff, last question for you. What are you excited about? There's so much going on here. What are some of the things, maybe you can give us a sneak peek, coming down the road here that prospects and existing customers should be excited about?
>> I think a lot of the development around the data awareness components. So one of the things that's traditionally been complicated with orchestration is that you leave your data in the place that you're operating on, and we're starting to have more data processing capability being built into Airflow. And from an Astronomer perspective, we are adding more capabilities around working with larger datasets, doing bigger data manipulation inside the Airflow process itself. And that lends itself to better machine learning implementation. So as we start to grow and as we start to get better in the machine learning context, well, in the data awareness context, it unlocks a lot more capability to implement proper machine learning pipelines. >> Awesome guys. Exciting stuff. Thank you so much for talking to me about Astronomer, machine learning, data orchestration, and really the value in it for your customers. Steve and Jeff, we appreciate your time. >> Thank you. >> My pleasure, thanks. >> And we thank you for watching. This is season three, episode one of our ongoing series covering exciting startups from the AWS ecosystem. I'm your host, Lisa Martin. You're watching theCUBE, the leader in live tech coverage. (upbeat music)
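(Editorial aside: the lineage idea Steven and Jeff describe, tracking which dataset feeds which job and walking upstream when something breaks, can be sketched in a few lines of plain Python. This illustrates the concept only, not the OpenLineage API, and all dataset names are illustrative.)

```python
# Toy lineage graph: each dataset records which upstream datasets produced it.
# Walking the graph upstream from a broken artifact finds candidate root causes,
# which is the "what upstream caused the problem?" question from the interview.
lineage = {
    "orders_raw": [],
    "orders_clean": ["orders_raw"],
    "recommendation_model": ["orders_clean"],
    "exec_dashboard": ["orders_clean"],
}

def upstream(dataset, graph):
    """Return every dataset the given one depends on, directly or transitively."""
    seen = set()
    stack = [dataset]
    while stack:
        for parent in graph[stack.pop()]:
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

# If the dashboard looks wrong, these are the places to check first:
suspects = upstream("exec_dashboard", lineage)
```

The same graph, traversed in the other direction, answers the impact-analysis question ("if I change this source, what downstream products are affected?") that Steven raises above.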
Robert Nishihara, Anyscale | AWS Startup Showcase S3 E1
(upbeat music) >> Hello everyone. Welcome to theCube's presentation of the "AWS Startup Showcase." The topic this episode is AI and machine learning: top startups building foundational model infrastructure. This is season three, episode one of the ongoing series covering exciting startups from the AWS ecosystem. And this time we're talking about AI and machine learning. I'm your host, John Furrier. I'm excited I'm joined today by Robert Nishihara, who's the co-founder and CEO of a hot startup called Anyscale. He's here to talk about Ray, the open source project, and Anyscale's infrastructure for foundation models as well. Robert, thank you for joining us today. >> Yeah, thanks so much as well. >> I've been following your company since the founding, pre-pandemic, and you guys really had a great vision, scaled up, and are in a perfect position for this big wave that we all see with ChatGPT and OpenAI that's gone mainstream. Finally, AI has broken out through the ropes and now gone mainstream, so I think you guys are really well positioned. I'm looking forward to talking with you today. But before we get into it, introduce the core mission for Anyscale. Why do you guys exist? What is the North Star for Anyscale? >> Yeah, like you mentioned, there's a tremendous amount of excitement about AI right now. You know, I think a lot of us believe that AI can transform just about every industry. So one of the things that was clear to us when we started this company was that the amount of compute needed to do AI was just exploding. Like, to actually succeed with AI, companies like OpenAI or Google, or, you know, these companies getting a lot of value from AI, were not just running these machine learning models on their laptops or on a single machine. They were scaling these applications across hundreds or thousands or more machines and GPUs and other resources in the Cloud.
And so to actually succeed with AI, and this has been one of the biggest trends in computing, maybe the biggest trend in computing in, you know, recent history, the amount of compute has been exploding. And so to actually succeed with AI, to actually build these scalable applications and scale the AI applications, there's a tremendous software engineering lift to build the infrastructure to actually run these scalable applications. And that's very hard to do. So one of the reasons many AI projects and initiatives fail, or don't make it to production, is the need for this scale, the infrastructure lift, to actually make it happen. So our goal here with Anyscale and Ray is to make that easy, is to make scalable computing easy. So that as a developer or as a business, if you want to do AI, if you want to get value out of AI, all you need to know is how to program on your laptop. Like, all you need to know is how to program in Python. And if you can do that, then you're good to go. Then you can do what companies like OpenAI or Google do and get value out of machine learning. >> That programming example of how easy it is with Python reminds me of the early days of Cloud, when infrastructure as code was talked about; it was just making the infrastructure programmable. That's super important. That's what AI people want: programmable AI. That's the new trend. And I want to understand, if you don't mind explaining, the relationship that Anyscale has to these foundational models and in particular the large language models, also called LLMs, as seen with OpenAI and ChatGPT. Before you get into the relationship that you have with them, can you explain why the hype around foundational models? Why are people going crazy over foundational models? What is it and why is it so important?
>> Yeah, so foundation models are incredibly important because they enable businesses and developers to get value out of machine learning, to use machine learning off the shelf with these large models that have been trained on tons of data and that are useful out of the box. And then, of course, you know, as a business or as a developer, you can take those foundational models and repurpose them or fine tune them or adapt them to your specific use case and what you want to achieve. But it's much easier to do that than to train them from scratch. And I think, for people to actually use foundation models, there are three main types of workloads or problems that need to be solved. One is training these foundation models in the first place, like actually creating them. The second is fine tuning them and adapting them to your use case. And the third is serving them and actually deploying them. Okay, so Ray and Anyscale are used for all three of these workloads. Companies like OpenAI or Cohere that train large language models, or open source versions like GPT-J, do that on top of Ray. There are many startups and other businesses that fine tune, that, you know, don't want to train the large underlying foundation models, but that do want to fine tune them, do want to adapt them to their purposes, and build products around them and serve them; those are also using Ray and Anyscale for that fine tuning and that serving. And so the reason that Ray and Anyscale are important here is that, you know, building and using foundation models requires a huge scale. It requires a lot of data. It requires a lot of compute, GPUs, TPUs, other resources. And to actually take advantage of that and actually build these scalable applications, there's a lot of infrastructure that needs to happen under the hood. And so you can either use Ray and Anyscale to take care of that and manage the infrastructure and solve those infrastructure problems.
Or you can build the infrastructure and manage the infrastructure yourself, which you can do, but it's going to slow your team down. It's going to, you know, many of the businesses we work with simply don't want to be in the business of managing infrastructure and building infrastructure. They want to focus on product development and move faster. >> I know you got a keynote presentation we're going to go to in a second, but I think you hit on something I think is the real tipping point, doing it yourself, hard to do. These are things where opportunities are and the Cloud did that with data centers. Turned a data center and made it an API. The heavy lifting went away and went to the Cloud so people could be more creative and build their product. In this case, build their creativity. Is that kind of what's the big deal? Is that kind of a big deal happening that you guys are taking the learnings and making that available so people don't have to do that? >> That's exactly right. So today, if you want to succeed with AI, if you want to use AI in your business, infrastructure work is on the critical path for doing that. To do AI, you have to build infrastructure. You have to figure out how to scale your applications. That's going to change. We're going to get to the point, and you know, with Ray and Anyscale, we're going to remove the infrastructure from the critical path so that as a developer or as a business, all you need to focus on is your application logic, what you want the the program to do, what you want your application to do, how you want the AI to actually interface with the rest of your product. Now the way that will happen is that Ray and Anyscale will still, the infrastructure work will still happen. It'll just be under the hood and taken care of by Ray in Anyscale. And so I think something like this is really necessary for AI to reach its potential, for AI to have the impact and the reach that we think it will, you have to make it easier to do. 
>> And just for clarification, to point out, if you don't mind explaining the relationship of Ray and Anyscale real quick just before we get into the presentation. >> So Ray is an open source project. We created it. We were at Berkeley doing machine learning. We started Ray in order to provide a simple open source tool for building and running scalable applications. And Anyscale is the managed version of Ray; basically we will run Ray for you in the Cloud, provide a lot of tools around the developer experience and managing the infrastructure, and provide more performance and superior infrastructure. >> Awesome. I know you got a presentation on Ray and Anyscale and you guys are positioning it as the infrastructure for foundational models. So I'll let you take it away and then when you're done presenting, we'll come back, I'll probably grill you with a few questions and then we'll close it out, so take it away. >> Robert: Sounds great. So I'll say a little bit about how companies are using Ray and Anyscale for foundation models. The first thing I want to mention is just why we're doing this in the first place. And the underlying observation, the underlying trend here, and this is a plot from OpenAI, is that the amount of compute needed to do machine learning has been exploding. It's been growing at something like 35 times every 18 months. This is absolutely enormous. And other people have written papers measuring this trend and you get different numbers. But the point is, no matter how you slice and dice it, it's an astronomical rate. Now if you compare that to something we're all familiar with, like Moore's Law, which says that, you know, processor performance doubles roughly every 18 months, you can see that there's just a tremendous gap between the compute needs of machine learning applications and what you can do with a single chip, right.
So even if Moore's Law were continuing strong and, you know, doing what it used to be doing, even if that were the case, there would still be a tremendous gap between what you can do with the chip and what you need in order to do machine learning. And so given this graph, what we've seen, and what has been clear to us since we started this company, is that doing AI requires scaling. There's no way around it. It's not a nice to have, it's really a requirement. And so that led us to start Ray, which is the open source project that we started to make it easy to build these scalable Python applications and scalable machine learning applications. And since we started the project, it's been adopted by a tremendous number of companies. Companies like OpenAI, which use Ray to train their large models like ChatGPT; companies like Uber, which run all of their deep learning and classical machine learning on top of Ray; companies like Shopify or Spotify or Instacart or Lyft or Netflix, ByteDance, which use Ray for their machine learning infrastructure. Companies like Ant Group, which makes Alipay, you know, they use Ray across the board for fraud detection, for online learning, for detecting money laundering, you know, for graph processing, stream processing. Companies like Amazon, you know, run Ray at a tremendous scale, processing petabytes of data every single day. And so the project has seen just enormous adoption over the past few years. And one of the most exciting use cases is really providing the infrastructure for building, training, fine tuning, and serving foundation models. So I'll say a little bit about, you know, here are some examples of companies using Ray for foundation models. Cohere trains large language models. OpenAI also trains large language models. You can think about the workloads required there: things like supervised pre-training, also reinforcement learning from human feedback.
So this is not only the regular supervised learning, but actually more complex reinforcement learning workloads that take human input about which response to a particular question, you know, is better than a certain other response, and incorporate that into the learning. There are open source versions as well, like GPT-J, also built on top of Ray, as well as projects like Alpa coming out of UC Berkeley. So these are some examples of exciting projects and organizations training and creating these large language models and serving them using Ray. Okay, so what actually is Ray? Well, there are two layers to Ray. At the lowest level, there's the core Ray system. This is essentially low level primitives for building scalable Python applications. Things like taking a Python function or a Python class and executing them in the cluster setting. So Ray core is extremely flexible and you can build arbitrary scalable applications on top of Ray. So on top of Ray, on top of the core system, what really gives Ray a lot of its power is this ecosystem of scalable libraries. So on top of the core system you have libraries, scalable libraries for ingesting and pre-processing data, for training your models, for fine tuning those models, for hyperparameter tuning, for doing batch processing and batch inference, for doing model serving and deployment, right. And a lot of the Ray users, the reason they like Ray is that they want to run multiple workloads. They want to train and serve their models, right. They want to load their data and feed that into training. And Ray provides common infrastructure for all of these different workloads. So this is a little overview of the different components of Ray. So why do people choose to go with Ray? I think there are three main reasons. The first is the unified nature. The fact that it is common infrastructure for scaling arbitrary workloads, from data ingest to pre-processing to training to inference and serving, right.
This also includes the fact that it's future proof. AI is incredibly fast moving. And so many people, many companies that have built their own machine learning infrastructure and standardized on particular workflows for doing machine learning have found that their workflows are too rigid to enable new capabilities. If they want to do reinforcement learning, if they want to use graph neural networks, they don't have a way of doing that with their standard tooling. And so Ray, being future proof and being flexible and general, gives them that ability. Another reason people choose Ray and Anyscale is the scalability. This is really our bread and butter. This is the reason, the whole point of Ray, you know, making it easy to go from your laptop to running on thousands of GPUs, making it easy to scale your development workloads and run them in production, making it easy to scale, you know, training, to scale data ingest, pre-processing and so on. So scalability and performance, you know, are critical for doing machine learning, and that is something that Ray provides out of the box. And lastly, Ray is an open ecosystem. You can run it anywhere. You can run it on any Cloud provider. Google, you know, Google Cloud, AWS, Azure. You can run it on your Kubernetes cluster. You can run it on your laptop. It's extremely portable. And not only that, it's framework agnostic. You can use Ray to scale arbitrary Python workloads. You can use it to scale, and it integrates with, libraries like TensorFlow or PyTorch or JAX or XGBoost or Hugging Face or PyTorch Lightning, right, or Scikit-learn, or just your own arbitrary Python code. It's open source. And in addition to integrating with the rest of the machine learning ecosystem and these machine learning frameworks, you can use Ray along with all of the other tooling in the machine learning ecosystem. That's things like Weights & Biases or MLflow, right.
Or you know, different data platforms like Databricks, you know, Delta Lake or Snowflake, or tools for model monitoring, for feature stores, all of these integrate with Ray. And that's, you know, Ray provides that kind of flexibility so that you can integrate it into the rest of your workflow. And then Anyscale is the scalable compute platform that's built on top, you know, that provides Ray. So Anyscale is a managed Ray service that runs in the Cloud. And what Anyscale does is it offers the best way to run Ray. And if you think about what you get with Anyscale, there are fundamentally two things. One is about moving faster, accelerating the time to market. And you get that by having the managed service so that as a developer you don't have to worry about managing infrastructure, you don't have to worry about configuring infrastructure. It also provides, you know, optimized developer workflows. Things like easily moving from development to production, things like having the observability tooling, the debuggability to actually easily diagnose what's going wrong in a distributed application. So things like the dashboards and the other kinds of tooling for collaboration, for monitoring and so on. And then on top of that, so that's the first bucket, developer productivity, moving faster, faster experimentation and iteration. The second reason that people choose Anyscale is superior infrastructure. So this is things like, you know, cost efficiency, being able to easily take advantage of spot instances, being able to get higher GPU utilization, things like faster cluster startup times and auto scaling. Things like just overall better performance and faster scheduling. And so these are the kinds of things that Anyscale provides on top of Ray. It's the managed infrastructure. It's fast, it's like the developer productivity and velocity as well as performance. So this is what I wanted to share about Ray and Anyscale. >> John: Awesome. >> Provide that context.
But John, I'm curious what you think. >> I love it. I love the, so first of all, it's a platform because that's the platform architecture right there. So just to clarify, this is an Anyscale platform, not- >> That's right. >> Tools. So you got tools in the platform. Okay, that's key. Love that managed service. Just curious, you mentioned Python multiple times, is that because of PyTorch and TensorFlow or Python's the most friendly with machine learning or it's because it's very common amongst all developers? >> That's a great question. Python is the language that people are using to do machine learning. So it's the natural starting point. Now, of course, Ray is actually designed in a language agnostic way and there are companies out there that use Ray to build scalable Java applications. But for the most part right now we're focused on Python and being the best way to build these scalable Python and machine learning applications. But, of course, down the road there always is that potential. >> So if you're slinging Python code out there and you're watching that, you're watching this video, get on Anyscale bus quickly. Also, I just, while you were giving the presentation, I couldn't help, since you mentioned OpenAI, which by the way, congratulations 'cause they've had great scale, I've noticed in their rapid growth 'cause they were the fastest company to the number of users than anyone in the history of the computer industry, so major successor, OpenAI and ChatGPT, huge fan. I'm not a skeptic at all. I think it's just the beginning, so congratulations. But I actually typed into ChatGPT, what are the top three benefits of Anyscale and came up with scalability, flexibility, and ease of use. Obviously, scalability is what you guys are called. >> That's pretty good. >> So that's what they came up with. So they nailed it. Did you have an inside prompt training, buy it there? Only kidding. (Robert laughs) >> Yeah, we hard coded that one. 
>> But that's the kind of thing that came up really, really quickly if I asked it to write a sales document, it probably will, but this is the future interface. This is why people are getting excited about the foundational models and the large language models because it's allowing the interface with the user, the consumer, to be more human, more natural. And this clearly will be in every application in the future. >> Absolutely. This is how people are going to interface with software, how they're going to interface with products in the future. It's not just something, you know, not just a chat bot that you talk to. This is going to be how you get things done, right. How you use your web browser or how you use, you know, how you use Photoshop or how you use other products. Like you're not going to spend hours learning all the APIs and how to use them. You're going to talk to it and tell it what you want it to do. And of course, you know, if it doesn't understand it, it's going to ask clarifying questions. You're going to have a conversation and then it'll figure it out. >> This is going to be one of those things, we're going to look back at this time, Robert, and say, "Yeah, from that company, that was the beginning of that wave." And just like AWS and Cloud Computing, the folks who got in early really were in position when say the pandemic came. So getting in early is a good thing and that's what everyone's talking about is getting in early and playing around, maybe replatforming or even picking one or a few apps to refactor with some staff and managed services. So people are definitely jumping in. So I have to ask you the ROI cost question. You mentioned some of those, Moore's Law versus what's going on in the industry. When you look at that kind of scale, the first thing that jumps out at people is, "Okay, I love it. Let's go play around." But what's it going to cost me? Am I going to be tied to certain GPUs?
What's the landscape look like from an operational standpoint, from the customer? Are they locked in and the benefit was flexibility, are you flexible to handle any Cloud? What is the customers, what are they looking at? Basically, that's my question. What's the customer looking at? >> Cost is super important here and many of the companies, I mean, companies are spending a huge amount on their Cloud computing, on AWS, and on doing AI, right. And I think a lot of the advantage of Anyscale, what we can provide here is not only better performance, but cost efficiency. Because if we can run something faster and more efficiently, it can also use less resources and you can lower your Cloud spending, right. We've seen companies go from, you know, 20% GPU utilization with their current setup and the current tools they're using to running on Anyscale and getting more like 95, you know, 100% GPU utilization. That's something like a 5x improvement right there. So depending on the kind of application you're running, you know, it's a significant cost savings. We've seen companies that are, you know, processing petabytes of data every single day with Ray getting order of magnitude cost savings by switching from what they were previously doing to running their application on Ray. And when you have applications that are spending, you know, potentially $100 million a year, getting a 10x cost savings is just absolutely enormous. So these are some of the kinds of- >> Data infrastructure is super important. Again, if the customer, if you're a prospect to this and thinking about going in here, just like the Cloud, you got infrastructure, you got the platform, you got SaaS, same kind of thing's going to go on in AI. So I want to get into that, you know, ROI discussion and some of the impact with your customers that are leveraging the platform. But first I hear you got a demo. >> Robert: Yeah, so let me show you, let me give you a quick run through here.
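As a back-of-the-envelope check on the figures Robert cites (the numbers come from the conversation, not from any published Anyscale benchmark):

```python
def efficiency_factor(util_before, util_after):
    # The same work needs proportionally fewer GPU-hours as utilization rises,
    # so the ratio is a rough cost-efficiency multiplier.
    return util_after / util_before

# Going from 20% to 95% utilization is roughly the 5x improvement mentioned:
print(round(efficiency_factor(0.20, 0.95), 2))  # 4.75

def annual_savings(spend, cost_reduction_factor):
    # A 10x cost reduction on a $100M/year workload keeps $90M of it.
    return spend - spend / cost_reduction_factor

print(annual_savings(100_000_000, 10))  # 90000000.0
```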
So what I have open here is the Anyscale UI. I've started a little Anyscale Workspace. So Workspaces are the Anyscale concept for interactive development, right. So here, imagine I'm just, you want to have a familiar experience like you're developing on your laptop. And here I have a terminal. It's not on my laptop. It's actually in the cloud running on Anyscale. And I'm just going to kick this off. This is going to train a large language model, so OPT. And it's doing this on 32 GPUs. We've got a cluster here with a bunch of CPU cores, bunch of memory. And as that's running, and by the way, if I wanted to run this on instead of 32 GPUs, 64, 128, this is just a one line change when I launch the Workspace. And what I can do is I can pull up VS Code, right. Remember this is the interactive development experience. I can look at the actual code. Here it's using Ray Train to train the torch model. We've got the training loop and we're saying that each worker gets access to one GPU and four CPU cores. And, of course, as I make the model larger, this is using DeepSpeed, as I make the model larger, I could increase the number of GPUs that each worker gets access to, right. And how that is distributed across the cluster. And if I wanted to run on CPUs instead of GPUs or a different, you know, accelerator type, again, this is just a one line change. And here we're using Ray Train to train the models, just taking my vanilla PyTorch model using Hugging Face and then scaling that across a bunch of GPUs. And, of course, if I want to look at the dashboard, I can go to the Ray dashboard. There are a bunch of different visualizations I can look at. I can look at the GPU utilization. I can look at, you know, the CPU utilization here where I think we're currently loading the model and running that actual application to start the training. And some of the things that are really convenient here about Anyscale, both I can get that interactive development experience with VS Code.
You know, I can look at the dashboards. I can monitor what's going on. It feels, I have a terminal, it feels like my laptop, but it's actually running on a large cluster. And I can, with however many GPUs or other resources that I want. And so it's really trying to combine the best of having the familiar experience of programming on your laptop, but with the benefits, you know, being able to take advantage of all the resources in the Cloud to scale. And it's like when, you know, you're talking about cost efficiency. One of the biggest reasons that people waste money, one of the silly reasons for wasting money is just forgetting to turn off your GPUs. And what you can do here is, of course, things will auto terminate if they're idle. But imagine you go to sleep, I have this big cluster. You can turn it off, shut off the cluster, come back tomorrow, restart the Workspace, and you know, your big cluster is back up and all of your code changes are still there. All of your local file edits. It's like you just closed your laptop and came back and opened it up again. And so this is the kind of experience we want to provide for our users. So that's what I wanted to share with you. >> Well, I think that whole, couple of things, lines of code change, single line of code change, that's game changing. And then the cost thing, I mean human error is a big deal. People pass out at their computer. They've been coding all night or they just forget about it. I mean, and then it's just like leaving the lights on or your water running in your house. It's just, at the scale that it is, the numbers will add up. That's a huge deal. So I think, you know, compute back in the old days, there's no compute. Okay, it's just compute sitting there idle. But you know, data cranking the models is doing, that's a big point. 
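The resource math in the demo, 32 workers each pinned to one GPU and four CPU cores, and the "one line change" to rescale, can be sketched as below. In Ray Train the real knob is the scaling configuration passed to the trainer; the helper function here is purely illustrative, not Ray's API.

```python
def cluster_shape(num_workers, gpus_per_worker=1, cpus_per_worker=4):
    # Total resources the cluster must provide for the training job,
    # mirroring the demo's per-worker allocation of 1 GPU and 4 CPU cores.
    return {"GPU": num_workers * gpus_per_worker,
            "CPU": num_workers * cpus_per_worker}

# Rescaling from 32 workers to 128 is a single-argument change:
print(cluster_shape(32))   # {'GPU': 32, 'CPU': 128}
print(cluster_shape(128))  # {'GPU': 128, 'CPU': 512}
```

Giving each worker more GPUs for a larger model, or switching accelerator type, is likewise just a change to these per-worker parameters, which is what makes the "one line change" claim in the demo plausible.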
>> Another thing I want to add there about cost efficiency is that we make it really easy, if you're running on Anyscale, to use spot instances and these preemptable instances that can just be significantly cheaper than the on-demand instances. And so when we see our customers go from what they're doing before to using Anyscale and they go from not using these spot instances 'cause they don't have the infrastructure around it, the fault tolerance to handle the preemption and things like that, to being able to just check a box and use spot instances and save a bunch of money. >> You know, this was my whole, my feature article at re:Invent last year when I met with Adam Selipsky, this next gen Cloud is here. I mean, it's not auto scale, it's infrastructure scale. It's agility. It's flexibility. I think this is where the world needs to go. Almost what DevOps did for Cloud and what you were showing me that demo had this whole SRE vibe. And remember Google had site reliability engineers to manage all those servers. This is kind of like an SRE vibe for data at scale. I mean, a similar kind of order of magnitude. I mean, I might be a little bit off base there, but how would you explain it? >> It's a nice analogy. I mean, what we are trying to do here is get to the point where developers don't think about infrastructure. Where developers only think about their application logic. And where businesses can do AI, can succeed with AI, and build these scalable applications, but they don't have to build, you know, an infrastructure team. They don't have to develop that expertise. They don't have to invest years in building their internal machine learning infrastructure. They can just focus on the Python code, on their application logic, and run the stuff out of the box. >> Awesome. Well, I appreciate the time. Before we wrap up here, give a plug for the company. I know you got a couple websites. Again, go, Ray's got its own website. You got Anyscale.
You got an event coming up. Give a plug for the company looking to hire. Put a plug in for the company. >> Yeah, absolutely. Thank you. So first of all, you know, we think AI is really going to transform every industry and the opportunity is there, right. We can be the infrastructure that enables all of that to happen, that makes it easy for companies to succeed with AI, and get value out of AI. Now we have, if you're interested in learning more about Ray, Ray has been emerging as the standard way to build scalable applications. Our adoption has been exploding. I mentioned companies like OpenAI using Ray to train their models. But really across the board companies like Netflix and Cruise and Instacart and Lyft and Uber, you know, just among tech companies. It's across every industry. You know, gaming companies, agriculture, you know, farming, robotics, drug discovery, you know, FinTech, we see it across the board. And all of these companies can get value out of AI, can really use AI to improve their businesses. So if you're interested in learning more about Ray and Anyscale, we have our Ray Summit coming up in September. This is going to highlight a lot of the most impressive use cases and stories across the industry. And if your business, if you want to use LLMs, you want to train these LLMs, these large language models, you want to fine tune them with your data, you want to deploy them, serve them, and build applications and products around them, give us a call, talk to us. You know, we can really take the infrastructure piece, you know, off the critical path and make that easy for you. So that's what I would say. And, you know, like you mentioned, we're hiring across the board, you know, engineering, product, go-to-market, and it's an exciting time. >> Robert Nishihara, co-founder and CEO of Anyscale, congratulations on a great company you've built and continuing to iterate on and you got growth ahead of you, you got a tailwind. I mean, the AI wave is here. 
I think OpenAI and ChatGPT, a customer of yours, have really opened up the mainstream visibility into this new generation of applications, user interface, role of data, large scale, how to make that programmable, so we're going to need that infrastructure. So thanks for coming on this season three, episode one of the ongoing series of the hot startups. In this case, this episode is the top startups building foundational model infrastructure for AI and ML. I'm John Furrier, your host. Thanks for watching. (upbeat music)
Opening Panel | Generative AI: Hype or Reality | AWS Startup Showcase S3 E1
(light airy music) >> Hello, everyone, welcome to theCUBE's presentation of the AWS Startup Showcase, AI and machine learning. "Top Startups Building Generative AI on AWS." This is season three, episode one of the ongoing series covering the exciting startups from the AWS ecosystem, talking about AI machine learning. We have three great guests: Bratin Saha, Vice President of Machine Learning and AI Services at Amazon Web Services; Tom Mason, the CTO of Stability AI; and Aidan Gomez, CEO and co-founder of Cohere. Two practitioners doing startups and AWS. Gentlemen, thank you for opening up this session, this episode. Thanks for coming on. >> Thank you. >> Thank you. >> Thank you. >> So the topic is hype versus reality. So I think we're all on the reality is great, hype is great, but the reality's here. I want to get into it. Generative AI's got all the momentum, it's going mainstream, it's kind of come out of the behind the ropes, it's now mainstream. We saw the success of ChatGPT, opens up everyone's eyes, but there's so much more going on. Let's jump in and get your early perspectives on what should people be talking about right now? What are you guys working on? We'll start with AWS. What's the big focus right now for you guys as you come into this market that's highly active, highly hyped up, but people see value right out of the gate? >> You know, we have been working on generative AI for some time. In fact, last year we released CodeWhisperer, which is about using generative AI for software development and a number of customers are using it and getting real value out of it. So generative AI is now something that's mainstream that can be used by enterprise users. And we have also been partnering with a number of other companies. So, you know, stability.ai, we've been partnering with them a lot. We want to be partnering with other companies as well.
And seeing how we do three things, you know, first is providing the most efficient infrastructure for generative AI. And that is where, you know, things like Trainium, things like Inferentia, things like SageMaker come in. And then next is the set of models and then the third is the kind of applications like CodeWhisperer and so on. So, you know, it's early days yet, but clearly there's a lot of amazing capabilities that will come out and something that, you know, our customers are starting to pay a lot of attention to. >> Tom, talk about your company and what your focus is and why the Amazon Web Services relationship's important for you? >> So yeah, we're primarily committed to making incredible open source foundation models and obviously Stable Diffusion's been our kind of first big model there, which we trained all on AWS. We've been working with them over the last year and a half to develop, obviously a big cluster, and bring all that compute to training these models at scale, which has been a really successful partnership. And we're excited to take it further this year as we develop the commercial strategy of the business and build out, you know, the ability for enterprise customers to come and get all the value from these models that we think they can get. So we're really excited about the future. We've got a hugely exciting pipeline for this year with new modalities and video models and wonderful things and trying to solve images for once and for all and get the kind of general value and value proposition correct for customers. So it's a really exciting time and very honored to be part of it. >> It's great to see some of your customers doing so well out there. Congratulations to your team. Appreciate that. Aidan, let's get into what you guys do. What does Cohere do? What are you excited about right now?
We're extremely focused on solving the issues with adoption for enterprise. So it's great that you can make a super flashy demo for consumers, but it takes a lot to actually get it into billion user products and large global enterprises. So about six months ago, we released our command models, which are some of the best that exist for large language models. And in December, we released our multilingual text understanding models and that's on over a hundred different languages and it's trained on, you know, authentic data directly from native speakers. And so we're super excited to continue pushing this into enterprise and solving those barriers for adoption, making this transformation a reality. >> Just real quick, while I got you there on the new products coming out. Where are we in the progress? People see some of the new stuff out there right now. There's so much more headroom. Can you just scope out in your mind what that looks like? Like from a headroom standpoint? Okay, we see ChatGPT. "Oh yeah, it writes my papers for me, does some homework for me." I mean okay, yawn, maybe people say that, (Aidan chuckles) people excited or people are blown away. I mean, it's helped theCUBE out, it helps me, you know, feed up a little bit from my write-ups but it's not always perfect. >> Yeah, at the moment it's like a writing assistant, right? And it's still super early in the technologies trajectory. I think it's fascinating and it's interesting but its impact is still really limited. I think in the next year, like within the next eight months, we're going to see some major changes. You've already seen the very first hints of that with stuff like Bing Chat, where you augment these dialogue models with an external knowledge base. So now the models can be kept up to date to the millisecond, right? Because they can search the web and they can see events that happened a millisecond ago. 
But that's still limited in the sense that when you ask the question, what can these models actually do? Well they can just write text back at you. That's the extent of what they can do. And so the real project, the real effort, that I think we're all working towards is actually taking action. So what happens when you give these models the ability to use tools, to use APIs? What can they do when they can actually affect change out in the real world, beyond just streaming text back at the user? I think that's the really exciting piece. >> Okay, so I wanted to tee that up early in the segment 'cause I want to get into the customer applications. We're seeing early adopters come in, using the technology because they have a lot of data, they have a lot of large language model opportunities and then there's a big fast follower wave coming behind it. I call that the people who are going to jump in the pool early and get into it. They might not be advanced. Can you guys share what customer applications are being used with large language and vision models today and how they're using it to transform on the early adopter side, and how is that a tell sign of what's to come? >> You know, one of the things we have been seeing both with the text models that Aidan talked about as well as the vision models that stability.ai does, Tom, is customers are really using it to change the way you interact with information. You know, one example of a customer that we have, is someone who's kind of using that to query customer conversations and ask questions like, you know, "What was the customer issue? How did we solve it?" And trying to get those kinds of insights that was previously much harder to do. And then of course software is a big area. You know, generating software, making that, you know, just deploying it in production. Those have been really big areas that we have seen customers start to do. 
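The "augment with an external knowledge base" pattern Aidan describes for keeping models current can be sketched in a few lines: retrieve fresh context first, then prepend it to the prompt the model sees. The keyword lookup below is a toy stand-in for a real search engine or vector index, and the prompt format is purely illustrative.

```python
def retrieve(query, knowledge_base):
    # Toy keyword overlap, standing in for web search or a vector store.
    words = set(query.lower().split())
    return [doc for doc in knowledge_base if words & set(doc.lower().split())]

def augmented_prompt(query, knowledge_base):
    # Prepend whatever was retrieved so the model answers from fresh context
    # rather than from stale training data.
    context = "\n".join(retrieve(query, knowledge_base))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

kb = ["Ray Summit takes place in September.",
      "Anyscale is a managed Ray service in the cloud."]
print(augmented_prompt("When is Ray Summit?", kb))
```

In a production system the retrieval step is what lets the model's answers stay up to date "to the millisecond," since only the index, not the model, has to be refreshed.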
You know, looking at documentation, like instead of you know, searching for stuff and so on, you know, you just have an interactive way, in which you can just look at the documentation for a product. You know, all of this goes to where we need to take the technology. One of which is, you know, the models have to be there but they have to work reliably in a production setting at scale, with privacy, with security, and you know, making sure all of this is happening, is going to be really key. That is what, you know, we at AWS are looking to do, which is work with partners like stability and others and in the open source and really take all of these and make them available at scale to customers, where they work reliably. >> Tom, Aidan, what's your thoughts on this? Where are customers landing on this first use cases or set of low-hanging fruit use cases or applications? >> Yeah, so I think like the first group of adopters that really found product market fit were the copywriting companies. So one great example of that is HyperWrite. Another one is Jasper. And so for Cohere, that's the tip of the iceberg, like there's a very long tail of usage from a bunch of different applications. HyperWrite is one of our customers, they help beat writer's block by drafting blog posts, emails, and marketing copy. We also have a global audio streaming platform, which is using us the power of search engine that can comb through podcast transcripts, in a bunch of different languages. Then a global apparel brand, which is using us to transform how they interact with their customers through a virtual assistant, two dozen global news outlets who are using us for news summarization. So really like, these large language models, they can be deployed all over the place into every single industry sector, language is everywhere. It's hard to think of any company on Earth that doesn't use language. So it's, very, very- >> We're doing it right now. We got the language coming in. >> Exactly. 
>> We'll transcribe this puppy. All right. Tom, on your side, what do you see the- >> Yeah, we're seeing some amazing applications of it and you know, I guess that's partly been, because of the growth in the open source community and some of these applications have come from there that are then triggering this secondary wave of innovation, which is coming a lot from, you know, controllability and explainability of the model. But we've got companies like, you know, Jasper, which Aidan mentioned, who are using stable diffusion for image generation in block creation, content creation. We've got Lensa, you know, which exploded, and is built on top of stable diffusion for fine tuning so people can bring themselves and their pets and you know, everything into the models. So we've now got fine tuned stable diffusion at scale, which is democratized, you know, that process, which is really fun to see your Lensa, you know, exploded. You know, I think it was the largest growing app in the App Store at one point. And lots of other examples like NightCafe and Lexica and Playground. So seeing lots of cool applications. >> So much applications, we'll probably be a customer for all you guys. We'll definitely talk after. But the challenges are there for people adopting, they want to get into what you guys see as the challenges that turn into opportunities. How do you see the customers adopting generative AI applications? For example, we have massive amounts of transcripts, timed up to all the videos. I don't even know what to do. Do I just, do I code my API there. So, everyone has this problem, every vertical has these use cases. What are the challenges for people getting into this and adopting these applications? Is it figuring out what to do first? Or is it a technical setup? Do they stand up stuff, they just go to Amazon? What do you guys see as the challenges? 
>> I think, you know, the first thing is coming up with where you think you're going to reimagine your customer experience by using generative AI. You know, we talked about Ada, and Tom talked about a number of these ones and you know, you pick up one or two of these, to get that robust. And then once you have them, you know, we have models and we'll have more models on AWS, these large language models that Aidan was talking about. Then you go in and start using these models and testing them out and seeing whether they fit in use case or not. In many situations, like you said, John, our customers want to say, "You know, I know you've trained these models on a lot of publicly available data, but I want to be able to customize it for my use cases. Because, you know, there's some knowledge that I have created and I want to be able to use that." And then in many cases, and I think Aidan mentioned this. You know, you need these models to be up to date. Like you can't have it staying. And in those cases, you augmented with a knowledge base, you know you have to make sure that these models are not hallucinating. And so you need to be able to do the right kind of responsible AI checks. So, you know, you start with a particular use case, and there are a lot of them. Then, you know, you can come to AWS, and then look at one of the many models we have and you know, we are going to have more models for other modalities as well. And then, you know, play around with the models. We have a playground kind of thing where you can test these models on some data and then you can probably, you will probably want to bring your own data, customize it to your own needs, do some of the testing to make sure that the model is giving the right output and then just deploy it. And you know, we have a lot of tools. >> Yeah. >> To make this easy for our customers. >> How should people think about large language models? 
Because do they think about it as something that they tap into with their IP or their data? Or is it a large language model that they apply into their system? Is the interface that way? What does the interaction look like? >> In many situations, you can use these models out of the box. But in most of the other situations, you will want to customize them with your own data or with your own expectations. So the typical use case would be, you know, these models are exposed through APIs. So the typical use case would be, you know, you're using these APIs a little bit for testing and getting familiar, and then there will be an API that will allow you to train this model further on your data. So you use that API, you know, and make sure you augment it with the knowledge base. So then you use those APIs to customize the model and then just deploy it in an application. You know, like Tom was mentioning, a number of companies are using these models. So once you have it, then, you know, you again use an endpoint API and use it in an application. >> All right, I love the example. I want to ask Tom and Aidan, because, like, most of my experience with Amazon Web Services in 2007 was: I would stand up an EC2 instance, put my code on there, play around, and if it didn't work out, I'd shut it down. Is that a similar dynamic we're going to see with machine learning, where developers just kind of log in and stand up infrastructure and play around and then have a cloud-like experience? >> So I can go first. So I mean, we obviously, with AWS, work really closely with the SageMaker team, a fantastic platform there for ML training and inference. And you know, going back to your point earlier, you know, where the data is, is hugely important for companies. For many companies, bringing their models to their data in AWS, on-premise for them, is hugely important. Having the models be, you know, open source makes them explainable and transparent to the adopters of those models.
So, you know, we are really excited to work with the SageMaker team over the coming year to bring companies to that platform and make the most of our models. >> Aidan, what's your take on developers? Do they just need to have a team in place if they want to interface with you guys? Let's say, can they start learning? What do they have to do to set up? >> Yeah, so I think for Cohere, our product makes it much, much easier for people to get started and start building. It solves a lot of the productionization problems. But of course with SageMaker, like Tom was saying, I think that lowers the barrier even further, because it solves problems like data privacy. So I want to underline what Bratin was saying earlier: when you're fine tuning or when you're using these models, you don't want your data being incorporated into someone else's model. You don't want it being used for training elsewhere. And so the ability to solve, for enterprises, that data privacy and that security guarantee has been hugely important for Cohere, and that's very easy to do through SageMaker. >> Yeah. >> But the barriers to using this technology are coming down super quickly. And so for developers, it's just becoming completely intuitive. I love this, there's this quote from Andrej Karpathy. He was saying, "It really wasn't on my 2022 list of things to happen that English would become, you know, the most popular programming language." And so the barrier is coming down- >> Yeah. >> Super quickly, and it's exciting to see. >> It's going to be awesome for all the companies here, and then we'll do more. We're probably going to see an explosion of startups; we're already seeing that, the ecosystem maps, the landscape maps are happening. So this is happening, and I'm convinced it's not yesterday's chatbot, it's not yesterday's AIOps. It's a whole other ballgame. So I have to ask you guys the final question before we kick off the companies showcasing here.
How do you guys gauge success of generative AI applications? Is there a lens to look through and say, okay, how do I see success? It could be just getting a win, or is it a bigger picture? Bratin, we'll start with you. How do you gauge success for generative AI? >> You know, ultimately it's about bringing business value to our customers, and making sure that those customers are able to reimagine their experiences by using generative AI. Now the way to get there is, of course, to deploy those models in a safe, effective manner, and ensuring that all of the robustness and the security guarantees and the privacy guarantees are all there. And we want to make sure that this transitions from something that's great demos to actual at-scale products, which means making them work reliably all of the time, not just some of the time. >> Tom, what's your gauge for success? >> Look, I think we're seeing a completely new form of ways to interact with data, to make data intelligent, and to directly bring new revenue streams into business. So if businesses can use our models to leverage that and generate completely new revenue streams, and ultimately bring incredible new value to their customers, then that's fantastic. And we hope we can power that revolution. >> Aidan, what's your take? >> Yeah, reiterating Bratin and Tom's point, I think that value in the enterprise and value in market is, like, a huge, you know, it's the goal that we're striving towards. I also think about, you know, the value to consumers and actual users, and the transformation of the surface area of technology to create experiences like ChatGPT that are magical. It's the first time in human history we've been able to talk to something compelling that's not a human. I think that in itself is just extraordinary and so exciting to see. >> It really brings up a whole other category of markets. B2B, B2C, it's B2D, business to developer.
Because I think this is kind of the big trend: the consumers have to win. The developers coding the apps, it's a whole other sea change. Reminds me how everyone used the "Moneyball" movie as an example during the big data wave, then, you know, the value of data. There's a scene in "Moneyball" at the end, where Billy Beane's getting the offer from the Red Sox, and the Red Sox owner says, "If every team's not rebuilding their teams based upon your model, they'll be dinosaurs." I think that's the same with AI here. Every company will need to think about their business model and how they operate with AI. So it'll be a great run. >> Completely agree. >> It'll be a great run. >> Yeah. >> Aidan, Tom, thank you so much for sharing your experiences at your companies, and congratulations on your success, and it's just the beginning. And Bratin, thanks for coming on representing AWS. And thank you, appreciate what you do. Thank you. >> Thank you, John. Thank you, Aidan. >> Thank you John. >> Thanks so much. >> Okay, let's kick off season three, episode one. I'm John Furrier, your host. Thanks for watching. (light airy music)
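The adoption path the panel describes, trying a hosted model out of the box, customizing it on your own data through a training API, and then calling a deployed endpoint, can be sketched with a toy stand-in. None of this is a real AWS, SageMaker, or Cohere API; every class and method name below is invented purely for illustration, assuming only the three stages Bratin walks through.

```python
# A toy sketch of the workflow described in the panel: try a hosted model out
# of the box, customize it on your own data, then call it through an endpoint.
# The "model" here is a stand-in dictionary, not a real AWS or Cohere API;
# every class and method name is hypothetical, for illustration only.

class ToyHostedModel:
    """Stand-in for a hosted foundation model exposed through an API."""

    def __init__(self):
        # "Publicly available data" the model was pretrained on.
        self.knowledge = {"capital of france": "Paris"}

    def generate(self, prompt: str) -> str:
        # Out-of-the-box behavior: answer only from pretrained knowledge.
        return self.knowledge.get(prompt.lower(), "I don't know.")

    def fine_tune(self, examples: dict) -> None:
        # Customization step: fold in the customer's own data.
        self.knowledge.update({k.lower(): v for k, v in examples.items()})


def deploy_endpoint(model: ToyHostedModel):
    # Deployment step: wrap the customized model behind a callable "endpoint".
    def endpoint(prompt: str) -> str:
        return model.generate(prompt)
    return endpoint


model = ToyHostedModel()
print(model.generate("capital of France"))   # answers from pretrained data
model.fine_tune({"our Q3 revenue": "$12M"})  # customer-specific data
endpoint = deploy_endpoint(model)
print(endpoint("our Q3 revenue"))            # customized answer via endpoint
```

The three calls mirror the three stages discussed above: out-of-the-box generation through an API, fine-tuning on your own data, and calling the customized model behind a deployed endpoint.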
Madhura Maskasky, Platform9 | International Women's Day
(bright upbeat music) >> Hello and welcome to theCUBE's coverage of International Women's Day. I'm your host, John Furrier, here in the Palo Alto, California studio, and remoting in is a great guest, CUBE alumni, co-founder, technical co-founder, and she's also the VP of Product at Platform9 Systems. It's a company pioneering Kubernetes infrastructure, been doing it for a long, long time. Madhura Maskasky, thanks for coming on theCUBE. Appreciate you. Thanks for coming on. >> Thank you for having me. Always exciting. >> So I always... I love interviewing you for many reasons. One, you're super smart, but also you're a co-founder, a technical co-founder, so entrepreneur, VP of Product. It's hard to do startups. (John laughs) Okay, so everyone who's started a company knows how hard it is. It really is, and it's rewarding too when you're successful. So I want to get your thoughts on what it's like being an entrepreneur, women in tech, some things you've done along the way. Let's get started. How did you get into your career in tech, and what made you want to start a company? >> Yeah, so, you know, I got into tech long, long before I decided to start a company. And back when I got into tech it was very clear to me as a direction for my career that I was never going to start a business. I was very explicit about that, because my father was an entrepreneur and I'd seen how rough the journey can be. And then my brother was also, and is, an entrepreneur. And I think with both of them I'd seen the ups and downs, and I had decided to myself, and shared with my family, that I really wanted a very well-structured sort of job at a large-company type of path for my career. I think the tech path, tech was interesting to me, not because I was interested in programming, et cetera at that time, to be honest. When I picked computer science as a major for myself, it was because most of what you would consider, I guess, most of the cool students were picking that as a major, let's just say that.
And it sounded very interesting and cool. A lot of people were doing it, and that was sort of the top, top choice for people, and I decided to follow along. But I did discover, after I picked computer science as my major, I remember when I started learning C++ the first time, when I got exposure to it, it was just like a light bulb clicking in my head. I just absolutely loved the language, the lower-level nature, the power of it, and what you can do with it, the algorithms. So I think it ended up being a really good fit for me. >> Yeah, so it clicked for you. You tried it, and all the cool kids were doing it. I mean, I can relate, I did the same thing. Next big thing is computer science, you got to be in there, got to be smart. And then you get hooked on it. >> Yeah, exactly. >> What was the next level? Did you find any blockers in your way? Obviously male-dominated, it must have been a lot of... How many females were in your class? What was the ratio at that time? >> Yeah, so the ratio was pretty, pretty, I would say, bleak when it comes to women to men. I think computer science at that time was still probably better compared to some of the other majors, like mechanical engineering, where I remember I had one friend, she was the single girl in an entire class of at least 120, 130 students or so. So the ratio was better for us. I think there were maybe 20, 25 girls in our class. It was a large class, and the number of men was maybe three X or four X the number of women. So relatively better. Yeah. >> How about the job when you got into the structured big company? How did that go? >> Yeah, so, you know, I think that was a pretty smooth path, I would say, after, you know, you graduated from undergrad to grad school, and then when I got into Oracle first and VMware, the ratios at both companies were still, you know, pretty off.
And I think they still are, to a very large extent, in this industry. But I think this industry, in my experience, does a fantastic job of, you know, bringing everybody in and kind of embracing them and treating them at the same level. That was definitely my experience. And so that makes it very easy for self-confidence, for setting up a path for yourself to thrive. So that was it. >> Okay, so you got an undergraduate degree, okay, in computer science, and a master's from Stanford in databases and distributed systems. >> That's right. >> So two degrees. Was that part of your pathway, or did you just decide, "I want to go right into school?" Did they go right after each other? How did that work out? >> Yeah, so when I went into school, undergrad, there was no special major and I didn't quite know if I liked a particular subject or set of subjects or not. Even through grad school, first year, it wasn't clear to me, but I think in second year I did start realizing that in general I was a fan of backend systems. I was never a front-end person. The backend distributed systems really were of interest to me, because there are a lot of complex problems to solve, and especially databases and large-scale distributed systems design in the context of database systems, you know, really started becoming a topic of interest for me. And I think luckily enough, at Stanford there were just fantastic professors, like Mendel Rosenblum, who offered the operating systems class there, then started VMware, and later on I was able to join the company. I took his class while at school and it was one of the most fantastic classes I've ever taken. So they really had, and probably I think still do, a fantastic curriculum when it comes to distributed systems. And I think that probably helped stoke that interest. >> How do you talk to the younger girls out there, in elementary school and beyond? What's the advice as they start to get into computer science, which is changing and still evolving?
There's backend, there's front-end, there's AI, there's data science, there's no code, low code, there's cloud. What's your advice when they say, what's the playbook? >> Yeah, so I think two things I always say, and I share this with anybody who's looking to get into computer science, or engineering for that matter, right? I think one is that it's, you know, it's important to not worry about what that end specialization is going to be, whether it's AI or databases or backend or front-end. It does naturally evolve, and you lend yourself to a path where you will understand, you know, which systems, which aspects you like better. But it's very critical to start with getting the fundamentals well, right? Meaning all of the key coursework around algorithms, systems design, architecture, networking, operating systems. I think it is just so crucial to understand those well, even though at times you may question whether this is ever going to be relevant and useful to you later on in your career. It really does end up helping in ways beyond, you know, what you can describe. It makes you a much better engineer. So I think that is the most important aspect of, you know, I would think, any engineering stream, but definitely true for computer science. Because there's also been a trend more recently, I think, which I'm not a big fan of, of sort of limited-scope learning, which is you decide early on that you're going to be, let's say, a front-end engineer, which is fine, you know. Understanding that is great, but I don't think it's ideal to let that limit the scope of your learning when you are in your undergrad phase or grad school. Because later on it comes back to sort of bite you, in terms of you not being able to completely understand how the systems work.
You got 5G, Mobile World Congress recently happened, you've got now all kinds of IoT devices out there, IP devices at the edge. Distributed computing is only getting more distributed. >> That's right. Yeah, that's exactly right. But the other thing that also happens in computer science is that the abstraction layers keep raising things up and up and up. Where even if you're operating in a language like Java, which, you know, during some of my time programming there was a period when it was popular, it already abstracts you so far away from the underlying system. So it can become very easy, if you're doing, you know, JavaScript or UI programming, to really have no understanding of what's happening behind the scenes. And I think that can be pretty difficult. >> Yeah. It's easy to lean in and rely too heavily on the abstractions. I want to get your thoughts on blockers. In your career, have you had situations where it's like, "Oh, you're a woman, okay, seat at the table, sit on the side." Or maybe people misunderstood your role. How did you deal with that? Did you have any of that? >> Yeah. So, you know, I think... So there's something really kind of personal to me, which I like to share a few times, which I believe in pretty strongly. And which is, for me, sort of my personal growth began at a very early phase, because of my dad, and he passed away in 2012, but throughout the time when I was growing up, I was his special little girl. And every little thing that I did, it could be a simple test, you know, not very meaningful, but the genuine pride and pleasure that he felt out of me getting great scores in those tests, et cetera, I could see that in him, and then I wanted to please him. And through him, I think I built that confidence in myself that I am good at things and I can do good. And I think that just set the building blocks for me for the rest of my life, right?
So, I believe very strongly that, you know, yes, there are occasions of unfair treatment, et cetera, but for the most part, it comes from within. And if you are able to be a confident person who is kind of leveled and understands and believes in your capabilities, then for the most part, the right things happen around you. So, I believe very strongly in that kind of grounding and in finding a source to get that for yourself. And I think that many women suffer from the biggest challenge, which is not having enough self-confidence. And even, you know, with everything that I said, I've myself felt that, experienced that a few times. And then there's a methodical way to get around it. There are processes to, you know, explain to yourself that that's actually not true, that's a fake feeling. So, you know, I think that is the most important aspect for women. >> I love that. Get the confidence. Find the source for the confidence. We've also been hearing about curiosity and building; you mentioned engineering earlier, love that term. Engineering something, like building something. Curiosity, engineering, confidence. This brings me to my next question for you. What do you think the key skills and qualities are needed to succeed in a technical role? And how do you develop and maintain those skills over time? >> Yeah, so I think that it is so critical that you love the technology that you are part of. It is just so important. I mean, I remember as an example, at one point with one of my buddies before we started Platform9, one of my buddies, he's also a fantastic computer scientist from VMware and he loves video games. And so he said, "Hey, why don't we try to, you know, hack up a video game and see if we can take it somewhere?" And so, it sounded cool to me. And then so we started doing things, but you know, something I realized very quickly is that I as a person, I absolutely hate video games. I've never liked them. I don't think that's ever going to change.
And so I was miserable. You know, I was trying to understand what's going on, how to build these systems, but I was not enjoying it. So, I'm glad that I decided to not pursue that. So it is just so important that you enjoy whatever aspect of technology you decide to associate yourself with. I think that takes away 80, 90% of the work. And then I think it's important to inculcate a level of discipline that you are not going to get sort of... You're not going to get jaded or, you know, continue with the happy path when doing the same things over and over again, but you're not necessarily challenging yourself, or pushing yourself, or putting yourself in uncomfortable situations. I think a combination of those typically, I think, works pretty well in any technical career. >> That's great advice there. I think trying things when you're younger, or even just for play, to understand whether to abandon that path is just as important as finding a good path, because at least you know, and that skews the odds in favor of better choices. Kind of like probability in math. So, great call-out there. So I have to ask you the next question, which is, how do you keep up to date given all the changes? You're in the middle of a world where you've seen firsthand the change of the past 10 years, from OpenStack to now. Remember those days when I first interviewed you at OpenStack, I think it was 2012 or something like that. Maybe 10 years ago. So much has changed. How do you keep up with technologies in your field, and what resources do you rely on for personal development?
>> Yeah, so I think when it comes to, you know, the field and what we are doing, for example, I think one of the most important aspects, and you know, I am a product manager, and this is something I insist that all the other product managers in our team also do, is that you have to spend 50% of your time talking to prospects, customers, leads, and through those conversations they do you a huge favor, in that they make you aware of the other things that they're keeping an eye on, as long as you're doing the right job of asking the right questions and not just, you know, listening in. So I think that, to me, ends up being one of the biggest sources where you get tidbits of information, new things, et cetera, and then you pursue them. To me, that has worked as a very effective source. And then the second is, you know, reading and keeping up with all of the publications. You guys, you know, create a lot of great material, you interview a lot of people, so making sure you are watching those. For us, you know, there's a ton of activity; new projects keep coming along every few months. So keeping up with that, listening to podcasts around those topics, all of that helps. But I think the first one goes a big way in terms of being aware of what matters to your customers. >> Awesome. Let me ask you a question. What's the most rewarding aspect of your job right now? >> So, I think there are many. So I think I love... I've come to realize that I love, you know, the high that you get out of being an entrepreneur, independent of, you know... In terms of success and failure, there are always ups and downs as an entrepreneur, right? But there is this... There's something really alluring about being able to, you know, define, you know, the path of your products in a way that can potentially impact, you know, a number of companies that'll consume your products, and the employees that work with you.
So that, I think, to me, has always been the most satisfying part; it's what kept me going. I think that is probably first and foremost. And then the projects. You know, there are always new exciting things that we are working on. Even just today, there are certain projects we are working on that I'm super excited about. So I think it's those two things. >> So now, we didn't get into how you started. You said you didn't want to do a startup, and you got the big company. Your dad, your brother were entrepreneurs. How did you get into it? >> Yeah, so, you know, it was kind of surprising to me as well, but I think I reached a point at VMware, after spending about eight years or so, where I had definitely plateaued. And I could have pushed myself by switching to a completely different company or a different organization within VMware. And I was trying all of those paths, interviewed at different companies, et cetera, but nothing felt different enough. And then I think I was very, very fortunate in that my co-founders, Sirish Raghuram, Roopak Parikh, you know, Bich, you've met them, were kind of all at the same point in their careers, independently, at the same time. And so we would all eat lunch together at VMware 'cause we were on the same team, and then we just started brainstorming on different ideas during lunchtime. And that's kind of how... And we did that almost for a year. So by the time that year-long period went by, at the end it felt like the most logical, natural next step to leave our jobs and to, you know, start off something together. But I think I wouldn't have done that had it not been for my co-founders. >> So you had comfort with the team, as you knew each other at VMware, but you were kind of a little early, (laughing) you had a vision. It's kind of playing out now. How do you feel right now as the wave is hitting? Distributed computing, microservices, Kubernetes, I mean, stuff you guys did and were doing.
I mean, it didn't play out exactly, but directionally you were right on the line there. How do you feel? >> Yeah. You know, I think that's kind of the challenge and the fun part of the startup journey, right? Which is, you can never predict how things are going to go. When we kicked off, we thought that OpenStack was going to really take over the infrastructure management space, and things went differently, but things are going that way now with Kubernetes and distributed infrastructure. And so I think it's been interesting, and every path that you take that ends up not being successful teaches you so much more, right? So I think it's been a very interesting journey. >> Yeah, and I think the cloud, certainly AWS, hit that growth right at 2013 through '17 and kind of sucked all the oxygen out. But now it reverts back to this abstraction layer that essentially makes things look like private clouds, but they're just essentially DevOps. It's cloud operations, kind of the same thing. >> Yeah, absolutely. And then with the edge, things are becoming way more distributed, where having a single large cloud provider is becoming even less relevant in that space, and having kind of the central SaaS-based management model, which is what we pioneered, like you said, we were ahead of the game at that time, is becoming sort of the most obvious choice now. >> Now you look back at your time at Stanford, distributed systems, again, they have a world-class program there, neural networks, you name it. It's really, really awesome. As well as Cal Berkeley; the two were always in debates with each other about who's better. But that's a separate interview. Now you've got the edge. What are some of the distributed computing challenges right now, with the distributed edge coming online, industrial 5G, data? What do you see as some of the key areas to solve from a problem-statement standpoint with edge, as cloud goes on-premises to essentially data center at the edge, apps coming over the top, AI-enabled?
What's your take on that? >> Yeah, so I think... And there are different flavors of edge, and the one that we focus on is, you know, what we call thick edge, which is where you have this problem of managing thousands of, as we call them, micro data centers, rather than managing maybe a few tens or hundreds of large data centers, where the problem just completely shifts on its head, right? And I think it is still an unsolved problem today, where whether you are a retailer or a telecommunications vendor, et cetera, managing your footprint of tens of thousands of stores as a retailer is solved in a very archaic way today, because the tool set, the traditional management tooling that's designed to manage, let's say, your data centers, gets retrofitted to manage these environments, and it's kind of a (indistinct), you know, round-hole kind of situation. So I think the topmost challenges are being able to manage this large footprint of micro data centers in the most effective way, right? Where you have latency solved, you have the issue of a small footprint of resources at thousands of locations, and how do you fit in your containerized or virtualized or other workloads in the most effective way? To have that solved, you know, you need to have the security aspects around these environments. So there are a number of challenges that kind of go hand in hand, like what is the most effective storage which, you know, can still be deployed in that compact environment? And then cost becomes a related point. >> Costs are huge, 'cause if you move data, you're going to have cost. If you move compute, it's not as much. If you have an operating system concept, is the data stateful or stateless? These are huge problems. This is an operating system, don't you think? >> Yeah, yeah, absolutely.
It's a distributed operating system with multiple layers, you know, of ways of solving that problem. Just in the context of data, like you said, having an intermediate caching layer so that, you know, you still do just-in-time processing at those edge locations and then send some data back, and that's where you can incorporate some AI or other technologies, et cetera. So, you know, just data itself is a multi-layer problem there. >> Well, it's great to have you on this program. A final question for you, on advice. For the folks watching: technical degrees, most people are finding out in elementary school, in middle school, a lot more robotics programs, a lot more tech exposure, you know, not just in Silicon Valley, but all around, you're starting to see that. What's your advice for young girls, and for people who are either coming into the workforce or re-skilling as they enter? It's easy to enter now, but how do they stay in? What's your advice? >> Yeah, so, you know, I think it's the same goal. I have two little daughters and it's the same principle I try to follow with them, which is I want to give them as much exposure as possible without me having any predefined ideas about what, you know, they should pursue. But it's, I think, that exposure that you need to find for yourself one way or the other, because you really never know. Like, you know, my husband landed in computer science through a very, very meandering path, and then he discovered later in his career that it's the absolute calling for him. It's something he's very good at, right? But so... You know, the reason why he thinks he didn't pick that path early is because he didn't quite have that exposure. So it's that exposure to various things, even things you think that you may not be interested in, that is the most important aspect. And then things just naturally lend themselves.
(John chuckles) >> Yeah, exactly. >> Great advice. Thank you so much for coming on and contributing to our program for International Women's Day. Great to see you in this context. We'll see you on theCUBE. We'll talk more about Platform9 when we go KubeCon or some other time. But thank you for sharing your personal perspective and experiences for our audience. Thank you. >> Fantastic. Thanks for having me, John. Always great. >> This is theCUBE's coverage of International Women's Day, I'm John Furrier. We're talking to the leaders in the industry, from developers to the boardroom and everything in between and getting the stories out there making an impact. Thanks for watching. (bright upbeat music)
Nancy Wang & Kate Watts | International Women's Day
>> Hello everyone. Welcome to theCUBE's coverage of International Women's Day. I'm John Furrier, host of theCUBE. We've been profiling the leaders in the technology world, women in technology from developers to the boardroom, everything in between. We have two great guests joining us from Malaysia. Nancy Wang is the general manager of AWS Data Protection, also a CUBE alumni, and founder and board chair of Advancing Women in Tech, awit.org. And of course Kate Watts, who's the executive director of Advancing Women in Tech.org. So it's awit.org. Nancy, Kate, thanks for coming all the way across remotely from Malaysia. >> Of course, we're coming to you as fast as our internet bandwidth will allow us. And you know, I'm just thrilled today that you get to see a whole nother aspect of my life, right? Because typically we talk about AWS, and here we're talking about a topic near and dear to my heart.
>> Nancy, talk about the creation, the origination story. How'd this all come together? Obviously the momentum, everyone in the industry's been focused on this for a long time. Where did AWIT come from? Advancing Women in Technology, that's the acronym. Advancing Women in Technology.org, where'd it come from? What's the origination story? >> Yeah, so AWIT really originated from this desire that I had, to Kate's point around, well if you look around right and you know, don't take my word for it, right? Look at stats, look at news reports, or just frankly go on your LinkedIn and see how many women in underrepresented groups are in senior technical leadership roles right out in the companies whose names we all know. And so that was my case back in 2016. And so when I first got the idea and back then I was actually at Google, just another large tech company in the valley, right? It was about how do we get more role models, how we get more, for example, women into leadership roles so they can bring up the next generation, right? And so this is actually part of a longer speech that I'm about to give on Wednesday and part of the US State Department speaker program. In fact, that's why Kate and I are here in Malaysia right now is working with over 200 women entrepreneurs from all over in Southeast Asia, including Malaysia Philippines, Vietnam, Borneo, you know, so many countries where having more women entrepreneurs can help raise the GDP right, and that fits within our overall mission of getting more women into top leadership roles in tech. >> You know, I was talking about Teresa Carlson she came on the program as well for this year this next season we're going to do. And she mentioned the decision between the US progress and international. And she's saying as much as it's still bad numbers, it's worse than outside the United States and needs to get better. Can you comment on the global aspect? You brought that up. 
I think it's super important to highlight that it's just not one area, it's a global evolution. >> Absolutely, so let me start, and I'd love to actually have Kate talk about our current programs and all of the international groups that we're working with. So as Teresa aptly mentioned there is so much work to be done not just outside the US and North Americas where typically tech nonprofits will focus, but rather if you think about the one to end model, right? For example when I was doing the product market fit workshop for the US State Department I had women dialing in from rice fields, right? So let me just pause there for a moment. They were holding their cell phones up near towers near trees just so that they can get a few minutes of time with me to do a workshop and how to accelerate their business. So if you don't call that the desire to propel oneself or accelerate oneself, not sure what is, right. And so it's really that passion that drove me to spend the next week and a half here working with local entrepreneurs working with policy makers so we can take advantage and really leverage that passion that people have, right? To accelerate more business globally. And so that's why, you know Kate will be leading our contingent with the United Nations Women Group, right? That is focused on women's economic empowerment because that's super important, right? One aspect can be sure, getting more directors, you know vice presidents into companies like Google and Amazon. But another is also how do you encourage more women around the world to start businesses, right? To reach economic and freedom independence, right? To overcome some of the maybe social barriers to becoming a leader in their own country. >> Yes, and if I think about our own programs and our model of being very intentional about supporting the learning development and skills of women and members of underrepresented groups we focused very much on providing global access to a number of our programs. 
For instance, our product management certification on Coursera, our engineering management program, and our upcoming women founders accelerator. We provide both access that you can get from anywhere, and then also very intentional programming that connects people into the networks to be able to further their networks and what they've learned through the skills online, so. >> Yeah, and something Kate just told me recently is these courses that Kate's mentioning, right? She was instrumental in working with the American Council on Education so that our learners can actually get up to six college credits for taking these courses on product management, engineering management, and cloud product management. And most recently, one of our very first organic testimonials was from a women's tech bootcamp in Nigeria, right? So if you think about the worldwide impact of these upskilling courses, where frankly in the US we might take access for granted, right, around the world, as I mentioned, there are women dialing in from rice paddies, from, you know, for example, outside of corporate buildings, in order to access this content.
You mentioned founders and startups, and there's also different makeups in different countries. It's not like the big corporations; sometimes it's smaller businesses in certain areas, and different cultures have different business sizes and business types. How do you guys see that factoring in outside the United States, beyond the big tech companies? Okay, yeah. In the US it's easier to access education and stay with it; in other countries, is it the same, or is it more diverse in terms of business? >> So what really actually got us started with the US State Department was around our work with women founders. And I'd love for Kate to actually share her experience working with AWS startups in that capacity. But frankly, you know, we looked at the content and the mentor programs that we were providing women who wanted to be executives, and, you know, quickly realized a lot of those same skills, such as finding customers, right, scaling your product and building channels, can also apply to women founders, not just executives. And so early supporters of our efforts came from firms such as Moderna up in Seattle, Emergence Ventures, Decibel Ventures in, you know, the Bay Area, and a few others that we're working with right now. Right, they believed in the mission and really helped us scale out what is now our existing platform and offerings for women founders.
And so we were able to support 25 founders and also brought in the expertise of about 20 or 30 women from Advancing Women in Tech to be the lead instructors and mentors for that. And so we have really realized that with this network and this individual sort of focus on product expertise and building strong teams, we can take that information and bring it to folks everywhere. And so there is very much the intentionality of allowing founders, allowing individuals, to take the lessons and bring them to their individual circumstances and the cultures in which they are operating. But the product sense is a skill that we can support the development of, and we're proud to do so. >> That's awesome. Nancy, I want to ask you something we never really talk about. Usually it's data storage and AWS cloud greatness and goodness; here's something different. You also work full-time at AWS and you're the founder and board chair of this great organization. How do you balance both, and do you get support? Is Amazon getting behind you on this? >> Well, as I say, it's always easier to negotiate on the way in. But jokes aside, I have to say the leadership has been tremendously supportive. If you think about, for example, my leaders, Wayne Duso, who's also been on the show multiple times, Bill Vaas, who's also been on the show multiple times, you know, they're both founders and also operators, entrepreneurs at heart. So they understand that it is important, right? For all of us, it's really incumbent on all of us who are in positions to do so to create a pathway for more people to be in leadership roles, for more people to be successful entrepreneurs. So, no, I mean, if you just looked at LinkedIn, they're always upvoting my posts so they reach more audiences. And frankly, they're rooting for us back home in the US while we're in Malaysia this week.
What's on the agenda? Take us through the activities. I know that you got a ton of things happening. You got your event out there, which is why you're out there. There's a bunch of other activities. I think you guys call it the Advancing Women in Tech week. >> Yes, this week we are having a week of programming that you can check out at Advancing Women in Tech.org. That is spotlighting the expertise of a number of women in our space. So it is three days of programming, Tuesday, Wednesday and Thursday if you are in the US, so the seventh through the ninth, but available globally. We are also going to be in New York next week for the event at the UN and are looking to continue to support our mentorship programs and also our work supporting women founders throughout the year. >> All right. I have to ask you guys, if you don't mind, to get a little market data you can share with us here at theCUBE. What are you hearing this year that's different in the conversation space around the topics, the interests? Obviously I've seen massive amounts of global acceleration around conversations, more video, things like this, more stories are scaling, a lot more LinkedIn activity. It just seems like it's a lot different this year. Can you guys share any kind of current trends you're seeing relative to the conversations and topics being discussed across the community? >> Well, I think from a needle moving perspective, right? I think due to the efforts of wonderful organizations, including theCUBE for spotlighting all of these awesome women, right? Trailblazing women and the nonprofits, the government entities that we work with, there's definitely more emphasis on creating access and creating pathways. So that's probably one thing that you're seeing is more women, more investors posting about their activities. Number two, from a global trend perspective, right? The rise of women in security.
I noticed that on your agenda today, you had Lena Smart who's a good friend of mine chief information security officer at MongoDB, right? She and I are actually quite involved in helping founders especially early stage founders in the security space. And so globally from a pure technical perspective, right? There's right more increasing regulations around data privacy, data sovereignty, right? For example, India's in a few weeks about to get their first data protection regulation there locally. So all of that is giving rise to yet another wave of opportunity and we want women founders uniquely positioned to take advantage of that opportunity. >> I love it. Kate, reaction to that? I mean founders, more pathways it sounds like a neural network, it sounds like AI enabled. >> Yes, and speaking of AI, with the rise of that we are also hearing from many community members the importance of continuing to build their skills upskill learn to be able to keep up with the latest trends. There's a lot of people wondering what does this mean for my own career? And so they're turning to organizations like Advancing Women in Tech to find communities to both learn the latest information, but also build their networks so that they are able to move forward regardless of what the industry does. >> I love the work you guys are doing. It's so impressive. I think the economic angle is new it's more amplified this year. It's always kind of been there and continues to be. What do you guys hope for by next year this time what do you hope to see different from a needle moving perspective, to use your word Nancy, for next year? What's the visual output in your mind? >> I want to see real effort made towards 50-50 representation in all tech leadership roles. And I'd like to see that happen by 2050. >> Kate, anything on your end? >> I love that. I'm going to go a little bit more touchy-feely. 
I want everybody in our space to understand that the skills that they build and that the networks they have carry with them regardless of wherever they go. And so to be able to really lean in and learn and continue to develop the career that you want to have. So whether that be at a large organization or within your own business, that you've got the potential to move forward on that within you. >> Nancy, Kate, thank you so much for your contribution. I'll give you the final word. Put a plug in for the organization. What are you guys looking for? Any kind of PSA you want to share with the folks watching? >> Absolutely, so if you're in a position to be a mentor, join as a mentor, right? Help elevate and accelerate the next generation of women leaders. If you're an investor help us invest in more women started companies, right? Women founded startups and lastly, if you are women looking to accelerate your career, come join our community. We have resources, we have mentors and who we have investors who are willing to come in on the ground floor and help you accelerate your business. >> Great work. Thank you so much for participating in our International Women's Day 23 program and we'd look to keep this going quarterly. We'll see you next year, next time. Thanks for coming on. Appreciate it. >> Thanks so much John. >> Thank you. >> Okay, women leaders here. >> Nancy: Thanks for having us >> All over the world, coming together for a great celebration but really highlighting the accomplishments, the pathways the investment, the mentoring, everything in between. It's theCUBE. Bring as much as we can. I'm John Furrier, your host. Thanks for watching.
Teresa Carlson, Flexport | International Women's Day
(upbeat intro music) >> Hello everyone. Welcome to theCUBE's coverage of International Women's Day. I'm your host, John Furrier, here in Palo Alto, California. Got a special remote guest coming in. Teresa Carlson, President and Chief Commercial Officer at Flexport, theCUBE alumni, one of the first, let me go back to 2013, Teresa, former AWS. Great to see you. Thanks for coming on. >> Oh my gosh, almost 10 years. That is unbelievable. It's hard to believe so many years of theCUBE. I love it. >> It's been such a great honor to interview you and follow your career. You've had quite the impressive run, executive level woman in tech. You've done such an amazing job, not only in your career, but also helping other women. So I want to give you props to that before we get started. Thank you. >> Thank you, John. I, it's my, it's been my honor and privilege. >> Let's talk about Flexport. Tell us about your new role there and what it's all about. >> Well, I love it. I'm back working with another Amazonian, Dave Clark, who is our CEO of Flexport, and we are about 3,000 people strong globally in over 90 countries. We actually even have, we're represented in over 160 cities and with local governments and places around the world, which I think is super exciting. We have over 100 network partners and growing, and we are about empowering the global supply chain and trade and doing it in a very disruptive way with the use of platform technology that allows our customers to really have visibility and insight to what's going on. And it's a lot of fun. I'm learning new things, but there's a lot of technology in this as well, so I feel right at home. >> You quite have a knack from mastering growth, technology, and building out companies. So congratulations, and scaling them up too with the systems and processes. So I want to get into that. Let's get into your personal background. Then I want to get into the work you've done and are doing for empowering women in tech. 
What was your journey about, how did it all start? Like, I know you had a, you know, bumped into it, you went Microsoft, AWS. Take us through your career, how you got into tech, how it all happened. >> Well, I do like to give a shout out, John, to my roots and heritage, which was as a speech and language pathologist. So I did start out in healthcare right out of, you know, university. I had an undergraduate and a master's degree. And I do tell everyone now, looking back at my career, I think it was super helpful for me because I learned a lot about human communication, and it has done me very well over the years to really try to understand what environments I'm in and what kind of individuals are around the world culturally. So I'm really blessed that I had that opportunity to work in healthcare, and by the way, a shout out to all of our healthcare workers that have helped us get through almost three years of COVID and flu and norovirus and everything else. So I started out there and then kind of almost accidentally got into technology. The first small company I worked for was a company called Keyfile Corporation, which did workflow and document management out of Nashua, New Hampshire. And they were a Microsoft Gold partner. And that is actually how I got into the big tech world. We ran on Exchange, for everybody who knows that term Exchange, and we were a large small partner, but large in the world of Exchange. And those were the days when you would, in the late nineties, go and be in the same room with Bill Gates and Steve Ballmer. And I really fell in love with Microsoft back then. I thought to myself, wow, if I could work for a big tech company... I got to hear Bill on stage, he would talk about saving the world. And guess what my next step was? I actually got a job at Microsoft, took a pay cut and a job downgrade. I tell this story all the time. Took like three downgrades in my role.
I had been an SVP and went to a manager, and it's one of the best moves I ever made. And I share that because I really didn't know the world of big tech, and I had to start from the ground up and relearn it. I did that, and I just really loved that job. I was at Microsoft from 2000 to 2010, where I eventually ran all of the U.S. federal government business, which was a multi-billion dollar business. And then I had the great privilege of meeting an amazing man, Andy Jassy, who I thought was just unbelievable in his insights and knowledge and openness to understanding new markets. And we talked about government and how government needed the same great technology as every startup. And that led to me going to work for Andy in 2010 and starting up our worldwide public sector business. And I pinch myself some days because we went from two people, no offices, to, by the time I left, over 10,000 people, billions in revenue, and 172 countries, and had done really amazing work, I think changing the way public sector and government globally really thought about their use of technology and Cloud computing in general. And that kind of has been my career. You know, I was there till 2020, '21, and then did a small stint at Splunk, a small stint back at Microsoft doing a couple projects for Microsoft with CEO Satya Nadella, who is also another amazing CEO and leader. And then Dave called me, and I'm at Flexport, so I couldn't be more honored, John. I've just had such an amazing career working with amazing individuals. >> Yeah, I got to say the Amazon one is well-documented, certainly by theCUBE and our coverage. We watched you rise and scale that thing. And like I said at the time, this will, when we look back, be seen as a historic run because of the build out. I mean, it went from zero to massive billions at a historic time when government was transforming. I would say Microsoft had a good run there with Fed, but it was already established stuff.
Federal business was like, you know, blocking and tackling. The Amazon was pure build out. So I have to ask you, what was your big learnings? Because one, you're a Seattle big tech company kind of entrepreneurial in the sense of you got, here's some working capital seed finance and go build that thing, and you're in DC and you're a woman. What did you learn? >> I learned that you really have to have a lot of grit. You, my mom and dad, these are kind of more southern roots words, but stick with itness, you know. you can't give up and no's not in your vocabulary. I found no is just another way to get to yes. That you have to figure out what are all the questions people are going to ask you. I learned to be very patient, and I think one of the things John, for us was our secret sauce was we said to ourselves, if we're going to do something super transformative and truly disruptive, like Cloud computing, which the government really had not utilized, we had to be patient. We had to answer all their questions, and we could not judge in any way what they were thinking because if we couldn't answer all those questions and prove out the capabilities of Cloud computing, we were not going to accomplish our goals. And I do give so much credit to all my colleagues there from everybody like Steve Schmidt who was there, who's still there, who's the CISO, and Charlie Bell and Peter DeSantis and the entire team there that just really helped build that business out. Without them, you know, we would've just, it was a team effort. And I think that's the thing I loved about it was it was not just sales, it was product, it was development, it was data center operations, it was legal, finance. Everybody really worked as a team and we were on board that we had to make a lot of changes in the government relations team. We had to go into Capitol Hill. 
We had to talk to them about the changes that were required and really get them to understand why Cloud computing could be such a transformative game changer for the way government operates globally. >> Well, I think the whole world and the tech world can appreciate your work and thank you later, because you broke down those walls asking those questions. So great stuff. Now I got to say, you're in kind of a similar role at Flexport. Again, transformative supply chain, not new. Computing wasn't new before Cloud came, either. Supply chain, not a new concept, is undergoing radical change and transformation. Online, software supply chain, hardware supply chain, supply chain in general, shipping. This is a big part of our economy and how life is working. Similar kind of thing going on, build out, growth, scale. >> It is, it's very much like that, John, I would say. It's kind of, the model with freight forwarding and supply chain is, there's a lot of technology utilized in this global supply chain world, but it's not integrated. You don't have a common operating picture of what you're doing in your global supply chain. You don't have easy access to the information and visibility. And that's really, you know, I was at a conference last week in LA, and the themes were so similar: transparency, access to data and information, being able to act quickly, drive change, know what was happening. I was like, wow, this sounds familiar. Data, AI, machine learning, visibility, common operating picture. So it is very much the same kind of themes that you heard even with government. I do believe it's an industry that is going through transformation, and Flexport has been a group that's come in and said, look, we have this amazing idea, number one, to give access to everyone.
We want every small business to every large business to every government around the world to be able to trade their goods, think about supply chain logistics in a very different way with information they need and want at their fingertips. So that's kind of thing one, but to apply that technology in a way that's very usable across all systems from an integration perspective. So it's kind of exciting. I used to tell this story years ago, John, and I don't think Michael Dell would mind that I tell this story. One of our first customers when I was at Keyfile Corporation was we did workflow and document management, and Dell was one of our customers. And I remember going out to visit them, and they had runners and they would run around, you know, they would run around the floor and do their orders, right, to get all those computers out the door. And when I think of global trade, in my mind I still see runners, you know, running around and I think that's moved to a very digital, right, world that all this stuff, you don't need people doing this. You have machines doing this now, and you have access to the information, and you know, we still have issues resulting from COVID where we have either an under-abundance or an over-abundance of our supply chain. We still have clogs in our shipping, in the shipping yards around the world. So we, and the ports, so we need to also, we still have some clearing to do. And that's the reason technology is important and will continue to be very important in this world of global trade. >> Yeah, great, great impact for change. I got to ask you about Flexport's inclusion, diversity, and equity programs. What do you got going on there? That's been a big conversation in the industry around keeping a focus on not making one way more than the other, but clearly every company, if they don't have a strong program, will be at a disadvantage. 
That's well reported by McKinsey and other top consultants: diverse workforces, inclusive, equitable, all perform better. What's Flexport's strategy and how are you guys supporting that in the workplace? >> Well, let me just start by saying it's really at the core of who I am. Since the day I started understanding that as an individual and a female leader, I could have an impact. That the words I used, the actions I took, the information that I pulled together and had knowledge of could be meaningful. And I think each and every one of us is responsible to do what we can to make our workplace and the world a more diverse and inclusive place to live and work. And I've always enjoyed kind of the thought that I could help empower women around the world in the tech industry. Now I'm hoping to do my little part, John, in the supply chain and global trade business. And I would tell you, at Flexport we have some amazing women. I'm so excited to get to know them all. I've not been there that long yet, but I'm getting to know them. We have a very diverse leadership team between men and women at Dave's level. I have some unbelievable women on my team directly that I'm getting to know more, and I'm so impressed with what they're doing. And while this industry is different than the world I live in day to day, it also has a lot of common themes to it. So, you know, for us, we're trying to approach every day by saying, let's make sure both our interviewing cycles, the jobs we fill, how we recruit people, how we put people out there on the platforms, that we have diversity and inclusion in all of that every day. And I can tell you from the top, from Dave and all of our leaders, we just had an offsite and we had a big conversation about this. It's a drum beat that we have to think about and live by every day and really check ourselves on a regular basis. 
But I do think there's so much more room for women in the world to do great things. And one of the areas, as you know very well, we lost a lot of women during COVID, who just left the workforce again. So we kind of went back, unfortunately. So we have to now move forward and make sure that we are giving women the opportunity to have great jobs, have the flexibility they need as they build a family, and have a workplace environment that is trusted for them to come into every day. >> There's now clear visibility, at least in today's world, notwithstanding some of the setbacks from COVID, that a young girl can look out in a company and see a path from entry level to the boardroom. That's a big change. A lot different than even going back 10, 15, 20 years ago. What's your advice to the folks out there that are paying it forward? You see a lot of executive leadership with a seat at the table. The board is still underrepresented by most numbers, but at least you have now kind of this solidarity at the top, but a lot of people doing a lot more now than I've seen at the next levels down. So now you have this leveled approach. Is that something that you're seeing more of? And can you compare and contrast that to 20 years ago when you were, you know, rising through the ranks? What's different? >> Well, one of the main things, and I honestly do not think about it too much, but there were really no women. There were none. When I showed up in the meetings, it was me, or not me at the table, but at the seat behind the table. The women just weren't in the room, and there were so many more barriers that we had to push through, and that has changed a lot. I mean, globally that has changed a lot, and in the U.S. You know, if you look at just our U.S. House of Representatives and our U.S. Senate, we now have an increasing number of women. Even at leadership levels, you're seeing that change. You have a lot more women on boards than we ever thought we would see represented. 
While we are not there yet, there are more female CEOs that I get an opportunity to see and talk to. Women starting companies, they do not see the barriers. And I will share, John, globally, one of the things that I still see in the U.S. that many other countries don't have, which I'm very proud of: women in the U.S. have a spirit about them that they just don't see the barriers in the same way. They believe that they can accomplish anything. I have two sons, I don't have daughters. I have nieces, and I'm hoping someday to have granddaughters. But I know that a lot of my friends who have granddaughters today talk about the boldness, the fortitude, that they believe that there's nothing they can't accomplish. And I think that's what we have to instill in every little girl out there, that they can accomplish anything they want to. The world is theirs, and we need to not just do that in the U.S., but around the world. And it was always the thing that struck me when I did all my travels at AWS, and now with Flexport, I'm traveling again quite a bit, is just the differences you see in the cultures around the world. And I remember even in the Middle East, how I started seeing it change. You've heard me talk a lot on this program about the fact that in both Saudi and Bahrain, over 60% of the tech workers were females, and most of them held the hardest jobs: the security, the architecture, the engineering. But many of them did not hold leadership roles. And that is what we've got to change too. To your point, the middle, we want it to get bigger, but the top, we need to get bigger. We need to make sure women globally have opportunities to hold the most precious leadership roles and demonstrate their capabilities at the very top. But that's changed. And I would say the biggest difference is when we show up, we're actually evaluated properly for those kind of roles. We have a ways to go. But again, that part is really changing. 
>> Can you share, Teresa, first of all, that's great work you've done, and I want to give you props for that as well and all the work you do. I know you champion a lot of, you know, causes in this area. One question that comes up a lot, I would love to get your opinion, 'cause I think you can contribute heavily here, is mentoring and sponsorship. It's huge, comes up all the time. What advice would you share to folks out there who are, I won't say apprehensive, but maybe nervous about how to do the networking and sponsorship and mentoring? It's not just mentoring, it's sponsorship too. What's your best practice? What advice would you give for the best way to handle that? >> Well yeah, and for the women out there, I would say on the mentorship side, I still see mentorship. Like, I don't think you can ever stop having mentorship. And I like to look at my mentors in different parts of my life, because if you want to be a well-rounded person, you may have parts of your life every day where you think, I'm doing a great job here, and I definitely would like to do better there. Whether it's your spiritual life, your physical life, your work life, you know, your leisure life. And there's parts of my leadership world that I still seek advice on as I try to do new things, even in this world. And I tried some new things in between roles. I went out and asked the people that I respected the most. So I just would say, for sure have different mentorships, and don't be afraid to have that diversity. But if you have mentorships, the second important thing is show up with a real agenda and questions. Don't waste people's time. I'm very sensitive today. If you want a mentor, you show up and you use your time super effectively and be prepared for that. Sponsorship is a very different thing. And I don't believe we actually do that still in companies. Thank goodness for my great HR team. 
When I was at AWS, we worked on a few sponsorship programs for diversity in general, where we would nominate individuals in the company that we felt had a lot of opportunity for growth, but just weren't getting a seat at the table. And we brought 'em to the table. And we actually kind of had Chatham House rules, where when they came into the meetings, they had a sponsor, not a mentor. They had a sponsor that was with them the full 18 months of this program. We would bring 'em into executive meetings. They would read docs, they could ask questions. We wanted them to be able to open up and ask crazy questions without, you know, feeling, wow, I just couldn't ask this question in a normal environment or setting. And then we tried to make sure once they got through the program that we found jobs and support and other special projects that they could go do. But they still had that sponsor and that group of individuals that they'd gone through the program with, John, that they could keep going back to. And I remember sitting there, and they asked me what I wanted to get out of the program, and I said two things. I want you to leave this program and say to yourself, I would've never had that experience if I hadn't gone through this program. I learned so much in 18 months. It would probably have taken me five years to learn. And that it helped them in their career. The second thing I told them is I wanted them to go out and recruit individuals that look like them. I said, we need diversity, and unless you all feel that we are in an inclusive environment, sponsoring all types of individuals to be part of this company, we're not going to get the job done. And they said, okay. And you know, it was really, one, very much about them. We took a group of individuals that had high potential and very diverse backgrounds, held 'em up, taught 'em things that gave them access. 
And two, selfishly, I said, I want more of you in my business. Please help me. And I think those kind of things are helpful, and you have to be thoughtful about these kind of programs. And to me, that's more sponsorship. I still have people reach out to me from years ago, you know, Microsoft, saying, you were so good with me, can you give me a reference now? Can you talk to me about what I should be doing? And I try to. I'm not perfect, some things do fall through the cracks, but I always try to make the time to talk to those individuals, because for me, I am where I am today because I got some of the best advice from people like Don Byrne and Linda Zecker and Andy Jassy, who were very honest and upfront with me about my career. >> Awesome. Well, you've got a passion for empowering women in tech, paying it forward, but you're quite accomplished, and that's why we're so glad to have you on the program here. President and Chief Commercial Officer at Flexport. Obviously a storied career, and your other jobs, specifically Amazon, I think, are historic in my mind. This next chapter looks like it's looking good right now. Final question for you, for the few minutes you have left. Tell us what you're up to at Flexport. What are your goals as President, Chief Commercial Officer? What are you trying to accomplish? Share a little bit, what's on your mind with your current job? >> Well, you kind of said it earlier. I think if I look at my own superpowers, I love customers, I love partners. I get my energy, John, from those interactions. So one is to come in and really help us build an even better world-class enterprise global sales and marketing team. Really listen to our customers, think about how we interact with them, build the best executive programs we can, think about new ways that we can offer services to them and create new services. 
One of my favorite things about my career is I think if you're a business leader, it's your job to come back around and tell your product group and your services org what you're hearing from customers. That's how you can be so much more impactful: you listen, you learn, and you deliver. So that's one big job. The second job for me, which I am so excited about, is that I have an amazing group called flexport.org under me. And flexport.org is doing amazing things around the world to help those in need. We just announced this new funding program for Tech for Refugees, which brings assistance to millions of people in Ukraine, Pakistan, the Horn of Africa, and those who are affected by earthquakes. We just took supplies into Turkey and Syria, and Flexport, recently in fact, just sent three air shipments to Turkey and Syria for these. And I think we did over a hundred trucking shipments to get earthquake relief in. And as you can imagine, it was not easy to get into Syria. But you know, we're very active in Ukraine, and our goal for flexport.org, John, is to continue to work with our commercial customers and team up with them when they're trying to get supplies in, to do that in a very cost-effective, easy way, as quickly as we can. So that not-for-profit side of me, I'm so happy about. And you know, Ryan Petersen, who was our founder, this was his brainchild, and he's really taken this to the next level. So I'm honored to be able to pick that up and look for new ways to have impact around the world. And you know, I've always found that if you do things right with a company, you can have a beautiful combination of commerciality and giving. And I think Flexport does it in such an amazing and unique way. >> Well, the impact that they have with their system and their technology, with logistics and shipping and supply chain, is a channel for societal change. And I think that's a huge gift that you have under your purview. 
So looking forward to finding out more about flexport.org. I can only imagine all the exciting things around sustainability, and we just had Mobile World Congress for the Big Cube Broadcast. 5G is right around the corner. I'm sure that's going to have a huge impact on your business. >> Well, for sure. And just on emissions, that's another thing: we are tracking greenhouse gas emissions. And in fact, we've already reduced more than 300,000 tons and supported over 600 organizations doing that. So we're also trying to make sure that we're being climate aware and ensuring that we are doing the best job we can at that as well. And that was another thing I was honored to be able to do when we were at AWS, is to really cut greenhouse gas emissions and really go global with our climate initiatives. >> Well Teresa, it's great to have you on. Security, data, 5G, sustainability, business transformation, AI, all coming together to change the game. You're in another hot seat, hot role, big wave. >> Well, John, it's an honor, and just thank you again for doing this and having women on and really representing us in a big way as we celebrate International Women's Day. >> I really appreciate it, it's super important. And these videos have impact, so we're going to do a lot more. And I appreciate your leadership to the industry, and thank you so much for taking the time to contribute to our effort. Thank you, Teresa. >> Thank you. Thanks everybody. >> Teresa Carlson, the President and Chief Commercial Officer of Flexport. I'm John Furrier, host of theCUBE. This is the International Women's Day broadcast. Thanks for watching. (upbeat outro music)
Rachel Skaff, AWS | International Women's Day
(gentle music) >> Hello, and welcome to theCUBE's coverage of International Women's Day. I'm John Furrier, host of theCUBE. I've got a great guest here, CUBE alumni and very impressive, inspiring, Rachel Mushahwar Skaff, who's a managing director and general manager at AWS. Rachel, great to see you. Thanks for coming on. >> Thank you so much. It's always a pleasure to be here. You all make such a tremendous impact with reporting out what's happening in the tech space, and frankly, investing in topics like this, so thank you. >> It's our pleasure. Your career has been really impressive. You worked at Intel for almost a decade, and that company is very tech, very focused on Moore's law, cadence of technology power in the industry. Now at AWS, powering next-generation cloud. What inspired you to get into tech? How did you get here and how have you approached your career journey, because it's quite a track record? >> Wow, how long do we have? (Rachel and John laugh) >> John: We can go as long as you want. (laughs) It's great. >> You know, all joking aside, I think at the end of the day, it's about this simple statement. If you don't get goosebumps every single morning that you're waking up to do your job, it's not good enough. And that's a bit about how I've made all of the different career transitions that I have. You know, everything from building out data centers around the world, to leading network and engineering teams, to leading applications teams, to going and working for, you know, the largest semiconductor in the world, and now at AWS, every single one of those opportunities gave me goosebumps. And I was really focused on how do I surround myself with humans that are better than I am, smarter than I am, companies that plan in decades, but live in moments, companies that invest in their employees and create like artists? 
And frankly, for me, being part of a company where people know that life is finite, but they want to make an infinite impact, that's a bit about my career journey in a nutshell. >> Yeah. What's interesting is that, you know, over the years, a lot's changed, and a theme that we're hearing from leaders now that are heading up large teams and running companies, they have, you know, they have 20-plus years of experience under their belt and they look back and they say, "Wow, "things have changed and it's changing faster now, "hopefully faster to get change." But they all talk about confidence and they talk about curiosity and building. When did you know that this was going to be something that you got the goosebumps? And were there blockers in your way and how did you handle that? (Rachel laughs) >> There's always blockers in our way, and I think a lot of people don't actually talk about the blockers. I think they make it sound like, hey, I had this plan from day one, and every decision I've made has been perfect. And for me, I'll tell you, right, there are moments in your life that mark a differentiation and those moments that you realize nothing will be the same. And time is kind of divided into two parts, right, before this moment and after this moment. And that's everything from, before I had kids, that's a pretty big moment in people's lives, to after I had kids, and how do you work through some of those opportunities? Before I got married, before I got divorced. Before I went to this company, after I left this company. And I think the key for all of those is just having an insatiable curiosity around how do you continue to do better, create better and make better? And I'll tell you, those blockers, they exist. Coming back from maternity leave, hard. Coming back from a medical leave, hard. Coming back from caring for a sick parent or a sick friend, hard. 
But all of those things start to help craft who you are as a human being, not as a leader, but as a human being, and allows you to have some empathy with the people that you surround yourself with, right? And for me, it's, (sighs) you can think about these blockers in one of two ways. You can think about it as, you know, every single time that you're tempted to react in the same way to a blocker, you can be a prisoner of your past, or you can change how you react and be a pioneer of the future. It's not a blocker when you think about it in those terms. >> Mindset matters, and that's really a great point. You brought up something that's interesting, I want to bring this up. Some of the challenges in different stages of our lives. You know, one thing that's come out of this set of interviews, this, of day and in conversations is, that I haven't heard before, is the result of COVID, working at home brought empathy about people's personal lives to the table. That came up in a couple interviews. What's your reaction to that? Because that highlights that we're human, to your point of view. >> It does. It does. And I'm so thankful that you don't ask about balance because that is a pet peeve of mine, because there is no such thing as balance. If you're in perfect balance, you are not moving and you're not changing. But when you think about, you know, the impact of COVID and how the world has changed since that, it has allowed all of us to really think about, you know, what do we want to do versus what do we have to do? And I think so many times, in both our professional lives and our personal lives, we get caught up in doing what we think we have to do to get ahead versus taking a step back and saying, "Hey, what do I want to do? "And how do I become a, you know, "a better human?" And many times, John, I'm asked, "Hey, "how do you define success or achievement?" 
And, you know, my answer is really, for me, the greatest results that I've achieved, both personally and professionally, is when I eliminate the word success and balance from my vocabulary, and replace them with two words: What's my contribution and what's my impact? Those things make a difference, regardless of gender. And I'll tell you, none of it is easy, ever. I think all of us have been broken, we've been stretched, we've been burnt out. But I also think what we have to talk about as leaders in the industry is how we've also found endurance and resilience. And when we felt unsteady, we've continued to go forward, right? When we can't decide, the best answer is do what's uncomfortable. And all of those things really stemmed from a part of what happened with COVID. >> Yeah, yeah, I love the uncomfortable and the balance highlight. You mentioned being off balance. That means you're growing, you're not standing still. I want to get your thoughts on this because one thing that has come out again this year, and last year as well, is having a team with you when you do it. So if you're off balance and you're going to stretch, if you have a good team with you, that's where people help each other. Not just pick them up, but like maybe get 'em back on track again. So, but if you're solo, you fall, (laughs) you fall harder. So what's your reaction to that? 'Cause this has come up, and this comes up in team building, workforce formation, goal setting, contribution. What's your reaction to that? >> So my reaction to that that is pretty simple. Nobody gets there on their own at all, right? Passion and ambition can only take you so far. You've got to have people and teams that are supporting you. And here's the funny thing about people, and frankly, about being a leader that I think is really important: People don't follow for you. People follow for who you help them become. Think about that for a second. 
And when you think about all the amazing things that companies and teams are able to do, it's because of those people. And it's because you have leaders that are out there, inspiring them to take what they believe is impossible and turn it into the possible. That's the power of teams. >> Can you give an example of your approach on how you do that? How do you build your teams? How do you grow them? How do you lead them effectively and also make 'em inclusive, diverse and equitable? >> Whew. I'll give you a great example of some work that we're doing at AWS. This year at re:Invent, for the first time in its history, we've launched an initiative with theCUBE called Women of the Cloud. And part of Women of the Cloud is highlighting the business impact that so many of our partners, our customers and our employees have had on the social, on the economic and on the financials of many companies. They just haven't had the opportunity to tell their story. And at Amazon, right, it is absolutely integral to us to highlight those examples and continue to extend that ethos to our partners and our customers. And I think one of the things that I shared with you at re:Invent was, you know, as U2's Bono put it, (John laughs) "We'll build it better than we did before "and we are the people "that we've been waiting for." So if we're not out there, advocating and highlighting all the amazing things that other women are doing in the ecosystem, who will? >> Well, I've got to say, I want to give you props for that program. Not only was it groundbreaking, it's still running strong. And I saw some things on LinkedIn that were really impressive in its network effect. And I met at least half a dozen new people I never would have met before through some of that content interaction and engagement. And this is like the power of the current world. I mean, getting the voices out there creates momentum. And it's good for Amazon. 
It's not just personal brand building for my next job or whatever, you know, reason. It's sharing and it's attracting others, and it's causing people to connect and meet each other in that world. So it's still going strong. (laughs) And this program we did last year was part of Rachel Thornton, who's now at MessageBird, and Mary Camarata. They were the sponsors for this International Women's Day. They're not there anymore, so we decided we're going to do it again because the impact is so significant. We had the Amazon Education group on. It's amazing and it's free, and we've got to get the word out. I mean, talk about leveling up fast. You get in and you get trained and get certified, and there's a zillion jobs out (laughs) there in cloud, right, and partners. So this kind of leadership is really important. What was the key learnings that you've taken away and how do you extend this opportunity to nurture the talent out there in the field? Because when you throw the content out there from great leaders and practitioners and developers, it attracts other people. >> It does. It does. So look, I think there's two types of people, people that are focused on being and people who are focused on doing. And let me give you an example, right? When we think about labels of, hey, Rachel's a female executive who launched Women of the Cloud, that label really limits me. I'd rather just be a great executive. Or, hey, there's a great entrepreneur. Let's not be a great entrepreneur. Just go build something and sell it. And that's part of this whole Women of the cloud, is I don't want people focused on what their label is. I want people sharing their stories about what they're doing, and that's where the lasting impact happens, right? I think about something that my grandmother used to tell me, and she used to tell me, "Rachel, how successful "you are, doesn't matter. "The lasting impact that you have "is your legacy in this very finite time "that you have on Earth. 
"Leave a legacy." And that's what Women of the Cloud is about. So that people can start to say, "Oh, geez, "I didn't know that that was possible. "I didn't think about my career in that way." And, you know, all of those different types of stories that you're hearing out there. >> And I want to highlight something you said. We had another Amazonian on the program for this day earlier and she coined a term, 'cause inside Amazon, you have common language. One of them is bar raising. Raise the bar, that's an Amazonian (Rachel laughs) term. It means contribute and improve and raise the bar of capability. She said, "Bar raising is gender neutral. "The bar is a bar." And I'm like, wow, that was amazing. Now, that means your contribution angle there highlights that. What's the biggest challenge to get that mindset set in culture, in these- >> Oh. >> 'Cause it's that simple, contribution is neutral. >> It absolutely is neutral, but it's like I said earlier, I think so many times, people are focused on success and being a great leader versus what's the contribution I'm making and how am I doing as a leader, you know? And when it comes to a lot of the leadership principles that Amazon has, including bar raising, which means insisting on the highest standards, and then those standards continue to raise every single time. And what that is all about is having all of our employees figure out, how do I get better every single day, right? That's what it's about. It's not about being better than the peer next to you. It's about how do I become a better leader, a better human being than I was yesterday? >> Awesome. >> You know, I read this really cute quote and I think it really resonates. "You meditate to upgrade your software "and you work out to upgrade your hardware." And while it's important that we're all ourselves at work, we can't deny that a lot of times, ourselves still need that meditation or that workout. 
>> Well, I hope I don't have any zero days in my software out there, so, but I'm going to definitely work on that. I love that quote. I'm going to use that. Thank you very much. That was awesome. I got to ask you, I know you're really passionate about, and we've talked about this, around, so you're a great leader but you're also focused on what's behind you in the generation, pipelining women leaders, okay? Seats at the table, mentoring and sponsorship. What can we do to build a strong pipeline of leaders in technology and business? And where do you see the biggest opportunity to nurture the talent in these fields? >> Hmm, you know, that's great, great question. And, you know, I just read a "Forbes" article by another Amazonian, Tanuja Randery, who talked about, you know, some really interesting stats. And one of the stats that she shared was, you know, by 2030, less than 25% of tech specialists will be female, less than 25%. That's only a 6% growth from where we are in 2023, so in seven years. That's alarming. So we've really got to figure out what are the kinds of things that we're going to go do from an Amazon perspective to impact that? And one of the obvious starting points is showcasing tech careers to girls and young women, and talking openly about what a technology career looks like. So specifically at Amazon, we've got an AWS GetIT program that helps schools and educators bring in tech role models to show them what potential careers look like in tech. I think that's one great way that we can help build the pipeline, but once we get the pipeline, we also have to figure out how we don't let that pipeline leak. Meaning how do we keep women and, you know, young women on their tech career? And I think a big part of that, John, is really talking about how hard it is, but it's also greater than you can ever imagine.
And letting them see executives that are very authentic and will talk about, geez, you know, the challenges of COVID were a time of crisis and accelerated change, and here's what it meant to me personally and here's what we were able to solve professionally. These younger generations are all about social impact, they're about economic impact and they're about financial impact. And if we're not talking about all three of those, both from how AWS is leading from the front, but how its executives are also taking that into their personal lives, they're not going to want to go into tech. >> Yeah, and I think one of the things you mentioned there about getting people that get IT, good call out there, but also, Amazon's going to train 30 million people, put hundreds of millions of dollars into education. And not only are they making it easier to get in to get trained, but once you're in, even savvy folks that are in there still have to accelerate. And there are more ways to level up, more things are happening, but there's a big trend around people changing careers either in their late 20s, early 30s, or even those moments you talk about, where it's before and after, even later in the careers, 40s, 50s. Leaders with good experience and good training who were in another discipline and re-skilled. So you have, you know, more certifications coming in. So there's still other pivot points in the pipeline. It's not just down here. And that, I find that interesting. Are you seeing those same leadership opportunities coming in where someone can come into tech older? >> Absolutely. You know, we've got some amazing programs, like Amazon Returnity, that really focuses on how do we get other, you know, how do we get women that have taken some time off of work to get back into the workforce? And here's the other thing about switching careers. If I look back on my career, I started out as a civil engineer, heavy highway construction.
And now I lead a sales team at the largest cloud company in the world. And there were, you know, twists and turns around there. I've always focused on how do we change and how do we continue to evolve? So it's not just focused on, you know, young women in the pipeline. It's focused on all genders and all diverse types throughout their career, and making sure that we're providing an inclusive environment for them to bring in their unique skillsets. >> Yeah, a building has good steel. It's well structured. Roads have great foundations. You know, you got the builder in you there. >> Yes. >> So I have to ask you, what's on your mind as a tech athlete, as an executive at AWS? You know, you got your huge team, big goals, the economy's got a little bit of a headwind, but still, cloud's transforming, edge is exploding. What's your outlook as you look out in the tech landscape these days and how are you thinking about it? What are your plans? Can you share a little bit about what's on your mind? >> Sure. So, geez, there's so many trends that are top of mind right now. Everything from zero trust to artificial intelligence to security. We have more access to data now than ever before. So the opportunities are limitless when we think about how we can apply technology to solve some really difficult customer problems, right? Innovation sometimes feels like it's happening at a rapid pace. And I also say, you know, there are years when nothing happens, and then there's years when centuries happen. And I feel like we're kind of in those years where centuries are happening. Cloud technologies are redefining sports as we know them now. There's a surge of innovation in smart energy. Everyone's supply chain is looking to transform. Custom silicon is going mainstream. And frankly, AWS's customers and partners are expecting us to come to them with a point of view on trends and on opportunities. And that's what differentiates us.
(John laughs) That's what gives me goosebumps- >> I was just going to ask you that. Does that give you goosebumps? How could you not love technology with that excitement? I mean, AI, throw in AI, too. I just talked to Swami, who heads up AI and databases, and we just talked about the past 24 months, the change. And that is a century moment happening. The large language models, computer vision, more compute. Compute's booming more than ever before. Who thought that was going to happen, and is still happening? Massive change. So, I mean, if you're in tech, how can you not love tech? >> I know, even if you're not in tech, I think you've got to start to love tech because it gives you access to things you've never had before. And frankly, right, change is the only constant. And if you don't like change, you're going to like being irrelevant even less than you like change. So we've got to be nimble, we've got to adapt. And here's the great thing, once we figure it out, it changes all over again. And it's not something that's easy for any of us to operate. It's hard, right? It's hard learning new technology, it's hard figuring out what do I do next? But here's the secret. I think it's hard because we're doing it right. It's not hard because we're doing it wrong. It's just hard to be human and it's hard to figure out how we apply all this different technology in a way that positively impacts us, you know, economically, financially, environmentally and socially. >> And everyone's different, too. So you got to live those (mumbles). I want to get one more question in before we, my last question, which is about you and your impact. When you talk to your team, your sales, you got a large sales team, North America. And Tanuja, who you mentioned, is in EMEA, we're going to speak with her as well. You guys lead the front lines, helping customers, but also delivering the revenue to the company, which has been fantastic, by the way.
So what's your message to the troops and the team out there? When you say, "Take that hill," like what is the motivational pitch, in a few sentences? What's the main North Star message in today's marketplace when you're doing that big team meeting? >> I don't know if it's just limited to a team meeting. I think this is a universal message, and the universal message for me is find your edge, whatever that may be. Whether it is the edge of what you know about artificial intelligence and neural networks or it's the edge of how do we migrate our applications to the cloud more quickly. Or it's the edge of, oh, my gosh, how do I be a better parent and still be great at work, right? Find your edge, and then sharpen it. Go to the brink of what you think is possible, and then force yourself to jump. Get involved. The world is run by the people that show up, professionally and personally. (John laughs) So show up and get started. >> Yeah, as Steve Jobs once said, "The future that everyone looks at was created by people no smarter than you." And I love that quote. That's really there. Final question for you. I know we're tight on time, but I want to get this in. When you think about your impact on your company, AWS, and the industry, what's something you want people to remember? >> Oh, geez. I think what I want people to remember the most is it's not about what you've said, and this is a Maya Angelou quote. "It's not about what you've said to people or what you've done, it's about how you've made them feel." And we can all think back on leaders or we can all think back on personal moments in our lives where we felt like we belonged, where we felt like we did something amazing, where we felt loved. And those are the moments that sit with us for the rest of our lives. I want people to remember how they felt when they were part of something bigger. I want people to belong. It shouldn't be uncommon to talk about feelings at work. So I want people to feel.
>> Rachel, thank you for your time. I know you're really busy and we stretched you a little bit there. Thank you so much for contributing to this wonderful day of great leaders sharing their stories. And you're an inspiration. Thanks for everything you do. We appreciate you. >> Thank you. And let's go do some more Women of the Cloud videos. >> We (laughs) got more coming. Bring those stories on. Back up the story truck. We're ready to go. Thanks so much. >> That's good. >> Thank you. >> Okay, this is theCUBE's coverage of International Women's Day. It's not just going to be March 8th. That's the big celebration day. It's going to be every quarter, more stories coming. Stay tuned at siliconangle.com and thecube.net, where we're bringing all the stories. I'm John Furrier, your host. Thanks for watching. (gentle music)
Heather Ruden & Jenni Troutman | International Women's Day
(upbeat music) >> Hello, everyone. Welcome to theCUBE's special presentation of International Women's Day. I'm John Furrier, host of theCUBE. Jenni Troutman is here, Director of Products and Services, Training and Certification at AWS, and Heather Ruden, Director of Education Programs, Training and Certification. Thanks for coming on theCUBE and for the International Women's Day special program. >> Thanks so much for having us. >> So, I'll just get it out of the way. I'm a big fan of what you guys do. I've been shouting at the top of my lungs, "It's free. Get cloud training and you'll have a six figure job." Pretty much. I'm over amplifying. But this is really a big opportunity in the industry, education and the skills gap, and the skill velocity that's changing. New roles are coming on around cloud native, cloud native operators, cybersecurity. There's so much excitement going on around the industry, and all these open positions, and they need new talent. So you can't get a degree for some of these things. So, nope, it doesn't matter what school you went to, everyone's kind of on a level playing field. This is a really big deal. So, Heather, share with us your thoughts as well on this topic. Jenni, you too. Like, where are you guys at? 'Cause this is a big opportunity for women and anyone to level up in the industry. >> Absolutely. So I'll jump in and then I'll hand it over to Jenni. We're your dream team here. We can talk about both sides of this. So I run a set of programs here at AWS that are really intended to help build the next generation of cloud builders. And we do that with a variety of programs, whether it is targeting young learners from kind of 12 and up. We have AWS GetIT that is designed to get women ambassadors or women mentors in front of girls 12 to 14 and get them curious about a career in STEM. We also have a program that is all digital online. It's available in 11 languages. It's got hundreds of courses.
That's called AWS Educate that is designed to do exactly what you just talked about, expose the opportunities and start building cloud skills for learners at age 13 and up. They can go online and register with an email and start learning. We want them to understand not only what the opportunity is for them, but the ways that they can help influence and bring more diversity and more inclusion into the cloud technology space, and just keep building all those amazing builders that we need here for our customers and partners. And those are the programs that I manage, but Jenni also has an amazing program, a set of programs. And so I'll hand it over to her as you get into the professional side of things. >> So Jenni, you're on the product side. You've got the keys to the kingdom on all the materials and shaping it. What's your view on this? 'Cause this is a huge opportunity and it's always changing. What's the latest and greatest? >> It is a massive opportunity and to give you a sense, there was a study in '21 where IT executives said that talent availability is the biggest challenge to emerging tech adoption. 64% of IT executives said that, up from only 4% the year before. So the challenge is growing really fast, which for everyone that's ready to go out there and learn and try something new is a massive opportunity. And that's really why I'm here. We provide all kinds of learning experiences for people across different cloud technologies to be able to not only gain the knowledge around cloud, but also the confidence to be able to build in the cloud.
And so we look across different learner levels, different roles, different opportunities, and we provide those experiences where people can actually get hands-on in a totally risk-free environment and practice building in the cloud so they can go and be ready to get their certifications, their AWS certifications, give them the credentials to be able to show an employer they can do it, and then go out and get these jobs. It's really exciting. And we go kind of end to end from the very beginning. What is cloud? I want to know what it is all the way through to I can prove that I can build in the cloud and I'm ready for a job. >> So Jenni, you nailed that confidence word. I think I want to double click on that. And Heather, you talked about you're the dream team. You guys, you're the go to market, you bring this to the marketplace. Jenni, you get the products. This is the key, but to me the International Women's Day angle is that what I hear over and over again is, "It's too technical. I'm not qualified." It can be scary. We had a guest on who has two double E degrees in robotics and aerospace and she's hard charging. She said she almost lost her confidence twice in her career. But she was hard charging. It can get scary, but also the ability to level up fast is just as good. So if you can break through that confidence and keep the curiosity and be a builder, talk about that dynamic 'cause you guys are in the middle of it, you're in the industry, how do you handle that? 'Cause I think that's a big thing that comes up over and over again. And confidence is not just women, it's men too. But with women, that comes up as a theme. >> It is. It is a big challenge. I mean, I've struggled with it personally and I mentor a lot of women, and that is the number one challenge holding women back from really being able to advance: the confidence to step out there and show what they can do.
And what I love about some of the products we've put out recently is we have AWS Skill Builder. You can go online, you can get all kinds of free core training and if you want to go deeper, you can go deeper. And there's a lot of different options on there. But what it does is not only gives you that base knowledge, but you can actually go in. We have something called AWS Labs. You can go in and you can actually practice on the AWS console with the services that people are using in their jobs every day without any risk of doing something that is going to blow up in your face. You're not going to suddenly get this big AWS bill. You're not going to break something that's out there running. You just go in. It's your own little environment that gets wiped when you're done and you can practice. And there's lots of different ways to learn as well. So if you go in there and you're watching a video and to your point you're like, "Oh my gosh, this is too technical. I can't understand it. I don't know what I'm going to go do." You can go another route. There's something called AWS Cloud Quest. It's a game. You go in and it's like you're gaming and it walks you through. You're actually in a virtual world. You're walking through and it's telling you, "Hey, go build this and if you need help, here's hints and here's tips." And it continues to build on itself. So you're learning and you're applying practical skills and it's at your own pace. You don't have to watch somebody else talking that is going at a pace that maybe accelerates beyond what you're ready for. You can do it at your own pace, you can redo it, you can try it again until you feel confident that you know it and you're really ready to move on to the next thing. Personally, I find that hugely valuable. I go in and do these myself and I sit there and I have a lot of engineers on my team, very smart people. And I have my own imposter syndrome. I get nervous to go talk to them.
Like, are they going to think I'm totally lost? And so I go in and I learn some of this myself by experimenting. And then I feel like, okay, now I can go ask them some intelligent questions and they're not going to be like, "Oh gosh, my leader is totally unaware of what we're doing." And so I think that we all struggle with confidence. I think everybody does, but I see it especially in women as I mentor them. And that's what I encourage them to do is go and on your own time, practice a bit, get a little bit of experience and once you feel like you can throw a couple words out there that you know what they mean and suddenly other people look at you like, "Oh, she knows what she's talking about." And you can kind of get past that feeling. >> Well Jenni, you nailed it. Heather, she just mentioned she's in the job and she's going and she's still leveling up. That's the end when you're in, but it's also the barriers to entry are lowering. You guys are doing a good job of getting people in, but also growing fast too. So there's two dynamics at play here. How do people do this? What's the playbook? Because I think that's really key, easy to get in. And then once you're in, you can level up fast at your own pace to ride the wave. And then there's new stuff coming. I mean, every re:Invent there's 5,000 announcements. So it's like a zillion new things, and AI is hot now. >> re:Invent is a perfect example of that ongoing imposter syndrome or confidence check for all of us. I think something that Jenni said too is we really try and meet learners where they are and make sure that we have the support, whether it's accessibility requirements or we have the content that is built for the age that we're talking to, or we have a workforce development program called re/Start that is for people that have very little tech experience and really want to talk about a career in cloud, but they need a little bit more handholding. They need a combination of instructor-led and digital.
But then we have AWS Educate, as I mentioned. If you want to be more self-directed, all of these tools are intended to work well together and to be complementary and to take you on a journey as a learner. And the more skills you have, the more you increase your knowledge, the more you can take on. But meeting folks where they are with a variety of programs, tools, languages, and accessibility really helps ensure that we can do that for learners throughout the world. >> That's awesome. Let's get into it. Let's get into the roadmaps of people and their personas. And you guys can share the programs that you have and where people could fit in. 'Cause this comes up a lot when I talk to folks. There's the young person who's like, I'm a gamer or whatever, I want to get a job. I'm in high school or elementary, or I want to tinker around, or I'm in college or I'm learning, I'm an entry level kind of entry. Then you have the re-skilling. I'm going to change my careers, I'm kind of bored, I want to do something compelling. How do I get into the cloud game? And then the advanced re-skill is I want to get into cyber and AI and then there's other. Could you break down? Did I get that right or did I miss anything? And then what's available for those kind of lanes? So those persona lanes? >> Well, let's see, I could start with maybe the high schooler stuff and then we can bring Jenni in as well. I would say a great place to start for anyone is aws.amazon.com/training. That's going to give them the full suite of options that they could take on. If you're in high school, you can go onto AWS Educate. All you need is an email. And if you're 13 years and older, you can start exploring the types of jobs that are available in the cloud and you could start taking some introductory classes. You can do some of those labs in a safe environment that Jenni mentioned. That's a great place to start.
If you are in an environment where you have an educator that is willing to go on this journey with you, we have this AWS GetIT program that is, again, educator-led. So it's an afterschool program where we match mentors and students up with cloud professionals and they do some real-time experimentation. They build an app, they work on things together, and do a presentation at the end. The other thing I would say too is that if you are in a university, I would double check and see if the AWS Academy curriculum is already in your university. And if so, explore some of those classes there. We have instructor-led, educator-ready course curriculum that we've designed to help people get to those certifications, get closer to those jobs, and hopefully then lead people right into Skill Builder and all the things that Jenni talked about to help them as they start out in a professional environment. >> So is the GetIT, is that an instructor-led program that the person has to find someone for? Or is this available for them? >> It is through teachers. It's through educators. We've reached over 19,000 students and we're available in eight countries. There are ways for educators to lead this, but we want to make sure that we are helping the kids be successful and giving them an educator environment to do that. If they want to do it on their own, then they can absolutely go through AWS Educate to explore where they want to get started. >> So what about someone who's in the middle of their career and might want to switch from being a biologist to a cloud cybersecurity guru or a cloud native operator? >> Yeah, so in that case, AWS re/Start is one of the great programs for them to explore. We run that program with collaborating organizations in 160 cities in 80 countries throughout the world.
That is a multi-week cohort-based program where we do take folks through a very clear path towards certification and job skilling that will help them get into those opportunities. Over 98% of the graduates of those cohorts get an interview and are hopefully on their path to getting a job. So that really has global reach. The partnership with collaborating organizations helps us ensure that we find communities that are often unreached by cloud skills training, and we really work to keep a diverse focus on those cohorts and bring those folks into the cloud. >> Okay. Jenni, you've got the Skill Builder action here. What's going on on your side? Because you must have to manage all the change. I mean, AI is hot right now. I'm sure you're cranking away on curriculum and content for SageMaker, large language models, computer vision, cybersecurity. >> We do. There are a lot of options. >> How is your world? Tell us about what people can take away from your side. >> Yeah. So a great way to think about it is if they're already out in the workforce, or they're entering the workforce but have technical skills, it's what are the roles that are interesting and the technologies that are interesting. Because the way we put out our training and our certifications is aligned to paths. So you look at whether you're interested in a specific role. If you're interested in architecting a cloud environment or in security as you mentioned, and you want to go deep in security, there are AWS certifications that give you that. If you achieve them, they're very difficult. But if you work to them and achieve them, they give you the credential that you can take to an employer and say, "Look, I can do this job." And they are in very high demand. In fact that's where if you look at some of the publications that have come out, they talk about, what are people making if they have different certifications? What are the most in-demand certifications that are out there?
And those are what help people get jobs. And so you identify what is that role or that technology area you want to learn. And then you have multiple options for how you build those skills depending on how you want to learn. And again, that's really our focus: providing experiences based on how people learn and making it accessible to them. 'Cause not everybody wants to learn in the same way. And so there is AWS Skill Builder, where people can go learn on their own, that is really great particularly for people who maybe are already working and have to learn in the evenings, on the weekends. People who like to learn at their own pace, who just want to be hands-on, but are self-starters. And they can get those whole learning plans through there all the way aligned to the certification and then they can go get their certification. There's also classroom training. So a lot of people maybe want to do continuous learning online, but want to go really deep with an expert in the room and maybe have a more focused period of time if they can go for a couple days. And so they can do classroom training. We provide a lot of classroom training. We have partners all over the globe who provide classroom training. And so there's that and what we find to be the most powerful is when you couple the two. If you can really get deep, you have an expert, you can ask questions, but first before you go do that, you get some of that foundational that you've kind of learned on your own. And then after you go back and reinforce, you go back online, you try out things that maybe you learned in the classroom, but you hadn't used it enough yet to quite know how to do it. Now you can go back and actually use it, experiment and play around. And so we really encourage that kind of path: figure out what are some areas you're interested in, go learn it and then go get a job and continue to learn, because then once you learn that first area, you start to build confidence in it.
Suddenly other areas become interesting. 'Cause as you said, cloud is changing fast. And once you learn a space, first of all you have to keep going back to stay up on it as it changes. But you quickly find that there are other areas that are really interesting too. >> I've observed that the training side, it's just like cloud itself, it's very agile. You can get hands-on quickly, you don't need to take a class and then get in weeks later. You're in it like it's real time. So you're immersed in gamification and all kinds of ways to funnel into either advanced tracks or certification. So you guys do a great job and I want to give you props for that and a shout out. The question I have for you guys is can you scope the opportunity for these certifications and opportunities for women in particular? What are some of the top jobs pulling down? Scope out the opportunity because I think when people hear that they really fall out of their chair, they go, "Wow, I didn't know I could make $200,000 doing cybersecurity." Well, yeah or maybe more. I just made the number, I don't actually know, but like I know people do make that much in cyber, but there are huge financial opportunities with certifications and education. Can you scope that order of magnitude? Can you share any data? >> Yeah, so in the US they certainly are. Certifications on average align to six-figure-type jobs. And if you go out and do a search, there are research studies out there that are refreshed every year that say what are the top IT industry certifications and how much money do they make? And the reason I don't put a number out there is because it's constantly changing, and in fact it keeps going up. >> It's going up, not going down.
But if you're US, there's a lot of data out there for the US and then there is some for other countries as well around how much on average people make. >> Do you list like the higher level certifications, stack rank them in terms of order? Like say, I'm a type A personnel, I want to climb Mount Everest, I want to get the highest level certification. How do I know that? Is it like laddered up or is like how do you guys present that? >> Yeah, so we have different types of certifications. There is a foundational, which we call the cloud practitioner. That one is more about just showing that you know something about cloud. It's not aligned to a specific job role. But then we have what we call associate level certifications, which are aligned to roles. So there's the solutions architect, cloud developer, so developer operations. And so you can tell by the role and associate is kind of that next level. And then the roles often have a professional level, which is even more advanced. And basically that's saying you're kind of an Uber expert at that point. And then there are technology specialties, which are less about a specific role, although some would argue a security technology specialty might align very well to a security role, but they're more about showing the technology. And so typically, it goes foundational, advanced, professional, and then the specialties are more on the side. They're not aligned, but they're deep. They're deep within that area. >> So you can go dig and pick your deep dive and jump into where you're comfortable. Heather, talk about the commitment in terms of dollars. I know Amazon's flaunted some numbers like 30 million or something, people they want to have trained, hundreds of millions of dollars in investment. This is key, obviously, more people trained on cloud, more operators, more cloud usage, obviously. I see the business connection. What's the women relationship to the numbers? Or what the experience is? How do you guys see that? 
Obviously International Women's Day, get the confidence, got the curiosity. You're a builder, you're in. It's that easy. >> It doesn't always feel that way, I'm sure to everybody, but we'd like to think that it is. Amazon and AWS do invest hundreds of millions of dollars in free training every year that is accessible to everyone out there. I think that sometimes the hardest obstacles to get overcome are getting started and we try and make it as easy as possible to get started with the tools that we've talked about already today. We run into plenty of cohorts of women as part of our re/Start program that are really grateful for the opportunity to see something, see a new way of thinking, see a new opportunity for them. We don't necessarily break out our funding by women versus men. We want to make sure that we are open and diverse for everybody to come in and get the training that they need to. But we definitely want to make sure that we are accessible and available to women and all genders outside of the US and inside the US. >> Well, I know the number's a lot lower than they should be and that's obviously why we're promoting this heavily. There's a lot more interest I see in tech. So digital transformation is gender neutral. I mean, it's like the world eats software and uses software, uses the cloud. So it has to get 50/50 in my opinion. So you guys do a great job. Now that we're done kind of promoting Amazon, which I wanted to do 'cause I think it's super important. Let's talk about you guys. What got you guys involved in tech? What was the inspiration and share some stories about your experiences and advice for folks watching? >> So I've always been in traditionally male dominated roles. I actually started in aviation and then moved to tech. And what I found was I got a mentor early on, a woman who was senior to me and who was kind of who I saw as the smartest person out there. 
She was incredibly smart, she was incredibly kind, and she was always lifting women up. And I kind of latched onto her and followed her around and she was such an amazing mentor. She brought me from throughout tech, from company to company, job to job, was always positioning me in front of other people as the go-to person. And I realized, "Wow, I want to be like her." And so that's been my focus as well in tech is you can be deeply technical in tech or you can be not deeply technical and be in tech and you can be successful both ways, but the way you're going to be most successful is if you find other people, build them up and help put them out in front. And so I personally love to mentor women and to put them in places where they can feel comfortable being out in front of people. And that's really been my career. I have tried to model her approach as much as I can. >> That's a really interesting observation. It's the pattern we've been seeing in all these interviews for the past two years of doing the International Women's Day is that networking, mentoring and sponsorship are one thing. So it's all one thing. It's not just mentoring. It's like people think, "Oh, just mentoring. What does that mean? Advice?" No, it's sponsorship, it's lifting people up, creating a keiretsu, creating networks. Really important. Heather, what's your experience? >> Yeah, I'm sort of the example of somebody who never thought they'd be in tech, but I happened to graduate from college in the Silicon Valley in the early nineties and next thing you know, it's more than a couple years later and I'm deeply in tech and I think it when we were having the conversation about confidence and willingness to learn and try that really spoke to me as well. I think I had to get out of my own way sometimes and just be willing to not be the smartest person in the room and just be willing to ask a lot of questions. 
And with every opportunity to ask questions, I think I ended up with good mentors, male and female, that saw the willingness to ask questions and the willingness to be humble in my approach to learning. And that really helped. I'm also very aware that nobody's journey is the same and I need to create an environment on my team and I need to be a role model within AWS and Amazon for allowing people to show up in the way that they're going to be most successful. And sometimes that will mean giving them learning opportunities. Sometimes that will be hooking them up with a mentor. Sometimes that will be giving them the freedom to do what they need for their family or their personal life. And modeling that behavior regardless of gender has always been how I choose to show up and what I ask my leaders to do. And the more we can do that, I've seen the team be able to grow and flourish in that way and support our entire team. >> I love that story. You also have a great leader, Maureen Lonergan, who I've had many conversations with, but also it starts at the top. Andy Jassy can come across as kind of technical; he gets his hands dirty, he's got a builder mentality. He has first principles and you're bringing up this first principles concept and whether that's passing it forward, what you've learned, having first principles helps in an organization. Can you guys talk about what that's like at your company? 'Cause everyone's different. And sometimes I worry about what I say, but I also have my first principles. So talk about how principles matter in how you guys interface with others and letting people be their authentic self. >> Yeah, I'll jump in, Jenni, and then you can. The Amazon leadership principles are super important to how we interact with each other and they really do provide a set of guidelines for how we work with each other and how we work for our customers and with our partners.
But most of all it gives us a common language and a common set of expectations. And I will be honest, they're not always easy. When you come from an environment that tends to be less open to feedback and less open to direct conversations than you find at Amazon, it can take a while to get used to that, but for me at least, it was extremely empowering to have those tools and those principles as guidance for how to operate and to gain the confidence in using them. I've also been able to participate in hundreds and hundreds of interviews in the time that I've been here as part of an interview team of bar raisers. I think that really helps us understand whether or not folks are going to be successful at AWS and at Amazon and helps them understand if they're going to be able to be successful. >> Bar raising is an Amazon term and it's gender neutral, right Jenni? >> It is gender neutral. >> Bar is a bar, it raises. >> That's right. And it's funny, we say that our culture here is peculiar. And when I started, I had been in consulting for several years, so I worked with a lot of different companies in tech and so I thought I'd seen everything and I came here and I went, "Hmm." I see what they mean by peculiar. It is a very different environment. >> In the fullness of time, it'll all work out. >> That's right, that's right. Well, and it's funny, because when you first start, it's a lot to figure out how to operate in an environment where people do use all 16 leadership principles. I've worked at a lot of companies with three or four core values and nobody can state those. We could state all 16 leadership principles and we use them in our regular everyday dialogue. That is an awkward thing when you first come, to have people saying, "Oh, I'm going to use bias for action in this situation and I'm going to go move fast." And they're actually used in everyday conversations. But after a couple years suddenly you realize, "Oh, I'm doing that."
And maybe even sometimes at the dinner table I'm doing that, which can get to be a bit much. But it creates an environment where we can all be different. We can all think differently. We can all have different ways of doing things, but we have a common overall approach to what we're trying to achieve. And that really gives us a good framework for that. >> Jenni, it's great insight. Heather, thank you so much for sharing your stories. We're going to do this not once a year. We're going to continue this Women in Tech program every quarter. We'll check in with you guys and find out what's new. And thank you for what you do. We appreciate you getting the word out, and it really is an opportunity for everyone with education and cloud, and there are only going to be more opportunities at the edge, in AI, and so much more tech. Thank you for coming on the program. >> Thank you for having us. >> Thanks, John. >> Thank you. That's the International Women's Day segment here with leaders from AWS. I'm John Furrier. Thanks for watching. (upbeat music)
Adam Wenchel, Arthur.ai | CUBE Conversation
(bright upbeat music) >> Hello and welcome to this Cube Conversation. I'm John Furrier, host of theCUBE. We've got a great conversation featuring Arthur AI. I'm your host, and I'm excited to have Adam Wenchel, who's the Co-Founder and CEO. Thanks for joining us today, appreciate it. >> Yeah, thanks for having me on, John, looking forward to the conversation. >> I got to say, it's been an exciting world in AI, or artificial intelligence. Just an explosion of interest, kind of in the mainstream, with the language models, which people don't really get, but they're seeing the benefits of some of the hype around OpenAI. Which kind of wakes everyone up to, "Oh, I get it now." And then of course the pessimism comes in, all the skeptics are out there. But this breakthrough in the generative AI field is just awesome, it's really a shift, it's a wave. We've been calling it probably the biggest inflection point, bigger than the others combined, in terms of what this can do for applications. I mean, all aspects of what we used to know as the computing industry, software industry, hardware, are completely going to get turbocharged. So we're totally, obviously, bullish on this thing. So, this is really interesting. So my first question is, I got to ask you, what's your take? 'Cause you've been doing this, you're in it, and now all of a sudden you're at the beach where the big waves are. Why the explosion of interest? What are you seeing right now?
I mean, really all this excitement just started a few months ago, with ChatGPT and other breakthroughs and the amount of activity and the amount of new systems that we're seeing hitting production already so soon after that is just unlike anything we've ever seen. So it's pretty awesome. And, you know, these language models are just, they could be applied in so many different business contexts and that it's just the amount of value that's being created is again, like unprecedented compared to anything. >> Adam, you know, you've been in this for a while, so it's an interesting point you're bringing up, and this is a good point. I was talking with my friend John Markoff, former New York Times journalist and he was talking about, there's been a lot of work been done on ethics. So there's been, it's not like it's new. It's like been, there's a lot of stuff that's been baking over many, many years and, you know, decades. So now everyone wakes up in the season, so I think that is a key point I want to get into some of your observations. But before we get into it, I want you to explain for the folks watching, just so we can kind of get a definition on the record. What's an LLM, what's a foundational model and what's generative ai? Can you just quickly explain the three things there? >> Yeah, absolutely. So an LLM or a large language model, it's just a large, they would imply a large language model that's been trained on a huge amount of data typically pulled from the internet. And it's a general purpose language model that can be built on top for all sorts of different things, that includes traditional NLP tasks like document classification and sentiment understanding. But the thing that's gotten people really excited is it's used for generative tasks. So, you know, asking it to summarize documents or asking it to answer questions. 
And these aren't new techniques, they've been around for a while, but what's changed is just this new class of models that's based on new architectures. They're just so much more capable that they've gone from sort of science projects to something that's actually incredibly useful in the real world. And there's a number of companies that are making them accessible to everyone so that you can build on top of them. So that's the other big thing: this kind of access to models that can power generative tasks has been democratized in the last few months, and it's just opening up all these new possibilities. And then the third one you mentioned, foundation models, is sort of a broader term for the category that includes LLMs, but it's not just language models that are included. So we've actually seen this for a while in the computer vision world. People have been building on top of pre-trained computer vision models for a while, for image classification, object detection; that's something we've had customers doing for three or four years already. And so, you know, like you said, there are antecedents to everything that's happened, it's not entirely new, but it does feel like a step change. >> Yeah, I did ask ChatGPT to give me a riveting introduction to you and it gave me an interesting read. If we have time, I'll read it. It's kind of fun, you'll get a kick out of it. "Ladies and gentlemen, today we're privileged to have Adam Wenchel, Founder of Arthur, who's going to talk about the exciting world of artificial intelligence." And then it goes on with some really riveting sentences. So if we have time, I'll share that, it's kind of funny. It was good. >> Okay. >> So anyway, this is what people see and this is why I think it's exciting, 'cause I think people are going to start refactoring what they do.
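The "build a task on top of a general-purpose model" pattern Adam describes can be sketched with a deliberately tiny stand-in. The hand-set word weights below play the role of a pre-trained model, purely for illustration; a real system would use an actual foundation model:

```python
# Hand-set word weights standing in for a pre-trained, general-purpose scorer.
# In practice this would be a large network trained on internet-scale data.
PRETRAINED_WEIGHTS = {"great": 1.0, "good": 0.5, "bad": -0.5, "awful": -1.0}

def classify_sentiment(text: str, weights=PRETRAINED_WEIGHTS) -> str:
    """A downstream task (document sentiment) built on top of the shared
    scorer rather than trained from scratch, which is the same pattern as
    building NLP tasks on a foundation model."""
    score = sum(weights.get(word, 0.0) for word in text.lower().split())
    return "positive" if score >= 0 else "negative"
```

The point of the toy is the shape of the stack: one general-purpose base, many cheap task-specific layers on top of it.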
And I've been saying this on theCUBE now for about a couple months: you know, there's a scene in "Moneyball" where Billy Beane sits down with the Red Sox owner and the Red Sox owner says, "If people aren't rebuilding their teams on your model, they're going to be dinosaurs." And it reminds me of what's happening right now. And I think everyone that I talk to in the business sphere is looking at this and they're connecting the dots and just saying, if we don't rebuild our business with this new wave, we're going to be out of business, because there's so much efficiency, there's so much automation, not like DevOps automation, but like the generative tasks that will free up the intellect of people. Just the simple things, like do an intro or do this for me, write some code, write a countermeasure to a hack. I mean, this is kind of what people are doing. And you mentioned computer vision, again, another huge field where 5G things are coming on, it's going to accelerate. What do you say to people when they're kind of leaning towards that "I need to rethink my business" realization? >> Yeah, it's 100% accurate, and what's been amazing to watch the last few months is the speed at which, and the urgency with which, companies like Microsoft and Google and others are actually racing to do that rethinking of their business. And you know, those teams, those companies, which are large and haven't always been the fastest moving companies, are working around the clock. And the pace at which they're rolling out LLMs across their suite of products is just phenomenal to watch.
And it's not just the large tech companies as well; I mean, we're seeing it in the number of startups. Like, every week a couple of new startups get in touch with us for help with their LLMs, and you know, there's just a huge amount of venture capital flowing into it right now, because everyone realizes the opportunity for transforming legal and healthcare and content creation in all these different areas is just wide open. And so there's a massive gold rush going on right now, which is amazing. >> And the cloud scale, obviously the horizontal scalability of the cloud, brings us to another level. We've been seeing data infrastructure since the Hadoop days when big data was coined. Now you're seeing this kind of bear fruit; now you have vertical specialization where data shines, and large language models are all set up perfectly for this piece. And you know, as you mentioned, you've been doing it for a long time. Let's take a step back, and I want to get into how you started the company, what drove you to start it. Because you know, as an entrepreneur, you probably saw this opportunity before other people, like, "Hey, this is finally it, it's here." Can you share the origination story of what you guys came up with, how you started it, what was the motivation, and take us through that origination story. >> Yeah, absolutely. So as I mentioned, I've been doing AI for many years. I started my career at DARPA, but it wasn't really until 2015, 2016, when my previous company was acquired by Capital One, that I started working there, and shortly after I joined, I was asked to start their AI team and scale it up. And for the first time I was actually doing it; we had production models that we were working with, at scale, right? And so there were hundreds of millions of dollars of business revenue and certainly a big group of customers who were impacted by the way these models acted.
And so it got me hyper-aware of the issues that come up when you get models into production. You know, I think people who are earlier in the AI maturity look at that as a finish line, but it's really just the beginning, and there's this constant drive to make them better, make sure they're not degrading, make sure you can explain what they're doing, and if they're impacting people, making sure they're not biased. And at that time, there really weren't any tools that existed to do this; there wasn't open source, there wasn't anything. And so after a few years there, I really started talking to other people in the industry, and there was a really clear theme that this needed to be addressed. And so I joined with my Co-Founder John Dickerson, who was on the faculty at the University of Maryland and had been doing a lot of research in these areas. And so we ended up joining up together and starting Arthur. >> Awesome. Well, let's get into what you guys do. Can you explain the value proposition? What are people using you for now? Where's the action? What do the customers look like? What do prospects look like? Obviously you mentioned production; this has been the theme. It's not like people woke up one day and said, "Hey, I'm going to put stuff into production." This has kind of been happening. There's been companies that have been doing this at scale, and then there's a whole follower model coming on in mainstream enterprise and businesses. So the early adopters are there now, in production. What do you guys do? I mean, 'cause I think about it, just driving the car off the lot is not enough; you got to manage operations. I mean, that's a big thing. So what do you guys do? Talk about the value proposition and how you guys make money.
So you want to make sure that if you're going to be upgrading a model, if you're going to replacing one that's currently in production, that you've proven that it's going to perform well, that it's going to be perform ethically and that you can explain what it's doing. And then when you launch it into production, traditionally data scientists would spend 25, 30% of their time just manually checking in on their model day-to-day babysitting as we call it, just to make sure that the data hasn't drifted, the model performance hasn't degraded, that a programmer did make a change in an upstream data system. You know, there's all sorts of reasons why the world changes and that can have a real adverse effect on these models. And so what we do is bring the same kind of automation that you have for other kinds of, let's say infrastructure monitoring, application monitoring, we bring that to your AI systems. And that way if there ever is an issue, it's not like weeks or months till you find it and you find it before it has an effect on your P&L and your balance sheet, which is too often before they had tools like Arthur, that was the way they were detected. >> You know, I was talking to Swami at Amazon who I've known for a long time for 13 years and been on theCUBE multiple times and you know, I watched Amazon try to pick up that sting with stage maker about six years ago and so much has happened since then. And he and I were talking about this wave, and I kind of brought up this analogy to how when cloud started, it was, Hey, I don't need a data center. 'Cause when I did my startup that time when Amazon, one of my startups at that time, my choice was put a box in the colo, get all the configuration before I could write over the line of code. So the cloud became the benefit for that and you can stand up stuff quickly and then it grew from there. Here it's kind of the same dynamic, you don't want to have to provision a large language model or do all this heavy lifting. 
So you're seeing companies coming out there saying you can get started faster; there's like a new way to get it going. So it's kind of the same vibe of limiting that heavy lifting. >> Absolutely. >> How do you look at that? Because this seems to be a wave that's going to be coming in, and how do you guys help companies who are going to move quickly and start developing? >> Yeah, so I think in this race, this kind of gold rush mentality, the race to get these models into production, we're starting to see more examples and evidence that there are a lot of risks that go along with it. Either your model, your system, says things that are just wrong, you know, whether it's hallucination or just making things up, and there's lots of examples. If you go on Twitter and the news, you can read about those, as well as times when there could be toxic content coming out of things like that. And so there are a lot of risks there that you need to think about and be thoughtful about when you're deploying these systems. But you know, you need to balance that with the business imperative of getting these things into production and really transforming your business. And so that's where we help people. We say go ahead, put them in production, but just make sure you have the right guardrails in place so that you can do it in a smart way that's going to reflect well on you and your company. >> Let's frame the challenge for the companies now. Obviously there are the people doing large-scale production, and then you have companies maybe as small as us who have large linguistic databases, or transcripts, for example, right? So what are customers doing and why are they deploying AI right now? And is it a speed game, is it a cost game? Why have some companies been able to deploy AI at faster rates than others? And what's a best practice to onboard new customers? >> Yeah, absolutely.
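The guardrails Adam described a moment ago can be as simple as a wrapper that screens prompts and responses before anything reaches a user. A minimal sketch follows; the deny-list and the `call_llm` stub are illustrative stand-ins, not how any particular product implements it:

```python
def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for whatever hosted model you call in practice.
    return "Our refund policy allows returns within 30 days."

BLOCKED_TERMS = {"ssn", "password"}  # illustrative deny-list only

def guarded_completion(prompt: str, llm=call_llm) -> str:
    """Screen the prompt before the model sees it, and the response
    before the user does; block either side on a match."""
    if any(term in prompt.lower() for term in BLOCKED_TERMS):
        return "[blocked: prompt touches a restricted topic]"
    answer = llm(prompt)
    if any(term in answer.lower() for term in BLOCKED_TERMS):
        return "[blocked: response withheld by output filter]"
    return answer
```

Production guardrails layer on much more, such as toxicity classifiers and groundedness checks, but the shape is the same: checks on both sides of the model call, with a safe fallback when one trips.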
So I mean, we're seeing, across a bunch of different verticals, there are leaders who have really started to solve this puzzle of getting AI models into production quickly and being able to iterate on them quickly. And I think those are the ones that realize that imperative you mentioned earlier, about how transformational this technology is. And you know, a lot of times even the CEOs or the boards are very personally driving this sense of urgency around it. And so, you know, that creates a lot of movement, right? And so those companies have put in place really smart infrastructure and rails so that data scientists aren't encumbered by having to hunt down data and get access to it. They're not encumbered by having to stand up new platforms every time they want to deploy an AI system; that stuff is already in place. There's a really nice ecosystem of products out there, including Arthur, that you can tap into. Compared to five or six years ago, when I was building at a top 10 US bank, at that point you really had to build almost everything yourself, and that's not the case now. And so it's really nice to have things like, you know, you mentioned AWS SageMaker, and a whole host of other tools that can really accelerate things. >> What's your profile customer? Is it someone who already has a team, or can people who are learning just dial into the service? What's the persona? What's the pitch, if you will? How do you align with that customer value proposition? Do people have to be built out with a team and in play, or is it pre-production, or can you start with people who are just getting going?
>> Yeah, people do start using it pre-production for validation, but I think a lot of our customers do have a team going, and they're either close to putting something into production or about to. It's everything from large enterprises that have really complicated environments, with dozens of models running all over doing all sorts of use cases, to tech startups that are very focused on a single problem, but that's like the lifeblood of the company, and so they need to guarantee that it works well. And you know, we make it really easy to get started, especially if you're using one of the common model development platforms; you can just kind of turnkey get going and make sure that you have a nice feedback loop. So then when your models are out there, it's pointing out areas where it's performing well, areas where it's performing less well, giving you that feedback so that you can make improvements, whether it's in training data or featurization work or algorithm selection. There's a number of, you know, depending on the symptoms, there's a number of things you can do to increase performance over time, and we help guide people on that journey. >> So Adam, I have to ask, since you have such a great customer base and they're smart and they got teams and you're on the front end. I mean, early adopters is kind of an overused word, but they're killing it. They're putting stuff into production; it's not like it's a test, it's not like it's early. So as the next wave comes of fast followers, how do you see that coming online? What's your vision for that? How do you see companies that are just waking up out of the, you know, freeze of old IT, to like, okay, they got cloud, but they're not yet there? What do you see in the market? I see you're in the front end now with the top people really nailing AI and working hard. What's the- >> Yeah, I think a lot of these tools are becoming, every year they get easier, more accessible, easier to use.
And so, you know, as the market broadens, it takes less and less of a lift to put these systems in place. And the thing is, every business is unique; they have their own kind of data, and so you can use these foundation models, which have just been trained on generic data, as a great starting point, a great accelerant, but then in most cases you're either going to want to create a model or fine-tune a model using data that really comes from your particular customers, the people you serve, so that it really reflects that and takes that into account. And so I do think that the size of that market is expanding and it's broadening as these tools just become easier to use and the knowledge about how to build these systems becomes more widespread. >> Talk about the customer base you have now. What's the makeup, what size are they? Give a taste, a little bit, of the customer base you got there. What do they look like? I'll say Capital One, we know very well; while you were there, they were large scale, lots of data, from fraud detection to all kinds of cool stuff. What do your customers now look like? >> Yeah, so we have a variety, but I would say one area we're really strong: we have several of the top 10 US banks. That's not surprising, that's a strength for us, but we also have Fortune 100 customers in healthcare, in manufacturing, in retail, in semiconductor and electronics. So what we find is, in any of these major verticals, there are typically, you know, one, two, three kind of companies that are really leading the charge, and those are the ones that, in our opinion, for the next multiple decades are going to be the leaders, the ones that really lead the charge on this AI transformation. And so we're very fortunate to be working with some of those.
And then we have a number of startups as well who we love working with just because they're really pushing the boundaries technologically and so they provide great feedback and make sure that we're continuing to innovate and staying abreast of everything that's going on. >> You know, these early adopters, even when the hyperscalers were coming online, they had to build everything themselves. That's the new, they're like the alphas out there building it. This is going to be a big wave again as that fast follower comes in. And so when you look at the scale, what advice would you give folks out there right now who want to tee it up and what's your secret sauce that will help them get there? >> Yeah, I think that the secret to teeing it up is just dive in and start. Like, I think, there's not really a secret. I think it's amazing how accessible these are. I mean, there's all sorts of ways to access LLMs, either via API access or downloadable in some cases. And so, you know, go ahead and get started. And then our secret sauce really is the way that we provide that performance analysis of what's going on, right? So we can tell you in a very actionable way, like, hey, here's where your model is doing good things, here's where it's doing bad things. Here's something you want to take a look at, here's some potential remedies for it. We can help guide you through that. And that way when you're putting it out there, A, you're avoiding a lot of the common pitfalls that people see and B, you're able to really kind of make it better in a much faster way with that tight feedback loop. >> It's interesting, we've been kind of riffing on this supercloud idea because it was just a different name than multicloud and you see apps like Snowflake built on top of AWS without even spending any CapEx, you just ride that cloud wave. This next AI, super AI wave is coming. I don't want to call it AIOps because I think there's a different distinction.
If you, MLOps and AIOps seem a little bit old, almost a few years back, how do you view that? Because everyone's like, "Is this AIOps?" And like, "No, not kind of, but not really." How would you, you know, when someone says, just shoots off the hip, "Hey Adam, aren't you doing AIOps?" Do you say, yes we are, or do you say, yes, but we do it differently, because it doesn't seem like it's the same old AIOps. What's your- >> Yeah, it's a good question. AIOps has been a term that was co-opted for other things, and MLOps also, people have used it for different meanings. So I like the term just AI infrastructure, I think it kind of like describes it really well and succinctly. >> But you guys are doing the ops. I mean that's the kind of ironic thing, it's like the next level, it's like NextGen ops, but it's not, you don't want to be put in that bucket. >> Yeah, no, it's a very operationally focused platform that we have, I mean, it fires alerts, people can action off them. If you're familiar with like the way people run security operations centers or network operations centers, we do that for data science, right? So think of it as a DSOC, a Data Science Operations Center, where all your models, you might have hundreds of models running across your organization, you may have five, but as problems are detected, alerts can be fired and you can actually work the case, make sure they're resolved, escalate them as necessary. And so there is a very strong operational aspect to it, you're right. >> You know, one of the things I think is interesting is, is that, if you don't mind commenting on it, is that the aspect of scale is huge and it feels like that was made up and now you have scale and production. What's your reaction to that when people say, how does scale impact this? >> Yeah, scale is huge. You know, I think, look, the highest leverage business areas to apply these to are generally going to be the ones at the biggest scale, right?
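The "Data Science Operations Center" Wenchel describes, alerts fired when a model misbehaves, then worked as cases the way a security operations center handles incidents, might look like the minimal sketch below. The metric names, thresholds, and model names are all hypothetical; a real platform would persist cases and route notifications.

```python
# Minimal sketch of a DSOC-style workflow: metric readings that cross a
# threshold fire an alert, and each alert becomes a case that can be
# worked, escalated, and resolved, like a security operations center,
# but for models.

from dataclasses import dataclass, field

@dataclass
class Case:
    model: str
    metric: str
    value: float
    status: str = "open"       # open -> escalated -> resolved
    notes: list = field(default_factory=list)

    def escalate(self, note):
        self.status = "escalated"
        self.notes.append(note)

    def resolve(self, note):
        self.status = "resolved"
        self.notes.append(note)

def fire_alerts(metrics, thresholds):
    """Open a case for every (model, metric) reading past its threshold."""
    return [Case(model, metric, value)
            for (model, metric), value in metrics.items()
            if value > thresholds.get(metric, float("inf"))]

readings = {("fraud_v3", "drift_score"): 0.42, ("churn_v1", "drift_score"): 0.08}
cases = fire_alerts(readings, {"drift_score": 0.30})
cases[0].escalate("drift traced to a new customer cohort")
print(cases[0].model, cases[0].status)
```

Whether an organization runs five models or hundreds, the workflow is the same; only the volume of cases changes.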
And I think that's one of the advantages we have. Several of us come from enterprise backgrounds and we're used to doing things enterprise grade at scale and so, you know, we're seeing more and more companies, I think they started out deploying AI in sort of, you know, important but not necessarily like the crown jewel areas of their business, but now they're deploying AI right in the heart of things and yeah, the scale that some of our companies are operating at is pretty impressive. >> John: Well, super exciting, great to have you on and congratulations. I got a final question for you, just random. What are you most excited about right now? Because I mean, you got to be pretty pumped right now with the way the world is going and again, I think this is just the beginning. What's your personal view? How do you feel right now? >> Yeah, the thing I'm really excited about for the next couple years now, you touched on it a little bit earlier, is a sort of convergence of AI and AI systems, with companies sort of turning into AI native businesses. And so, as you sort of do more, get further along this transformation curve with AI, it turns out that like the better the performance of your AI systems, the better the performance of your business. Because these models are really starting to underpin all these key areas that cumulatively drive your P&L. And so one of the things that we work on a lot with our customers is to just understand, you know, take these really esoteric data science notions and performance and tie them to all their business KPIs, so that way you really are, it's kind of like the operating system for running your AI native business. And we're starting to see more and more companies get farther along that maturity curve and starting to think that way, which is really exciting. >> I love the AI native.
I haven't heard any startup yet say AI first, although we kind of use the term, but I guarantee that's going to come in all the pitch decks, we're an AI first company, it's going to be a great run. Adam, congratulations on your success to you and the team. Hey, if we do a few more interviews, we'll get the linguistics down. We can have bots just interact with you directly and have an interview directly. >> That sounds good, I'm going to go hang out on the beach, right? So, sounds good. >> Thanks for coming on, really appreciate the conversation. Super exciting, really important area and you guys are doing great work. Thanks for coming on. >> Adam: Yeah, thanks John. >> Again, this is a Cube Conversation. I'm John Furrier here in Palo Alto, AI going next gen. This is legit, this is going to go to a whole other level that's going to open up huge opportunities for startups, that's going to open up opportunities for investors, and the value to the users and the experience will come in, in ways I think no one will ever see. So keep an eye out for more coverage on siliconangle.com and theCUBE.net, thanks for watching. (bright upbeat music)
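Wenchel's closing point, tying "esoteric data science notions" to business KPIs, usually comes down to pricing model errors in dollars. A hypothetical sketch for a fraud model, with invented error counts and costs, shows how a retrained model's metric changes roll up into the P&L:

```python
# Hypothetical translation of model metrics into a business KPI:
# for a fraud model, a false negative costs the average fraud loss,
# a false positive costs a manual-review fee, so better model
# performance maps directly onto dollars.

def fraud_model_cost(false_negatives, false_positives,
                     avg_fraud_loss=800.0, review_cost=25.0):
    """Estimated dollar cost of the model's errors over a period."""
    return false_negatives * avg_fraud_loss + false_positives * review_cost

# Old model vs. a retrained one over the same month of traffic:
old = fraud_model_cost(false_negatives=120, false_positives=2000)
new = fraud_model_cost(false_negatives=90, false_positives=2600)
print(old - new)  # the retrained model comes out ahead despite more reviews
```

The cost function and its constants are assumptions for illustration; in practice each business would calibrate them from its own loss and operations data.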
LaDavia Drane, AWS | International Women's Day
(bright music) >> Hello, everyone. Welcome to theCUBE special presentation of International Women's Day. I'm John Furrier, host of theCUBE. This is a global special open program we're doing every year. We're going to continue it every quarter. We're going to do more and more content, getting the voices out there and celebrating the diversity. And I'm excited to have an amazing guest here, LaDavia Drane, who's the head of Global Inclusion, Diversity & Equity at AWS. LaDavia, we tried to get you in on AWS re:Invent, and you were super busy. So much going on. The industry has seen the light. They're seeing everything going on, and the numbers are up, but still not there, and getting better. This is your passion, our passion, a shared passion. Tell us about your situation, your career, how you got into it. What's your story? >> Yeah. Well, John, first of all, thank you so much for having me. I'm glad that we finally got this opportunity to speak. How did I get into this work? Wow, you know, I'm doing the work that I love to do, number one. It's always been my passion to be a voice for the voiceless, to create a seat at the table for folks that may not be welcome at certain tables. And so, it's been something that's been kind of the theme of my entire professional career. I started off as a lawyer, went to Capitol Hill, was able to do some work with members of Congress, both women members of Congress, but also, minority members of Congress in the US Congress. And then, that just morphed into what I think has become a career for me in inclusion, diversity, and equity. I decided to join Amazon because I could tell that it's a company that was ready to take it to the next level in this space. And sure enough, that's been my experience here. So now, I'm in it, I'm in it with two feet, doing great work. And yeah, yeah, it's almost a full circle moment for me. >> It's really an interesting background. You have a background in public policy. You mentioned Capitol Hill.
That's awesome. DC kind of moves slow, but it's complicated machinery there. Obviously, as you know, navigating that, Amazon grew significantly. We've been at every re:Invent with theCUBE since 2013, like just one year. I watched Amazon grow, and they've become very fast and also complicated, like, I won't say like Capitol, 'cause that's very slow, but Amazon's complicated. AWS is in the realm of powering a generation of public policy. We had the JEDI contract controversy, all kinds of new emerging challenges. This pivot to tech was great timing because one, (laughs) Amazon needed it because they were growing so fast in a male dominated world, but also, their business is having real impact on the public. >> That's right, that's right. And when you say the public, I'll just call it out. I think that there's a full spectrum of diversity and we work backwards from our customers, and our customers are diverse. And so, I really do believe, I agree that I came to the right place at the right time. And yeah, we move fast and we're also moving fast in this space of making sure that both internally and externally, we're doing the things that we need to do in order to reach a diverse population. >> You know, I've noticed how Amazon's changed from the culture, the male dominated culture. Let's face it, it was. And now, I've seen over the past five years, specifically go back five, it's kind of in my mental model, just the growth of female leaders, it's been impressive. And there was some controversy. They were criticized publicly for this. And we said a few things as well, like around 2014. How is Amazon ensuring, and continuing to ensure, that female employees feel represented and empowered? What's going on there? What programs do you have? Because it's not just doing it, it's continuing it, right? And 'cause there is a lot more to do. I mean, half (laughs) the products are digital now for everybody. It's not just one population. (laughs) Everyone uses digital products.
What is Amazon doing now to keep it going? >> Well, I'll tell you, John, it's important for me to note that while we've made great progress, there's still more that can be done. I am very happy to be able to report that we have big women leaders. We have leaders running huge parts of our business, which includes storage, customer experience, industries and business development. And yes, we have all types of programs. And I should say that, instead of calling it programs, I'm going to call it strategic initiatives, right? We are very thoughtful about how we engage our women. And not only how we hire, attract women, but how we retain our women. We do that through engagement, groups like our affinity groups. So Women at Amazon is an affinity group. Women in finance, women in engineering. Just recently, I helped our Black employee network women's group launch, BEN Women. And so you have these communities of women who come together, support and mentor one another. We have what we call Amazon Circles. And so these are safe spaces where women can come together and can have conversations, where we are able to connect mentors and sponsors. And we're seeing that it's making all the difference in the world for our women. And we see that through what we call Connections. We have an inclusion sentiment tracker. So we're able to ask questions every single day and we get a response from our employees, and we can see how our women are feeling, how they are feeling included at work. Are they feeling as though they can be who they are authentically at Amazon? And so, again, there's more work that needs to be done. But I will say that as I look at the data, as I'm talking to and engaging women, I really do believe that we're on the right path. >> LaDavia, talk about the urgent needs of the women that you're hearing from the Circles. That's a great program. The affinity circles, the groups are great. Now, you have the groups, what are you hearing? What are the needs of the women?
>> So, John, I'll just go a little bit into what's becoming a conversation around equity. So, initially I think we talked a lot about equality, right? We wanted everyone to have fair access to the same things. But now, women are looking for equity. We're talking about not just leveling the playing field, which is equality, but don't give me the same as you give everyone else. Instead, recognize that I may have different circumstances, I may have different needs. And give me what I need, right? Give me what I need, not just the same as everyone else. And so, I love seeing women evolve in this way, and being very specific about what they need more than, or what's different than, what a man may have in the same situation, because their circumstances are not always the same and we should treat them as such. >> Yeah, I think that's a great equity point. I interviewed a woman here, ex-Amazonian, she's now at a GSI, a Global System Integrator. She's a single mom. And she said remote work brought her equity because people on her team realized that she was a single mom. And it wasn't the, how do you balance life, it was her reality. And what happened was, she had more empathy with the team because of the new work environment. So, I think this is an important point to call out, that equity, because that really makes things smoother in terms of the interactions, not the assumption that you have to be, you know, always the same as a man. So, how does that go? What's the current... How would you characterize the progress in that area right now? >> I believe that employers are just getting better at this. It's just like you said, with hybrid being the norm now, you have an employer who is looking at people differently based on what they need. And it's not a problem, it's not an issue that a single mother says, "Well, I need to be able to leave by 5:00 PM."
I think that employers now, and Amazon is right there along with other employers, are starting to evolve that muscle of meeting the needs. People don't have to feel different. You don't have to feel as though there's some kind of special circumstance for me. Instead, it's something that we, as employers, we're asking for. And we want to meet those needs that are different in some situations. >> I know you guys do a lot of support of women outside of AWS, and I had a story I recorded for the program. This woman, she talked about how she was a nerd from day one. She's a tomboy. They called her a tomboy, but she always loved robotics. And she ended up getting dual engineering degrees. And she talked about how she didn't run away, and there were many signals to her not to go. And she powered through, at that time, and during her generation, that was tough. And she was successful. How are you guys taking the education to STEM, to women, at young ages? Because we don't want to turn people away from tech if they have the natural affinity towards it. And not everyone is going to be, as, you know, (laughs) strong, if you will. And she was a bulldog, she was great. She's just like, "I'm going for it. I love it so much." But not everyone's like that. So, this is an educational thing. How do you expose technology, STEM for instance, and make it more accessible, no stigma, all that stuff? I mean, I think we've come a long way, but still. >> What I love about women is we don't just focus on ourselves. We do a very good job of thinking about the generation that's coming after us. And so, I think you will see that very clearly with our women Amazonians. I'll talk about three different examples of ways that Amazonian women in particular, and there are men that are helping out, but I'll talk about the women in particular that are leading in this area.
On my team, in the Inclusion, Diversity & Equity team, we have a program that we run in Ghana where we meet basic STEM needs for an afterschool program. So we've taken this small program, and we've turned their summer camp into this immersion, where girls and boys, we do focus on the girls, can come and be completely immersed in STEM. And we provide the technology that they need, so that they'll be able to have access to this whole new world of STEM. Another program, which is run out of our AWS In Communities team, is called AWS Girls' Tech Day. All across the world where we have data centers, we're running these Girls' Tech Days. They're basically designed to educate, empower and inspire girls to pursue a career in tech. Really, really exciting. I was at the Girls' Tech Day here recently in Columbus, Ohio, and I got to tell you, it was the highlight of my year. And then I'll talk a little bit about one more, it's called AWS GetIT, and it's been around for a while. So this is a program, again, it's a global program, it's actually across 13 countries. And it allows girls to explore cloud technology, in particular, and to use it to solve real world problems. Those are just three examples. There are many more. There are actually women Amazonians that create these opportunities off the side of their desk in their local communities. We, in Inclusion, Diversity & Equity, we fund programs so that women can do this work, this STEM work, in their own local communities. But those are just three examples of some of the things that our Amazonians are doing to bring girls along, to make sure that the next generation is set up and that the next generation knows that STEM is accessible for girls. >> I'm a huge believer. I think that's amazing. That's great inspiration. We need more of that. It's awesome. And why wouldn't we spread it around? I want to get to the equity piece, that's the theme for this year's IWD.
But before getting to that segment, I want to ask you about your title, and the choice of words and the sequence. Okay, Global Inclusion, Diversity, Equity. Not diversity only. Inclusion is first. We've had this debate on theCUBE for many years now, a few years back, it started with, "Inclusion is before diversity," "No, diversity before inclusion, equity." And so there's always been a debate (laughs) around the choice of words and their order. What's your opinion? What's your reaction to that? Is it by design? And does inclusion come before diversity, or am I just reading into it? >> Inclusion doesn't necessarily come before diversity. (John laughs) It doesn't necessarily come before equity. Equity isn't last, but we do lead with inclusion in AWS. And that is very important to us, right? And thank you for giving me the opportunity to talk a little bit about it. We lead with inclusion because we want to make sure that every single one of our builders knows that they have a place in this work. And so it's important that we don't only focus on hiring, right? Diversity, even though there are many, many different levels and spectrums to diversity. Inclusion, if you start there, I believe, is what it takes to make sure that you have a workplace where everyone knows you're included here, you belong here, we want you to stay here. And so, it helps as we go after diversity. And we want all types of people to be a part of our workforce, but we want you to stay. And inclusion is the thing. It's the thing that I believe makes sure that people stay, because they feel included. So we lead with inclusion. It doesn't mean that we put diversity or equity second or third, but we are proud to lead with inclusion. >> Great description. That was fabulous. Totally agree. Double click, thumbs up. Now let's get into the theme. Embracing equity, 'cause this is a term, it's in quotes. What does that mean to you? You mentioned it earlier, I love it. What does embrace equity mean? >> Yeah.
You know, I do believe that when people think about equity, especially when non-women think about equity, it's kind of scary. It's, "Am I going to give away what I have right now to make space for someone else?" But that's not what equity means. And so I think that it's first important that we just educate ourselves about what equity really is. It doesn't mean that someone's going to take your spot, right? It doesn't mean that the pie, let's use that analogy, gets smaller. The pie gets bigger, right? >> John: Mm-hmm. >> And everyone is able to have their piece of the pie. And so, I do believe that, I love that IWD, International Women's Day, is leading with embracing equity, because we're going to the heart of the matter when we go to equity. We're going to the place where most people feel most challenged, and challenging people to think about equity and what it means and how they can contribute to equity and thus, embrace equity. >> Yeah, I love it. And the advice that you have for tech professionals out there on this, how do you advise other groups? 'Cause you guys are doing a lot of great work. Other organizations are catching up. What would be your advice to folks who are working on this equity challenge to reach gender equity and other equitable strategic initiatives? And everyone's working on this. Sustainability and equity are two big projects we're seeing in every single company right now. >> Yeah, yeah. I will say that I believe that AWS has proven that equity, and going after equity, does work. Embracing equity does work. One example I would point to is our AWS Impact Accelerator program. I mean, we provide 30 million for early stage startups led by women, Black founders, Latino founders, LGBTQ+ founders, to help them scale their business. That's equity. That's giving them what they need. >> John: Yeah. >> What they need is access to capital. And so, what I'd say to companies who are looking at going into the space of equity, I would say embrace it.
Embrace it. Look at examples of what companies like AWS are doing around it and embrace it, because I do believe that the tech industry will be better when we're comfortable with embracing equity and creating strategic initiatives so that we can expand equity and make it something that's just, it's just normal. It's the normal course of business. It's what we do. It's what we expect of ourselves and our employees. >> LaDavia, you're amazing. Thank you for spending the time. My final couple questions are really more around you. Capitol Hill, DC, Amazon Global Head of Inclusion, Diversity & Equity, as you look at making change, being a change agent, being a leader, it's really kind of similar, right? You've got DC, it's hard to make change there, but if you do it, it works, right? (laughs) If you don't, you're on the side of the road. So, as you're in your job now, what are you most excited about? What's on your agenda? What's your focus? >> Yeah, so I'm most excited about the potential of what we can get done, not just for builders that are currently in our seats, but for builders in the future. I tend to focus on that little girl. I don't know her, I don't know where she lives. I don't know how old she is now, but she's somewhere in the world, and I want her to grow up and for there to be no question that she has access to AWS, that she can be an employee at AWS. And so, that's where I tend to center, I center on the future. I try to build now, for what's to come, to make sure that this place is accessible for that little girl. >> You know, I've been saying for a long time that software is eating the world, now you got digital transformation, business transformation. And that's not a male only thing, or a certain category, it's everybody. And so, software that's being built, and the systems that are being built, have to have first principles. Andy Jassy is very strong on this.
He's been publicly saying this when trying to get pinned down about certain books in the bookstore that might offend another group. And he's like, "Look, we have first principles. First principles are a big part of leading." What's your reaction to that? How would you talk to another professional and say, "Hey," you know this, "How do I make the right call? Am I doing the wrong thing here? And I might say the wrong thing here." And is it first principles based? What are the guardrails? How do you keep that in check? How would you advise someone as they go forward and lean in to drive some of the change that we're talking about today? >> Yeah, I think as leaders, we have to trust ourselves. And Andy, actually, is a great example. When I came in as head of ID&E for AWS, he was our CEO here at AWS. And I saw how he authentically spoke from his heart about these issues. And it just aligned with who he is personally, his own personal principles. And I do believe that leaders should be free to do just that. Not to be scripted, but to lead with their principles. And so, I think Andy's actually a great example. I believe that I am the professional in this space at this company that I am today because of the example that Andy set. >> Yeah, you guys do a great job, LaDavia. What's next for you? >> What's next. >> World tour, you traveling around? What's on your plate these days? Share a little bit about what you're currently working on. >> Yeah, so you know, at Amazon, we're always diving deep. We're always diving deep, we're looking for root cause, working very hard to look around corners, and trying to build now for what's to come in the future. And so I'll continue to do that. Of course, we're always planning and working towards re:Invent, so hopefully, John, I'll see you at re:Invent this December. But we have some great things happening throughout the year, and we'll continue to...
I think it's really important, as opposed to looking to do new things, to just continue to flex the same muscles and to show that we can be very, very focused and intentional about doing the same things over and over each year to just become better and better at this work in this space, and to show our employees that we're committed for the long haul. So of course, there'll be new things on the horizon, but what I can say, especially to Amazonians, is we're going to continue to stay focused, and continue to get at this issue, this issue of inclusion, diversity and equity, and continue to do the things that work and make sure that our culture evolves at the same time. >> LaDavia, thank you so much. I'll give you the final word. Just share some of the big projects you guys are working on so people can know about them, your strategic initiatives. Take a minute to plug some of the major projects and things that are going on that people either know about or should know about, or need to know about. Take a minute to share some of the big things you guys got going on, or most of the things. >> So, one big thing that I would like to focus on, focus my time on, is what we call our Innovation Fund. This is actually how we scale our work and we meet the community's needs, by providing micro grants to our employees, so our employees can go out into the world and sponsor all types of different activities, create activities in their local communities, or throughout the regions. And so, that's probably one thing that I would like to focus on, just because, number one, it's our employees, it's how we scale this work, and it's how we meet our community's needs in a very global way. And so, thank you John, for the opportunity to talk a bit about what we're up to here at Amazon Web Services. But it's just important to me that I end with our employees because for me, that's what's most important. And they're doing some awesome work through our Innovation Fund.
>> Inclusion makes the workplace great. Empowerment, with that kind of program, is amazing. LaDavia Drane, thank you so much. Head of Global Inclusion, Diversity & Equity at AWS. This is International Women's Day. I'm John Furrier with theCUBE. Thanks for watching and stay with us for more great interviews and people and what they're working on. Thanks for watching. (bright music)
Prem Balasubramanian and Suresh Mothikuru | Hitachi Vantara: Build Your Cloud Center of Excellence
(soothing music) >> Hey everyone, welcome to this event, "Build Your Cloud Center of Excellence." I'm your host, Lisa Martin. In the next 15 minutes or so, my guests and I are going to be talking about redefining cloud operations and application modernization for customers, and specifically how partners are helping to speed up that process. As you saw in our first two segments, we talked about the problems enterprises are facing with cloud operations, and about redefining cloud operations to solve those problems. This segment focuses on how Hitachi Vantara's partners are really helping to speed up that process. We've got Johnson Controls here to talk about their partnership with Hitachi Vantara. Please welcome both of my guests: Prem Balasubramanian, SVP and CTO, Digital Solutions at Hitachi Vantara, and Suresh Mothikuru, SVP, Customer Success, Platform Engineering and Reliability Engineering at Johnson Controls. Gentlemen, welcome to the program, great to have you. >> Thank you. >> Thank you, Lisa. >> First question is to both of you, and Suresh, we'll start with you. We want to understand, you know, the cloud operations landscape is increasingly complex; we've talked a lot about that in this program. Talk to us, Suresh, about some of the biggest challenges and pain points that you've faced with respect to that. >> Thank you. I think it's a great question. I mean, cloud has evolved a lot in the last 10 years. You know, when we were talking about a single cloud, whether it's Azure or AWS or GCP, that was complex enough. Now we are talking about multi-cloud and hybrid, and if you look at Johnson Controls, we have Azure, we have AWS, we have GCP, we have Alibaba, and we also support on-prem. So the architecture has become very, very complex, and the complexity has grown so much that we are now thinking about whether we should be cloud native or cloud agnostic.
So I think, I mean, sometimes it's hard to even explain the complexity, because people think, "Oh, when you go to cloud, everything is simplified." Cloud does give you a lot of simplicity, but it also brings a lot more complexity along with it. The next one is pretty important: you know, generally when you look at cloud services, you have plenty of services that are offered within a cloud, 100, 150, 200 services. Even within those companies, take AWS, an individual resource might not know about all the services we see. That's a big challenge for us as a customer, to really understand each of the services that is provided in these clouds, no matter which one it is. And the third one is pretty big, at least at the CTO, the CIO, and the senior leadership level: cost. Cost is a major factor because cloud, you know, will eat you up if you cannot manage it. You need a good cloud governance process, because every minute you are in it, it's burning cash. So I think if you ask me, these are the three major things that I am facing day to day, and that's where I use my partners, which I'll touch on down the line. >> Perfect, we'll talk about that. So Prem, I imagine that these problems are not unique to Johnson Controls or JCI, as you may hear us refer to it. Talk to me, Prem, about some of the other challenges that you're seeing within the customer landscape. >> So, yeah, I agree, Lisa, these are not very specific to JCI, but there are specific issues in JCI, right? The way we think about it, there is a common set of issues when people go to the cloud, and there are very specific and unique issues for each business, right? So JCI, and we will talk about this in the episode as we move forward, I think Suresh and his team have done some phenomenal work around how to manage this complexity.
But there are customers who have a less complex cloud, which is, they don't go to Alibaba, they don't have a footprint in all three clouds. So their multi-cloud footprint could be a bit more manageable, but they still struggle with a lot of the same problems: around cost, around security, around talent. Talent is a big thing, right? And in Suresh's case I think it's slightly more exacerbated, because every cloud provider, be it AWS, GCP, or Azure, brings in hundreds of services, and there is nobody, including many of us, right? We learn every day nowadays, right? It's not that there is one service integrator who knows it all; technically people can claim that as part of sales, but in reality all of us are continuing to learn in this landscape. And if you put all of this together with multiple clouds, the complexity just starts to grow exponentially. And that's exactly what JCI is experiencing, what Suresh's team has been experiencing, and we've been working on together. But the common problems are around security, talent, and cost management, right? Those are my three things. And one last thing that I would love to say before we move away from this question: if you think about cloud operations as a concept, it has been evolving over the last few years, and I have touched upon this in the previous episode as well, Lisa, right? If you take architectures, we've gone into microservices, we've gone into all these serverless architectures, all the fancy things that we want. That helps us go to market faster, be more competitive as a business. But that's not simplified stuff, right? That's complicated stuff. It's a lot more distributed. Second, we've advanced and created more modern infrastructure, because all of what we are talking about is platform as a service, services on the cloud that we are consuming, right? In the same way, with development we've moved into a DevOps model.
We kind of click a button, put some code in a repository, and the code starts to run in production within a minute; everything else is automated. But then when we get to operations, we are still stuck in a very old way of looking at cloud as infrastructure, right? So you've got an infra team, you've got an app team, you've got an incident management team, you've got a SOC, a NOC, everything. But Suresh can talk about this more, because they are making significant strides in thinking about this as a single workload, and how do I apply engineering to go manage this? Because a lot of it is codified, right? So, automation. Anyway, that's kind of where the complexity is and how we are thinking, including JCI as a partner, about taming that complexity as we move forward. >> Suresh, let's talk about that, taming the complexity. You guys have both done a great job of articulating the ostensible challenges that are there with cloud, especially the multi-cloud environments that you're living in. But Suresh, talk about the partnership with Hitachi Vantara. How is it helping to dial down some of those inherent complexities? >> I mean, I always, you know, I think I've said this to Prem multiple times: I treat my partners as my internal, you know, employees. I look at Prem as my coworker or my peer. The reason for that is I want Prem to have the same vested interest as a partner in my success, or JCI's success, and vice versa, isn't it? I think that's how we operate and that's how we have been operating. And I would like to thank Prem and Hitachi Vantara for that; it's really been an amazing partnership. And as he was saying, we have taken a completely holistic approach to how we want to really be in the market and play in the market for our customers. So if you look at my jacket, it talks about the OpenBlue platform. This is what JCI is building: this OpenBlue digital platform.
And within that, my team, along with Prem's, or Hitachi's, has built what we call Polaris. It's a technical platform where our apps can run, and this platform is automated end-to-end from a platform engineering standpoint. We stood up a platform engineering organization, a reliability engineering organization, as well as a support organization where Hitachi played a role. As I said previously, you know, for me to scale, I'm not going to really have the talent and the knowledge of every function that I'm looking at. And Hitachi not only brought the talent, but they also brought what he was talking about, Harc. You know, they have set up a lot and now we can leverage it. And they also came up with some really interesting concepts. I went and met them in India. They came up with this concept called IPL. Okay, what is that? They really challenged all their employees working for JCI to come up with innovative ideas to solve problems proactively, which is self-healing. You know, how do you do that? So I think partners, you know, if they become really vested in your interests, they can do wonders for you. And I think in this case Hitachi is really working very well for us, in many aspects. And I'm leveraging them... We started with support; now I'm leveraging them in the automation, the platform engineering, as well as the reliability engineering, and then even in the engineering spaces. Like that, they are my end-to-end partner right now. >> So you're really taking that holistic approach that you talked about, and it sounds like it's a very collaborative, two-way-street partnership. Prem, I want to go back to, Suresh mentioned Harc. Talk a little bit about what Harc is and then how partners fit into Hitachi's Harc strategy. >> Great, so let me spend a few seconds on what Harc is. Lisa, again, I know we've been using the term. Harc stands for Hitachi Application Reliability Centers.
Now, the reason we thought about Harc was, like I said in the beginning of this segment, there is an evolution from an architecture standpoint to be more modern: microservices, serverless, reactive architectures, so on and so forth. There is an evolution in your development methodology, from Waterfall to agile, to DevOps, to lean agile, to whatever program you like, right? Extreme programming, so on and so forth. There is an evolution in the space of infrastructure, from a point where you were buying these huge, humongous servers and putting them in your data center, to a point where people don't even see servers anymore, right? You buy it by a click of a button, you don't know the size of it. All you know is, it's (indistinct) whatever that name means. Let's go provision it on the fly, get going, get your work done, right? But while all of this has advanced, when you think about operations, people have been solving the problem the way they've been solving it 20 years back, right? That's the issue. And Harc was conceived exactly to fix that particular problem: to think about a modern way of operating a modern workload, right? That's exactly what Harc is. So it brings together the finest engineering talent, and the teams are trained in specific ways of working. We've invested in and implemented some of the IP, and we work with the best-of-breed partner ecosystem, and I'll talk about that in a minute. And we've got these facilities in Dallas, and I am talking from my office in Dallas, which is a Harc facility in the US from where we deliver for our customers. And then back in Hyderabad we've got one more that we opened, and these are facilities from where we deliver Harc services for our customers as well, right? And then we are expanding into Japan and Portugal as we move into '23. That's kind of the plan that we are thinking through. That's what Harc is, Lisa, right? That's our solution to this cloud complexity problem. Right?
>> Got it, and it sounds like it's going quite global, which is fantastic. So Suresh, I want to have you expand a bit on the partnership, the partner ecosystem, and the role that it plays. You talked about it a little bit, but what role does the partner ecosystem play in really helping JCI to dial down some of those challenges and the inherent complexities that we talked about? >> Yeah, sure. I think partners play a major role, and JCI is very, very good at it. I mean, I joined JCI 18 months ago; JCI leverages partners pretty extensively. As I said, I leverage Hitachi for my, you know, A group and the (indistinct) space and the cloud operations space, and they're my primary partner. But at the same time, we leverage many other partners, well, you know, Accenture, SCL, and even on the tooling side we use Datadog and (indistinct). All these guys are major partners of ours, because the way we like to pick partners is based on our vision and where we want to go, and we pick the right partner who's going to really, you know, make you successful by investing their resources in you. And what I mean by that is, when you have a partner, the partner knows exactly what kind of skillset is needed for this customer for them to really be successful. As I said earlier, we cannot really get all the skillsets that we need; we rely on the partners, and partners bring the right skillset and they can scale. I can tell Prem tomorrow, "Hey, I need two parts by next week," and I guarantee he's going to bring two parts to me. So they let you scale, they let you move fast. And I'm a big believer, in today's day and age, in getting things done fast and being more agile. I'm not worried about failure, but for me, moving fast is very, very important. And partners really do a very good job bringing that. But I think they also really make you think, isn't it?
Because one thing I like about partners: they make you innovate, whether they know it or not, but they do, because, you know, they will come and ask you questions about, "Hey, tell me why you are doing this. Can I review your architecture?" You know, and then they will really say, "I don't think this is going to work," because they work with so many different clients, not just JCI. They bring all that expertise, and that's what I look for from them, you know, not just, "Do a T&M job for me; I ask you to do this, go..." They just bring more than that. That's how I pick my partners. And that's how, you know, Hitachi Vantara is definitely a good partner in that sense, because they bring a lot more innovation to the table, and I appreciate that. >> It sounds like a flywheel of innovation. >> Yeah. >> I love that. Last question for both of you, since we're almost out of time here. Prem, I want to go back to you. So, I'm a partner, and I'm planning on redefining CloudOps at my company. What are the two things you want me to remember from Hitachi Vantara's perspective? >> So before I get to that question, Lisa, the partners that we work with are slightly different from the partners that, again, there are some similar partners, there are some different partners, right? For example, we pick and choose, especially in the Harc space, partners that are more future-focused, right? We don't care if they are huge companies or small companies. We go after companies that are future-focused, that are really, really nimble and can change for our customers' needs, because it's not about our need, right? When I pick partners for Harc, my ultimate endeavor is to ensure, in this case because we've got (indistinct) JCI on, that we are able to operate (indistinct) with a level of satisfaction above and beyond what they're expecting from us. And whatever I don't have, I need to get from my partners, so that I bring this solution to Suresh.
As opposed to bringing a whole lot of people and making them stand in front of Suresh. So that's how I think about partners. What do I want them to do? We've always done this: we do workshops with our partners. We just don't go by tools. When we say we are partnering with X, Y, Z, we do workshops with them and we say, "This is how we are thinking." Either you build it into your roadmap, which helps us continue to leverage you, or, where there are gaps, we make minimal investments to fix them; we're building some utilities for us to deliver the best service to our customers. And our intention is not to build a product to compete with our partners. Our intention is just to fill the white space until they build it into their product suite, and then we can leverage it for our customers. So, always think about end customers and how we can make it easy for them. Because for all the tool vendors out there seeing this and wanting to partner with Hitachi, the biggest thing is tool sprawl; especially on the cloud, it is very real. For every problem on the cloud, I have a billion tools being thrown at me, and ask Suresh, if I'm putting my installation together, it's not easy at all. It's so confusing. >> Yeah. >> So that's what we want. We want people to simplify that landscape for our end customers, and we are looking at partners that are thinking through the simplification, not just making money. >> That makes perfect sense. There really is a very strong symbiosis, it sounds like, in the partner ecosystem, and a lot of enablement that goes on back and forth as well, which, to your point, is really all about the end customers and what they're expecting. Suresh, last question for you, and it's the same one: if I'm a partner, what are the things you want me to consider as I'm planning to redefine CloudOps at my company? >> I'll keep it simple.
In my view, I mean, we've touched upon it in multiple facets in this interview, but three things. First and foremost, reliability. You know, in today's day and age, my product has to be reliable and available, and, you know, I have to make sure that the customers are happy with what they're really dealing with, number one. Number two, my product has to be secure. Security is super, super important, okay? And number three, I need to really make sure my customers are getting the value, so I keep my cost low. These three are what I would focus on, and what I expect from my partners. >> Great advice, guys. Thank you so much for talking through this with me and really showing the audience how strong the partnership is between Hitachi Vantara and JCI, and what you're doing together. We'll have to talk to you again to see where things go, but we really appreciate your insights and your perspectives. Thank you. >> Thank you, Lisa. >> Thanks Lisa, thanks for having us. >> My pleasure. For my guests, I'm Lisa Martin. Thank you so much for watching. (soothing music)
Wayne Duso, AWS & Iyad Tarazi, Federated Wireless | MWC Barcelona 2023
(light music) >> Announcer: TheCUBE's live coverage is made possible by funding from Dell Technologies. Creating technologies that drive human progress. (upbeat music) >> Welcome back to the Fira in Barcelona. Dave Vellante with Dave Nicholson. Lisa Martin's been here all week. John Furrier is in our Palo Alto studio, banging out all the news. Don't forget to check out siliconangle.com and thecube.net. This is day four, our last segment, winding down. MWC23, super excited to be here. Wayne Duso, friend of theCUBE and VP of engineering for products at AWS, is here with Iyad Tarazi, the CEO of Federated Wireless. Gents, welcome. >> Good to be here. >> Nice to see you.
The center of gravity in the world for spectrum is literally Arlington. You have the DOD spectrum people, you have spectrum people from National Science Foundation, DARPA, and then you have commercial sector, and you have the FCC just an Uber ride away. So we went and found the scientists that are doing all this work, four or five of them, Virginia Tech has an office there too, for spectrum research for the Navy. Come together, let's have a party and make a new model. >> So I asked this, I'm super excited to have you on theCUBE. I sat through the keynotes on Monday. I saw Satya Nadella was in there, Thomas Kurian there was no AWS. I'm like, where's AWS? AWS is everywhere. I mean, you guys are all over the show. I'm like, "Hey, where's the number one cloud?" So you guys have made a bunch of announcements at the show. Everybody's talking about the cloud. What's going on for you guys? >> So we are everywhere, and you know, we've been coming to this show for years. But this is really a year that we can demonstrate that what we've been doing for the IT enterprise, IT people for 17 years, we're now bringing for telcos, you know? For years, we've been, 17 years to be exact, we've been bringing the cloud value proposition, whether it's, you know, cost efficiencies or innovation or scale, reliability, security and so on, to these enterprise IT folks. Now we're doing the same thing for telcos. And so whether they want to build in region, in a local zone, metro area, on-prem with an outpost, at the edge with Snow Family, or with our IoT devices. And no matter where they want to start, if they start in the cloud and they want to move to the edge, or they start in the edge and they want to bring the cloud value proposition, like, we're demonstrating all of that is happening this week. And, and very much so, we're also demonstrating that we're bringing the same type of ecosystem that we've built for enterprise IT. 
We're bringing that type of ecosystem to the telco companies, with the CSPs, with the ISV vendors. We've seen plenty of announcements this week, you know, so on and so forth. >> So what's different? Is it really just that the names are different? Is it really that simple, that you're basically taking the cloud model into telco and saying, "Hey, why do all this undifferentiated heavy lifting when we can do it for you? Don't worry about all the plumbing"? Is it really that straightforward? >> Well, simple is probably not what I'd say, but we can make it straightforward. >> Conceptually. >> Conceptually, yes. Conceptually it is the same. Because, firstly, we'll just take 5G for a moment, right? The 5G folks, if you look at the architecture for 5G, it was designed to run on a cloud architecture. It was designed to be a set of services that you could partition and run in different places, whether it's in the region or at the edge. So in many ways it is sort of that simple. And let me give you an example. Two things: the first one is we announced Integrated Private Wireless on AWS, which allows enterprise customers to come to a portal and look at the industry solutions. They're not worried about their network, they're worried about solving a problem, right? And they can come to that portal, they can find a solution, they can find a service provider that will help them with that solution. And what they end up with is a fully validated offering that AWS telco SAs have actually put through its paces to make sure this is a real thing. And whether they get it from a telco, or, quite frankly in that space, it's SIs such as Federated that actually help our customers deploy those in private environments. So that's an example.
And then, added to that, we had a second announcement, which was AWS Telco Network Builder, which allows telcos to plan, deploy, and operate telco network capabilities at scale on the cloud. Think about it this way- >> As a managed service? >> As a managed service. So think about it this way: in the same way that enterprise IT has been deploying, you know, infrastructure as code for years, Telco Network Builder allows the telco folks to deploy telco networks and their capabilities as code. So it's not simple, but it is pretty straightforward, and we're making it more straightforward as we go. >> Jump in, Dave, by the way. He can geek out if you want. >> Yeah, no, no, no, that's good. But actually, I'm going to ask an AWS question, but I'm going to ask Iyad the AWS question. When I hear the word cloud from Wayne, cloud, AWS, typically in people's minds that denotes off-premises. Out there, AWS data center. In the telecom space, yes, of course, in the private 5G space, we're talking about a little bit of a different dynamic than in the public 5G space in terms of the physical infrastructure. But regardless, at the edge there are things that need to be physically at the edge. Do you feel that AWS is sufficiently, have they removed the H word, hybrid, from the list of bad words you're not allowed to say? 'Cause there was a point in time- >> Yeah, of course. >> Where AWS felt that their growth- >> They'll even say multicloud today, (indistinct). >> No, no, no, no, no. But there was a period of time where, rightfully so, AWS felt that the growth trajectory would be supported solely by net new things off-premises. Now though, in this space, it seems like that hybrid model is critical. Do you see AWS being open to the hybrid nature of things? >> Yeah, they are, absolutely. I mean, just to explain: we're a services company and a solutions company. So we put together solutions at the edge: a smart campus, a smart agriculture deployment.
One of our biggest deployments is a million-square-foot warehouse automation project with the Marine Corps. >> That's bigger than the Fira. >> Oh yeah, it's bigger, definitely bigger than, you know, a small section of here. It's actually three massive warehouses. So yes, that is the edge. What the cloud is about is that a massive amount of efficiency has happened by concentrating applications in data centers. And that is programmability, that is APIs, that is solutions, that is applications that can run on it, where people know how to do it. And so all that efficiency now is being ported into a box called the edge. What AWS is doing for us is bringing all the business and technical solutions they had into the edge. Some of the data may go back and forth, but that's actually a smaller piece of the value for us. By being able to bring an AWS package to the edge, we're bringing IoT applications, we're bringing high-speed cameras, we're able to integrate with the 5G public network, we're able to bring in identity and devices, we're able to bring in solutions for students, embedded laptops. All of these things you can do much, much faster and cheaper if you are able to tap into the 4,000, 5,000 partners and all the applications and all the development and all the models that the AWS team did. By being able to bring that efficiency to the edge, why reinvent that? And then along with that, there are partners that help do integration. There is development done to make it hardened, to make the data more secure, more isolated. All of these things will contribute to an edge that truly is a carbon copy of the data center. >> So Wayne, it's AWS regardless of where the compute, networking, and storage physically live; it's AWS. Do you think that the term cloud will sort of drift away from usage? Because, look, it's all IT; in this case it's AWS and Federated's IT working together. It's sort of an obscure question about cloud, because cloud is so integrated.
>> You got this thing about cloud, it's just IT. >> I've got a thing about cloud too, because- >> You and Larry Ellison. >> Because it's no, no, no, I'm, yeah, well actually there's- >> There's a lot of IT that's not cloud, just say that, okay. >> Now, a lot of IT that isn't cloud, but I would say- >> But I'll (indistinct) cloud is an IT tool, and you see AWS obviously with the Snow fill-in-the-blank line of products and Outpost type stuff. Fair to say that it doesn't matter where it is, it could be AWS if it's on the edge, right? >> Well, you know, everybody wants to define the cloud as what it may have been when it started. But if you look at what it was when it started and what it is today, it is different. But the ability to bring the experience, the AWS experience, the services, the operational experience and all the things that Iyad had been talking about, from the region all the way to, you know, the IoT device, if you would, that entire continuum. And it doesn't matter where you start. Like if you start in region and you need to bring your value to other places because your customers are asking you to do so, we're enabling that experience where you need to bring it. If you started at the edge, but you want to build cloud value, you know, whether it's again, cost efficiency, scalability, AI, ML or analytics into those capabilities, you can start at the edge with the same APIs, with the same service, the same capabilities, and you can build that value in right from the get go. You don't build this bifurcation or many separations and try to figure out how do I glue them together? There is no gluing together. So if you think of cloud as being elastic, scalable, flexible, where you can drive innovation, it's the same exact model on the continuum. And you can start at either end, it's up to you as a customer. >> And I think the key to me is the ecosystem.
I mean, if you can do for this industry what you've done for the enterprise technology business from an ecosystem standpoint, you know, everybody talks about the flywheel, but that gives you like the massive flywheel. I don't know what the ratio is, but it used to be that for every dollar spent on a VMware license, $15 was spent in the ecosystem. I've never heard similar ratios in the AWS ecosystem, but I go to re:Invent and I'm like, there's some dollars being- >> That's a massive ecosystem. >> (indistinct). >> And another thing I'll add is Jose Maria Alvarez, who's the chairman of Telefonica, said there's three pillars of the future-ready telco: low latency, programmable networks, and he said cloud and edge. So they're recognizing cloud and edge. You know, low latency means you've got to put the compute and the data at the edge, and the programmable infrastructure was invented by Amazon. So what's the strategy around the telco edge? >> So, you know, those are all great points. And in fact, the programmability of the network was a big theme in the show. It was a huge theme. And if you think about the cloud, what is the cloud? It's a set of APIs against a set of resources that you use in whatever way is appropriate for what you're trying to accomplish. The network, the telco network, becomes a resource. And it could be described as a resource. I talked about, you know, network as code, right? It's the same as infrastructure as code, it's telco infrastructure as code. And that code, that infrastructure, is programmable. So this is really, really important. And how you build the ecosystem around that is no different than how we built the ecosystem around traditional IT abstractions. In fact, we feel that really the ecosystem is the killer app for 5G. You know, the killer app for 4G was data of sorts, right? We started using data beyond simple SMS messages. So what's the killer app for 5G?
It's building this ecosystem, which includes the CSPs, the ISVs, all of the partners that we bring to the table that can drive greater value. It's not just about cost efficiency. You know, you can't save your way to success, right? At some point you need to generate greater value for your customers, which gives you better business outcomes, 'cause you can monetize them, right? The ecosystem is going to allow everybody to monetize 5G. >> 5G is like the dot connector of all that. And then developers come in on top and create new capabilities >> And how different is that than, you know, the original smartphones? >> Yeah, you're right. So what do you guys think of ChatGPT? (indistinct) to Amazon? Amazon turned the data center into an API. It's like we're visioning this world, and I want to ask that technologist, like, where it's turning resources into human language interfaces. You know, when you see that, you play with ChatGPT at all, or I know you guys got your own. >> So I won't speak directly to ChatGPT. >> No, don't speak from- >> But if you think about- >> Generative AI. >> Yeah generative AI is important. And, and we are, and we have been for years, in this space. Now you've been talking to AWS for a long time, and we often don't talk about things we don't have yet. We don't talk about things that we haven't brought to market yet. And so, you know, you'll often hear us talk about something, you know, a year from now where others may have been talking about it three years earlier, right? We will be talking about this space when we feel it's appropriate for our customers and our partners. >> You have talked about it a little bit, Adam Selipsky went on an interview with myself and John Furrier in October said you watch, you know, large language models are going to be enormous and I know you guys have some stuff that you're working on there. >> It's, I'll say it's exciting. >> Yeah, I mean- >> Well proof point is, Siri is an idiot compared to Alexa. 
(group laughs) So I trust one entity to come up with something smart. >> I have conversations with Alexa and Siri, and I won't judge either one. >> You don't need to, you could be objective on that one. I definitely have a preference. >> Are the problems you guys are solving in this space, you know, what's unique about 'em? What are they? Can we, sort of, take some examples here (indistinct). >> Sure, the main theme is that the enterprise is taking control. They want to have their own networks. They want to focus on specific applications, and they want to build them with a skeleton crew. The one IT person in a warehouse wants to be able to do it all. So what's unique about them is that there's now a lot of automation and robotics, especially in warehousing environments and agriculture. There simply aren't enough people in these industries, and that requires precision. And so you need all that integration to make it work. People also want to build these networks as they want to control them. They want to figure out how do we actually pick this team and migrate it. Maybe just do the front of the house first. Maybe it's a security team that monitors the building; maybe later on upgrade the things that are used to open doors and close doors and collect maintenance data. So that ability to pick what you want to do is really important. And then you're also seeing a lot of public-private network interconnection. That's probably the undercurrent of this show that hasn't been talked about. When people say private networks, they're also talking about something called neutral host, which means I'm going to build my own network, but I want it to work, my Verizon (indistinct) needs to work. There's been so much progress, but it's not done yet. So much progress about this bring-my-own-network concept, and then making sure that I'm now interoperating with the public network, but it's my domain. I can create air gaps, I can create whatever security and policy around it.
That is probably the power of 5G. Now take all of these tiny networks, big networks, put them all in one ecosystem. Call it the Amazon marketplace, call it the Amazon ecosystem, that's 5G. It's going to be a tremendous future. >> What does the future look like? We just determined we're going to be orchestrating the network through human language, okay? (group laughs) But seriously, what's your vision for the future here? >> You know, both connectivity and cloud are on a continuum. They've been on a continuum forever. They're going to continue to be on a continuum. That being said, those continuums are coming together, right? They're coming together to bring greater value to a greater set of customers, and frankly all of us. So, you know, the future is now. Like, you know, this conference is the future, and if you look at what's going on, it's about the acceleration of the future, right? What we announced this week is really the acceleration of listening to customers for the last handful of years. And we're going to continue to do that. We're going to continue to bring greater value in the form of solutions. And that's what I want to pick up on from the prior question. It's not about the network, it's not about the cloud, it's about the solutions that we can provide the customers where they are, right? And whether they're on their mobile phone or on their factory floor, you know, they're looking to accelerate their business. They're looking to accelerate their value. They're looking to create greater safety for their employees. That's what we can do with these technologies. So in fact, when we came out with, you know, our announcement for integrated private wireless, right? It really was about industry solutions. It really isn't about, you know, the cloud or the network. It's about how you can leverage those technologies, that continuum, to deliver you value.
>> You know, it's interesting you say that, 'cause again, when we were interviewing Adam Selipsky, everybody, you know, all the journalists and analysts wanted to know, how's Adam Selipsky going to be different from Andy Jassy, what's he going to do to change Amazon? And he said, listen, the real answer is Amazon has changed. If Andy Jassy were here, we'd be doing, you know, pretty much the same things. Your point about 17 years ago, the cloud was S3, right, and EC2. Now it's got to evolve to be solutions. 'Cause if all you're selling is the bespoke services, then you know, the future is not as bright as the past has been. And so I think it's key to look for what are those outcomes or solutions that customers require and how you're going to meet 'em. And there's a lot of challenges. >> You continue to build value on the value that you've brought, and you don't lose sight of why that value is important. You carry that value proposition up the stack, but what you're delivering, as you said, becomes maybe bigger or different. >> And you are getting more solution oriented. I mean, you're not hardcore solutions yet, but we're seeing more and more of that. And that seems to be a trend. We've even seen it in the database world, making things easier, connecting things. Not really an abstraction layer, which is sort of antithetical to your philosophy, but it creates a similar outcome in terms of simplicity. Yeah, you're smiling 'cause you guys always have a different angle, you know? >> Yeah, we've had this conversation. >> That's right. Jassy used to say it's okay to be misunderstood. >> That's right. For a long time. >> Yeah, right, guys, thanks so much for coming to theCUBE. I'm so glad we could make this happen. >> It's always good. Thank you. >> Thank you so much. >> All right, Dave Nicholson, for Lisa Martin, Dave Vellante, John Furrier in the Palo Alto studio. We're here at the Fira, wrapping up MWC23.
Keep it right there, thanks for watching. (upbeat music)
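The "network as code" idea raised in this segment — the telco network described and consumed as a programmable resource behind a set of APIs — can be sketched in a few lines. This is a purely hypothetical illustration: the `NetworkSlice` class and `request_slice` function are invented for the sketch, not a real AWS or telco API.

```python
# Hypothetical sketch of "telco infrastructure as code": the network
# is declared as data, then provisioned through an API-style call.
from dataclasses import dataclass

@dataclass
class NetworkSlice:
    name: str
    max_latency_ms: int   # latency budget the slice must meet
    bandwidth_mbps: int   # throughput the slice guarantees

def request_slice(slice_def: NetworkSlice) -> dict:
    """Validate the declared slice and return the 'resource' a
    programmable network might hand back to the caller."""
    if slice_def.max_latency_ms <= 0 or slice_def.bandwidth_mbps <= 0:
        raise ValueError("latency and bandwidth must be positive")
    return {"name": slice_def.name, "status": "provisioned",
            "latency_ms": slice_def.max_latency_ms}

robotics = NetworkSlice("warehouse-robotics", max_latency_ms=10, bandwidth_mbps=500)
print(request_slice(robotics))
# → {'name': 'warehouse-robotics', 'status': 'provisioned', 'latency_ms': 10}
```

Declaring the slice as data rather than configuring boxes by hand is the point: the same description can be validated, versioned, and deployed programmatically, which is what makes an ecosystem of partners possible on top of it.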
Tammy Whyman, Telco & Kurt Schaubach, Federated Wireless | MWC Barcelona 2023
>> Announcer: theCUBE's live coverage is made possible by funding from Dell Technologies, creating technologies that drive human progress. (upbeat music) (background indistinct chatter) >> Good morning from Barcelona, everyone. It's theCUBE live at MWC23, day three of our four days of coverage. Lisa Martin here with Dave Nicholson. Dave, we have had some great conversations. Can't believe it's day three already. Anything sticking out at you from a thematic perspective that really caught your eye the last couple days? >> I guess I go back to kind of our experience with sort of the generalized world of information technology and a lot of the parallels between what's been happening in other parts of the economy and what's happening in the telecom space now. So it helps me understand some of the complexity when I tie it back to things that I'm aware of. >> A lot of complexity, but a big ecosystem that's growing. We're going to be talking more about the ecosystem next and what they're doing to really enable customers, CSPs, to deliver services. We've got two guests here: Tammy Wyman joins us, the Global Head of Partners, Telco at AWS, and Kurt Schaubach, CTO of Federated Wireless. Welcome to theCUBE, guys. >> Thank you. >> Thank you. >> Great to have you here, day three. Lots of announcements, lots of news at MWC. But Tammy, there's been a lot of announcements from partners with AWS this week. Talk to us a little bit more about, first of all, the partner program, and then let's unpack some of those announcements. One of them is with Federated Wireless. >> Sure. Yeah. So AWS created the partner program 10 years ago, when they really started to understand the value of bringing together the ecosystem. So, I think we're starting to see how this is becoming a reality. So now, 100,000 partners later, 150 countries, 70% of those partners are outside of the US. So it's truly global in nature, with partners being ISVs and GSIs.
And then in the telco space, we're actually looking at how we help CSPs become partners of AWS and bring in new revenue streams. So that's how we started having the discussions around Federated Wireless. >> Talk a little bit about Federated Wireless, Kurt. Give the audience an overview of what you guys are doing, and then maybe give us some commentary on the partnership. >> Sure. So we're a shared spectrum and private wireless company, and we actually started working with AWS about five years ago to take this model that we developed to perfect the use of shared spectrum to enable enterprise communications and bring the power of 5G to the enterprise, to bring it to all of the AWS customers and partners. So through that, now we're one of the partner network participants. We're working very closely with the AWS team on bringing this really unique form of connectivity to all sorts of different enterprise use cases, from solving manufacturing and warehouse logistics issues to providing connectivity to mines, enhancing the experience for students on a university campus. So it's a really exciting partnership. Everything that we deliver on an end-to-end basis, from design to deployment to bringing the infrastructure on-prem, all runs on AWS. (background indistinct chatter) >> So a lot of the conversations that we've had sort of start with this concept of the radio access network and, frankly, in at least the public domain, cellular sites. And so all of a sudden it's sort of grounded in this physical reality of these towers with these boxes of equipment on the tower, at the base of the tower, connected to other things. How do AWS and Federated Wireless, where do you fit in that model, in terms of equipment at the base of a tower versus having that be off-premises in some way or another? Kind of give us more of a flavor for the kind of physical reality of what you guys are doing. >> Yeah, I'll start. >> Yeah, Tammy.
>> I'll hand it over to the real expert, but from an AWS perspective, what we're finding is really, I don't know if it's even a convergence or kind of a de-layering of the network. So customers don't care if they're on Wi-Fi, if they're on public spectrum, if they're on private spectrum; what they want are networks that are able to talk to each other and to provide the right connectivity at the right time and with the right pricing model. So moving to the cloud allows us that flexibility to be able to offer the quality of service and to be able to bring in a larger ecosystem of partners, as the networks are almost disaggregated. >> So does the AWS strategy focus solely on things that are happening in, say, AWS locations or AWS data centers? Or is AWS also getting into the arena of what I would refer to as an Outpost, in AWS parlance, where physical equipment that's running a stack might actually also be located physically where the communications towers are? What does that mix look like in terms of your strategy? >> Yeah, certainly as customers are looking at hybrid cloud environments, we started looking at how we can use Outpost as part of the network. So we've got some great use cases where we're taking Outpost into the edge of operators' networks, and really starting to have radio in the cloud. We launched with Dish earlier, and now we're starting to see some other announcements that we've made with Nokia about having RAN in the cloud as well. So using Outpost, that's one of our key strategies. It creates, again, a lot of flexibility for the hybrid cloud environment and brings a lot of that compute power to the edge of the network. >> Let's talk about some of the announcements. Tammy, I was reading that AWS is expanding its telecom and 5G private network support. You've also unveiled the AWS Telco Network Builder service. Talk about that: who's it targeted at? What does an operator do with AWS on this?
Or maybe you guys can talk about that together. >> Sure. Would you like to start, or I can talk? All right. So the Network Builder, I would say the persona it's aimed at would be the network engineer within the CSPs. And there was a bit of difficulty when you want to design a telco network on AWS versus the way that the network engineers would traditionally design. So I'm going to call them protocols, but you know, I can imagine saying, "I really want to build this on the cloud, but they're making me move away from my typical way of designing a network and move it into a cloud world." So what we did was really kind of create this template, saying, "You can build the network as you always do, and we are going to put the magic behind it to translate it into a cloud world." So really facilitating and taking some of the friction out of the building of the network. >> What was the catalyst for that? I think it's Dish and Swisscom you've been working with, but talk about the catalyst for doing that and how it's facilitating change, because part of that's change management, with how network engineers actually function and how they work. >> Absolutely, yeah. We listen to customers, and we're trying to understand what are those friction points, what would make it easier? And that was one that we heard consistently. So we wanted to apply a bit of our experience and the way that we're able to use data, translate that using code, so that you're building a network in your traditional way, and then it kind of spits out the formula to build the network in the cloud. >> Got it. Kurt, talk about, yeah, I saw that there was just an announcement that Federated Wireless made with JBG Smith. Talk to us more about that. What will Federated help them to create, and how are you all working together? >> Sure. So JBG Smith is the exclusive redeveloper of an area just on the other side of the Potomac from Washington, DC called National Landing.
And it's about half the size of Manhattan. So it's an enormous area that's getting redeveloped. It's the home of Amazon's new HQ2 location. And JBG Smith is investing, in addition to the commercial real estate, in digital placemaking: a place where people live, work, play, and connect. And part of that is bringing an enhanced level of connectivity to people's homes, their residences, the enterprise, and private wireless is a key component of that. So when we talk about private wireless, what we're doing with AWS is giving an enterprise the freedom to operate a network independent of a mobile network operator. So that means everything from the RAN to the core to the applications that run on this network is sort of within the domain of the enterprise, merging 5G and edge compute and driving new business outcomes. That's really the most important thing. We can talk a lot about 5G here at MWC, but what the enterprise really cares about are new business outcomes: how do they become more efficient? And that's really what private wireless helps enable.
A lot of private wireless is also driving business outcomes with enterprises. So work that we're doing, like for example, with the Cal Poly out in California, for example is to enable a new 5G innovation platform. So this is driving all sorts of new 5G research and innovation with the university, new applications around IoT. And they need the ability to do that indoors, outdoors in a way that's sort of free from the domain of connectivity to a a mobile network operator and having the freedom and flexibility to do that, merging that with edge compute. Those are some really important components. We're also doing a lot of work in things like warehouses. Think of a warehouse as being this very complex RF environment. You want to bring robotics you want to bring better inventory management and Wi-Fi just isn't an effective means of providing really reliable indoor coverage. You need more secure networks you need lower latency and the ability to move more data around again, merging new applications with edge compute and that's where private wireless really shines. >> So this is where we do the shout out to my daughter Rachel Nicholson, who is currently a junior at Cal Poly San Luis Obispo. Rachel, get plenty of sleep and get your homework done. >> Lisa: She better be studying. >> I held up my mobile device and I should have said full disclosure, we have spotty cellular service where I live. So I think of this as a Wi-Fi connected device, in fact. So maybe I confuse the issue at least. >> Tammy, talk to us a little bit about the architecture from an AWS perspective that is enabling JBG Smith, Cal Poly is this, we're talking an edge architecture, but give us a little bit more of an understanding of what that actually technically looks like. >> Alright, I would love to pass this one over to Kurt. >> Okay. >> So I'm sorry, just in terms of? >> Wanting to understand the AWS architecture this is an edge based architecture hosted on what? On AWS snow, application storage. 
Give us a picture of what that looks like. >> Right. So I mean, the beauty of this is the simplicity in it. So we're able to bring an AWS Snowball or Snowcone edge appliance that runs a packet core. We're able to run workloads on that locally, so some applications, but we also obviously have the ability to bring that out to the public cloud. So depending on what the user application is, we look at anything from the AWS Snow family to Outpost and sort of develop templates or solutions depending on what the customer workloads demand. But the innovation that's happened, especially around the packet core and how we can make that so compact and able to run on such a capable appliance, is really powerful. >> Yeah, and I will add that I think, with the diversification of the different connectivity modules that we have, a lot of them have been developed because of the needs of the telco industry. So the adaptation of Outpost to run at the edge, the Snow family. The telco industry is really leading a lot of the developments that AWS takes to market, in the end, because of the nature of having to have networks that are able to disconnect, ruggedized environments, the latency, the numerous use cases that our telco customers are facing to take to their end customers. So it really allows us to adapt and bring the right network to the right place and the right environment. And even the same customer may have different satellite offices or remote sites that have different connectivity needs. >> Right. So it sounds like that collaboration between AWS and telco is quite strong and symbiotic. >> Tammy: Absolutely. >> So, we talked about a number of the announcements; in our final minutes, I want to talk about integrated private wireless, which was just announced last week. What is that? Who are the users going to be? And I understand T-Mobile is involved there. >> Yes. Yeah.
So this is a program that we launched based on what we're seeing as kind of a convergence of the ecosystem of private wireless. So we wanted to be able to create a program which is offering spectrum that is regulated as well. And we wanted to offer that in more of a multi-country environment. So we launched with T-Mobile, Telefonica, KDDI and a number of others as a start, to be able to bring the regulated spectrum into the picture, as well as other ISVs who are going to be bringing unique use cases, so that when you look at, well, we've got the connectivity into this environment, the mine or the port, what are those use cases? You know, so ISVs who are providing maybe asset tracking or some of the health and safety, and we bring them in as part of the program. And I think an important piece is the actual discoverability of this, because when you think about it, if you're a buyer on the other side, like, where do I start? So we created a portal with this group of ISVs and partners so that one could come together and kind of build, what are my needs? And then they start picking through, and then the ecosystem would be recommended to them. So it's really a way to discover and to also procure a private wireless network much more easily than could be done in the past. >> That's a great service. >> And we're learning a lot from the market. And what we're doing together in our partnership, through a lot of these sort of ruggedized remote location deployments that we're doing, mines, clearing underbrush and forest areas to prevent forest fires. There's a tremendous number of applications for private wireless where sort of the conventional carrier networks just aren't prioritized to serve, and you need a different level of connectivity. Privacy is a big concern as well. Data security. Keeping data on premises, which is another big application that we were able to drive through these edge compute platforms. >> Awesome.
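The discoverability portal Tammy describes — a buyer states their needs and the ecosystem is recommended to them — comes down to a matching step. The sketch below is a hypothetical illustration: the catalog entries, partner names, and scoring rule are invented for the example, not the real program's data or logic.

```python
# Hypothetical ISV catalog: each entry lists the use cases it serves.
CATALOG = [
    {"partner": "AssetTrackCo", "use_cases": {"asset-tracking", "logistics"}},
    {"partner": "SafeSiteInc", "use_cases": {"health-safety", "mining"}},
    {"partner": "PortVisionLtd", "use_cases": {"logistics", "ports"}},
]

def recommend(buyer_needs: set) -> list:
    """Return partners whose use cases overlap the buyer's needs,
    most relevant (largest overlap) first."""
    scored = [(len(entry["use_cases"] & buyer_needs), entry["partner"])
              for entry in CATALOG]
    return [name for score, name in sorted(scored, reverse=True) if score > 0]

# A mine operator looking for safety and asset-tracking solutions:
print(recommend({"health-safety", "mining", "asset-tracking"}))
# → ['SafeSiteInc', 'AssetTrackCo']
```

The point of the portal design is exactly this inversion: instead of a buyer canvassing vendors one by one, the buyer's requirements drive a ranked recommendation out of the partner ecosystem.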
Guys, thank you so much for joining us on the program, talking about what AWS and Federated are doing together and how you're really helping to evolve the telco landscape, and make life ultimately easier for all the Nicholsons to connect over Wi-Fi or private 5G. >> Keep us in touch. And from two Californians, you had us when you said clear the brush, prevent fires. >> You did. Thanks guys, it was a pleasure having you on the program. >> Thank you. >> Thank you. >> Our pleasure. For our guests and for Dave Nicholson, I'm Lisa Martin. You're watching theCUBE Live from our third day of coverage of MWC23. Stick around, Dave and I will be right back with our next guest. (upbeat music)
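The hybrid edge architecture described in this segment — a packet core and latency-sensitive workloads on a local Snow appliance, with everything else going back to the public cloud — boils down to a placement decision per workload. A minimal sketch follows; the function name, the 20 ms cutoff, and the labels are illustrative assumptions, not AWS guidance.

```python
def place_workload(latency_budget_ms: float, data_must_stay_onprem: bool) -> str:
    """Pick where a workload runs in the hybrid edge architecture:
    the on-prem Snow appliance or the public cloud region.
    The 20 ms threshold is an assumption chosen for illustration."""
    if data_must_stay_onprem or latency_budget_ms < 20:
        return "edge-appliance"
    return "cloud-region"

print(place_workload(5, False))    # → edge-appliance (tight latency budget)
print(place_workload(200, False))  # → cloud-region
print(place_workload(200, True))   # → edge-appliance (data residency)
```

Note how the two drivers Kurt and Tammy mention — latency and keeping data on premises — each independently pull a workload to the edge; only workloads with neither constraint go back to the region.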
John Kreisa, Couchbase | MWC Barcelona 2023
>> Narrator: TheCUBE's live coverage is made possible by funding from Dell Technologies, creating technologies that drive human progress. (upbeat music intro) (logo background tingles) >> Hi everybody, welcome back to day three of MWC23, my name is Dave Vellante and we're here live at the Theater of Barcelona, Lisa Martin, David Nicholson, John Furrier's in our studio in Palo Alto. Lot of buzz at the show, the Mobile World Daily Today, front page, Netflix chief hits back in fair share row, Greg Peters, the co-CEO of Netflix, talking about how, "Hey, you guys want to tax us, the telcos want to tax us, well, maybe you should help us pay for some of the content. Your margins are higher, you have a monopoly, you know, we're delivering all this value, you're bundling Netflix in, from a lot of ISPs so hold on, you know, pump the brakes on that tax," so that's the big news. Lockheed Martin, FOSS issues, AI guidelines, says, "AI's not going to take over your job anytime soon." Although I would say, your job's going to be AI-powered for the next five years. We're going to talk about data, we've been talking about the disaggregation of the telco stack, part of that stack is a data layer. John Kreisa is here, the CMO of Couchbase, John, you know, we've talked about all week, the disaggregation of the telco stacks, they got, you know, Silicon and operating systems that are, you know, real time OS, highly reliable, you know, compute infrastructure all the way up through a telemetry stack, et cetera. And that's a proprietary block that's really exploding, it's like the big bang, like we saw in the enterprise 20 years ago and we haven't had much discussion about that data layer, sort of that horizontal data layer, that's the market you play in. You know, Couchbase obviously has a lot of telco customers- >> John: That's right. >> We've seen, you know, Snowflake and others launch telco businesses. What are you seeing when you talk to customers at the show? 
What are they doing with that data layer? >> Yeah, so they're building applications to drive and power unique experiences for their users, but of course, it all starts with where the data is. So they're building mobile applications where they're stretching it out to the edge and you have to move the data to the edge, you have to have that capability to deliver that highly interactive experience to their customers or for their own internal use cases out to that edge, so seeing a lot of that with Couchbase and with our customers in telco. >> So what do the telcos want to do with data? I mean, they've got the telemetry data- >> John: Yeah. >> Now they frequently complain about the over-the-top providers that have used that data, again like Netflix, to identify customer demand for content and they're mopping that up in a big way, you know, certainly Amazon and shopping Google and ads, you know, they're all using that network. But what do the telcos do today and what do they want to do in the future? They're all talking about monetization, how do they monetize that data? >> Yeah, well, by taking that data, there's insight to be had, right? So by usage patterns and what's happening, just as you said, so they can deliver a better experience. It's all about getting that edge, if you will, on their competition and so taking that data, using it in a smart way, gives them that edge to deliver a better service and then grow their business. >> We're seeing a lot of action at the edge and, you know, the edge can be a Home Depot or a Lowe's store, but it also could be the far edge, could be a, you know, an oil drilling, an oil rig, it could be a racetrack, you know, certainly hospitals and certain, you know, situations. So let's think about that edge, where there's maybe not a lot of connectivity, there might be private networks going in, in the future- >> John: That's right. >> Private 5G networks. What's the data flow look like there? 
Do you guys have any customers doing those types of use cases? >> Yeah, absolutely. >> And what are they doing with the data? >> Yeah, absolutely, we've got customers all across, so telco and transportation, all kinds of service delivery, and healthcare, for example. We've got customers who are delivering healthcare out at the edge, where they have a remote location, they're able to deliver healthcare, but as you said, there's not always connectivity, so the applications need to continue to run and then sync back once they have that connectivity. So it's really having the ability to deliver a service reliably and then know that it will be synced back to some central server when they have connectivity- >> So the processing might occur where the data- >> Compute at the edge. >> How do you sync back? What is that technology? >> Yeah, so in Couchbase's case, we have an autonomous sync capability that brings it back to the cloud once they get back to, whether it's a private network that they want to run over, or if they're doing it over a public, you know, Wi-Fi network, once it determines that there's connectivity. And it can be peer-to-peer sync, so different edge apps communicating with each other and then ultimately communicating back to a central server. >> I mean, the other theme here, of course, I call it the software-defined telco, right? But you've got to have, you've got to run on something, got to have hardware. So you see companies like AWS putting Outposts out to the edge. Outposts, you know, doesn't really run a lot of databases, mind you, I mean, it runs RDS, you know, maybe they're going to eventually work with companies like... I mean, you're a partner of AWS- >> John: We are. >> Right? So do you see that kind of cloud infrastructure that's moving to the edge? Do you see that as an opportunity for companies like Couchbase? >> Yeah, we do. 
We see customers wanting to push more and more of that compute out to the edge, and so partnering with AWS gives us that opportunity, and we are certified on Outpost and- >> Oh, you are? >> We are, yeah. >> Okay. >> Absolutely. >> When did that go down? >> That was last year, but probably early last year- >> So I can run Couchbase at the edge, on Outpost? >> Yeah, that's right. >> I mean, you know, Outpost adoption has been slow, we've reported on that, but are you seeing any traction there? Are you seeing any nibbles? >> Starting to see some interest, yeah, absolutely. And again, it has to be for the right use case, but again, for service delivery, things like healthcare and in transportation, you know, they're starting to see where they want to have that compute be very close to where the action happens. >> And you can run on, in the data center, right? >> That's right. >> You can run in the cloud, you know, you see HPE with GreenLake, you see Dell with Apex, that's essentially their Outposts. >> Yeah. >> They're saying, "Hey, we're going to take our whole infrastructure and make it as a service." >> Yeah, yeah. >> Right? And so you can participate in those environments- >> We do. >> And then so you've got now, you know, we call it supercloud, you've got the on-prem, you've got the, you can run in the public cloud, you can run at the edge, and you want that consistent experience- >> That's right. >> You know, from a data layer- >> That's right. >> So is that really the strategy for a data company, is taking, or should be taking, that horizontal layer across all those use cases? >> You do need to think holistically about it, because you need to be able to deliver, as a, you know, as a provider, wherever the customer wants to be able to consume that application. So you do have to think about any of the public clouds or private networks and all the way to the edge. >> What's different, John, about the telco business versus the traditional enterprise? 
>> Well, I mean, there's scale, I mean, one thing they're dealing with, particularly for end user-facing apps, you're dealing at a very, very high scale and the expectation that you're going to deliver a very interactive experience. So I'd say one thing in particular that we are focusing on is making sure we deliver that highly interactive experience, but it's the scale of the number of users and customers that they have, and the expectation that your application's always going to work. >> Speaking of applications, I mean, it seems like that's where the innovation is going to come from. We saw yesterday, GSMA announced, I think, eight telco APIs, you know, we were talking on theCUBE, one of the analysts was like, "Eight, that's nothing," you know, "What do these guys know about developers?" But you know, as Danielle Royston said, "Eight's better than zero." >> Right? >> So okay, so we're starting there, but the point being, it's all about the apps, that's where the innovation's going to come from- >> That's right. >> So what are you seeing there, in terms of building on top of the data app? >> Right, well, you have to provide, I mean, have to provide the APIs and the access, because it is really the rubber meets the road with the developers, and giving them the ability to create those really rich applications where they want and create the experiences and innovate and change the way that they're giving those experiences. >> Yeah, so what's your relationship with developers at Couchbase? >> John: Yeah. >> I mean, talk about that a little bit- >> Yeah, yeah, so we have a great relationship with developers, something we've been investing more and more in, in terms of things like developer relations teams and community. Couchbase started in open source, continues to be based on open source projects, and of course, those are very developer centric. 
So we provide all the consistent APIs for developers to create those applications, whether it's something on Couchbase Lite, which is our kind of edge-based database, or how they can sync that data back and we actually automate a lot of that syncing which is a very difficult developer task which lends them to one of the developer- >> What I'm trying to figure out is, what's the telco developer look like? Is that a developer that comes from the enterprise and somebody comes from the blockchain world, or AI or, you know, there really doesn't seem to be a lot of developer talk here, but there's a huge opportunity. >> Yeah, yeah. >> And, you know, I feel like, the telcos kind of remind me of, you know, a traditional legacy company trying to get into the developer world, you know, even Oracle, okay, they bought Sun, they got Java, so I guess they have developers, but you know, IBM for years tried with Bluemix, they had to end up buying Red Hat, really, and that gave them the developer community. >> Yep. >> EMC used to have a thing called EMC Code, which was a, you know, good effort, but eh. And then, you know, VMware always trying to do that, but, so as you move up the stack obviously, you have greater developer affinity. Where do you think the telco developer's going to come from? How's that going to evolve? >> Yeah, it's interesting, and I think they're... To kind of get to your first question, I think they're fairly traditional enterprise developers and when we break that down, we look at it in terms of what the developer persona is, are they a front-end developer? Like they're writing that front-end app, they don't care so much about the infrastructure behind or are they a full stack developer and they're really involved in the entire application development lifecycle? Or are they living at the backend and they're really wanting to just focus in on that data layer? 
So we lean towards all of those different personas, and we think about them in terms of the APIs that we create. So that's really what the developers are for telcos: there's a combination of those front-end and full stack developers, and so for them to continue to innovate they need to appeal to those developers, and technology like Couchbase is what helps them do that. >> Yeah, and you think about the Apples, you know, the app store model, where Apple sort of says, "Okay, here's a developer kit, go create." >> John: Yeah. >> "And then if it's successful, you're going to be successful and we're going to take a vig," okay, good model. >> John: Yeah. >> I think I'm hearing, and maybe I misunderstood this, but I think it was the CEO or chairman of Ericsson on the day one keynotes, was saying, "We are going to monetize the, essentially the telemetry data, you know, through APIs, we're going to charge for that," you know, maybe that's not the best approach, I don't know, I think there's got to be some innovation on top. >> John: Yeah. >> Now maybe some of these greenfield telcos are going to do that, like, you take a Dish Network, what they're doing, they're really trying to drive development layers. So I think it's like this wild west, open, you know, community that's got to be formed, and right now it's very unclear to me, do you have any insights there? >> I think it is more, like you said, Wild West. I think there's no emerging standard per se across those different company types and sort of different pieces of the industry. So consequently, it does need to form some more standards in order to really help it grow, and I think you're right, you have to have the right APIs and the right access in order to properly monetize; you have to attract those developers or you're not going to be able to monetize properly. 
>> Do you think that if, in thinking about your business, and you know, you've always sold to telcos, but now it's like there's this transformation going on in telcos, will that become an increasingly larger piece of your business or maybe even a more important piece of your business? Or is it kind of steady state, because it's such a slow moving industry? >> No, it is a big and increasing piece of our business. I think telcos, like other enterprises, want to continue to innovate, and so they look to, you know, technologies like Couchbase's document database that allows them to have more flexibility and deliver the speed that they need to deliver those kinds of applications. So we see a lot of migration off of traditional legacy infrastructure in order to build that new age interface and new age experience that they want to deliver. >> A lot of buzz in Silicon Valley about OpenAI and ChatGPT- >> Yeah. >> You know, what's your take on all that? >> Yeah, we're looking at it. I think it's exciting technology, I think there's a lot of applications that are kind of, a little, sort of innovating on traditional interfaces. So, for example, you can train ChatGPT to create code, sample code for Couchbase, right? You can go and get it to give you that sample app, which gets you a headstart, or you can actually get it to do a better job of, you know, sorting through your documentation; like, ChatGPT can do a better job of helping you get access. So it improves the experience overall for developers, so we're excited about, you know, what the prospect of that is. >> So you're playing around with it, like everybody is- >> Yeah. >> And potentially- >> Looking at use cases- >> Ways to integrate, yeah. >> Hundred percent. >> So are we. John, thanks for coming on theCUBE. Always great to see you, my friend. >> Great, thanks very much. >> All right, you're welcome. All right, keep it right there, theCUBE will be back live from Barcelona at the theater. 
SiliconANGLE's continuous coverage of MWC23. Go to siliconangle.com for all the news, theCUBE.net is where all the videos are, keep it right there. (cheerful upbeat music outro)
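The edge pattern John describes in this conversation, where apps keep working without connectivity and sync back to a central server once the network returns, is the classic offline-first design that Couchbase Lite and its sync layer automate for developers. A toy sketch of the idea using a local SQLite queue; the class and function names are invented for illustration and are not Couchbase's actual API:

```python
import json
import sqlite3

class OfflineFirstStore:
    """Toy local store that queues writes while offline and flushes
    them to a central server once connectivity returns."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS pending (id INTEGER PRIMARY KEY, doc TEXT)"
        )

    def write(self, doc):
        # Always succeeds locally: the app keeps working with no connectivity.
        self.db.execute("INSERT INTO pending (doc) VALUES (?)", (json.dumps(doc),))
        self.db.commit()

    def sync(self, push_to_server):
        # Called whenever connectivity is detected; drains the queued writes.
        rows = self.db.execute("SELECT id, doc FROM pending ORDER BY id").fetchall()
        for row_id, doc in rows:
            push_to_server(json.loads(doc))          # upload one queued write
            self.db.execute("DELETE FROM pending WHERE id = ?", (row_id,))
        self.db.commit()
        return len(rows)

# Usage: writes land locally, then sync drains the queue when online again.
store = OfflineFirstStore()
store.write({"patient": "a1", "vitals": {"hr": 72}})
store.write({"patient": "a2", "vitals": {"hr": 88}})

uploaded = []
synced = store.sync(uploaded.append)   # pretend the network came back
print(synced, len(uploaded))           # prints: 2 2
```

The peer-to-peer variant mentioned in the interview is the same loop pointed at a neighboring edge node instead of the central server.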
Danielle Royston, TelcoDR | MWC Barcelona 2023
>> Announcer: theCUBE's live coverage is made possible by funding from Dell Technologies. Creating technologies that drive human progress. (upbeat music) >> Hi everybody. Welcome back to Barcelona. We're here at the Fira Live, theCUBE's ongoing coverage of day two of MWC 23. Back in 2021 was my first Mobile World Congress. And you know what? It was actually quite an experience because there was nobody there. I talked to my friend, who's now my co-host, Chris Lewis, about what to expect. He said, Dave, I don't think a lot of people are going to be there, but Danielle Royston is here, and she's the CEO of TelcoDR. And that year, when Ericsson tapped out of its space, she took out 60,000 square feet and built out Cloud City. If it weren't for Cloud City, there would've been no Mobile World Congress in June and July of 2021. DR is back. Great to see you. Thanks for coming on. >> It's great to see you. >> Chris. Awesome to see you. >> Yeah, Chris. Yep. >> Good to be back. Yep. >> You guys remember the narrative back then. There was this lady running around, this crazy lady that I met at Google Cloud Next, saying >> Yeah. Yeah. >> the cloud's going to take over Telco. And everybody's like, well, this lady's nuts. The cloud's been leaning in, you know? >> Yeah. >> So what do you think, I mean, what's changed since you first caused all those ripples? >> I mean, I have to say that I think that I caused a lot of change in the industry. I was talking to leaders over at AWS yesterday and they were like, we've never seen someone push like you have and change so much in a short period of time. And Telco moves slow. It's known for that. And they're like, you are pushing buttons and you're getting people to change, and thank you, and keep going. And so it's been great. It's awesome. >> Yeah. I mean, it was interesting, Chris, we heard on the keynotes we had Microsoft, Satya came in, Thomas Kurian came in. There was no AWS. And now I asked the CMO of GSMA about that. 
She goes, hey, we got a great relationship with AWS. >> Danielle: Yeah. >> But why do you think they weren't here? >> Well, they, I mean, they are here. >> I mean, not here. Why do you think they weren't profiled? >> They weren't on the keynote stage. >> But, you know, at AWS, a lot of the times they want to be the main thing. They want to be the main part of the show. They don't like sharing the limelight. I think they just didn't want to be on the stage with the Google Cloud guys and these other guys; what they're doing, they're building out, they're doing so much stuff. As Danielle said, with Telcos changing the ecosystem, which is what's happening with cloud. Cloud's making the Telcos think about what the next move is, how they fit in with the way other people do business, right? So Telcos never used to have to listen to anybody. They only listened to themselves and they dictated the way things were done. They were very successful and made a lot of money, but they're now having to open up, they're having to leverage the cloud, they're having to leverage the services that (indistinct words) and people provide, and they're changing the way they work. >> So, okay, in 2021, we talked a lot about the cloud as a potential disruptor, and your whole premise was, look, you've got to lean into the cloud, or you're screwed. >> Danielle: Yeah. >> But the flip side of that is, if they lean into the cloud too much, they might be screwed. >> Danielle: Yeah. >> So what's that equilibrium? Have they been able to find it? Are you working with just the disruptors, or how's that? >> No, I think they're finding it, right. So my talk at MWC 21 was all about how the cloud is a double-edged sword, right? There's two sides to it, and you definitely need to proceed through it with caution, but also, I don't know that you have a choice, right? I mean, the multicloud, you know, is there another industry that spends more on CapEx than Telco? >> No. >> Right. The hyperscalers are doing it right. 
They spend, you know, easily approaching $100 billion in CapEx, which rivals this industry. And so when you have a player like that driving an industry, you know, and investing so much, Telco, you're always complaining how everyone's riding your coattails. This is the opportunity to ride someone else's coattails. So jump on, right? I think you don't have a choice, especially if other Telco competitors are using hyperscalers and you don't, you're going to be left behind. >> So you advise these companies all the time, but >> I mean, the issue is they're all using all the hyperscalers, right? So there are the multiple relationships. And as Danielle said, the multi-layer of relationship: they're using the hyperscalers to change their own internal operational environments, to become more IT-centric, to move to that software-centric Telco. And they're also then, with the hyperscalers, going to market in different ways, sometimes with them, sometimes competing with them. What it means from an analyst point of view is you're suddenly changing the dynamic of a market where we used to have nicely well-defined markets previously. Now everyone's in it together, you know, it's great. And it's making people change the way they think about services. What I really hope it changes more than anything else is the way the customers at the end of the supply, the value chain, think: this is what we can get hold of, this stuff. Now we can go into the network through the cloud and we can get those APIs. We can draw on the mechanisms we need to run our personal lives, to run our business lives. And frankly, society as a whole. It's really exciting. >> Then your premise is basically, you were saying they should ride over the top of the cloud vendors. >> Yeah. Right? >> No. Okay. But don't they lose all the data if they do that? >> I don't know. 
I mean, I think the hyperscalers are not going to take their data, right? I mean, that would be a really, really bad business move if Google Cloud and Azure and AWS started to take over that data. >> But they can't take it. >> They can't. >> From sovereignty and regulation. >> They can't because of regulation, but also just like business, right? If they started taking their data, like, no enterprises would use them. So I think, I think the data is safe. I think, obviously every country is different. You've got to understand the different rules and regulations for data privacy and how you keep it. But I think as we look at the long term, right, and we always talk about 10 and 20 years, there's going to be a hyperscaler region in every country, right? And there will be a way for every Telco to use it. I think their data will be safe. And I think you're going to be able to stand on the shoulders of someone else for once and use the building blocks of software that these guys provide to make better experiences for subscribers. >> You guys have got to explain this to me, because when I say data, I'm not talking about, you know, personal information. I'm talking about all the telemetry, you know, all the, you know, the plumbing. >> Danielle: Yeah. >> Data, which is- >> It will increasingly be shared, because you need to share it in order to deliver the services in the streamlined, efficient way that they need to be delivered. >> Did I hear the CEO of Ericsson right, where basically he said, we're going to charge developers for access to that data through APIs? >> What Ericsson have done, obviously with the Vonage acquisition, is they want to get into APIs. So the idea is you're exposing features, quality, policy-on-demand type features, for example, or even pulling, we still use a lot of SMS, right? So pulling those out using those APIs. So it will be charged in some way. 
Whether- >> Man: Like Twitter's charging me for API calls now, you >> Know what it is? I think it's Twilio. >> Man: Oh, okay. >> Right. >> Man: No, no, that's sure. >> There's no reason why telcos couldn't provide a Twilio-like service themselves. >> It's a horizontal play though, right? >> Danielle: Correct, because developers need to be charged by the API. >> But doesn't there need to be an industry standard to do that as- >> Well, I think that's what they just announced. >> Industry standard. >> Danielle: I think they just announced that. Yeah. Right now I haven't looked at that API set, right? >> There's like eight of them. >> There's eight of them. Twilio has, it's a start, you've got to start somewhere, Dave. (crosstalk) >> And there's also the TM Forum and all the other standards >> Right? Eight is better than zero- >> Right? >> Haven't got plenty. >> I mean, for an industry that didn't really understand APIs as a feature, as a product, as a service, right? For Mats Granryd, the director general of GSMA, to stand on the keynote stage and say we partnered and we're unveiling, right, pay-by-the-use APIs. I was for it. I was like, that is insane. >> I liked his keynote actually, because I thought he was going to talk about how many attendees and how much economic benefit >> Danielle: We're super diverse. >> He said, I would usually talk about that and, you know, greening in the network, which you did talk about a little bit. But, but that's, that surprised me. >> Yeah. >> But I've seen in the enterprise, this is not my space as, you know, you guys don't live this, but I've seen Oracle try to get developers. IBM had to pay $35 billion for Red Hat to get developers, right? EMC used to have a thing called EMC Code, failed. >> I mean, they've got to do something, right? So 4G, they didn't really make the business case, the ROI on the investment in the network. Here we are with 5G, same discussion is happening: where's the use case? 
How are we going to monetize and make the ROI on this massive investment? And now they're starting to talk about 6G. Same fricking problem is going to happen again. And so I think they need to start experimenting with new ideas. I don't know if it's going to work. I don't know if this new API network gateway theme that Mats talked about yesterday will work. But they need to start unbundling that unlimited plan. They need to start charging people who are using the network more, more money. Those who are using it less, less. They need to figure this out. This is a crisis for them. >> Yeah, our own CEO, I mean, she basically said, hey, I'm for net neutrality, but I want to be able to charge the people that are using it more and more- >> To make a return on capital. >> I mean, it costs billions of dollars to build these networks, right? And they're valuable. We use them, and we talked about this in Cloud City 21, right? The ability to start building better metaverses. And I know that's a buzzword and everyone hates it, but it's true. Like, we're working from home. We need- there's got to be a better experience than Zoom in 2D, right? And you need a great network for that metaverse to be awesome. >> You do. But Danielle, you don't need cellular for doing that, do you? So the fixed network is as important. >> Sure. >> And we're at Mobile World. But actually what we're beginning to hear, and Crystal Bren did say this exactly, it's that the access is sort of irrelevant. Fixed is better because of the cost; the return on investment is better from fiber. Mobile, we're going to change every so many years because there's a new generation. But we need to get the mechanism in place to deliver that. I actually don't agree that everyone should pay differently for what they use. It's a universal service. We need it as individuals. We need to make it sustainable for every user. Let's just not go for the biggest user. It's not, it's not the way to build it. 
It won't work; you'll crash the system if you do that. And the other thing I disagree on: it's not about standing on the shoulders and benefiting from what- It's about cooperating across all levels. The hyperscalers want to work with the telcos as much as the telcos want to work with the hyperscalers. There's a lot of synergy there. There's a lot of ways they can work together. It's not one or the other. >> But I think you're saying let the cloud guys do the heavy lifting, and I'm- >> Yeah. >> Not at all. >> And so you don't think so? Because I feel like the telcos are really good at pipes. They've always been good at pipes. They're engineers. >> Danielle: Yeah. >> Are they hanging on to the connectivity, or should they let that go and go toward the developer? >> I mean, AWS had two announcements on the 21st, a week before MWC. And one was that Telco Network Builder. This is literally being able to deploy a network capability at AWS with keystrokes. >> As a managed service. >> Danielle: Correct. >> Yeah. >> And so, I don't know about the telco world, but I felt the shock waves, right? I was like, whoa, that seems really big. Because they're taking something that previously was like bread and butter. This is what differentiates each telco, and now they've standardized it and made it super easy so anyone can do it. Now, do I think the five-nines of super crazy hardcore network criteria will be built on AWS this way? Probably not, but- >> It's not end-to-end. So you can't, no. >> Right. But private networks could be built with this pretty easily, right? And so telcos that don't have as much funding, right, smaller, more experiments. I think it's going to change the way we think about building networks in telcos. >> And those smaller telcos, I think, are going to be more developer friendly. >> Danielle: Yeah. >> They're going to have business models that invite those developers in.
And the disruption's going to come from the ISVs and the workloads that are on top of that. >> Well, certainly what Dish is trying to do, right? Dish is trying to build a- they launched at re:Invent a developer experience. >> Dave: Yeah. >> Right. Built around their network, and you know, again, I don't know, they were not part of this group that designed these eight APIs, but I'm sure they're looking with great intent on what this means for them. They'll probably adopt them, because they want people to consume the network as APIs. That's the whole thing that Marc Rouanne is trying to do. >> Okay, and then they're doing Open RAN. But they're not as concerned as Rakuten with the reliability, and is that the right play? >> In this discussion, Open RAN is not an issue. It really is irrelevant. It's relevant for the longer-term future of the industry by disaggregating and being able to share, especially RAN sharing, for example, in the short term in rural environments. We'll see some of that happening, and it will change, but it will also influence the way the existing RAN providers build their services and offer their value. Look, you've got to remember, the relationships between the equipment providers and the telcos vary dramatically. Whether it's Ericsson, Nokia, Samsung, Huawei, whoever. Those relationships, and the managed services element of them, depend on what skills people have in-house within the telco and what service they're trying to deliver. So there's never one size fits all in this industry. >> You're very balanced in your analysis, and I appreciate that. >> I try to be. >> But I am not. (chuckles) >> So this is my question. When DR went off a couple of years ago on the cloud's going to take over the world, you were skeptical. You gave a balanced approach. Have you? >> I still am.
>> Have you moderated your thoughts on that, or- >> I believe the telecom industry is a very strong industry. It's my industry; of course I love it. But it is developing much different relationships with the ecosystem players around it. You mentioned developers, you mentioned the cloud players, the equipment guys are changing. There are so many moving parts to build the telco of the future that every country needs a very strong telco environment to be able to support society as a whole. Businesses, individuals, so- >> Well, I think two years ago we were talking about should they or shouldn't they, and now it's an inevitability. >> I don't think we were, Danielle. >> All using the hyperscalers. >> We were always going to need to transform the telcos from the conservative environments in which they developed. And they've had control of everything. In order to reduce the cost, even if they get no extra revenue at all, they've got to go on a cloud migration path to do that. >> Amenable. >> Has it been harder than you thought? >> It's been easier than I thought. >> You think it's gone faster than- >> It's gone way faster than I thought. I mean, pushing on this flywheel, I thought for sure it would take five to 10 years; it is moving. I mean, the maths comp thing, the AWS announcements last week, they're putting hyperscalers in Saudi Arabia, which is probably one of the most data-private places in the world. It's happening really fast. >> What Azure's doing? >> I feel like I can't even go to sleep. Because I've got to keep up with it. It's crazy. >> Guys. >> This is awesome. >> So awesome having you back on. >> Yeah. >> Chris, thanks for co-hosting. Appreciate you staying here. >> Yep. >> Danielle, amazing. We'll see you. >> See you soon. >> A lot of action here. We're going to come out- >> Great. >> Check out your venue. >> Yeah, the Totogi buses that are outside. >> The big buses. You've got a great setup there. We're going to see you on Wednesday. Thanks again.
>> Awesome. Thanks. >> All right. Keep it right there. We'll be back to wrap up day two from MWC 23 on theCUBE. (upbeat music)
Day 2 MWC Analyst Hot Takes MWC Barcelona 2023
(soft music) >> Announcer: TheCUBE's live coverage is made possible by funding from Dell Technologies. Creating technologies that drive human progress. (upbeat music) >> Welcome back to Spain, everybody. We're here at the Fira in MWC23. It's just an amazing day. This place is packed. They said 80,000 people. I think it might even be a few more with walk-ins. I'm Dave Vellante, Lisa Martin is here, David Nicholson. But right now we have the Analyst Hot Takes with three friends of theCUBE. Chris Lewis is back again with me in the co-host seat. Zeus Kerravala, analyst extraordinaire. Great to see you, Z. And Sarbjeet SJ Johal. Good to see you again, theCUBE contributor. And that's my new name for him. He says that is his nickname. Guys, thanks for coming back on. We've got the all-male panel, sorry, but it is what it is. So Z, this is the first time you've been on at MWC? Takeaways from the show, hot takes. What are you seeing? Same wine, new bottle? >> In a lot of ways, yeah. I mean, I was talking to somebody earlier that if you had come from, like, MWC five years ago to this year, a lot of the themes are the same. Telco transformation, cloud. I mean, 5G is a little new. Sustainability is certainly a newer theme here. But I think it highlights just the difficulty the telcos have in making this transformation. And I think, in some ways, I've been unfair to them to some degree, 'cause I've picked on them in the past for not moving fast enough. You know, I think these kinds of big transformations almost take like a perfect storm of things that come together to happen, right? And so, in the past, we had technologies that might have lowered opex, but they're hard to deploy. They're vertically integrated. We didn't have the software stacks. But it appears today that between the cloudification of, you know, going to cloud native, the software stacks, the APIs, the ecosystems, I think we're actually in a position to see this industry finally move forward.
Yeah, and Chris, I mean, you have served this industry for a long time. And you know, when you do that, you get briefed as an analyst, and you actually realize, wow, there's a lot of really smart people here, and they have challenges they're working through. So Zeus was saying he's been tough on the industry. You know, what do you think about how the telcos have evolved in the last five years? >> I think they've changed enormously. I think the problem we have is we're always looking for the great change, the big step change, and there is no big step change, in a way. What telcos deliver to us as individuals, businesses, society, the connectivity piece, that's changed. We get better and better and more reliable connectivity. We're shunting a load more capacity through. What I think has really changed is their attitude to their suppliers, their attitude to their partners, and their attitude to the ecosystem in which they play. Understanding that connectivity is not the end game. Connectivity is part of the emerging end game, where it will include storage, compute, connect, and analytics and everything else. So I think the realization is that they are not playing their own game anymore; it's a much more open game. And some things they will continue to do, some things they'll stop doing. We've seen them withdraw from moving into adjacent markets as much as we used to see. A lot of them in the past went off to try and do movies and media, and a lot went way into business IT stuff. They've mainly pulled back from that, and they're focusing on, and let's face it, it's not just a 5G show. The fixed environment is unbelievably important. We saw that during the pandemic. Having that fixed broadband connection using wifi, combining with cellular. We love it. But the problem as an industry is that the users often don't even know the connectivity's there. They only know when it doesn't work, right?
>> If it's not media and it's not business services, what is it? >> Well, in my view, it will be enabling third parties to deliver the services that will include media, that will include business services. So embedding the connectivity all the way into the application that gets delivered, or embedding it so the quality mechanisms deliver the gaming much more accurately, or, I'm not a gamer, so I can't comment on that. But the video quality, if you want high-quality video, will come through better. >> And those cohorts will pay for that value? >> Somebody will pay somewhere along the line. >> Seems fuzzy to me. >> Me too. >> I do think it's use-case dependent. Like, you look at all the work Verizon did at the Super Bowl this year; that's a perfect case where they could have upsold. >> Explain that. I'm not familiar with it. >> So Verizon provided all the 5G in the Super Bowl. They provided private connectivity for the coaches to talk to the sidelines. And that's a mission-critical application, right? In the NFL, if one side can't talk, the other side gets shut down. You can't communicate with the quarterback or the coaches. There's a lot of risk in that. So, but you know, there's a case there, though, I think where they could have even made that fan-facing, right? And if you're paying 2000 bucks to go to a game, would you pay 50 bucks more to have a higher tier of bandwidth so you can post things on social? People that go there want people to know they were there. >> Every football game you go to, you can't use your cell. >> Analyst: Yeah, I know, right? >> All right, let's talk about developers, because we saw the eight APIs come out. I think ISVs are going to be a big part of this. But it's like DR said: hey, eight's better than zero, I guess.
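For what it's worth, the pay-by-the-API-call model the panel keeps returning to can be sketched as a simple metered gateway. This is a hypothetical illustration only; the class, API names, and per-call rates below are invented, not any operator's or the GSMA's actual billing interface:

```python
# Hypothetical sketch of per-call network API metering.
# Everything here (names, rates) is illustrative, not a real telco API.
from collections import defaultdict

class MeteredGateway:
    def __init__(self, price_per_call):
        # e.g. {"sms": 0.002, "quality-on-demand": 0.01} in dollars per call
        self.price_per_call = price_per_call
        self.usage = defaultdict(int)  # (api_key, api_name) -> call count

    def call(self, api_key, api_name):
        """Record one billable invocation of a network API."""
        if api_name not in self.price_per_call:
            raise ValueError(f"unknown API: {api_name}")
        self.usage[(api_key, api_name)] += 1

    def bill(self, api_key):
        """Total charges accrued by one developer key."""
        return sum(count * self.price_per_call[name]
                   for (key, name), count in self.usage.items()
                   if key == api_key)

gw = MeteredGateway({"sms": 0.002, "quality-on-demand": 0.01})
for _ in range(100):
    gw.call("dev-123", "sms")
gw.call("dev-123", "quality-on-demand")
print(round(gw.bill("dev-123"), 4))  # 0.21
```

The point of the sketch is the unbundling Danielle describes: the heavy user of the network pays more, the light user pays less, per invocation rather than per unlimited plan.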
Okay, so the innovation is going to come from ISVs and developers, but what are your hot takes from this show? Now, day two: we're a day and a half in, almost two days in. >> Yeah, yeah. There's a thing I have mentioned many times: skills gravity, right? Skills have gravity, and also, to outcompete, you have to educate. That's another theme of my talks, of my research: to put your technology out there to the practitioners, you have to educate them. And that's the only way to democratize your technology. What telcos have been doing is they have been stuck with proprietary software and proprietary hardware for too long, from the Nokias of the world and other vendors like that. So now, with the open sourcing of some of the components and a few others, right? There's open source in the antenna space, you know? Antennas are becoming software now. So with the advent of these things, which are open source, it helps us democratize that out to the practitioners, if you will. And that will bring in more applications, first into the IOT space, and then maybe into the core network, if you will. >> So what does a telco developer look like? I mean, all the blockchain developers and crypto developers are moving into generative AI, right? So maybe those worlds come together. >> You'd like to think, though, that the developers would understand everything's network-centric today. So you'd like to think they'd understand how the network responds. You know, you'd take a simple app like Zoom or something. If it notices the bandwidth changes, it should knock down the resolution. If it goes up, then you can add different features and things, and you can make apps a lot smarter that way. >> Well, Jeetu was saying today that they did a deal with Mercedes, you know this probably better than I do, where they're going to embed WebEx in the car. And if you're driving, it'll shut off the camera. >> Of course.
>> I'm like, okay. >> I'll give you a better example, though. >> But that's my point. Like, isn't there more that we can do? >> You noticed down on the SKT stand the little helicopter. That's a vertical-lift helicopter. So it's an electric vertical-lift helicopter. Just think of that for a second. And then think of the connectivity to control that, to securely control that. And then I was recently at an event with Zeus, actually, where we saw an air traffic control system where there was no one manning the tower. It was managed by someone remotely with all the cameras around them. So managing all of those different elements, we call it IOT, but actually it's way more than what we thought of as IOT. All those components connecting, communicating securely and safely. 'Cause I don't want that helicopter to come down on my head, do you? (men laugh) >> Especially if you're in there. (men laugh) >> Okay, so you mentioned sustainability. Everybody's talking about power. I don't know if you guys have a lot of experience around TCO, but I'm trying to get to: is this just because energy costs are so high, and when the energy becomes cheap again, nobody's going to pay any attention to it? Or is this the real deal? >> So one of the issues is, if we want to experience all that connectivity locally, or that helicopter wants to have that connectivity, we have to ultimately build denser, more reliable networks. So there's a CapEx: we're going to put more base stations in place. We need more fiber in the ground to support them. Therefore, the energy consumption will go up. So we need to be more efficient in the use of energy. Simple as that. >> How much of the operating expense is energy? Like, what percent of it? Is it 10%? Is it 20%? Does anybody know? >> It depends who you ask, and it depends on the- >> I can't get an answer to that. I mean, in the enterprise- >> Analyst: The data centers? >> Yeah, the data centers. >> We have the numbers.
I think 10 to 15%. >> It's 10 to 12%, something like that. Is it much higher? >> I've got a feeling it's 30%. >> Okay, so if it's 30%, that's pretty good. >> I do think we have to get better at understanding how to measure, too. You know, like I was talking with John Davidson at Cisco about this: every rev of silicon they come out with uses more power, but it's a lot more dense. So on the surface, you go, well, that's using a lot more power. But you can consolidate 10 switches down to two switches. >> Well, Intel was on early, talking about how they can intelligently control the cores. >> But it's based off workload, right? That's the thing. So what are you running over it? And so, I don't think our industry measures that very well. I think we look at things box by box versus looking at total consumption. >> Well, somebody else on theCUBE was saying they go full throttle. That the networks just run full throttle on everything. And that obviously has to change from the power consumption standpoint. >> Obviously, sustainability and sensors from the IOT side go hand in hand. Just simple examples, like lights in the restrooms, in public areas: somebody goes in there, and only then do they turn on. The same concept is being applied to servers and compute and storage, every aspect, and to networks as well. >> Cell tower. >> Yeah. >> Cut 'em off, right? >> Like the serverless telco? (crosstalk) >> Cell towers. >> Well, no, I'm saying, right, but like serverless, you're not paying for the compute when you're not using it, you know? >> It is serverless from the economics point of view. Yes, it's like that, you know? It goes to the lowest level, almost like the sleep level on our laptops, and comes back when you need more power, more compute. >> I mean, some of that stuff's been in networking equipment for a long time; it just never really got turned on. >> I want to ask you about private networks.
You wrote a piece: Athonet was acquired by HPE right after Dell announced a relationship with Athonet, which was kind of funny. And so a good move, a good judo move by HPE. I asked Dell about it, and they said, look, we're open. They said the right things. We'll see, but I think it's up to HPE. >> Well, and the network inside Dell is. >> Yeah, okay, so. Okay, cool. So, but you said something in that article you wrote on SiliconANGLE, that a lot of people feel like P5G is going to basically replace wireless or cannibalize wireless. You said you didn't agree with that. Explain why. >> Analyst: Wifi. >> Wifi, sorry, I said wireless. >> No, that's, I mean, that's ridiculous. Pat Gelsinger said that at his last VMworld, which I thought was completely irresponsible. >> That it was going to cannibalize? >> Cannibalize wifi globally is what he said, right? Now, he had Verizon on stage with him, so. >> Analyst: Wifi's too inexpensive and flexible. >> Wifi's cheap- >> Analyst: It's going to embed really well. Embedded in that. >> It's reached near ubiquity. It's unlicensed. So a lot of businesses don't want to manage their own spectrum, right? And it's great for this, right? >> Analyst: It does the job. >> For casual connectivity. >> Not today. >> Well, it does for the most part. Right now- >> For the most part. But never at these events. >> If it's engineered correctly, it will, right? Where you need private 5G is when reliability is an absolute must. So, Chris, you and I visited the Port of Rotterdam, right? They're putting private 5G there, but there's metal containers everywhere, right? And that's going to disrupt it. And so there are certain use cases where it makes sense. >> I've been in your basement, and you've got some pretty intense equipment in there. You have private 5G in there. >> But for carpeted offices, it does not make sense to bring private. The economics don't make any sense. And you know, it runs hot. >> So where's it going to be used?
Give us some examples of where we should be looking. >> The early ones are obviously in mining, and, as you say, in ports, in airports. It broadens out to cities because you've got so many moving parts in there, and always think about it: very expensive moving parts. The cranes in the port are normally expensive pieces of kit. You're moving all that logistics around. So managing that over a distance, where the wifi won't work over the distance. And in mining, we're going to see enormous, expensive trucks moving around trying to- >> I think a great new use case, though: the Cleveland Browns are actually the first NFL team to use it for facial recognition to enter the stadium. So instead of having to even pull your phone out, it says, hey, Dave Vellante, you've got four tickets, can we check you all in? And you just walk through. You could apply that to airports. You could put that in a hotel. You could walk up and check in. >> Analyst: Retail. >> Yeah, retail. And so I think realtime video analytics is a perfect use case for that. >> But you don't need 5G to do that. You could do that through another mechanism, couldn't you? >> You could do wired, depending on how mobile you want it. Like in a stadium, you're pulling those things in and out all the time. You're moving 'em around and things, so. >> Yeah, but you're coming in at a static point. >> I'll take the contrary view here. >> See, we can't even agree on that. (men laugh) >> Yeah, I love it. Let's go. >> I believe the reliability of connection is very important, right? And the moving parts. What are the moving parts in wifi? We have the NIC card, you know, the wifi card in these suckers, right? In a machine, you know? They're bigger in size, and the radios for 5G are smaller in size. So miniaturization is an important part of the whole progress to the future, right? >> I think 5G costs as well. Yes, cost as well. But cost, we know, goes down with time, right?
We're already talking about 6G, and the 5G stuff will be good. >> Actually, sorry, so one of the big boom areas at the moment is 4G LTE, because the component price has come down so much, so it is affordable; you can afford to bring it all together. But without 5G standalone everywhere, you're not going to get a consistent service. So those components are unbelievably important. The skillsets of the people doing the integration to bring them all together: unbelievably important. And the business case within the business. So I was talking to one of the heads of one of the big retail outlets in the UK, and I said, when are you going to do 5G in the stores? He said, well, why would I tear out all the wifi? I've got perfectly functioning wifi. >> Yeah, that's true. It's already there. But I think the technology which disappears in front of you, that's the best technology. Like, you don't worry about it. You don't think it's there. Wifi, we think about like it's just there. >> And I do think wifi-5G switching's got to get easier, too. Like, for most users, you don't know which is better. You don't even know how to test it. And to your point, it does need to be invisible, where the user doesn't need to think about it, right? >> Invisible. See, we came back to invisible. We talked about that yesterday. Telecom should be invisible. >> And it should be, you know? You don't want to be thinking about telecom, but at the same time, telecoms want to be more visible. They want to be visible like Netflix, don't they? I still don't see the path. It's fuzzy to me, the path of how they're not going to repeat what happened with the over-the-top providers if they're invisible. >> Well, if you think about what telcos deliver to consumers, to businesses, then extending that connectivity into your home to help you support, secure, and extend your connection into Zeus's basement, whatever it is. Obviously that's- >> His awesome setup down there.
>> And then in the business environment, there's a big change going on from the old MPLS networks, the old rigid structures of networks, to SD-WAN, where the control point is moved outside, which can be under the control of the telco or under the control of a third-party integrator. So there's a lot changing. I think we obsess about the relative role of the telco. The demand for connectivity is phenomenal. So address that, fulfill that. And if they do that, then they'll start to build trust in other areas. >> But don't you think they're going to address that and fulfill that? I mean, they're good at it. That's their wheelhouse. >> And it's a $1.6 trillion market, right? So it's not to be sniffed at. That's fixed and mobile together, obviously. But no, it's a big market. And do we keep changing? As long as the service is good, we don't move away from it. >> So back to the APIs, the eight APIs, right? >> I mean- >> Eight APIs is almost a joke, actually. I think they released it too early. They released it on the main stage, you know? Like, what is this, right? But of course they will grow into hundreds and thousands of APIs. But they have to spend a lot of time and effort in that sort of context. >> I'd actually like to see the GSMA work with, like, AWS and Microsoft and VMware and software companies and create some standardization across their APIs. >> Yeah. >> I spoke to them yes- >> We're trying to reinvent them. >> Is that not what they're doing? >> No, they said, we are not in the business of defining standards. And they used a different term, not standard. I mean, seriously. I was like, are you kidding me? >> Let's face it, there aren't just eight APIs out there. There's so many of them. The TM Forum's been defining its Open Digital Architecture. You know, the telcos themselves are defining them. The standards we talked about earlier with Danielle.
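In software terms, the gap the panel is describing here is an adapter problem: give applications one carrier-neutral interface and hide each operator's divergent API shape behind a thin adapter. A minimal sketch, with both carrier classes and their return values invented purely for illustration:

```python
# Adapter-layer sketch for divergent telco APIs. The carriers and
# return values here are invented; real adapters would call each
# operator's actual endpoints.
from abc import ABC, abstractmethod

class SmsApi(ABC):
    """Carrier-neutral interface the application codes against."""
    @abstractmethod
    def send_sms(self, to: str, text: str) -> str: ...

class CarrierA(SmsApi):
    def send_sms(self, to: str, text: str) -> str:
        return f"A:msg->{to}"  # stand-in for carrier A's API call

class CarrierB(SmsApi):
    def send_sms(self, to: str, text: str) -> str:
        return f"B:msg->{to}"  # stand-in for carrier B's API call

def notify(api: SmsApi, to: str, text: str) -> str:
    # Application code never sees which carrier is underneath.
    return api.send_sms(to, text)

print(notify(CarrierA(), "+34600111222", "hi"))  # A:msg->+34600111222
print(notify(CarrierB(), "+34600111222", "hi"))  # B:msg->+34600111222
```

This is what an industry-standard API set would buy developers: the more consistent the interfaces, the thinner this adapter layer gets.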
There's a lot of APIs out there, but the consistency of APIs, so we can bring them together, to bring all the different services together that will support us in our different lives, is really important. I think telcos will do it; it's in their interest to do it. >> All right, guys, we've got to wrap. Let's go around the horn here, starting with Chris, Zeus, and then Sarbjeet. Just bring us home. Number one hot take from Mobile World Congress MWC23 day two. >> My favorite hot take is the willingness of all the participants, who have been traditional telco players who looked inwardly at the industry, to look outside for help, for partnerships, and to build an ecosystem, a more open ecosystem, which will address our requirements. >> Zeus? >> Yeah, I was going to talk about ecosystem. I think for the first time ever, when I've met with the telcos here, I don't think they know how to get there yet, but they're at least aware of the fact that they need to understand how to build a big ecosystem around them. So if you think back, like, 50 years ago, IBM and compute was the center of everything in your company, and then the ecosystem surrounded it. I think today, with digital transformation being network-centric, the telcos actually have the opportunity to be that center of excellence and then build an ecosystem around them. I think the SIs are actually in a really interesting place to help them do that, 'cause they understand everything top to bottom that, you know, pre-pandemic, I'm not sure the telcos really understood. I think they understand it today; I'm just not sure they know how to get there. >> Sarbjeet? >> I've seen a lot of RAN demos and testing companies, and I'm amazed by it. Everything is turning into software, almost everything. The parts which are not turned into software, I mean, they will be soon. But everybody says that we need the hardware to run something, right?
But that hardware, in my view, is getting miniaturized; it's becoming smaller and smaller. The antennas are becoming smaller. The equipment is getting smaller. That means the cost of the physicality of the assets is going down. But the cost on the software side will go up for telcos in the future. And telco is a messy business. Not everybody can do it. So only a few will survive, I believe. So that's what- >> Software-defined telco. So I'm on a mission. I'm looking for the monetization path. And what I haven't seen yet is, you know, you want to follow the money; follow the data, I say. So for the next two days, I'm going to be looking for that data play, that potential, the way in which this industry is going to break down the data silos. I think there's a potential goldmine there, but I haven't figured it out yet. >> That's a subject for another day. >> Guys, thanks so much for coming on. You guys are extraordinary partners and friends of theCUBE, and great analysts. Congratulations, and thank you for all you do. Really appreciate it. >> Analyst: Thank you. >> Thanks a lot. >> All right, this is a wrap on day two of MWC 23. Go to siliconangle.com for all the news, where Rob Hope and team are covering all the news. John Furrier is in the Palo Alto studio. We're rocking all that news, taking all that news and putting it on video. Go to theCUBE.net, and you'll see everything on demand. Thanks for watching. This is a wrap on day two. We'll see you tomorrow. (soft music)
SUMMARY :
that drive human progress. Good to see you again, And so, in the past, we had technologies have evolved in the last five years? is that the users often don't even know So embedding the connectivity somewhere along the line. at the Super Bowl this year, I'm not familiar with it. for the coaches to talk to the sidelines. you can't use your cell. Okay, so, but so the innovation of the practitioners, if you will. I mean, all the blockchain developers that how the network responds, embed WebEx in the car. Like, isn't there more that we can do? You noticed down on the SKT Especially if you're in there. I don't know if you guys So one of the issues around the, I mean, in the enterprise- I think 10 to 15%. It's 10 to 12%, something like that. Okay, so if it's So at the surface, you go, control the cores. That's the thing. And that obviously has to change and to networks as well. the economics point of view. I mean, some of that stuff's I want to ask you P5G is going to basically replace wireless Pat Gelsinger said that is what he said, right? Analyst: Wifi's too to embed really well. So a lot of businesses Well, it does for the most part. For the most part. And that's going to disrupt it. and you got some pretty it does not make sense to bring private. So where's it going to be used? The cranes in the port are You could apply that to airports. I think it's a perfect use case for that. But you don't need 5G to do that. in and out all the time. Yeah, but you're coming See, we can't even agree on that. Yeah, I love it. I believe the reliability of connection and the 5G stuff will be good. I tear out all the wifi? that's the best technology. And I do think wifi 5G We talked about that yesterday. I still don't see the path. to help you support secure from the old NPLS networks, But don't you think So it's not to be sniffed at. the main stage, you know? 
the GSMA work with like AWS are not in the business You know, the telcos Let's go around the horn here, of all the participants that they need to understand But the cost on the the data silos I think there's and thank you for all you do. John Furrier is in the Palo Alto studio.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Vellante | PERSON | 0.99+ |
David Nicholson | PERSON | 0.99+ |
Chris Lewis | PERSON | 0.99+ |
Chris | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
10 | QUANTITY | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Mercedes | ORGANIZATION | 0.99+ |
Zeus Kerravala | PERSON | 0.99+ |
Pat Gelsinger | PERSON | 0.99+ |
Nokia | ORGANIZATION | 0.99+ |
50 bucks | QUANTITY | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
UK | LOCATION | 0.99+ |
Z. | PERSON | 0.99+ |
10 switches | QUANTITY | 0.99+ |
Sysco | ORGANIZATION | 0.99+ |
Verizon | ORGANIZATION | 0.99+ |
2000 bucks | QUANTITY | 0.99+ |
HP | ORGANIZATION | 0.99+ |
Cleveland Browns | ORGANIZATION | 0.99+ |
30% | QUANTITY | 0.99+ |
Spain | LOCATION | 0.99+ |
20% | QUANTITY | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
10% | QUANTITY | 0.99+ |
telco | ORGANIZATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
two switches | QUANTITY | 0.99+ |
HPE | ORGANIZATION | 0.99+ |
80,000 people | QUANTITY | 0.99+ |
Athenet | ORGANIZATION | 0.99+ |
tomorrow | DATE | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
John Davidson | PERSON | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
Super Bowl | EVENT | 0.99+ |
Netflix | ORGANIZATION | 0.99+ |
Dee Arthur | PERSON | 0.99+ |
G2 | ORGANIZATION | 0.99+ |
Zeus | ORGANIZATION | 0.99+ |
Dell Technologies | ORGANIZATION | 0.99+ |
15% | QUANTITY | 0.99+ |
Rob Hope | PERSON | 0.99+ |
five years ago | DATE | 0.99+ |
yesterday | DATE | 0.99+ |
first time | QUANTITY | 0.99+ |
California | LOCATION | 0.99+ |
siliconangle.com | OTHER | 0.99+ |
MWC23 | LOCATION | 0.99+ |
SKT | ORGANIZATION | 0.99+ |
theCUBE.net | OTHER | 0.99+ |
12% | QUANTITY | 0.98+ |
GSMA | ORGANIZATION | 0.98+ |
Eight APIs | QUANTITY | 0.98+ |
Danielle | PERSON | 0.98+ |
Telco | ORGANIZATION | 0.98+ |
today | DATE | 0.98+ |
eight APIs | QUANTITY | 0.98+ |
5G | ORGANIZATION | 0.98+ |
telcos | ORGANIZATION | 0.98+ |
three friends | QUANTITY | 0.98+ |
one | QUANTITY | 0.97+ |
first | QUANTITY | 0.97+ |
Mobile World Congress | EVENT | 0.97+ |
CapEx | ORGANIZATION | 0.97+ |
50 years ago | DATE | 0.97+ |
day two | QUANTITY | 0.97+ |
theCUBE | ORGANIZATION | 0.97+ |
four tickets | QUANTITY | 0.96+ |
a day and a half | QUANTITY | 0.96+ |
MWC | EVENT | 0.96+ |
TheCUBE | ORGANIZATION | 0.96+ |
pandemic | EVENT | 0.95+ |
Zeus | PERSON | 0.95+ |
Dave Duggal, EnterpriseWeb & Azhar Sayeed, Red Hat | MWC Barcelona 2023
>> theCUBE's live coverage is made possible by funding from Dell Technologies. Creating technologies that drive human progress. (ambient music) >> Lisa: Hey everyone, welcome back to Barcelona, Spain. It's theCUBE Live at MWC 23, Lisa Martin with Dave Vellante. This is day two of four days of cube coverage, but you know that, because you've already been watching yesterday and today. We're going to have a great conversation next with EnterpriseWeb and Red Hat. We've had great conversations the last day and a half about the Telco industry, the challenges, the opportunities. We're going to unpack that from this lens. Please welcome Dave Duggal, founder and CEO of EnterpriseWeb, and Azhar Sayeed, Senior Director of Solution Architecture at Red Hat. >> Guys, it's great to have you on the program. >> Yes. >> Thank you, Lisa. >> Great being here with you. >> Dave, let's go ahead and start with you. Give the audience an overview of EnterpriseWeb. What kind of business is it? What's the business model? What do you guys do? >> Okay so, EnterpriseWeb is reinventing middleware, right? So the historic middleware was to build vertically integrated stacks, right? And those stacks are now becoming the rate limiters for interoperability, for the end-to-end solutions that everybody's looking for, right? Red Hat's talking about the unified platform. You guys are talking about Supercloud. EnterpriseWeb addresses that: we've built middleware based on serverless architecture, so lightweight, low-latency, high-performance middleware. And we're working with the world's biggest, we sell through channels and we work through partners like Red Hat, Intel, Fortinet, Keysight, Tech Mahindra. So working with some of the biggest players that have recognized the value of our innovation, to deliver transformation to the Telecom industry. >> So what are you guys doing together? Is this an OpenShift play? Is it? >> Yeah.
>> Yeah, so we've got two projects right here on the floor at MWC with the various partners, where EnterpriseWeb is actually providing an application layer, sorry, application middleware over Red Hat's OpenShift, and we're essentially generating operators, Red Hat operators, so that all the vendors that we onboard into our catalog can be deployed easily through the OpenShift platform. And we allow those vendors to be flexibly composed into network services. So the real challenge for operators historically is that they have challenges onboarding the vendors. It takes a long time. Each one of them is a snowflake. You know, even though there are standards, they don't all observe or follow the same standards. So we make it easier using models, right? In a model-driven process, to streamline that onboarding process, compose functions into services, deploy those services seamlessly through Red Hat's OpenShift, and then manage the lifecycle, like the quality of service and the SLAs, for those services. >> So Red Hat obviously has a pretty prominent Telco business, has for a while. Red Hat OpenStack is actually pretty popular within the Telco business. People thought, "Oh, OpenStack, that's dead." Actually, no, it's actually doing quite well. We see it all over the place, where for whatever reason people want to build their own cloud. And so what's happening in the industry? Because you have the traditional Telcos, we heard in the keynotes that kind of typical narrative about, you know, we can't let the over-the-top vendors do this again. We're going to API-ify everything, we're going to monetize this time around, not just with connectivity, but the fact is they really don't have a developer community. >> Yes. >> Yet anyway. >> Then you have these disruptors over here that are saying, "Yeah, we're going to enable ISVs." How do you see it? What's the landscape look like?
Help us understand, you know, what the horses on the track are doing. >> Sure. I think what has happened, Dave, is that the conversation has moved a little bit from where they were just looking at IaaS, infrastructure as a service, with virtual machines and OpenStack, as you mentioned, to how do we move up the value chain and look at different applications. And therein comes the rub, right? You have applications with different requirements, IT and network applications that have various different requirements. So as you start to build those cloud platforms, as you start to modernize that set of applications, you then start to look at microservices and how you build them. You need the ability to orchestrate them. So some of those problem statements have moved from not just refactoring those applications, but actually now to how do you reliably deploy and manage in a multicloud, multi-cluster way. So this conversation around Supercloud, or this conversation around multicloud, is very- >> You could say Supercloud. That's okay. >> (Dave Duggal and Azhar laugh) >> It's absolutely very real though. The reason why it's very real is, if you look at transformations around Telco, there are two things that are happening. One, Telco IT, they're looking at partnerships with hybrid cloud, I mean with public cloud players, to build a hybrid environment. They're also building their own Telco Cloud environment for their network functions. Now, in both of those spaces, they end up operating two to three different environments themselves. Now how do you create a level of abstraction across those? How do you manage that particular infrastructure? And then how do you orchestrate all of those different workloads? Those are the types of problems that they're actually beginning to solve. So they've moved on from really just virtualizing their applications and putting them on OpenStack, to now really seriously looking at "How do I build a service?"
"How do I leverage the catalog that's available both in my private and public clouds and build an overall service process?" >> And by the way, what you just described as hybrid cloud and multicloud is, you know, Supercloud is what multicloud should have been. And what it originally became is "I run on this cloud and I run on this cloud" and "I run on this cloud and I have a hybrid." And Supercloud is meant to create a common experience across those clouds. >> Dave Duggal: Right? >> Thanks to, you know, Supercloud middleware. >> Yeah. >> Right? And so that's what you guys do. >> Yeah, exactly. Exactly. Dave, I mean, even the name EnterpriseWeb, you know, we started from looking from the application layer down. If you look at it, the last 10 years we've looked from the infrastructure up, right? And now everybody's looking northbound saying, "You know what, actually, if I look from the infrastructure up, the only thing I'll ever build is silos, right?" And those silos get in the way of the interoperability and the agility the businesses want. So we take the perspective of high-level abstractions, common tools, so that if I'm a CXO, I can look down on my environments, right? Honestly, if I'm a CEO or a CXO, I don't really care so much about my infrastructure, to be honest. I care about my applications and their behavior. I care about my SLAs and my quality of service, right? Those are the things I care about. So I really want an EnterpriseWeb, right? Something that helps me connect all my distributed applications all across all of the environments, so I can have one place, a consistency layer, that speaks a common language. We know that there's a lot of heterogeneity down all those layers and a lot of complexity down those layers. But the business doesn't care. They don't want to care, right?
They want to actually take their applications and deploy them where they're the most performant, where they're getting the best cost, right? The lowest, and maybe sustainability concerns, all those. They want to address those problems, meet their SLAs, meet their quality of service. And you know what, if it's running on Amazon, great. If it's running on Google Cloud Platform, great. You know, one project we're demonstrating right here is with Amazon, Tech Mahindra, and OpenShift, where we took a disaggregated 5G core, right? So this is like sort of the latest telecom, you know, networking software, right? We're deploying elements of that network core across Amazon EKS, OpenShift on Red Hat ROSA, as well as just OpenShift for cloud. And through a single pane of deployment and management, we deployed the elements of the 5G core across them and then connected them in an end-to-end process. That's Telco Supercloud. >> Dave Vellante: So that's an O-RAN deployment. >> Yeah, that's- >> So, the big advantage of that, pardon me, Dave, but the big advantage of that is the customer really doesn't care where the components are being served from, for them. It's a 5G capability. It happens to sit in different locations. And it's about how do you abstract and how do you manage all those different workloads in a cohesive way? And that's exactly what EnterpriseWeb is bringing to the table. And what we do is we abstract the underlying infrastructure, which is the cloud layer. Because the AWS operating environment is different than the private cloud operating environment, than the Azure environment; the way the networking is set up is different in each one of them. If there is a way you can abstract all of that and present it in a common operating model, it becomes a lot easier for anybody to be able to consume. >> And what a lot of customers tell me is the way they deal with multicloud complexity is they go with mono cloud, right?
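The multi-cluster placement Duggal describes, each 5G core element landing on whichever hosting environment meets its requirements through a single pane of management, can be sketched in miniature. This is a toy model, not EnterpriseWeb's actual engine; the cluster names, latency figures, and budgets below are invented for illustration.

```python
# Illustrative capability map for the three hosting environments named in the demo.
clusters = {
    "amazon-eks":     {"latency_ms": 40},  # regional public cloud
    "openshift-rosa": {"latency_ms": 35},  # regional managed OpenShift
    "openshift-edge": {"latency_ms": 5},   # far-edge OpenShift footprint
}

# Disaggregated 5G core elements with a rough tolerable-latency budget (ms).
elements = {"upf": 10, "amf": 50, "smf": 50}

def place(elements, clusters):
    """For each element, pick the highest-latency cluster that still meets its
    budget, so the scarce far-edge capacity is reserved for what needs it."""
    plan = {}
    for name, budget in elements.items():
        for cluster, props in sorted(clusters.items(),
                                     key=lambda kv: -kv[1]["latency_ms"]):
            if props["latency_ms"] <= budget:
                plan[name] = cluster
                break
    return plan

plan = place(elements, clusters)
```

With these numbers the user plane function ends up at the far edge while the control-plane functions stay in region. A real placement engine would also weigh cost, capacity, and affinity, but the shape is the same: declarative requirements in, a deployment plan out.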
And so they'll lose out on some of the best services. >> Absolutely. >> If best of breed matters, that's not ideal, but at the end of the day, agreed, developers don't want to muck with all the plumbing. >> Dave Duggal: Yep. >> They want to write code. >> Azhar: Correct. >> So I come back to: are the traditional Telcos leaning in in a way that they're going to enable ISVs and developers to write on top of those platforms? Or are there sort of new entrants and disruptors? And I know the answer is both- >> Dave Duggal: Yep. >> but I feel as though the traditional Telcos still haven't tuned in to that developer affinity, but you guys sell to them. What are you seeing? >> Yeah, so what we have seen is that Telcos fall into several categories there. If you look at the most mature ones, you know, they are very eager to move up the value chain. There are some smaller, very nimble ones that are actually doing something really interesting. For example, they've provided sandbox environments to developers to say, "Go develop your applications in the sandbox environment. We'll use that to build a new service with you." I can give you some interesting examples across the globe where that is happening, right? In AsiaPac, particularly in Australia, the ANZ region, there are a couple of providers who have done this in a very interesting way. But the challenge for them, why it's not completely open or public yet, is primarily because they haven't figured out how to exactly monetize that. And that's the reason why. So in the absence of that, what will happen is they have to rely on the ISV ecosystem to build those capabilities, which they can then bring on as part of the catalog. But in Latin America, I was talking to one of the providers and they said, "Well look, we have a public cloud, we have our own public cloud, right?"
What we want to do is use that to offer localized services, not just bring everything in from the top. >> But we heard from Ericsson's CEO they're basically going to monetize it by what I call "gouging" the developers >> (Azhar laughs) >> for access to the network telemetry, as opposed to saying, "Hey, here's an open platform, develop on top of it, and we'll maybe create something like an app store and take a piece of the action." >> So ours, to me, is a better model. >> Yeah. So that's perfect. Our second project that we're showing here is with Intel, right? So Intel came to us 'cause they have a reputation for doing advanced automation solutions. They gave us carte blanche in their labs. So this is Intel Network Builders; they said pick your partners. And we went with Red Hat, Fortinet, Keysight, and this company KX doing AI/ML. But to address your DevX point, Intel explicitly wants to get closer to the developers by exposing their APIs, open APIs, over their infrastructure. Just like Red Hat has APIs, right? And so they can expose them northbound to developers, so developers can leverage and tune their applications, right? But the challenge there is that what Intel is doing at the low-level network infrastructure, right, is fundamentally complex, right? What you want is an abstraction layer, and this gets to your point, Dave, where you just said, "The developers just want to get their job done," or really, they want to focus on the business logic and accelerate that service delivery, right? So the idea here is, in EnterpriseWeb, they can literally declaratively compose their services, express their intent: "I want this to run optimized for low latency. I want this to run optimized for energy consumption." Right? And that's all they say, right? That's a very high-level statement. And then the runtime translates it between all the elements that are participating in that service to realize the developer's intent, right? No hands, right? Zero touch, right?
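The intent-based flow described here, a one-line declaration that the runtime expands into per-element settings, can be sketched as a lookup-and-fan-out. The profile names and settings below are invented for illustration; they are not EnterpriseWeb's or Intel's actual APIs.

```python
# Invented intent profiles; a real runtime would derive these from models.
PROFILES = {
    "low-latency": {"cpu_pinning": True,  "hugepages": True,   "placement": "far-edge"},
    "low-energy":  {"cpu_pinning": False, "sleep_states": True, "placement": "region"},
}

def translate_intent(intent: str, elements: list) -> dict:
    """Expand one high-level intent into per-element settings: the developer
    states the 'what'; the runtime fans out the 'how' to every element."""
    settings = PROFILES[intent]
    return {element: dict(settings) for element in elements}

# One declarative statement, applied to every element in the service chain.
config = translate_intent("low-latency", ["upf", "ran-du"])
```

The point of the sketch is the shape of the contract: the developer never touches `cpu_pinning` or placement directly, only the intent keyword.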
So that's now a movement in telecom. So you're right, it's taking a while, because these are pretty fundamental shifts, right? But it's intent-based networking, right? So it's almost two parts, right? One is you have to have the open APIs, right? The infrastructure has to expose its capabilities. Then you need abstractions over the top that make it simple for developers to, you know, make use of them. >> See, one of the demonstrations we are doing is around AIOps. And I've literally had, here on this floor, two conversations around what I call network as a platform. Although it sounds like a cliche term, that's exactly what Dave was describing in terms of exposing APIs from the infrastructure and utilizing them. So once you get that data, now you can do analytics and machine learning to build models and figure out how you can orchestrate better, how you can monetize better, how you can utilize better, right? So all of those things become important. It's not just about internal optimization, but it's also about how do you expose it to the third-party ecosystem to translate that into better delivery mechanisms or IoT capability and so on. >> But if they're going to charge me for every API call in the network, I'm going to go broke. (team laughs) >> And I'm going to get really pissed. I mean, I feel like I'm just running down the list: Oracle, IBM tried it. Oracle, okay, they got Java, but they don't have developer jobs. VMware, okay? They got Aria. EMC used to have a thing called code. IBM had to buy Red Hat to get to the developer community. (Lisa laughs) >> So I feel like the telcos don't today have those developer shops. So they have to partner. >> Azhar: Yes. >> With guys like you, and then be more open, and let a zillion flowers bloom, or else they're going to get disrupted in a big way, and it's going to be a repeat of the over-the-top, in a different model that I can't predict. >> Yeah.
>> Absolutely true. I mean, look, they cannot be in the connectivity business. Telcos cannot be just in the connectivity business. It's, I think so, you know, >> Dave Vellante: You had a fry a frozen hand (Dave Duggal laughs) >> off that, you know. >> Well, you know, think about it, they almost have to become over the top on themselves, right? That's what the cloud guys are doing, right? >> Yeah. >> They're riding over their backbone; by creating a high-level abstraction, they in turn abstract away the infrastructure underneath them, right? And that's really the end game, >> Right? >> Dave Vellante: Yeah. >> because now they're over the top. It's their network, it's their infrastructure, right? They don't want to become bit pipes. >> Yep. >> Now they can take OpenShift, run that in any cloud. >> Yep. >> Right? >> You can run that in hybrid cloud; EnterpriseWeb can do the application layer configuration and management. And together we're running, you know, OSI layers one through seven, east to west, north to south. We're running across the RAN, the core, and the transport. And that is telco Supercloud, my friend. >> Yeah. Well, >> (Dave Duggal laughs) >> I'm dominating the conversation 'cause I love talking Supercloud. >> I knew you would. >> So speaking of superpowers, when you're in customer or prospective customer conversations with providers, and obviously they're in this transformative state right now, what do you describe as the superpower between Red Hat and EnterpriseWeb in terms of really helping these Telcos transform? But at the end of the day, the connectivity's there, and the end user gets what they want, which is "I want this to work wherever I am." >> Yeah, yeah. That's a great question, Lisa. So I think the way you could look at it is most software has evolved to be specialized, right? So Telco is no different, right? We have this in the enterprise, right?
All these specialized stacks, all these components that they wire together. And you can think of Telco as a sort of superset of enterprise problems, right? They have all those problems, magnified manyfold, right? And so you have specialized, let's say, orchestrators and other tools for every Telco domain, for every Telco layer. Now you have a zoo of orchestrators, right? None of them were designed to work together, right? They all speak a specific language, let's say, quote unquote, for doing a specific purpose. But everything that's interesting in the 21st century is across layers and across domains, right? Siloed, static applications, those are dead, right? Nobody's doing those anymore. Even developers don't do those; developers are doing composition today. Nobody wants to hear about 6 million lines of code, right? They want to hear, "How did you take these five things and bring 'em together for productive use?" >> Lisa: Right. How did you deliver faster for my enterprise? How did you save me money? How did you create business value? And that's what we're doing together. >> I mean, just to add on to Dave, I was talking to one of the providers; they have more than 30,000 nodes in their infrastructure. When I say nodes, I mean your servers running, you know, Kubernetes, running OpenStack, running different components. If you try managing that as one single entity, if you will, it's not possible. You've got to fragment, you've got to segment in some way. Now the question is, if you are not exposing that particular infrastructure and the appropriate KPIs and appropriate things, you will not be able to efficiently utilize that across the board. So you need almost a construct that creates like a manager of managers, a hierarchical structure, which would allow you to be more intelligent in terms of how you place those, how you manage that.
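The manager-of-managers construct Azhar describes, a hierarchy that rolls per-node KPIs up through domains so a fleet of tens of thousands of nodes can be reasoned about at each level, can be sketched as a recursive aggregation. The fleet layout and utilization numbers below are made up for illustration.

```python
def rollup(tree):
    """Aggregate KPIs up a hierarchy. Leaves are per-node utilization numbers;
    inner dicts are domains (sites, regions). Returns (node_count, mean_util)."""
    if isinstance(tree, (int, float)):
        return 1, float(tree)
    count, total = 0, 0.0
    for child in tree.values():
        child_count, child_mean = rollup(child)
        count += child_count
        total += child_mean * child_count  # weight each domain by its size
    return count, total / count

# Made-up fleet: two domains, a handful of nodes reporting CPU utilization (%).
fleet = {
    "core":     {"site-a": {"n1": 60, "n2": 70}},
    "far-edge": {"cell-1": {"n3": 20}, "cell-2": {"n4": 30, "n5": 40}},
}
nodes, avg_util = rollup(fleet)
```

Each intermediate call is a "manager" summarizing its domain; the top call is the manager of managers. The same recursion gives you a per-site, per-region, or fleet-wide view without any layer ever touching all 30,000 nodes at once.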
And so when you ask the question about what's the secret sauce between the two, well this is exactly where EnterpriseWeb brings in that capability to analyze information, be more intelligent about it. And what we do is provide an abstraction of the cloud layer so that they can, you know, then do the right job in terms of making sure that it's appropriate and it's consistent. >> Consistency is key. Guys, thank you so much. It's been a pleasure really digging through EnterpriseWeb. >> Thank you. >> What you're doing >> with Red Hat. How you're helping the organization transform and Supercloud, we can't forget Supercloud. (Dave Vellante laughs) >> Fight Supercloud. Guys, thank you so much for your time. >> Thank you so much Lisa. >> Thank you. >> Thank you guys. >> Very nice. >> Lisa: We really appreciate it. >> For our guests and for Dave Vellante, I'm Lisa Martin. You're watching theCUBE, the leader in live tech coverage coming to you live from MWC 23. We'll be back after a short break.
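As a rough illustration of the model-driven onboarding flow described in this interview, translating a vendor's declarative model into a custom resource that a Kubernetes operator could then reconcile on OpenShift, here is a minimal sketch. The API group, kind, and fields are hypothetical, not EnterpriseWeb's or Red Hat's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class VendorModel:
    """Declarative model of a vendor network function (hypothetical schema)."""
    name: str
    image: str
    ports: list = field(default_factory=list)

def generate_manifest(model: VendorModel) -> dict:
    """Translate the model into a Kubernetes-style custom resource; generating
    one manifest shape per vendor is what tames the 'every vendor is a
    snowflake' problem. Group and kind names here are invented."""
    return {
        "apiVersion": "example.middleware.io/v1alpha1",
        "kind": "NetworkFunction",
        "metadata": {"name": model.name},
        "spec": {"image": model.image, "ports": model.ports},
    }

manifest = generate_manifest(VendorModel("upf", "registry.example/upf:1.0", [8805]))
```

The design point is that vendors describe themselves once, declaratively, and the platform derives the deployment artifact, rather than each vendor hand-writing its own snowflake integration.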
SUMMARY :
Dave Duggal, founder and CEO of EnterpriseWeb, and Azhar Sayeed, Senior Director of Solution Architecture at Red Hat, join theCUBE at MWC 23 to discuss reinventing middleware for Telco. EnterpriseWeb provides serverless, model-driven application middleware over Red Hat OpenShift, generating operators so that vendors can be onboarded into a catalog, composed into network services, and managed through their lifecycle. The pair describe two joint demos: a disaggregated 5G core deployed and connected end-to-end across Amazon EKS, Red Hat ROSA, and OpenShift with Tech Mahindra, and an Intel Network Builders project with Red Hat, Fortinet, Keysight, and KX. Themes include Telco Supercloud, intent-based networking, network as a platform, AIOps, hierarchical management of large node fleets, and how operators can move up the value chain rather than becoming bit pipes.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave | PERSON | 0.99+ |
Lisa | PERSON | 0.99+ |
Dave Duggal | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Telcos | ORGANIZATION | 0.99+ |
Telco | ORGANIZATION | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
Fortnet | ORGANIZATION | 0.99+ |
Keysight | ORGANIZATION | 0.99+ |
EnterpriseWeb | ORGANIZATION | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
two | QUANTITY | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
21st century | DATE | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
two projects | QUANTITY | 0.99+ |
Telcos' | ORGANIZATION | 0.99+ |
Latin America | LOCATION | 0.99+ |
EMC | ORGANIZATION | 0.99+ |
Dave Daggul | PERSON | 0.99+ |
Dell Technologies | ORGANIZATION | 0.99+ |
second project | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
today | DATE | 0.99+ |
Fort Net | ORGANIZATION | 0.99+ |
Barcelona, Spain | LOCATION | 0.99+ |
telco | ORGANIZATION | 0.99+ |
more than 30,000 nodes | QUANTITY | 0.99+ |
two things | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
OpenShift | TITLE | 0.99+ |
Java | TITLE | 0.99+ |
three | QUANTITY | 0.99+ |
KX | ORGANIZATION | 0.99+ |
Azhar Sayeed | PERSON | 0.98+ |
One | QUANTITY | 0.98+ |
Tech Mahindra | ORGANIZATION | 0.98+ |
two conversations | QUANTITY | 0.98+ |
yesterday | DATE | 0.98+ |
five things | QUANTITY | 0.98+ |
telcos | ORGANIZATION | 0.97+ |
four days | QUANTITY | 0.97+ |
Azhar | PERSON | 0.97+ |
Yousef Khalidi, Microsoft & Dennis Hoffman, Dell Technologies | MWC Barcelona 2023
>> Narrator: theCUBE's live coverage is made possible by funding from Dell Technologies, creating technologies that drive human progress. (upbeat music) >> Welcome back to the Fira in Barcelona. This is Dave Vellante with David Nicholson. Lisa Martin is also here. This is day two of our coverage of MWC 23 on theCUBE. We're super excited. We're in between halls four and five. Stop by if you're here. Dennis Hoffman is here. He's the senior vice president and general manager of the Telecom systems business at Dell Technologies, and he's joined by Yousef Khalidi, who's the corporate vice president of Azure for Operators from Microsoft. Gents, welcome. >> Thanks, Dave. >> Thank you. >> So we saw Satya in the keynote. He wired in. We saw T.K. come in. No AWS. I don't know. They're maybe not part of the show, but maybe next year they'll figure it out. >> Indeed, indeed. >> Lots of stuff happened in Telecom, but the Azure operator distributed service is the big news you guys got here. What's that all about? >> Oh, first of all, we changed the name. >> Oh, you did? >> You did? >> Oh, yeah. We have a real name now. It's called the Azure Operator Nexus. >> Oh, I like Nexus better than that. >> David: That's much better, much better. >> Dave: The engineers named it the first time around. >> I wish, long story, but thank you to our marketing team. But seriously, not only did we rename the platform, we expanded the platform. >> Dave: Yeah. >> So it now covers the whole spectrum from the far-edge to the public cloud, including the near-edge as well. So essentially, it's a hybrid platform that can also run network functions. So all these operators around you, they now have a platform which combines cloud technologies with the choice of where they want to run, optimized for the network.
We've seen this movie before, but Dennis, there are differences, right? I mean, you didn't really have engineered systems in the 90s. You didn't have those integration points. You really didn't have the public cloud, you didn't have AI. >> Right. >> So you have all those new powers that you can tap, so give us the update from your perspective, having now spent a day and a half here. What's the vibe, what's the buzz, and what's your take on everything? >> Yeah, I think to build on what Yousef said, there's a lot going on with people still trying to figure out exactly how to architect the Telecom network of the future. They know it's got to have a lot to do with cloud. It does have some pretty significant differences, one of those being, there's definitely got to be a hybrid component because there are pieces of the Telecom network that even when modernized will not end up centralized, right? They're going to be highly distributed. I would say though, you know, we took away two things, yesterday, from all the meetings. One, people are done, I think the network operators are done, questioning technology readiness. They're now beginning to wrestle with operationalization of it all, right? So it's like, okay, it's here. I can in fact build a modern network in a very cloud native way, but I've got to figure out how to do that all. And another big part of it is the ecosystem and certainly the partnership long standing between Dell and Microsoft which we're extending into this space is part of that, making it easier on people to actually acquire, deploy, and importantly, support these new technologies. >> So a lot of the traditional carriers, like you said, they're sort of beyond the technology readiness. Jose Maria Alvarez in the keynote said there are three pillars to the future Telecom network. He said low latency, programmable networks, and then cloud and edge, kind of threw that in. You agree with that, Yousef? 
(Dave and Yousef speaking at the same time) >> I mean, we've been talking for years about the cloud and edge. >> Yeah. >> Satya for years had the same graphic. We still have it. Today, we have expanded the graphic a bit to include the network as well, because you can't have a cloud without connectivity, but this is very, very much true.
>> Dave: Help him out. Help him out, help him out. So if, you know, they're complaining about CapEx, they're highly regulated, right, they want net neutrality but they want to be able to sort of dial up the cost of those using the network. So what would you do? Would you try to disrupt yourself? Would you create a skunkworks? Would you kind of spin off a disruptor? That's a real dilemma for those guys. >> Well, for mobile network operators, the beauty of 5G is it's the first cloud native cellular standard. So I don't know if anybody's throwing these terms around, but 5G SA is standalone, right? >> Dave: Yeah, yeah. >> So for a lot of 'em, it's not a skunkworks. They're just literally saying, "I've got to have a 5G network." And some of 'em are deciding, "I'm going to stand it up all by itself." Now, that's duplicative expense in a lot of ways, but it creates isolation between the two networks. Others are saying, "No, it's got to be NSA. I've got to be able to combine 4G and 5G." And then you're into the brownfield thing. >> That's the hybrid. >> Not hybrid as in cloud, but hybrid as in, you know. >> Yeah, yeah. >> It's a converged network. >> Dave: Yeah, yeah. >> So, you know, I would say for a lot of them, they're adopting, probably rightly so, a wait and see attitude. One thing we haven't talked about, and we've got to get it on the table: their high order bit is resilience. >> Dave: Yeah, totally. >> David: Yeah. >> Right? Can't go down. It's national, secure infrastructure, first responder. >> Indeed. >> Anytime you ask them to embrace any new technology, the first thing that they have to work through in their minds is, you know, "Is the juice worth the squeeze? Like, can I handle the risk?" >> But you're saying they're not questioning the technology. Aren't they questioning ORAN in terms of the quality of service, or are they beyond that? >> Dennis: They're questioning the timing, not the inevitability. >> Okay, so they agree that ORAN is going to be open over time. 
>> At some point, RAN will be cloud native, whether it's ORAN the spec, open RAN the concept, (Yousef speaking indistinctly) >> Yeah. >> Virtual RAN. But yeah, I mean, I think it seems pretty evident at this point that the mainframe will give way to open systems once again. >> Dave: Yeah, yeah, yeah. >> ERAN, ecosystem RAN. >> Any RAN. (Dave laughing) >> You don't have to start with the ORAN where they're inside the house. So as you probably know, our partner AT&T started with the core. >> Dennis: They almost all have. >> And they've been on the virtualization path since 2014 and '15. And what we are working with them on is the hybrid cloud model to expand all the way, if you will, as I mentioned, to the far-edge or the public cloud. So there's a way to be in the brownfield environment, yet jump on the new bandwagon of technology without necessarily taking too much risk, because you're quite right. I mean, resiliency, security, service assurance. I mean, for example, AT&T runs the first responder network for the US on their network, on our platform, and I'm personally very familiar with how high the bar is. So it's doable, but you need to go in stages, of course. >> And they've got to do that integration. >> Yes. >> They do. >> And Yousef made a great point. Like, out of the top 30 largest Telcos by CapEx outside of China, three quarters of them have virtualized their core. So the cloudification, if you will, software definition run on industry standard hardware, embraced cloud native principles, containerized apps, that's happened in the core. It's well accepted. Now it'll just ripple down through the network, which will happen as and when things are faster, better, cheaper. >> Right. >> So as implemented, what does this look like? 
Is it essentially what we used to loosely refer to as Azure Stack software, running with Dell optimized Telecom infrastructure together, sometimes within a BBU, out in a hybrid cloud model communicating back to Azure locations in some cases? Is that what we're looking at? >> Approximately. So you start with the near-edge, okay? So the near-edge lives in the operator's data centers, edges, whatever the case may be, built out of off-the-shelf hardware. Dell is our great partner there, but in principle, it could be a different mix and match. So once you have that true near-edge, then you can think of, "Okay, how can I make sure this environment is as uniform as possible, same APIs, same everything, regardless of the physical location?" And this is key, key for the network function providers and the NEPs, because they need to be able to port once, run everywhere, and it's key for the operator to reduce their costs. You want to teach your workforce, your operations folks, if you will, how to manage this system one time, through automation and so forth. So that is actually an expansion of the Azure capabilities that people are familiar with in a public cloud, projected into different locations. And we have technology called Arc which basically models everything. >> Yeah, yeah. >> So if you have trained your IT side, you are halfway there on how to manage your new network, even though, of course, the network is carrier grade and there's different gear. So yes, what you said, a lot of it is true, but the actual components, whatever they might be running, are carrier grade, highly optimized images, and our solution is not a DIY solution, okay? I know you cater to a wide spectrum here, but we don't believe the proper TCO can be achieved by just putting stuff together yourself. We just published a report with Analysys Mason that shows that our approach will save 36 percent of the cost compared to a DIY approach. >> Dave: What percent? >> 36 percent. 
>> Dave: Of the cost? >> Compared to DIY, which is already cheaper than classical models. >> And there's a long history of fairly failed DIY, right? >> Yeah. >> That preceded this. As in the early days of public cloud, the network operators wrestled with, "Do I have to become one to survive?" >> Dave: Yeah. Right. >> So they all ended up having cloud projects, and by and large, they've all dematerialized in favor of this. >> Yeah, and it's hard for them to really invest at scale. Let me give you an example. So, your biggest tier one operator, without naming anybody, okay, how many developers do they have that can build and maintain an OS image, or can keep track of container technology, or build monitoring at scale? In our company, we have literally thousands of developers doing it already for the cloud, and all we're doing for the operator segment is customizing it and focusing it on the carrier grade aspects of it. So I don't have half a dozen such experts. I literally have a building of developers who can do that, and I'm being literal here. So it's a scale thing. Once you have a product that you can give to multiple people, everybody benefits. >> Dave: Yeah, and the carriers are largely, they're equipment engineers in a large setting. >> Oh, they have a tough job. I always have total respect for what they do. >> Oh totally, and a lot of the work happens, you know, kind of underground, and here they are. >> They are network operators. >> They don't touch. >> It's their business. >> Right, absolutely, and they're good at it. They're really good at it. That's right. You know, you think about it, we love to, you know, poke fun at the big carriers, but think about what happened during the pandemic. When they had us shift everything to remote work, >> Dennis: Yes. >> Landline traffic went through the roof. You didn't even notice. >> Yep. That's very true. >> I mean, that's the example. >> That's very true. 
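Yousef's TCO claim can be made concrete with a quick back-of-the-envelope sketch. The dollar figure below is hypothetical, chosen only for illustration; the one number taken from the conversation is the 36 percent relative saving versus a DIY build that he attributes to the Analysys Mason report.

```python
# Back-of-the-envelope sketch of the managed-platform vs. DIY TCO comparison.
# Only the 36% relative saving comes from the conversation (attributed to an
# Analysys Mason report); the baseline dollar amount is hypothetical.

def managed_tco(diy_tco: float, savings_rate: float = 0.36) -> float:
    """TCO of the pre-integrated managed approach, given a DIY baseline."""
    return diy_tco * (1.0 - savings_rate)

diy = 100_000_000.0  # hypothetical multi-year DIY network TCO, in dollars
managed = managed_tco(diy)
print(f"DIY TCO:     ${diy:>13,.0f}")
print(f"Managed TCO: ${managed:>13,.0f} ({1 - managed / diy:.0%} lower)")
```

Note that, per the exchange above, DIY is itself "already cheaper than classical models," so this saving stacks on top of whatever the operator gained by moving off classical appliance-based builds in the first place.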
>> However, in the future where there's innovation and it's going to be driven by developers, right, that's where the open ecosystem comes in. >> Yousef: Indeed. >> And that's the hard transition for a lot of these folks, because the developers are going to win that with new workloads, new applications that we can't even think of. >> Dennis: Right. And a lot of it is because, if you look at it, strategy hat back on, the fundamental dynamics of the industry are forced investment and flat revenues. >> Dave: Yeah. Right. >> Very true. >> Right? Every few years, a new G comes out. "Man, I got to retool this massive thing, and where I can't do towers, I'm dropping fiber or vice versa." And meanwhile, most diversification efforts into media have failed. They've had to unwind them and resell them. There's a lot of debt in the industry. >> Yousef: Yeah. >> Dennis: And so, they're looking for that next big, adjacent revenue stream and increasingly deciding, "If I don't modernize my network, I can't get it." >> Can't do it. >> Right, and again, what I heard from some of the carriers in the keynote was, "We're going to charge for API access 'cause we have data in the network." Okay, but I feel like there's a lot more innovation beyond that that's going to come from the disruptors. >> Dennis: Oh yeah. >> Yousef: Yes. >> You know, that's going to blow that away, right? And then that may not be the right model. We'll see, you know? I mean, what would Microsoft do? They would say, "Here, here's a platform. Go develop." >> No, I'll tell you. We are actually working with CAMARA and GSMA on the whole API layer. We actually announced a service as well as (indistinct). >> Dave: Yeah, yeah, right. >> And the key there, frankly, in my opinion, are not the disruptors as in operators. It's the ISV community. You want to get developers that can write to a global set of APIs, not per-Telco APIs, such that they can do the innovation. 
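The "global set of APIs, not per-Telco APIs" idea Yousef describes is what the CAMARA project standardizes. As a rough illustration, a payload in the style of CAMARA's Quality on Demand API is sketched below; treat the field names and structure as assumptions that approximate the published pattern, not as a reference, and consult the actual CAMARA specifications before building against them.

```python
import json

# Illustrative only: a CAMARA-style Quality on Demand session request that an
# ISV could send to any operator exposing the same standardized API, instead
# of integrating against a different proprietary API per Telco. Field names
# approximate the CAMARA QoD pattern and are assumptions, not a reference.
def build_qod_session(device_ip: str, qos_profile: str, duration_s: int) -> dict:
    """Build a portable QoD session payload."""
    return {
        "device": {"ipv4Address": {"publicAddress": device_ip}},
        "qosProfile": qos_profile,  # e.g. a named low-latency profile
        "duration": duration_s,     # seconds the boosted QoS should last
    }

payload = build_qod_session("203.0.113.7", "QOS_E", 3600)
print(json.dumps(payload, indent=2))
```

The point of the design is portability: the same payload works against every operator that implements the common API, which is what makes a developer ecosystem possible at all.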
I mean, this is what we've seen in other industries, >> Absolutely. >> Every one that I can think of. >> This is the way they get a slice of that pie, right? The recent history of this industry is one where 4G LTE begot the smartphone and app store era, a bevy of consumer services, and almost every single profit stream went somewhere other than the operator, right? >> Yousef: Someone else. >> So they're looking at this saying, "Okay, 5G is the enterprise G and there's going to be a bevy of applications that are business service related, based on 5G capability, and I can't let the OTT, over the top, thing happen again." >> Right. >> They'll say that. "We cannot let this happen." >> "We can't let this happen again." >> Okay, but how do they, >> Yeah, how do they make that not happen? >> Not let it happen again? >> Eight APIs, Dave. The answer is eight APIs. No, I mean, it's this approach. They need to make it easy to work with people like Yousef and, more importantly, the developer community that people like Yousef and his company have found a way to harness. And by the way, they need to be part of that developer community themselves. >> And they're not, today. They're not speaking that developer language. >> Right. >> It's hard. You know, hey. >> Dennis: Hey, what's the fastest way to sell an enterprise a business service? Resell Azure, Teams, something, right? But that's a resale. >> Yeah, that's a resale thing. >> See, >> That's not their service. >> They also need to free their resources from all the plumbing they do and leave it to us. We are plumbers, okay? >> Dennis: We are proud plumbers. >> We are proud plumbers. I'm a plumber. I keep telling people this thing. We had the same discussion with banks and enterprises 10 years ago, by the way. Don't do the plumbing. Go add value on the top. Retool your workforce to do applications and work with ISVs on the verticals, as opposed to either reselling, which many do, or doing the plumbing. You'd be surprised. 
Traditionally, many operators go, "I want to plumb this thing to get this small interrupts-per-second gain." Like, who cares? >> Well, 'cause they made money on connectivity. >> Yes. >> And we've seen this before. >> And in a world without telephone poles and your cables- >> Hey, if what you have is a hammer, everything's a nail, right? And we sell connectivity services, and that's what we know how to do, both build and sell. And if that's no longer driving a revenue stream sufficient to cover this forced investment march, not to mention Huawei rip-and-replace and government initiatives to pull infrastructure out and accelerate investment, they've got to find new ways. >> I mean, the regulations have been tough, right? They don't go forward and ask for permission. They really can't, right? They have to be much more careful. >> Dennis: It is tough. >> So, we don't mean to sound like it's easy for these guys. >> Dennis: No, it's not. >> But it does require a new mindset, new skillsets, and I think some of 'em are going to figure it out and then pff, the wave, and you guys are going to be riding that wave. >> We're going to try. >> Definitely. Definitely. >> As a veteran of working with both Dell and Microsoft, specifically Azure, on things, I am struck by how well positioned you are in this with Microsoft in particular. Because of Azure's history, coming out of the on-premises world that Microsoft knows so well, there's a natural affinity to the hybrid nature of Telecom. We talk about edge, we talk about hybrid, this is it, absolutely the center of it. So it seems like a- >> Yousef: Indeed. Actually, if you look at the history of Azure, from day one, and I was there from day one, we always spoke of the hybrid model. >> Yeah. >> To your point, we came from the on-premises world. >> David: Right. 
>> And don't get me wrong, I want people to use the public cloud, but I also know, due to physics, regulation, and geopolitical boundaries, there's something called on-prem, something called an edge here. I want to add something else. Remember our deal on how we are partner-centric? We're applying the same playbook here. So, you know, for every dollar we make, so much of it is made by the ecosystem. The same applies here. So we have announced partnerships with Ericsson, Nokia, (indistinct), all the names, and of course with Dell and many others. The ecosystem has to come together, and customers must retain their optionality to run whatever they choose. So it's the same playbook with this. >> And enterprise technology companies are, actually, really good at, you know, decoding the customer, figuring out specific requirements, making some mistakes the first time through, and then eventually getting it right. And as these trends unfold, you know, you're in a good position, I think, as are others, and it's an exciting time for enterprise tech in this industry, you know? >> It really is. >> Indeed. >> Dave: Guys, thanks so much for coming on. >> Thank you. >> Dave: It's great to see you. Have a great rest of the show. >> Thank you. >> Thanks, Dave. Thank you, Dave. >> All right, keep it right there. John Furrier is live in our studio. He's breaking down all the news. Go to siliconangle.com and theCUBE.net. Dave Vellante, David Nicholson and Lisa Martin, we'll be right back from the theater in Barcelona at MWC 23 right after this short break. (relaxing music)