Clemence W. Chee & Christoph Sawade, HelloFresh
(upbeat music) >> Hello everyone. We're here at theCUBE startup showcase made possible by AWS. Thanks so much for joining us today. You know, when Zhamak Dehghani was formulating her ideas around data mesh, she wasn't the only one thinking about decentralized data architectures. HelloFresh was going into hyper-growth mode and realized that in order to support its scale, it needed to rethink how it thought about data. Like many companies that started in the early part of the last decade, HelloFresh relied on a monolithic data architecture, and the internal team had concerns about its ability to support continued innovation at high velocity. The company's data team began to think about the future and work backwards from a target architecture, which possessed many principles of so-called data mesh, even though they didn't use that term specifically. The company is a strong example of an early but practical pioneer of data mesh. Now, there are many practitioners and stakeholders involved in evolving the company's data architecture, many of whom are listed here on this slide. Two are highlighted in red and joining us today. We're really excited to welcome you to theCUBE, Clemence Chee, who is the global senior director for data at HelloFresh, and Christoph Sawade, who's the global senior director of data also of course at HelloFresh. Folks, welcome. Thanks so much for making some time today and sharing your story. >> Thank you very much. >> Thanks, Dave. >> All right, let's start with HelloFresh. You guys are number one in the world in your field. You deliver hundreds of millions of meals each year to many, many millions of people around the globe. You're scaling. Christoph, tell us a little bit more about your company and its vision. >> Yeah. Should I start, or Clemence? Maybe take over the first piece, because Clemence has actually been a director at HelloFresh longer. >> Yeah, go ahead Clemence. >> Yes, approximately six years ago I joined HelloFresh, and I didn't think the startup I was joining would eventually IPO. Just two years later, HelloFresh went public. And approximately three years and 10 months after HelloFresh was listed on the German stock exchange, which was just last week, HelloFresh was included in the DAX, Germany's leading stock market index, and that, to my mind, is a great, great milestone, and I'm really looking forward and very excited for the future of HelloFresh and also our data. The vision that we have is to become the world's leading food solution group. And there are a lot of attractive opportunities. So recently we did launch and expand in Norway. This was in July. And earlier this year, we launched the US brand, Green Chef, in the UK as well. We're committed to continuously launching in different geographies in the coming years and have a strong path ahead of us. With the acquisition of ready-to-eat companies like Factor in the US and the planned acquisition of Youfoodz in Australia, we are diversifying our offer, now reaching even more and more untapped customer segments and increasing our total addressable market. So by offering customers a growing range of different alternatives to shop for food and to consume meals, we are charging towards this vision and this goal to become the world's leading integrated food solutions group. >> Love it. You guys are on a rocket ship. You're really transforming the industry. And as you expand your TAM, it brings us to sort of the data as a core part of that strategy.
So maybe you guys could talk a little bit about your journey as a company, specifically as it relates to your data journey. I mean, you began as a startup, you had a basic architecture and like everyone, you've made extensive use of spreadsheets, you built a Hadoop based system that started to grow. And when the company IPO'd, you really started to explode. So maybe describe that journey from a data perspective. >> Yes, Dave. So by 2015, HelloFresh had evolved what amounts to a classical, centralized data management setup. We grew very organically over the years, and there were a lot of very smart people around the globe, really building the company and building our infrastructure. This also means that there were a small number of internal and external data sources, and a centralized BI team with a number of people producing different reports, different dashboards and products for our executives, for example, or for different operations teams to see the company's performance, and knowledge was transferred just by talking to each other in face-to-face conversations. The people in the data warehouse team were considered the data wizards or the ETL wizards. Very classical challenges, and a very ETL-centric style of data management, right? Our central data warehouse team then was responsible for different types of verticals in different domains, different geographies. And all this setup gave us, in the beginning, the flexibility to grow fast as a company in 2015. >> Christoph, anything to add to that? >> Yes, not explicitly to that one, but as Clemence said, right, this was kind of the setup that actually worked for us for quite a while. And then in 2017, when HelloFresh went public, the company also grew rapidly. And just to give you an idea how that looked, the tech departments actually increased from about 40 people to almost 300 engineers. And in the same way, the business units, as Clemence has described, also grew sustainably. So we continued to launch HelloFresh in new countries, launched new brands like EveryPlate, and also acquired other brands like Factor. And from a data perspective, the number of data requests that the central team was getting became more and more, and also more and more complex. So for the team that meant they had a fairly high mental load. They had to get a very deep understanding of the business, and also suffered a lot from this context switching back and forth. Essentially, they had to prioritize across requests from our physical product, our digital product, from the marketing perspective, and also from the central reporting teams. And in a nutshell, this was very hard for these people, and the solutions that we built in that situation were not really optimal. So in a nutshell, the central function became a bottleneck and slowed down all the innovation of the company. >> It's a classic case. Isn't it? I mean, Clemence, you see the central team becomes a bottleneck, and so the lines of business, the marketing team, sales teams say "Okay, we're going to take things into our own hands." And then of course IT and the technical team is called in later to clean up the mess. Maybe I'm overstating it, but that's a common situation. Isn't it?
>> Yeah, this is exactly what happened. Right. So we had a bottleneck, we had those central teams, there was always a bit of tension. Analytics teams in those business domains like marketing, supply chain, finance, HR and so on then started to really build their own data solutions. At some point you have to get the ball rolling, right, and then continue the trajectory, which meant that the data pipelines didn't meet the engineering standards, and there was an increased need for maintenance and support from central teams. Hence over time, the knowledge about those pipelines and how to maintain a particular infrastructure, for example, left the company, such that most of those data assets and data sets turned into a huge debt with decreasing data quality, decreasing trust and decreasing transparency. And this was an increasing challenge, where a majority of time was spent in meeting rooms to align on data quality, for example. >> Yeah. And the point you were making, Christoph, about context switching, and this is a point that Zhamak makes quite often: we've contextualized our operational systems like our sales systems, our marketing systems, but not our data systems. So you're asking the data team, okay, be an expert in sales, be an expert in marketing, be an expert in logistics, be an expert in supply chain, and it's start, stop, start, stop. It's a paper-cut environment, and it's just not as productive. But the flip side of that is when you think about a centralized organization, you think, hey, this is going to be a very efficient way, a cross-functional team to support the organization, but it's not necessarily the highest velocity, most effective organizational structure. >> Yeah. So I agree with that piece up to a certain scale. A centralized function has a lot of advantages, right? It's one place for everyone to go to, a designated expert team. However, if you would like to accelerate, especially with this type of growth, you actually want to give autonomy to certain teams and move the teams, or let's say the data, to the experts in these teams. And this, as you have mentioned, right, that increases mental load. And you can either internally start splitting your team into different kinds of sub-teams focusing on different areas; however, that is then again just adding another piece where collaboration needs to happen, because those teams are still external to the domains. So why not bridge that gap immediately and actually move these teams end to end into the functions themselves? So maybe just to continue what Clemence was saying, this is actually where Clemence's and my journey started to become one joint journey. Clemence was coming from one of these teams who built their own solutions. I was basically heading the platform team, called the data warehouse team in those days. And in 2019, when this became more and more serious, I would say, more and more people recognized that this model does not really scale. In 2019, basically the leadership of the company came together and identified data as a key strategic asset. And what we mean by that is that if we leverage it in an appropriate way, it gives us a unique competitive advantage, which could help us to support and actually fully automate our decision-making processes across the entire value chain.
So what we're trying to do now, or what we are aiming for, is that HelloFresh is able to build data products that have a purpose. We're moving away from the idea that data is just a by-product. We have a purpose why we would like to collect this data; there's a clear business need behind that. And because it's so important for the company as a business, we also want to provide it as a trustworthy asset to the rest of the organization. I wouldn't say it's the best customer experience yet, but at least in a way that users can easily discover, understand and securely access high-quality data. >> Yeah. And Clemence, when you see Zhamak's writing, you see, you know, she has the four pillars and the principles. As practitioners, you look at that and say, okay, hey, that's pretty good thinking. And then now we have to apply it. And that's where the devil meets the details. So it's the four: the decentralized data ownership, data as a product, which we'll talk about a little bit, self-serve, which you guys have spent a lot of time on, and Clemence, your wheelhouse, which is governance and a federated governance model. And it's almost like if you achieve the first two, then you have to solve for the second two; it almost creates new challenges. But maybe you could talk about that a little bit as to how it relates to HelloFresh. >> Yes. So Christoph has mentioned that we identified kind of a challenge beforehand and said, how can we actually decentralize and actually empower our different colleagues? And we realized that it was more an organizational or a cultural change. And this is something that someone also mentioned, I think ThoughtWorks mentioned it in one of the white papers: it's more of an organizational or a cultural impact. And we kicked off a phased reorganization, different phases which we are currently still in the middle of, trying to unlock this data at scale. And the idea was really moving away from ever-growing, complex matrix organizations or matrix setups and splitting between two different things. One is value creation. So basically when people ask the question, what can we actually do, what should we do, this is value creation, and the how, which is capability building, and both are equal in authority. This actually creates a strong need for collaboration, and this collaboration breaks up the different silos that were built. And of course, this also includes different staffing needs for teams, staffing more, let's say, data scientists or data engineers, data professionals into those business domains, and hence some more capability building. >> Okay, go ahead. Sorry. >> So back to Zhamak Dehghani. The idea also then crossed over when she published her paper in May 2019. And we thought, well, the four pillars that she described were around decentralized data ownership, a data-as-a-product mindset, a self-service infrastructure, and, as you mentioned, federated computational governance. And this fit very much with our thinking at that point in time to reorganize the different teams, and this then led not only to an organizational restructure, but also to a completely new approach to how we need to manage data. >> Got it. Okay. So your business is exploding.
The data team was having to become domain experts in many areas, constantly context switching, as we said; people started to take things into their own hands. So again, we said, classic story, but you didn't let it get out of control, and that's important. And so we actually have a picture of kind of where you're going today, and it's evolved into this. Pat, if you could bring up the picture with the elephant, here we go. So I will talk a little bit about the architecture. It doesn't show the spreadsheet era here, but Christoph, maybe you could talk about that. It does show the Hadoop monolith, which exists today. I think that's in a managed hosting service, but you preserved that piece of it. But if I understand it correctly, everything is evolving to the cloud. I think you're running a lot of this or all of it in AWS. Everybody's got their own data sources. You've got a data hub, which I think is enabled by a master catalog for discovery, and all this underlying technical infrastructure that is really not the focus of this conversation today. But the key here, if I understand correctly, is that these domains are autonomous, and that this required not only technical thinking, but really a supportive organizational mindset, which we're going to talk about today. But Christoph, maybe you could address, you know, at a high level, some of the architectural evolution that you guys went through. >> Yeah, sure. Maybe it's also a good summary of the entire history. So as you have mentioned, right, we started in the very beginning with a monolith on the operational plane, right? Actually it wasn't just one monolith, it was two, one for the backend and one for the frontend. And our analytical plane was essentially a couple of spreadsheets. And I think there's nothing wrong with spreadsheets: they allow you to store information, to transform data, to share this information, to visualize this data, but all in one single tool, without actually separating concerns, right? And this means that it's obviously not scalable, right? You reach the point where this kind of data management in one tool reaches its limits. So what we started is, we created our data lake, as we have seen here, on Hadoop. And in the very beginning it actually reflected very much our operational plane. On top of that, we used Impala as a data warehouse, but there was not really a distinction between what is our data warehouse and what is our data lake, as Impala was used as kind of the engine both to create the warehouse and on the data lake construct itself. And this organic growth actually led to a situation, as I think is clear now, where we had the centralized model for all the domains, we only loosely followed Kimball modeling standards, and there was no uniformity; we used to build, in-house, layers of materialized views that we used for the presentation layer. There was a lot of duplication of effort, and in the end this essentially led to, as you said, a lack of trust. And this basically was a starting point for us to understand, okay, how can we move away? And there are a lot of different things that we can discuss; apart from this organizational structure that we have set up here, we have the four pillars from Zhamak.
However, there's also the next question around, how do we implement data products, right? What are the implications on that level? And I think that's something where we are currently still in progress. >> Got it. Okay. So I wonder if we could switch gears a little bit and talk about the organizational and cultural challenges that you faced. What were those conversations like? Let's dig into that a little bit. I want to get into governance as well. >> The conversations on the cultural change. I mean, yes, we went through hyper growth over the last years, and obviously there were a lot of new joiners, a lot of different, very, very smart people joining the company, which then meant that collaboration got a bit more difficult. Of course, the time zone changes, different artifacts and recreated documentation flying around. We were basically building the company from scratch, right? Of course, this then always resulted in this tension, which I described before. But the most important part here is that data has always been a very important factor at HelloFresh, and we collected more of this data and continued to use data to improve the different key areas of our business. Even with organizational struggles like the central team struggles, data somehow always helped us to grow through this kind of change, right? In the end, those decentralized teams in our local geographies started with solutions that served the business, which was very, very important. Otherwise, we wouldn't be at the place where we are today, but they did violate best practices and standards. And I always use the sports analogy, Dave. Like in any sport, there are different rules and regulations that need to be followed. These rules are defined by, I'll call it, the sports association, and this is what you can think of as our data governance and our compliance team. Now we add the players to it, who need to follow those rules and abide by them. This is what we then call data management. Now, the different players, the professionals, they also need to be trained and understand the strategy and the rules before they can play. And this is what I then call data literacy. So we realized that we need to focus on helping our teams to develop those capabilities and teach the standards for how work is being done to truly drive functional excellence in the different domains. And one ambition of our data literacy program, for example, is to really empower every employee at HelloFresh, everyone, to make the right data-informed decisions by providing data education that scales, and that can be different things, like including data capabilities in the learning paths, for example, right? So help them to create and deploy data products, connect data producers and data consumers, and create a common understanding of each other's dependencies, which is important. For example, with SLIs, SLOs, data contracts, et cetera, people get more of a sense of ownership and responsibility. Of course, we have to define what that means. What does ownership mean? What does responsibility mean? But we are teaching this to our colleagues via individual learning paths and helping them upskill to also use the shared infrastructure and those self-service data applications. And to summarize, we are still in this process of learning. We're still learning as well.
So learning never stops at HelloFresh, but we are really trying to make it as much fun as possible. And in the end, we all know user behavior is changed through positive experience. So instead of having massive training programs over endless courses and workshops, leaving our new joiners and colleagues confused and overwhelmed, we're applying gamification, right? So we split it into different levels of certification that our colleagues can access. They can earn badges along the way, which then simplifies the process of learning and the engagement of the users. And this is what we see in surveys, for example, where our employees value this gamification approach a lot and are even competing to collect those learning badges, to become number one on the leaderboard. >> I love the gamification. I mean, we've seen it work so well in so many different industries, not the least of which is crypto. So, you've identified some of the process gaps that you saw, and you didn't just gloss over them. Sometimes I say, pave the cow path. You didn't try to force, in other words, a new architecture into the legacy processes; you really had to rethink your approach to data management. So what did that entail? >> To rethink the way of data management, 100%. So if I take the example of a revolution, the industrial revolution or a classical supply chain revolution: just imagine that you have been riding a horse, for example, your whole life, and suddenly you can operate a car, you suddenly receive a completely new way of transporting assets from A to B. So we needed to establish a new set of cross-functional business processes to run faster, drive faster, more robustly, and deliver data products which can be trusted and used by downstream processes and systems. Hence we had a set of new standards and new procedures that fall into the internal data governance and compliance sector. With internal, I'm always referring to the data operations around new things like the data catalog, how to identify ownership, how to change ownership, how to certify data assets, everything around classical software development, which we now apply to data. This is some old and new thinking, right? Deployment, versioning, QA, all the different things, ingestion policies, deletion procedures, all the things that software development has been doing, we do now with data as well. In simple terms, it's a whole redesign of the supply chain of our data, with new procedures and new processes in asset creation, asset management and asset consumption. >> So data's become kind of the new development kit, if you will. I want to shift gears and talk about the notion of data product, and we have a slide that we pulled from your deck, and I'd like to unpack it a little bit. If you can bring that up, I'll read it. "A data product is a product whose primary objective is to leverage on data to solve customer problems, where customers are both internal and external." So pretty straightforward. I know you've gone much deeper in your thinking and into your organization, but how do you think about that, and how do you determine, for instance, who owns what? How did you get everybody to agree? >> I can take that one. Maybe let me start with what a data product is. So I think that's an ongoing debate, right? And I think the debate itself is the important piece here, right? Through the debate you clarify what we actually mean by a product, and what the mindset actually is.
So I think just from a definition perspective, right, I think we found the common denominator that we say, okay, a data product is something which is important for the company, that comes with value. What do we mean by that? It's a solution to a customer problem that ideally delivers maximum value to the business. And yes, it leverages the power of data. And we have a couple of examples at HelloFresh, the historical and classical ones around dashboards, for example, to monitor our error rates, but also more sophisticated ones, for example, incorporating machine learning algorithms into our recipe recommendations. However, I think the important aspects of a data product are, A, there is an owner, right? There's someone accountable for making sure that the product you're providing is actually served and maintained, and there's someone making sure that it actually keeps the value of what we are promising. Combined with that is the idea of proper documentation, like a product description, right, so people understand how to use it and what it is about. And related to that piece is the idea that there's a purpose, right? We need to understand, or ask ourselves, okay, why does this thing exist? Does it provide the value that we think it does? Then it leads to a good understanding of the life cycle of the data product. What do we mean? From the beginning, from the creation, you need to have a good understanding, you need to collect feedback, you need to learn from that, you need to rework, and finally you also need to think about, okay, when is it time to decommission that piece. So overall, I think the core of this data product is product thinking 101, right? The starting point needs to be the problem and not the solution. And this is essentially what we have seen was missing, what brought us to this kind of data spaghetti that we had built there in a rush, essentially. Certain data assets were developed in isolation, and we continuously patched the solution just to fulfill the ad hoc requests that we got, without really understanding what the stakeholder needs. And the interesting piece is that this results in duplication of effort, and this is not just frustrating and probably not the most efficient way the company should work. But also, if I build the same data assets with slightly different assumptions across the company and multiple teams, that leads to data inconsistency. And imagine the following scenario: you, from a management perspective, are asking basically a specific question, and you get, essentially from a couple of different teams, different kinds of graphs, different kinds of data and numbers. And in the end, you do not know which ones to trust. You do not know whether what you are observing is actually just noise, or whether there is actually the signal that you're looking for. And the same if I'm running an AB test, right? I have a new feature, I would like to understand what the business impact of this feature is. I run that with a specific source and, in an unfortunate scenario, your production system is actually running on a different source. You see different numbers; what you have seen in the AB test is actually not what you then see in production, a typical thing.
Then you ask some analytics team to actually do a deep dive, to understand where the discrepancies are coming from, and worst case scenario, again, there's a different kind of source. So in the end, it's a pretty frustrating scenario, and it's actually a waste of people's time to have to identify the root cause of this type of divergence. So in a nutshell, the highest degree of consistency is actually achieved if people are just reusing data assets. And also, in the end, as in the meetup talk we've given, right, we started trying to establish this approach with AB testing. So we have business teams owning their target metrics, and they're providing those as a product also to other services, including the AB testing team. The AB testing team can use this information through a defined interface and say, okay, I'm drawing information from the metadata of an experiment. And in the end, after the assignment, after this data collection phase, they can easily add a graph to a dashboard, just grouped by the AB testing variant. And we have seen that also at other companies, so it's not just a nice dream that we have, right? I have actually seen at other companies, for example in search, where a complete KPI pipeline was established that was computing all this information, and this information was both hosted by the owning team and used for AB testing, deep dives and regular reporting again. So just one last point, the important piece, why I'm coming back to that, is that it requires that we are treating this data as a product, right? If we want to have multiple people using the thing that I am owning and building, we have to provide this as a trustworthy asset, and in a way that it's easy for people to discover and to actually work with. >> Yeah. And coming back to that. So this is, to me, why I get so excited about data mesh, because I really do think it's the right direction for organizations. When people hear data product, they think, "Well, what does that mean?" But then when you start to sort of define it as you did, it's using data to add value: that could be cutting costs, that could be generating revenue, it could be actually directly creating a product that you monetize. So it's sort of in the eyes of the beholder, but I think the other point that we've made, and you made it earlier on too, is, again, context. So when you have a centralized data team and you have all these P&L managers, a lot of times they'll question the data 'cause they don't own it. They're like, "Well, wait a minute." If it doesn't agree with their agenda, they'll attack the data. But if they own the data, then they're responsible for defending that. And that is a mindset change that's really important. And I'm curious how you got to that ownership. Was it top-down, or was somebody providing leadership? Was it more organic, bottom up? Was it a sort of a combination? How did you decide who owned what? In other words, how did you get the business to take ownership of the data, and what does owning the data actually mean? >> That's a very good question, Dave. I think that's one of the pieces where we had a lot of learning, and basically, if you ask me where we would start differently, I think that would be the first piece: really think about how that should be approached, what it means if a team has ownership, right?
That means that the team has the responsibility to host the data assets themselves to minimum acceptable standards, with minimum dependencies up- and downstream. The interesting piece, looking backwards, is that under that definition, the process that we had to go through was not actually transferring ownership from a central team to the other teams, but actually, in most cases, establishing ownership. I make this distinction because saying we have to transfer ownership would erroneously suggest that the data set was owned before. The platform team, yes, had the capability to make the change, but it was the analytics teams and the business who understood the use cases, and no one actually owned it end to end. So we had to go through this very lengthy process of establishing ownership. How did we do that? In the beginning we very naively started with: here's a document, here are all the data assets, who is probably the nearest neighbor who can actually take care of that, and then we moved it over. But the problem here is that all these things are kind of technical debt, right? Not really properly documented, pretty unstable, built in a very inconsistent way over years, and the people that built these things have already left the company. So this is actually not a nice thing to be handed, and people build up a certain resistance, even if they have actually bought into this idea of domain ownership. So if you ask me about these learnings, what needs to happen first is that the company really understands what our core business concepts are. We need to have the mapping from these core business concepts to the domain teams who own them, and then actually link that to the data assets, together with an understanding of how we can evolve the data assets and build new things in the domains, but also how we can address the reduction of technical debt and stabilize what we already have. >> Thank you for that, Christoph. So I want to turn direction here and talk, Clemence, about governance, and I know that's an area you're passionate about. I pulled this slide from your deck, which I kind of messed up a little bit, sorry for that. But by the way, we're going to publish a link to the full video that you guys did, so we'll share that with folks. It's one of the most challenging aspects of data mesh: if you're going to decentralize, you quickly realize this could be the wild west, as we talked about, all over again. So how are you approaching governance? There are a lot of items on this slide that, you know, underscore the complexity, whether it's privacy, compliance, et cetera. So how did you approach this? >> Yeah, it's about connecting those dots, right? So the aim of the data governance program is to promote the autonomy of every team while still ensuring that everybody has the right interoperability. So when we want to move from the wild west, riding horses, to a civilized way of transport, I can take the example of modern street traffic: all participants can maneuver independently, and as long as they follow the same rules and standards, everybody can remain compatible with each other and understand and learn from each other, so we can avoid car crashes.
So when I go from country to country, I do understand what the street infrastructure means, how I drive my car, and I can also read the traffic lights and the different signals. So likewise, as a business, at HelloFresh we do operate autonomously and consequently need to follow those external and internal rules and standards set forth by the jurisdictions in which we operate. So in order to prevent a car crash, we need to at least ensure compliance with regulations, to account for society's and our customers' increasing concern with data protection and privacy. So teaching, advocating and evangelizing this to everyone in the company was a key communication strategy. And of course, I mentioned data privacy and external factors; the same goes for internal regulations and processes, to help our colleagues adapt to this very new environment. So when I mentioned before the new way of thinking, the new way of dealing with and managing data, this of course implies that we need new processes and regulations for our colleagues as well. In a nutshell, this means that data governance provides a framework for managing our people, processes, technology and culture around our data traffic. And that must all come together in order to have an effective program. Providing at least a common denominator is especially critical for shared data sets, which we manage across our different geographies, and for shared applications on shared infrastructure, as these are then consumed by centralized processes, for example master data and all the metrics and KPIs which are also used for central steering. It's a big change, right? And our ultimate goal is to have this non-invasive, federated, automated and computational governance. And for that, we can't just talk about it. We actually have to go deep, use case by use case and PoC by PoC, and generate learnings with the different teams. And this would be a classical approach of identifying the target state and matching it with the current state, by identifying together with the business teams, with the different domains, and doing a risk assessment, for example, to increase transparency, because a lot of teams might not even know what kind of situation they might be in. And this is where the training and this piece of data literacy comes into place, where we go in and train based on the findings, based on the most valuable use cases, and based on that, help our teams to do this change, to increase their capability. It takes, I wouldn't say hand-holding, but a lot of guidance. >> Can I quickly chime in? I mean, there's a lot to the governance piece, but I think this is important. If you're talking about documentation, for example, yes, we can go from team to team and tell these people, hey, you have to document your data assets in the data catalog, or you have to establish a data contract and so on and so forth. But if we would like to build data products at scale, with governance actually followed, we need to think about automation, right? We need to think about a lot of things that we can learn from engineering, and it starts with simple things. Like, if we would like to build up trust in our data products, right, and actually want to apply the same rigor and the best practices that we know from engineering, there are things that we can do, and we should probably think about what we can copy.
And one example might be service level agreements, service level objectives and service level indicators, right, as we know them on an engineering level for the services we provide. The agreements represent the promises we make to our customers and consumers, the objectives are the internal targets that help us to keep those promises, and the indicators are how we track ourselves, how we are doing. And this is just one example of where I think federated governance comes into play, right? In an ideal world, you should not just talk about data as a product, but also about data product as code. That is to say, as much as possible, right, give the engineers the tools that they are familiar with, and not ask the product managers, for example, to document the data assets in the data catalog, but make it part of the configuration of a CI/CD continuous delivery pipeline, as we typically see for other engineering tasks and services. In that configuration we can think about PII, we can think about data quality monitoring, we can think about ingestion, the data catalog, and so on and so forth. But I think ideally data products become a sort of template that can be deployed and is actually verified, or rejected, at build time, before we actually make them and deploy them to production. >> Yeah, so it's like DevOps for data products. So I'm envisioning almost a three-phase approach to governance, and it sounds like you're in the early phase of it, call it phase zero, where there's learning, there's literacy, there's training and education, there's kind of self-governance, and then there's some kind of oversight, a lot of manual stuff going on; then you try to build processes at this phase, and then you codify it, and then you can automate it. Is that fair? >> Yeah. I would rather think about automation as early as possible, in a way. And yes, there need to be certain rules, but then actually start use case by use case: is there any small piece that we can already automate? If possible, roll that out and then extend step by step. >> Is there a role, though, that adjudicates that? Is there a central, you know, chief data officer who's responsible for making sure people are complying, or how do you handle it? >> I mean, from a platform perspective, yes, that applies when it comes to implementing certain pieces that we are saying are important and would actually like to implement. However, that works very closely with the governance department, so it's Clemence's piece to understand and define the policies that need to be implemented. >> So good. So Clemence, essentially it's your responsibility to make sure that the policy is being followed, and then, as you were saying, Christoph, you want to compress the time to automation as fast as possible. Is that-- >> Yeah, what needs to be really clear is that it's always a split effort, right? You can't just do one or the other thing; it really goes hand in hand, because for the right information, for the right engineering tooling, we need to have the transparency first. I mean, the policies need to be codified. So we kind of need to operate on the same level, with the right understanding.
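To make the "data product as code" idea Christoph sketches above a bit more concrete, here is a minimal, illustrative Python sketch of the kind of check a CI/CD pipeline could run over a data product descriptor at build time. The descriptor fields (owner, PII policies, SLOs) and the thresholds are assumptions chosen for illustration only, not HelloFresh's actual schema or tooling.

```python
# Minimal, illustrative sketch of "data product as code": a descriptor that a
# CI/CD pipeline could verify at build time, before anything is deployed.
# Field names and thresholds are hypothetical, not HelloFresh's actual schema.

REQUIRED_FIELDS = {"name", "owner", "description", "pii_fields", "slos"}

def validate_descriptor(descriptor: dict) -> list:
    """Return a list of violations; an empty list means the build may proceed."""
    errors = []
    missing = REQUIRED_FIELDS - descriptor.keys()
    if missing:
        errors.append(f"missing required fields: {sorted(missing)}")
        return errors
    if not descriptor["owner"]:
        errors.append("every data product needs an accountable owner")
    # PII columns must declare how they are protected (e.g. encrypted, masked).
    for field, policy in descriptor["pii_fields"].items():
        if policy not in {"encrypted", "masked", "tokenized"}:
            errors.append(f"PII field '{field}' has no valid protection policy")
    # SLOs are the internal objectives behind the promises made to consumers.
    slos = descriptor["slos"]
    if slos.get("freshness_hours") is None or slos["freshness_hours"] > 24:
        errors.append("freshness SLO missing or looser than 24h")
    if slos.get("completeness_pct", 0) < 99.0:
        errors.append("completeness SLO below 99%")
    return errors

if __name__ == "__main__":
    example = {
        "name": "orders_daily",
        "owner": "supply-chain-analytics",
        "description": "Daily order facts for downstream reporting.",
        "pii_fields": {"customer_email": "masked"},
        "slos": {"freshness_hours": 6, "completeness_pct": 99.5},
    }
    problems = validate_descriptor(example)
    # In CI, a non-empty list would fail the build and block deployment.
    print(problems or "descriptor OK")
```

In a setup like this, a descriptor that fails validation simply fails the build, so a data product that is missing an owner, a PII policy or an SLO never reaches production in the first place.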
So there are actually two things that are important. One is policies and guidelines, but not only that, because more importantly, or equally important, is to align with the end users and the tech and engineering teams, and really bridge between the business teams and the engineering teams. >> Got it. So just a couple more questions, because we've got to wrap up. I want to talk a little bit about the business outcome. I know it's hard to quantify, and I'll talk about that in a moment, but major learnings: we've got some of the challenges that you cited, I'll just put them up here. We don't have to go into detail on this, but I just wanted to share them with folks. But my question, I mean, this is the advice-for-your-peers question: if you had to do it differently, if you had a do-over or a Mulligan, as we like to say for you golfers, what would you do differently? >> I mean, can I start with the transformational challenge: understanding that it also comes with a high load of cultural change. I think it is important that a particular communication strategy is put into place and that people really are supported, right? So it's not that we go in and say, well, we have to change towards data mesh; naturally, it's human nature to be kind of resistant to change, right, and change is uncomfortable. So we need to take that away by training and by communicating. Chris, you might want to add something to that. >> Definitely. I think the point that I've also made before, right, is that we need to acknowledge that data mesh is an architecture for scale, right? It's something which is necessary for huge companies that are building products at scale. I mean, Dave, you mentioned that, right: there are a lot of advantages to having a centralized team, but at some point it may make sense to actually decentralize. And at this point, right, if you think about data mesh, you have to recognize that you're not building something on a green field. And I think there's a big learning, which is also reflected on the slide: don't underestimate your baggage. Typically you come to a point where the old model doesn't work anymore, and at HelloFresh, right, we lost the trust in our data, and actually we have seen certain risks of slowing down our innovation. This was triggering the need to actually change something. So this transition implies that we have a lot of technical debt accumulated over years, and I think what we have learned is that potentially we have decentralized some assets too early, not actually taking into account the maturity of the teams we handed them to. And now we are actually in the phase of correcting pieces of that, right? But I think if you start from scratch, you have to understand, okay, are all my teams actually ready to take on this new capability? And you have to make sure that, with this decentralization, you build up these capabilities in the teams, and as Clemence has mentioned, right, make sure that you take the people on your journey. I think these are the pieces; it also comes with this knowledge gap, right, where we need to think about hiring, literacy, and the technical debt I just talked about.
And I think the last piece that I would add now, which is not here on the slide deck: from our perspective, we started on the analytical layer, because that was kind of where things were exploding, right? This is the bit where people feel the pain. But through a lot of the efforts that we have started, to actually modernize the current stack and data products towards data mesh, we've understood that it always comes down basically to a proper shape of our operational plane. And I think what needs to happen is, we got through a lot of pain, but the learning here is that this really needs to be a commitment from the company. It needs to happen end to end. >> I think that point, that last point you made, is so critical, because I hear a lot from the vendor community about how they're going to make analytics better, and that's not unimportant, but true data product thinking and decentralized data organizations really have to operationalize in order to scale it. So these decisions around data architecture and organization, they're fundamental and lasting; it's not necessarily about an individual project ROI. There are going to be projects, sub-projects, you know, within this architecture, but the architectural decision itself is organizational, it's cultural, and it's about what's the best approach to support your business at scale. It really speaks to who you are as a company, how you operate, and getting that right, as we've seen in the success of data-driven companies, yields tremendous results. So I'll ask each of you to give us your final thoughts and then we'll wrap. Maybe. >> Can I quickly jump in on this piece that you have mentioned, right, the target architecture? If you talk about these pieces, people often have this layered picture in mind: okay, there are different kinds of stages, we have an ingestion layer, we have a storage layer, a transformation layer, a presentation layer, and then we are basically putting a lot of technology on top of that. That's kind of the target architecture. However, I think what we really need to make sure is that we have these different kinds of views, right? We need to understand what the capabilities are that we need, what our goals are, how it looks and feels from the different personas' and experience views, and then finally that should actually lead to the target architecture from a technical perspective. Maybe just to give an outlook on what we are planning to do, how we want to move that forward: it is based on our strategy, in the sense that we would like to increase the maturity as a whole across the entire company. And this is kind of a framework around the business strategy, and it breaks down into four pillars as well. People, meaning the data culture, data literacy, the data organizational structure and so on. Governance, as Clemence has actually mentioned, right: compliance, governance, data management and so on. Technology, and I think we could talk for hours about that one: it's around the data platform, the data science platform. And then finally also enablement through data, meaning we need to understand data quality, data accessibility, applied science and data monetization. >> Great. Thank you, Christoph. Clemence, why don't you bring us home. Give us your final thoughts. >> Okay.
I can just agree with Christoph that it is important to understand what kind of maturity people have, to understand what maturity level the company, the people, the organization are at, and to really understand what kind of change applies to those four pillars, for example, and what needs to be tackled first. And this is not very clear from the very beginning. It's kind of like a green field: you come up with must-wins, with things that you really want to do, out of theory and out of different white papers. Only when you really start conducting the first initiatives do you understand that you have to put those thoughts together and see where you are missing out on one of those four different pillars: people, process, technology and governance. And then you can often do the integration step by step, small steps by small steps, not boiling the ocean, so that you are really capable of identifying the gaps and seeing where you can either fill the gaps, or where you have to increase maturity first and train people or improve your tech stack. >> You know, HelloFresh is an excellent example of a company that is innovating. It was not born in Silicon Valley, which I love. It's a global company. And I've got to ask you guys, it seems like it's just an amazing place to work. Are you guys hiring? >> Yes, definitely, we do. As mentioned, one of these aspects is distributing and actually hiring across the entire company, and specifically for data. I think there are a lot of open roles, so yes, please visit our page: from data engineering to data product management, and Clemence also has a lot of roles that you can speak to him about. But yes. >> Guys, thanks so much for sharing with theCUBE audience. You're pioneers, and we look forward to collaborations in the future to track progress, and really want to thank you for your time. >> Thank you very much. >> Thank you very much, Dave. >> And thank you for watching theCUBE's startup showcase made possible by AWS. This is Dave Volante. We'll see you next time. (cheerful music)
Nataraj Nagaratnam, IBM Hybrid Cloud & Rohit Badlaney, IBM Systems | IBM Think 2019
>> Live, from San Francisco, it's theCUBE covering IBM Think 2019. Brought to you by IBM. >> Hello everyone, welcome back to theCUBE's live coverage here in San Francisco for IBM Think 2019. I'm John Furrier, Stu Miniman with theCUBE. Stu, it's been a great day. We're on our fourth day of four days of wall to wall coverage. A theme of AI, large scale compute with Cloud and data that's great. Great topics. Got two great guests here. Rohit Badlaney, who's the director of IBM Z As a Service, IBM Systems. Real great to see you. And Nataraj Nagaratnam, Distinguished Engineer and CTO and Director of Cloud Security at IBM and Hybrid Cloud, thanks for joining us. >> Glad to be here. >> So, the subtext to all the big messaging around AI and multi-cloud is that you need power to run this. Horsepower, you need big iron, you need the servers, you need the storage, but software is in the heart of all this. So you guys had some big announcements around capabilities. The Hyper Protect was a big one on the securities side but now you've got Z As a Service. We've seen Linux come on Z. So it's just another network now. It's just network computing is now tied in with cloud. Explain the offering. What's the big news? >> Sure, so two major announcements for us this week. One's around our private cloud capabilities on the platform. So we announced our IBM Cloud Private set of products fully supported on our LinuxOne systems, and what we've also announced is the extensions of those around hyper-secure workloads through a capability called the Secure Services Container, as well as giving our traditional z/OS clients cloud consumption through a capability called the z/OS Cloud Broker. So it's really looking at how do we cloudify the platform for our existing base, as well as clients looking to do digital transformation projects on-premise. How do we help them? >> This has been a key part of this. I want to just drill down this cloudification because we've been talking about how you guys are positioned for growth. All the REORG's are done. >> Sure, yeah >> The table's all set. Products have been modernized, upgraded. Now the path is pretty clear. Kind of like what Microsoft's playbook was. Build the core cloudification. Get your core set of products cloudified. Target your base of customers. Grow that and expand into the modern era. This is a key part of the strategy, right? >> Absolutely right. A key part of our private cloud strategy is targeted to our existing base and moving them forward on their cloud journey, whether they're looking to modernize parts of their application. Can we start first with where they are on-premise is really what we're after. >> Alright, also you have the Hyper Protect. >> Correct. >> What is that announcement? Can you explain Hyper Protect? >> Absolutely. Like Rohit talked about, taking our LinuxOne capabilities, now that enterprise trusts the level of assurance, the level of security that they're dependent on, on-premise and now in private cloud. We are taking that further into the public cloud offering as Hyper Protect services. So these are set of services that leverage the underlyings of security hardening that nobody else has the level of control that you can get and offering that as a service so you don't need to know Z or LinuxOne from a consumption perspective. So I'll take two examples. Hyper Protect Crypto Service is about exposing the level of control. That you can manage they keys. 
What we call "keep your own keys" because encryption is out there but it's all about key management so we provide that with the highest level of security that LinuxOne servers from us offer. Another example is database as a service, which runs in this Hyper Secure environment. Not only encryption and keys, but leveraging down the line pervasive encryption capabilities so nobody can even get into the box, so to say. >> Okay, so I get the encryption piece. That's solid, great. Internet encryption is always good. Containers, there's been discussions at the CNCF about containers not being part of the security boundaries and putting a VMware around it. Different schools of thought there. How do you guys look at the containerization? Does that fit into Secure Protect? Talk about that dynamic because encryption I get, but are you getting containers? >> Great question because it's about the workload, right? When people are modernizing their apps or building cloud-native apps, it's built on Kubernetes and containers. What we have done, the fantastic work across both the IBM Cloud Private on Z, as well as Hyper Protect, underlying it's all about containers, right? So as we deliver these services and for customers also to build data services as containers or VM's, they can deploy on this environment or consume these as a compute. So fundamentally it's kubernetes everywhere. That's a foundational focus for us. When it can go public, private and multicloud, and we are taking that journey into the most austere environment with a performance and scale of Z and LinuxONE. >> Alright, so Rohit, help bring us up to date. We've been talking about this hybrid and multi-cloud stuff for a number of years, and the idea we've heard for many years is, "I want to have the same stack on both ends. I want encryption all the way down to the chip set." I've heard of companies like Oracle, like IBM say, "We have resources in both. We want to do this." We understand kubernetes is not a magic layer, it takes care of a certain piece you know and we've been digging in that quite a bit. Super important, but there's more than that and there still are differences between what I'm doing in the private cloud and public cloud just naturally. Public cloud, I'm really limited to how many data centers, private cloud, everything's different. Help us understand what's the same, what's different. How do we sort that out in 2019? >> Sure, from a brand perspective we're looking at private cloud in our IBM Cloud Private set of products and standardizing on that from a kubernetes perspective, but also in a public cloud, we're standardizing on kubernetes. The key secret source is our Secure Services Container under there. It's the same technology that we use under our Blockchain Platform. Right, it brings the Z differentiation for hyper-security, lockdown, where you can run the most secure workloads, and we're standardizing that on both public and private cloud. Now, of course, there are key differences, right? We're standardizing on a different set of workloads on-premise. We're focusing on containerizing on-premise. That journey to move for the public cloud, we still need to get there. >> And the container piece is super important. Can you explain the piece around, if I've got multi-cloud going on, Z becomes a critical node on the network because if you have an on-premise base, Z's been very popular, LinuxONE has been really popular, but it's been for the big banks, and it seems like the big, you know, it's big ire, it's IBM, right? 
But it's not just the mainframe. It's not proprietary software anymore, it's essentially large-scale capability. >> Right. >> So now, when that gets factored into the pool of resources and cloud, how should customers look at Z? How should they look at the equation? Because this seems to me like an interesting vector into adding more headroom for you guys, at least on the product side, but for a customer, it's not just a use case for the big banks, or doing big backups, it seems to have more legs now. Can you explain where this fits into the big picture? Because why wouldn't someone want to have a high-performance platform? >> Why don't I use a customer example? I had a great session this morning with Brad Chun from Shuttle Fund, who joined us on stage. They know the financial industry. They are building a Fintech capability called Digital Asset Custody Services. It's about how you digitize your assets, how you tokenize them, how you secure them. So when they look at it from that perspective, they've been partnering with us, it's a classic hybrid workload where they've deployed some of the apps on the private cloud and on-premise with Z/LinuxONE, reaching out to the cloud using the Hyper Protect services. So when they bring this together, built on Blockchain under the covers, they're bringing the capability of being agile to the market, the ability for them to innovate and deliver with speed, but with that level of capability. So from that perspective, it's a Fintech, but they are not the largest banks that you may know of, but that's the kind of innovation it enables, even if you don't have, quote unquote, a mainframe or a Z. >> This gives you guys more power, and literally a sense of much more reach in the market, because of what containers and now Kubernetes enable. For example, Ginni Rometty said "kubernetes" twice in her keynote. I'm like, "Oh my God. The CEO of IBM said 'kubernetes' twice." We used to joke about it. Only geeks know about kubernetes. Here she is talking about kubernetes. Containers, Kubernetes, and now service meshes around the corner give you guys reach into the public cloud to extend the Z capability without foreclosing the benefits of Z. So that seems to be a trend. Who's the target for that? Give me an example of who's the customer or use case? What's the situation that would allow me to take advantage of cloud and extend the capability to Z? >> If you just step back, what we're really trying to do is create a hyper-secure zone in our cloud called Hyper Protect. It's targeted to our existing Z base, who want to move on this enterprise-out journey, but it's also targeted to clients like Shuttle Fund and DAX that Raj talked about, that are building these hyper-secure apps in the cloud and want the capabilities of the platform, but want a more cloud-native style. It's the breadth of moving our existing base to the cloud, but also these new security developers who want to do enterprise development in the cloud. >> Security is key. That's the big drive. >> And that's the beauty of Z. That's what it brings to the table. And to the cloud it brings the hyper lockdown, the scale, the performance, all those characteristics. >> We know that security is always an on-going journey, but one of the ones that has a lot of people concerned is when we start adding IoT into the mix. It increases the surface area by orders of magnitude. How do those types of applications fit into these offerings? >> Great question.
As a matter of fact, I didn't give you the question by the way, but this morning, KONE joined me on stage. >> We actually talked about it on Twitter. (laughs) >> KONE joined us on stage. It's about the residential workflow, and how they're enabling their integration, access, and identity into that. As an example, they're building on our IoT platform and then they integrate with security services. That's the beauty of this. Rohit talked about developers, right? So when developers build it, our mission is to make it simple for a developer to build secure applications. With the security skills shortage, you can't expect every developer to be a security geek, right? So we're making it simple, so that you can kind of connect your IoT to your business process and your back-end application seamlessly in a multi-cloud and hybrid-cloud fashion. That's where the cloud-native perspective comes in, and building some of these sensitive applications on Hyper Protect or Z/LinuxONE and private cloud enables that end to end. >> I want to get you guys' take while you're here, because one of the things I've observed here at Think is that clearly the theme is cloud, AI, and developers all kind of coming together. I mean, AI, Amazon's event, AI, AI, AI, in cloud scale, you guys don't have that. But the developer angle is really interesting. And you guys have a product called IBM Cloud Private, which seems to be a very big centerpiece of the strategy. What is this product? Why is it important? It seems to be part of all the key innovative parts that we see evolving out of this. Can you explain what IBM Cloud Private is and how it fits into the puzzle? >> Let me take a pass at it, Raj. In a way it is, well, we really see IBM Cloud Private as that key linchpin on-premise. It's a Platform as a Service product on-premise, it's built on Kubernetes and Docker containers, but what it really brings is that standardized cloud consumption for containerized apps on-premise. We've expanded that, of course, to our Z footprint, and let me give you a use case of clients and how they use it. We're working with a very big, regulated bank that's looking to modernize a massive monolithic piece of WebSphere application server on-premise and break it down into micro-services. They're doing that on IBM Cloud Private. They've containerized big parts of the application on WebSphere on-premise. Now they've not made that journey to the cloud, to the public cloud, but they are using... How do you modernize your existing footprint into a more containerized micro-services one? >> So this is the trend we're seeing, the decomposition of monolithic apps on-premise is step one. Let's get that down, get the culture, and attract the new, younger people who come in, not the older guys like me, mini-computer days. Really make it ready, composable, then they're ready to go to the cloud. This seems to be the steps. Talk about that dynamic, Raj, from a technical perspective. How hard is it to do that? Is it a heavy lift? Is it pretty straightforward? >> Great question. IBM, we're all about open, right? So when it comes to our cloud strategy, open is the centerpiece of it, and that's why we have banked on Kubernetes and containers as that standardization layer. This way you can move a workload from private to public, and even ICP can be on other cloud vendors as well, not just IBM Cloud. So it's a private cloud that customers can manage, or in the public cloud or IBM Kubernetes that we manage for them.
Then it's about the app, the containerized app that can be moved around, and that's where our announcements about Multicloud Manager, that we made late last year, come into play, which helps you seamlessly move and integrate applications that are deployed on Kubernetes across private, public, or multicloud. So that abstraction veneer enables that to happen, and that's why the open... >> So it's an operational construct? Not an IBM product, per se, if you think about it that way. So the question I have for you, I know Stu wants to jump in, he's got some questions. I want to get to this new mindset. The world's flipped upside down. The applications and workloads are dictating architecture and programmability to the DevOps, or infrastructure, in this case, Z or cloud. This is changing the game on how cloud selection is done. So we've been having a debate on theCUBE here, publicly, that in some cases it's the best cloud for the job decision, not a procurement, "I need multi-vendor cloud," versus I have a workload that runs best with this cloud. And it might be, say, if you're running 365, or G Suite with Google, Amazon's got something, so it seems to be the trend. Do you agree with that? And certainly, there'll be many clouds. We think that's true, it's already happened. Your thoughts on this workload driving the requirements for the cloud? Whether it's a sole-purpose cloud, meaning for the app. >> That's right. I'll start and Rohit will add in as well. That's where this chapter two comes into play, as we call Chapter Two of Cloud, because it is about how do you take enterprise applications, the mission-critical complex workloads, and then look for the enablers. How do you make that modernization seamless? How do you make the cloud native seamless? So in that particular journey is where IBM Cloud and our Multicloud and Hybrid Cloud strategy come into play, to make that transition happen and provide the set of capabilities that enterprises are looking for to move their critical workloads across private and public with much more assurance and performance and scale, and that's where the work that we are doing with Z and LinuxONE serves as an underpinning to embark on the journey to move those critical workloads to their cloud. So you're absolutely right. When they look at which cloud to go to, it's about capabilities, the tools, the management orchestration layers that a cloud provider or a cloud vendor provides, and it's not only just about IBM Public Cloud, but it's about enabling the enterprises to provide them the choice and then offer. >> So it's not multicloud for multicloud's sake, it's multicloud, that's the reality. Workload drives the functionality. >> Absolutely. We see that as well. >> Validated on theCUBE by the gurus of IBM. The cloud for the job is the best solution. >> So I guess to kind of put a bow on this, the journey we're having is talking about distributed architectures, and you know, we're down in the weeds, we've got micro-services architectures, containerization, and we're working at making those things more secure. Obviously, there's still a little bit more work to do there, but what's next as we look forward, what are the challenges customers have? They live in this, you know, heterogeneous multicloud world. What do we have to do as an industry? Where is IBM making sure that they have a leadership position? >> From my perspective, I think really the next big wave of cloud is going to be looking at those enterprise workloads.
It's funny, I was just having a conversation with a very big bank in the Netherlands, and they were, of course, a very big Z client, and asking us about the breadth of our cloud strategy and how they can move forward. Really looking at a private cloud strategy helping them modernize, and then looking at which targeted workloads they could move to public cloud is going to be the next frontier. And those 80 percent of workloads that haven't moved. >> An integration is key, and for you guys competitive strategy-wise, you've got a lot of business applications running on IBM's huge customer base. Focus on those. >> Yes. >> And then give them the path to the cloud. The integration piece is where the linchpin is and OSSI secure. >> Enterprise out guys. >> Love encryption, love to follow up more on the secure container thing, I think that's a great topic. We'll follow-up after this show Raj. Thanks for coming on. theCUBE coverage here. I'm John Furrier, Stu Miniman. Live coverage, day four, here live in San Francisco for IBM Think 2019. Stay with us more. Our next guests will be here right after a short break. (upbeat music)
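An aside on the "keep your own keys" capability Nataraj describes above: the underlying pattern is envelope encryption, where a customer-held root key wraps per-record data keys, so the service storing the data never handles the root key in the clear. Below is a minimal, generic sketch of that pattern in Python using the open-source cryptography package. It illustrates the concept only; it is not the Hyper Protect Crypto Services API, and in the real service the root key would live in the LinuxONE-backed HSM rather than in process memory.

```python
# Generic envelope-encryption sketch of "keep your own keys": the customer-held
# root key wraps a fresh data key per record, so only wrapped keys are stored
# alongside the data. Requires the "cryptography" package (pip install cryptography).
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

root_key = AESGCM.generate_key(bit_length=256)   # stays with the customer / HSM

def encrypt_record(plaintext: bytes) -> dict:
    data_key = AESGCM.generate_key(bit_length=256)      # fresh key per record
    nonce = os.urandom(12)
    ciphertext = AESGCM(data_key).encrypt(nonce, plaintext, None)
    # Wrap (encrypt) the data key under the root key before storing it
    wrap_nonce = os.urandom(12)
    wrapped_key = AESGCM(root_key).encrypt(wrap_nonce, data_key, None)
    return {"nonce": nonce, "ciphertext": ciphertext,
            "wrap_nonce": wrap_nonce, "wrapped_key": wrapped_key}

def decrypt_record(rec: dict) -> bytes:
    data_key = AESGCM(root_key).decrypt(rec["wrap_nonce"], rec["wrapped_key"], None)
    return AESGCM(data_key).decrypt(rec["nonce"], rec["ciphertext"], None)

record = encrypt_record(b"sensitive customer data")
assert decrypt_record(record) == b"sensitive customer data"
```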
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Nataraj Nagaratnam | PERSON | 0.99+ |
Ginni Rometty | PERSON | 0.99+ |
Rohit Badlaney | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
Rohit | PERSON | 0.99+ |
San Francisco | LOCATION | 0.99+ |
2019 | DATE | 0.99+ |
Brad Chun | PERSON | 0.99+ |
Shuttle Fund | ORGANIZATION | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
80 percent | QUANTITY | 0.99+ |
Netherlands | LOCATION | 0.99+ |
Raj | PERSON | 0.99+ |
IBM Systems | ORGANIZATION | 0.99+ |
both | QUANTITY | 0.99+ |
fourth day | QUANTITY | 0.99+ |
twice | QUANTITY | 0.98+ |
Linux | TITLE | 0.98+ |
ORGANIZATION | 0.98+ | |
this week | DATE | 0.98+ |
WebSphere | TITLE | 0.98+ |
late last year | DATE | 0.98+ |
two great guests | QUANTITY | 0.98+ |
four days | QUANTITY | 0.98+ |
G Suite | TITLE | 0.98+ |
DAX | ORGANIZATION | 0.98+ |
Z | TITLE | 0.97+ |
two examples | QUANTITY | 0.96+ |
two major announcements | QUANTITY | 0.96+ |
Think | ORGANIZATION | 0.96+ |
z/OS | TITLE | 0.95+ |
Stu | PERSON | 0.95+ |
IBM Z | ORGANIZATION | 0.95+ |
one | QUANTITY | 0.95+ |
Hyper Protect | TITLE | 0.95+ |
day four | QUANTITY | 0.95+ |
Hybrid Cloud | ORGANIZATION | 0.94+ |
Chapter Two | OTHER | 0.93+ |
first | QUANTITY | 0.93+ |
CEO | PERSON | 0.93+ |
step one | QUANTITY | 0.92+ |
IBM Cloud Private | TITLE | 0.91+ |
this morning | DATE | 0.91+ |
REORG | ORGANIZATION | 0.91+ |
LinuxONE | TITLE | 0.91+ |
chapter two | OTHER | 0.89+ |
Multicloud Manager | TITLE | 0.87+ |
wave | EVENT | 0.87+ |
both ends | QUANTITY | 0.86+ |
ORGANIZATION | 0.85+ | |
Services | OTHER | 0.82+ |
big | EVENT | 0.81+ |
Hyper | TITLE | 0.81+ |
Scott Noteboom, Litbit – When IoT Met AI: The Intelligence of Things - #theCUBE
>> Announcer: From the Fairmont Hotel in the heart of Silicon Valley, it's The Cube. Covering When IoT met AI: The Intelligence of Things. Brought to you by Western Digital. >> Hey, welcome back, everybody. Jeff Frick here with The Cube. We're in downtown Los Angeles at the Fairmont Hotel at a interesting little show called When IoT Met AI: The Intelligence of Things. A lot of cool startups here along with some big companies. We're really excited go have our next guest, taking a little different angle. He's Scott Noteboom. He is the co-founder and CEO of a company called Litbit. First off, Scott, welcome. >> Yeah, thank you very much. >> Absolutely. For folks that aren't familiar, what is Litbit, what's your core mission? >> Well, probably, the simplest way to put it is, is in business we enable our users who have a lot of experience in a lot of different areas to take their expertise and experience which may not be coding software, or understanding, or even being able to spell what an algorithm is on the data science perspective, and being able to give them an easy interface so they can kind of create their own Siro or Alexa, an AI but an AI that's based on their own subject matter expertise that they can put to work in a lot of different ways. >> So, there's often a lot of talk about kind of tribal knowledge, and how does tribal knowledge get passed down so people know how to do things. Whether it's with new employees, or as you were talking about a little bit off camera, just remote locations for this or that. And there hasn't really been a great system to do that. So, you're really attacking that, not only with the documentation, but then making an AI actionable piece of software that can then drive machines and using IoT to do things. Is that correct? >> That's right. So, if you created, say an AI that I've been passionate about 'cause I ran data centers for a lot of years, is DAC. So, DAC's an AI that has a lot of expertise, and how to run a data center by, and kind of fueled and mentored by a lot of the experts in the industry. So, how can you take DAC and put Dak to work in a lot of places? And the people who need the best trained DAC aren't people who are building apps. They are people who have their area of subject matter expertise, and we view these AI personas that can be put to work as kind of apps of the future, where can people can prescribe to personas that are build directly by the experts, which is a pretty pure way to connect AIs with the right people, and then be able to get them and put them-- >> So, there's kind of two steps to the process. How does the information get from the experts into your system? How's that training happen? >> So, where we spend a lot of attention is, a lot of people question and go, "Well, an AI lives in this virtual logical world "that's disconnected from the physical world." And I always questions for people to close their eyes and imagine their favorite person that loves them in the world. And when they picture that person hear that person's voice in their head, that's actually a very similar virtual world as what AIs working. It's not the physical world. And what connects us as people to the physical world, our senses, our sight, our hearing, our touch, our feeling. And what we've done is we've enabled using IoT sensors, the ability of combining those sensors with AI to turn sensors into senses, which then provide the ability for the AI to connect really meaningful ways to the physical world. 
And then the experts can teach the AI this is what this looks like, this is what this sounds like, this is what it's supposed to feel like. If it's greater than 80 degrees in an office location, it's hot. Really teaching the AI to be able to form thoughts based on a specific expertise and then be able to take the right actions to do the right things when those thoughts are formed. >> How do you deal with nuance, 'cause I'm sure there's a lot of times where people, as you said, are sensing or smelling or something, but they don't even necessarily consciously know that that's an input into their decision process, even though it really is. They just haven't really thought of it as a discrete input. How do you separate out all these discreet inputs so you get a great model that represents your best of breed technicians? >> Well, to try to answer the question, first of all, the more training the better. So, the good way to think of the AI is, unlike a lot of technologies that typically age and go out of life over time, an AI continuously gets smarter the more it's mentored by people, which would be supervised learning. And the more it can adjust and learn on it's own combined with real day to day data activity combined with that supervised learning and unsupervised learning approach, so enabling it to continuously get better over time. We've figure out some ways that it can produce some pretty meaningful results with a small amount of training. So, yeah. >> Okay. What are some of the applications, kind of your initial go to market? >> We're a small startup, and really, what we've done is we've developed a platform that we really like to, our goal is for it to be very horizontal in nature. And then the applications or the AI personas can be very vertical or subject matter experts across different silos. So, what we're doing is, is we're working with partners right now in different silos developing AIs that have expertise in the oil and gas business, in the pharmaceutical space, in the data center space, in the corporate facilities manage space, and really making sure that people who aren't technologists in all of those spaces, whether you're a very specific scientists who're running a lab, or a facilities guy in a corporate building, can successfully make that experiential connection between themselves and the AI, and put it to practical use. And then as we go, there's a lot of efforts that can be very specific to specific silos, whatever they may be. >> So, those personas are actually roles of individuals, if you will, performing certain tasks within those verticals. >> Absolutely. What we call them is coworkers, and the way things are designed is, one of the things that I think is really important in the AI world is that we approach everything from a human perspective because it's a big disruptive shift, and there's a lot of concern over it. So, if you get people to connect to it in a humanistic way, like coworker Viv works along with coworker Sophia, and Viv has this expertise, Sophia has this expertise, and has better improving ways to interface with people who have names that aren't a lot different from them and have skillsets that aren't a lot different. When you look at the AIS, they don't mind working longer hours. Let them work the weekends so I can spend hours with my family. Let them work the crazy shifts. So, things are different in that regard. But the relationship aspect of how the workplace works, try not to disrupt that too much. 
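As a toy illustration of the "sensors into senses" idea and the 80-degree office example Scott gives above: an expert's thresholds become labeled sensations that the persona can act on. Everything below is hypothetical plain Python written for this article, not the Litbit API; names and thresholds are invented for the example.

```python
# Toy sketch: an expert encodes what "hot" means per location type, and the
# persona labels incoming sensor readings and decides whether to act.
from dataclasses import dataclass

@dataclass
class Reading:
    sensor_id: str
    location_type: str   # e.g. "office", "data_center_cold_aisle"
    temp_f: float

# Expert-supplied thresholds (the "mentoring" step)
HOT_THRESHOLDS_F = {"office": 80.0, "data_center_cold_aisle": 75.0}

def perceive(reading: Reading) -> str:
    """Turn a raw sensor value into a labeled 'sensation'."""
    limit = HOT_THRESHOLDS_F.get(reading.location_type, 85.0)
    return "hot" if reading.temp_f > limit else "normal"

def act(reading: Reading) -> None:
    if perceive(reading) == "hot":
        print(f"{reading.sensor_id}: it's hot here, notify facilities / adjust setpoint")

act(Reading("sensor-17", "office", 82.5))   # flags "hot" and prints an action
```

In a real deployment the hard-coded thresholds would be replaced by models trained from the expert labels, which is where the supervised-learning loop Scott describes comes in.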
>> So, then on a consumption side, with the person coworker that's working with the persona, how do they interact with it, how do they get the data out, and I guess even more importantly, maybe, how do they get the new data back in to continue to train the model? >> So, the biggest thing you have to focus on with a human and machine learning interface that doesn't require a program or a data science, is that the language that the AI is taught in is human language, natural human language. So, we developed a lot of natural human language files that are pretty neat because a human coworker in California here could be interfacing in english to their coworker, and at the same time, someone speaking Mandarin in Shanghai could be interfacing with the same coworker speaking mandarin unless you can get multilingual functionality. Right now, to answer your question, people are doing it in a text based scenario. But the future vision, I think when the industry timing is right, is we view that every one of the coworkers we're developing will have a very distinct unique fingerprint of a voice. So, therefor, when you're engaging with your coworker using voice, you'll begin to recognize, oh, that's Dax, or that's Viv, or that's Sophia, based on their voice. So, like many people, this is how we're communicating with voice, and we believe the same thing's going to occur. And a lot of that's in timing. That's the direction where things are headed. >> Interesting. The whole voice aspect is just a whole 'nother interesting thing in terms of what type of voice personality attributes associated with voice. That's probably going to be a huge piece in terms of the adoption, in terms of having a true coworker experience, if you will. >> One of the things we haven't figure out, and these are important questions, and there's so many unknowns, is we feel really confident that the AI persona should have a unique voice because then I know who I'm engaging with, and I can connect by ear without them saying what their name is. But what does an AI persona look like? That's something where actually we don't know that, and we explore different things and, oh, that looks scary, or oh, that doesn't make sense. Should it look like anything? Which has largely been the approach of what does an Alexa or a Siri look like. As you continue to advance those engagements, and particularly when augmented reality comes into play, through augmented reality, if you're able to look and say, "Oh, a coworker's working over there," there's some value in that. But what is it going to look like? That's interesting, and we don't know that. >> Hopefully, better than those things at the San Jose Airport that are running around. >> Yeah, exactly. >> Classic robot. All right, Scott, very interesting story. I look forward to watching you grow and develop over time. >> Awesome, it's good to talk. >> Absolutely, all right, he's Scott Noteboom, he's from Litbit. I'm Jeff Frick, you're watching The Cube. We're at When IoT met AI: The Intelligence of Things, here at San Jose California. We'll be right back after the short break. Thanks for watching. (upbeat music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Jeff Frick | PERSON | 0.99+ |
California | LOCATION | 0.99+ |
Sophia | PERSON | 0.99+ |
Scott | PERSON | 0.99+ |
Scott Noteboom | PERSON | 0.99+ |
Western Digital | ORGANIZATION | 0.99+ |
Litbit | ORGANIZATION | 0.99+ |
Silicon Valley | LOCATION | 0.99+ |
Shanghai | LOCATION | 0.99+ |
Siri | TITLE | 0.99+ |
two steps | QUANTITY | 0.99+ |
San Jose California | LOCATION | 0.99+ |
San Jose Airport | LOCATION | 0.99+ |
Mandarin | OTHER | 0.99+ |
The Cube | TITLE | 0.98+ |
greater than 80 degrees | QUANTITY | 0.98+ |
The Cube | ORGANIZATION | 0.98+ |
mandarin | OTHER | 0.98+ |
Viv | PERSON | 0.98+ |
one | QUANTITY | 0.97+ |
First | QUANTITY | 0.95+ |
Fairmont Hotel | ORGANIZATION | 0.94+ |
When IoT Met AI: The Intelligence of Things | TITLE | 0.94+ |
Alexa | TITLE | 0.88+ |
AI: The Intelligence of Things | TITLE | 0.86+ |
When IoT met AI: The Intelligence of Things | TITLE | 0.86+ |
When IoT | TITLE | 0.83+ |
Los Angeles | LOCATION | 0.78+ |
AIS | ORGANIZATION | 0.77+ |
One | QUANTITY | 0.77+ |
english | OTHER | 0.72+ |
Siro | TITLE | 0.72+ |
#theCUBE | TITLE | 0.64+ |
Litbit | TITLE | 0.58+ |
times | QUANTITY | 0.55+ |
first | QUANTITY | 0.52+ |
lot | QUANTITY | 0.49+ |
Viv | ORGANIZATION | 0.41+ |
Dax | ORGANIZATION | 0.4+ |
Fireside Chat with Andy Jassy, AWS CEO, at the AWS Summit SF 2017
>> Announcer: Please welcome Vice President of Worldwide Marketing, Amazon Web Services, Ariel Kelman. (applause) (techno music) >> Good afternoon, everyone. Thank you for coming. I hope you guys are having a great day here. It is my pleasure to introduce to come up on stage here, the CEO of Amazon Web Services, Andy Jassy. (applause) (techno music) >> Okay. Let's get started. I have a bunch of questions here for you, Andy. >> Just like one of our meetings, Ariel. >> Just like one of our meetings. So, I thought I'd start with a little bit of a state of the state on AWS. Can you give us your quick take? >> Yeah, well, first of all, thank you, everyone, for being here. We really appreciate it. We know how busy you guys are. So, hope you're having a good day. You know, the business is growing really quickly. In the last financials, we released, in Q four of '16, AWS is a 14 billion dollar revenue run rate business, growing 47% year over year. We have millions of active customers, and we consider an active customer as a non-Amazon entity that's used the platform in the last 30 days. And it's really a very broad, diverse customer set, in every imaginable size of customer and every imaginable vertical business segment. And I won't repeat all the customers that I know Werner went through earlier in the keynote, but here are just some of the more recent ones that you've seen, you know NELL is moving their their digital and their connected devices, meters, real estate to AWS. McDonalds is re-inventing their digital platform on top of AWS. FINRA is moving all in to AWS, yeah. You see at Reinvent, Workday announced AWS was its preferred cloud provider, and to start building on top of AWS further. Today, in press releases, you saw both Dunkin Donuts and Here, the geo-spatial map company announced they'd chosen AWS as their provider. You know and then I think if you look at our business, we have a really large non-US or global customer base and business that continues to expand very dramatically. And we're also aggressively increasing the number of geographic regions in which we have infrastructure. So last year in 2016, on top of the broad footprint we had, we added Korea, India, and Canada, and the UK. We've announced that we have regions coming, another one in China, in Ningxia, as well as in France, as well as in Sweden. So we're not close to being done expanding geographically. And then of course, we continue to iterate and innovate really quickly on behalf of all of you, of our customers. I mean, just last year alone, we launched what we considered over 1,000 significant services and features. So on average, our customers wake up every day and have three new capabilities they can choose to use or not use, but at their disposal. You've seen it already this year, if you look at Chime, which is our new unified communication service. It makes meetings much easier to conduct, be productive with. You saw Connect, which is our new global call center routing service. If you look even today, you look at Redshift Spectrum, which makes it easy to query all your data, not just locally on disk in your data warehouse but across all of S3, or DAX, which puts a cash in front of DynamoDB, we use the same interface, or all the new features in our machine learning services. We're not close to being done delivering and iterating on your behalf. 
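A brief aside on the DAX point in Andy's list above: DAX fronts DynamoDB with a cache while keeping the same item-level interface, so application code written against DynamoDB, like the minimal boto3 sketch below, is meant to carry over largely unchanged when the cache is added. The "orders" table is hypothetical, and AWS credentials and region are assumed to be configured in the environment.

```python
# Minimal DynamoDB read/write with boto3. The point about DAX is that the
# cached path keeps this same item-level interface, so code like this should
# not have to change much when a cache is put in front of the table.
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("orders")          # hypothetical table keyed on order_id

table.put_item(Item={"order_id": "1001", "status": "shipped"})
resp = table.get_item(Key={"order_id": "1001"})
print(resp.get("Item"))
```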
And I think if you look at that collection of things, it's part of why, as Gartner looks out at the infrastructure space, they estimate the AWS is several times the size business of the next 14 providers combined. It's a pretty significant market segment leadership position. >> You talked a lot about adopts in there, a lot of customers moving to AWS, migrating large numbers of workloads, some going all in on AWS. And with that as kind of backdrop, do you still see a role for hybrid as being something that's important for customers? >> Yeah, it's funny. The quick answer is yes. I think the, you know, if you think about a few years ago, a lot of the rage was this debate about private cloud versus what people call public cloud. And we don't really see that debate very often anymore. I think relatively few companies have had success with private clouds, and most are pretty substantially moving in the direction of building on top of clouds like AWS. But, while you increasingly see more and more companies every month announcing that they're going all in to the cloud, we will see most enterprises operate in some form of hybrid mode for the next number of years. And I think in the early days of AWS and the cloud, I think people got confused about this, where they thought that they had to make this binary decision to either be all in on the public cloud and AWS or not at all. And of course that's not the case. It's not a binary decision. And what we know many of our enterprise customers want is they want to be able to run the data centers that they're not ready to retire yet as seamlessly as they can alongside of AWS. And it's why we've built a lot of the capabilities we've built the last several years. These are things like PPC, which is our virtual private cloud, which allows you to cordon off a portion of our network, deploy resources into it and connect to it through VPN or Direct Connect, which is a private connection between your data centers and our regions or our storage gateway, which is a virtual storage appliance, or Identity Federation, or a whole bunch of capabilities like that. But what we've seen, even though the vast majority of the big hybrid implementations today are built on top of AWS, as more and more of the mainstream enterprises are now at the point where they're really building substantial cloud adoption plans, they've come back to us and they've said, well, you know, actually you guys have made us make kind of a binary decision. And that's because the vast majority of the world is virtualized on top of VMWare. And because VMWare and AWS, prior to a few months ago, had really done nothing to try and make it easy to use the VMWare tools that people have been using for many years seamlessly with AWS, customers were having to make a binary choice. Either they stick with the VMWare tools they've used for a while but have a really tough time integrating with AWS, or they move to AWS and they have to leave behind the VMWare tools they've been using. And it really was the impetus for VMWare and AWS to have a number of deep conversations about it, which led to the announcement we made late last fall of VMWare and AWS, which is going to allow customers who have been using the VMWare tools to manage their infrastructure for a long time to seamlessly be able to run those on top of AWS. And they get to do so as they move workloads back and forth and they evolve their hybrid implementation without having to buy any new hardware, which is a big deal for companies. 
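As an aside on the hybrid building blocks Andy lists above, VPC, VPN, and Direct Connect: the VPC plus site-to-site VPN piece can be sketched with boto3 as below. The CIDR blocks, BGP ASN, and on-premises gateway IP are placeholders, and the Direct Connect path is omitted here because it involves physical circuit provisioning rather than a pure API call.

```python
# Sketch of the VPC + site-to-site VPN pattern used for hybrid: cordon off a
# private network in AWS and connect it back to an on-premises data center.
import boto3

ec2 = boto3.client("ec2")

vpc = ec2.create_vpc(CidrBlock="10.20.0.0/16")["Vpc"]
subnet = ec2.create_subnet(VpcId=vpc["VpcId"], CidrBlock="10.20.1.0/24")["Subnet"]

# AWS side of the tunnel
vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]
ec2.attach_vpn_gateway(VpcId=vpc["VpcId"], VpnGatewayId=vgw["VpnGatewayId"])

# On-premises side of the tunnel (the data center's public IP and BGP ASN)
cgw = ec2.create_customer_gateway(
    Type="ipsec.1", PublicIp="203.0.113.10", BgpAsn=65000
)["CustomerGateway"]

vpn = ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId=cgw["CustomerGatewayId"],
    VpnGatewayId=vgw["VpnGatewayId"],
    Options={"StaticRoutesOnly": True},
)
print("VPN connection:", vpn["VpnConnection"]["VpnConnectionId"])
```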
Very few companies are looking to find ways to buy more hardware these days. And customers have been very excited about this prospect. We've announced that it's going to be ready in the middle of this year. You see companies like Amadeus and Merck and Western Digital and the state of Louisiana, a number of others, we've a very large, private beta and preview happening right now. And people are pretty excited about that prospect. So we will allow customers to run in the mode that they want to run, and I think you'll see a huge transition over the next five to 10 years. >> So in addition to hybrid, another question we get a lot from enterprises around the concept of lock-in and how they should think about their relationship with the vendor and how they should think about whether to spread the workloads across multiple infrastructure providers. How do you think about that? >> Well, it's a question we get a lot. And Oracle has sure made people care about that issue. You know, I think people are very sensitive about being locked in, given the experience that they've had over the last 10 to 15 years. And I think the reality is when you look at the cloud, it really is nothing like being locked into something like Oracle. The APIs look pretty similar between the various providers. We build an open standard, it's like Linux and MySQL and Postgres. All the migration tools that we build allow you to migrate in or out of AWS. It's up to customers based on how they want to run their workload. So it is much easier to move away from something like the cloud than it is from some of the old software services that has created some of this phobia. But I think when you look at most CIOs, enterprise CIOs particularly, as they think about moving to the cloud, many of them started off thinking that they, you know, very well might split their workloads across multiple cloud providers. And I think when push comes to shove, very few decide to do so. Most predominately pick an infrastructure provider to run their workloads. And the reason that they don't split it across, you know, pretty evenly across clouds is a few reasons. Number one, if you do so, you have to standardize in the lowest common denominator. And these platforms are in radically different stages at this point. And if you look at something like AWS, it has a lot more functionality than anybody else by a large margin. And we're also iterating more quickly than you'll find from the other providers. And most folks don't want to tie the hands of their developers behind their backs in the name of having the ability of splitting it across multiple clouds, cause they actually are, in most of their spaces, competitive, and they have a lot of ideas that they want to actually build and invent on behalf of their customers. So, you know, they don't want to actually limit their functionality. It turns out the second reason is that they don't want to force their development teams to have to learn multiple platforms. And most development teams, if any of you have managed multiple stacks across different technologies, and many of us have had that experience, it's a pain in the butt. And trying to make a shift from what you've been doing for the last 30 years on premises to the cloud is hard enough. But then forcing teams to have to get good at running across two or three platforms is something most teams don't relish, and it's wasteful of people's time, it's wasteful of natural resources. That's the second thing. 
And then the third reason is that you effectively diminish your buying power because all of these cloud providers have volume discounts, and then you're splitting what you buy across multiple providers, which gives you a lower amount you buy from everybody at a worse price. So when most CIOs and enterprises look at this carefully, they don't actually end up splitting it relatively evenly. They predominately pick a cloud provider. Some will just pick one. Others will pick one and then do a little bit with a second, just so they know they can run with a second provider, in case that relationship with the one they choose to predominately run with goes sideways in some fashion. But when you really look at it, CIOs are not making that decision to split it up relatively evenly because it makes their development teams much less capable and much less agile. >> Okay, let's shift gears a little bit, talk about a subject that's on the minds of not just enterprises but startups and government organizations and pretty much every organization we talk to. And that's AI and machine learning. Reinvent, we introduced our Amazon AI services and just this morning Werner announced the general availability of Amazon Lex. So where are we overall on machine learning? >> Well it's a hugely exciting opportunity for customers, and I think, we believe it's exciting for us as well. And it's still in the relatively early stages, if you look at how people are using it, but it's something that we passionately believe is going to make a huge difference in the world and a huge difference with customers, and that we're investing a pretty gigantic amount of resource and capability for our customers. And I think the way that we think about, at a high level, the machine learning and deep learning spaces are, you know, there's kind of three macro layers of the stack. I think at that bottom layer, it's generally for the expert machine learning practitioners, of which there are relatively few in the world. It's a scarce resource relative to what I think will be the case in five, 10 years from now. And these are folks who are comfortable working with deep learning engines, know how to build models, know how to tune those models, know how to do inference, know how to get that data from the models into production apps. And for that group of people, if you look at the vast majority of machine learning and deep learning that's being done in the cloud today, it's being done on top of AWS, are P2 instances, which are optimized for deep learning and our deep learning AMIs, that package, effectively the deep learning engines and libraries inside those AMIs. And you see companies like Netflix, Nvidia, and Pinterest and Stanford and a whole bunch of others that are doing significant amounts of machine learning on top of those optimized instances for machine learning and the deep learning AMIs. And I think that you can expect, over time, that we'll continue to build additional capabilities and tools for those expert practitioners. I think we will support and do support every single one of the deep learning engines on top of AWS, and we have a significant amount of those workloads with all those engines running on top of AWS today. We also are making, I would say, a disproportionate investment of our own resources and the MXNet community just because if you look at running deep learning models once you get beyond a few GPUs, it's pretty difficult to have those scale as you get into the hundreds of GPUs. 
And most of the deep learning engines don't scale very well horizontally. And so what we've found through a lot of extensive testing, cause remember, Amazon has thousands of deep learning experts inside the company that have built very sophisticated deep learning capabilities, like the ones you see in Alexa, we have found that MXNet scales the best and almost linearly, as we continue to add nodes, as we continue to horizontally scale. So we have a lot of investment at that bottom layer of the stack. Now, if you think about most companies with developers, it's still largely inaccessible to them to do the type of machine learning and deep learning that they'd really like to do. And that's because the tools, I think, are still too primitive. And there's a number of services out there, we built one ourselves in Amazon Machine Learning that we have a lot of customers use, and yet I would argue that all of those services, including our own, are still more difficult than they should be for everyday developers to be able to build machine learning and access machine learning and deep learning. And if you look at the history of what AWS has done, in every part of our business, and a lot of what's driven us, is trying to democratize technologies that were really only available and accessible before to a select, small number of companies. And so we're doing a lot of work at what I would call that middle layer of the stack to get rid of a lot of the muck associated with having to do, you know, building the models, tuning the models, doing the inference, figuring how to get the data into production apps, a lot of those capabilities at that middle layer that we think are really essential to allow deep learning and machine learning to reach its full potential. And then at the top layer of the stack, we think of those as solutions. And those are things like, pass me an image and I'll tell you what that image is, or show me this face, does it match faces in this group of faces, or pass me a string of text and I'll give you an mpg file, or give me some words and what your intent is and then I'll be able to return answers that allow people to build conversational apps like the Lex technology. And we have a whole bunch of other services coming in that area, atop of Lex and Polly and Recognition, and you can imagine some of those that we've had to use in Amazon over the years that we'll continue to make available for you, our customers. So very significant level of investment at all three layers of that stack. We think it's relatively early days in the space but have a lot of passion and excitement for that. >> Okay, now for ML and AI, we're seeing customers wanting to load in tons of data, both to train the models and to actually process data once they've built their models. And then outside of ML and AI, we're seeing just as much demand to move in data for analytics and traditional workloads. So as people are looking to move more and more data to the cloud, how are we thinking about making it easier to get data in? >> It's a great question. And I think it's actually an often overlooked question because a lot of what gets attention with customers is all the really interesting services that allow you to do everything from compute and storage and database and messaging and analytics and machine learning and AI. 
But at the end of the day, if you have a significant amount of data already somewhere else, you have to get it into the cloud to be able to take advantage of all these capabilities that you don't have on premises. And so we have spent a disproportionate amount of focus over the last few years trying to build capabilities for our customers to make this easier. And we have a set of capabilities that really is not close to matched anywhere else, in part because we have so many customers who are asking for help in this area that it's, you know, that's really what drives what we build. So of course, you could use the good old-fashioned wire to send data over the internet. Increasingly, we find customers that are trying to move large amounts of data into S3, is using our S3 transfer acceleration service, which basically uses our points of presence, or POPs, all over the world to expedite delivery into S3. You know, a few years ago, we were talking to a number of companies that were looking to make big shifts to the cloud, and they said, well, I need to move lots of data that just isn't viable for me to move it over the wire, given the connection we can assign to it. It's why we built Snowball. And so we launched Snowball a couple years ago, which is really, it's a 50 terabyte appliance that is encrypted, the data's encrypted three different ways, and you ingest the data from your data center into Snowball, it has a Kindle connected to it, it allows you to, you know, that makes sure that you send it to the right place, and you can also track the progress of your high-speed ingestion into our data centers. And when we first launched Snowball, we launched it at Reinvent a couple years ago, I could not believe that we were going to order as many Snowballs to start with as the team wanted to order. And in fact, I reproached the team and I said, this is way too much, why don't we first see if people actually use any of these Snowballs. And so the team thankfully didn't listen very carefully to that, and they really only pared back a little bit. And then it turned out that we, almost from the get-go, had ordered 10X too few. And so this has been something that people have used in a very broad, pervasive way all over the world. And last year, at the beginning of the year, as we were asking people what else they would like us to build in Snowball, customers told us a few things that were pretty interesting to us. First, one that wasn't that surprising was they said, well, it would be great if they were bigger, you know, if instead of 50 terabytes it was more data I could store on each device. Then they said, you know, one of the problems is when I load the data onto a Snowball and send it to you, I have to still keep my local copy on premises until it's ingested, cause I can't risk losing that data. So they said it would be great if you could find a way to provide clustering, so that I don't have to keep that copy on premises. That was pretty interesting. And then they said, you know, there's some of that data that I'd actually like to be loading synchronously to S3, and then, or some things back from S3 to that data that I may want to compare against. That was interesting, having that endpoint. And then they said, well, we'd really love it if there was some compute on those Snowballs so I can do analytics on some relatively short-term signals that I want to take action on right away. 
Those were really the pieces of feedback that informed Snowball Edge, which is the next version of Snowball that we launched, announced at Reinvent this past November. So it has, it's a hundred-terabyte appliance, still the same level of encryption, and it has clustering so that you don't have to keep that copy of the data local. It allows you to have an endpoint to S3 to synchronously load data back and forth, and then it has a compute inside of it. And so it allows customers to use these on premises. I'll give you a good example. GE is using these for their wind turbines. And they collect all kinds of data from those turbines, but there's certain short-term signals they want to do analytics on in as close to real time as they can, and take action on those. And so they use that compute to do the analytics and then when they fill up that Snowball Edge, they detach it and send it back to AWS to do broad-scale analytics in the cloud and then just start using an additional Snowball Edge to capture that short-term data and be able to do those analytics. So Snowball Edge is, you know, we just launched it a couple months ago, again, amazed at the type of response, how many customers are starting to deploy those all over the place. I think if you have exabytes of data that you need to move, it's not so easy. An exabyte of data, if you wanted to move from on premises to AWS, would require 10,000 Snowball Edges. Those customers don't want to really manage a fleet of 10,000 Snowball Edges if they don't have to. And so, we tried to figure out how to solve that problem, and it's why we launched Snowmobile back at Reinvent in November, which effectively, it's a hundred-petabyte container on a 45-foot trailer that we will take a truck and bring out to your facility. It comes with its own power and its own network fiber that we plug in to your data center. And if you want to move an exabyte of data over a 10 gigabit per second connection, it would take you 26 years. But using 10 Snowmobiles, it would take you six months. So really different level of scale. And you'd be surprised how many companies have exabytes of data at this point that they want to move to the cloud to get all those analytics and machine learning capabilities running on top of them. Then for streaming data, as we have more and more companies that are doing real-time analytics of streaming data, we have Kinesis, where we built something called the Kinesis Firehose that makes it really simple to stream all your real-time data. We have a storage gateway for companies that want to keep certain data hot, locally, and then asynchronously be loading the rest of their data to AWS to be able to use in different formats, should they need it as backup or should they choose to make a transition. So it's a very broad set of storage capabilities. And then of course, if you've moved a lot of data into the cloud or into anything, you realize that one of the hardest parts that people often leave to the end is ETL. And so we have announced an ETL service called Glue, which we announced at Reinvent, which is going to make it much easier to move your data, be able to find your data and map your data to different locations and do ETL, which of course is hugely important as you're moving large amounts. >> So we've talked a lot about moving things to the cloud, moving applications, moving data. But let's shift gears a little bit and talk about something not on the cloud, connected devices. >> Yeah. >> Where do they fit in and how do you think about edge? 
>> Well, you know, I've been working on AWS since the start of AWS, and we've been in the market for a little over 11 years at this point. And we have encountered, as I'm sure all of you have, many buzzwords. And of all the buzzwords that everybody has talked about, I think I can make a pretty strong argument that the one that has delivered fastest on its promise has been IOT and connected devices. Just amazing to me how much is happening at the edge today and how fast that's changing with device manufacturers. And I think that if you look out 10 years from now, when you talk about hybrid, I think most companies, majority on premise piece of hybrid will not be servers, it will be connected devices. There are going to be billions of devices all over the place, in your home, in your office, in factories, in oil fields, in agricultural fields, on ships, in cars, in planes, everywhere. You're going to have these assets that sit at the edge that companies are going to want to be able to collect data on, do analytics on, and then take action. And if you think about it, most of these devices, by their very nature, have relatively little CPU and have relatively little disk, which makes the cloud disproportionately important for them to supplement them. It's why you see most of the big, successful IOT applications today are using AWS to supplement them. Illumina has hooked up their genome sequencing to AWS to do analytics, or you can look at Major League Baseball Statcast is an IOT application built on top of AWS, or John Deer has over 200,000 telematically enabled tractors that are collecting real-time planting conditions and information that they're doing analytics on and sending it back to farmers so they can figure out where and how to optimally plant. Tata Motors manages their truck fleet this way. Phillips has their smart lighting project. I mean, there're innumerable amounts of these IOT applications built on top of AWS where the cloud is supplementing the device's capability. But when you think about these becoming more mission-critical applications for companies, there are going to be certain functions and certain conditions by which they're not going to want to connect back to the cloud. They're not going to want to take the time for that round trip. They're not going to have connectivity in some cases to be able to make a round trip to the cloud. And what they really want is customers really want the same capabilities they have on AWS, with AWS IOT, but on the devices themselves. And if you've ever tried to develop on these embedded devices, it's not for mere mortals. It's pretty delicate and it's pretty scary and there's a lot of archaic protocols associated with it, pretty tough to do it all and to do it without taking down your application. And so what we did was we built something called Greengrass, and we announced it at Reinvent. And Greengrass is really like a software module that you can effectively have inside your device. And it allows developers to write lambda functions, it's got lambda inside of it, and it allows customers to write lambda functions, some of which they want to run in the cloud, some of which they want to run on the device itself through Greengrass. 
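Below is a rough sketch of the kind of Lambda function being described here, one that Greengrass runs on the device itself so short-term signals can be acted on locally while a summary still flows upstream. Topic names and the threshold are placeholders, and it assumes the AWS IoT Greengrass Core SDK for Python is packaged with the function; treat it as an illustration of the programming model under those assumptions, not a reference implementation.

```python
# Rough sketch of a Lambda function intended to run on a Greengrass core, in
# the spirit of the wind-turbine example: act on a local signal immediately,
# and still publish telemetry upstream for fleet-wide analytics.
import json
import greengrasssdk

iot = greengrasssdk.client("iot-data")

VIBRATION_LIMIT = 4.5  # illustrative threshold, not a real specification

def handler(event, context):
    reading = json.loads(event) if isinstance(event, str) else event
    if reading.get("vibration", 0.0) > VIBRATION_LIMIT:
        # Local, low-latency action without a round trip to the cloud
        iot.publish(topic="turbine/local/alarm",
                    payload=json.dumps({"turbine": reading.get("id"),
                                        "action": "feather_blades"}))
    # Periodic summary still goes upstream
    iot.publish(topic="turbine/telemetry", payload=json.dumps(reading))
    return {"status": "ok"}
```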
So they have a common programming model to build those functions, to take the signals they see and take the actions they want to take against that, which is really going to help, I think, across all these IOT devices to be able to be much more flexible and allow the devices and the analytics and the actions you take to be much smarter, more intelligent. It's also why we built Snowball Edge. Snowball Edge, if you think about it, is really a purpose-built Greengrass device. We have Greengrass, it's inside of the Snowball Edge, and you know, the GE wind turbine example is a good example of that. And so it's to us, I think it's the future of what the on-premises piece of hybrid's going to be. I think there're going to be billions of devices all over the place and people are going to want to interact with them with a common programming model like they use in AWS and the cloud, and we're continuing to invest very significantly to make that easier and easier for companies. >> We've talked about several feature directions. We talked about AI, machine learning, the edge. What are some of the other areas of investment that this group should care about? >> Well there's a lot. (laughs) That's not a suit question, Ariel. But there's a lot. I think, I'll name a few. I think first of all, as I alluded to earlier, we are not close to being done expanding geographically. I think virtually every tier-one country will have an AWS region over time. I think many of the emerging countries will as well. I think the database space is an area that is radically changing. It's happening at a faster pace than I think people sometimes realize. And I think it's good news for all of you. I think the database space over the last few decades has been a lonely place for customers. I think that they have felt particularly locked into companies that are expensive and proprietary and have high degrees of lock-in and aren't so customer-friendly. And I think customers are sick of it. And we have a relational database service that we launched many years ago and has many flavors that you can run. You can run MySQL, you can run Postgres, you can run MariaDB, you can run SQLServer, you can run Oracle. And what a lot of our customers kept saying to us was, could you please figure out a way to have a database capability that has the performance characteristics of the commercial-grade databases but the customer-friendly and pricing model of the more open engines like the MySQL and Postgres and MariaDB. What you do on your own, we do a lot of it at Amazon, but it's hard, I mean, it takes a lot of work and a lot of tuning. And our customers really wanted us to solve that problem for them. And it's why we spent several years building Aurora, which is our own database engine that we built, but that's fully compatible with MySQL and with Postgres. It's at least as fault tolerant and durable and performant as the commercial-grade databases, but it's a tenth of the cost of those. And it's also nice because if it turns out that you use Aurora and you decide for whatever reason you don't want to use Aurora anymore, because it's fully compatible with MySQL and Postgres, you just dump it to the community versions of those, and off you are. So there's really hardly any transition there. So that is the fastest-growing service in the history of AWS. I'm amazed at how quickly it's grown. I think you may have heard earlier, we've had 23,000 database migrations just in the last year or so. There's a lot of pent-up demand to have database freedom. 
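As an illustration of the compatibility point above, an Aurora MySQL-compatible endpoint speaks the ordinary MySQL wire protocol, so a stock community driver works unchanged; the endpoint, credentials, and schema below are placeholders, and PyMySQL is just one example of such a driver.

```python
import pymysql  # stock community MySQL driver; nothing Aurora-specific

# Placeholder endpoint and credentials. An Aurora MySQL-compatible cluster
# exposes the normal MySQL wire protocol, so this exact code would also work
# against a community MySQL server if you later "dump it to the community
# versions" as described above.
conn = pymysql.connect(
    host="my-cluster.cluster-abc123.us-east-1.rds.amazonaws.com",
    user="admin",
    password="example-password",
    database="orders",
)

try:
    with conn.cursor() as cur:
        cur.execute(
            "SELECT id, status FROM orders WHERE status = %s LIMIT 10",
            ("shipped",),
        )
        for row in cur.fetchall():
            print(row)
finally:
    conn.close()
```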
And we're here to help you have it. You know, I think on the analytic side, it's just never been easier and less expensive to collect, store, analyze, and share data than it is today. Part of that has to do with the economics of the cloud. But a lot of it has to do with the really broad analytics capability that we provide you. And it's a much broader capability than you'll find elsewhere. And you know, you can manage Hadoop and Spark and Presto and Hive and Pig and Yarn on top of AWS, or we have a managed elastic search service, and you know, of course we have a very high scale, very high performing data warehouse in Redshift, that just got even more performant with Spectrum, which now can query across all of your S3 data, and of course you have Athena, where you can query S3 directly. We have a service that allows you to do real-time analytics of streaming data in Kinesis. We have a business intelligence service in QuickSight. We have a number of machine learning capabilities I talked about earlier. It's a very broad array. And what we find is that it's a new day in analytics for companies. A lot of the data that companies felt like they had to throw away before, either because it was too expensive to hold or they didn't really have the tools accessible to them to get the learning from that data, it's a totally different day today. And so we have a pretty big investment in that space, I mentioned Glue earlier to do ETL on all that data. We have a lot more coming in that space. I think compute, super interesting, you know, I think you will find, I think we will find that companies will use full instances for many, many years and we have, you know, more than double the number of instances than you'll find elsewhere in every imaginable shape and size. But I would also say that the trend we see is that more and more companies are using smaller units of compute, and it's why you see containers becoming so popular. We have a really big business in ECS. And we will continue to build out the capability there. We have companies really running virtually every type of container and orchestration and management service on top of AWS at this point. And then of course, a couple years ago, we pioneered the event-driven serverless capability in compute that we call Lambda, which I'm just again, blown away by how many customers are using that for everything, in every way. So I think the basic unit of compute is continuing to get smaller. I think that's really good for customers. I think the ability to be serverless is a very exciting proposition that we're continuing to to fulfill that vision that we laid out a couple years ago. And then, probably, the last thing I'd point out right now is, I think it's really interesting to see how the basic procurement of software is changing. In significant part driven by what we've been doing with our Marketplace. If you think about it, in the old world, if you were a company that was buying software, you'd have to go find bunch of the companies that you should consider, you'd have to have a lot of conversations, you'd have to talk to a lot of salespeople. Those companies, by the way, have to have a big sales team, an expensive marketing budget to go find those companies and then go sell those companies and then both companies engage in this long tap-dance around doing an agreement and the legal terms and the legal teams and it's just, the process is very arduous. 
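Circling back briefly to the analytics services mentioned above, Athena's "query S3 directly" model amounts to submitting SQL against files that already sit in a bucket; here is a minimal sketch with boto3, where the database, table, and bucket names are hypothetical placeholders.

```python
import boto3  # AWS SDK for Python

athena = boto3.client("athena", region_name="us-east-1")

# Hypothetical database/table defined over files already sitting in S3;
# Athena reads them in place, so there is no cluster to provision.
response = athena.start_query_execution(
    QueryString=(
        "SELECT page, COUNT(*) AS hits "
        "FROM web_logs GROUP BY page ORDER BY hits DESC LIMIT 10"
    ),
    QueryExecutionContext={"Database": "clickstream"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print("Query started:", response["QueryExecutionId"])
```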
Then after you buy it, you have to figure out how you're going to actually package it, how you're going to deploy it to infrastructure and get it done, and it's just, I think in general, both consumers of software and sellers of software really don't like the process that's existed over the last few decades. And then you look at AWS Marketplace, and we have 3,500 product listings in there from 1,200 technology providers. If you look at the number of hours of that software that's been running on EC2 just in the last month alone, it's several hundred million hours, EC2 hours, of that software being run on top of our Marketplace. And it's just completely changing how software is bought and procured. I think that if you talk to a lot of the big sellers of software, like Splunk or Trend Micro, there's a whole number of them, they'll tell you it totally changes their ability to be able to sell. You know, one of the things that really helped AWS in the early days and still continues to help us, is that we have a self-service model where we don't actually have to have a lot of people talk to every customer to get started. I think if you're a seller of software, that's very appealing, to allow people to find your software and be able to buy it. And if you're a consumer, to be able to buy it quickly, again, without the hassle of all those conversations and the overhead associated with that, very appealing. And I think it's why the Marketplace has just exploded and taken off like it has. It's also really good, by the way, for systems integrators, who are often packaging things on top of that software for their clients. This makes it much easier to build kind of smaller catalogs of software products for their customers. I think when you layer on top of that the capabilities that we've announced to make it easier for SaaS providers to meter and to do billing and to do identity, it's just a very different world. And so I think that also is very exciting, both for companies and customers as well as software providers. >> We certainly touched on a lot here. And we have a lot going on, and you know, while we have customers asking us a lot about how they can use all these new services and new features, we also tend to get a lot of questions from customers on how we innovate so quickly, and how they can think about applying some of those lessons learned to their own businesses. >> So you're asking how we're able to innovate quickly? >> Mmm hmm. >> I think there's a few things that have helped us, and it's different for every company. But some of these might be helpful. I'll point to a few. I think the first thing is, I think we disproportionately index on hiring builders. And we think of builders as people who are inventors, people who look at different customer experiences really critically, are honest about what's flawed about them, and then seek to reinvent them. And then people who understand that launch is the starting line and not the finish line. There's very little that any of us ever built that's a home run right out of the gate. And so most things that succeed take a lot of listening to customers and a lot of experimentation and a lot of iterating before you get to an equation that really works. So the first thing is who we hire. I think the second thing is how we organize. And we have, at Amazon, long tried to organize into as small and separable and autonomous teams as we can, that have all the resources in those teams to own their own destiny.
And so for instance, the technologists and the product managers are part of the same team. And a lot of that is because we don't want the finger pointing that goes back and forth between the teams, and if they're on the same team, they focus all their energy on owning it together and understanding what customers need from them, spending a disproportionate amount of time with customers, and then they get to own their own roadmaps. One of the reasons we don't publish a 12 to 18 month roadmap is we want those teams to have the freedom, in talking to customers and listening to what you tell us matters, to re-prioritize if there are certain things that we assumed mattered more than it turns out it does. So, you know I think that the way that we organize is the second piece. I think a third piece is all of our teams get to use the same AWS building blocks that all of you get to use, which allow you to move much more quickly. And I think one of the least told stories about Amazon over the last five years, in part because people have gotten interested in AWS, is people have missed how fast our consumer business at Amazon has iterated. Look at the amount of invention in Amazon's consumer business. And they'll tell you that a big piece of that is their ability to use the AWS building blocks like they do. I think a fourth thing is many big companies, as they get larger, what starts to happen is what people call the institutional no, which is that leaders walk into meetings on new ideas looking to find ways to say no, and not because they're ill intended but just because they get more conservative or they have a lot on their plate or things are really managed very centrally, so it's hard to imagine adding more to what you're already doing. At Amazon, it's really the opposite, and in part because of the way we're organized in such a decoupled, decentralized fashion, and in part because it's just part of our DNA. When the leaders walk into a meeting, they are looking for ways to say yes. And we don't say yes to everything, we have a lot of proposals. But we say yes to a lot more than I think virtually any other company on the planet. And when we're having conversations with builders who are proposing new ideas, we're in a mode where we're trying to problem-solve with them to get to yes, which I think is really different. And then I think the last thing is that we have mechanisms inside the company that allow us to make fast decisions. And if you want a little bit more detail, you should read our founder and CEO Jeff Bezos's shareholder letter, which just was released. He talks about the fast decision-making that happens inside the company. It's really true. We make fast decisions and we're willing to fail. And you know, we sometimes talk about how we're working on several of our next biggest failures, and we hope that most of the things we're doing aren't going to fail, but we know, if you're going to push the envelope and if you're going to experiment at the rate that we're trying to experiment, to find more pillars that allow us to do more for customers and allow us to be more relevant, you are going to fail sometimes. And you have to accept that, and you have to have a way of evaluating people that recognizes the inputs, meaning the things that they actually delivered as opposed to the outputs, cause on new ventures, you don't know what the outputs are going to be, you don't know consumers or customers are going to respond to the new thing you're trying to build. 
So you have to be able to reward employees on the inputs, you have to have a way for them to continue to progress and grow in their career even if they work on something didn't work. And you have to have a way of thinking about, when things don't work, how do I take the technology that I built as part of that, that really actually does work, but I didn't get it right in the form factor, and use it for other things. And I think that when you think about a culture like Amazon, that disproportionately hires builders, organizes into these separable, autonomous teams, and allows them to use building blocks to move fast, and has a leadership team that's looking to say yes to ideas and is willing to fail, you end up finding not only do you do more inventing but you get the people at every level of the organization spending their free cycles thinking about new ideas because it actually pays to think of new ideas cause you get a shot to try it. And so that has really helped us and I think most of our customers who have made significant shifts to AWS and the cloud would argue that that's one of the big transformational things they've seen in their companies as well. >> Okay. I want to go a little bit deeper on the subject of culture. What are some of the things that are most unique about the AWS culture that companies should know about when they're looking to partner with us? >> Well, I think if you're making a decision on a predominant infrastructure provider, it's really important that you decide that the culture of the company you're going to partner with is a fit for yours. And you know, it's a super important decision that you don't want to have to redo multiple times cause it's wasted effort. And I think that, look, I've been at Amazon for almost 20 years at this point, so I have obviously drank the Kool Aid. But there are a few things that I think are truly unique about Amazon's culture. I'll talk about three of them. The first is I think that we are unusually customer-oriented. And I think a lot of companies talk about being customer-oriented, but few actually are. I think most of the big technology companies truthfully are competitor-focused. They kind of look at what competitors are doing and then they try to one-up one another. You have one or two of them that I would say are product-focused, where they say, hey, it's great, you Mr. and Mrs. Customer have ideas on a product, but leave that to the experts, and you know, you'll like the products we're going to build. And those strategies can be good ones and successful ones, they're just not ours. We are driven by what customers tell us matters to them. We don't build technology for technology's sake, we don't become, you know, smitten by any one technology. We're trying to solve real problems for our customers. 90% of what we build is driven by what you tell us matters. And the other 10% is listening to you, and even if you can't articulate exactly what you want, trying to read between the lines and invent on your behalf. So that's the first thing. Second thing is that we are pioneers. We really like to invent, as I was talking about earlier. And I think most big technology companies at this point have either lost their will or their DNA to invent. Most of them acquire it or fast follow. And again, that can be a successful strategy. It's just not ours. 
I think in this day and age, where we're going through as big a shift as we are in the cloud, which is the biggest technology shift in our lifetime, as dynamic as it is, being able to partner with a company that has the most functionality, it's iterating the fastest, has the most customers, has the largest ecosystem of partners, has SIs and ISPs, that has had a vision for how all these pieces fit together from the start, instead of trying to patch them together in a following act, you have a big advantage. I think that the third thing is that we're unusually long-term oriented. And I think that you won't ever see us show up at your door the last day of a quarter, the last day of a year, trying to harass you into doing some kind of deal with us, not to be heard from again for a couple years when we either audit you or try to re-up you for a deal. That's just not the way that we will ever operate. We are trying to build a business, a set of relationships, that will outlast all of us here. And I think something that always ties it together well is this trusted advisor capability that we have inside our support function, which is, you know, we look at dozens of programmatic ways that our customers are using the platform and reach out to you if you're doing something we think's suboptimal. And one of the things we do is if you're not fully utilizing resources, or hardly, or not using them at all, we'll reach out and say, hey, you should stop paying for this. And over the last couple of years, we've sent out a couple million of these notifications that have led to actual annualized savings for customers of 350 million dollars. So I ask you, how many of your technology partners reach out to you and say stop spending money with us? To the tune of 350 million dollars lost revenue per year. Not too many. And I think when we first started doing it, people though it was gimmicky, but if you understand what I just talked about with regard to our culture, it makes perfect sense. We don't want to make money from customers unless you're getting value. We want to reinvent an experience that we think has been broken for the prior few decades. And then we're trying to build a relationship with you that outlasts all of us, and we think the best way to do that is to provide value and do right by customers over a long period of time. >> Okay, keeping going on the culture subject, what about some of the quirky things about Amazon's culture that people might find interesting or useful? >> Well there are a lot of quirky parts to our culture. And I think any, you know lots of companies who have strong culture will argue they have quirky pieces but I think there's a few I might point to. You know, I think the first would be the first several years I was with the company, I guess the first six years or so I was at the company, like most companies, all the information that was presented was via PowerPoint. And we would find that it was a very inefficient way to consume information. You know, you were often shaded by the charisma of the presenter, sometimes you would overweight what the presenters said based on whether they were a good presenter. And vice versa. You would very rarely have a deep conversation, cause you have no room on PowerPoint slides to have any depth. You would interrupt the presenter constantly with questions that they hadn't really thought through cause they didn't think they were going to have to present that level of depth. 
You constantly have the, you know, you'd ask the question, oh, I'm going to get to that in five slides, you want to do that now or you want to do that in five slides, you know, it was just maddening. And we would often find that most of the meetings required multiple meetings. And so we made a decision as a company to effectively ban PowerPoints as a communication vehicle inside the company. Really the only time I do PowerPoints is at Reinvent. And maybe that shows. And what we found is that it's a much more substantive and effective and time-efficient way to have conversations because there is no way to fake depth in a six-page narrative. So what we went to from PowerPoint was six-page narrative. You can write, have as much as you want in the appendix, but you have to assume nobody will read the appendices. Everything you have to communicate has to be done in six pages. You can't fake depth in a six-page narrative. And so what we do is we all get to the room, we spend 20 minutes or so reading the document so it's fresh in everybody's head. And then where we start the conversation is a radically different spot than when you're hearing a presentation one kind of shallow slide at a time. We all start the conversation with a fair bit of depth on the topic, and we can really hone in on the three or four issues that typically matter in each of these conversations. So we get to the heart of the matter and we can have one meeting on the topic instead of three or four. So that has been really, I mean it's unusual and it takes some time getting used to but it is a much more effective way to pay attention to the detail and have a substantive conversation. You know, I think a second thing, if you look at our working backwards process, we don't write a lot of code for any of our services until we write and refine and decide we have crisp press release and frequently asked question, or FAQ, for that product. And in the press release, what we're trying to do is make sure that we're building a product that has benefits that will really matter. How many times have we all gotten to the end of products and by the time we get there, we kind of think about what we're launching and think, this is not that interesting. Like, people are not going to find this that compelling. And it's because you just haven't thought through and argued and debated and made sure that you drew the line in the right spot on a set of benefits that will really matter to customers. So that's why we use the press release. The FAQ is to really have the arguments up front about how you're building the product. So what technology are you using? What's the architecture? What's the customer experience? What's the UI look like? What's the pricing dimensions? Are you going to charge for it or not? All of those decisions, what are people going to be most excited about, what are people going to be most disappointed by. All those conversations, if you have them up front, even if it takes you a few times to go through it, you can just let the teams build, and you don't have to check in with them except on the dates. And so we find that if we take the time up front we not only get the products right more often but the teams also deliver much more quickly and with much less churn. And then the third thing I'd say that's kind of quirky is it is an unusually truth-seeking culture at Amazon. I think we have a leadership principle that we say have backbone, disagree, and commit. 
And what it means is that we really expect people to speak up if they believe that we're headed down a path that's wrong for customers, no matter who is advancing it, what level in the company, everybody is empowered and expected to speak up. And then once we have the debate, then we all have to pull the same way, even if it's a different way than you were advocating. And I think, you always hear the old adage of where, two people look at a ceiling and one person says it's 14 feet and the other person says, it's 10 feet, and they say, okay let's compromise, it's 12 feet. And of course, it's not 12 feet, there is an answer. And not all things that we all consider has that black and white answer, but most things have an answer that really is more right if you actually assess it and debate it. And so we have an environment that really empowers people to challenge one another and I think it's part of why we end up getting to better answers, cause we have that level of openness and rigor. >> Okay, well Andy, we have time for one more question. >> Okay. >> So other than some of the things you've talked about, like customer focus, innovation, and long-term orientation, what is the single most important lesson that you've learned that is really relevant to this audience and this time we're living in? >> There's a lot. But I'll pick one. I would say I'll tell a short story that I think captures it. In the early days at Amazon, our sole business was what we called an owned inventory retail business, which meant we bought the inventory from distributors or publishers or manufacturers, stored it in our own fulfillment centers and shipped it to customers. And around the year 1999 or 2000, this third party seller model started becoming very popular. You know, these were companies like Half.com and eBay and folks like that. And we had a really animated debate inside the company about whether we should allow third party sellers to sell on the Amazon site. And the concerns internally were, first of all, we just had this fundamental belief that other sellers weren't going to care as much about the customer experience as we did cause it was such a central part of everything we did DNA-wise. And then also we had this entire business and all this machinery that was built around owned inventory business, with all these relationships with publishers and distributors and manufacturers, who we didn't think would necessarily like third party sellers selling right alongside us having bought their products. And so we really debated this, and we ultimately decided that we were going to allow third party sellers to sell in our marketplace. And we made that decision in part because it was better for customers, it allowed them to have lower prices, so more price variety and better selection. But also in significant part because we realized you can't fight gravity. If something is going to happen, whether you want it to happen or not, it is going to happen. And you are much better off cannibalizing yourself or being ahead of whatever direction the world is headed than you are at howling at the wind or wishing it away or trying to put up blockers and find a way to delay moving to the model that is really most successful and has the most amount of benefits for the customers in question. And that turned out to be a really important lesson for Amazon as a company and for me, personally, as well. 
You know, in the early days of doing Marketplace, we had all kinds of folks, even after we made the decision, that despite the have backbone, disagree and commit weren't really sure that they believed that it was going to be a successful decision. And it took several months, but thankfully we really were vigilant about it, and today in roughly half of the units we sell in our retail business are third party seller units. Been really good for our customers. And really good for our business as well. And I think the same thing is really applicable to the space we're talking about today, to the cloud, as you think about this gigantic shift that's going on right now, moving to the cloud, which is, you know, I think in the early days of the cloud, the first, I'll call it six, seven, eight years, I think collectively we consumed so much energy with all these arguments about are people going to move to the cloud, what are they going to move to the cloud, will they move mission-critical applications to the cloud, will the enterprise adopt it, will public sector adopt it, what about private cloud, you know, we just consumed a huge amount of energy and it was, you can see both in the results in what's happening in businesses like ours, it was a form of fighting gravity. And today we don't really have if conversations anymore with our customers. They're all when and how and what order conversations. And I would say that this going to be a much better world for all of us, because we will be able to build in a much more cost effective fashion, we will be able to build much more quickly, we'll be able to take our scarce resource of engineers and not spend their resource on the undifferentiated heavy lifting of infrastructure and instead on what truly differentiates your business. And you'll have a global presence, so that you have lower latency and a better end user customer experience being deployed with your applications and infrastructure all over the world. And you'll be able to meet the data sovereignty requirements of various locales. So I think it's a great world that we're entering right now, I think we're at a time where there's a lot less confusion about where the world is headed, and I think it's an unprecedented opportunity for you to reinvent your businesses, reinvent your applications, and build capabilities for your customers and for your business that weren't easily possible before. And I hope you take advantage of it, and we'll be right here every step of the way to help you. Thank you very much. I appreciate it. (applause) >> Thank you, Andy. And thank you, everyone. I appreciate your time today. >> Thank you. (applause) (upbeat music)
Lowell Anderson, AWS - AWS Summit SF 2017 - #AWSSummit - #theCUBE
>> Narrator: Live from San Francisco, it's The Cube! Covering AWS Summit 2017, brought to you by Amazon Web Services. (upbeat music) >> Hi, welcome back to The Cube. We are live in San Francisco at the AWS Summit at Moscone Center. Really excited to be here. A tremendous amount of buzz going on. I'm Lisa Martin with my cohost George Gilbert and we're very excited to have Lowell Anderson, product marketing guru at AWS. Welcome back, Cube alumni! >> Lowell: It's great to be here, Lisa, thank you. >> Great to have you here as well. The keynote this morning was so energetic with Werner and Nextdoor is going to be on the program in a little bit. Over a thousand product launches last year. Not only are there superpowers now that AWS, I like that. You don't have a T-shirt, but maybe next time. But I think the word that I heard most today so far is customer. And I think that it's such a, and as AWS really talks about, it's a really differentiated way of thinking, of doing business. I'd love to understand what the products that were announced today. Walk us through some of the key highlights there. Customer logos were everywhere. So talk to us about how customers are influencing the development of the new services and products coming from AWS. >> Yeah, well, you know, for us, customers are always core to what drives our innovation. It's how we start, we start with what our customers want, and we work backwards from that to try to deliver a lot of the new features and services that we talked about today. And Werner covered a huge breadth of things, but they really fall into maybe four or five categories. He started talking about, directly for developers, talking about what we're doing with a product called CodeStar, which is designed to really help developers build and deploy software applications in the Cloud. He also then went and talked about our new marketplace, SaaS Contracts' capability, which makes it super easy for customers to sign up and purchase SaaS applications using multi-year contracts on AWS, but it also makes it easier for ISVs to make their offerings available for our customers. So again, really trying to make that easy for customers. We talked a lot about what we're doing in artificial intelligence, with the general availability of Amazon Lex today, and then a really entertaining video with Polly, where we saw that avatar speaking and the new whispering capability, so adding a lot more value to our suite of artificial intelligence services. Some exciting stuff in analytics, where we talked about Redshift Spectrum, which is the new search capability on Amazon Redshift that allows customers to search not just the data in their Redshift database, but also search all the unstructured data they have in S3. And then some really exciting announcements here on the database space with DynamoDB DAX, which is an accelerator for DynamoDB. And we also talked about the availability of a new version of Aurora for Postgres. So a lot of new capabilities, both in databases, big data, analytics, machine learning and artificial intelligence, and our offerings for SaaS Contracts as well. >> And that was all before lunch. (laughs) >> Lowell: Yeah, a lot of stuff. >> Lowell, following up on, in order of, let's say the comments on AI and the announcements made there. Microsoft, Google, Amazon all have gone beyond frameworks and tools to fully trained services that a normal developer can get their hands around. 
But in the areas of conversational user interface, natural language understanding, image recognition. Why do you think that those three vendors, the three vendors have been able to make such progress in those areas, to make that capability accessible, and there's so many other areas where we're still down in the weeds? >> I think there's, we sort of see it in, sort of focusing in maybe three different areas that are really targeted at what our customers are asking for. We have some very sophisticated customers who really want to build their own deep learning and machine learning applications, and they want services like MXNet, which is a highly scalable deep learning framework, that they can do and build these deep learning models. So there's a very sophisticated, targeted customer who wants that. But we also have customers that want to build and train and create prediction algorithms, and they use Amazon Machine Learning, which is a managed service which allows them to look at their past transactional data and build prediction models from it. And then the third piece is kind of what you mentioned, which is services that are really designed for the average developer, so they can really easily add capabilities like chatbots and speech and visual recognition to their applications with a simple API interface. And I think what you touched on is, why did we focus here, Well I think, as Andy also talked about today, that it's really early days in this space. And we're going to see a really, really strong amount of innovation here. And Amazon, which has been doing this for many, many years, and thousands of developers focused on this in our retail side, we're really working hard to bring that technology out, so that our customers can use it. And Lex, which is based on Alexa, which we're all familiar with from using the Echo. Bringing that out and making that type of capability available for average developers to use is a piece of that. So I think you're just going to continue to see that and over the course of the next year you're going to see continued new services coming from us on machine learning and artificial intelligence, across all those three spectrums. >> So let me jump to another subject which is really a hot button for our customers, both on the vendor side and the enterprise side, which is the hybrid cloud, I don't know whether we should call it migration or journey or endpoint. But let's take a couple scenarios. Let's say you're a Hadoop customer, and you've got Cloudera on-prem, you're a big bank, you've put an instance of it on Amazon and on Azure so that you can move your data around and you're relatively free. >> Lowell: Sure. >> Now the big use case has been data warehouse offload. So all of a sudden you have two really great data warehouses that are best in class on Amazon. With Redshift, with now the significant expansion of it, and Snowflake, and then you have Teradata, which now can take their on-prem capabilities and put them on the Cloud. How does the customer weigh the cost/benefit of lowest common denominator versus-- >> Yeah, yeah, sure. I think for us and for our customers it's not a one-size-fits-all. Every customer approaches this differently, and so what our focus has been on is to give them the range of choice. So if you want to use Cloudera, you can deploy it on EC2 and you can manage that yourself, and that's going to work great for you. 
But if you want a more managed service, where maybe you don't want to have to manage the scalability of that Cloudera deployment, maybe you want to use EMR and deploy your Hadoop applications on EMR, which manages that scalability for you. And so you make those tradeoffs, and each of our customers makes those tradeoffs for different reasons and in different ways and at different times. And so our focus has always been to really try to give them that flexibility, to give them services where they can make the choice themselves about which direction they want to go for their individual applications, and maybe mix it up and try different ways of running these types of applications. And so we have a full range of those types of services, from the ability to deploy those directly onto EC2 and manage it themselves, all the way to fully managed services where we maintain all the scalability and management and monitoring ourselves. >> One of the interesting things that Andy Jassy said in his fireside chat just in the last hour or so about hybrid cloud was that most enterprises are going to operate in hybrid cloud for the next several years, and there are those customers that are going to have to, or want to, have their own data centers for whatever type of application. But something also that he brought up in that context, and I know you know a lot about this, George, is VMware. So when I was looking at the announcement that was made in the last six months or so about VMware, vSphere-based cloud services, VMware has just sold off their vCloud Air, kind of a competing product, I'm wondering with the VMware Cloud on Amazon, how does that... what are really the primary drivers there? Is that sort of a way to take those VMware customers eventually towards hybrid cloud, or is that an opportunity to maybe compete with some of the other guys who might have more traction in the legacy application migration space? >> I think for us, it's again, it comes back to our customers saying, some of our workloads have for a long period of time been deployed on VMware, and we've been using VMware ESX for many, many years on-premise, and we have these applications that have been deployed for many years there, and they're highly integrated, they use specific features of VMware, and maybe we also like using VMware's management tools, we like using vCloud to manage all of these different instances of our VMware virtualization platform, but we just want to run it in the Cloud, because we want that scalability. When you deploy that stuff on-premise, you're still kind of locked in. Every time you want to expand, you've got to go out and you've got to buy more hardware. You really don't have the agility to expand that business, both as it grows or as it declines. So you're paying for that hardware to power it and run it no matter what. And so they're telling us, we'd like to get some of this up into the Cloud, but we don't necessarily want to have to change them. We've built these apps, we're comfortable with how we're running them, but we want to run them up in the Cloud and we want to do it with low risk. And that's what this VMware relationship is about: letting those enterprises that have spent years building and maintaining and using VMware and their various management tools do that up in the Cloud. That's really what it's about. >> So let's switch gears to another topic that Andy talked about, since all his topics were topical. Edge computing and IIoT.
That's another big shift that's coming along and changing the architecture so we have more computing at the edge again, and huge amounts of data. Obviously there's many scenarios, but how do you think customers will basically think through this, or how should they think through how much analytics and capability is at the edge, that issue of should it look like what is in the Cloud? Or should it be really tight and light and embedded? >> I think we're seeing just an increasing range. And also a really interesting mix, where you have some very intelligent devices, your laptop and so on, that is connected to the Cloud and it has a pretty significant amount of processing power, and so there can be applications that run on that machine that are very sophisticated. But if we're going to start to expand that universe of edge devices out to simple sensors for pipelines, and simple ways to monitor the thermostat in your home, and simple ways to measure and monitor and track all sorts of, you know, automobiles and so on, that there's going to be a range of different on-premise or edge types of compute, that we need to support in the Cloud. And so I think what Andy's saying is that we want to build the Cloud to be the system that can act as the, has the analytics power to ingest data from these maybe tens of millions of different devices, which will have a range of different compute power, and support those applications on a case by case basis. >> We've got to wrap things up here, and I know this conversation could continue for many hours. I think what we've heard here today is a tremendous amount of innovation, and I made the joke, all announced before lunch, but really it was. We're seeing the flexibility, we're seeing the customers really drive the innovation. Also the fact that AWS starting in the startup space with the developers, that's still a very key target market for you, even as things go up to the enterprise. So continued best luck with everything going forward. We're excited to be at re:Invent in just, what, five or six months from now, and with many, many more thousands of people and hearing the great things that continue to come from the leader in public cloud. >> Lowell: All right. Thank you, Lisa. >> Thanks for joining us, Lowell, we appreciate it. Next time I want the superpower T-shirt. (laughs) >> (laughs) Okay, I'll take you up on that. >> All right. I'm Lisa Martin for my cohost George Gilbert. Thanks so much for watching, stick around. We are live at the AWS Summit in San Francisco, and we will be right back. (upbeat music)
Jerry Chen, Greylock | AWS Re:Invent 2013
Okay, welcome back. Day two of theCUBE here in Las Vegas, live. This is SiliconANGLE's exclusive coverage of Amazon Web Services re:Invent. I'm John Furrier with Dave Vellante, co-host of theCUBE. Dave, we've got our first segment here. We're pleased to have Jerry Chen, new venture capitalist, cloud guru, was at VMware, has been in the enterprise for a while. Jerry, welcome, welcome to theCUBE, and welcome back. >> Thanks for having me, guys. >> Cube alumni. How was Hong Kong? You're just back from OpenStack. >> I think Hong Kong was great. My body and time clock are someplace over the Pacific, so I don't know my jet lag, but thank God in Vegas I never need to leave the building, so I don't need to know what time zone I'm actually in. So it's good to be here. >> So Amazon's pushing the cloud hard. Obviously they are the cloud: huge market share on infrastructure as a service, check the boxes there. They've got like thirty-six percent. >> I think it's much higher than that, actually. Jassy was saying today, I mean, 5x the next 14, it's got to be higher than thirty-six percent, I think it's closer to seventy. But okay, that's infrastructure as a service. The action is platform as a service and SaaS. >> Yeah, and I've got to get your take on this. We're following OpenStack, you were just in Hong Kong, you've got Amazon public cloud, you've got OpenStack coming up. Is it a two-horse race right now? CloudStack's out there, but really OpenStack is like the enterprise hope, the great hope for the enterprise, with Amazon rolling out massive services. What's your take on the two, and is it a two-horse race, and what's the difference between the two? >> You know, I don't think it's a two-horse race yet, but Amazon is quickly becoming the Microsoft, the monopoly, of the public cloud at the rate they're going, and they have the size and scale that pretty soon it will be really hard to compete. I think only Google and maybe Microsoft in the public cloud space can really compete. But if you take a step back and look at your question, OpenStack versus Amazon: I was in Hong Kong last week at the OpenStack design summit, and OpenStack's philosophy is to be all things to all people, right? It's open source, multiple projects. Amazon's philosophy is they want to be one cloud for all people. So you saw their announcements today around enterprise use cases, desktop use cases, startup use cases, media use cases. They want to be one cloud for all people. So the race isn't over yet, but there are very different philosophies right now between the two different camps. >> Was there much talk about incorporating Amazon APIs into the whole OpenStack framework? Six months ago you heard a lot about that, we had a CrowdChat on that. What was the buzz there? >> You know, I'll be honest. To the point that you guys brought up earlier, the Amazon APIs almost are becoming a lingua franca for infrastructure as a service, but quite frankly, debating whether or not they're the right APIs isn't, I think, where the action is. The action, to the point you made, is around PaaS and other developer services. The actual APIs, if you do the APIs right, should be pretty easy for developers to adopt. You just create really great developer services around them: database services, storage services, security services. Those are what developers really care about. So I feel like you have what's sometimes called cloud-plus, there's infrastructure as a service plus,
and you've got SaaS-minus, you know, it's like what you have with Salesforce. >> Do you feel like we really need that PaaS layer, or does it just sort of bifurcate into one of those two? >> There's a school of thought that says the world goes into two worlds: a long tail of SaaS, so there's an app for everything, in which case you have SaaS or SaaS-minus, and then, you know, infrastructure or private cloud underneath the apps, and there's no middle ground for PaaS. I'm more towards the middle ground, because in a world where we have multiple SaaS providers and multiple clouds, I believe you're going to have multiple SaaS, multiple clouds, and you're going to need to integrate and stitch together a mash-up of applications, right? You have Workday for HCM, Salesforce for CRM applications, your own custom website running on Amazon. There are three different kinds of servers now. How are you going to connect the data, how are you going to move data around? There's going to be at least some kind of PaaS layer, integration layer, or cloud layer that needs to help stitch together this multi-cloud world. >> So you like the Pivotal play? >> I think the concept's right. I think Paul is a visionary, and a bunch of my friends work there. Their announcement yesterday was, I think, a step in the right direction, in that they're planting a flag saying that there has to be something beyond Amazon, there has to be a relevant private cloud initiative, be it VMware or OpenStack or someplace else, and let's create some services around it. And the angle they're taking around data and data services I think is probably the right bet, because all these new applications will need these data services to be relevant. >> We were talking about Pivotal yesterday, one of the things that we were critical on, but also hopeful, as you pointed out, it's early, right? So Pivotal, a mulligan on PaaS if you will, it's still early and it's really a new company, if you think about it, 1,600 employees, but new. But it was a window dressing announcement, it really wasn't, I mean, the same logos, come on, they're trying to overhype. And that's what people are talking about, saying hey guys, just be honest and say we're working as fast as we can, because Amazon is not going to break the enterprise right away. I mean, they also have a longer road going hard at the enterprise. They are going after IBM, we saw in the keynote that they called out IBM specifically, around some of the advertising there at the show. So Amazon is clearly trying to knock on the door of the enterprise. So the question we're asking and talking about is how much time is it till they proliferate the enterprise. I mean, they're in there now, toe in the water, a little beachhead, still not enterprise-ready in terms of the SLAs and the demands, or does it matter? So what's your take, how much time is really on the radar for Amazon, and when will the clock expire for the IBMs, HPs, Pivotals in terms of retooling? >> So I think the evolution around an enterprise public cloud like Amazon could take three potential paths. Path one, Amazon invests enough engineering and product talent to make their cloud enterprise-friendly: privacy, security, reliability. And they're hiring a bunch of folks, a bunch of folks from my old place VMware, to try to do that. That's path one. Path two is you see a category of startups out there trying to make Amazon more cloud- and enterprise-friendly, security, privacy, reliability, right? So that's path two. And at Greylock, as venture capitalists, we're
investing in a bunch of companies trying to make that happen. Or path three is developers out there engineer around the weaknesses of Amazon. So if Amazon isn't enterprise-friendly, they know Amazon's got a bunch of weaknesses around security and privacy, and they just architect their applications around those weaknesses. So I think those are the three evolutionary paths, and I think it's a race to see who wins, right, one, two, or three. >> Yeah, there's no doubt that Amazon is forcing the hand of the big guys, we're seeing that clearly. We have a question on our CrowdChat, go to crowdchat.net/reinvent, we've got a live crowd-sourced thought leader chat there, it all goes to Twitter and LinkedIn when you sign in. But the question, Jerry, to you is: how are cloud providers catering to provide low-latency access to developing markets like India, Indonesia, the Philippines, et cetera, given that the hurricanes just destroyed all the infrastructure, and considering there's huge potential explosive internet growth? So given that those new emerging markets are essentially refreshing their infrastructure, what is the cloud providers' take on that? Do you work in that area? Can you give an opinion on what's going on in those areas? >> Sure. I think the world is looking at two or three different clouds. You could say there's a US-dominated cloud, maybe a China-dominated cloud, and the rest of the world; generally a lot of analysts kind of segment the world into three major pockets. When you think about developing markets or other geographies like Asia, South Asia, or South America, huge markets, lots of developers, lots of applications, it's the reason why I think there's only a handful of providers that have the scope and the reach to go globally. I think Equinix, Rackspace, Google, Microsoft are all global footprint players. Everyone else, I think you're going to look at a federation of multiple players. So every region has a local telco cloud provider: it could be like NTT or Rakuten in Japan, it could be SingTel in Singapore and Southeast Asia. So I think you're going to see a global brand like Amazon, or VMware trying to franchise their own cloud, or Microsoft, and then I would see partnerships working between the different geographies. Maybe OpenStack is that partnership, maybe the Amazon API is the way different clouds communicate. It remains to be seen what that interface between the different geos looks like in the future. >> What do you see as IBM's role? I mean, first of all, do they have the global scale? Are you sort of purposefully leaving them out, or do you just not feel like they can compete on that global scale? And what do you see as their role in OpenStack? >> So, a bunch of questions there. IBM, I didn't mean to leave them out, they're definitely relevant, especially for the large enterprises. I think you're seeing enterprise adoption come from large startups, or small startups growing up in the cloud, as well as large enterprises that are looking to modernize their applications, and I think IBM has a great role to play from that top-down approach. I think IBM, between a combination of SoftLayer, which is their acquired cloud provider, combined with their global services and their consulting business, will be really relevant to large enterprises, in my mind. >> So talk about the Amazon enterprise march. Obviously they're talking about CloudTrail, which is kind of like a monitoring service, compliance-oriented, and also VDI. So you, you've been close to the VDI
You've been close to the VDI movement. >> I started VDI; I started the VDI movement. >> So, having been there, what's your take on that? Because that's very enterprise-oriented, and that's good for business. What are their chances there? >> Well, first, on the VDI market: we started that at VMware in '05, '06, and we coined the term VDI. I think it's a great service for large enterprises that need secure, managed desktops. I would have loved to see a VDI service from VMware or Amazon five, six, seven years ago, because now VDI is part of a larger solution; it's significant but not enough. Enterprises today care about their managed desktops, like VDI, but also about iPads, iOS devices, Android devices. They really want a holistically managed desktop or workspace environment. So if I were Amazon, I would expand beyond Windows to other operating systems and manage Android and iOS as well, if they're serious about managing enterprise workspaces. >> Do they have an advantage, in your opinion, despite the fact that they're so late to market? In essence they're starting around mobile developers, aren't they, whereas when you started that wasn't really a consideration, right? And Citrix sort of found its way there. >> I think, between them, Google is in a great position, because they own so much of the Android stack. If they want to create an enterprise-friendly, managed Android environment for Chromebooks and Android devices, they can start creating a bunch of great developer services, like a managed Google Drive secured on a Google cloud, and that could be pretty compelling; I don't know if they're going there. I also think Dropbox has a great opportunity to be that back-end platform, obviously a Greylock investment, but Dropbox has a huge opportunity to be that kind of managed, secure service across mobile devices and desktop devices. All of a sudden, the one overarching thing you have between Windows, iOS, and Android is your data, and Dropbox is on all three platforms. >> Jerry, we've got to get rolling and get to our next guest, but I want to ask you about what you're investing in at Greylock. Greylock, a tier-1 VC, you guys have done amazing deals; over the past decade Greylock has emerged from just a tier-1 VC to a mega success with great investments, and your enterprise team, and of course the consumer side, kicks ass. What's going on for you guys? What are you investing in, what are you looking at? Enterprise is not an easy game to invest in, obviously it's hard, but what are you doing, what are you looking for? >> Thinking about it across the categories, what's most relevant for this audience is that I'm really interested in startups that can either, (a), make Amazon a more enterprise-friendly cloud, or, (b), pose an alternative or a challenge to Amazon in the enterprise cloud space. You do that either by focusing on enterprise requirements, or by focusing on enterprise services, data, storage, security, that matter to enterprises, and doing that really, really well, better than VMware, better than Microsoft, better than Amazon. I think you can build a really big enterprise cloud business around those technology services.
>> You're essentially betting on that transformation from the world as we've known it, where everyone buys servers, to the cloud; they're all trying to find a path and a partner in that direction. And are you bullish on this integrated-stack offering? Obviously DevOps has been a big success; you see Facebook, you see Google, you see Amazon building their own gear, and they were kind of saying they're not playing in Open Compute, but that aside, DevOps is a software model. >> Absolutely. >> So comment on the integrated stack and how that's going to evolve, for both the mainstream and DevOps. >> Absolutely. You see this DevOps culture permeating first the development of applications and now how you manage your infrastructure. Look at what's happened with Open Compute and open-source switches, and I think the Open Compute Project made an announcement a couple of days ago: you're seeing that DevOps culture, the way teams manage and update their applications, permeate storage, compute, and now networking. That's going to be a common adoption curve throughout the cloud. The way DevOps technologies are getting adopted, from languages to frameworks to databases, is the same way we're seeing storage, compute, and networking technologies get adopted in this next cloud wave. >> What's your take on the "iPhone for the enterprise" metaphor for the Amazon cloud, with OpenStack being more like Android? We were talking about it earlier. OpenStack has a lot of legs right now, but it's very open, while the iPhone model, or Amazon, is kind of closed, or some say lock-in, but still, the apps aren't closed, right? >> So the metaphor was: iPhone is to Amazon as Android is to OpenStack. At a high level that kind of makes sense, but not really, because there's no Google behind OpenStack like there's a Google behind Android. Rackspace was an early leader, and is still a leader, in the OpenStack space, but there's also Red Hat and a bunch of other players, so as a result there's no single entity driving OpenStack the way Google drives Android; that's where the analogy breaks down. And as far as the Apple analogy to Amazon, I think Amazon is a lot more open than the iOS ecosystem, just from the fact that there's no governing board to approve your apps before they launch on Amazon. I can go stand up an EC2 instance, launch my application, and use it; I don't need to wait, and there's no 20-page approval process. So directionally it's more correct than not, but the analogy breaks down when you really get into it. >> And OpenStack, what's your outlook on OpenStack, real quick? >> I think OpenStack, holistically, is great; I'm more bullish on some of the sub-projects than others. They keep launching new projects, and some are better than others. The core pieces around compute and storage, and the API management, I'm bullish on, and I'm also bullish on what they're doing around containers, like Docker and CoreOS, and on adopting this next generation of cloud platforms. >> Well, we've got to go. We've got some fans out there who want to hear your take on VDI, so go tweet to @jerrychen, J-E-R-R-Y C-H-E-N. We've got to take a break here; we'd love to have you on a little longer, but we've got our next guest coming on. This is theCUBE, live in Las Vegas, day two of Amazon re:Invent, changing the cloud game and the enterprise, and we've got all the detailed coverage here on theCUBE. We'll be right back after this short break.
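Jerry's point above about the missing 20-page approval process is really a point about self-service provisioning: anyone with an AWS account can stand up compute with a single API call. Below is a minimal sketch of that using the boto3 SDK; the AMI ID, key pair, and region are placeholders, and AWS credentials are assumed to already be configured in the environment.

```python
import boto3

# Placeholders -- substitute a real AMI ID, key pair, and region for your account.
REGION = "us-east-1"
AMI_ID = "ami-xxxxxxxx"
KEY_NAME = "my-key-pair"

def launch_instance() -> str:
    """Launch a single small EC2 instance and return its instance ID.
    There is no approval workflow: the API call is the whole process."""
    ec2 = boto3.client("ec2", region_name=REGION)
    resp = ec2.run_instances(
        ImageId=AMI_ID,
        InstanceType="t2.micro",
        KeyName=KEY_NAME,
        MinCount=1,
        MaxCount=1,
    )
    instance_id = resp["Instances"][0]["InstanceId"]
    # Block until the instance is running, after which the application can be deployed to it.
    ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
    return instance_id

if __name__ == "__main__":
    print("Launched:", launch_instance())
```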