Dec 10th Keynote Analysis Dave Vellante & Dave Floyer | AWS re:Invent 2020
>> From around the globe, it's theCUBE, with digital coverage of AWS re:Invent 2020, sponsored by Intel, AWS, and our community partners.

>> Hi, this is Dave Vellante. Welcome back to theCUBE's continuous coverage of AWS re:Invent 2020, the virtual version of theCUBE and re:Invent. I'm here with David Floyer, who's the CTO of Wikibon, and we're going to break down today's infrastructure keynote, which was headlined by Peter DeSantis. David, good to see you.

>> Good to see you.

>> So David, we have a very tight timeframe, and I just want to cover a couple of things. Something that I've learned over many, many years of working with you is the statement, "it's all about recovery." And that really was the first part of Peter's discussion today. He laid out the operational practices of AWS, and he actually had some really interesting things up there. He used the "there's no compression algorithm for experience" line, but he talked a lot about availability, and he compared AWS's availability philosophy with some of its competitors'.

And he talked about generators being concurrently maintainable. He took it down to the batteries and the UPS, and the thing that impressed me most, the other thing that you've taught me over the years, is systems thinking: you've got to look at the entire system. One little component, Peter DeSantis emphasized, could have a huge blast radius. So what AWS tries to do is constrict that blast radius so he can sleep at night, with non-disruptive replacement of things like batteries. He talked a lot about synchronous versus asynchronous trade-offs. It was kind of async-versus-sync 101: with synchronous, you've got latency; with asynchronous, you've got data-loss exposure. So there was a lot of discussion around that. But what was most interesting is that he compared and contrasted AWS's philosophy on availability zones with the competition's. He didn't specifically call out Microsoft and Google, but he showed some screenshots of their websites, and the competition uses terms like "usually available" and "generally available," meaning that certain regions and availability zones may not be available. That's not the case with AWS. Your thoughts on that?

>> They have a very impressive track record, despite the outage the other day. But I think there is a big difference between general-purpose computing and mission-critical computing. When you've got to bring up databases and everything else like that, then I think there are other platforms, which in the long term AWS, in my view, should be embracing, that do a better job in mission-critical areas, in terms of bringing things up without losing data, and in recovery. So that's an area where I think AWS will need to partner, as it has in the past.

>> Yeah. So the other area of the keynote that was critical was custom silicon; he spent a lot of time on it, and you and I have talked about this a lot. Of course, AWS and Intel are huge partners, but we know that Intel owns its own fabs, while its competitors will outsource to other manufacturers. So Intel is motivated to put as much function on the real estate as possible to create general-purpose processors, and to get as much out of that real estate as they possibly can. That's the context for what AWS has been doing, and they certainly didn't throw Intel under the bus.
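Before moving on, here is a minimal sketch of the synchronous-versus-asynchronous trade-off DeSantis described. The `Replica` class, the queue-based writer, and the timings are illustrative assumptions, not AWS's implementation: a synchronous write pays the replica round trip on every call, while an asynchronous write acknowledges immediately and leaves whatever is still queued exposed to loss.

```python
# Illustrative sketch of the sync-vs-async replication trade-off.
# The Replica class and the 5 ms round trip are hypothetical.
import queue
import time

class Replica:
    """Stand-in for a remote replica with non-trivial network latency."""
    def __init__(self, rtt_seconds=0.005):   # assume a 5 ms round trip
        self.rtt = rtt_seconds
        self.data = []

    def write(self, record):
        time.sleep(self.rtt)                 # pay the network round trip
        self.data.append(record)

def synchronous_write(replica, record):
    # Caller blocks until the replica has the record: zero data-loss
    # exposure, but every write carries the full round-trip latency.
    replica.write(record)
    return "ack"

class AsynchronousWriter:
    # Caller gets an immediate ack; a background drain would ship records
    # later. Anything still in the queue is lost if the primary fails.
    def __init__(self, replica):
        self.replica = replica
        self.pending = queue.Queue()

    def write(self, record):
        self.pending.put(record)             # ack immediately, low latency
        return "ack"

    def data_loss_exposure(self):
        return self.pending.qsize()          # records at risk right now
```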
They were very complimentary and friendly, but they also laid out that they're developing a number of components in custom silicon. They talked about the Nitro controllers, and Inferentia, which is a specialized chip for inference, to run things like PyTorch and TensorFlow.

>> They talked about Trainium, the new training chip for training AI and ML models. They spent a lot of time on Graviton2, which is 64-bit; like you say, everything's 64-bit these days, but it's the Arm processor. They didn't specifically mention Moore's law, but they gave a microprocessor 101 overview, which I really enjoyed. They talked about the need to put on more and more cores, and then running multithreaded apps and the whole new programming models that brings. And they basically laid out the case that these specialized processors they're developing are more efficient. They talked about all these cores, the overhead that those cores bring, and the difficulty of keeping those cores busy.

And so they talked about simultaneous multithreading and sharing cores, which was like going back to the old days of microprocessor development. But the point being that as you add more cores and you have that overhead, you get non-linear performance improvements, and so it defeats the notion of scale-out, right? So what I want to get to, and get your take on, is this: you've been talking for a long, long time about Arm in the data center, and it reminds me of object storage. We talked for years about object storage, and it never went anywhere until Amazon brought forth the Simple Storage Service; then object storage obviously became mainstream storage. Now I see the same thing happening with Arm in the data center. Of course, alternative processors are taking off, but what's your take on all this? You listened to the keynote; give us your takeaways.

>> Well, let's go back to first principles for a second. Why is this happening? It's happening because of volume, volume, volume. Volume is incredibly important, obviously, in terms of cost. And if you look at volume, Arm is based on the volumes that came from handhelds and all of the mobile stuff that's been generated. So there are billions of chips being made on that.

>> Can I interrupt you for a second, David? So we're showing a slide here, and it relates to volume. I mean, we talk a lot about the volume that flash, for instance, gained from the consumer, and now we're talking about these emerging workloads. You call them matrix workloads; these are things like AI inferencing and edge workloads, and this gray area shows these alternative workloads. And that's really what Amazon is going after. So you show in this chart, you know, basically very small today, in 2020, but you show a very large and growing position by the end of this decade, really eating into the traditional space.

>> That's absolutely correct, and that's being led by what's happening in the mobile market.
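That non-linear scaling point is essentially Amdahl's law: if a fraction s of the work is serial, n cores give a speedup of 1/(s + (1-s)/n). A quick worked sketch follows; the serial fractions and core counts are illustrative assumptions, not figures from the keynote.

```python
# Amdahl's law: speedup(n) = 1 / (s + (1 - s) / n), where s is the
# serial (non-parallelizable) fraction of the work.
def amdahl_speedup(serial_fraction, cores):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

for s in (0.05, 0.10):
    for n in (8, 16, 32, 64):
        print(f"serial={s:.0%} cores={n:3d} "
              f"speedup={amdahl_speedup(s, n):5.2f}x")

# Even with only 10% serial work, 64 cores yield roughly 8.8x, not 64x:
# the non-linear improvement that motivates simpler, specialized chips.
```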
>> If you look at all of the work that's going on on your Apple iPhone, a huge amount of modern matrix workloads are running there to help you with your photography and everything like that. And that's going to come into the data center within two years. What AWS is focusing on is the capability of doing this type of new workload in real time. These workloads take hundreds of times more processing, and it's got to be done in real time.

>> Yeah. So we have a chart on that, this bar chart that you've produced. I don't know if you can see the bars here; I can't see them, but maybe we can editorialize. So on the left-hand side you basically have traditional workloads in blue, and you have matrix workloads, what you're calling these emerging workloads, in red. You show performance of 0.95 versus 50, then price-performance for traditional of 3.6, and it's more than 150 times greater for ARM-based workloads.

>> Yeah. And that's an analysis of the previous generation of Arm. If you take the new ones, the M1, for example, which has come into the PC area, that's going to be even higher. So Arm is producing hybrid computers, heterogeneous computers, with multiple different engines inside the computer, and that is making life a lot more efficient. Especially in the inference world, they're using NPUs instead of GPUs, and you can fit about four times more NPUs than GPUs. It's just a different world, and Arm is ahead because it's done all the work in the volume area. That's now going to go into PCs, and it's going to go into the data center.

>> Okay, great. Now, if we could, guys, bring up the other chart, titled "workloads moving to ARM-based servers." This one is just amazing to me, David. For some reason the slides aren't translating, so forget the slides. But basically you have the revenue coming from Arm being substantially higher in the out years, or certainly growing substantially more than the traditional workload revenue. Now, that's going to take a decade, but maybe you could explain why you see that.

>> Yeah. The reason is these matrix workloads, and also the offload, like Nitro is doing, of the storage and the networking from the main CPUs, the disaggregation of computing, plus the traditional workloads, which can move over or are moving over. And where AWS, and Microsoft and Apple in the PC, where those leaders are leading us, is that they are doing the hard work of making sure that their software and their APIs can utilize the capabilities of Arm. The advantage that AWS has, of course, is those enormous economies of scale across many, many users. It's going to take longer, much longer, for this to go into the enterprise data center, but Microsoft, Google, and AWS are going to be leading the charge of this movement of Arm into the data center. It was amazing what some of the Arm customers, the AWS customers, were seeing today: much faster performance at much lower price. They were affirming.
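Taking the chart's figures as read off the slide (they came through garbled on air, so treat the inputs as approximate), the implied ratios work out as follows; this is a back-of-envelope sketch only.

```python
# Back-of-envelope check of the bar-chart figures quoted above.
# Inputs are as read off the slide during the segment; units unstated.
trad_perf, matrix_perf = 0.95, 50          # relative performance
trad_price_perf = 3.6                      # traditional price-performance

perf_ratio = matrix_perf / trad_perf
print(f"performance ratio: ~{perf_ratio:.0f}x")          # ~53x

# "More than 150 times greater" price-performance implies the
# ARM-based figure would be at least:
arm_price_perf = 150 * trad_price_perf
print(f"implied ARM price-performance: >{arm_price_perf:.0f}")   # >540
```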
And the fundamental reason is that Arm is two generations ahead in production. They're at five nanometers at the moment, whereas Intel is still at 10. So that's a big, big issue that Intel has to address.

>> Yeah. And so you've been getting this core creep, I'll call it, which brings a lot of overhead, and now you're seeing these very efficient, specialized processors. Your premise is that we're going to see these explode for these new workloads, and in particular, the edge is such an enormous opportunity. I think you've pointed out that you see a big market for these emergent edge workloads: they kind of start in the data center and then push out to the edge. Andy Jassy says we're going to bring AWS to the edge, and the data center is just another edge node. I like that vision. Your thoughts?

>> I think that is a compelling vision. At the edge, you have many different form factors. So you will need an edge in a car, for example, which is cheap enough to fit into a car, but it's got to be a hundred times more processing than is in the computers in the car at the moment. That's a big leap, to get to automated driving, but it's going to happen, and it's going to happen on ARM-based systems. The amount of work that's going to go out to the edge is enormous, and the amount of data that's generated at the edge is enormous. That's not going to come back to the center; it's going to be processed at the edge, and the edge is going to be the center, if you like, of where computing is done. That doesn't mean you're not going to have a lot of inference work inside the data center, but a lot of work, in terms of data and processing, is going to move to the edge over the next decade.

>> Yeah, well, many of AWS's edge offerings today assume data is going to be sent back, although of course you see Outposts, and now smaller versions of Outposts. To me, that's a clue of what's coming: again, bringing AWS to the edge. I also want to touch on Amazon's comments on renewables. Peter talked a lot about what they're doing to reduce carbon. One of the interesting things was that they're actually reusing their cooling water: they clean and reuse it, I think he said three times, or multiple times, and then they purify it and put it back out. So that's a really great sustainability story, and there was much more to it. But I think companies like Amazon, especially large companies, really have a responsibility, so it's great to see Amazon stepping up. Anyway, we're out of time. David, thanks so much for coming on and sharing your insights; really, really appreciate it. By the way, wikibon.com has a lot of David's work on there, including those slides. Apologies for some of the data not showing through; we're working in real time here. This is Dave Vellante for David Floyer. You're watching theCUBE's continuous coverage of AWS re:Invent 2020. We'll be right back.
ENTITIES
Entity | Category | Confidence |
---|---|---|
David | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Peter DeSantis | PERSON | 0.99+ |
Dave Floyer | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
Andy Jassy | PERSON | 0.99+ |
Peter | PERSON | 0.99+ |
Dec 10th | DATE | 0.99+ |
50 | QUANTITY | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
2020 | DATE | 0.99+ |
Apple | ORGANIZATION | 0.99+ |
hundreds of times | QUANTITY | 0.99+ |
3.6 | QUANTITY | 0.99+ |
three | QUANTITY | 0.99+ |
0.9 | QUANTITY | 0.99+ |
five nano meters | QUANTITY | 0.99+ |
iPhone | COMMERCIAL_ITEM | 0.99+ |
64 bit | QUANTITY | 0.99+ |
two generations | QUANTITY | 0.98+ |
10 | QUANTITY | 0.98+ |
more than 150 times | QUANTITY | 0.98+ |
five | QUANTITY | 0.97+ |
two years | QUANTITY | 0.95+ |
first part | QUANTITY | 0.95+ |
today | DATE | 0.95+ |
first principles | QUANTITY | 0.94+ |
next decade | DATE | 0.93+ |
one | QUANTITY | 0.93+ |
2020 | TITLE | 0.92+ |
end of this decade | DATE | 0.9+ |
one little component | QUANTITY | 0.9+ |
billions of chips | QUANTITY | 0.88+ |
a decade | QUANTITY | 0.85+ |
Moore | PERSON | 0.81+ |
wikibon.com | ORGANIZATION | 0.76+ |
second | QUANTITY | 0.74+ |
hundred times | QUANTITY | 0.71+ |
Invent | EVENT | 0.7+ |
about four times | QUANTITY | 0.69+ |
a second | QUANTITY | 0.68+ |
Peter Burris, Wikibon | Action Item, Feb 9 2018
>> Hi, I'm Peter Burris, and welcome to Wikibon's Action Item. (upbeat music) Once again, we're broadcasting from theCUBE studio in beautiful Palo Alto, California, and I have joining me here in the studio George Gilbert and David Floyer, both Wikibon analysts, and, remote, welcome Neil Raden and Jim Kobielus. This week, we're going to talk about something that's actually quite important, and it's one of those examples of an innovation in which technology that is maturing in multiple domains is brought together in unique and interesting ways to potentially dramatically revolutionize how work gets done. Specifically, we're talking about something we call augmented programming. The notion of augmented programming borrows from some of the technologies associated with new or declarative low-code development environments, machine learning, and an increasing understanding of the role that automation's going to play, specifically as it pertains to human and human-augmented activities. Now, low-code programming has been around for a while, machine learning's been around for a while, and, increasingly, some of these notions of automation have been around for a while. But it's how they are coming together to create new approaches and new possibilities that can dramatically improve the speed of systems development, the quality of systems development, and, ultimately, very importantly, the ongoing manageability of those systems. So, Jim Kobielus, let's start with you. What are some of the issues associated with augmented programming that users need to be focused on?

>> Yeah, well, the primary issue, or, really, the driver, is that we need to increase the productivity of developers greatly, because it's required of them to build programs and applications faster, with fewer resources, to deploy them more rapidly in DevOps environments, to manage that code, and to optimize that code for ten zillion downstream platforms, from mobile to web to the Internet of Things, and so forth. They need power tooling to be able to drive this process. Now, that whole low-code space has been around for years. It very much evolved from what used to be called rapid application development, which itself evolved from the 4GL languages of decades past, and so forth. Looking at it now, as we move towards the end of the second decade of this century, the low-code development space has evolved and is rapidly emerging into BPM and orchestration modeling tools on the one hand, and robotic process automation on the other, to enable the average end user or business analyst to quickly gin up an application, based on being able to wire together UI components fairly rapidly and drive it from the UI on in. What we're seeing now is that more and more machine learning is being used in the low-code development of applications. Machine learning is being used in a variety of capacities, one of which is simply to infer the appropriate program code from external assets like screenshots and wireframes, but also from database schemas and so forth. A lot of machine learning is coming to this space in a major way.

>> But it sounds, though, like there's still going to be some degree of specialization in the nature of the tools that we might use in this notion of augmented programming.
So, RPA may be associated with a certain class of applications and environmental considerations, and there'll be other tools, for example, that might be associated with different application considerations and environmental attributes as well. But David Floyer, one of the things that we're concerned about is this: a couple of weeks ago we talked about the notion of data-aware middleware, the idea that, increasingly, we'll see middleware emerge that's capable of moving data in response to the metadata attributes of the data, combined with visibility into the application patterns. When we think about this notion of augmented programming, what are some of the potential limits that people have to think about as they consider these tools?

>> Peter, that's a very good question. The key for all of these techniques is to use the right tools in the right place. A lot of the leading edge of this space assumes an environment where the programmer has access to all of his data; he owns it, and he is the only person there. The challenge is that, in many applications, you are sharing data: across the organization, and between programmers. This introduces a huge amount of complexity, and there have been many attempts to tackle it. There've been data dictionaries, there've been data management tools, ways of managing this data. They haven't had a very good history. The efforts involved in trying to make those work within an organization have been, at best, spasmodic.

>> (laughs) Spasmodic, good word!

>> When we go into this environment, I think the key is to make sure that you are applying these tools initially to the areas where somebody does have access to all the data, and then carefully look at it from the point of view of shared data, because you have a whole lot of issues in stateful environments which you do not have in stateless environments, and the complexity of locking data, the complexity of many people accessing that data, requires another set of tools. I'm all in favor of these low-code-type environments, but you have to make sure that you're applying the right tools to the right type of applications.

>> And specifically, for example, a lot of the metadata that's typically associated with a database is not easily revealed to an application developer, nor to an application, and so you have to be very, very careful about how you exploit that. Now, Neil Raden, there have been over the years, as David mentioned, a number of passes at doing this that didn't go so well, but there are some business reasons to think this time it might go a little bit better. Talk a little bit about some of the higher-level business considerations that are on the table that may catalyze better adoption this time of these types of tools.

>> One thing is that, no matter what kind of an organization you are, whether you're a huge multinational or an SMB or whatever, all of these companies are really rotten with what we call shadow systems. In other words, companies have applications that do what they do, and for what they don't do, people cobble things together. The vast majority of 'em are done in Access and Excel, still; even in advanced organizations, you'll find this. If there's a way to eliminate that, because it's a real killer of productivity, then that's a real positive.
I suppose my concern is that when you deal at that level, how are you going to maintain coherency and consistency in those systems over time without adding, like he said, orchestration of those systems? What David is saying, I think, is really key.

>> Yeah, I... go ahead, sorry, Neil. Go ahead.

>> No, that's all right. What I was--

>> I think--

>> Peter: Sorry. Bad host.

>> David: You think?

>> Neil: No, go ahead.

>> No, what I was going to say was that a crucial feature of this is that, a lot of times, the application is owned by a business line, and the business line presumes that they own their data. They have modeled those systems for a certain type of work, for a certain volume of work, for a certain distribution of control, and when you reveal a lot of this stuff, you sometimes break those assumptions. That can lead to really serious breaks in the system.

>> You know, they're not always evil, as we like to characterize them. Some of them are actually well-thought-out and really good systems, better than anything they could get from the IT organizations. But the point is, they're usually pretty brittle, and they require a lot of effort from the people who develop them to keep them running, because they don't use the kinds of tools, approaches, platforms, and methodologies that lend themselves to good-quality software. I think there's real potential for RPA in that area.

>> I think there are also some interesting platforms that are driving to help in this particular area, particularly for applications which go across departments in an organization. ServiceNow, for example, has a very powerful platform for very high-level production of systems, and it's being used a lot of the time to solve problems of procedures, procedures going across different departments, automating those procedures. I think there are some extremely good tools coming out which will significantly help, but they help more with serial procedures than with concurrent procedures.

>> And there are some expectations about the type of tools you use, the extensibility of those tools, et cetera, which leads me, George, to ask about some of the machine learning attributes of this. We've got to be careful about machine learning being positioned as the panacea for all business problems, which too often seems to be the case. But it's certainly reasonable to observe that machine learning can, in fact, help us in important ways at understanding how patterns in applications and data are working, and how people are working together. Talk a little bit about the machine learning attributes of some of these tools.

>> Well, I like to say that every few years we have a technology we get so excited about that we assume it tastes like chocolate, costs a dollar, and cures cancer. Machine learning is that technology right now. The interesting thing about robotic process automation and many low-code environments is that they're sort of inheriting the mantle of the old application macros, and even cross-application macros, from the early desktop office wars. The difference now is that, unlike then, there are APIs that those scripts can talk to, so they can treat the desktop applications as an application platform. As David and Neil said, we're going through application user interfaces now, and when you want to do a low-code programming environment, you often want to program by example.
But then you need to generalize parts: you know, when you move this thing to this place, you might now want to generalize that. That's where machine learning can start helping take literal scripts and add more abstract constructs to them.

>> So you're literally digitizing some of the digital primitives that are in some of these applications, and that allows you to reveal data that machine learning can use to make observations and recommendations about patterns, and actually do code generation.

>> And you know, I would add one thing: it's not just about the UI anymore, because we're surfacing, as we were talking about earlier, the data-driven middleware. Another way of looking at it: we used to have the system catalog, with big applications all talking to a central database, but now that we have so many repositories, we're sort of extricating the system catalog so that we can look at and curate data in many locations. These tools can access that, because they have user interfaces as well as APIs. And then, in addition, you don't have to go against a database that is unprotected by an application's business logic. More and more, we have microservices and serverless functions that embody the business logic, and you can go against them, and they enforce the rules as well.

>> That's great. So, David Floyer--

>> I should point out--

>> Hold on, Jim. Dave Floyer, this is not a technology set that is suddenly emerging on the scene independent of other changes. There are also some important changes in the hardware itself that are making it possible for us to reveal data differently, so that these types of tools and technologies can be applied. I'm specifically thinking about something as mundane as SSD, flash-based storage, and other technologies that allow us to do different things with data, so that we can envision working with this stuff. Give us a quick rundown, on the infrastructure side, of some of the key technologies making this possible.

>> When we look at systems architectures now, what we never had before was fast memory and fast storage. We had very, very slow storage, and we had to design systems to take account of that. What is coming in now is much, much faster storage, built on things like NVMe and other fabrics, which really get to any data within microseconds, as opposed to milliseconds. That's thousands of times faster. What you can do with this is, not only is the access density you can achieve to the data much, much higher than it was, many thousands of times higher, but it also enables you to take a different approach to sharing data. Instead of having to share data at the disk level, you can now, for example, take a snapshot of data. You can make that snapshot the basis of, for example, the analytics system, on the hour, or on the day, or on whatever timescale you want. And then, in parallel, you can run huge amounts of analytics against a snapshot of that same data while the operational system keeps working. There are some techniques there which I think are very exciting indeed. The other big change is that we're going to be talking machine to machine. Most applications were designed for a human to be the recipient at the other end. One of the differences when you're dealing with machines is that now you have to get your code done in microseconds, as opposed to seconds; again, a thousand times faster. This is a very exciting area, but when we're looking at low-code, for example, you're still going to need well-crafted algorithms, that very fast code, as one of the tools of programmers. There's still going to be a need for people who can create these very fast algorithms. An exciting time all the way around for programmers.
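To make George's record-then-generalize point concrete, here is a minimal sketch of what generalizing a literally recorded script might look like. The step format, the targets, and the rule-based `generalize` heuristic are hypothetical illustrations, not any particular RPA product's API; a real system might learn the generalization from many recordings, as George suggests.

```python
# Hypothetical sketch: turning a literally recorded RPA script into a
# parameterized one. Step format and heuristics are illustrative only.
RECORDED_STEPS = [
    {"action": "open",  "target": "invoices.xlsx"},
    {"action": "copy",  "target": "cell:B2"},        # "Acme Corp"
    {"action": "paste", "target": "crm:customer_name"},
    {"action": "copy",  "target": "cell:C2"},        # "$1,400.00"
    {"action": "paste", "target": "crm:amount_due"},
]

def generalize(steps):
    """Replace row-specific cell references with a row parameter, so one
    recorded pass over row 2 becomes a script over any row. A simple
    rule stands in for what ML would infer from many recordings."""
    general = []
    for step in steps:
        target = step["target"]
        if target.startswith("cell:"):
            col = target[5]                    # e.g. "B" from "cell:B2"
            target = f"cell:{col}{{row}}"      # abstract the row number
        general.append({**step, "target": target})
    return general

def run(steps, row):
    for step in generalize(steps):
        print(step["action"], step["target"].format(row=row))

run(RECORDED_STEPS, row=7)   # replay the recorded workflow for row 7
```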
This is a very exciting area, but when we're looking at low-code, for example, you're still going to need those well-crafted algorithms, those well-crafted code, very fast code that you're going to need as one of the tools of programmers. There's still going to be a need for people who can create these very fast algorithms. An exciting time all the way around for programmers. >> What were you going to say, Jim? And I want to come back and have you talk about DevOps for a second. >> Yeah, I think I was going to, I'll add to what David was just saying. Most low-code tools are not entirely no-code, meaning what they do is they auto-generate code, pursuant to some business declared a specification. The code, the actual, professional programmers can go in and modify that code and tweak it and optimize it. And I want to tie in now to something that George was talking about, the role of ML in this process. ML can make a huge mess, in the sense that ML can be an enabler for more people who don't know whole lot about development. You want to build stuff willy-nilly, so there's more code out there than you can shake a stick at, and there's no standards. But also, I'm seeing, and I saw this past week, MIT has a project, they already have a tool, that's able to do this. It's able to take ML, use ML to take a snapshot or a segment of code out of one program, and then modify it so that it fit and then transplant it into another application and modify it so it fits the context of the new application along various attributes, and so forth. What I'm getting at is that ML can be, according to what, say, MIT has done, ML can be a tool for enabling reuse of code and re-contextualization and tweaking of code. In other words, ML can be a handmaiden of enforcing standards as code gets repurposed throughout these low-code environments. I think that ML can be, it's a double-edged sword, in terms of enabling stronger or weaker governance over the whole development process. >> Yeah, and I want to add to that, Jim, that it's not just you can enforce, or at least, reveal standards and compliance, but also increases the likelihood that we become a little bit more tool-dependent. And then going back to what you were talking about, a little bit less tool-dependent, I should say. Going back to what you were talking about, David, it increases the likelihood that people are using the right tool for the right job, which is a pretty crucial element of this, especially as we do in adoption. So, Jim, give us a couple of quick observations on what a development organization is going to have to do differently to get going on utilizing some of these technologies. What are the top two or three things that folks are going to have to think about? >> First of all, in the low-code space, there are general-purpose tools that can bang out code for various target languages, for various applications, and there are highly special-purpose tools that can go gangbusters on auto-ginning web application code and mobile code and IoT code. First and foremost, you got to decide how much of the ocean you want to boil off, in terms of low-code. I recommend that if you have a requirement for accelerating, say, mobile code development, then go with low-code tools that are geared to iOS and Android and so forth, as your target platform, and stay there. Don't feel like you have to get some monster suite that can do everything, potentially. That's one critical thing. 
Another critical thing is that the tool you adopt needs to be more than just a development tool. It needs to have capabilities built in to help your team govern those code builds within whatever DevOps, CI/CD, or repository environment you have inside your organization. Make sure that the tool you've got plays well with your DevOps environment, with your workflows, with your code repositories. And then, number three, and we keep forgetting this, the front-end development is still not a walk in the woods. In fact, specifying the complex business logic that drives all this code generation is stuff for professional developers, more often than not. Even RPA tools are, quite frankly, not as user-friendly as they could potentially be down the road, because you still need somebody to think through the end-to-end application, and then to specify, at a declarative level, the steps that need to be accomplished before the RPA tool can do its magic and build something that you might want to then crystallize as a repeatable asset in your organization.

>> So it doesn't take the thinking out of application development.

>> James: Oh, no, no, no, no.

>> All right, so let's do this. Let's hit the action items and see what we all think folks should do next. David Floyer, let me start with you. What's the action item out of this?

>> The action item is horses for courses: the right horse for the right course, the right tools for the right job. Understand where things are stateless and where things are stateful, and use the appropriate tools. And, as Jim was just saying, make sure that there is integration of those tools into the current processes and procedures for coding.

>> George Gilbert, action item.

>> I would say, building on that, start with pilots that involve one or a couple of enterprise applications, but with less sort of branching, if-then type of logic built in. It could be hardwired--

>> So, simple flows?

>> Simple flows, so that over time you can generalize that, and play with how the RPA tools or low-code tools can generalize their auto-generated code.

>> Peter: Neil Raden, action item.

>> My suggestion is that if you involve someone who's going to learn how to use these tools and develop an application or applications for you, make sure that you're dealing with someone who's going to be around for a while, because otherwise you're going to end up with a lot of orphan code that you can't maintain. We've certainly seen that before.

>> David: That's great.

>> Peter: Jim Kobielus, action item.

>> Yeah, the action item is: approach low-code as tooling for the professional developer, not necessarily as a way to bring in untrained, non-traditional developers. Like Neil said, make sure that the low-code environment itself is there for the long haul, that it'll be managed and used by professional developers, and make sure that they are provided with a front-end visual workspace that helps them do their jobs most effectively, and that is user-friendly enough for them to get stuff done in a hurry. And don't worry about bringing freelance, untrained developers into your organization, or somehow re-tasking your business analysts to become coders. That's probably not the best idea in the long run, for maintainability of the code, if nothing else.

>> Certainly not in the intermediate term. Okay, so here's the action item, our Wikibon Action Item.
As digital business progresses, it needs to be able to create digital assets predicated on valuable data faster, in a more flexible way, with more business knowledge embedded and imbued directly in how the process works. A new class of tools is emerging that we think will allow this to happen more successfully. It combines mature knowledge from the application development world with new insights into how machine learning works, and a new understanding of the impacts of automation on organization. We call these augmented programming tools, and we call the approach augmented programming because, in this case, the system is taking some degree of responsibility to generate code, identify patterns, and ultimately do a better job of maintaining how applications get organized and run. While these technologies have real power, we have to acknowledge that there's never going to be a one-size-fits-all. In fact, we believe very strongly that we're going to see a range of different tools emerge that will allow developers to take advantage of this approach, given the starting point of the artifacts that are available and the characteristics of the applications that have to be built. One that we think is particularly important is robotic process automation, or RPA, which starts with the idea of discovering something about the way applications work by looking at how the application behaves on screen, encapsulating that, and generalizing it so that it can be used as a tool in future application development work. We also note that these application development technologies will not operate independently of other technology and organizational changes within the business. Specifically, on the technology side, we are encouraged that there's a continuing evolution of hardware technology that's going to take advantage of faster data access, utilizing solid-state disks, NVMe over fabric, and new types of system architectures that are much better suited for rapid shared data access. Additionally, we observe that new classes of technologies are emerging that allow a data control plane to operate based on metadata characteristics, informed by application patterns, often through things like machine learning. One of the organizational issues that we think is really crucial is that folks should not presume that this is going to be a path for taking anybody in the business and turning them into an application developer. You still have to be able to think like an application developer and imagine how you turn a business process into something that looks like a program. But another group that has to be considered here is not just the DevOps people, although that's important, but, going down a level, the good old DBAs, who have always suffered through new advances in tools that assumed the data in a database is always available, and that they don't have to worry about transaction scaling or the way the database manager is set up. It would be unfortunate if the value of these tools from a collaboration standpoint, working better with the business and with the younger programmers, ended up failing because developers continue to not pay attention to how the underlying systems that currently control a lot of the data operate. Okay, once again, we really appreciate you participating.
Thank you, David Floyer and George Gilbert, and, remotely, Neil Raden and Jim Kobielus. We've been talking about augmented programming. This has been Wikibon's Action Item. (upbeat music)
ENTITIES
Entity | Category | Confidence |
---|---|---|
David | PERSON | 0.99+ |
David Floyer | PERSON | 0.99+ |
Jim Kobielus | PERSON | 0.99+ |
Peter Burris | PERSON | 0.99+ |
Jim | PERSON | 0.99+ |
James | PERSON | 0.99+ |
Neil Raden | PERSON | 0.99+ |
Neil | PERSON | 0.99+ |
George Gilbert | PERSON | 0.99+ |
Dave Floyer | PERSON | 0.99+ |
George | PERSON | 0.99+ |
Peter | PERSON | 0.99+ |
Feb 9 2018 | DATE | 0.99+ |
Excel | TITLE | 0.99+ |
one | QUANTITY | 0.99+ |
10 zillion | QUANTITY | 0.99+ |
MIT | ORGANIZATION | 0.99+ |
Palo Alto, California | LOCATION | 0.99+ |
Wikibon | ORGANIZATION | 0.99+ |
Android | TITLE | 0.99+ |
iOS | TITLE | 0.98+ |
both | QUANTITY | 0.98+ |
First | QUANTITY | 0.98+ |
This week | DATE | 0.98+ |
One | QUANTITY | 0.97+ |
DevOps | TITLE | 0.97+ |
one program | QUANTITY | 0.96+ |
three things | QUANTITY | 0.96+ |
theCUBE | ORGANIZATION | 0.95+ |
thousand times | QUANTITY | 0.94+ |
CIC | TITLE | 0.94+ |
past week | DATE | 0.94+ |
ServiceNow | TITLE | 0.93+ |
One thing | QUANTITY | 0.93+ |
Access | TITLE | 0.92+ |
one thing | QUANTITY | 0.89+ |
thousands of times | QUANTITY | 0.88+ |
one critical thing | QUANTITY | 0.88+ |
a dollar | QUANTITY | 0.87+ |
couple weeks ago | DATE | 0.85+ |
second decade of this century | DATE | 0.84+ |
number three | QUANTITY | 0.76+ |
decades | DATE | 0.75+ |
couple simple applications | QUANTITY | 0.73+ |
one of | QUANTITY | 0.71+ |
couple enterprise applications | QUANTITY | 0.67+ |
a second | QUANTITY | 0.63+ |
double | QUANTITY | 0.61+ |
top | QUANTITY | 0.57+ |
two | QUANTITY | 0.53+ |
ML | TITLE | 0.51+ |
4GL | OTHER | 0.48+ |
Action Item | Converged & Hyper Converged Infrastructure
Hi, I'm Peter Burris, and welcome to Wikibon's Action Item. (electronic music) Every week, we bring together the Wikibon research team and present the action items that we believe are most crucial for users to focus on against very important topics. This week, I'm joined by George Gilbert and David Floyer, here in theCUBE studios in Palo Alto, and on the phone we have Ralph Phinos, Dave Vellante, and Jim Kobielus. Thank you, guys; thank you, team, for being part of today's conversation. What we're going to talk about today in Action Item is the notion of what we're calling enterprise hyperscale. Now, we're going to take a route to get there that touches upon many important issues, but fundamentally the question is this: at what point should enterprises choose to deploy their own hardware at scale to support applications that will have a consequential business impact on their shareholder, customer, and employee value? Now, to kick us off, because this is a very complex topic that involves a lot of different elements, David Floyer, first question to you. What is the core challenge that enterprises face today as they think about build, buy, or rent across this increasingly blurred hardware continuum, or system continuum?

>> The biggest challenge with the traditional way that enterprises have put together systems is that the cost and the time to manage these systems are going up and up, as we go from just systems of record, with analytic systems mainly running in batch mode, towards systems of intelligence, where real-time analytics combine with the systems of record. The complexity of the systems and the software layers is getting greater, and it takes more and more time, effort, and elapsed time to keep things current.

>> Why is it that not everybody can do this, David? Is there a fundamental economic reason at play here?

>> Well, if you take systems and build and put them together yourself, you'll always end up with the cheapest system. The issue is that the cost of maintaining those systems, and even more, the elapsed-time cost of maintaining them, the time to value in putting in new releases, et cetera, has been extending. And there comes a time when the cost of delaying the implementation of new systems overwhelms whatever you can save on the hardware itself.

>> So there are some scale efficiencies in thinking about integration from a time standpoint. Dave Vellante, we've been looking at this for quite some time; think about true private cloud, for example. But if you would, give us that core dynamic, in simple terms, between what is valuable to the business and what isn't, and the different options between renting and buying. What is that core dynamic at play?

>> OK. As we've talked about a lot in our true private cloud research, hyperconverged systems are an attempt to substantially mimic public cloud environments on-prem. And this creates a bifurcated buying dynamic that I think is worth exploring a little bit. The big cloud players, as everybody talks about, have lots of engineers running around; they have skill, and they have time. So they'll spend time to build proprietary technologies and use their roll-your-own components to automate processes. In other words, they'll spend time to save money. This hyperscale work is essentially a form of their R&D, and it gives them an n-year lead, whatever it is, four, five, six years, on the enterprise.
And that dynamic is not likely to change. The enterprise buyers, on the other hand, don't have the resources; they're stretched thin, so they'll spend money to save time. Enterprises want to cut labor costs and shift low-value IT labor to so-called vendor R&D. To wit, our forecasts show that about $150 billion is going to come out of low-value IT operations over the next ten years and shift to integrated products.

>> So ultimately we end up seeing the vendors effectively capturing a lot of the spend that otherwise would have stayed internal. Now, this raises a new dynamic, when we think about this, David Floyer, in that there are still vendors that have to return something to their shareholders. There's this increased recognition that businesses, or enterprises, want this cloud experience, but not everybody is able to offer it, and we end up then with some really loosely-defined definitions. What's the continuum of where systems are today, from traditional all the way out to cloud? What does that look like?

>> A useful way of looking at it is to see what has happened over time and where we think it's going. We started with completely separate systems. Converged systems then came in, where the vendor put them together and reduced the time to value a little bit, but the maintenance was still the responsibility of--

>> [Peter] But what was brought together?

>> [David F] It was the traditional arrays, it was the servers--

>> Racks, power supplies--

>> All of that stuff, put together and delivered as a package. The next level up was so-called hyperconverged, where certainly some of the hyperconverged vendors put in software for each layer, software for the storage layer, software for the networking layer, and more management. But a lot of vendors really took hyperconverged as being the old stuff with a few extra flavors.

>> So they literally virtualized those underlying hardware resources, and got some new efficiencies and economies.

>> That's right, they software-virtualized each of those components. When you look at the cloud vendors, just skipping one level there, they have gone hyperscale, and they have put in, as Dave said earlier, all of their own software to make that hyperscale work. What we think sits in the middle of that is enterprise hyperscale, which is coming in, where you have what we call Server SAN: the storage capability, the networking capability, and the CPU capability all separated, able to be scaled in whatever direction is required, with any processor able to get at any data through that network with very, very little overhead. It's software for the storage, it's software and firmware for the networking, and the processor is relieved of all that processing. We think that architecture is going to mimic what the hyperscalers have. But the vendors now have an opportunity to put in the software to emulate that cloud experience for the people who want on-site equipment, and to take away all of the work that's necessary to keep that software stack up to date. The vendors are going to maintain that software stack, as high up as they can go.

>> So David, is this theory, or are there practical examples of this happening today?

>> Oh, absolutely, there are practical examples of this happening. There are practical examples at the lower levels, with people like Micron and SolidScale.
That's at a technology level, when we're talking about hyperscale-- Well, if you're looking at it from a practical point of view, Oracle has put it into the marketplace: Oracle Cloud on-premises, Oracle converged systems, where they take responsibility for maintaining all of the software, all the way up the stack to the database, and in the future probably beyond that, towards the Oracle applications as well. So they're taking that approach, putting it in, and arguing, persuasively, that the customer should focus on time to value, as opposed to just the cost of the hardware.

>> Well, we can also look at SaaS vendors, right, many of whom have come off of infrastructure as a service, deployed their own enterprise hyperscale, and are increasingly starting to utilize some of this hyperscale componentry as a basis for building things out. Now, one of the key reasons why we want to do this, and George, I'll turn it to you, is because, as David mentioned earlier, the idea is that we want to bring analytics and operations more closely together, to improve automation, augmentation, and other types of workloads. What is it about that effort that's encouraging this kind of adoption of these new approaches?

>> [George] Well, databases typically make great leaps forward when we have changes in the underlying trade-offs, or relative price-performance, of compute, storage, and networking. What we're talking about with hyperscale, whether on-prem or the cloud version, is that we can build out the scale that databases can support without their having to be rewritten, so that they work just the way they did on tightly-coupled, shared-memory symmetric multiprocessors. So now they can go from a few nodes, or half a dozen nodes, or even, say, a dozen nodes, to thousands. And as David's research has pointed out, they have latency to get to memory in any node, from any node, of five microseconds. Building up from that, the point is that we can now build databases that really do have the horsepower to handle the analytics that inform the transactions, in the same database. Or, if you do separate them, because you don't want to touch a current system of record, you have a very powerful analytic system that can apply more data, and do richer analytics, to inform a decision, in the form of a transaction, than you could with traditional architectures.

>> So it's the data that's driving the need for a data-rich system architected in the context of data needs; that's driving a lot of this change. Now, David Floyer, we've talked for quite some time about data tiering, the notion of primary, secondary, and tertiary data. Without revisiting that entirely, what is it about this notion of enterprise hyperscale that's going to make it easier to naturally place data where it belongs in the infrastructure?

>> Well, underlying this is that moving data is extremely expensive, so you want to, where possible, move the processing to the data itself. The origin of that data may be at the edge, for example, in IoT; it may be in a large central headquarters; it may be in the cloud. It may be operational data, or end-user data from people using their phones, which is available from the cloud. So there are multiple sources, and you want to place the processing as close to the data as possible, so that you have the least cost of moving it and the lowest latency. And that's particularly important when you've got systems of intelligence, where you want to combine the two.
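A quick latency-budget sketch puts George's five-microsecond figure in context. The comparison numbers below are typical published ballparks, assumed for illustration rather than taken from this discussion.

```python
# Rough latency budget for reaching data in a scale-out database.
# Figures are typical ballpark numbers, for illustration only.
LATENCY_US = {
    "local DRAM access":            0.1,     # ~100 ns
    "remote node via NVMe-oF/RDMA": 5.0,     # the ~5 us figure cited
    "local NVMe flash read":        100.0,   # ~100 us
    "disk-era random read":         5000.0,  # ~5 ms
}

base = LATENCY_US["remote node via NVMe-oF/RDMA"]
for name, us in LATENCY_US.items():
    print(f"{name:30s} {us:8.1f} us  ({us / base:7.2f}x of remote access)")

# At ~5 us, reaching data on another node costs on the order of a slow
# memory reference, not an I/O, which is why a database can scale out to
# thousands of nodes without being redesigned around disk latency.
```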
>> So, Jim Kobielus, it seems as though there's a compelling case to be made here to focus on time: time to value and time to deploy on the one hand, and another aspect of time, the time associated with latency, with reducing path length and optimizing for path length, which again has a scale impact. What are developers thinking? Are developers actually going to move the market to these kinds of solutions, or are they going to try to do something different?

>> I think developers will begin to move the market towards hyperconverged systems. Much of the development that's going on now is for artificial intelligence, deep learning, and so forth, where you're building applications that have an increasing degree of autonomy, able to make decisions based on system-of-record data, system-of-engagement data, and system-of-insight data, in real time. What that increasingly requires, Peter, is a development platform that combines those different types of databases, or data stores, and also combines the processing for deep learning, machine learning, and so forth, on devices that are getting tinier and tinier, embedded in mobile devices and whatnot. So what I'm talking about here is an architecture for development where developers are going to say, "I want to be able to develop it in the cloud, and I'm going to need to," because we have huge teams of specialists who are building, training, deploying, and iterating these models in a cloud environment, a centralized modeling context, but then deploying the results of their work down to the smallest systems, where these models will need to run, if not autonomously, then in some loosely-coupled fashion with tier-two and tier-three systems, which will also be hyperconverged. And each of those systems, in each of those tiers, will need a self-similar data fabric and AI-processing fabric. So what developers are saying is, "I want to be able to model it in the cloud and deploy it to these increasingly nano-scopic devices at the edge, and I need each of those components, at every tier, to have the same capabilities, in hyperconverged form factors," essentially.

>> For hyperscale. So here's where we are, guys: there are compelling economic reasons why we're going to see this notion of enterprise hyperscale emerge. It appears that the workloads are encouraging it, and developers seem to be moving towards adopting these technologies. But there's another group that we haven't talked about. Dave Vellante, the computing industry does not have a simple go-to-market model; there are a lot of reasons why channels, partnerships, et cetera, are so complex. How are they going to weigh in on this change?

>> [Dave Vellante] Well, the cloud clearly is having an impact on the channel. I mean, if you look at the channel, you've got the box sellers, which still comprise most of the channel; you've got more solution orientation; and then, increasingly, the developers are becoming a form of channel. And I think the channel still has a lot of influence over how customers buy. I think one of the reasons people still buy roll-your-own, and it's somewhat artificial, is that the channel oftentimes prefers it that way. It's more complicated, and as their margins get squeezed, the channel players can maintain services on top of those roll-your-own components.
>> [Dave Vellante] So I think buyers have got to be careful and make sure that their service provider's motivations align with their desired outcomes, and that they're not taking the roll-your-own, bespoke approach for the wrong reasons.
>> Yeah, and we've seen that a fair amount as we've talked to senior IT folks: there's often a clear misalignment between what's being pushed from a technology standpoint and what the application actually requires, and that's one of the reasons why this question is so rich and so important. But Ralph Phinos, to kind of sum up, when you think about some of these issues as they pertain to where and how to make investments, is there, from our perspective, a relatively simple approach to thinking this through and understanding how best to put your money to get the most value out of the technologies that you choose? (static hissing) Alright, I think we've lost Ralph there, so I'll try to answer the question myself. (chuckles) (David laughs) So here's how we would look at it, and David Floyer, help me out and see if you disagree with me. At the end of the day, we're suggesting that customers with a cost orientation should worry a little bit less about risk and a little bit less about flexibility, and can manage how that cost happens. The goal is to reduce cost as fast as possible, and not worry so much about the future options they'll face in terms of how to take future types of cost out. That might push them more towards the public hyperscale approach. But for companies that are thinking in terms of revenue, that have to ensure their systems can respond to competitive pressures and customer needs, and that are increasingly worried about buying future options with today's technology choices, it's a spectrum, but that's the group that's going to start looking more at enterprise hyperscale. Clearly that's where the SaaS players are. Yeah. And then the question, and what requires further research, is where that break point is going to be. So if I'm looking at this from an automation, from a revenue standpoint, then I need a little more visibility into where that break point is going to be between controlling my own destiny with the technology that's crucial to my business, versus not having to deal with the near-term costs associated with doing the integration myself. But this time to value, I want to return to this time to value.
>> [David] It's time to value that is the crucial thing here, isn't it?
>> [Peter] Time to value now, and time to future value.
>> And time to future value, yes. The consequence of doing everything yourself is that the time to put in new releases, the time to put in patches, the time to make your system secure, is increasingly high. And the more you integrate systems into systems of intelligence, with the analytics and the systems of record, the more complex the total environment becomes, and the more difficult it's going to be for people to manage that themselves. So in that environment, you would be pushing towards systems where the vendor is doing as much of that integration as they can-- And that's where they get the economies from. The vendors get the economies of scale, because they can feed back into the system faster than anybody else.
Rather than taking a snowflake approach, they're taking a volume approach, and they can feed back, for example, artificial intelligence in operational efficiency and in security. There are many, many opportunities for vendors to push those findings down into the marketplace. And those vendors can be cloud vendors as well. If you look at Microsoft, they can push what they're finding in terms of artificial intelligence and capabilities down into Azure Stack, and down into the enterprises themselves. The higher they can go up the stack, into the database layers and maybe even the application layers, the lower the cost and the lower the time to value will be for enterprises deploying applications on that.
>> Alright, so we've very quickly got some great observations on this important dynamic. It's time for action items. Jim Kobielus, let me start with you. What's the action item for this whole notion of hyperscale?
>> Yeah, the action item for hyperscale is to consider the degree of convergence you require at the lowest level of the system, the edge device. How much of that needs to be converged down to a commoditized component flexible enough that you can develop a wide range of applications on top of it--
>> Excellent, hold on, OK. George Gilbert, action item.
>> Really quickly, you have to determine whether you're going to keep your legacy system-of-record database and add an analytic database on hyperscale infrastructure, so that you're not doing a heart-and-lung transplant on an existing system. If you can do that, and you can manage the latency of feeding the analytic database from the existing database, that's great; there's little disruption. Otherwise, you have to consider integrating the analytics into a hyperscale-ready legacy database.
>> David Vellante, action item.
>> Tasks like LUN management, server provisioning, and infrastructure management generally are non-strategic. So as fast as possible, shift your "IT labor resources" up the stack toward more strategic initiatives, whether digital initiatives, data orientation, or other value-producing activities.
>> David Floyer, action item.
>> Well, I was just about to say what Dave Vellante just said. So let me focus a little bit more on a step in order to get to that position.
>> So Dave Floyer, action item. (David laughs)
>> So the action item I would choose is that you have to know what your costs are, and as senior management you have to be able to look at those objectively and say, "What is my return on spending all of this money making the system operate?" The more you can reduce complexity by buying converged systems, hyperconverged systems, hyperscale systems that put that responsibility onto the vendors themselves, the better positioned you're going to be to really add value to the bottom line with applications that can start to use all of this capability and advanced analytics coming into the marketplace.
>> So I'm going to add an action item of my own before I do a quick summary. My action item: the relationship that you have with your vendors is going to change. It used to be focused on procurement and reducing the cost of acquisition. Increasingly, for those high-value, high-performing, revenue-producing, differentiating applications, it's going to be strategic vendor management, and that implies a whole different range of activities.
And companies that are going to build their business with technology and digital are going to have to move to a new relationship management framework. Alright, so let's summarize today's Action Item meeting. First off, I want to thank George Gilbert and David Floyer, here in the studio with me, and David Vellante, Ralph Phinos, and Jim Kobielus on the phone. Today we talked about enterprise hyperscale. This is part of a continuum that we see happening, because the economics of technology are continuing to assert themselves in the marketplace, and that's having a significant range of impacts on all venues. When we think about scale economies, we typically think about how many chips we're going to stamp out, or how many copies of an operating system are going to be produced, and that still obtains, and it's very important. But increasingly, users have to focus their attention on how to generate economies out of the IT labor that's necessary to keep digital businesses running. If we can shift some of those labor costs to other players, then we want to support the technology sets that embed those labor costs directly in the form of technology. So over the next few years, we're going to see the emergence of what we're calling enterprise hyperscale, which embeds labor costs directly into hyperscale packaging, so that companies can focus more on generating revenue out of technology and spend less time on integration work. The implication is that the traditional buying process, trying to economize on the time to purchase and the time to get access to the piece parts, is going to give way to a broader perspective on the time to ultimate value of the application or outcome that we seek. And that's going to have a number of implications that CIOs have to worry about. From an external standpoint, it's going to mean valuing technology and packaging differently: less of a focus on the underlying hardware, more of a focus on a common set of capabilities that allow us to converge applications. So whereas converged infrastructure talked about converging hardware, enterprise hyperscale is increasingly about converging applications against common data, so that we can run more complex, interesting, revenue-producing workloads without scaling the labor and management costs of those workloads. A second key issue is that we have to step back and acknowledge that sometimes the way products go to market and our desired outcomes do not align. There is a residual reality in the marketplace: large numbers of channel partners and vendors have an incentive to push more complex technologies that require more integration, because it creates a greater need for them and creates margin opportunities. So ensure that, as you pursue this notion of converged applications rather than just converged infrastructure, you are working with a partner who follows that basic program. And the last thing, as I noted a second ago, is that this is going to require a new approach to strategic vendor management. For the last 30 years, we've done a phenomenal job of taking cost out of technology by focusing on procurement and trying to drive every single dime out of a purchase that we possibly could, even if we didn't know what that was going to mean from an ongoing maintenance, integration, and risk-cost standpoint. What we need to think about now is what the cost to the outcome will be.
And not only this outcome but, because we're worried about digital business, future outcomes that are predicated on today's decisions. So the whole concept here, from a relationship management standpoint, is which relationship is going to provide us the best time to value today, and streams of time to value in the future. We have to build our relationships around that. So once again, I want to thank the team. This is Peter Burris. Thanks again for participating in or listening to Action Item. From the Cube studios in Palo Alto, California, see you next week. (electronic music)