Peter Burris Big Data Research Presentation
(upbeat music) >> Announcer: Live from San Jose, it's theCUBE, presenting Big Data Silicon Valley, brought to you by SiliconANGLE Media and its ecosystem partners.

>> What am I going to spend the next 15 or 20 minutes talking about? I'm going to answer three things. Our research has gone deep into the big data community, so: number one, where is the big data community going? Number two, how are we going to get there? And number three, what do the numbers say about where we are? Those are the three things. Now, since we all want to get out of here, I'm going to fly through some of these slides, but there's a lot of opportunity for additional conversation, because we're all about having conversations with the community.

So let's start here. The first thing to know, when we think about where this is all going, is that it is inextricably bound up with digital transformation. Well, what is digital transformation? We've done a lot of research on this. It was Peter Drucker who famously said, many years ago, that the purpose of a business is to create and keep a customer. That's what a business is. Now, what's the difference between a business and a digital business? What's the difference between Sears Roebuck and Amazon? It's data. A digital business uses data as an asset to create and keep customers. It infuses data into operations differently to create more automation. It infuses data into engagement differently to catalyze superior customer experiences. It reformats and restructures its concept of value proposition and product to move from a product to a services orientation. The role of data is the centerpiece of digital business transformation, and in many respects, where we're going is an understanding and appreciation of that.

Now, we think there are going to be a number of strategic capabilities that will have to be built out to make that possible. First off, we have to start thinking about what it means to put data to work. The whole notion of an asset is that it is something that can be applied to a productive activity, and data can be applied to a productive activity. There are a lot of very interesting implications that we won't get into now, but essentially, if we're going to treat data as an asset and think about how we can put more data to work, we're going to focus on three core strategic capabilities that make that possible. One, we need to build a capability for collecting and capturing data. That's a lot of what IoT is about; it's a lot of what mobile computing is about. There are going to be a lot of implications around how to do some of those things ethically and properly, but a lot of that investment is about finding better and superior ways to capture data. Two, once we're able to capture that data, we have to turn it into value. That, in many respects, is the essence of big data: how we turn data into data assets, in the form of models, in the form of insights, in the form of any number of other approaches to appropriating value out of data. But it's not enough to create that value and have it sit there as potential value. We have to turn it into kinetic value and actually do work with it, and that is the last piece. We have to build new capabilities for applying data to perform work better, to act based on data.
Now, we've got a concept we're researching that we call systems of agency, which is the idea that there are going to be a lot of new approaches, new systems with a lot of intelligence and a lot of data, that act on behalf of the brand. I'm not going to spend a lot of time going into this, but remember that phrase, because I will come back to it. Systems of agency are about how you're going to apply data to perform work, with automation, augmentation, and actuation, on behalf of your brand.

Now, all of this is going to happen against the backdrop of cloud optimization, and I'll explain what we mean by that right now. Very importantly, how you create value out of data, and how you create future options on the value of your data, is increasingly going to drive your technology choices. For the first 10 years of the cloud, the presumption was that all data was going to go to the cloud. We think a better way of thinking about it is: how is the cloud experience going to come to the data? We've done a lot of research on the cost of data movement, both in terms of the actual out-of-pocket costs and in terms of the potential uncertainty, the transaction costs, and so on associated with moving data (the sketch below works through a rough example). That's going to be one of the fundamental elements of how we think about the future of big data and how digital business works. I'll come back to that in a bit. But our proposition is that, increasingly, we're going to see architectural approaches that focus on moving the cloud experience to the data. We've got this notion of true private cloud, which is effectively the idea of the cloud experience on or near premises. That doesn't diminish the role the cloud is going to play in the industry, and it doesn't say that AWS, Microsoft Azure, and all the other options are not important. They're crucially important. But it means we have to start thinking architecturally about how we're going to create value out of data, and recognize that we have to start envisioning how our organization and infrastructure will be set up so that we can use data where it needs to be, or where it's most valuable, and often that's close to the action.

So if we think about that, because it's the backdrop for everything, increasingly the question becomes: where is the workload going to go against this backdrop of increasingly distributed infrastructure? We believe, and our research pretty strongly shows, that a lot of workloads are going to go to true private cloud, but a lot of big data is moving into the cloud. This is a prediction we made a few years ago; it's clearly happening and underway, and we'll get into some of the implications. When we say that a lot of the big data elements, a lot of the process of creating value out of data, are going to move into the cloud, that doesn't mean that all the systems of agency that build on or rely on that data, the inference engines and so forth, are also going to be in a public cloud. A lot of them are going to be distributed out to the edge, out to where the action needs to be, because of latency and other types of issues. This is a fundamental proposition, and I know I'm going fast, but hopefully I'm being clear.
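To make the data-movement point concrete, here is a rough back-of-envelope sketch. The egress price, link speed, and utilization figure are illustrative assumptions for the sake of the arithmetic, not quoted rates from any provider.

```python
# Rough, illustrative economics of moving a large data set to the cloud.
# The $/GB egress price and link utilization are assumptions for the
# sake of the sketch, not quoted rates from any provider.

PETABYTE_GB = 1_000_000  # 1 PB expressed in GB

def transfer_cost(data_gb: float, usd_per_gb: float) -> float:
    """One-time, out-of-pocket cost to move the data over the wire."""
    return data_gb * usd_per_gb

def transfer_days(data_gb: float, link_gbps: float, utilization: float = 0.7) -> float:
    """Days needed to push the data over a link at a given utilization."""
    gigabits = data_gb * 8
    return gigabits / (link_gbps * utilization) / 86_400  # seconds per day

print(f"Egress at $0.05/GB: ${transfer_cost(PETABYTE_GB, 0.05):,.0f}")       # ~$50,000
print(f"Time on a 10 Gbps link: {transfer_days(PETABYTE_GB, 10):.1f} days")  # ~13 days
```

Even before the harder-to-quantify transaction costs and uncertainty, a petabyte-scale move carries a meaningful price tag and a multi-day transfer window, which is exactly the friction that pushes architectures toward bringing the cloud experience to the data.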
So to recap the first part, this is where the industry is going: data is an asset. Invest in strategic business capabilities to create those data assets and appreciate the value of those assets, and utilize the cloud intelligently to generate and ensure increasing returns.

So the next question is: how will we get there? Neil Raden, for example, was on the show floor yesterday, and Neil made the observation that, as he wandered around, he only heard the phrase "big data" two or three times. The concept of big data is not dead; whether the term is or is not is somebody else's decision. Our perspective, very simply, is that the notion is bifurcating, and it's bifurcating because we see different strategic imperatives happening at two different levels. On the one hand, we see infrastructure convergence: the idea that, increasingly, we have to think about how we're going to bring and federate data together, both from a systems and a data management standpoint. On the other hand, we're going to see application specialization. That's going to have an enormous implication over the next few years, if only because there just aren't enough people in the world who understand how to create value out of data, and there's going to be a lot of effort made over the next few years to find new ways to go from that one expertise group to billions of people and billions of devices. Those are the two dominant considerations in the industry right now: how can we converge data physically and logically, and, on the other hand, how can we liberate more of the smarts associated with this very powerful approach, so that more people get access to the capacities, the capabilities, and the assets being generated by that process?

Now, we've done at Wikibon somewhere around 20 predictions overall on the changes being wrought by digital business. Here I'm going to focus on four of them that are central to our big data research. The first one: when we think about infrastructure convergence, we worry about hardware. Here's a prediction about what we think is going to happen with hardware, and our observation, which we believe pretty strongly, is that future systems are going to be built around the question of how you increase the value of data assets. The technologies are all in place. Simple piece parts that bind more successfully through storage and networking are going to play together. Why? Because increasingly that's the fundamental constraint: how do I make data available to other machines, actors, sources of change, and sources of process within the business? Now, we are watching, before our very eyes, new technologies that allow us to take these simple piece parts and weave them together into very powerful fabrics or grids, what we call UniGrid, so that there is almost no latency between data that exists within one of these, call it a molecule, and anywhere else in that grid or lattice. Again, these are not systems that are five years away. All the piece parts are here today, and there are companies that are actually delivering them. If you take a look at what Micron has done with Mellanox and other players, that's an example of one of these true-private-cloud-oriented machines in place. The bottom line, though, is that there is a lot of room left in hardware. A lot of room.
This is what the cloud suppliers are building and are going to build, but increasingly, as we think about true private cloud, enterprises are going to look at this as well: future systems for improving data assets. The capacity of this type of system, with low latency to any source of data, means that we can now think about data not as a set of sources and sinks, each with some control over its own data, woven together by middleware and applications, but literally as networks of data. As we start to distribute data, and to distribute the control and authority associated with that data, more broadly across systems, we have to think about what it means to create networks of data, because that, in many respects, is how these assets are going to be forged. I haven't even mentioned the role that security is going to play in all of this, by the way, but fundamentally that's how it's likely to play out. We'll have a lot of different sources, but from a business standpoint, we're going to think about how those sources come together into a persistent network that can be acted upon by the business.

One of the primary drivers of this is what's going on at the edge. Marc Andreessen famously said that software is eating the world. Our observation is: great, but if software's eating the world, it's eating it at the edge. That's where it's happening. Secondly, there's this notion of agency zones; I said I'd bring that word up again. How systems act on behalf of a brand, an institution, or a business is very crucial, because the time necessary to do the analysis, perform the intelligence, and then take action is a real constraint on how we do things. Our expectation is that we're going to see what we call an agency zone, a hub zone, and a cloud zone, defined by latency, and we're going to architect data so that the data necessary to perform a piece of work gets into the zone where it's required (a small sketch of that idea follows below).

Now, the implication is that none of this is going to happen if we don't use AI and related technologies to increasingly automate how we handle infrastructure. And technologies like blockchain have the potential to provide an interesting way of imagining how these networks of data actually get structured. It's not going to solve everything; some people think blockchain is everything that's necessary, but it will be a way of describing a network of data. So we see those technologies in the ascendant. But what does it mean for the DBMS? In the old world, in the old way of thinking, the database manager was the control point for data. In the new world, these networks of data are going to exist beyond a single DBMS, and in fact, over time, the concept of federated data actually has the potential to become real. When we have these networks of data, we're going to need people to act upon them, and that's essentially a lot of what the data scientist is going to be doing: identifying the outcome, identifying the data that's required, and weaving that data through the construction, management, and manipulation of pipelines, to ensure that the data, as an asset, can persist for the purposes of solving a near-term problem, or over whatever duration is required to solve a longer-term problem.
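As a hedged illustration of the agency-zone idea, here is a minimal sketch that places a piece of work in the most centralized zone whose round-trip latency still fits the action's budget. The zone names echo the terms above, but the latency figures and placement rule are illustrative assumptions, not measurements or a described implementation.

```python
# A minimal sketch of latency-defined zones: place each piece of work in
# the most centralized zone whose round-trip latency still fits the
# action's budget. Zone latencies here are illustrative assumptions.

ZONES = [  # ordered nearest-first: (zone name, assumed round-trip ms)
    ("agency zone (edge)", 2),
    ("hub zone (true private cloud)", 20),
    ("cloud zone (public cloud)", 80),
]

def place_work(action: str, latency_budget_ms: float) -> str:
    """Return the most centralized zone that still meets the budget."""
    eligible = [name for name, rtt in ZONES if rtt <= latency_budget_ms]
    if not eligible:
        raise ValueError(f"no zone satisfies {latency_budget_ms} ms for {action!r}")
    return eligible[-1]  # the furthest zone that still fits

print(place_work("actuate a machine", 5))        # agency zone (edge)
print(place_work("score a transaction", 50))     # hub zone (true private cloud)
print(place_work("retrain the models", 3.6e6))   # cloud zone (public cloud)
```

The action with a millisecond budget stays at the edge, while model training, with a budget measured in hours, can live in the public cloud, which is the bifurcation described above.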
Data scientists remain very important, but as a consequence of improvements in tooling capable of doing these things, we're going to see an increasing recognition that there's a difference between one data scientist and another. A lot of folks are going to participate in the process of manipulating, maintaining, and managing these networks of data to create business outcomes, but we're going to see specialization in those ranks as the tooling becomes more targeted to specific types of activities. So data scientist will remain an important job, but it's going to lose a little bit of its luster as it becomes clearer what it actually means. Some data scientists will probably become, let's call them, data network administrators, or administrators of networks of data. And very importantly, as I said earlier, there just aren't enough of these people on the planet, so increasingly, when we think about digital business and the idea of creating data assets, a central challenge is going to be how to turn all the data that can be captured into assets that can be applied to a lot of different uses.

There are two fundamental changes on the horizon to the way we currently conceive of the big data world. One: it's pretty clear that Hadoop can only go so far. Hadoop is a great tool for certain types of activities and certain numbers of individuals, so Hadoop solves problems for an important but relatively limited subset of the world. The new data science platforms that I just talked about, with a degree of specialization that hasn't been available before in the data world, will certainly also help, but they too will only take it so far. The real way that we see the work the big data community is performing being turned into sources of value that extend into virtually every corner of humankind is through the cloud services that are being built, and increasingly through packaged applications. A lot of computer science still stands between what I just said and when it actually happens, but in many respects, that's the challenge for the vendor ecosystem: how to reconstruct the idea of packaged software, which has historically been built around operations and transaction processing, with a known data model, a known process, and some technology challenges, and reapply it to a world where we don't know exactly what the process is, because the data tells us, in the moment, what action is going to take place. It's a very different way of thinking about application development, a very different way of thinking about what's important in IT, and a very different way of thinking about how businesses are going to be constructed and how strategy is going to be established. Packaged applications are going to be crucially important.

So, in the last few minutes here, what are the numbers? This is the basis for our analysis: digital business, and the role of data as an asset, is having an enormous impact on how we think about hardware, how we think about database and data management, how we think about the people involved, and ultimately how we think about delivering all this value out to the world. The numbers are starting to reflect that. So keep four numbers in mind as I go through the next two or three slides.
$103 billion, 68%, 11%, and 2017. Of all the numbers you will see, those are four of the most important.

So let's start by looking at the total marketplace. This is the growth of the hardware, software, and services pieces of the big data universe. We have a fair amount of additional research that breaks these down into tighter segments, especially on the software side, but the key point here is that we're talking about big numbers: $103 billion over the course of the next 10 years. And let's be clear, that $103 billion actually has a dramatic amplification effect on the rest of the computing industry, because a lot of the pricing models, especially in software, are tied back to open source, which has its own issues. Very importantly, the services business is going to go through an enormous amount of change over the next five years as service companies better understand how to deliver some of these big-data-rich applications.

The second point to note is that 2017 was the year the software market surpassed the hardware market in big data. For the first number of years, we focused on buying the hardware and the system software associated with it, and the applications were something we hoped to discover. I was having a conversation here in theCUBE with the CEO of Transwarp, a very interesting Chinese big data company, and I asked: what's the difference between how you do things in China and how we do things in the US? He said, well, in the US you focus on the proof of concept. You spend an enormous amount of time asking: does the hardware work? Does the database software work? Does the data management software work? You have to placate the IT organization to make sure that everybody in IT is comfortable with what's about to happen. In China, we focus on the outcome; we focus on the business people. This is the first year that software is bigger than hardware, and it's only going to get bigger over time. That doesn't mean hardware is dead or unimportant. It's going to remain very important, but the locus of the industry is moving.

Now, when we look at market shares, it's a very fragmented market: 68% of the market is still "other." This is a highly immature market that's going to go through a number of changes over the next few years, partly catalyzed by that notion of infrastructure convergence. So our expectation is that, over the next four years, that 68% is going to start coming down pretty fast as we see greater consolidation. IBM is the biggest player on the basis that it operates in all of these segments, hardware, software, and services, but especially because it's very strong in the services business. The last one I want to point your attention to is this: I mentioned earlier that our expectation is that the market is increasingly going to move to a packaged application or packaged services orientation as a way of delivering big data expertise to customers. Splunk is the leading software player right now. Why? Because that's the perspective they've taken.
It's perhaps for a limited subset of individuals, markets, or sectors, but Splunk takes a packaged application, weaves these technologies together, and applies them to an outcome, and we think this presages more of that kind of activity over the course of the next few years. Oracle has taken a somewhat different approach, and we'll see how that plays out over the next five years as well. Okay, so that's where the numbers are. Again, there are a lot more numbers, and a lot of people you can talk to.

Let me give you some action items. First one: if data were a core asset, how would IT, how would your business, be different? Stop and think about that. If it weren't your buildings that were the asset, or your machines, or your people by themselves, but data that was the asset, how would you re-institutionalize work? That's what every business is starting to ask, even if they don't ask it in the same way. And our advice is: then do it, because that's the future of business. Not that data is the only asset, but data becomes a recognized central asset, and that's going to have enormous impacts on a lot of things. The second point I want to leave you with: tens of billions of users, and I'm including people and devices, are dependent on thousands of data scientists. That's an impedance mismatch that cannot be sustained. Packaged apps and these cloud services are going to be the way to bridge that gap. I'd love to tell you that it's all going to be about tools, that hundreds of thousands or millions or tens of millions of data scientists are suddenly going to emerge out of the woodwork. It's not going to happen. The third thing is that we think big businesses, enterprises, have to master what we call the big tech inflection. The first 50 years were about known process and unknown technology: how do I take an accounting package and put it on a mainframe, a minicomputer, client/server, or the web? Unknown technology. Today, all of us have a pretty good idea what the base technology is going to be. Does anybody doubt it's going to be the cloud? What we don't know is what new problems we can attack with data-rich approaches, turning those systems into actors on behalf of our business and our customers.

So, I'm a couple of minutes over, I apologize. I want to make sure everybody can get over to the keynotes if you want to. Feel free to stay; theCUBE's going to be live at 9:30, if I've got that right. It's actually pretty exciting if anybody wants to see how it works, so feel free to stay. George is here, Neil's here, I'm here. I mentioned Greg Terrio, Dave Vellante, John Greco; I think I saw Sam Kahane back in the corner. Any questions, come and ask us, we'll be more than happy. Thank you very much... oh, Dave Vellante. >> Dave: I have a question. >> Yes. >> Dave: Do you have time? >> Yep. >> Dave: So you talk about data as a core asset. If you look at the top five companies by market cap in the US, Google, Amazon, Facebook, etc., they're data companies; they've got data at the core, which is kind of what your first bullet here describes. How do you see traditional companies closing that gap, companies with humans, buildings, etc., at the core, as we enter this machine intelligence era? What's your advice to the traditional companies on how they close that gap? >> All right.
So the question was: the most valuable companies in the world are well down the path of treating data as an asset; how does everybody else get going? Our observation is, go back to the value proposition. What actions are most important? What data is necessary to perform those actions? Can changing the way the data is orchestrated, organized, and put together change the cost of performing that work by changing transaction costs? Can you introduce a new service along the same lines? And then architect your infrastructure and your business to make sure the data is near the action, in time for the action, so the action is absolute genius to your customer. It's a relatively simple thought process. That's how Amazon thought, and Apple increasingly thinks like that: they design the experience, and then they ask what data is necessary to deliver that experience. It's a simple approach, but it works. Yes, sir.

>> Audience Member: On the slide you had a few slides ago, the market share, the big spenders, you asked the question: do any of us doubt that cloud is the future? I'm with Snowflake, and I don't see many of those large vendors in the cloud, so I was wondering if you could speak to what you're seeing in terms of emerging vendors in that space.

>> What a great question. So the question was: when you look at the companies that are catalyzing a lot of the change, you don't see a lot of the big companies in the leadership, and someone from Snowflake just asked, well, who's going to lead it? That's a big question with a lot of implications, but at this point in time it's very clear that the big companies are suffering a bit from, I'm trying to remember what to call it, the RCA syndrome. I think Clay Christensen talked about this: the innovator's dilemma. RCA was one of the early leaders in transistor technology, and back in the forties and fifties it put that incredible new technology under the control of the people who ran the vacuum tube business. When was the last time anybody bought RCA stock? The same problem exists today. Now, how is that going to play out? Are we going to see, as we've always seen, a lot of new vendors emerge out of this industry and grow into big vendors, with IPO-related exits to scale their businesses? Or are we going to see a whole bunch of gobbling up? That's what I'm not clear on, but it's pretty clear at this point in time that a lot of the technology, and a lot of the science, is being done in smaller places. The moderating feature is the services side, because there are limited groupings of expertise, and the companies that are able to attract that expertise today, the Googles, the Facebooks, the AWSs, the Amazons, are doing so in support of a particular service. IBM and others are trying to attract that talent so they can apply it to customer problems. We'll see over the next few years whether the IBMs and the Accentures and the big service providers are able to attract the kind of talent necessary to diffuse that knowledge into the industry faster. So it comes down to the rate at which the idea of internet-scale computing, the idea of big data being applied to business problems, can diffuse into the marketplace through services. If it can diffuse faster, that will have an accelerating impact for smaller vendors, as it has in the past.
But it may also have a moderating impact, because IBM is going to find ways to drive that expertise into product faster than it ever has before. So it's a complicated answer, but that's our thinking at this point in time. >> Dave: Can I add to that? >> Yeah. (audience member speaking faintly) I think that's true now, but, not to argue with Dave, this is part of what we do: the real question is how that knowledge is going to diffuse into the enterprise broadly, because Airbnb, I doubt, is going to get into the business of providing services. (audience member speaking faintly) So I think the whole concept of community, partnership, and ecosystem is going to remain very important, as it always has, and we'll see how fast the service companies that are dedicated to diffusing knowledge into customer problems actually get there. Our expectation is that as the tooling gets better, we will see more people able to present themselves as truly capable of doing this, and that will accelerate the process. But the next few years are going to be really turbulent, and we'll see which way it actually goes. (audience member speaking faintly)

>> Audience Member: So I'm with IBM, and I can tell you 100% for sure that we are doing this. I hired literally 50 data scientists in the last three months to go out and do exactly what you're saying: sit down with clients and help them figure out how to do data science in the enterprise. So we are in fact scaling it. We're getting people who have done this at Google and Facebook, though not a whole lot of those, because we want to do it with people who have actually done it in legacy Fortune 500 companies, right? Because there's a little bit of a difference there. >> So. >> Audience Member: So we are doing exactly what you said, and Microsoft is doing the same thing, Amazon is actually doing the same thing too, Domino Data Lab. >> They don't like to talk about it too much, but they're doing it. >> Audience Member: All the big players in the data science platform game are doing this, at different scales. >> Exactly. >> Audience Member: IBM is doing it on a much bigger scale than anyone else. >> And that will have an impact on how the market ultimately gets structured and who the winners end up being.

>> Audience Member: To add to that, a lot of people thought, you mentioned the Red Hat of big data, a lot of people thought Cloudera was going to be the Red Hat of big data, and look at what's happened to their business. (background noise drowns out other sounds) They're getting surrounded by the cloud. They're looking at how they can get closer to companies like AWS. That was a wild card that wasn't expected. >> Yeah, but look, at the end of the day, Red Hat isn't even the Red Hat of open source. So the bottom line is, the thing to focus on is how this knowledge is going to diffuse. There are a lot of different ways: some of it's going to diffuse through tools, and if it diffuses through tools, it increases the likelihood that we'll have more people capable of doing this, so IBM and others can hire more, and Citibank can hire more. That's an important play. But it also says we're going to see more of the packaged applications emerge, because that facilitates the diffusion.
Nobody knows exactly the shape this is going to take, but that's the centerpiece of our big data research: how is that diffusion process going to happen and accelerate, what is the resulting structure going to look like, and ultimately, how are enterprises going to create value with whatever results? Yes, sir. (audience member asks question faintly)

So to recap the question: you see more people coming in and promising the moon, but being incapable of delivering, partly because the technology is uncertain, and for other reasons. Here's our observation, and we've actually done a fair amount of research on this. When you take an approach to doing big data that's optimized for the cost of procurement, that is, let's get the simplest combination of infrastructure, the simplest combination of open-source software, and the simplest contracting, you can stand up a proof of concept very quickly if you have enough expertise, but the process of turning that proof of concept into a production system extends dramatically. That's one of the reasons why the Clouderas did not take over the universe. There are other reasons. As George Gilbert's research has pointed out, Cloudera is spending 53 to 55 percent of its money right now just integrating all the stuff it bought into the distribution five years ago, which is hardly a recipe for creating customer value.

The bottom line, though, is that if we focus on time to value in production, we end up taking a different path. We don't focus as much on whether the hardware is going to work, whether the network is going to work, whether the storage can be integrated, how it's going to impact the database, and what it's going to mean for our Oracle license pool, and all the other things people tend to think about when they're focused on the technology. As a consequence, you get better time to value if you focus on bringing in the domain expertise, working with the right partner, and taking the appropriate approach: what's the value proposition, what actions are associated with that value proposition, what data is needed to perform those actions, how can I take transaction costs out of performing those actions, where does the data need to be, and what infrastructure do I require? We have to focus on time to value, not time to procure. And that's not what a lot of professional IT-oriented people are doing, because many of them, I hate to say it, still acquire new technology with the promise of helping the business but with a stronger focus on what it's going to mean for their careers.

All right, I want to be respectful of everybody's time. The keynotes start in about five minutes, which means you've just got time. If you want to stay, feel free to stay. We'll be here, and we'll be happy to talk, but I think that's pretty much going to close our presentation broadcast. Thank you very much for being an attentive audience, and I hope you found this useful. (upbeat music)
Peter Burris, Wikibon | Action Item Quick Take: NVMe over Fabrics, Feb 2018
(gentle electronic music) >> Hi, I'm Peter Burris. Welcome to another Wikibon Action Item Quick Take. There's a lot of new technology throughout the entire stack, including inside systems. One in particular is pretty important. David Floyer, tell us about it. >> Thank you. NVMe over Fabrics is what I'm going to talk about, and my take is that it's going to be very real in 2018. It's going to support all the protocols: iSCSI, Fibre Channel, InfiniBand, and Ethernet, so it's going to affect all storage. The incremental costs are low, very low. The performance is absolutely outstanding, and there are potentially huge savings on things like core licensing, so the savings within storage and the savings across the system will be large. My view is that it should become the design standard for storage in 2018. So the action item here is to assume that you are going to be implementing NVMe over Fabrics over the next 18 months as part of all storage purchases, and to ensure that all the NICs, the software, and so on will support it. The key question to ask of any vendor is: what is your committed NVMe rollout in 2018 and the start of 2019? >> David Floyer, thank you very much. Once again, the idea here is NVMe becoming not just a technology standard, but ready for prime time in a commercial way. This has been a Wikibon Action Item Quick Take. Thanks for watching. (gentle electronic music)
Peter Burris, Wikibon | Action Item Quick Take: Hortonworks, Feb 2018
(rhythmic techno) >> Hi, this is Peter Burris. Welcome to a Wikibon Action Item Quick Take. It's earnings season, and Hortonworks revealed some numbers; the market responded. George, what happened? >> So, Hortonworks had a good year and a good quarter in terms of meeting the expectations they set for Wall Street and analysts, though there was a little disappointment in the guidance. Normally we don't talk about earnings on a show like this, but I think it's worth focusing on here because it highlights an issue we've lost sight of. We've been in this environment now for 10 years where we see pricing in a slow-motion collapse, based on metered pricing models or subscription pricing models, as well as open source. But what hasn't changed is the cost of fielding a direct sales force to get customers to do enterprise-wide adoption. Everyone talks about land and expand, which is like self-service or, at best, inside sales. But to get wide-scale adoption, you need to call high, and you need solutions architects who can map the product to an enterprise-specific architecture and infrastructure. I think we're going to see convergence and consolidation. Hortonworks does have a very broad product line, and we're seeing evidence of uptake of the new products, especially for data in motion to go with its data lake product. But this is something we're going to have to watch with all vendors: can they afford to build the go-to-market channel that will make their customers successful? >> Once again, software's complex, especially enterprise software that promises to do complex and rich things. This has been a Wikibon Action Item Quick Take. Thank you for watching. (quiet techno)
Peter Burris, Wikibon | Action Item Quick Take: AWS Low Code, Feb 2018
(electronic pop music) >> Hi, I'm Peter Burris. Welcome to a Wikibon Action Item Quick Take. One of the biggest challenges all cloud players face is how to bring more developers into the ranks. Jim Kobielus, AWS did something interesting this week. Tell us about it. >> Well, they haven't actually done it, Peter, but there is a rumor that they're doing it. Let me explain. Darryl Taft, a very well-seasoned veteran reporter, now with TechTarget, reported that AWS is "appealing to the masses" with a low-code development project. I think that's exciting. He has it on strong background that they've got Adam Bosworth, formerly of Microsoft, heading up their low-code tool development effort. I think one of the things AWS is missing is a strong tool for developers, especially professional developers trying to rapidly build cloud applications, and also for the run-of-the-mill business user who wants to quickly put together an application right in the Amazon cloud. I'm impressed that they've got Adam Bosworth, who was very much one of the drivers behind the Access database at Microsoft. They say they've been developing it since last summer, so I'm hoping to see an actual low-code tool from AWS that brings them into this space in a major way, really to encourage more development of cloud applications running natively in the very sprawling and complex AWS world. >> All right. So, AWS is rumored to be expanding its attractiveness to developers. This has been a Wikibon Action Item Quick Take. (electronic pop music)
Peter Burris, Wikibon | Action Item, Feb 9 2018
>> Hi, I'm Peter Burris, and welcome to Wikibon's Action Item. (upbeat music) Once again, we're broadcasting from theCUBE studio in beautiful Palo Alto, California. Joining me here in the studio are George Gilbert and David Floyer, both Wikibon analysts, and, remote, welcome to Neil Raden and Jim Kobielus. This week we're going to talk about something that's actually quite important, and it's one of those examples of an innovation in which technologies maturing in multiple domains are brought together in unique and interesting ways to potentially revolutionize how work gets done. Specifically, we're talking about something we call augmented programming. The notion of augmented programming borrows from technologies associated with new, declarative, low-code development environments, from machine learning, and from an increasing understanding of the role automation is going to play, specifically as it pertains to human and human-augmented activities. Now, low-code programming has been around for a while, machine learning has been around for a while, and, increasingly, some of these notions of automation have been around for a while. But it's how they're coming together to create new approaches and new possibilities that can dramatically improve the speed of systems development, the quality of systems development, and, ultimately, very importantly, the ongoing manageability of those systems. So, Jim Kobielus, let's start with you. What are some of the issues associated with augmented programming that users need to focus on?

>> Yeah, well, the primary issue, or really the driver, is that we need to greatly increase the productivity of developers, because what's required of them is to build applications faster, with fewer resources, to deploy them more rapidly in DevOps environments, to manage that code, and to optimize it for ten zillion downstream platforms, from mobile to web to the Internet of Things. They need power tooling to drive this process. Now, the whole low-code space has been around for years. It evolved from what used to be called rapid application development, which itself evolved from the 4GL languages of decades past. Looking at it now, as we move toward the end of the second decade of this century, the low-code development space has evolved and is rapidly bifurcating into BPM and orchestration modeling tools on the one hand, and robotic process automation on the other, which enables the average end user or business analyst to quickly gin up an application by wiring together UI components fairly rapidly and driving it from the UI on in. What we're seeing now is that more and more machine learning is being used in the low-code development of applications. Machine learning is being used in a variety of capacities, one of which is simply to infer the appropriate program code from external assets like screenshots and wireframes, but also from database schemas and so forth. A lot of machine learning is coming to this space in a major way.

>> But it sounds, though, like there's still going to be some degree of specialization in the nature of the tools we might use in this notion of augmented programming.
So, RPA may be associated with a certain class of applications and environmental considerations, and there will be other tools that are associated with different application considerations and environmental attributes. But David Floyer, one of the things we're concerned about is this: a couple of weeks ago we talked about the notion of data-aware middleware, the idea that, increasingly, we'll see middleware emerge that's capable of moving data in response to the metadata attributes of the data, combined with visibility into the application patterns. But when we think about this notion of augmented programming, what are some of the potential limits that people have to think about as they consider these tools?

>> Peter, that's a very good question. The key for all of these techniques is to use the right tools in the right place. A lot of the leading-edge tooling in this space assumes an environment where the programmer has access to all of his data, owns it, and is the only person there. The challenge is that in many applications you are sharing data: sharing data across the organization, and sharing data between programmers. That introduces a huge amount of complexity, and there have been many attempts to tackle it. There have been data dictionaries, there have been data management approaches, ways of managing this data. They haven't had a very good history; the efforts involved in trying to make those work within an organization have been, at best, spasmodic. >> (laughs) Spasmodic, good word! >> When we go into this environment, I think the key is to make sure that you apply these tools initially to the areas where somebody does have access to all the data, and then carefully look at it from the point of view of shared data, because you have a whole lot of issues in stateful environments that you do not have in stateless environments, and the complexity of locking data, of many people accessing that data, requires another set of tools. I'm all in favor of these low-code-type environments, but you have to make sure that you're applying the right tools to the right type of applications.

>> And specifically, for example, a lot of the metadata that's typically associated with a database is not easily revealed to an application developer, nor to an application, so you have to be very, very careful about how you exploit it. Now, Neil Raden, there have been over the years, as David mentioned, a number of passes at doing this that didn't go so well, but there are some business reasons to think this time it might go a little better. Talk a little bit about some of the higher-level business considerations that may catalyze better adoption this time around.

>> One thing is that, no matter what kind of an organization you are, whether you're a huge multinational or an SMB or whatever, all of these companies are really rotten with what we call shadow systems. In other words, companies have applications that do what they do, and for what they don't do, people cobble things together. The vast majority of them are done in Access and Excel, still; even in advanced organizations, you'll find this. If there's a way to eliminate that, because it's a real killer of productivity, then that's a real positive.
I suppose my concern is that, when you deal at that level, how are you going to maintain coherency and consistency in those systems over time without adding, as David said, orchestration of those systems? What David is saying, I think, is really key. >> Yeah, I, go ahead, sorry, Neil. Go ahead. >> No, that's all right. What I was-- >> I think-- >> Peter: Sorry. Bad host. >> David: You think? >> Neil: No, go ahead. >> No, what I was going to say was that a crucial feature of this is that a lot of the time the application is owned by a business line, and the business line presumes that it owns its data. They have modeled those systems for a certain type of work, a certain volume of work, and a certain distribution of control, and when you reveal a lot of this stuff, you sometimes break those assumptions. That can lead to really serious breaks in the system.

>> You know, they're not always evil, as we like to characterize them. Some of them are actually well-thought-out and really good systems, better than anything they could get from the IT organization. But the point is, they're usually pretty brittle, and they require a lot of effort from the people who developed them to keep them running, because they don't use the kinds of tools, approaches, platforms, and methodologies that lend themselves to good-quality software. I think there's real potential for RPA in that area.

>> I think there are also some interesting platforms that are driving to help in this particular area, particularly for applications that go across departments in an organization. ServiceNow, for example, has a very powerful platform for very high-level production of systems, and it's being used a lot of the time to automate procedures that go across different departments. I think there are some extremely good tools coming out which will significantly help, but they help more with serial procedures than with concurrent procedures.

>> And there are some expectations about the type of tools you use, the extensibility of those tools, et cetera, which leads me, George, to ask about some of the machine learning attributes of this. We've got to be careful about machine learning being positioned as the panacea for all business problems, which too often seems to be the case. But it's certainly reasonable to observe that machine learning can help us in important ways to understand how patterns in applications and data are working, and how people are working together. Talk a little bit about the machine learning attributes of some of these tools.

>> Well, I like to say that every few years we have a technology we get so excited about that we assume it tastes like chocolate, costs a dollar, and cures cancer. Machine learning is that technology right now. The interesting thing about robotic process automation in many low-code environments is that it's sort of inheriting the mantle of the old application macros, and even the cross-application macros, from the early desktop office wars. The difference is that back then there were APIs those scripts could talk to, so they could treat the desktop applications as an application platform. As David and Neil said, we're going through application user interfaces now, and when you want to do a low-code programming environment, you often want to program by example.
But then you need to generalize parts: when a recorded script moves this thing to this place, you may want to generalize that step. That's where machine learning can start helping take literal scripts and add more abstract constructs to them (a small sketch of that idea follows below). >> So, you're literally digitizing some of the primitives in these applications, and that allows you to reveal data that machine learning can use to make observations and recommendations about patterns, and actually do code generation. >> And you know, I would add one thing: it's not just about the UI anymore, because we're surfacing, as we were talking about earlier, the data-driven middleware, another way of looking at what used to be the system catalog. We used to have big applications all talking to a central database, but now that we have so many repositories, we're sort of extricating the system catalog so that we can look at and curate data in many locations. These tools can access that, because they have user interfaces as well as APIs. And then, in addition, you don't have to go against a database that is unprotected by an application's business logic. More and more, we have microservices and serverless functions that embody the business logic; you can go against them, and they enforce the rules as well.
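To make the program-by-example point concrete, here is a toy, hypothetical sketch of generalizing a recorded, literal script into a parameterized routine. The recorded steps, field names, and generalization rule are invented for illustration; production RPA tools learn far richer abstractions than this.

```python
# A toy illustration of generalizing a recorded, literal UI script into a
# parameterized routine. The recorded steps and field names are invented
# for the sake of the sketch; real RPA tools learn far richer models.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    action: str      # e.g. "type" or "click"
    target: str      # the UI element the action touched
    value: str = ""  # literal text the user typed, if any

# What a recorder captures: one literal run against one invoice.
recorded = [
    Step("type", "invoice_number", "INV-10072"),
    Step("type", "amount", "1834.50"),
    Step("click", "submit_button"),
]

def generalize(steps: list[Step]) -> Callable:
    """Turn literal typed values into parameters, keeping clicks fixed.
    (A learning system would instead infer which literals vary across runs.)"""
    slots = [s.target for s in steps if s.action == "type"]

    def replay(**kwargs):
        for s in steps:
            value = kwargs[s.target] if s.action == "type" else ""
            print(f"{s.action:5s} {s.target:15s} {value}")

    replay.parameters = slots
    return replay

enter_invoice = generalize(recorded)
print(enter_invoice.parameters)  # ['invoice_number', 'amount']
enter_invoice(invoice_number="INV-10073", amount="220.00")
```

The point is not the toy code; it's that the literal recording is itself data, and machine learning's job is to infer which parts of that data are parameters and which are structure.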
This is a very exciting area, but when we're looking at low-code, for example, you're still going to need those well-crafted algorithms, that well-crafted, very fast code, as one of the tools of programmers. There's still going to be a need for people who can create these very fast algorithms. An exciting time all the way around for programmers. >> What were you going to say, Jim? And I want to come back and have you talk about DevOps for a second. >> Yeah, I'll add to what David was just saying. Most low-code tools are not entirely no-code, meaning what they do is auto-generate code pursuant to some business-declared specification. Professional programmers can then go in and modify that code, tweak it, and optimize it. And I want to tie in now to something that George was talking about, the role of ML in this process. ML can make a huge mess, in the sense that ML can be an enabler for more people who don't know a whole lot about development to build stuff willy-nilly, so there's more code out there than you can shake a stick at, and no standards. But also, I'm seeing, and I saw this past week, MIT has a project, they already have a tool, that's able to use ML to take a segment of code out of one program, transplant it into another application, and modify it so it fits the context of the new application along various attributes, and so forth. What I'm getting at is that, according to what, say, MIT has done, ML can be a tool for enabling reuse of code and re-contextualization and tweaking of code. In other words, ML can be a handmaiden of enforcing standards as code gets repurposed throughout these low-code environments. So ML is a double-edged sword, in terms of enabling stronger or weaker governance over the whole development process. >> Yeah, and I want to add to that, Jim, that it's not just that you can enforce, or at least reveal, standards and compliance; it also increases the likelihood that we become a little bit less tool-dependent. Going back to what you were talking about, David, it increases the likelihood that people are using the right tool for the right job, which is a pretty crucial element of this, especially as adoption proceeds. So, Jim, give us a couple of quick observations on what a development organization is going to have to do differently to get going on utilizing some of these technologies. What are the top two or three things that folks are going to have to think about? >> First of all, in the low-code space, there are general-purpose tools that can bang out code for various target languages, for various applications, and there are highly special-purpose tools that can go gangbusters on auto-generating web application code and mobile code and IoT code. First and foremost, you've got to decide how much of the ocean you want to boil, in terms of low-code. I recommend that if you have a requirement for accelerating, say, mobile code development, then go with low-code tools that are geared to iOS and Android and so forth as your target platforms, and stay there. Don't feel like you have to get some monster suite that can do everything, potentially. That's one critical thing. 
Another critical thing is that the tool you adopt needs to be more than just a development tool. It needs to also have capabilities built in to help your team govern those code builds within whatever DevOps, CI/CD, or repository environment you have inside your organization; make sure that the tool you've got plays well with your DevOps environment, with your workflows, with your code repositories. And then, number three, we keep forgetting this, but the front-end development is still not a walk in the park. In fact, specifying the complex business logic that drives all this code generation is stuff for professional developers more often than not. These are complex; even RPA tools are, quite frankly, not as user-friendly as they potentially could be down the road, 'cause you still need somebody to think through the end-to-end application, and then to specify, at a declarative level, the steps that need to be accomplished before the RPA tool can do its magic and build something that you might want to then crystallize as a repeatable asset in your organization. >> So it doesn't take the thinking out of application development. >> James: Oh, no, no, no, no. >> All right, so, let's do this. Let's hit the action items and see what we all think folks should do next. David Floyer, let me start with you. What's the action item out of this? >> The action item is horses for courses. The right horse for the right course, the right tools for the right job. Understand where things are stateless and where things are stateful, and use the appropriate tools, and, as Jim was just saying, make sure that there is integration of those tools into the current processes and procedures for coding. >> George Gilbert, action item. >> I would say that, building on that, start with pilots that involve one or a couple of enterprise applications, but with less branching, if-then type of logic built in. It could be hardwired-- >> So, simple flows? >> Simple flows, so that over time you can generalize that and play with how the RPA tools or low-code tools can generalize their auto-generated code. >> Peter: Neil Raden, action item. >> My suggestion is that if you involve someone who's going to learn how to use these tools and develop an application or applications for you, make sure that you're dealing with someone who's going to be around for a while, because otherwise, you're going to end up with a lot of orphan code that you can't maintain. We've certainly seen that before. >> David: That's great. >> Peter: Jim Kobielus, action item. >> Yeah, action item is, approach low-code as tooling for the professional developer, not necessarily as a way to bring in untrained, non-traditional developers. Like Neil said, make sure that the low-code environment itself is there for the long haul, that it'll be managed and used by professional developers, and make sure that they are provided with a front-end visual workspace that helps them do their jobs most effectively, that is user-friendly for them to get stuff done in a hurry. And don't worry about bringing freelance, untrained developers into your organization, or somehow re-tasking your business analysts to become coders. That's probably not the best idea in the long run, for maintainability of the code, if nothing else. >> Certainly not in the intermediate term. Okay, so here's the action item. Here's our Wikibon Action Item. 
As digital business progresses, it needs to be able to create digital assets that are predicated on valuable data faster, in a more flexible way, with more business knowledge embedded and imbued directly in how the process works. A new class of tools is emerging that we think will actually allow this to happen more successfully. It combines mature knowledge from the application development world with new insights into how machine learning works, and a new understanding of the impacts of automation on organizations. We call these augmented programming tools, and essentially, we call them augmented programming because in this case, the system is taking some degree of responsibility, on behalf of the business, for generating code, identifying patterns, and ultimately doing a better job of maintaining how applications get organized and run. While these technologies have potential power, we have to acknowledge that there's never going to be a one-size-fits-all approach. In fact, we believe very strongly that we're going to see a range of different tools emerge that will allow developers to take advantage of this approach, given their starting point, the artifacts that are available, and the characteristics of the applications that have to be built. One of the ones that we think is particularly important is robotic process automation, or RPA, which starts with the idea of being able to discover something about the way applications work by looking at how the application behaves onscreen, encapsulate that, and generalize it so that it can be used as a tool in future application development work. We also note that these application development technologies will not operate independently of other technology and organizational changes within the business. Specifically, on the technology side, we are encouraged that there's a continuing evolution of hardware technology that's going to take advantage of faster data access utilizing solid-state disks, NVMe over fabrics, and new types of system architectures that are much better suited for rapid shared data access. Additionally, we observe that there are new classes of technologies emerging that allow a data control plane to actually operate based on metadata characteristics, informed by application patterns, often through things like machine learning. One of the organizational issues that we think is really crucial is that folks should not presume that this is going to be a path for taking anybody in the business and turning them into an application developer. You still have to be able to think like an application developer and imagine how you turn a business process into something that looks like a program. But another group that we think has to be considered here is not just the DevOps people, although that's important, but go down a level: the good old DBAs, who have always suffered through new advances in tools that assumed the data in a database is always available, that they don't have to worry about transaction scaling, and that they don't have to worry about the way the database manager is set up. It would be unfortunate if the value of these tools from a collaboration standpoint, to work better with the business, to work better with the younger programmers, ended up failing because developers continue to not pay attention to how the underlying systems that currently control a lot of the data operate. Okay, once again, we really appreciate you participating. 
Thank you, David Floyer and George Gilbert, and on the remote, Neil Raden and Jim Kobielus. We've been talking about augmented programming. This has been Wikibon Action Item. (upbeat music)
Peter Burris, Wikibon | Action Item Quick Take: Teradata, Feb 2018
(electronic pop music) >> Hi, I'm Peter Burris. Welcome to a Wikibon Action Item Quick Take. This week, Teradata announced some earnings and some changes. Neil Raden, what happened? >> A couple of years ago, and don't hold my feet to the fire on this, most people considered Teradata to be dying out, a company with great technology that just wasn't current with where things were going. They saw that, too, and they've done a tremendous job of reinventing themselves. The progress was evident in their fourth quarter and full fiscal year numbers. They weren't spectacular, but they did beat everybody's estimates, which is a good thing. They also showed something like $250 million in subscription income, which was probably zero a year and a half ago. So that's a good thing. I think it's showing that they're making progress. They're not out of the woods yet, obviously, but I think that the program is a good program and the numbers are showing it. The other thing that I really, really like is that they elevated Oliver Ratzesberger to COO. So he's now basically in charge of pretty much everything, right? (laughs) He's going to take charge of the entire organization's sales, and marketing, and service, and so forth. He was in charge of product before this. Really good things have happened in terms of their technology with Oliver. I've known Oliver for a while; he was with eBay and did a great job there. I think he's going to stick around. Sales, products, services, and marketing under one team, that's a pretty tall order. But I think he's up to it, and I'm looking forward to 2018 and seeing how well they do. >> Excellent, Neil. So, Teradata transitioning and finding people that can make it happen. This has been a Wikibon Action Item Quick Take. (electronic pop music)
2018-01-26 Wikibon Action Item with Peter Burris
>> Hi, I'm Peter Burris. Welcome to Wikibon's Action Item. (light instrumental music) No one can argue that big data and related technologies have had significant impact on how businesses run, especially digital businesses. The evidence is everywhere. Just watch Amazon as it works its way through any number of different markets. It's highly dependent upon what you can get out of big data technologies to do a better job of anticipating customer needs, predicting best actions, making recommendations, et cetera. On the other hand, nobody can argue, however, that the overall concept of big data has had significant issues from the standpoint of everybody being able to get similar types of value. It just hasn't happened. There have been a lot of failures. So today, from our Palo Alto studios, I've asked David Floyer, who's with me here, Jim Kobielus and Ralph Finos and George Gilbert are on the line, and what we're going to talk about is effectively where we are with big data pipelines from a maturity standpoint, to better increase the likelihood that all businesses are capable of getting value out of this. Jim, why don't you take us through it. What's the core issue as we think about the maturing of machine analytics, big data pipelines? >> Yeah, the core issue is the maturation of the machine learning pipeline, how mature is it? And the way Wikibon looks at the maturation of the machine learning pipeline, independent of the platforms that are used to implement that pipeline, is through three issues. To what extent has it been standardized? Is there a standard conception of the various tasks, phases, functions, and their sequence? Number two, to what extent has this pipeline, at various points or end to end, been automated to enable end-to-end consistency? And number three, to what extent has this pipeline been accelerated, not just through automation but through collaboration and through handling things like governance in a repeatable way? Those are core issues in terms of the ML pipeline. But in the broader sense, the ML pipeline is only one work stream in the broader application development pipeline that includes code development and testing. So dev ops is really the broader phenomenon here. The ML pipeline is one segment of the dev ops pipeline. >> So we need to start thinking about how we can envision the ML pipeline creating assets that businesses can use in a lot of different ways, those assets specifically being models, machine learning models that can be used in more high-value analytic systems. This pressure has been in place for quite a while. But David Floyer, there's a reason why right now this has become important. Why don't you give us a quick overview of kind of where does this go? Why now? >> Why now? Why now is because automation is in full swing, and you've just seen Amazon having the ability now to automate warehouses, and they've just announced the ability to automate stores, brick and mortar stores. You go in. You pick something up. You walk out. And that's all you have to do. No lines at checkout. No people at the checkout, a completely automated store. So that business model of automation of business processes is, to me, what all this has to lead up to. We have to take the existing automation that we have, which is the systems of record and other automation that we've had for many years, and then we have to take the new capabilities of AI and other areas of automation, and apply those to that existing automation and start on this journey. 
It's a 10 year journey or more to automating as many of those business processes as possible. Something like 80% or 90% are there and can be automated. It's an exciting future, but what we have to focus on is being able to do it now and start doing it now. >> So that requires that we really do take an asset-oriented approach to all of this. At the end of the day, it's impossible to imagine a business taking on increasing complexity within the technology infrastructure if it hasn't taken care of business in very core ways, not the least of which is whether we have, as a business, a consistent approach to thinking about how we build these models. So Jim, you've noted that there are kind of three overarching considerations. Help us go into it a little bit. Where are the problems that businesses are facing? Where are they seeing the lack of standardization creating the greatest issues? >> Yeah, well, first of all, the whole notion of a machine learning pipeline has a long vintage. It actually descends from the notion of a data mining pipeline, and the data mining industry, years ago, consolidated or had a consensus around a model called CRISP-DM. I won't bore you with the details there. Taking it forward to an analytical pipeline or a machine learning pipeline, the critical issue we see now is that the type of asset that's being built and productionized is a machine learning model, which is a statistical model that is increasingly built on artificial neural networks, you know, to drive things like deep learning. Some of the critical things up front, the preparation of all the data in terms of ingest and transformation and cleansing, that's an old set of problems, well established, and there are a lot of tools on the market that do that really well. That's all critical for data preparation prior to the modeling process truly beginning. >> So is that breaking down, Jim? Is that the part that's breaking down? Is it the upfront understanding of the processes, or is it somewhere else in the pipeline process that is-- >> Yeah, it's in the middle, Peter. The modeling itself for machine learning is where, you know, there's a number of things that have to happen for these models to be highly predictive. A, you have to do something called feature engineering, and that's really fundamentally looking for the predictors in large data sets that you can build into models. And you can use various forms. So feature engineering is a highly manual process that, to some extent, is increasingly being automated. But a lot of the really leading-edge technology is in the research institutes of the world. That's a huge issue, how to automate more of the upfront feature engineering. That feeds into the second core issue, which is that there are 10 zillion ways to skin the statistical model cat, the algorithms. You know, from the older models, like support vector machines, to the newer artificial neural networks, convolutional networks, blah, blah, blah. So a core issue is, okay, you have a feature set through feature engineering; which of the 10 zillion algorithms should you use to actually build the model based on that feature set? There are tools on the market that can accelerate some of the selection and testing of those alternate ways of building out those models. But once again, that traditionally manual process of selecting the algorithms and building the models still needs a lot of manual care and feeding to really be done right. It's human judgment. You really need high-powered data scientists. 
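A minimal sketch of the algorithm-selection step Jim is describing, using scikit-learn on synthetic data. The three candidate model families are illustrative assumptions, a tiny stand-in for the "10 zillion" options; real selection tooling would also search hyperparameters and feature subsets.

```python
# Cross-validate several candidate model families on the same engineered
# feature set and keep the best scorer: the core loop that automated
# model-selection tools accelerate.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for a prepared, feature-engineered data set.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "support_vector_machine": SVC(),
    "gradient_boosting": GradientBoostingClassifier(),
}

# Five-fold cross-validation gives each candidate a comparable score.
scores = {
    name: cross_val_score(model, X, y, cv=5).mean()
    for name, model in candidates.items()
}

for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.3f}")
print("best candidate:", max(scores, key=scores.get))
```

The human-judgment part Jim emphasizes sits outside this loop: deciding which candidates belong in the dictionary at all, and whether the winning score means the model is actually fit for the business purpose.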
And then three, once you have the models built, training them. Training with actual data is critical to determine whether the models actually are predictive, or do face recognition, or whatever it is, with a high degree of accuracy. Training itself is a very complicated pipeline in its own right. It takes a lot of time. It takes a lot of resources, a lot of storage. You've got, you know, your data lake and so forth. The whole issue of standardizing on training of machine learning models is a black art on its own. And I'm just scratching the surface of these issues that are outstanding in terms of actually getting greater automation into a highly manual, highly expert-driven process. Go ahead, David. >> Jim, can I just break in? You've mentioned three things. They're very much in the AI portion of this discussion. The endpoint has to be something which allows automation of the business process, and fundamentally, it's real time automation. I think you would agree with that. So the outcome of that model then has to be a piece of code that is going to be part of the overall automation system in the enterprise and has to fit in, and if it's going to be real time, it's got to be really fast as well. >> In other words, if the asset that's created by this pipeline is going to be used in some other set of activities? >> Correct, so it needs to be tested in that set of activities as part of the normal cycle. So what is the automation? What is the process to get that code into a place where it can actually be useful to the enterprise and save money? >> Yeah, David, it's called dev ops, and really dev ops means a number of different things, including especially a source code control repository. You know, in the broader scheme of things, that repository for your code, for dev ops, for continuous release cycles, needs to be expanded in scope to include machine learning models, deep learning, whatever it is you're building based on the data. What I'm getting at is a deepening repository of what I call logic that is driving your applications. It's code. It's Java, C++, or C# or whatever. It's statistical and predictive models. It's orchestration models you're using for BPM and so forth. It's maybe graph models. It's a deep and thickening layer of logic that needs to be pushed into your downstream applications to drive these levels of automation. >> Peter: So Jim? >> It has to be governed and consolidated. >> So Jim? The bottom line is we need maturity in the pipeline associated with machine learning and big data so that we can increase maturity in how we apply those assets elsewhere in the organization? Have I got that right? >> Right. >> George, what is that going to look like? >> Well, I want to build on what Jim was talking about earlier. My way of looking at this, at the pipeline, is actually to break it out into four different ones. And actually, as Jim pointed out, there are potentially more than four. But the first is the design time pipeline for the applications, these new modern, operational, analytic applications, and I'll tie that back to the systems of record. The second is the run time pipeline for these new operational, analytic applications, and those applications really have a separate pipeline for design time and run time of the machine learning models. And the reason I keep them separate is that they are on a separate development, deployment, and administration scaffolding from the operational applications. 
And the way it works with the systems of record, which of course we're not going to be tearing out for decades, is that they might call out to one of these new applications, feed in some predictors, or have some calculated, and then get a prediction or a prescription back for the system of record. I think the parts-- >> So George, what has to happen is we have to be able to ensure that the development activities that actually build the applications the business finds valuable, the processes by which we report some of the outcomes of these things into the business, and the pipelines associated with building these models, which are the artifacts and the assets created by the pipelines, all have to come together. Are we talking about a single machine learning or big data pipeline? George, you mentioned four. Are we going to see pipelines for machine learning and pipelines for deep learning and pipelines for other types of AI? Are we going to see a portfolio of pipelines? What do you guys think? >> I think so, but here's the thing. I think there's going to be a consolidated data lake from which all of these pipelines draw the data that are used for modeling and downstream deployment. But if you look at training of models, you know, deep learning models, which are, like their name indicates, deep and hierarchical. They're used for things like image recognition and so forth. The data there is video and speech and so forth. And there are different kinds of algorithms used to build them, and different types of training that need to happen for deep learning versus other machine learning models versus whatever else-- >> So Jim, let me stop you because-- >> There are different processes. >> Jim, let me stop you. I want to get to the meat of this, guys. Tell me what a user needs to do from a design standpoint to inform their choice of pipeline building, and then secondarily, what kind of tools they're going to need. Does it start with the idea that there are different algorithms? That there are different assets being created at the model level? Is it really going to feed that, and that's going to lead to a choice of tools? Is it the application requirements? How mature, how standardized, can we really put in place conventions for doing this now so it becomes a strategic business capability? >> I think there has to be a recognition that there are different use cases downstream. 'Cause these are different types of applications entirely, built from AI in the broadest sense. And they require different data, different algorithms. But you look at the use cases. So in other words, the use cases, like chatbots. That's a use case now for AI. That's a very different use case from, say, a self-driving vehicle. So those need entirely different pipelines in every capacity to be able to build out and deploy and manage those disparate applications. >> Let me make sure I got this, Jim. What you're saying is that the process of creating a machine learning asset, a model, is going to be different at the pipeline level. It's not going to be different at the data level. It's going to be different at the pipeline level. George, does that make sense? Is that right? Do you see it that way, too, as we talk to folks? 
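Before George answers, it's worth making his call-out pattern concrete. A minimal sketch, in Python: the system of record passes predictors across a service boundary and gets a prediction or prescription back. The churn model, the field names, and the threshold are all hypothetical assumptions; in production the boundary would be a network API, and the model would come from the design time pipeline rather than being trained in-process.

```python
# Design-time side: train a model (synthetic data stands in for the lake).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Run-time side: the service boundary the system of record calls.
def predict_churn(predictors):
    """Take a vector of predictors, return a prediction and a prescription."""
    risk = float(model.predict_proba([predictors])[0][1])
    return {
        "churn_risk": round(risk, 3),
        "prescription": "offer_retention_discount" if risk > 0.5 else "no_action",
    }

# What the system of record does: send predictors, act on the answer.
print(predict_churn([0.2, -1.1, 0.7, 0.3]))
```

The system of record never sees the model internals; it depends only on the shape of the request and the response, which is what lets the model pipeline evolve on its own scaffolding.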
>> I do see what Jim is saying, in the sense that if you're using sort of operational tooling or guardrails to maintain the fidelity of your model that's being called by an existing system of record, that's very different tooling from what's going to be managing your IoT models, which have to get distributed, and which may have sort of a central canonical version and then an edge-specific instance. In other words, I do think we're going to see different tooling because we're going to see different types of applications being fed and maintained by these models. >> Organizationally, we might have a common framework or approach, but the different use cases will drive different technology selections, and those pipelines themselves will be regarded as assets that generate machine learning and other types of assets that then get applied inside these automation applications. Have I got that right, guys? >> Yes. >> Yes. A quick example to illustrate exactly what we're referring to here. So IoT: George brought up IoT analytics with AI built into its edge applications. We're going to see a bifurcation between IoT analytic applications where the training of the models is done in a centralized way, because you've got huge amounts of data that need to be training these very complex models that are running in the cloud but driving all these edge nodes and gateways and so forth, and, on the other hand, another pipeline for edge-based training of models for things like autonomous operation, where more of the actual training will happen at the edges, at the perimeter. It'll be different types of training using different types of data, with different types of time lags and so forth built in. But there will be distinct pipelines that need to be managed in a broader architecture. >> So issues like the ownership of the data, the intellectual property control of the data, the location of the data, the degree to which regulatory compliance is associated with it, how it gets tested, all those types of issues are going to have an impact on the nature of the pipelines that we build here. >> Yes. >> So look, one of the biggest challenges that every IT organization has, in fact every business has, is the challenge that if you have this much going on, the slowest part of it slows everything else down. So there's always an impedance mismatch organizationally. Are we going to see a forcing of data science and application development routines, practices, and conventions to come together, because the app development world, which is being asked to go faster and faster and faster, is at some point going to say, I can't wait for these guys to do their sandbox stuff? What do you think, guys? Are we going to see that? David, I'll look at you first, and Jim, I'll go to you. >> Sure, I think that the central point of control for this is going to have to be the business case for developing this automation, and therefore, from that, what's required in that system of record. >> Peter: Where the money is. >> Where the money is. What is required to make that automation happen, and therefore, from that, what are you going to pick as your ways of doing that? And I think that at the moment, it seems to me as an outsider, it's much more driven by the data scientists rather than by the people in the business line, and eventually the application developers themselves. I think that shift has to happen. 
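Jim's bifurcation is easy to sketch: train centrally, where the pooled data lives, then ship a compact serialized model to the edge for low-latency local inference. The decision tree and the pickle transport below are illustrative assumptions; a real IoT platform would add distribution, versioning, and a safer serialization format.

```python
# Central pipeline: train on the pooled data in the cloud or data lake.
import pickle

from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=8, random_state=1)
central_model = DecisionTreeClassifier(max_depth=5).fit(X, y)

# "Distribute" the model: in practice this artifact would be pushed to
# gateways and edge nodes over whatever channel the platform provides.
artifact = pickle.dumps(central_model)
print(f"model artifact: {len(artifact)} bytes")

# Edge pipeline: load the artifact and score locally, with no round trip
# back to the cloud on the inference path.
edge_model = pickle.loads(artifact)
sensor_reading = X[0]  # stand-in for a live sensor vector
print("edge decision:", edge_model.predict([sensor_reading])[0])
```

The edge-trained variant Jim also mentions would invert the flow, fitting or fine-tuning models on data that never leaves the perimeter, which is exactly why the two cases end up as distinct pipelines.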
>> Well, yeah, well, one of our predictions has been that the tools are improving and that that's going to allow for a separation, an increased specialization of the data science world, and we'll see the difference between people who are really doing data science and people who are doing support work. And I think what we're saying here is those people who do support work are going to end up moving closer to the application development world. Jim, I think that's basically some research that you've done as well. Have I got that right? Okay, so let me wrap up our Action Item here. David Floyer, do you have a quick observation, a quick Action Item for this segment? >> For this segment? The Action Item to me is putting together a business case for automation, the fundamental reduction of costs and improvement of the business model; that, to me, is what starts this off. How are you going to save money? Where is it most important? Where in your business model is it most important? And what we've done in some very recent research is put out a starting point for this discussion, a business model of a 10 billion dollar company, and we're predicting that it saves 14 billion dollars. >> Let's come to that. The Action Item is basically, start getting serious about this stuff, based on business cases. All right, so let me summarize very quickly. For Jim Kobielus and George Gilbert and Ralph Finos, who seem to have disappeared off our screens, and David Floyer, our Action Item is this. The leaders in the industry, in the digital world, are starting to apply things like machine learning, deep learning, and other AI forms very aggressively to compete, and that's going to force everybody to get better at this. The challenge, of course, is that if you're spending most of your time on the underlying technology, you're not spending most of your time figuring out how to actually deliver the business results. Our expectation is that over the course of the next year, one of the things that will happen significantly within organizations will be a drive to improve the degree to which machine learning pipelines become more standardized, reflecting good data science practices within the business, which itself will change based on the nature of the business, regulated businesses versus non-regulated businesses, for example. Having those activities be reflected in the tooling choices, having those tooling choices then be reflected in the types of models you want to build, and having those models, those machine learning models, ultimately reflect the needs of the business case. This is going to be a domain that requires a lot of thought in a lot of IT organizations, with a lot of invention yet to be done here. But it's going to, we believe, drive a degree of specialization within the data science world as the tools improve, and a realignment of crucial value-creating activities within the business, so that what is data science becomes data science, and what's more support, what's more related to building these pipelines and operating these pipelines, becomes more associated with dev ops and application development overall. All right, so for the Wikibon team, Jim Kobielus, Ralph Finos, George Gilbert, and here in the studio with me, David Floyer, this has been Wikibon's Action Item. We look forward to seeing you again. (light instrumental music)
Action Item with Peter Burris
>> Hi, I'm Peter Burris. Welcome to Wikibon's Action Item. (light instrumental music) On Action Item, every week I assemble the core of the Wikibon research team here in our theCUBE Palo Alto studios, as well as remotely, to discuss a seminal topic that's facing the technology industry, and business overall, as we navigate this complex transition of digital business. Here in the studio with me this week, I have David Floyer. David, welcome. >> Thank you. >> And then remotely, we have George Gilbert, Neil Raden, Jim Kobielus, and Ralph Finos. Guys, thank you very much for joining today. >> Hi, how are you doing? >> Great to be here. >> This week, we're going to discuss something that's a challenge to talk about in a small format, but we're going to do our best, and that is, given that the industry is maneuvering through this significant transformation from a product orientation to a services orientation, what's that going to mean for business models? Now this is not a small question, because there are some very, very big players that the technology industry has been extremely dependent upon to drive forward invention, and innovation, and new ideas, and customers, that are entirely dependent upon this ongoing stream of product revenue. On the other hand, we've got companies like AWS, and others that are much more dependent upon the notion of services revenue, where the delivery of the value is in a continuous service orientation. And we include most of the SaaS players in that as well, like Salesforce, etc. So how are those crucial companies, that have been so central to the development of the technology industry, and still are essential to the future of the technology industry, going to navigate this transition? Similarly, how are the services companies, for those circumstances in which the customer does want a private asset that they can utilize as a basis for performing their core business, going to introduce a product orientation? What's that mix, what's that match going to be? And that's what we're going to talk about today. So David, I've kind of laid it out, but really, where are we in this notion of product to service in some of these business model changes? >> It's an early stage, but there are very, very profound changes going on. We can see it from the amount of business the cloud suppliers are providing. You can see that Amazon, Google, IBM, and Microsoft Azure, all of those are putting very large resources into creating services to be provided to the business itself. But equally, we are aware that services themselves need to be on premise as well, so we're seeing the movement to true private cloud, for example, which is going to be provided as a service as well. So if we take some examples, like, for example, Oracle Cloud at Customer: they're providing exactly the same service on premise as they provide in the cloud. >> And by service, you mean how the customer utilizes the technologies. >> Correct. >> The asset arrangement may be very different, but the proposition of what the customer gets out of the assets is essentially the same. >> Yes, the previous model was, we provide you with a product, you buy a number of those products, you put them together, you service it, you look after it. 
The new model, here coming in with TPC, with the single throat to choke, is that the vendor will look after the maintenance of everything, putting in new releases, bringing things up to date, and they will have a smaller set of things that they will support, and as a result, it's win-win. It's a win for the customer, because his costs are lower, and he can concentrate on differentiated services. >> And secure and privatize his assets. >> Right, and the vendor wins because they have economies of scale; they can provide it at a much lower cost as well. And even more important to both sides is that the time to value of new releases is much, much quicker, and the time to close security exposures, the time to a whole number of other things, improves with this new model. >> So Jim, when we think about this notion of a services orientation, ultimately, it starts to change the relationships between the customer and the vendor. And the consequence of that is, not surprisingly, that a number of different considerations, whether they be metrics, or other elements, become more important. Specifically, we start thinking about the experience that the customer has of using something. Walk us through this kind of transition to an experience-oriented approach to conceiving of whether or not the business model's being successful. >> Right, your customer will now perceive the experience in the context of an entire engagement that is multi-channel, multi-touch point, multi-device, multi-application, and so forth, where they're expecting the same experience, the same value, the same repeatable package of goodies, whatever it is they get from you, regardless of the channel through which you're touching them or they're touching you. That channel may be provided through a private, on-premises implementation of your stack, or through a public cloud implementation of your capability, or, most likely, through all of the above, combined into a hybrid true private cloud. Regardless of the packaging, the delivery of that value in the context of the engagement is what the customer expects: increasingly self-service, predictable, managed by the solution provider, guaranteed, with a fast continuous release and update cycle. So bottom line, the whole notion of a TPC really gets to that notion that the experience is the most important thing, the cloud experience, which can be delivered on-prem or can be delivered in the public environment. And that's really the new world, with multi-cloud as that sort of master matrix for the seamless cross-channel experience. >> We like to think of the notion of a business model as worrying about three fundamental questions. How are you going to create value? How are you going to deliver value? And how are you going to capture value? Where the creation is how shared it's going to be: is it going to be a network of providers, are you going to have to work with OEMs? The delivery: is it going to be online, is it going to be on-prem? Those types of questions. But this notion of value capture is a key feature, David, of how this is changing. And George, I want to ask you a question. The historical norm is that value capture took place in the form of, I give you a product, you give me cash. 
But when we start moving to a services orientation, where the service is perhaps being operated and delivered by the supplier, it introduces softer types of exchange mechanisms, like, how are you going to use my data? Are you going to improve the fidelity of the system by pooling me with a lot of other customers? Am I losing my differentiation? My understanding of customers, is that being appropriated and munged with others to create models? Take us through this soft value capture challenge that a service provider has, and what specifically, I guess actually the real challenge that the customer has as they try to privatize their assets, George. >> So, it's a big question that you're asking, and let me use an example to help make the explanation concrete. So now we're not just selling software, but we might be selling sort of analytic data services. Let's say a vendor like IBM works with Airbus to build data services, where the aircraft that Airbus sells to its airline customers provide feedback data that IBM has access to, to improve its models of how the aircraft work, and that data also goes back to Airbus. Now, Airbus then can use that data service to help its customers with prescriptions about how to operate better on certain routes, how to do maintenance better, not just predictive maintenance, but how to do it more just in time, with fewer huge manuals. The key here is that since it's a data service that's being embedded with the product, multiple vendors can benefit from that data service. And the customer of the traditional software company, so in this case, Airbus being the customer of IBM, has to negotiate to make sure its IP is protected to some extent, but at the same time, they want IBM to continue working with that data feedback, because it makes the models, the models that Airbus gets access to, richer over time. >> But presumably that has to be factored into the contractual obligations that both parties enter into, to make sure that those soft dollars are properly compensated in the agreements. That's not something that we're seeing a lot of in the industry, but the model of how we work closely with our clients and our customers is an important one. And it's likely to change the way that IT thinks about itself as a provider of services. Neil, what kinds of behaviors are IT likely to start exhibiting as it finds itself, if not competing with, at least trying to mimic the classes of behaviors that we're seeing from service providers inside their own businesses? >> Yeah, well, IT organizations grew over the last, I dunno, 50 years or so, organically, and it was actually amazing how similar their habits, processes, and ways of doing things were across industries, and locations, and so forth. But the problem was that everything they had to deal with, whether it was the computers, or the storage, or the networks, and so forth, was really expensive. So they were always in a process of managing from scarcity. The business wanted more and more from them, and they had lower and lower budgets, because they had to maintain what they had, so it created a lot of tension between IT and organizations, and because of that, whenever a conversation happened between other groups within the business and IT, IT always seemed to have the last word, no, or okay. Whatever the decision was, it was really IT's. 
And what I see happening here is, when the IT business becomes less insular, I think a lot of this tension between IT and the rest of the organization will start to dissipate. And that's what I'm hoping will happen, because they started this concept of IT vs the business, but if you went out in an organization and asked 100 people what they did, not one of them would say, "I'm the business," right? They have a function, but IT created this us vs them thing to protect themselves, and I think that once they're able to utilize external services for hardware, for software, for whatever else they have to do, they become more like a commercial operation, like supply-side, or procurement, or something, managing those relationships and getting the services that they're paying for. I think ultimately that could really help organizations, by breaking down those walls around IT. >> So it used to be that an IT decision to make an investment would have uncertain returns, but certain costs, and there are multiple reasons why those returns would be uncertain, or those benefits would be uncertain. Usually it was because some other function would see the benefits under their umbrella; you know, marketing might see increased productivity, or finance would see increased productivity as a consequence of those investments, but the costs always ended up in IT. And that's one of the reasons why we find ourselves in this nasty cycle of constantly trying to push costs down, because the benefits always showed up somewhere else, and the costs always showed up inside IT. But it does raise this question, ultimately: is this notion of an ongoing services orientation just another way of saying we're letting lock-in back in the door in a big way? Because we're now moving from a sourcing relationship that's procurement oriented, buy it, spend as little money as possible, get value out of it, to a services orientation, which is effectively, move responsibility for this part of the function off to some other service provider, perpetually. And that's going to have a significant implication, ultimately, on the question of whether or not we buy services, default to services. Ralph, what do you think? Where are businesses going to end up on this? Are we just going to see everything end up being a set of services, or is there going to be some model that we might use, and I'll ask the team this, to conceive when it should be a purchase, and when it should be a service? What do you think, Ralph? >> Yeah, I think the industry's gravitating towards a service model, and I think it's a function of differentiation. You know, if you're an enterprise, and you're running a hundred different workloads, and 15 of them are things that really don't differentiate you from your competition, or create value that's differentiable in some kind of way, it doesn't make any sense to own that kind of functionality. And I think, in the long run, more and more aspects, or a higher percentage of workloads, are going to be in that category. There will always be differentiating workloads, and there will always be workloads requiring unique kinds of security, especially around transactions. But in the net, the slow march of services makes a lot of sense to me. >> What do you think, guys? Are we going to see, uh, do we agree with Ralph, number one? And number two, what about those exceptions? 
Is there a framework that we can start to utilize to help folks imagine what the exceptions to that rule are? What do you think, David? >> Sure, I think that there are circumstances when... >> Well first, do we generally agree with the march? >> Absolutely, absolutely. >> I agree too. >> Yes, fully agree that more and more services are going to be purchased, and a smaller percentage of the IT budget from an enterprise will go into specific purchases of assets. But there are some circumstances where you will want to make sure that you have those assets on premise, that there is no other call on those assets, either from the courts, or from a difference of priority between what you need and what a service provider needs. So in both those circumstances, they may well choose to purchase it, or to have the asset on premise so that it's clearly theirs, and clearly their priority of when to use it, and how to use it. So yes, clearly, an example might be, for example, if you are a bank, and you need to guarantee that all of that information is yours, because you need to know what assets are owned by whom, and if you give it to a service provider, there are circumstances where there could be a legal claim on that service provider, which would mean that you'd essentially go out of business. So there are very clear examples of where that could happen, but in general, I agree. There's one other thing I'd like to add to this conversation. The interesting thing from an enterprise IT point of view is that you'll have fewer people to do business with; you'll be buying a package of services. So that means many of the traditional people that you did business with, both software and hardware, will not be your suppliers anymore, and they will have to change their business models to deal with this. So for example, Permabit has become an OEM supplier of data management capabilities inside other vendors' offerings. And Kaminario has just announced that it's becoming a software vendor. >> Nutanix. >> Nutanix is becoming a software vendor, and is either allowing other people to take the single throat to choke, or putting together particular packages where it will be the single throat to choke. >> Even NetApp, which is a pretty consequential business and has been around for a long time, is moving in this direction. >> Yes, a small movement in that direction, but I think a key question for many of these vendors is, do I become an OEM supplier to the... 
And Kaminario will be be a case in point. They need metadata about the whole system, as a whole, to help them know how to apply the best patches to their piece of software, and the same is true for other suppliers of software, the Permabit, or whoever those are, and it's the responsibility of that owner or the customer to make sure that all of those people can work in that OEM environment effectively, and improve their product as well. >> Yeah, so great conversation guys. This is a very, very rich and fertile domain, and I think it's one that we're going to come back to, if not directly, at least in talking about how different vendors are doing things, or how customers have to, or IT organizations have to adjust their behaviors to move from a procurement to a strategic sourcing set of relationships, etc. But what I'd like to do now, as we try to do every week, is getting to the Action Item round, and I'm going to ask each of you guys to give me, give our audience, give our users, the action item, what do they do differently on next Monday as a consequence of this conversation? And George Gilbert, I'm going to start with you. George, action item. >> Okay, so mine is really an extension of what we were talking about when I was raising my example, which is your OEM supplier, let's say IBM, or a company we just talked to recently, C3 IoT, is building essentially what are application data services that would accompany your products that you, who used to be a customer, are selling a supply chain master, say. So really trying to boil that down is, there is a model of your product or service could be the digital twin, and as your vendor keeps improving it, and you offer it to your customers, you need to make sure that as the vendor improves it, that there is a version that is backward compatible with what you are using. So there's the IP protection part, but then there's also the compatibility protection part. >> Alright, so George, your action item would be, don't focus narrowly on the dollars being spent, factor those soft dollars as well, both from a value perspective, as well an ongoing operational compatibility perspective. Alright, Jim Kobielus, action item. >> Action item's for IT professionals to take a quick inventory of what of your assets in computing you should be outsourcing to the cloud as services, it's almost everything. And also, to inventory, what of your assets must remain in the form of hard discreet tangible goods or products, and my contention is that, I would argue that the edge, the OT, the operational technology, the IOT, sensors and actuators that are embedded in your machine tools and everything else, that you're running the business on, are the last bastion of products in this new marketplace, where everything else becomes a service. Because the actual physical devices upon which you've built your OT are essentially going to remain hard tangible products forevermore, of necessity, and you'll probably want to own those, because those are the very physical fabric of your operation. >> So Jim, your action item is, start factoring the edge into your consideration of the arrangements of your assets, as you think about product vs services. >> Yes. >> Neil Raden, action item. >> Well, I want to draw a distinction between actually, sorry, between actually, ah damn, sorry. (laughs) >> Jim: I like your fan, Neil. >> Peter: Action item, get your monitor right. >> You know. 
I want to draw the distinction between actually moving to a service, as opposed to just doing something that's a funding operation. Suppose we have 500 Oracle applications in our company running on 35 or 40 Oracle instances, and we have this whole army of Oracle DBAs, and programmers, and instance tuners, and we say well, we're going to give all the servers to the Salvation Army, and we're going to move everything to the Oracle cloud. We haven't really changed anything in the way the IT organization works. So if we're really looking for change in culture and operation, and everything else, we have to make sure we're thinking about how we're changing the way things get done and managed in the organization. And I think just moving to the cloud is very often just a budgetary thing. >> So your action item would be, as you go through this process, you're going to re-institutionalize the way you work, get ready to do it. Ralph Finos, action item. >> Yeah, I think if you're a vendor, if you're an IT industry vendor, you kind of want to begin to look a lot like, say, a Honda or Toyota, in terms of selling the hardware to get the service, the long-term relationship, and the lock-in. I think that's really where the hardware vendors, as one group of providers, are going to want to go. And as a user and an enterprise, I think you're going to want to drive your vendors in that direction. >> So your action item would be, for a user anyway, move from a procurement orientation that's focused on cost, to a vendor management orientation that's focused on co-development, co-evolution of the value that's being delivered by the service. David Floyer, action item. >> So my action item is for vendors, especially the many smaller vendors. They have to decide whether they're going to invest in the single most expensive thing that they can do, which is an enterprise sales force for direct selling of their products to enterprise IT, and/or whether they're going to take an OEM-type model and provide services to a subset, for example, to focus on the cloud service providers, which Kaminario are doing, or focus on selling indirectly through the vendors who own the relationship with the enterprise. So that, to me, is a key decision, a very important decision, as the number of vendors will decline over the next five years. >> Certainly, that's what we have visibility to right now. So your action item is, as a small vendor, choose whose sales force you're going to use, yours or somebody else's. >> Correct. >> Alright. So great conversation guys. Let me kind of summarize this a bit. This week, we talked about the evolving business models in the industry, and the basic notion, or the reason why this has become such an important consideration, is because we're moving from an era where the types of applications that we were building were entirely being used internally, and were therefore effectively entirely private, vs increasingly trying to extend even those high-volume transaction processing applications into other types of applications that deliver things out to customers. So the consequence of the move to greater integration, greater external delivery of things within the business, has catalyzed this movement to the cloud.
And as a consequence, this significant reformation, from a product to a services orientation, is gripping the industry, and that's going to have significant implications on how both buyers and users of technology, and sellers and providers of technology, are going to behave. We believe that the fundamental question is going to come down to, what process are you going to use to create value: with partnerships, or going it alone? How are you going to deliver that value: through an OEM sales force, through a network of providers? And how are you going to capture value out of that process: through money, through capturing data, or through more of an advertising model? These are not just questions that feature in the consumer world, they're questions that feature significantly in the B2B world as well. Over the next few years, we expect to see a number of changes start to manifest themselves. We expect to see, for example, a greater drive towards experience of the customer as a dominant consideration. And today, it's the cloud experience that's driving many of these changes. Can we get the cloud experience, both in the public cloud and on premise, for example? Secondly, our expectation is that we're going to see a lot of emphasis on how soft exchanges of value take place, and how we privatize those exchanges. Hard dollars are always going to flow back and forth, even if they take on a subscription, as opposed to a purchase, orientation, but what about the data that comes out of the operations? Who owns that, and who gets to lay claim to future revenue streams as a consequence of having that data? Similarly, we expect to see that we will have a new model that IT can use to start focusing its efforts on more of a business orientation, and therefore not treating IT as the managers of hardware assets, but rather as managers of business services that have to remain private to the business. And then finally, our expectation is that this march is going to continue. There will be a significant and ongoing drive to increase the role that a services business model plays in how value is delivered, and how value is captured. Partly because of the increasingly dominant role that data's playing as an asset in digital business. But we do believe that there are some concrete formulas and frameworks that can be applied to best understand how to arrange those assets, how to institutionalize the work around those assets, and that's a key feature of how we're working with our customers today. Alright, once again, team, thank you very much for this week's Action Item. From theCUBE studios in beautiful Palo Alto, I want to thank David Floyer, George Gilbert, Jim Kobielus, Neil Raden, and Ralph Finos, this has been Action Item.
CUBEConversation with John Furrier & Peter Burris
(upbeat music) >> Hello everyone, welcome to a special CUBE Conversation here at the SiliconANGLE Media, CUBE and Wikibon studio in Palo Alto. I'm John Furrier, co-founder of SiliconANGLE Media, Inc. I'm here with Peter Burris, head of research, for a special Amazon Web Services re:Invent preview. We just had a great session with Peter's weekly Action Item roundtable meeting with analysts surrounding the trends. So that'll be up on YouTube, check that out. Really in-depth conversation around what to expect at Amazon Web Services' re:Invent coming up in about a week and a half, and great content in there. But I want to go here, Peter, have a conversation with you back and forth, 'cause we've been having a debate, ping-ponging back and forth around what we think might happen. We certainly have some visibility into some of the news that might be happening at re:Invent. But you guys have been doing a great job with the research. I want to get your thoughts and I want to just have a conversation around Amazon Web Services. Continuing to kick ass, they've had a run on their own for many, many years now. But they got competition. The visibility on Wall Street is clear. They know the profitability. The numbers are all taking shape. Microsoft stock's up from 26 to wherever it is now. It's clear the cloud is the game. That's what's going on, and you have, again, the top three: Amazon, Azure, Google. And then, you can argue four through seven, including Alibaba and others, big game going on. This is causing a lot of opportunities, but also disruption to business models, technology architectures, and ultimately how customers are going to deploy their IT and/or their digital business. Your thoughts? >> I think one of the most interesting things about this, John, is that in the first 10 years of the cloud, it was implied that it was a cost play. Don't do IT anymore, it's blah, blah, blah, blah, blah, do the cloud, do AWS. And I think that because the competition is so real now, and a lot of businesses are starting to realize what actually could be done if you're able to use your data in new and different ways, and dramatically accelerate and transform your businesses, all of this has become a value play. And the minute that it becomes a value play, in other words, new types of work, new types of capabilities, then for Amazon, for AWS, it becomes an ecosystem play. So I think one of the things that's most interesting about this re:Invent is, in my opinion, it's going to be the first one where it's truly a strong ecosystem story. It's about how Amazon is providing services that the rest of the world's going to be able to consume and create new types of value through the Amazon ecosystem. >> Great point. I want to bring up a topic that we've been talking about on theCUBE in some of my other CUBE Conversations as it relates to the ecosystem. In all these major waves, and we've seen many, you've covered many waves as an analyst over the years, there's always been a gestation period between a disruptive enabler, you could talk about TCP/IP, you could talk about HTTP, there's always a period of gestation. Sometimes it's accelerated now more than ever, but you start to see the impact of that disruptive enabler. Certainly cloud, and what Amazon has done, has been a disruptive enabler. Value's been created, more value's being created, more and more every day we're seeing it. You're starting to see new things pop up from this gestation period, problems that become opportunities.
And competitors that are now partners, partners that are now competitors. So a full changeover is happening in the landscape because of it. So the question for you is, what are you seeing, given your experience in seeing other waves before? What is starting to become clear, in terms of visibility, the known, obvious trends that are happening with this cloud enabling? >> Well, let's talk about perhaps one of the biggest differences between traditional IT and cloud-oriented IT. And to kind of tell that story, I'll do something that a lot of people don't think about when they think about innovation. But if you really think about innovation, you got to break it down into two distinct acts. There's the act of inventing, which is an engineering act. It's, how do I take knowledge of physics, or knowledge of sociology, or knowledge of something, and invent something new that reflects my understanding of the problem and creates a solution? And then there's an innovation act, which is always a social act. It's getting people to change the way they do things. Businesses to change the way they do things. That's a social act. And one of the interesting things about this transition, this cloud-based transition, is we're moving into a world where the social acts are much more synonymous with the actual engineering act. And by that I mean, when something is positioned as a service that the customer gets and just acts on, because they're now renting a service, that is truly an innovation process. You are adopting it as a service and embedding it more quickly. What we're seeing now in many respects, going back to your core point, is everything being done as a service, and that means that the binding of the inventing and the innovating is much stronger, and much more immediate. And AWS re:Invent's been a forum where we see this. It's not just inventing or putting forward a new product that may get out to market in six months or nine months. It is, here is a service, people are consuming it, we're embedding it in our other AWS stuff. We're putting this AI into how folks are going to manage AWS, and the invention innovation process collapses very quickly. >> That's a good point. I would just give you some validation on that by citing other trend points that talk about that social piece. You hear about social engineering in cyber security, that that's now a big part of how hackers are getting in, through social engineering. Open-source software is a social engineering act, 'cause it's got a community dynamic. Blockchains, huge social engineering around how these companies are forming. So I would 100% agree, that's a great, great point. The other thing I'd ask you to elaborate on is something that is a trend that's obvious, 'cause everyone talks about the old way, new way. Legacy is being disrupted. New players like Amazon are disrupting people like Oracle. And Oracle thinks they're winning, Amazon thinks they're winning. The scoreboards aren't the same, but here's the question. Technology used to be built to solve technology problems. You build a box, you ship it, and it works. Software, you craft it, ship it. It either works or it doesn't work. Now software and technology can be used to solve non-technology problems. This brings it to a whole other level when you take your social comment about invention.
This is now a new dynamic that tended to be, I don't want to say minimized in the old days, but the old way was, load some boxes, rack them up, and you've got a PC on your desk. We could work effectively on a network. Now it's completely going to non-technology problems: healthcare, verticals. >> Here's the way we look at it, John. >> John: What's your thoughts on that? >> Our simple bromide is that we are in the midst of the transition in computing. And by that I mean, for the first 50 years we talked about known process, unknown technology. By that I mean, for example, have you ever seen a GAAP accounting convention wandering out in the wild? No, it doesn't exist, it's manmade, it's artifice. There's nothing wrong with it. We all agree what an accounting thing is, but it's all highly stylized and extremely well-defined. It's a known process. And the first 50 years were about taking those known processes in accounting, and in HR, and a lot of other domains, and then saying, okay, what's the right technology to automate as much of this as possible? And we did a phenomenal job of it. It started with mainframes, then client/server. And was it this server, or that server? Unix or something else? TCP/IP or some other network? But that was the first 50 years of computing. Now we've got a lot of those things worked out. In fact, cloud kind of summarizes and puts forward a common set of experiences. Still, a lot of technology questions are going to be important, and I don't want to suggest that they're not. But increasingly it's, okay, what are the processes that we're going to try to automate? So we're now in a world where the technology's much more known, but the processes are incredibly unknown. So we went from a known-- >> So what is the impact to the cloud players, like Amazon? Because what I'm trying to figure out is, what will be the posture on the keynotes? Is it going to be a speeds and feeds show? Or is it going to be much more holistic, business impact, or societal impact? >> The obvious one is that Amazon increasingly has to be able to render these common building blocks for infrastructure up through to developers, and a new way of thinking about how you solve problems. And so a lot more of what we're likely to see this year is Amazon continuing to move up the stack and say, here's how you're going to look at a problem, here's how you're going to solve the problem, here's the tooling, and here's the ecosystem that we're going to bring along with us. So it's much more problem-solving at the value level, going back to what we talked about earlier, problem solving that creates new types of business value, as opposed to problem solving to reduce the costs of existing infrastructure. >> Now we have a VIP chat on crowdchat.net/awsreinvent. If you want to participate, we're going to open it up. We're going to keep it open for a long time, so weigh in on that. We just had a great research meeting that you do weekly, called Action Item, which is a format that's designed to flush out the latest and greatest research that's tied to current events or trends, and then unpack the action item for buyers and customers, large businesses in the industry. What's the summary for the meeting we just had here? A lot of stuff being talked about, Unigrid, we're talking about under the hood with data, a lot of good stuff. What's the bottom line? How do you up-level it for the CIO or CXO that's watching or listening, who doesn't have time to get in the weeds?
>> Well, I think the three fundamental conclusions that we reached this year are these: we expect AWS to spend a lot of time talking about AI, both as a generalized way of moving up the stack, as we talked about. Here's the services the developers are going to work with. Here's the tool kits that they're going to utilize, et cetera, to solve more general problems. But also AI being embedded more deeply within AWS and how it runs as a service, and how it integrates and works with other clouds. So AI and machine learning for IT operations management through AWS. So AI's going to be a major feature. The second one we think that we're going to hear a lot about is, Amazon's been putting forward this notion that they were going to facilitate migration of legacy applications into AWS. That's been a slog, but we expect to see a more focused effort by going after specific big software houses that have large installed bases of on-premise stuff, and see if they can't, with the software house, bring more of that infrastructure, or more of those installations, into AWS. Now, I don't want to call VMware an application house, but not unlike what they did with VMware about a year and a half ago. The last one is that we don't think that Amazon is going to put forward a general purpose IoT Edge solution this year. We think that they're going to reveal further what their approach to those problems is, which is, bigger networks, more PoPs. >> More scale. >> More scale, a lot of additional services for building applications that operate within that framework, but not that kind of, here's what the hybrid cloud by Amazon is going to look like. >> Let's talk about competition and China. Obviously, they kind of go hand in hand. Obviously, Andy Jassy and the Amazon Web Services team are seeing, for the first time, massive competition. Obviously Microsoft's stock, as I mentioned earlier. So you're starting to see the competition wheels cranking. Oracle's certainly been all over Amazon, we know that. Microsoft's just upping their game, trying to catch up, and their numbers are looking good. You got SAP playing the multicloud game. You got Google differentiating on things like TensorFlow and other AI and developer tools. This is interesting. This is the first time Amazon's really had some competition, I won't say nipping at its heels, but putting pressure. It's not the only game in town. People are talking multicloud, kind of talking about lock-in. And then you got the China situation. You got Alibaba, technically the number four cloud by some standards. Some will argue that position. The point is, it's massive. >> Yeah, I think it's by any reasonable standard. They are a big cloud player. >> So let's go through that. China, let's start with China. Amazon just announced, and the news was broken by the Wall Street Journal, who actually got it wrong and didn't correct their story for almost 24 hours. Really kind of screwed up the market, everyone thought that they were selling AWS to China. It was a unique deal. Rob Hof and the team reported and corrected, >> Peter: At SiliconANGLE. >> At siliconangle.com, we got it right, and that is that it was a $300 million data center deal, not intellectual property, but this is the China playbook. >> They sold their physical assets. They didn't sell their IP. They didn't sell the services or the ability to provide the services. >> Based upon my reporting, and this is again still, the facts on the ground are loose, 'cause with China, it's hard to get the data.
But from what I can gather, they were already doing business in China. Apple went through this, even though they're hardware, they still have software. Everyone has that standoff, but ultimately getting into China requires a government-owned partner, or a Chinese company. Government-owned is quasi, you could argue that. And then they expand from there. Apple now has, I think, six stores or more in Shanghai and all over China. So this is a growth opportunity for Amazon if they play it right. Thoughts on that? I mean, obviously we cover a lot of the Chinese companies here. >> Well, I don't want to present myself as an expert on this, John. I've been watching, and the SiliconANGLE reporting has been my primary information source. But I think that it's interesting. We talk about hard assets and soft assets. Hard assets are buildings, machines, and in the IT world, it's the hardware, it's the building, et cetera. And when China talks about ownership, they talk about ownership of those assets. And it sounds to me anyway, like AWS has done a very interesting thing, where they said, okay, fine, you want 51% of the hard assets? Have 51% of the hard assets, have 100% of the hard assets. But we are going to decide what those assets look like, and we are going to continue to own and operate the software that runs on those assets. So it sounds like, through that, they're going to provide a service into China, whatever the underlying hardware assets are running on. Interesting play. >> Well, we got the story right, and the story is, they're going into China, and they had to cut a deal. (laughs) That's the story. >> But for the hard assets. >> For the hard assets, they didn't get intellectual property. I think it's a good deal for Amazon. We'll see, we're going to watch that closely. I'm going to ask Andy Jassy that specific question. Now on the competition. The FUD is off the charts, fear, uncertainty and doubt. You see that in competitive markets, the competition throwing FUD. Sometimes it's really blatantly obvious FUD, sometimes it's just how they report numbers. I've been, not critical, but pointing out that Azure includes Office 365. Well, when you start getting down that road, do you bundle in Salesforce as a cloud player? So all these things start to-- >> Peter: Yeah. >> Of course, so what is true cloud? Are people parsing the categories too narrowly, in your opinion? What's the opinion from the research team on what is cloud? >> Well, what is cloud? We like to talk about the cloud experience where your data demands it for your business. So the cloud experience is basically, it's self-provisioning, it's a service, it is continuous, and it allows you a range of different options about what assets you do or do not want to own, according to the physical realities, the legal realities, and intellectual property realities of the data that runs your business. So that's kind of what we mean by cloud. So let's talk about a couple of these. First-- >> Hold on, before you get to those, Andy Jassy said a couple years ago, he believes all enterprises will move to the cloud. (laughs) I mean, he was kind of, well of course, he's biased, 100% Amazon, and Amazon's defined as cloud. But he's kind of referring to that the enterprise on-premise current business model, and the associated technology, will move to cloud. Now, I'm not sure he would agree that the true private cloud is the same as Amazon. But if he cuts a deal with VMware like he did, is that AWS? So will his prediction come true?
Ultimately, everyone's saying that will never be fully cloud. >> I think this is one of those things where we got to be a little bit careful about trying to read too much into what he said. But here's what we think. Our advice to customers is don't think about moving your enterprise to the cloud, think about moving the cloud to your enterprise. And I think that's the whole basis for the hybrid cloud conversation that we're having. And the reason why we say the cloud experience where your data demands it, is that there are physical realities that every enterprise is going to have to deal with, latency, bandwidth. There are legal realities that every enterprise is going to have to deal with. GDPR, what it means to handle privacy and handle data. And then there's finally intellectual property realities that every enterprise is going to have to deal with. Amazon not wanting to sell its IP to a Chinese partner, to comply with Chinese laws. Every business faces these issues. And they're not going to go away. And that's what's going to shape every business's configuration of how they're using the cloud. >> And by the way, when I did ask him that question, it might have been three years ago. I can't actually remember, I'm losing my mind here. But at that time, cloud was not yet endorsed as the viable way. So he might have been referring to, again, I'm going to ask him this when I see him in my one on one. He might have been referring to old enterprise ways. So I mean-- >> Let's be honest. Amazon has done such an incredible job of making this a real thing. And our opinion is that they're going to continue to grow as fast as the cloud industry, however we define it. In terms of how we define it, we think that SaaS is going to be a big player, and it's going to be the biggest part of the play. We think Infrastructure as a Service is going to continue to be critically important. We think that the competition for developers is going to heat up in a big way. AI, machine learning, deep learning, all of those things are going to be part of that competition. In our view of things, we're going to see SaaS be much bigger in a few years. We're going to see this notion of true private cloud, which is a cloud experience on-premise with your assets, because you need to control your data in different ways, be bigger than IaaS, but it's all going to be cloud. >> I mean, honestly, my opinion, and what I'm looking for this year, Peter, just to kind of wrap up the segment, is, I think, and if you look at Amazon's new ad campaign, the builders, that's a topic that we talked about last year. >> Peter: Developers. >> Developers. We are living in a world where DevOps is now going mainstream. And there are still cultural issues around, what does that actually mean for a business? The personnel, how they operate, and some of the things you guys point out in your true private cloud report illuminate those things. And that is, whoever can automate and create great tooling for the DevOps culture going forward, whatever that's called, new developers, new normal? Whatever it is, that to me is going to be the competitive landscape. >> Let me parse that slightly, or put it slightly differently. I think everybody put forward this concept of DevOps as, hey, business, redefine yourself around DevOps. And it hasn't gone as well as a lot of people thought it would.
I think what's really going to happen, and I don't think you're disagreeing with me, John, is that we need to bring more developers into the cloud, building that cloud experience, building more of the application value, building more of the enterprise value, in cloud. Now that's happening, and they are going to start snapping this DevOps concept into place. But I think it really is going to boil down to, how are developers going to fully embrace the cloud? What's it going to look like? It's going to be multicloud. Let's go back to the competition. Microsoft, you're right, but they're a big SaaS player. Companies are building enormous relationships, big contracts, with Microsoft. They're going to be there. Google, last year they couldn't get out of their own way. Diane Greene comes in, we see a much more focused effort. There's some real engineering that's going on for Google Cloud Services, or Platform, that wasn't there before. Google is emerging as a big player. We're having a lot of conversations with users where they're taking Google very seriously. IBM is still out there, still got some things going on. You've already mentioned Alibaba, Tencent, a whole bunch of other players around the globe. This is going to be a market that's going to be very, very contentious, but Amazon's going to hold on to first share. >> And I think we pointed out years ago that DevOps will merge into cloud developers. You nailed it, I think you just said it. Okay, Peter Burris, here for the Amazon Web Services re:Invent preview. Of course theCUBE will be there with two sets. We're going to have over 75 interviews over the course of 3 days. In the hall, look for theCUBE, if you've watched this video and you want to come by. Tickets are sold out, but come by if you have one. We'll be there, in Las Vegas, for Amazon Web Services re:Invent. I'm John Furrier, thanks for watching this CUBE Conversation from Palo Alto. (upbeat techno music)
20170908 Wikibon Analyst Meeting Peter Burris
(upbeat music) >> Welcome to this week's edition of the Wikibon Research Meeting on theCUBE. This week we're going to talk about a rather important issue that raises a lot of questions about the future of the industry, and that is, how are information technology organizations going to manage the wide array of new applications, new types of users, and new types of business relationships that are going to engender significant complexity in the way applications are organized, architected and run? One of the possibilities is that we'll see an increased use of machine learning, ultimately inside information technology and operations management applications, and while this has tremendous potential, it's not without risk and it's not going to be simple. These technologies sound great on paper, but they typically engender an enormous amount of work and a lot of complexity themselves to run. Having said that, there are good reasons to suspect that this approach will in fact be crucial to ultimately helping IT achieve the productivity that it needs to support digital business needs. Now a big challenge here is that the technology, while it looks good, as I said, nonetheless is pretty immature, and in today's world, there's a breadth first and a depth first approach to thinking about this. Breadth first worries about end-to-end visibility into how applications work across multiple clouds, on premise and in the cloud, across applications, wherever they might be. You get an enormous amount of visibility and alerts, but you also get a lot of false positives, and that creates a challenge, because these tools just don't have deep visibility into how the individual components are working or how their relationships are set up; they just look at the broad spectrum of how work is being conducted. The second class is looking at depth first, which is really based on the digital twin notion that's popular within the IoT world, and that is vendors delivering out-of-the-box models that are capable of doing a great job of creating a digital simulacrum of a particular resource so that it can be modeled and tracked and tested. Now again, a lot of potential, a lot of questions about how machine learning and ITOM are going to come together. George, what is one of the key catalysts here? Somewhere in here there's a question about people. >> Okay, there's a talent question, always, with the introduction of new technology; it's people, process, technology. The people end of the equation here is that we've been trying to upskill and create a new class of application developer, as Jim has identified. This new class is a data scientist, and they focus on data intensive applications and machine learning technology. The reason I bring up the technology is, when we have this landscape that you described, that is getting so complex, where we're building on business transaction applications, extending them with systems of engagement, and then the operational infrastructure that supports both of them, we're getting many orders of magnitude more complexity in multiple dimensions and in data, and so we need a major step function in the technology to simplify the management of that, because just the way we choked on the mainstream deployment of big data technology for lack of specialized administrators, we are similarly choking on the deployment of very high value machine learning applications, because it takes a while to train a new generation of data scientists.
So George, we got a lot of challenges here in trying to train people, but we're also expecting that we're going to have better-trained technology to handle some of these new questions, so Jim, let me throw it to you. When we think ultimately about this machine learning approach, what are some of the considerations that people have to worry about as they envision the challenges associated with training some of these new systems? >> Yeah, I think one of the key challenges with training new systems for ITOM is, do you have a reference data set? The predominant approach to machine learning is something called supervised learning, where you're training an algorithm against some data that represents what you're trying to detect or predict or classify. For IT and operations management, you're looking for anomalies, for unprecedented events, black swan events and so forth. Clearly, if they're unprecedented, there's probably not going to be a reference data set that you can use to detect them, hopefully before they happen, and neutralize them. That's an important consideration, and supervised learning breaks down if you can't find a reference data example. Now there are approaches to machine learning called unsupervised learning, which rely on something called cluster analysis algorithms, which are able to look for clusters in the data that might be indicative of correlations that are useful to drill into, that might be indicative of anomalous events and so forth. What I'm getting at is that when you're considering ML, machine learning, in the broader perspective of IT and operations management, do you go with supervised learning, do you go with unsupervised learning for the anomalies, and, if you want to remediate and you have a clear set of steps to follow from precedent, you might also want something called reinforcement learning. You have to think through all the aspects of training the models to acquire the knowledge necessary to manage IT operations. >> Jim, let me interrupt. What we've got here is a lot of new complexity, we've got a need for more people, and we've got a need for additional understanding of how we're going to train these systems, but this is going to become an increasingly challenging problem. David Floyer, you've done some really interesting research, with the entire team, on what we call unigrid. Unigrid is looking at the likely future of systems as we're capable of putting more data proximate to other data, and using that as a basis for dramatically improving our ability to, in a speedy, nearly real-time way, drive automation between many of these new application forms. It seems as though depth first, or what we're calling depth first, is going to be an essential element of how unigrid's going to deploy. Take us through that scenario: what do you think about how these are going to come together? >> Yes, I agree. The biggest, in our opinion, the biggest return on investment is going to come from being able to take the big data models, the complex models, and make those simple enough that they can, in real time, help the acceleration, the automation of business processes. That seems to be the biggest return on this, and unigrid is allowing a huge amount more data to be available in near real-time, 100 to 1000 times more data, and that gives us an opportunity for business analytics, which includes of course AI and machine learning and basic models, etc.,
to be used to take that data and apply it to the particular business problem, whether it be fraud control, whether it be any other business processing. The point I'm making here is that coding techniques are going to be very, very stretched: coding techniques for an edge application in the enterprise itself, and also of course coding techniques for pushing down stuff to the IoT and to the other agents. Those coding techniques are going to focus on performance first to begin with. At the same time, a lot of that coding will come from ISVs into existing applications, and with it, the ISVs have the problem of ensuring that this type of system can be managed. >> So George, I'm going to throw it back to you at this point in time, because based on what David has just said, that there's new technology on the horizon that has the potential to drive the business need for this type of technology, and we'll get to that in a little bit more detail in a second, is it possible that at least the depth first side of these ML and ITOM applications could become the first successful packaged apps that use machine learning in a featured way? >> That's my belief, and the reason is that even though there's going to be great business value in linking, say, big data apps and systems of record and web mobile apps, say for fraud prevention or detection applications where you really want low latency integration, most of the big data applications today are more high latency integration, where you're doing training and inferencing more in batch mode and connecting them with high latency to the systems of record or web and mobile apps. When you have that looser connection, that high latency connection, it's possible to focus just on the domain, the depth first. Because it's depth first, the models have much more knowledge built in about the topology and operation of that single domain, and that knowledge is what allows them to have very precise and very low latency remediation, either recommendations or automated actions.
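To make Jim's earlier distinction concrete, here is a minimal sketch of unsupervised anomaly detection over operational telemetry, the case where no labeled reference data set exists for unprecedented events. It assumes Python with NumPy and scikit-learn; the metric names and values are entirely invented for illustration, and IsolationForest stands in here for the broader family of cluster- and density-based techniques Jim mentions, since, like them, it needs no labels.

```python
# A sketch of the unsupervised case: the model infers the shape of
# "normal" from the telemetry itself and flags outliers, with no
# labeled reference data set. Metric names and values are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated samples of [cpu_utilization, latency_ms, error_rate]
normal = rng.normal([0.45, 120.0, 0.01], [0.08, 15.0, 0.005], size=(1000, 3))
spikes = rng.normal([0.95, 900.0, 0.20], [0.02, 50.0, 0.02], size=(10, 3))
telemetry = np.vstack([normal, spikes])

# No labels are supplied; contamination is a guess at the anomaly rate.
detector = IsolationForest(contamination=0.01, random_state=0)
flags = detector.fit_predict(telemetry)   # -1 = anomaly, 1 = normal

anomalies = np.where(flags == -1)[0]
print(f"{len(anomalies)} of {len(telemetry)} samples flagged for review")
```

A supervised variant would instead train against labeled incident history, and a reinforcement-learning variant would learn remediation policies from precedent, as Jim outlines; the trade-off is that each requires progressively more curated operational history.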
That's critically important but as you're looking at depth first and you just go back and forth between depth first, like digital twin as a fundamental concept and a fundamentally important infrastructure for depth first, because the digital twin infrastructure maintains the data that can be used for training data for supervised machine learning looking into issues from individual entities. If you can combine overall graph modeling at the breadth first level for iTom with the supervised learning based on digital twin for depth first, that makes for a powerful combination. I'm talking in a speculative way, George has been doing the research, but I'm seeing a lot of uptake of graph modeling technology in the sphere, now maybe George could tell us otherwise, but I think that's what needs to happen. >> I think conceptually, the technology is capable of providing this George, I think that it's going to take some time however, to see it fully exploited. What do you got to say about that? >> I do want to address Jim, your comments about training which is the graph that you're referring to is precisely the word when I use topology figuring that more people will understand that and it's in the depth first product that the models have been pre-trained, supervised and trained by the vendor so they come baked in to know how to figure out the customer's topology and build what you call the graph. Technically, that's the more correct way of describing it and that those models, pre-trained and supervised have enough knowledge also to figure out the behavior which I call the operations of those applications, it's when you get into the breadth first that it's harder because you have no bounds to make assumptions about, it's harder to figure out that topology and operational behavior. >> But coming back to the question I asked, the fact that it's not available today, as depth first products accrete capabilities and demonstrate success, and let's presume that they are because there is evidence that they are, that will increase the likelihood that they are generating data that can then be used by breadth first products. But that raises an interesting question. It's a question that certainly I've thought about as well, is that is, Nick, ultimately where is the clearing house for ascertaining the claims these technologies will not and work together, have you seen examples in the past of standards, at this level of complexity coming together that can ensure that claims in fact, or that these technologies can in fact increasingly work together. Have we've seen other places where this has happened? >> Good question. My answer is that I don't know. >> Well but there have been standards bodies for example that did some extremely complex stuff in IO. Where we saw an explosion in the number of storage and printer and other devices and we saw separation of function between CPUs and channels where standards around SCUZI and what not, in fact were relatively successful, but I don't know that they're going to be as, but there is specific engineering tests at the electricity and physics level and it's going to be interesting to see whether those types of tests emerge here in the software world. All right, I want to segue from this directly into business impacts because ultimately there's a major question for every user that's listening to this and that is this is new technology, we know the business is going to demand it in a lot of ways. 
The machine learning in business activities, as David Floyer talked about, business processes, but the big question is how is this going to end up in the IT organization? In fact is it going to turn into a crucial research that makes IT more or less successful? Neil Raden, we've got examples of this happening again in the past, where significant technology discontinuities just hit both the business and IT at the same time. What happened? >> Well, in a lot of cases it was a disaster. In many more cases, it was a financial disaster. We had companies spending hundreds of billions of dollars implementing an ERP system and at the end, they still didn't have what they wanted. Look, people not just in IT, not just in business, not just in technology, consistently take complex problems and try to reduce them to something simple so they can understand them. Nowhere is that more common than in medical research where they point at a surrogate endpoint and they try to prove the surrogate endpoint but they end up proving nothing about the disease they're trying to cure. I think that this problem now, it's gone beyond an inventory of applications and organizations, far too complex for people to really grasp all at once. Rather than come up with a simplified solution, I think we can be looking to software vendors to be coming up with packages to do this. But it's not going to be a black box. It's going to require a great deal of configuration and tuning within each company because everyone's a little different. That's what I think is going to happen and the other thing is, I think we're going to have AI on AI. You're going to have a data scientist work bench where the work bench recommends which models to try, runs the replicates, crunches the numbers, generates the reports, keeps track of what's happening, goes back to see what's happened because five years ago, data scientists were basically doing everything in R and Java and Python and there's a mountain of terrible code out there that's unmaintainable because they're not professional programmers, so we have to fix that. >> George? >> Neil, I would agree with you for the breadth first products where the customer has to do a lot of the training on the job with their product. But in the depth first products, they actually build in such richly trained models that there really is, even in the case of some of the examples that we've researched, they don't even have facilities for customers to add say the complex event processing for analytics for new rules. In other words, they're trained to look at the configuration settings, the environment variables, the setup across services, the topology. In other words it's like Steve Jobs says, it just works on a predefined depth first domain like a big data stack. >> So we're likely to see this happen in the depth first and then ultimately see what happens in the breadth first but at the end of the day, it still has to continue to attract capital to make these technologies work, make them evolve and make the business cases possible. David, again you have spent a lot of time looking at this notion of business case and we can see that there's a key value to using machine learning in say fraud detection, but putting shoes on the cobbler's children of IT has been a problem for years. What do you think? Are we going to see IT get the resources it needs starting with depth first but so that it can build out a breadth oriented solution? 
>> My view is that for what it's worth, is we're going to focus or IT is going to focus on getting in applications which use these technologies and they will go into the places for that business where it makes most sense. If you're an insurance company, you can make hundreds of millions of dollars with fraud detection. If you are in other businesses, you want to focus on security or potential security. The applications that go in with huge amounts more data and more complexity within them, initially in my view will be managed as specific applications and the requirements of AI requirements to manage them will be focused on those particular applications, often by the ISVs themselves. Then from that, they'll be learning about how to do it and from that will come broader type of solutions. >> That's further evidence that we're going to see a fair amount of initial successes more in the depth first side, application specific management. But there's going to be a lot of efforts over the next few years for breadth first companies to grow because there's potentially significant increasing returns from being the first vendor out there that can build the ecosystem that ties all of these depth first products together. Neil, I want to leave you with a last thought here. You mentioned it earlier and you've done a lot of work on this over the years, you assert that at the end of the day, a lot of these new technologies, similar to what David just said, are going to come in through applications by application providers themselves. Just give us a quick sense of what that scenario's going to look like. >> I think that the technology sector runs on two different concepts. One is I have a great idea, maybe I could sell it. Did you hear that, I just got a message my connection was down there. Technology vendors will say that I have a, >> All right we're actually losing you, so Dave Alante, let me give you the last word. When you think about some of the organizational implications of doing this, what do we see as some of the biggest near term issues that IT's going to have to focus on to move from being purely reactive to actually getting out in front and perhaps even helping to lead the business to adopt these technologies. >> Well I think it's worth instructive to review the problem that's out there and the business impact that it'll have an what many of the vendors have proposed through software, but I think there are also some practical things that IT organizations can do before they start throwing technology at the problem. We all know that IT has been reactive generally to operations issues and it's affected a laundry list of things in the business, not only productivity, availability of critical systems, data quality, application performance and on and on. But the bottom line is it increases business risk and cost and so when the organizations that I talk to, they obviously want to be proactive. Vendors are promising that they have tools to allow them to be more proactive, but they really want to reduce the false positives. They don't want to chase down trivial events and of course cloud complicates all this. What the vendor community has done is it's promised end to end visibility on infrastructure platforms including clouds and the ability to discover and manage events and identify anomalies in a proactive manner. 
Maybe even automate remediation steps, all important things, I would suggest that these need to map to critical business processes and organizations need to have an understanding or they're not going to understand the business impact and it's got to extend to cloud. Now, is AI and ML the answer, maybe, but before going there, I would suggest that organizations look at three things that they can do. The first is, the fact is that most outages on infrastructure come from failed or poorly applied changes, so start with good change management and you'll attack probably 70% of the problem in our estimation. The second thing that we, I think would point to users, is that they should narrow down their promises and get their SLA's firmed up so they can meet them and exceed them and build up credibility with an organization before taking on wider responsibilities and increasing project skills and I think the third thing is start acting like a cloud provider. You got to be clear about the services that you offer, you want to communicate the SLA's, you know clearly they're associated with those services and charge for them appropriately so that you can fund your business. Do these three things before you start throwing technology at the problem. >> That's a great wrap. The one thing I'd add to that Dave, before we actually get to the wrap itself is that I find it intriguing that the processes of thinking through the skills we need and the training that we're going to have to do of people and increasing the training, whether it's supervised, unsupervised, reinforced, of some of these systems, will help us think through exactly the type of prescriptions that you just put forward. All right, let's wrap. This has been a great research meeting. This week, we talked about the emergence of machine learning technologies inside IT operations management solutions. The observation we make is that increasingly, businesses becoming dependent on multicloud including a lot of SAS technologies and application forms and using that as a basis for extending their regional markets and providing increasingly specialized services to customers. This is putting an enormous pressure on the relationship between brand, customer experience and technology management. As customers demand to be treated more uniquely, the technology has to respond, but as we increase the specificity of technology, it increases the complexity associated with actually managing that technology. We believe that there will be an opportunity for IT organizations to utilize machine learning and related AI type and big data technologies inside their iTom capabilities but that the journey to get there is not going to be simple. It's not going to be easy and it's going to require an enormous amount of change. The first thing we observe is that there is this idea of what we call breadth first technology or breadth first machine learning in iTom, which is really looking end to end. The problem is, without concrete deep models, we look at individual resources or resource pools, end up with a lot of false positives and you lose a lot of the opportunity to talk about how different component trees working together. 
Depth-first, which is probably the first place that machine learning is going to show up in a lot of these ITOM technologies, provides an out-of-the-box digital twin from the vendor, one that typically involves a lot of testing of whether that twin is in fact representative, an accurate simulacrum of the resource that's under management. Our expectation is that we will see greater utilization of depth-first tooling and activity, even as users continue to experiment with breadth-first options. As we look at the technology horizon, there will be another forcing function here, and that is the emergence of what we call unigrid: the idea that increasingly you can envision systems that bring storage, network, and compute under a single management framework at enormous scale, putting data very close to other data so that we can run dramatically new forms of automation within a business. That is absolutely going to require a combination of depth-first as well as breadth-first technology to evolve. A lot of need, a lot of change in how the IT organization works, a lot of understanding of how this training is going to work. The last point we'll make here is that this is not something that's going to work if IT pursues it in isolation. This is not your old IT, where we advocated for some new technology, bought it in, paid for it, created a solution, and looked around for the problem to work on. In fact, the way this is likely to happen, and it further reinforces the depth-first approach being successful here, is that we'll likely see the business demand certain classes of applications that can in fact be made more functional, faster, more reliable, and more integratable through some of these machine learning-like technologies to provide a superior business outcome. That will require significant depth-first capabilities in how we use machine learning to manage those applications: speed them up, make them more complex, make them more integrated. We're going to need a lot of help to ensure that we're capable of improving the productivity of the IT organizations and related partnerships that actually sustain a business's digital business capabilities. What's the bottom line? What's the action item? The action item here is that user organizations need to start exploring these new technologies, but do so in a way that has proximate, near-term implications for how the organization works. For example, remember that most outages are in fact created not by technology but by human error. Button up how you think about utilizing some of these technologies to better capture, report, and alert the remainder of the organization to human error. The second thing to note, very importantly, is that the promises of technology are not to be depended upon as we work with the business to establish SLAs. Get your SLAs in place so the business can in fact have visibility into some of the changes that you're making, because that will help you with the overall business case. Now, very importantly, cloud suppliers are succeeding as new business entities because they're doing a phenomenal job of introducing this and related technologies into their operations. The cloud business is not just a new procurement model, it's a new operating model, so start to think about how your overall operating plans, practices, and commitments are or are not ready to fully incorporate a lot of these new technologies. Be more of a cloud supplier yourselves.
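To make the breadth-first versus depth-first distinction concrete, the sketch below shows the depth-first idea at its simplest: a statistical baseline learned for one metric of one resource, which only alerts when that resource deviates from its own history. This is an illustration of the general technique, not any vendor's product; the window size, threshold, and metric are assumptions.

```python
from collections import deque
from statistics import mean, stdev

class DepthFirstDetector:
    """Per-resource anomaly detector: learns a baseline for ONE metric
    of ONE resource (depth-first), instead of one generic rule applied
    to every component (breadth-first)."""

    def __init__(self, window=60, threshold=3.0):
        self.history = deque(maxlen=window)  # sliding baseline window
        self.threshold = threshold           # z-score cutoff (assumed)

    def observe(self, value):
        """Return True if `value` deviates from the learned baseline."""
        if len(self.history) < self.history.maxlen:
            self.history.append(value)       # still learning: never alert
            return False
        mu, sigma = mean(self.history), stdev(self.history)
        anomalous = sigma > 0 and abs(value - mu) / sigma > self.threshold
        self.history.append(value)           # oldest sample is evicted
        return anomalous

# One detector per resource/metric pair, e.g. database write latency.
db_latency = DepthFirstDetector(window=60, threshold=3.0)
for ms in [12, 11, 13, 12, 14, 13] * 10 + [95]:
    if db_latency.observe(ms):
        print(f"anomaly: write latency {ms} ms")
```

A breadth-first tool works the other way around, correlating alerts like this across many components to reason about an end-to-end service; the wrap's point is that without deep models per resource feeding it, that correlation is exactly where the false positives come from.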
All right, that closes this week's Friday research meeting from Wikibon on theCUBE. We're going to be here next week; talk to you soon. (upbeat music)
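One more aside before the next segment: Dave's first prescription, that most infrastructure outages trace back to failed or poorly applied changes, is easy to test against your own records if you keep both a change log and an incident log. A minimal sketch, with illustrative records and an assumed four-hour attribution window (neither the field names nor the window is a standard):

```python
from datetime import datetime, timedelta

# Illustrative records; in practice these come from your ITSM/CMDB tooling.
changes = [
    {"id": "CHG-101", "applied": datetime(2018, 3, 2, 22, 0)},
    {"id": "CHG-102", "applied": datetime(2018, 3, 5, 1, 30)},
]
incidents = [
    {"id": "INC-900", "opened": datetime(2018, 3, 2, 23, 10)},
    {"id": "INC-901", "opened": datetime(2018, 3, 4, 9, 0)},
    {"id": "INC-902", "opened": datetime(2018, 3, 5, 2, 15)},
]

WINDOW = timedelta(hours=4)  # assumed attribution window

def change_related(incident):
    """An incident counts as change-related if it opened shortly after a change."""
    return any(0 <= (incident["opened"] - c["applied"]).total_seconds()
               <= WINDOW.total_seconds() for c in changes)

related = [i for i in incidents if change_related(i)]
print(f"{len(related)}/{len(incidents)} incidents follow a change "
      f"({100 * len(related) // len(incidents)}% change-related)")
```

If the resulting percentage is anywhere near the 70% Dave estimates, tightening change management is the cheapest improvement available, well before any machine learning is involved.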
Cloud & Hybrid IT Analytics: 1 on 1 with Peter Burris, Wikibon
>> Hey, welcome back everyone. We're here live in the Palo Alto CUBE studios for our special digital live event sponsored by CA Technologies. I'm here with Peter Burris, Head of Research at Wikibon.com and General Manager of Research for SiliconANGLE Media. Peter, you gave the keynote this morning along with Sudip Datta, talking about analytics. Interesting connection. Data has been around for a while, but now it's more instrumental. CA's had analytics and monitoring for a while, and now they're more instrumental too. That seems to be the theme we're seeing here with the research that you're presenting and your insight around digital business, some of the leading research on the topic. Your thoughts on how they connect: what should users know about the connection between data and business, CA analytics and data? >> I think two things, John. First off, as I kind of mentioned, number one is that more devices are going to be more instrumental to the information flows and data flows that are going to create business value, and that's going to increase the need for greater visibility into how each of these things works individually. But increasingly it's not just about having individual devices or individual things up and running, or having visibility into them. You have to understand how they end up interacting with each other, and so the whole modern topology becomes more important. We need to find ways of improving the capability of monitoring while at the same time simplifying it; that is the only way we're going to achieve the goal of these increasingly complex infrastructures that nonetheless consistently deliver the business value that the business requires and customers expect. >> It's been interesting, monitoring has been around for a while: you can monitor this, you can monitor that, you can kind of bring it all together in a database. But as we move to the cloud, and you're seeing the internet of things as you pointed out, there's a real connection here, and the point that I wanted to talk about is, you mentioned the internet as a computer. Okay, which involves systems software kind of thinking. Let's tease that out. I want to unpack that concept, because if the internet now is the platform that everyone will be basing and reimagining their business around, how do companies need to figure this out? This is on everyone's mind, because it's easy to miss the fact that it costs a hell of a lot of cash just to move stuff from the edge to the cloud, or even just to get the architectural strategy right. What's the importance of the internet as a computer? >> Well, the notion of internet-scale computing has been around for quite some time. And the folks who take that kind of systems approach to things, many of them are sitting within 50 miles of where we sit right here. In fact, most of them. So, Google looks at the internet as a computer that it can process with. Facebook sees things the same way. So, if you look at some of these big companies that are actually thinking about internet-scale computing, any service, any data, anytime, anywhere, then that thinking has started to permeate, certainly in Silicon Valley. And in my conversations with CIOs, they increasingly want to think the same way: how do I have to think about my business relative to all of the available resources that are out there, so I can have my company think about gaining access to a service wherever it might be? Gaining access to data that would be relevant to my company, wherever it might be.
Appropriately moving the data, minimizing the amount of data that I have to move, moving the events to the data when necessary. This is, in many respects, the architectural question in IT today: how do we think about the way we weave together all these possible resources and combinations into something that sustainably delivers business value in a coherent, manageable, predictable way? >> It's interesting, you and I have both seen many waves of innovation going back to the minicomputer and mainframe days, and there used to be departments called data processing, and these would be the departments that handled analytics and monitoring. But now we're in a new era, a modern era where everything can be instrumented, which elevates the notion of a department into a holistic perspective. You brought this up in your talk during the keynote, where you said data has to permeate throughout the organization, whether it's the IoT edge or wherever. So how do companies move from that department mindset, oh, the department handles the data warehouse or analytics, to a much more strategic, intelligent system? >> Well, that's an interesting question, John. I think it's one of the biggest things a business is going to have to think about. On the one hand, our expectation is that we will continue to see a department, though not in the way it has historically been thought about. One of the reasons why is that the entire business is going to share claims against the capabilities of technology. Marketing's going to lay a claim to it. Sales is going to lay claim to it. Manufacturing and finance are going to lay claims to it. And those claims have to be arbitrated; they have to be negotiated. So there will be a department, a group that's responsible for ensuring that the fundamental plant, the fundamental capabilities of the business, are high quality, up and running, and sustained. Having said that, the way that that is manifest is going to be much faster, much more local, much more in response to customer needs, which often will break down functional barriers. And so it's going to be this interesting combination: on the one hand, from an efficiency and effectiveness standpoint, we're going to sustain that notion of a group that delivers, while at the same time everybody in the business is going to be participating more directly in establishing the outcomes and how technology achieves those outcomes. It's a very dynamic world, and we haven't figured out how it's all going to come together. >> Well, we're seeing some trends. Now you're seeing the marketing departments and these other departments taking on some of that core competence that used to be kind of outsourced to the IT department, so analytics and data science are moving in, and you're seeing the early signs of that. I think the modern analytics that CA was talking about is interesting, but I want to get your thoughts on the data value piece, because this is another billion dollar question, or gazillion dollar question. Where is the value in the data? And from your research on the impact of digital business, where does the value come from? And how should companies think about extracting that value? >> Well, the value, first off: when we talk about the value of data, we perhaps take a little license with the concept. By that I mean, software, to a computer scientist, is data. It happens to be absolutely the most structured data you can possibly have. It is data that is so tightly structured that it can actually execute.
So we bring software in under that rubric of the value of data. That's one way: data is the basis for software, and for how we think about the business actually taking consequential, differentiated actions in an increasingly digital world. One of the most important things, ultimately, about data is that unlike virtually every other asset that I can think of, money, labor, materials, all of those different types of assets are dominated by the economics of scarcity. You and I are sitting here having a conversation. I'm not running around walking my dog right now. I can only do one thing with my time. I may be thinking about other things, but I can't create value at the same moment that I'm talking to you. I mean, we can create value here, I guess. Same thing if you have a machine: if the machine is applied to pull a wire of a certain diameter, it's not pulling a wire of a different diameter. So these are all assets or resources that are dominated by scarcity. Data's different, because the characteristic of data, the thing that makes data so unique and so interesting, is that the same data can be applied to a lot of things at the same time. So we're talking about an asset that can actually amplify business value if it's appropriately utilized. And I think this is, on the one hand, one of the reasons why data is often regarded as disposable: oh, I can just copy it, or I can just do this with it, or I can do that with it; it just goes away, it's ephemeral. But on the other hand, it's why leading businesses, a lot of these digital native companies, but increasingly other companies too, are now recognizing that with data-as-an-asset thinking, you can apply the same data to a lot of different pursuits at the same time. And quite frankly, that's what our customers want to see. Customers want to see their requests, their needs, be matched to capabilities, but also be used to build better products in the future, be used to ensure that the quality of the services they're getting is high, that their needs are being met and responded to. So they want to see data being applied to all these different uses. It's an absolutely essential feature of the future of digital business. >> And you've got to monitor it in order to understand it. And for the folks watching, Peter had a great description in his keynote, go check that video out, around the elements of the digital business and how it's all working together. I'll let you go look at that. My final question for you is, you mentioned in your keynote the Wikibon true private cloud report. One of the things that's interesting in that graph, which he presented in the keynote and which is also on Wikibon.com if you're a member of the research subscription, is that it shows the on-premises assets are super valuable, and that there's going to be a decline in non-differentiated, operational labor over the next six, seven years, around 1.6 billion dollars, but it shifts. And I think this was your point. Can you explain a little more deeply the importance of that statistic? Because what it shows is, yes, automation's coming, whether it's analytics or machine learning and whatnot, but the value's shifting. Can you talk about that? >> Yeah, the very nature of the work that's performed within what we today call IT operations is shifting. It always has been.
When I was running around inside an IT organization, I remember some of the most frenetic activity I saw came from tape jockeys. We don't have too many tape jockeys in the world anymore; we still have tape, but we don't have a lot of tape jockeys. So the first thing it suggests is that the very nature of the IT work that's going to be performed is going to change over the next few years. It's going to change largely in response to the fact that folks recognize the value of the data and acknowledge that the placement of data relative to the event is going to be crucial to serving that event within the envelope of time the event requires. Ultimately the slow march of DevOps, which is still a maturing, changing, not broadly adopted set of concepts, will start to change the nature of the work we perform within that shared IT organization we were talking about a second ago. But the second thing it says is that we are going to be called upon to do a lot more work within an IT organization. A digital business utilizes technology to perform a multitude of activities, and that's just going to explode over the course of the next dozen years. So we have this combination: the work is going to change, and the amount of work that's going to be performed by this group is going to expand dramatically, which means ultimately the only way out of this is that the tooling is going to improve. So we expect to see significant advances in the productivity of an individual within an IT organization to support and sustain a digital business. And that's why we start to see some of the downtick in the cost of labor within IT. It's more important, and more work is going to be performed, but it's pretty clear that the industry is now focused on improving that tooling and simplifying the way that tooling works together. >> And having intelligence. >> Having intelligence, but also simplifying how it works together so it becomes more coherent. That's how we're going to reach these new levels of productivity. >> Real quick to end this segment, talk about how CA connects to this, because they have modern analytics, they have modern monitoring strategies, the four pillars that you talked about. How do they connect to the research that you're talking about? >> Well, I think one of the biggest things that a CIO is going to have to understand over the course of the next few years, and we talked about a couple of them, is that this new architecture is not fully baked yet. We don't know exactly what the new computing model is going to look like. You know, not every business is Google. Google's got a vision of it. Amazon's got a vision of it. But not every business is one of those guys. So there's a lot of work on what that new computing model is. A second thing is this notion of where, or how, an IT organization is ultimately going to deliver value. And it's clear that you're not going to deliver value by optimizing a single resource. You're going to deliver value by looking at all of these resources holistically, and understanding the interconnections and interplay of these resources and how they achieve the business outcomes. So when I think about CA, I think of two things. First off, it is a company that has been at the vanguard of understanding how IT operations has worked, is working, and will likely continue to work as it evolves. And that's an important thing for a technology company that's serving IT operations to have.
The second thing is that CA's core message, CA's tech message, is now evolving from just best-of-breed to how these things are going to come together. So the notion of modern monitoring is to improve visibility into everything as a holistic whole, going back to that idea that it's not just one device, it's how all devices holistically come together, and the monitoring fabric that we put in place has to focus on that, not just the productivity of any one piece. >> It's like an early days Tesla: it only gets better, because they have that headroom to grow. Peter Burris, head of research at Wikibon.com, here for one-on-one conversations as part of the cloud and modern analytics for digital business event. Be back with more one-on-one conversations after this short break.
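An aside on the data-movement point Peter raised earlier in this segment: the trade-off between shipping raw data to a centralized cloud and processing it near where it's generated can be roughed out in a few lines. Every number below is a placeholder assumption; the point is the shape of the comparison, not the figures.

```python
# Back-of-the-envelope: move raw data to the cloud vs. process near the data.
# All prices and volumes are illustrative assumptions, not quotes.
daily_data_gb       = 5_000    # data generated at the edge per day (assumed)
egress_price_per_gb = 0.09     # assumed network transfer price per GB
summary_fraction    = 0.02     # fraction worth shipping after local reduction
local_infra_per_day = 180.00   # amortized true-private-cloud cost per day

ship_everything = daily_data_gb * egress_price_per_gb
process_locally = (daily_data_gb * summary_fraction * egress_price_per_gb
                   + local_infra_per_day)

print(f"ship raw data:   ${ship_everything:,.2f}/day")
print(f"process locally: ${process_locally:,.2f}/day")
```

Arithmetic like this ignores the uncertainty and transaction costs of movement, which are real, but even the naive version shows why workloads increasingly gravitate toward the data rather than the other way around.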
Eric Herzog, IBM Storage | CUBE Conversation February 2020
(upbeat funk jazz music) >> Hello, and welcome to theCUBE Studios in Palo Alto, California for another CUBE Conversation, where we go in depth with thought leaders driving innovation across the tech industry. I'm your host, Peter Burris. What does every CIO want to do? They want to support the business as it evolves and transforms, using data as the catalyst for better customer experience, improved operations, and more profitable options. But to do that, we have to come up with a way of improving the underlying infrastructure that makes all this possible. We can't have a situation where we introduce more complex applications in response to richer business needs and have that translate into non-scalable underlying technology. CIOs in 2020 and beyond have to increasingly push their suppliers to make things simpler. And that's true in all domains, but perhaps especially storage, where the explosion of data is driving so many of these changes. So what does it mean to say that storage can be made simpler? Well, to have that conversation we're going to be speaking with Eric Herzog, CMO and VP of Global Channels at IBM Storage, about, quite frankly, an announcement that IBM is making specifically to address that question: making storage simpler. Eric, thanks very much for coming back to theCUBE. >> Great, thank you. We love to be here. >> All right, I know you've got an announcement to talk about, but give us the update. What's going on with IBM Storage? >> Well, I think the big thing is, clients have told us storage is too complex. We have a multitude of different platforms: an entry product, a mid-range product, a high-end product, and then we have to traverse to the cloud. Why can't we get a simple, easy to use, but very robust feature set? So at IBM Storage, with this FlashSystem announcement, we have a family that traverses entry, mid-range, and enterprise, and can automatically go out to a hybrid multicloud environment, all driven across a common platform, common API, common software, our award-winning Spectrum Virtualize, and innovative technologies around cyber-resiliency, incredible performance, and ease of use, easier and easier to use. For example, we can do AI-based automated tiering from one flash array to another, or from storage class memory to flash. Innovation, at the same time driving better value out of the storage, but not charging a lot of extra money for these features. In fact, in our FlashSystem announcement, the platforms, depending on the configuration, can be as much as 50% lower in price than our previous generation. Now that's delivering value, but at the same time we added enhanced features, for example, even better container support than we already had in our older platforms, or our new FlashCore Modules that can deliver performance in a cluster of up to 17.2 million IOPS, up from our previous 15 million. Yet, as I said before, we're delivering that enterprise value and those enterprise data services at, as you said, depending on the config, up to as much as 50% less than some of our previous generation products.
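It's worth doing the arithmetic on claims like these. Taking the figures Eric quotes here and a bit later in the conversation (17.2 million IOPS in eight rack units, versus a competitor's 15 million in forty), the density gap rather than the headline IOPS is the real story. The numbers are the vendor's; the comparison below is only a sketch:

```python
# Density comparison using the figures quoted in this conversation.
ibm_iops, ibm_rack_u = 17_200_000, 8
competitor_iops, competitor_rack_u = 15_000_000, 40

ibm_density = ibm_iops / ibm_rack_u                        # ~2.15M IOPS per U
competitor_density = competitor_iops / competitor_rack_u   # ~375K IOPS per U

print(f"IBM:        {ibm_density:,.0f} IOPS per rack unit")
print(f"Competitor: {competitor_density:,.0f} IOPS per rack unit")
print(f"Density advantage: {ibm_density / competitor_density:.1f}x")
```

Vendor-quoted maxima rarely survive contact with production workloads, so treat the absolute figures as marketing; the ratio is what the operational-manpower argument rests on.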
And those are differentiated by the characteristics of the technologies being employed, the functions and services that are being offered, and the prices and financial arrangements that are part of it. Are you talking about, essentially, a common product line that is differentiated only by the configuration needs of the volume and workloads? >> Exactly. The FlashSystem family traverses entry, mid-range, and enterprise, and can automatically get you out to a hybrid multicloud environment: same APIs, same software, same management infrastructure. Our Storage Insights product, which is a cloud-based storage management and predictive analytics tool, works on the entry product at no charge, the mid-range product at no charge, and the enterprise product at no charge, and we've even added, in that solution, support for non-IBM platforms. So, delivering more value across a standard platform with a common API and common software. Remember, today's storage is growing exponentially. Are the enterprise customers getting exponentially more storage admins? No. In fact, many of the big enterprises, after the downturn of '08 and '09, had to cut back on storage resources. They haven't hired back to the number of storage resources they had in 2007 or '08. They've gotten back to full IT, but a lot of those people are DevOps people or other functions, so the storage admins and the IT infrastructure admins have to manage extra petabytes, extra exabytes, depending on the type of company. So one platform that can do that, and traverse out to the cloud automatically, gives you that innovation and that value. In fact, two of our competitors, just as an example, do the same thing with four platforms. Two others have three. We can do it with one. A simple platform, common API, common storage management, common interface, incredible performance, cyber-resiliency, all built on a common data management infrastructure with common data software, yet continuing to innovate as we've done with this release of the FlashSystem family. >> OK, so talk about the things that are common: common API, common software, and also, I presume, that FlashCore Module that you have, common across the family as well? >> Almost all the family. At the very entry space we still use industry-standard SSDs, but we can get as low as a street price of $16,000 for an all-flash array. Two, three years ago that would've been unheard of. And, by the way, it has six nines of availability and the same software interface and API as a system that could go up to millions of dollars at the way high end, right? And anything in between. So common ease of use, common management: simple to manage, simple to deploy, simple to use, but not simple in the value proposition. Reduce the TCO, improve the ROI, reduce the operational manpower; they're overtaxed as it is. So we're making this available across the FlashSystem portfolio, going out to the hybrid multicloud, bringing in all this high technology such as our FlashCore Modules, and, as I said, at a reduced price relative to the previous generation. What more could you ask for? >> OK, so you've got some promises that you made in 2019 that you're also actually realizing. One of my favorite ones, something I think is pretty important, is storage class memory. Talk about how some of those 2019 promises are being realized in this announcement.
>> So what we did is, when we announced our first FlashSystem family in 2018 using our new NVMe FlashCore Modules, we had an older FlashSystem family for several years that used, you know, the standard SAS interface. But our first NVMe product was announced in the summer of 2018. At that time we said, all the way back then, that in early '20 we would start shipping storage class memory. Now, by the way, those FlashSystem NVMe products that we announced back then can still use storage class memory, so we're protecting the investment of our installed base. Again, innovation with value for the installed base. >> A very IBM thing to do. >> Yes, we want to take care of the installed base, but we also want to have modern new technologies, like storage class memory, and like the improved performance and capacity in our FlashCore Modules, where we take off-the-shelf flash and create our own modules. Seven-year media warranty, up to 17.2 million IOPS, 70 microseconds of latency, which is 30% better than our next nearest competitor. By the way, we can create a 17 million IOPS config in only eight rack U. One of our competitors gets close, 15 million, but it takes them 40 rack U. Again, operational manpower: 40 rack U is harder to manage, and for simplicity of deployment, it's harder to deploy all that in 40 rack U; we can do it in eight. >> And pricing. >> Yes. And we've even brought out now a preconfigured rack, what we call the FlashSystem 9200R, built into the rack with the switching infrastructure and the storage you need. IBM services will deploy it for you, that's part of the deal, and you can create big solutions that can scale dramatically. >> Now R stands for hybrid? >> Rack. >> Rack. Well, talk to me about some of the hybrid packaging that you're bringing out for hybrid cloud. >> Sure, so, from a hybrid cloud perspective, our Spectrum Virtualize software, which sits on-prem at entry, mid-range, and the upper end, can traverse to a cloud version called Spectrum Virtualize for Cloud. Now, one of the key things about Spectrum Virtualize, both on-prem and our cloud version, is that it supports not only IBM arrays but, through storage virtualization technology, over 450 arrays from multiple vendors, in short, our competition. So we can take our arrays and automatically go out to the cloud. We can do a lot of things: cloud air gapping to help with malware and ransomware protection, DR, snapshots and replicas. Not only can the new FlashSystem family do that, through Spectrum Virtualize on-prem and then out, but Spectrum Virtualize running on our FlashSystem portfolio can actually virtualize non-IBM arrays and give them the same enterprise functionality, and in this case hybrid cloud technology, not only for us, but for our competitors' products as well. One user interface. Now talk about simple. Our own products, again, one family, entry, mid-range, and enterprise, traversing the cloud. And by the way, for those of you who are heterogeneous, we can deliver those enterprise-class services, including going out to a hybrid multicloud configuration, for our competitors' products as well. One user interface, one throat to choke, one support infrastructure with our Storage Insights platform, so it's a great way to make things easier, cut the CAPEX and OPEX, but not cut the innovation. We believe in value and innovation, but in an easy-to-deploy methodology, so that you're not overly complex. And that is what's killing people, the complexity of their solutions. >> All right.
So there's a couple of things about cloud, as we move forward, that are going to be especially interesting. One of them is going to be containers. Everybody's talking about, IBM's been talking about, you've been talking about, and we've talked about a number of times, how containers and storage and data are going to come together. How do you see this announcement supporting that emerging and evolving need for container-based applications in the enterprise? >> So, first of all, it's often tied to hybrid multicloud. Many of the hybrid cloud configurations are built on a container-based environment. We support Red Hat OpenShift. We support Kubernetes environments. We can provide, on these systems at no charge, persistent storage for those configurations. We also offer, although it does require a backup package, Spectrum Protect, the capability of backing up that persistent storage in an OpenShift or Kubernetes environment. So really, it's critical. Part of our simplicity is that this FlashSystem platform, with this technology, can support bare metal workloads; virtualized workloads, VMware, Hyper-V, KVM, OVM; and now container workloads. And we do see that continuing for the next few years. Think about bare metal. Bare metal is as old as I am. That's pretty old. Well, we've got tons of customers that still have bare metal applications, but everyone's also gone virtualized. So it's not, which one are we going to have? You're going to have all three. So with the FlashSystem family, what we have with the Spectrum Virtualize software, what we have with our container support, with bare metal support, incredible performance, whatever you need, VMware integration, Hyper-V integration, everything you need for a virtualized environment, and for a container environment, we have everything too. And we do think that especially the mid-size to big accounts are going to run all three, at least for the next couple of years. This gives you a platform that can do that, from the entry point up to the high end, and then out to a hybrid multicloud environment. >> With that common software and APIs across it all. Now, every year that you and I have talked, you've been especially passionate about the need for turning the crank, and evolving and improving the nature of automation, which is another one of the absolute necessities as we start thinking about cloud. How is this announcement helping to take that next step, to turn the crank on automation? >> So, a couple of things. One is our support now for Ansible; that Ansible support integrates into the container management frameworks. The second thing is, we have a ton of AI-based technology built into the FlashSystem platform. First is our cloud-based storage management and predictive analytics package, Storage Insights. The base version comes free across our whole portfolio, whether it be entry, mid-range, or high-end, across the whole FlashSystem family. It gives you predictive analytics. If you really do have a support problem, it eases the support process. For example, instead of me saying, "Peter, send me those log files," guess what? We can see the log files, and we can do it right there while you're on the phone. You've got a problem? Let's make it easier for you to get it solved. So that's Storage Insights: AI-based predictive analytics on performance and configuration issues, all done predictively. Secondly, we've integrated AI into our Spectrum Virtualize product.
So as an example, our Easy Tier technology can tier data from storage class memory to flash, and guess what it does? It automatically knows, based on usage patterns, where the data should go. Should it be on the storage class memory? Should it be on FlashCore Modules? And in fact, we can create a configuration with FlashCore Modules and industry-standard SSDs, which are both flash, but our FlashCore Modules are substantially faster, with much better latency, like I said, 30% better than the next nearest competition, and up to 17.2 million IOPS. The next closest is 15 million. And in fact, it's interesting: one of our competitors has used storage class memory as a read cache. It dramatically helps them. But they go from a publicly stated 250 microseconds of latency to 125. With this product, the FlashSystem family, anything that uses our FlashCore Modules, our FlashSystem 7200, our FlashSystem 9200 product, and the 9200R product, we can do 70 microseconds of latency, so almost twice as fast, without using storage class memory. So think what that storage class memory will offer. So we can create hybrid configurations with storage class memory and flash; you could have our FlashCore Modules and add industry-standard SSDs if you want, but it's all AI-based. So we have AI in our Storage Insights predictive analytics, management, and support infrastructure, and we have predictive analytics in things like our Easy Tier. So not only do we think storage is a critical foundation for AI application workloads and use cases, which it is, but you need to imbue your storage itself with AI, which we've done across FlashSystem, including what we've done with our cloud edition, because Spectrum Virtualize has a cloud edition and an on-prem edition, with seamless transparency, AI across that entire platform, using Spectrum Virtualize. >> All right, so let me summarize. We've got an absolute requirement from enterprises to make storage simpler, which requires simple product families with more commonality, where that commonality delivers great value, and at the same time the option to innovate where that innovation's going to create value. We have a much simpler set of interfaces and technologies, as you said, they're common, but they are more focused on the hybrid cloud, multicloud world that we're working in right now, which brings more automation and more high-quality storage services to bear wherever you are in the enterprise. So I've got to ask you one more question. I'm a storage administrator, or a person who is administering data inside the infrastructure, and I used to think of doing things a certain way. What are the one or two things that I'm going to do differently as a consequence of this kind of announcement? >> So I think the first one is that it's going to reduce your operational expenses and your operational manpower, because you have a common API, a common software platform, a common foundation for data management and data movement; it's not going to be as complex for you to run your storage configurations. Second thing, you don't have to make as many choices between high-end workloads, mid-range workloads, and entry workloads. Six nines across the board. Enterprise-class data services across the board. So when you think simple, don't think simplistic, low-end. This is a simple to use, simple to deploy, simple to manage product, with extensive innovation and a price that's- >> So simple to secure? >> And simple to secure. Data at rest encryption across the portfolio.
And in fact, for those that use our FlashCore Modules, there's no performance hit on encryption, and no performance hit on data compression. So it can help you shrink the actual amount you need to buy from us, which sounds sort of crazy for a storage company to do, but with our data reduction technologies, compression being one of them, there's no performance hit: you can compress compressible workloads, and now, with anything using a FlashCore Module, which, by the way, happens to be FIPS 140-2 certified, there's no excuse not to encrypt, because encryption, as you know, has had a performance hit in the past. Now, on our FlashSystem 7200, our FlashSystem 5100, and our FlashSystem 9200 and 9200R, there's no performance hit on encrypting, so it gives you that extra resiliency you need in the storage world, and you don't give up compression, which helps you shrink how much you end up buying from IBM. So that's the type of innovation we deliver: simple to use, easy to deploy, easy to manage, but with incredible innovative value brought into a very innovative solution, across the board, not just, let's innovate at the high end, you know what I mean? We're trying to make that innovation spread, which, by the way, makes it easier for the storage guy. >> Well, look, in a world, even inside a single enterprise, where you're going to have branch offices, you're going to have local sites, the edge, you can't let the bad guys in on a lesser platform that can then hit data on a higher-end platform. So the days of presuming that there's this great differentiation across the tiers are slowly coming to an end, as everything becomes increasingly integrated. >> Well, as you've pointed out many times, data is the asset, now the most valuable one. It is the asset of today's digital enterprise, and it doesn't matter whether you're a global Fortune 500 or you're a (mumble). Everybody is a digital enterprise these days, big, medium, or small. So cyber resiliency is important, cutting costs is important, and being able to modernize and optimize your infrastructure simply and easily is important. The small guys don't have a storage guy and a network guy and a server guy; they have the IT guy. And even the big guys, who used to have hundreds of storage admins in some cases, don't have hundreds anymore. They've got a lot of IT people, but they cut back, so the storage admins and infrastructure admins in these global enterprises are managing 10, 20 times the amount of storage they managed even two or three years ago. So, simple, across the board, and of course hybrid multicloud is critical to these configurations. >> Eric, it's a great announcement; congratulations to IBM for actually delivering on your promises. Once again, great to have you on theCUBE. >> Great, thank you very much, Peter. >> And thanks to you, again, for participating in this CUBE Conversation. I'm Peter Burris, see you next time. (upbeat jazz music)
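Eric's description of Easy Tier, a system that "automatically knows, based on usage patterns, where the data should go," is an instance of heat-based data placement. The sketch below shows the general shape of such an algorithm; it is emphatically not IBM's implementation, and the extent granularity, tier capacity, and promotion policy are invented for illustration.

```python
# Heat-based tiering sketch: promote hot extents to storage class memory
# (SCM), demote cold ones to flash. Illustrative only -- not Easy Tier.
from collections import Counter

SCM_CAPACITY_EXTENTS = 2   # assumed: the SCM tier holds 2 extents

class TieringEngine:
    def __init__(self):
        self.heat = Counter()   # access count per extent this interval
        self.scm = set()        # extents currently placed on SCM

    def record_io(self, extent):
        self.heat[extent] += 1

    def rebalance(self):
        """Place the hottest extents on SCM; everything else on flash."""
        hottest = {e for e, _ in self.heat.most_common(SCM_CAPACITY_EXTENTS)}
        for extent in hottest - self.scm:
            print(f"promote {extent} -> SCM")
        for extent in self.scm - hottest:
            print(f"demote  {extent} -> flash")
        self.scm = hottest
        self.heat.clear()       # start a fresh measurement interval

engine = TieringEngine()
for extent in ["A", "B", "A", "C", "A", "B", "D"]:
    engine.record_io(extent)
engine.rebalance()   # promotes A and B; C and D stay on flash
```

Real tiering engines measure heat over sliding windows on fixed-size extents and rate-limit migrations so the tiering traffic doesn't itself become a bottleneck; the "AI based" part Eric describes presumably lies in forecasting heat rather than merely reacting to it.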
Michael Segal AWS Interview
>> Narrator: From our studios in the heart of Silicon Valley, Palo Alto, California, this is a CUBE Conversation. >> Hello, and welcome to theCUBE studios in Palo Alto, California for another CUBE Conversation, where we go in-depth with thought leaders driving innovation across the tech industry. I'm your host, Peter Burris. Michael Segal is the area vice president of strategic alliances at NETSCOUT Systems. Michael, we are sitting here in theCUBE studios in Palo Alto in November of 2019, with re:Invent 2019 right around the corner, and NETSCOUT and AWS are looking to do some interesting things. Why don't you give us an update on what's happening? >> Yeah, just a very brief introduction of what NETSCOUT actually does. NETSCOUT assures service performance and security for the largest enterprises and service providers in the world. We do it through something we refer to as visibility without borders, by providing the actionable intelligence necessary to very quickly identify the root cause of either performance or security issues. So with that, NETSCOUT is partnering very closely with AWS. We are an Advanced Technology Partner, which is the highest tier of partnership for ISVs. This enables us to partner with AWS on a wide range of activities, including technology alignment with the roadmap and participating in different launch activities for new functionality from AWS. It enables us to have go-to-market activities together, focusing on key campaigns that are relevant for both AWS and NETSCOUT, and it also enables us to collaborate on sales initiatives. So with this wide range of activities, what we can offer is a win-win-win situation for our customers, for AWS, and for NETSCOUT. From the customers' perspective, beyond the fact that the NETSCOUT offering is available in AWS Marketplace now, this visibility without borders that I mentioned helps our customers navigate their digital transformation journey and migrate to AWS more effectively. From AWS's perspective, the win is that their resources are now consumed by the largest enterprises in the world, so it accelerates the consumption of compute, storage, networking, and database resources in AWS. And for NETSCOUT, this is strategically important because NETSCOUT is now becoming a strategic partner to our large enterprise customers as they navigate their digital transformation journey. So that's why it's really important for us to collaborate very efficiently with AWS: it's important to our customers, and it's important to AWS. >> Michael Segal of NETSCOUT Systems, thanks very much for being on theCUBE. >> Thank you for having me. >> And once again, we'd like to thank you for joining us for another CUBE Conversation. Until next time.
Michael Segal, NETSCOUT Systems & Eric Smith, NETSCOUT Systems | CUBEConversation, January 2020
(upbeat music) >> Narrator: From our studios, in the heart of Silicon Valley, Palo Alto, California. This is a CUBE Conversation. >> Hello and welcome to theCUBE studios in Palo Alto, California, for another CUBE Conversation, where we go in-depth with thought leaders driving innovation across the tech industry. I'm your host, Peter Burris. Anybody that's read any Wikibon research, or been a part of any conversation with anybody here at SiliconANGLE, knows we're big believers in the notion of digital business and digital business transformation. Simply put, the difference between a business and a digital business is the role that data plays in a digital business. Digital businesses use data to change their value propositions, to better manage and get greater visibility and utilization out of their assets, and ultimately to drive new types of customer experience. That places an enormous burden on the technologies, the digital technologies, that have historically been associated with IT but are now becoming more deeply embedded within the business. And that digital business transformation is catalyzing a whole derivative set of other transformations, including, for example, technology, data centers, security, et cetera. It's a big topic, and to start to parse it and make some sense of it, we're joined by two great guests today: Michael Segal, the area vice president of strategic alliances at NETSCOUT Systems, and Eric Smith, the senior product line manager at NETSCOUT Systems. Gentlemen, welcome to theCUBE. >> Pleasure to be here, Peter. >> Okay, so, Michael, let's get going. Give us a quick update on NETSCOUT Systems. >> Yeah, so maybe just a quick introduction of what NETSCOUT actually does. NETSCOUT assures service performance and security for the largest enterprises and service providers in the world. The way we accomplish it is through what we refer to as offering visibility without borders: providing actionable intelligence that enables enterprises and service providers, very quickly and efficiently, to ensure their service performance and security, to discover problems, understand the root cause, and find the solution. So it reduces their overall mean time to repair, and it's used to assure that digital transformation and other transformation initiatives are executed effectively by the IT organization. >> All right, so let's jump into this notion of transformation. Now, I know that you and I have, on a couple of different occasions, talked about the idea of digital business transformation. What does digital business transformation mean to NETSCOUT, and what are some of the other derivative transformations that are associated with it? >> Right, so as you described very concisely in your introduction, business transformation is about enabling the business, through digital services and data, to differentiate itself from the competition very effectively. Now, one of the aspects of this digital transformation is that now, more than ever before, CIOs are taking a very active role in the transformation, because obviously information technology is responsible for digital services and for processing and analyzing data. So with that in mind, CIOs now need to support the business aspects of agility, right?
So if your business agility involves introducing new services very quickly and efficiently, the IT organization needs to support that, and at the same time they also need to assure that employee productivity and end user experience are maintained at the highest levels possible. This is exactly where NETSCOUT comes in: we support the IT organization by providing this visibility without borders, to assure that employee productivity and end user experience are maintained and any issues are resolved very quickly and efficiently. >> Especially customer experience; those are increasingly the most important end users that any digital business has to deal with. At this point, Eric, I want to bring you into the conversation. When we talk about this notion of greater visibility and greater security over digital assets, and the role that the CIO is playing, that also suggests that there is a new class of roles for architects, for people who have historically been associated more with running the networks and running the systems. How is their role changing, and how is that part of the whole concept of data center transformation? >> Right, so the guys that have typically been in what you might consider network operations types of roles, their roles are evolving as well, as the entire organization does. As Michael mentioned beforehand, no longer is the digital business wholly and solely confined to an IT department that is working just with its employees. They're now part of the business. They're not just a cost center anymore; they're actually an asset to the business, and they are supporting lines of business. So the folks that have traditionally had these roles, who have just maintained the network and maintained the applications, are having to become experts in other aspects. As certain applications disaggregate, or potentially move out partially into the cloud, they kind of become cloud architects as well, whether it's a public cloud or a private cloud; they have to understand those relationships, and they have to understand what happens when you spread your network out beyond your traditional data center core. >> So let's build on that, because that suggests that the ultimate solution for how we move forward has to accommodate greater visibility, end-to-end, across resources: not only those that we have traditionally controlled, and therefore could decide how much visibility we had into, if the tooling was right, but also resources that are outside of our direct purview. How does that work as we think about building this end-to-end visibility to improve the overall productivity and capability, as you said, the productivity and end user experience, of the systems we're deploying? >> Yeah, so maybe we can start with the end in mind, and what I mean by that is what you just described as end user productivity and user experience: how do we measure it, right? In order to measure it, what we need is visibility at the service level. And what I mean by visibility at the service level is actually looking not just at one specific component associated with the service, such as the application; that's one component, but the application is running on a network, and you have service enablers, for example, to authenticate, to do accounting, to do DNS resolution. So you need to look at all of these components of a service and be able to effectively provide visibility across all of them.
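Michael's point that a service is more than its application, that authentication, accounting, and DNS resolution are all part of what the user experiences, translates directly into how you measure. Here is a minimal sketch of a service-level probe; the hostnames, ports, and the 500 ms budget are hypothetical placeholders, and real instrumentation would of course be continuous rather than a one-shot script.

```python
# Probe each component of a service, not just the application itself.
# Hostnames, URL, and thresholds are illustrative placeholders.
import socket
import time
import urllib.request

def timed(probe):
    """Run a probe, returning (succeeded, elapsed milliseconds)."""
    start = time.monotonic()
    ok = True
    try:
        probe()
    except Exception:
        ok = False
    return ok, (time.monotonic() - start) * 1000

checks = {
    "dns":  lambda: socket.gethostbyname("app.example.com"),
    "app":  lambda: urllib.request.urlopen(
                "https://app.example.com/health", timeout=2),
    "auth": lambda: socket.create_connection(
                ("auth.example.com", 443), timeout=2).close(),
}

for name, probe in checks.items():
    ok, ms = timed(probe)
    status = "OK  " if ok and ms < 500 else "FAIL"   # assumed 500 ms budget
    print(f"{status} {name:4s} {ms:7.1f} ms")
```

The point of the exercise is that "the application is up" and "the service is healthy" are different claims; a slow DNS resolver degrades end user experience just as surely as a slow application server, and only a component-by-component view tells you which one it is.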
Now, the other aspect of this visibility is, as you mentioned, end-to-end, which is an excellent observation as well, because you're looking at the data center, which still holds very strategic assets; your crown jewels are still going to be in the data center, and some of the data will remain there. But now you are expanding to the edge, maybe to colos, maybe to micro data centers in the colos, and then you move workloads, migrate them to public clouds, it can be IaaS, and you have more SaaS providers providing you with different services. So this aspect of end-to-end really evolves into a geographically dispersed, very complex, and highly scalable architecture. >> Yeah, we like to say that the cloud is not an architecture or a strategy for centralizing resources; rather, it's a strategy for more greatly distributing resources, allowing data to be where it needs to be to perform the function, or where it gets captured, and allowing the service to go to the data, to perform the work that needs to be conducted from a digital business standpoint. That suggests that even though a customer, let's call it the end user, and the end user experience, may get a richer set of capabilities, the way by which that work is being performed gets increasingly complex, and partly, it sounds like, it's complexity that has to be administered and monitored so that you don't increase the time required to understand the nature of a problem and the nature of the fix. Have I got that right? >> You got it absolutely right, and I would add to this that the complexity that you described is being further magnified by the fact that you lose control to some extent, as you mentioned before, right? >> Or because, let's put it this way, it becomes a contracting challenge as opposed to a command and control challenge. Now the CIO can't tell Mike, "Go fix it"; the CIO has to get on the phone with a public cloud provider and say, our service level agreement says, and that's a different type of interaction. >> Right, and usually the service provider will say, the problem is not on my side, it's on your side. So the traditional finger pointing in war rooms is now expanded across multiple service providers, and you need to be able to very effectively and quickly identify: this is the root cause, this is why it's your fault, service provider, it's not our fault, please go and fix it. >> So let's dig into that if we can, Eric, this notion of having greater visibility so that you are in a better position to actually identify the characteristics of the problem, and where the responsibilities lie. How is that working? >> So, in the past, when digital transformation started its initial rise, it wasn't. What was happening is, as you both alluded to a moment ago, I can no longer call Mike and Suzie downstairs and say, you know, voicemail is not working, things are just not working, and sic them on it so they go fix it. What's happening now is that data is leaving your data center; it may be going through something like a colo, which is aggregating the data and then sending it on to your partner that is providing these services. So what you have to have is a way to regain visibility into those last-mile segments, if you will, so that as you work with your partners, whether it's the colo or the software provider, you can say, look, I can see things from here, I can see things to there, and here's where it goes south, and this is the problem, help me fix it.
And so, as you said a moment ago, you cannot let your mean time to resolution expand simply because you're engaging in these digital transformation activities. You need to remain at least as good as you were before, and hopefully better. >> Well, you have to be better, because your business is becoming more dependent on your digital business capabilities; increasingly, it's becoming your business. So let me dig a little deeper technically into that. A lot of companies are attempting to essentially provide a summary view of the data that's moving around a network, moving across these different centers and locations, edge, colo, et cetera. What is the right way to do it? What constitutes real truth when we talk about how these systems are going to work? >> So NETSCOUT believes, and I think most people wouldn't argue with us, that when you can actually see the packet data that goes across the network, you know what elements are talking to which ones, and you can build metrics and views upon that. That is very high-fidelity data, and you absolutely know what's going on. We like to call it the single source of truth. So as things come from the deep part of the data center, whether it's a virtualized server farm, all the way through the core of the network and your service enablers, like Michael mentioned, all the way through the colos, and out into an IaaS or SaaS type of environment, if you're seeing what's actually on the wire, and who's talking to whom, you know what's going on, and you can quickly triage and identify what the problem is so that you can solve it. >> Now is that something that increasingly architects or administrators are exploiting as they use these new classes of tools to gain visibility into how the different services are working together? And also, is that becoming a feature of how SLAs and contracts are being written, so that we can short-circuit the finger pointing with our service providers? >> Yeah, so there are, like you said, two parts to that. The first is, I think, that a lot of the traditional IT operations folks, as you mentioned earlier, are learning new roles, so to some degree it is new for them, and I don't know that everybody has started to make use of those tools yet, but that's part of what our story is to them: we can provide those tools for you, so that you can continue to isolate and solve these problems. And I'm sorry, what was the second part of your question? >> Well, the second part is, how does that translate into contracting? Does that knowledge about where things actually work inform a contracting process to reduce the amount of finger pointing, which, by the way, is a major transaction cost and a major barrier to getting things done quickly? >> Absolutely, and since you have this high-fidelity data at every step of the way, and you can see what's happening, you can prove to your partners where the problem lies. If I find it on my side, okay, no harm, no foul, I'll go fix it and move on with my life. But with that high-fidelity data, and being able to see all the transactions and all the applications and all the communications that happen end-to-end through the network between me and my partner, I can show them that they are outside of their SLA. And to your point, it should shorten the time spent finger pointing, because I have good data that says, this is the problem. You can't dispute that. And so, they're much more inclined to work with you in a, hopefully, very good way to fix the problem.
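The "single source of truth" idea is easiest to see with actual wire data. The sketch below derives per-connection TCP handshake round-trip times straight from packets, the kind of indisputable measurement being described. It assumes the scapy library and a capture file named capture.pcap, both illustrative, and it is a toy, not how NETSCOUT's own instrumentation works.

```python
# Measure TCP handshake RTT per flow from a packet capture: match each
# SYN to the SYN-ACK flowing back in the reverse direction.
from scapy.all import rdpcap, IP, TCP

packets = rdpcap("capture.pcap")
syn_times = {}  # (src, dst, sport, dport) -> time the SYN was seen

for pkt in packets:
    if not (IP in pkt and TCP in pkt):
        continue
    ip, tcp = pkt[IP], pkt[TCP]
    if tcp.flags.S and not tcp.flags.A:  # SYN: client opens the connection
        syn_times[(ip.src, ip.dst, tcp.sport, tcp.dport)] = float(pkt.time)
    elif tcp.flags.S and tcp.flags.A:    # SYN-ACK: server answers
        key = (ip.dst, ip.src, tcp.dport, tcp.sport)  # reverse direction
        if key in syn_times:
            rtt_ms = (float(pkt.time) - syn_times.pop(key)) * 1000
            print(f"{key[0]} -> {key[1]}:{key[3]} handshake RTT {rtt_ms:.1f} ms")
```

Because the numbers come from the packets themselves, both sides of an SLA dispute end up looking at the same facts.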
>> So that brings us back to the CIO. And I want to close with you on this, Michael. That's got to make a CIO happier, who is today facing a lot of business change and is trying to provide, you said agility, I'll say an increasing array of business and strategy options based on digital technology. Ensuring that they have greater certainty in the nature of the services, the provider of the services, and the service levels of the services has got to be an essential feature of their decision-making toolkit as they provide the business with different ranges of options, right? >> Absolutely correct. In fact, the high-fidelity data is critical in order to accomplish this. For the CIO to be able to demonstrate to the CEO and other key executives that their objectives are met, the KPIs are along the lines of your efficiency, your service delivery capabilities, and being able to monitor everything in real time. So, the high-fidelity data, I just want to elaborate a little bit more on what it means, because that's the difference between having key performance indicators that are relevant for the CIO, and relevant also for other key stakeholders, and having something that is a best guess that maybe is going to help. High-fidelity data, the way that NETSCOUT defines it, has several components. First of all, because it's based on traffic, or packet data, or wire data, it means that we continuously monitor the data and continuously analyze it, and it's the single source of truth because there's consistency in terms of what data is being exchanged. So the more visibility you get into the data that's being exchanged between different workloads, the more intelligence you can glean from it. The other aspect is, as we mentioned, the service level: if you think of packet data, it's all of layers two through seven, so you have the data link layer, the network, the transport, the session, the application. You can holistically identify any application and provide error codes in context, so the latency and error codes together give you the overall picture. All of this together constitutes very high-fidelity data. And at the end of the day, if the CIO wants to accelerate the digital transformation with confidence, this is the kind of high-fidelity data that you need in order to assure that your key performance indicators, as CIO, are being maintained. >> This is the as-is truth. >> Exactly. >> All right, Michael Segal, Eric Smith, I want to thank you both for being on theCUBE. >> Thank you for having me. >> Thank you very much, Peter, for having us. >> And thanks for joining us for another CUBE Conversation. I'm Peter Burris, see you next time. (upbeat music)
Thor Wallace, NETSCOUT | CUBEConversation, January 2020
[Music] >> Hi, I'm Peter Burris, and welcome to another CUBE conversation, where we go in-depth with thought leaders from around the industry to bring you the best ideas and insights about how to improve your business with technology. One of the many things that CIOs and business leaders have to think about is how they are going to execute digital transformations. What will be the priorities? We all know the relationship between digital transformation and using data differently, but different technologies assert themselves in different ways, and, very important, different relationships, especially with cloud vendors, assert themselves in different ways. And that's one of the many challenges that CIOs have to deal with today: serve the business better, attend to those relationships, and drive the company forward to achieve its ultimate outcomes and objectives. So to have that conversation, today we've got a great guest. Thor Wallace is the senior vice president and CIO at NETSCOUT. Thor, welcome to theCUBE. >> Thank you. >> So tell us a little bit about what the CIO at NETSCOUT does. >> Sure. So let me start by telling you a little bit about NETSCOUT. NETSCOUT is a network monitoring and service assurance company. As the CIO, I'm obviously responsible for providing the tools and the environment for running the company. I'm also heavily involved in, for example, understanding the applications and the business direction that we're taking. We're also working on improving our customer relationships and experiences; for example, we have a customer portal that we're re-evaluating and improving. And we're also obviously trying to drive user productivity worldwide. Very briefly, we have about 33 locations worldwide; we're headquartered here, outside of Boston, and have large offices both in Texas and California. >> So you're a traditional supplier of technology services that's trying to make a transition to this new world, and as part of that, NETSCOUT itself is going through digital transformation so that it can better support its customers' digital transformations. Have I got that right? >> Exactly. So let me tell you a little bit about what we're trying to achieve, what some of the whys are, and where we are at this moment. We as a company are being challenged by the same sort of environment that everyone else is being challenged with, which is to be able to move as quickly as we can and provide as much of an impact for our customers as possible. How I've read that mandate and that remit is to really focus on improving our customer experience, as I said, working with a new platform, re-platforming and refactoring our customer service application, but also really focusing on how best to improve user productivity. So those are the areas that we've been focusing on, and driving IT productivity is important to me. >> So that's a fairly substantial argument for moving operations to the cloud. >> And part of that is transforming a hardware-based environment to much more of a virtualized and software-based environment. That includes cloud; that includes virtualization, where we've obviously taken a lot of ground. For example, what we've already done is virtualize all of our operations in the data center over the years. We've also moved a lot of workloads to the cloud. We're cloud-agnostic, but we have a fairly large environment with Salesforce.com, and we use Office 365, which are obviously major applications in the cloud.
So we have a workload that's quite mixed today. We maintain on-prem data centers, and we have a large engineering footprint as well, so we kind of live in all of the worlds: we live obviously on-prem, and we have cloud. One of the things that I think we've learned over the years is that in order to continue the journey to cloud, we need to really worry about a couple of things. One is we want to make sure that we keep our operations in an excellent place, and I can talk more about that in a few minutes. And as I said, we want to continue to maintain our ability to execute, and really what I call velocity, to be able to add value. Cloud actually presents some of those opportunities for us, but it also obviously makes things quite complicated, in that we have multiple environments. We have to make sure that people still get the services and the applications they need to do their jobs, and provide those in a very productive, cost-effective way, so that we can maintain that as an IT organization. >> So you've got Salesforce.com, you've got Office 365, you've moved some other applications up into the cloud. Each of those applications, though, has been historically associated with a general-purpose network that you get to control, so that you can give different quality of service to different classes of workload or applications. How is that changing, and what pressure is that putting on your network as you move to more cloud-based operations? >> Well, I think that's a huge challenge for us, and I think, frankly, for most people. I think you have to rethink how your network is designed, fundamentally, from the ground up. If you think about networks in the past, in a mainly on-prem world, you basically had to backhaul a lot of traffic, in our case from 33 locations worldwide, a lot of backhauling of services and transactions back to wherever the application exists. So, for example, historically we've had the Microsoft mail system, or Exchange, on-prem, and we have other services that are on-prem, for example Oracle and our ERP system, et cetera. The challenge was to move all that traffic back to basically our core data center, and as you move to the cloud, you have an opportunity to rethink that. So what we've been in the process of doing over the last, say, year has been to redesign our network from the ground up, moving away from a central, monolithic network to more of a cloud-slash-edge-based network. With that, we've also moved away from a fairly heavy investment in hardware in each of the offices, for example, and we're now very far along in the process of converting all that hardware into a software-defined network. That allows us to do some things that we have never been able to do operationally; for example, we can make deployments from one central location worldwide, both for security and patching, et cetera. And as I said, we have a lot of our workloads already in the cloud, and we continue to put more in the cloud. One of the things that's become important is that we've got to maintain, and actually create, a low-latency environment. So, for example, ultimately putting our unified communication systems and technologies in the cloud means nothing without having a low-latency environment and a low-latency network, so that we can actually provide dial tone worldwide without worrying about performance.
>> So what we've already done is transition from the centralized network to an edge-based network. We now have a partner, and we are putting services into a local presence worldwide, into three locations with Equinix. With that comes the software-based network, which allows us to move traffic directly to the edge, and therefore, once we're at the edge, we can go very quickly, at sort of backbone speeds, into whatever cloud service we need, whether it's Azure, AWS, Salesforce, or any other provider, or Office 365. We can get that sort of speed and low latency. That has created a new environment for us which is now virtual and software-based, and it gives us a tremendous amount of flexibility: for the fairly heavy and significant workloads that remain on-prem, it gives us the option of moving those to the cloud. And with that, one of the key things is making sure that we can hold our vendors very accountable for performance. So, for example, if we experience an issue with Office 365 performance, whether it's in Pune or Westford or wherever it is, we want to be able to make sure that we have the information and the data that says to Microsoft, in this case, hey, the performance isn't great from wherever those users are, wherever that office is. So we want to provide them information and basically prove that our internal capabilities and network are performing very well, but that maybe there's an issue with something on their side. Without this sort of fact-based information, it's really hard to have those discussions with vendors. So one of the things I think is important for everyone to consider when you move more to the cloud is that you've got to have the ability to troubleshoot and make sure that you can actually maintain a very complicated environment. One of the things we have done, and continue to do, is use our own products to get greater visibility than we've ever had before in this new multicloud, multi-prem environment, which is a very powerful thing for us. The team that is using this technology is seeing things that they've never really been able to see before, so that's been quite exciting, but I think that's, frankly, table stakes moving forward into a deeper, more cloud-based, workload-independent model that we're seeking. >> Well, let me build on this, because I have conversations like this all the time, and I don't think people realize the degree to which some of these changes are really going to change the way problems actually get worked on. When you have control of the network and the application and the endpoints, if there is an issue, you can turn to someone who works for you and say, here's the deal, fix this, or I'll find somebody else that can fix it. So you have an almost employment-based model of coercion; you can get people to do what you want them to do. But when you move into the cloud, you find yourself having to use a contracting approach to actually get crucial things done, and problems crop up either way. It doesn't matter if you own it all or somebody else owns it all, you're going to encounter problems, and so you have to accelerate and diminish the amount of back-and-forth haggling that goes on. And as you said, the best way to do that is to have fact-based, evidence-based visibility into what's actually happening, so that you can pinpoint and avoid the back-and-forth about whose issue it really is.
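The vendor-accountability idea Wallace describes, measuring SaaS performance from each office so the discussion starts from data, can be approximated with a very small probe. The endpoints, the office label, and the sampling approach below are placeholders, not NETSCOUT tooling; real measurements would use service-specific health endpoints and far more careful statistics.

```python
import statistics
import time
import urllib.request

# Placeholder SaaS front doors to measure from this office.
TARGETS = {
    "office365": "https://outlook.office365.com",
    "salesforce": "https://login.salesforce.com",
}

def probe(url, samples=5):
    """Return completed-request latencies in milliseconds."""
    times = []
    for _ in range(samples):
        start = time.monotonic()
        try:
            urllib.request.urlopen(url, timeout=5)
        except OSError:
            continue  # failed samples are simply not counted in this sketch
        times.append((time.monotonic() - start) * 1000)
    return times

for name, url in TARGETS.items():
    ms = probe(url)
    if ms:
        print(f"westford-office {name}: median {statistics.median(ms):.0f} ms "
              f"over {len(ms)} samples")
    else:
        print(f"westford-office {name}: unreachable")
```

Kept over time and per location, numbers like these are exactly the fact base that turns a finger-pointing call with a vendor into a short one.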
>> Exactly. I mean, at the end of the day, IT is still responsible for user productivity. So whether somebody's having an application issue in terms of availability, or frankly if it's not performing up to what it should be, you're still accountable as an organization, regardless of where the workloads are. As you point out, back in the day you could always go to your data center and do a lot of investigation and really do a lot of troubleshooting within the four walls. Today you just don't have that luxury, call it, and so it's a whole new world. We are all relying increasingly on vendors, which creates a contracting story, which presents an issue, and in having these conversations with a vendor or contractor, regardless of your relationship with them, you're still, again, on the hook for doing this. So you've got to have some facts; you've got to have a story; you have to show, in terms of, hey, we're good on this side, the issue really is on you. And we've actually had situations, whether it was performance issues or service interruptions or bugs from different vendors, where they've impacted the NETSCOUT organization, and without a deep understanding of what's going on, you really don't have anywhere to go. You really have to have this sort of greater visibility, and this is one of the lessons learned, at least from the journey that we're taking. So I think that's part of the cloud story, and the migration and virtualization story: you really have to have this newfound visibility. That's been really important for us. >> So I'm going to see if I can't generalize that a little bit, because I think it's a great point. As you go into a network redesign to support excellent operations in a cloud, you have to also go into a sourcing and information redesign, so that you can be assured that you're getting the information you need to sustain the degree of control, or approximate the control, that you had before. Otherwise you've got great technology but no way to deal with problems when they arise, right? >> Exactly. And as I said, we've seen this movie, and without having what we have, I think we would have struggled as an organization to resolve the issues. And that's not good for the company, because part of the mandate and the remit for IT is to make sure that people are as productive as they can be, and not having the ability to provide that environment is actually a huge problem for a lot of people. One of the ways we're dealing with it is to have that sort of visibility. It also means upgrading the team's skills, which we've done a lot of work on. You take folks in IT that may have had a certain set of skills in the on-prem environment, call it; those skills are quite different in the sort of cloud, or mixed, environment. So I think upskilling, and having more and better information, is really part of the story that we're learning. >> And at the end of the day, it's not about upgrading the network, it's about upgrading the network capability. >> Exactly.
>> Yeah, and you can't do that, especially in the new world, if you don't upgrade your ability to get information about how the whole thing is working together. >> Exactly. >> All right, Thor Wallace, senior vice president and CIO at NETSCOUT, thanks very much for being on theCUBE. >> Thank you. >> And once again, I want to thank you for participating in today's conversation. Until next time.
Amit Sinha, Zscaler | CUBEConversations, January 2020
(funk music) >> Hello and welcome to theCUBE studios in Palo Alto, California for another CUBE conversation where we go in-depth with thought leaders driving innovation across the tech industry. I'm your host, Peter Burris. Every enterprise is responding to the opportunities of cloud with significant changes in people, process, how they think about technology, how they're going to align technology overall with their business and with their business strategies. Now those changes are affecting virtually every aspect of business, but especially every aspect of technology. Especially security. So what does it mean to envision a world in which significant new classes of services are being provided through cloud mechanisms and modes, but you retain, and in fact even enhance, the quality of security that your enterprise can utilize? To have that conversation, we're joined today by a great guest. Amit Sinha is president and CTO at Zscaler. Amit, welcome back to theCUBE. >> Thank you Peter, it's a pleasure to be here. >> So before we get into it, what's new at Zscaler? >> Well, at Zscaler our mission is to make the internet and cloud a secure place for businesses, and as I engage with our Global 2000 customers and prospects, they are going through some of the digital transformation challenges that you just alluded to. Specifically for security, what is happening is that they had a lot of applications that were sitting in a data center or in their headquarters, and that center of gravity is now moving to the cloud. They've probably adopted Office 365, and Box, and Salesforce, and these applications have moved out. Now in addition, the users are everywhere. They're accessing those services not just from offices but also from their mobile devices and home. So if your users have left the building, and your applications are no longer sitting in your data center, that begs the question: Where should the security stack be? You know, it cannot be your legacy security appliances that sat in your DMZ and your IT closets. So that's the challenge that we see out there, and Zscaler is helping these large global organizations transform their security and network for a more mobile and a cloud-first world. >> Distributed world? So let me make sure I got this right. So basically, 'cause I think I totally agree with you >> Right. >> Just to test it, that many regarded the cloud as a centralization strategy. >> Correct. >> What we really see happening is we're seeing enterprises distribute their data more, distribute their processing more, but they have not updated how they think about security. So the presumption is, "yeah, we're going to put more processing and data out closer to the action, but we're going to backhaul a whole bunch back to our security model," and what I hear you saying is no, you need to push those security services out to where the data is, out to where the processing is, out to where the user is. Have I got that right? >> You have nailed it, right. Think of it this way: if I'm a large Global 2000 organization, I might have thousands of branches. All of those branches, traditionally, have used a hub-and-spoke network model. I might have a branch here in Palo Alto but my headquarters is in New York. So now I have an MPLS circuit connecting this branch to New York. If my Exchange server and applications and SAP systems are all there, then that hub-and-spoke model made sense. I am in this office >> Right.
>> I connect to those applications and all my security stack is also there. But fast forward to today, all of those applications are moving, and they're not just in one cloud. You know, you might have adopted Salesforce.com for CRM, you might have adopted Workday, you might have adopted Office 365. So these are SaaS services. Now if I'm sitting here in Palo Alto, and if I have to access my email, it makes absolutely no sense for me to VPN back to New York only to exit to the internet right there. What users want is a fast, nimble user experience without security coming in the way. What organizations want is no compromise in their security stack. So what you really need is a security stack that follows the user wherever they are. >> And the data. >> And the data. So, you know, Microsoft has a front-door service here in Redwood City, and if you are a user here trying to access that, I should be able to go straight, with my entire security stack right next to it. That's what Gartner is calling SASE these days. >> Well, let's get into that in a second. It almost sounds as though what you're suggesting is that the enterprise needs to look at security as a SaaS service itself. >> 100 percent. If your users are everywhere and if your applications are in the cloud, your security better be delivered as a consistent "as-a-service," right next to where the users are and hopefully co-located in the same data center as where the applications are present, so the only way to have a pervasive security model is to have it delivered in the cloud, which is what Zscaler has been doing from day one. >> Now, a little spoiler alert for everybody, Zscaler's been talking about this for 10-plus years. >> Right. >> So where are we today in the marketplace starting to recognize and acknowledge this transformation in the basic security architecture and platform that we're going through? >> I'm very excited to see that the market is really adopting what Zscaler has been talking about for over a decade. In fact, recently, Gartner released a paper titled "SASE"; it stands for Secure Access Service Edge, and there are, I believe, four principal tenets of SASE. The first one, of course, is that compute and security services have to be right at the edge. And we talked about that. It makes sense. >> For where the service is being delivered. >> You can't backhaul traffic to your data center, and you can't backhaul traffic to Google's central data center somewhere. You need to have compute capabilities, with things like SSL interception and all the security services, running right at the edge, connecting users to applications in the shortest path, right? So that's principle number one of SASE. The second principle that Gartner talks about, which again has been fundamental to Zscaler's DNA, is to keep your devices and your branch offices light. Don't shove too much complexity from a security perspective on the user devices and your branches. Keep it simple. >> Or the people running those user devices >> Absolutely >> in the branches >> Yeah, so you know, keep your branch offices like a light router that forwards traffic to the cloud, where the heavy lifting is done. >> Right. >> The third principle they talk about is that to deliver modern security, you need to have a proxy-based architecture, and essentially what a proxy architecture allows you to do is to look at content, right? Gone are the days where you could just say, stop a website called "evil.com" and allow a website "good.com," right? It's not like that anymore. You have to look at content, you know. You might get malware from a Google Drive link. You can't block Google now, right? So looking at SSL-encrypted content is needed, and firewalls just can't do it. You have to have a proxy architecture that can decrypt SSL connections, look at content, provide malware protection services, provide policy-based access control services, et cetera, and that's kind of the third principle.
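Since the proxy argument is central here, a skeletal example helps show why a proxy can do what a packet filter cannot: it terminates the connection, reads the full request, and only then decides. This is a bare illustration on Python's standard library; TLS interception, upstream forwarding, and real threat feeds are omitted, and the blocked patterns are invented.

```python
# A toy inspecting proxy: because it terminates the HTTP connection, it
# can make content-level decisions that an IP/port firewall never sees.
from http.server import BaseHTTPRequestHandler, HTTPServer

BLOCKED_PATTERNS = [b"eicar", b"confidential-export"]  # illustrative only

class InspectingProxy(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        if any(p in body.lower() for p in BLOCKED_PATTERNS):
            self.send_error(403, "Blocked by content policy")
            return
        # A real proxy would now forward the request upstream; we just ack.
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), InspectingProxy).serve_forever()
```

A firewall matching only addresses and ports never sees `body`, which is exactly the gap the proxy principle addresses.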
And finally, what Gartner talks about is that SASE has to be cloud-native; it has to be, sort of, born and bred in the cloud, a true multitenant, cloud-first architecture. You can't take legacy security appliances and shove them into third-party infrastructure like AWS and GCP and deliver a cloud service. The example I use often is: just because you had a great Blu-ray player or a DVD player in your home theater, you can't take 100,000 of these, shove them into AWS, and become a Netflix. You really need to build that service from the ground up, in a multitenant fashion, and that's what we have done for security as a service through the cloud. >> So the market now seems to be kind of converging on some of the principles that Zscaler's been talking about for quite some time. >> Right. >> When we think about 2020, how do you anticipate enterprises are going to respond as a consequence of this convergence, in acknowledging that the value proposition and the need are starting to come together? >> Absolutely, I think we see the momentum picking up in the market. We have lots of conversations with CIOs who are going through this digital transformation journey; you know, transformation is hard. There's an immune response in big organizations >> Sure. >> To change. Not much has changed from a security and network architecture perspective in the last two decades. But we're seeing more and more of that. In fact, over 400 of the Global 2000 organizations are 100 percent deployed on Zscaler. And so that momentum is picking up, and we see a lot of traction with other prospects who are beginning to see the light, as we say it. >> Well, as you start to imagine the relationship between security and data, one of the things that I find interesting is that in many respects the cloud, especially as it becomes more distributed, is becoming better acknowledged almost as a network of services. >> Right. >> As opposed to AWS as a data center here, and that makes it a cloud data center. >> Right. >> It really is this network of services, which can happen from a lot of different places: big cloud service providers, your own enterprise, partners providing services to you. How is the relationship between Zscaler and kind of an openness >> Hm-mm. >> going to come together? Hm-mm. >> So that you can provide services from an enterprise to the enterprise's partners, customers, and others that the enterprise needs to work with. >> That's a great question, Peter, and I think one of the most important things I tell our customers and prospects is that if you look at a cloud-delivered security architecture, it better embrace some of the SASE principles. One of the first things we did when we built the Zscaler platform was to distribute it across 150 data centers. And why did we do that? We did that because when a user is going to destinations, they need to be able to access any destination. The destination could be on Azure, could be on AWS, could be Salesforce, so by definition, it has to be carrier-neutral, it has to be cloud-neutral.
I can't build a service that is designed for all internet traffic in a GCP or AWS, right? So how did we do that? We went and looked at the world's best co-location facilities that provide maximum connectivity options in any given region. So in North America, we might be in an Equinix facility, and we might use tier one ISPs like GTT and Zayo that provide excellent connectivity to our customers and the destinations they want to visit. When you go to China, there's no GCP there, right, so we work with China Unicom and China Telecom. When we are in India, we might work with an Airtel or a Sify; when we are in Australia, we might be working with Telstra. So we work with world-class tier one ISPs in the best data centers that provide maximum connectivity options. We invested heavily in internet exchange connectivity. Why? Because once you come to Zscaler, you've solved the physics problem by building the data center close to you; the next thing is, you want to quickly go to your application. You don't want security to be in the way >> Right. >> of application access. So with internet exchange connectivity, we are peered in a settlement-free way over BGP with Microsoft, with Akamai, with Apple, with Yahoo, right. So we can quickly get you to the content while delivering the full security stack, right? So we had to really take no shortcuts. Back to your point, the world is very diverse, and you cannot operate in a walled garden of one provider anymore, and if you really build a cloud platform that embraces some of the SASE principles we talked about, you have to do it the hard way, by building it one data center at a time. >> Well, you don't want your services to fall down because you didn't put the partnerships in place and harden them. >> Correct. >> As much as you've hardened some of the other traffic. So as we think about where this goes, what do you envision Zscaler's kind of big customer story is going to be in 2020 and beyond? Obviously, the service is going to be everywhere, change the way you think about security, but how, for example, is the relationship between the definition of the edge and the definition of the secure service going to co-evolve? Are people going to think about the edge differently as they start to think more in terms of a secure edge, or where the data resides and the secure data? What do you think? >> Let's start off with five years from now and go back, right? >> We're going forward. >> Work our way back. Well, five years from now, hopefully everyone is on a 5G phone, you know, with blazing-fast internet connections, on devices that you love. Your applications are everywhere. So now think of it from an IT perspective: my span of control is becoming thinner and thinner, right? My users are on devices that I barely control. My network is the internet, which I really don't control. My applications have moved to the cloud, either hosted in third-party infrastructure or run as SaaS applications, which I really don't control. Now, in this world, how do I provide security? How do I provide user experience? Imagine if you are the CIO and your job is to make all of this work. Where will you start, right? So those are some of the big problems that we are helping our customers with. So this-- >> Let me ask you a question, 'cause here's where I was going with the question. I would start with, if I can't control all these things, I'm going to apply my notion of security >> Hm-mm. >> and say I am going to control that which is within >> Right.
>> my security boundaries, not at a perimeter level, not at a device level, but at a service level. >> Absolutely, and that's really the crux of the Zscaler platform service. We build this Zero Trust architecture. Our goal is to allow users to quickly come to Zscaler, and Zscaler becomes the policy engine that is securely connecting them to all the cloud services that they want to go to. Now in addition, we also allow the same users to connect to internal applications that might have required a traditional VPN. Think of it this way, Peter. When you connect to Google today, do you VPN to Google's network to access Gmail? No. Why should you have to VPN to access an internal application? I mean, you get a link on your mobile phone, you click on it, and it doesn't work because it requires a separate form of network access. So with Zscaler Internet Access and Zscaler Private Access, we are delivering a beautiful service that works across 150 data centers. Users connect to the service, and the service becomes a policy engine that is securely connecting you to the destinations that you want. Now, in addition, you asked about what's going to happen in a couple of years. The same service can be extended for partners. I'm a business; I have hundreds of partners who want to connect to me. Why should I allow legacy VPN access or private circuits that expose me? I don't even know who's on the other end of the line, right? They come onto my network, and you hear about the Target breaches because some HVAC contractor had unrestricted access; you hear about the Airbus breach because of another contractor that had access. So how do we build a true Zero Trust cloud platform that securely allows users, whether it's your employees connecting to the named applications that they should, or your partners that need access to certain applications, without putting them on the network? We're decoupling application access from network access.
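The decoupling of application access from network access reduces, at its core, to a per-user, per-application policy decision with a default deny. The sketch below is a toy version of that idea; the policy table, groups, and attributes are invented, and a real broker would also weigh device posture, location, and risk signals.

```python
# Toy Zero Trust broker: a connection exists only when an explicit policy
# links this user group to this named application. No policy, no network.
POLICIES = [
    {"group": "employees",   "app": "intranet",  "device_managed": True},
    {"group": "contractors", "app": "ticketing", "device_managed": True},
]

def broker(user_group, app, device_managed):
    """Return True only if an explicit policy connects this user to this app."""
    for p in POLICIES:
        if (p["group"] == user_group and p["app"] == app
                and (device_managed or not p["device_managed"])):
            return True
    return False  # default deny: the user is never "on the network"

print(broker("employees", "intranet", True))    # True: named app, managed device
print(broker("contractors", "intranet", True))  # False: no policy for this pair
print(broker("employees", "intranet", False))   # False: unmanaged device
```

The design point is the default: in the VPN model the network is reachable unless blocked; here nothing is reachable unless a named policy says so.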
And there's one final important linchpin in this whole thing. Remember we talked about how powerless organizations >> Right. >> feel in this distributed model? Now imagine your job is to also ensure that people are having a good user experience. How will you do that, right? What Zscaler is trying to do now is, we've been very successful in providing the secure, policy-based connectivity, and our customers are asking us, hey, you're sitting in between all of this; you have visibility into what's happening on the user's device. You're sitting in the middle, in the cloud, and you see what's happening on the left-hand side and on the right-hand side. You have the cloud effect; you can see there's a problem going on with Microsoft's network in the China region, right? Correlate all of that information and give me proactive intelligence around user experience. And that's what we launched recently at Zenith Live. We call it Zscaler Digital Experience. >> Hmm. >> So overall, the goal of the platform is to securely connect users and entities to named applications with Zero Trust principles. We never want security and user experience to be orthogonal requirements, as has traditionally been the case. And we want to provide great user experience and visibility to our customers who've started adopting this platform. >> That's a great story. It's a great story. So, once again, I want to thank you very much for coming in. That's Amit Sinha, who is the president and CTO at Zscaler, focusing a lot on the R&D types of things that Zscaler's doing. Thanks again for being on theCUBE. >> It's my pleasure, Peter. Always enjoy talking to you. >> And thanks for joining us for another CUBE conversation. I'm Peter Burris, see you next time. (funk music)
Eric Herzog, IBM Storage | CUBE Conversation December 2019
(funky music) >> Hello and welcome to theCUBE Studios in Palo Alto, California for another CUBE conversation, where we go in-depth with thought leaders driving innovation across the tech industry. I'm your host Peter Burris. Well, as I sit here in our CUBE studios, 2020's fast approaching, and every year as we turn the corner on a new year, we bring in some of our leading thought leaders to ask them what they see the coming year holding in the particular technology domain in which they work. And this one is no different. We've got a great CUBE guest, a frequent CUBE guest, Eric Herzog, the CMO and VP of Global Channels, IBM Storage, and Eric's here to talk about storage in 2020. Eric? >> Peter, thank you. Love being here at theCUBE. You guys do a great job on educating everyone in the marketplace. >> Well, thanks very much. But let's start really quickly: a quick update on IBM Storage. >> Well, it's been a very good year for us. Lots of innovation. We've brought out a new Storwize family in the entry space. We brought out some great solutions for big data and AI with our Elastic Storage System 3000. Support for backup in container environments: we've had persistent storage for containers, but now we can back it up with our award-winning Spectrum Protect and Protect Plus. We've got a great set of solutions for the hybrid multicloud world, for big data and AI, and the things you need to get cyber resiliency across your enterprise in your storage estate. >> All right, so let's talk about how folks are going to apply those technologies. You've heard me say this a lot: the difference between business and digital business is the role that data plays in a digital business. So let's start with data and work our way down into some of the trends. >> Okay. >> In your conversations with customers, 'cause you talk to a lot of customers, is that notion of data as an asset starting to take hold? >> Most of our clients, whether they be big, medium, or small, and it doesn't matter where they are in the world, realize that data is their most valuable asset. Their customer database, their product databases, what they do for service and support. It doesn't matter what the industry is. Retail, manufacturing. Obviously we support a number of other IT players in the industry that leverage IBM technologies across the board, but they really know that data is the thing that they need to grow, they need to nurture, and they always need to make sure that data's protected, or they could be out of business. >> All right, so let's now, starting with that point, in the tech industry, storage has always kind of been the thing you did after you did your server, after you did your network. But there's evidence that as data starts taking more center stage, more enterprises are starting to think more about the data services they need, and that points more directly to storage hardware, storage software. Let's start with that notion of the ascension of storage within the enterprise. >> So with data as their most valuable asset, what that means is storage is the critical foundation. As you know, if the storage makes a mistake, that data's gone. >> Right. >> If you have a malware or ransomware attack, guess what? Storage can help you recover. In fact, we've even got some technology in our Spectrum Protect product that can detect anomalous activity and help the backup admin or the storage admins realize they're having a ransomware or malware attack, and then they can take the right corrective action.
So storage is that foundation across all their applications, workloads, and use cases, and with data as the end result of those applications, workloads, and use cases, if the storage has a problem, the data has a problem. >> So let's talk about what you see, within that foundation, as some of the storage services we're going to be talking most about in 2020. >> Eric: So I think one of the big things is-- >> Oh, I'm sorry, data services that we're going to be talking most about in 2020. >> So I think one of the big things is the critical nature of the storage to help protect their data. When people think of cyber security and resiliency, they think about keeping the bad guy out and, since it's not an issue of if but when, chasing the bad guy down. But I've talked to CIOs and other executives. Sometimes they get the bad guy right away. Other times it takes them weeks. So you need storage with the right cyber resiliency, whether that be data-at-rest encryption, encrypting data transparently when you send it out to your hybrid multicloud environment, malware and ransomware detection, or things like air gap, whether it be air gap to tape or air gap to cloud. If you don't think about that as part of your overall security strategy, you're going to leave yourself vulnerable, and that data could be compromised and stolen. >> So I can almost say that in 2020, we're going to talk more about how the relationship between security and data and storage is going to evolve, almost to the point where we're actually going to start thinking about how security becomes almost a feature or an attribute of a storage or a data object. Have I got that right? >> Yeah, I mean, think of it as storage infused with cyber resiliency, so that when it does happen, the storage helps you stay protected until you get the bad guy and track him down. And until you do, you want that storage to resist all attacks. You need that storage to be encrypted so they can't steal it. So that's the thing: when you look at an overarching security strategy, yes, you want to keep the bad guy out. Yes, you want to track the bad guy down. But when they get in, you'd better make sure that what's there is bolted to the wall. You know, it's the jewelry in the floor safe underneath the carpet. They don't even know it's there. So those are the types of things you need to rely on, and your storage can do almost all of that for you once the bad guy's there, till you get him.
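The anomaly-detection capability Herzog mentions is not public IBM code, but the underlying signal is simple to illustrate: a backup client whose daily changed-data volume suddenly jumps far outside its own history may be seeing ransomware encrypt files in bulk. The history, threshold, and numbers below are all hypothetical.

```python
import statistics

# Illustrative only: flag a backup client whose daily changed-data volume
# jumps far outside its own history, one signal of ransomware rewriting
# files in bulk. Real products combine many such signals.
def anomalous(history_gb, today_gb, sigma=3.0):
    mean = statistics.mean(history_gb)
    stdev = statistics.pstdev(history_gb) or 1e-9  # avoid division by zero
    return (today_gb - mean) / stdev > sigma

history = [12.1, 11.8, 13.0, 12.4, 11.9, 12.6, 12.2]  # GB changed per day
print(anomalous(history, 12.7))   # False: normal daily churn
print(anomalous(history, 240.0))  # True: sudden mass rewrite, investigate
```

The value of putting this check in the backup path is timing: the change-rate spike often shows up before anyone notices encrypted files.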
>> So the second thing I want to talk about along this vein is, we've talked about the difference between hardware and software, software-defined storage, but still it ends up looking like a silo for most of the players out there. And I've talked to a number of CIOs who say, you know, buying a lot of these software-defined storage systems is like buying not a piece of hardware but a piece of software that is still a separate thing to manage. At what point do you think we're going to start talking about a set of technologies that are capable of spanning multiple vendors and delivering a broader, more generalized, but nonetheless high-function, highly secure storage infrastructure that brings with it software-defined, cloud-like capabilities? >> So what we see is, A, the capability of transparently traversing from on-prem to your hybrid multicloud seamlessly. It can't be hard to do; it's got to happen very easily. The cloud is a target, and by the way, most mid-size enterprises and up don't use one cloud, they use many, so you've got to be able to traverse those many clouds and move data back and forth transparently. The second thing we see coming this year is taking the overcomplexity of multiple storage platforms coupled with hybrid cloud and merging them. So you could have an entry system, a mid-range system, a high-end system, traversing the cloud with a single API, a single data management platform, and performance and price points that vary depending on your application, workload, and use case. Obviously you use entry storage for certain things, high-end storage for other things. But you could have one way to manage all that data, and by the way, for certain solutions, we've got this with one of our products called Spectrum Virtualize. We support enterprise-class data services, including moving the data out to the cloud, not only on IBM storage but on over 450 other arrays which are not IBM-logoed. Now, that's taking that seamlessness of entry, mid-range, and on-prem enterprise, traversing it to the cloud, and doing it not only for IBM storage but for our competitors, quite honestly. >> Now, once you have that flexibility, it introduces a lot of conversations about how to match workloads to the right data technologies. How do you see workloads evolving, some of these data-first workloads, AI, ML, and how is that going to drive storage decisions in the next year, year and a half, do you think? >> Well, again, as we talked about already, storage is that critical foundation for all of your data needs. So depending on the data need, you've got multiple price points, as we've talked about, traversing out to the cloud. The second thing we see is that there are different parameters you can leverage. For example, AI, big data, and analytic workloads are very dependent on bandwidth. So you can take a scalable infrastructure that scales to exabytes of capacity and terabytes per second of bandwidth across a giant global namespace: for example, we've got with our Spectrum Scale solutions and our Elastic Storage System 3000 the capability of racking and stacking two rack U at a time, growing the capacity seamlessly, growing the performance seamlessly, and providing that high-performance bandwidth you need for AI, analytic, and big data workloads. And by the way, guess what, you can traverse it out to the cloud when you need to archive it. So looking at AI as a major force, not just next year but in the years to come, it's here to stay, and the characteristics we've had in our Spectrum Scale products for years, which really come out of the supercomputing and high-performance computing space, are similar to the characteristics of AI workloads, machine learning workloads, and big data and analytics workloads. So we've got the right solution. In fact, the two largest supercomputers on this planet have almost an exabyte of IBM storage focused on AI, analytics, and big data. So that's what we see traversing everywhere. And by the way, we also see these AI workloads moving from just the big enterprise guys down into small shops, as well. So that's another trend you're going to see. The easier you make that storage foundation underneath your AI workloads, the easier it is for the big company, the mid-size company, and the small company all to get into AI and get the value. The small companies have to compete with the big guys, so they need something, too, and we can provide that, starting with a simple little two-rack-U unit and scaling up into exabyte-class capabilities.
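The single-API idea described above, one management plane across entry, mid-range, high-end, and even non-IBM arrays, is essentially an abstraction layer. The sketch below is conceptual only; the interface, class names, and methods are invented for the example and do not correspond to the Spectrum Virtualize API.

```python
from abc import ABC, abstractmethod

# Purely illustrative: the kind of thin abstraction a "single data
# management platform" implies, where one call works against arrays
# from different vendors and tiers.
class StorageArray(ABC):
    @abstractmethod
    def create_volume(self, name: str, size_gb: int) -> str: ...
    @abstractmethod
    def replicate_to_cloud(self, volume_id: str, target: str) -> None: ...

class EntryArray(StorageArray):
    def create_volume(self, name, size_gb):
        print(f"[entry] provisioning {size_gb} GB volume '{name}'")
        return f"entry::{name}"
    def replicate_to_cloud(self, volume_id, target):
        print(f"[entry] replicating {volume_id} to {target}")

class ThirdPartyArray(StorageArray):
    """A non-IBM array virtualized behind the same interface."""
    def create_volume(self, name, size_gb):
        print(f"[3rd-party] provisioning {size_gb} GB volume '{name}'")
        return f"3p::{name}"
    def replicate_to_cloud(self, volume_id, target):
        print(f"[3rd-party] replicating {volume_id} to {target}")

def provision_everywhere(arrays, name, size_gb, cloud_target):
    # One management call, many back ends: the operator never touches
    # vendor-specific tooling.
    for array in arrays:
        vol = array.create_volume(name, size_gb)
        array.replicate_to_cloud(vol, cloud_target)

provision_everywhere([EntryArray(), ThirdPartyArray()],
                     "analytics01", 500, "s3://archive-bucket")
```

The silo problem in the question is exactly what this shape removes: each platform stops being "a separate thing to manage" once it sits behind the common interface.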
>> So all these new workloads, and the simplicity of how you can apply them, are nonetheless still driving questions about how the storage hierarchy has evolved. Now, this notion of the storage hierarchy's been around for, what, 40, 50 years, or something like that. >> Eric: Right. >> You know, tape and disk and so on, but there are some new entrants here, and there are some reasons why some of the old entrants are still going to be around. So I want to talk about two. How do you see tape evolving? Is there still a need for that? Let's start there. >> So we see tape as actually very valuable. We've had a real strong uptick the last couple of years in tape consumption, and not just in the enterprise accounts. In fact, several of the largest cloud providers use IBM tape solutions. So when you need to store incredible amounts of data across primary, secondary, and I'd say archive workloads, and you're looking at petabytes and petabytes, exabytes and exabytes, zettabytes and zettabytes, you've got to have a low-cost platform, and tape still provides by far the lowest-cost platform. So tape is here to stay as one of those key media choices to help you keep your costs down, yet easily go out to the cloud or easily pull data back. >> So tape still is a reasonable, in fact a necessary, entrant in that overall storage hierarchy. One of the new ones that we're starting to hear more about is storage-class memory, the idea of filling in the performance gap between external devices and memory itself, so that we can have a persistent store that can service all the new kinds of parallelism that we're introducing into these systems. How do you see storage-class memory playing out in the next couple of years? >> Well, we already publicly announced in 2019 that in 2020, in the first half, we'd be shipping storage-class memory. It will not only work in some coming systems that we're going to be announcing in the first half of the year, but it will also work on some of our older products; the FlashSystem 9100 family and the Storwize V7000 Gen3 will be able to use storage-class memory, as well. So it is a way to also leverage AI-based tiering. In the old days, flash would tier to disk, and you'd created a hybrid array. With storage-class memory, it'll be a different type of hybrid array in the future: storage-class memory actually tiering to flash. Now, obviously storage-class memory is incredibly fast, and flash is incredibly fast compared to disk, but it's all relative. In the old days, a hybrid array was faster than an all-hard-drive array, and that was flash and disk. Now you're going to see hybrid arrays that are storage-class memory and flash, and with our Easy Tier function, which is part of our Spectrum Virtualize software, we use AI-based tiering to automatically move the data back and forth when it's hot and when it's cool. Now, obviously flash is still fast, but if flash is the secondary medium in a configuration like that, it's going to be incredibly fast but still lower cost. The other thing is that in the early years, storage-class memory will be an expensive option from all vendors. It will, of course, get cheaper over time, just the way flash did.
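The AI-based tiering described here (Easy Tier, in IBM's case) can be illustrated with a deliberately crude hot/cold loop between a small storage-class-memory tier and a larger flash tier. The counters, capacities, and workload below are invented; production tiering models are far richer than this.

```python
import random

# Toy tiering loop: keep the hottest extents on the small, fast tier
# (storage-class memory) and demote the rest to flash. All numbers and
# thresholds are illustrative.
SCM_CAPACITY = 4          # extents that fit in the fast tier
access_counts = {}        # extent id -> recent access count
placement = {}            # extent id -> "scm" or "flash"

def record_access(extent):
    access_counts[extent] = access_counts.get(extent, 0) + 1
    placement.setdefault(extent, "flash")  # new extents land on flash

def rebalance():
    # Promote the hottest extents to SCM, demote everything else.
    hottest = sorted(access_counts, key=access_counts.get, reverse=True)
    for i, extent in enumerate(hottest):
        placement[extent] = "scm" if i < SCM_CAPACITY else "flash"

for _ in range(1000):                     # simulate a skewed workload
    record_access(random.choice("AABBBCCCCDDEFGH"))
rebalance()
print(placement)  # the frequently touched extents end up on "scm"
```

The same loop with flash and disk is the old hybrid array; swapping the tiers is what makes the new SCM/flash hybrid familiar rather than exotic.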
Over time, you know, now it's basically the same price as what were the old 15,000 RPM hard drives, which have basically gone away. Storage-class memory over several years will do the same, of course, and by the way, that's very traditional in storage. I've been around so long, and I've worked at hard drive companies in the old days. I remember when the fast hard drive was a 5400 RPM drive, then a 7200 RPM drive, then a 10,000 RPM drive. And if you think about it, in the hard drive world there were almost always two to three different spin speeds at different price points. You can do the same thing now with storage-class memory as your fastest tier and a still incredibly fast tier with flash. So it'll allow you to do that, and that will grow over time. It's going to be slow to start, but it'll continue to grow. We're there at IBM, already publicly announcing that we'll have products in the first half of 2020 that will support storage-class memory. >> All right, so let's hit flash, because there's always been this concern about whether we're going to have enough flash capacity, whether enough product is going to come online, but also this notion that, since everybody's getting flash from the same place, there's not going to be a lot of innovation, not a lot of differentiation in the flash drives. Now, how do you see that playing out? Is there still room for innovation on the actual drive itself, or the actual module itself? >> So when you look at flash, that's something IBM has focused on. We have focused on taking raw flash and creating our own flash modules. Yes, we can use industry-standard solid state disks if you want to, but our flash core modules have been out since the start of our FlashSystem product line, which is many years old. We just announced a new set in the middle of 2018 that delivered, in a four-node cluster, up to 15 million IOPS with under 100 microseconds of latency, by creating our own custom flash. At the same time when we launched that product, the FlashSystem 9100, we were able to launch it with NVMe technology built right in. So we were one of the first players to ship NVMe in a storage subsystem. By the way, we're end-to-end, so you can go Fibre Channel over fabric, InfiniBand over fabric, or Ethernet over fabric to NVMe all the way on the back side at the media level. But not only do we get that performance and that latency, we've also been able to put up to two petabytes in only two rack U. Two petabytes in two rack U. So that's incredible rack density. Those are the things you can do by innovating in a flash environment. So flash can continue to have innovation, and in fact, you should watch for some of the things we're going to be announcing in the first half of 2020 around our flash core modules and our FlashSystem technology. >> Well, I look forward to that conversation. But before you go here, I've got one more question for you. >> Sure. >> Look, I've known you for a long time. You spend as much time with customers as anybody in this world. Every CIO I talk to says, "I want to talk to the guy or the gal who brings me the great idea." You know, "I want those new ideas." When Eric Herzog walks into their office, what's the good idea that you're bringing them, especially as it pertains to storage for the next year? >> So, actually, it's really a couple of things. One, it's all about hybrid and multicloud. You need to seamlessly move data back and forth. It's got to be easy to do.
Entry platform, mid-range, high-end, out to the cloud and back, and you don't want to spend a lot of time doing it, and you want it to be fully automated. >> So storage doesn't create any barriers. >> Storage is that foundation that goes on- and off-prem and supports multiple cloud vendors. >> Got it. >> The second thing is what we already talked about: because data is your most valuable asset, if you don't have cyber-resiliency on the storage side, you are leaving yourself exposed. Clearly big data and AI, and the other thing that's been a hot topic, which is related, by the way, to hybrid multiclouds, is the rise of the container space. For primary, for secondary, how do you integrate with Red Hat? What do you do to support containers in a Kubernetes environment? That's a critical thing. And we see the world in 2020 being threefold. You're still going to have applications that are bare metal, right on the server. You're going to have tons of applications that are virtualized: VMware, Hyper-V, KVM, OVM, all the virtualization layers. But you're going to start seeing the rise of the container admin. Containers are not just going to be the purview of the devops guy. We have customers that talk about doing 10,000, 20,000, 30,000 containers, just like they did when they first started going into the VM world, and now that they're going to do that, you're going to see customers that have bare metal, virtual machines, and containers, and guess what? They may start having to have container admins who focus on the administration of containers, because when you start doing 30, 40, 50,000, you can't have the devops guy manage that, 'cause you're deploying it all over the place. So we see containers; this is the year that containers start to go really big-time. And we're there already with our Red Hat support and what we do in Kubernetes environments. We provide primary storage support for persistent storage for containers, and we also, by the way, have the capability of backing that up. So we see containers really taking off in how they relate to your storage environment, which, by the way, often ties to how you configure hybrid multicloud configs. >> Excellent. Eric Herzog, CMO and vice president of partner strategies for IBM Storage. Once again, thanks for being on theCUBE. >> Thank you. >> And thanks for joining us for another CUBE conversation. I'm Peter Burris. See you next time. (funky music)
Guy Churchward, Datera | CUBEConversations, December 2019
(upbeat music) >> Hello and welcome to theCUBE Studios in Palo Alto, California, for another CUBE conversation, where we go in-depth with thought leaders driving innovation across the tech industry. I'm your host, Peter Burris. Every enterprise is saddled with the challenge of how to get more value out of its data, while at the same time trying to find new ways of associating value with a product or a service, and to work with the different technology suppliers to create an optimal relationship for moving the business forward within a data-driven world. It's a tall order, but 2020 is going to feature an enormous amount of progress in how enterprises think about handling the people, process, and technology of improving their overall stance towards getting value out of their data. So to have that conversation today, we're joined by Guy Churchward, who's the CEO of Datera. Guy, welcome back to theCUBE. >> Thank you, Peter, I appreciate it. >> So before we go any further, give us a quick update: what's going on with Datera? >> We're doing pretty well. I mean, we're in Q4, right at the end of it, just about to close out the year. You mentioned data-driven; that was obviously one of my key excitements. Years ago we kind of moved from hardware resiliency, hardware-driven, to software resiliency, software-defined, and I do think that we've hit that data-defined, data-driven infrastructure right now. I've been in the CEO role now just about a year. I've been on the board since August of a year and change ago, and part of it is we had a little bit of an impedance mismatch of message, technology, and, basically, go-to-market. So the team quite brilliantly produced this data services platform to do data-driven architectures. >> Mmmh. >> But customers don't wake up every morning and go, "I need to go buy a data-driven; how do I buy one?" And so when I came in, I realized that what they had was an exceptional solution, but the market isn't ready yet for that thought process, and what they were really buying still was SDS, software-defined storage. >> So almost in a connect way. So I'm going to buy an SDS and connect it to something and get a little bit of flexibility over here, but still worry about the lock-in everywhere else. >> Yeah, exactly, and in fact, even on the SDS side, what they weren't looking for was bring-your-own-server storage. What they were looking for was automation, and they were looking to basically break out and have more data mobility and data freedom. And so that was good. And then the second one was that our technology really sells directly to enterprises, directly to large-scale organizations, and it's very difficult as a start-up, a small company, to basically be able to punch straight into a global account, you know. Because they'll sit back and say, well, would you trust your family jewels to a company that's got 40 employees in Silicon Valley? >> Right. >> And so what you really have to do is get the message right, and then make sure you have that flow-through of credibility with the customer, right? And we were fortunate to land a very strategic relationship with HP. And so that was our focus point, right. So we basically got on board with HP, got into their Complete program, and started selling very closely with them, of which their sales team has been marvelous, and now we're just finishing out that year.
The good news is, and I'll give you a spoiler, I care about billings. You know, we actually moved from an appliance business to a software business exclusively, and so we basically sell term agreements. So if you think about it from a bookings perspective, that's important, but basically how much you bill out is more important. From a billings perspective I think we're going to run roughly 350% up year-over-year. >> Ooh. >> Yeah, which is kind of good, right? I mean, in other words, it was a bit of a pat on the back, and the team's very happy with that. And then even on new account acquisitions: if I count the number of accounts that we brought in this year, and to date, entirely since 2013, we've only had one customer churn, so all the customers are coming with us. But if I count this year, if I look at '16, '17, and '18, we've actually brought more customers on board in '19 than all three pulled together. So we're actually finishing a very, very strong year. >> Congratulations. Now, if we think about going into 2020, you're closing this quarter, but every startup has to have a notion of what's going to happen next and what role it's going to play in what happens next. So if I look back, I see the enterprise starting to assert itself in the cloud business. That's having an effect on everybody. But it really becomes concrete, you know, the rubber really meets the road, at the level of data. So as you start to grow, you're talking to more customers, and as you talk to more customers and they express what they need out of this new cloud-oriented world, what kinds of problems are they bringing to the table as far as you're concerned? >> Yeah, I mean, they initially come to us... so what I would say is, in every account that we've won, we've replaced traditional storage arrays, and in every account we've won, we've competed against SDS vendors, whether that's something like Dell's VxFlex or even VMware's vSAN, which are probably the two most well-known ones. In a lot of cases, I mean, we actually have a 100% win rate against them in these competitive situations. But interestingly, most customers now are putting dual source in place. So in fact the reason that we've risen pretty quickly and we've won lots of deals isn't because we're going in and saying VxFlex is failing or vSAN is failing, but that they want something extra: they want automation, they want disaggregation, they want scale. >> They want a second source. In many respects the sale is succeeding, but you have to push a little bit harder, and that is most easily done by bringing in another platform with crucial functionality... >> Yeah >> ...and a second source. >> And I think you're on the money there, Peter, because if I look at second source in the traditional array business, no CIO worth their salt is single-sourced, so they'll have Dell and they'll have HP, or they'll have HP and they'll have Pure, it doesn't matter; and even on HCI you'll see the HCI vendors, Nutanix is doing very well, so is Dell. So they'll have that second source if it's critical. So if an environment is critical, they always have a second source, and so even now when you look into software-defined, this market in 2019 was very much the let's-get-the-second-source-in-place market. And that shows you where we are on the maturity curve, because people are basically moving on this en masse.
The reason that the traditional arrays weren't working for them, whether it's flexibility or basically management costs or maintenance, is data freedom. That's what they're really looking for. You know, what is a data center? Is it on-premise, is it cloud? It's definitely cloud, but the question is: is it on-premise cloud? Is it hybrid cloud, is it public cloud? And then you mention edge. We actually find customers who are looking at this and saying, look, the most important thing for us is being data-driven, and what data-driven basically articulates is: we get data in, we analyze it, we make decisions on it, and we win and lose against our competition as fast as we can be accurate on that data set. And a lot of the decisions are getting made at the edge. So a lot of people are looking at this and saying my data center is actually at the edge; it's not in the center, in the cloud, right? >> Well, in many respects, for the first time a data center actually is what it says it is, right? Because the data center used to be where the hardware was, and now increasingly enterprises are realizing that the services and the capabilities have to be where the data is. >> Yeah. >> Where the data is being produced, where the data is being utilized, and certainly where decisions are being made about what to keep, what not to keep, how much of it, etc. And that does start to drive forward an increased recognition that at some point in time we are going to talk more about the services that these platforms, or these devices, or these software-defined environments provide. Have I got that right? >> Yeah, yeah, you have. And even if you look at the AI/ML side: if I kind of step back and look at what a customer is trying to do, which is to utilize as much data as possible in a way that gives them data freedom and allows them to make decisions, that's really where AI and machine learning come in. Right, you know, everybody employs that. I recently bought a camera; shockingly, inside the camera it's got ML functionality, it's got AI built into it. The new photo editing software on my iPad is actually an ML-based system. They don't do it because it's a buzzword; they do it because they can get a much higher level of accuracy and then use data for enrichment, right? And on the ML track, the classic route was: I'm going to create a data lake, right? So I've got my data lake and I've got everything in it, then I'm going to analyze off the back of it. But everybody was analyzing only once it was in the data lake. And what they've realized is that to compete, they actually have to analyze much quicker. >> Right. >> And that's at the edge, and that's in real time, and that's stream-based. And so that's really where people are sort of saying: I'm not going to have any long pole in my technology tent. I'm not going to have anything slow me down; I have to beat my competition, and as part of that they need complete fluidity of their data. So I don't care whether it's at the edge or in the center or in the cloud, I need instant access to it for enrichment purposes and to make fast and accurate decisions. So they don't want data silos. You know, so any product out there that basically says me, me, me, give me my data, and therefore I'm going to encrypt it in such a way that you can't read it and it's not available to anybody else: they are just trying to eradicate that.
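The shift described here, from analyzing data only after it lands in a lake to analyzing it in the stream, can be illustrated with a small sketch. This is a generic example of stream-side decisioning, not Datera's product; the window size, field, and threshold are hypothetical.

```python
# A tiny stream-side decision loop: act on each event as it arrives instead
# of waiting for a batch job over the data lake. Parameters are made up.
from collections import deque

WINDOW = deque(maxlen=100)  # rolling window of recent readings

def on_event(reading, spike_factor=1.8):
    """Decide on each event in real time against a rolling baseline."""
    WINDOW.append(reading)
    baseline = sum(WINDOW) / len(WINDOW)
    if reading > spike_factor * baseline:
        return "act-now"   # decision made in the stream, at the edge
    return "archive"       # the event still lands in the lake for later analysis

for r in (0.50, 0.52, 0.49, 2.40, 0.51):
    print(r, on_event(r))
```

The lake still exists for deep, accurate analysis after the fact; the point is that the time-critical decision no longer waits for it.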
And we've sort of moved. It's a weird way of putting it, but we've moved from hardware-defined to software-defined, and I think we've now moved into this data-defined era. But at the same time, it's the most stupid thing for me to say, because we've never not been in a data-defined era. But it's the way in which people think with their architecture: when they stand up a data center now, or a cloud, they're not saying, hey, tell me about the hardware, it's based on that, or about the software. It's always going to be about the data, and access to the data. However, before you get excited... (laughs) The thing that I kind of look at is: what has fundamentally changed? And it's the fact that we always used to have to make a decision. You know, I ran a security analytics business, and when you do things like log management, it's about collecting as much data as possible; in other words, accuracy beats speed. And then security event management is speed beats accuracy, because you can't ask both questions of the same data. But technology has caught up now. So we've actually moved on from "do you want accuracy, or do you want speed?", which was an "or" arena. People were building architectures in this "or" world, you know. Do you want software-defined? If you want software-defined, you can't have enterprise-ilities. Why not? Well, if you want an enterprise application, I mean, remember the age-old adage: you should never buy version 1.0 of an app. >> Right. >> But what happens is people are turning around and saying: I need an enterprise application, I want full data access off the back of it, I need it to be fluid, I need it software-defined, I don't know where it's going to be based, and I don't want to do forklift upgrades. I want and, and, and, and, and. Not or. So what we've actually moved to is a software-defined era, you know, and a data-defined architecture, in an "and" arena. And where customers are truly winning, and where they're going to beat their competition, is where they don't settle and say, oh, I remember back two years ago this happened, and therefore we should learn from that, and we shouldn't do that. They're actually just breaking through and saying: I'm going to fire the application up, I want it up and running within 30 days, I want it to be an enterprise application, I need it to be flexible, I need it to have hyperscale, and then I'm going to break it down, and by the way, I'm not going to contractually pay an organization to build all that infrastructure. And that's really why, soup to nuts, as we move forward, they're not only building a data-defined infrastructure, they don't want lock-in. They want optionality, and that means they want term licenses, for sure; they don't want these proprietary silos, and they need data flexibility on the back of it. And those are the progressive customers. And by the way, I've not had to convince a single customer to move to software-defined or data-defined. Every client knows they're going there; the question on the journey is how fast they want to get there. >> Right, when? >> Yeah. >> So look, every single enterprise, every single business person takes a look at what are regarded as the most valuable assets, and then they hire people to take care of those assets, to get value out of those assets, to maintain those assets. When we lived in a hardware world, where the most valuable asset was hardware, that led to one organization, one set of processes, one set of activities. Moving into a software world, we got the same thing.
But we agree with you: we think that we are moving to a world that is data first, where data is increasingly going to be the primary citizen, and as a consequence we're seeing firms re-institutionalize how work is done, redefine the types of people they have, alter their sourcing arrangements. I mean, there's an enormous amount of change happening because data is now becoming the primary citizen. So how is Datera going to help accelerate that in 2020? >> Yeah, I mean, again, that's part of data access, and then also part of data scale. Back probably six, seven, eight years ago at EMC, I remember Steve Manley, who is a good buddy of mine; we went on stage and we talked about bringing sexy back to backup. We were trying to move away from backup admins just being backup admins, toward backup admins actually morphing their job into AI/ML. You know, I remember a big client of mine, and it wasn't in the EMC days, it was before that, who was basically saying they have to educate their IT staff; they want to bring them up as they move forward. In other words, what you don't want, because it all comes down to people, is your team stuck in an area saying, we can't innovate forward because we can't get away from this product, right? So, one of our customers at Datera is a SaaS vendor. And their challenge is that they had a traditional array business: even though it was in a SaaS model, it was basically hardware in the background, and they would buy instances, and they found that their HR cost, their headcount cost, was scaling... >> With the hardware. >> Exactly. And they were looking at it and going, what does that do to my business? It does one of two things: either I bear that cost, and I don't make profitability and I can't drive my business, or I lay it on my customers, and then the cost goes up, and therefore I'm not at cloud scale. And I can't hire all the people I'd need to hire into it. So they really needed to move to a point of asking, how do I get to hyperscale? How do I drive the automation that allows me to basically take staff and have them do what they need to do? And so our thing isn't removing staff; it's actually taking the work that you have, and the people, and putting them where they really matter. So in other words, and I'm going to mess this up, but I talked to somebody recently about what IT stands for. And they said IT should stand for information technology, right? I mean, that's really what it is. But, you know, for the last 20 years it's stood for infrastructure technology. >> Yeah.
>> Right. >> You know and and that was the thing that really what Mark, who was the founder of Datera and the team really did is they looked at it from a cloud perspective and said it's got to be easier than this. There must be a way of doing low lights-out automation on storage. And that's why I was saying when I took over, I kind of did the company an injustice by calling it an SDS Tier 1 vendor. But in reality that was what customers could assume. And we're basically a data services platform that allows them to scale and then if you hop forward you go how do you open up the platform? How do you become data movement? How do you handle multi-cloud? How do you make sure that they don't have this issue? And the policies that they put in place and the way in which they've innovated, it allows that open and flexible choice. So for me, one is you get the scale, two you don't have forklift upgrade three is you don't have human capital cost on every decision you make, and it actually fits in in a very fluid way. And so even though customers move to us and buy us as a second source for SDS, once they've got the power of this thing they realize actually now they've got a data service platform and they start then layering in other policies and other systems and what we've seen is then a good uptick of us being seen as a strategic part of their data movement infrastructure. >> You expand. >> Exactly. >> Guy Churchward, CEO of Datera, thanks again for being on the Cube. >> My pleasure. Thank you Peter. >> And thank you for joining us for another CUBEConversation. I'm Peter Burris, see you next time. (upbeat music)
Charlie Betz, Forrester & Tobi Knaup, D2iQ | CUBEConversation, December 2019
>> From our studios in the heart of Silicon Valley, Palo Alto, California, this is a CUBE conversation. >> Hello, and welcome to theCUBE studios in Palo Alto, California, for another CUBE conversation, where we go in depth with thought leaders driving innovation across the tech industry. I'm your host, Peter Burris. It's a well-known fact of life at this point in time: we're going to the cloud in some manner, way, shape, or form. Every business that intends to undertake a digital transformation is going to find itself in a situation where it is using cloud resources to build new classes of applications and accelerate its opportunities to create new markets that are more profitable. What folks haven't fully internalized yet, though, is what it means to govern those activities. What does it mean to use data that is in the cloud in a compliant and reliable way? What does it mean to allow rapid innovation while at the same time ensuring that our businesses are not compromised by new classes of risk and new classes of compliance issues as a result of taking certain liberties with how we handle governance? So that's what we're going to talk about today, and we've got a great conversation for you. Tobi Knaup is a co-founder and CTO of D2iQ, and Charlie Betz is a principal analyst at Forrester. Tobi, Charlie, welcome to theCUBE theater. All right, so Charlie, I'm going to start with you. I've outlined the overall nature of the problem, but let's get very specific. What is the problem that enterprises face today as they try to accelerate their use of technology in a way that doesn't compromise their risk and compliance concerns? >> Well, we are hearing the same story over and over again, Peter. Companies are starting on the cloud-native journey, and perhaps a DevOps journey; you know, there are some similarities there, and one leads to the other in many cases. They do a proof of concept, and they do a pilot, and they like the results. But both of those efforts had what, from Monopoly, we would call a get-out-of-jail-free card. You know, they had a pass to bypass certain regulatory or governance or compliance controls. Now they want to scale it. They want to roll it out across the enterprise, and you can't give every team a get-out-of-jail-free card. >> Well, let me dig into this. Is it that the speed with which we're trying to create new things is the key issue? Is it that new technologies like Kubernetes lend themselves to a new style that doesn't necessarily bring good governance along with it? What are those factors that are driving this problem? >> I think the central factor, Peter, is the movement from stage-gated governance to governance of continuous flow. We could unpack this in various ways, but really, if you look at so many governance models, and people ship them to us and we comb through them, and I've been doing a lot of that lately, what we see over and over again is this idea that delivery pauses, experts come in from their perspective with a checklist, they go through it, they check the delivery against the checklist, and then the green light is given to move on. And this is how we've run digital systems for a long time now. But now we're moving towards continuous flow, continuous iteration... >> Agile, DevOps... >> DevOps, all the rest. And these methods are well suited to be supported by architectures like Kubernetes.
And there are certain things you can do with automation that are very beneficial in cloud-native systems, but you're up against, you know, decades of policy that assume this older model, based on older guidance like ITIL and PMBOK and COBIT and all the rest. COBIT 2019 is still based on a plan-build-run model... >> Which is not necessarily a bad thing in the grand scheme of things, but it doesn't fit into a month-long sprint. >> It doesn't fit. And more and more, what we're seeing when I say stage gates are going away is that the life cycle becomes internalized to the team. You still plan, build, run, but it's not something that you can put controls on at the high level. >> And so the solution seems to be that we need to be able to foster this kind of speedy acceleration that encourages the use of agile and leads to a DevOps orientation, and somehow fold good, solid governance practices right into the mix. Let's take a look at 2025: what's it going to look like, even if we're not ready for it yet? >> Well, I think you're going to govern a lot more at the level of the outcome. You're going to govern the what, not the how, as much. But there are a lot of things that are still essential, and just basic solid good practice, such as not having 15 different ways, or a hundred different ways, to configure major pieces of infrastructure. You know, in some of the reports, the State of DevOps Report that came out, there was a finding that it was best to let developers have a lot of choice. And I understand that developer autonomy is very important, but every time a development team chooses a new technology, or a new way to configure an existing technology, that's an expansion of attack surface. And I'm very concerned about that, especially as we see things like Equifax with the Struts exploit. You know, we have to keep our environment secure, well patched, up to date. And if you only have one or two ways that things are configured, that means your staff are more likely to do the right thing, as opposed to, you know, infinite levels of variation on a hundred different ways of configuring Kubernetes.
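The point about limiting configuration variation can be made concrete with a small sketch: every proposed stack is checked against a short list of approved configurations, so the attack surface grows only when the approved list does. This is a generic illustration, not a Forrester or D2iQ tool; the settings and approved values below are hypothetical.

```python
# Illustrative config gate: allow only a small set of vetted choices.
# The approved values are invented examples, not recommendations.
APPROVED = {
    "ingress": {"nginx"},
    "runtime": {"containerd"},
    "kubernetes_version": {"1.16.3", "1.15.6"},
}

def check_stack(proposed):
    """Return a list of violations; an empty list means the stack is compliant."""
    violations = []
    for key, value in proposed.items():
        allowed = APPROVED.get(key)
        if allowed is None:
            violations.append(f"{key}: not a governed setting, needs review")
        elif value not in allowed:
            violations.append(f"{key}={value}: not in approved set {sorted(allowed)}")
    return violations

print(check_stack({"ingress": "nginx", "runtime": "docker", "service_mesh": "istio"}))
```

Because the check runs automatically on every change, it governs continuous flow rather than pausing delivery at a stage gate, which is exactly the trade being described.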
So, uh, I think it starts actually with that and you know, how can we find, uh, this balance of giving developers the flexibility they want, uh, you know, having them leverage the benefits of cloud native, but at the same time making the folks that are in charge of governance, uh, aware of what's going on in, in their enterprise, uh, making them aware of the different stacks that are provisioned. Uh, and then finding the right balance between that flexibility and enforcing governance. Uh, there's ways to do that. Um, you know, there what we see a lot is, is, uh, waste, uh, people building one stack on cloud provider, a different stack on cloud provider B, a third stack, you know, at the edge or in their data center. And so when it comes to patching, security issues, upgrading versions, you know, you, you're doing three, five times the, the amount of work. >>Well, let me ask you a question because we can see that the problem is this explosion in innovation at the digital level, uh, that is running into this, uh, the, the stricture of historical practices. And as a result, people are in running governance. What is it, I mean, if I think about this, it sounds to me like the developer tooling is getting better, faster than the governance tooling. Where are we in the marketplace in terms of thinking about technologies that can improve the productivity on the governance side so that we can bring governance models to the developers so they don't have to make decisions at that level? >>Right. I think where we are in the market is, um, so obviously cloud native and Kubernetes specifically has seen rapid adoption Indiana price, right? And I think, um, you know, the governance and tools are just now catching up. Right? Right. Um, so the typical journey we see is, uh, you know, folks try out Kubernetes, they try out cloud native technologies to have a very good first experience. It's easy. And so they kind of, uh, you know, forget some of the best practices that we've learned over the years for how to secure a production stack, how to make it upgradable, maintainable, how to govern workloads and versions, um, because they'll still, schools just simply didn't exist. Uh, so far we're now seeing these tools emerge. Um, and, and really it's the same approaches that have worked for us in the past for, for running these types of infrastructure. It's, um, you're having a central pane of class for visibility. What versions am I running? Uh, you know, first being aware of what's out there and then you'll centralizing management of these, of these stacks. Um, how do I, you know, lifecycle manage my, my Kubernetes clusters and all the related technologies. Those are the tools that are just now showing up in the market, >>but it's also got to be, I presume that, uh, a degree of, uh, presuming that the tooling itself does bring forward good governance practices into a modern world. If I got that right. >>Yeah, absolutely. I think this is one of the key things that the updated INO team, uh, the infrastructure and operations and our, our view is that these become platform teams. So we've maybe relieved the INO term behind we go with the platform teams. This is one thing that they should be doing is creating reference implementations. You know, the, you know, here's your hello world stack and it's perfectly compliant. Go solve your business problem and leave the undifferentiated heavy lifting to us. You know, and this is I think, uh, going should be a welcome message. 
Uh, assuming that the stack is providing all the services that the developer expects. >>Well it certainly suggests that there is a reasonable and rational separation of duties and function within a business. So the people that are close to the business of building the function that the business needs are out there doing it. Meanwhile, we've got infrastructure developers that are capable of building a platform that serves as multitude of purposes with the specificity required for each workload and in compliance with the overall organization. >>There's a key message that I want to reinforce with the audience as we think about the future of INO. I, we've been thinking a lot about it at Forester. What is the future of the traditional INO organization? If I say infrastructure that implies application and I'm talking about a stack that doesn't go away, you know, there will always be a stack, a layered architecture. What is being challenged is, when I say operations, that implies dev and I'm talking now about a life cycle. That's what's merging together. And so well, the life cycle becomes something that is held internally within your feature or component team and is no longer a suitable topic of governance. Absolutely. In terms of the layered infrastructure, this is where we, it's still a thing, you know, because yes, we will platform teams, component teams, feature teams facing the business or the end user. >>Well, it's all back to the idea that a resource is a reasonably well bound, but nonetheless with the appropriate separation, uh, of, of function that delivers some business outcome. And that's gonna include both infrastructure at a software level, an application at a software level. So look, we, you spent a lot of time talking to customers about these issues when they come back to you. Uh, where are you seeing successes most obviously and why? >>Yeah, so where we see successes is where, um, you know, organizations, um, figure out a way to give developers what they want, which is in the cloud native spaces. Every development team wants to own their own communities cluster. They want to, it is their sandbox. They want to install their own applications on there. They don't want to talk to different team when they install applications. So how can you give them that while at the same time enforcing the standards that you need to, right? How do you make sure those clusters follow a certain blueprint that have the right access control rules? Um, you know, sensitive information like, like credentials are distributed in the right way. The right versions of workloads are available. Organizations that figure out how to do that, uh, they are successful at this. So the government from a central place, they have um, you know, essential pane of glass. >>Um, you know, like our product commander where they essentially set up blueprints for teams. Um, each individual team can have their own cluster. It gets provisioned with this blueprint. And then from the central place I can say, all right, here is what my production clusters should look like. Right? Here are the secrets that should be available. Here are the access control rules that need to be set. And so you find the right balance that way, right? You can enforce your governance standards while at the same time giving developers their individual clusters that development their staging of production clusters. >>And here's the options and what is an edible option and what is not. Right. Yeah. 
So it seems to me as if I, I mentioned this earlier, if I think about digital business, it's the opportunity to not only turn process, we're increasingly digitized process, but the real promise also is to then find ways of bringing these things together, integrate the business in response to new opportunities or new, uh, competitive factors or regulatory factors, whatever else it might be, and literally reconfigure the business quickly. That has to be more difficult if we have a wide array of, of governance models and operational principles. Trolley is, you think about customer success, uh, what does it mean for the future to be able to foster innovation with governance so that the whole thing can come together when it needs to come together? >>Well, I think that we need to move to governing again, as I said earlier, governing >>what not. How uh, >>I believe that, uh, you know, teams should be, should be making certain promises and there's a whole idea of the theory that's out there. A guy named Mark Burgess who is, you know, well known in certain certain infrastructure as code circles. So what are the promises that the team makes within the larger construct of the team of teams and is that team being accountable to those promises? And I think this is the basis of some of the new operating models we're seeing like Holacracy and teal. I think we're in very early days of looking at this. But you know, yeah, you will be held accountable for you know, objectives and key results. But how you get there, you have more degrees of freedom and yet at an infrastructure level, this is also bounded by the fact that if this is a solved problem, if this is not interesting to the business, you shouldn't be burning brain power on solving it. You know, and maybe it was interesting, you know, a couple of years ago and there was a need to explore new technologies, but really the effort should be spent solving the customer's problems. Charlie Betts, principal analyst at Forrester, Toby not co founder and CTO of D to IQ. Thanks very much for being on the cube. Thank you. Thank you, Peter, and thank you for joining us for another cube conversation. Once again, I'm Peter Burris. See you next time..
Stephanie McReynolds, Alation | CUBEConversation, November 2019
>> Announcer: From our studios in the heart of Silicon Valley, Palo Alto, California, this is a CUBE conversation. >> Hello, and welcome to theCUBE studios in Palo Alto, California, for another CUBE conversation, where we go in depth with thought leaders driving innovation across the tech industry. I'm your host, Peter Burris. The whole concept of self-service analytics has been with us for decades in the tech industry. Sometimes it's been successful; most times it hasn't been. But we're making great progress, and have over the last few years, as the technology matures, as the software becomes more potent, and, very importantly, as the users of analytics become that much more familiar with what's possible and that much more wanting of what they could be doing. But this notion of self-service analytics requires some new invention, some new innovation. What are they? How's that going to play out? Well, we're going to have a great conversation today with Stephanie McReynolds; she's Senior Vice President of Marketing at Alation. Stephanie, thanks again for being on theCUBE. >> Thanks for inviting me, it's great to be back. >> So tell us a little... give us an update on Alation. >> So as you know, Alation was one of the first companies to bring a data catalog to the market. And that market category has now been cemented and defined; depending on the industry analyst you talk to, there could be 40 or 50 vendors now providing data catalogs to the market. So this has become one of the hot technologies to include in a modern analytics stack. In particular, we're seeing a lot of demand as companies move from on-premise deployments into the cloud. Not only are they thinking about how to migrate their systems and infrastructure into the cloud, but, with data cataloging, more importantly, how do we migrate our users to the cloud? How do we get self-service users to understand where to go to find data, how to understand it, how to trust it, and what re-use we can make of existing assets, so we're not just exploding the amount of processing we're doing in the cloud? So that's been very exciting; it's helped us grow our business. We've now seen four straight years of triple-digit revenue growth, which is amazing for a high-growth company like us. >> Sure. >> We also have over 150 different organizations in production with a data catalog as part of their modern analytics stack. And many of those organizations are moving into the thousands of users. So eBay was probably our first customer to move over a thousand weekly logins; they're now up to about 4,000 weekly logins through Alation. But now we have customers like Boeing and General Electric and Pfizer, and we just closed a deal with the US Air Force. So we're starting to see all sorts of different industries and all sorts of different users, from the analytics specialists in your organization, like a data scientist or a data engineer, all the way out to maybe a product manager or someone who doesn't really think of themselves as an analytics expert, using Alation either directly or sometimes through one of our partnerships with folks like Tableau or MicroStrategy or Power BI.
They need to know where the data is, they need to know what it is, they need to know how to use it, and they need to know what to do if they make a mistake. How is that, how are the data catalogs, like Alation, serving that, and what's new? >> Yeah, so as consumers, this world of data cataloging is very similar if you go back to the introduction of the internet. >> Sure. >> How did you find a webpage in the 90's? Pretty difficult, you had to know the exact URL to go to in most cases, to find a webpage. And then a Yahoo was introduced, and Yahoo did a whole bunch of manual curation of those pages so that you could search for a page and find it. >> So Yahoo was like a big catalog. >> It was like a big catalog, an inventory of what was out there. So the original data catalogs, you could argue, were what we would call from an technical perspective, a metadata repository. No business user wants to use a metadata repository but it created an inventory of what are all the data assets that we have in the organizations and what's the description of those data assets. The meta- data. So metadata repositories were kind of the original catalogs. The big breakthrough for data catalogs was: How do we become the Google of finding data in the organization? So rather than manually curating everything that's out there and providing an in- user inferant with an answer, how could we use machine learning and AI to look at patterns of usage- what people are clicking on, in terms of data assets- surface those as data recommendations to any end user whether they're an analytics specialist or they're just a self- service analytics user. And so that has been the real break through of this new category called data cataloging. And so most folks are accessing a data catalog through a search interface or maybe they're writing a SQL query and there's SQL recommendations that are being provided by the catalog-- >> Or using a tool that utilizes SQL >> Or using a tool that utilizes SQL, and for most people in a- most employees in a large enterprise when you get those thousands of users, they're using some other tool like Tableau or Microstrategy or, you know, a variety of different data visualization providers or data science tools to actually access that data. So a big part of our strategy at Alation has been, how do we surface this data recommendation engine in those third party products. And then if you think about it, once you're surfacing that information and providing some value to those end users, the next thing you want to do is make sure that they're using that data accurately. And that's a non- trivial problem to solve, because analytics and data is complicated. >> Right >> And metadata is extremely complicated-- >> And metadata is-- because often it's written in a language that's arcane and done to be precise from a data standpoint, that's not easily consumable or easily accessible by your average human being. >> Right, so a label, for example, on a table in a data base might be cust_seg_257, what does that mean? >> It means we can process it really quickly in the system. >> Yeah, but as-- >> But it's useless to a human being-- >> As a marketing manager, right? 
I'm like, hey, I want to do some customer segmentation analysis, and I want to find out if people who live in California might behave differently, if I provide them an offer, than people that live in Massachusetts. It's not intuitive to say, oh yeah, that's in customer_seg_... So what data catalogs are doing is they're thinking about that marketing manager, they're thinking about that pure business user, and helping make that translation between business terminology, "Hey, I want to run some customer segmentation analysis for the West," and the technical, physical model that underlies the data in that database, which is: customer_seg_257 is the table you need to access to get the answer to that question. So as organizations start to adopt more self-service analytics, it's important that we're managing not just the data itself and this translation from technical metadata to business metadata, but there's another layer that's becoming even more important as organizations embrace self-service analytics. And that's: how is this data actually being processed? What is the logic that is being used to traverse different data sets that end users now have access to? So if I have gender information in one table, and I have information on income in another table, and I have some private information that identifies those two customers as the same in those two tables, in some use cases I can join that data; if I'm doing marketing campaigns, I likely can join that data. >> Sure. >> If I'm running a loan approval process here in the United States, I cannot join that data. >> That's a legal limitation, that's not a technical issue-- >> That's a legal, federal, government issue. Right? And so here's where there's a discussion, among folks that are knowledgeable about data and data management, of how do we govern this data? But I think by saying how we govern this data, we're kind of covering up what's actually going on, because you don't have to govern that data so much as you have to govern the analysis. How is this joined, how are we combining these two data sets? If I just govern the data for accuracy, I might not know the usage scenario, which is someone wants to combine these two things in a way that makes it illegal. Separately, it's fine; combined, it's illegal. So now we need to think about, how do we govern the analytics themselves, the logic that is being used. And that gets kind of complicated, right? For a marketing manager, the difference between those things on the surface doesn't really make sense. It only makes sense when the context of that government regulation is shared and explained in the course of your workflow, and dragging and dropping in a Tableau report, you might not remember that, right? >> That's right, and the derivative output that you create, that other people might then be able to use because it's back in the data catalog, doesn't explicitly note, often, that this data was generated as a combination of a join that might not be in compliance with any number of different rules. >> Right, so about a year and a half ago, we introduced a new feature in our data catalog called Trust Check. >> Yeah, I really like this. This is a really interesting thing. >> And that was meant to be a way where we could alert end users to these issues- hey, you're trying to run this analytic and that's not allowed. We're going to give you a warning, we're not going to let you run that query, we're going to stop you in your place.
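To make the idea of governing the analysis, not just the data, concrete, here is a minimal sketch of the kind of policy check described above, applied to a proposed join. The policy names, the column sets, and the check_join helper are illustrative assumptions for this example; they are not Alation's actual data model or API.

```python
# Hypothetical policy store: which column combinations are allowed per use case.
POLICIES = {
    "loan_approval": {
        "forbidden_combinations": [{"gender", "income"}],
        "reason": "US fair-lending rules bar using protected attributes "
                  "such as gender in credit decisions.",
    },
    "marketing_campaign": {"forbidden_combinations": [], "reason": None},
}

def check_join(columns, use_case):
    """Return (allowed, explanation) for joining `columns` under `use_case`."""
    policy = POLICIES.get(use_case)
    if policy is None:
        return False, f"No policy registered for use case '{use_case}'."
    for combo in policy["forbidden_combinations"]:
        if combo <= columns:  # every forbidden column appears in this join
            return False, f"Combining {sorted(combo)} is not allowed: {policy['reason']}"
    return True, "Join permitted under current policy."

# Separately fine, combined illegal: the exact scenario from the conversation.
print(check_join({"gender", "income", "customer_id"}, "loan_approval"))
print(check_join({"gender", "income", "customer_id"}, "marketing_campaign"))
```

The point of returning an explanation alongside the verdict is that, as the conversation goes on to describe, a bare "no" is far less useful to a self-service user than a "no, and here is why."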
So that was a way, in the workflow of someone, while they're typing a SQL statement or while they're dragging and dropping in Tableau, to surface that up. Now, some of the vendors we work with, like Tableau, have doubled down on this concept of how do they integrate with an enterprise data catalog to make this even easier. So at the Tableau conference last week, they introduced a new metadata API, they introduced a Tableau catalog, and the opportunity for these types of alerts to be pushed into the Tableau catalog as well as directly into reports and worksheets and dashboards that end users are using. >> Let me make sure I got this. So it means that you can put a lot of the compliance rules inside Alation and have a metadata API, so that Alation effectively is governing the utilization of data inside the Tableau catalog. >> That's right. So think about the integration with Tableau as this communication mechanism to surface up these policies that are stored centrally in your data catalog. And so this is important, this notion of a central place of reference. We used to talk about data catalogs just as a central place of reference for where all your data assets lie in the organization, and we have some automated ways to crawl those sources and create a centralized inventory. What we've added in our new release, which is coming out here shortly, is the ability to centralize all your policies in that catalog as well as the pointers to your data in that catalog. So you have a single source of reference for how this data needs to be governed, as well as a single source of reference for how this data is used in the organization. >> So does that mean, ultimately, that someone could try to do something, Trust Check says, no you can't, but this new capability will say, and here's why, or here's what you do. >> Exactly. >> A descriptive step that says let me explain why you can't do it.
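As a rough illustration of that flow, a query gated before it runs, with the denial explained and an alternative offered, here is a sketch of a pre-execution check. The function name, the table names, and the suggested view are all made up for the example; this is not the actual Trust Check or Tableau metadata API.

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    allowed: bool
    explanation: str
    suggestion: str = ""  # what the user could do instead, if anything

def pre_execution_check(sql, tables_used, use_case):
    """Gate a query before it reaches the warehouse.

    A real catalog would parse `tables_used` out of the SQL and look the
    policy up in its central store; here both are passed in directly.
    """
    if use_case == "loan_approval" and {"gender_dim", "income_fact"} <= tables_used:
        return Verdict(
            allowed=False,
            explanation="Joining gender_dim with income_fact is barred for "
                        "credit decisions under the fair-lending policy.",
            suggestion="Use the steward-certified 'loan_features' view instead.",
        )
    return Verdict(allowed=True, explanation="No policy violations detected.")

verdict = pre_execution_check(
    "SELECT ... FROM gender_dim g JOIN income_fact i ON ...",
    {"gender_dim", "income_fact"},
    "loan_approval",
)
if not verdict.allowed:
    # Surface the why and the alternative, instead of a bare refusal.
    print(verdict.explanation)
    print("Try instead:", verdict.suggestion)
```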
>> But I want to build a little bit on one of the things I thought I heard you say, and that is the idea that this new feature, this new capability, will actually prescribe an alternative, logical way for you to get your information that might be in compliance. Have I got that right? >> Yeah, that's right. Because what we also have in the catalog is a workflow that allows individuals called Stewards, analytics Stewards, to be able to make recommendations and certifications. So if there's a policy that says thou shalt not use the data in this way, the Stewards can then say, but here's an alternative mechanism, here's an alternative method, and by the way, not only are we making this as a recommendation, but this is certified for success. We know that our best analysts have already tried this out, or we know that this complies with government regulation. And so this is a more active way, then, for the two parties to collaborate together in a distributed way that's asynchronous, and so it's easy for everyone no matter what hour of the day they're working or where they're globally located. And it helps progress analytics throughout the organization. >> Oh, and more importantly, it increases the likelihood that someone who is told you now have self-service capability doesn't find themselves abandoning it the first time that somebody says no, because we've seen that over and over with a lot of these query tools, right? Somebody says, oh wow, look at this new capability, until the screen, you know, metaphorically, goes dark. >> Right, until it becomes too complicated-- >> That's right-- >> and then you're like, oh, I guess I wasn't really trained on this. >> And then they walk away. And it doesn't get adopted. >> Right. >> And this is a way, a very human-centered way, to bring that self-service analyst into the system and be a full participant in how you generate value out of it. >> And help them along. So you know, the ultimate goal that we have as an organization is to help our customers become data-literate populations. And you can only become data literate if you get comfortable working with the data and it's not a black box to you. So the more transparency that we can create through our policy center, through documenting the data for end users, and making it easier for them to access, the better. And so, in the next version of the Alation product, not only have we implemented features for analytics Stewards to use, to certify these different assets, to log their policies, to ensure that they can document those policies fully with examples and use cases, but we're also bringing to market a professional services offering from our own team. Given that we've now worked with about 20% of our installed base, and observed how they roll out Stewardship initiatives, how they assign Stewards, how they manage this process, and how they manage incentives, we've done a lot of thinking about what are some of the best practices for having a strong analytics Stewardship practice if you're a self-service-analytics-oriented organization. And so our professional services team is now available to help organizations roll out this type of initiative, make it successful, and have that be supported with product. So the psychological incentives of how you get one of these programs really healthy is important. >> Look, you guys have always been very focused on ensuring that your customers were able to adopt the value proposition, not just buy the value proposition.
>> Right. >> Stephanie McReynolds, Senior Vice President of Marketing at Alation, once again, thanks for being on theCUBE. >> Thanks for having me. >> And thank you for joining us for another CUBE conversation. I'm Peter Burris. See you next time.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Boeing | ORGANIZATION | 0.99+ |
Pfizer | ORGANIZATION | 0.99+ |
General Electric | ORGANIZATION | 0.99+ |
Stephanie McReynolds | PERSON | 0.99+ |
Stephanie | PERSON | 0.99+ |
Peter Burris | PERSON | 0.99+ |
40 | QUANTITY | 0.99+ |
California | LOCATION | 0.99+ |
Massachusetts | LOCATION | 0.99+ |
Yahoo | ORGANIZATION | 0.99+ |
November 2019 | DATE | 0.99+ |
Alation | ORGANIZATION | 0.99+ |
eBay | ORGANIZATION | 0.99+ |
two parties | QUANTITY | 0.99+ |
two things | QUANTITY | 0.99+ |
two tables | QUANTITY | 0.99+ |
two customers | QUANTITY | 0.99+ |
one table | QUANTITY | 0.99+ |
United States | LOCATION | 0.99+ |
50 vendors | QUANTITY | 0.99+ |
Google | ORGANIZATION | 0.99+ |
Palo Alto, California | LOCATION | 0.99+ |
SQL | TITLE | 0.99+ |
last week | DATE | 0.99+ |
US Air Force | ORGANIZATION | 0.99+ |
Microstrategy | ORGANIZATION | 0.99+ |
first customer | QUANTITY | 0.99+ |
Tableau | ORGANIZATION | 0.98+ |
Tableau | TITLE | 0.98+ |
Stewards | ORGANIZATION | 0.98+ |
Power BI | ORGANIZATION | 0.98+ |
over 150 different organizations | QUANTITY | 0.98+ |
90's | DATE | 0.97+ |
today | DATE | 0.97+ |
single | QUANTITY | 0.97+ |
one | QUANTITY | 0.97+ |
about 20% | QUANTITY | 0.97+ |
four straight years | QUANTITY | 0.97+ |
first time | QUANTITY | 0.97+ |
CUBE | ORGANIZATION | 0.96+ |
over a thousand weekly logins | QUANTITY | 0.96+ |
thousands of users | QUANTITY | 0.96+ |
two data | QUANTITY | 0.94+ |
Microstrategy | TITLE | 0.94+ |
first companies | QUANTITY | 0.92+ |
Tableau | EVENT | 0.9+ |
about | DATE | 0.9+ |
Silicon Valley, Palo Alto, California | LOCATION | 0.89+ |
a year and a half ago | DATE | 0.88+ |
about 4,000 weekly logins | QUANTITY | 0.86+ |
Trust Check | ORGANIZATION | 0.82+ |
single source | QUANTITY | 0.79+ |
Trust Check | TITLE | 0.75+ |
theCUBE | ORGANIZATION | 0.75+ |
customer_seg_257 | OTHER | 0.74+ |
up | QUANTITY | 0.73+ |
Alation | PERSON | 0.72+ |
decades | QUANTITY | 0.7+ |
cust_seg_257 | OTHER | 0.66+ |
Senior Vice President | PERSON | 0.65+ |
years | DATE | 0.58+ |
CUBEConversation | EVENT | 0.51+ |
Michael Segal, NETSCOUT Systems | CUBEConversation, November 2019
(upbeat music) >> Announcer: From our studios in the heart of Silicon Valley, Palo Alto, California, this is a CUBE Conversation. >> Hello and welcome to theCUBE studios in Palo Alto, California for another CUBE Conversation, where we go in depth with thought leaders driving innovation across the tech industry. I'm your host, Peter Burris. Michael Segal is the Area Vice President of Strategic Alliances at NetScout Systems. Michael, we are sitting here in theCUBE studios in Palo Alto in November of 2019; re:Invent 2019 is right around the corner. NetScout and AWS are looking to do some interesting things. Why don't you give us an update of what's happening. >> Yeah, just a very brief introduction of what NetScout actually does. So, NetScout assures service, performance, and security for the largest enterprises and service providers in the world. We do it through something we refer to as visibility without borders, by providing the actionable intelligence necessary to very quickly identify the root cause of either performance or security issues. So with that, NetScout is partnering very closely with AWS. We are an advanced technology partner, which is the highest tier of partnership for ISVs. This enables us to partner with AWS on a wide range of activities, including technology alignment with the roadmap and participating in different launch activities of new functionality from AWS. It enables us to have go-to-market activities together, focusing on key campaigns that are relevant for both AWS and NetScout. And it enables us also to collaborate on sales initiatives. So, with this wide range of activities, what we can offer is a win-win-win situation for our customers, for AWS, and for NetScout. From the customers' perspective, beyond the fact that the NetScout offering is available in the AWS marketplace, this visibility without borders that I mentioned helps our customers navigate their digital transformation journey and migrate to AWS more effectively. From the AWS perspective, the win is that their resources are now consumed by the largest enterprises in the world, so it accelerates the consumption of compute, storage, networking, and database resources in AWS. And for NetScout, this is strategically important because now NetScout is becoming a strategic partner to our large enterprise customers as they navigate their digital transformation journey. So that's why it's really important for us to collaborate very, very efficiently with AWS. It's important to our customers, and it's important to AWS. >> And you're going to be at re:Invent. You're actually going to be speaking, as I understand. What are you going to be talking about? >> So we are going to be talking about best practices for migrating to AWS. NetScout also is a platinum sponsor of the re:Invent show. This demonstrates our commitment to AWS, and the fact that we want to collaborate and partner with them very, very efficiently. And beyond that also, NetScout partnered with AWS on the launch of what is referred to as Amazon VPC traffic mirroring. This functionality enables us to acquire traffic data and packet data very efficiently in AWS. It's part of the technology alignment that we have with AWS, and it demonstrates how we utilize this technology alignment to extend NetScout visibility without borders to the AWS cloud. >> There's no reason to make the AWS cloud a border. >> Michael Segal: Exactly. >> Michael Segal, NetScout Systems. Thanks very much for being on theCUBE. >> Thank you for having me.
>> And, once again we'd like to thank you for joining us for another Cube Conversation. Until next time. (upbeat music)
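For readers who want to see what the Amazon VPC traffic mirroring integration looks like in practice, here is a minimal sketch using boto3. The three EC2 calls (create_traffic_mirror_target, create_traffic_mirror_filter, create_traffic_mirror_session) are the standard AWS ones; the ENI IDs are placeholders, and pointing the mirror target at a packet-capture appliance's interface is an assumption about how a NetScout-style deployment would be wired up.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Target: the network interface of a monitoring appliance (placeholder ID).
target = ec2.create_traffic_mirror_target(
    NetworkInterfaceId="eni-0123456789abcdef0",
    Description="Packet-capture appliance",
)

# Filter: start simple and mirror all ingress TCP traffic.
flt = ec2.create_traffic_mirror_filter(Description="Mirror ingress TCP")
filter_id = flt["TrafficMirrorFilter"]["TrafficMirrorFilterId"]
ec2.create_traffic_mirror_filter_rule(
    TrafficMirrorFilterId=filter_id,
    TrafficDirection="ingress",
    RuleNumber=100,
    RuleAction="accept",
    Protocol=6,  # TCP
    SourceCidrBlock="0.0.0.0/0",
    DestinationCidrBlock="0.0.0.0/0",
)

# Session: copy packets from a workload ENI to the monitoring target.
session = ec2.create_traffic_mirror_session(
    NetworkInterfaceId="eni-0fedcba9876543210",  # source workload ENI
    TrafficMirrorTargetId=target["TrafficMirrorTarget"]["TrafficMirrorTargetId"],
    TrafficMirrorFilterId=filter_id,
    SessionNumber=1,
)
print(session["TrafficMirrorSession"]["TrafficMirrorSessionId"])
```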
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Peter Burris | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Michael Segal | PERSON | 0.99+ |
November of 2019 | DATE | 0.99+ |
Michael | PERSON | 0.99+ |
Silicon Valley | LOCATION | 0.99+ |
November 2019 | DATE | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
NetScout | ORGANIZATION | 0.99+ |
Palo Alto, California | LOCATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
NETSCOUT Systems | ORGANIZATION | 0.99+ |
NetScout Systems | ORGANIZATION | 0.98+ |
both | QUANTITY | 0.98+ |
re | EVENT | 0.87+ |
Invent 2019 | EVENT | 0.84+ |
theCUBE | ORGANIZATION | 0.82+ |
Conversation | EVENT | 0.7+ |
Strategic Alliances | ORGANIZATION | 0.63+ |
Invent show | EVENT | 0.59+ |
CUBEConversation | EVENT | 0.55+ |
NetScout | TITLE | 0.51+ |
Cube Conversation | EVENT | 0.48+ |
VPC | TITLE | 0.38+ |
Cube | COMMERCIAL_ITEM | 0.34+ |
Derek Manky, Fortinet | CUBEConversation, November 2019
>> Announcer: From our studios in the heart of Silicon Valley, Palo Alto, California, this is a CUBE Conversation. >> Hello and welcome to theCUBE studios in Palo Alto, California for another CUBE conversation, where we go in-depth with thought leaders driving innovation across the tech industry. I'm your host, Peter Burris. Almost everybody's heard of the terms black hat and white hat, which describe the groups of individuals that are either attacking or defending against security challenges. It's been an arms race for the past 10, 20, 30 years as the world has become more digital, and an arms race in which many of us are concerned that the black hats appear to have the upper hand. But there are new developments in technology, and new classes of tooling, that are racing to the aid of the white hats and could very well upset that equilibrium in favor of the white hats. To have that conversation about the ascension of the white hats, we're joined by Derek Manky, who is chief of security insights and global threat alliances at Fortinet. Derek, thanks for joining us for another CUBE conversation. >> It's always a pleasure speaking with you. >> All right, Derek, let's start: what's going on at FortiGuard Labs at Fortinet? >> So, in 2019 we've seen a ton of development, pretty much on track with our predictions when we talked last year. Obviously a big increase in volume, thanks to offensive automation. We're also seeing low-volume attacks that are disrupting big business models. I'm talking about targeted ransom attacks, right? You know, criminals that are able to get into networks and cause millions of dollars of damages thanks to critical revenue streams being out, usually in the public sector; we've seen a lot of this. We've seen a rise in sophistication; the adversaries are not slowing down. AETs, advanced evasion techniques, are on the rise. And so, to track this and map this at FortiGuard Labs, we're not just relying on blogs anymore and, you know, 40-50 page white papers. We're actually looking at playbooks now: mapping the adversaries, understanding their tools, techniques, and procedures, how they're operating, why they're operating, who they're hitting, and what might be their next move. So that's a big development on the intelligence side. >> All right, so I mentioned up front this notion that the white hats may be ascending; I'm implying a prediction here. Tell us a little bit about what we see on the horizon for that concept of the white hats ascending and, specifically, why is there reason to be optimistic? >> Yeah, so it's been gloomy for decades, like you said, and for many reasons, right? And I think those reasons are no secret. I mean, cybercriminals and black hats have always been able to move, you know, with agility, right? Cybercrime has no borders. It's often a slap on the wrist that they get. They can do a million things wrong and they don't care; there are no ethics and, quite frankly, no rules binding them. On the white hat side, we've always had rules binding us. We've had to take due care, and we've had to move methodically, which slows us down. A lot of that comes into place because of frameworks, and because of technology having to fit into those frameworks, specifically with, you know, taking corrective action and things like that. So those are the challenges that we face. But thinking ahead to 2020, particularly with the use of artificial intelligence: everybody talks about AI, and it's impacted our daily lives.
But when it comes to cybersecurity, on the white hat side, a proper AI and machine learning model takes time. It can take years; in fact, in our case, in our experience, about four to five years before we can actually roll it out to production. But the good news is that we have been investing (and when I say we, I'm talking about the industry in general), we've been investing in this technology because, quite frankly, we've had to. It takes a lot of data, it takes a lot of smart minds, a lot of investment, a lot of processing power, and that foundation has now been set over the last five years. If we look at the black hats, that's not the case. And why? Because they've been enjoying living off the land, on the low-hanging fruit, the path of least resistance, because they've been able to. >> So one of the things that's changing that equilibrium, then, is the availability of AI. As you said, it could take four or five years to get to a point where we've actually got useful AI that can have an impact. I guess that means we've been working on these things for four or five years. What's the state of the art with AI as it pertains to security, and are we seeing different phases of development start to emerge as we gain more experience with these technologies? >> Yeah, absolutely, and it's quite exciting, right? AI isn't this universal brain that solves all the world's problems, as everyone thinks it might be; it's very specific. It relies on machine learning models, and each machine learning model is very specific to its task. I mean, voice recognition technology versus autonomous vehicle driving versus cybersecurity: it's very different when it comes to the learning purposes. So, in essence, the way I look at it, there are three generations of AI. We have generation one, which was the past; generation two, which is the current, where we are now; and generation three, which is where we're going. Generation one was pretty simple, right? It was just a centralized machine learning model that would take in data, correlate that data, and then take action based off of it: simple inputs, simple output. Generation two, where we're currently sitting, is more advanced: looking at pattern recognition, more advanced inputs, distributed models where we have, you know, sensors lying around networks (I'm talking about even IoT devices, security appliances and so forth) but still reporting up to this centralized brain that's learning and acting on things. But where things get really interesting, moving forward into 2020, is this third generation, where, especially as we move towards edge computing, you have localized learning nodes that are actually processing and learning. So you can think of them as mini brains. Instead of having this monolithic, centralized brain, you have individual learning nodes, individual brains doing their own machine learning, that are actually connected to each other, learning from each other, speaking to each other. It's a very powerful model. We actually refer to this as federated machine learning in our industry. >> So in the first phase we simply used statistics to correlate events and take action. Now we're doing pattern recognition, or exceptions, and building patterns. And in the future we're going to be able to further distribute that, so that increasingly the AI is going to work with other AI, so that this federated aggregate gets better. Have I got that right? >> Yeah, absolutely. >> And what's the advantage of that? >> A couple of things.
It's very similar to the human immune system, right? If I were to cut my finger on my hand, what's going to happen? Well, localized white blood cells, not something from a foreign entity or further away in my body, are going to come to the rescue and start healing, right? It's the same idea, because it's interconnected within the nervous system. And it's the same idea with this federated machine learning. If a security appliance detects a threat locally, on-site, it's able to alert other security appliances so that they can actually take action on it and learn from it as well. So, connected machine learning models: it means that, by properly implementing these federated AI machine learning models in an organization, that system is able to automatically pick up what that threat is and act on that threat, which means it's able to respond to these threats quicker, to shut them down to the point where it can be virtually instantaneous, right? Before the damage is done and the bleeding starts happening.
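To ground the federated idea, here is a toy sketch of federated averaging, the basic scheme behind this kind of distributed learning: each local node trains on its own data, and only model weights, never the raw traffic, are shared and merged. The node count, feature size, synthetic data, and plain logistic-regression setup are illustrative assumptions, not a description of any vendor's production system.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One node refines the shared model on its own local samples."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1 / (1 + np.exp(-X @ w))      # logistic regression
        w -= lr * X.T @ (preds - y) / len(y)  # one gradient step per epoch
    return w

# Three "mini brains", each with its own synthetic local data (10 features).
nodes = [(rng.normal(size=(200, 10)), rng.integers(0, 2, 200)) for _ in range(3)]

global_w = np.zeros(10)
for _ in range(20):
    # Each node learns locally; only the weights travel between nodes.
    local_ws = [local_update(global_w, X, y) for X, y in nodes]
    # Federated averaging: merge what every node learned into one shared model.
    global_w = np.mean(local_ws, axis=0)

print("shared model after 20 rounds:", np.round(global_w, 3))
```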
>> So the common baseline is constantly getting better, even as we're giving opportunities for local managers to perform the work in response to local conditions. So that takes us to the next notion: we've got this federated AI on the horizon. How is the role of people, of security professionals, going to change? What kinds of recipes are they going to follow to ensure that they are working in a maximally productive way with these new capabilities, these new federated capabilities, especially as we think about the introduction of 5G and greater density of devices and faster speeds and lower latencies? >> Yeah, so the world of cybersecurity has always been incredibly complex, so we're trying to simplify that, and that's where, again, this federated machine learning comes into place, particularly with playbooks. If we look at 2019 and where we're going in 2020, we've put a lot of groundwork, quite frankly, into pioneering the work of playbooks. When I say playbooks, I'm talking about the adversary's playbook: knowing the offense, knowing the tools, techniques, and procedures, the way that these cybercrime operations are moving. The more that we can understand that, the more we can predict their next move, and that becomes a centralized language. Once you know that offense, we can start to create automated blue team playbooks, defensive playbooks, that a security technology can automatically integrate and respond to. But, getting back to your question, we can actually create human-readable CISO guides that can say: look, there's a threat, here's why it's a problem, here are the gaps in your security that we've identified, and here's a recommended course of action to remediate it. So that's where the humans and the machines are really going to be working together, quite frankly moving at speed, being able to do that at a machine level but also being able to simplify a complex landscape. That is where we can actually gain traction. This is part of that ascendancy of the white hat, because it's allowing us to move in a more agile nature, it's allowing us to gain ground against threat actors, and quite frankly it allows us to start disrupting their business model. It makes for a more resilient network; in the future this leads to the whole notion of self-healing networks as well. Quite frankly, it just makes it a big pain for them: it disrupts their business model and forces them to go back to the drawing board.
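Here is a minimal sketch of what turning an adversary playbook into an automated blue-team response might look like. The group name, the technique labels (written ATT&CK-style), and the counter-measures are all fabricated for illustration; a real system would draw on curated threat intelligence.

```python
# Hypothetical adversary playbook: observed TTP sequence plus counters.
PLAYBOOKS = {
    "SampleRansomGroup": {
        "ttps": ["T1566 phishing", "T1078 valid accounts", "T1486 ransomware"],
        "blue_team": [
            "quarantine the sender domain at the mail gateway",
            "force credential resets and require MFA on touched accounts",
            "isolate affected segments and verify offline backups",
        ],
    },
}

def match_playbook(observed):
    """Match observed techniques to the best-known playbook.

    Returns (group, recommended_actions) or None if nothing overlaps.
    """
    best, best_overlap = None, 0
    for group, book in PLAYBOOKS.items():
        overlap = len(set(observed) & set(book["ttps"]))
        if overlap > best_overlap:
            best, best_overlap = group, overlap
    if best is None:
        return None
    # Respond to every stage of the matched playbook, not just what was seen:
    # if the group's habit is ransomware after credential abuse, act early.
    return best, PLAYBOOKS[best]["blue_team"]

hit = match_playbook(["T1566 phishing", "T1078 valid accounts"])
if hit:
    group, actions = hit
    print(f"Likely playbook: {group}")
    for step in actions:
        print(" -", step)
```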
>> Well, it also seems as though, when we start talking about 5G, that the speeds, the density, the reduced latency, the potential for a bad thing to propagate very quickly, demands that we have a more consistent, coherent response at both the machine level and the people level. Weave 5G into this conversation: what will be the impact of 5G on how these playbooks and AI start to come together over the next few years? >> Yeah, it's going to be very impactful. It's going to take a couple of years, and we're just at the dawn of 5G right now. But if you think of 5G, you're talking about a lot more volume, essentially. As we move to the future, we're entering into the age of 5G and edge computing, and 5G and edge computing are going to start eating the cloud, in the sense that more of the processing power that was in the cloud is starting to shift now towards edge computing, on-premises. So it is going to allow models like I was talking about, federated machine learning models, first from the white hats' point of view, where, again, I think we are in the driver's seat and in a better, more advantageous position, because we have more experience. Like I said, we've been doing this for years, where the black hats quite frankly haven't. Yes, they're toying with it, but not at the same level and scale that we have. But I'm always a realist; this isn't a completely rosy picture. It is optimistic that we are able to get this upper hand, but it has to be done right. If we think about the weaponization of 5G, that's also a very large problem. Last year we were talking about swarm networks: the idea of swarm networks is a whole bunch of devices that can connect to each other, share intelligence, and then act together to do something like a large-scale DDoS attack. That's absolutely in the realm of possibility when it comes to the weaponization of 5G as well. >> So one of the things, I guess the last question I want to ask you, is: you noted that these playbooks incorporate the human element in ways that are uniquely human. So, having CISO-readable recipes for how people have to respond, does that also elevate the conversation with the business, and does it allow us to do a better job of understanding risk, pricing risk, and appropriately investing to manage and assure the business against risk in the right way? >> Absolutely, it does. Because, going back to the playbooks, the more you know about the offense and their tools, the more you know about how much of a danger it is and what sort of targets they're after. I mean, if they're just trying to collect a little bit of information on you, to do some reconnaissance, that first-phase attack might not cause a lot of damage. But if this group is known to go in, hit hard, steal intellectual property, shut down critical business streams, do DDoS, which in the past, we know and we've seen, has caused four or five million dollars of damage from one breach, that's a very good way to start classifying risk. So yeah, it's all about really understanding the picture first on the offense, and that's exactly what these automated playbook guides are going to be doing on the blue team side. And again, not only from a C-suite perspective, certainly at the human level, but the nice thing about the playbooks is, because we've done the research, the threat hunting, and understood this at a machine level, they're also able to automate a lot of those, let's say, day-to-day decisions, making security operations centers (I'm talking about SecDevOps) much more efficient too. >> So we're talking about more density at the edge amongst these devices. I also want to bring back one last thought here, and that is, you said that historically some of the black hats have been able to act with a degree of impunity. They haven't necessarily been hit hard; there's been a lot of slapping on the wrist, as I think you said. Talk about how the playbooks and AI are going to allow us to more appropriately share data with others that can help, both now and on some of the forensics and enforcement side, namely the legal and policing world. How are we going to share the responsibility, or how is that going to change over the next few years, to incorporate some of the folks that actually can then turn a defense into a legal attack? >> Threat illumination, this is what I call it. So again, if we look at the current state, we've made great strides, great progress, working with law enforcement. We've set up public-private sector relationships; we need to do that, to have security experts working with law enforcement, and law enforcement working on their end to train prosecutors to understand cybercrime, and so forth. That foundation has been set, but it's still slow moving. There's only a limited amount of playbooks right now; it takes a lot of work to unearth them and really move the needle. What we need to do, again, like we're talking about, is to integrate artificial intelligence with playbooks. The more that we understand about groups, the more that we do this threat illumination, the more we uncover about them, the more we know about them, and by doing that we can start to form predictive models. Basically, I always say old habits die hard. So if an attacker goes in, hits a network, and they're successful following a certain sequence of patterns, they're likely going to follow that same sequence on their next victim, their next target. So the more that we understand about that, the more that we can forecast it from a mitigation standpoint. But also, by the same token, the more correlation we're doing on these playbooks, the more machine learning we're doing on these playbooks, the more we're able to do attribution, and attribution is the Holy Grail; it's always been the toughest thing to do when it comes to research. By combining the framework that we're using with playbooks and AI machine learning, it's a very, very powerful recipe, and that's what we need to get right to move forward in the right direction. >> Derek Manky, Fortinet's chief of security insights and threat alliances, thanks again for being on theCUBE. >> It's a pleasure; anytime, happy to talk. >> And I want to thank you for joining us for another CUBE conversation. I'm Peter Burris. See you next time. [Music]
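As a closing footnote to the point that old habits die hard, here is a toy sketch of the kind of sequence model that supports that forecasting: learn technique-to-technique transitions from past incidents attributed to a group, then predict the likely next move mid-intrusion. The incident data is fabricated; a real model would be trained on curated playbook telemetry.

```python
from collections import Counter, defaultdict

# Fabricated incident sequences attributed to one group (illustrative only).
incidents = [
    ["phish", "valid-accounts", "lateral-movement", "encrypt"],
    ["phish", "valid-accounts", "lateral-movement", "exfiltrate", "encrypt"],
    ["phish", "lateral-movement", "encrypt"],
]

# First-order Markov model: count observed technique transitions.
transitions = defaultdict(Counter)
for seq in incidents:
    for cur, nxt in zip(seq, seq[1:]):
        transitions[cur][nxt] += 1

def predict_next(current):
    """Most likely next technique, given what was just observed."""
    options = transitions.get(current)
    return options.most_common(1)[0][0] if options else None

# Mid-intrusion, credential abuse was just observed; stage mitigation for
# the step this group habitually takes next.
print(predict_next("valid-accounts"))  # -> 'lateral-movement'
```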
ENTITIES
Entity | Category | Confidence |
---|---|---|
Peter Burris | PERSON | 0.99+ |
2019 | DATE | 0.99+ |
four | QUANTITY | 0.99+ |
2020 | DATE | 0.99+ |
Derrick McKey | PERSON | 0.99+ |
Derek Manky | PERSON | 0.99+ |
November 2019 | DATE | 0.99+ |
40 | QUANTITY | 0.99+ |
five years | QUANTITY | 0.99+ |
Derek manky | PERSON | 0.99+ |
Silicon Valley | LOCATION | 0.99+ |
last year | DATE | 0.99+ |
third generation | QUANTITY | 0.99+ |
five million dollars | QUANTITY | 0.99+ |
first phase | QUANTITY | 0.99+ |
Derrick | PERSON | 0.99+ |
eight | QUANTITY | 0.98+ |
Palo Alto California | LOCATION | 0.97+ |
millions of dollars | QUANTITY | 0.97+ |
5g | QUANTITY | 0.97+ |
first | QUANTITY | 0.95+ |
five G | QUANTITY | 0.94+ |
each | QUANTITY | 0.94+ |
Dannette | ORGANIZATION | 0.93+ |
both | QUANTITY | 0.93+ |
decades | QUANTITY | 0.91+ |
Fortinet | ORGANIZATION | 0.9+ |
one | QUANTITY | 0.9+ |
Ford Annette | ORGANIZATION | 0.87+ |
one last thought | QUANTITY | 0.87+ |
three generations | QUANTITY | 0.85+ |
a couple of years | QUANTITY | 0.84+ |
last five years | DATE | 0.83+ |
a lot of work | QUANTITY | 0.8+ |
50 page | QUANTITY | 0.75+ |
sect DevOps | TITLE | 0.74+ |
one breach | QUANTITY | 0.73+ |
playbooks | COMMERCIAL_ITEM | 0.73+ |
past 10 20 30 years | DATE | 0.68+ |
years | QUANTITY | 0.66+ |
next few years | DATE | 0.63+ |
million | QUANTITY | 0.63+ |
about | QUANTITY | 0.62+ |
AET | ORGANIZATION | 0.6+ |
CSE | TITLE | 0.6+ |
couple of things | QUANTITY | 0.59+ |
about four | QUANTITY | 0.55+ |
2 | OTHER | 0.49+ |
generation 3 | QUANTITY | 0.46+ |
generation | OTHER | 0.46+ |
Blue | TITLE | 0.45+ |
1 | QUANTITY | 0.34+ |
Derek Manky, Fortinet | CUBEConversation, November 2019
our Studios in the heart of Silicon Valley Palo Alto California this is a cute conversation hello and welcome to the cube studios in Palo Alto California for another cube conversation where we go in-depth with thought leaders driving innovation across the tech industry I'm your host Peter Burris almost everybody's heard of the term black hat and white hat and it constitutes groups of individuals that are either attacking or defending security challenges it's been an arms race for the past 10 20 30 years as the world has become more digital and an arms race that many of us are concern that black hats appear to have the upper hand but there's new developments in technology and new classes of tooling that are actually racing to the aid of white hats and could very well upset that equilibrium in favor of the white hats to have that conversation about the Ascension of the white hats we're joined by Derek manky who's chief security insights and global threat alliances lead at Ford Annette dereck thanks for joining us for another cube conversation it's always a pleasure speaking yeah all right Derrick let's start what's going on afforda labs at four Dannette so 2019 we've seen a ton of development a lot pretty much on track with our predictions when we talked last year obviously a big increase in volume thanks offense of automation we're also seeing low volume attacks that are disrupting big business models I'm talking about targeted ransom attacks right you know criminals that are able to get into networks caused millions of dollars of damages thanks to critical revenue streams being out usually in the public sector we've seen a lot of this we've seen a rise in sophistication the adversary's are not slowing down AET s advanced evasion techniques are on the rise and so you know to do this and for the guard loves to be able to track this and map this we're not just relying on blogs anymore and you know 40 50 page white papers so we're actually looking at that playbooks now mapping the adversary's understanding their tools techniques procedures how they're operating why they're operating who are they hitting on and what what might be their next move so that's a big development on the intelligence sides here all right so I mentioned upfront this notion that the white hats may be ascending I'm implying a prediction here tell us a little bit about what we see on the horizon for that concept of the white hats ascending and specifically why is there reason to be optimistic yeah so as it's it's it's been gloomy for you for decades like he said and for many reasons right and I think those reasons there are no secrets I mean cyber criminals and black hats have always been able to move very you know with with agility right I'm sorry crime has no borders it's often a slap on the wrist that they get they can do a million things are on they don't care there's no ethics and quite frankly no no rules by right on the white hand side we've always had rules binding us we've had to we've had to take due care and we've had to move methodically which slows us down so a lot of that comes in place because of frameworks because of technology as well having to move um after it's in able to it with frameworks so specifically with you know making corrective action and things like that so those are the challenges that we face against but you know like thinking ahead to to 2020 particularly with the use of artificial intelligence everybody talks about AI you know it's it's impacted our daily lives but when it comes to 
cybersecurity on the white hat side um you know a proper AI and machine learning model it takes time you think it can take you years in fact in our case in our experience about four to five years before we can actually roll it out to production but the good news is that we have been investing and when I say we I'm just talking to the industry in general and wait we've been investing into this technology because quite frankly we've had to it takes a lot of data it takes a lot of smart minds a lot of investment a lot of processing power and that foundation has now been set over the last five years if we look at the blackcats it's not the case and why because they've been enjoying living off the land on a low-hanging truth path of least resistance because they've been able to so one of the things that's changing that equilibrium then is the availability of AI as you said it could take four or five years to get to a point we've actually got useful AI is it can have an impact I guess that means that we've been working on these things for four or five years what's the state of the art with AI as it pertains to security and are we seeing different phases of development start to emerge as we gain more experience with these technologies yeah absolutely and it's quite exciting right ai isn't this universal brain that's that's always good the world's problems that everyone thinks it might right it's very specific it relies on machine learning models each machine learning model is very specific to its task right I mean you know voice learning technology versus autonomous vehicle driving versus cybersecurity it's very different when it comes to the swimming purposes so so in essence the way I look at it you know there's three generations of AI we have generation 1 which was the past generation 2 which is a current where we are now and the generation 3 is where we're going so generation 1 was pretty simple right it was just a central processing lyrtle of machine learning model that'll take in data they'll correlate that data and then take action based off of it some simple inputs simple output right generation to where we're currently sitting is more advances looking at pattern recognition more advanced inputs are distributed models where we have the you know sensor is lying around networks I'm talking about even IOT devices security appliances and so forth but still report up to this centralized brain that's learning and acting on things but where things get really interesting moving forward in 2020 gets into this third generation where you have especially you know moving towards about computer sorry I'm computing where you have localized learning notes that are actually processing and learning so you can think of them as these mini brains instead of having this monolithic centralized brain you have individual learning modes individual brains doing their own machine learning that are actually connected to each other learning from each other speaking to each other it's a very powerful model we actually refer to this as federated machine learning in our industry so we've been first phase we simply use statistics to correlate events take action yeah now we're doing exceptions pattern recognition or exceptions and building patterns and in the future we're going to be able to further distribute at that so that increasingly the AI is going to work with other AI so that the aggregate this federated aggregate gets better I got that right yeah absolutely and what's the advantage of that a couple of things I'm 
it's very similar to the human immune system right I mean if you have you know if I were to cut my finger on my hand what's gonna happen well localized white blood cells get localized not nothing from a foreign entity or further away in my body are gonna come to the rescue and start healing right it's the same idea it's because it's interconnected within the nervous system it's the same idea of this federated machine learning right if security appliance is to detect a threat locally on-site its able to alert other security appliances so that they can actually take action on this and learn from that as well so connected machine learning models it means that that you know by properly implementing these these AI this federated AI machine learning models in an organization that that system is able to actually in an auto you may pick up what that threat is be able to act on that threat which means it's able to respond to these threats quicker shut them down to the point where it can be you know virtually instantaneous right before you know that the damage is done and bleeding starts happening so the common time safe common baseline is constantly getting better even as we're giving opportunities for local local managers to perform the work in response to local conditions so that takes us to the next notion of we've got this federated a la a I on the horizon how are people how is the role of people security professionals going to change what kind of recipes are they going to follow to ensure that they are working in a maximally productive way with these new capabilities these new federated capabilities especially as we think about the introduction of 5g and greater density of devices and faster speeds and lower latencies yeah so you know that the the the the world of cyber computer cyber security has always been incredibly complex so we're trying to simplify that and that's where again this this federated machine learning comes into place particularly with playbooks so you know if we look at 2019 and where we're going in 2020 we've put a lot of a lot of groundwork quite frankly into pioneering the work of playbooks right so when I say playbooks I'm talking about adversary's playbook knowing the offense knowing the tools techniques procedures the way that these cybercrime operations are moving right and the black hats are moving the more that we can understand that the more we can predict their next move and that centralized language right once you know that offense we can start to create automated Blue Team playbook so defensive play books that a human that that's a security technology can automatically integrate and respond to it but to getting back to your question we can actually create human readable sea cecil guides that can actually say look there's a threat here's why it's a problem here's here here are the gaps in your security that we've identified if you're some recommended course of action as my deity right so that's that's where the humans and the machines are really going to be worked working together and and quite frankly moving speed being able to do that a machine level but also being being able to simplify a complex landscape that is where we can actually gain traction right that this is part of that ascendancy of the white hat because because it's it's allowing us to move in a more agile nature it's an it's allowing us to gain ground against heat actors and quite frankly it allows us to start disrupting their business model right it's more resilient Network in the future this 
leads to the whole notion of self-healing networks as well that quite frankly just makes it a big pain it disrupts your business model it forces them to go back to the drawing board - well it also seems as though when we start talking about 5g that the speeds as I said the speeds the dentin see the reduced latency the the potential for a bad thing to propagate very quickly demands that we have a more consistent coherent response at both the Machine level but also at the people level we 5g into this conversation what's what will be the impact of 5g on how these playbooks and AI start to come together over the next few years yeah it's it's it's it's gonna be very impactful it's gonna take a couple of years and we're just at the dawn of 5g right now but if you think of 5g you're talking about a lot more volume essentially as we move to the future we're entering into the age of five G and edge computing and 5g and edge computing is gonna start eating the cloud in a sense that more of that processing power that was in the cloud is starting to shift now towards edge computing right this is that on-premises so it is gonna allow models like I was talking about federated machine learning models at first from the the white hats point of view which I again I think we are in the driver's seat and in a better you know more advantageous position here because we have more experience again like I said we've been doing this for years where the black hats quite frankly haven't yes they're toying with it but not to the same level at scale that we have but you know you know it's I'm always a realist this isn't a completely rosy picture I mean there it is optimistic that we are able to get this upper hand it has to be done right but if we think about the weaponization of 5g that's also very large problem right last year we're talking about sworn networks right the idea of sworn networks is a whole bunch of devices that can connect to each other share intelligence and then act to do something like a large-scale DDoS attack that's absolutely in the in the realm of possibility when it comes to the weaponization of 5g as well so one of the things I guess the last question I want to ask you is you noted that these play books incorporate the human element in ways that are uniquely human so having C so readable recipes for how people have to respond does that also elevate the conversation with the business and does allows us to do a better job of understanding risk pricing risk and appropriately investing to manage and assure the business against risk in the right way absolutely absolutely it does yeah yeah because the more you know about going back to the playbook some more you know about the office and their tools you know you the more you know about how much of a danger it is what sort of targets they're after right I mean if they're just going trying to look to to to collect a little bit of information on you know to do some reconnaissance that first phase attack might not cause a lot of damage but if this group is knowing to go in hit hard steal intellectual property shut down critical business streams to do s that in the past we know and we've seen has caused four or five million dollars from one you know from one breach that's a very good way to start classifying risk so yeah I mean it's all about really understanding the picture first on the offense and that's exactly what these automated playbook guides are going to be doing on the on the on the blue team and again not only from a CSE suite perspective 
certainly that on the human level but the nice thing about the play books is because we've done the research the threat hunting and understood this you know from a machine level it's also able to put a lot of those automated let's say day-to-day decisions making security operation center is so I'm talking about like sect DevOps much more efficient to so he's talking about more density at the edge amongst these devices I also want to bring back one last thought here and that is you said that historically some of the black hats have been able to act with a degree of impunity they haven't necessarily been hit hard there a lot of slapping on the wrist as I think you said talk about how the playbooks and AI is going to allow them to more appropriately share data with others that can help both now but also in some of the forensics and the the enforcement side namely the the legal and policing world how are we going to share the responsibility or how is that going to change over the next few years to incorporate some of the folks that actually can then turn a defense into a legal attack illumination this is what I call it right so again if we look at the current state we've made great strides great progress you know working with law enforcement so we've set up public private sector relationships we need to do that have security experts working with law enforcement law enforcement working on there and to train process prosecutors to understand cybercrime and so forth that foundation has been set but it's still slow-moving you know there's only a limited amount of playbooks right now it takes a lot of work to unearth and and and do to really move the needle what we need to do again like we're talking about is to integrate artificial intelligence with playbooks the more that we understand about groups the more that we do this threat illumination the more we have cover about them the more we know about them and by doing that we can start to form predictive models right basically I always say old habits die hard so you know if an attacker goes in hits a network and they're successful following a certain sequence of patterns they're likely going to follow that say that's that same sequence on their next victim or their next target so the more that we understand about that the more that we can forecast eight from a mitigation standpoint but the also by the same token the more correlation we're doing on these playbooks the more machine learning we're doing on this playbooks the more we were able to do attribution and attribution is the Holy Grail it's always been the toughest thing to do when it comes to research but by combining the framework that we're using with playbooks and AI machine learning it's a very very powerful recipe and that's that's what we need to get right and move forward in the right direction Derrick McKey ordinance chief of security insights and threat alliances thanks again for being on the cube it's a pleasure anytime happy to talk and I want to thank you for joining us for another cube conversation I'm Peter Burris see you next time [Music]
**Summary and Sentiment Analysis are not been shown because of improper transcript**
ENTITIES
Entity | Category | Confidence |
---|---|---|
Peter Burris | PERSON | 0.99+ |
2019 | DATE | 0.99+ |
four | QUANTITY | 0.99+ |
2020 | DATE | 0.99+ |
Derrick McKey | PERSON | 0.99+ |
Derek Manky | PERSON | 0.99+ |
November 2019 | DATE | 0.99+ |
40 | QUANTITY | 0.99+ |
five years | QUANTITY | 0.99+ |
Derek manky | PERSON | 0.99+ |
Silicon Valley | LOCATION | 0.99+ |
last year | DATE | 0.99+ |
third generation | QUANTITY | 0.99+ |
five million dollars | QUANTITY | 0.99+ |
first phase | QUANTITY | 0.99+ |
Derrick | PERSON | 0.99+ |
eight | QUANTITY | 0.98+ |
Palo Alto California | LOCATION | 0.97+ |
millions of dollars | QUANTITY | 0.97+ |
5g | QUANTITY | 0.97+ |
first | QUANTITY | 0.95+ |
five G | QUANTITY | 0.94+ |
each | QUANTITY | 0.94+ |
Dannette | ORGANIZATION | 0.93+ |
both | QUANTITY | 0.93+ |
decades | QUANTITY | 0.91+ |
Fortinet | ORGANIZATION | 0.9+ |
one | QUANTITY | 0.9+ |
Fortinet | ORGANIZATION | 0.87+ |
one last thought | QUANTITY | 0.87+ |
three generations | QUANTITY | 0.85+ |
a couple of years | QUANTITY | 0.84+ |
last five years | DATE | 0.83+ |
a lot of work | QUANTITY | 0.8+ |
50 page | QUANTITY | 0.75+ |
SecDevOps | TITLE | 0.74+ |
one breach | QUANTITY | 0.73+ |
playbooks | COMMERCIAL_ITEM | 0.73+ |
past 10 20 30 years | DATE | 0.68+ |
years | QUANTITY | 0.66+ |
next few years | DATE | 0.63+ |
million | QUANTITY | 0.63+ |
about | QUANTITY | 0.62+ |
AET | ORGANIZATION | 0.6+ |
CSE | TITLE | 0.6+ |
couple of things | QUANTITY | 0.59+ |
about four | QUANTITY | 0.55+ |
2 | OTHER | 0.49+ |
generation 3 | QUANTITY | 0.46+ |
generation | OTHER | 0.46+ |
Blue | TITLE | 0.45+ |
1 | QUANTITY | 0.34+ |
Matt Kixmoeller, Pure Storage | CUBEConversation, November 2019
(upbeat music) >> From our studios, in the heart of Silicon Valley, Palo Alto, California, this is a CUBE Conversation. >> Hello and welcome to the CUBE studio in Palo Alto, California for another CUBE Conversation, where we go in-depth with thought leaders driving innovation across the tech industry. I'm your host, Peter Burris. Every business wants Cloud, every business wants digital transformation, but the challenge is, what do you do with the data? How do you ensure that your data is set up so that you can take greater advantage of it, create more classes of business options in a digital world, while at the same time having the flexibility, the agility that you need from a storage and infrastructure standpoint to not constrain the business as it tries to move forward? It's a big topic that a lot of customers are facing. To have that conversation, we are joined by Matt Kixmoeller, who is the Vice President of Strategy at Pure Storage. Matt, welcome back to the CUBE. >> Thanks, Peter. >> So let's dispense with the necessaries. Update from Pure. >> It's a fun time at Pure, we just hit our tenth birthday, and we're fresh off the heels of our Accelerate Conference down in Austin, where we had a lot of good product news and talked a lot about what the next decade's going to be all about. >> So, one of the things you mentioned down in Austin was the notion of the modern data experience. I want to really highlight that notion of experience because that's kind of the intersection with the Cloud experience. So, talk a little bit about how the experience word in modern data and cloud is coming together. >> Absolutely, so ya know the Cloud has forever changed IT's expectation of how tech needs to work, and I think the most archaic layer in a lot of ways right now is storage, and so we've done a lot within our platform to modernize for Cloud, link to the Cloud, deliver an all-flash experience, but more interesting perhaps is also just reacting to the changing nature of how customers want to use storage and procure storage. >> And that means that they don't want to buy in advance of their needs. >> I think the key thing is as a service, on demand, right? And, ya know it's interesting when you consider both the usage and consumption as well as the purchase pattern, right? Um, if you think about the usage and consumption, it's all about on demand and automation, and perhaps one of the best examples I can give you is the transformation around containers. Um, ya know, we see all of our call home data from our customers, and how they use the arrays obviously, and your typical array has just a handful of management operations per day, where someone changes something, provisions storage, you name it. If you look at our container environment, ya know we have a tool called PSO, Pure Service Orchestrator, that orchestrates our storage as part of a container environment, and a PSO-based array does thousands of these operations a day. And so, it's very obvious that if you're having to deal with the fluidity of the container Cloud, there's no way you're going to have a human admin sitting there, clicking yes, yes, yes, or doing anything like that to provision storage. You have to plumb for automation from the beginning. >> So that's a great example of how the experience necessarily must be different, where you can't use a manual approach of doing things, you have to use more of an automated approach. So as you start to consider these issues, how is that informing the evolution of the modern data experience at Pure?
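To make the automation point concrete, here is a minimal sketch of container-driven storage provisioning using the official Kubernetes Python client. This is not Pure's PSO code; the storage class name `pure-block` (historically the class PSO registered for block storage) and the claim details are assumptions for illustration.

```python
# A minimal sketch of API-driven provisioning, assuming a Kubernetes cluster
# where a CSI driver such as PSO has registered a "pure-block" storage class.
from kubernetes import client, config

config.load_kube_config()  # reads ~/.kube/config; use load_incluster_config() inside a pod

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="analytics-scratch"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="pure-block",  # assumed class name, not confirmed by the source
        resources=client.V1ResourceRequirements(requests={"storage": "100Gi"}),
    ),
)

# One API call replaces the manual "yes, yes, yes" provisioning workflow;
# a container platform may issue thousands of these per day.
client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```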
I think it's an automation-first world, and you have to really prepare yourself by plumbing everything for automation, for APIs, for orchestration, as opposed to thinking about processes manually. Um, we've also seen, as a vendor, that it's changed how people want to consume, and you know, the concepts of more Opex-based, on-demand consumption are also coming to storage, and so, last year, we introduced, um, ya know, one of the first models in the industry in this regard that we called, at the time, ES2, and we broadened that and launched it again this year at Accelerate, expanding it to the entire Pure business, and called it Pure as a service. >> So, what we now have, at least from Pure, is the option to think about how I'm going to match my storage consumption with my storage spend, which is especially important in a world where, by some measures, data is growing, from a volume perspective, at 35 to 40% per annum. You don't want to have to buy four years of capacity out because you're growing that fast, and use it today. So as you think about this, what does Pure do next with the marriage of the Cloud experience and the modern data experience? >> Well, I think a key thing, particularly around this consumption world, is to give people flexibility between On-Prem and Cloud. Ya know, we did a lot at the show to announce news around how we're linking our On-Prem offerings with the Cloud with our Cloud Block Store offering to allow workloads to move back and forth, but what if I own On-Prem storage and I want to use the Cloud? And so another thing we did as part of Pure as a service is allow for that subscription to go either direction. You might be a customer that subscribes to 100 terabytes of Pure On-Prem, and then tomorrow you get the edict that says let's move half that to the Cloud. No problem, you can move 50 terabytes to the Cloud and not pay us another dime. The next day, you want to move back. You can do that again as well, and so we've thought about how we can really evolve those procurement processes such that they are just as agile and just as flexible as a Cloud model. >> Matt Kixmoeller, Pure Storage, thanks again for being on the CUBE. >> Thank you, Peter. >> And thank you for joining us for another CUBE conversation. I'm Peter Burris. See you next time. (upbeat music)
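The either-direction subscription is easy to picture as a toy model. The sketch below is purely illustrative of the accounting logic Kixmoeller describes, not Pure's actual billing implementation; the class and method names are invented.

```python
# An illustrative toy model of a subscription whose committed capacity can
# shift between on-prem and cloud without changing the total commitment.
class CapacitySubscription:
    def __init__(self, committed_tb: int):
        self.committed_tb = committed_tb
        self.on_prem_tb = committed_tb  # start fully on-prem
        self.cloud_tb = 0

    def move_to_cloud(self, tb: int) -> None:
        """Rebalance toward the cloud; the commitment is unchanged, so no extra charge."""
        if tb > self.on_prem_tb:
            raise ValueError("cannot move more than the on-prem balance")
        self.on_prem_tb -= tb
        self.cloud_tb += tb

    def move_on_prem(self, tb: int) -> None:
        """Rebalance back on-prem, again within the same commitment."""
        if tb > self.cloud_tb:
            raise ValueError("cannot move more than the cloud balance")
        self.cloud_tb -= tb
        self.on_prem_tb += tb

# The scenario from the conversation: subscribe to 100 TB on-prem,
# move half to the cloud, then move it back the next day.
sub = CapacitySubscription(100)
sub.move_to_cloud(50)
sub.move_on_prem(50)
```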
SUMMARY :
Peter Burris talks with Matt Kixmoeller, Vice President of Strategy at Pure Storage, fresh off the company's tenth birthday and its Accelerate conference in Austin. Kixmoeller argues that the cloud has forever changed IT's expectations of how technology should work, and that storage remains the most archaic layer. Containers make the case for automation: a typical array sees a handful of manual management operations per day, while a PSO-orchestrated array performs thousands, so storage must be plumbed for APIs and automation from the beginning. On consumption, Pure as-a-Service extends the earlier ES2 model across the whole business, letting a customer subscribe to capacity and shift it between on-premises arrays and the cloud in either direction without paying more.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Peter Burris | PERSON | 0.99+ |
November 2019 | DATE | 0.99+ |
Austin | LOCATION | 0.99+ |
Peter | PERSON | 0.99+ |
Matt Kixmoeller | PERSON | 0.99+ |
Matt | PERSON | 0.99+ |
100 terabytes | QUANTITY | 0.99+ |
50 terabytes | QUANTITY | 0.99+ |
last year | DATE | 0.99+ |
Palo Alto, California | LOCATION | 0.99+ |
Cloud Block Store | TITLE | 0.99+ |
thousands | QUANTITY | 0.99+ |
tomorrow | DATE | 0.99+ |
four years | QUANTITY | 0.99+ |
tenth birthday | QUANTITY | 0.99+ |
Accelerate | ORGANIZATION | 0.99+ |
Pure | ORGANIZATION | 0.98+ |
35, 40% | QUANTITY | 0.98+ |
CUBE | ORGANIZATION | 0.98+ |
both | QUANTITY | 0.98+ |
this year | DATE | 0.98+ |
next decade | DATE | 0.97+ |
CUBE Conversation | EVENT | 0.96+ |
one | QUANTITY | 0.96+ |
Silicon Valley | LOCATION | 0.95+ |
today | DATE | 0.95+ |
PSO | TITLE | 0.94+ |
first models | QUANTITY | 0.93+ |
next day | DATE | 0.93+ |
half | QUANTITY | 0.89+ |
Pure Storage | ORGANIZATION | 0.88+ |
Cloud | TITLE | 0.87+ |
Opex | ORGANIZATION | 0.77+ |
a day | QUANTITY | 0.72+ |
ES2 | ORGANIZATION | 0.71+ |
Pure Business | ORGANIZATION | 0.69+ |
Pure Service | TITLE | 0.69+ |
first | QUANTITY | 0.6+ |
Vice President | PERSON | 0.58+ |
Prem | ORGANIZATION | 0.51+ |
Derek Manky, Fortinet - Office of CISO | CUBEConversation, November 2019
(upbeat jazz music) [Woman] - From our Studios in the heart of Silicon Valley, Palo Alto, California, this is a CUBE conversation. >> Hello and welcome to theCUBE Studios in Palo Alto, California, for another CUBE conversation, where we go in depth with thought leaders driving innovation across the tech industry. I'm your host Peter Burris. Almost everybody's heard of the terms black-hat and white-hat, which describe the groups of individuals that are either attacking or defending against security threats. It's been an arms race for the past 10, 20, 30 years as the world's become more digital. And an arms race that many of us are concerned the black-hats appear to have the upper hand in. But there are new developments in technology and new classes of tooling that are actually racing to the aid of white-hats and could very well upset that equilibrium in favor of the white-hats. To have that conversation about the ascension of the white-hats, we're joined by Derek Manky, who's the Chief Security Insights & Global Threat Alliances lead at Fortinet. Derek, thanks for joining us for another CUBE conversation. >> It's always a pleasure speaking with you. [Peter] - All right. [Derek] - Happy to be here. >> Derek, let's start, what's going on at FortiLabs at Fortinet? >> So 2019, we've seen a ton of development, a lot pretty much on track with our predictions when we talked last year. Obviously a big increase in volume, thanks to offensive automation. We're also seeing low-volume attacks that are disrupting big business models. I'm talking about targeted ransom attacks, right: criminals that are able to get into networks and cause millions of dollars of damage thanks to critical revenue streams being held hostage. Usually in the public sector we've seen a lot of this. We've seen a rise in sophistication; the adversaries are not slowing down. AETs, the advanced evasion techniques, are on the rise. And so, you know, to do this at FortiGuard Labs, to be able to track this and map this, we're not just relying on logs anymore and, you know, 40, 50 page white papers. We're actually looking at playbooks now, mapping the adversaries, understanding their tools, techniques, procedures, how they're operating, why they're operating, who they're hitting and what might be their next moves. So that's a big development on the intelligence side too. >> All right, so let's confront this notion that the white-hats might be ascending. I'm implying a prediction here. Tell us a little bit about what we see on the horizon for that concept of the white-hats ascending and specifically, why is there a reason to be optimistic? >> Yeah, so it's been gloomy for decades, like you said. And for many reasons, right, and I think those reasons are no secrets. I mean, cyber criminals and black-hats have always been able to move, you know, with agility, right. Cyber crime has no borders. It's often a slap on the wrist that they get. They can do a million things wrong, they don't care; there's no ethics and quite frankly no rules binding them, right. On the white-hat side, we've always had rules binding us; we've had to take due care and we've had to move methodically, which slows us down. So, a lot of that comes into place because of frameworks, because of technology as well, having to move only when frameworks enable us to, specifically when taking corrective action and things like that. So, those are the challenges that we're up against.
But you know, thinking ahead to 2020, particularly with the use of artificial intelligence: everybody talks about AI, it's impacted our daily lives, but when it comes to cyber security, on the white-hat side a proper AI and machine learning model takes time. It can take years. In fact in our case, in our experience, it's about four to five years before we can actually roll it out to production. But the good news is that we have been investing, and when I say we, I'm talking about the industry in general and white-hats; we've been investing in this technology because quite frankly we've had to. It takes a lot of data, it takes a lot of smart minds, a lot of investment, a lot of processing power, and that foundation has now been set over the last five years. If we look at the black-hats, it's not the case. And why? Because they've been enjoying living off the land on low-hanging fruit. Path of least resistance, because they have been able to. >> So, one of the things that's changing that equilibrium, then, is the availability of AI, and as you said, it could take four, five years to get to a point where we've actually got useful AI that can have an impact. I guess that means that we've been working on these things for four, five years. What's the state of the art with AI as it pertains to security, and are we seeing different phases of development start to emerge as we gain more experience with these technologies? >> Yeah, absolutely. And it's quite exciting, right. AI isn't this universal brain that solves the world's problems that everyone thinks it might be, right. It's very specific; it relies on machine learning models, and each machine learning model is very specific to its task, right. I mean, you know, voice recognition technology versus autonomous vehicle driving versus cyber security: they're very different when it comes to these learning purposes. So, in essence, the way I look at it, you know, there's three generations of AI. We have generation one, which was the past; generation two, which is the current, where we are now; and generation three is where we're going. So, generation one was pretty simple, right. It was just a central processing, alerting machine learning model that would take in data, correlate that data and then take action based off of it. Some simple inputs, simple output, right. Generation two, where we're currently sitting, is more advanced. It's looking at pattern recognition, more advanced inputs, distributed models where we have sensors lying around networks, I'm talking about even IoT devices, security appliances and so forth, that still report up to this centralized brain that's learning and acting on things. But where things get really interesting moving forward in 2020 is the third generation where, especially moving towards cloud computing, sorry, edge computing, you have localized learning nodes that are actually processing and learning. So you can think of them as these mini brains. Instead of having this monolithic centralized brain, you have individual learner nodes, individual brains doing their own machine learning, that are actually connected to each other, learning from each other, speaking to each other. It's a very powerful model. We actually refer to this as federated machine learning in our industry.
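For readers who want the federated idea in concrete terms, here is a minimal federated-averaging sketch in Python with NumPy. It assumes each learner node fits a small linear model on data only it can see and shares just its weights, never the raw data; the model and data are stand-ins for illustration, not FortiGuard's production system.

```python
# A minimal federated-averaging sketch: local "mini brains" train on private
# data and only exchange model weights with each other.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One node refines the shared model on data it alone can see."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

# Three edge nodes, each with its own locally observed features and labels.
nodes = [(rng.normal(size=(50, 4)), rng.normal(size=50)) for _ in range(3)]
global_w = np.zeros(4)

for _ in range(10):
    # Each node trains locally (in parallel, in a real deployment)...
    local_ws = [local_update(global_w, X, y) for X, y in nodes]
    # ...then the nodes exchange weights and average them, so every node
    # benefits from what the others learned without sharing raw data.
    global_w = np.mean(local_ws, axis=0)
```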
>> So we've been, first phase, we simply used statistics to correlate events and take action; now we're doing exceptions, pattern recognition, or exceptions and building patterns; and in the future we're going to be able to further distribute that, so that increasingly the AI is going to work with other AI, so that the aggregate, this federated aggregate, gets better. Have I got that right? >> Yeah absolutely. And what's the advantage of that? A couple of things. It's very similar to the human immune system, right. If I were to cut my finger on my hand, what's going to happen? Well, localized white blood cells, localized, not anything from further away in my body, are going to come to the rescue and start healing that, right. And it's because it's interconnected within the nervous system. It's the same idea with this federated machine learning model, right. If a security appliance detects a threat locally on site, it's able to alert other security appliances so that they can actually take action on this and learn from that as well. So, connected machine learning models. It means that by properly implementing these federated AI machine learning models in an organization, that system is actually able, in an auto-immune way, to pick up what that threat is and act on that threat, which means it's able to respond to these threats quicker, or shut them down to the point where it's, you know, virtually instantaneous, right, before the damage is done and the bleeding starts happening. >> So the common baseline is continuously getting better even as we're giving opportunities for local managers to perform the work in response to local conditions. So that takes us to the next notion: we've got this federated AI on the horizon; how is the world of people, of security professionals, going to change? What kind of recipes are they going to follow to ensure that they are working in a maximally productive way with these new federated capabilities, especially as we think about the introduction of 5G and greater density of devices and faster speeds and lower latencies? >> Yeah so, you know, the world of cyber security has always been incredibly complex. So we're trying to simplify that, and that's where, again, this federated machine learning comes into place, particularly with playbooks. So if we look at 2019 and where we're going in 2020, we've put in a lot of groundwork, quite frankly, pioneering the work of playbooks, right. When I say playbooks I'm talking about adversary playbooks: knowing the offense, knowing the tools, techniques, procedures, the way that these cyber crime operations and the black-hats are moving. The more that we can understand that, the more we can predict their next move, and with that centralized language, right, once you know that offense, we can start to create automated blue-team playbooks, defensive playbooks, that security technology can automatically integrate and respond to. But getting back to your question, we can also create human-readable CISO guides that can actually say, "Look, there's a threat," "here's why it's a problem," "here are the gaps in your security that we've identified," "here's some recommended course of action as an idea too."
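A hedged sketch of the adversary-playbook idea follows: a known group's tools, techniques and procedures (TTPs) as an ordered sequence, matched against observed events to produce both an automated response and a human-readable guide. The group name, TTP labels and recommended actions are hypothetical illustrations, not Fortinet's actual playbook format.

```python
# A sketch of matching observed activity against an adversary playbook and
# emitting a readable blue-team guide. All names below are invented.
ADVERSARY_PLAYBOOKS = {
    "group-alpha": ["spearphish", "credential-dump", "lateral-movement", "ransom"],
}

BLUE_TEAM_ACTIONS = {
    "spearphish": "quarantine the message and reset exposed credentials",
    "credential-dump": "force organization-wide password rotation",
    "lateral-movement": "segment the affected VLAN and audit east-west traffic",
    "ransom": "isolate hosts and restore from offline backups",
}

def advise(observed):
    """Match observed TTPs to known playbooks and emit a readable guide."""
    guide = []
    for group, sequence in ADVERSARY_PLAYBOOKS.items():
        seen = [t for t in sequence if t in observed]
        if not seen:
            continue
        nxt = sequence[len(seen)] if len(seen) < len(sequence) else None
        guide.append(f"Threat: activity consistent with {group}.")
        guide.extend(f"Action: {BLUE_TEAM_ACTIONS[t]}" for t in seen)
        if nxt:
            guide.append(f"Likely next move: {nxt}. Pre-stage defenses now.")
    return guide

print("\n".join(advise(["spearphish", "credential-dump"])))
```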
Right, so that's where the humans and the machines are really going to be working together and quite frankly moving at speed, being able to do that at machine level but also being able to simplify a complex landscape. That is where we can actually gain traction, right. This is part of that ascendancy of the white-hat, because it's allowing us to move in a more agile nature, it's allowing us to gain ground against the attackers and quite frankly, it allows us to start disrupting their business model more, right. It's a more resilient network. In the future this leads to the whole notion of self-healing networks as well, which quite frankly just makes it a big pain for the attackers: it disrupts their business model, it forces them to go back to the drawing board too. >> Well, it also seems as though, when we start talking about 5G, that the speeds, as I said the speeds, the density, the reduced latency, the potential for a bad thing to propagate very quickly, demand that we have a more consistent, coherent response, at both the machine level but also the people level. Let's weave 5G into this conversation. What will be the impact of 5G on how these playbooks and AI start to come together over the next few years? >> Yeah, it's going to be very impactful. It is going to take a couple of years, and we're just at the dawn of 5G right now. But if you think of 5G, you're talking about a lot more volume. Essentially, as we move to the future, we're entering into the age of 5G and edge computing. And 5G and edge computing are going to start eating the cloud, in the sense that more of that processing power that was in the cloud is starting to shift now towards edge computing, right, on premises. So, A, it is going to allow models like I was talking about, federated machine learning models, and from the white-hats' point of view, which again I think puts us in the driver's seat and a better, more advantageous position here, because we are more experienced. Again, like I said, we've been doing this for years, which the black-hats quite frankly haven't. Yes, they're toying with it, but not at the same level and skill as we have. But, you know, (chuckles) I'm always a realist. This isn't a completely rosy picture; I mean, it is optimistic that we are able to get this upper hand, but it has to be done right. Because if we think about the weaponisation of 5G, that's also a very large problem, right. Last year we were talking about swarm networks, right: the idea of swarm networks is a whole bunch of devices that can connect to each other, share intelligence and then act together to do something like a large-scale DDoS attack. That's absolutely in the realm of possibility when it comes to the weaponisation of 5G as well. >> So one of the things, I guess the last question I want to ask you, is that you noted that these playbooks incorporate the human element in ways that are uniquely human. So, having CISO-readable recipes for how people have to respond, does that also elevate the conversation with the business, and allow us to do a better job of understanding risk, pricing risk and appropriately investing to manage and insure the business against risk in the right way? >> Absolutely. Absolutely it does, yeah. Because, going back to the playbooks, the more you know about the offense and their tools, the more you know about how much of a danger it is, what sort of targets they're after, right.
I mean, if they're just going in trying to collect a bit of information, you know, to do some reconnaissance, that first-phase attack might not cause a lot of damage. But if this group is known to go in, hit hard, steal intellectual property, shut down critical business streams through DoS, which in the past we know and we've seen has caused four, five million dollars of damage from one breach, that's a very good way to start classifying risk. So yeah, I mean, it's all about really understanding the picture first on the offense, and that's exactly what these automated playbook guides are going to be doing on the blue team. And again, not only from a SOC perspective, certainly that on the human level, but the nice thing about the playbooks is, because we've done the research, the threat hunting, and understood this, you know, from a machine level it's also able to put a lot of those automated, let's say day-to-day decisions in place, making security operations centers, so I'm talking about like SecDevOps, much more efficient too.
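As a rough illustration of playbook-informed risk classification, the sketch below weights a threat by what the matched group has historically done rather than by the first-phase activity alone. The group profiles, dollar figures and tier thresholds are made up for the example.

```python
# An illustrative sketch of classifying risk by an adversary group's known
# objectives and historical impact. All figures are hypothetical.
GROUP_PROFILES = {
    "recon-only-crew": {"objective": "reconnaissance", "past_loss_usd": 50_000},
    "ip-theft-group": {"objective": "IP theft + DoS", "past_loss_usd": 4_500_000},
}

def classify_risk(group):
    profile = GROUP_PROFILES[group]
    loss = profile["past_loss_usd"]
    # Tier by historical damage, not just by what has been observed so far.
    tier = "critical" if loss >= 1_000_000 else "moderate" if loss >= 100_000 else "low"
    return (f"{group}: objective={profile['objective']}, "
            f"historical impact=${loss:,} -> {tier} risk")

for g in GROUP_PROFILES:
    print(classify_risk(g))
```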
>> Derek Manky, Fortinet's Chief of Security Insights & Threat Alliances, thanks again for being on theCUBE. >> It's a pleasure. Anytime. Happy to talk. >> And I want to thank you for joining us for another CUBE conversation. I'm Peter Burris, see you next time. (upbeat jazz music) >> Yeah I thought it was pretty good. [Man] - That was great. [Derek] - Yeah, yeah.
SUMMARY :
Peter Burris talks with Derek Manky, who leads Security Insights & Global Threat Alliances at Fortinet, about why the white-hats may finally be ascending. Manky describes three generations of security AI: centralized correlation, distributed pattern recognition, and the coming generation of federated machine learning, in which localized learner nodes train on local data and share what they learn, much like an immune system responding to a cut. Paired with adversary playbooks that map black-hat tools, techniques and procedures, this allows automated blue-team playbooks, human-readable CISO guides, risk classification based on a group's past behavior, and predictive models of an attacker's next move. 5G and edge computing will amplify both the defensive models and threats such as weaponised swarm networks, while integrating AI with playbooks promises better threat illumination and, ultimately, attribution.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Derek | PERSON | 0.99+ |
Peter Burris | PERSON | 0.99+ |
Derek Manky | PERSON | 0.99+ |
November 2019 | DATE | 0.99+ |
Fortinet | ORGANIZATION | 0.99+ |
2019 | DATE | 0.99+ |
2020 | DATE | 0.99+ |
Last year | DATE | 0.99+ |
40 | QUANTITY | 0.99+ |
four | QUANTITY | 0.99+ |
Peter | PERSON | 0.99+ |
FortiLabs | ORGANIZATION | 0.99+ |
last year | DATE | 0.99+ |
Palo Alto, California | LOCATION | 0.99+ |
third generation | QUANTITY | 0.99+ |
FortiGuard Labs | ORGANIZATION | 0.99+ |
first phase | QUANTITY | 0.98+ |
five years | QUANTITY | 0.98+ |
both | QUANTITY | 0.97+ |
four, five million dollars | QUANTITY | 0.97+ |
50 page | QUANTITY | 0.97+ |
CUBE | ORGANIZATION | 0.97+ |
first | QUANTITY | 0.96+ |
CISO | ORGANIZATION | 0.95+ |
one | QUANTITY | 0.94+ |
Silicon Valley, Palo Alto, California | LOCATION | 0.93+ |
three generations | QUANTITY | 0.93+ |
Each machine | QUANTITY | 0.92+ |
Global Threat Alliances | ORGANIZATION | 0.91+ |
about four | QUANTITY | 0.9+ |
Security Insights & Threat Alliances | ORGANIZATION | 0.9+ |
generation three | QUANTITY | 0.89+ |
one breach | QUANTITY | 0.89+ |
one last thought | QUANTITY | 0.87+ |
last five years | DATE | 0.86+ |
Generation two | QUANTITY | 0.84+ |
generation one | QUANTITY | 0.82+ |
decades | QUANTITY | 0.82+ |
theCUBE Studios | ORGANIZATION | 0.81+ |
years | QUANTITY | 0.77+ |
20 | QUANTITY | 0.76+ |
CISO | ORGANIZATION | 0.69+ |
AET | ORGANIZATION | 0.65+ |
millions of dollars | QUANTITY | 0.64+ |
SOC | ORGANIZATION | 0.63+ |
next few years | DATE | 0.62+ |
Chief | PERSON | 0.62+ |
SecDevOps | TITLE | 0.62+ |
years | DATE | 0.61+ |
Security Insights | ORGANIZATION | 0.57+ |
5G | OTHER | 0.55+ |
30 years | QUANTITY | 0.54+ |
couple | QUANTITY | 0.54+ |
Premis.it | ORGANIZATION | 0.53+ |
5G | QUANTITY | 0.51+ |
past 10 | DATE | 0.48+ |
playbooks | ORGANIZATION | 0.43+ |
5G | ORGANIZATION | 0.36+ |