
Search Results for December of 2020:

2018-01-26 Wikibon Research Quick Take #1 with David Floyer


 

(mid-tempo electronic music)

>> Hi, I'm Peter Burris. And once again, this is another Wikibon research quick take. I'm here with David Floyer. David, Amazon did something interesting this week. What is it? What's the impact?

>> Amazon, and I mean by that Amazon, not AWS, have put into place something following on from their warehouse automation. They now have a store which is completely automated. You walk in, you pick something off the shelf, and you walk out. They've done all of the automation: lots and lots of cameras everywhere, lots of sophisticated work. It's taken them more than four years of hard work on AI to get this done. The implication is, I think, that this is both exciting, and something that people who are not doing anything must be really fearful about. This is an exciting time, and other people must get on with the same thing, which is automation of the business processes that are important to them.

>> Retail or not, one of the things, very quickly, that we've observed is that the process of automating employee activities is slow. The process of automating, or providing automation for, customer activities is even slower. We're really talking about Amazon introducing technologies to provide the Amazon brand to the customer in an automated way. Big deal.

>> Absolutely, big, big deal.

>> All right, this has been a Wikibon research quick take with David Floyer. Thanks, David.

(upbeat electronic music)

Published Date : Jan 26 2018

SUMMARY :

Amazon, and by that David Floyer means Amazon the retailer, not AWS, has opened a completely automated store: you walk in, pick something off the shelf, and walk out, with cameras and more than four years of AI work doing the rest. Floyer and Peter Burris discuss why this is both exciting and a warning, since automating business processes, especially customer-facing ones, has historically been slow, and firms that have not started down the same path should be concerned.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
David Floyer | PERSON | 0.99+
David | PERSON | 0.99+
Peter Burris | PERSON | 0.99+
Amazon | ORGANIZATION | 0.99+
AWS | ORGANIZATION | 0.99+
2018-01-26 | DATE | 0.99+
more than four years | QUANTITY | 0.99+
both | QUANTITY | 0.97+
Wikibon | ORGANIZATION | 0.97+
this week | DATE | 0.96+
#1 | QUANTITY | 0.76+
lots | QUANTITY | 0.46+

Wikibon Research Meeting | October 20, 2017


 

(electronic music)

>> Hi, I'm Peter Burris, and welcome once again to Wikibon's weekly research meeting from the CUBE studios in Palo Alto, California. This week we're going to build upon a conversation we had last week about the idea of different data shapes, or data tiers. For those of you who watched last week's meeting, we discussed the idea that data across very complex distributed systems, featuring significant amounts of work associated with the edge, is going to fall into three classifications, or tiers. The primary tier is where the sensor data that provides direct and specific experience about the things the sensors are observing lives; that data will then signal work or expectations or decisions to a secondary tier that aggregates it. So what is the sensor saying? And then the gateways will provide a modeling capacity, a decision-making capacity, but also a signal to tertiary tiers that increasingly look across a system-wide perspective on how the overall aggregate system is performing. So: very, very local at the edge; gateways at the level of multiple edge devices inside a single business event; and then up to a system-wide perspective on how all those business events aggregate and come together. Now, what we want to do this week is translate that into what it means for some of the new analytics technologies that are going to provide much of the intelligence against each tier of this data. As you can imagine, the characteristics of the data are going to have an impact on the characteristics of the machine intelligence that we can expect to employ. So that's what we want to talk about this week. So Jim Kobielus, with that as a backdrop, why don't you start us off? What are we actually thinking about when we think about machine intelligence at the edge?

>> Yeah, Peter. At the edge, the devices in the primary tier acquire fresh environmental data through their sensors, so what happens at the edge? In the extreme model, we think about autonomous engines, and let me just go there very briefly. Basically, there are a number of workloads that take place at the edge, the data workloads. The data is (mumbles) or ingested, it may be persisted locally, and that data then drives local inferences that might be using deep learning chipsets embedded in that device. It might also trigger what are called actuations: actions taken at the edge. If it's a self-driving vehicle, for example, an action may be to steer the car, or brake the car, or turn on the air conditioning, or whatever it might be. And then last but not least, there might be some degree of adaptive learning or training of those algorithms at the edge, or the training might be handled more often up at the secondary or tertiary tier. The tertiary tier, at the cloud level, usually has visibility across a broad range of edge devices, ingests data that originated from all of the many different edge devices, and is the focus of modeling, of training, of the whole DevOps process, where teams of skilled professionals make sure that the models are trained to a point where they are highly effective for their intended purposes. Then those models are sent right back down to the secondary and primary tiers, where inferences are made, you know, 24 by seven, based on those latest and greatest models. That's the broad framework in terms of the workloads that take place in this fabric.
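To make Jim's description of the edge workload loop concrete, here is a minimal sketch of a primary-tier device: ingest a sensor reading, run a local inference, actuate if needed, and periodically pull a centrally retrained model back down. Every name here (the collaborators, the registry, the method signatures) is an illustrative assumption, not anything specified in the discussion.

```python
import time

class EdgeDevice:
    """Sketch of a primary-tier device: local inference plus actuation.

    The model is trained centrally (secondary/tertiary tiers) and pushed
    down; the device only runs inferences against the latest copy.
    """

    def __init__(self, model, sensor, actuator, model_registry):
        self.model = model                    # locally persisted, centrally trained
        self.sensor = sensor
        self.actuator = actuator
        self.model_registry = model_registry  # tertiary-tier model store

    def step(self):
        reading = self.sensor.read()            # ingest fresh environmental data
        decision = self.model.infer(reading)    # local, low-latency inference
        if decision.requires_action:
            self.actuator.apply(decision)       # "actuation": act at the edge
        return reading, decision

    def maybe_update_model(self):
        # Training happens up the stack; the edge just checks for and
        # installs the latest published model version.
        latest = self.model_registry.latest_version()
        if latest > self.model.version:
            self.model = self.model_registry.fetch(latest)

    def run(self, update_every=1000):
        n = 0
        while True:
            self.step()
            n += 1
            if n % update_every == 0:
                self.maybe_update_model()
            time.sleep(0.01)  # pacing only; a real device would be event-driven
```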
>> So Neil, let me talk to you, because we want to make sure that we don't confuse the nature of the data and the nature of the devices, which may be driven by economics or physics or even preferences inside a business. There is a distinction that we have to always keep track of: some of this may go up to the cloud, some of it may stay local. What are some of the elements that are going to indicate what types of actual physical architectures or physical infrastructures will be built out as we start to find ways to take advantage of this very worthwhile and valuable data that's going to be created across all of these different tiers?

>> Well, first of all, we have a long way to go with sensor technology and capability. So when we talk about sensors, we really have to define classes of sensors and what they do. However, I really believe that we'll begin to think in a way that approximates human intelligence about the same time as airplanes start to flap their wings. (Peter laughs) So let's have our expectations and our models reflect that, so that they're useful, instead of being, you know, hypothetical.

>> That's a great point, Neil. In fact, I'm glad you said that, because I strongly agree with you. But having said that, the sensors are going to go a long way. There is a distinction that needs to be made, though. It may be that at some point in time, a lot of data moves up to a gateway, or a lot of data moves up to the cloud. It may be that a given application demands it. It may be that the data being generated at the edge has a lot of other useful applications we haven't anticipated. So we don't want to presume that there's going to be some hard wiring of infrastructure today. We do want to presume that we better understand the characteristics of the data that's being created and operated on, today. Does that make sense to you?

>> Well, there's a lot of data, and we're just going to have to find a way to not touch it or handle it any more times than we have to. We can't be shifting it around from place to place, because it's too much. But I think the market is going to define a lot of that for us.

>> So George, if we think about the natural place where the data may reside, where the processes may reside, give us a sense of what kinds of machine learning technologies or machine intelligence technologies are likely to be especially attractive at the edge, dealing with this primary information.

>> Okay, I think that's actually a softball, which is: we've talked before about bandwidth and latency limitations, meaning we're going to have to do automated decisioning at the edge, because it's got to be fast, low latency. We can't move all the data up to the cloud, for bandwidth limitations. By contrast, up in the cloud, where we enhance our models, either through continual learning of the existing ones or rethinking them entirely, that's actually augmented decisions, and augmented means it's augmenting a human in the process, where, most likely, a human is adding additional contextual data, performing simulations, and optimizing the model for different outcomes, or enriching the model.

>> It may in fact be a crucial element, or crucial feature, of the training, by in fact validating that the action taken by the system was appropriate.
>> Yes, and I would add to that, actually. People are going between two extremes, where some say, "Okay, so all the analytics has to be done in the cloud," while Wikibon, David Floyer, and Jim Kobielus have been pioneering the notion that we have to do a lot more at the client. But you might look back at client-server computing, where the client was focused on presentation and the server was focused on data integrity. Similarly, here, the edge or client is going to be focused on fast inferencing, and the server is going to do many of the things that were associated with a DBMS and data integrity: reproducibility of decisions and of the model for auditing, security, versioning, orchestration in terms of distributing updated models. So we're going to see the roles of the edge and the cloud rhyme with what we saw in client-server. Neither one goes away; they augment each other.

>> So, Jim Kobielus, one of the key issues there is going to be the gateway, and the role that the gateway plays, and specifically here, we talked about the nature of, again, the machine intelligence that's going to be operating more on the gateway. What are some of the characteristics of the work that's going to be performed at the gateway, which kind of has oversight of groupings or collections of sensor and actuator devices?

>> Right, good question. So the perfect example that everybody's familiar with now, of a gateway in this environment, is a smart home hub. A smart home hub, just for the sake of discussion, has visibility across two or more edge devices. It could be a smart speaker, it could be an HVAC system that is sensor-equipped, and so forth. What it does, the role it performs, a smart hub of any sort, is that it acquires data from the edge devices; the edge devices might report all of their data directly to the hub, or the sensor devices might also do inferences and then pass the results of those inferences on to the hub. Regardless, what the hub does is: A, it aggregates the data across those different edge devices over which it has visibility and control; B, it may perform its own inferences based on models that look out across an entire home in terms of patterns of activity. Then the hub might take various actions autonomously, by itself, without consulting an end user or anything else. It might take action in terms of beefing up the security, adjusting the HVAC, adjusting the lights in the house, or whatever it might be, based on all that information streaming in in real time. Possibly, its algorithms will allow it to determine which of that data shows an anomalous condition that deviates from historical patterns. Those kinds of determinations, whether something is anomalous or a usual pattern, are often made at the hub level, 'cause it's maintaining sort of a homeostatic environment, as it were, within its own domain. And that hub might also communicate upstream to a tertiary tier that has oversight, let's say, of a smart city environment, where everybody in that city might have a connection into some broader system that, say, regulates utility usage across the entire region to avoid brownouts and that kind of thing. So that gives you an idea of what the role of a hub is in this kind of environment. It's really a controller.
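A hedged sketch of the hub role Jim describes: aggregate readings from the edge devices it oversees and flag values that deviate from each device's historical pattern. The z-score test is just one simple way to operationalize "anomalous"; real hub logic would be vendor-specific, and all names here are invented for illustration.

```python
from collections import defaultdict, deque
from statistics import mean, stdev

class SmartHub:
    """Secondary-tier controller: aggregates the edge devices it oversees
    and flags readings that deviate from each device's historical pattern."""

    def __init__(self, history=500, threshold=3.0):
        self.history = defaultdict(lambda: deque(maxlen=history))
        self.threshold = threshold  # z-score beyond which a reading is anomalous

    def ingest(self, device_id, value):
        past = self.history[device_id]
        anomalous = False
        if len(past) >= 30:  # need enough history to estimate a pattern
            mu, sigma = mean(past), stdev(past)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                anomalous = True
        past.append(value)
        return anomalous

hub = SmartHub()
for temp in [21.0, 21.3, 20.9] * 20:   # the usual pattern for this device
    hub.ingest("hvac-1", temp)
print(hub.ingest("hvac-1", 35.0))       # True: deviates from history
```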
>> So, Neil, if we think about some of the issues that people really have to consider as they start to architect what some of these systems are going to look like, we need to factor in what the data is doing now, but also ensure that we build into the entire system enough of a buffer so that we can anticipate and take advantage of future ways of using that data. Where do we draw the fine line between "we only need this data for this purpose now" and "geez, let's ensure that we keep our options open so that we can use as much data as we want at some point in the future"?

>> Well, that's a hard question, Peter, but I would say that it may turn out that for this detailed data coming from sensors, the historical aspect of it isn't really that important. If the things you might be using that data for are more current, then you probably don't need to capture all of it. On the other hand, there have been many, many occasions historically where data has been used for something other than its original purpose. My favorite example was scanners in grocery stores, which were meant to improve the checkout process, not have to put price stickers on everything, manage inventory, and so forth. It turned out that some smart people like IRI and some other companies said, "We'll buy that data from you, and we're going to sell it to advertisers," and all sorts of things. We don't know the value of this data yet; it's too new. So I would err on the side of being conservative, and capture and save as much as I could.

>> So what we need to do is an optimization of some form: how much is it going to cost to transmit the data, versus what kind of future value, or what kinds of options on future value, might there be in that data? That is, as you said, a hard problem, but we can start to conceive of an approach to characterizing that ratio, can't we?

>> I hope so. I know that, personally, when I download 10 gigabytes of data, I pay for 10 gigabytes of data, and it doesn't matter if it came from a mile away or 10,000 miles away. So there have to be adjustments for that. There are also ways of compressing this data, because sensor data, I'm sure, is going to be fairly sparse and redundant, and can be compressed; you can do things like RLL encoding, which takes all the zeroes out, and that sort of thing (a minimal version is sketched just after this exchange). There are going to be a million practices that we'll figure out.

>> So as we imagine ourselves in this schema of edge, hub, and tertiary, or primary, secondary, and tertiary data, and we start to envision the role that data's going to play in how we build these architectures and infrastructures, it raises an interesting question from an economic standpoint: what do we anticipate are going to be the classes of devices that are going to exploit this data? David Floyer, who's not here today, hope you're feeling better David, has argued pretty forcibly that over the next few years we'll see a lot of advances made in microprocessor technology. Jim, I know you've been thinking about this a fair amount. What types of function

>> Jim: Right.

>> might we actually see being embedded in some of these chips that software developers are going to utilize to actually build some of these more complex and interesting systems?
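Before Jim's answer, Neil's compression remark deserves one concrete illustration. What he calls RLL encoding is, in the sense described ("takes all the zeroes out"), run-length encoding; a minimal sketch, with a toy stream of our own invention:

```python
def run_length_encode(samples):
    """Compress a sparse sensor stream into (value, count) pairs.
    Long runs of zeroes, common in sparse sensor data, collapse to a
    single pair, which is the saving Neil alludes to."""
    encoded = []
    for s in samples:
        if encoded and encoded[-1][0] == s:
            encoded[-1][1] += 1
        else:
            encoded.append([s, 1])
    return encoded

def run_length_decode(encoded):
    return [v for v, n in encoded for _ in range(n)]

stream = [0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 3, 3, 0, 0]
packed = run_length_encode(stream)
assert run_length_decode(packed) == stream   # lossless, unlike model-based reduction
print(packed)  # [[0, 4], [7, 1], [0, 5], [3, 2], [0, 2]]
```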
>> Yeah. First of all, one of the trends we're seeing in the chipset market for deep learning, just to stay there for a moment, is that deep learning chipsets traditionally, and when I say traditionally I mean the last several years, have been dominated by GPUs, graphics processing units. NVIDIA, of course, is the primary provider of those, and NVIDIA has been around for a long time as a gaming solution provider. Now, what's happening with GPU technology, and in fact the latest generation of NVIDIA's architecture shows where it's going, is more deep-learning-optimized capabilities at the chipset level. They're called tensor cores, and I don't want to bore you with all the technical details, but the whole notion of--

>> Oh, no, Jim, do bore us. What is it? (Jim laughs)

>> Basically, deep learning is based on doing high-speed, fast matrix math. So fundamentally, tensor cores do high-velocity, fast matrix math, and the industry as a whole is moving toward embedding more tensor cores directly into the chipset, a higher density of tensor cores. NVIDIA in its latest generation of chips has done that. They haven't totally taken out the gaming-oriented GPU capabilities, but there are competitors, a growing list, more than a dozen competitors on the chipset side now. We're all going down a road of embedding far more tensor processing units into every chip. Google is well known for its TPUs, tensor processing units, in its chip architecture, but they're one of many vendors going down that road. The bottom line is that the chipset itself is being optimized for the core function that CPUs, and really GPU technology, and even ASICs and FPGAs, were not traditionally geared to do, which is deep learning at high speed, with many cores, to do things like face recognition and video and voice recognition freakishly fast. And really, that's where the market is going in terms of the enabling underlying chipset technology. What's likely to happen in the chipsets of the year 2020 and beyond is that they'll be predominantly tensor core processing units, but they'll be systems on a chip that, and I'm just talking about the future, not saying it's here now, include a CPU to manage a real-time OS, like a real-time Linux or whatnot, along with highly dense tensor core processing units. And these will be low-power, low-cost commodity chips that will be embedded in everything, from your smartphone, to the smart appliances in your home, to your smart cars, and so forth. Everything will have these commodity chips, 'cause suddenly everything will be an edge device, and they will be able to provide more than augmentation: automation, all these things we've been talking about, in ways that are not necessarily autonomous, but can operate with a great degree of autonomy, to help us human beings live our lives in an environmentally contextual way at all points in time.

>> Alright, Jim, let me cut you off there, because you said something interesting: a lot more autonomy. George, what does it mean that we're going to dramatically expand the number of devices that we're using, but not expand the number of people that are going to be in place to manage those devices? When we think about applying software technologies to these different classes of data, we also have to figure out how we're going to manage those devices and that data.
What are we looking at from an overall IT operations management approach to handling a geometrically greater increase in the number of devices and the amount of data being generated? (Jim starts speaking)

>> Peter: Hold on, hold on. George?

>> There are a couple of dimensions to that. Let me start on the modeling side, which is: we need to make data scientists more productive, or rather, we need to democratize the ability to build models. And again, going back to the notion of simulation, there's this merging of machine learning and simulation, where machine learning tells you the correlations among factors that influence an answer, whereas the simulation actually lets you play around with those correlations to find the causations. By merging them, we make it much, much more productive to find models that are both accurate and optimized for different outcomes.

>> So that's the modeling issue.

>> Yes.

>> Which is great. Now, as we think about some of the data management elements, what are we looking at from a data management standpoint?

>> Well, and this is something Jim has talked about, but, you know, we had DevOps for essentially merging the skills of the developers with the operations folks, so that there's joint responsibility for keeping stuff live.

>> Well, what about things like digital twins, automated processes? We've talked a little bit about breadth versus depth, ITOM. What do you think? Are all these devices going to reveal themselves, or are we going to have to put in place a capacity for handling all of these things in some consistent, coherent way?

>> Oh, okay, in terms of managing.

>> In terms of managing.

>> Okay. So, digital twins were interesting because they pioneered, or they made well known, a concept called, essentially, a semantic network, or knowledge graph, which is just a way of abstracting a whole bunch of data models and machine learning models that represent the structure and behavior of a device. In IIoT terminology, that was an industrial device, like a jet engine. But that same construct, the knowledge graph and the digital twin, can be used to describe the application software and the infrastructure, both middleware and hardware, that make up this increasingly sophisticated network of learning and inferencing applications. And the reason this is important, even though it sounds arcane, is that we're now building vastly more sophisticated applications over great distances, and the only way we can manage them is to make the administrators far more productive. The state of the art today is alerts on the performance of the applications, and alerts on, essentially, the resource intensity of the infrastructure. By combining that type of monitoring with the digital twin, we can get an essentially much higher fidelity reading on when something goes wrong. We don't get false positives. In other words, if something goes wrong, it's not like the fairy tale of the pea underneath the mattress, where all the way up through 10 mattresses, you know it's uncomfortable. Here, it'll pinpoint exactly what goes wrong, rather than cascading all sorts of alerts, and that is the key to productivity in managing this new infrastructure.

>> Alright guys, so let's go into the action item round here.
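Before the action items, George's point about combining monitoring with a digital twin can be sketched as a small dependency graph: correlate an alert storm against the twin's structure and report the one upstream component likely at fault, rather than the cascade, his "pea under the mattress" problem. The graph, component names, and alert list are all invented for illustration.

```python
# Toy digital twin: which component depends on which, expressed as a graph.
# An alert storm is reduced to the alerting component(s) with no alerting
# dependencies beneath them, the likely root cause, instead of paging on all.
DEPENDS_ON = {
    "checkout-app":      ["inference-service"],
    "inference-service": ["gateway-7"],
    "gateway-7":         ["camera-12", "badge-reader-3"],
    "camera-12":         [],
    "badge-reader-3":    [],
}

def root_causes(alerting, graph=DEPENDS_ON):
    """Return alerting components none of whose dependencies also alert."""
    alerting = set(alerting)
    return {
        node for node in alerting
        if not any(dep in alerting for dep in graph.get(node, []))
    }

# The camera failing makes everything above it alert too:
storm = ["checkout-app", "inference-service", "gateway-7", "camera-12"]
print(root_causes(storm))   # {'camera-12'}: pinpointed, not cascaded
```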
What I'd like to do now is ask each of you for the action item that you think users are going to have to employ to actually get some value, and start down this path of utilizing machine intelligence across these different tiers of data to build more complex, manageable application infrastructures. So, Jim, I'd like to start with you. What's your action item?

>> My action item is related to what George just said: model centrally, deploy in a decentralized fashion. For machine learning, use digital twin technology to do your modeling against device classes in a more coherent way. There's not one model that will fit all of the devices; use digital twin technology to structure the modeling process, so you can tune a model to each class of device out there.

>> George, action item.

>> Okay. Recognize that there's a big difference between edge and cloud, as Jim said. But I would elaborate: the edge is automated, low-latency decision making, extremely data intensive. Recognize that the cloud is not just where you trickle up a little bit of data; it's where you're going to use simulations, with a human in the loop, to augment--

>> System wide, system wide.

>> System wide, with a human in the loop, to augment how you evaluate new models.

>> Excellent. Neil, action item.

>> I would have people start on the right side of the diagram and start to think about what their strategy is and where they fit into these technologies. Be realistic about what they think they can accomplish, and do the homework.

>> Alright, great. So let me summarize our meeting this week. This week we talked about the role that the three tiers of data we've described will play in the use of machine intelligence technologies as we build increasingly complex and sophisticated applications. We've talked about the difference between primary, secondary, and tertiary data. Primary data is the immediate experience of sensors: analog translated into digital, about a particular thing or set of things. Secondary data is then aggregated off of those sensors for business event purposes, so that we can make a business decision, often automatically, down at an edge scenario, as a consequence of signals we're getting from multiple sensors. And finally, tertiary data looks at a range of gateways and a range of systems, and considers things at a system-wide level, for modeling, simulation, and integration purposes. Now, what's important about this is not just better understanding the data, and not just understanding the classes of technologies we use, though that will remain important. For example, we'll see increasingly powerful, low-cost, device-specific, Arm-like processors pushed to the edge, and a lot of competition at the gateway, or secondary, data tier. It's also important, however, to think about the nature of the allocations, and where the work is going to be performed, across those different classifications, especially as we think about machine learning, machine intelligence, and deep learning. Our expectation is that we will see machine learning being used on all three levels, where machine intelligence is applied against all forms of data to perform a variety of different work, but the work that will be performed will be naturally associated with, and related to, the characteristics of the data being aggregated at that point. In other words, we won't see simulations, which are characteristic of tertiary data, George, at the edge itself.
We will, however, see edge devices often reduce significant amounts of data, from perhaps a video camera or something else, to make relatively simple decisions that may involve complex technologies, to allow a person into a building, for example. So our expectation is that over the next five years we're going to see significant new approaches to applying increasingly complex machine intelligence technologies across all different classes of data, but we're going to see them applied in ways that fit the patterns associated with that data, because it's the patterns that drive the applications. So our overall action item: it's absolutely essential that businesses consider and conceptualize what machine intelligence can do, but be careful about drawing huge generalizations about what the future of machine intelligence is. The first step is to parse out the characteristics of the data, driven by the devices that are going to generate it and the applications that are going to use it, and understand the relationship between the characteristics of that data and the types of machine intelligence work that can be performed. What is likely is that an impedance mismatch between data and expectations of machine intelligence will generate a significant number of failures, which often will set businesses back years in taking full advantage of some of these rich technologies. So, once again, we want to thank you this week for joining us here for the Wikibon weekly research meeting. I want to thank George Gilbert, who is here in the CUBE studio in Palo Alto, and Jim Kobielus and Neil Raden, who were both on the phone. And we want to thank you very much for joining us here today, and we look forward to talking to you again in the future. So this is Peter Burris, from the CUBE's Palo Alto studio. Thanks again for watching Wikibon's weekly research meeting. (electronic music)

Published Date : Oct 20 2017

SUMMARY :

Building on last week's discussion of data tiers, the Wikibon team examines what primary, secondary, and tertiary data mean for machine intelligence: inference and actuation at the edge, aggregation and anomaly detection at the gateway, and modeling, simulation, and training in the cloud. Jim Kobielus surveys the deep learning chipset market, where tensor-core-dense, low-cost systems on a chip are headed for every device; George Gilbert describes merging machine learning with simulation and using digital twins and knowledge graphs to pinpoint faults instead of cascading alerts; Neil Raden argues for capturing data conservatively broadly, since its future value is unknown. Action items: model centrally and deploy in a decentralized fashion, treat edge and cloud as complementary roles, and match machine intelligence work to the characteristics of the data at each tier.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Jim | PERSON | 0.99+
George Gilbert | PERSON | 0.99+
Jim Kobielus | PERSON | 0.99+
Peter Burris | PERSON | 0.99+
Neil | PERSON | 0.99+
George | PERSON | 0.99+
Neil Raden | PERSON | 0.99+
Peter | PERSON | 0.99+
David Floyer | PERSON | 0.99+
David | PERSON | 0.99+
October 20, 2017 | DATE | 0.99+
10 gigabytes | QUANTITY | 0.99+
last week | DATE | 0.99+
10 mattresses | QUANTITY | 0.99+
10,000 miles | QUANTITY | 0.99+
Palo Alto | LOCATION | 0.99+
CUBE | ORGANIZATION | 0.99+
This week | DATE | 0.99+
NVIDIA | ORGANIZATION | 0.99+
Wikibon | ORGANIZATION | 0.99+
Google | ORGANIZATION | 0.99+
Palo Alto, California | LOCATION | 0.99+
second | QUANTITY | 0.99+
two extremes | QUANTITY | 0.99+
today | DATE | 0.99+
two | QUANTITY | 0.99+
Linux | TITLE | 0.99+
this week | DATE | 0.99+
first step | QUANTITY | 0.99+
both | QUANTITY | 0.98+
one model | QUANTITY | 0.98+
each class | QUANTITY | 0.98+
three tiers | QUANTITY | 0.98+
each | QUANTITY | 0.98+
24 | QUANTITY | 0.98+
one | QUANTITY | 0.96+
a mile | QUANTITY | 0.96+
more than a dozen competitors | QUANTITY | 0.95+
IRI | ORGANIZATION | 0.95+
Wikibon | PERSON | 0.94+
seven | QUANTITY | 0.94+
first | QUANTITY | 0.92+
CUBE Studio | ORGANIZATION | 0.86+
2020 | DATE | 0.85+
couple dimensions | QUANTITY | 0.79+
Palo Alto Studio | LOCATION | 0.78+
single business event | QUANTITY | 0.75+
tertiary tier | QUANTITY | 0.74+
last several years | DATE | 0.71+
years | DATE | 0.7+
twin | QUANTITY | 0.64+

Wikibon Research Meeting | Systems at the Edge


 

>> Hi, I'm Peter Burris, and welcome once again to Wikibon's weekly research meeting on theCUBE. (funky electronic music) This week we're going to discuss something that we actually believe is extremely important, and that, if you listened to the recent press announcements this week from Dell EMC, the industry increasingly is starting to believe is important. And that is: how are we going to build systems that are dependent upon what happens at the edge? The past 10 years have been dominated by the cloud. How are we going to build things in the cloud? How are we going to get data to the cloud? How are we going to integrate things in the cloud? While all those questions remain very relevant, increasingly the technology is becoming available, the systems and the design elements are becoming available, and the expertise is now more easily brought together, so that we can start attacking some extremely complex problems at the edge. A great example is the popular notion of what's happening with automated driving; that is a clear example of huge design requirements at the edge. Now, to understand these issues, we have to be able to generalize certain attributes of the differences in the resources, whether they be hardware or software, but increasingly, especially from a digital business transformation standpoint, the differences in the characteristics of the data. And that's what we're going to talk about this week: how are different types of data, data that's generated at the edge, data that's generated elsewhere, going to inform decisions about the classes of infrastructure that we're going to have to build and support as we move forward with this transformation taking place in the industry? So to kick it off, Neil Raden, I want to turn to you. What are some of those key data differences, and what, taxonomically, do we regard as what we call primary, secondary, and tertiary data? Neil.

>> Well, primary data comes in from sensors. It's a little bit different than anything we've ever seen in terms of doing analytics. Now, I know that operational systems do pick up primary data, credit card transactions, something like that. But scanner data, not scanner data, I mean sensor data, is really designed for analysis. It's not designed for record keeping. And because it's designed for analysis, we have to have a different way of treating it than we do other things. If you think about a data lake, everything that falls into that data lake has come from somewhere else; it's been used for something else. But this data is fresh, and that requires that we really treat it carefully. Now, the retention and stewardship of that requires a lot of thought, and I don't think industry has really thought that through a great deal. But look, sensor data is not new; it's been around for a long time. What's different now is the volume and the lack of latency in it. But any organization that wants to get involved in it really needs to be thinking about what the business purpose of it is. If you're just going into IoT, as we call it generically, to save a few bucks, you might as well not bother. It really is something that will change your organization. Now, what we do with this data is a real problem, because for the most part these sensors are going to be remote, and that means they're going to generate a lot of data. So what do we do with it? Do we reduce it at the site? That's been one suggestion.
There's an issue that any model for reduction could conceivably lose data that may be important somewhere down the line. Can the data be reconstituted through metadata or some sort of reverse algorithms? You know, perhaps. Those are the things we really need to think about. My humble opinion is that the software and the devices need to be a single unit, and for the most part, they need to be designed by vendors, not by individual IT shops.

>> So David Floyer, let's pick up on that: software and devices as a single unit, designed more by vendors who have specific domain expertise, turned into solutions, and presented to business. What do you think?

>> Absolutely, I completely concur with that. The initial attempts at using the sensors and connecting to the sensors were very simple things, like, for example, the Nest thermostats. And that's worked very well. But if you look at it over time, the processing for that has gone into the home, into your Apple TV device or your Alexa or whatever it is. So that's coming down, and now it's getting even closer to the edge. In the future, our proposition is that it will get even closer, and vendors will put together solutions, all types of solutions appropriate to the edge, that will be taking not just one sensor but multiple sensors, collecting that data together, just like in the autonomous car, for example, where you take the lidars and the radars and the cameras, etcetera. We'll be taking that data, we'll be analyzing it, and we'll be making decisions based on that data at the edge. And vendors are going to play a crucial role in providing these solutions to IT, and to the OT, and to many other parts. And a large part of the value will be in the expertise that they develop in this area.

>> So, as a rule of thumb, when I was growing up and learned to drive, I was told to always keep five car lengths between you and whatever's in front of you, at whatever speed you're traveling. What you just described, David, is that there will be sensors, and there will be processing that takes place in that automated car, that isn't using that type of rule of thumb, but knows something about tire temperature, and therefore the coefficient of friction on the tires, knows something about the brakes, knows what the stopping power needs to be at that speed, and therefore what buffer needs to be between it and whatever else is around it.

>> Absolutely.

>> This is no longer a rule of thumb. This is physics, and a deep understanding of what it's going to require to stop that car.

>> And on top of that, what you'll also want to know, outside of your car, is: what type of car is in front of you? Is that an autonomous car, or is that somebody being driven by Peter? In which case, you keep 10 lengths behind.

>> But that's not going to be primary data. Is that what we mean by secondary data?

>> No, that's still primary, because you're going to set up a connection between you and that other car. That car is going to tell you, "I'm primary to you." That's primary data.

>> Here's what I mean. It is, correctly, primary data, but from a design standpoint, the car in that case is emitting a signal, right? So even though to your car it's primary data, one of the things that's interesting from a design standpoint is that that car is now transmitting a digital signal about its state that's relevant to you, so that you can combine that

>> Correct.

inside, effectively, a gateway inside your car.

>> Yes.
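Peter's contrast between the five-car-lengths rule of thumb and what the car actually computes can be made concrete with basic physics: idealized braking distance is v^2 / (2 * mu * g), where mu is the tire-road friction coefficient the sensors are estimating. A minimal sketch; real vehicles model far more than this.

```python
G = 9.81  # gravitational acceleration, m/s^2

def braking_distance(speed_ms, friction_coefficient):
    """Idealized braking distance: v^2 / (2 * mu * g).
    The edge system senses mu (tire temperature, road surface, brake state)
    instead of assuming a fixed rule of thumb."""
    return speed_ms ** 2 / (2 * friction_coefficient * G)

v = 100 * 1000 / 3600                        # 100 km/h in m/s (~27.8 m/s)
print(round(braking_distance(v, 0.9), 1))    # dry road: ~43.7 m
print(round(braking_distance(v, 0.3), 1))    # icy road: ~131.1 m
```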
>> So there's external information that is in fact digital, coming in and combining with the sensors about what's happening in your car. Have I got that right?

>> Absolutely. That to me is the secondary tier, and then you've got the tertiary data, which is the big picture about the traffic conditions

>> Routes.

and the weather and the routes and that sort of thing, which is at that much higher cloud level, yes.

>> So David Vellante, we always have to make sure, as we have these conversations: we've talked a bit about this data, we've talked a little bit about the classes of work that are going to be performed at the different levels. How do we ensure that we sustain the business problem in this conversation?

>> So, I mean, I think Wikibon's done some really good work on describing what this sort of data model looks like, from edge devices, where you have primary data, to the gateways, where you're doing aggregated data, to the cloud, where maybe the serious modeling occurs. And my assertion would be that the technology to support that elongating and increasingly distributed data model has been maturing for a decade, and the real customer challenge is not just technical; it's really understanding a number of factors, and I'll name some. Where in the distributed data value chain are you going to differentiate? And how does the data you're capturing in that data pipeline contribute to monetization? What are the data sources, who has access to that data, and how do you trust that data, interpret it, and act on it with confidence? There are significant IP ownership and data protection issues. Who owns the data? Is it the device manufacturer, is it the factory, etcetera? What's the business model that's going to allow you to succeed? What skill sets are required to win? And really importantly, what's the shape of the ecosystem that needs to form to go to market and succeed? These are the things that I think the customers I talk to are really struggling with.

>> Now, the one thing I'd add to that, and I want to come back to it, is the idea of who is ultimately bonding the solution, because this is going to end up in a court of law. But let's come to this IP issue, George. Let's talk about how local data is going to enter into the flow of analytics, and that question of who owns data, because that's important, and then have the question about some of the ramifications and liabilities associated with this.

>> Okay, well, just on the IP protection, there's the idea that a vendor has to take sort of whole-product responsibility for the solution. That vendor is probably going to be dealing with multiple competitors when they're sort of enabling, say, a self-driving car, or other, you know, edge or smaller devices. The key thing is that a vendor will say, you know, the customer keeps their data, and the customer gets the insights from that data. But that data is informing, in the middle, a black box, an analytic black box. It's flowing through it; that's where the insights come out, on the other side. But the data changes that black box as it flows through it. So that is something where, you know, when the vendor provides a whole solution to Mercedes, that solution will be better when they come around to BMW. And the customers should make sure that what BMW gets the benefit of goes back to Mercedes. That's on the IP thing. I want to add one more thing on the tertiary side, which is: when you're close to the edge, it's much more data intensive.
When we've talked about the reduction in data and the real-time analytics, that's at the edge; at the tertiary level it's going to be more a case where time is a bigger factor and you're essentially running a simulation. It's more compute intensive. And so you're doing optimizations of the model, and those flow back as context to inform both the gateway and the edge.

>> David Floyer, I want to turn it to you. So we've talked a little bit about the characteristics of the data, a great list from Dave Vellante about some of the business considerations, and we will get very quickly, in a second, to some of the liability issues, 'cause that's going to be important. But take us through what George just said about the tertiary elements. Now we've got all the data laid out; how is that going to map to the classes of devices? And we'll then talk a bit about some of the impacts on the industry. What's it going to look like?

>> So if we take the primary edge first, and you take that as a unit, you'll have a number of sensors within that.

>> So just to be clear, this is data about the real world that's coming into the system to be processed?

>> Yes. So it'll have, for example, cameras. Let's take a simple example: making sure that bad people don't get into your site. You'll have a camera there which will do facial recognition. People will have a badge of some sort, so you'll read that badge; you may want to take their weight; you may want to have an infrared sensor on them so that you can tell their exact distance. So, a whole set of sensors that the vendor will put together for the job of ensuring you don't get bad guys in there. And what you're ensuring is that bad guys don't get in there, that's obviously the one very important thing, and also that you don't go and--

>> Stop good guys from going in.

stop good guys from going in there. So those are the two characteristics

>> The false-positive problem.

the false positives. Those are the two things you're trying to design for--

>> At the primary edge.

at the primary edge. And there's a massive amount of data going into that, which is only going to be reduced to very, very little data coming up to the next level (a sketch of this reduction follows this exchange): this guy came here, these were his characteristics, he didn't look well today, maybe he should see a nurse, or whatever other information you can gather from that will go up to that secondary level. And then that'll also be a record to HR, maybe, about who has arrived and what time they arrived, or to the manufacturing systems about who is there and who has the skills to do a particular job. There are multiple uses of that data, which can then be used for differentiation or whatever else, from that secondary layer into local systems, and equally it can be pushed up to the higher level, which is: how much power should we be generating today, and other higher-level concerns.

>> We now have 4,000 people in the building, so air conditioning is going to look like this. Or it could be combined with other types of data, like: over time we're going to need new capacity, or payroll, or whatever else it might be.

>> And each level will have its own type of AI. So you've got AI at the edge, which is to produce a specific result, and then there's AI to optimize at the secondary level, and then AI to optimize bigger things at the tertiary level.

>> So we're going to talk more about some of the AI next week, but for right now we're talking about classes of devices that are high performance, high bandwidth, cheap, constrained, proximate to the event.

>> Yep.
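David's door-security example is a good picture of the reduction ratio at the primary tier: megabytes of camera frames in, one small event record out. A minimal sketch; every field and collaborator name here is invented for illustration.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AccessEvent:
    """The few bytes sent up to the secondary tier, distilled from the
    mass of raw sensor data processed at the primary edge."""
    person_id: str
    admitted: bool
    face_match: float     # confidence from the local facial-recognition model
    flags: tuple          # e.g. anomalies worth a human's attention
    at: str

def admit(frames, badge_scan, ir_profile, face_model):
    """Consume raw sensor data locally; emit only the decision record."""
    score = max(face_model.match(f, badge_scan.person_id) for f in frames)
    ok = badge_scan.valid and score > 0.95
    flags = () if ir_profile.looks_normal else ("refer-to-nurse",)
    return AccessEvent(
        person_id=badge_scan.person_id,
        admitted=ok,
        face_match=round(score, 3),
        flags=flags,
        at=datetime.now(timezone.utc).isoformat(),
    )
# The frames themselves (the bulk of the data) never leave the edge.
```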
>> Gateways that are capable of taking that information and starting to synthesize it for the business, for other business types of things; and then tertiary systems, true private cloud, for example, although we may have very sizable things at the gateway as well,

>> There will be true private clouds.

that are capable of integrating data in a broader way. What's the impact on the industry? Are we going to see IT firms roll in and control this sweeping, (man chuckles) as Neil said, trillions of new devices? Is this all going to be Intel? Is it all going to be, you know, looking like clients and PCs?

>> My strong advice is that the devices themselves will be done by extreme specialists in those areas, who will need very deep technological understanding of the devices themselves, the sensors themselves, and the AI software relevant to that. Those are the people that are going to make money in that area. And you're much better off partnering with those people and letting them solve the problems, and you solve, as Dave said earlier, the ones that can differentiate you within your processes, within your business. So yes, leave that to other people is my strong advice. And from an IT point of view, just don't do it yourself.

>> Well, the gateway, it sounds like you're suggesting, is where that boundary's going to be.

>> Yes. That's where the boundary is.

>> And the IT technologies may increasingly go down to the edge, but it's not clear that the IT vendor expertise goes down to the edge

>> Correct.

to the same degree.

>> Correct.

>> So, Neil, let's come back to you. When we think about this arrangement of data, you know, how the use cases are going to play out, and where the vendors are, we still have to address this fundamental challenge that Dave Vellante brought up: who's going to end up being responsible for this? Now, you've worked in insurance. What does that mean from an overall business standpoint? What kinds of failure rates are we going to accommodate? How is this going to play out? What do you think?

>> Well, I'd like to point out that I worked in insurance 30 years ago. (men chuckling)

>> Male Voice: I didn't want to date ya, Neil. (men chuckling)

>> Yeah, the old reliable life insurance company. Anyway, one of the things David was just discussing sounded a lot to me like complex event processing. And I'm wondering where the logical location of the event processing needs to be, because it needs some prior data to do CEP; you have to have something to compare it against. But if you're pushing it all back to the tertiary level, there's going to be a lot of latency, and the whole idea of CEP was, you know, right now. So that I'm a little curious about. But I'm sorry, what was your question?

>> Well no, let's address that. So CEP, David, I agree. But I don't want to turn this into a general discussion of CEP; it's got its own set of issues.

>> It's clear there have got to be complex models created, and those are going to be created in a large environment, almost certainly a tertiary-type environment. And those are going to be created by the vendors of those particular problem solvers at the primary edge. To a large extent, they're going to provide solutions in that area, and they're going to have to update those. And so, they are going to have to have lots and lots of test data for themselves, and maybe some companies will provide test data, if it's convenient for them, for a fee or whatever it is, to those vendors.
But the primary model itself is going to be at the tertiary level, and it's going to be pushed down to the primary level itself.

>> I'm going to make an assertion here. The way I think about this, Neil, is that the data coming off at the primary level is going to be the sensor data: the sensor said it was good. Then that is recorded as an event, we let somebody into the building, and that's going to be a key feature of what happens at the secondary level. I think a lot of complex event processing is likely to end up at that secondary level.

>> Absolutely.

>> Then the data gets pushed up to the tertiary level, and it becomes part of an overall social understanding of the business; it's behavioral data. So increasingly: what did we do as a consequence of letting this person into the building? Oh, we tried to stop him. That's more of the behavioral data that ends up at the tertiary level, and we'll still do complex event processing there. It's going to be interesting to see whether or not we end up with CEP directly in the sensor tower. We might, under certain circumstances; that's a cost question, though. So let me now turn it, in the last few minutes here, Neil, back to you. At the end of the day, we've seen for years the question of how much security is enough security. And businesses said, "Oh, I want to be 100% secure," and sometimes the CISO said, "We've got that. You gave me the money; we've now made you 100% secure." But we know it's not true. The same thing is going to exist here. How much fidelity is enough fidelity down at the edge? How do we ensure that business decisions can be translated into design decisions that lead to an appropriate and optimized overall approach to the way the system operates? From a business standpoint, what types of conversations are going to take place in the boardroom that the rest of the organization is going to have to translate into design decisions?

>> You know, boy, bad actors are going to be bad actors. I don't think you can do anything to eliminate that. The best you can do is use the best processes and the best techniques to keep it from happening, and hope for the best. I'm sorry, that's all I can really say about it.

>> There's quite a lot of work going on at the moment from Arm, in particular; they've got work on secure device capabilities. So there's a lot of work going on in that very space. What's obviously interesting from an IT perspective is how you link the different security systems, both from an Arm point of view and then from an x86 point of view as you go further up the chain. How are they going to be controlled, and how is that going to be managed? That's going to be a big IT issue.

>> Yeah, I think the transmission is the weak point.

>> Male Voice: What do you mean by that, Neil?

>> Well, the data has to flow across networks. That would be the easiest place for someone to intercept it and, you know, do something nefarious.

>> Right, yeah, so that's purely the security thing; I was using it as an analogy. So, at the end of the day, the business is going to have to decide: how much data do we have to capture off the edge to ensure that we have the kinds of models we want, so that we can realize the specificity of actions and behaviors that we want in our business? That's partly a technology question, partly a cost question. Different sensors are able to operate at different speeds, for example. But ultimately, we have to be able to bring that list of decisions, or business issues, that Dave Vellante raised down to some of the design questions.
But it's not going to be "throw a $300 microprocessor at everything." There are going to be very, very concrete decisions that have to take place. So, George, do you agree with that?

>> Yes, two issues though. One, there are the existing devices that can't get re-instrumented, that already have their software and hardware stack. >> There's a legacy in place? >> Yes. But there's another thing, which is: some of the most advanced research that's been going on, research that produced much of today's distributed computing and big data infrastructure, like the Berkeley analytics lab and, say, their contributions to Spark and related technologies, is saying we have to throw everything out and start over for secure real-time systems. That you have to build from the hardware all the way up. In other words, you're starting from the sand to rethink something that's secure and real-time; you can't layer it on.

>> So very quickly David, that's a great point, George. Building on what George has said, very quickly: the primary responsibility for bonding the behavior or the attributes of these devices is going to be with the vendor.

>> Of creating the solution? >> Correct. >> That's going to be the primary responsibility. But obviously, from an IT point of view, you need to make sure that that device is doing the job that's important for your business, not too much, not too little, that it is doing that job, and that you are able to collect the necessary data from it that is going to be of value to you. So that's a question of qualification of the devices themselves.

>> Alright so, David Vellante, Neil Raden, David Floyer, George Gilbert: action item round. I want one action item from each of you from this conversation. Keep it quick, keep it short, keep it to the point. David Floyer, what's your action item?

>> So my action item is: don't go into areas that you don't need to. You do not need to become experts, IT in general does not need to become expert, at the edge itself. Rely on partners, rely on vendors to do that, unless of course you're one of those vendors, in which case you'll need very, very deep knowledge.

>> Or you choose that that's where your value stream, your differentiation, is going to be, which means you just became one of those vendors.

>> Yes, exactly.

>> George Gilbert.

>> I would build on that, and I would say that if you look at the skills required to build these full-stack solutions, there's data science, there's application development, there's the analytics. Very few of those solutions are going to have all those skills in one company. So the go-to-market model for building these is going to be something that, at least at this point in time, we're going to have to look to combinations for, like IBM working with sort of supply chain masters.

>> Good. Neil Raden, action item.

>> The question is not necessarily one of technology, because that's going to evolve. But I think as an organization, you need to look at it from this end: one, would employing this create a new business opportunity for us, something we're not already doing? Or number two, change our operations in some significant way? Or number three, you know, the old red queen thing: we have to do it to keep up with the competition.

>> Male Voice: David Vellante, action item.

>> Okay, well look, at the risk of sounding trite, you've got to start the planning process from the customer on in, and so often people don't.
You've got to understand where you're going to add value for customers, and construct an external and internal ecosystem that can really juice that value creation.

>> Alright, fantastic guys. So let me quickly summarize. This week on the Wikibon Friday research meeting on theCUBE, we discussed a new way of thinking about data characteristics that will inform system design and the business value that's created. We observed that data is not all the same when we think about the very complex, highly distributed, and decentralized systems that we're going to build: there's a difference between primary data, secondary data, and tertiary data. Primary data is data that is generated from real-world events or measurements and then turned into signals that can be acted upon very proximate to that real-world set of conditions. A lot of sensors will be there, a lot of processing will be moved down there, and a lot of actuators and actions will take place without referencing other locations within the cloud. However, we will see circumstances where the events, or the decisions taken on those events, will be captured in some sort of secondary tier, which will record something about the characteristics of the actions and events that were taken, summarize them, and push them up to a tertiary tier, where that data can then be further integrated with other attributes and elements of the business. The technology to do this is broadly available, but not universally successfully applied. We expect to see a lot of new combinations of edge-related devices to work with primary data. That is going to be a combination of currently successful firms in the OT, or operational technology, world, most likely in partnership with a lot of other vendors that have demonstrated significant expertise in understanding the problems, especially the business problems, associated with the fidelity of what happens at the edge. The IT industry is going to approach this very aggressively and get very close to it at the secondary level, through gateways and other types of technologies. And even though we'll see IT technology continue to move down to the primary level, it's not clear exactly how vendors will be able to follow it. More likely, we'll see the adoption of IT approaches to doing things at the primary level by vendors that have the domain expertise in how that level works. We will, however, see significant and interesting true private cloud and public cloud systems emerge at the tertiary level, a whole new set of systems that are going to be very important from an administration and management standpoint, because they have to work within the context of the fidelity of this overall system, together. The final point we want to make is that these are not technology problems by themselves. While significant technology problems are on the horizon, about how we think about handling this distribution of data, managing it appropriately, and our ability, ultimately, to present the appropriate authority at different levels within that distributed fabric to ensure proper working conditions in a way that we can nonetheless recreate if we need to, these are, at bottom, fundamentally business problems. They're business problems related to who owns the intellectual property that's being created, they're business problems related to what level in that stack I want to show my differentiation to my customers at, and they're business problems from a liability and legal standpoint as well.
The action item is, all firms will in one form or another be impacted by the emergence of the edge as a dominant design consideration for their infrastructure but also for their business. Three ways, or a taxonomy that looks at three classes of data, primary, secondary, and tertiary, will help businesses sort out who's responsible, what partnerships I need to put in place, what technologies am I going to employ, and very importantly, what overall business exposure I'm going to accommodate as I think ultimately about the nature of the processing and business promises that I'm making to my marketplace. Once again, this has been the Wikibon Friday research meeting here on theCUBE. I want to thank all the analysts who were here today, but especially thank you for paying attention and working with us. And by all means, let's hear those comments back about how we're doing and what you think about this important question of different classes of data driven by different needs of the edge. (funky electronic music)

Published Date : Oct 13 2017


Wikibon Research Meeting


 

>> Dave: The cloud. There you go. I presume that worked. >> David: Hi there. >> Dave: Hi David. We had agreed, Peter and I had talked and we said let's just pick three topics, allocate enough time. Maybe a half hour each, and then maybe a little bit longer if we have the time. Then try and structure it so we can gather some opinions on what it all means. Ultimately the goal is to have an outcome with some research that hits the network. The three topics today, Jim Kobielus is going to present on agile and data science, David Floyer on NVMe over fabric, of course keying off of the Micron news announcement. I think Nick is, is that Nick who just joined? He can contribute to that as well. Then George Gilbert has this concept of digital twin. We'll start with Jim. I guess what I'd suggest is maybe present this in the context of, present a premise or some kind of thesis that you have and maybe the key issues that you see and then kind of guide the conversation and we'll all chime in. >> Jim: Sure, sure. >> Dave: Take it away, Jim. >> Agile development and team data science. Agile methodology obviously is well-established as a paradigm and as a set of practices in various schools of software development in general. Agile is practiced in data science in terms of development, the pipelines. The overall premise for my piece, first of all, starts off with a core definition of what agile is as a methodology. Self-organizing, cross-functional teams. They sprint toward results in steps that are fast, iterative, incremental, adaptive and so forth. Specifically the premise here is that agile has already come to data science and is coming even more deeply into the core practice of data science, where data science is done in a team environment. It's not just unicorns that are producing real work on their own, but more to the point, it's teams of specialists that come together in co-location, increasingly in co-located environments or in co-located settings, to produce (banging) weekly check points and so forth. That's the basic premise that I've laid out for the piece. The themes. First of all, the themes, let me break it out. In terms of how I'm approaching agile in this context, I'm looking at the basic principles of agile. It's really practices that are minimal, modular, incremental, iterative, adaptive, and co-locational. I've laid out how all that maps into how data science is done in the real world right now in terms of tight teams working in an iterative fashion. A couple of issues that I see as regards the adoption and sort of the ramifications of agile in a data science context. One of which is co-location. What we have increasingly are data science teams that are virtual and distributed, where a lot of the functions are handled by statistical modelers and data engineers and subject matter experts and visualization specialists that are working remotely from each other and are using collaborative tools like the tools from the company that I just left. How can the co-location premise of agile stand up in a world where more of the development, the deep learning and so forth, is being done on a distributed basis, and needs to be done by teams of specialists that may be in different cities or different time zones, operating around the clock, to produce brilliant results?
Another one of which is that agile seems to be predicated on the notion that you improvise the process as you go, trial and error, which seems to fly in the face of documentation, or tidy documentation. Without tidy documentation about how you actually arrived at your results, those results can not be easily reproduced by independent researchers, independent data scientists. If you don't have well defined processes for achieving results in a certain data science initiative, it can't be reproduced, which means it's not terribly scientific. By definition it's not science if you can't reproduce it by independent teams. To the extent that it's all loosey-goosey and improvised and undocumented, it's not reproducible. If it's not reproducible, to what extent should you put credence in the results of a given data science initiative if it's not been documented? Agile seems to fly in the face of reproducibility of data science results. Those are sort of my core themes or core issues that I'm pondering, or will be. >> Dave: Jim, just a couple questions. You had mentioned, you rattled off a bunch of parameters. You went really fast. One of them was co-location. Can you just review those again? What were they? >> Sure. They are minimal. The minimum viable product is the basis for agile, meaning a team puts together not a complete monolithic stack, but an initial deliverable that can stand alone, provide some value to your stakeholders or users, and then you iteratively build upon that, in what I call minimum viable product going forward, to build out more complex applications as needed. A minimum viable product is at the heart of agile the way it's often looked at. The big question is, what is the minimum viable product in a data science initiative? One way you might approach that is saying that what you're doing, say you're building a predictive model. You're predicting a single scenario, for example such as whether one specific class of customers might accept one specific class of offers under the constraining circumstances. That's an example of a minimum outcome to be achieved from a data science deliverable. A minimum product that addresses that requirement might be pulling the data from a single source. We'd need a very simplified feature set of predictive variables, like maybe two or three at the most, to predict customer behavior, and use one very well understood algorithm like linear regression and do it. With just a few lines of programming code in Python or R or whatever, and build some very crisp, simple rules. That's the notion in a data science context of a minimum viable product. That's the foundation of agile. Then there's the notion of modular, which I've implied with minimum viable product. The initial product is the foundation upon which you build modular add ons. The add ons might be building out more complex algorithms based on more data sets, using more predictive variables, throwing other algorithms in to the initiative like logistic regression or decision trees to do more fine-grained customer segmentation. What I'm giving you is a sense for the modular add ons and builds on to the initial product that generally accumulate incrementally in the course of a data science initiative. Then there's this, and I've already used the word incremental, where each new module that gets built up, or each new feature or tweak on the core model, gets added on to the initial deliverable in a way that's incremental.
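To make that minimum viable product concrete, here is a minimal sketch in Python, assuming scikit-learn and pandas are available; the file name, the column names, and the use of logistic regression for the binary accept/decline outcome are illustrative assumptions, not anything from the discussion itself.

```python
# Minimal viable product: one data source, a tiny feature set,
# one well-understood algorithm, a few lines of code.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Single data source; 'offers.csv' and its columns are hypothetical.
df = pd.read_csv("offers.csv")
X = df[["age", "prior_purchases", "days_since_last_visit"]]
y = df["accepted_offer"]  # 1 = accepted the offer, 0 = declined

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# The crisp, simple deliverable: acceptance prediction on held-out customers.
print("holdout accuracy:", model.score(X_test, y_test))
```

Everything after this, more sources, more features, more algorithms, is the modular, incremental add on to that core.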
Ideally it should all compose, ultimately, into a sum of useful capabilities that deliver a wider range of value. For example, in a data science initiative where it's customer data, you're doing predictive analysis to identify whether customers are likely to accept a given offer. One way to add on incrementally to that core functionality is to embed that capability, for example, in a target marketing application like an outbound marketing application that uses those predictive variables to drive responses in line with, say, an e-commerce front end. Then there's the notion of iterative, and iterative really comes down to check points. Regular reviews at the stand ups and check points where the team comes together to review the work in the context of data science. Data science by its very nature is exploratory. It's visualization, it's model building and testing and training. It's iterative scoring and testing and refinement of the underlying model. Maybe on a daily basis, maybe on a weekly basis, maybe ad hoc, but iteration goes on all the time in data science initiatives. Adaptive. Adaptive is all about responding to circumstances. Trial and error. What works, what doesn't work at the level of the clinical approach. It's also in terms of, do we have the right people on this team to deliver on the end results? A data science team might determine mid-way through that, well we're trying to build a marketing application, but we don't have the right marketing expertise in our team, maybe we need to tap Joe over there who seems to know a little bit about this particular application we're trying to build and this particular scenario, these particular customers we're trying to get a good profile of how to reach. You might adapt by adding, like I said, new data sources, adding on new algorithms, totally changing your approach for feature engineering as you go along. In addition to supervised learning from ground truth, you might add some unsupervised learning algorithms to be able to find patterns in, say, unstructured data sets as you bring those into the picture. What I'm getting at is there are 10 zillion variables that, for a data science team, you have to factor in to your overall research plan going forward, based on what you're trying to derive from data science, which is insights. They're actionable and ideally repeatable, so that you can embed them in applications. It's just a matter of figuring out what actually helps you, what set of variables and team members and data, and sort of what helps you to achieve the goals of your project. Finally, co-locational. It's all about how the core team needs to be, usually, in the same physical location, according to the book, how people normally think of agile. The company that I just left is basically doing a massive social engineering exercise, ongoing, about making their marketing and R&D teams a little more agile by co-locating them in different cities like San Francisco and Austin and so forth. The whole notion that people will collaborate far better if they're not virtual. That's highly controversial, but nonetheless, that's the foundation of agile as it's normally considered. One of my questions, really an open question, is what hard core, you might have a sprawling team that's doing data science, doing various aspects, but what solid core of that team needs to be physically co-located all or most of the time? Is it the statistical modeler and a data engineer alone?
The one who stands up the cluster and the person who actually does the building and testing of the model? Do the visualization specialists need to be co-located as well? Are other specialties like subject matter experts who have the knowledge in marketing, whatever it is, do they also need to be in the physical location day in, day out, week in and week out to achieve results on these projects? Anyway, so there you go. That's how I sort of framed the argument of (mumbling). >> Dave: Okay. I got minimal, modular, incremental, iterative, adaptive, co-locational. What was six again? I'm sorry. >> Jim: Co-locational. >> Dave: What was the one before that? >> Jim: I'm sorry. >> Dave: Adaptive. >> Minimal, modular, incremental, iterative, adaptive, and co-locational. >> Dave: Okay, there were only six. Sorry, I thought it was seven. Good. A couple of questions, then we can get the discussion going here. Of course, you're talking specifically in the context of data science, but some of the questions that I've seen around agile generally are, it's not for everybody, when and where should it be used? Waterfalls still make sense sometimes. Some of the criticisms I've read, heard, seen, and sometimes experienced with agile are sort of quality issues, I'll call it lack of accountability. I don't know if that's the right terminology. We're going for speed, so as long as we're fast, we checked that box, quality can suffer. Thoughts on that? Where does it fit, and again, understanding specifically you're talking about data science. Does it always fit in data science, because it's so new and hip and cool, or, like traditional programming environments, is it horses for courses? >> David: Can I add to that, Dave? It's a great, fundamental question. It seems to me there's two really important aspects of artificial intelligence. The first is the research part of it, which is developing the algorithms, developing the potential data sources that might or might not matter. Then the second is taking that and putting it into production. That is, that somewhere along the line, it's saving money, time, etc., and it's integrated with the rest of the organization. The first piece, it seems to me, is like most research projects: the ROI is difficult to predict in any sort of way. The second piece of actually implementing it is where you're going to make money. Is agile, if you can integrate that with your systems of record, for example, and get automation of many of the aspects that you've researched, is agile the right way of doing it at that stage? How would you bridge the gap between the initial development and then the final instantiation? >> That's an important concern, David. Dev Ops, that's a closely related issue but it's not exactly the same scope. Let's just net it out. As machine learning and deep learning get embedded in applications, in operations I should say, like in your e-commerce site or whatever it might be, then data science itself becomes an operational function. The people who continue to iterate those models in line with the operational applications. Really, where it comes down to an operational function, everything that these people do needs to be documented and version controlled and so forth. These people meaning data science professionals. You need documentation. You need accountability. The development of these assets, machine learning and so forth, needs to be in compliance.
When you look at compliance, algorithmic accountability comes into it, where lawyers will, like e-discovery, theoretically subpoena all your algorithms and data and say explain how you arrived at this particular recommendation that you made to grant somebody or not grant somebody a loan, or whatever it might be. The transparency of the entire development process is absolutely essential to the data science process downstream and when it's a production application. In many ways, with agile, by saying speed's the most important thing, screw documentation, you can sort of figure that out and that's not as important, that whole ethos, documentation goes by the wayside. Agile can not, should not skimp on documentation. Documentation is even more important as data science becomes an operational function. That's one of my concerns. >> David: It seems to me that with the whole rapid idea development, it's difficult to get a combination of that and operational, boring testing, regression testing, etc. The two worlds are very different. The interface between the two is difficult. >> Everybody does their e-commerce tweaks through AB testing of different layouts and so forth. AB testing is fundamentally data science, and so it's an ongoing thing. (static) ... On AB testing in terms of tweaking. All these channels and all the service flow, systems of engagement and so forth. All this stuff has to be documented, so agile sort of, in many ways, flies in the face of that or potentially compromises the visibility of (garbled) access. >> David: Right. If you're thinking about IOT for example, you've got very expensive machines out there in the field where you're trying to optimize throughput and trying to minimize machines breaking, etc. At the Micron event, it was interesting, in Micron's use of different methodologies of putting systems together, they were focusing on the data analysis, etc., to drive greater efficiency through their manufacturing process. Having said that, they need really, really tested algorithms, etc. to make sure there isn't a major (mumbling) or loss of huge amounts of potential revenue if something goes wrong. I'm just interested in how you would create the final product that has to go into production in a very high value chain like an IOT. >> When you're running, say, AI from learning algorithms all the way down to the end points, it gets even trickier than simply documenting the data and feature sets and the algorithms and so forth that were used to build up these models. It also comes down to having to document the entire life cycle, in terms of how these algorithms were trained to make the predictions of whatever it is you're trying to do at the edge with a particular algorithm. The whole notion of how all of these edge point applications are being trained, with what data, at what interval? Are they being retrained on a daily basis, hourly basis, moment by moment basis? All of those are critical concerns to know whether they're making the best automated decisions or actions possible in all scenarios. That's like a black box in terms of the sheer complexity of what needs to be logged to figure out whether the application is doing its job as best as possible. You need a massive log, you need a massive event log from end to end of the IOT to do that right and to provide that visibility ongoing into the performance of these AI driven edge devices. I don't know anybody who's providing the tool to do it.
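A rough sketch of the kind of end-to-end decision log Jim is calling for might look like the following; every field, file name, and function here is an assumption about what such a log could capture, not a description of any existing tool.

```python
import hashlib
import json
import time

def log_decision(model_id, model_version, training_data_ref, features, decision):
    """Append one auditable record: which model, trained on what data,
    saw which inputs, took which action, and when."""
    record = {
        "timestamp": time.time(),
        "model_id": model_id,
        "model_version": model_version,
        # Hash of the training set reference, so lineage is checkable
        # later without storing the data itself.
        "training_data_sha256": hashlib.sha256(
            training_data_ref.encode()).hexdigest(),
        "features": features,
        "decision": decision,
    }
    with open("decision_audit.log", "a") as f:
        f.write(json.dumps(record) + "\n")

# One loan-scoring decision, reconstructible for e-discovery.
log_decision("loan_scorer", "1.4.2", "s3://bucket/train/2017-10-01",
             {"income": 72000, "debt_ratio": 0.31}, "approve")
```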
>> David: If I think about how it's done at the moment, it's obviously far too slow at the moment. At the same time, you've got to have some testing and things like that. It seems to me that you've got a research model on one side, and then you need to create a working model from that, which is your production model. That's the one that goes through the testing and everything of that sort. It seems to me that the interface would be that transition from the research model to the working model, that would be critical here, and the working model is obviously a subset and it's going to be optimized for performance, etc. in real time, as opposed to the development model, which can do a lot more and take half a week to run if necessary. It seems to me that you've got a different set of business pressures on the working model, and a different set of skills as well. I think having one team here doesn't sound right to me. You've got to have a Dev Ops team who are going to take the working model from the developers and then make sure that it's sound and safe. Especially in a high value IOT area, where the level of iteration is not going to be nearly as high as in a lower cost marketing type application. Does that sound sensible? >> That sounds sensible. In fact in Dev Ops, the Dev Ops team would definitely be the ones that handle the continuous training and retraining of the working models on an ongoing basis. That's a core observation. >> David: Is that the right way of doing it, Jim? It seems to me that the research people would be continuing to adapt from data from a lot of different places, whereas the operational model would be at a specific location with a specific IOT, and they wouldn't necessarily have all the data there to do that. I'm not quite sure whether - >> Dave: Hey guys? Hey guys, hey guys? Can I jump in here? Interesting discussion, but highly nuanced, and I'm struggling to figure out how this turns into a piece, or sort of debating some certain specifics that are very kind of weedy. I wonder if we could just reset for a second and come back to sort of what I was trying to get to before, which is really the business impact. Should this be applied broadly? Should this be applied specifically? What does it mean if I'm a practitioner? What should I take away from, Jim, your premise and your sort of six parameters? Should I be implementing this? Why? Where? What's the value to my organization - the value I guess is obvious, but does it fit everywhere? Should it be across the board? Can you address that? >> Neil: Can I jump in here for a second? >> Dave: Please, that would be great. Is that Neil? >> Neil: Neil. I've never been a data scientist, but I was an actuary a long time ago. When the chief actuary came to me and said we need to develop a liability insurance coverage for floating oil rigs in the North Sea, I'm serious, it took a couple of months of research and modeling and so forth. If I had to go to all of those meetings and stand ups in an agile development environment, I probably would have gone postal on the place. I think that there's some confusion about what data science is. It's not a vector. It's not like a Dev Op situation where you start with something and you go (mumbling). When a data scientist, or whatever you want to call them, comes up with a model, that model has to be constantly revisited until it's put out of business. It's refined, it's evaluated. It doesn't have an end point like that.
The other thing is that a data scientist is typically going to be running multiple projects simultaneously, so how in the world are you going to agilize that? I think if you look at the data science group, they're probably, I think Nick said this, there are probably groups in there that are doing pure Dev Ops, software engineering and so forth, and you can apply agile techniques to them. The whole data science thing is too squishy for that, in my opinion. >> Jim: Squishy? What do you mean by squishy, Neil? >> Neil: It's not one thing. I think if you try to represent data science as here's a project, we gather data, we work on a model, we test it, and then we put it into production, it doesn't end there. It never ends. It's constantly being revised. >> Yeah, of course. It's akin to application maintenance. The application meaning the model, the algorithm, to be fit for purpose, has to continually be evaluated, possibly tweaked, always retrained to determine its predictive fit for whatever task it's been assigned. You don't build it once and assume it has strong predictive fit forever and ever. You can never assume that. >> Neil: James and I called that adaptive control mechanisms. You put a model out there and you monitor the return you're getting. You talk about AB testing, that's one method of doing it. I think that a data scientist, somebody who really is keyed into the machine learning and all that jazz, I just don't see them as being project oriented. I'll tell you one other thing, I have a son who's a software engineer and he said something to me the other day. He said, "Agile? Agile's dead." I haven't had a chance to find out what he meant by that. I'll get back to you. >> Oh, okay. If you look at - Go ahead. >> Dave: I'm sorry, Neil. Just to clarify, he said agile's dead? Was that what he said? >> Neil: I didn't say it, my son said it. >> Dave: Yeah, yeah, yeah right. >> Neil: No idea what he was talking about. >> Dave: Go ahead, Jim. Sorry. >> If you look at waterfall development in general, for larger projects it's absolutely essential to get requirements nailed down and the functional specifications and all that. Where you have some very extensive projects and many moving parts, obviously you need a master plan that it all fits into, and waterfall, those checkpoints and so forth, those controls that are built into that methodology, are critically important. Within the context of a broad project, some of the assets being built up might be machine learning models and analytics models and so forth, so in the context of a broader waterfall oriented software development initiative, you might need to have multiple data science projects spun off within the sub-projects. Each of those would fit in, and by itself might be initiated sort of like an exploration task, where you have a team doing data visualization, exploration, in more of an open-ended fashion, while they're trying to figure out the right set of predictors and the right set of data to be able to build out the right model to deliver the right result. What I'm getting at is that agile approaches might be embedded into broader waterfall oriented development initiatives, agile data science approaches. Fundamentally, data science began and still is predominantly very smart people, PhDs in statistics and math, doing open-ended exploration of complex data, looking for non-obvious patterns that you wouldn't be able to find otherwise. Sort of a fishing expedition, a high priced fishing expedition.
That's kind of the mode of operation for how data science is often conducted in the real world. Looking for that eureka moment when the correlations just jump out at you. There's a lot of that that goes on. A lot of that is very important data science, it's more akin to pure science. What I'm getting at is there might be some role for more structure and waterfall development approaches in projects that have a data science, core data science capability to them. Those are my thoughts. >> Dave: Okay, we probably should move on to the next topic here, but just in closing can we get people to chime in on sort of the bottom line here? If you're writing to an audience of data scientists or data scientist wannabes, what's the one piece of advice or a couple of pieces of advice that you would give them? >> First of all, data science is a developer competency. The modern developers, many of them need to be data scientists or have a strong grounding and understanding of data science, because much of that machine learning and all that is increasingly the core of what software developers are building, so you can't not understand data science if you're a modern software developer. You can't understand data science as it (garbled) if you don't understand the need for agile iterative steps, because they're looking for the needle in the haystack quite often. The right combination of predictive variables and the right combination of algorithms and the right training regimen in order to get it all fit. It's a new world competency that needs to be mastered if you're a software development professional. >> Dave: Okay, anybody else want to chime in on the bottom line there? >> David: Just my two penny worth is that the key aspect of all the data science is to come up with the algorithms and then implement them in a way that is robust and part of the system as a whole. The return on investment on the data science piece as an insight isn't worth anything until it's actually implemented and put into production of some sort. It seems that the second stage of creating the working model is what is the output of your data scientists. >> Yeah, it's the repeatable deployable asset that incorporates the crux of data science, which is algorithms that are data driven, statistical algorithms that are data driven. >> Dave: Okay. If there's nothing else, let's close this agenda item out. Is Nick on? Did Nick join us today? Nick, you there? >> Nick: Yeah. >> Dave: Sounds like you're on. Tough to hear you. >> Nick: How's that? >> Dave: Better, but still not great. Okay, we can at least hear you now. David, you wanted to present on NVMe over fabric pivoting off the Micron news. What is NVMe over fabric and who gives a fuck? (laughing) >> David: This is Micron, we talked about it last week. This is Micron's announcement. What they announced is NVMe over fabric which, as we talked about last time, is the ability to create a whole number of nodes. They've tested 250, the architecture will take them to 1,000. 1,000 processors or 1,000 nodes, and be able to access the data on any single node at roughly the same speed. They are quoting 200 microseconds. It's 195 if it's local and it's 200 if it's remote. That is a very, very interesting architecture which is like nothing else that's been announced. >> Participant: David, can I ask a quick question? >> David: Sure. >> Participant: This latency and the node count sounds astonishing. Is Intel not replicating this or challenging in scope with their 3D XPoint?
>> David: 3D XPoint, Intel would love to sell that as a key component of this. But 3D XPoint as a storage device is very, very, very expensive. You can replicate most of the function of 3D XPoint at a much lower price point by using a combination of DRAM and protected DRAM and flash. At the moment, 3D XPoint is a nice to have and there'll be circumstances where they will use it, but at the meeting yesterday, I don't think they, they might have brought it up once. They didn't emphasize it (mumbles) at all as being part of it. >> Participant: To be clear, this means rather than buying Intel servers rounded out with lots of 3D XPoint, you buy Intel servers just with the CPU and then all the Micron niceness for their NVMe and their interconnect? >> David: Correct. They are still Intel servers. The ones they were displaying yesterday were HP's, they also used SuperMicro. They want certain characteristics of the chip set that are used, but those are just standard pieces. The other parts of the architecture are the Mellanox, the 100 gigabit converged ethernet, and using RoCE, which is RDMA over converged ethernet. That is the secret sauce, and Mellanox themselves, their cards offload a lot of functionality. That's the secret sauce which allows you to go from any point to any point in 5 microseconds. Then create a transfer and other things. Files are on top of that. >> Participant: David, another quick question. The latency is incredibly short. >> David: Yep. >> Participant: What happens with, say, an MPP SQL database with 1,000 nodes, what if they have to shuffle a lot of data? What's the throughput? Is it limited by that 100 gig, or is that so insanely large that it doesn't matter? >> David: The key is this, that it allows you to move the processing to wherever the data is very, very easily. The principle that will evolve from this architecture is that you know where the data is, so don't move the data around, that'll block things up. Move the processing to that particular node or some adjacent node and do the processing as close as possible. That, as an architecture, is a long term goal. Obviously in the short term, you've got to take things as they are. Clearly, a different type of architecture for databases will need to eventually evolve out of this. At the moment, what they're focusing on is big problems which need low latency solutions, using databases as they are and the whole end to end stack, which is a much faster way of doing it. Then over time, they'll adapt new databases, new architectures to really take advantage of it. What they're offering is a POC at the moment. It's in Beta. They had their customers talking about it and they were very complimentary in general about it. They hope to get it into full production this year. There's going to be a host of other people that are doing this. I was trying to bottom line this in terms of really what the link is with digital enablement. For me, true digital enablement is enabling any relevant data to be available for processing at the point of business engagement in real time or near real time. That's the definition that this architecture enables. It's, in my view, a potential game changer, in that this is an architecture which will allow any data to be available for processing. You don't have to move the data around, you move the processing to that data. >> Is Micron the first to market with this capability, David? NV over Me? NVMe. >> David: Over fabric? Yes. >> Jim: Okay.
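David's figures are worth a quick check, since the whole any-node-to-any-data argument rests on them; these are his stated numbers, with the arithmetic made explicit:

```python
local_us = 195   # access to locally attached data, microseconds
remote_us = 200  # access to data on any other node in the fabric

overhead_us = remote_us - local_us   # 5 microseconds of fabric overhead
penalty = overhead_us / local_us     # ~0.026

print(f"remote overhead: {overhead_us} us, a {penalty:.1%} penalty")
# remote overhead: 5 us, a 2.6% penalty -- data location effectively stops mattering
```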
>> David: Having said that, there are a lot of start ups which have got a significant amount of money and who are coming to market with their own versions. You would expect Dell, HP to be following suit. >> Dave: David? Sorry. Finish your thought and then I have another quick question. >> David: No, no. >> Dave: The principle, and you've helped me understand this many times, going all the way back to Hadoop, is bring the application to the data, but when you're using conventional relational databases and you've had it all normalized, you've got to join stuff that might not be co-located. >> David: Yep. That's the whole point about the five microseconds. Now the impact of non co-location, if you have to join stuff or whatever it is, is much, much lower. It's so you can do the logical join, whatever it is, very quickly and very easily across that whole fabric. In terms of processing against that data, then you would choose to move the application to that node because it's much less data to move, but that's an optimization of the architecture as opposed to a fundamental design point. You can then optimize where you run the thing. This is the ideal architecture for where I personally see things going, which is traditional systems of record, which need to be exactly as they've ever been, and then alongside it, the artificial intelligence, the systems of understanding, data warehouses, etc. Having that data available in the same space so that you can combine those two elements in real time or in near real time. The advantage of that in terms of digital enablement and business value is the biggest thing of all. That's a 50% improvement in overall productivity of a company, that's the thing that will drive, in my view, 99% of the business value. >> Dave: Going back just to the join thing, 100 gigs with five microseconds, that's really, really fast, but if you've got petabytes of data on these thousand nodes and you have to do a join, you still got to go through that 100 gig pipe for stuff that's not co-located. >> David: Absolutely. The way you would design that is as you would design any query. You would need a process in front of that, which is query optimization, to be able to farm out all of the independent jobs needed in each of the nodes and take the output of that and bring that together. Both the concepts are already there. >> Dave: Like a map. >> David: Yes. That's right. All of the data science is there. You're starting from an architecture which is fundamentally different from the traditional let's-get-it-out architectures that have existed, by removing that huge overhead of going from one to another. >> Dave: Oh, because this goes, it's like a mesh, not a ring? >> David: Yes, yes. >> Dave: It's like the high performance compute of this MPI type architecture? >> David: Absolutely. NVMe, by definition, is a point to point architecture. RoCE, underneath it, is a point to point architecture. Everything is point to point. Yes. >> Dave: Oh, got it. That really does call for a redesign. >> David: Yes, you can take it in steps. It'll work as it is and then over time you'll optimize it to take advantage of it more. Does that definition of (mumbling) make sense to you guys? The one I quoted to you? Enabling any relevant data to be available for processing at the point of business engagement, in real time or near real time? That's where you're trying to get to, and this is a very powerful enabler of that design.
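The query pattern David describes, farming independent jobs out to the nodes that hold the data and bringing the outputs back together, can be sketched as a scatter/gather in Python; the per-node execute call and the table are hypothetical, invented only to show the shape of the map-then-reduce he and Dave are gesturing at.

```python
from concurrent.futures import ThreadPoolExecutor

def run_subquery(node, predicate):
    """Hypothetical per-node call: each node filters and pre-aggregates
    its local shard, so only small partial results cross the fabric."""
    return node.execute(
        "SELECT region, SUM(amount) FROM sales "
        "WHERE " + predicate + " GROUP BY region")

def scatter_gather(nodes, predicate):
    # Scatter: push the same sub-query to every node that holds a shard.
    with ThreadPoolExecutor() as pool:
        partials = list(pool.map(lambda n: run_subquery(n, predicate), nodes))
    # Gather: merge the partial aggregates, the reduce after the map.
    totals = {}
    for rows in partials:
        for region, amount in rows:
            totals[region] = totals.get(region, 0) + amount
    return totals
```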
>> Nick: You're emphasizing the network topology, while I kind of thought the heart of the argument was performance. >> David: Could you repeat that? It's very - >> Dave: Let me repeat. Nick's a little light, but I could hear him fine. You're emphasizing the network topology, but Nick's saying his takeaway was the whole idea, the thrust, was performance. >> Nick: Correct. >> David: Absolutely. Absolutely. The result of that network topology is a many times improvement in performance of the systems as a whole that you couldn't achieve in any previous architecture. I totally agree. What it's about is enabling low latency applications with much, much more data available, by being able to break things up in parallel and delivering multiple streams to an end result. Yes. >> Participant: David, let me just ask, if I can play out how databases are designed now, how they can take advantage of it unmodified, but how things could be very, very different once they do take advantage of it. Which is that today, if you're doing transaction processing, you're pretty much bottlenecked on a single node that sort of maintains the fresh cache of shared data, and that cache, even if it's in memory, it's associated with shared storage. What you're talking about means, because you've got memory speed access to that cache from anywhere, it no longer is tied to a node. That's what allows you to scale out to 1,000 nodes even for transaction processing. That's something we've never really been able to do. Then the fact that you have a large memory space means that you no longer optimize for mapping back and forth from disk and disk structures, but you have everything in a memory native structure, and you don't go through this thin straw for IO to storage, you go through memory speed IO. That's a big, big - >> David: That's the end point. I agree. That's not here quite yet. It's still IO, so the IO has been improved dramatically, the protocol within NVMe and the over fabric part of it. The elapsed time has been improved, but it's not yet the same as, for example, the HPE initiative. That's saying you change your architecture, you change your way of processing, just in the memory. Everything is assumed to be memory. We're not there yet. 200 microseconds is still a lot, lot slower than that. One impact of this architecture is that the amount of data that you can pass through it is enormously higher, and therefore the memory sizes themselves within each node will need to be much, much bigger. There is a real opportunity for architectures which minimize the impact, which hold data coherently across multiple nodes, and where there's minimal impact of, no tapping on the shoulder for every byte transferred, so you can move large amounts of data into memory and then tell people that it's there and allow it to be shared, for example between the different cores and the GPUs and FPGAs that will be in these processors. There's more to come in terms of the architecture in the future. This is a step along the way, it's not the whole journey. >> Participant: Dave, another question. You just referenced 200 milliseconds or microseconds? >> David: Did I say milliseconds? I meant microseconds. >> Participant: You might have, I might have misheard. Relate that to the five microsecond thing again. >> David: If you have data directly attached to your processor, the access time is 195 microseconds. If you need to go to a remote node, anywhere else in the thousand nodes, your access time is 200 microseconds.
In other words, the additional overhead of getting at that data is five microseconds. >> Participant: That's incredible. >> David: Yes, yes. That is absolutely incredible. That's something that data scientists have been working on for years and years. Okay. That's the reason why you can now do what I talked about, which was you can have access from any node to any data within that large number of nodes. You can have petabytes of data there and you can have access from any single node to any of that data. That, in terms of data enablement, digital enablement, is absolutely amazing. In other words, you don't have to pre-put the data that's local to one application in one place. You're allowing an enormous flexibility in how you design systems. That, coming back to artificial intelligence, etc., allows you a much, much larger amount of data that you can call on for improving applications. >> Participant: You can explore and train models, huge models, really quickly? >> David: Yes, yes. >> Participant: Apparently that process works better when you have an MPI-like mesh than a ring. >> David: If you compare this architecture to the DSSD architecture, which was the first entrant into this, that EMC bought for a billion dollars, then that one stopped at 40 nodes. Its architecture was very, very proprietary all the way through. This one takes you to 1,000 nodes with much, much lower cost. They believe that the cost, compared with the equivalent DSSD system, will be between 10 and 20%. >> Dave: Can I ask a question about, you mentioned query optimizer. Who develops the query optimizer for the system? >> David: Nobody does yet. >> Jim: The DBMS vendor would have to re-write theirs with a whole different cost model. >> Dave: So we would have an optimizer per database system? >> David: Who's asking a question, I'm sorry. I don't recognize the voice. >> Dave: That was Neil. Hold on one second, David. Hold on one second. Go ahead Nick. You talk about translation. >> Nick: ... On a network. It's SAN. It happens to be very low latency and very high throughput, but it's just a storage sub-system. >> David: Yep. Yep. It's a storage sub-system. It's called a server SAN. That's what we've been talking about for a long time, that you need the same characteristics, which is that you can get at all the data, but you need to be able to get at it in compute time as opposed to taking a stroll down the road time. >> Dave: Architecturally it's a SAN without an array controller? >> David: Exactly. Yeah, the array controller is software from a company called Xcellate, what was the name of it? I can't remember now. Say it again. >> Nick: Xcelero or Xceleron? >> David: Xcelero. That's the company that has produced the software for the data services, etc. >> Dave: Let's, as we sort of wind down this segment, let's talk about the business impact again. We're talking about different ways potentially to develop applications. There's an ecosystem requirement here it sounds like, from the ISVs to support this, and other developers. It's the final, it portends the elimination of the last electromechanical device in computing, which has implications for a lot of things. Performance value, application development, application capability. Maybe you could talk about that a little bit again, thinking in terms of how practitioners should look at this. What are the actions that they should be taking and what kinds of plans should they be making in their strategies?
>> David: I thought Neil's comment last week was very perceptive, which is, you wouldn't start with people like me who have been imbued with the 100 database call limits for umpteen years. You'd start with people, millennials, or sub-millennials or whatever you want to call them, who can take a completely fresh view of how you would exploit this type of architecture. Fundamentally you will be able to get through 10 or 100 times more data in real time than you can with today's systems. There are two parts to that data as I said before. The traditional systems of record that need to be updated, and then a whole host of applications that will allow you to do processes which are either not possible, or very slow, today. To give one simple example, if you want to do real time changing of pricing based on availability of your supply chain, based on what you've got in stock, based on the delivery capabilities, that's a very, very complex problem. The optimization of all these different things, and there are many others that you could include in that. This will give you the ability to automate that process and optimize that process in real time as part of the systems of record and update everything together. That, in terms of business value, is extracting a huge number of people who previously would be involved in that chain, reducing their involvement significantly and making the company itself far more agile, far more responsive to change in the marketplace. That's just one example, you can think of hundreds for every marketplace, where the application now becomes the system of record, augmented by AI, and huge amounts more data can improve the productivity of an organization and the agility of an organization in the marketplace. >> This is a godsend for AI. The draw of AI is all this training data. If you could just move that at memory speed to the application in real time, it makes the applications much sharper and more (mumbling). >> David: Absolutely. >> Participant: How long, David, would it take for the cloud vendors to not just offer some instances of this, but essentially to retool their infrastructure? (laughing) >> David: This is, to me, a disruption and a half. The people who can be first to market in this are the SaaS vendors who can take their applications, or new SaaS vendors. ISV. Sorry, say that again, sorry. >> Participant: The SaaS vendors who have their own infrastructure? >> David: Yes, but it's not going to be long before the AWSes and Microsofts put this in their tool bag. The SaaS vendors have the greatest capability of making this change in the shortest possible time. To me, that's one area where we're going to see results. Make no mistake about it, this is a big change, and at the Micron conference, I can't remember what the guy's name was, he said it takes two Olympics for people to start adopting things for real. I think that's going to be shorter than two Olympics, but it's going to be quite a slow process for pushing this out. It's radically different and a lot of the traditional ways of doing things are going to be affected. My view is that SaaS is going to be the first, and then there are going to be individual companies that solve the problems themselves. Large companies, even small companies, that put in systems of this sort and then use them to outperform the marketplace in a significant way. Particularly in the finance area and particularly in other data intense areas. That's my two pennies worth. Anybody want to add anything else? Any other thoughts?
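David's pricing example can be reduced to a toy rule that folds stock and delivery signals into the price at transaction time; the signals, thresholds, and weights below are invented purely for illustration of the kind of automation he means, not any real pricing logic.

```python
def reprice(base_price, stock_level, reorder_point, delivery_days):
    """Toy real-time pricing rule driven by supply chain state."""
    price = base_price
    if stock_level < reorder_point:
        price *= 1.10   # scarce stock: nudge the price up
    if delivery_days > 5:
        price *= 0.95   # slow fulfillment: discount to compensate
    return round(price, 2)

# Recomputed per order, inside the system of record, not in a nightly batch.
print(reprice(base_price=100.00, stock_level=12, reorder_point=50,
              delivery_days=7))  # 104.5
```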
>> Dave: Let's wrap some final thoughts on this one. >> Participant: Big deal for big data. >> David: Like it, like it. >> Participant: It's actually more than that, because there used to be a major trade off between big data and fast data. Latency and throughput, and this starts to push some of those boundaries out so that you sort of can have both at once. >> Dave: Okay, good. Big deal for big data and fast data. >> David: Yeah, I like it. >> Dave: George, you want to talk about digital twins? I remember when you first sort of introduced this, I was like, "Huh? What's a digital twin? "That's an interesting name." I guess, I'm not sure you coined it, but why don't you tell us what a digital twin is and why it's relevant. >> George: All right. GE coined it. I'm going to, at a high level, talk about what it is, why it's important, and a little bit about, as much as we can tell, how it's likely to start playing out, and a little bit on the differences of the different vendors who are going after it. As far as sort of defining it, I'm cribbing a little bit from a report that's just in the edit process. It's a data representation, this is important, or a model of a product, process, service, customer, supplier. It's not just an industrial device. It can be any entity involved in the business. This is a refinement sort of Peter helped with. The reason it's any entity is because it can represent the structure and behavior, not just of a machine tool or a jet engine, but a business process like the sales order process, when you see it on a screen, and its workflow. That's a digital twin of what used to be a physical process. It applies to both devices and assets, and processes, because when you can model them, you can integrate them within a business process and improve that process. Going back to something that's more physical so I can do a more concrete definition, you might take a device like a robotic machine tool, and the idea is that the twin captures the structure and the behavior across its lifecycle. As it's designed, as it's built, tested, deployed, operated, and serviced. I don't know if you all know the myth of, in the Greek Gods, one of the Goddesses sprang fully formed from the forehead of Zeus. I forgot who it was. The point of that is, a digital twin is not going to spring fully formed from any developer's head. Getting to the level of fidelity I just described is a journey, and a long one. Maybe a decade or more, because it's difficult. You have to integrate a lot of data from different systems and you have to add structure and behavior for stuff that's not captured anywhere and may not be captured anywhere. Just for example, CAD data might have design information, manufacturing information might come from there or another system. CRM data might have support information. Maintenance repair and overhaul applications might have information on how it's serviced. Then you also connect the physical version with the digital version with, essentially, telemetry data that says how it's been operating over time. That sort of helps define its behavior, so you can manipulate that and predict things or simulate things that you couldn't do with just the physical version. >> You have to think about, combined with say 3D printers, you could create a hot physical backup of some malfunctioning thing in the field, because you have the entire design, you have the entire history of its behavior and its current state before it went kablooey.
Conceivably, it can be fabricated on the fly and reconstituted as a physical object from the digital twin that was maintained. >> George: Yes, you know what, actually that raises a good point, which is that the behavior that was represented in the telemetry helps the designer simulate a better version for the next version. Just what you're saying. Then with 3D printing, you can either make a prototype or another instance. Some of the printers are getting sophisticated enough to punch out better versions or parts for better versions. That's a really good point. There's one thing that has to hold all this stuff together, which is really kind of difficult, which is challenging technology. IBM calls it a knowledge graph. It's pretty much in anyone's version. They might not call it a knowledge graph. A graph is, instead of a tree where you have a parent and then children and then the children have more children, a structure where many things can relate to many things. The reason I point that out is that puts a holistic structure over all these disparate sources of data and behavior. You essentially talk to the graph, sort of like with Arnold, talk to the hand. That didn't, I got crickets. (laughing) Let me give you guys the, I put a definitions table in this doc. I had a couple things. Data models. These are some important terms. The data model represents the structure but not the behavior of the digital twin. The API represents the behavior of the digital twin, and it should conform to the data model for maximum developer usability. Jim, jump in anywhere where you feel like you want to correct or refine. The object model is a combination of the data model and API. You were going to say something? >> Jim: No, I wasn't. >> George: Okay. The object model ultimately is the digital twin. Another way of looking at it, defining the structure and behavior. This sounds like one of these, say "T" words, the canonical model. It's a generic version of the digital twin, or really the one where you're going to have a representation that doesn't have customer specific extensions. This is important because the way these things are getting built today is mostly custom bespoke, and so you want to be able to reuse work. If someone's building this for you, like a system integrator, you want to be able to, or they want to be able to, reuse this on the next engagement, and you want to be able to take the benefit of what they've learned on the next engagement back to you. There has to be this canonical model that doesn't break every time you essentially add new capabilities. It doesn't break your existing stuff. Knowledge graph, again, is this thing that holds together all the pieces and makes them look like one coherent whole. I'll get to, I talked briefly about network compatibility and I'll get to level of detail. Let me go back to, I'm sort of doing this from crib notes. We talked about telemetry, which is sort of combining the physical and the twin. Again, telemetry's really important because this is like the time series database. It says, this is all the stuff that was going on over time. Then you can look at telemetry data that tells you, we got a dirty power spike, and after three of those, this machine sort of started vibrating. That's part of how you're looking to learn about its behavior over time. In that process, models get better and better about predicting and enabling you to optimize their behavior and the business process with which it integrates. I'll give some examples of that.
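Pulling George's definitions together, here is a minimal sketch of what a twin's object model might look like: a data model for structure, an API for behavior, a few knowledge-graph relations holding the disparate sources together, and his dirty-power telemetry example as the behavioral rule. All names, fields, and thresholds are our assumptions, not GE's or IBM's actual implementations.

```python
from dataclasses import dataclass, field

@dataclass
class MachineToolTwin:
    # Data model: the structure of the twin.
    asset_id: str
    design_rev: str                                 # e.g., from the CAD system
    telemetry: list = field(default_factory=list)   # time series of readings
    # Knowledge graph edges: many-to-many relations, unlike a strict tree.
    relations: dict = field(default_factory=dict)   # relation -> [entities]

    # API: the behavior of the twin, conforming to the data model.
    def relate(self, relation, entity):
        self.relations.setdefault(relation, []).append(entity)

    def ingest(self, reading):
        self.telemetry.append(reading)

    def vibration_risk(self):
        """George's example: three dirty-power spikes precede vibration."""
        spikes = [r for r in self.telemetry if r.get("power_quality") == "spike"]
        return len(spikes) >= 3

twin = MachineToolTwin("robot_07", "rev_C")
twin.relate("designed_from", "cad_model_v3")   # design system
twin.relate("serviced_by", "work_order_881")   # maintenance/overhaul system
for _ in range(3):
    twin.ingest({"power_quality": "spike"})
print(twin.vibration_risk())  # True -> act before the physical twin fails
```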
Twins, these digital twins can themselves be composed in levels of detail. I think I used the example of a robotic machine tool. Then you might have a bunch of machine tools on an assembly line, and then you might have a bunch of assembly lines in a factory. As you start modeling not just the single instance but the collections higher up, at higher levels of abstraction or levels of detail, you get a richer and richer way to model the behavior of your business, more and more of your business. Again, it's not just the assets, but some of the processes.
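A bare-bones sketch of that composition idea, with made-up names: a composite twin derives its state from its children, so the same pattern scales from tool to assembly line to factory.

```java
import java.util.List;

// Hypothetical composite hierarchy: tool -> assembly line -> factory.
interface Twin {
    double health();  // 0.0 (failing) .. 1.0 (nominal), however the model defines it
}

class ToolTwin implements Twin {
    private final double health;
    ToolTwin(double health) { this.health = health; }
    public double health() { return health; }
}

// A composite twin's state is derived from its children's state.
class CompositeTwin implements Twin {
    private final List<Twin> children;
    CompositeTwin(List<Twin> children) { this.children = children; }
    public double health() {
        return children.stream().mapToDouble(Twin::health).average().orElse(1.0);
    }
}

// Usage: a factory twin composed of line twins composed of tool twins.
// Twin line    = new CompositeTwin(List.of(new ToolTwin(0.9), new ToolTwin(0.6)));
// Twin factory = new CompositeTwin(List.of(line));
```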
Let me now talk a little bit about how the continual improvement works. As Jim was talking about, we have data feedback loops in our machine learning models. Once you have a good-quality digital twin in place, you get the benefit of increasing returns from the data feedback loops. In other words, if you can get to a better starting point than your competitor and then you get on the increasing returns of the data feedback loops, you are improving the fidelity of the digital twins faster than your competitor. Beyond one twin, I'll talk about how you want to make the whole ecosystem of twins sort of self-reinforcing; I'll get to that in a sec. There's another point to make about these data feedback loops, which is that traditional apps, and this came up with Jim and Neil, traditional apps are static. You want upgrades, you get stuff from the vendor. With digital twins, they're always learning from the customer's data, and that has implications when the partner or vendor who helped build it for a customer takes learnings from the customer and goes to a similar customer for another engagement. I'll talk about the implications of that. This is important because it's half packaged application and half bespoke. The point is that you don't have to take the customer's data, but your model learns from the data. Think of it as: I'm not going to take your coffee beans, your data, but I'm going to make coffee from your beans and I'm going to take that to the next engagement with another customer who could be your competitor. In other words, you're extracting all the value from the data, and that helps modify the behavior of the model, and the next guy gets the benefit of it. Dave, this is the stuff where IBM keeps saying, we don't take your data. You're right, but you're taking the juice you squeezed out of it. That's one of my next reports. >> Dave: It's interesting, George. Their contention is, they uniquely, unlike Amazon and Google, don't swap spit, your spit, with their competitors. >> George: That's misleading. To say Amazon and Google, those guys aren't building digital twins. Parametric Technology is. I've got this directly from a Parametric technical fellow at an AWS event last week, which is, they not only don't use the data, they don't use the structure of the twin either from engagement to engagement. That's a big difference from IBM. I have a quote, Chris O'Connor from IBM Munich, saying, "We'll take the data model, but we won't take the data." I'm like, so you take the coffee from the beans even if you don't take the beans? I'm going to be very specific about saying that claiming you don't do what Google and Facebook do, what they do, is misleading. >> Dave: My only caution there is do some more vetting and checking. A lot of times what some guy says in a Cube interview, he or she doesn't even know, in my experience. Make sure you validate that. >> George: I'll send it to them for feedback, but it wasn't just him. I got it from the CTO of the IoT division as well. >> Dave: When you were in Munich? >> George: This wasn't on the Cube either. This was by the side of, at the coffee table during our break. >> Dave: I understand, and CTOs in theory should know. I can't tell you how many times I've gotten a definitive answer from a pretty senior-level person and it turns out either they weren't listening to me, or they didn't know, or they were just yessing me, or whatever. Just be really careful and make sure you do your background checks. >> George: I will. I think the key is to leave them room to provide a nuanced answer. It's more about being really, really concrete about really specific edge conditions and asking, do you or don't you. >> Dave: This is a pretty big one. If I'm a CIO, a chief digital officer, a chief data officer, COO, head of IT, head of data science, what should I be doing in this regard? What's the advice? >> George: Okay, can I go through a few more or are we out of time? >> Dave: No, we have time. >> George: Let me do a couple more points. I talked about training a single twin, or an instance of a twin, and I talked about the acceleration of the learning curve. There's edge analytics; David has educated us with the help of looking at GE Predix. David, you have been talking about this for a long time. You want edge analytics to inform or automate a low-latency decision, and so this is where you're going to have to run some amount of analytics, right near the device. Although I've got to mention, hopefully this will elicit a chuckle, when you get some vendors telling you what their edge and cloud strategies are: MapR said, we'll have a Hadoop cluster that only needs four or five nodes as our edge device. And we'll need five admins to care for and feed it. They didn't say the last part, but that obviously isn't going to work. The edge analytics could be things like recalibrating the machine for a different tolerance, if it's seeing that it's getting out of the tolerance window or something like that. The cloud, and this is old news for anyone who's been around David, is where you're going to have a lot of data, not all of it, going back to train both the instances of each robotic machine tool and the master of that machine tool. The reason is, an instance would be, oh, I'm operating in a high-humidity environment, something like that. Another one would be operating where there's a lot of sand or something that screws up the behavior. Then the master might be something that has behavior that's sort of common to all of them. The training will take place on the instances and the master, and it will in all likelihood push down versions of each. Next to the physical device, process, whatever, you'll have the instance one and a class one, and between the two of them, they should give you the optimal view of behavior and the ability to simulate and improve things.
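A rough sketch of that instance-plus-class split, with invented names and a trivial rule standing in for the trained models: the class (master) model carries behavior common to every unit, the instance model carries this unit's local corrections, and the edge combines the two for a low-latency decision.

```java
// Hypothetical sketch: models trained in the cloud, pushed down to the edge.
interface Model {
    double predictWearRate(double humidity, double particulates);
}

// "Class" (master) model: behavior common to all machine tools of this type.
class ClassModel implements Model {
    public double predictWearRate(double humidity, double particulates) {
        return 0.01 + 0.002 * particulates;   // stand-in for a trained model
    }
}

// Instance model: a local correction for this one unit's environment
// (e.g. the high-humidity or sandy deployments described above).
class InstanceModel implements Model {
    private final double humidityFactor;      // learned from this unit's telemetry
    InstanceModel(double humidityFactor) { this.humidityFactor = humidityFactor; }
    public double predictWearRate(double humidity, double particulates) {
        return humidityFactor * humidity;
    }
}

class EdgeInference {
    // Combine class and instance predictions for a local, low-latency action,
    // such as recalibrating before the tool drifts out of tolerance.
    static boolean shouldRecalibrate(Model classModel, Model instanceModel,
                                     double humidity, double particulates) {
        double wear = classModel.predictWearRate(humidity, particulates)
                    + instanceModel.predictWearRate(humidity, particulates);
        return wear > 0.05;                   // assumed tolerance threshold
    }
}
```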
It's worth mentioning, again, as David found out not by talking to GE but by accidentally looking at their documentation, their whole positioning of edge versus cloud is a little bit hand-waving, and in talking to the guys from ThingWorx, which is a division of what used to be called Parametric Technology, which is just PTC now, it appears that they're negotiating with GE to give them the orchestration and distributed database technology that GE can't build itself. I've heard also from two ISVs, one major and one minor, who are both in the IoT ecosystem, one of whom is part of the GE ecosystem, that Predix is a mess. It's analysis paralysis. It's not that they don't have talent, it's just that they're not getting shit done. Anyway, the key thing now is when you get all this... >> David: Just from what I learned when I went to the GE event recently, they're aware of their requirement. They've actually already got some subparts of Predix which they can put in the cloud, but there needs to be more of it, and they're aware of that. >> George: As usual, just another reason I need a red phone hotline to David for any and all questions I have. >> David: Flattery will get you everywhere. >> George: All right. One of the key takeaways, not the action item, but the takeaway for a customer, is that when you get these data feedback loops reinforcing each other, the instances of, say, the robotic machine tools to the master, then the instance to the assembly line to the factory, when all that is being orchestrated and all the data is continually enhancing the models, as well as the manual process of adding contextual information or new levels of structure, this is when you're on the increasing-returns sort of curve that really contributes to sustaining competitive advantage. Remember, think of how, when Google started off on search, it wasn't just their algorithm; it was collecting data about which links you picked, in which order, and how long you were there that helped them reinforce the search rankings. They got so far ahead of everyone else that even if others had those algorithms, they didn't have that data to help refine the rankings. You get this same process going when you essentially have your ecosystem of learning models across the enterprise sort of all orchestrating. This sounds like motherhood and apple pie, and there are going to be a lot of challenges to getting there, and I haven't gotten all the warts from having gone through it or talked to a lot of customers who've gotten the arrows in the back, but that's the theoretical, really cool end point or position where the entire company becomes a learning organization from these feedback loops. I want to, now that we're in the edit process on the overall digital twin, I do want to do a follow-up on IBM's approach. Hopefully we can do it both as a report and then as a version that's for SiliconANGLE, because that thing I wrote on Cloudera got the immediate attention of Cloudera and Amazon, and hopefully we can both provide client proprietary value-add but also the public impact stuff. That's my high level. >> This is fascinating. If you're the Chief of Data Science, for example, in a large industrial company, having the ability to compile digital twins of all your edge devices can be extraordinarily valuable, because then you can use that data to do more fine-grained segmentation of the different types of edges based on their behavior and their state under various scenarios. Basically, your team of data scientists can then begin to identify the extent to which they need to write different machine learning models that are tuned to the specific requirements or status or behavior of different endpoints. What I'm getting at is, ultimately you're going to have 10 zillion different categories of edge devices performing in various scenarios. They're going to be driven by an equal variety of machine learning, deep learning, AI, and all that.
All that has to be built up by your data science team in some coherent architecture, where there might be a common canonical template that all the algorithms and so forth on those devices are being built from. Each of those algorithms will then be tweaked to the specific digital twin's profile of each device, is what I'm getting at. >> George: That's a great point that I didn't bring up, which is, for folks who remember object-oriented programming, not that I ever was able to write a single line of code, but the idea is, go into this robotic machine tool, and you can inherit a couple of essentially component objects that can also be used in slightly different models. Let's say in this machine tool there's a model for a spinning device, I forget what it's called, like a drive shaft. That drive shaft can be in other things as well. Eventually you can compose these twins, even instances of a twin, with essentially component models themselves. ThingWorx does this. I don't know if GE does this. I don't think IBM does. The interesting thing about IBM is, their go-to-market really influences their approach to this, which is, they have this huge industry solutions group and then obviously the global business services group. These guys are all custom development and domain experts, so they'll go in, they're literally working with Airbus with the goal of building a model of a particular airliner. Right now I think they're doing the de-icing subsystem, I don't even remember on which model. In other words, they're helping to create this bespoke thing, and so that's what actually gets them into trouble with potentially channel conflict, or maybe it's more competitor conflict, because Airbus is not going to be happy if they take their learnings and go work with Boeing next. Whereas with PTC and ThingWorx, at least their professional services arm, they treat this much more like the implementation of a packaged software product, and all the learnings stay with the customer. >> Very good. >> Dave: I got a question, George. In terms of the industrial design and engineering aspect of building products, you mentioned PTC, which has been in the CAD business and the engineering software business for 50 years, and Ansys and folks like that who do the simulation of industrial products, or any kind of product that gets built. Is there a natural starting point for digital twin coming out of that area? That would be the vice president of engineering who would be a key target for this kind of thinking. >> George: Great point. I think PTC is closely aligned with Teradata, and their attitude is, hey, if it's not captured in the CAD tool, then you're just hand-waving, because you won't have a high-fidelity twin. >> Dave: Yeah, it's a logical starting point for any mechanical kind of device. What's a thing built to do and what's it built like? >> George: Yeah, but if it's something that was designed in a CAD tool, yes; if it's something that was not, then you start having to build it up in a different way. I'm trying to remember, but IBM did not look like they had something that was definitely oriented around CAD. Theirs looked like it was more where the knowledge graph was the core glue that pulled all the structure and behavior together. Again, that was a reflection of their product line, which doesn't have a CAD tool, and the fact that they're doing these really, really, really bespoke twins.
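Since the knowledge graph keeps coming up as the glue, here is a bare-bones sketch of the essential idea, many-to-many relations rather than a tree. The names and API are invented for illustration; this is not IBM's implementation.

```java
import java.util.*;

// A knowledge graph in miniature: any entity can relate to any other, so the
// same valve can sit under "part-of", "designed-by", and "supplied-by"
// relations at once, something a strict parent-child tree cannot express.
class KnowledgeGraph {
    // subject -> predicate -> objects
    private final Map<String, Map<String, Set<String>>> edges = new HashMap<>();

    void relate(String subject, String predicate, String object) {
        edges.computeIfAbsent(subject, s -> new HashMap<>())
             .computeIfAbsent(predicate, p -> new LinkedHashSet<>())
             .add(object);
    }

    Set<String> query(String subject, String predicate) {
        return edges.getOrDefault(subject, Map.of())
                    .getOrDefault(predicate, Set.of());
    }
}

// Usage:
//   g.relate("valve-17", "part-of", "pipeline-A");
//   g.relate("valve-17", "supplied-by", "AcmeCo");
//   g.query("valve-17", "part-of");   // -> [pipeline-A]
```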
>> Dave: I'm thinking, it strikes me that from the industrial design and engineering area, it's really the individual product that's the focus. That's one part of the map. The dynamic you're pointing at, there are lots of other elements of the map in terms of an operational, a business process. That might be the fleet of wind turbines or the fleet of trucks, how they behave collectively. There are lots of different entry points. I'm just trying to grapple with: doesn't the CAD area, the engineering area, at least for hard products, have an obvious starting point for users to begin to look at this? The VP of Engineering needs to be on top of this stuff. >> George: That's a great point that I didn't bring up, which is, a guy at Microsoft who was the CTO in their IT organization gave me an example, which was: you have a pipeline that's 1,000 miles long. It's got 10,000 valves in it, but you're not capturing the CAD design of the valve; you just put in a really simple model that measures pressure, temperature, and leakage or something. You string 10,000 of those together into an overall model of the pipeline. That is a low-fidelity thing, but that's all they need to start with. Then they can see, when they're doing maintenance or when the flow-through is higher, what the impact is on each of the different valves or flanges or whatever. It doesn't always have to start with super high fidelity. It depends on what you're optimizing for. >> Dave: It's funny. I had a conversation years ago with a guy at MacNeal-Schwendler, the engineering software folks, if you remember them. He was telling us that about 30 to 40 years ago, when they were doing computational fluid dynamics, they were doing one-dimensional computational fluid dynamics, if you can imagine that. Then they were able, because of the compute power or whatever, to get to two-dimensional computational fluid dynamics, and finally they got to three-dimensional, and they're looking also at four- and five-dimensional as well. It's serviceable, I guess what I'm saying is, in that pipeline example, the way that they built that thing, or the way that they manage that pipeline, the one-dimensional model of a valve is good enough, but over time maybe a two- or three-dimensional one is going to be better. >> George: That's why I say that this is a journey that's got to take a decade or more. >> Dave: Yeah, definitely. >> Take the example of an airplane. The old joke is it's six million parts flying in close formation. It's going to be a while before you fit that in one model. >> Dave: Got it. Yes. Right on. When you have that model, that's pretty cool. All right guys, we're about out of time. I need a little time to prep for my next meeting, which is in 15 minutes, but final thoughts. Do you guys feel like this was useful in terms of guiding things that you might be able to write about? >> George: Hugely. This is hugely more valuable than anything we've done as a team. >> Jim: This is great, I learned a lot. >> Dave: Good. Thanks, you guys. This has been recorded. It's up on the cloud and I'll figure out how to get it to Peter, and we'll go from there. Thanks everybody. (closing thank you's)

Published Date : May 9 2017

Java's Relevance for Modern Enterprises: theCUBE Power Panel


 

(upbeat music) >> Facilitator: From theCUBE studios in Palo Alto and Boston, connecting with other leaders all around the world, this is a CUBE conversation. >> Java is the world's most popular programming language. And it remains the leading application development platform. But what's the status of Java? What are customers doing? And very importantly, what is Oracle's and the community's strategy with respect to Java? Welcome everybody to this Java power panel on theCUBE. I'm your host, Dave Vellante. Manish Gupta is here; he's the Vice President of Global Marketing for Java at Oracle. Donald Smith is also on the panel; he's the Senior Director of Product Management at Oracle. And we're joined by David Floyer, who is CTO of Wikibon Research and has done a number of research activities on this very topic. Gentlemen, welcome to theCUBE, great to see you. >> Thank you. >> Thank you. >> Manish, I want to start with you. Can you help us understand, really dig into, Oracle's strategy with respect to Java: the technology, the licensing, the support. How has that evolved over time? Take us through that. >> Dave, with 51 billion JVMs deployed worldwide, Java has truly cemented its position as the language of innovation in the technology world. There's no question about that. In fact, I like to say it's really the language of empowerment, given the impact it has had on numerous applications ranging from the Mars Rover to genomics and everything in between. Since Oracle acquired Sun over 10 years ago, it has really kept front of mind two aspects of what we want to do with the technology and the platform. The first one was to ensure there was broad accessibility to the technology and the platform for anybody that wanted to benefit from it. And the second one was to ensure that the ecosystem remained vibrant and thriving throughout. It's managed to do both. And underpinning these two objectives were really three pillars of our strategy. The first one was around trust: ensuring that the openness and transparency of the technology, as was the case before, continued to be the case going forward. The second element within the trust pillar was to ensure that as enterprises invested in the technology, that investment was protected; it was not a matter of you invest and you lose over a period of time. Backward compatibility, interoperability, and certifications were all foundational to the platform itself, to the features, to the innovation moving forward. And more recently, as we have rethought the support, the licensing, and the overall structure of the pricing, we have ensured that ultimately the trust comes along in those dimensions as well. So the launch of the Java subscription came along with a pay-as-you-go model and a transparent pricing and discount structure published on the website, so you can go and see what it would cost for desktop, server, or cloud deployment. So those were the things that made kind of the first pillar happen. The second pillar was around innovation. Over the last 25 years, Java has stood the test of time. It has delivered on the needs of today while preparing for the future, and that remains the case. It is not something that has sort of focused on the fad of the day and the hot thing for the day; more importantly, it is prepared to deal with the mission-critical, massive-scale deployments that can run for years, for decades in some cases.
And keeping that in mind, Oracle has continued to put more and more technology into the open source world; with every release that comes out, you can see 80-plus percent of the contributions come from Oracle. So that's the second pillar, around innovation. And the third piece of the strategy has been around predictability: ensuring that Java, the technology and platform, performs as advertised. That goes into the feature releases, it goes into the release process, it goes into the fact that we work broadly within the OpenJDK environment for developing and executing the roadmap. From a CIO standpoint, it's important to know that the technology used to develop your applications has talent around it. If you're going to develop something in Java, you'll find the right Java engineers to do the job; that is not a question, right? And so that's part of predictability. And finally, with the change to the six-month release cadence that came about three years ago with the release of Java 10, we've really made sure that it's not the case that a bunch of things come about and you don't know when they're going to be released. You know, like clockwork, you'll have a new Java release every six months. And that's been the case every March and September since Java 10: you've had a new release of Java with certain features that come out, and we just launched Java 15. So trust, innovation, predictability have really been the three pillars on which we've executed the strategy for Java. >> Excellent, thank you for that intro, and we're going to get into it now. I'm glad you mentioned the Sun acquisition. I said at the time that Java was the linchpin of that acquisition; many people, of course, looked at the integration piece with the hardware, but it was really Java and the capabilities that it brings. And of course, a lot of Oracle software is written in Java, not the least of which is Fusion. But now let's get into the components of this. I want to talk a little bit about the methodology here, and I'm going to call on you, David Floyer. Essentially my understanding is that Wikibon went through, and David, you led this, you did a technical deep dive, which you always do, did a number of in-depth interviews with Java customers, and then of course you also did a web survey, and then you built from that data an economic model, so you can try to understand the dimensions of the financials, if you will. So what were your key findings there? >> So the key findings were that Java was in a good state, that people were happy with Java. The second key finding is that the business case itself for using the Oracle services, the subscription services, was good. That's not to say it was the right way for every company to do it, but there was a very good return on it. And the third area was that there was a degree of confidence that the new way of doing things, the six-month cycle as opposed to the three-year cycle, was overall a benefit: the rate of change, the ability for them to introduce new features quickly. >> Okay, well, you know, I read that research. And to me, my takeaways were: I saw the continued relevance of Java, which kind of goes without saying, but a lot of times it gets lost in the headlines. The subscription piece is key; we're going to get into some of the economics as to how that affects customers and saves you money. And the other piece was the roadmap becoming more transparent.
And I do want to dig into that a little bit, but before we do, let's get into that innovation component. Manish mentioned that several times, but Don, I want to go to you. We have a slide on the various components of the innovation. If you would bring this up, and Don, I wonder if you could talk to this and give us some examples if you would. >> Yeah, sure. So we were the number one development platform for the last 25 years; we want to be the number one development platform for the next 25 years. And in order to do that, we have to be constantly innovating, and constantly innovating not only the business side, in terms of the subscription and the support offerings and commercial features like Manish was talking about, but also the platform in general. And so the way we like to talk about innovation is we break it down by these pillars that you can see on the slide. And so the first pillar is continuous improvements to the language. So this is watching developers trying to write the same piece of code over and over again, and us asking, can we make you more efficient? Can we give you more language features that reduce the amount of boilerplate that you have to write? The second pillar is a project that we just announced a few months ago called Leyden. And the idea with Leyden is addressing the long-term pain points of Java: slow startup time and time to peak performance. So if you go back 10 years ago, everybody knows about Java as an enterprise platform: Java EE application servers. They all had the notion of being very long-lived. And so Java at that time was optimized towards long-lived applications; for startup and performance, if it took a little while to get there, it didn't matter, as long as when it got there it was super fast. And so we're trying to get to that peak performance faster in the world of microservices. In a similar vein, with Project Loom, we're looking at making concurrency simple again, looking at how developers are doing more reactive-style programming and realizing that the threading model needs to be rethought from the ground up. That project is looking really, really good. Then we have Project Panama. Project Panama is all about making it easier to connect Java with native libraries. Valhalla is all about improving, there are a couple of benefits, but it's all about improving memory density and being able to access, iterate, and operate over primitive data types at super fast speeds by better optimizing how that information is stored in memory. And then the final pillar that we have been working on from an innovation perspective is ZGC. We introduced a new garbage collector technology a few years ago, ZGC, a new generation of garbage collector, with an eye towards making garbage collection in Java pauseless. So again, if you go back in time and look at the history of Java, memory management is awesome, but there's always that cost and risk of a garbage collection cycle taking a bit of time away from a critical application, and ZGC is all about getting rid of that. So lots of innovation, lots of different pillars going on right now.
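To make the language pillar concrete, here is a small sketch, not Don's own example, of the kind of boilerplate reduction he's describing: local-variable type inference (var, Java 10), text blocks (standard in Java 15), and records (still a preview feature in Java 15) each remove code you used to write by hand. The ZGC pillar shows up as a single command-line flag, since ZGC became production-ready in JDK 15.

```java
// Run on JDK 15 with preview features and ZGC enabled, for example:
//   java --enable-preview --source 15 -XX:+UseZGC BoilerplateDemo.java
public class BoilerplateDemo {

    // A record collapses fields, constructor, accessors, equals,
    // hashCode, and toString into one line (preview in Java 15).
    record Point(int x, int y) {}

    public static void main(String[] args) {
        var p = new Point(3, 4);   // 'var' (Java 10) infers the type

        // Text block (standard in Java 15): no escaped quotes, no '+' chains.
        var json = """
                { "x": %d, "y": %d }
                """.formatted(p.x(), p.y());

        System.out.print(json);
    }
}
```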
>> Awesome, I'm impressed. There's something after Valhalla; I thought that was Nirvana. (laughing) But now, these are all open source projects, right? And you guys obviously provide committers; there are other people in the open source world who provide that. Is that correct, Don? >> Yeah, that's correct. We have about 80% of the contributions in OpenJDK. We are the stewards of OpenJDK and lead the project. Most of the pillars I talked about here are, you know, Oracle folks working on that. >> Awesome. Okay, let's get into some of the data. David, I want to come back to you and talk about some of the survey results. Guys, if you bring up that next slide. David, why do people upgrade? What are the drivers? It really talks to the large companies; what's different for small or mid-size companies? What are the takeaways here? >> David: Well, this is interesting, and as you might expect, large enterprises are very concerned about application stability, whereas mid-size enterprises are much more concerned about the performance, making sure that the performance is good. They are both concerned about reliable performance and security, but it's interesting that, from a regulation point of view, mid-size companies really want to make sure that they are obeying the regulations, that they are meeting those, whereas larger organizations usually have their own security and regulation functions looking very hard at these things, so they're looking less to the platform to provide those than to their own people. >> Yeah, I think you're right. I think the mid-size organizations don't have as many people running around taking care of security, and it's harder for them to keep up with the edicts of the organization, so they want to stay more current. Don, I wonder if you can add anything to this data from an innovation standpoint. >> Yeah, well, from a product management standpoint, what we see here is that when you look at just going from the Fortune 500 to the Global 2000, you see things that are important to one and less so to the other. You can extrapolate that all the way down to a small company or a startup. And that's why providing the most flexibility in terms of an offering, to allow people to decide what, when, where, and how they're going to upgrade their software, so they can do it when they want and on their own terms, you can see that that becomes really important. And also making sure that we're providing innovation in a broad way, so that it'll appeal both to the enterprise and, again extrapolating that forward, down to even very small startups. >> You know, David, the other thing that struck me in the data, if we bring up that other piece, is the upgrade strategy, and there was a stark difference between large enterprises and mid-size organizations. Talk to this data, if you would. >> Yes, this is again a pretty stark difference between them. When you're looking at large enterprises, they really want stability and they don't want to upgrade so often. Whereas mid-size enterprises are much more willing to both upgrade on a regular cadence and really have a much more up-to-date, or always have the latest, software. They're driving smaller applications, but they're much more agile about their approach to it. Again, emphasizing what Don was saying, the smaller enterprises want a different strategy and a different way of doing things than large enterprises. >> So Manish, this says to me that you got it right from a strategy standpoint. I mean, any color you can add here? >> Yeah, it's very intuitive: whether you're a large organization, a mid-sized enterprise, or a small business, right, you face competitive pressures, and your dynamics are unique.
What you're able to do with the resources, what you desire to do at the pace that is appropriate for your environment, are really unique to you, and to try to force one model across any one size, or across any set of dynamics, is just not appropriate. So we've always felt that giving the enterprises and the organizations the ability to move at the pace of their business is the right approach. And so when we designed the Oracle Java SE subscription, we truly had that front and center in our thought process. And that structure seems to be working well. >> David, what I like about the way you do research is you actually build an economic model. A lot of these business value projects, and I know this well, having been in the business a long time, they'll go out and ask the customer what they got, and then the customer says, "Well, I got a 111% ROI," and boom, that's what it is. You actually construct an economic model, you bring in rules of thumb, it allows you to do what-ifs, you can test that model and calibrate it against the real world. So I commend you on that. You've done a lot of hard work there, but bottom-line it for us, I mean, let's bring up the economics. That's what people ultimately want to know: does this save me money? What's the bottom line here? >> Yes, that's a very important question. And the way we go about it is to ask the questions so that we can extract from those questions how much effort it took, for example, to upgrade things, how much effort it took for important applications and not-so-important applications. So we have a very detailed model driven by the survey itself, and it's in the back of the research; I'm a great believer that you should be able to follow exactly what the research said, what the survey said, and how it was applied to the model. And what we focused on was: what was the return of using the Java subscription service versus taking an upgrade every six months? Those were the two ways that we looked at it. And for large enterprises, the four-year cost for the enterprise was $11 million, and for taking the additional subscription service, the payback is within a year, well covered by the lower costs of managing all the systems and the environment. And we found a very similar result for those mid-size enterprises. There, it was $3 million, and again, they got that back within the year in terms of payback. So that's one alternative. There is another alternative that may be worth the extra money if you really want to be up to date, and/or if you want to drive a much more aggressive strategy for your organization.
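To show the shape of such a model, not Wikibon's actual one, here is a toy calculation. Only the $11 million four-year figure and the roughly 30% savings echo the discussion; the do-it-yourself baseline and the annual fee are made-up inputs for illustration.

```java
// Toy payback model, NOT the Wikibon model. Only the $11M four-year
// subscription-case cost and the ~30% savings echo the panel; the DIY
// baseline and the annual fee below are assumptions for illustration.
public class SubscriptionPayback {
    public static void main(String[] args) {
        double diyFourYear = 15_700_000;  // assumed do-it-yourself cost over 4 years
        double subFourYear = 11_000_000;  // four-year cost cited for large enterprises
        double savings = 1 - subFourYear / diyFourYear;     // ~30%

        double annualFee = 1_000_000;     // assumed subscription fee per year
        double annualSavings = (diyFourYear - subFourYear) / 4;
        double paybackYears = annualFee / annualSavings;    // < 1 year

        System.out.printf("Savings: %.0f%%, payback: %.2f years%n",
                savings * 100, paybackYears);
    }
}
```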
>> So these are huge numbers. I mean, we're talking about 30% savings on average for large and mid-sized enterprises in percentage terms, but the absolute dollars are actually enormous. So, you know, large companies here, we're talking about $20 billion enterprises with 500 or more Java applications. And mid-size, you're talking about two- to three-billion-dollar companies. Manish, what are you seeing in the customer base in terms of the economics? >> Yeah, you know, anytime an organization is looking at an offering and a solution, they want to make sure it's giving them the value. And we all know the priorities that businesses have; they want to focus on those. Managing the Java estate is important, but is it the thing where they want to invest the dollars? And if they are investing the dollars, are they getting the return? We find that if you can give the enterprises an ability where they can see the return, the cost is right for them. And if you can mirror that, and you can map it also with reduced risk, then you've got the right formula. And with the subscription, they're able to not only see the cost savings that the model indicates clearly, but they're also able to reduce the risk in terms of security protection and other things. So it's a really, really good combination for the enterprises. >> Well, thank you. I wonder, Manish, if you could bring us home here and just kind of summarize, from your thoughts, everything you've heard today. What are the key takeaways? >> You know, Java has been around for 25 years, and we certainly believe it's really positioned well for what's required today, and perhaps more importantly, what is needed for the next decade and for the next 25 years. Having now served thousands of customers with the Java subscription, it's clear that it is meeting the needs of Fortune 10 organizations all the way down to a five-person development house, for example. What we're hearing across the board is that Java has been the go-to platform, and it continues to be the go-to platform, for mission-critical development and deployment. However, the complexity as the Java estate becomes large, when you've got tens to hundreds, in some cases over a thousand, applications running across the enterprise, that complexity can be daunting. And the Java subscription is really serving the needs in three ways. One, they're getting best-in-class support from Oracle, which is a steward of Java, the company that is generating over 80% of the innovation with every single release. The second thing is they're getting business flexibility, so they can move at the pace that works for them. And the third piece, as the business model has indicated, is that they're getting it at a lower cost while lowering risk. So the combination of these things is the reason why we're seeing very high renewal rates and why we're seeing thousands of organizations take it up. And I want to wrap it up by saying one final thing: you can count on Oracle to be transparent, to be the right steward of both technology innovation as well as ensuring support for the vast ecosystem, whether it's libraries, frameworks, user groups, educational services, and so on. So Java is here, has been here for the enterprise, large and small, and it's ready for the next generation as well. >> Great, thank you for that. Well, one more question. What's the call to action? If I'm a mid-sized company or a large company and I've made investments in Java, what should I do next? >> I would say, take a look at the Oracle subscription. It will reduce your rates, it'll save you cost, and it'll give you a lower risk profile for your organization. >> Great, nice and crisp, I like it. If you guys don't object, I'm going to give you my summary. I've been taking notes this whole time, and so: we've explored two options. Customers can do it themselves or go with the subscription on a regular cadence. It's very clear to me that Java remains relevant, as we said up top. It's the world's most popular programming language; we know about all that. The ecosystem is really moving fast, of course, with the stewardship of Oracle: cloud, microservices, the development of modern applications. I think that the directional changes that you guys, Manish and Don and Oracle, have made were really the right call.
The research that you did, David, shows that it's serving customers better: it lowers costs, it cuts down risk, particularly for the mid-sized companies that maybe don't have the security infrastructure and the talent to go chase those problems. And I love the roadmap piece. The more transparent roadmap really is going to give the industry and the community much more confidence to invest and move forward. So guys, thanks very much for coming on this CUBE Java power panel. It was great to have you. >> Thank you. >> Thank you. >> Thank you. >> All right, thank you for watching, everybody. This is Dave Vellante for theCUBE, and we'll see you next time. (soft music)

Published Date : Oct 1 2020

Java Power Panel V1 FOR REVIEW


 

(upbeat music) >> Facilitator: From theCUBE studios in Palo Alto in Boston, connecting with other leaders all around the world. This is a CUBE conversation. >> Java is the world's most popular programming language. And it remains the leading application development platform. But what's the status of Java? What a customers doing? And very importantly, what is Oracle's and the community strategy with respect to Java? Welcome everybody to this Java power panel on theCUBE. I'm your host, Dave Vellante. Manish Gupta here, he's the Vice President of Global Marketing at Java for Oracle, Donald Smith is also on the panel, and he's the Senior Director of Product Management at Oracle and we're joined by David Floyd who is a CTO of Wikibon Research and has done a number of research activities on this very topic. Gentlemen, welcome to theCUBE, great to see you. >> Thank you. >> Thank you. >> Manish, I want to start with you. Can you help us understand really what dig into Oracle strategy with respect to Java. The technology, the licensing, the support. How has that evolved over time? Take us through that. >> Dave, with 51 billion JVMs deployed worldwide, Java has truly cemented its position as the language of innovation and the technology world. There's no question about that. In fact, I like to say it's really the language of empowerment. Given the the impact it has had numerous applications ranging from the Mars Rover to genomics and everything in between. As Oracle acquired sign over 10 years ago, it's really kept it front of mind, two aspects of what we want to do with the technology and the platform. The first one was to ensure there was broad accessibility to the technology and the platform for anybody that wanted to benefit from it. And the second one was to ensure that the ecosystem remained vibrant and thriving throughout. I managed to do both. And underpinning these two objectives were really three pillars of our strategy. The first one was around trust, ensuring that openness and transparency of the technology was as was before continued to be the case going forward. The second element of that within the trust pillar was to ensure that as enterprises invested in the technology that investment was protected, it was not, you invest and you lose over a period of time in a backward compatibility, interoperability, certifications, were all foundational to the platform itself to the features, to the innovation moving forward. And more recently as we have rethought to the support, the licensing and the overall structure of the pricing that we have ensured that ultimately the trust comes along in those dimensions as well. So the launch of the Java subscription came along with, pay as you go model, it's a transparent pricing structure and discuss structure published on the website. So you can go and see what it would cost for the desktop on servers or cloud deployment. So those were the things that made kind of the first pillar happen. The second one was Dunno innovation. Over the last 25 years, Java has stood the test of time. It has delivered the needs of today while preparing for the future. And that remains the case. It is not something that has sort of focused on the fat of the day and the hot thing for the day, but really more important that it is prepared to deal with the mission critical, massive scale deployments that can run for years, for decades, in some cases. 
And keeping that in mind, Oracle has continued to put more and more technology into the open source world with every release that comes out, you can see 80 plus percent of the contributions come from Oracle. So that's the second pillar around innovation. And the third piece of the strategy has been around predictability. Ensuring that Java, the technology and platform perform as advertised, and that goes into the feature releases, it goes into the release process, it goes into the fact that you were broadly within the open JDK environment for developing and executing the roadmap. From a CIO standpoint, it's important to know that the technology used to develop your applications has talent around. And your, if you're going to develop something like Java, you'll find the right Java engineers to do the job. That is not a question, right? And so that's part of predictability. And finally, again with the change in the six months release cadence that came about three years ago, with the release of Java 10, we've really made sure that it's not, no, a bunch of things come about. You don't know when they're going to be released, but you know, like clockwork, you'll have a new Java list every six months. And that's been the case every March and September, since Java 10, you've had a new release of Java with certain features that come up and we just launched Java 15. So trust innovation predictability, have really been the three pillars on which we've executed the strategy for Java. >> Excellent, thank you for that intro, and we're going to get into it now. I'm glad you mentioned the sun acquisition. I said at the time that Java was the linchpin of that acquisition, many people, of course, we looked at the integration piece with the hardware, but it was really Java and the capabilities that it brings. And of course, a lot of Oracle software written in Java and not the least of which is a fusion. But now let's get into the components of this. And I want to talk a little bit about the methodology of this and going to call on you David Floria. But essentially my understanding is that Wikibon went through and David, you led this, you did a technical deep dive, which you always do, did a number of in depth interviews with Java customers. And then of course you also did a web survey and then you built from that data and economic model. So you can try to understand the sort of dimensions of the financials if you will. So what were your key findings there? >> So the key findings were that Java was in a good state that people were happy with the Java. The second key finding is that the business case itself for using the Oracle services, the subscription services was good. It didn't mean to say that that wasn't for every company, the right way to do it, but there was a very good return on that. And the third area was that there was a degree of confidence that the new way of doing things, the six-month cycle, as opposed to the three-year cycle was overall a benefit to the rate of change, the ability for them to introduce new features quickly. >> Okay, well, I mean, you know, and I read that research. And to me my takeaways where I saw the continued relevance of Java, which is kind of goes without saying, but a lot of times it gets lost in the headlines. That subscription piece is key. We're going to get into some of the economics as to how that affects customers and it saves you money. And the other piece was the roadmap becoming more transparent. 
And I don't want to dig into that a little bit, but before we do, let's get into that innovation component Manish, mentioned that several times, but Don, I want to go to you guys. We have a slide on the various components of the innovation. If you would bring this up and Don I wonder if you could talk to this and give us some examples if you would. >> Yeah, sure. So we were the number one development platform for the last 25 years. We want to be the number one development platform for the next 25 years. And in order to do that, we have to be constantly innovating and constantly innovating not only the business side in terms of the subscription and the support offerings and commercial features like Manish was talking about, but also the platform in general. And so the way we like to talk about innovation as we break it down by these pillars that you can see on the slide. And so the first pillar is continuous improvements to the language. So this is watching developers trying to write the same piece of code over and over again, and us asking, can we make you more efficient? Can we give you more language features that reduce the amount of boilerplate that you have to write? The second pillar is a project that we just announced a few months ago called Leyden. And the idea with Leyden is addressing the longterm pinpoints of Java slow startup time and time to peak performance. So if you go back 10 years ago, everybody knows about Java as an enterprise platform, Java EE application servers. They all had the notion of being very long lived. And so Java at that time would be optimized towards long lived applications, startup, and performance. Where if it took a little while to get there, it didn't matter as long as when it got there, it was super fast. And so we're trying to get that peak performance faster in the world of microservices. In a similar vein with project loom, we're looking at making concurrency simple again, looking at how developers are doing more reactive style programming and realizing that the threading model needs to be rethought from the ground up, that project is looking really, really good. Then we have project Panama. Project Panama is all about making it easier to connect Java with native libraries. Valhalla is all about improving, there's a couple of benefits, but it's all about improving memory density and being able to access and iterate and operate over primitive data types at super fast speeds by better optimizing how that information is stored in memory. And then the other pillar of the final pillar that we have been working on from an innovation perspective is ZGC. We introduced a new garbage collector technology a few years ago, G1GCE a generational garbage collector with the eye towards making garbage collection in Java pause lists. So if you, again, if you go back in time and look at the history of Java, memory management is awesome, but there's always that cost and risk of a garbage collection cycle, taking a bit of time away from a critical application. And ZGC is all about getting rid of that. So lots of innovation, lots of different pillars going on right now. >> Awesome, I'm impressed. There's something after Valhalla. I thought that was Nirvana. (laughing) But now, and these are all open source projects, right? And you guys obviously provide committers, there are other people in the open source world who provide that, is that correct Don? >> Yeah, that's correct. We have about 80% of the contributions in open JDK. 
We are the stewards of open JDK and lead the project. Most of the pillars I talked about here are you know Oracle folks working on that. >> Awesome. Okay, let's get into some of the data. David, I want to come back to you and talk about some of the survey results guys, if you bring up that next slide. Why David, why do people upgrade? What are the drivers? It's really talks to the large companies and what's different from the small company or mid-size companies? What are the takeaways here? >> David: Well, so this is interesting, and as you might expect, large enterprises, have very concerned about application stability. Whereas midsize or enterprises are much more concerned about the performance, making sure that the performance is good. They are both concerned about reliable performance and security, but it's interesting that from a regulation point of view, mid-size companies really want to make sure that they are obeying the regulations, that they are meeting those. Whereas larger organizations usually have their own security and regulation functions looking very hard at these things. So that looking less to the platform to provide those than their own people. >> Yeah, I think you're right. I think the midsize organizations don't have as many people running around taking care of security and it's harder for them to keep up with the edicts of the organization. So they want to stay more current. Don, I wonder if you can add anything to this data from an innovation standpoint. >> Yeah, well, and from a product management standpoint, and what we see here is when you look at just going from fortune 500 to global 2000, you see things that are important to one or less so than the other. You can extrapolate that all the way down to a small company or a startup. And that's why providing the most flexibility in terms of an offering to allow people to decide what, when, where, and how they would be going to upgrade their software so they can do it when they want, and on their own terms. You can see that that becomes really important. And also making sure that we're providing innovation in a broad way so that it'll appeal both to the enterprise and again extrapolating that forward down to even very small startups. >> You know, David, the other thing that struck me in the data, if we bring up that other piece is the upgrade strategy, and there was a stark difference between large enterprises and midsize organizations. Talk to this data, if you would. >> Yes, this is again, a pretty stark difference between them. When you're looking at large enterprises, they really wants stability and they don't want to upgrade so often. Whereas mid-size enterprises, are much more willing to both upgrade on a regular cadence and really have a much more up-to-date, or have always have the latest software. They're driving smaller applications, but they're much more agile about their approach to it. Again, emphasizing what Don was saying about the smaller enterprises wanting a different strategy and a different way of doing things than large enterprises. >> So Manish this says to me that you got it right from a strategy standpoint. I mean, any color you can add here. >> Yeah, it's very intuitive that whether you're a large organization, a mid-sized enterprise or a small business, right? You face competitive pressures, your dynamics are unique. 
What you're able to do with the resources, what you desire to do at the pace that is appropriate for your environment, are really unique to you, and to try to force one model across any one size or across any set of dynamics is just not appropriate. So we've always felt that giving the enterprises and the organizations the ability to move at the pace of their business is the right approach. And so when we designed the Oracle Java SE subscription, we truly had that front and center in our thought process. And that structure seems to be working well. >> David, what I like about the way you do research is you actually build an economic model. A lot of these business value projects, I know this well, having been in the business a long time, they'll go out to ask the customer what they got, and then the customer said, "Well, I got a 111% ROI," and boom, that's what it is. You actually construct an economic model, you bring in rules of thumb, it allows you to do what-ifs, you can test that model and calibrate it against the real world. So I commend you on that. You've done a lot of hard work there, but bottom line it for us, I mean, let's bring up the economics. I mean, that's what people ultimately want to know. Does this save me money? What's the bottom line here? >> Yeah. Yes, that's a very important question. And the way we go about it is to ask the questions so that we can extract from those questions how much effort it took, for example, to upgrade things, how much effort it took for important applications and not so important applications. So we have a very detailed model driven by the survey itself, and it's in the back of the research; I'm a great believer that you should be able to follow exactly what the research said, what the survey said and how it was applied to the model. And what we focused on was, what was the return of using the Java subscription service versus taking an upgrade every six months? Those were the two ways that we looked at it. And for large enterprises, the four-year costs for the enterprise were $11 million, but for taking the additional subscription service, the payback is within a year, well covered by the lower costs of managing a lot of systems and environments. And we found a very similar result with those midsize enterprises. There, it was 3 million, and again, they got that back within the year in terms of payback. So, but that's one alternative. There is another alternative that may be worth the extra money if you really want to be up-to-date, or if you want to drive a much more aggressive strategy for your organization. >> So these are huge numbers. I mean, he's talking about 30% savings on average for large and mid-sized enterprises in percentage terms, but the absolute dollars are actually enormous. So, you know, large companies here, we're talking about $20 billion enterprises with 500 or more Java applications. And mid-size is, you're talking about a couple, two, $3 billion companies. Manish, what are you seeing in the customer base in terms of the economics? >> Yeah, you know, anytime an organization is looking at an offering and a solution, they want to make sure it's giving them the value. And we all know the priorities that businesses have; they want to focus on those. Managing the Java estate is important, but is it the thing where they want to invest the dollars? And if they are investing the dollars, are they getting the return?
We find that if you can give the enterprises an ability where they can see the return, the cost is right for them. And if you can mirror that and map it also with reduced risk, then you've got the right formula. And with the subscription, they're able to not only see the cost savings that the model indicates clearly, but they're also able to reduce the risk in terms of security protection and other things. So it's a really, really good combination for the enterprises. >> Well, thank you. I wonder, Manish, if you could bring us home here and just kind of summarize your thoughts: everything you've heard today, what are the key takeaways? >> You know, Java has been around for 25 years, and we certainly believe it's really positioned well for what's required today. And perhaps more importantly, what is needed for the next decade and for the next 25 years. Having now served thousands of customers with the Java subscription, it's clear that it is meeting the needs of Fortune 10 organizations all the way down to a five-person development house, for example. What we're hearing from across the board is really that Java has been the go-to platform and it continues to be the go-to platform for mission critical development and deployment. However, as the Java estate becomes large, when you've got tens to hundreds, in some cases over a thousand, applications running across the enterprise, that complexity can be daunting. And the Java subscription is really serving the needs in three ways. One, it's getting them best-in-class support from Oracle, which is the steward of Java, the company that is generating over 80% of the innovation with every single release. The second thing is they're getting the business flexibility. So they can move at the pace that works for them. And the third piece is, as the business model has indicated, that they're getting it at a lower cost while lowering risk. So the combination of these things is the reason why we're seeing very high renewal rates, why we're seeing thousands of organizations take it up. And I want to wrap it up by saying one final thing: that you can count on Oracle to be transparent, to be the right steward of both technology innovation, as well as ensuring the support for the vast ecosystem, whether it's libraries, frameworks, user groups, educational services and so on. So Java is here, has been here for the enterprise, large and small, and it's ready for the next generation as well. >> Great, thank you for that. Well, one more question. What's the call to action? If I'm a mid-sized company or a large company, I've made investments in Java, what should I do next? >> I would say, take a look at the Oracle subscription. It will reduce your costs and it'll give you a lower risk profile for your organization. >> Great, nice and crisp, I like it. If you like, if you guys don't object, I'm going to give you my summary. I've been taking notes this whole time, and so, we've explored two options. Customers can do it themselves or go with the subscription on a regular cadence. It's very clear to me that Java remains relevant, as we said up top. It's the world's most popular programming language, we know all about that. The ecosystem is really moving fast. Of course, with the stewardship of Oracle: cloud, microservices, the development of modern applications. I think the directional changes that you guys, Manish and Don and Oracle, have made were really the right call.
The research that you did, David, shows that it's serving customers better. It lowers costs, it's cutting down risk, particularly for the mid-sized companies that may not have the security infrastructure and the talent to go chase those problems. And I love the roadmap piece. The more transparent roadmap really is going to give the industry and the community much more confidence to invest and move forward. So guys, thanks very much for coming on this CUBE Java power panel. It was great to have you. >> Thank you. >> Thank you. >> Thank you. >> All right, thank you for watching, everybody. This is Dave Vellante, for theCUBE, and we'll see you next time. (soft music)

Published Date : Sep 21 2020

SUMMARY :

leaders all around the world. And it remains the leading The technology, the and that goes into the feature releases, of the financials if you will. And the third area was that And the other piece and realizing that the threading in the open source world JDK and lead the project. What are the drivers? making sure that the performance is good. and it's harder for them to keep up You can extrapolate that all the way down in the data, if we bring or have always have the latest software. me that you got it right the ability to move at and calibrate it against the real world. and in the back of the research, in terms of the economics? but is it the thing where they and for the next 25 years. What's the call to action? at the Oracle subscription. and the talent to go chase those problems. and we'll see you next time.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
David | PERSON | 0.99+
David Floyd | PERSON | 0.99+
Dave Vellante | PERSON | 0.99+
3 million | QUANTITY | 0.99+
Palo Alto | LOCATION | 0.99+
four-year | QUANTITY | 0.99+
Java 15 | TITLE | 0.99+
$11 million | QUANTITY | 0.99+
Oracle | ORGANIZATION | 0.99+
six-month | QUANTITY | 0.99+
5% | QUANTITY | 0.99+
Donald Smith | PERSON | 0.99+
David Floria | PERSON | 0.99+
three-year | QUANTITY | 0.99+
tens | QUANTITY | 0.99+
Java 10 | TITLE | 0.99+
Manish Gupta | PERSON | 0.99+
111% | QUANTITY | 0.99+
Java | TITLE | 0.99+
Wikibon Research | ORGANIZATION | 0.99+
two | QUANTITY | 0.99+
Manish | ORGANIZATION | 0.99+
second element | QUANTITY | 0.99+
third piece | QUANTITY | 0.99+
500 | QUANTITY | 0.99+
25 years | QUANTITY | 0.99+
second pillar | QUANTITY | 0.99+
two ways | QUANTITY | 0.99+
first pillar | QUANTITY | 0.99+
both | QUANTITY | 0.99+
Don | PERSON | 0.99+
Manish | PERSON | 0.99+
second pillar | QUANTITY | 0.99+
thousands | QUANTITY | 0.99+
one alternative | QUANTITY | 0.99+
third area | QUANTITY | 0.99+
two options | QUANTITY | 0.98+
first one | QUANTITY | 0.98+
two aspects | QUANTITY | 0.98+
over 80% | QUANTITY | 0.98+
$3 billion | QUANTITY | 0.98+
September | DATE | 0.98+
Java EE | TITLE | 0.98+
Dave | PERSON | 0.98+
10 years ago | DATE | 0.98+
Boston | LOCATION | 0.98+
second one | QUANTITY | 0.98+
six months | QUANTITY | 0.98+
One | QUANTITY | 0.98+

CJ Bruno, Intel | The Computing Conference


 

>> SiliconANGLE Media presents... theCUBE! Covering AlibabaCloud's annual conference. Brought to you by Intel. Now, here's John Furrier... >> Hello everyone, welcome to Silicon Angle's theCUBE here on the ground, in Hangzhou, China. We're here at the Intel Booth as part of our coverage, exclusive coverage of Alibaba Cloud Conference here in the cloud city. I'm John Furrier, the co-founder of SiliconANGLE, Wikibon and theCUBE. And I'm here with CJ Bruno, who is the Corporate Vice President and General Manager of Global Accounts of the sales and marketing group at Intel. That's a mouthful, but basically you run a lot of the major accounts, you bring a lot of value as an Intel supplier to these big clouds. >> I do, John. We look after our top 20 or so largest partners and customers around the world. Amazing ones like Alibaba, edge to cloud enterprises, deep rich engagements, just an exciting, exciting time to be in the business with these big customers. >> And there's no borders to the cloud, so it's not as easy as saying PC, like people might think of Intel in the old days. You guys have these major cloud providers, there's a lot of Intel inside so to speak, but that value is enabling a new kind of functionality. We're hearing it here at the show. >> You are. We work together with partners like Ali, in the area of such big artificial intelligence development, big data analytics and of course, the cloud. We've been working with them for over 12 years now and you can see the advancements and the services that they're providing to their customers, not only domestically, here in China, but on a global stage as well. >> It's interesting, Intel, you've been working with these guys for 12 years, what a journey, from an entrepreneurial 12 guys in a dorm room, or an apartment for Jackie Ma, that he talks about all the time, to now the powerhouse. What's it like, because these guys have an interesting formula going on here. They're bringing culture and art with science, kind of sounds like Steve Jobs, technology meets liberal arts, bringing a cultural aspect. How far have they come? Give us some insight into where they've come from and where you think they're going. >> It's amazing, Jack Ma, yesterday in his keynote, talked about this event eight years ago. 120 people, John, we're standing amongst 60,000 or so, in this event today, just eight short years later. It's amazing what they've been able to do. They're driving innovation, this is not a copy economy, it's an innovation economy. They invest, very high degree of technical acumen. Willingness to break barriers, try things people have not. Fail fast and correct. Take risks. They're entrepreneurs at heart, they're technologists in their bloodstream and they really invest to win. >> You guys are supplying. We talked to people who talk about photonics, Deeraj Malik, who's really going deep on these pathways. Some of the Intel innovations, some of it's like wow, mind-blowing. The other end is just practical stuff, making it easier, faster, simpler to run things. IoT, their big use case, I mean you can't get any sexier than looking at a city cloud that's actually running the city with traffic and all those IoT devices, so what is the big thing that you guys do for Alibaba? Talk about that journey, because it's not one thing, what is it? What is the magical formula? >> Sure, of course, first off we deliver, we think, world-class ingredients to their world-class cloud. And enable them to deliver amazing services to their customers, at the base level.
But we really work together to solve societal problems. Look at the precision medical cloud that we announced last April together, John. Genome sequencing, solving people's cancer problems, in a matter of days, instead of months. Just one example of the real use cases that we bring these technologies to bear on and have an amazing influence. We work with them on the Tianchi Medical Imaging Competition. 3,000 entrants competing to see who can identify lung cancer quickest, and we have some winners selected, just this week. So these things are real, taking this technology, solving real life problems, and business problems, around the globe. >> And it's not just the big, heavy-lifting technology that moves the needle, like you were mentioning, but it's also the micro technologies, like FPGA, you guys have got a lot of things. This is like the new Intel, so I'd love to get your thoughts, if you can just take a moment to share the journey that Intel is on right now, because you gave a talk yesterday, a kind of a keynote, onstage. What does the Intel journey look like right now? >> We're transforming ourselves from a PC centric company to a company that runs the cloud and powers countless numbers, billions and billions of smart-connected devices. That's a big journey we're on. We've diversified our business significantly in a five year period, John. Driving our data-center business, our IoT business, our programmable logic business as you said, our friends from former Altera are now two years inside Intel. Our memory business, our NSG technologies, 3D NAND and Optane, driving breakthroughs in SSDs, and of course new technologies that we're exploring, like drones and neuromorphic computing, making sure we never miss the next big thing. >> I've been following Intel for 30 years of my career and life, as an initial user-developer and now in the media. It's interesting, Intel has never done it alone, it's always been part of the ecosystem. You have brought a lot of goods to the party, so to speak, in technology, Moore's law and the list is endless. Now it's an end-to-end game, but you look at 5G for instance, you kind of connect the dots, put a radio frequency cloud over a city and you've got to run the IoT devices like a city brain, they're showing here. You've got to tie it together with programmable arrays, it's a hardware thing but now the software guys are doing it. You've got cloud native with the Linux Foundation, that's DevOps. You've got data centers that are 10 to one silicon to the edge, this is a wide opportunity, how do you guys make sense of it to customers? Because it's a complex story. >> It is John, look, we're the ultimate ingredient supplier. We're bringing forward technologies in artificial intelligence, in 5G, in VR and AR, areas that are just autonomous everything. Autonomous driving in particular. These are big investment areas we're driving into that require an enormous amount of compute, storage, networking, connectivity, and we're making the investments to make sure we're critical partners with our customers, in all those huge growth areas. Making us a big growth company now. >> I had a great conversation with Dr. Wong, who's the founder of Alibaba Cloud, he's on the Technology Steering Committee for Alibaba Group, and yesterday they just announced a 15 billion dollar investment over three years for FinTech, across the board IoT, AI, collaborating with scientists as well as artisans. This is a big deal.
These guys invest to win and they have a will to win. And they want to pioneer and they want to innovate and they put their money where their mouth is, in that announcement, it's pretty exciting. >> So the cloud serves quite a market, doing really well. Your global accounts are doing well, certainly in Asia and the People's Republic of China, PRC, as you guys call it, extremely well, but now there's a renaissance in cloud in general, so we're expecting to see a lot more cloud service providers, maybe not as big as Alibaba, but Alibaba is going to start getting customers that become SaaS companies, that's technically a cloud service provider if you think about it, if they have an application, so how do you look at that market? >> We see what is known as the super seven in the industry, the large folks, both US based and China based, but then we've identified the next 60 to 70 next-wave CSPs that are growing vibrantly around the globe, and there's a long tail of another 120 that we're interacting with. You're absolutely on point, an exploding area. Significant double-digit growth for years to come and just solving big, big life and business problems. >> So at SiliconANGLE, silicon is also in the name, and Wikibon Research is really big here in China, an interesting dynamic that's happening here with the data and the software, and it was brought up with Dr. Wong about the IoTs, kind of a nuanced point, but I want to get it out for the folks watching: you're going to start to see new compute at the edge, because data is now the currency of the future. It needs to flow, it's like water, but at the edge it can be expensive; low latency is the table stakes that everyone wants to get to. You're going to see a lot more compute or silicon at the edge of the network. Internet of things coming, your view on that? >> There's no question John, that's exactly the way we see it. The time to get the data back to the long-haul data center is very expensive and very challenging, and requires an absolute redo of the network. We're moving compute closer and closer to the data; of course, the cloud remains a vital, vital part of that, but if we move that compute capability closer to where the data is sensed, you can analyze it quicker, you can make faster decisions and you can implement those decisions at the edge. >> CJ, final question for you, obviously Alibaba, big part of their growth strategy is going outside mainland China, obviously doing very well here, not to knock them there, but great opportunity to go into the global marketplace, specifically North America. That's going to put more competition, competition is good, but it's also going to require more growth. How are you helping Alibaba and how does your relationship at Intel expand with Alibaba? >> We work with Alibaba, not only on the technical front of course, but on their go-to-market plans, on ecosystem development plans and even some business models. We do that across our entire customer and partner base, John. We're seeing this explosive growth in cloud, and being able to work with our partners on all four of those fronts, technology development, go-to-market, ecosystem development, and business model development, is obviously a benefit to both of us. >> Alibaba is going to need some help because you know it's competitive, Amazon had a nice run for a while, Microsoft nibbling at the heels, Google and now Alibaba coming in. Competition is good. >> We're proud to call all those innovators our customers and we work hard every day to earn their business.
>> Final, final question, this one just popped in my head. What should folks in America know about this PRC market or China market that they may not know about? Obviously they read what they read in the paper. They see the security hacks, they see the crypto-currency temporarily on hold, but blockchain certainly has a lot of promise, but it's a dynamic market here. A lot of opportunities. What should that audience know about the China market? >> I think the first thing they should know is that if they haven't come to experience it themselves, they should. The scale of the opportunity, the scale of the country, is like nothing people have ever seen before. As I said, the investments they're making to innovate, to drive an innovation economy, are breakthrough. You take that scale and that investment and this is a market to be reckoned with. >> Congratulations on the 12 year run with Alibaba, and now Alibaba Cloud. Looking really, really strong, love the culture, got a unique twist: artistry and scientific cultures coming together, looking good. >> Absolutely John, thanks for letting us tell our story. >> CJ Bruno, Group Vice President, General Manager Global Accounts for Intel. I'm John Furrier with SiliconANGLE, thanks for watching.

Published Date : Oct 24 2017

SUMMARY :

Brought to you by Intel. Accounts of the sales and marketing group at Intel. time to be in the business with these big customers. You guys have these major cloud providers, there's a lot of intel inside so to speak services that they're providing to their customers, not only domestically, here in China but on he talks about all the time, to now the powerhouse. to win. is the big thing that you guys do for Alibaba? And enable them to deliver amazing services to their customer, at the base level. This is like the new Intel, so I'd love to get your thoughts, if you can just take a and of course new technologies that we're exploring, like drones and neuromorphic computing, You have brought a lot of goods to the party, so to speak, in technology, Moore's law and It is John, look, we're the ultimate ingredient supplier. the Technology Steering Committee for Alibaba Group and yesterday they just announced a These guys invest to win and they have a will to win. but Alibaba is going to start getting customers that become SaaS companies, that's technically We see what is known as the super seven in the industry, the large folks, both US data is now the currency of the future. The time to get the data back to the long-haul data center, is very expensive and very challenging opportunity to go into the global marketplace, specifically North America. We're seeing this explosive growth in cloud and being able to work with our partners on Alibaba is going to need some help because you know its competitive, Amazon had a nice We're proud to call all those innovators our customers and we work hard everyday to What should that audience know about the China market? As I said, the investments they're making-to innovate, to drive an innovation economy is Looking really, really, strong, love the culture, got to unique twist; artistry and scientific I'm John Furrier with SiliconANGLE, thanks for watching.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Alibaba | ORGANIZATION | 0.99+
Amazon | ORGANIZATION | 0.99+
Google | ORGANIZATION | 0.99+
John | PERSON | 0.99+
Microsoft | ORGANIZATION | 0.99+
Steve Jobs | PERSON | 0.99+
Wong | PERSON | 0.99+
Asia | LOCATION | 0.99+
China | LOCATION | 0.99+
Jack Ma | PERSON | 0.99+
US | LOCATION | 0.99+
Jackie Ma | PERSON | 0.99+
America | LOCATION | 0.99+
30 years | QUANTITY | 0.99+
12 years | QUANTITY | 0.99+
John Furrier | PERSON | 0.99+
12 guys | QUANTITY | 0.99+
CJ Bruno | PERSON | 0.99+
Alibaba Cloud | ORGANIZATION | 0.99+
last April | DATE | 0.99+
North America | LOCATION | 0.99+
Deeraj Malik | PERSON | 0.99+
120 people | QUANTITY | 0.99+
Wikibon Research | ORGANIZATION | 0.99+
yesterday | DATE | 0.99+
SiliconANGLE | ORGANIZATION | 0.99+
Wikibon | ORGANIZATION | 0.99+
10 | QUANTITY | 0.99+
Intel | ORGANIZATION | 0.99+
two years | QUANTITY | 0.99+
five year | QUANTITY | 0.99+
billions | QUANTITY | 0.99+
People's Republic of China | LOCATION | 0.99+
Hangzhou, China | LOCATION | 0.99+
both | QUANTITY | 0.99+
theCUBE | ORGANIZATION | 0.99+
PRC | LOCATION | 0.99+
Ali | ORGANIZATION | 0.99+
3,000 entrants | QUANTITY | 0.99+
60,000 | QUANTITY | 0.99+
Linux Foundation | ORGANIZATION | 0.98+
Silicon Angle | ORGANIZATION | 0.98+
eight years ago | DATE | 0.98+
eight short years later | DATE | 0.98+
15 billion dollar | QUANTITY | 0.98+
today | DATE | 0.98+
this week | DATE | 0.98+
over 12 years | QUANTITY | 0.98+
Alibaba Group | ORGANIZATION | 0.97+
one | QUANTITY | 0.97+
Technology Steering Committee | ORGANIZATION | 0.96+
Alibaba Cloud Conference | EVENT | 0.96+
SiliconANGLE Media | ORGANIZATION | 0.95+
one example | QUANTITY | 0.95+
Dr. | PERSON | 0.95+
120 | QUANTITY | 0.95+

Jean English, NetApp | NetApp Insight 2017


 

>> Announcer: Live from Las Vegas, it's The Cube, covering NetApp Insight 2017, brought to you by NetApp. >> Okay, welcome back everyone. We're here live in Las Vegas, Mandalay Bay. This is The Cube's exclusive coverage of NetApp Insight 2017. I'm John Furrier, the cohost of The Cube, co-founder of SiliconANGLE Media with my cohost, Keith Townsend with CTO Advisors. Our next guest is Jean English. She's the Chief Marketing Officer of NetApp. Great to see you, thanks for having us, and thanks for coming on The Cube. >> Oh, thank you, thank you guys for being here. >> So NetApp is no longer a storage company, we learned. But then last year, now this year, you're a data company. >> Jean: (laughs) That's right. >> The brand promise is still the same. Take us through, as the Chief Marketing Officer, you have to, it's a complex world. One of your concepts here we've been seeing is winning while in a tough environment, and IT is a tough environment. I got application development going on. I got DevOps. I got data governance. I got security issues, internet of things. It's a challenging time for our customers. How is your brand promise evolving? >> So we really see that NetApp is the data authority for hybrid cloud, and the amazing thing is that what we see is our customers aren't talking to us about storage anymore. They're talking to us about data, and what their data challenges are, and most companies are trying to think through, if they're going to transform, how are they going to harness the wealth of the data. What are they going to do to maximize the value of the data? >> And the cloud too is center stage, 'cause the cloud is a forcing function that's changing the relationship of your partners, VARs; we've had a lot of folks on talking about the dynamics with customers around multiple clouds. We saw on stage the announcement with Microsoft. Congratulations. >> Jean: Thank you. >> So you've been in Amazon for a while. We've been covering that, but the on-premise work still is growing, where the data that came out from Wikibon Research shows that the on-premise true private cloud, which is defined as the cloud operating model on-premise, is actually growing. However, spending on non-differentiated labor is declining by 1.5 billion over the next five years as it's automated away, which means the SaaS market is going to continue to explode and grow, so the on-premise is actually growing, as is the cloud. How does that change the narrative for you guys, or does it, or is that a tailwind for NetApp? >> We think it's a complete tailwind for NetApp. When we think about data today, we see that it's really becoming more distributed across environments. It's definitely more dynamic, as you're looking for the latest source of truth. And the diversity of data, especially with machine learning. I mean, it is exploding. So, how do you start to be able to bring that data together? We really think of it as that our customers want to maximize that value, and the only way to do that is to start to think about how do they bring it together, and how do they get more insight from that data, and then how do they have more access and control of that data, and then the question we most often get from our customers: how do I make sure it's secure? But the really big point is that, as we think about what NetApp is doing, it has been about three things that we see with our customers.
They have to make sure that they're modernizing what they have today, and that goes to the on-prem environment, so if it's going to be that they've got to accelerate applications, they want to make sure that they have that. But then there's this notion of building clouds, even building private clouds. And we think of that as a next-generation data center, especially with DevOps environments. Then harnessing the power of the cloud in a hybrid cloud world. Whether they're leveraging the cloud for SaaS applications, or leveraging the cloud for backup, or even disaster recovery and data protection, that's where we see that these three imperatives, when they come together, are what make them truly, truly able to unleash the power. >> So we saw on stage CEO George Kurian talking about his personal situation in light of what's happened in Las Vegas here. Data is changing the world, and your tagline is "Change the world with data." So I got to ask you, obviously, data, we see a lot of examples in society and also personal examples of data being harnessed for value. The cloud can be great there, and also on-prem. How do you guys position NetApp as a company? I know there's a lot of positioning exercises in marketing you do, but positioning is really important. That's what you do. The tagline is kind of the emotional aspect of it, okay, changing the world, let's change the world with data. I believe that. But what's the positioning of NetApp? How would you say that the positioning- What's the positioning statement of NetApp? >> The positioning statement of NetApp, I think we've really seen a big break in the positioning in the last couple of years. And why? Because the customers are demanding something different. They're really looking for more hybrid cloud data services. And what are those data services that accelerate and integrate data, in that notion of on-prem and in the cloud? That's where we see what's going to happen to accelerate digital transformation. And so, this notion of, yes, we were thought about as storage before, but customers are demanding more for their data and they need data services, especially in hybrid environments, to really be able to drive their business.
And so this notion of simplicity: no matter how they think about managing or optimizing their data, it's got to be simple and easy to manage. Optimized to protect, I think data protection is critically important. Think about safeguarding data across its life cycle, and I think that NetApp has always been focused on how to make sure data is secure and protected. And that now is what we're seeing in the cloud too. So, all the relationships and partnerships that we've been creating and solidifying: with AWS it's been for the last couple years, and we've had some of the latest announcements of what we're doing to really make sure we have stronger data protection in multi-cloud environments. Obviously, today from what we're doing with Microsoft Azure, in really providing- Not even having to know how to manage storage, you can do it easily in Azure, and- >> No, I'm sorry. I really love this, this message from NetApp. As a traditional technologist, I understand NetApp disrupting the original storage SAN market with filers, you guys were one of the first in the cloud with AWS, so from a trusted partner inside of the infrastructure team, I understand the vision of NetApp. But the transformation also means that you're starting to expand that conversation beyond just that single customer of the storage admin, of the infrastructure group. How has that messaging been going towards that new group of customers within your customers who have said, "NetApp? Isn't that a storage company?" How has that transformation been going? >> (laughs) You know, when we talk about reinventing, NetApp is reinventing itself. And that's what we're going through right now. And what we see is that the customers that we know and love, the storage admins and the storage architects, those are definitely tried-and-true and we love our relationships with them. But we see that the demands around data are growing and those demands are starting to reach more into DevOps, application developers, definitely into cloud enterprise architects as we think about cloud environments. The CIO is now under more pressure to think through how- They have a mandate to move to the cloud. Now what? But who do they want to move with? Someone that they've trusted before, and by the way, because we've been first, and because we're so open with all our relationships with the cloud providers, why not move with us? Because we can help them think through it.
And how important is that moving forward? Especially as we look at technology such as ONTAP. They have been there from the beginning. I love the NFS on Azure story, but that's powered by ONTAP, which I kind of- It took me a few minutes to kind of get it, because I'm thinking, "ONTAP in Azure, that's bringing the old to the new." But that's not exactly what it is. What messaging do you want customers to get out of something like NFS in Azure? >> We want them to understand that they don't have to know anything about storage to be able to protect and manage their data. No matter what environment they're in. >> And by the way, we've been looking at and commenting critically on The Cube at many events now that multi-cloud is a pipe dream. Now I say that only as folks know me. It's real. Customers want multi-cloud, but multi-cloud has been defined as, "Oh, I run 365 on Azure, and I got some analytics on Redshift on Amazon, I do some stuff on-prem." That's considered multi-cloud because there happens to be stuff on multiple clouds. You guys are doing something with cloud orchestration that's quite interesting. It truly is multiple clouds in the sense that you can move data, if I get this right, across clouds. >> Jean: That's right. >> So it's in a completely transparent way, a seamless way, so I don't have to code anything. Is that true? If that's true, then you might be one of the first multi-cloud use cases. >> We are one of the first multi-cloud use cases. We have created the data fabric, which is really looking at how do you seamlessly integrate across multiple clouds or on-prem environments? The data fabric, we've been talking about this vision for a couple of years. What we're seeing now is customers are seeing it come to reality. And now that we have more and more relationships expanding, as we mentioned we've been building SaaS offerings with AWS for a couple years, we just had the big announcement today with Microsoft Azure. We're working with IBM Cloud. We're also working with Google Cloud, Alibaba, so as we think about a seamless data fabric, they want frictionless movement in and out of the cloud. >> Jean, I got to change gears for a second, because one of the things we've been observing over the past couple of months, certainly we were at the Open Source Summit, Linux Foundation. Open source is growing exponentially now. You've seen the new onboarding of developers in general, and enterprise is going to take the bulk of that. Companies are supplying personnel to contribute on open source projects. That's continuing to happen. Nothing new there. But it's starting to change the game. You see blockchain out there, getting some traction, ICOs and all that hype, but it points to one thing. Communities are really valuable. So as a marketer, I know you were at IBM, very community-oriented, very open source oriented, the role of communities is going to be super important as customers discover- So marketing is changing from batch marketing, you know, surge email marketing, to real-time organic with communities. It's not just having a social handle. Really, have you guys looked at the B2B marketing transformation as customers start to make selections and take opinions in the new organic communities? Because you have people in these projects, in open source, who are making decisions based on content. What's your view on communities and the importance of communities? >> Well, we believe highly in communities. Our A-Team is a community with us that is so strong, and they're our biggest advocates.
They get brought in very, very early on in terms of learning about our new technologies and learning our story and understanding our strategy and where we're moving. I think you may have talked to some of our A-Team members before. >> John: Quite strong, very strong. >> But they are an amazing group of people and we believe highly that their advocacy is what is really going to help us to stay in touch and be really close to these new buyers as well. >> And you've got to really internalize that too in the company. Operationally, any best practices you can share with other CMOs? 'Cause this is a challenge for a lot of marketers: how do you operationalize something new? >> Yes, well, we're finding that this notion of reinvention starts with the company itself. And it starts with their own employees. So when we talk about the shift from storage to data, we're even having our own employees talk about their own data story and how they connect to data. George talked about his data story, actually, on the main stage in our keynote the other day. But connecting to that's been really important. This notion of transforming to think about these new customers and new buyers, it starts with the customer needs, it's not about a product-out discussion. And so, a new story to a new buyer, relevancy, what's happening in their industry, and then engagement, engagement, engagement. >> I've been following NetApp since they were a start-up and they went public, great story. They have a DNA of reinvention. David Hitz is going to come out, I'm sure. We'll talk about that, because he's been an entrepreneur, but he's also had that entrepreneurial DNA. It's kind of still in the company, so my question to you is, from a personal perspective, what have you learned or observed at NetApp during this reinvention? Not a pivot, it's not that at all. It's more of an inflection point for NetApp and a new way, a new way to engage with customers, a new way to build products, a new way to do software development, a new way to use data. This is a theme we're seeing. What's your personal observation, learnings that you could share? >> Well, in my first month, what I really learned is just the absolutely amazing culture that NetApp has and this notion of always embracing what our customers want in where we move. So what our customer wants, we move with it. We embrace it holistically. Years and years ago, you know, Linux and Windows. A couple of years later, virtualization, virtualized environments. Could've killed us. Made us stronger. Now, embracing the cloud. A lot of our customers say, "I would have canceled the meeting with you, but now that I understand that you're interested in the cloud and that you're in the cloud, I've totally changed my mind." And we say, "We love the cloud. We embrace the cloud holistically."
GDPR has been super hot. The global landscape, how is that going for NetApp? Obviously you have some experience outside the US. It's not always a US, North America centric world. What's the global story for NetApp? >> It's not. I lived in China and Singapore, and I know that there are demands that are not just US-centric. When we talk about Germany, I was just there a few months ago, and this notion of how do we start to address the articles that are in GDPR that help to make sure that we have the right compliance and protection for data inside of a country and inside of Europe. We actually have expertise in that area. We've been actually consulting and talking with customers about what they want to do with data compliance, and we're being asked now to say, "How does NetApp help address those articles? How do we come back with solutions to help control data and make sure we have the right access to data?" So, we're already consulting with customers. We know it's a top priority, and we have expertise to be able to help. >> We had Sheila FitzPatrick on. She's the Chief Privacy Officer. Very colorful, very dynamic, a lot of energy. >> Jean: She is. (laughs) >> She's going to slap anyone around who says you can just bolt on privacy. Good policy conversations, the policy converging in with that. It's interesting, the global landscape- The Cube will be in China next week for the Alibaba Cloud Conference, so we're going to go report, see what's going on there, so huge international challenge around regulations and policy. Does that affect the marketing at all? Because policy kind of is data privacy and security. Security super hot, obviously. Data security is number- A big thing. How does policy intersect with the technology? How as a CMO do you get that realized and put into action? >> Well, I think it's based on the foundation that we're always optimized to protect. That's one of the key foundations of why people choose NetApp. We definitely know that there are other demands that are happening in local markets. I was just in Australia a few weeks ago and was meeting with the New South Wales government, where they've had a mandate that all of the agencies need to use their own cloud platform. They've been working with NetApp to ensure that they can have the right data management solutions on that platform. And from a marketing perspective, we embrace that. And so we work with, whether it's Telstra, we're working with New South Wales, we're thinking about how do we ensure that that message is strong, because we know customers there have different demands than just what's in the US. >> So when you get CIOs and senior executives together at a summit like you guys had over the past few days, ideas start to percolate, problems start to come across. What were some of the biggest policy concerns throughout those conversations? Was it GDPR? Was it something else? What's top-of-mind? >> What we're hearing top-of-mind right now is data governance. And I think that that could be towards data compliance in terms of GDPR for Europe. I think it expands beyond Europe, though. I just heard, like I said, in Australia, where they're having demands from the government on what's needed to be really driven through a cloud platform. We're hearing from our customers in the last couple weeks about, if I'm moving to the cloud, number one, I want to have a seamless transition during the move in or out of the cloud, but I've got to make sure I've got the right governance model in place.
>> So we've heard this repeatedly. Customers moved into the cloud. How many customers are coming to you saying, "You know what, for whatever reason, whether it's cost, agility, the overall capability we thought we'd have available in the cloud, it's not really what we thought it would be. We need help moving it back." And what is that conversation like? >> Well, it's a conversation that we're able to help with pretty easily. Right now, we have had customers that have, for one, had a cloud mandate, so they've got to think about how am I going to move all my data to the cloud. Once they actually start getting into the detail, we do a design workshop where we help them think about whether maybe not all workloads are going to the cloud. Maybe some workloads go in the cloud. We have had a customer who did move the majority of workloads to the cloud and then decided, actually, we think we'll get better cost performance and better efficiencies if we actually have those back on-prem. We said, "No problem. We can help you with that too." And I think that's the beauty of what we talked about with data fabric: we're able to help them think through, no matter where they want their data, on-prem or in the cloud, we can help them. >> Jean, thanks for coming up here. I know your time is super valuable. I got to get one more point in, 'cause I want to make sure we get that out there. Public sector. NetApp's position strong, getting better? What are your thoughts? A quick update on public sector. >> We are very, very strong in public sector. We've actually had a strong presence in public sector with our customers for many years. And we're continuing to help them think through how they start to look at cloud environments, too. >> All right, Jean English, CMO, here on The Cube. Getting the hook here on time. She's super busy. Thanks for coming. Congratulations- >> Jean: Thank you. >> On great positioning, and looking forward to chatting further at The Cube. Live coverage here, Las Vegas at the Mandalay Bay. I'm John Furrier, Keith Townsend. We'll be right back with more live coverage after this short break. (upbeat music)

Published Date : Oct 4 2017

SUMMARY :

covering NetApp Insight 2017, brought to you by NetApp. She's the Chief Marketing Officer of NetApp. So NetApp is no longer a storage company, we learned. The brand promise is still the same. What are they going to do to maximize the value of the data? We saw on stage the announcement with Microsoft. How does that change the narrative for you guys, and that goes to the on-prem environment, Data is changing the world, and that notion of on-prem and in the cloud, and take that to the field message? to really make sure we have stronger data protection beyond just that single customer of the storage admin, and by the way, because we've been first, You're not pivoting off the core, and today, we actually just had here at Insight and for customers now in the data space, that's bringing the old to the new." they don't have to know anything about storage And by the way, we've been looking at one of the first multi-cloud use cases. And now that we have more and more relationships expanding, and enterprise is going to take the bulk of that. I think you may have talked and be really close to these new buyers as well. how do you operationalize something new? and it starts with the company itself. It's kind of still in the company, so my question to you is, and that you're in the cloud, I've totally changed my mind." and then can't get to the new model. to holistically embrace the cloud." because it's in Germany, so I have to ask. that help to make sure that we have the right compliance She's the Chief Privacy Officer. Jean: She is. Does that affect the marketing at all? and was meeting with the New South Wales government, ideas start to percolate, problem start to come across. but I got to make sure I've got the overall capability we thought on-prem or in the cloud, we can help them. I got to get one more point in, how they start to look at cloud environments. Getting the hook here in the time. and looking forward to chatting further at The Cube.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
George | PERSON | 0.99+
Keith Townsend | PERSON | 0.99+
Jean | PERSON | 0.99+
John | PERSON | 0.99+
China | LOCATION | 0.99+
Australia | LOCATION | 0.99+
Alibaba | ORGANIZATION | 0.99+
Europe | LOCATION | 0.99+
David Hitz | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
John Furrier | PERSON | 0.99+
New York | LOCATION | 0.99+
Telstar | ORGANIZATION | 0.99+
Germany | LOCATION | 0.99+
Microsoft | ORGANIZATION | 0.99+
Berlin | LOCATION | 0.99+
Singapore | LOCATION | 0.99+
1.5 billion | QUANTITY | 0.99+
Las Vegas | LOCATION | 0.99+
Jean English | PERSON | 0.99+
Wikibon Research | ORGANIZATION | 0.99+
US | LOCATION | 0.99+
last year | DATE | 0.99+
Mandalay Bay | LOCATION | 0.99+
George Kurian | PERSON | 0.99+
25 years | QUANTITY | 0.99+
Amazon | ORGANIZATION | 0.99+
Sheila FitzPatrick | PERSON | 0.99+
SiliconANGLE Media | ORGANIZATION | 0.99+
this year | DATE | 0.99+
North America | LOCATION | 0.99+
GDPR | TITLE | 0.99+
One | QUANTITY | 0.99+
IBM | ORGANIZATION | 0.99+
NetApp | ORGANIZATION | 0.99+
next week | DATE | 0.99+
first | QUANTITY | 0.99+
NetApp | TITLE | 0.99+
few weeks ago | DATE | 0.99+
today | DATE | 0.98+
New South Wales government | ORGANIZATION | 0.98+
one | QUANTITY | 0.98+
first month | QUANTITY | 0.98+
Linux Foundation | ORGANIZATION | 0.98+
single | QUANTITY | 0.97+
Alibaba Cloud Conference | EVENT | 0.97+
Linux | TITLE | 0.97+
Windows | TITLE | 0.97+
The Cube | ORGANIZATION | 0.97+
Big Data | EVENT | 0.96+
Instagram | ORGANIZATION | 0.96+
Open Source Summit | EVENT | 0.96+
three imperatives | QUANTITY | 0.95+

Tom Joyce, Pensa | CUBE Conversation Sept 2017


 

(futuristic music) >> Hello and welcome to theCUBE Studios here in Palo Alto, CA. I'm John Furrier, co-host of theCUBE and co-founder of Silicon Angle Media, Inc. I'm joined here with Tom Joyce, Cube alumni. Some big news, a new role as the CEO of Pensa. Welcome back to the Cube. You've been freelancing out there as an entrepreneur in residence, CEO in residence, you've been on theCUBE commentating. Great to see you. >> Good to see you, too. Thanks for having me back. You know, fully employed. >> Congratulations. You know, finding where you land is really critical. I've talked to a lot of friends, and they want to get a good fit in a gig, they want to have a good team to work with, it's a cultural issue, but also you want to sink your teeth into something good, so you found Pensa. You're the CEO now of the company and you've got some news, which we'll get to in a minute, but what's going on? Why the change, why these guys? >> You know, last time we talked, last time I was in here, I was running a consulting business, and I did that for almost a year so that I could look at a lot of options and, you know, kind of reset my understanding of where the industry is and where the problems are. And it was good to do that. These were some of the best people that I met, and I got interested in what they were doing. They're smart, technical people, I wanted to work with them. It was a good fit in terms of skills, because when I joined Pensa just a couple of months ago, they were all technical people, and they'd been heads-down developing core technology and some early product stuff for almost three years. So they needed somebody like me to come in and help them get to the next level, and it was a really good fit. And the other thing is, frankly, in my last job I was running an IT shop and I also had a thousand people out there selling, and about 300 pre-sales people, and when I saw this, I saw a product that I could've used in both of those areas. So sometimes when you resonate with something like that you start to think, well geez, this is something that I could, that a lot of people are going to need. And so there are many aspects of the technology that are interesting, but ultimately, I saw that this is a useful thing that I could go make a big business out of. So that's why I did it. >> You've had a great career, you know we know each other going way back, EMC days, and certainly at HP, even during the corporate development work that Meg Whitman was doing at HP, but involved in a lot of M&A activities, so you've seen the landscape, you've talked with all the VCs, and all the conversations we've had in the past on other interviews you can check out on YouTube, Tom Joyce, if you're interested in checking those conversations out. Worth looking at. So you landed at Pensa. What do they do? What was the itch for you? What was the, why are they relevant? What are they doing? >> Well, the first thing is, the company was founded about three years ago by people that had hardcore experience in big networking and virtualization environments. And they've been tackling some of the hardest problems in virtual infrastructure as you move from the hardware to everything being virtualized on multiple clouds. These guys were tackling the scale problem. And they'd also drilled down into how to make this work in the largest network environments in the world. So they had gotten business out of one of the largest service providers in the world as their first customer.
So you look at that, and you say, alright, these are smart people. And they're focusing on hard problems, and there's a lot of, a lot of longevity in the technology that they're going out and building. And basically, what they're trying to do is help customers go to the next level with all software-based or software-defined, if you will, infrastructure, so that you can take technology from a whole bunch of different sources. It's going to be VMware, OpenStack, DevOps, the DevOps stack, as well as the whole constellation of people in the security industry. How do you make all those software parts work together at scale, with the people that you have? Rather than going out and hiring a whole new IT staff to plug all this stuff together and hope it works, these guys wanted to solve that. So without a lot of expertise, this product can go design, validate that it works, build and deploy complete software-defined environments, and it can do it faster than you could do it any other way that I'm aware of, and I've been around this industry for a long time. So that's what I saw when I said, geez, I could have used this before, I could have used it in my own IT, where our exposures were things like we had all this old software that we needed to update and we're scared to touch any of it, right? You look at things like Equifax. I was exposed in the Equifax breach, and that was exactly that scenario. >> Yeah, and they had four months in there playing around. Who knows what they got? >> To be honest with you, in my business we were doing the same thing, because we weren't comfortable with upgrading our software, 'cause we couldn't validate that it worked. How do you move from the old stuff to VMware 6.5 and make sure nothing else breaks? We're kind of in the era of needing machine learning, intelligent technologies, autonomous kinds of ways to deploy this stuff, 'cause you can't hire enough smart people to go do it. And that's what I saw. >> Well, we'll do a breakdown or a tear-down, however you want to look at it, of the company in a second, but you guys have some news. Let's get to the news. What's the big news that you're sharing today? >> Okay, great. Well, there's a couple of key parts of it. First, we're formally launching the company. We've been heads down in development, and I've been there for a few months, but the company hasn't been launched. So we're doing that, we're introducing Pensa to the world, and the new website is Pensa.ai. The second thing is we've completed our Series A financing, so we've got the financing under our belt. Third thing is we've been hiring a team. We've brought in, certainly me, and I've brought in a fella named Jim Chapel as the VP of marketing, a long-time industry guy in both large and small software companies. And we're rolling out the first product. So the technology is called-- >> In terms of shipping? >> Yeah, it's going to be shipping as a SaaS offering and it's available now. It's built on our technology which is called Maestro, which is this smart machine, and the first offering is called Pensa Lab. And I can describe to you what it's used for, but it's for helping people go figure out how do I design, build, run, try new scenarios, and roll out stuff that's actually going to work, and do it a lot faster than people can do with traditional technologies. >> Congratulations for launching the company, congratulations on the new role, great job. I'm looking forward to it. But let's get into the company, Pensa.
>> So let's just go into the market you guys are targeting. Take a minute to go into the market. What's the market, what's going on in the market, what trends, what's the bet in the market for you guys? >> With an early company like this, there's always a lot of things you can do, and the battle is figuring out what is the first thing we're going to do. So I think over time we're going to be relevant to a lot of people; the first customers we're going to be focusing on are people in IT that are trying to manage complex virtualized networks. So a lot of them are people using VMware today. >> So the category is virtualization cloud? What's the category? >> It's a SaaS product for design, build, run. So it's really designing autonomous IT systems that are built on software-defined environments. So it's VMware, OpenStack, the DevOps stack, and being able to kind of bring all those parts together in a way that, from an operational standpoint, you can deploy quickly. The first version of the product is going to be designed for test and dev. And next year, we intend to bring out production versions of it, but virtually every one of these folks has environments for test today, to figure out, alright, I want to go do my update, my upgrade, my change, I want to try a different security policy cause I've got a hack happening and I want to do that fast; we're going to go after that. The other side of it is folks in the vendor community. Almost anybody that's selling a solution, again, like me and the job that I used to have, has people out there doing proofs of concept, demos, building systems for customers. And what we can do is give you the ability to spin up complete working environments and do it (snaps finger) basically like that. If you got a call this afternoon to go show VMware NSX running with some customer application, with some other technology from a third party, we can make that work for you, and then you can tear it down and do the next one at four o'clock in the afternoon. >> So it's a VMware customer base you're targeting, I mean, it sounds like, and clarify if I don't get this right, you don't really care if it's private cloud, or hybrid cloud, or public cloud. >> We don't care. No, we don't. And there's a lot of folks-- >> And VMware, is that a target market, VMware buyers? >> Absolutely. Yup. And frankly, we've had people inside of VMware working with us as a number of the beta testers on this and demonstrating that they can spin up their own environments faster, so that kind of proof point is what we're after. Then there's a lot of folks in DevOps, right? DevOps is one of the hot targets for our business and a lot of businesses, and what we see is folks that are focusing on the app development side of DevOps, and then they get to the point where they got to call IT and say, alright, give me a platform to run my new application on, and they get the old answers. So a lot of these folks are looking for the ability to spin up environments very, very quickly, with a lot of flexibility, where they don't need to be an expert in, alright, how's the storage going to work and how do I build a network, right? >> So are you targeting IT and DevOps hybrid, or is it one or the other, DevOps developers? >> It's both. >> Okay, and you don't care which cloud, so you're going to draft off the success that VMware's seeing right now with their cloud strategy with AWS. >> Absolutely. I mean look, there's a lot of ways >> Software design is booming. >> We can help those customers figure out how do I do VSAN faster?
How do I do NSX faster? How do I set up applications that I can move to AWS faster? It's kind of bringing-- >> So software-defined clouds, software-defined data center, all this is in your wheelhouse. >> Yes, that's exactly right. >> This is what you're targeting. >> And that's the opportunity and the challenge. Again, when you're doing a small company, the world is your oyster, but you have to kind of focus on the first thing first. So we're going to go in and try to help people that are dealing with, alright, I need to kind of update my software so that I don't have an Equifax, or I need to fix my security policies, I need an environment, like, today that I can use to test that. Or, I want to go from the old VMware to the new VMware, I got to make sure it works. That's good for the customer, that's good for VMware, it's good for us. >> And the outcome is digital productivity for the developer. >> Absolutely. >> OK, so let's talk about the business, and the business model. So you guys raised some money, can you talk about the amount, or is that confidential? >> It's confidential at this point and we have some additional-- >> Is it bigger than 10 million? Less than 10 million? >> It's been less than 10 million. We're going to go lean and mean, but we're set up to make the run we need to run. >> OK, good, I got that out of the way. Employees, how many people do you guys have? What's the strategy? >> Just over 20 now, and we have a few more folks that we're going to be adding. We're going to go fairly lean from here. >> Okay, in terms of business model, you said SaaS. Can you just explain a little bit more about the business model, and then some of the competition that you have? >> Yeah, this product was designed from day one to be a SaaS product, so we're not going to go on-premise software or old models, we're going with a SaaS model for everything we're doing now and everything we intend to do in the future, so the product sits in the cloud, and you can access it basically on demand. We're going to make it very easy for people to get in and give this a try. It's going to be simple pricing, starting at about 15 hundred dollars a month. >> So a little bit of low-cost entry, not freemium, so there's going to be some cost to get in, right? Try before you buy, POC, however that goes, right? >> Yeah, it's see a demo, do a trial, give it a shot. I'll give you an example, right. When I was at my last job, I had 300 pre-sales people >> Where's this? >> This was at Dell Software. >> Dell Software, okay, got it. >> Now it's called Quest. They would go out and they'd use cloud-based resources to spin up their demo environment. Well, I'm going to give them, and I'm calling them, by the way, the ability to buy it for a very short amount of money, and you're not committed to it forever, you can use it as much as you want. And get the ability to say, alright, let's spin up VMware, let's spin up OpenStack, let's spin up F5, Palo Alto Networks, whatever security I want, get my app running on that without being an expert in all those parts. >> You can stand up stuff pretty quickly, it's a DevOps ethos, but it's about the app and the developer productivity. >> Right. And from a business model standpoint, it's how do I make this really, really easy? Because the more of those folks that use it in this phase, next year, when we get to say, alright, let's punch that thing you built into production on your cloud, we'll be ready to go. Our goal is to grab space quickly. >> Talk about competition.
>> I think the competition for this part of it, this kind of dev/test lab spin-up scenario, the Pensa Lab that I just described, the biggest competition is going to be people that build their own. So in the corner you've got your test environment running on your old hardware, right? So that doesn't come with this automated software capability. The other ones are going to be people like Skytap, as an example, that a lot of people use, and I've used in the past, that gives you a platform to run on, but again, a lot more cost and not the automated software capabilities. So there are a lot of scenarios like that that we can go after, and it's almost universal. Everybody's got a need to have some sort of a test or dev environment, right? And we are going to prove to them that the software is better. >> So not a lot of competition. It's not like there's a zillion players out there. >> No, it's a big target, but there's not a lot of players. And for the most part, you're going to go into scenarios where customers have something they've cobbled together that isn't working as well as they'd like. >> And Pensa.ai hints a little bit at an automation piece, which is really all anyone's talking about in the enterprise. Let's talk about the technology. What's under the hood, is there AI involved? Also you've got the domain name .ai, which, I love those domain names, by the way, but what's the tech? What's driving the innovation and the differentiation story? >> To be honest with you, inside that's something we debate, because that's what it is. If AI is a way to use technology to do things as well or better than people used to do before, that's what it is. And if you take all the hype and nonsense out of the conversation, you say it's not about SkyNet and computers taking over the world, it's really about doing stuff better than we can do and making people more effective; that's what we have. Now, under AI there's a bunch of different techniques, and we're going to be focused primarily on modeling, and the core IP of this is how we build the model for all of those components and how they interact and how they behave, and then machine learning. How do we apply techniques to actually-- >> So you're writing software that's innovating on technology and configuration, tying that together and then using that instrumentation to make changes and/or adaptive-like capabilities-- >> Exactly, but rather than go spend a month building the template that you're going to go deploy, the system will build that for you. And that's where the smarts are. And we'll use machine learning techniques over time to make that model better. So that's kind of where we're digging, and frankly it's a big problem for people. >> So software's your main technology. >> It's 100% a software platform. >> Okay, well, Wikibon Research was going viral at VMworld, and I'll make a note cause I think this is important, cause automation is a key point of your thing: Wikibon showed that about 1.5 billion dollars are going to be taken out of the market as automation takes non-differentiated labor out of the equation, which essentially is stacking servers and racking, stacking and racking. That plays right into your trend. >> That's exactly what we're doing. And what we want to do is-- >> By the way, that value shifts, too, all the parts. >> Yeah, and I think we're trying to focus-- automation isn't new. It's not new in IT. Certainly there's been a lot of focus on it the last 10 years. The question is how do you make the automation smarter?
So you don't have to do the design and say push play. Cause the problem with automation in these really complicated microservices, multi-- the problem is, if you automate it, if you build that template wrong, you can make the same mistake a thousand times in a row. And I've had products in the past where they've worked great as long as that template was correct. Well, what if the template changes? What if I need to put new security policy changes in there? Maestro is going to build it for you. That's what the story is all about. >> That's your product, that's your product name. >> Yep. >> Well, that's what DevOps is all about. Programming the infrastructure, and that's always going to change. So that's really the DevOps ethos. >> Yeah, and that's why if you expand out from the first play, running this test/dev scenario, well, frankly, we'll learn a lot. We'll learn a ton about different patterns that we see, we'll learn a lot about the interop environment that customers want, I want you to add this or add that, the system is going to get smarter to the point where, when we punch it into production, it's going to know a lot more than it does today. >> Well, congratulations on the launch. My final question for you is really the most important one, which is, if I'm a customer, why do I care? What's in it for me? What's the value? Why should I pay attention to Pensa.ai? What's going on, what's the value to me, why should I care, why should I call you? Gimme that bottom line. >> It's about risk reduction. It's about making sure that the things you need to change, you can actually change without them blowing up in your face. And it's also, frankly, the other side of the AI-- >> What, the infrastructure blowing up in my face? Or just apps? >> If you make changes to your environment and you're not sure if they're going to work, but you know, again, take the Equifax thing. If they had made those changes and put them into their environment, it wouldn't be on the front page of every newspaper in the world. Frankly, my information wouldn't have been hacked. >> What would you guys have done if I was Equifax and I knew that potentially I had to move fast? How could you guys solve that problem? >> If you have a problem, upgrade the software today. And what we would've done is give them the ability-- >> Do you think they knew they had a problem? >> Uh... I don't know if they did or not, but you can see this scenario over and over and over again in other companies, where they say, we know we need to do an update, but we're not doing it. We're going to wait for the six months-- >> Cause it breaks stuff. >> Cause we're scared. >> Scared, or that it breaks stuff, or both? >> It breaks stuff and we need to test it, right? So we're going to bring test velocity into that, we're going to bring intelligence to make sure the design is right, right? So that you can do it more quickly. In many different scenarios. >> It's interesting, in the old days patch management was a big thing, that was the on-premise software era, but with DevOps, you need, essentially, test and dev on all the time? >> You do. If you're developing these applications with DevOps in the front end, and you're dropping new versions of 'em in hours rather than quarters, the infrastructure in the back end has to kind of speed up to DevOps speed. And that's where we're going to focus our attention. >> Alright, here's the hard question for you, and we'll end the segment, which is, when does a customer, your potential customer, know they need you?
What's the environment look like? What are the pain points? What are the signals that they need to be calling Pensa.ai? What's the deal there? >> Yeah, I think we're going to talk to the DevOps people that are looking to get their applications out and get them built and deployed-- >> So, need for application pushing, that's one. >> That's one. The other ones are going to be folks inside any IT organization that need better velocity, need to be able to test, and take money, cost, out of it, cause we're going to do it for a lot less than what it costs you to do now. And the third one is the vendor community. Folks out there selling software. VARs, pre-sales people. >> So I guess the question is more specific. What are the signs inside the customer that make them want to call you? Stuff's breaking, upgrades not happening fast enough, I'm trying to get to the heart of it. If I'm a customer-- >> On the IT customer side, it's all about velocity. We need to push our apps faster, we need infrastructure faster, we need to test security policies faster, we're not going fast enough-- >> So basically if you're going slow, not getting the job done, they call you. >> Pretty much, those are our guys. >> Tom, congratulations on the launch, congratulations on the new CEO job, we'll be tracking you guys. Series A funding, congratulations, who's the VC involved? >> We have The Fabric, which was the seed funding source, and then March Capital has been very helpful to us in this A round. >> Great, well they got a great pro in you as CEO. We'll keep in touch. Cube alumni, good friend Tom Joyce here inside theCUBE Studios on the conversation around the launch of the company, Series A funding, new team members, and Pensa.ai. This is theCUBE. Cubed.net is our URL, check it out. Siliconangle.com and wikibon.com is where you can go check out our stuff. I'm John Furrier, thanks for watching. (futuristic music)
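
To make the design-validate-build-deploy loop Joyce describes more concrete, here is a minimal sketch in Python. It is illustrative only: Pensa has not published an API, so the client object, its method names, and the spec format below are hypothetical stand-ins for the kind of declarative flow he outlines, where the "smart machine" builds and checks the template instead of a human.

# Hypothetical sketch of the design -> validate -> build -> deploy -> tear down
# loop described above. The "client" object, its method names, and the spec
# schema are invented for illustration; they are not Pensa's actual API.

ENV_SPEC = {
    "name": "nsx-upgrade-test",
    "platform": "vmware",          # could equally be openstack or a DevOps stack
    "components": [
        {"type": "hypervisor", "product": "esxi", "version": "6.5"},
        {"type": "network",    "product": "nsx"},
        {"type": "security",   "product": "palo-alto-vm"},
    ],
    "workload": {"image": "customer-app:latest", "instances": 3},
}

def run_validation_cycle(client, spec):
    """Validate a declarative environment spec, deploy it, test it, tear it down."""
    # The key step: check that the parts actually work together before anything
    # is provisioned, so a bad template is caught once, not repeated a thousand
    # times by blind automation.
    report = client.validate(spec)
    if not report.ok:
        raise RuntimeError("design rejected: %s" % report.errors)

    env = client.deploy(spec)            # spin up the complete environment
    try:
        return env.run_tests(["upgrade-smoke", "security-policy"])
    finally:
        env.teardown()                   # dispose of it, ready for the next run

The point of the sketch is the ordering: validation happens against a model of how the components interact before deployment, which is what distinguishes this approach from plain provisioning automation that faithfully replays a broken template.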

Published Date : Oct 4 2017


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Tom Joyce | PERSON | 0.99+
Tom | PERSON | 0.99+
Jim Chapel | PERSON | 0.99+
Equifax | ORGANIZATION | 0.99+
John Furrier | PERSON | 0.99+
Meg Whitman | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
Wikibon | ORGANIZATION | 0.99+
100% | QUANTITY | 0.99+
Pensa | ORGANIZATION | 0.99+
Silicon Angle Media, Inc. | ORGANIZATION | 0.99+
Palo Alto, CA | LOCATION | 0.99+
next year | DATE | 0.99+
First | QUANTITY | 0.99+
Sept 2017 | DATE | 0.99+
less than 10 million | QUANTITY | 0.99+
300 pre-sales people | QUANTITY | 0.99+
six months | QUANTITY | 0.99+
four months | QUANTITY | 0.99+
first product | QUANTITY | 0.99+
first version | QUANTITY | 0.99+
both | QUANTITY | 0.99+
Less than 10 million | QUANTITY | 0.99+
first customer | QUANTITY | 0.99+
theCUBE Studios | ORGANIZATION | 0.99+
DevOps | TITLE | 0.99+
Cube | ORGANIZATION | 0.99+
first offering | QUANTITY | 0.99+
today | DATE | 0.99+
theCUBE | ORGANIZATION | 0.98+
Wikibon Research | ORGANIZATION | 0.98+
about 300 pre-sales people | QUANTITY | 0.98+
Siliconangle.com | OTHER | 0.98+
about 1.5 billion dollars | QUANTITY | 0.98+
Series A | OTHER | 0.98+
SkyNet | ORGANIZATION | 0.98+
first | QUANTITY | 0.97+
over 20 | QUANTITY | 0.97+
VMworld | ORGANIZATION | 0.97+
Pensa.ai | ORGANIZATION | 0.97+
one | QUANTITY | 0.97+
second thing | QUANTITY | 0.97+
HP | ORGANIZATION | 0.97+
Third thing | QUANTITY | 0.97+
about 15 hundred dollars a month | QUANTITY | 0.96+
third one | QUANTITY | 0.96+
OpenStack | TITLE | 0.96+
Pensa Lab | ORGANIZATION | 0.95+
VMware | ORGANIZATION | 0.95+
wikibon.com | OTHER | 0.95+
YouTube | ORGANIZATION | 0.94+
Dell Software | ORGANIZATION | 0.94+
almost three years | QUANTITY | 0.93+
first customers | QUANTITY | 0.93+
first thing | QUANTITY | 0.93+
VMware | TITLE | 0.9+
couple of months ago | DATE | 0.88+
Pensa | LOCATION | 0.88+
this afternoon | DATE | 0.86+

Monzy Merza & Haiyan Song, Splunk | Splunk .conf 2017


 

>> Announcer: Live from Washington DC, it's theCUBE, covering .conf2017, brought to you by Splunk. >> Well, good morning, welcome to day two of Splunk .conf2017 here in Washington DC, theCUBE very proud to be here again, for the seventh time I believe this is. John Walls, Dave Vellante. Good morning, sir, how are you doing, David? >> I'm doing well, thank you. >> Did you have a good night? >> Yeah, great night. >> DC, I know your son's here >> Walked around the district a little bit, yeah, it was good. >> It's good to have you here. >> At the party last night upstairs, (John laughs) talked to a few customers, trying to find out what they didn't like about Splunk, and it was not a lot of things. >> That would be a short conversation, I think. Well, we've got a couple of keynote rockstars with us this morning: Haiyan Song, who's the Senior Vice President of Security Markets at Splunk. Haiyan, good to see you again. >> Great to see you too. >> John: Thanks for coming back. And Monzy Merza, who is the Head of Cybersecurity Research at Splunk. >> Thank you for having me. >> John: Monzy, commanding the stage with great acumen today, good job there. >> Monzy: Thank you. >> Yeah, we'll get into that a little bit later. But first off, let's just kind of set the table here a little bit. I know this is a bit of a transformational year for you in terms of security, in how you're building out your portfolio and your services, and so kind of walk us through that. What are you doing, Haiyan, in terms of, I guess, being available, right, for whomever, whenever, wherever they are in their security journey, you might say. >> Journey is the keyword this year, and nerve center is another one that I highlighted at my super session yesterday. So when I reflect on, this is your seventh year, and when I reflect on the last three years, right, we came in and really talked about the enterprise security product in the first year. And the second year we talked about, you know, how UBA adds to the capabilities for better detection and machine learning. We introduced different features. This year we didn't start the conversation on, "Here's a new feature." This year we started the conversation on: you need to build a security nerve center. That's the new defense system. And there's a journey to get there, and our role is to enable you on that journey every step of the way. So it's a portfolio message, and not only for the very advanced customers, who want machine learning, who want to customize the threat models. Also for people who just started, to say, I have the data, and help me get more insight into this, or help me understand how to leverage machine data across domains to really correlate and connect the dots, and do investigations. Or, what are the important things to set up the basic operations. Very, very excited about the ability, in this transformational year, as you mentioned, to bring the full portfolio to our customers. >> So, Monzy, you've said in your keynote today, defenders can succeed. We talked off camera, you're an optimist. And all we need is this nerve center. So to date, has that nerve center been missing, has it been there and people haven't been able to take advantage of it, have the tools been too complicated? I wonder if you could unpack that a little bit?
I think what's happened over the course of many years, as the security ecosystem has matured and evolved, is that there are a lot of expert technologies in a variety of different areas, and it's a matter of bringing those expert technologies together, so that the operations teams can really take advantage of them. And you know, it's one thing to have a capability, but it's another to leverage that capability along with another capability and combine the forces together, and really that's the message, that's Haiyan's message about the nerve center: that we can bring those together. And so when I say the defender has an advantage, I mean that, because I feel that the operations teams, the IT teams, as well as the security teams, have laid out a path, and the attacker cannot escape that path. You have to walk down a certain path to get to something, to achieve or steal or damage whatever it is you're after. So when you have a nerve center, you can bring all the instrumentation that's been placed along those paths to make use of it. So the attacker has to work within that terrain. They cannot escape that terrain. And that's what I mean: the nerve center allows for that to occur. >> Now you guys have talked for a long time about bringing analytics and security, those worlds, together. We've obviously always been a big proponent of that, but spending's just starting to shift, right. They're still spending a lot of money on the perimeter. I guess you have to. We all see the numbers, security investments continue to increase. But where are we today with regard to analytics and being able to proactively both identify and remediate? >> So I just echo what you just said. I'm so pleased to see the industry has started the shift. I think being analytics-driven is really top of mind for people, and using machine learning and automation to help really speed up the detection and even the response are top of mind. We just did a CISO Customer Advisory Board on Monday, and we always ask when we start the meetings, "Tell us your top of mind challenges, tell us your top, you know, two investments, and what's your recommendation for Splunk?" And better, faster response, better, faster detection, and automation and analytics are top of mind for everybody. So for us, this year, extremely, extremely happy to talk about how we're completing that narrative for analytics-driven security. >> Well, on that point, you talk about analytics stories, and filling gaps, putting an entire narrative together so that somebody could loosen up the nuts, and they can see exactly where intrusions occur, what steps could be taken, and so on and so forth. So, I mean, dig a little deeper on that for us, maybe Monzy, you can jump on that, about what this concept of analytics stories is, and then how you're translating that into your customers' workplaces. >> We thought about this for quite some time in terms of drilling down and saying, as analysts and practitioners, what is it that we desire? The security research team at Splunk is composed of people who spent many, many years in the trenches. So what do we want, what did we always want, and what was hard? And instead of trying to approach it from the perspective of, you know, let's just connect the dots, really take an adversarial-model approach to say, "What does an adversary actually do?", and then as a defender, what do I do when I see certain things happening? And I see things on the network, I see things on the endpoint, and that's good, and a lot of people talk about that.
But what do I do next? As the analyst, where do I go, and what would be helpful to me? So we took this concept of saying, let's not call them anything else, and we actually fought over this for quite some time. These are not use cases, because use case has a very different connotation. We wanted stories, because an adversary starts somewhere, the adversary takes some action. The defender may see some of that action, but then the defender carries on and does other things, so we really had this notion of a day in the life, and we wanted to capture that day in the life from the perspective of what's important to their business, and really encapsulate that as a narrative, so that when the analysts and security operations teams get their hands on this stuff, they're not bootstrapping their way through the process. They have a whole story that they can play through, and if it doesn't make sense to them, that's okay, they can modify the story, and then have a complete narrative to understand the threat, and to understand their own actions. >> So we hear the stat a lot about how long it takes for organizations to identify an intrusion. It ranges; I've been seeing, you know, ServiceNow flashing 191, and I've seen it as high as 320. I'm not sure there's clear evidence that that number's compressing. I think it's early days there, but presumably analytics can help compress that number. But then I think about things like, you know, zero-day exploits, and other attack vectors that are decades old now. Can analytics help us solve those problems? Can the technology, which kind of got us into this mess, get us out of the mess? (Monzy and Haiyan laugh) >> That's such a great point. It is the technology that just made our lives so much easier, as you know, for living, and then complicated it so much for security people. I'll give you a definitive yes, right. Analytics are there to help detect early warning signs, and it will help us. We may not be able to change the stats for the whole industry right now, but I'm sure it's changing the stats for a lot of the customers, especially when it comes to remediation. The more readily available the data is for you when you are, sort of, facing an incident, the faster you can get to the root cause and start remediating. We have seen many of our customers talk about how it was going from weeks to days, days to hours, and that includes not just technology, but also process, right? Streamlining process and automating some of the things, and freeing up the people to do the things that they're great at, versus the mundane things, trying to collect the information. So I'm also a glass-half-full person, an optimist, that's why we work together so well, and we really think being data-driven, being analytics-driven, is changing the game. >> What about the technology of the malware? I think it was at a .conf, I think it was 2013, one of your guest speakers gave us an inside look at Stuxnet. Of course, by then it was seven, eight years old, right? But it was fascinating, and you know, you read more about it, and you learn more about it, and it's insidious. Has the technology on the defender side, I guess was my real question, accelerated to keep up with that pace? Where are we at with the bad technology and the good technology? Are they at a balance now, an equilibrium? >> I think it's going to be a constant evolutionary process.
It's like anything else, you know, whether you look at thieves or whether you look at people who are trying to create new, innovative solutions for themselves. I think the key, and this is the reason why I said this morning that defenders can have, I think I said, an unfair advantage, not just an advantage. And the reason for that is some of the things Haiyan talked about, with analytics, and with the availability of technology that can create a nerve center. It's not so much that someone can detect a certain type of threat. It's that we know the low-fidelity sort of perturbations that cause us to fire an alarm, but there's so many of those that we get desensitized. The thing that's missing is, how do I connect something that is very low threshold to another thing that's very low threshold, and sequence those things together, and then say, you know, combined, all of this is a bad thing. And one of my colleagues uses an example, you know: I go to the doctor and I say, you know, "I've got this headache for a long time," and the doctor says, "Don't worry, you don't have a tumor." And it's like, "Okay, great, thank you very much," (Dave laughs) but I still have the headache. >> Still have the headache. >> And so this is why, even in the analytics stories we use, and even in UBA and in enterprise security, we don't use the concept of a false positive. We use the concept of confidence, and we want to raise confidence in a particular situation, which is why the analytics story concept makes sense: because within that story, the confidence keeps rising as you go farther and farther down the chain. >> So it's a confidence, but also married, presumably through analytics, with a degree of risk, right? So I can understand whether that asset is a high-value asset, or John's football pool, or something like that. >> John: Which is going very well right now, by the way. (all laugh) Bring it on, very happy. >> Now you guys have come out with some solutions for ransomware. I tweeted out this morning that I was pleased at .conf that we're talking about analytics, analytics-driven solutions to ransomware, and not just the typical, when we go to these conferences, air-gap yap. Somebody tweeted back to me, said, "Dave, until we see 100% certainty with analytics-driven solutions, we better still have air gaps." So I guess I wanted, if you guys could weigh in on, what should people be thinking about in terms of ransomware, in terms of an end-to-end solution. Can you comment? >> I will add and... So for us, right, even to follow on the last question you had, the advancement in technology is not just algorithms, it's actually the awareness and the mindset to instrument your enterprise, and the biggest information gap in an incident response is: I don't have the data, I don't know what happened. So I think there's been a lot of advancement. We did a war game, you know, a tabletop exercise, and that was one of the biggest takeaways. Oh, we better go back and instrument our enterprise, or agency, so when something does happen, we can trace back, right? So that's number one. So ransomware's the same thing. If you have instrumented your infrastructure, your application stack, and your cloud visibility, you can actually detect some of the anomalies early. It's never going to solve 100%. So security is all about layered defense, right. Adapting and adding more layers, because nobody is really claiming I can be 100%, so you just want to put in different layers, hoping that as attackers sift through, you catch them along the way.
>> I think it's a question of ecosystem, and it really goes back to this notion that different people have instrumented their environments in different ways, they deploy different technologies. How much value can they get out of them? I think that's one vector. The other vector is: what is your risk threshold? Somebody may have absolutely zero tolerance for air gaps. But I would, as a research person, I would like to challenge even that premise. I've been privileged to work in certain environments, and there are some people who have incredible resources, and so it's just a question of, what is the adversary model that you're trying to protect yourself against, and what is the business model for which you're willing to take on that risk? So I don't think there is a single end point; there isn't a single solution for any number of these things. It really just has to match the business operation or business risk posture that you want to accommodate. >> You know what, you're almost touching on a point that I did want to hit you up on before you left, about choice, and you know, it's almost like personal: how much risk am I willing to take on? It's about customization, and providing people different tools. So how much leash do you give people? I mean, do you worry that if you allow people to do too much tinkering, they actually do more harm than good? But how do you factor all that into the kind of services that you're offering? >> I think that ultimately it's up to the customer to decide what's valuable and what's critical for their business. If somebody wants a complete solution from Splunk, we're going to serve those customers. You heard a number of announcements this week, from ES Content updates, to opening up the SDK, you know, with UBA, to the Security Essentials app releases, and all of those different kinds of capabilities. On the top end of it, we have the machine learning toolkit. If you have experts that want to tinker and learn something more, and want to exert their own intuition and energy on a compute problem, we want to provide those capabilities. So it's not about us, it's about the ability for our customers to exert what is important to them, and get a significant advantage in the marketplace for their business. >> I think it's important to point out, too, for our audience: it's not just a technology problem. The security regime in organizations has for years fallen on IT and security practitioners, and we wrote a piece several years ago on Wikibon Research that bad user behavior is going to trump good security every time. And so it's everybody's responsibility. I mean, it sounds like a bromide, but it's so true, and it's really part of the complete solution. You know, I mean, I presume you agree. >> Totally. Going back to the CISO Advisory Board, one of the challenges they pointed out is user accountability. That's one of the CISO's biggest challenges. It's not just technology. It's how can they train the users and make them responsible, and somehow hold them accountable. I thought that was a really interesting insight we didn't talk about before. >> Yeah, you don't want to hear "my bad," but unfortunately you do. Well, we were kind of kidding before we got started, we said, "We've got an hour to chat." It seems like it was just a matter of minutes, and so thank you for taking the time. We could talk an hour, I think. >> Monzy: Oh, easy. >> Fascinating subject. And we thank you both for your time here today, and great show. >> [Haiyan And Monzy] Thank you for having us.
>> Haiyan: It's always a pleasure to be here. >> You bet, all right, thank you Haiyan and Monzy. Back with more of theCUBE here covering .conf2017 live in Washington DC.
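
The "analytics stories" idea from this conversation, chaining individually low-fidelity signals so that confidence rises as more of the story is observed, can be sketched generically. The following Python is not Splunk's implementation or API; the signal names and weights are invented purely to illustrate the confidence-instead-of-false-positive framing Merza describes.

# Generic sketch of sequencing weak signals into one rising confidence score.
# Not Splunk's implementation; names and weights are invented for illustration.

STORY_STEPS = [
    ("phishing_link_clicked", 0.2),
    ("new_service_installed", 0.3),
    ("unusual_admin_logon",   0.4),
    ("bulk_file_encryption",  0.8),   # strong late-stage ransomware signal
]
WEIGHTS = dict(STORY_STEPS)
ORDER = [name for name, _ in STORY_STEPS]

def story_confidence(observed):
    """observed: chronological list of signal names seen for one host or user."""
    # Deduplicate while preserving the order in which signals first appeared.
    seen = list(dict.fromkeys(s for s in observed if s in WEIGHTS))
    miss = 1.0
    for name in seen:
        miss *= (1.0 - WEIGHTS[name])   # noisy-OR style combination
    confidence = 1.0 - miss
    # Sequence matters: signals that appear in the story's order score higher
    # than the same signals appearing scattered.
    story_order = [s for s in ORDER if s in seen]
    return confidence if seen == story_order else 0.7 * confidence

events = ["phishing_link_clicked", "unusual_admin_logon", "bulk_file_encryption"]
print("host-42 confidence: %.2f" % story_confidence(events))   # 0.90, in order

Each signal here would be ignored on its own (the headache without the tumor); it is the combination and the sequence that push the confidence high enough to act on.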

Published Date : Sep 27 2017


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Dave Vellante | PERSON | 0.99+
John Walls | PERSON | 0.99+
Monzy | PERSON | 0.99+
John | PERSON | 0.99+
Monday | DATE | 0.99+
David | PERSON | 0.99+
100% | QUANTITY | 0.99+
Haiyan | PERSON | 0.99+
2013 | DATE | 0.99+
Monzy Merza | PERSON | 0.99+
Washington DC | LOCATION | 0.99+
Haiyan Song | PERSON | 0.99+
This year | DATE | 0.99+
Dave | PERSON | 0.99+
seven | QUANTITY | 0.99+
CISO Advisory Board | ORGANIZATION | 0.99+
Splunk | ORGANIZATION | 0.99+
Wikibon Research | ORGANIZATION | 0.99+
seventh year | QUANTITY | 0.99+
this year | DATE | 0.99+
today | DATE | 0.99+
DC | LOCATION | 0.99+
seventh time | QUANTITY | 0.99+
both | QUANTITY | 0.99+
one | QUANTITY | 0.99+
an hour | QUANTITY | 0.99+
yesterday | DATE | 0.98+
this week | DATE | 0.98+
UBA | ORGANIZATION | 0.97+
Splunk | EVENT | 0.97+
theCUBE | ORGANIZATION | 0.96+
several years ago | DATE | 0.95+
this morning | DATE | 0.95+
CISO | ORGANIZATION | 0.94+
single solution | QUANTITY | 0.94+
second year | QUANTITY | 0.94+
one vector | QUANTITY | 0.94+
first | QUANTITY | 0.94+
UBA | LOCATION | 0.92+
one thing | QUANTITY | 0.9+
last night | DATE | 0.88+
Stuxnet | ORGANIZATION | 0.84+
320 | QUANTITY | 0.84+
zero day | QUANTITY | 0.84+
.conf | ORGANIZATION | 0.84+
.conf2017 | EVENT | 0.83+
first year | QUANTITY | 0.83+
decades | QUANTITY | 0.82+
zero | QUANTITY | 0.81+
eight years old | QUANTITY | 0.79+
day two | QUANTITY | 0.77+
last three years | DATE | 0.75+
two investment | QUANTITY | 0.74+
.conf | OTHER | 0.71+
191 | QUANTITY | 0.61+
ES Content | TITLE | 0.6+
Splunk | OTHER | 0.59+
Splunk | PERSON | 0.57+

Prakash Nanduri, Paxata | BigData NYC 2017


 

>> Announcer: Live from midtown Manhattan, it's theCUBE, covering Big Data New York City 2017. Brought to you by SiliconANGLE Media and its ecosystem sponsors. (upbeat techno music) >> Hey, welcome back, everyone. Here live in New York City, this is a theCUBE special from SiliconANGLE Media. Exclusive coverage of the Big Data world in NYC. We call it Big Data NYC, in conjunction also with Strata Hadoop, Strata Data, Hadoop World, all going on kind of around the corner from our event here on 37th Street in Manhattan. I'm John Furrier, the co-host of theCUBE, with Peter Burris, Head of Research at SiliconANGLE Media and General Manager of Wikibon Research. And our next guest is one of our famous CUBE alumni, Prakash Nanduri, co-founder and CEO of Paxata, who launched his company here on theCUBE at our first inaugural Big Data NYC event in 2013. Great to see you. >> Great to see you, John. >> John: Great to have you back. You've been on every year since, and it's been the lucky charm. You guys have been doing great. If it's not broke, don't fix it, right? And so theCUBE is working with you guys. We love having you on. It's been a pleasure, you as an entrepreneur, launching your company. Really, the entrepreneurial mojo. It's really what it's all about. Getting access to the market, you guys got in there, and you got a position. Give us the update on Paxata. What's happening? >> Awesome, John and Peter. Great to be here again. Every time I come here to New York for Strata, I always look forward to our conversations. And every year we have something exciting and new to share with you. So, if you recall, in 2013, it was a tiny little show, and it was a tiny little company, and we came in with big plans. And in 2013, I said, "You know, John, we're going to completely disrupt the way business consumers and business analysts turn raw data into information and do self-service data preparation." That's what we brought to the market in 2013. Ever since, we have gone on to do something really exciting and new for our customers every year. In '14, we came in with the first Apache Spark-based platform that allowed business analysts to do data preparation at scale, interactively. Every year since; last year we did enterprise grade, and we talked about how Paxata is going to be delivering our self-service data preparation solution in a highly scalable, enterprise-grade deployment world. This year, what's super exciting is that, in addition to the recent announcements we made on Paxata running natively on the Microsoft Azure HDI Spark system, we are truly now the only information platform that allows business consumers to turn data into information in a multi-cloud, hybrid world for our enterprise customers. In the last few years, I came and I talked to you and I told you about the work we're doing and what great things are happening. But this year, in addition to the super-exciting announcements with Microsoft and other exciting announcements that you'll be hearing, you are going to hear directly from one of our key anchor customers, Standard Chartered Bank. A 150-year-old institution operating in over 46 countries. One of the most storied banks in the world, with 87,500 employees. >> John: That's not a start up. >> That's not a start up. (John laughs) >> They probably have a high bar, high bar. They got a lot of data. >> They have lots of data. And they have chosen Paxata as their information fabric. We announced our strategic partnership with them recently, and you know that they are going to be speaking on theCUBE this week.
And what started as a little experiment, just like our experiment in 2013, has actually mushroomed now into Michael Gorriz, and Shameek Kundu, and the entire leadership of Standard Chartered choosing Paxata as the platform that will democratize information in the bank across their 87,500 employees. We are going in a very exciting way, a very fast way, and now delivering real value to the bank. And you can hear all about it on our website-- >> Well, he's coming on theCUBE, so we'll drill down on that, but banks are changing. You talk about a transformation. What is a teller? An Internet of Things device. The watch potentially could be a terminal. So, the Internet of Things of people changes the game. Are the ATMs going to go away and become like broadcast points? >> Prakash: And you're absolutely right. And really what it is about is, it doesn't matter if you're a Standard Chartered Bank, or if you're a pharma company, or if you're the leading healthcare company; what it is is that every one of our customers is really becoming an information-inspired business. And what we are driving our customers to is moving from a world where they're data-driven. I think being data-driven is fine. But what you need to be is information-inspired. And what does that mean? It means that you need to be able to consume data, regardless of format, regardless of source, regardless of where it's coming from, and turn it into information that actually allows you to get insights and make decisions. And that's what Paxata does for you. So, this whole notion of being information-inspired, I don't care if you're a bank, if you're a car company, or if you're a healthcare company today, you need to have-- >> Prakash, for the folks watching that might not know our history, as you launched on theCUBE in 2013 and have been successful every year since. You guys have really been deploying the classic entrepreneurial success formula: be fast, walk the talk, listen to customers, add value. Take a minute quickly just to talk about what you guys do. Just for the folks that don't know you. >> Absolutely. Let's actually give it in a real example, you know, a customer like Standard Chartered. Standard Chartered operates in multiple countries. They have a significant number of lines of business. And whether it's in risk and compliance, whether it is in their marketing department, whether it's in their corporate banking business, what they have to do is, a simple example could be, I want to create a customer list to be able to go and run a marketing campaign. And the customer list in a particular region is not something easy for a bank like Standard Chartered to come up with. They need to be able to pull from multiple sources. They need to be able to clean the data. They need to be able to shape the data to get that list. And if you look at what is really important, the people who understand the data are actually not the folks in IT but the folks in business. So, they need to have a tool and a platform that allows them to pull data from multiple sources, to be able to massage it, to be able to clean it-- >> John: So, you sell to the business person? >> We sell to the business consumer. The business analyst is our consumer. And the people who support them are the chief data officer and the person who runs the Paxata platform on their data lake infrastructure. >> So, IT sets up the data lake and you guys just let the business guys go to town on the data. >> Prakash: Bingo. >> Okay, what's the problem that you solve?
If you can summarize the problem that you solve for the customers, what is it? >> We take data and turn it into information that is clean, that's complete, that's consumable, and that's contextual. The hardest problem in every analytical exercise is actually taking data and cleaning it up and getting it ready for analytics. That's what we do. >> It's the prep work. >> It's the prep work. >> As companies gain experience with Big Data, John, what they need to start doing increasingly is move more of the prep work, or have more of the prep work flow, closer to the analyst. And the reason's actually pretty simple. It's because of that context. Because the analyst knows more about what they're looking for, and is a better evaluator of whether or not they get what they need. Otherwise, you end up in this strange cycle-time problem between people in the back end that are trying to generate the data that they think they want. And so, by making the whole concept of data preparation simpler, more straightforward, you're able to have the people who actually consume the data and need it do a better job of articulating what they need, how they need it, and making it presentable to the work that they're performing. >> Exactly, Peter. What does that say about how roles are starting to merge together? Cause you've got to be at the vanguard of seeing how some of these mature organizations are working. What do you think? Are we seeing roles start to become more aligned? >> Yes, I do think so. So, first and foremost, I think what's happening is there is no such thing as having just one group that's doing data science and another group consuming. I think what you're going to be going into is a world where data and information is all-consuming, and it's everybody's role. Everybody has a role in that. And everybody's going to consume. So, if you look at a business analyst that was spending 80% of their time living in Excel, or working with self-service BI tools like our partners' Tableau and Power BI from Microsoft, and others: what you find is these people today are living in a world where either they have to live in coding-and-scripting hell, or they have to rely on IT to get them the real data. So, for the role of a business analyst or a subject matter expert, first and foremost, the fact that they work with data and they need information, that's a given. There is no business role today where you don't deal with data.
>> So, John, at the beginning of the session you and Jim were talking about what is going to be one of the themes here at the show. And we observed that it used to be that people were talking about setting up the hardware, setting up the clutters, getting Hadoop to work, and Jim talked about going up the stack. Well, this is one of the indicators that, in fact, people were starting to go up the stack because they're starting to worry more about the data, what it can do, the value of how it's going to be used, and how we distribute more of that work so that we get more people using data that's actually good and useful to the business. >> John: And drives value. >> And drives value. >> Absolutely. And if I may, just put a chronological aspect to this. When we launched the company we said the business analyst needs to be in charge of the data and turning the data into something useful. Then right at that time, the world of create data lakes came in thanks to our partners like Cloudera and Hortonworks, and others, and MapR and others. In the recent past, the world of moving from on premise data lakes to hybrid, multicloud data lakes is becoming reality. Our partners at Microsoft, at AWS, and others are having customers come in and build cloud-based data lakes. So, today what you're seeing is on one hand this complete democratization within the business, like at Standard Chartered, where all these business analysts are getting access to data. And on the other hand, from the data infrastructure moving into a hybrid multicloud world. And what you need is a 21st Century information management platform that serves the need of the business and to make that data relevant and information and ready for their consumption. While at the same time we should not forget that enterprises need governance. They need lineage. They need scale. They need to be able to move things around depending on what their business needs are. And that's what Paxata is driving. That's why we're so excited about our partnership with Microsoft, with AWS, with our customer partnerships such as Standard Chartered Bank, rolling this out in an enterprise-- >> This is a democratization that you were referring to with your customers. We see this-- >> Everywhere. >> When you free the data up, good things happen but you don't want to have IT be the constraint, you want to let them enable-- >> Peter: And IT doesn't want to be the constraint. >> They don't. >> This is one of the biggest problems that they have on a daily basis. >> They're happy to let it go free as long as it's in they're mind DevOps-like related, this is cool for them. >> Well, they're happy to let it go with policy and security in place. >> Our customers, our most strategic customers, the folks who are running the data lakes, the folks who are managing the data lakes, they are the first ones that say that we want business to be able to access this data, and to be able to go and make use out of this data in the right way for the bank. And not have us be the impediment, not have us be the roadblock. While at the same time we still need governance. We still need security. We still need all those things that are important for a bank or a large enterprise. That's what Paxata is delivering to the customers. >> John: So, what's next? >> Peter: Oh, I'm sorry. >> So, really quickly. An interesting observation. People talk about data being the new fuel of business. 
That really doesn't work because, as Bill Schmarzo says, it's not the new fuel of business, it's new sunlight of business. And the reason why is because fuel can only be used once. >> Prakash: That's right. >> The whole point of data is that it can be used a lot, in a lot of different ways, and a lot of different contexts. And so, in many respects what we're really trying to facilitate or if someone who runs a data lake when someone in the business asks them, "Well, how do you create value for the business?" The more people, the more users, the more context that they're serving out of that common data, the more valuable the resource that they're administering. So, they want to see more utilization, more contexts, more data being moved out. But again, governance, security have to be in place. >> You bet, you bet. And using that analogy of data, and I've heard this term about data being the new oil, etc. Well, if data is the oil, information is really the refined fuel or sunlight as we like to call it. >> Peter: Yeah. >> John: Well, you're riffing on semantics, but the point is it's not a one trick pony. Data is part of the development, I wrote a blog post in 1997, I mean 2007 that said data's the new development kit. And it was kind of riffing on this notion of the old days >> Prakash: You bet. >> Here's your development kit, SDK, or whatever was how people did things back then Enter the cloud, >> Prakash: That's right. >> And boom, there it is. The data now is in the process of the refinery the developers wanted. The developers want the data libraries. Whatever that means. That's where I see it. And that is the democratization where data is available to be integrated in to apps, into feeds, into ... >> Exactly, and so it brings me to our point about what was the exciting, new product innovation announcement we made today about Intelligent Ingest. You want to be able to access data in the enterprise regardless of where it is, regardless of the cloud where it's sitting, regardless of whether it's on-premise, in the cloud. You don't need to as a business worry about whether that is a JSON file or whether that's an XML file or that's a relational file. That's irrelevant. What you want is, do I have the access to the right data? Can I take that data, can I turn it into something valuable and then can I make a decision out of it? I need to do that fast. At the same time, I need to have the governance and security, all of that. That's at the end of the day the objective that our customers are driving towards. >> Prakash, thanks so much for coming on and being a great member of our community. >> Fantastic. >> You're part of our smart network of great people out there and entrepreneurial journey continues. >> Yes. >> Final question. Just observation. As you pinch yourself and you go down the journey, you guys are walking the talk, adding new products. We're global landscape. You're seeing a lot of new stuff happening. Customers are trying to stay focused. A lot of distractions whether security or data or app development. What's your state of the industry? How do you view the current market, from your perspective and also how the customer might see it from their impact? >> Well, the first thing is that I think in the last four years we have seen significant maturity both on the providers off software technology and solutions, and also amongst the customers. I do think that going forward what is really going to make a difference is one really driving towards business outcomes by leveraging data. 
We've talked about a lot of this over the last few years. What real business outcomes are you delivering? What we are super excited about is when we see our customers, each one of them, actually subscribe to Paxata, we're a SaaS company, they subscribe to Paxata not because they're doing a science experiment but because they're trying to deliver real business value. What is that? Whether that is a risk and compliance solution, which is going to drive towards real cost savings. Or whether that's a top line benefit, because they know what their customer 360 is and how they can go and serve their customers better, or how they can improve supply chains, or how they can optimize their entire efficiency in the company. I think if you take it from that lens, what is going to be important right now is, with lots of new technologies coming in, how is it going to drive towards those top three business drivers that I have today for the next 18 months? >> John: So, that's foundational. >> That's foundational. Those are the building blocks-- >> That's what is happening. Don't jump... If you're a customer, it's great to look at new technologies, etc. There's always innovation projects-- >> R&D, POCs, whatever. Kick the tires. >> But now, if you are really going to talk the talk about saying I'm going to be, call your word, data-driven, information-driven, whatever it is. If you're going to talk the talk, then you better walk the walk by delivering the real kind of tools and capabilities that your business consumers can adopt. And they better adopt that fast. If they're not up and running in 24 hours, something is wrong. >> Peter: Let me ask one question before you close, John. So, your argument, which I agree with, suggests that one of the big changes in the next 18 months, three years, as this whole thing matures and gets more consistent in its application of the value that it generates, is that we're going to see an explosion in the number of users of these types of tools. >> Prakash: Yes, yes. >> Correct? >> Prakash: Absolutely. >> 2X, 3X, 5X? What do you think? >> I think we're just at the cusp. I think it's going to grow at least 10X and beyond. >> Peter: In the next two years? >> In the next, I would give it the next three to five years. >> Peter: Three to five years? >> Yes. And we're on the journey. We're just at the tip of the high curve taking off. That's what I feel. >> Yeah, and there's going to be a lot more consolidation. You're going to start to see people who are winning. It's becoming clear as the fog lifts. It's a cloud game, a scale game. It's democratization, community-driven. It's open source software. Just solve problems, outcomes. I think outcomes are going to come much faster. I think outcomes as a service will be a model that we'll probably be talking about in the future. You know, real time outcomes. Not eight month projects or year-long projects. >> Certainly, we started writing research about outcome-based management. >> Right. >> Wikibon Research... Prakash, one more thing? >> I also just want to say that in addition to this business outcome thing, I think in the last five years I've seen a lot of shift in our customers' world, where the initial excitement was about analytics, predictive, AI, machine learning to get to outcomes. They've all come to the reality that none of that is possible if you're not able to first get a grip on your data, and then be able to turn that data into something meaningful that can be analyzed. So, that is also a major shift.
That's why you're seeing the growth we're seeing-- >> John: 'Cause it's really hard. >> Prakash: It's really hard. >> I mean, it's a cultural mindset. You have the personnel. It's an operational model. I mean, this is not like, throw some pixie dust on it and it magically happens. >> That's why I say, before you go into any kind of BI, analytics, or AI initiative, stop, think about your information management strategy. Think about how you're going to democratize information. Think about how you're going to get governance. Think about how you're going to enable your business to turn data into information. >> Remember, you can't do AI without IA? You can't do AI without information architecture. >> There you go. That's a great point. >> And I think this all points to why Wikibon's research, all the analysts, got it right with true private cloud, because people have got to take care of their business here to have a foundation for the future. And you can't just jump to the future. There's too much at stake to just come in and scale; too many cracks in the foundation. You've got to take your medicine now. And do the homework and lay down a solid foundation. >> You bet. >> All right, Prakash. Great to have you on theCUBE. Again, congratulations. And again, it's great for us. I totally have a great vibe when I see you. Thinking about how you launched on theCUBE in 2013, and how far you continue to climb. Congratulations. >> Thank you so much, John. Thanks, Peter. That was fantastic. >> All right, live coverage continuing day one of three days. It's going to be a great week here in New York City. Weather's perfect and all the players are in town for Big Data NYC. I'm John Furrier with Peter Burris. Be back with more after this short break. (upbeat techno music)
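The Intelligent Ingest idea Prakash described in this segment, being able to pull in a JSON, XML, or relational source without the business user caring which it is, can be made concrete with a small sketch. This is purely illustrative and not Paxata's implementation; the file names are invented, and the pandas-based approach is just one assumed way to land every format in a single analyzable shape.

```python
# Minimal sketch of format-agnostic ingestion: whatever the source format,
# the analyst ends up with one tabular, analyzable structure.
# Hypothetical file paths; pandas is only one of many ways to do this.
import pandas as pd

READERS = {
    ".json": pd.read_json,
    ".csv": pd.read_csv,       # stand-in for a "relational" extract
    ".xml": pd.read_xml,       # available in pandas >= 1.3
    ".parquet": pd.read_parquet,
}

def ingest(path: str) -> pd.DataFrame:
    """Load any supported source into a common DataFrame."""
    suffix = path[path.rfind("."):].lower()
    try:
        reader = READERS[suffix]
    except KeyError:
        raise ValueError(f"unsupported source format: {suffix}")
    return reader(path)

# Usage: the caller never cares which format the data arrived in.
# df = ingest("customers.json")
# df = ingest("transactions.csv")
```

The point of the sketch is the dispatch table: the format decision happens once, in software, so governance and lineage can be layered on top of a single ingestion path rather than per-format code.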

Published Date : Sep 27 2017

SUMMARY :

Brought to you by SiliconANGLE Media I'm John Furrier, the co-host of theCUBE with Peter Burris, and it's been the lucky charm. In the last few years, I came and I talked to you That's not a start up. They got a lot of data. and Shameek Kundu, and the entire leadership Are the ATMs going to go away and turn it into information that actually allows you Take a minute quickly just to talk about what you guys do. And the customer list in a particular region and the person who runs the Paxata platform and you guys just let the business guys and that's contextual. is move more of the prep work or have more of the prep work are starting to merge together? And everybody's going to consume. to turn that data into something that is regarded to be able to really drive towards a world And that's what Paxata is doing. So, John, at the beginning of the session of the business and to make that data relevant This is a democratization that you were referring to This is one of the biggest problems that they have They're happy to let it go free as long as Well, they're happy to let it go with policy and to be able to go and make use out of this data And the reason why is because fuel can only be used once. out of that common data, the more valuable Well, if data is the oil, I mean 2007 that said data's the new development kit. And that is the democratization At the same time, I need to have the governance and being a great member of our community. and entrepreneurial journey continues. How do you view the current market, and also amongst the customers. Those are the building blocks-- it's great to look at new technologies, etc. Kick the tires. the real kind of tools and capabilities in it's application of the value that it generates, I think is going to grow up at least 10X and beyond. We're just at the tip of Yeah, and there's going to be a lot more consolidation. Certainly, we started writing research Prakash, one more thing? and then be able to turn that data into something meaningful You have the personnel. to turn data into information. Remember, you can't do AI with IA? There you go. And I think this all points to Great to have you on theCUBE. Thank you so much, John. It's going to be a great week here in New York City.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Peter Burris | PERSON | 0.99+
John | PERSON | 0.99+
Jim | PERSON | 0.99+
Microsoft | ORGANIZATION | 0.99+
2013 | DATE | 0.99+
Peter | PERSON | 0.99+
Prakash | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
John Furrier | PERSON | 0.99+
Prakash Nanduri | PERSON | 0.99+
Bill Schmarzo | PERSON | 0.99+
1997 | DATE | 0.99+
New York | LOCATION | 0.99+
Three | QUANTITY | 0.99+
80% | QUANTITY | 0.99+
Michael Gorriz | PERSON | 0.99+
Standard Chartered Bank | ORGANIZATION | 0.99+
New York City | LOCATION | 0.99+
2007 | DATE | 0.99+
Hortonworks | ORGANIZATION | 0.99+
87,500 employees | QUANTITY | 0.99+
Paxata | ORGANIZATION | 0.99+
NYC | LOCATION | 0.99+
last year | DATE | 0.99+
37th Street | LOCATION | 0.99+
SAS | ORGANIZATION | 0.99+
WikiBon Research | ORGANIZATION | 0.99+
five years | QUANTITY | 0.99+
Excel | TITLE | 0.99+
24 hours | QUANTITY | 0.99+
One | QUANTITY | 0.99+
this year | DATE | 0.99+
SiliconANGLE Media | ORGANIZATION | 0.99+
This year | DATE | 0.99+
21st Century | DATE | 0.99+
one | QUANTITY | 0.99+
eight month | QUANTITY | 0.99+
one question | QUANTITY | 0.99+
four years years ago | DATE | 0.99+
3X | QUANTITY | 0.99+
5X | QUANTITY | 0.99+
first | QUANTITY | 0.99+
three years | QUANTITY | 0.99+

Tim Smith, AppNexus | BigData NYC 2017


 

>> Announcer: Live, from Midtown Manhattan, it's theCUBE. Covering Big Data, New York City, 2017. Brought to you by SiliconANGLE Media and its ecosystem sponsors. >> Okay welcome back, everyone. Live in Manhattan, New York City, in Hell's Kitchen, this is theCUBE's special event, our annual CUBE-Wikibon Research Big Data event in Manhattan. Alongside Strata Hadoop, formerly Hadoop World, now called Strata Data, as the world continues. This is our annual event; it's our fifth year here, sixth overall, wanted to kind of move from uptown. I'm John Furrier, the co-host of theCUBE, with Peter Burris, Head of Research at SiliconANGLE and GM of Wikibon Research. Our next guest is Tim Smith, who's the SVP of technical operations at AppNexus; technical operations for large scale is an understatement. But before we get going; Tim, just talk about AppNexus as a company, what you guys do, what's the core business? >> Sure, AppNexus is the second largest digital advertising marketplace after Google. We're an internet technology company; we harness data and machine learning to power the companies that comprise the open internet. We began by building a powerful technology platform, in which we embedded core capabilities, tools and features. With me so far? >> Yeah, we got it. >> Okay, on top of that platform, we built a core suite of cloud-based enterprise products that enable the buying and selling of digital advertising, and a scale-transparent and low-cost marketplace where other companies can transact; either using our enterprise products, or those offered by other companies. If you want to hear a little about the daily peaks, peak feeds and speeds, it is Strata, we should probably talk about that. We do about 11.8 billion impressions transacted on a daily basis. Each of those is a real-time auction conducted in a fraction of a second, well under half a second. We see about 225 billion impressions per day, and we handle about 5 million queries per second at peak load. We produce about 150 terabytes of data each day, and we move about 400 gigabits into and out of the internet at peak; all those numbers are daily peaks. Makes sense? >> Yep. >> Okay, so by way of comparison, which might be useful for people, I believe the NYSE currently does roughly 2 million trades per day. So if we round that up to 3 million trades a day and assume the NYSE were to conduct that volume every single day of the year, 7 days a week, 365 days a year, that'd be about a billion trades a year. Similarly, I believe Visa did about 28-and-a-half billion transactions in their fiscal third quarter. I'll round that up to 30 billion, and average it out to about 333 million transactions per day and annualize it to about 4 billion transactions per year. Little bit of math, but as I mentioned, AppNexus does in excess of 10 billion transactions per day. And so it seems reasonable to say that AppNexus does roughly 10 times the transaction volume in one day that the NYSE does in a year. And similarly, it seems reasonable to say that AppNexus daily does more than two times the transaction volume that Visa does in a year. Obviously, these are all just very rough numbers based on publicly available information about the NYSE and Visa, and both the NYSE and Visa do far, far more volume than AppNexus when measured in terms of dollars.
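Tim's NYSE comparison is easy to sanity-check. Here is a minimal worked version of the arithmetic in Python, using the round numbers he quotes; these are the speaker's approximations, not audited exchange figures.

```python
# Back-of-the-envelope check of the NYSE comparison above, using the
# speaker's round numbers (approximations, not audited figures).
nyse_trades_per_day = 3_000_000            # ~2M/day, rounded up to 3M
nyse_trades_per_year = nyse_trades_per_day * 365   # ~1.1 billion/year

appnexus_txns_per_day = 10_000_000_000     # "in excess of 10 billion" daily

ratio = appnexus_txns_per_day / nyse_trades_per_year
print(f"NYSE, annualized: {nyse_trades_per_year:,} trades")
print(f"AppNexus one day vs. NYSE one year: {ratio:.1f}x")  # ~9x, i.e. "roughly 10 times"
```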
So given our volumes, it's imperative that AppNexus does each transaction with the maximum efficiency and lowest reasonably possible cost, and that is one of the most challenging aspects of my job. >> So thanks for spending the time to give the overview. There's a lot of data; I mean 10 billion a day is massive volume. I mean the internet, and you see the scale, is insane. We're in a new era right now of web-scale. We've seen it with Facebook, and it's enormous. It's only going to get bigger, right? So in online ad tech, you guys are essentially doing like a Google model, that's not everything Google does, but it's still huge numbers. Then you include Microsoft and everybody else. Really heavy lifting, IT-like situation. What's the environment like? And just talk about, you know, what's it like for you guys. Because you've got a lot of ops, I mean in terms of DevOps. You can't break anything, because at 10 billion transactions, or near that, anything that breaks has a significant impact. So you have to have everything buttoned-up super tight, yet you've got to innovate and grow with the future growth. What's the IT environment like? >> It's interesting. We have about 8,000 servers spread across about seven data centers on three continents, and we run, as you mentioned, around the clock. There's no closing bell; downtime is not acceptable. So when you look at our environment, you're talking about four major categories of server complexes. We have real-time processing, which is the actual ad serving. We have a data pipeline, which is what we call our big data environment. We also have a client-facing environment and an infrastructure environment. So we use a lot of different tools and applications, but I think the most relevant ones to this discussion are Hadoop and its friends HDFS, Hive, and Spark. And then we use the Vertica Analytics Platform. Together, Hadoop and its friends, and Vertica, comprise our entire data pipeline. They're both very disk-intensive. They're cluster-based applications, and it's a lot of work to keep them up and running. >> So what are some of those challenges? Just explain a little bit, because you also have a lot of opportunity. I mean, it's money flowing through the air, basically; digital air, if you will. I mean, they've got a lot of stuff happening. Take us through the challenges. >> You know, our biggest apps are all clustered. And all of our clusters are built with commodity servers, just like a lot of other environments. The big data app clusters traditionally have had internal disks, while almost all of our other servers are very light on disk. One of the biggest challenges is, since the server is the fundamental building block of a cluster, then regardless of whether you need more compute or more storage, you always have to add more servers to get it. That really limits flexibility and creates a lot of inefficiencies, and I really, really am obsessive about reducing and eliminating inefficiencies. So, with me so far? >> Yep. >> Great. The inefficiencies result from two major factors. First, not all workloads require the same ratio of compute to storage. Some workloads are more compute-intensive and less dependent on storage, while other workloads require a lot more storage. So we have to use standard server configurations, and as a result, we wind up with underutilized compute and storage. This is undesirable, it's inefficient, yet given our scale, we have to use standardized configurations. So that's the first big challenge.
The second is the compute-to-disk ratio. It's generally fixed when you buy the servers. Yes, we can certainly add more disks in the field, but that's labor intensive, it's complicated from a logistics and asset management standpoint, and you're fundamentally limited by the number of disk slots in the server. So now you're right back into the trap of more storage requiring more servers, regardless of whether you need more compute or not. And then you compound the inefficiencies. >> Couldn't you just move the resources, unused resources, from one cluster to the other? >> I've been asked that a lot; and no, it's just not that simple. Each application cluster becomes a silo due to its configuration of storage and compute. This means you just can't move servers between clusters, because the clusters are optimized for their workloads, and the fact that you can't move resources from one cluster to another creates more inefficiencies. And then they're compounded over time, since workloads change and the ideal ratio of compute-to-storage changes. And the end result is unused resources trapped in silos, and configurations that are no longer optimized for your workload. And there's only really one solution that we've been able to find. And to paraphrase an orator far, far more talented than I am, namely Ronald Reagan: we need to open this gate, tear down these silos. The silos just have to go away. They fundamentally limit flexibility and efficiency. >> What were some of the other issues caused by using servers with internal drives? >> You have more maintenance, and you've got to deal with the logistics. But the biggest problem is that servers and storage have significantly different life cycles. Servers typically have a three-year life cycle before they're obsolete. Storage is typically four to six years; you can sometimes stretch that a little further with the storage. Since the storage sits inside servers that are replaced every three years, we end up replacing storage before the end of its effective lifetime; that's inefficient. Further, since the storage is inside the servers, we have to do massive data migrations when we replace servers. Migrations are time consuming, they're logistically difficult, and they're high risk. >> So how did DriveScale help you guys? Because you guys certainly have a challenging environment, you laid out the story, and we appreciate that. How did DriveScale help you with the challenges? >> Well, what we really wanted to do was disaggregate storage from servers, and DriveScale enables us to do that. Disaggregating resources is a new term in the industry, but I think a lot of people are focusing on it. I can explain it if you think that would make sense. >> What do you mean by disaggregating resources? Can you explain that, and how it works? >> Sure, so instead of buying servers with internal drives, we now buy diskless servers with JBODs. And DriveScale lets us easily compose servers with whatever amount of disk storage we need, from the server resource pool and the disk resource pool; and they're separate pools. This means we have the right balance of compute and storage for each workload, and we can easily adjust it over time. And all of this is done via software, so it's easy to do with a GUI or, in our case, at our scale, scripting. And it's done on demand, and it's much more efficient. >> How does it help you with the underutilized resource challenge you mentioned earlier?
>> Well, since we can add and remove resources from each cluster, we can manage exactly how much compute power and storage is deployed for each workload. Since this is all done via software, it can be done quickly and easily. We don't have to send a technician into a data center to physically swap drives, add drives, or move drives. It's all done via software, and it's very, very efficient. >> Can you move resources between silos? >> Well, yes and no. First off, our goal is no more silos. That said, we still have clusters, and once we completely migrate to DriveScale, all of our compute and storage resources will be consolidated into just a few common pools. And disk storage will no longer differentiate pools; thus, we have fewer pools. What's more, with fewer pools we can use the resources in each pool for more workloads. And when our needs change, and they always do, we can reallocate resources as needed. >> What about the life cycle management challenge? How do you guys address that? >> Well, that's addressed with DriveScale. The compute and the storage are now disaggregated, or separated, into diskless servers and JBODs, so we can upgrade one without touching the other. When we want to upgrade servers to take advantage of new processors or new memory architectures, we just replace the servers, re-combine the disks with the new servers, and we're back up and operating. It saves the cost of buying new disks when we don't need to, and it also simplifies logistics and reduces risk, as we no longer have to run the old plant and the new plant concurrently and do a complicated data migration. >> What about qualifying server and storage vendors? Do you still do that? Or how's that impacted-- >> We actually don't have to do it. We're still using the same server vendor. We've used Dell for many, many years, and we continue to use them. We are using them for storage as well, and there was no real work; we just had to add DriveScale into the mix. >> What's it like working with DriveScale? >> They're really wonderful to work with. They have a really seasoned team. They were at Sun Microsystems and Cisco; they built some of the really foundational products that changed the internet, that the internet was built on. They're really talented, they're really bright, and they're really focused on customer success. >> Great story, thanks for sharing that. My final question for you is, you guys have a very big, awesome environment, you've got a lot of scale there. It's great for a startup to get into an environment like this, because one, they get access to the data, and they work with a good team like you have. What's it like working with a startup? >> You know, it's always challenging at first; too many things to do. >> They've got talented guys. Most of the startups, those early-day startups, they've got all their A players out there. >> They have their A players, and we've been very pleased working with them. We're dealing with some of the top talent in the industry, the people that created the industry. They have a proven track record. We really don't have any concerns; we know they're committed to our success, and they have a great team and great investors. >> A final, final question. For your friends out there watching, and other practitioners who are trying to run things at scale with the cloud: what's your advice to them? You've been operating at scale, with billions of transactions, I mean huge; and it's only going to get bigger. Put your IT-friendly advice hat on.
What's the mindset of operators out there, technical ops, as DevOps comes in, seeing a lot of that? What do people need to be thinking about to run at scale? >> There's no magic silver bullet. There are no magic answers. The public cloud is very helpful in a lot of ways, but you really have to think hard about your economics, and you have to think about your scale. You just have to be sure that you're going into each decision knowing that you've looked at the costs and the benefits, the performance, the risks, and you don't expect there to be simple answers. >> Yeah, there's no magic beans, as they say. You've got to make it work for the business. >> No magic beans, I wish there were. >> Tim, thanks so much for the story. Appreciate the commentary. Live coverage at Big Data NYC, it's theCUBE. Be back with more after this short break. (upbeat techno music)
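The composition model Tim describes in this interview, diskless servers plus shared JBOD drive pools bound together in software, can be sketched as a simple allocator. This is an illustrative toy only; the class and method names are invented for the example, and the real DriveScale product does this at the hardware and fabric level rather than in a Python object model.

```python
# Illustrative model of composing servers from separate compute and disk
# pools, in the spirit of the disaggregation described above. All names
# are hypothetical; the real product operates at the fabric level.
from dataclasses import dataclass, field

@dataclass
class ComposedServer:
    node_id: str
    disks: list = field(default_factory=list)

class ResourcePools:
    def __init__(self, nodes, disks):
        self.free_nodes = list(nodes)   # diskless compute nodes
        self.free_disks = list(disks)   # drives sitting in JBODs

    def compose(self, n_disks):
        """Bind a diskless node to n_disks drives, entirely in software."""
        if not self.free_nodes or len(self.free_disks) < n_disks:
            raise RuntimeError("pool exhausted")
        server = ComposedServer(self.free_nodes.pop())
        server.disks = [self.free_disks.pop() for _ in range(n_disks)]
        return server

    def reclaim(self, server):
        """Return resources to the pools when a workload's ideal
        compute-to-storage ratio changes; no technician required."""
        self.free_disks.extend(server.disks)
        self.free_nodes.append(server.node_id)

# Usage: a storage-heavy node for HDFS, a lighter one for compute.
pools = ResourcePools(nodes=["n1", "n2"], disks=[f"d{i}" for i in range(24)])
hdfs_node = pools.compose(n_disks=12)
compute_node = pools.compose(n_disks=2)
```

The design point the sketch captures is that the compute-to-storage ratio is decided per workload at composition time, not fixed at server purchase time, which is what eliminates the stranded-resource problem described in the interview.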

Published Date : Sep 27 2017

SUMMARY :

Brought to you by SiliconANGLE Media and GM of Wikibon Research. Sure, AppNexus is the second largest of the internet at peak, all those numbers are daily peaks. and that is one of the most challenging aspects of my job. I mean the internet, and you see the scale, is insane. and we run, as you mentioned, around the clock. because you also have a lot of opportunity. One of the biggest challenges is, The second is the compute to disk ratio. and the fact that you can't move resources Further, since the storage is inside the servers, Because you guys certainly have a challenging environment, I can explain it if you think that would make sense. and we can easily adjust it over time. We don't have to send a technician into a data center and once we completely migrate to DriveScale, and the new plant concurrently, We actually don't have to do it. that changed the internet, that the internet was built on. you guys have a very big, awesome environment, You know it's always challenging at first; Most of the startups, those early day startups, that created the industry. What's the mindset of operators out there, and you don't expect there to be simple answers. You've got to make it work for the business. Tim, thanks so much for the story.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
NYSE | ORGANIZATION | 0.99+
Cisco | ORGANIZATION | 0.99+
Peter Burris | PERSON | 0.99+
Microsoft | ORGANIZATION | 0.99+
John Furrier | PERSON | 0.99+
Sun Microsystems | ORGANIZATION | 0.99+
Tim Smith | PERSON | 0.99+
four | QUANTITY | 0.99+
Dell | ORGANIZATION | 0.99+
Manhattan | LOCATION | 0.99+
AppNexus | ORGANIZATION | 0.99+
SiliconANGLE | ORGANIZATION | 0.99+
Tim | PERSON | 0.99+
Ronald Reagan | PERSON | 0.99+
10 times | QUANTITY | 0.99+
Visa | ORGANIZATION | 0.99+
three year | QUANTITY | 0.99+
one day | QUANTITY | 0.99+
First | QUANTITY | 0.99+
fifth year | QUANTITY | 0.99+
second | QUANTITY | 0.99+
Google | ORGANIZATION | 0.99+
each workload | QUANTITY | 0.99+
One | QUANTITY | 0.99+
each cluster | QUANTITY | 0.99+
google | ORGANIZATION | 0.99+
Wikibon Research | ORGANIZATION | 0.99+
sixth | QUANTITY | 0.99+
six years | QUANTITY | 0.99+
one | QUANTITY | 0.99+
each pool | QUANTITY | 0.99+
Midtown Manhattan | LOCATION | 0.99+
fiscal third quarter | DATE | 0.99+
Each | QUANTITY | 0.99+
7 days a week | QUANTITY | 0.99+
one solution | QUANTITY | 0.99+
each transaction | QUANTITY | 0.98+
one cluster | QUANTITY | 0.98+
365 days a year | QUANTITY | 0.98+
Facebook | ORGANIZATION | 0.98+
each day | QUANTITY | 0.98+
a year | QUANTITY | 0.98+
10 billion a day | QUANTITY | 0.98+
Hell's Kitchen | LOCATION | 0.98+
three continents | QUANTITY | 0.98+
both | QUANTITY | 0.98+
about 28-and-a-half billion transactions | QUANTITY | 0.98+
about 150 terabytes | QUANTITY | 0.97+
Manhattan, New York City | LOCATION | 0.97+
more than two times | QUANTITY | 0.97+
Big Data | ORGANIZATION | 0.97+
New York City | LOCATION | 0.97+
two major factors | QUANTITY | 0.97+
about 11.8 billion impressions | QUANTITY | 0.96+
about 8,000 servers | QUANTITY | 0.96+
about 400 gigabits | QUANTITY | 0.96+
Each application cluster | QUANTITY | 0.96+
billions | QUANTITY | 0.96+
up to 30 billion | QUANTITY | 0.96+
NYC | LOCATION | 0.95+
under half a second | QUANTITY | 0.94+
Strata Data | EVENT | 0.93+
each decision | QUANTITY | 0.92+
SiliconANGLE Media | ORGANIZATION | 0.92+
2017 | DATE | 0.91+
Vertica | ORGANIZATION | 0.91+
about 4 billion transactions per year | QUANTITY | 0.9+
Spark | TITLE | 0.9+
theCUBE | ORGANIZATION | 0.9+
about a billion trades a year | QUANTITY | 0.9+
up to 3 million trades a day | QUANTITY | 0.9+
10 billion transaction | QUANTITY | 0.88+
DriveScale | ORGANIZATION | 0.88+
about 333 million transactions per day | QUANTITY | 0.87+
Hive | TITLE | 0.87+
HDFS | TITLE | 0.87+
CUBE-Wikibon Research Big Data | EVENT | 0.86+
DriveScale | TITLE | 0.86+
10 billion transactions per day | QUANTITY | 0.86+
GM | PERSON | 0.83+
2 million trades per day | QUANTITY | 0.82+

Jagane Sundar, WANdisco | BigData NYC 2017


 

>> Announcer: Live from midtown Manhattan, it's theCUBE, covering BigData New York City 2017, brought to you by SiliconANGLE Media and its ecosystem sponsors. >> Okay welcome back everyone, here live in New York City. This is theCUBE's special presentation of our annual event with theCUBE and Wikibon Research called BigData NYC; it's our own event that we have every year, celebrating what's going on in the big data world now. It's evolving to all data, cloud applications, AI, you name it, it's happening. In the enterprise, the impact is huge; for developers, the impact is huge. I'm John Furrier, cohost of theCUBE, with Peter Burris, Head of Research, SiliconANGLE Media and General Manager of Wikibon Research. Our next guest is Jagane Sundar, who's the CTO of WANdisco, a CUBE alumni; great to see you again, as usual, here on theCUBE. >> Thank you John, thank you Peter, it's great to be back on theCUBE. >> So we've been talking big data for many years, certainly with you guys, and it's been a great evolution. I don't want to get into the whole backstory and history, we covered that before, but right now is a really, really important time. We see the hurricanes come through, we see the floods in Texas, we've seen Florida, and Puerto Rico now in the main conversation. You're seeing it, you're seeing disasters happen. Disaster recovery's been the low-hanging fruit for you guys, and we talked about this when New York City got flooded years and years ago. This is a huge issue for IT, because they have to have disaster recovery. But now it's moving beyond just disaster recovery. It's cloud. What's the update from WANdisco? You guys have a unique perspective on this. >> Yes, absolutely. So we have capabilities to replicate between the cloud and Hadoop, multi data centers, across geos, so disasters are not a problem for us. And we have some unique technologies we use. One of the things we do is we can replicate in an active-active mode between different cloud vendors, between cloud and on-prem Hadoop, and we are the only game in town. Nobody else can do that. >> So okay, let me just stop right there. When you say the only game in town, I'm a little skeptical here. Are you saying that nobody does active-active replication at all? >> That is exactly what I'm saying. We had some wonderful announcements from Hortonworks; they have a great product called the Dataplane. But if you dig deep, you'll find that it's actually an active-passive architecture, because to do active-active, you need this capability called the Paxos algorithm for resolving conflict. That's a very hard algorithm to implement. We have over 10 years' experience in that. That's what gives us our ability to do this active-active replication, between clouds, between on-prem and cloud. >> All right, so just to take that a step further, I know we're having a CTO conversation, but the classic cliche is skate to where the puck is going to be. So you kind of didn't just decide one morning you're going to be the active-active for cloud. You kind of backed into this. You know, the world spun in your direction, the puck came to you guys. Is that a fair statement? >> That is a very fair statement. We've always known there's tremendous value in this technology we own, and with the global infrastructure trends, we knew that this was coming. It wasn't called the cloud when we started out, but that's exactly what it is now, and we're benefiting from it. >> And the cloud is just a data center; it's just that you don't own it.
(mumbles) Peter, what's your reaction to this? Because when he says only game in town, it implies some scarcity. >> Well, WANdisco has a patent, and it actually is very interesting technology, if I can summarize very quickly. You do continuous replication based on writes that are performed against the database, so that you can have two writers and two separate databases, and you guarantee that they will be synchronized at some point in time, because you guarantee the writing of the logs and the messaging to both locations >> Absolutely. >> in order, which is a big issue. You guys put a stamp on the stuff, and it actually writes to the different locations with order guaranteed, and that's not the way most replication software works. >> Yes, that's exactly right. That's very hard to do, and that's the only way for you to allow your clients in different data centers to write to the same data store, whether it's a database, a Hadoop folder, or a bucket in a cloud object store; it doesn't matter. The core fact remains, the Paxos algorithm is the only way for you to do active-active replication, and ours is the only Paxos implementation that can work over the >> John: And that's patented by you guys? >> Yes, it's patented. >> And so for someone to replicate that, they'd have to essentially reverse engineer it and have a little twist on it to get around the patents. Are you licensing the technology, or are you guys hoarding it for yourselves? >> We have different ways of engaging with partners. We are very reasonable with that, and we work with several powerful partners >> So you partner with the technology. >> Yes. >> But the key thing, John, in answer to your question, is that it's unassailable. I mean, there's no argument: as companies move more towards a digital way of doing things, largely driven by what customers want, your data becomes more of an asset. As your data becomes more of an asset, you make money by using that data in more places, more applications, and more times. That is possible with data, but the problem is you end up with consistency issues, and for certain applications it's not an issue; if you're basically reading data, it's not an issue. But the minute that you're trying to write on behalf of a particular business event or a particular value proposition, now you have a challenge; you are limited in how you can do it unless you have this kind of technology. And so this notion of continuous replication, in a world that's going to become increasingly dependent upon data, data that is increasingly distributed, data that you want to ensure has common governance and policy in place, means technologies like the ones WANdisco provides are going to be increasingly important to the overall way that a business organizes itself, institutes its work, and makes sure it takes care of its data assets. >> Okay, so my next question then, thanks for the clarification, it's good input there, and thanks for summarizing it like that, 'cause I couldn't have done that. But when we last talked, I was always enamored by the fact that you guys have the data center replication thing down. I always saw that as a great thing for you guys. Okay, I get that, that's an on-premise situation, you have active-active, good for disaster recovery, a lot of use cases; people should be beating down your door 'cause you have a better mousetrap, I get that. Now how does that translate to the cloud? So take me through why the cloud now fits nicely with that same paradigm.
>> So, I mean, these are industry trends, right. What we've found is that the cloud object stores are very, very cost effective and efficient, so customers are moving towards that. They're using their Hadoop applications but on cloud object stores. Now it's trivial for us to add plugins that enable us to replicate between a cloud object store on one side, and a Hadoop on the other side. It could also be another cloud object store from a different cloud provider on the other side. Once you have that capability, now customers are freed from lock-in from either a cloud vendor or a Hadoop vendor, and they love that, they're looking at it as another way to leverage their data assets. And we enable them to do that without fear of lock-in from any of these vendors. >> So on the cloud side, the regions have always been a big thing. So we've heard Amazon have a region down here, and there was fix it. We saw at VMworld push their VMware solution to only one western region. What's the geo landscape look like in the cloud? Does that relate to anything in your tech? >> So yes, it does relate, and one of the things that people forget is that when you create an Amazon S3 bucket, for example, you specify a region. Well, but this is the cloud, isn't it worldwide? Turns out that object store actually resides in one region, and you can use some shaky technologies like cross-region replication to eventually get the data to the other region. >> Peter: Which just boosts the prices you pay. >> Yes, not just boost the price. >> Well they're trying to save price but then they're exposed on reliability. >> Reliability, exactly. You don't know when the data's going to be there, there are no guarantees. What we offer is, take your cloud storage, but we'll guarantee that we can replicate it in a synchronous fashion to another region. Could be the same provider, could be another provider. That gives tremendous benefits to the customers. >> So you actually have a guarantee when you go to customers, say with an SLA guarantee? Do you back it up with like money back, what's the guarantee? >> So the guarantees are, you know we are willing to back it up with contracts and such like, and our customers put us through rigorous testing procedures, naturally. But we stand up to every one of those. We can scale and maintain the consistency guarantees that they need for modern businesses. >> Okay, so take me through the benefits. Who wants this? Because you can almost get kind of sucked into the complexities of it, and the nuances of cloud and everything as Peter laid out, it's pretty complex even as he simplified it. Who buys this? (laughs) I mean, who's the guy, is it the IT department, is it the ops guy, is it the facilities, who... >> So we sell to the IT departments, and they absolutely love the technology. But to go back to your initial statement, we have all these disasters happening, you know, hopefully people are all doing reasonably okay at the end of these horrible disasters, but if you're an enterprise of any size, it doesn't have to be a big enterprise, you cannot go back to your users or customers and say that because of a hurricane you cannot have access to your data. That's sometimes legally not allowed, and other times it's just suicide for a business >> And HPE in Houston, it's a huge plant down there. >> Jagane: Indeed. >> They got hit hard. 
Yep, in those sorts of circumstances, you want to make sure that your data is available in multiple data centers spread throughout the world, and we give you that capability. >> Okay, what are some of the successes? Let's talk through that now; obviously you've got the technology, I get that. Where are the stakes in the ground? Who's adopting it? I know you do a lot of biz dev deals. I don't know if they're actually OEM-type deals, or just licensing deals. Take us through where your successes are with this technology. >> So, biz dev wise, we have a mix of OEM deals and licenses and co-selling agreements. The strong ones are all OEMs, of course. We have great partnerships with IBM, Amazon, Microsoft; just wonderful partnerships. As for the actual end customers, we started off selling mostly to the financial industry, because they have a legal mandate, so they were the first to look into this sort of thing. But now we've expanded into automobile companies. A lot of the auto companies are generating vast amounts of data from their cars, and you can't push all that data into a single data center; that's just not reasonable. You want to push that data into a single data store that's distributed across the world, wherever the car is closest. We offer that capability that nobody else can, so we've got big auto manufacturers signed up, and we've got big retailers signed up for exactly the same capability. You cannot imagine ingesting all that data into a single location. You want this replicated across, you want it available no matter what happens to any single region or data center. So we've got tremendous success in retail and banking, and a lot of this is through partnerships again. >> Well congratulations. I've got to ask, you know, what's new with you guys? Obviously you have success with the active-active. We'll dig into the Hortonworks thing to check your comment around them not having it, so we'll certainly look at the Dataplane, which we like. We interviewed Rob Bearden. Love the announcement, but they don't have the active-active; we're going to document that, and get that on the record. But you guys are doing well. What's new here, what's in New York, what are some of your wins? Can you just give a quick update on what's going on at WANdisco?
>> Yes, we're built tightly into IBM >> So you've got a pretty strong legacy >> And a monopoly. >> On the mainframe. >> Like the fiber channel of replication. (John and Jagane laugh) That was a bad analogy. I mean it's like... Well, I mean fiber channel has only limited suppliers 'cause they have unique technology, it was highly important. >> But the basic proposition is look, any customer that wants to ensure that a particular data source is going to be available in a distributed way, and you're going to have some degree of consistency, is going to look at this as an option. >> Yes. >> Well you guys certainly had a great team under your leadership, it's got great tech. The final question I have for you here is, you know, we've had many conversations about the industry, we like to pontificate, I certainly like to speculate, but now we have eight years of history now in the big data world, we look back, you know, we're doing our own event in New York City, you know, thanks to great support from you guys and other great friends in the community. Appreciate everyone out there supporting theCUBE, that's awesome. But the world's changed. So I got to ask you, you're a student of the industry, I know that and knowing you personally. What's been the success formula that keeps the winners around today, and what do people need to do going forward? 'Cause we've seen the train wreck, we've seen the dead bodies in the industry, we've kind of seen what's happened, there've been some survivors. Why did the current list of characters and companies survive, and what's the winning formula in your opinion to stay relevant as big data grows in a huge way from IoT to AI cloud and everything in between? >> I'll quote Stephen Hawking in this. Intelligence is the capability to adapt to changes. That's what keeps industries, that's what keeps companies, that what keeps executives around. If you can adapt to change, if you can see things coming, and adapt your core values, your core technology to that, you can offer customers a value proposition that's going to last a long time. >> And in a big data space, what is that adaptive key focus, what should they be focused on? >> I think at this point, it's extracting information from this volume of data, whether you use machine learning in the modern days, or whether it was simple hive queries, that's the value proposition, and making sure the data's available everywhere so you can do that processing on it, that remains the strength. >> So the whole concept of digital business suggests that increasingly we're going to see our assets rendered in some form as data. >> Yes. >> And we want to be able to ensure that that data is able to be where it needs to be when it needs to be there for any number of reasons. It's a very, very interesting world we're entering into. >> Peter, I think you have a good grasp on this, and I love the narrative of programming the world in real time. What's the phrase you use? It's real time but it's programming the world... Programming the real world. >> Yeah, programming the real world. >> That's a huge, that means something completely, it's not a tech, it's a not a speed or feed. >> Well the way we think about it, is that we look at IoT as a big information transducer, where information's in one form, and then you turn it into another form to do different kinds of work. And that big data's a crucial feature in how you take data from one form and turn it into another form so that it can perform work. 
But then you have to be able to turn that around and have it perform work back in the real world. There's a lot of new development, a lot of new technology, coming on to help us do that. But any way you look at it, we're going to have to move data with some degree of consistency; we're still going to have to worry about making sure that if our policy says that this action needs to take place here, and that action needs to take place there, it actually happens the way we want it to, and that's going to require a whole raft of new technologies. We're just at the very beginning of this. >> And active-active, things like active-active in what you're talking about, really is about value creation. >> Well, the thing that makes active-active interesting is, again, borrowing from your terms, it's a new term to both of us, I think, today. I like it, actually. But the thing that makes it interesting is the idea that you can have a source here that is writing things, and you can have a source over there that is writing things, and as a consequence, you can nonetheless look at a distributed database and keep it consistent. >> Consistent, yeah. >> And that is a major, major challenge that's going to become increasingly a fundamental feature of our digital business as well. >> It's an enabling technology for the value creation, and you call it work. >> Yeah, that's right. >> Transformation of work. Jagane, congratulations on the active-active, and WANdisco's technology, and all the deals you're doing; got all the cloud locked up. What's next? Are you going to lock up the edge? You're going to lock up the edge too, and the cloud. >> We do like this notion of the edge cloud and all the intermediate steps. We think that replicating data between those systems, or running consistent compute across those systems, is an interesting problem for us to solve. We've got all the ingredients to solve that problem. We will be on that. >> Jagane Sundar, CTO of WANdisco, back on theCUBE, bringing it down. New tech, a whole new generation of modern apps and infrastructure happening in distributed and decentralized networks. Of course theCUBE's got it covered for you, and more live coverage here in New York City for BigData NYC, our annual event, theCUBE and Wikibon here in Hell's Kitchen in Manhattan; more live coverage after this short break.
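The mechanism Peter summarized earlier in this interview, two writers kept consistent because every write is stamped into one agreed global order before any replica applies it, can be illustrated with a toy sketch. Note the hedge: this sketch uses a single central sequencer for simplicity, which is exactly what a real Paxos implementation such as WANdisco's avoids, since consensus lets replicas agree on the order with no central component. All names below are invented for the example.

```python
# Toy illustration of active-active replication via total ordering: every
# write gets a global sequence number before any replica applies it, so
# independent writers converge on identical state. A real system reaches
# this agreement with a consensus protocol, not a central sequencer.
import itertools

class Sequencer:
    """Stands in for the consensus layer that orders writes."""
    def __init__(self):
        self._seq = itertools.count()
    def order(self, write):
        return (next(self._seq), write)

class Replica:
    def __init__(self):
        self.log, self.state = [], {}
    def apply(self, stamped):
        self.log.append(stamped)          # same log order at every replica
        _, (key, value) = stamped
        self.state[key] = value

seq = Sequencer()
east, west = Replica(), Replica()

# Two writers on two continents; both replicas apply the same ordered log.
for stamped in [seq.order(("k1", "from-east")), seq.order(("k1", "from-west"))]:
    east.apply(stamped)
    west.apply(stamped)

assert east.state == west.state   # synchronized; the conflict is resolved by order
```

The design point is that ordering, not locking, is what resolves the write-write conflict: once both replicas agree on the sequence, they can apply writes independently and still end up identical.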

Published Date : Sep 27 2017

SUMMARY :

brought to you by SiliconANGLE Media great to see you again as usual here on theCUBE. Thank you John, thank you Peter, Disaster recovery's been the low hanging fruit for you guys, One of the things we do is we can replicate Are you saying that nobody does because to do active-active, you need this capability the puck came to you guys. and with the global infrastructure trends, And the cloud is just a data center, and the messaging to both locations You guys put a stamp on the stuff, is the only way for you to do active-active replication, or are you guys hoarding it for yourselves? and we work with several powerful partners But the key thing, John, in answer to your question that you guys have the data center replication thing down. Once you have that capability, Does that relate to anything in your tech? and you can use some shaky technologies but then they're exposed on reliability. Could be the same provider, could be another provider. So the guarantees are, you know we are willing to is it the ops guy, is it the facilities, who... you cannot have access to your data. And HPE in Houston, and we give you that capability. I know you do a lot of biz dev deals. and you can't push all that data into a single data center, and get that on the record. and that appended the cost model of NFS and SANs, So you got all Like the fiber channel of replication. But the basic proposition is look, in the big data world, we look back, you know, Intelligence is the capability to adapt to changes. and making sure the data's available everywhere So the whole concept of digital business is able to be where it needs to be What's the phrase you use? That's a huge, that means something completely, that it actually happens the way we want it to, in what you're talking about really is about is the idea that you can have a source here that's going to become increasingly and you call it work. Well you going to lock up the edge? We've got all the ingredients to solve that problem. and more live coverage here in New York City

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
IBM | ORGANIZATION | 0.99+
Amazon | ORGANIZATION | 0.99+
John | PERSON | 0.99+
Jagane Sundar | PERSON | 0.99+
Rob Bearden | PERSON | 0.99+
Microsoft | ORGANIZATION | 0.99+
Peter Burris | PERSON | 0.99+
Jagane | PERSON | 0.99+
John Furrier | PERSON | 0.99+
Peter | PERSON | 0.99+
WANdisco | ORGANIZATION | 0.99+
Stephen Hawking | PERSON | 0.99+
two writers | QUANTITY | 0.99+
Houston | LOCATION | 0.99+
New York City | LOCATION | 0.99+
Puerto Rico | LOCATION | 0.99+
Texas | LOCATION | 0.99+
New York | LOCATION | 0.99+
AWS | ORGANIZATION | 0.99+
Wikibon Research | ORGANIZATION | 0.99+
VMworld | ORGANIZATION | 0.99+
Florida | LOCATION | 0.99+
Google | ORGANIZATION | 0.99+
eight years | QUANTITY | 0.99+
both | QUANTITY | 0.99+
Oracle | ORGANIZATION | 0.99+
two separate databases | QUANTITY | 0.99+
20 years ago | DATE | 0.99+
Hortonworks | ORGANIZATION | 0.99+
Cube | ORGANIZATION | 0.99+
first | QUANTITY | 0.99+
WANdiscos | ORGANIZATION | 0.98+
over 10 years' | QUANTITY | 0.98+
theCUBE | ORGANIZATION | 0.98+
SiliconANGLE Media | ORGANIZATION | 0.98+
one form | QUANTITY | 0.97+
Wikibon | ORGANIZATION | 0.97+
One | QUANTITY | 0.97+
today | DATE | 0.97+
seven years ago | DATE | 0.96+
one | QUANTITY | 0.96+
one region | QUANTITY | 0.96+
Hadoop | TITLE | 0.96+
Hortonworks Dataplane | ORGANIZATION | 0.95+
NYC | LOCATION | 0.95+
four companies | QUANTITY | 0.94+
single region | QUANTITY | 0.94+
years | DATE | 0.93+
Dataplane | ORGANIZATION | 0.91+
single location | QUANTITY | 0.91+
single data center | QUANTITY | 0.91+
HPE | ORGANIZATION | 0.9+
one side | QUANTITY | 0.9+
one western | QUANTITY | 0.89+
Paxos | TITLE | 0.89+
Paxos | OTHER | 0.88+
both locations | QUANTITY | 0.88+
10 years | QUANTITY | 0.88+
BigData | EVENT | 0.87+
Azure | TITLE | 0.86+

Wikibon Analyst Meeting | September 15, 2017


 

>> [Peter] Hi, this is Peter Burris. Welcome to Wikibon Research's Friday research meeting on theCUBE. (tech music) Today, we're going to talk about something that's especially important, given the events of this week. As many of you know, Apple announced a new iOS 11, and a whole bunch of new devices. Now, we're not going to talk about the devices so much, but rather some of the function that's being introduced in iOS 11. Specifically, things like facial recognition. An enormous amount of processing is going to go into providing that type of service on devices like this, and that processing capability, those systems capabilities, are going to be provided by some new technologies that are related to artificial intelligence, big data, and something called deep learning. And the challenge that the industry's going to face is: where will this processing take place? Where will the data be captured? Where will the data be stored? How will the data be moved? What types of devices will actually handle this processing? Is this all going to end up in the cloud, or is it going to happen on increasingly intelligent smart devices? What about some of the different platforms? And, ultimately, one of the biggest questions of all: how are we going to bring some degree of consistency and control to all of these potentially distributed architectures, platforms, and even industries, as we try to weave all of this into something that serves all of us, and not just solves a few problems. Now, to kick this off, Jim Kobielus, why don't you start by making a quick observation on what we mean by deep learning. >> [Jim] Yeah, thank you, Peter. Deep learning. The term has been around for a number of years. Essentially, it's machine learning, but with more layers of neurons, able to do higher-level abstractions from the data. Abstractions such as face recognition, natural language processing, speech recognition, and so forth. So when we talk about deep learning now at the client, the question is to what extent more of these functions, face recognition as in iOS 11 on the iPhone 10, for example, can run there. To what extent will this capability be baked into all edge endpoints now? >> [Peter] Jim, I'm having a little bit of a problem hearing you, so maybe we can make sure that we can hear that a little bit better. But, very quickly, and very importantly, it suggests that the term deep learning suggests something a little different than I think we're actually going to see. Deep learning suggests that there's going to be a centralization, a function for some process. It's going to be the ultimate source of value. And I don't think we mean that. When we talk about deep learning, let's draw a distinction between deep learning as a process, and deep learning as a set of systems and designs and investment that's going to be made to deliver on this type of business function. Does deep learning fully capture what's going to happen here? >> [James] Is this for me, Peter? Can you hear me, Peter? >> [Peter] I can hear you better now, a little bit saturated. >> [James] Okay, I got my earbuds in. Yeah, essentially the term deep learning is a broad paradigm that describes both the development pipeline function that, more often than not, will be handled in the cloud among distributed teams, and those functions of deep learning that can be brought to the edge, to the end devices, the mobile devices, the smart sensors.
When we talk about deep learning at the edge, as enabled through chip sets, we're talking about functions such as local sensing; local inference from the data that's being acquired there; local actuation, taking action, like an autonomous vehicle steering right or left based on whether there is an obstacle in its path. So really, in the broadest sense, you need that full infrastructure to do all the building and the tuning and the training of deep learning models, and, of course, you need the enabling chip sets and tools to build those devices, those deep learning functions, that need to be pushed for local, often autonomous, execution at endpoints. >> [Peter] So, David Floyer, that strongly suggests that, in fact, deep learning points to a new system architecture model that is not going to be large and centralized, but rather is going to be dependent upon where data actually exists and how proximate it is to the set of events that we're both monitoring and ultimately trying to guide, as we think about new automation, new types of behavior. Take us through our thinking on some of these questions of where the data's going to reside, where the function's going to reside, and ultimately, how the architecture's going to evolve. >> [James] I think you're on mute, David. >> [David] Yes, I would put forward the premise that the majority of the processing of this data, and the majority of the spend on equipment for this data, will exist at the edge. Neil brought forward a very good differentiation between second-hand data, which is where big data is today, and primary data, which is what we're going to be analyzing and taking decisions on at the edge. As sensors increase the amount of data, and smart sensors come along, we're going to see more and more of a processing shift from the traditional centralized model to the edge. And taking Apple as another example, they're doing all of this processing of data locally. Siri itself is becoming more and more local, as opposed to centralized, and we're seeing the shift of computing down to the edge. And if we look at the amount of computing we're talking about, with the iPhone 10 it's six hundred billion operations a second. That's a lot of computing power. We see the same thing in other industries. There's the self-driving car: if you take the Nvidia Drive PX 2, it has a huge amount of computing power within it to process all of the different sources of data, in a device which is costing less than $1,000; $600, $700. So much lower pricing of processing, et cetera. Now, the challenge with the traditional model, where all of the data goes to the center, is the cost: moving all this data from the edge to the center is just astronomical. It would never happen. So only a subset of that data will be able to be moved. And people who develop systems, AI systems, for the edge, for example, will have to have simulation factories very local to them to do it; so car manufacturers, for example, having a small city, if you like, where they have very, very fast communication devices. And the amount of data that can be stored, as well, from this new primary source of data is going to be very, very small, so most of that data either is processed immediately, or it disappears. And after it's processed, in our opinion, most of it will disappear; 99% of that data will disappear completely.
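David's point, that primary data is processed once at the edge and then mostly discarded, with only a distilled result moving toward the center, can be illustrated with a small loop. The threshold, function name, and summary fields below are invented for the sketch; they stand in for whatever local inference a smart sensor would actually run.

```python
# Sketch of the edge pattern described above: infer locally on every raw
# reading, act on it if needed, forward only a distilled summary, and
# discard the raw data. All thresholds and names are hypothetical.
def edge_loop(readings, vibration_limit=0.8):
    summary = {"n": 0, "max": 0.0, "alerts": 0}
    for value in readings:          # primary data, seen exactly once
        if value > vibration_limit:
            summary["alerts"] += 1  # local inference/actuation point
        summary["n"] += 1
        summary["max"] = max(summary["max"], value)
        # the raw reading is dropped here; most of it never leaves the edge
    return summary                  # only this small record moves upstream

print(edge_loop([0.2, 0.5, 0.9, 0.3]))   # {'n': 4, 'max': 0.9, 'alerts': 1}
```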
So the traditional model of big data is being turned upside down by these new and prolific sources of data, and the value will be generated at the edge. That's where the value is: in recognizing a bad person coming into a building, or recognizing your friends, or recognizing that something is going wrong with a smart sensor locally. The vibrations are too high, or whatever the particular example is. That value will be generated at the edge by new classes of people and new classes of actors in this space. >> [Peter] So, Neil Raden, one of the interesting things we're talking about here is some pretty consequential changes in the nature of the applications, and the nature of the architectures and infrastructures that we're going to build to support these applications. But those kinds of changes don't take place without serious consideration of the business impacts. Is this something that companies are going to do, kind of willy-nilly? How deeply are companies, and how deeply are users, going to have to think about deploying these within their business? Because it seems like it's going to have a pretty consequential impact on how businesses behave. >> [Neil] Well, they're going to need some guidance, because there just aren't enough people out there with the skill to implement this sort of thing for all the companies that may want to do it. But more importantly than that, I think that our canonical models, right now, for deep learning and intelligence at the edge are pretty thin. We talk about autonomous cars or facial recognition, something like that; there's probably a lot more things we need to think about. And from that we can derive some conclusions about how to do all this. But when it comes to the persistence of data, there's a difference between a B to C application, where we're watching people click, and deciding next best offer, and anything that happened a few months ago was irrelevant, so maybe we can throw that data away. But when you're talking about monitoring the performance of an aircraft in flight or a nuclear power plant, or something like that, you really need to keep that data. Not just for analytical purposes, but probably for regulatory purposes. In addition to that, if you get sued, you want to have some record of actually what happened. So I think we're going to have to look at this whole business, and all of its different components, before we can categorically say, yes, we save this data, here's the best application, everything should be done in the cloud. I don't think we really know that yet.
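Neil's persistence point can be captured as policy rather than prose: whether edge data may be discarded depends on the domain, not just on cost. A toy Python sketch follows; the domains and rules are invented for illustration, and a real policy would be set with legal and compliance input.

```python
# Invented retention policy table illustrating Neil's distinction:
# clickstream can be aggregated and dropped, regulated telemetry cannot.
RETENTION_POLICY = {
    # domain: (keep_raw_data, reason)
    "b2c_clickstream": (False, "stale after weeks; aggregate and discard"),
    "aircraft_telemetry": (True, "regulatory and liability record"),
    "nuclear_plant_sensors": (True, "regulatory and liability record"),
}

def disposition(domain):
    keep, reason = RETENTION_POLICY.get(
        domain, (True, "unknown domain: default to keep"))
    action = "persist" if keep else "summarize, then discard"
    return f"{domain}: {action} ({reason})"

for d in ["b2c_clickstream", "aircraft_telemetry", "smart_lamp_events"]:
    print(disposition(d))
```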
If they try to do that, that's going to have an impact on how Amazon and Google, and some of the big cloud suppliers, invest to try to facilitate the movement of the data. But there's a lot of uncertainty here. Jim, why don't you take us through some of the ecosystem questions. What role will developers play? Where's the software going to end up? And to what degree is this going to end up in hardware and going to lead to, or catalyze, kind of a Renaissance in the notion of specialized hardware? >> [James] Yeah, those are great questions. I think most of the functionality, meaning the local sensing and inference and actuation, is inevitably going to end up in hardware, in highly specialized and optimized hardware for particular use cases. In other words, smart everything. Smart appliances, smart clothing, smart lamps, smart... You know, what's going to happen is that more and more of what we now call deep learning will just be built in by designers and engineers of all sorts, regardless of whether they have a science or a computer background. And so it's, I think, going to be part of the material fabric of reality, bringing intelligence to everything. With that said, then, if you look at the chip set architectures, if we can use the term chip set here, that will enable this vast palette of embedding of this intelligence in physical reality. The jury is really out about whether it will be GPUs, like Nvidia, of course, which is the power behind GPUs, versus CPUs, versus FPGAs, ASICs; there's various neuromorphic chip sets from IBM and others. It's clearly going to be a very innovative period of great ferment in innovation in the underlying hardware substrate, the chip sets, to enable all these different use cases in embedding all of this. In terms of developers, take the software developers. Definitely, they're still very much at the core of this phenomenon; when I say they, I mean data scientists, as the core developers of this new era, who are the ones who are building these convolutional neural networks and recurrent neural networks and long short-term memory networks, and so forth. All these DL algorithms very much are the province of data scientists, of the new generation of data scientists who specialize in those areas and who work hand-in-hand with traditional programmers and so forth, to put all of this intelligence into a shape that can then be embedded, containerized, whatever, and brought into some degree of harmonization with the physical hardware layer into which it will be embedded, whether that hardware is used for things like smart clothing or anything else. Now we have a new era where the collaborations are going to be diverse among nontraditional job and skills categories, who are focused on bringing AI into everything that touches our lives. It's wide open now. >> [Peter] So David Floyer, let me throw it over to you, because Jim's raised some interesting points about where the various value propositions are and how the ecosystem is going to emerge. This sounds like, once again, going back to the role that consumer markets are going to play from a volume, cost, and driving innovation standpoint. Are we seeing kind of a repeat of that; are the economics of volume going to also play a role here? Muted? >> [David] Yes, I believe so, very strongly. If you look at technologies and how they evolve. If you look for example at Intel, and how they became so successful in the chip market. They developed the chips with Microsoft for the PC.
That was very, very successful, and from that they then created the chip set for the data centers themselves. When we look at the consumer volumes, we see a very different marketplace. For example, GPUs are completely winning in the consumer market. So Apple introduced GPUs into their ARM processors this time around. Nvidia has been very, very successful, together with ARM, in producing systems for self-driving cars. Very, very powerful systems. So we're looking at new architectures. We're looking at consumer architectures, that in Nvidia's case came from game playing, and with ARM have come all of the distributed ecosystems, the clients, et cetera, all ARM-based. We're seeing that it's likely that consumer technologies will be utilized in these ecosystems because volume wins. Volume means reduction in price. And when you look at, for example, the cost of an ARM processor within an Apple iPhone, it's $26.90. That's pretty low compared with the thousands of dollars you're talking about for a processor going into a PC. And when you look at the processing power of these things, in terms of operations, they actually deliver greater power. And the same with Nvidia with the GPUs. So yes, I think there is a potential for a big, big change. And a challenge to the existing vendors that they have to change and go for volume, and pricing for volume, in a different way than they do at the moment. >> [Peter] So that's going to have an enormous impact, ultimately, on the types of hardware designs that we see emerge over the course of the next few years. And the nature of the applications that the ecosystem is willing to undertake. I want to pivot and bring it back to the notion of deep learning as we think about the client. Because it ultimately describes a new role for analytics and how analytics are going to impact the value propositions, the behaviors, and ultimately the experience that consumers, and everybody else, have with some of these new technologies. So Neil, what's the difference between deep learning-related analytics on the client, and a traditional way of thinking about analytics? Take us through that a little bit. >> [Neil] Deep learning on the client? You mean at the edge? >> [Peter] Well, deep learning on a client, deep learning on the edge, yeah. Deep learning out away from the center. When we start talking about some of this edge work, what's the difference between that work and the traditional approach for data analytics, data warehousing, et cetera? >> [Neil] Well, my naive point of view is deep learning involves crunching through tons of data in training models to come up with something you can deploy. So I don't really see deep learning happening at the edge very much. I think David said this earlier, that the deep learning is happening in the big data world when they have trillions of observations to use. Am I missing your point? >> [Peter] No, no. We talked earlier about the difference between deep learning as a process and deep learning as a metaphor for a new class of systems. So when we think about utilizing these technologies, whether it's deep learning, or AI, whatever we call it, and we imagine deploying more complex models close to the edge, what does that mean from the standpoint of the nature of the data that we're going to use, the approach, the tooling that we're going to use, the approach we're going to take organizationally, institutionally, to try to ensure that that work happens? Is there a difference between that and doing data warehousing with financial systems?
>> [Neil] Well, there's a difference in terms of the technology. I think that 10 years ago, we were talking about complex event processing. The data wasn't really flowing from sensors, it was scraping Web screens and that sort of thing, but it was using decision-making technology to look for patterns and pass things along. But you have to look at the whole process of decision making. If you're talking about commercial organizations, there's not really that much in commercial organizations that requires complex, real-time decision making about supply chain or shop floor automation, or that sort of thing. But from a management point of view, it's not really something that you do. The other part of decision making that troubles me is, I wrote about this 10 years ago, and that was that we shouldn't be using any kind of computer-generated decision making that affects human lives. And I think you could even expand that to living things, or harming the environment and so forth. So I'm a little bit negative about things like autonomous cars. It's one thing to generate a decision-making thing that issues credit cards, and maybe it's acceptable to have 5% or 3% of decisions just completely wrong. But if that many are wrong in autonomous driving, especially trucks, the consequences are disastrous. So we have to be really careful about this whole thing with IoT; we've got to be a lot more specific about what we mean, what kinds of architectures, and what kinds of decisions we're taking on. >> [Peter] I think that's a great point, Neil. There's a lot that can be done, and then the question is that we have to make sure that it's done well. We understand some of the implications, and again, I think there's a difference between a transition period and a steady state. We're going to see a lot of change over the next few years. The technology's making it possible to do so, but there's going to be a lot of social impacts that ultimately have to be worked out. And we'll get to some of those in a second. But George, George Gilbert, I wanted to give you an opportunity to talk a little bit about the way that we're going to get this done. Talk about where this training is going to take place, per what Neil said. Is the training going to take place at the edge? Is the training going to take place in the cloud? Institutionally, what do the CIO and the IT organization have to do to prepare for this? >> [George] So I think the sort of widespread consensus is that the inferencing, and sort of predicting for the low latency actions, will be at the edge, and some smaller amount of data goes up into the cloud for training, but the class of training that we will do over time changes. And we've been very fixated on sort of the data centricity: most of the data's at the edge, a little bit in the center. And Neil has talked about sort of secondary, or reference data, to help build the model from the center. But the models themselves that we build in the center and then push out will change in the sense that we look at the compute intensity. The compute intensity of the cloud will evolve, so that it's more advantageous there to build models that become rich enough to be like simulation. So in other words, it's not, do I, if I see myself drifting over the lane marker on the right, do I correct left? But you have a whole bunch of different knobs that get tuned, and that happens over time.
So that the idea of the model is almost like a digital twin, but not of, let's say, just an asset or physical device, but almost like a domain, in that that model is very compute intensive, it generates a lot of data sets, but then the model itself can be distilled down and pushed out to the edge. Or, essentially, guiding or informing decisions, or even making decisions, with a lot more knobs than you would have with a more simplistic model. >> [Peter] So, Ralph, I know that we've spent some time looking at some of the market questions of this. Based on this conversation, can you kind of give a summary of how much data volume we think is happening, data movement's happening? What's the big, broad impact on some of the segments and opportunities over the course of the next couple of years? >> [Ralph] Yeah, I think, think back maybe 10 years: the amount of unstructured data that was out there was not all that great. Obviously, in the last 10 years or more, there's a lot more of it. So the growth of data is dramatically increasing. Most of it is going to be in the mobile area. So there's just a lot of it out there. And I think fishing for where you derive value from that data is really critical for moving optimization of processes forward. But I think I agree with Neil that there's a lot of work to be done yet about how that actually unfolds.
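George's "distilled down and pushed out to the edge" has a concrete analogue in model distillation: a compact student network is trained on the soft outputs of the heavy, compute-intensive model. A hedged Keras sketch follows, with invented shapes and random data; training the teacher itself on real data is omitted here, and nothing about the layer sizes reflects any particular product.

```python
# Hedged distillation sketch: the teacher's soft outputs become the
# training targets for a small student that is cheap enough for the edge.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

teacher = tf.keras.Sequential([          # stand-in for the rich cloud model
    layers.Dense(256, activation="relu", input_shape=(32,)),
    layers.Dense(256, activation="relu"),
    layers.Dense(10, activation="softmax"),
])
student = tf.keras.Sequential([          # small enough to push to the edge
    layers.Dense(16, activation="relu", input_shape=(32,)),
    layers.Dense(10, activation="softmax"),
])

x = np.random.rand(1024, 32).astype("float32")
soft_targets = teacher.predict(x, verbose=0)   # the teacher's "knowledge"

student.compile(optimizer="adam", loss="categorical_crossentropy")
student.fit(x, soft_targets, epochs=3, verbose=0)
print("teacher params:", teacher.count_params())
print("student params:", student.count_params())
```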
And that the new architectures, increasingly, are going to feature a utilization of dramatic new levels of processing on devices. We observe, for example, that the new iPhone is capable of performing 600 billion instructions per second. That's an unbelievable amount of processing power. And we're going to find ways to use that up, to provide services closer to end users without forcing a connection. This is going to have enormous implications, overall, in the industry. Questions, for example, like how are we going to institutionally set up the development flow? We think we're going to see more model building at the center, with a constrained amount of the data, and more execution of these models at the edge. But we note that there's going to be a transition period here. There's going to be a process by which we're learning what data's important, what services are important, et cetera. We also think it's going to have an enormous impact, for example, on even describing the value proposition. If everything is sold as a product, that means the cost of moving the data, the cost of liability, et cetera, on these devices is going to be extreme. It's going to have an enormous impact on the architectures and infrastructures we use. If we think in terms of services, that might have a different, or lead to a different set of ecosystem structures being put in place, because it will change the transaction costs. The service provider, perhaps, is going to be more willing to move the data, because they'll price it into their service. Ultimately, it's going to have a dramatic impact on the organization of the technology industry. The past 25, 30, 40 years have been defined, for the first time, by the role that volume plays within the ecosystem. Where Microsoft and Intel were the primary beneficiaries, or were primary beneficiaries of that change. As we move to this notion of deep learning and related technologies at the edge, providing new classes of behavior, it opens up the opportunity to envision a transitioning of where the value is up and down the stack. And we expect that we're going to see more of that value be put directly into hardware that's capable of running these models with enormous speed and certainty in execution. So a lot of new hardware gets deployed, and then the software ecosystem is going to have to rely on that hardware to provide the data and build the systems that are very data rich to utilize and execute on a lot of these, mainly ARM processors that are likely to end up in a lot of different devices, in a lot of different locations, in its highly distributed world. The action item for CIOs is this. This is an area that's going to ensure that a role for IT within the business, as we think about what it means for a business to exploit some of these new technologies, in a purposeful and planful and architected way. But it also is going to mean that more of the value moves away from the traditional way of thinking about business systems with highly stylized data to a more clear focus on how consumers are going to be supported, devices are going to be supported, and how we're going to improve and enhance the security and the utilization of more distributed, high quality processing at the edge, utilizing a new array of hardware and software within the ecosystem. 
Alright, so I'm going to close out this week's Wikibon Friday Research Meeting on theCUBE, and invite you back next week where we'll be talking about new things that are happening in the industry that impact your lives and the industry. Thank you very much for attending. (tech music)

Published Date : Sep 15 2017

SUMMARY :

And the challenge that the industry's going to face is, to do higher level abstractions from the data. It's going to be the ultimate source of value. deep learning functions, that need to be pushed that is not going to be large and centralized, is that all of the data goes to the center, and the nature of the architectures and infrastructures And from that we can derive some conclusions And my guess is that the ecosystem is going to change pretty the chip sets, to enable all these different use cases and how the ecosystem is going to emerge. and in ARM, has come all of the distributed ecosystems, that the ecosystem is willing to undertake. and the traditional approach for data analytics, that the deep learning is happening and deep learning as a metaphor for a new class of systems. of the technology. and the IT organization have to do to prepare for this? So that the idea of the model is almost like a digital twin, of the next couple of years? Most of it is going to be in the mobile area. restricts the pressure to try to move all

SENTIMENT ANALYSIS :

ENTITIES

Entity                                 Category         Confidence
David Floyer                           PERSON           0.99+
David                                  PERSON           0.99+
Amazon                                 ORGANIZATION     0.99+
Jim Kobielus                           PERSON           0.99+
Neil                                   PERSON           0.99+
George                                 PERSON           0.99+
Microsoft                              ORGANIZATION     0.99+
Google                                 ORGANIZATION     0.99+
Neil Raden                             PERSON           0.99+
Peter Burris                           PERSON           0.99+
$26.90                                 QUANTITY         0.99+
Ralph                                  PERSON           0.99+
Jim                                    PERSON           0.99+
James                                  PERSON           0.99+
Nvidia                                 ORGANIZATION     0.99+
IBM                                    ORGANIZATION     0.99+
September 15, 2017                     DATE             0.99+
Peter                                  PERSON           0.99+
Apple                                  ORGANIZATION     0.99+
99%                                    QUANTITY         0.99+
$700                                   QUANTITY         0.99+
5%                                     QUANTITY         0.99+
Intel                                  ORGANIZATION     0.99+
next week                              DATE             0.99+
$600                                   QUANTITY         0.99+
3%                                     QUANTITY         0.99+
George Gilbert                         PERSON           0.99+
less than $1,000                       QUANTITY         0.99+
iPhone                                 COMMERCIAL_ITEM  0.99+
this week                              DATE             0.99+
Drive-2                                COMMERCIAL_ITEM  0.99+
iOS 11                                 TITLE            0.99+
Siri                                   TITLE            0.99+
first time                             QUANTITY         0.99+
thousands of dollars                   QUANTITY         0.98+
10 years ago                           DATE             0.98+
both                                   QUANTITY         0.98+
one                                    QUANTITY         0.98+
iPhone 10                              COMMERCIAL_ITEM  0.98+
Today                                  DATE             0.98+
today                                  DATE             0.97+
one point                              QUANTITY         0.96+
Wikibon Research                       ORGANIZATION     0.95+
Neal                                   PERSON           0.95+
600 billion instructions per second    QUANTITY         0.95+
six hundred billion operations         QUANTITY         0.94+

20170908 Wikibon Analyst Meeting Peter Burris


 

(upbeat music) >> Welcome to this week's edition of Wikibon Research Meeting on the Cube. This week we're going to talk about a rather important issue that raises a lot of questions about the future of the industry, and that is, how are information technology organizations going to manage the wide array of new applications, new types of users, new types of business relationships that's going to engender significant complexity in the way applications are organized, architected and run. One of the possibilities is that we'll see an increased use of machine learning, ultimately inside information technology and operations management applications, and while this has tremendous potential, it's not without risk and it's not going to be simple. These technologies sound great on paper but they typically engender an enormous amount of work and a lot of complexity themselves to run. Having said that, there are good reasons to suspect that this approach will in fact be crucial to ultimately helping IT achieve the productivity that it needs to support digital business needs. Now a big challenge here is that the technology, while it looks good, as I said, nonetheless is pretty immature, and in today's world, there's a breadth first and a depth first approach to thinking about this. Breadth first works on, or worries about, end to end visibility into how applications work across multiple clouds, on premises, in the cloud, across applications, wherever they might be. You get an enormous amount of visibility and alerts, but you also get a lot of false positives, and that creates a challenge because these tools just don't have enormous visibility into how the individual components are working or how their relationships are set up; they just look at the broad spectrum of how work is being conducted. The second class is looking at depth first, which is really based on the digital twin notion that's popular within the IOT world, and that is vendors delivering out of the box models that are capable of doing a great job of creating a digital simulacrum of a particular resource so that it can be modeled and tracked and tested. Now again, a lot of potential, a lot of questions about how machine learning and iTom are going to come together. George, what is one of the key catalysts here? Somewhere in here there's a question about people. >> Okay, there's a talent question; always with the introduction of new technology, it's people, process, technology. The people end of the equation here is that we've been trying to upskill and create a new class of application developer, as Jim has identified. This new class is a data scientist, and they focus on data intensive applications and machine learning technology. The reason I bring up the technology is when we have this landscape that you described, that is getting so complex, where we're building on business transaction applications, extending them with systems of engagement and then the operational infrastructure that supports both of them, we're getting many orders of magnitude more complexity in multiple dimensions and in data, and so we need a major step function in the technology to simplify the management of that, because just the way we choked on the deployment, mainstream deployment, of big data technology in terms of lack of the specialized administrators, we are similarly choking on the deployment of very high value machine learning applications because it takes a while to train a new generation of data scientists.
>> So George, we got a lot of challenges here in trying to train people, but we're also expecting that we're going to have better-trained technology to address some of these new questions, so Jim, let me throw it to you. When we think ultimately about this machine learning approach, what are some of the considerations that people have to worry about as they envision the challenges associated with training some of these new systems? >> Yeah, I think one of the key challenges with training new systems for iTom is, do you have a reference data set? The predominant approach to machine learning is something called supervised learning, where you're training an algorithm against some data that represents what you're trying to detect or predict or classify. For IT and operations management, you're looking for anomalies, for unprecedented events, black swan events and so forth. Clearly, if they're unprecedented, there's probably not going to be a reference data set that you can use to detect them, hopefully before they happen, and neutralize them. That's an important consideration, and supervised learning breaks down if you can't find a reference data example. Now there are approaches to machine learning called unsupervised learning, which relies on something called cluster analysis algorithms, which would be able to look for clusters in the data that might be indicative of correlations that might be useful to drill into, might be indicative of anomalous events and so forth. What I'm getting at is that when you're then considering ML, machine learning, in the broader perspective of IT and operations management, do you go with supervised learning, do you go with unsupervised learning for the anomalies, and, if you want to remediate issues where you have a clear set of steps to follow from precedent, you might also want something called reinforcement learning. What I'm getting at is that you have to consider all the aspects of training the models to acquire the knowledge necessary to manage the IT operations. >> Jim, let me interrupt. What we've got here is a lot of new complexity, and we've got a need for more people, and we've got a need for additional understanding of how we're going to train these systems, but this is going to become an increasingly challenging problem. David Floyer, you've done some really interesting research with the entire team on what we call unigrid. Unigrid is looking at the likely future of systems as we're capable of putting more data proximate to other data and using that as a basis for dramatically improving our ability to, in a speedy, nearly real-time way, drive automation between many of these new application forms. It seems as though depth first, or what we're calling depth first, is going to be an essential element of how unigrid's going to deploy. Take us through that scenario, and what do you think about how these are going to come together? >> Yes, I agree. The biggest, in our opinion, the biggest return on investment is going to come from being able to take the big data models, the complex models, and make those simple enough that they can, in real time, help the acceleration, the automation of business processes. That seems to be the biggest return on this, and unigrid is allowing a huge amount more data to be available in near real-time, 100 to 1000 times more data, and that gives us an opportunity for business analytics, which includes of course AI and machine learning and basic models, etc.,
to be used to take that data and apply it to the particular business problem, whether it be fraud control, whether it be any other business processing. The point I'm making here is that coding techniques are going to be very, very stretched. Coding techniques for an edge application in the enterprise itself, and also of course coding techniques for pushing down stuff to the IOT and to the other agents. Those coding techniques are going to focus on performance first to begin with. At the same time, a lot of that coding will come from ISVs into existing applications, and with it, the ISVs have the problem of ensuring that this type of system can be managed. >> So George, I'm going to throw it back to you at this point in time because, based on what Dave has just said, that there's new technology on the horizon that has the potential to drive the business need for this type of technology, we'll get to that in a little bit more detail in a second, but is it possible that at least the depth first side of these ML and iTom applications could become the first successful packaged apps that use machine learning in a featured way? >> That's my belief, and the reason is that even though there's going to be great business value in linking, say, big data apps and systems of record and web mobile apps, say for fraud prevention or detection applications where you really want low latency integration, most of the big data applications today are more high latency integration, where you're doing training and inferencing more in batch mode and connecting them with high latency with the systems of record or web and mobile apps. When you have that looser connection, that high latency connection, it's possible to focus just on the domain, the depth first. Because it's depth first, the models have much more knowledge built in about the topology and operation of that single domain, and that knowledge is what allows them to have very precise and very low latency remediation, either recommendations or automated actions. >> But the challenge with just looking at it from a depth first standpoint is that information about the infrastructure, about the relationships amongst technologies and toolings inside an infrastructure and application portfolio, is not revealed, and that becomes more crucial overall to the operation of the system. Now we've got to look a little bit at this notion of breadth first, the idea of tooling support end to end. That's a little bit more problematic; there's a lot of tools that are trying to do that today, a lot of services trying to do that today, but one of the things that's clearly missing is an overall good understanding of the dependency that these tools have on machine learning. Jim, what can you tell us about how overall some of these breadth first products seem to be dependent or not on some of these technologies. >> Yeah, first of all, breadth first products: what's needed is, above everything, basically an overall layer of graph analysis, graph modeling, to be able to follow the hundreds of interactions of transactions and business flows across your distributed IT infrastructure, to be able to build that entire narrative of what's causing a problem or might be causing a problem.
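A minimal sketch of that graph idea, assuming the Python networkx library and an invented five-service topology; a real breadth first tool would discover the topology rather than hard-code it, but the traversal logic is the same.

```python
# Model the service topology as a directed graph, then walk dependency
# edges from a symptom toward likely root causes, or the other way, to
# estimate blast radius. The topology here is invented for illustration.
import networkx as nx

topo = nx.DiGraph()                      # edge A -> B means A depends on B
topo.add_edges_from([
    ("web", "app"), ("app", "db"), ("app", "cache"),
    ("db", "san"), ("cache", "san"),
])

alert_on = "web"                             # where the symptom surfaced
suspects = nx.descendants(topo, alert_on)    # everything "web" depends on
print(f"alert on {alert_on}; candidate root causes: {sorted(suspects)}")

# Conversely, the blast radius of a failing component:
print("if san fails, impacted:", sorted(nx.ancestors(topo, "san")))
```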
That's critically important, but as you're looking at depth first, you can go back and forth between breadth and depth first, with digital twin as a fundamental concept and a fundamentally important infrastructure for depth first, because the digital twin infrastructure maintains the data that can be used as training data for supervised machine learning looking into issues with individual entities. If you can combine overall graph modeling at the breadth first level for iTom with the supervised learning based on digital twin for depth first, that makes for a powerful combination. I'm talking in a speculative way, George has been doing the research, but I'm seeing a lot of uptake of graph modeling technology in this sphere; now maybe George could tell us otherwise, but I think that's what needs to happen. >> I think conceptually, the technology is capable of providing this, George; I think that it's going to take some time, however, to see it fully exploited. What do you got to say about that? >> I do want to address, Jim, your comments about training. The graph that you're referring to is precisely what I mean when I use the word topology, figuring that more people will understand that, and it's in the depth first product that the models have been pre-trained, supervised and trained by the vendor, so they come baked in to know how to figure out the customer's topology and build what you call the graph. Technically, that's the more correct way of describing it, and those models, pre-trained and supervised, have enough knowledge also to figure out the behavior, which I call the operations, of those applications. It's when you get into the breadth first that it's harder, because you have no bounds to make assumptions about; it's harder to figure out that topology and operational behavior. >> But coming back to the question I asked: the fact is that it's not available today. As depth first products accrete capabilities and demonstrate success, and let's presume that they are because there is evidence that they are, that will increase the likelihood that they are generating data that can then be used by breadth first products. But that raises an interesting question. It's a question that certainly I've thought about as well, and that is, Nick, ultimately, where is the clearing house for ascertaining the claims that these technologies will in fact work together? Have you seen examples in the past of standards, at this level of complexity, coming together that can ensure that these technologies can in fact increasingly work together? Have we seen other places where this has happened? >> Good question. My answer is that I don't know. >> Well, but there have been standards bodies, for example, that did some extremely complex stuff in IO, where we saw an explosion in the number of storage and printer and other devices, and we saw separation of function between CPUs and channels, where standards around SCSI and whatnot in fact were relatively successful. But I don't know that they're going to be enough here, because those were specific engineering tests at the electricity and physics level, and it's going to be interesting to see whether those types of tests emerge here in the software world. All right, I want to segue from this directly into business impacts, because ultimately there's a major question for every user that's listening to this, and that is: this is new technology, and we know the business is going to demand it in a lot of ways.
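Jim's earlier supervised-versus-unsupervised point can also be made concrete. A hedged sketch assuming scikit-learn: cluster analysis over operational metrics, where points that fit no cluster surface as candidate anomalies, which matters precisely when no labeled reference data set exists. The metrics and thresholds are invented.

```python
# Unsupervised anomaly detection in the spirit of cluster analysis:
# DBSCAN labels points that belong to no cluster as -1 ("noise"),
# and those noise points become candidate anomalies to investigate.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

# (cpu_util, latency_ms) samples from normal operation, plus a black swan
normal = np.random.normal(loc=[40, 120], scale=[5, 15], size=(500, 2))
metrics = np.vstack([normal, [[95, 900]]])   # an unprecedented event

X = StandardScaler().fit_transform(metrics)
labels = DBSCAN(eps=0.5, min_samples=10).fit(X).labels_

anomalies = metrics[labels == -1]            # no cluster: flag for review
print(f"{len(anomalies)} anomalous samples flagged for investigation")
```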
The machine learning in business activities, as David Floyer talked about, business processes, but the big question is how is this going to end up in the IT organization? In fact is it going to turn into a crucial research that makes IT more or less successful? Neil Raden, we've got examples of this happening again in the past, where significant technology discontinuities just hit both the business and IT at the same time. What happened? >> Well, in a lot of cases it was a disaster. In many more cases, it was a financial disaster. We had companies spending hundreds of billions of dollars implementing an ERP system and at the end, they still didn't have what they wanted. Look, people not just in IT, not just in business, not just in technology, consistently take complex problems and try to reduce them to something simple so they can understand them. Nowhere is that more common than in medical research where they point at a surrogate endpoint and they try to prove the surrogate endpoint but they end up proving nothing about the disease they're trying to cure. I think that this problem now, it's gone beyond an inventory of applications and organizations, far too complex for people to really grasp all at once. Rather than come up with a simplified solution, I think we can be looking to software vendors to be coming up with packages to do this. But it's not going to be a black box. It's going to require a great deal of configuration and tuning within each company because everyone's a little different. That's what I think is going to happen and the other thing is, I think we're going to have AI on AI. You're going to have a data scientist work bench where the work bench recommends which models to try, runs the replicates, crunches the numbers, generates the reports, keeps track of what's happening, goes back to see what's happened because five years ago, data scientists were basically doing everything in R and Java and Python and there's a mountain of terrible code out there that's unmaintainable because they're not professional programmers, so we have to fix that. >> George? >> Neil, I would agree with you for the breadth first products where the customer has to do a lot of the training on the job with their product. But in the depth first products, they actually build in such richly trained models that there really is, even in the case of some of the examples that we've researched, they don't even have facilities for customers to add say the complex event processing for analytics for new rules. In other words, they're trained to look at the configuration settings, the environment variables, the setup across services, the topology. In other words it's like Steve Jobs says, it just works on a predefined depth first domain like a big data stack. >> So we're likely to see this happen in the depth first and then ultimately see what happens in the breadth first but at the end of the day, it still has to continue to attract capital to make these technologies work, make them evolve and make the business cases possible. David, again you have spent a lot of time looking at this notion of business case and we can see that there's a key value to using machine learning in say fraud detection, but putting shoes on the cobbler's children of IT has been a problem for years. What do you think? Are we going to see IT get the resources it needs starting with depth first but so that it can build out a breadth oriented solution? 
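Neil's "workbench that recommends which models to try and runs the replicates" reduces, at its simplest, to replicated cross-validation over a set of candidate models. A hedged sketch assuming scikit-learn; the data and candidate list are invented, and a real workbench would also search hyperparameters and track lineage.

```python
# Toy "AI on AI" workbench: try several candidates, run replicated
# cross-validation, and report a ranking for a human to review.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

candidates = {
    "logistic": LogisticRegression(max_iter=1000),
    "tree": DecisionTreeClassifier(),
    "forest": RandomForestClassifier(n_estimators=100),
}
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5)   # the "replicates"
    print(f"{name:10s} accuracy {scores.mean():.3f} +/- {scores.std():.3f}")
```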
>> My view is that for what it's worth, is we're going to focus or IT is going to focus on getting in applications which use these technologies and they will go into the places for that business where it makes most sense. If you're an insurance company, you can make hundreds of millions of dollars with fraud detection. If you are in other businesses, you want to focus on security or potential security. The applications that go in with huge amounts more data and more complexity within them, initially in my view will be managed as specific applications and the requirements of AI requirements to manage them will be focused on those particular applications, often by the ISVs themselves. Then from that, they'll be learning about how to do it and from that will come broader type of solutions. >> That's further evidence that we're going to see a fair amount of initial successes more in the depth first side, application specific management. But there's going to be a lot of efforts over the next few years for breadth first companies to grow because there's potentially significant increasing returns from being the first vendor out there that can build the ecosystem that ties all of these depth first products together. Neil, I want to leave you with a last thought here. You mentioned it earlier and you've done a lot of work on this over the years, you assert that at the end of the day, a lot of these new technologies, similar to what David just said, are going to come in through applications by application providers themselves. Just give us a quick sense of what that scenario's going to look like. >> I think that the technology sector runs on two different concepts. One is I have a great idea, maybe I could sell it. Did you hear that, I just got a message my connection was down there. Technology vendors will say that I have a, >> All right we're actually losing you, so Dave Alante, let me give you the last word. When you think about some of the organizational implications of doing this, what do we see as some of the biggest near term issues that IT's going to have to focus on to move from being purely reactive to actually getting out in front and perhaps even helping to lead the business to adopt these technologies. >> Well I think it's worth instructive to review the problem that's out there and the business impact that it'll have an what many of the vendors have proposed through software, but I think there are also some practical things that IT organizations can do before they start throwing technology at the problem. We all know that IT has been reactive generally to operations issues and it's affected a laundry list of things in the business, not only productivity, availability of critical systems, data quality, application performance and on and on. But the bottom line is it increases business risk and cost and so when the organizations that I talk to, they obviously want to be proactive. Vendors are promising that they have tools to allow them to be more proactive, but they really want to reduce the false positives. They don't want to chase down trivial events and of course cloud complicates all this. What the vendor community has done is it's promised end to end visibility on infrastructure platforms including clouds and the ability to discover and manage events and identify anomalies in a proactive manner. 
Maybe even automate remediation steps, all important things. I would suggest that these need to map to critical business processes, and organizations need to have that understanding or they're not going to know the business impact, and it's got to extend to cloud. Now, is AI and ML the answer? Maybe, but before going there, I would suggest that organizations look at three things that they can do. The first is, the fact is that most outages on infrastructure come from failed or poorly applied changes, so start with good change management and you'll attack probably 70% of the problem, in our estimation. The second thing that I think we would point users to is that they should narrow down their promises and get their SLAs firmed up so they can meet them and exceed them, and build up credibility with an organization before taking on wider responsibilities and increasing project skills. And I think the third thing is start acting like a cloud provider. You've got to be clear about the services that you offer, you want to communicate the SLAs that are clearly associated with those services, and charge for them appropriately so that you can fund your business. Do these three things before you start throwing technology at the problem.
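Dave's first point lends itself to a very small amount of code long before any machine learning is involved: correlate incidents with recent changes to the same service. The data, window, and services below are invented for illustration.

```python
# Flag any incident that starts within a window after a change to the
# same service; no ML required for the first 70% of the problem.
from datetime import datetime, timedelta

changes = [
    {"service": "billing", "applied": datetime(2017, 9, 11, 2, 0)},
    {"service": "portal",  "applied": datetime(2017, 9, 11, 9, 30)},
]
incidents = [
    {"service": "billing", "started": datetime(2017, 9, 11, 2, 40)},
    {"service": "portal",  "started": datetime(2017, 9, 12, 14, 0)},
]

WINDOW = timedelta(hours=4)
for inc in incidents:
    suspect = [
        c for c in changes
        if c["service"] == inc["service"]
        and timedelta(0) <= inc["started"] - c["applied"] <= WINDOW
    ]
    verdict = "likely change-related" if suspect else "no recent change"
    print(f"{inc['service']} incident at {inc['started']}: {verdict}")
```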
Depth first, which is probably the first place that machine learning's going to show up in a lot of these iTom technologies, provides an out of the box digital twin from the vendor that typically involves or utilizes a lot of testing on whether or not that twin in fact is representative and is an accurate simulacrum of the resource that's under management. Our expectation is that we will see a greater utilization of depth first tooling inactivity, even as users continue to experiment with breadth first options. As we look on the technology horizon, there will be another forcing function here and that is the emergence of what we call unigrid. The idea that increasingly you can envision systems that bring storage, network and computing under a single management framework at enormous scale, putting data very close to other data so that we can run dramatically new forms of automation within a business, and that is absolutely going to require a combination of depth first as well as breadth first technology to evolve. A lot of need, lot of change on how the IT organization works, a lot of understanding of how this training's going to work. The last point we'll make here is that this is not something that's going to work if IT pursues this in isolation. This is not your old IT where we advocated for some new technology, bought it in, played for it, create a solution and look around for the problem to work with. In fact, the way that this is likely to happen and it further reinforces the depth first approach of being successful here is we'll likely see the business demand certain classes of applications that can in fact be made more functional, faster, more reliable, more integratable through some of these machine learning like technologies to provide a superior business outcome. That will require significant depth first capabilities in how we use machine learning to manage those applications. Speed them up, make them more complex, make them more integrated. We're going to need a lot of help to ensure that we're capable of improving the productivity of IT organizations and related partnerships that actually sustain a business's digital business capabilities. What's the bottom line? What's the action item? The action item here is user organizations need to start exploring these new technologies, but do so in a way that has proximate near term implications for how the organization works. For example, remember that most outages are in fact created not by technology but by human error. Button up how you think about utilizing some of these technologies to better capture and report and alert folks, alert the remainder of the organization to human error. The second thing to note very importantly, is that the promises of technology are not to be depended upon as we work with business to establish SLA's. Get your SLA's in place so the business can in fact have visibility to some of the changes that you're making through superior SLA's because that will help you with the overall business case. Now very importantly, cloud suppliers are succeeding as new business entities because they're doing a phenomenal job of introducing this and related technologies into their operations. The cloud business is not just a new procurement model. It's a new operating model and start to think about how your overall operating plans and practices and commitments are or are not ready to fully incorporate a lot of these new technologies. Be more of a cloud supplier yourselves. 
All right, that closes this week's Friday research meeting from Wikibon on the Cube. We're going to be here next week, talk to you soon. (upbeat music)

Published Date : Sep 11 2017

SUMMARY :

and a lot of complexity themselves to run. in the technology to simplify the management of that so Jim let me throw it to you. What I'm getting at is that all the aspects is going to be an essential element and basic models, etc. to be used to take that data low latency integration, most of the big data applications from a depth first standpoint is that as the infrastructure, is graph analysis, graph modeling to be able to follow going to take some time however, to see it fully exploited. that the models have been pre-trained, supervised and demonstrate success, and let's presume that they are My answer is that I don't know. but I don't know that they're going to be as, and at the end, they still didn't have what they wanted. a lot of the training on the job with their product. but at the end of the day, it still has to continue of AI requirements to manage them will be focused that scenario's going to look like. Did you hear that, I just got a message near term issues that IT's going to have to focus on and the ability to discover and manage events but that the journey to get there is not going to be simple.

SENTIMENT ANALYSIS :

ENTITIES

Entity                                 Category         Confidence
David Floyer                           PERSON           0.99+
Jim                                    PERSON           0.99+
David                                  PERSON           0.99+
Neil Raden                             PERSON           0.99+
Neil                                   PERSON           0.99+
George                                 PERSON           0.99+
Dave Alante                            PERSON           0.99+
Dave                                   PERSON           0.99+
Steve Jobs                             PERSON           0.99+
Peter Burris                           PERSON           0.99+
70%                                    QUANTITY         0.99+
Unigrid                                ORGANIZATION     0.99+
100                                    QUANTITY         0.99+
Wikibon                                ORGANIZATION     0.99+
next week                              DATE             0.99+
two tools                              QUANTITY         0.99+
both                                   QUANTITY         0.99+
Nick                                   PERSON           0.99+
This week                              DATE             0.99+
Java                                   TITLE            0.99+
this week                              DATE             0.99+
first                                  QUANTITY         0.99+
second class                           QUANTITY         0.99+
One                                    QUANTITY         0.99+
one                                    QUANTITY         0.99+
Python                                 TITLE            0.99+
unigrid                                ORGANIZATION     0.99+
second thing                           QUANTITY         0.98+
five years ago                         DATE             0.98+
today                                  DATE             0.98+
each company                           QUANTITY         0.98+
first products                         QUANTITY         0.98+
first product                          QUANTITY         0.98+
first vendor                           QUANTITY         0.98+
hundreds of millions of dollars        QUANTITY         0.97+
1000 times                             QUANTITY         0.97+
first side                             QUANTITY         0.97+
first level                            QUANTITY         0.96+
first standpoint                       QUANTITY         0.96+
first domain                           QUANTITY         0.96+
first options                          QUANTITY         0.96+
third thing                            QUANTITY         0.96+
Friday                                 DATE             0.96+
single                                 QUANTITY         0.96+
first approach                         QUANTITY         0.95+
iTom                                   ORGANIZATION     0.95+
hundreds of billions of dollars        QUANTITY         0.95+
first approach                         QUANTITY         0.94+
Wikibon Research Meeting               EVENT            0.93+

Justin Donlon, Carbonite - Informatica World 2017 - #INFA17 - #theCUBE


 

>> Announcer: Live from San Francisco, it's The Cube covering Informatica World 2017, brought to you by Informatica. >> Hey, welcome back, everyone. Live here in San Francisco for Informatica World 2017. This is The Cube's exclusive coverage. I'm John Furrier with SiliconANGLE and The Cube. My co-host, Peter Burris with Wikibon Research. Our next guest, Justin Donlon, the Business Applications Manager, Carbonite; a customer of Informatica, welcome to The Cube. >> Thanks, it's great to be here. >> So you've done a lot of interesting things. We were just talking before you came on camera. >> Yeah. >> Really hard. Moving to the cloud was really easy. >> Right, it helped us big time. >> So tell us about some of the interesting things you've got going on. >> Okay, well, this is a great use case which we've been speaking about here at Informatica World. We sell through a number of distributors and through probably 8000, 9000 partners, but with two of our distributors we didn't have an e-comm way of interacting with them, so we built up this manual, semi-manual process. We actually called it the manual, automated, auto-process. (laughing) That's what we called it. So we built up this process and we just thought, we can't keep going like this. We'd receive a purchase order in email, send it over to sales ops, they'd open it, validate it, does this make sense? They agree, sign it off, pass it on to finance. Finance would open it, say, "yep, makes sense," key it into our Great Plains system, (mumbles), pass it on to provisioning. This is for a SaaS product that we sell. It's just not scalable at all. >> John: A lot of touch points through there-- >> Too many touch points and a delay for something that should be instant. So we spoke to these distributors and said, "What do you have, what can we do?" We didn't have any options for API integration, so they said, "Well, we've got EDI," so we said, "Okay, first question, what does that stand for?" (laughing) 'Cause we were a cutting-edge company, you know, and everything that we do is kind of, >> So 1980s. >> Yeah, I know. Kind of bleeding edge. So we kind of did our homework a little bit and found out what EDI is, electronic-- >> John: Where do we sign up for it? >> Yeah, Electronic Data Interchange, and then we said, "How are we going to do this?" We kind of looked around a little bit, spoke to our partners at Informatica, and they said, "You know, we've got an EDI capability in the cloud." So we said, "Great, let's do a POC," so we did that POC, banged it together pretty quickly, which is the beauty of a SaaS offering, or the beauty of the cloud, and as we were building this up, we were working with our counterparts at these distributors. These guys had lived and breathed EDI for all their partners, and at some point, I just thought, you know, we're building this thing up, I don't have anything to compare it to. How do we know if we're even building the right thing? We're just going on what we think seems to be making sense, so I phoned him up one day and I said, "Listen, would you mind just taking an hour "and let me walk through what we're building here? "Let me just show you what we're building. "See if it makes any sense." And so he said, "Sure, I'll be happy to do that." He knows EDI back to front, and as you mentioned just now, it's a very complex, very in-depth, old-school kind of system, an old-school way of processing transactions. I showed him what we'd built out and (mumbles) leveraged Informatica, with Salesforce as a front-end.
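For readers who have never seen one, the documents a flow like this exchanges are worth a quick illustration. An X12 850 purchase order is just delimited text, and pulling out its skeleton takes a few lines of Python. The sample segments and field positions below are simplified assumptions, not Carbonite's actual mapping; real X12 adds envelopes, acknowledgments (997s), and strict validation, which is exactly what a B2B gateway product exists to handle.

```python
# Hedged sketch of parsing a simplified X12 850 purchase order:
# segments end with "~", elements are separated by "*".
raw = ("ST*850*0001~BEG*00*SA*PO12345**20170911~"
       "PO1*1*100*EA*9.95**VP*SKU-77~SE*4*0001~")

segments = [s.split("*") for s in raw.strip("~").split("~")]
order = {}
for seg in segments:
    if seg[0] == "BEG":                       # beginning of PO
        order["po_number"] = seg[3]
    elif seg[0] == "PO1":                     # line item
        order.setdefault("lines", []).append(
            {"qty": int(seg[2]), "unit_price": float(seg[4]),
             "sku": seg[7]}
        )
print(order)  # {'po_number': 'PO12345', 'lines': [{'qty': 100, ...}]}
```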
It's a really, really kind of bolted-together solution, but we managed to put it together in a few months. I showed him each part, and at some point, or at many points, I was waiting for him to interrupt and say, "Well, hang on a second, why are you doing that?" But he didn't, he was silent through everything. So I thought, "Okay, what have we done here?" And so I turned it over to him and I said, "What do you think, is this okay? "Are we doing the right thing?" And he paused for a second and then he said, "Yeah," he says, "this is actually quite an elegant solution "that you've built out in a few months. "This is what has taken us 10 years to mature into." >> John: He was mad! >> I think he was a little mad, and for me, it was just a big sigh of relief as I thought, "Okay, we're actually on track," and we've actually been able to do something really quickly and elegantly through a SaaS product, through these cloud offerings. >> That's a great use case of Informatica. You've taken something that's hard, and cloud made it easy for you to do, and you had no baggage. In this case, it was a green field for you. What other end-to-end examples are you guys working on? Because data is now going end-to-end, and sometimes it's multi-vendor, of course, but cloud's going to help you. Anything else you've got going on? Any IOT, big data stuff happening? >> IOT, well, more especially, big data is becoming more and more important to us. As we've kind of grown through our consumer business; Carbonite started out as a consumer product, and has well over one and a half million consumer subscribers, and has moved into the very small business, then into this kind of SMB space and a little bit into the enterprise space, and as we've been doing that, we need to understand what we're doing, especially at very small business through the enterprise space. We've acquired these companies. One of the key things we need to do as we acquire companies is identify opportunities for cross-sell and for up-sell, and in order to do that, we've got to get that data into one repository where we can figure it out pretty quickly. So that's a huge initiative at Carbonite at the moment: building out our data vault and our data lakes, and getting some accurate and good data governance as we feed this data into these data vaults with our analytics team. >> Peter: That's on the operational side? >> Yeah, that's on the operational side. >> So what Carbonite does is a service to your customers, which is, I'm not going to say it's standard, but it's some really value-complex, complex things that you do. Has the engineering that you've done there informed the process by which you're starting to re-engineer your digital footprint on the operations side? >> I know that there are conversations that kind of happen between engineering on the product side and the analytics side, but I think we'd love to see more of that discussion happening. Often what happens in any company, I think, is that you get the silos, as we know, but the more that we can facilitate these discussions, I think the better it will be for us. >> Peter: So as you look at the Informatica toolkit, the pieces of it, where are you starting, where do you anticipate you're going to use more of some of these tools, whether it's Power Center or MDM, et cetera, as you try to do this, as you try to replicate the experience you just had with EDI and the cloud transaction manager? >> That's a really good question.
We've used application integration, so real-time application integration, which is a tool called ICRT. We've used Informatica Cloud Services, which is kind of batch transferring of information to and fro. We've just, with EDI, implemented B2B Gateway, which is for that connectivity with partners. And I think one of the key things for us moving forward is going to be data governance. As we have these different sources and different companies coming in, we've got to make sure that we govern and steward and, can I say, shepherd the data into its rightful homes accurately. We're trying to do that at the moment, and we're doing it through spreadsheets and SharePoint and Lucidchart and diagrams and Visio. One of the tools which I saw, which is an Informatica acquisition, Informatica Axon, is a data governance tool. It doesn't store any data, but it helps you manage and control your data. I think that's going to be crucial for any company which is working at amalgamating systems and data from various sources. >> John: What's the biggest challenge with data integration? One of the things is, companies have different views of the problem and opportunity. What's the biggest challenges that people have? >> You know, this is going to sound silly, but one of the biggest challenges that we have right now is just defining our data, defining what this term means. Even just this week, we've got one term, Sale Type, and still we're trying to figure out exactly what that means. That's one field that we want to be able to present to the business, and we're still saying, "Hang on a second, what about this scenario?" I think that's the biggest deal, just to have a uniform definition of your different metrics and KPIs and attributes across the business. >> If you do that, first you've got to find the sources, you've got to understand the degree to which synonyms are or are not synonyms, and then you've got to go through the social engineering of getting people to agree so it is clear, for example. Do you see that as a facilitator for this process? >> I think it will be, I definitely think that will be, especially with the self-discovery, or the intelligent structure discovery. I think that's going to be an exciting thing to see. >> I really like that intelligent structure discovery. That is just, that's not available in today's market. >> Yeah, that's right, but I think we're a step away from that, I really do think so. >> You guys are. >> Yeah. And as an industry I think we are, with Informatica, partnering with Informatica. >> With Informatica, how are you guys working through (mumbles), you guys as a customer? What specifically are you guys doing with them? Sounds like that EDI thing is an enabler. What else are you working with them on? Share some specifics-- >> Yeah, that's right. At this stage, it's all cloud. We don't have any on-prem Informatica, so it's all the cloud stuff, and we use it extensively for our cloud systems, our cloud business applications: Marketo, Salesforce, Zuora, NetSuite. Those are the four big ones that we're using, and those are the same (mumbles), I guess. So we're using Informatica to bridge the gap between these different systems a lot, and so that's our kind of bread and butter with Informatica at the moment. >> John: How about developers onsite for data and dealing with data? How do you guys organize staff and skill sets? Is it mostly engineering? Is there data analysts, data science, how do you guys?
>> Yeah, good question. We've got engineering, which kind of sits on the product. Then we've got IT business applications, which is where I fit in, and that's a combination of kind of business analysts as well as developers who build out a lot of the systems, and then we have an analytics team: the VP of analytics with advanced analytics, the analytics platform, data lake, data vault. And so those are the three big groups that we look at, where Informatica splits across the different groups. >> Now you guys are pretty solid with Informatica, happy with them? >> Yes, very much so. >> Yeah, we've got a great partnership with them. Every time we've bought, it's not because it's been a hard sell. (mumbles), we've said, "Okay, we need that, and this is what we need." >> John: So not a hard sell. How long have you been a customer, just curious? >> Almost three years. >> John: So you're not legacy Informatica. You're not locked in? >> No, I'm not. I've never even seen the on-prem. I've never even seen PowerCenter; I hope to never see it. I'm not interested. >> You're cloud-native? >> Cloud, cloud first. That's right. >> How 'about you guys, multiple clouds? What kind of clouds (mumbles) do you guys have? >> With Informatica? >> No, for you guys. >> For us-- >> Salesforce, Marketo. >> Those are the things, all those business applications. Salesforce, Marketo, a little bit of hybrid stuff. We've got our own on-prem-- >> Do you have your own data center? >> We do have, as Carbonite? >> Yeah. >> Absolutely. (talking over each other) Our customers' data. >> Would you put that in the cloud, customer data? >> Yeah, that is, in fact, moving to the cloud. >> John: Alright, you are. >> Yeah. >> But under your control. It's your, effectively it's your cloud. So as you think about working with Marketo, Salesforce, Zuora, remember the last one you mentioned, Oh, NetSuite >> NetSuite. >> As you look at those four, everybody, all these SaaS companies have a realization that if I can get the data, then I get the customer. Are they starting to make it more or less easy for you to perform these integrations across how they handle things? Where do you think their willingness to expose their APIs, get more information about the metadata, et cetera, is going, so you can do a more effective job of bringing it together and creating derivative value out of these very rich, cloud-based applications? >> I think that's an excellent question. And for me, as somebody who is not a developer, but as somebody who's very, very interested in moving and blending and transferring and transforming data, I have to rely on a tool like Informatica, because I don't want to go digging in the bowels of NetSuite to try and pull data out. I don't even want to have to write an API call. I honestly don't want to do that, and I don't really want my team to be doing that. I want to be able to point Informatica at a system and say, what have we got? So for me that's crucial. So I think that's where the partnership between Salesforce and Informatica matters; I'm relying on that, and I think that those sources, like NetSuite and Salesforce, are going to continue, hopefully, to have this really good, open partnership with these middleware, these integration tools. We have to have that. If we don't have that, we're stuck. The same people are going to start breaking into Salesforce and breaking into NetSuite to get the data, 'cause we're going to get it one way or the other. >> Justin, great success story.
Love to hear the cloud-native, you know, taking advantage of Informatica; it really highlights that they've got the modern approach. Appreciate you coming on. Justin Donlon, Carbonite's Business Applications Manager. This is The Cube with coverage of Informatica World 2017. More live coverage here after the short break. Stay with us. (innovative tones)
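As a concrete footnote to Justin's EDI story above: an X12 850 purchase order is just delimited text, with segments ending in "~" and elements split on "*". The Python sketch below is a minimal, hypothetical illustration of pulling a PO number and line items out of such a document. The sample segments and SKUs are invented for the example, and a real flow would also handle the ISA/GS envelopes, 997 acknowledgments, and partner-specific quirks that a managed B2B service takes care of for you.

```python
# A hypothetical, truncated X12 850 purchase order. Real documents are
# wrapped in ISA/GS envelopes, omitted here for brevity.
RAW_850 = (
    "ST*850*0001~"
    "BEG*00*SA*PO-12345**20170517~"
    "PO1*1*10*EA*49.99**VN*SKU-BACKUP-1YR~"
    "PO1*2*5*EA*99.00**VN*SKU-SERVER-1YR~"
    "CTT*2~"
    "SE*6*0001~"
)

def parse_850(raw):
    """Pull the PO number and line items out of a minimal 850 document."""
    order = {"po_number": None, "lines": []}
    for segment in filter(None, raw.split("~")):
        elements = segment.split("*")
        tag = elements[0]
        if tag == "BEG":
            # BEG03 carries the purchase order number.
            order["po_number"] = elements[3]
        elif tag == "PO1":
            # Line item: quantity, unit of measure, unit price, vendor SKU.
            order["lines"].append({
                "qty": int(elements[2]),
                "unit": elements[3],
                "price": float(elements[4]),
                "sku": elements[7],
            })
    return order

print(parse_850(RAW_850))
```

Feeding the parsed order straight into billing and provisioning is exactly the open-validate-rekey loop Justin describes collapsing into one automated step.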

Published Date : May 17 2017


Deepak Gattala, Dell - Informatica World 2017 - #INFA17 - #theCUBE


 

>> Announcer: Live from San Francisco, it's theCUBE, covering Informatica World 2017. Brought to you by Informatica. (Upbeat music fades) >> Hey, welcome back everyone. We're live here in San Francisco for Informatica World 2017. I'm John Furrier with theCUBE, SiliconANGLE's flagship program. We go out to the events and extract the signal from the noise. My co-host for the next two days is Peter Burris, general manager of Wikibon Research. You can find that research at wikibon.com. Our next guest is Deepak Gattala, big data architect, enterprise business intelligence strategy and planning with Dell, EMC Dell. Welcome to theCUBE. >> Deepak: Thank you so much. >> Or Dell Technologies. That's a big company now. You've got a zillion brands. We just came back from two days at Dell EMC World in Vegas. A lot of action goin' on in your world, but you're here at Informatica World. You are the distinct winner of the Innovation Honorary Award. Tell us about that. That was last night. >> Yeah, exactly. It was really good. It was great to be there, and part of the honorary awards and things like that. It's been really interesting; you should know, big data is coming into maturity at Informatica, and we use a lot of Informatica products to be successful on the big data side. >> So, you're a customer in this case, with Informatica. You're a customer of theirs. >> Deepak: Yes. >> Alright, so how are you guys using Informatica? >> So, Informatica, we use, uh... Well, you name it and we have it; there are so many products out there from Informatica right now. We started our journey back in 2007 with Informatica PowerCenter. As we evolved, different silos and different data sets came in; over time we acquired a lot of structured and unstructured data, and the thirst for big data also started growing tremendously. So we have a new platform, and we have a data lake today where we harvest a lot of data that's coming from different sources. Some structured, unstructured, semi-structured data. We needed a tool and technology that can help us to actually use our existing skill set, and the army of people who know Informatica from the days of 2007 till today, that we have kept inside. We wanted to raise the level of our skill set, rather than creating a bunch of new folks on the big data platform and starting them from scratch. >> Deepak, Hadoop was supposed to change the world, and, like, actually kill Informatica. All the press had "the end of Informatica" and "big data Hadoop" headlines. Hadoop is just one element, now, of the big data space, or how we wanted to describe it. Data lakes are also just fine. I don't like the term data lake, because they become data swamps if you don't use the data, and as Informatica-- My question is, as data is gettin' laid out, whether it's Hadoop, or in the clouds, making it relevant is a real architectural challenge. Can you share your insight into how you guys look at that data architecture? >> Definitely, right? So, like anybody else, you know, we also faced the same problem: we record a lot of data and we put it somewhere, whether it is a data pond or a lake, whatever you want to call it. Then over time we realized, like, "Oh, okay. Now what do we do with this data? What is the value? How can we extract it," right? That's how the tools and technologies around this whole ecosystem come into play, which actually provide the means to get that data value extracted from it.
Informatica is one of the tools of our choice. After doing some of the bake-offs and going through a process of proof of concept, we identified it. That was the stage where we had to make a conclusion to say that we want to go with Informatica, for more reasons than I can speak about, but one of the major reasons was, what I can say is, I can use the same existing skill sets that we have in-house, and improve on top of them. >> So Deepak, the concept of data management used to be associated with managing a file, managing a database, effectively managing tools that handle data. As we're hearing it increasingly applied in the digital universe, and I presume at Dell as well, the notion of data management is starting to extend and generalize a little bit differently. How do you see data management? And the next question I may ask you is, from an architectural standpoint, what's the relationship between what you do from an architecture standpoint and how you envision data being managed? What is data management to you? >> So data management is basically like harvesting your data, right? Basically, like I said, data is coming in different forms. You want the data consumption to happen from the different sources and different silos that we have. Over time, we are now at the situation of: what kind of data exists? What is the metadata? What is the governance underneath it, right? Those are becoming more and more important as we move and get more mature in this whole data management perspective. >> So, it's looking at data across applications and across tools to try to increasingly treat data as an asset that can be managed, just like a plant can be managed. Do I got that right? >> Exactly. So, we have to realize that now data is an asset. That's where the value is. Your business and your stakeholders, everybody is looking at the data to extract value out of it. >> So, does a data architect, then... Again, a data architect used to be the person who laid out the database, managed it, and whatnot. Do you see your job now more as design plus implementation, with an eye towards performance and ensuring people understand how to use data, making sure things can be governed? What is the emerging and evolving job of a data architect in this new era of data management, of managing data assets? >> So, traditionally, with the relational databases, it worked pretty well for the architects with the work they were doing. But the thing is, with the new changes we are going through, with the fast-evolving technologies that we are having and the mobile data that we are getting in different forms, it always gets challenging; it's not just a data architect. They have to come together with some of the solution architects to see how we have to go and consume all this information, but at the same time, Peter, we are providing value as we do it. >> So the tools that Informatica is providing are helping you do that? >> Yes, the tools at Informatica have definitely helped us, starting on the PowerCenter side, which is more inclined towards traditional databases. Today we use the big data management tools on our side, which actually give us the same kind of value that PowerCenter provided. Now we can provide the same value on our new platform. >> So as you look forward over the course of the next few years, do you anticipate that the assets, the data assets that you're creating in Dell, are going to be applied to...
How are developers going to do things differently? How are users going to do things differently? How do you see the data architect and data management serving these different consumers of data within Dell? >> So, that all includes, like, you know, business satisfaction, right? The business is trying to get the value of the data yesterday. So, you know what? You need to be fast enough to deliver the stuff to the business. One of the major capabilities that we are looking at is to have self-service capabilities for the business stakeholders, so they can go and do it themselves rather than waiting for IT, or IT being a bottleneck for them to deliver what they want. >> So I've got to ask you about this award. Dell was recently selected as the grand prize winner of the Informatica one million dollar software and services Big Data Ready Challenge. Was that a cash prize? Michael Dell just spent 69 billion dollars on EMC, you'd probably use-- No, I'm only kiddin'. (Peter laughs) Was that cash, or was that product, services and-- >> No, it's not cash. It's, I'd suggest, one of the honorary awards that we got as Dell, being a proactive customer; a few of us got on board it. And so, with the software, we are actually looking at Informatica Data Integration Hub and Informatica Intelligent Data Lake, which will actually provide self-service capabilities and integration at a single point. >> So you apply-- so, the objective is the self-service capability. The outcomes that you seek, you use the Data Integration Hub and, for a period of time, some free software and free services to build that pilot, and then roll it out to the organization. >> Exactly. The whole idea is to show the value out of these tools and technologies that Informatica has been investing in, and to help the whole ecosystem to improve the standards. >> So, Deepak, I've got to ask you. We had some of the execs on earlier, and they're talking about, "oh, data's the heartbeat of the organization." You know, kind of cliches, but kind of accurate. We believe that to be true. Certainly, data is the center of the action. But then, it brings up the whole data conversation. Who's the practitioner? Do you have heart surgeons? And then, what about the hygienist? You know, you've got to have data hygiene. The Big Data Ready Challenge is interesting, because it's always been a challenge to go from pilot to production, but then also it's the readiness around an organization's ability to understand what the hell they have, how do they use it, and then how do they take it to the next level? The mastery of doing the data. So, certainly there's different skill sets. How do you look at that analogy, and how would you organize teams around that? Because in some cases, there's a heart surgeon needed; you've got to redo some surgery on the company at the data strategy level. And sometimes it's just, know your hygiene, brush your teeth, if you will; kind of a concept of being ready. Your thoughts and reaction to that? >> So, yeah, initially we also started in the simplest days, just to get the data and put it in one place, but that's just one part of the whole equation. You have so many things like data governance, data quality, data security, because, you know, you might have PII data now that you want to secure, and you might have something like weblocks doing your security. Everybody has a play in this. It's not just one thing where, you know, here we have the data fusion done, and then, you know, you're good.
So that's not the case. You should just always know that the maturity happens in different stages. >> So hiring and organizing a team, that's a specialty, right? You're going to have the more skilled folks, and then some of the, you know, day-to-day, maybe an analyst or citizen data wrangler. You know, these things going on. Your thoughts on organizing the teams around data? >> Yeah, so one of the things is that we are starting to look at these. We are harvesting a lot of data scientists internal to Dell. That's because these are the guys Dell relies on mostly to see, to extract the value of the hidden stuff that we are not able to see as of today. To do that in an effective manner, we need to know how to unleash those guys and have them be self-sustained by themselves, so they can improve the quality and provide the value of innovation. >> The folks that have been following Informatica over the years, they were once a public company. The data warehouse was all the rage. Now, it's real time. All kinds of landscape changes in the marketplace. What's Informatica all about today? >> Informatica is no more just a data platform. I think it's fanning its wings to do more stuff, especially on the big data side. Now you have this Informatica Data Integration Hub, and you are talking about having this Intelligent Data Lake, and things like that, which is going to be leaning into the use of machine learning algorithms and things like that, having this whole metadata concept that has matured, more than just a metadata manager. And right now, it's getting very huge, because the different big data platforms are coming together. And it's not only the big data platform; the big data platform is a very loose term to me. It's just not Hadoop, you know? It could be... At Dell, we have so many different technologies coming together, and we call all of them a data platform for us. It's a data platform for us; Hadoop is just one of the components. >> So what you're saying basically is there's no silver bullet. >> Yes. >> And there is no magical answer. >> But there are skills. >> Yes. >> And so, increasingly what you're looking to do is say: what are the outcomes? What are the objectives? What are the skills we need to get there? And then let's look for tools, which today are lining up nicely with Informatica. >> Deepak: Exactly. >> So if you think about the next steps that you're going to take, where does the function of a data architect go within Dell, and what kind of recommendations would you make to those users out there who are thinking about how they want to optimize their skills, their use of their skills, in a data analytics world? >> Yeah, sure. So the data architecture, like I said: previously, in the traditional data warehouse kind of stuff, it was pretty straightforward. But now we are seeing that data architects are getting more and more mature, getting into semi-structured data, and improvising how the data actually gets readjusted in the right manner, so that, you know, we can really... Is it third normal form, or whatever? You can't have data silos anymore. It's like, you need to bring all the data sets together to actually make a meaningful answer-- >> Peter: Well, at least make it possible
As you said, you don't, its not all about putting it all in the data lake, its about making it possible to acquire. >> Exactly. So you have to know where the data is, you have to be able to quarry, you have to build and reformat on the fly, all these other things to servwe your customers, especially in the self-service world. >> Right. >> So, where does this go? How is this going to drive? What recommendation would you give to companies who are looking to accelerate their use of these new technologies and new approaches? >> I would say this to everybody. Adhere to your customers. Adhere to your business. They are the reason it is for your-- like you know what? There's no person in the organization that knows every domain. So you need to be in a way where right off your sneeze off different data domains that you have, and make sure you pull all these resources together to actually contributing to the whole arbitrational white impact. >> So, start with-- be true to your business. Your customers. Focus on finding data. And then focus on bringing the appropriate level of integration. Not putting it all in one place, but so your customers can be matched to the data they need. >> Deepak: Exactly. >> John: Alright, Deepak, final question. Just your thoughts on the show here. Again 3,000 people and growing every year. The new rebranding, Informatica going into the cloud world. Automation, you're seeing CLAIRE, this new AI meets data. What's your thoughts? >> I think this is phenomenal approach that Informatica is taking right now, and I'm glad. Like you said, there's 3,000 people here really interested to know what's going on and how the things are evolving with Informatica. It's a really great show to be here. Thank you. I'm very glad to be part of it. >> Congratulations on your award for the Big Data Ready Challenge, grand prize winner. A million dollars worth of products. I can knock down some of that purchase price, but I'm sure you guys are big customer. (group laughs) >> Thanks for coming on and sharing your insight as a customer of Informatica. >> It's my pleasure. >> It's theCUBE. I'm John Furrier with Peter Burris. More live coverage in San Fransisco. theCUBE at Informatica 2017. We'll be right back after this short break. Stay with us. (upbeat music)

Published Date : May 17 2017


Greg Hanson, Informatica - Informatica World 2017 - #INFA17 - #theCUBE


 

>> Announcer: Live from San Francisco, it's the CUBE. Covering Informatica World 2017. Brought to you by Informatica. >> Hey, welcome back everyone. We are here live in San Francisco for Informatica World 2017. Exclusive CUBE coverage of the event, Informatica World 2017. I'm John Furrier with my co-host, Peter Burris, General Manager, Head of Wikibon Research at Wikibon.com. Our next guest is Greg Hanson, Vice President of EMEA Cloud and DaaS, Data as a Service. Welcome back, good to see you again, CUBE alumni. >> Good to see you, yeah, thank you very much. >> Year two, or year three, of our coverage. >> Exactly. >> So last year, we had a great conversation. I think you laid out pretty much the playbook. A lot's happened; in fact, Brexit happened. But cloud outside of North America is a tricky game, because there's a lot of different countries. We've got the EU, and other parts of the world there. It's really a regional issue, and you're seeing a massive expansion. The cloud guys, we have Amazon, a sponsor here, and Google, now expanded globally. What is the landscape like? Given Brexit, that was a political thing that has ramifications, but also the regional expansion of the cloud players has been pretty significant over the past year, with announcements coming; I can't even keep track of 'em all. How is that impacting your business? >> So it is quite fragmented across EMEA. Our region is EMEA and Latin America as well. It's a huge geographical region, and one that's very different in different countries. For the EU as a whole, cloud is very hot at the moment. There's large adoption. I think we've passed that point of no return, past the tipping point, as you might say. For every enterprise customer I talk to now, it's not if they're going to adopt cloud, it's when. Usually, they're already on a journey that we can help them with. But then in some of the far-flung regions, where the maturity of cloud is less so, where the presence of Amazon or Microsoft, or even ourselves, is limited, like Russia, for example, or the Middle East, there's not that same kind of infrastructure. So the desire and the demand for cloud in those regions is less. But across the large majority of our geographical region, cloud is a huge topic for every single customer. >> What's the state of the art right now in your territory with cloud? Obviously, from an Informatica perspective, you have a view, but also in cloud adoption: hybrid, pure public cloud, there's a use case for that, a lot of on-premise with hybrid. What's the key state of the art right now for Informatica and the cloud players? >> I think there's fabulous opportunity for Informatica. It really is a hot topic. There's two ways that we can deal with that. I mean, there's the enterprise space, which Informatica has been ruling for 20 years now, but cloud gives us a huge opportunity to go into new market sectors as well, that we've really not been in before. Mid-market opportunities. You no doubt see a lot of the partners around the event here that we've got, that allow us to address customers that we simply weren't addressing before. We had an enterprise sales force. If you think about those mid-market organizations, they're the organizations that are really going to drive cloud adoption as well. In countries like Italy and Germany, you very quickly get down to small and medium sized enterprises. Cloud is huge in those organizations, in those countries.
There's a great opportunity for us to go after the mid-market sector as well as the enterprise. >> But increasingly in the digital business, we were talking about this earlier in one of your segments, in the digital business you have greater distribution of data, greater distribution of function, and almost inevitably, the ecosystem is going to be comprised of big enterprises but also mid-market companies. They're going to have to work together. >> Greg: That's true. >> So it's not looking at the enterprise and the mid-market in isolation. Increasingly, the enterprise is going to be acknowledged as a way of extending your influence into a lot of different customers or a lot of different domains, both through partnerships as well as your customers. How is Informatica going to facilitate that kind of new approach to thinking about business as a network of resources? >> One of the great things about the cloud infrastructure itself: if we reel back and think about 10 years ago, when all our products were on-prem, it was very difficult for us to understand what our customers were doing with our products. We had to go and talk to them, speak to them on the phone, visit them, to understand what their use cases were. Now, in cloud, that world has changed. Because if you think about it, one of the things Informatica is well-known for is metadata. So operational metadata, technical metadata. We can actually see what our customers are doing with our products. We can understand the use cases. That becomes crowdsourcing in terms of how you can replicate, how you can industrialize, how you can reuse a lot of that type of integration, which is enabling us to create new wizards, new accelerators, which are common across the marketplaces and use cases. So really a phenomenal change over the last two years, which has been brought on by that ramp of cloud adoption that we've seen globally, to be perfectly frank. >> Okay, take a minute, Greg, to talk about this DaaS. I think of DaaS, I think of, like, a cellular distributed antenna system, but it's an acronym, it's Data as a Service. >> Greg: Data as a Service, yeah. >> Peter: But what does it really mean? >> Take a minute to just break that down. What does that mean to the customer? What's the product? What's the offering? >> Greg: Okay. >> It's important; obviously, data is the key, and people want it as a service. So take a minute to just explain what that means, and the impact. >> Yeah, it's important to understand what Informatica means by Data as a Service, I think. Our Data as a Service product line is pretty much concentrated and focused on increasing the quality of data. So high performance, quality of data. If you think about digital transformation as the topic being talked about in rooms and corridors all around this event here this week, fundamentally, data is really the key foundation of digital transformation. But I would say high quality data is key to the success of digital transformation. That's what our DaaS products enable us to do. So if you think about-- >> Peter: How does the customer engage with DaaS? (faint statement) >> So the typical use case is address verification, and we have products that support multiple different countries and regions, more than 240 countries. So if you want to get high quality data on your customers, which everyone ultimately wants to do these days to effectively cross-sell and upsell, we can provide a global facility to do that.
But while you can fix data in a batch orientation, what's much more effective is actually plugging into the applications, so it becomes seamless to an end user. So they're using Salesforce.com or they're using another application, and it's embedded into their application, so it runs in the background. When they enter a poor address, for example, it will correct it, and it will validate email addresses and phone verifications. We've got a customer in Germany, just as an example, 1&1, which is an Internet service provider in Germany. They've got 7.7 million customers. One of their biggest problems was inaccuracy of data. It prevented them onboarding the customer, first and foremost, and then it prevented them billing, which is a pretty serious problem for an organization. >> Peter: Yeah, I'm moving to Germany. (laughs) >> So by implementing the DaaS products, what that enabled them to do is make sure that when they enter data into a system, it is high quality and correct at the point of entry, which, by the way, is seven times cheaper than trying to fix it downstream. So it's an important product set for us to support high quality data for that digital transformation journey. >> So you're, sorry John, you're not buying and selling your customers' data. What you're using-- >> No. >> This is a service to enhance the quality. >> Greg: Exactly. >> Of your data. >> It will fix data, and it will also enrich data that they've already got. >> That's an important distinction, John, because a lot of people talk about Data as a Service and say, "Oh yeah, I'm going to monetize my data by giving it to the marketplace." We all know that if you give that data to a good data scientist, they're going to reverse engineer your customers pretty quick. >> Exactly. >> That's what people are worried about, the privacy. So back to the drivers for your business. What are the drivers for your business in EMEA? >> Yeah, certainly cloud adoption, which we already talked about, is a huge growth market for us in EMEA. But there's other things happening locally in the EMEA marketplace: GDPR, the General Data Protection Regulation, that is coming up. That is a hot topic on the lips of all of our customers right now. Let me take a minute to describe what that means, for people who maybe are not familiar with it. It's generally an EU thing, but it affects every organization that wants to sell into the EU. It came on the back of the Google Right To Be Forgotten ruling, where really what we've got to do is provide a framework where a customer can say to an organization, I want you to forget me. Obviously, they then need a central library, so they'll be able to manage it from a single point. That is an extremely complex thing for an organization to do, particularly an enterprise organization. >> John: Forensics is what it is. >> Exactly. If you think about how to approach that, I think Informatica is in a unique position to help organizations deal with that type of issue. Because, and I know it was one of the announcements today, I think Ronen, who was on before me, was talking about CLAIRE, the clairvoyance, and our artificial intelligence, but it's all about that unification of metadata. That's a great example of a good use case of where that can be deployed. 'Cause if you think of the fragmentation of data that we've got across many clouds, on-premise, how do you even understand where all your customer data is? That's what the unified metadata can provide.
It can go out, collect all the metadata from all these different vendors, index it, catalog it for you. We've been in business 20 years. We know what customer data looks like. We know what product data looks like. We can categorize it and index it for you. Then you can search it. So you can identify where your risk is, where your customer data is at risk, and you can do something about it. Now, the most recent acquisition that we made last year, Diaku, was a missing piece for me in terms of how we expose that to business users to actually engage in the governance process. The new Diaku acquisition, with Axon, really fills that gap for us. I think we've got a really good stack to help customers. >> You've got product chops, we've talked about that in the past. The new brand is out there. You're seeing some branding, brand value. Good for the partners, good for business. So with that, I'll ask you my final question, which is, what's different from last year? A lot of change in 12 months. Just in a short 12 months, certainly on the product side, we saw some awesomeness from the products. Always had good product folks at Informatica World, which is why I love doing this conference. But the brand challenges were there. What is Informatica? So what's different now from last year? The big highlights. >> For me personally, and I've been here at Informatica quite a long time, I think it's quite refreshing. We had quite a lot of change in terms of our C-level at Informatica. It's really breathed new life into the organization, from my own personal perspective. There's a huge refocus and a drive on our fantastic new product sets that we're releasing here today. Internally, in the organization, there is a big motivation. There is a new kind of culture, a new resurgence almost, in terms of where we feel we're going to be in the next five years. 'Cause we're looking at the product portfolio. We're looking at the outlook in terms of our growth, and our strategy. It's a great place to be right now. Sales, it always helps when you get good sales and everything. I'm sure you've seen the figures, et cetera, that we've been doing. But I can't see that changing. (fast crosstalk)
It's an exciting place to be right now. >> So it's good for the management to be focused on not that window every 90 days. But it's really 60 days, when you got 30 days to prep for the earnings call. But focusing on real product innovation, Micheal Dell did at Dell Technologies, now EMC. Lot of great stuff. Greg, thanks for coming back on the CUBE and sharing your insights. >> Nice, great to be here. >> When we're in EMEA, we're going to come by and say hello. >> Absolutely. >> Certainly, we'll keep in touch as we expand the CUBE out to in Europe. >> Look forward to it. >> Thanks so much. It's the CUBE, live coverage. I'm John Furrier with the CUBE with Peter Burris, Wikibon. We have got more live coverage here in San Francisco at Informatica 2017, after this short break. Stay with us. (enlightening tune)

Published Date : May 17 2017


Ronen Schwartz | Informatica World 2017


 

(upbeat electronic music) >> Announcer: Live from San Francisco, it's theCUBE, covering Informatica World 2017, brought to you by Informatica. >> Hey, welcome back, everyone. We're here live in San Francisco for a special presentation, exclusive coverage from theCUBE here at Informatica World 2017, our third year of covering the transformation of Informatica. And they need a bigger boat. Things are getting bigger, more data, a tsunami's coming. I'm John Furrier with theCUBE, with my co-host, Peter Burris, General Manager of Wikibon Research at wikibon.com. Check out all their great research on cloud, infrastructure, and big data; a lot of great stuff, certainly around IoT. Our next guest is CUBE alumni Ronen Schwartz, fourth time on theCUBE, getting up there. Amit Walia is at seven. He's your boss. >> Yeah, I'll let him win for the time being at least, yes, but I'm looking forward to seeing you at re:Invent, and in New York later this year. >> We love (mumbles). Thanks for coming on. We really want to get down and dirty real quick. Cloud obviously is the hottest thing. You guys kind of made a good strategic bet a few years ago, kind of being multicloud. That's the buzzword now. That seems to be the positioning for most folks as they start going hybrid. It's a gateway to multicloud. Still a lot of work to be done, certainly in certain areas, latency and other things that are going to be worked on, but as an evolution, it's certainly the vector. And I want to get your thoughts, because now data is the most valuable commodity and precious resource. It's the heartbeat. >> Yes. So I think, first of all, the world has moved from a world of cloud to a world of clouds. And this is actually very, very important, because if you look into the Informatica bet on the cloud, made more than 10 years ago, to know that the cloud, and that Salesforce, was going to lead this revolution was brilliant. But we have made a few early bets that are also very, very significant. I, for example, was a speaker at the first re:Invent from AWS. That is how far we went to actually adopt the trend of building a full platform as a service in the cloud. And we have been betting very early on Microsoft Azure, and now on Google, and many of the other vendors there that are kind of leading the way into the new generation of analytics, into the things that are possible to do with data. So I think, as an early bet, it was very, very smart, followed by a few other early recognitions of the trends and the possibilities, all of them allowing you to bring data into the center. And to add just one last anecdote on that: I think in the world of clouds, when your applications are residing in different clouds and so on, data is almost the only center of gravity that you have. I'm really happy that we are in this place to support these customers, keeping this asset and bringing it into its full value. >> Well, that's a great comment. I want to get to the Google Cloud announcement. You guys are mentioned in there. Spanner is now globally available. It was the hot thing at Google Next, the cloud conference, actually here in San Francisco. But I want you to take a minute, Ronen, for the audience, to just simplify Informatica's strategy vis-a-vis the cloud. How do you guys interact with the cloud? Just simply lay out the relationship of Informatica and your technology and offering vis-a-vis the cloud, because a customer may say, I've got Amazon for this. I'm kicking the tires on Google. I use Azure for this.
So you're starting to see some swim lanes with respect to early deployments, but how do you guys interface with the different clouds? >> I think, in general, we should divide the term cloud into at least two or three groups. I'll use three for this analysis. First, I think we should look into cloud applications, or SaaS as they're called. These are vendors that depend on data coming to them from on-premise and from other clouds to really give users the ability to work within the application. Then the second group is really the platform as a service. These are vendors that are supporting your ability to move your processes, your execution, your data storage, and really your full operation, your full IT operation, into the cloud. And then I think the third group is those vendors that deliver only the infrastructure as a service. If you look into all three groups, and I include the analytics vendors in the first group, in the application or SaaS group, when you look into all of these groups, they all depend on data. Data is the lifeline of the application. It is also the lifeline of the platform. It's a key thing that every platform needs. Informatica wants to play a key role in actually empowering the data to be part of all of these clouds in an efficient and effective way. To do that, from a product strategy, what we're doing is delivering a broad, best-of-breed set of integration and data management products that are all supporting this move to the cloud, the move of data across clouds, data quality, master data management, and building this center of gravity of data. All of these products are built as part of a single platform, and that single platform has four layers of capabilities. The first and most fundamental one is connectivity. You have to be able to connect to all of these clouds as well as the on-premise applications. The second layer is basically the layer of execution. You have to be able to process things in the right way, leveraging the open source technologies or chosen technologies. The third layer is the management and monitoring that you have to do; especially when you work in a distributed environment, it's a different level of the problem. The fourth one, which we are focusing on in a big way at this event, is really the layer of unified enterprise metadata and, as we just announced, the artificial intelligence that this can bring. We now call it CLAIRE. We had two terms in mind. One is clairvoyant, the ability to predict what needs to be done by the integration and data management. And the second one, CLAIRE, very nicely has AI in the middle. And we really believe that AI and machine learning based on metadata can bring a lot of intelligence into the work of data. At this event, we're sharing a lot of the stuff that we've been doing in the last three years to empower that, and there is a lot coming in this area. >> So very quickly, you said that it can improve the work of data. >> Ronen: Yes. >> One of the things we would perhaps like a little bit more clarity on overall is, you're suggesting that there is a next generation, or a new way, of thinking about data management. What is the new data management? Because clearly, it's not just building a database and administering it. What is the next generation of data management? >> So I think that there are actually four big changes that are happening that are all impacting data management.
I think change number one is the fact that applications and data sources have been shifting from on-premise, inside your data center, to a very distributed environment. The second change is that there is a need for additional patterns of integration and data management. It's not just about batch. It's not just about real time. Streaming, IoT, they're bringing a new set of requirements to the field. The third one is that, basically, the integration can reside or run in your control: in a self-service mode, an embedded mode, or in the cloud itself. So beyond the endpoint, it can run in different places; wherever the application can run, the integration can do that. But the biggest change of all is the addition of users. There are more users that think they have the right to get data. They depend on data for their daily work. They don't just execute. They execute based on data. And these changes are shifting, or shaping, the data management world. I can double-click on each one of these changes, but I want to double-click specifically on the change in the users. The minute that you have very demanding new users that need data, and they are not data experts, they are not practitioners, you actually have to work really hard at making it really, really simple for them to get access to the data; to get not just access to data, but access to the right data; and actually to get very basic things happening to the data without them investing heavy time in doing that. So one of the products that we showed on the main stage is our enterprise information catalog. We've all gotten used to the Google experience. You can search the Spanner release that you mentioned earlier and my name, and you'll find us both together. We are doing that for any data in the enterprise. So a naive user can go into a search interface and just type what he's looking for. He's typing, for example, the word customer. He's not just getting, as a return, all the databases that have customer inside the field definition or something like that. He's getting all the datasets that fit that domain. How do we figure out the domain? That's where machine learning and AI is making a big difference. You can actually scan massive amounts of data. You can calculate against it. You can go into vocabularies and things like that and figure out the domain, so that when I'm searching for customer, I'm getting everything related to the customer domain. A more naive user, less familiar with the data, is able to get the data that he wants. The second example of AI that I have to share is that even once you chose this data, the minute that you pick it up, we are giving you an Amazon experience, telling you there are actually four other options that you might want to consider. This is a more robust option. This is a more curated option. These are two options that are very popular. And we really try to help you make the intelligent pick of the right data. >> But through the metadata, you know that there is some semantic consistency across the different options. >> That is correct. And the metadata that we collect is showing the consistency from the technical perspective, but we're also collecting metadata about the users that are using the data, and collecting metadata about the operations: how easy, how effective it is to access the data. And we put these four segments of metadata all together to really give the naive user the best experience.
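To make the domain idea concrete: a catalog search like the one Ronen describes does not match the word customer against column names; it tags datasets with inferred domains and then ranks results using usage and quality metadata. The sketch below is a toy illustration of that two-step flow. The patterns, the 80% threshold, and the catalog entries are all made up for the example; the real inference is far richer, combining value scanning, vocabularies, and machine learning.

```python
import re

# Toy stand-ins for the vocabularies a domain-inference engine might use.
DOMAIN_PATTERNS = {
    "email": re.compile(r"^[^@\s]+@[^@\s]+\.[A-Za-z]{2,}$"),
    "phone": re.compile(r"^\+?[\d\s\-()]{7,}$"),
}

def infer_domain(sample_values):
    """Tag a column with a domain if most sampled values match its pattern."""
    for domain, pattern in DOMAIN_PATTERNS.items():
        hits = sum(1 for v in sample_values if pattern.match(v))
        if hits >= 0.8 * len(sample_values):   # arbitrary 80% threshold
            return domain
    return None

def search_catalog(term, catalog):
    """Return datasets whose inferred domains match the term, ranked by
    usage so popular, curated sets float to the top (the Amazon-style
    'you might also consider' experience)."""
    matches = [d for d in catalog if term in d["domains"]]
    return sorted(matches, key=lambda d: d["usage_count"], reverse=True)

catalog = [
    {"name": "crm.contacts", "domains": {"email", "phone"}, "usage_count": 412},
    {"name": "staging.leads_raw", "domains": {"email"}, "usage_count": 9},
]
print(infer_domain(["a@b.com", "c@d.org", "e@f.net"]))   # -> "email"
print(search_catalog("email", catalog))                  # crm.contacts first
```

The ranking is the key trick: usage and operational metadata let the naive user land on the dataset the experts already trust.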
The experts almost know where to get the data, but if you want to expand the number of users, you really have to automate all of that. >> Let me build on this. So if we think about modern data management, let me see if I can summarize, we're thinking about a couple of things. First off, a digital business is dependent and predicated on the availability of high quality data assets; that's the difference between a digital business and a non-digital business. >> Ronen: That's correct. >> We have to be able to inventory our data assets through metadata. We have to be able to very, very quickly know and understand how they map to different forms and formats. And we have to be able to understand paths and movement of data through the enterprise. Now talk a little bit about the data movement side, because in a digital business, there's a lot of things we can predict and we'll be wrong about most of them. But one thing we can predict is more data and more distributed, which says a lot about the increasing importance of intelligent data movement, not just middleware (mumbles) as it used to be, where you're wiring the connections, but intelligent data movement. Can you talk a bit about that? >> Absolutely, and I think it's a very, very deep observation that you're raising, one that I don't think most of the audience and most of the customers have already gone through. I think to many customers, the move to the cloud, for example, seems like everything is going to be shifting from one place to the other. You're actually spot on. The true long-term direction is in the multiclouds and in the distribution of data across multiple places. The decision that you as an organization have to pay attention to is, am I going to work in multiple silos, or am I actually going to work in a distributed, but integrated and intelligent, environment. We are definitely pushing very, very hard to enable the second one, an intelligent, integrated environment. There are parts in the discovery that are very, very important to do that, but just like you mentioned, there are other parts in the data movement that are just as important. And to do that effectively, it's not enough, as you were mentioning, to just move the data in batch mode and so on. You have to really stream the data in certain places so that it's available in real time in two places. In some other places, when you move the data, you're actually running into the limits of the amount of data that you can move through the pipe, so you have to-- >> Peter: Then we'd have latency. >> Exactly, so you have to compress it, move it in the right batches so you are reaching this level of accuracy. And most important is you have to do it intelligently. If you just move all the data to seven different places so that you have it seven times, this is not a good strategy. So you want to subset it. You want to sort it. You want to get just the important data to the right place at the right time for one scenario. For another scenario, you want to replicate the full data. That's why I mentioned that inside the intelligent data platform, it's really important to support a variety of integration patterns. And that's what we are doing, and we're doing it better and better.
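Ronen's point about matching the integration pattern to the scenario, stream when latency matters, subset and compress when the pipe is the constraint, replicate when a target needs the full set, can be sketched as a simple policy function. The thresholds and pattern names below are invented for illustration; a real data management platform would drive this decision from collected metadata and cost models rather than constants.

```python
# Illustrative policy for choosing a data-movement pattern. Thresholds and
# pattern names are hypothetical; a real platform would decide from
# collected metadata (volumes, SLAs, link capacity) rather than constants.

def choose_pattern(needs_realtime, gigabytes_per_day, link_gb_per_day,
                   target_needs_full_copy):
    if needs_realtime:
        return "stream"                      # e.g. CDC / message-based
    if gigabytes_per_day > link_gb_per_day:
        return "compress_and_batch_subset"   # pipe-limited: sort, subset
    if target_needs_full_copy:
        return "replicate"                   # full copy for this scenario
    return "batch"                           # plain scheduled batch

print(choose_pattern(False, 5000, 800, False))  # -> compress_and_batch_subset
print(choose_pattern(True, 50, 800, False))     # -> stream
```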
>> So the Google announcement that was announced today, the public availability of Spanner globally, you guys are mentioned, congratulations, it says here, just want to get your thoughts on this because in preparation for general availability, working closely with their partners, you were mentioned as one of them. And it says, "Now that these partners are in early stages of cloud Spanner lift and shift," they're passing on their insight. So first, before I get to the lift and shift, which I think just means rip and replace but in a different way, but that's neither here nor there for now, what are some of the things you did with Google early on prior to preparation as an integration partner with Google? Because Spanner is a wonderful product. It's horizontally scalable, which is the ethos of DevOps. This is a core tenet. So most people go, "Oh, vertically integrated because of this cloud." You're talking about a new dynamic that is a DevOps ethos, horizontally scalable with data. That's what Spanner does. What are some of the insights you can share with us on the pre general availability of Spanner? >> I think the Google engineering team has done a wonderful job building it from the ground up. What you're saying is the dream of every DevOps operation of databases. And I think, indeed, we're seeing it in this industry now; the level of innovation that exists right now is second to none. The database industry has been innovating forever, but what we're seeing now is actually-- >> Fast change. >> Ronen: Yes. >> Massive shifts. >> And just like you're saying, this was the dream, and here is the dream actually coming true. >> John: The waves are coming in. Just get on your surfboard, ride the waves. >> Exactly. Exactly. We in Informatica believe in that. Even though I'm a product leader, one of the groups in my team is actually responsible for the strategic relationships with ISVs. And the reason we do that is because we believe that the work we are doing with Salesforce, with AWS, with Microsoft Azure, with Google, and with a few other vendors needs a long-term strategic view. So we needed to know about Spanner way ahead of the release, way ahead of the beta. So at the time that they are releasing, we are actually ready to support the customers doing that. >> What does it mean to be data ready in terms of integration? You're an integration partner as part of the general availability. What's going on with Spanner? Give us some insight. >> So what it basically means is that we not only have the connectivity to the new database, but we actually have the right set of optimizations, which are very different and unique when it comes to a distributed environment like that. So we are investing not just in the connectivity that allows our customers to move the data, but in optimizing it so that we can support what you were alluding to earlier, which is: can you do it in real time, can you do it faster, can you do it with larger batches, et cetera. So that actually means that Informatica is optimizing the data movement into this environment, optimizing the enterprise level of delivery of integration into this environment. >> Ronen, you're the senior vice president in charge of the whole cloud thing. Congratulations, a good strategy. You've got Amazon Web Services. And looking at their stock price since really 2010, it's been pretty much a hockey stick. That's kind of the demarcation point.
2008, the financial crisis, housing crisis, but 2010, it really kind of changed. I want to get your thoughts on the difference between Informatica now and last year. Obviously, 2010 is really kind of when the wave started, but in the past 12 months, a lot's happened with Informatica. What should people know about what's going on now this year, at this show right now, that's different than a year ago? >> I think these are really, really exciting times. And if you ask me what has changed in 12 months, the list is very, very long. I think I may have shown a slide there; I think we had to tone down the amount of releases, exactly. But I do want to mention a few things that are very, very significant that are different. I think intelligence is now available not just in a few of our products but actually as a platform capability. Our metadata layer has reached a level of maturity where it's a mandatory thing for every customer of ours in every project. And beyond that, it seems like the growth in the adoption of both our cloud products and our big data technology is skyrocketing. My best example is in our sales kickoff in January, I was really, really proud to present a slide that showed that every month, we move about half a billion records to the cloud. I was really proud of that, and as I come to this event, I'm presenting the number one trillion. It's less than six months, and the amount of data that goes through the platform has doubled. If I look into our big data revenues, they are tripling year over year. >> John: So the adoption uptake is huge. >> Yes, yes, the adoption is huge. It actually grows into new use cases, new examples. And at this event, you actually see tens of these customers, there's about 80 customers here, actually presenting new use cases. And these new use cases are fabulous. People are doing real time and streaming, and they are doing TV integrated to the web, and IoT examples, and cloud adoption. It really is a very exciting year. >> But data's not the new oil or the new gold. It's the plantation. It's the soil. It's the rich soil that lets things bloom. >> A lot of good things are growing out of data. Yes, I agree, I think there are so many analogies to data. It's really hard to pick the right one. >> The waves are coming. It's a data ocean, not a data lake. I said that years ago, and that prediction is coming true. >> Look, at the end of the day, it's just an asset and businesses have to use it differently. That's what we're talking about here. >> And your research, by the way, just to plug Wikibon, is really phenomenal. You pegged this, and again, it's one of those points where Wikibon makes a bet and it comes true: data is an asset and needs to be looked at that way and valued as an asset, not as an accounting mechanism. >> Peter: That's right. >> It'll be a strategic asset. And as you said, it's horizontal. It's going to be fertilized throughout the organization. I think that to me is just the beginning. So I think you guys are on a good strategy. Congratulations. >> Thank you. And if I may plug a last question here on the topic: if you're managing assets, I'm assuming the CFO knows exactly where every asset is. It's in the balance sheet. He knows the list of bank accounts and so on. Our customers need more; they really need to collect their metadata and actually build this enterprise information catalog so they will know where the data assets are. This is a mandatory thing for any organization. >> John: And it's coming down the pipe pretty fast. >> Yes.
>> So Pete, your research is right on target. Go to wikibon.com and check out their latest research on valuing the data. Any plugs for the research? >> Peter: No, it's brilliant. (John laughs) >> Of course you say that. >> Amazing. >> Ronen, great to see you. Thanks for coming back on. Hey, congratulations. You got a spring in your step, and you got a lot of cloud action going on, hybrid cloud. Congratulations. >> Thank you very, very much. A pleasure to be here. >> Okay, we are here live in San Francisco for exclusive CUBE coverage of Informatica World 2017. I'm John Furrier, Peter Burris. Stay with us for more live coverage after this short break. (upbeat electronic music)

Published Date : May 17 2017

SUMMARY :

brought to you by Informatica. of covering the transformation of Informatica. but I'm looking forward to seeing you data is the most valuable commodity and precious resource. data is almost the only center of gravity that you have. Just simply lay out the relationship that Informatica One is clairvoyant, the ability to predict what needs the work of data. What is the next generation of data management? the access to the data, to get not just the access to data semantic consistency across the different options. And the metadata that we collect is showing the consistency that is dependent and predicated on the availability a lot about the increasing importance of intelligent data and in the distribution of data across multiple places. to seven different places so that you have it seven times, What are some of the insights you can share with us that exists right now is parallel to none. and here is the dream actually coming true. ride the waves. And the reason we do that is because we believe as part of the general availability. not only have the connectivity to the new database in charge of the whole cloud thing. And beyond that, it seems like the growth And in this event, you actually see It's the rich soil that lets things bloom. It's really hard to pick the right one. I said that years agon and coming true on that prediction. Look, at the end of the day, it's just an asset data is an asset and needs to be looked at that way I think that to me is just the beginning. And if I may plug a last question here on the topic, Any plugs for the research? Peter: No, it's brilliant. and you got a lot of cloud action going on, hybrid cloud. A pleasure to be here. Okay, we are here live in San Francisco

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
John | PERSON | 0.99+
Ronen Schwartz | PERSON | 0.99+
Ronen | PERSON | 0.99+
Peter Burris | PERSON | 0.99+
Peter | PERSON | 0.99+
Amit Walia | PERSON | 0.99+
John Furrier | PERSON | 0.99+
Informatica | ORGANIZATION | 0.99+
Google | ORGANIZATION | 0.99+
San Francisco | LOCATION | 0.99+
two places | QUANTITY | 0.99+
Amazon Web Services | ORGANIZATION | 0.99+
January | DATE | 0.99+
two terms | QUANTITY | 0.99+
Amazon | ORGANIZATION | 0.99+
Pete | PERSON | 0.99+
2010 | DATE | 0.99+
two options | QUANTITY | 0.99+
tens | QUANTITY | 0.99+
less than six months | QUANTITY | 0.99+
AWS | ORGANIZATION | 0.99+
first group | QUANTITY | 0.99+
three groups | QUANTITY | 0.99+
second group | QUANTITY | 0.99+
last year | DATE | 0.99+
seven | QUANTITY | 0.99+
Wikibon | ORGANIZATION | 0.99+
2008 | DATE | 0.99+
third one | QUANTITY | 0.99+
three | QUANTITY | 0.99+
Microsoft | ORGANIZATION | 0.99+
seven times | QUANTITY | 0.99+
New York | LOCATION | 0.99+
third layer | QUANTITY | 0.99+
CLAIRE | PERSON | 0.99+
first | QUANTITY | 0.99+
second layer | QUANTITY | 0.99+
Salesforce | ORGANIZATION | 0.99+
third group | QUANTITY | 0.99+
One | QUANTITY | 0.99+
fourth time | QUANTITY | 0.99+
one | QUANTITY | 0.99+
CUBE | ORGANIZATION | 0.99+
Wikibon Research | ORGANIZATION | 0.99+
second change | QUANTITY | 0.98+
12 months | QUANTITY | 0.98+
this year | DATE | 0.98+
Informatica World 2017 | EVENT | 0.98+
third year | QUANTITY | 0.98+
about 80 customers | QUANTITY | 0.98+
seven different places | QUANTITY | 0.98+
both | QUANTITY | 0.98+
First | QUANTITY | 0.97+
one scenario | QUANTITY | 0.97+
today | DATE | 0.97+
two | QUANTITY | 0.97+
second one | QUANTITY | 0.97+
single platform | QUANTITY | 0.97+
fourth one | QUANTITY | 0.97+

Amit Zavery Oracle, Oracle OpenWorld - #oow16 - #theCUBE


 

>> Narrator: Live from San Francisco, it's TheCUBE! Covering Oracle OpenWorld 2016. Brought to you by Oracle. Now, here's your hosts John Furrier and Peter Burris. >> Okay welcome back everyone, we are here live in San Francisco for Oracle OpenWorld 2016. This is SiliconANGLE Media's TheCUBE. It's our flagship program, we go out to the events and extract the signal from the noise. I'm John Furrier, the co-CEO of SiliconANGLE Media. My co-host, Peter Burris, head of research for SiliconANGLE Media as well as the General Manager of Wikibon Research. Our next guest, CUBE alumni Amit Zavery, Senior Vice President and General Manager of Oracle Cloud Platform, heavily involved in the platform as a service, where all the action is, as well as the cloud platform. Amit, great to see you, welcome back. >> Yeah, thank you. It's always a pleasure to be here. >> A lot of buzz, we're on day three of wall-to-wall live coverage, we got you at the end, so we have the luxury of, one, getting your opinion, but also looking back at the show. So first, as day three kicks in, the big party today with Sting and the concert, but then the workshops tomorrow, it pretty much ends tonight for the most part. What's your takeaway from the show? What's your vibe this year, what are you seeing, what's popped up at you at the show this year? >> A lot of things. I think one thing is there's a lot of maturity in adoption of the cloud, right, so a lot of the customers I speak to nowadays are talking about their next, broader implementations, adding more and more capabilities into their services, no more just trying things out; a lot of production workloads have been moving to the cloud. So very, very interesting conversations. And also I'm seeing a lot of different kinds of customers. Typically, of course, Oracle being a very large enterprise software player, large companies used to be the folks I would meet on a regular basis. Over the last couple of years, I'm noticing a lot of smaller companies, where a lot of times I have to look up who they are, but they are doing very interesting projects, and they are coming here and talking to us. So I'm also seeing a very different audience I'm speaking to than I used to before. >> We had Dave Donatelli on earlier, Executive Vice President, one of the main leaders now on the go-to-market for cloud and all the converged infrastructure, and it's very clear from his standpoint, he's overtly saying it, putting the stake in the ground that if you run Oracle on Oracle hardware, it will be unequivocally the fastest, and it's an unfair advantage, and they've got to lap the field is what he said. Okay. Same for Platform as a Service, you guys have some success now under your belt this year from last. >> Yes. >> But it's not clear that Oracle has won the developers' hearts and minds in the enterprise, and winning the developers in general, Amazon has had that up their sleeve right now, but yet, you guys have a ton of open source stuff. So Platform as a Service is really going to come down to integration and developers. >> Yes. >> What's your strategy there, and how do you see that playing out? Because you need to fortify that middleware, that integration, that's APIs, that's a developer-centric DevOps way. What's your strategy? >> A lot of things. The one thing which you see from our Platform as a Service is we have been, from day one, building it on Open Standards technology, with end-to-end, very broad and deep functionality.
And we have a lot of history building out a platform, and we have thousands and thousands of customers who have successfully deployed that. So our strategy going forward, and what you see in some of the recent announcements, as well as some of the use cases you might have heard from our customers, is around customers who really have a broader platform requirement: not just trying to build an application, but being able to integrate it, being able to take the data from it and do analysis with it in real time as well as batch, being able to publish that information whenever they choose to, and then work that in conjunction with any infrastructure as well as with any services, Software as a Service, systems applications. So this plays very well with our Software as a Service customers, who need a platform extension; those are the developers we go after. We also go to developers who are building brand new applications and need a platform which is Open Standards-based, broad and deep, and well-integrated. So that's really where we are seeing a lot of success. So Amazon no doubt has success on the infrastructure side, where customers in essence use one cloud source to move off hardware, and of course they are trying to move up into the platform space as well, but they have a very limited set of things which they offer there. We've been doing this space for many, many years, and now we are able to provide a similar kind of breadth to what we have on prem, in the cloud, cloud-natively built, with the right kind of APIs, the right kind of interfaces, but slightly higher-level services, so it's not so granular that you have to go and compose 15 different services together to build your application. I provide you as a customer a very easy to use and much more solution-centric platform. And that's really the differentiation we have there. >> So fewer moving parts on the composability, if you will. >> No, we can go as granular as you want, but we have also composed it in a way which is like, okay, if we are looking at integration, for example, what are the different kinds of patterns you need in integration? You might want to be able to do it to a file, you might do B2B, EDI-based integration, you might do it through a process, you might do it through messaging, you might do data integration. So we provide you an integration cloud, versus saying here are 20 services, go at it, and then tomorrow you still might not understand what to use and what not to use. Customers can consume what they like in the integration cloud, and pay as they go for whatever functionality they pick up. But as a developer, I don't have to go waste my time figuring out and learning the different tools, different stuff, and figuring out how to make all these things work together. >> So the palette is becoming more enterprise-friendly. >> Sure. >> And on top of that, you're also providing a set of capabilities in the PaaS platform that kind of replicates the experience developers have enjoyed certainly in the open source and the other world by making it easier to find stuff, to discover stuff, and then exploit and use stuff through the variety of different services. So as you look forward, how are developers going to change the way they spend their time? Moving from code, moving from composition. As we move forward, where will developers be spending more of their time?
>> I think that over time, they should be spending their time just writing the code, or extending the application they have. They shouldn't have to worry about DevOps; they shouldn't have to worry about all the underlying technologies required to build an application. They shouldn't have to worry about all the testing and the QA, which should all be part of the development life cycle, which we provide automated in the functionality. So developers should worry about what language they want to use, what platform they want, what kind of framework they like, who they're trying to cater to, and what the user interface should be. And beyond that, all other things should be provided by the platform in terms of automation, in terms of simplicity: backup, recovery, patching, upgrade, all that stuff should be automated as part of the platform. And that's a service we provide as part of our platform as well, so that developers can focus on writing that application. And we make sure that we give you the choice, where you can pick the languages you want, you can pick the standards you want, open source and all the different things you might want to pick from, or something we have provided as well. But we give you that choice, it's not one or the other, and tomorrow if you want to move somewhere else, we'll make sure you can do that, because we are not locked into one way of doing things. >> So I know Oracle is historically very focused on professional development, but business people, well, development is starting to happen elsewhere in the organization, >> Amit: Yes. >> not just in the professional developer community. So what used to be like building a spreadsheet now has implications for some of the core digital assets that the business might run. How do you anticipate the definition of the developer evolving? The role of the developer, being able to provide these services to folks who historically might not have been developers, have them also be relevant, and at the same time collaborate with those pros. >> No, that's a very interesting point you raise, because more and more there's this idea of the citizen developer, the no code developer, the low code developer, whatever term you want to use in the industry. Many, many of them want to be able to do quick and easy web building of their functional requirements and deliver that without having to call somebody in IT to code it for them, or having to learn anything about code. And we have really made sure that in the Platform as a Service we offer, there's a lot of ease of use and quick drag-and-drop kind of tooling. We recently announced a visual code project, which is based on our application builder, a composer kind of service where you can drag and drop and create a very simple, easy to use application without having to write any code. Similarly, on the integration side we do the same thing. We provide recipe-based integration, where if an event happens in one application, I want to move that information to another application. As a developer, I don't have to write any single line of code. We provide the recipe, or you can build your own recipe. I've shown it to my 13-year-old daughter. She was impressed; she did something from Instagram to Twitter by just using this application on a mobile phone. So similar, that's the kind of people we're going after from the line of business and business analysts, who don't want to write code but have a business requirement, and how can I make it easy and simple for them to use.
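The recipe-based integration Amit describes, if an event happens in one application, move that information to another, is essentially a trigger-action pattern. A toy version might look like the sketch below; the event names and actions are made up, and a product-grade engine supplies the connectors, auth, retries, and monitoring.

```python
# Toy trigger-action "recipe" engine, in the spirit of the Instagram-to-
# Twitter example. Event names and handlers are hypothetical; a real
# integration cloud supplies the connectors, auth, and delivery guarantees.

class RecipeEngine:
    def __init__(self):
        self._recipes = {}          # event name -> list of actions

    def when(self, event_name, action):
        """Register 'on <event>, do <action>' without writing glue code."""
        self._recipes.setdefault(event_name, []).append(action)

    def emit(self, event_name, payload):
        for action in self._recipes.get(event_name, []):
            action(payload)

engine = RecipeEngine()
engine.when("instagram.new_photo",
            lambda p: print(f"tweet: new photo '{p['caption']}'"))
engine.emit("instagram.new_photo", {"caption": "sunset"})
# -> tweet: new photo 'sunset'
```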
So we're doing a lot of that work as well, and that's a very important part of our development community. >> Amit, talk about the competition, I mean obviously Amazon web services is clearly up there. We're kind of thinking that it's more of a red herring the way it's talked about, because you certainly have the fundamentals with the install base, and you guys haven't really started moving your install base over yet. When that comes, I'm sure that Wall Street's going to love that, but you have some time, some building blocks are being built out, but how do you guys have that conversation with customers, with AWS and Microsoft specifically, or even Google? How do you guys differentiate, and where will you differentiate in the PaaS layer going forward? >> I think many things. One thing is of course our customers want to make sure they can preserve their investment while they move to the cloud. So we want to provide a platform which is hybrid, in a way that they can take some of the information, they can run some of the things on premise, while they transition some of their workloads or move their applications to the cloud very easily, without having to rewrite many of the steps or retest everything. That's something we provide; we've created a lot of tooling around that to make it easy for them to do it. And the differentiation we provide to them is that, one, we will protect your investment. Second, the tools are easy to use, out of the box. And the third thing we do is really make it compatible. We have commercial terms as well, which make it easy for them to take their workloads and move them without having to keep on reinvesting a lot of the cost they put in place. >> One of the things that's not being hyped up at the show, but that's certainly popping out at us, is integration and data sharing. We talked to the marketing cloud folks, we talked to the financial cloud folks, we talked to the retail, hospitality folks. Those once-traditional vertical apps still need big data to be differentiated at the domain level, machine learning and AI, and whether it's an IoT impact or not, same thing, but they also need to have access to data from other databases. >> Sure, sure. >> Retail, I need to know if someone bought something over here, so how do you balance the horizontal play while still maintaining the integrity of the app level? >> Amit: Yeah. >> Seems like the PaaS is the battleground for this, architecturally. >> Yes, yes. No, I think you're right. I mean, if you look at typically every application customer we talk to nowadays, they have many data sources and data targets, systems underneath the covers, very, very heterogeneous. And when we built our platform, we wanted to make sure that it has heterogeneous support. Alright, so I can write from any database. >> John: That's built into the design. >> Into the design, and it's already supported today. I can write from Oracle, DB2, SQL Server, Hadoop, NoSQL, into again similar kinds of back ends. Again Oracle or non-Oracle, we don't really care. We want to be able to support your infrastructure, the way you have invested in it, and be able to move the data. So the application should be agnostic in terms of what you're using underneath the covers. And the platform abstracts that out for you. So we have products and services.
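Heterogeneous movement of the kind Amit sketches, reading from Oracle, DB2, SQL Server, or Hadoop and writing to a similar spread of targets, is what connection-string-driven tooling makes source-agnostic. Below is a minimal pandas/SQLAlchemy illustration; the URLs and table names are placeholders, and real integration products add type mapping, bulk loaders, and restartability.

```python
# Source-agnostic table copy: only the connection URLs change when the
# source or target database changes. URLs and table names are placeholders;
# production tools add chunking strategies, type mapping, and recovery.
import pandas as pd
from sqlalchemy import create_engine

source = create_engine("oracle+oracledb://user:pw@src-host/ORCL")
target = create_engine("postgresql+psycopg2://user:pw@tgt-host/dw")

for chunk in pd.read_sql("SELECT * FROM customers", source, chunksize=10_000):
    chunk.to_sql("customers", target, if_exists="append", index=False)
```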
Today we have an offering in the cloud, something we call big data preparation, which allows you to take data sources from any kind of sensors, spreadsheets, databases, process that, do the data wrangling, prepare that information and write it into a big data lake; it could be running Hadoop, could be running Oracle database or the data warehouse, could be running Amazon if they want to, and we don't really care then. >> So your strategy is to offer services, >> Yes. >> on top of the core functional building blocks, at the same time differentiating on abstracting away component-level complexity. >> Yes. No doubt. Yes. And then we want to make it as simple as possible. There are things which we want to expose; we want to provide APIs for anybody who wants to really play around with things. We want to provide them also low-level capabilities if they want to get into that level, but we also abstract it out because, as you were saying about developers, we don't want them to have to learn everything every time there are new capabilities, so we provide that abstraction. >> Tooling drives a lot of innovation. Do you see certain tools becoming standard, not being abstracted away? Could you comment on that and share some color on what tooling will always be around? >> I think with tooling, what I've noticed over time, and I think it's probably good that it remains the same, is every developer has a favorite tool, and we want to give them the choice to pick their favorite tool. From the tooling perspective, we have to make sure we can support every kind of programmer or developer in terms of how they want to write their code. As long as I can provide the interface to it, an API, or some kind of abstraction, then the developer can go at it. I was a developer and I had my favorite tool, and I still use VI. >> Some will say I'm a VI guy, and the Emacs world will go crazy. >> It's okay! >> John: Did you see that VI got an upgrade after how many years, 35 years. >> Amit: It's still amazing, right, I mean people use it and that's fine. (laughs) >> We may get into the VI Emacs war. Amit, final question, we've got to wrap up here. Thanks so much for finding the time to share the insights. We'd be able to do a whole segment on VI versus other editors. >> Peter: Please. >> The plans going forward, can you share any insight into the priorities, what you're looking at from a product and P and L perspective? Obviously the revenue growth, you want to drive more of that, but what are some of the fundamental priorities for you, any ventures you're doing, where are you investing your development and marketing dollars? >> A few things, right. I think one is, you probably heard some of the things we're doing for helping developers learn how to use the platform, so we're doing a lot of training and code samples, as well as developer-centric content globally. So that is one. The second thing you'll hear about from us is the ability to run our platform both on premise and in the cloud, so the customers can choose where they want to run it, be able to run it on their data center of choice, as well as get the benefit of running in the public cloud, depending on regulation requirements, whatever it is. So you'll see an evolution of that, but all of the platform we have in the public cloud also, we let the customer choose. Flexibility is a big, big important part for a lot of enterprise customers, so they're getting to choose. >> John: You're going to continue to do that. >> 100 percent.
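The big data preparation flow described above, take raw sensor or spreadsheet data, wrangle it, and land it in whichever lake you choose, reduces in miniature to a pandas sketch like the following. The column names, rules, and paths are invented for illustration; the actual service adds ML-assisted cleansing, lineage, and distributed execution.

```python
# Miniature data-preparation step: ingest raw CSV, wrangle, land as Parquet
# in a lake path. Columns and paths are hypothetical; the real service
# layers on ML-assisted cleansing, governance, and scale.
import pandas as pd

raw = pd.read_csv("sensors_raw.csv")              # spreadsheet/sensor dump
raw = raw.dropna(subset=["device_id"])            # drop unusable rows
raw["reading"] = raw["reading"].clip(lower=0)     # basic sanity rule
raw["ingested_at"] = pd.Timestamp.now(tz="UTC")

raw.to_parquet("lake/sensors/readings.parquet")   # could be HDFS/S3/etc.
```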
I think it's very, very important that they should not be tied into one, and they should be able to move away if they choose to, not be locked into one way of doing things. And the third thing we're doing is really bringing together a lot of infrastructure, platform, and Software as a Service offerings, very closely together as an integrated platform cloud, which makes it very easy for customers to consume what they want without having to keep on making it all work together themselves. >> So integrate at will, however they want to compose. >> Yes, so that way at least you'll see a lot of functionality; you heard a lot of this this week, we can't keep up with the amount of announcements we've made, and you'll see-- >> I'll rephrase the question, so first of all great answer, but I was looking for something else. How about next year when we interview you, looking back, what would you view as a successful year for your group? >> I think success for us, and the way I measure it, is continued customer adoption and use case evolution, right. So today we have around 10,000 plus customers. I would expect by next year we'll be growing at a very, very rapid rate, with another four or five thousand customers doing interesting use cases and going live with it. >> John: Great. >> That'd be a big success. >> Customers ultimately. >> Keeping them happy, and as long as I deliver the right things, they will be happy. >> I always say look at the scoreboard in sports, and that's ultimately the differentiation, so that's going to be the benchmark. The KPI is the number of customers, happy customers. >> Yes. >> I'm sure Mark Hurd will have that on his next earnings report. This is TheCUBE bringing you Amit Zavery's commentary, also analysis of Oracle OpenWorld. With more after this short break, we're going to wrap up. Live, here at Oracle OpenWorld 2016. I'm John Furrier with Peter Burris, you're watching TheCUBE. (light music)

Published Date : Sep 22 2016

SUMMARY :

Brought to you by Oracle. Media and also the General Manager of Wikibon Research. It's always a pleasure to be here. live coverage, we got you at the end, so we have the luxury a lot of maturity into adoption of the cloud, right, and all the converged infrastructure, So Platform as a Service is really going to come down to and how do you see that playing out, And that's really the differentiation we have there. on the composability, if you will. So we provide you integration cloud, versus saying and the other world by making it easier to find stuff, and the QA, which should be all part of and at the same time collaborate with those pros. We provide the recipe or you can build your own recipe. the way it's talked about, because you have certainly And the differentiation we provide to them is that, One of the things that's not being hyped up at the show over here, so how do you balance the horizontal play Seems like the past is the battleground I mean, if you look at typically every application the way you have invested in, and be able to move the data. At the same time, differentiating on extracting away but we do also extract it out for, as you were talking Do you see tooling drives a lot of innovation. from the tooling perspective we have to make sure John: Did you see that VI and that's fine. Thanks so much fitting the time to share the insights. So you see evolution of that, but all of the platform and they should be able to move away if they choose to, looking back, what would you view as a successful year So today we have around 10,000 plus customers. the right things, they will be happy. The KPI is the number of customers, happy customers. This is TheCUBE bringing you Amit Zavery's commentary,

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Mark Hurdle | PERSON | 0.99+
Peter Burris | PERSON | 0.99+
Dave Donatelli | PERSON | 0.99+
Amit Zavery | PERSON | 0.99+
Microsoft | ORGANIZATION | 0.99+
John Furrier | PERSON | 0.99+
John | PERSON | 0.99+
Amazon | ORGANIZATION | 0.99+
Amit | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
Oracle | ORGANIZATION | 0.99+
thousands | QUANTITY | 0.99+
Google | ORGANIZATION | 0.99+
Peter | PERSON | 0.99+
next year | DATE | 0.99+
four | QUANTITY | 0.99+
20 services | QUANTITY | 0.99+
SiliconANGLE Media | ORGANIZATION | 0.99+
San Francisco | LOCATION | 0.99+
SiliconANGLE Media | ORGANIZATION | 0.99+
35 years | QUANTITY | 0.99+
Second | QUANTITY | 0.99+
today | DATE | 0.99+
tomorrow | DATE | 0.99+
Today | DATE | 0.99+
15 different services | QUANTITY | 0.99+
EMAX | ORGANIZATION | 0.99+
this week | DATE | 0.99+
this year | DATE | 0.99+
One | QUANTITY | 0.99+
Wikibon Research | ORGANIZATION | 0.99+
tonight | DATE | 0.98+
13-year-old | QUANTITY | 0.98+
five thousand customers | QUANTITY | 0.97+
both | QUANTITY | 0.97+
one thing | QUANTITY | 0.97+
one application | QUANTITY | 0.97+
single line | QUANTITY | 0.97+
first | QUANTITY | 0.97+
Twitter | ORGANIZATION | 0.97+
Sting | PERSON | 0.97+
one | QUANTITY | 0.96+
Oracle OpenWorld 2016 | EVENT | 0.96+
couple | QUANTITY | 0.96+
around 10,000 plus customers | QUANTITY | 0.96+
100 percent | QUANTITY | 0.95+
Instagram | ORGANIZATION | 0.95+
TheCUBE | ORGANIZATION | 0.94+
One thing | QUANTITY | 0.92+
#oow16 | EVENT | 0.91+
Wall Street | LOCATION | 0.89+
Oracle OpenWorld | ORGANIZATION | 0.88+
thousands of customers | QUANTITY | 0.86+

Siddhartha Agarwal, Oracle Cloud Platform - Oracle OpenWorld - #oow16 - #theCUBE


 

>> Announcer: Live from San Francisco it's The Cube covering Oracle OpenWorld 2016 brought to you by Oracle. Now here's your hosts, John Furrier and Peter Burris. >> Hey welcome back everyone. We are live in San Francisco at Oracle OpenWorld 2016. This is SiliconANGLE's theCUBE, our flagship program. We go out to the events, extract the signal from the noise. I'm John Furrier, Co-CEO of SiliconANGLE with Peter Burris, head of Research at SiliconANGLE as well as the General Manager of Wikibon Research. Our next guest is Siddhartha Agarwal, Vice-President of Product Management and Strategy of Oracle Cloud Platform. Welcome back to the Cube, good to see you. >> Yes, hi John. Great to be here. >> So I've seen a lot of great stuff. The core messaging from the corporate headquarters: Cloud Cloud Cloud. Oracle's got the whole story now. They got the IaaS V2, they're calling it. And now you have up and down the stack PaaS and SaaS, and under the covers, under the hood, is the powerful hardware. We've had many great conversations around how the pieces are all fitting into the cloud model. But Peter and I were talking yesterday in our wrap-up about, where're the developers? >> Siddhartha: Yeah. >> Now someone made a joke, oh they're at JavaOne, which is great. A lot of them are at JavaOne, but there's a huge developer opportunity within the Oracle core ecosystem because Cloud is very developer friendly. DevOps, agile, cloud-native environments really cater to software developers. >> Yeah, absolutely, and that's a big focus area for us because we want to get developers excited about the ability to build the next generation of applications on the Oracle Cloud. Cloud-native applications, microservices-based applications, and having that environment be open, with a choice of programming languages, open in terms of choice of which databases they want, not just Oracle database, NoSQL, MySQL, other databases, and then choice of the compute shape that you're using: containers, bare metal, virtual environments, and open standards. So it's giving a very open, modern, easy platform for developers so that they'll build on our platform. >> You know, one of the things that we always talk about at events is when we talk to companies really trying to win the hearts and minds of developers. You always hear, we're going to win the developers. They're like an object, like you don't really win developers. Developers are very fickle but very loyal if you can align with what they're trying to do. >> Siddartha: Yeah. >> And they'll reject hardcore tactics of selling and lock-in, so that's a concern. It's the psychology of the developers. They want cool, but they want relevance, and they want to align with their goals. How do you manage that psychology? Because Oracle has traditionally been an enterprise software company, so software's great, but Amazon has a good lead on the developers right now. >> You know, look, at the end of the day you have to get developers realizing that they can build excellent, fun, creative applications to create differentiation for their organizations, right, and do it fast with cool technologies. So we're giving them, for example, not just the ability to build with Java EE, but now they can build in Java SE with Tomcat, they can build with Node, they can build with PHP, and soon they'll be able to do it with Ruby and Daikon. And we're giving that in a container-based platform where they don't necessarily have to manage the container.
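As a concrete, hedged illustration of that container-as-a-service idea: a developer who already has a docker image can script bring-up and scale-out with the Docker SDK for Python, as below. The image name, ports, and replica count are placeholders; on a managed service, the platform owns this orchestration, plus the registry and health management.

```python
# Sketch of bringing an existing docker image up and scaling it out with
# the Docker SDK for Python. Image name and replica count are placeholders;
# a managed container service handles scheduling and recovery for you.
import docker

client = docker.from_env()
replicas = [
    client.containers.run(
        "myorg/myapp:1.0",                 # pre-built image, brought as-is
        detach=True,
        name=f"myapp-{i}",
        ports={"8080/tcp": 8080 + i},      # spread host ports per replica
    )
    for i in range(3)
]
print([c.name for c in replicas])
```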
They get automatic scalability, they get backup, patching, all of that stuff taken care of for them. Also, you know, being able to build rich mobile applications, that's really important for them. So they can build mobile applications using Ionic, Angular, whatever JavaScript framework they want, but on the back end they have to be able to connect these mobile apps to the enterprise. They have to get location-based insight into where the person using the mobile app is. They need to be able to get insight into how the mobile app's being used, and you've heard Larry talk about the Chatbot platform, right? How do you engage with customers in a different way through Facebook Messenger? So those are some of the new technologies that we're making very easily available, and then at the end of the day we're giving them choice of databases, so it's not just Oracle database that you get up and running in the Cloud, provisioned, managed, automated for you. Now you can ask for NoSQL databases. You can have Cassandra, MongoDB run on our IaaS, and MySQL. We just announced MySQL enterprise edition available as a service in the Public Cloud. >> Yeah, one of the things that developers love, you know, being an ex-developer myself in the old days, is, and we've talked to them... They're very loyal but they're very pragmatic, and they're engineers, basically they're software engineers. They love tools, great tools that work, they want support, but they want distribution of the product that they create. They're creators, so distribution ultimately means monetization, but developers don't harp too much on money-making, although they'd want to make money. They don't want to be abandoned on those three areas. They don't want to be disloyal. They want to be loyal, they want support, and they want to have distribution. What does Oracle bring to the table to address those three things? >> Yeah, there are a few ways in which we're thinking of helping developers with distribution. For example, one is, developers are building applications where they're exposing their APIs, and they want to be able to monetize those APIs because they are exposing business processes and logic from their organization as APIs, so we're giving them the ability to have portals where they can expose their APIs and monetize the APIs. The other thing is we've also got the Oracle Cloud Marketplace, where developers can put their stuff so others can be leveraging that content, and they're getting paid for that. >> How does that work? Do they plug it into the PaaS layer? How does the marketplace fit in if I'm a developer? >> Sure, the marketplace is a catalog, right, and you can put your stuff in the catalog. Then when you want to drag and drop something, you drop it onto Oracle PaaS or onto Oracle IaaS. So you're taking the application that you've built and then you've got it to have something that-- >> John: So composing a solution on the fly for your customer? >> Well, yeah exactly, just pulling a pre-composed solution that a developer had built and being able to drop it onto the Oracle PaaS and IaaS platform. >> So the developer gets a customer and they get paid for that through the catalog? >> Yes, yes, yes, and it's also better for customers, right? They're getting all sorts of capability pre-built for them, available for them, ready for them. >> So one of the things that's come up, and we've heard it, it wasn't really amplified too much, but we saw it and it got some play.
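The API monetization Siddhartha describes presupposes, at minimum, per-consumer metering at the gateway. The stripped-down Flask sketch below shows that primitive; the key, route, and counters are fabricated, and a real API platform persists usage and handles auth, quotas, and billing for you.

```python
# Stripped-down per-key API metering, the primitive under API monetization.
# Keys and counters are in-memory and hypothetical; a real gateway persists
# usage and feeds it to billing.
from flask import Flask, request, jsonify, abort

app = Flask(__name__)
usage = {"demo-key": 0}                    # api_key -> calls this period

@app.route("/v1/orders")
def orders():
    key = request.headers.get("X-Api-Key")
    if key not in usage:
        abort(401)
    usage[key] += 1                        # metering point for billing
    return jsonify(orders=[], calls_this_period=usage[key])

if __name__ == "__main__":
    app.run(port=8080)
```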
In developer communities, the messaging is on containers and microservices, as you mentioned earlier. Huge deal right now. They love that ability to have the containerization. We even heard containers driving down into the IaaS area, with the network virtualization stuff going on, so how is that going to help developers? What confidence will you give developers that you guys are backing the container standards-- >> Siddhartha: Absolutely. >> Driving that, participating in that. >> Well, I think there are a couple of things. First of all, containers are not that easy in terms of when you have to orchestrate the containers, you have to register these containers. Today the orchestration technologies for managing containers, which are things like Swarm, Kubernetes, Mesos, et cetera, are changing very rapidly, and then in order to use these technologies, you have to have a scheduler and things like that. So there's a stack of three or four relatively recent technologies, changing at a relatively fast pace, and that creates a very unstable stack for someone who creates production-level stuff on it, right? The docker containers that they built actually run on this slightly shaky stack. >> Like Kubernetes or what not. >> Yeah, yeah, and so what we've done is we're saying, look, we're giving you container as a service, so if you've already created docker containers, you can now bring those containers as-is to the Oracle Public Cloud. You can take this application, these 20 containers, and then from that point on we've taken care of putting the containers out, scaling the containers up, registering the containers, managing the containers for you, so you're just being able to use that environment as a developer. And you can do that on the IaaS. If you want to use the PaaS, then the PHP, Node, JavaSE capability that I told you about is also containerized. You're just not exposed to docker there. >> Actually, I know he's got a question, but I want to just point out Juan Loaiza, who was on Monday; he pointed out the JSON aspect of the database, which I thought was pretty compelling. From a developer's standpoint, JSON's really popular for managing APIs. So having that in the database is really kind of a good thing, so people should check out that interview. >> Very quickly, one of the historical norms for developers is you start with a data model, and then you take various types of tools and you build code that operates against that basic data model. And Oracle obviously has, that's a big part of what your business has historically been. As you move forward, as we start looking at big data and the enormous investment that businesses are making in trying to understand how to utilize that technology, it's not going as well as a lot of folks might've thought it would, in part because the developer community hasn't fully engaged with how to generate value out of those basic stacks of technology. How is Oracle, who obviously has a leadership position in database and is now re-committing itself to some of these new big data technologies, how are you going to differentially, or do you anticipate differentially, presenting that to developers so they can do more with big data-like technologies? >> There are a few things that we've done, wonderful question. First of all, just creating the Hadoop cluster, managing the Hadoop cluster, scaling out the Hadoop cluster requires a lot of effort.
So we're giving you big data as a service, where you don't have to worry about that underlying infrastructure. The next problem is how you get data into the data lake, and the data is being generated at tremendous volume. You think about internet of things, you think about devices, et cetera; they're generating data at tremendous volume. We're giving you the ability to use a streaming, Kafka, Spark-based service to bring data in, or to use Oracle data integration to stream data in from, let's say, something happening on the Oracle database into your big data hub. So it's giving you very easy ways to get your data into the data hub, and being able to do that with HDFS, with Hive, whichever target system you want to use. Then on top of that data, the next challenge is what do you visualize, right? I mean, you've got all this data together, but a very small percentage is actually giving you insight. So how do you look at this and find that needle in the haystack? So for that we've given you the ability to do analytics with the BI Cloud service, to get insight into the data, where we're actually doing machine learning. And we're getting insight from the data and presenting those data sets that are the most relevant, the most insightful, by giving you some smart insights upfront and by giving you visualizations. So for example, you search for, across all these forms, what the users said as they entered the data. The best way to present that is with a tag cloud. So we're giving you visualizations that make sense, so you can do rich discovery and get rich insight from the BI Cloud service and the data visualization cloud service. Lastly, if you have, let's say, five years of data on an air conditioner, and the product manager's trying to get insight into that data, saying hey, what should I fix so that that doesn't happen next time around: we're giving you the big data discovery cloud service, where you don't have to set up that data lab, you don't have to set up the models, et cetera. You could just say replicate two billion rows, we'll replicate it in the cloud for you within our data store, and you can start getting insight from it. >> So how are developers going to start using these tools? 'Cause it's clear that data scientists can use it, it's clear that people that have more of an analytics background can use it. How are developers going to start grabbing a lot of these capabilities, especially with machine learning and AI and some of the other things on the horizon? And how do you guys anticipate you're going to present this stuff to a developer community so that they can, again, start creating more value for the business? Is that something that's on the horizon? >> You know, it's here; it's not on the horizon, it's here. We're helping developers, for example, build a microservice that wants to get data from a treadmill that one of the customers is running on, right? We're trying to get data from one of the customers on the treadmills. Well, the developer now creates a microservice where the data from the treadmill has been ingested into a data lake. We've made it very easy for them to ingest into the data lake, and then that microservice will be able to very easily access the data and expose only the portion of the data that's interesting. For example, the developer wants to create a very rich mobile app that presents the customer who's running with all the insight into their average daily calorie burn and what they're doing, et cetera.
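The ingest pattern described above, a streaming Kafka/Spark service landing events in HDFS or Hive, commonly reduces to a Spark Structured Streaming job like this sketch; the broker, topic, and paths are placeholders, and the managed service provisions and operates the equivalent pipeline for you.

```python
# Sketch of streaming Kafka events into a data lake with Spark Structured
# Streaming. Broker, topic, and paths are placeholders; a managed big data
# service provisions and operates this pipeline.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sensor-ingest").getOrCreate()

events = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "sensor-events")
          .load())

query = (events.selectExpr("CAST(value AS STRING) AS json")
         .writeStream.format("parquet")
         .option("path", "hdfs:///lake/sensor_events")
         .option("checkpointLocation", "hdfs:///chk/sensor_events")
         .start())
query.awaitTermination()
```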
Now they can take that data, do analytics on it, and very easily be able to present it in the mobile platform, without having to work through all the plumbing of the data lake, of the ingestion, of the visualization, of the mobile piece, of the integration with the backend system. All of that is being provided, so developers can really plug and play and have fun. >> Yeah, they want that fun. Building is the fun part, they want to have fun-- >> They want relevance, great tools, and not to have to worry about the infrastructure. >> John: They want distribution. They want their work to be showcased. >> Peter: That's what I mean about relevance, that's really about relevance. >> They want to work on the cool stuff and again-- >> And be relevant. >> Developers are starting to have what I call the nightclub effect. Coding is so much fun now, there's new stuff that comes out. They want to hack with the new code. They want to play with something that fits the form factor, with either a device or whatnot. >> Yeah, and one other thing that we've done is, we've made the... All developers today are doing continuous delivery because they need to release code really fast, right. It's no longer about months, it's about days or hours that they have to release. So we're giving a complete continuous delivery framework where people can leverage Git for their code repository, they can use Maven for continuous integration, they can use Puppet and Chef for scripting. They can manage the backlog of their tasks. They can do code reviews, et cetera, all done in the cloud for them. >> So lifestyles, hospitality. Taking care of developers, that's what you got to do. >> Exactly, that's a great analogy. You know, all these things, they have to have these tools that they put together, and what we're doing is we're saying, you don't have to worry about putting together those tools, just use them. But if you have some, you can plug them in. >> Well, we at Wikibon and SiliconANGLE believe that there's going to be a tsunami of enterprise developers with the consumerization of IT, now meaning the Cloud; you're going to see enterprise development, just a boom in development. You're going to see a lot more activity. Now I know it's different in development, because it's not just pure Cloud native, it's some Legacy, but it's going to be a boom, so we think you guys are very well set up for that. Certainly with the products, so my final question for you, Siddhartha, is: what are your plans? I mean, sounds great. What're you going to do about it? Is there a venture happening? How're you guys going to develop this opportunity? What're you guys going to do? >> So the product sets are already there, but we're evolving those product sets at a significant pace. So first of all, you can go to cloud.oracle.com/tryit and try these cloud services and build applications on them; that's there. We've got a portal called developer.oracle.com where you can get resources on, for example, I'm a JavaScript developer, what's everything that Oracle's doing to help JavaScript developers? I'm a MySQL developer, what's everyone doing to help with that? So they've got that. Then starting at the beginning of next year, we're going to roll out a set of workshops that happen in many cities around the world, where we go work with developers hands-on, giving them direct experience of how to build these rich, cloud-native, microservices-based applications. So those are some of the things, and then our advocacy program. We already have the ACE Program, the ACE Director Program.
Working with that program to really make it a very vibrant, energetic ecosystem that is helping build sample code and expert knowledge around how the Oracle environment can be used to build really cool microservices-based, cloud-native-- >> So you're investing, you're investing. >> Siddhartha: Oh absolutely. >> Any big events, or just more little events? Any big events, any developer events you guys going to do? >> So we'll be doing these workshops, and we'll be sponsoring a bunch of non-Oracle developer events, and then we'll be launching a big developer event of our own. >> Great, so final question. What's in it for the developer? If I'm a developer, what's in it for me? Hey, I love Oracle, thanks for spending the money and investing in this. What's in it for me? Why, why should I give you a look? >> Because you can do it faster with higher quality. So that microservices application that I was talking about: if you went to any other cloud and tried to build that microservices-based application that got data from the treadmill into a data lake using IoT and the analytics integration with backend applications, it would've taken you a lot longer. You can get going in the language of your choice, using the database of your choice, using the standards of your choice, and have no lock-in. You can take your data out, you can take your code out whenever you want. So do it faster, with openness. >> Siddhartha, thanks for sharing that developer update. We were talking about it yesterday. Our prayers were answered. (laughing) You came on The Cube. We were like, where is the developer action? I mean, we see JavaOne, we love Java, certainly JavaScript is awesome, and a lot of good stuff going on. Thanks for sharing, and congratulations on the investments and on continuing to bring developer goodness out there. >> Thank you, John. >> This is The Cube, we're sharing that data with you, and we're going to bring more signal from the noise here after this short break. You're watching The Cube. (electronic beat)

Published Date : Sep 22 2016

SUMMARY :

brought to you by Oracle. This is SiliconANGLE, the key of our flagship program. Great to be here. in Oracle on all the applications. Now and someone made a joke, oh they're at JavaOne, and having that environment be open with choice You know, one of the things that we always talk about but on the back end they have to be able to connect Yeah one of the things that developers love, that they exposing their APIs and they want to be able to and then you got it to have something that-- to drop it onto the Oracle PaaS and IaaS platform. available for them, ready for them. So one of the things that's come up, and we've heard it, to use these technologies, you have to have So having that in the database is really kind and then you take various types of tools and you So for that we've given you the ability to do analytics and AI and some of the other things on the horizon? rich mobile app that presents the customer running Building is the fun part, they want to have fun-- have to worry about the infrastructure. They want their work to be showcased. Peter: That's what I mean about relevance, They want to play with some that fit the form factor that they have to release. Taking care of developers, that's what you got to do. we're saying, you don't have to worry about but it's going to be a boom so we think you guys are So first of all, you can go to cloud.oracle.com/tryit and then we'll be launching a big developer What's in it for the developer? and the analytics integration with backend applications, and to continuing bringing developer goodness out there. This The Cube, we're sharing that data with you


Dave Donatelli, Oracle - Oracle OpenWorld - #oow16 - #theCUBE


 

(electronic dance music) >> Host: Live from San Francisco. It's theCUBE. Covering Oracle OpenWorld 2016. Brought to you by Oracle. Now, here's your hosts, John Furrier and Peter Burris. >> Hey, welcome back everyone. We are here live in San Francisco for Oracle OpenWorld 2016. This is theCUBE, SiliconANGLE Media's flagship program. We go out to the events and extract the signal from the noise. I'm John Furrier, the co-CEO of SiliconANGLE Media, with Peter Burris, my co-host, who's the head of research for SiliconANGLE Media as well as the general manager of Wikibon Research. Our next guest is Dave Donatelli, Executive Vice President of Cloud and Converged Systems and Infrastructure at Oracle. Cube alumni always coming on. Great to see you. Thanks for spending really valuable time to come and share your insights with us. >> Great to see you guys again. It's always a pleasure. >> So you did the keynote today. Obviously the forces in the industry around Cloud, Oracle's got the whole story now. They got the IaaS V2, they're calling it. And now you have up and down the stack PaaS and SaaS, and under the covers, under the hood, is the power hardware. >> Dave: Of the infrastructure, yeah. >> Very disruptive, and we chatted and we wrote a story at SiliconANGLE, also on Forbes, about the destruction of the existing incumbents. So with that in mind, how did the keynote go from your perspective? What were your key themes, and how does that relate to some of the disruption in the landscape of the industry? >> Okay, well, as the one who wrote it, I'd say the keynote went very well, but what I really talked about was Oracle offers people three deployment models. And I gave 'em kind of five journeys to take to the Cloud. The three models are public Cloud, broad-based public Cloud. Second thing is traditional enterprise, which business we've been in for so long. And then a new category is what we call Cloud at Customer: taking our public Cloud and making that available to customers. And then the second thing I did in the keynote is talk about five journeys people could take to our public Cloud, and it's everything from optimizing what they currently have in their legacy environment, to running hybrid Cloud, to running this Cloud at Customer, to running private Cloud, and the fifth one is just, end what you're doing in the current way and move all public Cloud. >> So in the five journeys, just to drill down on that, it's five different paths the customer could take. >> Dave: Correct, all from a customer's perspective. >> From their current position to a Cloud endgame, if you will. >> Dave: Yes. >> And which one do you think is the most dominant right now in terms of your view? Because obviously we'll go through those, but of the ones, beyond on-prem, which ones have the most relevance today in terms of customers that you hear from? >> Well, I'd say two things, what I see and what I've seen the last year: the acceleration of movement to the public Cloud literally since the start of the year has been massive. And what's really changed is a lot of it's coming top down. So you see CEOs, boards of directors, CFOs saying we're going to go to the Cloud; even some companies are giving their IT departments specific requirements: you'll have 40% of our applications in the Cloud by 2000. So big acceleration there, and in saying that, what most customers are doing is something in the middle. They have their legacy that they've always been running. We look at it app by app by app. What's the most likely to transform to the Cloud?
Which ones are probably just going to go away? Which ones should we just redesign and build net new in the Cloud? And so that means to me that hybrid is really, you know, the one that we see most often. People are running on-premise, they're running in the Cloud. They'll have a mix for some time until the on-premise continues to go away. >> What's the concept we heard from Chuck Hollis yesterday around this notion of Cloud quotas? He's seeing customers being kind of mandated to get to the Cloud, almost like a quota. Hey, where are you with your Cloud migration? So there's pressure certainly coming in, but you introduced Cloud insurance. It's not actually insurance, but as a concept, just explain what you meant by that. >> Sure. So what we mean by that is this: as we just talked about, most enterprises, if you look at most of the data out there, only 5% of applications have moved to the Cloud so far. So that means a lot are still running in their data centers. But now you're going to go to your boss and you're going to say, "Hey, you know, I need to buy some new infrastructure." And if you're a regular company, that's going to take three to five years to depreciate. So you go to your boss and say, "Hey, give me $10 million, I got this great idea. I'm going to put this new infrastructure in." Well, what if two years from now your boss comes in and says, "Guess what? We now need to move to the public Cloud." With traditional infrastructure, or with infrastructure designed by companies who don't have a public Cloud, you now have a boat anchor, right? I run big businesses myself, and the last thing you want is equipment on your books depreciating that has no technical value. What we mean by Cloud insurance is that everything we sell customers on-premise also has a public Cloud equivalency. Think of Exadata. You can use Exadata on-premise. We have an Exadata Cloud service you can subscribe to in the Cloud. So if you buy an Exadata on-premise today and they say we want to start moving to Cloud, you can say, "Great, I'll do things like test/dev in the Cloud with my equivalent Exadata service." They're fully compatible. It's got the same management. It's one push button to move data from on-premise to the public Cloud. No one else can do that. >> Peter: So you're really selling them a Cloud option. Whatever you buy, you are also buying a Cloud option. >> What I say is I'm giving them assurance and insurance. The assurance is you're buying something today that you know will have a useful life going forward in the go-forward architecture. >> Peter: And if you want to exercise that option today, you can do so; if you want to exercise it in three years, you can do so. >> Exactly. >> No financial penalty to you. >> Exactly. And what most competitors are saying is hey, buy the way you always did it, and guess what? You don't have that option. >> Peter: It's your asset. So one of the things, I love the idea of the five paths, but paths are going to be influenced by workloads. So as you think about the characteristics of workloads, not big companies, small company, regional, those are always going to be important. Sophistication, maturity of the shop. But as you think about workloads, going back to John's question, what types of workloads do you see coming in first? So for example, we're seeing a lot of on-premise big data happening, but not as fast as it might because of complexity.
We're starting to see more of that move into options that are more simply packaged, easier to use, like in the Cloud. What kind of workloads do you think are going to pull customers forward first? >> Dave: Sure. Well, first remember we play in SaaS, PaaS, and infrastructure. And what we've seen, if you look at our financials, is huge growth in SaaS, and that's where people are saying, I am taking, you know, with GE here as an example, GE is taking their ERP, big global company, they're putting that in the public Cloud. HSBC was here, same story, big financial institution. They're putting that in the Cloud first. And the reason why they're doing it is they think it gives them more flexibility, makes them more efficient, saves them money. Then, what's really changed, and what we've evolved to, is with our new infrastructure Cloud, now we can do anything. This is to your question. Anything that runs on an x86 server or SPARC-based server, whether it's an Oracle application or not, you can either migrate it and run it in our Cloud, or you can, you know, reimagine it using our PaaS to redesign it, move it to the Cloud, it's everything. And we're seeing increasing rates of people walking through app by app in their environment and doing just what we've said. What stays, what moves, what do we transform in the process? >> You've seen a lot of this movie at EMC, certainly your history, your career at EMC and then HP. A lot of the industry changed while you were, you know, in those shops, now here at Oracle. So I got to ask you, now with the Oracle advantage, and you guys are pushing from the silicon to the app, however, I forget how they word it, but it's silicon to the app, the end-to-end kind of thing. What's different from a design standpoint, from a technical, as the product development teams build it, what's the unique thing that's changed? And how does that render itself in impacting the customer? >> Dave: Okay, that's a great question. So let me give you the customer benefit first and I'll tell you why it occurs. What I said today from stage is that to run our, I'll use an example of our software. To run our software there's no better place on earth than our infrastructure, and compared to their most likely alternative, which is their self-build, them buying an x86 server, them buying their own networking, them buying storage, we give people better performance, better end-user experience, easier to manage, and most importantly it costs them less money. >> John: So knocking down Oracle on Oracle, boom. That's a baseline. >> Less cost versus you going to buy a server online at Dell and trying to put it together yourself. >> I buy that. >> Dave: The way we do it is the fact that we have insights which we have designed all the way into our software as well as into our products. So depending which product you're talking about, for instance in SPARC, we embedded it in the silicon itself. Accelerators for things like encryption, for decryption, for the ability to compress, to decompress. All kinds of things that matter in speed. At the same time we make a lot of changes to our software itself to make that run better with our hardware. It's IP. It takes a lot of engineering to do that, but simply put, if you don't have the software stack, you know, if you're someone who just builds hardware, you can't see the software, you can't make those changes. >> John: Well, you have the advantage. Obviously, you have software that Oracle writes, you have systems that are engineered for Oracle software.
Clear advantage, so you're saying unequivocally-- >> Dave: From a technical-- >> You blow everyone away. >> Dave: From a pure technical perspective, it is an unfair fight. We will win every time. >> John: Okay, so I buy that, so that, you win those rounds. Curveball is multi-vendor. Now we're into a multi-vendor world, because a lot of people have that technical debt now on the books, if you will, I don't know if that's the right term, technical debt, but they have legacy. It might be Dell EMC, it might be HP and other stuff. How do those shops deal with this Oracle infrastructure Cloud and non-Oracle software? >> Okay, so two ways. So if you look at on-premise, we make products that run both Oracle software and non-Oracle software, engineered systems to run both Oracle and non-Oracle software in the same machine. So you get all the accrued benefits we talked about, but you can also host your applications that might not necessarily be Oracle with us. In the Cloud itself, I think you heard, you know, I thought Larry gave an excellent presentation yesterday, very clearly walking through what we do that's different than alternatives. And as we said, >> John: He was very aggressive on Amazon. >> Dave: But I thought he was very, I thought he was very fair in how he did it, right. He walked through it, just the facts. This is what they do, this is what we do, this is why it's technically different. He didn't just come out and say hey, we're better than Amazon; he gave specific reasons why. >> John: He did that and he did that, he did both. >> But if you look at it, so even just running a generic app that's non-Oracle on our infrastructure as a service, what we said very clearly is, we have an infrastructure, by the way it is architected, that has less noise, meaning you get less performance disruption, so it runs faster. It's built with the newer hardware, and at the same time, in doing so, because of our architecture, we can offer that to people at a lower price than they'd otherwise get. And again I think those are very straightforward, very well articulated points to show the value, and you know that opens up the whole world to us. As you know, the x86 market is almost a $40 billion market on-premise. What we're saying now at Oracle is, we can do a better job for you in the public Cloud running any of those workloads. >> That's right now. I think the other thing that came out, we've talked about it here, is that the stream of innovation that's going to unload itself on the industry over the next few years, someone still has to do the integration of all of these different piece parts. They're going to be improved upon, and that integration cost is real, and so you can look at that from a CIO's perspective: they can look at it and say, do I want to put my time into the integration, or do I want to put my time into the application that's going to have a differential effect on my business? So you guys seem to be coming on pretty strongly on, we've got the baseline we need, we've got the stuff that we need to bring the innovation in an integrated way into our packaging. >> Dave: That's correct, and I think very well said. I believe we are the easiest company to work with in bringing people from, in essence, their old architecture to the new. And that is because we've already done that integration work. We offer those architectures on both sides of the equation, current on-premise into the public Cloud, and give you one management software structure to manage both.
Anybody else is only going to work with you on one extreme or another. It's either, hey, only do Cloud or only do on-prem. How you work with the other one, you as a customer are stuck with that burden to figure out. Dave, I know you got to go to another meeting, but I want to get the final question to you to elaborate on: what you're most proud of now in your tenure at Oracle. Some things that have worked for you in the organization product-wise, successes you've had. You want to highlight a few? And what are your priorities going forward? You're now running the Cloud group as well as Converged Infrastructure kind of coming together. What are you most proud of? It could be people, not things. Like ZDLRA, I know, is doing really well; Juan Loaiza was saying it's a smashing success and we're not hearing anything about that. We heard about it yesterday, but so what are you most proud of, and then what are your priorities going forward? >> What I'm most proud of about being at Oracle is we're an organization investing for our customers' future. So we're spending $5.2 billion this year on R&D, and it's all about bringing out these products that fit the future for our customers and protecting their investments along the way. I'm very proud to be part of a company, because as you know in these big transitions, companies don't make it. Think of DEC, right? They were a leader, didn't make it through to the new transition. And we're one of these companies that's leading the new transition even though we also participated in the prior architecture. I think from a product perspective, I would say ZDLRA is a great one you brought up. It stands for Zero Data Loss Recovery Appliance. It is designed by our database engineers to fully back up and recover, as it says, with zero data loss, our database. And we've had a number of customers here, we had customers at the keynote today, very major enterprises; at the keynote today was General Electric, who talked about how it enables them now to sleep. They don't get woken up at three in the morning. It gives them certainty in terms of how they recover. And most importantly, it saves them money. >> And you're in the hardware business, but you're not in the box business. You actually have the software, it's again software-enabled. Congratulations, I know you're attracting a lot of good talent as well. They did a great job and it's been fun to watch your success at Oracle, and we're proud to cover you guys. We have some points we would disagree with you on. If we had more time we could go into a little detail, but thanks for spending the time and sharing on theCUBE. >> All right, a pleasure. Always great to see you guys. Live in San Francisco for Oracle OpenWorld. This is theCUBE. I'm John Furrier, Peter Burris, we'll be back with more after this short break.

Published Date : Sep 22 2016

SUMMARY :

Dave Donatelli recaps his keynote with John Furrier and Peter Burris: Oracle offers three deployment models (public cloud, traditional on-premise, and Cloud at Customer) and five journeys customers can take to the cloud. He pitches "Cloud insurance," the idea that everything Oracle sells on-premise has a public cloud equivalent, positions engineered systems such as Exadata and ZDLRA against self-built x86 infrastructure, and cites Oracle's $5.2 billion annual R&D spend as protecting customers through the transition.


Don Johnson, Oracle - Oracle OpenWorld - #oow16 - #theCUBE


 

>> Announcer: Live from San Francisco. It's the CUBE, covering Oracle Open World 2016. Brought to you by Oracle. Now here are your hosts, John Furrier and Peter Burris. >> Okay, welcome back everyone, we are here live in San Francisco for The Cube. This is SiliconANGLE Media's flagship program, where we go out to the events and extract the signal from the noise. I'm John Furrier, the co-CEO of SiliconANGLE Media, with Peter Burris, head of research for SiliconANGLE Media. He's also the general manager of Wikibon Research. Check out wikibon dot com for all the latest research on cloud and big data infrastructure. And we're at Oracle OpenWorld 2016. I'm excited to have our next guest, Don Johnson, VP of engineering for product development for Infrastructure as a Service for Oracle Cloud. Welcome to The Cube. >> Thank you. >> John: Thanks for spending the time to come on. We really appreciate it. >> My pleasure. >> And obviously Oracle's cloud last year was obviously the announcement they're marching to the cloud. A big building block in this was Infrastructure as a Service. They had the SaaS. They're taking names, kicking butt in there, and they're transforming. Platform as a Service is developing nicely this year, showed some progress. But the upgrade, if you will, or reboot or reset, however you want to call it, was fundamentally to introduce the new stuff with Infrastructure as a Service, to kind of round everything off. Give us the update. What's the key new news for Infrastructure as a Service and why is it important? >> Well, a couple things. Let me start with the last part of your question, why is it important. So, very broadly, I would say there's kind of two strata of cloud. There's cloud platform and there's everything that's up above, apps, SaaS, etc. Cloud platform I think is a big category. It's broad spectrum, but it's IaaS and PaaS, and then there's lots of stuff that falls inside of there. IaaS is a fundamental and foundational building block, and all of the characteristics that everything up above relies on or requires is basically enabled by infrastructure. If you want to run at massive scale, if you want network connectivity between place A and place B, if you want intrinsic security, those are all foundational characteristics, and you either have them or you don't based on whether infrastructure IaaS gives them to you. And so, for us, for Oracle, we're a cloud platform company. This is a foundational piece, and we're investing in this very aggressively and we're driving in a very innovative direction on this. So, >> You've been at Amazon since 2005, just recently joined Oracle on the engineering side, and you know, infrastructure right now we're seeing is a cost and performance game. Drive the cost down as low as possible while preserving scale and performance, real critical. And almost hardening the top, if you will, creating a hardened infrastructure so that you can enable dev-ops and some coolness around agility, all that good stuff above, on top of the cloud. So what are the key things this year that you guys built in terms of product? What were the key innovations and, on the development side, what was the key sprint for you guys? >> Well, so what we've been announcing at IaaS is really our next generation infrastructure, which is two-fold. It is the infrastructure itself, what our data centers and networks and virtual network look like, and then it's a new suite of products that we put on top of this, bare metal cloud services.
And this is the fruit of a big kind of back-to-basics foundational exercise where we have gone and redesigned everything from the ground up. We've done it with a focus on a bunch of core, core criteria. Core things that we wanted to capture and that we wanted to do better, better than they have been done in the industry to date. And I would characterize those as two-fold. First, we are bringing along all of the best characteristics of the cloud and why the cloud is compelling and what people use it for: self-service, pay-per-use. It's elastic; it's easy to use. There's low friction. It's high-scale, etc. But there's a number of things that, for our core customer base, actually are very challenging in moving to the cloud. And when I say our core customer base, if you have a large existing, you know, if you're an enterprise and you have a large existing infrastructure and deployment, typically on premise, you have a lot of constraints and it's difficult to actually move into this new environment and take advantage of all that it has to offer. And this applies to how your applications will run there, the assumptions that they make, your security and controls. And so we've identified a number of areas that we fundamentally wanted to do better than they've been done before. Security, reliability, governance, the ability to manage: if you're a large complex organization, you have a large complex footprint and deployment in the cloud, the ability to manage it. Performance, performance is a broad spectrum. Peak performance, raw performance, predictable performance, and in particular price performance. You're talking about performance and cost. And sort of an adjunct to performance is the ability to harness modern technologies, because if you look at where storage is going, non-volatile RAM and technologies like Intel 3D XPoint. How can you actually enable customers to get access to that and use it and harness what it offers, very, very quickly. And most, most of all really, flexibility. Sort of the choice, and what I mean by that is when you're a cloud provider, you kind of pick a certain level at which you implement and define and build your abstractions, and then that has consequences in what choices you actually offer. So let me be a little bit more precise about this. A core thing that we did, sort of the keys, the special sauce in any cloud platform, is the virtual network. And we made a fundamental choice that the way in which we're going to do virtual networks is to pull the virtualization into the network itself, where we think it belongs. >> John: So no hypervisor? >> It's not in the hypervisor. And so, what that means is first, the requirement that we have of something that we can plug into our cloud, your cloud, your virtual network, is that it has an Ethernet port. This means that we can put anything into a virtualized network. Our whole infrastructure, you know, the presentation to the customers is everything runs in a virtual overlay. It's all virtual network. But we could put any class of resource in there. We could do bare metal. We could do an engineered system. We can, honestly, we can take an arbitrary middle box from, you know, any third party vendor. This lets us give our customers bare metal. Giving our customers bare metal means we can provide bare metal compute with NVMe drives. They are phenomenal.
There is nothing, like we're literally giving you a server in the cloud, you know, provisioned in minutes, paid by the hour. And you get, in our biggest shape, you get in excess of four million 4K read IOPS. Like this is phenomenal power. So really there is nothing that stands between us, between the technology and us giving it to you. >> So that was the key design criteria, then? >> That was the key design criteria, and so this, you know, in terms of sort of flexibility and preserving choice, this means, you know, in principle you can bring any OS. You can bring any hypervisor. If you have some old stuff that's difficult to move, you don't bump up against our hypervisor. >> So you let the performance, everyone kind of speak for themselves, if you will. So, the customer can put anything on this thing >> Yeah. And these are phenomenally powerful boxes. >> Okay so now, how does that compare with Amazon and Azure? Because the number one question I get is, and let me see if you can put some color around this. Obviously Amazon had a different thing. You guys had a clean sheet of paper, and they took smaller steps, compute and storage, and built services and scaled up there. Azure kind of backed into it with their existing business and their portals and all their services, and now are moving their customers on there. So, the number one question I get is, well, what's different with the IaaS on Oracle vis-à-vis AWS and Microsoft Azure? How do you answer that question? Is there a distinct difference? Is there a design philosophy? Is it? >> Well, the design philosophy for IaaS is what I was just articulating. And in essence it looks and acts very, very similar from the perspective of the customer, the user experience at scale. As well as, it preserves choice and flexibility and is amenable. Basically it is much more friendly to the large enterprise or large business that is, often times and typically, outside of the sweet spot of what an infrastructure like, say, Amazon was originally designed for. So as a principle, we are trying to meet our customers where they're at. They want to migrate over some apps and do it cautiously, and maybe not change too much about them, and not see that as a constraint or an obstacle to getting to all of the promise and power of running modern applications in a high-scale, highly available way. >> Look, in many respects, cloud is naturally a network-centric compute model. >> Don: Yes. >> By putting more, by not putting network virtualization above the network but putting it into the network, does that also at some point in time give you greater flexibility, the option to bring even more of, >> Don: Absolutely. >> core work that's gone down into the network? So that you can actually start liberating some of the power of a real network computing model. Others can't do that right now. So if you think about it, what kinds of applications might that make possible in the future? Thinking about IoT, for example, the ability to use a network model to describe how work gets allocated within a cloud of services? >> Well, I think the network ultimately, what you need it to do, there's a few things you need it to do. You need it to very reliably and quickly move bits from place A to place B. You need it to have the flexibility, sort of, as a topology, to be able to put things in. And you need it to preserve privacy and pluggability.
So the fundamental thing that I see our virtual network supporting and enabling is basically building up a fabric of services, and letting us say, so everyone runs in a private overlay. We want to make it easy for any provider, ourselves as well as any third party provider, to inject micro-services into your, into your private network. We want to make it easy to be able to bring over traditional security controls, where you want to set up bastions and set up taps and be able to introspect, you know, do, you know, traditional IDS, IPS. So, I see network virtualization really as an enabler of, you know, it's providing a fabric that gives you great flexibility in wiring things together. I hope that answered your questions. >> So final question for you, what's next? So what's on the, what are the priorities on the to-do list for you guys as you go down, a two point five, a two point one? As they say at Microsoft, never make it an odd, an even number, make it a, you know. Two point one or two point five or three point oh. What's next? >> There's a ton of things. So we're building up data centers in new geographies. We're going big. We're going to add a ton of SKUs. We're going to make bigger things, smaller things, adding a ton more features, really all across the board. So I don't know that I see it as there's a two point five. There's going to be a rapid pace. >> So more slews of announcements >> Very similar >> Don: Yes. >> to the cadence we've been seeing at Oracle, and Amazon traditionally had started that trend. Larry couldn't even finish the keynote on Sunday because the announcement stream was so large. >> No, we have a, you'll see a constant string of releases on a, you know, a weekly, monthly, quarterly basis. There's just a ton of stuff coming. We have a ton of features to add. We have a ton of interesting new services to add. >> So the pace is fast. You're running hard? >> Don: The pace is very fast. >> Well, congratulations, and looking forward to following you guys and your success. Love the agile mindset. Love to see that cadence of shipping stuff, moving really, really fast, and appreciate, >> Alright. >> you spending the time. >> Don: Thank you very much. >> Sharing your insights. The Cube live here at OpenWorld. You're watching The Cube. Back with more live coverage here in San Francisco after this short break. (softly intense techno music)
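What Johnson describes, bare metal provisioned in minutes inside a customer's private overlay, looks roughly like the following from a developer's seat. This is a minimal sketch assuming the present-day Oracle Cloud Infrastructure Python SDK, which post-dates this interview; the OCIDs, availability domain, and shape name are illustrative placeholders, not values from the conversation.

```python
# Hypothetical sketch: launch a bare-metal instance into a private
# virtual overlay (a VCN subnet). All OCIDs and names are placeholders.
import oci

config = oci.config.from_file()           # reads credentials from ~/.oci/config
compute = oci.core.ComputeClient(config)

details = oci.core.models.LaunchInstanceDetails(
    availability_domain="Uocm:PHX-AD-1",              # placeholder AD
    compartment_id="ocid1.compartment.oc1..example",  # placeholder compartment
    display_name="bm-demo",
    shape="BM.DenseIO1.36",                           # an NVMe-dense bare-metal shape
    create_vnic_details=oci.core.models.CreateVnicDetails(
        subnet_id="ocid1.subnet.oc1..example"         # subnet in the customer's overlay
    ),
    source_details=oci.core.models.InstanceSourceViaImageDetails(
        image_id="ocid1.image.oc1..example"           # any OS image; no imposed hypervisor
    ),
)

response = compute.launch_instance(details)
print(response.data.lifecycle_state)  # e.g., PROVISIONING
```

The detail worth noticing is create_vnic_details: even a bare-metal machine with no hypervisor attaches to the overlay purely through a subnet, which is what lets any class of resource, bare metal, engineered systems, or third-party middle boxes, live in the same virtual network.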

Published Date : Sep 22 2016

SUMMARY :

Don Johnson explains Oracle's next-generation IaaS to John Furrier and Peter Burris: a ground-up redesign of the data centers, the network, and a new suite of bare metal cloud services on top. By pulling virtualization into the network itself rather than the hypervisor, Oracle can place any class of resource, including bare metal servers with NVMe drives, inside a customer's private virtual overlay, and Johnson promises a rapid cadence of new regions, SKUs, features, and services.


Hari Sankar, Enterprise Performance Management - Oracle OpenWorld - #oow16 - #theCUBE


 

(upbeat synth music) >> Narrator: Live from San Francisco, it's The Cube, covering Oracle OpenWorld 2016. Brought to you by Oracle. Now, here's your hosts, John Furrier and Peter Burris. >> Hey, welcome back, everyone. We are here live in San Francisco for Oracle Open World 2016. This is SiliconANGLE Media. It's The Cube, our flagship program, where we go out to the events and extract the signal from the noise. I'm John Furrier, the co-CEO of SiliconANGLE Media, joined by my co-host this week, Peter Burris, head of research at SiliconANGLE Media as well as the General Manager of Wikibon Research. Our next guest is Hari Sankar, who's the group Vice President of Enterprise Performance. Welcome to The Cube. >> Thank you. >> Thanks for joining us today. So, one of the things that you're in is performance management, but in a different way, kind of a CFO perspective. >> Hari: That's right. >> Which this show is all about, ROI, total cost of ownership. But Oracle has a lot of software, finance software. First, take a step back and spend a minute to describe what is performance management and your role at Oracle. >> So traditionally, performance management is really about how finance sort of manages the overall business performance of a company. It's about forward-looking things like planning, forecasting, and budgeting. It's about, sort of, backward-looking things like okay, our quarter is done, how do we close the books and how do we report the numbers, both internally, for management reporting purposes, and externally, to the street and various stakeholders. So there is the compliance side to it. There is a strategy side to it, and these things have been traditionally what is performance management. What we are seeing now is that kind of discipline is now going beyond finance into operating lines of businesses, sales and marketing and manufacturing and so on. >> The, the-- >> One of the things, sorry, John, I think one of the things that is really interesting, especially in light of this show, is as we go through a process of digital transformation, where data becomes one of the most important assets in the business, that means that the asset specificity, to use a finance term, the degree to which an asset has only one use, starts to go down because you can program it. So marketing, sales, all the assets, intellectual property, data-oriented, that they've been developing over the years now can be brought under the umbrella of Enterprise Performance Management. >> That is absolutely true. That is absolutely-- >> So how is that happening? >> So part of how this is happening is let's say you are a marketing organization. You are spending $50 million on digital marketing. Now, there is a desire on the part of the marketing department to sort of manage that spend more diligently, with more discipline and rigor, just like finance manages any other line item in the budget. There's more desire to provide transparency to the business, in terms of here's where we are spending it, and here's where we are getting returns, here's where we are perhaps not getting returns. So that is the planning part of it, and then there is also the reporting part of it, where we are seeing the emergence of the concept of narrative reporting, where you are saying hey, look, I'm not just going to distribute numbers and charts to my stakeholders, whether it's inside the company or outside, I'm going to give them context, I'm going to give them commentary on these numbers.
If there is a variance, I'm going to tell them why it is there. Do I expect this variance to be there next quarter? What am I doing about it? So, it sort of brings those numbers to life and avoids that back and forth that typically happens. >> How much is performance management moving out of the CFO function? And I want to get your take on how IT is becoming not just a functional shared-resource cost, but is now integrated across the whole company. Mark Hurd had tweeted yesterday on Twitter, "As more CEOs and CFOs understand the potential of the cloud, CIOs are going to get a lot more help," implying Oracle is going to help them. But it brings up the point that the CIO now is brought into the CFO conversation, they always have been in facilities and whatnot, but now from a business perspective their contribution is significant and now co-mingled with it. Do you see that trend happening, and what does that mean for the software side of it? >> We're definitely seeing that trend happening. For example, the most important new term to come out in finance in some time is the notion of digital finance. >> John: The notion of what? >> Digital finance, right? So this is really about, whether you call it digitalization or not, digital finance, digital marketing, digital sales. So this digital business idea sort of elevates this role of the CIO because, as you said, data becomes a very, very important asset in terms of how you fundamentally drive innovation in your business, and so that digital notion is sort of elevating the role of the CIO. And in the context of performance management, as you see this spread beyond finance into other lines of businesses, other lines of businesses are starting to be more disciplined and rigorous in how they sort of measure their performance, how they manage their performance. There's also a need to connect the dots across. You know, if I'm doing a marketing plan, which is an important element of my overall spend, if there is a fluctuation or change, a big change in my marketing spend, that needs to be reflected back in the finance budget. So connecting the dots and aligning the plans across different functions is becoming a big priority as well. So you're seeing a lot of important changes happening.
Remember when we talked about how Oracle's going to have to bring a lot of the IT group forward in its new transformation. >> This is it right here. >> Absolutely, but I'm going to throw you a little bit of a curve ball. I hope I'm not going to throw you a curve ball but its a very, very important point. As the IT organization, or as increasingly, the methods that we use to create digital assets, and increasingly also products, they're iterative, they're empirical, they're opportunistic, they're agile. That the traditional, year-long budget that says you have a certain money to spend, and you spend it or it goes away and you better not fail with this money, comes under attack by Agile, and I know a lot of CIOs that I talk to are trying to reconcile the impedance mismatch between Agile and Sprints, and being opportunistic and recognizing when something isn't working, and the CFO who's still talking about annualized releases of money. So I've always felt that you could not reconcile those. You could not bring those two points of view forward without EPM. Are you seeing that as well and how are you helping it? >> Yeah, we're definitely seeing this because this older, you're absolutely right. The old notion of let's make a budget once a year, get it right, and execute on it for the rest of the year, we are seeing that seeing that fading really fast. What people are saying is, look, plans are made only to be changed. Let's not fixate on getting the perfect plan in place. Let's start with a reasonable plan with the assumption that it's going to tweak and iterate and change many, many times over the year. So the focus is now on, less on getting it right the first time, more on how do we make dynamic changes to it in an agile fashion, just to your point. >> And reflect those changes throughout the entire cost-- >> And into finances-- >> Back into finance. >> It all comes back to finance. >> It comes back to finance because at the end of the day, let's say, take a simple example of a manufacturing company-- >> Paul: Finance is the language of business. It still is. >> End of the day, your business performance is measured in dollars and cents. I mean, period, right? So, let's say, your product mix changes because your customer demand is changing. That needs to be reflected back into finance, in terms of, okay, are we making more money or less money? Is it more revenue or less revenue? That needs to be reflected back, and so we're definitely seeing, in fact, the tagline for Enterprise Performance Management that we use these days is enabling business agility. So two parts to that, driving agile decisions, to your point, the second is, once you drive those agile decisions. Let's say I decided to expand into a new business and I did an acquisition. Fast forward six months, you need to reflect the results of that combined entity into your financial results, do it quickly, do it in a way that is correct and you're confident about the results and that's the job of finance. So it's agility of operations, agility in decision making, those two have to sort of come together. >> So here's my question then. I love this conversation because I think this speaks to the full-closed loop of Cloud and DevOps and the innovation around Agile. How much flexibility is built into the software, and I'm kind of going with the database route for a second, systems of records, schemas in database 'cause business plans can say it once a year and it's failing, I agree, I can see that failing. 
But, also, fixed schemas, can fail too. Well, I don't want to add the new data in 'cause the database can't handle it. I've heard that from developers before. Again, it slows the things down, so as you move from systems of record, which can be fixed and tweaked, the engagement data is the business engagement gestures. So how is that factoring into your software? You guys see that and is this AI Bot revolution and the machine learning, the smart software after engagement. Can you thread that through and explain how that fits? >> Let's start simple and sort of get a little more sophisticated quickly. The first things is we are seeing a lot more people come into the planning process than before. The old model was finance did the planning for other people. Now, people are doing their own plans, then sort of feeding it into the overall plan. People intentionally are pushing that because they want plans and decisions to be made closer to the point of action. Secondly, there is a greatest emphasis on driving fact-based decisions. For instance, we are working with some large consumer goods companies where they are saying, look, don't come here and tell me that I'm going to spend 10% less on this large line item compared to last year, Throw the last year's budget out and do a zero-based budget. I mean, zero-based budgeting is not a new concept. It's been around, but it's getting a new lease of life because in industries where profits are on the squeeze, they are saying "Look, I don't want "to do the traditional budgeting. "I want to go to a zero-based budget." >> Because they get facts that are surfacing faster. Is that kind of the premise? >> Facts, but more over to the performance of the business. >> That is definitely true. The facts that are surfacing faster, and, therefore, I want to give the tools to make use of those facts to the people who are closer to where they are surfacing. >> John: This is a digitized business in that scenario. >> Definitely true. >> Everything's instrumented. >> Good value. >> Hari: Yeah, definitely true. >> We always say on The Cube, I mean, this is the first time in the history of business in the world that you can actually measure everything. >> That is absolutely true. >> If you want to measure everything, you actually can do it. >> That is absolutely true. >> Now the CFO, which was once the measurement system, has to get integrated in. Am I getting this right? >> You are getting this right. You are getting this right. And the other part of your question is about okay, how is intelligence coming into, so some of these decisions over time, if you see a pattern, they can be perhaps automated. Plan adjustments can be, maybe some elements of plan adjustments can be automated, but I don't see finance going that far. That may be taken as an input. Maybe a recommendation comes from automated intelligence, and people will sort of take a look at it and say, "Hey, I want to go with this because it makes sense, "or I'm going to override it this way "because this doesn't take into account "what I'm planning for in the next quarter." >> Yeah, what scares me, though, in the whole bot thing, I mean, this is not a dis on Larik, I love the vision, it's got me all excited, is if they try to get too AI before they actually build the building blocks, they really can get ahead of themselves. So, you can see that head room, for sure, but a lot of companies are kind of in that planing mode. Is that true? 
What's this progress bar of customers right now who are into this, are in the software? I mean, track bots are great for certain things, but you can't really automate AI yet and everything. Or can you? >> I think there is probably a class of decisions that can be automated, but when it comes to finance, and finance tends to be conservative and for good reason, they definitely see the value of recommendations based on data, based on real-time data, but they still want to have the controls. >> [John} Got It. >> So that's kind of the mindset that we have seen. >> So real options valuations could really, really be helped by AI. But at the end of the day, you have to be able to close the books, and you don't need AI to help you close the books. >> This is a fascinating conversation. >> If I can add one quick conversation, just a quick point, as Enterprise Performance Management starts to weave its way into other parts of the business, institutionally, does that mean we're going to see controllers start to end up in different functions? >> Hari: (laughs) IOD of controllers? >> As a human interface that goes along with the system so that it works together. >> It's a definite possibility, right? Because if you're planning as rigorously in marketing as in finance and if you aremeasuring and reporting as rigorously in sales as you're doing in finance, maybe there's a sales controller function that becomes a legitimate need. But at the end of the day, today, you focus so much attention on reporting your numbers to the street. You focus attention on precision and accuracy and confidence in all of that. Why is that not a requirement for internal Reporting? >> It's the same argument when we talk about the technology of a structure. You move the computer to where the data is. You could move the controller where the action is, to your point earlier. It's a fascinating conversation, Hari. Thanks for sharing the insight. Love to do a follow-up on this because I think this really connects the language of business and kind of validates the digital fabric of digitization. But quick, I want to give you the last minute to give an update on the business, how you guys are doing. This is a pretty big deal. How's your business results, what's down the roadmap, what's the sales going to be like next month? I'm only kidding, I know. (all laugh) >> Sure, sure. I think the cloud has been a really game changer in this business. What the cloud has done has lowered the bar where we're seeing many mid-sized businesses start using Performance Management best practices, just like larger companies. We are seeing divisions or functions inside of larger businesses using Performance Management software for the first time. So there's a big market expansion, and we are seeing an expansion across other lines of businesses outside of finance. We are certainly seeing that. We are seeing that, you know, we introduced our first Cloud software in Enterprise Performance Management about two and a half years ago. At that time, we were not sure how the market update was going to be because we said finance tends to be conservative. Are they going to be comfortable doing their aggregated planning in the cloud, or are they going to be comfortable doing, reporting things in the cloud? We've been sort of pleasantly surprised by the willingness of finance, helped in part by the success the companies have had in deploying HR software in the cloud or CRM software in the cloud and so on. So the cloud has taken off. 
We have well north of 1,000 customers that have picked up EPM software in the cloud. We are very happy to see 100, 150 deployments go live every quarter, and we are seeing use cases in marketing, we are seeing use cases in HR of strategic workforce planning or marketing spend planning happened using EPM-style software. So, happy to see mid-sized businesses see real value from planning. >> John: Good integration capabilities? >> Good integration, I'm glad you mentioned it. Very good integration back into, for example, if you have financials in the cloud and EPM in the cloud, there are nice linkages between the two. So four teams are very important to us. We are seeing pervasive use of EPM software. We are seeing agile operations helped by EPM software in the cloud. We are seeing connected operations, whether it's the backbone systems or across functions. And we are seeing people take a sort of a comprehensive view of this, whether it's across functions or across processes. >> This is fascinating. We could go another hour. This is a really interesting topic because I think it really highlights a fact that, what we always say in The Cube is, you can provision technology faster and you get time to value certainly as the customers start to be creative and implement it. They get to actually put it to work and get the data around and behind. So thanks so much for spending the time on the insights on the EPM. We appreciate it, thank you so much. >> Thank you, I enjoyed the conversation. >> Okay, you're watching The Cube, live coverage here in San Francisco at Oracle OpenWorld 2016. I'm John Ferrier with Peter Burris. Thanks for watching. (upbeat synth music)

Published Date : Sep 21 2016

SUMMARY :

Hari Sankar discusses with John Furrier and Peter Burris how Enterprise Performance Management is expanding beyond finance into marketing, sales, and other lines of business, driven by digital finance, narrative reporting, and zero-based budgeting. Planning is becoming continuous and agile rather than a once-a-year exercise, and Oracle's EPM cloud, introduced about two and a half years earlier, has passed 1,000 customers with 100 to 150 deployments going live every quarter.


Keynote Analysis - Oracle OpenWorld - #oow16 - #theCUBE


 

>> Announcer: Live from San Francisco, it's The Cube, covering Oracle OpenWorld 2016, brought to you by Oracle. Now, here's your hosts, John Furrier and Peter Burris. >> Welcome back, everyone. We're here live in San Francisco for SiliconANGLE's theCUBE, our flagship program. We go out to the events and extract the signal from the noise. We are here at Oracle OpenWorld 2016 on the ground floor in the exhibit hall, big booth, big studio, breaking down Oracle OpenWorld. Of course, Larry Ellison just gave his keynote. He does the opening keynote Sunday night before the event and then saves his best for the keynote right in the middle of the afternoon, from 1:30 to 3:30 on Tuesday, the second day. And so, we're going to break it down. I'm John Furrier, the co-CEO of SiliconANGLE Media here. Peter Burris, Chief of Research at SiliconANGLE Media, and also General Manager at Wikibon Research, and Rob Hoth, Editor-in-Chief of SiliconANGLE, heading up the editorial publication we're going to be expanding on. Guys, let's get into it. So Larry Ellison, he must have been tired Sunday, and he must have seen our tweets about upping his game a bit. He delivered an epic performance. He really came out, guns blaring, Amazon Web Services clearly in his sights, being aggressive. Peter, your thoughts on the tone. Is this the new Oracle, or what's your take? >> Larry was pumping so much energy into his speech that he actually overflowed a bunch of buffers, and as a consequence, it was very, very halting when it came in over the TVs here. I think in general, he went back to his playbook, or he's going back to the Oracle playbook, and the Oracle playbook historically has been: we're more open, we are faster, and when you combine those two factors, we will be cheaper. So they ran some benchmarks. He showed how Oracle on Oracle is faster, how everything on Oracle's faster, how Oracle on Oracle and not everything else is faster, and how ultimately that turned into longer term, cheaper operations. So it looks like Oracle remembers what got it to where it is 20 years ago, 25 years ago. In the last big transition, or one of the last big transitions, it sounds like he's kind of going back to the playbook and running it again. >> The guns are blaring. Rob, you were in the session, you've been out scouring for stories. Your headline, just posted at siliconangle.com, says, "you're locked in, baby, Oracle's Larry Ellison redoubles attack on AWS." Your thoughts on the keynote, what was the vibe, what struck you? >> Well, I mean, obviously Larry was really on his game there. He, as you said, was a little off Sunday, but this time he really came out blazing, guns blazing as you say, and he really attacked Amazon on a number of fronts: performance, openness, which is kind of ironic, isn't it? For Oracle? So, you know, he's obviously gunning after Amazon, or at least wanting to appear to. What I wonder, though, is what that means. I mean, is he really going after Amazon, or is he trying to set some sort of tone for the customer, to say, "look, if you're even thinking of Amazon, you got to look at us first, right?" >> And certainly on Oracle on Oracle, I heard things like "our stuff is faster," "our stuff works" end to end, back to the drumbeat of "our code is identical on premise, identical on cloud, it's the easiest way to move to the cloud," obviously Oracle Cloud, not the cloud. De-positioning Amazon as being locked in and closed is interesting.
I thought it ironic, too, Rob, that's the first thing I came to. It's like, a lot of people have accused Oracle on their run, after they started getting escape velocity as a venture, and when they really ran the table on the market, a lot of people were looking at Oracle as a lock-in. Oracle's database was so important, they are buying companies, people saw all that goes on in the history of their, customers felt locked in, so it's ironic now the shoe's on the other foot. >> And not only that, he even accused them. He said, "you're locked in, baby, and if they want to raise their prices, you better get out your checkbook," and I thought, "isn't that Oracle's playbook?" (laughing) >> So Peter, that is the playbook on the licensing side. We've heard from customers that the licensing has always been a sticky issue. We know VMware has had that challenge with the Hypervisor, which Oracle's now announcing there's no Hypervisor on their network virtualization. So how do these companies make money? Because certainly it's a shift of the dollars. Mark Hurd said 80% of IT spending will be in the cloud by 2025, so is there a ratchet, is there a sticky factor for Oracle to actually maintain that revenue growth? >> Well, as we talked about yesterday, there is nothing stickier than the application. When a business reconfigures itself to run an application, it is extremely hard to take that application out without dramatically disrupting the business. You can take out a database manager if you can move that data to some other structure, some other mechanism, and still run the application. You can certainly change hardware, and you will be able to change (mumbles) providers. It won't be fun, it won't be easy, it'll be expensive, but you can move data around and stand new instances of things up in other places. But the most sticky thing, good or bad, is the application. So as Oracle goes forward, there's no doubt that it's going to talk about how Amazon is trying to lock people into its platform and some of the services that it's coming out with. But most of the businesses out there are mainly focused on whether or not the applications that they've either got from Oracle or have built on top of Oracle databases are working. And one point to make here, John: I have never met with a CIO or a senior IT person who has ever said to me, "I really hope Oracle bones it in this next transition, because I'd like to be able to throw them out." Nobody is looking for Oracle to lose. Most of these companies have so much invested in Oracle that they don't want to go through the pain and suffering of Oracle not succeeding. They would certainly like to have alternatives, and they would certainly like Oracle to modernize its practices so that it appears and presents itself more as a cloud supplier, but this is overall a good thing for customers. >> I would agree with you, but I'd make a point: Cisco is one of those other companies that in the early days had that stickiness with the routers. You couldn't just pull one out; it had a nice nestedness into the fabric of every business they did business with. Oracle's the same way. But it's interesting, I find the tone, Rob, that you were mentioning, that they're going after Amazon, and to quote your article, "Oracle's cloud runs 24 times faster for analytics than Oracle on AWS." Now you're talking about Oracle on AWS. To your point, he's trying to keep the customer saying, "don't move to Amazon."
And then the other thing, I was taken aback by the Redshift comments. He's going after Redshift, you pointed that out-- >> And Aurora. >> And Aurora, but Redshift is interesting. Redshift is the fastest growing service on AWS. Andy Jassy's told me that directly, and so he kind of did a nice little trick: he de-positioned Redshift as great for analytics, but horrible for online transaction processing, the core for-- >> Which it's probably really not made for, right? >> Yeah, I wouldn't say that was Redshift's position. >> However, it comes back down to what Amazon's all about. We speculate at the Cube that we don't yet know what Amazon will become, and the behemoth that they might be given their success, so I see Oracle really trying to kind of de-position Amazon as it gets more territory in their accounts. So yeah, I don't think Oracle looks at Amazon as a replacement, that they will die, but the disruption factor coming from Amazon certainly is being felt by Oracle, would you agree? >> Oh absolutely. And, well, the disruption, look, nobody, or very few people, looking back 12, 13 years ago, would have said that Amazon was going to become what it has become in a lot of different markets. Jeff Bezos has demonstrated that he can get his troops to focus in and get very, very complex things done and have an enormous impact in a lot of different industries. Right now we are all wondering what those new industry structures are going to look like that are going to be the dominant institutional forces in the next 10, 15 years. And it's clear that Amazon has identified what one part of that institutional basis will look like, and Oracle needs to respond. And that's what they're doing. And they seem to be doing it well, but it's going to be a long, long, long haul. >> Well, let's bookmark that, 'cause in this segment I want to get into what's not being talked about here at Oracle OpenWorld, and we'll get to that in a second, later in the segment, but let's keep on the theme of what we're hearing. Rob, you're out with your notebook, you're talking to people. What's the general point of view, what's the general consensus of your findings as you interview folks just after the keynotes? Is there a tone, is there a certain thing you're hearing outside of their messaging, which is pretty clear, "Oracle Cloud all the time"? What are some of the things that you're finding in your reporting? >> Well, there is some sense out there of, you know, questions about Oracle's commitment to cloud, especially the infrastructure part, and whether they're trying to position themselves, but not necessarily being completely serious about really taking on Amazon, and so-- >> As a red herring, or more of a posture, or legit competitor? >> Well, I don't know, but the doubts are kind of interesting. You know, they're obviously not spending as much on R&D, on production of data centers, and so people look at that and go, on the one hand, the investors say, "maybe I don't want 'em to spend so much, 'cause they're going up against an entrenched competitor in Amazon," but the customer is probably, you know, "I want more." So they're a little doubtful, I think, about how far Oracle's going to go. >> Any other findings from other Oracle executives? Is there anything that's jumping out at you in the hallway conversations? >> No, they're pretty consistent in their messaging.
(laughing) Needless to say, that's one of their strengths, but there was a Q&A with Thomas Kurian. People were trying to lead him off on saying things, and he was not taking any bait, he was completely on message, and-- >> Well, the Oracle executives are very strong with holding the line, the party line. We do get some nuggets on the Cube. Juan Loaiza came on, and again, he didn't really reveal anything confidential or out of bounds with (mumbles) messaging, but he brought the database piece, and was really seeing that the database piece is going to be much more of a broader perspective, that's my interpretation, and I was saying earlier that getting out of those swim lanes is a key message. But it's interesting, I mean, I think the customers, what I'm hearing in the hallways, Peter, is "what's the impact to the customer?" Right? Like okay, the buyers. So there's no real, no one's running for the exits with respect to Oracle, so I agree with you there, but there's definitely an investment criteria going on with the customers around the future that they need in their architecture, whether that's a hybrid multi-cloud environment, and then ultimately, the fear of being foreclosed from opportunity. So as customers think about the future, they just don't want a foreclosure situation where there's no end room, so they have to go outside of Oracle, if they have to. So I think that choice option is interesting. So I kind of see the difference as more of a fun factor-- (crosstalk) >> This is clearly the most important thing on the table: new workloads. The second most important thing on the table is, if the new workloads go to the cloud, which it appears that they are going to, and they end up more in Amazon, that's going to create a center of gravity, and that's going to have an impact on existing workloads, and there are few companies that have more to lose if existing workloads move into Amazon's cloud than Oracle. And so Oracle needs to intercept this, they absolutely need to intercept this. But it's also the right thing to do for the customer base. My guess is that they're figuring out how to transition the business models, you know, revenue comes down here, goes up there, how do we do it so that everybody wins? That's a very, very complex management undertaking, especially given that there may be a CEO change on the horizon at some point in time. But the bottom line is: get the new workloads, make sure the center of gravity doesn't move too much, and keep your customers. >> I was out last night at the press event, also there was an Accenture party, and I bumped into a few folks, and I had an interesting conversation with one, and it's something we've talked about on the Cube a little bit, but I'll bring it out here. They said, "John, what do you think about Oracle and Amazon, and all this stuff?" Which, you know, I was a little bit socially lubricated at the time, and I said, "Hey, someone's going to be a Blackberry in this equation." They're like, "what do you mean?" I go, "well, the Blackberry had all the features of the phone, they had email, they had browser support, they had a huge adoption installed base of phones. When the iPhone came out, that was the game changing shift for applications, so every net new mobile app, now called mobile first, was really built for a computer, AKA the iPhone, and then the Samsung and Android, so all new applications essentially were written for the iPhone and Android." So they're like, "where are you going with this?"
I'm like, "okay, if you're an enterprise, "every net new application or cloud native application "will be written for the cloud. "My belief is that why wouldn't you "build an application for this next gen architecture? "Why would you even do it on a prim unless it was "some specific requirement or outdated software--" >> Edge computing, there's some other things. >> There's some specific enterprise things that will always be there, but every net new application, or Greenfield application, (speaking indistinctly)-- >> Peter: Who's going to have-- >> Is going to be in the cloud, so if you believe that argument, that means there's going to be a tsunami of action in cloud, period. That means everything's going on the cloud. So if you believe that then it's a simple scale game, so that's going to be share taken by the cloud guys, so who are they? Oracle's kind of new to the cloud, and it's only really Oracle Cloud, so Amazon's been getting the lions' share of that, so Google's ramping up for that with Diane Greene, and you've got Microsoft. Your thoughts on that Blackberry that is, will someone be the Blackberry of cloud? >> It's an interesting analogy. I think that there's going to be some early, let's put it this way, if there's going to be a Blackberry of the cloud, it would be Amazon. And I don't think Amazon's going to be the Blackberry of the cloud. Right? >> Rob: Not anytime soon. >> No, because the Blackberry was the first one to come out that said, "look, we can add more functionality "than what you normally think about a phone," and along came Apple, and said, "you know what, "we're actually almost anticipating a treason. "We can turn the phone into a piece of software "that runs this handheld computer." So I don't think that Amazon is likely to be the Blackberry. Now the question is will Oracle be a Blackberry as a consequence of the cloud. And again, there is, businesses have invested so much in their core enterprise applications that they are configured around, that the cost to rip them out would be so great, and the benefits would be so modest unless Oracle does a faceplant of absolutely epic proportions, I don't think it's going to happen. >> It's not a clean analogy, but I do remember people having two phones, 'cause work had a phone that was a Blackberry, and the other one's iPhone, but it's a hard question, but here's-- >> But the cloud is the iPhone. In your analogy, the cloud is the iPhone. >> Yes, so it's a hard question, right, so we can pontificate, but here's the thing that I want to ask you guys both, 'cause it's a hard question, because it's early to provocatively bring this out, but what would be the tell signs for the Blackberry? One, it's large pre-existing condition. Right, install base. Clutching and holding on to the old way. And trying to be new. Blackberry tried to be cool, but never really realized, let's just go and complete iPhone clone-- >> So what was the centerpiece of Blackberry strategy? That core, fundamentally core, enterprise telecommunications app that handled email and phone metrics. The telltale thing that Oracle's doing something wrong, quite frankly, the first one would be that they start cutting people out of the ecosystem. That they start going toward what you were talking about, was that the suite becomes more important than the innovation. Again, I don't think that's going to happen. 
I'm encouraged by the fact that this very comprehensive announcement so strongly features ISVs and partners, which, John, is a really, really important thing to be looking for. Does the suite become more important than the innovation? >> That's a great point, and the other thing that's interesting, too, is the whole workload conversation, because if you bring this kind of analogy together, the Blackberry ran workloads, it ran email, and so those workloads were highly efficient on their device. >> Well, remember, actually Exchange ran the email, Blackberry ran the presentation, so the Blackberry application was simply taking something that ran somewhere else, so it was easier to displace it when somebody came along with an alternative. >> There's a lot of holes in the analogy, but it does ring true, because we all know what happened to Blackberry, so the question that's on the table is, you got to transform or die, right? I mean, clearly a lot of stakes are at risk here, so interesting conversation, so-- >> So the question is, is Oracle being proactive and aggressive enough, not just on the marketing front, but in innovation? Because they sound defensive. Even Hurd yesterday, Mark Hurd yesterday, was painting the, you know, we basically have a situation where IT is not growing, traditional IT, so we have to get into these new things. But are they? >> I think they're defensive, good point, and I think my observation, from doing all the Cube interviews and covering Oracle as deeply as we have been, is that I think they're being defensive for reasons of not making enough progress, and that's not a function of Oracle, it's a function of their build out, and Ray Wang pointed out that the progression of building the data centers, and Larry's presentation, hangs together if you're an Oracle customer. If you're an Oracle customer, everything he said on stage actually makes a lot of sense, totally no problem with that. The other thing that Oracle's managing is the public perception on Wall Street around their growth prospects. I think they're holding the line. They're over-amplifying; you watch the CNBC interviews, you watch Bloomberg, Mark Hurd is messaging like a politician, there's no real substance to any future indication of the strategy. Now, we all know what the strategy is. They are not even close to being ready to get to the cloud, and that's why they're staying with their core base first, but they are getting into a position to be set up for a siege where they bring their database customers to the cloud. That's where the game will get interesting: when Oracle starts really having those foundational building blocks of infrastructure as a service, PaaS, and SaaS set up on Oracle first, then the net new application metric, the net new customer metric, becomes important. Then you start to see the real war going on, where then it's a frontal attack on Amazon.
The marketing cloud comps that we had earlier, and some of the other new stuff around Oracle is pretty exciting because they're talking about design, user experience, they're talking about some of the real interaction, engaging components of their software towards the front end, near the consumer. On the existing Oracle, I've said this on the Cube, going back to 2010, Oracle's like plumbing and pipes, it runs the water, it feeds everything into the enterprise, why would you want to rip out, replace something that's already working? What are you adding onto the plumbing? So as a utility, Oracle has a utility effect on some of these core systems, whether it's CRM, ERP, CM or whatever, I get that, and I don't think that's at risk. I think if better plumbing comes along, (laughing) >> But here's-- >> It's another-- >> Well that's very true, but here's another way of looking at that exact point, John. In most businesses, your ERP application really is your infrastructure, it's not your servers, and your storage, and your middleware and your network. From your Board of Directors' standpoint, from your CFO's standpoint, for most of the business, the infrastructure is the ERP application. So when we all talk about infrastructure as a service, we're talking amongst ourselves about the role that Amazon's going to play, and it's important, they're having a major impact, new way of thinking about infrastructure at a technology end. But from a business standpoint, they're not looking at Amazon necessarily and saying, "oh wow, let's go there "because I can get a bunch of virtual machines." >> I mean, you said it earlier, the user's the center of the conversation, the customer, and here's my acid test for kind of the monkey business that goes on between the suppliers, and it comes down to this: whoever can enable value will do well. And customers don't mind paying for value, right, so the value equation is interesting. As a platform, if you're a platform as a service or whatever platform you are, you have infrastructure that's hard or whatever happens, if you are creating value and enabling value for the customer, in whatever form, ISVs, developers, other things, customers will pay for it. And what I'm hearing is customers are afraid that that enablement will be constrained somehow and boxed into a framework-- >> The suite verses innovation >> Bingo. >> Argument you made yesterday. And it's a really good point, and Rob, I'd like to hear what people are saying on the floor, 'cause you've been wandering around more than John or I have, about this point specifically. That tension between where I am today and where I want to go, and whether or not they see Oracle, you said earlier, investing enough to maintain that stream of innovation that's becoming so important to articulating the next generation of what technology looks like. >> Well like I said there's a lot of uncertainty out there. As you said, I think a lot of them don't want to move off of Oracle, they just want to be able to go to the next thing that they need to do, analytics, big data, that sort of thing, and you know if Oracle can provide that, they're going to go for it, right? I mean, why not? >> Peter: Right, and you're absolutely right. >> So my question is, not knowing the technical ins and outs of what they're doing on that front, is are they going far enough on that front. I don't know yet. 
>> Rob, let me ask you a question for the folks watching, I know you're out getting stories, people always trying to get us to look at their stuff, get attention. As someone who's doing the reporting out there and leading the editorial for its Silicon angle, what do you look for when you come into Oracle OpenWorld, you come in in objectively looking at the signals, what do you look for in stories, how do you take the size of the show, because now Amazon started this, these shows are so big now, there's a slew of announcements, I mean we were talking about the number of releases going out alone, okay, there you go, there's 12 releases, how do you vet, how do you make that decision editorially to where the stories are? >> Well it is overwhelming, I mean, I kind of drowned on Sunday night, and I think Larry did, too, you could tell he was sort of saying, "oh no, another slide." But, you know, in the end they were mostly announcing customer relationships, partner relationships, some new technology. I look for, one, the new technology, I want to know what's real here. You can't know from a press release, but you can get a sense of that. But, you know, a lot of it's a business aspect, this has to work for businesses, and so, I want to know is this going to move a needle on Oracle, on their growth, is it going to keep 'em in the mix, that's what I care about. >> Great, thanks for that. Peter, what's missing? What are we not hearing here on this show that you expected to hear more of, or you think is an area that Oracle will have to flesh out as they go forward? >> Well, so we heard this really nice comprehensive vision of Oracle moving into the cloud and moving their customers into the cloud, and very importantly, their partners into the cloud, so that's really positive. What we didn't hear as much of is two things. And one is really crucial to the strategy, and maybe you heard some of this, Rob, but I'll do the first one and then that one. The first one is we're only hinting at how developers are going to do things differently as a consequence of Oracle's moving into the cloud. We're hearing, "yes, we're going to support "all of the languages," and "yes, we're going to do all the, "the database is going to be there." But just hints. And the developer ecosystem is still something that everybody's making a play for, nobody has really put their stake in the ground and said this is how we're going to do it. Amazon's play is, "don't worry about IT, come to the cloud, "get what you need, build your application, "make your business happy." Oracle is, and this is segueing to the other point, Oracle is more of a traditionalist in that they've got developers, they want to give them the tools that they need, bring their tools along, I expected to hear more about that. But the number two is in many respects, as I just said, Amazon's play is "okay, IT, if you want to marginize, come work with us." With business, "if you don't want to wait for IT, "you have another option to come to us directly." Many years ago IBM tried to play the hand that they were going to bring the IT professional along with them as they went through a transition. And they did a pretty decent job of it. This time, it's pretty much up to Oracle, I would say, to bring IT, the traditional IT manager along with them. So they're not only modernizing Oracle, they're in many respects modernizing their traditional customers. That's not necessarily, that's not going to be a particularly easy job, but a lot will go along with it. 
And we'll see the degree to which Amazon starts to fight not just for the hearts and minds of business, but the hearts and minds of technology, and the IT people as well. It's an interesting dynamic to see-- (crosstalk) I'm sorry, to answer your question directly, I expected to see more about how they were going to bring along IT people. >> Yeah, and I agree with that developer thing, they have Java one, so when the people say, "hey we're all developers," the comment I heard was, "oh they're all at Java one." Here's what Oracle has to do in my opinion, they have to integrate the goodness of Java one into this show, because if they want to be successful in the cloud and take on Amazon and others at the platform as a service level, this is a new middleware, I've said this before, Thomas Kurian, he knows middleware, and I guarantee you the database guys know exactly where the action's going to be, they're going to beat the four to five the pass layer, with developers and their ISV, so that's existing ISVs and new developers, and I hear zero value proposition coming out of this show around that particular piece. I think it's a major area of improvement Oracle needs to do, and if they want to win the hearts and minds of the developer, that is key, because the cloud is about DevOps, you can automate away IT, and bring them along, but applications, the core bread and butter of Oracle, that takes advantage of the database is coming from developers. >> John, I think that's a great play, we heard in the Cube yesterday, so where is the line between IAAS and PaaS? It's kind of a blurring, and we say it's, well, which is it going to be, PaaS of IAAS? And when the Cube guys came back and said, "we're thinking it's going to be PaaS." Which it may very well be. But it's not what the interest is thinking right now. That's going to be up to Oracle to make that PaaS. >> I just (speaking indistinctly) to end the segment is that Ray was commenting, and I agree with him, and I think your point about the coherent message. Oracle doesn't want to over-rotate and get ahead of their skis on this one, becusae they're sequencing this play out very carefully, a lot's at stake, foundational build the building blocks, get their existing infrastructure of service built out, then you're going to start to see the game change, so I think Oracle's doing, I think, the right thing, and their progress I don't think has anything to do with Oracle, per se, they have build out issues, and it's still early, so I'm not going to judge 'em on that, I like what I'm hearing, and they're doing Oracle on Oracle first, and then attacking the competition with the fudd to try to set expectations, and again, Hurd is keeping Wall Street at bay, kind of keeping them down, while they tool up and build out, so yeah, great stuff. Rob, thanks for coming on and sharing your notes from the reporter notebook out in the field writing stories, he just wrote a post on the keynote just now, and the headline is my favorite headline of the week, "You're locked in, baby, Oracle's Larry Ellison "re-doubles attack on AWS." That really is the top story here at the keynote. Peter, thanks for the commentary. We're going to talk about more live coverage here in the Cube, live in San Francisco, live coverage of Oracle OpenWorld. Be right back, you're watching the cube. (upbeat music)

Published Date : Sep 21 2016


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Amazon | ORGANIZATION | 0.99+
Larry | PERSON | 0.99+
John | PERSON | 0.99+
Larry Ellison | PERSON | 0.99+
Peter | PERSON | 0.99+
Jeff Bezos | PERSON | 0.99+
Mark Hurd | PERSON | 0.99+
Cisco | ORGANIZATION | 0.99+
Rob | PERSON | 0.99+
IBM | ORGANIZATION | 0.99+
Juan Loaiza | PERSON | 0.99+
Oracle | ORGANIZATION | 0.99+
Thomas Kurian | PERSON | 0.99+
Rob Hoth | PERSON | 0.99+
Peter Burris | PERSON | 0.99+
Amazon Web Services | ORGANIZATION | 0.99+
John Furrier | PERSON | 0.99+
Diane Greene | PERSON | 0.99+
Apple | ORGANIZATION | 0.99+
John Furrier | PERSON | 0.99+
Andy Jassy | PERSON | 0.99+
Blackberry | ORGANIZATION | 0.99+
San Francisco | LOCATION | 0.99+
iPhone | COMMERCIAL_ITEM | 0.99+