The Data-Driven Prognosis
>> Narrator: Hi everyone, thanks for joining us today for the virtual Vertica BDC 2020. Today's breakout session is entitled "Toward Zero Unplanned Downtime of Medical Imaging Systems Using Big Data." My name is Sue LeClaire, Director of Marketing at Vertica, and I'll be your host for this webinar. Joining me is Mauro Barbieri, lead architect of analytics at Philips. Before we begin, I want to encourage you to submit questions or comments during the virtual session. You don't have to wait: just type your question or comment in the question box below the slides and click Submit. There will be a Q&A session at the end of the presentation, and we'll answer as many questions as we're able to during that time. Any questions we don't get to, we'll do our best to answer offline. Alternatively, you can also visit the Vertica forums to post your question there after the session. Our engineering team is planning to join the forums to keep the conversation going. Also a reminder that you can maximize your screen by clicking the double-arrow button in the lower right corner of the slide. And yes, this virtual session is being recorded and will be available to view on demand this week. We'll send you a notification as soon as it's ready. So let's get started. Mauro, over to you.

>> Thank you, and good day everyone. Medical imaging systems such as MRI scanners, interventional image-guided therapy machines, CT scanners and X-ray systems need to provide hospitals with optimal clinical performance but also a predictable cost of ownership. Clinicians understand the need for maintenance of these devices, but they want it to be non-intrusive and scheduled. And whenever there is a problem with a system, the hospital expects Philips services to resolve it fast, at the first interaction. In this presentation you will see how we are using big data to increase the uptime of our medical imaging systems. I'm sure you have heard of Philips. Philips was founded 129 years ago, in 1891, in Eindhoven in the Netherlands, and it started by manufacturing light bulbs and other electrical products. The two brothers, Gerard and Anton, took an investment from their father Frederik and set out to manufacture and sell light bulbs. As you may know, the key technologies for making light bulbs were glass and vacuum, and when you are good at making glass products, vacuum and light bulbs, it is an easy step to start making radio valves, as they did, but also X-ray tubes. So Philips entered the market of medical imaging and healthcare technology very early, and this is our core as a company; it is also our future. We are in a situation now in which everybody recognizes the importance of healthcare, and we see incredible trends: a transition from what we call volume-based healthcare to value-based healthcare, where clinical outcomes are driving improvements in the healthcare domain; where it is not enough to respond to healthcare challenges, but we need to be involved in preventing them and maintaining the wellness of the population; from a situation in which we are only episodically in touch with healthcare, to continuously monitoring and continuously taking care of populations; and from healthcare facilities and technology available only to a few selected and rich countries, to making healthcare accessible to everybody throughout the world. And this, of course, poses incredible challenges.
And this is why we are transforming Philips to become a healthcare technology leader. Philips has been a conglomerate active in many sectors, and we have been refocusing our technologies on healthcare. We have been transitioning from creating and selling products to making solutions that address healthcare challenges, and from selling boxes to creating long-term relationships with our customers. So if you have known the Philips brand from shavers, televisions and light bulbs, you probably now also recognize the involvement of Philips in the healthcare domain: in diagnostic imaging, in ultrasound, in image-guided therapy systems, in digital pathology, in non-invasive ventilation, as well as in patient monitoring, intensive care, telemedicine, and also radiology, cardiology and oncology informatics. Philips has become a powerhouse of healthcare technology. To give you an idea, these are the numbers from 2019: almost 20 billion in sales, 4% comparable sales growth with respect to the previous year, and about 10% of sales reinvested in R&D. This is also shown in the number of patent filings: last year we filed more than 1,000 patents in the healthcare domain. The company has about 80,000 employees, active globally in over 100 countries.

So let me focus now on the type of products that are in the scope of this presentation. This is a Philips magnetic resonance imaging scanner, the Ingenia 3.0 Tesla. It is an incredible machine. Apart from being very beautiful, as you can see, it is a very powerful technology: it can make high-resolution images of the human body without harmful radiation. And it is a complex machine. First of all, it is massive: it weighs 4,600 kilograms. It has superconducting magnets cooled with liquid helium at -269 degrees Celsius. It is full of software, millions and millions of lines of code. And it occupies three rooms: what you see in this picture is the examination room, but there is also a technical room which is full of equipment, custom hardware and machinery needed to operate this complex device. This is another system, an interventional image-guided therapy system, where X-ray is used during interventions with the patient on the table. On the left you see what we call the C-arm, a robotic arm that moves and can take images of the patient while the patient is being operated on. It is used for cardiology, neurology and cardiovascular interventions. There is a table that moves in very complex ways, and again it occupies two rooms: the room you see here, but also a room full of cabinets, hardware and computers. Another characteristic of this machine is that it is used during medical interventions, so it has to interact with all kinds of other equipment. This is another system, a computed tomography scanner, the IQon, which is unique due to its special detection technology. It has an image resolution of up to 0.5 millimeters, making thousand-by-thousand-pixel images. And it is also a complex machine. This is a picture of the inside of a comparable device, not exactly an IQon, but again there is a rotating gantry that weighs roughly two and a half tons. It is a combination of an X-ray tube on top, high-voltage generators to power the X-ray tube, and an array of detectors to create the images.
And this rotates at 220 revolutions per minute, making 50 frames per second to create 3D reconstructions of the body. So, a lot of complex technology, and this technology is made for this situation: we make it for clinicians who are busy saving people's lives. Of course they want optimal clinical performance; they want the best technology to treat their patients. But they also want a predictable cost of ownership, they want predictable system operation, and they want their clinical schedules not to be interrupted. They understand that these machines are complex and full of technology, and that they may require maintenance, software updates, and sometimes even parts to be replaced, but they don't want it unplanned. They don't want unplanned downtime. They would hate having to send patients home and reschedule visits. So they accept maintenance; they just want it scheduled, predictable and non-intrusive.

So already a number of years ago, we started a transition from what we call reactive maintenance of these devices to proactive maintenance. Let me show you what we mean by this. Normally, if a system in the field has an issue, the traditional reactive workflow is that the customer calls a call center and reports the problem. The company servicing the device dispatches a field service engineer; the field service engineer goes on site and does troubleshooting, literally smelling, listening for noises, watching for blinking LEDs or other unusual signs, finds the root cause, and perhaps decides that a spare part needs to be replaced. He orders the spare part, the part has to be delivered to the site (either immediately, or the engineer has to come back another day when the part is available), and then he performs the repair. That means replacing the part, doing all the needed tests and validations, and finally releasing the system for clinical use. So as you can see, there are a lot of steps, and also handovers of information between different people, even between different organizations. Would it not be better to keep monitoring the installed base, keep observing the machines, and, based on the information collected, detect or even predict when an issue is going to happen? And then, instead of reacting to a customer call, proactively approach the customer, schedule preventive service, and therefore avoid the problem. This is what we call proactive service, and this is what we have been transitioning to using big data.

And big data is just one ingredient; in fact, more things are needed. The devices themselves need to be designed for reliability and predictability: if the device is a black box that does not communicate its status to the outside world, if it does not transmit data, then of course it is not possible to observe it and therefore to predict issues. It also requires a remote service infrastructure, or an IoT infrastructure as it is called nowadays: the capability to connect the medical device with a data center in an enterprise infrastructure, collect the data, and perform remote troubleshooting and the predictions.
The right processes and the right organization also have to be in place, because an organization that waits for the customer to call, with a number of field service engineers available and a certain amount of spare parts in stock, is a different organization from one that continuously observes the installed base and schedules actions to prevent issues. Another pillar is knowledge management: in order to build predictive models and take predictive service actions, it is important to manage knowledge about failure modes and maintenance procedures very well, to have it standardized, digitalized and available. And last but not least, of course, the predictive models themselves. We talked about transmitting data from the medical devices in the installed base to an enterprise infrastructure that analyzes the data and generates predictions; the predictive models are exactly the last ingredient that is needed.

This is not something I am telling you for the first time; it is actually a strategic intent of Philips. We aim for zero unplanned downtime, and we market it that way. It is also not a secret that we do it using big data. Of course, there could be other methods to achieve the same goal, but we started using big data quite a few years ago, and one of the reasons is that our medical devices are already wired to collect lots of data about their functioning: they collect events, error logs and sensor data. To give you an idea of the order of magnitude, one MRI scanner can log more than 1 million events per day, hundreds of thousands of sensor readings and tens of thousands of other data elements. So this is truly big data. On the other hand, this data was not designed for predictive maintenance. You have to consider that a medical device of this type stays in the field for about 10 years, some a little longer, some shorter. So these devices were designed 10 years ago, and not all components were designed with predictive maintenance and IoT in mind; the technology at the time was not that forward-looking. So the key challenge is taking the data which is already available, which is already logged by the medical devices, integrating it, and creating predictive models.

If we dive a little deeper into the research challenges: how to integrate diverse data sources, and especially how to automate the costly process of data provisioning and cleaning? Once you have the data, how to create models that can predict failures and the degradation of performance of a single medical device? Once you have these models and alerts, another challenge is how to automatically recommend service actions based on the probabilistic information about these possible failures. And even if you can recommend an action, recommending it should be done with the goal of planning maintenance that generates value. That means balancing costs and benefits: preventing unplanned downtime without scheduling unnecessary interventions, because every intervention is a disruption of the clinical schedule. A very simplified sketch of that trade-off follows below.
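To make that cost/benefit balance concrete, here is a very simplified, hypothetical sketch of an expected-value rule for deciding whether an alert justifies a preventive intervention. The function, the thresholds and the numbers are illustrative assumptions, not Philips' actual decision logic.

```python
# Hypothetical sketch of the cost/benefit trade-off behind scheduling a
# preventive action. All names and numbers are made up for illustration.

def schedule_preventive_action(failure_probability: float,
                               cost_of_unplanned_downtime: float,
                               cost_of_planned_intervention: float) -> bool:
    """Schedule a preventive visit only when the expected cost of doing
    nothing exceeds the cost of the planned intervention."""
    expected_cost_of_failure = failure_probability * cost_of_unplanned_downtime
    return expected_cost_of_failure > cost_of_planned_intervention

# Example: an alert with a 30% failure probability on a part whose unplanned
# failure would cost ~50,000 in downtime and repairs, versus a planned
# replacement costing ~10,000.
print(schedule_preventive_action(0.30, 50_000, 10_000))  # True -> plan the visit
```

In reality the decision also has to account for the disruption every planned visit causes to the clinical schedule, which is exactly why unnecessary interventions are treated as a cost rather than a safe default.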
And there are many more applications that can be built on top of this, such as the optimal management of spare parts supplies. So how did we approach this problem? Our approach was to collect a large amount of historical data into one database, Vertica. First of all, historical data coming from the medical devices: event logs, parameter values, system configurations, sensor readings, all the data we have at our disposal. And in the same database, records of failures: maintenance records, service work orders, part replacements, contracts, basically the evidence of failures. Once you have the data from the medical devices and the data about the failures in the same database, it becomes possible to correlate event logs, errors and sensor readings with records of failures, part replacements and maintenance operations.

And we did that with a specific approach: we created integrated teams, and every integrated team had three roles (not necessarily three people; there were actually multiple people). There was at least one business owner from the service organization. The business owner is the person who knows what is relevant, which use cases are worth solving for a particular type of product or a particular market, and basically what generates value and is worth tackling as an organization. Then we had data scientists. Data scientists are the ones who actually manipulate the data: they write the queries, they build the models and the robust statistics, they create the visualizations. Last but not least, and very important, are the subject matter experts. Subject matter experts are the people who know the failure modes and the functioning of the medical devices; perhaps they even come from the design side, or from service innovation, or from the field, people who have been servicing the machines in real life for many, many years. They are familiar with the failure modes, but also with the type of data that is logged, with the processes, and with how the systems actually behave, if you allow me the expression, in the wild, in the field.

The combination of these three roles was key, because data scientists alone, statisticians, people who can do machine learning, are not very effective here: the data is too complicated, too complex, so they would spend a huge amount of time just trying to figure out the data, or they would spend their time tackling things that are useless, whereas a subject matter expert knows much more quickly which data points are useful and which phenomena can or cannot be found in the data. So the combination of subject matter experts and data scientists is very powerful, and guided by a business owner we could tackle the most useful use cases first.

These teams set out to work, and they developed mainly three things. First of all, they developed insights into the failure modes: by looking at the data and analyzing information about what happened in the field, they found out exactly how things fail, in a very pragmatic and quantitative way. They also, of course, set out to develop the predictive models with the associated alerts and service actions. And a predictive model is not just an alert, and an alert is not just a flag that turns on like a traffic light; there is much more to it than that.
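To give a rough idea of what "much more than a flag" can mean in practice, here is a small illustrative sketch of the kind of information an alert record could carry. The field names and example values are assumptions made for illustration, not the actual alert schema used at Philips.

```python
# Illustrative structure of an alert record that carries evidence along with
# the prediction, so an engineer can plan a service action without digging
# through raw logs. All fields and values are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Alert:
    device_id: str               # which system in the installed base
    failure_mode: str            # e.g. a known degradation pattern
    probability: float           # model confidence that the failure is coming
    predicted_window_days: int   # how soon the failure is expected
    recommended_action: str      # proposed service action
    evidence: List[str] = field(default_factory=list)  # plots, log excerpts, trends

example = Alert(
    device_id="IGT-12345",
    failure_mode="X-ray tube degradation",
    probability=0.82,
    predicted_window_days=14,
    recommended_action="Schedule tube replacement at next planned visit",
    evidence=["trend plot of filament current", "error-code frequency, last 30 days"],
)
print(example.failure_mode, example.probability)
```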
Such an alert has to be interpreted and used by a highly skilled and trained engineer, for example in a call center, who needs to evaluate it and plan a service action. A service action may involve ordering a replacement for an expensive part; it may involve calling up the customer hospital and scheduling a period of downtime to replace that part. So it can have an impact on the clinical practice. It is therefore important that the alert is coupled with sufficient evidence and information for such a highly skilled, trained engineer to plan the service action efficiently. That means a lot of work in terms of preparing data, preparing visualizations, and making sure that all the information is represented correctly and in a compact form. Additionally, these teams gained insight into the failure modes, so they can provide input to the R&D organization to improve the products.

To summarize this graphically: we took a lot of historical data coming from the medical devices, but also data from relational databases, where the service work orders, the part replacements and the contract information live; we integrated it, and we set up the data analytics. At that point we do not have value yet. Value only starts appearing when we use the insights of the data analytics, the models, on live data. When we process live data with the models we can generate alerts, the alerts can be used to plan maintenance, and planned maintenance, replacing unplanned downtime, is what creates value. To give you an idea of the type of predictive models (I cannot show you their details), this is just a picture of some of the components of our medical devices for which we have models, for which we cover the failure modes: hard disks, clinical-grade monitors, X-ray tubes, and so forth. For the MRI machines, a lot of custom hardware, amplifiers and other electronics.

The alerts are then displayed in a dashboard, what we call the remote monitoring dashboard. We have a team of remote monitoring engineers that basically surveys the installed base, looks at this dashboard and picks up these alerts. An alert, as I said before, is not just one flag: it contains a lot of information about the failure and about the medical device. The remote monitoring engineers pick up these alerts, review them, and create cases for the market organizations to handle. So they see an alert coming in and create a case, so that the call center in a particular country can call the customer and make an appointment to schedule a service action, or can add a preventive action to the schedule of a field service engineer who is already supposed to visit that customer, for example.

This is a high-level picture of the overall data processing architecture. At the bottom we have the installed base, formed by all our medical devices that are connected to our Philips remote service network. Data is transmitted in a secure way to our enterprise infrastructure, where we have a so-called data lake, which is basically an archive where we store the data as it comes from the customers; there it is scrubbed and protected.
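As a brief aside on the correlation step described earlier, joining the data coming from the medical devices with the evidence of failures, here is a minimal, hypothetical sketch of how daily event-log features and part-replacement records might be combined into a labeled training set. The table layout, column names and the seven-day labeling horizon are all invented for illustration.

```python
# Minimal sketch (hypothetical tables and columns) of labeling device-days
# that fall shortly before a recorded part replacement, so that event-log
# features can be trained against them. One replacement per device is
# assumed in this toy example.
import pandas as pd

# Daily event-log features per device (as produced by an ETL step).
events = pd.DataFrame({
    "device_id": ["MR001", "MR001", "MR001"],
    "day": pd.to_datetime(["2019-03-01", "2019-03-02", "2019-03-03"]),
    "error_count": [2, 15, 40],
})

# Service work orders with part replacements (the evidence of failures).
replacements = pd.DataFrame({
    "device_id": ["MR001"],
    "replacement_day": pd.to_datetime(["2019-03-05"]),
    "part": ["gradient_amplifier"],
})

# Label = 1 if a replacement happens within the next 7 days on that device.
labeled = events.merge(replacements, on="device_id", how="left")
horizon = (labeled["replacement_day"] - labeled["day"]).dt.days
labeled["label"] = ((horizon >= 0) & (horizon <= 7)).astype(int)
print(labeled[["device_id", "day", "error_count", "label"]])
```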
From the data lake, we have ETL processes, Extract, Transform and Load, that in parallel analyze this information, parse all these files, and extract the relevant parameters. The reason is that the data coming from the medical devices is very verbose and in legacy formats, sometimes binary formats with strange legacy structures; we parse it and structure it to make it readily usable by the data science teams. The results are stored in a Vertica cluster, in a data warehouse, the same data warehouse where we also store information from other enterprise systems and all kinds of databases: Microsoft SQL Server, Teradata, SAP, Salesforce applications. So the enterprise IT systems are also connected to Vertica and their data is inserted into Vertica. And then from Vertica the data is pulled by our predictive models, which are Python and R scripts that run on our proprietary environment, HealthSuite Insights. From this environment we generate the alerts, which are then used by the remote monitoring application.

That is not the only application; remote monitoring is one case. We also have applications for reactive remote service: whenever we cannot predict or prevent an issue from happening and we need to react to a customer call, we can still use the data to very quickly troubleshoot the system, find the root cause, and advise the best service action. Additionally, there are reliability dashboards, because all this data can also be used to perform reliability studies and improve the design of the medical devices, and it is used by R&D. Access is possible with all kinds of tools: Vertica gives the flexibility to connect with JDBC, to create dashboards using Power BI or QlikView, or to simply use R and Python directly to perform analytics.

A little summary of the size of the data: at the moment we have integrated about 500 terabytes worth of data tables, about 30 trillion data points, from more than eighty different data sources, for our complete connected installed base, including our customer relationship management system and SAP. We have also integrated data from the factory and from the repair shops. This is very useful because having information from the factory allows us to characterize components and devices when they are new, when they are not yet used, so we can model degradation and predict failures much better. We also have many years of historical data and, of course, 24/7 live feeds.

To get all this going, we have chosen very simple designs from the very beginning; the first system was developed back in 2015. At that time we went from scratch to production in eight months, and it is also a very stable system. To achieve that, we apply what we call exhaustive error handling. Most of the people attending this conference probably know that when you are dealing with big data, you face all kinds of corner cases you would think will never happen; just because of the sheer volume of the data, you find all kinds of strange things. And that is what you need to take care of if you want a stable platform, a stable data pipeline.
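To illustrate what exhaustive error handling can mean in practice, here is a minimal sketch of a parser loop that never lets a single malformed record stop the pipeline, but quarantines it with enough context to investigate later. The log format and field names are invented for this example.

```python
# Sketch of "exhaustive error handling" in a log parser: every record is
# parsed inside its own try/except, bad records are quarantined with context,
# and processing continues. The record format is hypothetical.
import logging

def parse_record(raw: str) -> dict:
    # Assumed format: "<timestamp>|<event_code>|<value>"
    timestamp, event_code, value = raw.split("|")
    return {"timestamp": timestamp, "event_code": event_code, "value": float(value)}

def parse_file(lines, quarantine):
    parsed = []
    for line_no, raw in enumerate(lines, start=1):
        try:
            parsed.append(parse_record(raw))
        except Exception as exc:  # deliberately broad: anything can be wrong at this volume
            logging.warning("line %d could not be parsed: %s", line_no, exc)
            quarantine.append({"line": line_no, "raw": raw, "error": str(exc)})
    return parsed

quarantine: list = []
good = parse_file(["2020-01-01T10:00|ERR42|1.5", "garbled###record"], quarantine)
print(len(good), "parsed,", len(quarantine), "quarantined")
```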
Another characteristic is that we need to handle live data, but we also need to be able to reprocess large historical datasets, because insights into the data are generated over time by the teams that use it, and very often they find not only defects but also raise change requests: new data to be extracted, extracted in a different way, or aggregated in a different way. So basically the platform is continuously crunching data. Also, components have built-in monitoring capabilities. Transparency builds trust by showing how the platform behaves: people can trust that they have all the data which is available, or, if they don't see the data or something is not functioning, they can see why and where the processing has stopped. A very important point is the documentation of data sources: every data point has so-called data provenance fields. That is not only the medical device it comes from, with all its identifiers, but also from which file, from which moment in time, from which row and from which byte offset that data point comes. And not only that, but also when the data point was created and by whom, where "by whom" means which version of the platform and of the ETL created it. This allows us to identify issues, and when an issue is identified and fixed, it is possible to fix only the subset of the data that is impacted by that issue. Again, this builds trust in the data, which is essential for this type of application. (I will show a tiny illustrative sketch of such provenance fields in a moment.)

We actually have different environments in our analytics solution. One, which we call the data science environment, is more or less what I have shown so far. It is deployed in our Philips private cloud, but it can also be deployed in a public cloud such as Amazon. It contains the years of historical data, and it allows interactive data exploration and human queries; therefore it has a highly variable load. It is used for training the machine learning algorithms, and it has been designed to allow rapid prototyping on large data volumes. The other environment is the so-called production environment, where we actually score the models with live data for the generation of the alerts. This environment does not require years of data, just months, because a model does not necessarily need years of data to make a prediction; some models even use a couple of weeks, or a few months, three months, six months, depending on the type of data and on the failure being predicted. It has highly optimized queries, because the applications are stable: they only change when we deploy new models or new versions of the models. And it is designed and optimized for low latency, high throughput and reliability; there is no human intervention and there are no human queries. And of course there are also development and staging environments.

Another characteristic of all this work is what we call data-driven service innovation: we use data in every step of the process. The first step is business case creation. Some people ask how we managed to unlock the investment to create such a platform and to work on it for years; how did we start? Basically, we started with a business case, and for that business case, again, we used data.
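Coming back for a moment to the data provenance fields mentioned above, a minimal illustrative sketch could look like the following. The field names and values are assumptions, not the actual schema.

```python
# Illustrative idea of data provenance: every extracted data point carries
# fields describing exactly where and how it was produced. Names are made up.
from dataclasses import dataclass

@dataclass
class DataPoint:
    device_id: str      # which medical device produced the source log
    source_file: str    # which uploaded file the value came from
    source_row: int     # row within that file
    byte_offset: int    # byte offset within that file
    etl_version: str    # which version of the ETL created this value
    parameter: str
    value: float

dp = DataPoint("MR001", "log_2020_03_01.bin", 1042, 88731, "etl-4.7.2",
               "helium_level_pct", 71.3)
# If a defect is later found in etl-4.7.2, only the rows carrying that
# etl_version (and only the affected files) need to be re-extracted.
print(dp.etl_version)
```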
Of course, you need to start somewhere, you need to have some data, but basically you can use data to make a quantitative analysis of the current situation and also to make an estimate, as accurate and quantitative as possible, of the value creation. If you have that, you can justify the investment and you can start building. Next to that, data is used to decide where to focus your efforts. In our case, we decided to focus on the use cases that had the maximum estimated business impact, with business impact meaning customer value as well as value for the company. We want to reduce unplanned downtime, we want to give value to our customers, but it would not be sustainable if, to create that value, we started replacing parts without any consideration for the cost of it. It needs to be sustainable. Then we use data to analyze the failure modes: digging into the data, understanding how things fail, visualizing it, and doing reliability analysis. And of course data is key for feature engineering, for the development of the predictive models, for training the models, and for validating them with historical data. So data is all over the place. And, last but not least, this architecture generates new data about the alerts and about how good the alerts are: how well they predict failures, how much downtime is being saved, how many issues have been prevented. This is also data that needs to be analyzed; it provides insight into the performance of the models and can be used to improve the models further. And once you have the performance of the models, you can use data to quantify, as much as possible, the value that is created. This is where you go back to the first step: you created the first business case with estimates; can you now actually show that you are creating value? The more you can close this feedback loop and quantify it, the better it is for having more and more impact.

Among the key elements needed to realize this, I want to mention data documentation. This is a practice we started already six years ago, and it has proven to be very valuable. We always document how data is extracted and how it is stored, in data model documents. A data model document specifies how data goes from one place to the other, in this case for example from device logs to a table in Vertica, and it includes things such as the definition of duplicates, queries to check for duplicates, and of course the logical design of the tables, the physical design of the tables, and the rationale. Next to it there is a data dictionary that explains, for each column in the data model, from a subject matter expert perspective, what it means: its definition and meaning; if it is a measurement, the unit of measure and the range; if it is some sort of label, the expected values; and whether the value is raw or calculated. This is essential for maximizing the value of the data and for allowing people to use it. Last but not least, there is an ETL design document that explains how the transformation happens from the source to the destination, including, very importantly, the failure handling strategy. For example, when you cannot parse part of a file, should you load only what you can parse, or drop the entire file completely?
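A minimal sketch of these two failure-handling strategies, with an invented stand-in parser, might look like this:

```python
# Hypothetical sketch of the two loading strategies an ETL design document
# might prescribe for a partially unparseable file.
def load_best_effort(records, parse):
    """Keep every record that parses; skip (and count) the rest."""
    good, bad = [], 0
    for r in records:
        try:
            good.append(parse(r))
        except ValueError:
            bad += 1
    return good, bad

def load_all_or_nothing(records, parse):
    """Reject the whole file if any single record fails to parse."""
    return [parse(r) for r in records]  # any ValueError aborts the load

parse = float  # stand-in parser for the sketch
print(load_best_effort(["1.0", "oops", "2.5"], parse))   # ([1.0, 2.5], 1)
# load_all_or_nothing(["1.0", "oops", "2.5"], parse)     # would raise ValueError
```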
So: import best effort, or all or nothing; how to populate records for which there is no value and what the default values are; how the data is normalized or transformed; and also how to avoid duplicates. This, again, is very important to give the users of the data a full picture of the data itself. And this is not informal: there is a formal process, and the documents are reviewed and approved by all the stakeholders, including the subject matter experts and the data scientists, and by a function that we have introduced called the data architect. And of course the documents are available to the end users of the data. We even have links between the data warehouse and the documents: if you get access to the database and, while doing your research, you see a table or a view and think it looks like something you could use, the data itself has a link to the documentation. So from the database, while you are exploring the data, you can retrieve a link to the place where the documentation is available.

This is just a quick summary of some of the results that I am allowed to share at this moment. This is about image-guided therapy: using our remote service infrastructure, for remotely connected systems with the right contracts, we have reduced downtime by 14%. More than one out of three cases are resolved remotely, without an engineer having to go on site. 82% is the first-time-right fix rate, which means that the issue is fixed either remotely or, if a visit to the site is needed, only one visit is needed: at that moment the engineer arrives with the right part and fixes it straight away. And this results, on average, in 135 hours more operational availability per year, and therefore the ability to treat more patients for the same cost. I would like to conclude by citing some nice testimonials from some of our customers, showing that the value we have created has a really high impact. And this concludes my presentation; thanks for your attention so far.

>> Thank you Mauro, very interesting. And we've got a number of questions that have come in, so let's get to them. The first one: how many devices has Philips connected worldwide? And how do you determine which sensor data workloads get analyzed with Vertica?

>> Okay, so this is actually two questions. The first question: how many devices are connected worldwide? Well, I'm not allowed to tell you the precise number of connected devices worldwide, but what I can tell you is that we are in the order of tens of thousands of devices, and of all types, actually. And then, how do we determine which sensor data gets analyzed with Vertica? As I said in the presentation, it is a combination of two approaches: a data-driven approach and a knowledge-driven approach. A knowledge-driven approach because we make maximum use of our knowledge of the failure modes and of the behavior of the medical devices and their components to select what we think are promising data points and promising features. However, from that moment on, data science kicks in, and data science is used to look at the actual data and come up with quantitative information about what is really happening. So it could be that an expert is convinced that a particular range of values of a sensor is indicative of a particular failure.
And it turns out that maybe he was too optimistic, or the other way around: in practice there are many other situations he was not aware of. That can happen. So thanks to the data we get a better understanding of the phenomenon and we get better models. I hope that answers the question; any other questions?

>> Yes, we have another question. Do you have plans to perform any analytics at the edge?

>> That's a good question. I can't disclose our plans on this right now, but edge devices are certainly one of the options we look at to help our customers towards zero unplanned downtime. Not only that, but also to facilitate the integration of our solution with existing and future hospital IT infrastructure. I mean, we are talking about advanced security and privacy, and guaranteeing that the data is always safe, that patient data and clinical data do not go outside the perimeter of the hospital, while we enhance our functionality and provide more value with our services. So yes, edge is definitely a very interesting area of innovation.

>> Another question: what are the most helpful Vertica features that you rely on?

>> I would say the first that comes to mind at this moment is ease of integration. With Vertica we are able to load any data source in a very easy way, and it can be interfaced very easily with all kinds of clients and applications. This, of course, is not unique to Vertica; the added value is that it is coupled with incredible speed, incredible speed for loading and for querying. So it is basically a very versatile tool to innovate fast in data science. Another thing is multiple projections, and the advanced encoding and compression. This allows us to perform optimizations only when we need them, and without having to touch applications or queries: if we want to achieve higher performance, we basically spend a little effort on improving the projections, and we can very often achieve dramatic increases in performance. Another feature is Eon Mode, which is great for cloud deployments.

>> Okay, another question. What is the number one lesson learned that you can share?

>> My advice would be: document and control your entire data pipeline, end to end, and create positive feedback loops. What I hear often is that enterprises that are not digitally native (and Philips is one of them; Philips is 129 years old as a company, so you can imagine the legacy we have; we were not born on the web, like web companies, with everything online and everything digital) sometimes struggle to innovate with big data or to do data-driven innovation, because the data is not available or is in silos, the data is controlled by different parts of the organization with different processes, and there is no super-strong enterprise IT system providing all the data to everybody through APIs. So my advice is, from the very beginning, to create as soon as possible an end-to-end solution, from data creation to consumption, that creates value for all the stakeholders of the data pipeline. It is important that everyone along the data pipeline, from the producers of the data to the consumers, gets a piece of the value, a piece of the cake.
When the value is proven to all stakeholders, everyone will naturally contribute to keeping the data pipeline running and to keeping the quality of the data high. That's the lesson there.

>> Yes, thank you. And in the area of machine learning, what types of innovations do you plan to adopt to help with your data pipeline?

>> In the area of machine learning, we are looking at things like automatically detecting the deterioration of models to trigger improvement actions, as well as, connected with that, active learning, again focused on improving the accuracy of our predictive models. Active learning is when additional human intervention, labeling of difficult cases, is triggered: the machine learning classifier may not be able to classify everything correctly all the time, and instead of just randomly picking some cases for a human to review, you want the costly humans to review only the most valuable cases from a machine learning point of view, the ones that would contribute the most to improving the classifier. Another area is deep learning, and also applications of more generic anomaly detection algorithms. The challenge with anomaly detection is that we are not only interested in finding anomalies but also in recommending the proper service actions, because without a proper service action, an alert generated because of an anomaly loses most of its value. So this is where I think we...

>> Go ahead.

>> No, that's it, thanks.

>> Okay, all right. So that's all the time that we have today for questions. I want to thank the audience for attending Mauro's presentation and also for your questions. If we weren't able to answer your question today, we'll do our best to respond via email. And again, our engineers will be on the Vertica forums awaiting your other questions. It would help us greatly if you could give us some feedback and rate the session before you sign off; your rating will help guide us when we're looking at content to provide for the next Vertica BDC. Also note that a replay of today's event and a PDF copy of the slides will be available on demand; we'll let you know when that will be by email, hopefully later this week. And of course, we invite you to share the content with your colleagues. Again, thank you for your participation today. This concludes this breakout session, and I hope you have a wonderful day. Thank you.

>> Thank you.
SUMMARY :
Brought to you by Boomi. In this segment from Boomi World 19, Lisa Martin talks with Gilead Sciences' Murali Anakavur, a Change Agent Award winner, about standardizing integration on the Boomi platform. He describes choosing Boomi's hybrid model, with metadata managed in the cloud and data processing kept on premises, the speed to market and roadmap that sealed the decision, and how integrated data now lets scientists get inferences from huge epidemiology data sets in minutes instead of months, in support of research into curing diseases.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Lisa Martin | PERSON | 0.99+ |
Murali Anakavur | PERSON | 0.99+ |
2007 | DATE | 0.99+ |
Lisa | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
360 degrees | QUANTITY | 0.99+ |
thousands | QUANTITY | 0.99+ |
Murali | PERSON | 0.99+ |
Washington, D.C. | LOCATION | 0.99+ |
Hepatitis C | OTHER | 0.99+ |
today | DATE | 0.99+ |
ten years | QUANTITY | 0.99+ |
less than 10 minutes | QUANTITY | 0.99+ |
first | QUANTITY | 0.99+ |
first videos | QUANTITY | 0.99+ |
one point | QUANTITY | 0.99+ |
Africa | LOCATION | 0.99+ |
third pillar | QUANTITY | 0.99+ |
next year | DATE | 0.99+ |
both | QUANTITY | 0.98+ |
North America | LOCATION | 0.98+ |
Boomi | ORGANIZATION | 0.98+ |
about 3 billion, 4 billion people | QUANTITY | 0.98+ |
Gilead Sciences | ORGANIZATION | 0.98+ |
single | QUANTITY | 0.97+ |
Boomi | PERSON | 0.95+ |
Boomi World 19 | TITLE | 0.95+ |
thousands tera bytes | QUANTITY | 0.94+ |
Mora | PERSON | 0.93+ |
Gilead | PERSON | 0.93+ |
about four years back | DATE | 0.92+ |
Change Agent Award | TITLE | 0.92+ |
one step | QUANTITY | 0.89+ |
First thing | QUANTITY | 0.89+ |
Boomi World | TITLE | 0.89+ |
first thing | QUANTITY | 0.87+ |
this morning | DATE | 0.85+ |
Azure | ORGANIZATION | 0.82+ |
Boomi World 19 | ORGANIZATION | 0.8+ |
2019 | DATE | 0.77+ |
years | DATE | 0.77+ |
every single person | QUANTITY | 0.76+ |
Dell Boomi | ORGANIZATION | 0.73+ |
one | QUANTITY | 0.68+ |
five years | QUANTITY | 0.64+ |
Cloud | TITLE | 0.64+ |
next | DATE | 0.6+ |
2019 | TITLE | 0.58+ |
CUBE | ORGANIZATION | 0.55+ |
couple of years | QUANTITY | 0.51+ |
19 | QUANTITY | 0.39+ |
Prim | COMMERCIAL_ITEM | 0.37+ |
Tera Sumner, CenturyLink | Cisco Live US 2019
>> Announcer: Live from San Diego, California it's the Cube covering Cisco Live US 2019. Brought to you by Cisco and it's ecosystem partners. >> Welcome back to the Cube, Lisa Martin with Stu Miniman rounding out day one of our coverage of Cisco live! in San Diego. We're pleased to welcome Tera Sumner, the senior manager of Global Product Management at CenturyLink. Tera welcome to the Cube. >> Thank you, thank you both for having me. >> So we've had a number of folks from CenturyLink on the Cube over the years, I know that you guys are a big US communications provider. >> Tera: We are. >> You've got customers in over 60 countries but this is no longer your grandfather's CenturyLink. >> That's right. >> Lisa: Tell us more about it. >> So we are focused in the next roll out of the next phase of CenturyLink. We're moving from a telecommunications company to a technology company and the division that I work in for UC&C the unified communications that's where it's at. That's where it's all going to take place and having a partnership with Cisco is key for us to get from that telecoms base to the technologies base for sure. >> So bring us inside a little bit, Unified communication and collaboration, you know, Cisco obviously a strong presence in that space. Lot's of people have used Webex and understand the various, you know, VOIP phones and everything that they do there. What particularly brings Cisco and CenturyLink together? Is it engineering work, field, go to market, you know where are the pieces? >> Sure and it's all of those, right. It's all of those, what's been very nice is that Cisco has embraced the idea of being a platform and not a siloed individual product line. And so for a service provider like CenturyLink, for us to be able to embrace that same philosophy of the platform of services, what that means is that our engineering and field ops folks, our operations teams do all the hard work on the back end to make sure that we have established all of the right security, the right network, the reliability, the global scaleability of our specific platform of services and being that leader in telecommunications. And then we're able to lay that Cisco platform on top of it and what happens then from a product management level is once you've established that foundation, it's really plug and play. The customer calls and says "I need calling, I need meetings, I need" you know whatever it is they need and we build that solution and very quickly can put those components into play and get them to use the service right away. >> So we were all at Enterprise Connect. We were all just talking about that, Stu and I hosted the Cube there just, what a couple of months ago I guess. And it's such an interesting, it was an interesting event because everything is centered around communication. You can't have a great customer experience without having a phenomenal and very connected communications platform within an organization. >> Correct. >> You can't have great satisfied employees if they don't have the connectivity that they need so really looking at enterprise communication and collaboration tools as table stakes, >> Tera: Absolutely. >> For any organization because without it you're, in any industry, there's a competitor right on your coattails ready to swoop in if you're going to be making any mistakes. >> Tera: Absolutely. 
>> And now as we look at the waves of change with respect to connectivity, the explosion and expansion of 5G, the proliferation of the amount of mobile data that's going to be video traversing that works, massive demand placed on any organization to be able to deliver communications extremely quickly and extremely securely. Talk to us about some of the waves that you're going to be riding in helping customers to mitigate with respect to these new demands for high density, high performance connectivity. >> Sure, so if we talk to customers, as you know, today one of the biggest things is, it's all about security. We have a massive and really super intelligent security department at CenturyLink and it's kind of cool watching all of the various projects that they get into because they're so passionate. And not only are they passionate about it, they're adamant that we make as much of a connection secure, meetings, any kind of information secure that we possibly can and we've mitigated any risk possible. And then you take that and you have to communicate that information but you have to also be able to showcase the various solutions that you have, all of the Cisco platforms that you have. So what we have also done is we've taken that platform of services from Cisco and we've put it in the hands of our operations folks, our sales folks, our field techs, our executives, our middle management group and every one of them knows then how to quickly use the teams application from their desktop, they all have it on their, and I don't have my phone with me, they'll have it on their mobile device. So it's very familiar, it's very quick and it's always on, right so they're connected all the time, which I know we all say "I hate that, I hate that." the minute you don't have it, it drives people crazy. So it's a very valuable tool for us from a product management perspective to put these tools in the hands of our internal users who are the voice to that customer, so when the customer calls and goes "Oh my gosh, I don't know what's going on", "Ah, I've been there before, let me help you out and let me do that very quickly." >> So want you to help us understand, how are you helping customers keep up with just the rapid pace of change that's going on here. As Lisa mentioned Enterprise Connect, the themes I was hearing, very similar to what we're hearing here at the show. You know cloud drastically changing architectures, AI and ML infusing itself into all the environments there. It feels like from a customer standpoint every time they go do a role, it's like "Oh wait, hold on, didn't you hear about the new thing and the new thing and the new thing." And, >> And don't use that, use this. >> That tendency to, like oh wait, I thought I was down the path yet I constantly need hear about yet another thing. >> Absolutely, so yes, you're right it's a constant game of catch up if you will. Have you tried the new app, do you have the latest version of X, Y and Z? What we're trying to do is also bridge that gap because we have tremendously intelligent and savvy customers where it used to be if you build it, they will come and now it's no, no, no, don't even build it. Let them tell you what your market needs to drive, the customers have the most unique uses for the technology these days and we have to keep up with that. So we let those customers help drive where we go from a product standpoint but at the same time I've got traditional customers who are saying "Okay, somebody told me I need to get to the cloud." 
"Okay, I can help you with that." We have a very unique perspective on how we bring customers onboard, on how we get customers to adopt the technology and truly, the way that we do that is with the human touch right. We concentrate completely on our customer experience from end to end, so if you give us a call and you say "Here's a problem I need to solve and here are the components I have sitting in there today." We design the solution that you need for your business needs and then we walk you through that step by step and when we're all implemented and ready to go we're still going to answer that phone. We're still going to answer your emails and take your calls and say "What else can I do for you? How can I help? How do we want to expand?" So it's really that customer service on top of the focus of customer experience that makes CenturyLink I think still very unique in the industry because we care that what we are putting in your hands as a customer is something that not only you will use but you'll talk about in a very positive light. >> So given that everything you talked about, you know connectivity, and when we don't have connectivity you feel like you've lost a limb or you've lost sight or hearing. It's that disconnect that is just, these days it feels so strange but customers need to have definitely, and that was a theme I think that we hear at every event. We also heard it at Enterprise Connect, it's not just AI it's humans and AI but speed is essential for any industry especially those that are undergoing any sort of transformation because they've got to stay ahead of their competition. So how do you balance that, how does CenturyLink balance that need for speed and also deliver a customer experience that's unique as you say, that has that personalized element that it sounds like I'm hearing. How are you leveraging tools like automation and AI machine learning to help CenturyLink deliver that customer experience but quickly? >> Well, we're doing lots of things, some of the things that we're doing is that automation from the first time they click on the website to say "What's going on at CenturyLink? Oh, they've got UC&C." You click a button and you read a little bit about what the products are and you can order it right then and there and then you get it turned around very quickly to put it in your hands. And oh by the way, if you need some help we've got the training videos, we've got you know, a phone number for you to call if you really need some human explanation of "Okay, I just can't figure this out, I can't get that." So the automation is key for sure. When you're talking about speed, as you know if anyone has teenagers around and they're using gaming systems or you're watching Netflix or whatever it is that you're doing all day, you are eating a ton of bandwidth. And so what's nice about working for CenturyLink is that well, we're the provider of the bandwidth, so we get to see the trending of what products are consuming the most of that bandwidth and we very quickly can prioritize and say "this content delivery network needs more" or "Holy Cow, what is U&C doing in Latin America or APAC or EMEA? They're consuming a ton of bandwidth, we need to allocate more and put a priority on that." And so that's different than other competitors who aren't also service providers because then they have to go back and negotiate. "No, no, no, my services really do need more bandwidth and I really do need some priority and be nice to me and I'll take care of it." 
Right, so we have that ability at CenturyLink to do that very quickly. >> So Tera, CenturyLink's had a long partnership with Cisco, a very deep relationship, Cisco's been talking a lot about their transformation. Remember a year ago, it was when you think about, you know Cisco 2030, it's not as a networking company it's a software company. Give us your assessment as a partner, what you've been seeing in Cisco and also bring us in a little as to how CenturyLink is, as we said at the beginning, a different CenturyLink that we might have thought of in a previous generation? >> Sure and it's a good question, it's for me I've been at CenturyLink for, you know as I mentioned, about 15 years and I've got to witness and be a part of the initial relationship with Cisco that we had, up unto today when I help manage that relationship and it really has transformed from a relationship to a partnership. And it's no longer just they give you something and you go and implement it, now it's truly the give and take. Right, you have these conversations, but we also have the relationships with several of the employees of Cisco to say "Okay, I understand you're putting this into the network, tell me a little bit more about that, how is that unique to a service provider versus an enterprise? How can I make that a better value proposition for my customer base because of CenturyLink?" And we get the reciprocal communication back and forth, whereas years ago it was "Here you go, here's what we're giving you, go ahead and put that into the network." So it's really been exciting for us at CenturyLink and certainly, I think, for our Cisco folks because it's easy now, we know each other very well, we know so many of the employees at both companies that when I pick up the phone its "Hey, how's it going?" Instead of "Oh, I need to speak to the Vice President of X, Y, Z." Right, so it's truly been a great transformation in a partnership from that relationship to that true partnership where there's give and take. And if we have a question or we think "You know I've got this amazing customer who has this bizarrely intelligent ask." I want to help them with that. I have no hesitation to pick up the phone to call my partner and say "You're going to love this. Help me figure out how to get us there." And it's really been working quite well over the last few years. I'm kind of excited to see how far it goes in the next few. >> So it sounds like it's evolved into a much more strategic partnership. >> Tera: Absolutely. >> Is that an accelerator or facilitator of CenturyLink's transformation to a technology company? >> It's both of those things, it's a complete accelerator but it just makes sense when you have partners who have that very similar vision that you do from a strategic company, you look at that and think "Okay, you know what, this is going to fit very nicely into my strategy, my mission statement" and it's going to be a much easier transition for all of my colleagues as a result because then they can see "Oh, that's exactly what we need to do." We need to take these steps to move into that technology mode and now you're showing me how to do that with your strategic partnership with Cisco. It's very fun. >> Fun is good, Tera thank you so much for joining Stu and me on the Cube this afternoon. >> Absolutely. - We're going to keep our eye on CenturyLink, we appreciate your time. >> Absolutely, come and visit us any chance you get. 
>> All right, for Stu Miniman, I'm Lisa Martin, you're watching the Cube, day one of our coverage of Cisco Live has just come to an end. We want to thank you so much for watching and catch us starting tomorrow morning, day two from San Diego. Thanks for watching.
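Earlier in the interview Tera describes watching which products are consuming bandwidth and reprioritizing allocation when, say, UC&C spikes in a region. A rough sketch of that kind of trending check follows; the telemetry layout, the weekly window, and the 25% growth threshold are all invented for illustration and are not CenturyLink's actual policy:

```python
import pandas as pd

# Hypothetical usage telemetry: one row per service, region, and hour.
usage = pd.read_csv("bandwidth_usage.csv", parse_dates=["hour"])  # columns: hour, service, region, gbps

# Average consumption per service and region, bucketed by week.
weekly = (
    usage.groupby(["service", "region", pd.Grouper(key="hour", freq="W")])["gbps"]
         .mean()
         .reset_index()
         .sort_values("hour")
)

# Week-over-week growth; anything above the threshold gets priority review.
weekly["growth"] = weekly.groupby(["service", "region"])["gbps"].pct_change()
GROWTH_THRESHOLD = 0.25  # made-up policy number
hot = weekly[weekly["growth"] > GROWTH_THRESHOLD]
print(hot.sort_values("growth", ascending=False)[["service", "region", "hour", "gbps", "growth"]])
```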
SUMMARY :
Brought to you by Cisco and its ecosystem partners. In this segment from Cisco Live US 2019 in San Diego, Lisa Martin and Stu Miniman talk with Tera Sumner, Senior Manager of Global Product Management at CenturyLink, about the company's move from telecommunications provider to technology company. She explains how standardizing on Cisco's unified communications and collaboration platform lets CenturyLink assemble calling and meeting services quickly, how owning the network lets it track bandwidth trends and prioritize capacity across regions, and how the Cisco relationship has grown from a vendor relationship into a give-and-take strategic partnership.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
CenturyLink | ORGANIZATION | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
Cisco | ORGANIZATION | 0.99+ |
Lisa | PERSON | 0.99+ |
San Diego | LOCATION | 0.99+ |
Stu | PERSON | 0.99+ |
San Diego, California | LOCATION | 0.99+ |
tomorrow morning | DATE | 0.99+ |
Tera Sumner | PERSON | 0.99+ |
a year ago | DATE | 0.99+ |
both | QUANTITY | 0.99+ |
Tera | PERSON | 0.99+ |
today | DATE | 0.99+ |
U&C | ORGANIZATION | 0.99+ |
first | QUANTITY | 0.99+ |
both companies | QUANTITY | 0.99+ |
US | LOCATION | 0.99+ |
Enterprise Connect | ORGANIZATION | 0.98+ |
APAC | ORGANIZATION | 0.98+ |
Latin America | LOCATION | 0.98+ |
over 60 countries | QUANTITY | 0.98+ |
day two | QUANTITY | 0.94+ |
about 15 years | QUANTITY | 0.94+ |
couple of months ago | DATE | 0.93+ |
Cisco 2030 | ORGANIZATION | 0.93+ |
Webex | ORGANIZATION | 0.9+ |
UC&C. | ORGANIZATION | 0.9+ |
Cube | COMMERCIAL_ITEM | 0.9+ |
one | QUANTITY | 0.89+ |
this afternoon | DATE | 0.88+ |
day one | QUANTITY | 0.87+ |
a ton of bandwidth | QUANTITY | 0.87+ |
2019 | DATE | 0.85+ |
Tera | ORGANIZATION | 0.84+ |
Z | TITLE | 0.81+ |
UC&C | ORGANIZATION | 0.81+ |
years | DATE | 0.77+ |
EMEA | ORGANIZATION | 0.77+ |
last | DATE | 0.76+ |
Vice President | PERSON | 0.73+ |
waves | EVENT | 0.71+ |
years ago | DATE | 0.69+ |
X | TITLE | 0.67+ |
Cube | ORGANIZATION | 0.66+ |
Cisco Live | TITLE | 0.65+ |
Chris Wojdak, Symcor | Informatica World 2018
>> Announcer: Live from Las Vegas, it's theCUBE! Covering Informatica World 2018. Brought to you by Informatica. >> Hey welcome back, everyone. Live here in Las Vegas, this is theCUBE's coverage of Informatica World 2018. I'm John Furrier co-host of theCUBE with Peter Burris, my co-host for the next two days. Chris Wojdak who's the Production Architect at Symcor, a Canadian leading financial processing services provider. Welcome to theCUBE. >> Thank you, great to be here, guys. >> So first explain, about in one minute, what the company does and your role. >> Yeah, so Symcor was formed by the three largest banks in Canada, over 20 years ago. We have a proven ability to work effectively as a utility service structure type of model. Symcor is a leading business processing and client communications provider in Canada, supporting banks, telecommunications, insurance, and retail companies in Canada. >> John: And your role there is to do what? Deployment of data, deployment? Be specific. >> Yeah, specifics, one of the things that I work on is strategic initiatives. Everything from data-driven architectures to the strategies, where we want to take the company and how do we, how does the technology line up to the business needs. Such that I'm a Senior Architect in the office as a CTO. >> So what's your data look like? I mean, obviously, you're an Informatica customer. >> Are you happy with Informatica? And are they helping you out? And what's the, tell us about, tell us what's going on. >> Anybody who knows me will know that I'm a pretty blunt guy, so when I say this, I do mean it is, Informatica has done tremendous things for us. Their products actually just work. It's very easy to get value out of our data using Informatica. Our time to market has decreased from months to weeks with them. So we're extremely happy with the maturity of their products and services that we get from them. >> So as you think about the role that, that the architecture's played, and you being a, a good example of that. The architect used to be the individual that would look at the physical assets, and how you thought about the physical assets should be put together in response to a known process, >> Chris: Correct. >> and a known application. And now, as you mention, a data-first orientation requires thinking about the arrangement of assets that have to be architected around very differently. >> Absolutely. >> How has the role of architecture changed? Certainly where you are, but in response to this notion of data first. >> Yeah, so one of the biggest challenges that we have is how do we ethically use that data for fraud prevention and detection purposes 'cause that's one of the key areas that we're trying to grow as one of our key initiatives, which is digital and data services. And where we struggle with that is how do we effectively use our data? So we work with our internal teams, like our privacy and data governance teams to come up with a data governance policy, a comprehensive one at that. How do we ethically use this data now for our services? That's the biggest thing that's changed as opposed to just taking our process and gluing it together. How can you use that without breaking laws and things like that? That's the biggest change I see. >> And what's the relationship between architecture, data architecture, or architecture generally and the role that security's playing? 
We have a feeling that because data can be shared, because it can be copied, 'cause it can be moved, privatizing that data is essential to any business strategy and security historically has played a major role in thinking about how we privatize data. How does security fit into that governance, ethical kind of model? >> Yeah, and we are a security first type of company over anything else a lot of times. They definitely have a seat at the table. We've had to deploy certain things, I'm not sure if you heard of format preserving encryption architectures and techniques to help enable not only to satisfy the governance, but to drive value legally to our businesses, and our clients. >> How do you look at data as a platform, and how is your data laid out? You made a comment earlier which I liked, which was, Informatica products just works. We've been covering them for a few years. One of the things that got my attention was horizontally scaling the data across systems, not just a point product, >> Chris: Exactly. >> more of a platform. How, from your standpoint, do you look at platforms for you? As you re-platform with data, you are digitizing a lot of services, you're actually enabling new services. What is it about the data platform, and how are you guys thinking about it? >> Well, when we're thinking about it, how do we manage data in a centralized spot, and deliver microservices on top of that data in one spot? How do we, because we can't afford to have data in a million data warehouses, or sporadically throughout the organization, it's not an effective use of data. So the way we've tried to structure it is as soon as we get the data in, we keep it in one spot, which in our case would be the Tera Hadoop cluster. Fully encrypted using format preserving encryption as our mechanism to securing the data. And then from there, running microservices on top of our Hadoop stack power byte Informatica, to drive value out of that data. And where the biggest bang for our buck a lot of times is is that, mainly we have old mainframe data file structured data that's hard to parse and deal with. Well, we can store it in Hadoop, save the space, 'cause it's highly compressed, like X9 or EBCDIC, use Informatica to just get at it in a matter of minutes, to drive value in weeks versus months in a traditional model >> Talk about the microservices architecture because that's kind of a methodology, kind of a mindset. Is it like the classic cloud, Kubernetes containers, or you think of it more of endpoint APIs, talk about how you define microservices. >> Yeah, so microservices, where we've leveraged microservices is essentially in our in our new development models where we're utilizing node.js, and react, single page application development, where we have this in the front end just talking to microservice, specifically, delivering on a specific need only. And then we're leveraging things like, for instance, Kubernetes in the backend, where we deploy those microservices, but we're dealing with it from a single page application perspective, really the more modern web development approach is. >> So you're bringin' data into the application, via microservices, so you can have the centralized location, microservices handles the interaction, and it inputs that into the application? 
>> Right, and then we also had to rework the security infrastructure and our approach to it, because we couldn't use the old-school J2EE session cookies; now we're using token-based authentication, and there are all these challenges there, right? >> Hey, I love it, we're at a data show and we're talkin' Kubernetes, and orchestration, containers, and microservices, and it's awesome. (laughing) (Chris laughing) >> But that's what those technologies are deployed for, right? >> I know, I'm just saying, it's great! >> But I want to push you on this. >> Chris: Yeah, sure. >> So, today, Symcor provides, as you said, this enormous facility for looking up past banking transactions or past banking statements for a variety of different banks in Canada. But I presume you're looking at providing new services in the future. I can imagine that for a centralized resource where a human being is looking up an old banking statement, well, you've got four, five seconds to get the job done, and that's probably pretty good. But when you start talking about maybe moving to fraud detection, or some other types of services, does that start to change the way you think about your data architecture? 'Cause now you're doing something that's much closer to real-time; how's that going to affect the way you think about things? >> Oh, we've been on a journey, right? On a digital data transformation journey, literally, at Symcor because of that. We started off with some in-house built solutions that we actually have patents on, on how to properly warehouse data. We have one of the largest data warehouses for check images in Canada, like 2.6 petabytes, and we had to ask, how do we drive value out of this data-warehouse type of solution? So how do we move now into more of the Hadoop and Cassandra world, to get that real-time and batch processing and get insights, and how do we do that ethically as well, right? And secure: how do we secure it? Those are the three biggest things that we have to look at in our journey to get there. It hasn't been easy, 'cause different paradigms, different understandings-- >> So let me make sure I got that: new technologies to reduce the response times, ethical use of the data, >> The data. >> and secure control in reference to the data? >> Correct, to protect it, yes. >> So how is that changing then, how you think? Do you see it staying centralized, or do you see it becoming, moving some of the data, some of the responses, out closer to some of your banks, who are actually doing the fraud detection? >> Well, we see it, 'cause we're trying to get into this space and do it on their behalf, because we have that overarching kind of look at this, so how do we just do it ethically, right? So, when some of our owner banks, for example, send us this data, we can provide overarching services to provide insights across the board, something they cannot, let's say, do on their own without our help, type of thing. >> Real quick, define data ethics, 'cause you mentioned ethics many times. Do you mean securely, anonymized, what does that mean for you? >> Well, to me it means, like, 20 years ago for example, I would take my wallet and maybe put it in my vault at home, physically protected, it's safe.
Well how do I protect that data now, not only from potentially breaches, but how do I protect to make sure my privacy isn't at risk, that someone's not using it for, for improper use, things like that, that's how define ethical use, right? >> What're you doin' now that you couldn't do before, we're seeing this awesome cloud, you mentioned, Kubernetes gets me pumped up, because that's kind of a horizontal orchestration, you talk about multi-cloud, these are things that are, coming into sight with those kinds of technologies. There's an old way, there's a new way, right? (laughs) So we're seeing this transformation, what's different now for you, that you couldn't do before? >> Yeah, before it was hard to drive insights, because we didn't have the scalability horizontally, or vertically, so things like Hadoop, Informatica and Hadoop the way we can scale our web applications with microservices that's what's made the big difference, is the techniques that are being developed to get down to real-time processing, get the answer quicker and faster, and drive value to our clients faster. What's really important is, when they moved to digital channels, you know, fraud becomes a problem it's growing, in incidents and complexity. We see an opportunity now, where we can provide this fraud detection and prevention services as they change and go to digital channels, were there for the ride, type of thing. >> Chris, it's a great interview, I'd love to follow up with you and learn more about your environment. Final question, I heard you got the Informatica innovation award honoring, congratulations! >> Thank you. >> Advice to other folks doing cutting edge stuff that might be interested in in that kind of status? >> Yeah, words of advice there would be, try to push the limits. Never give up, try to push the limits on the design patterns and design approaches. You'd be amazed at what you can achieve if you really push those limits. >> Great story, love what you guys are doing out of Canada, Toronto area, Chris thanks for comin' on theCUBE, appreciate your stories. theCUBE live coverage here in Las Vegas for Informatica World 2018, I'm John Furrier, Peter Burris, we'll be back after this short break. (bubbly music)
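Two specifics in Chris's answers are worth unpacking with a small example: decoding old fixed-width mainframe records (the X9 and EBCDIC data he mentions) and protecting fields with format-preserving encryption so the ciphertext still fits the original layout. The sketch below is an illustration only; the record layout, the key, and the choice of the pyffx library are assumptions, not a description of Symcor's actual pipeline, which the interview says is built on Informatica and Hadoop:

```python
import codecs
import pyffx  # one small format-preserving encryption library; any FF1/FF3 implementation would do

# Hypothetical 30-byte fixed-width record in EBCDIC (code page 037):
# account number (10), amount in cents (8), branch code (4), filler (8).
raw = bytes.fromhex("f1f2f3f4f5f6f7f8f9f0"   # "1234567890"
                    "f0f0f0f1f2f5f0f0"       # "00012500"
                    "f0f0f4f2"               # "0042"
                    "4040404040404040")      # EBCDIC blanks

text = codecs.decode(raw, "cp037")           # EBCDIC -> str
account, amount, branch = text[0:10], text[10:18], text[18:22]

# Format-preserving encryption: the protected value is still ten digits,
# so downstream fixed-width layouts and validations keep working.
fpe = pyffx.Integer(b"not-a-real-key", length=10)
protected_account = str(fpe.encrypt(int(account))).zfill(10)

print(account, "->", protected_account)
print("amount (cents):", int(amount), "branch:", branch)
```

That width preservation is the practical point of format-preserving encryption in this setting: the data can stay encrypted inside the existing structured files and Hadoop tables while analytics and microservices still operate on it.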
SUMMARY :
Brought to you by Informatica. In this segment from Informatica World 2018 in Las Vegas, John Furrier and Peter Burris talk with Chris Wojdak, Production Architect at Symcor, the Canadian financial processing provider formed by the country's three largest banks. He explains how Symcor keeps data in one place on a Hadoop cluster protected with format-preserving encryption, serves it through microservices built with node.js, React, and Kubernetes, and governs its use ethically for fraud prevention and detection. With Informatica, time to value has dropped from months to weeks, and Symcor's 2.6-petabyte check-image warehouse is being extended toward real-time fraud insights for its owner banks.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Peter Burris | PERSON | 0.99+ |
Chris Wojdak | PERSON | 0.99+ |
Chris | PERSON | 0.99+ |
Informatica | ORGANIZATION | 0.99+ |
Canada | LOCATION | 0.99+ |
John | PERSON | 0.99+ |
Symcor | ORGANIZATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
2.6 petabytes | QUANTITY | 0.99+ |
five seconds | QUANTITY | 0.99+ |
one spot | QUANTITY | 0.99+ |
four | QUANTITY | 0.98+ |
first | QUANTITY | 0.98+ |
theCUBE | ORGANIZATION | 0.98+ |
20 years ago | DATE | 0.98+ |
three largest banks | QUANTITY | 0.97+ |
three biggest things | QUANTITY | 0.97+ |
today | DATE | 0.97+ |
node.js | TITLE | 0.97+ |
single page | QUANTITY | 0.97+ |
one | QUANTITY | 0.96+ |
Informatica World 2018 | EVENT | 0.96+ |
One | QUANTITY | 0.96+ |
Hadoop | TITLE | 0.95+ |
over 20 years ago | DATE | 0.95+ |
one minute | QUANTITY | 0.9+ |
Canadian | OTHER | 0.88+ |
Toronto | LOCATION | 0.87+ |
X9 | TITLE | 0.84+ |
Cassandra | PERSON | 0.81+ |
first type | QUANTITY | 0.75+ |
Kubernetes | TITLE | 0.7+ |
next two days | DATE | 0.69+ |
Jade | TITLE | 0.67+ |
EBCDIC | ORGANIZATION | 0.66+ |
million data warehouses | QUANTITY | 0.66+ |
Kubernetes | ORGANIZATION | 0.58+ |
Informatica innovation | EVENT | 0.57+ |
Hadoop | COMMERCIAL_ITEM | 0.49+ |
Informatica | TITLE | 0.49+ |
Tera | ORGANIZATION | 0.38+ |
Andrew Wheeler and Kirk Bresniker, HP Labs - HPE Discover 2017
>> Announcer: Live from Las Vegas, it's The Cube, covering HPE Discover, 2017 brought to you by Hewlett Packard Enterprise. >> Okay, welcome back everyone. We're here live in Las Vegas for our exclusive three day coverage from The Cube Silicon Angle media's flagship program. We go out to events, talk to the smartest people we can find CEOs, entrepreneurs, R&D lab managers and of course we're here at HPE Discover 2017 our next two guests, Andrew Wheeler, the Fellow, VP, Deputy Director, Hewlett Packard Labs and Kirk Bresniker, Fellow and VP, Chief Architect of HP Labs, was on yesterday. Welcome back, welcome to The Cube. Hewlett Packard Labs well known you guys doing great research, Meg Whitman really staying with a focused message and one of the comments she mentioned at our press analyst meeting yesterday was focusing on the lab. So I want ask you where is that range in the labs? In terms of what you guys, when does something go outside the lines if you will? >> Andrew: Yeah good question. So, if you think about Hewlett Packard Labs and really our charter role within the company we're really kind of tasked for looking at things that will disrupt our current business or looking for kind of those new opportunities. So for us we have something we call an innovation horizon and you know it's like any other portfolio that you have where you've got maybe things that are more kind of near term, maybe you know one to three years out, things that are easily kind of transferred or the timing is right. And then we have kind of another bucket that says well maybe it's more of a three to five year kind of in that advanced development category where it needs a little more incubation but you know it needs a little more time. And then you know we reserve probably you know a smaller pocket that's for more kind of pure research. Things that are further out, higher risk. It's a bigger bet but you know we do want to have kind of a complete portfolio of those, and you know over time throughout our history you know we've got really success stories in all of those. So it's always finding kind of that right blend. But you know there's clearly a focus around the advanced development piece now that we've had a lot of things come from that research point and really one of the... >> John: You're looking for breakthroughs. I mean that's what you're... Some-- >> Andrew: Clearly. >> Internal improvement, simplify IT all that good stuff, you guys still have your eyes on some breakthroughs. >> That's right. Breakthroughs, how do we differentiate what we're doing so but yeah clearly, clearly looking for those breakthrough opportunities. >> John: And one of the things that's come up really big in this show is the security and chip thing was pretty hot, very hot, and actually wiki bonds public, true public cloud report that they put out sizing up on prem the cloud mark. >> Dave: True private cloud. >> True private cloud I'm sorry. And that's not including hybrids of $265 billion tam but the notable thing that I want to get your thoughts on is the point they pushed was over 10 years $150 billion is going to shift out of IT on premise into other differentiated services. >> Andrew: Out of labor. >> Out of labor. So this, and I asked them what that means, as he said that means it's going to shift to vendor R&D meaning the suppliers have to do more work. So that the customers don't have to do the R&D. Which we see a lot in cloud where there's a lot of R&D going on. That's your job. 
So you guys are HP Labs; what's happening in that R&D area that's going to offload that labor so they can move to some other high-yield tasks? >> Sure. Take first. >> John: Go ahead, take a stab at it. >> When we've been looking at some of the concepts we had in the memory-driven computing research and advanced development programs, the machine program, you know, one of the things that was the kickoff for me back in 2003: we looked at what we had in the Unix market. We had advanced virtualization technologies, we had great management-of-resources technologies, we had memory fabric technologies. But they were all kind of proprietary, tied to one silicon design, and back then we were saying, how does RISC Unix compete with industry-standard servers? This new methodology, new wave, exciting, changing cost structures. And for us it was a chance to explore those ideas and understand how they would affect our maintaining that rich set of customer experiences, mission criticality, security, all of these elements. And it's kind of funny that we're sort of just coming back to the future again. We're saying, okay, we have this move, we want to see these things happen on the cloud, and we're seeing those same technologies: the composable infrastructure we have in Synergy, and, looking forward, the research we've done on the machine advanced development program and how that will intersect hardware composability and converged infrastructure. So that you can actually have that shift, those technologies coming in taking on more of that burden to allow you freedom of choice, so you can make sure that you end up with that right mix. The right part on a full public cloud, the right mix on a full private cloud, the right mix on that intelligent edge. But still having the ability to have all of those great software development methodologies, that agile methodology; the only thing the kids know how to do out of school is open source and agile now. So you want to make sure that you can embrace that, and make sure that regardless of where the right spot is for a particular application in your entire enterprise portfolio, you have this common set of experiences and tools. And some of the research and development we're doing will enable us to drive that into that existing, conventional enterprise market as well as this intelligent edge. Making a continuum, a continuum from the core to the intelligent edge. And something that modern computer science graduates will find completely comfortable. >> Attracting them is going to be the key. I think the edge is kind of intoxicating if you think about all the possibilities that are out there, in terms of, you know, just business model disruption and also technology. I mean, wearables are edge, brain implants in the future will be edge; you know, the singularity is here, as Ray Kurzweil would say... >> Yeah. >> I mean, but this is the truth. This is what's happened. This is real right now. >> Oh absolutely. You know, we think of all that data, and right now we're just scratching the surface. I remember in 1994 the first time I fired up a web server inside of my development team, so I could begin fanning out design information on prototype products inside of HP, and it was a novelty. People would say, "What is that thing you just sent me in an email, WWW-whatever?" And suddenly we went, like almost overnight, from a novelty to a business necessity, to then it transformed the way that we created the applications for the...
John: A lot of people don't know this, but since you brought up this historical trivia: HP Labs, Hewlett Packard Labs, had scientists who actually invented the web with Tim Berners-Lee; I think the HTML founder was an HP Labs scientist. Pretty notable trivia. A lot of people don't know that, so congratulations. >> And so I look at just what you're saying there, and we see this new edge thing is going to be similarly transformative. Now, today it's a little gimmicky perhaps, it's sort of scratching the surface, it's taking security and it can be problematic at times, but that will transform, because there is so much possibility for economic transformation. Right now almost all that data on the edge is thrown away. The first person who understands, okay, I'm going to get 1% more of that data and turn it into real-time intelligence, real-time action... that will unmake industries and it will remake new industries. >> John: Andrew, this is the applied research vision, you've got to apply R&D to the problem... >> Andrew: Correct. >> That's what he's getting at, but you've got to also think differently. You've got to bring in talent. The young guns. How are you guys bringing in the young guns? What's the, what's the honeypot? >> Well, I think, you know, for us the sell, obviously, is just the tradition of Hewlett Packard to begin with, right? You know, we have recognition on that level, and it's not just Hewlett Packard Labs, it's, you know, just R&D in general, right? Kind of, you know, the DNA of being an engineering company. But, you know, I think it is creating kind of these opportunities, whether it's internship programs, you know, just the various things that we're doing, whether it's enterprise related, high performance computing... I think this edge opportunity is a really interesting one as a bridge, because if you think about all the things that we hear about in enterprise, in terms of "Oh, you know, I need this deep analytics capability," or, you know, even a lot of the in-memory things that we're talking about, real-time response, driving information, right? All of that needs to happen at the edge as well for various opportunities, so it's got a lot of the young graduates excited. We host, you know, hundreds of interns every year, and it's real exciting to see kind of the ideas they come in with, and, you know, they're all excited to work in this space. >> Dave: So Kirk, you have your machine button, three, of course you got the logo. And then the machine... >> I got the labs logo, I got the machine logo. >> So, I first entered, you know, back in the early 1980s. When I first got in the business I remembered Gene Amdahl: "The best IO is no IO." (laughter) >> Yeah, that's right. >> We're here again with this sort of memory-semantics-centric computing. So in terms of the three that Andrew laid out, the three types of projects you guys pursue... Where does the machine fit? Is it sort of in all three? Or maybe you could talk about that a little bit. >> Kirk: I think it is. So we see those technologies that over the last three years we have brought so much new, and the critical thing about this is I think it's also sort of the prototyping of the overall approach, our leaning-in approach here... >> Andrew: That's right. >> It wasn't just researchers. Right? Those 500 people who made that 160 terabyte monster machine possible weren't just from labs. It was engineering teams from across Hewlett Packard Enterprise. It was our supply chain team.
It was our services team telling us how these things fit together for real. Now we've had incredible technology experiences, incredible technologist experiences, and what we're seeing is that we have intercepts on conventional platforms where there's the photonics, the persistent memories. Those will make our existing DCIG and SDCG products better almost immediately. But then we also have these whole-cloth applications, and as we take all of our learnings, drive them into open source software, drive them into the Gen-Z Consortium, we'll see, you know, probably 18, 24 months from now some of those first optimized silicon designs pop out of that ecosystem, and then we'll be right there to assemble those again into conventional systems as well as more expansive exascale computing and the intelligent edge, with large persistent memories and application-specific processing as that next generation of gateways. I think we can see these intercept points at every category Andrew talked about. >> Andrew: And another good point there that kind of magnifies the model we were talking about: if we were sitting here five years ago, we would be talking about things like photonics and non-volatile memory as being those big R projects. Those higher-risk, longer-term things, right? As those mature, we make more progress, innovation happens, right? It gets pulled into that shorter time frame that becomes advanced development. >> Dave: And Meg has talked about that... >> Yeah. >> Wanting to get more productivity out of the labs. And she's also pointed out you guys have spent more on R&D in the last several years. But even as we talked about the other day, you want to see a little more D and keep the R going. So my question is, when you get to that point of being able to support DCIG... Where do you, is it a handoff? Are you guys intimately involved? When you're making decisions about, okay, so memristor for example, okay this is great, that's still in the R phase, then you bring it in. But now you've got to commercialize this, and you've got 3D NAND coming out, and okay, let's use that, that fits into our framework. So how much do you guys get involved in that handoff? You know, the commercialization of this stuff? >> We get very involved. So it's at the point where, when we think we have something that, hey, we think, you know, maybe this could get into a product, or let's see if there's a good intercept here, we work jointly at that point. It's lab engineers, it's the product managers out of the group, engineers out of the business group; they essentially work collectively then on getting it to that next step. So it's kind of just one big R&D effort at that point. >> Dave: And so specifically as it relates to the machine, where do you see it in the near term, let's call near term the next three years, or five years even? What do you see that looking like? Is it this combination of memristors or flash extensions? What does that look like in terms of commercial terms that we can expect? >> Kirk: So I really think the palette is pretty broad here. I can see these going into existing rack and tower products to allow them to have memory that's composable down to the individual module level. To be able to take that facility to have just the right resources applied at just the right time, with that API that we have in OneView. Extend down to composing the hardware itself.
I think we look at those Edgeline systems and want to have just the right kind of analytic capability and large persistent memories at that edge, so we can handle those zettabytes and zettabytes of data in full fidelity, analyzed at the edge, sending back that intelligence to the core but also taking action at the edge in a timeframe that matters. I also see it coming out and being the basis of our exascale high performance computing. You know, when you want to have an exascale system that has all of the combined capacity of the top 500 systems today but 1/20th of their power, that is going to take rather novel technologies, and everything we've been working on is exactly what's feeding that research, soon to be advanced development, and then soon to be production in the supply chain. >> Dave: Great. >> John: So the question I have is, obviously we saw some really awesome Gen 10 stuff here at this show, you guys are seeing that, obviously you're on stage talking about a lot of the cool R&D, but really the reality is that's multiple years in the works, some of this root of trust silicon technology. That's getting the show buzzed up; everyone's psyched about it. Dreamworks Animation's talking about how it's helping their business with inorganic opportunities, and they've got the security with the root of trust, NIST certified and compliant. Pretty impressive. What's next? What else are you working on? Because this is where the R&D is on your shoulders for that next level of innovation. Where, what do you guys see there? Because security is a huge deal. That's a great example of how you guys innovated, 'cause that'll stop the vector of attack on the surface area of IoT if you can get the servers to lock down and you have firmware that's secure; makes a lot of sense. That's probably the tip of the iceberg. What else is happening with security? >> Kirk: So when we think about security and our efforts on advanced development research around the machine, what you're seeing here with the ProLiant is making the machines more secure, the inherent platform more secure. But the other thing I would point you to is the application we're running on the prototype: large-scale graph inference. And this is security, because you have a platform like the machine able to digest hundreds and hundreds of terabytes worth of log data to look for that fingerprint, that subtle clue that you have a system that has been compromised. And these are not blatant, let's-just-blast-everything-out-to-some-xxx-subdomain attacks; this is an advanced persistent threat by a very capable adversary who is very subtle in their reach out from a system that has been compromised to that command and control server. The signs are there if you can look at the data holistically. If you can look at that DNS log, a graph of billions of entries every day, constantly changing, if you can look at that as a graph in totality in a timeframe that matters, then that's an empowering thing for a cyber defense team, and I think that's one of the interesting things that we're adding to this discussion. Not only protect, detect, and recover, but giving offensive weapons to our cyber defense team so they can hunt, they can hunt for those events, for system threats. >> John: One of the things, Andrew, I'll get your thoughts and reaction to this, because I'll make an observation and you guys can comment and tell me I'm all wet, fell off the deep end, or what not. Last year HP had great marketing around the machine. I love that Star Trek ad. It was beautiful and it was just...
The machine is a great marketing technique. I mean, use the machine... So a lot of people set expectations on the machine. You saw articles being written; maybe these people didn't understand it. You've pulled back a little bit, almost dampened it down a little bit in terms of the marketing of the machine, other than the bin. Is that because you don't yet know what it's going to look like? Or there are so many broader possibilities where you're trying to set expectations? 'Cause the machine certainly has a lot of range, and it's almost as if I could read your minds: you don't want to stake out a position too early on what it could do. And that's my observation. Why the pullback? I mean, certainly as a marketer I'd be all over that. >> Andrew: Yeah, I think part of it has been intentional, just in how the ecosystem, we need the ecosystem developed kind of around this at the same time. Meaning, there are a lot of kind of moving parts to it, whether it's around the open source community kind of getting their heads wrapped around what this new architecture looks like. We've got things like, you know, the Gen-Z Consortium, where we're pouring a lot of our understanding and knowledge into that. And so we need a lot of partners; we know we're in a day and age where, look, there's no single company that's going to do every piece and part themselves. So part of it is kind of enough to get out there, to get the buzz, get the excitement, to get other people then on board, and now we have been heads down, especially this last six months of... >> John: Jamming hard on it. >> Getting it all together. You know, you think about what we showed: we first essentially booted the thing in November, and now, you know, we've got it running at this scale; that's really been the focus. But we needed a lot of that early engagement and interaction to get a lot of the other members of the ecosystem kind of on board and starting to contribute. And really that's where we're at today. >> John: It's almost as if you want to let it take its own course organically, because, you mentioned just on the cyber surveillance opportunity around the crunching, you kind of don't know yet what the killer app is, right? >> And that's the great thing of where we're at today. Now that we have kind of the prototype running at scale like this, it is allowing us to move beyond, look, we've had the simulators to work with, we've had kind of emulation vehicles; now you've got the real thing to run actual workloads on. You know, we had the announcement around DZNE as kind of an early, early example, but it really now will allow us to do some refinement that allows us to get to those product concepts. >> Dave: I want to just ask the closing question. So I've had this screen here, it's like the theater, and I've been seeing these great things coming up, and one was "Moore's Law is dead." >> Oh, that was my session this morning. >> Another one was blockchain. And unfortunately I couldn't hear it, but I could see the tease. So when you guys come to work in the morning, what's kind of the driving set of assumptions for you? Is it just that the technology is limitless and we're going to go figure it out, or are there things that sort of frame your raison d'etre? That drive your activities and thinking? And what are the fundamental assumptions that you guys use to drive your actions? >> Kirk: So what's been driving me for the last couple of years is this exponential growth of information that we create as a species. That seems to have no upper bounding function that tamps it down.
At the same time, the timeframe we want to get from information, from raw information to insight that we can take action on seems to be shrinking from days, weeks, minutes... Now it's down to micro seconds. If I want to have an intelligent power grid, intelligent 3G communication, I have to have micro seconds. So if you look at those two things and at the same time we just have to be the lucky few who are sitting in these seats right when Moore's Law is slowing down and will eventually flatten out. And so all the skills that we've had over the last 28 years of my career you look at those technologies and you say "Those aren't the ones that are going "to take us forward." This is an opportunity for us to really look and examine every piece of this, because if was something we could of just can't we just dot dot dot do one thing? We would do it, right? We can't just do one thing. We have to be more holistic if we're going to create the next 20, 30, 40 years of innovation. And that's really what I'm looking at. How do we get back exponential scaling on supply to meet this unending exponential demand? >> Dave: So technically I would imagine, that's a very hard thing to balance because the former says that we're going to have more data than we've ever seen. The latter says we've got to act on it fast which is a great trend for memory but the economics are going to be such a challenge to meet, to balance that. >> Kirk: We have to be able to afford the energy, and we have to be able to afford the material cost, and we have to be able to afford the business processes that do all these things. So yeah, you need breakthroughs. And that's really what we've been doing. And I think that's why we're so fortunate at Hewlett Packard Enterprise to have the labs team but also that world class engineering and that world class supply chain and a services team that can get us introduced to every interesting customer around the world who has those challenging problems and can give us that partnership and that insight to get those kind of breakthroughs. >> Dave: And I wonder if there will be a tipping point, if the tipping point will be, and I'm sure you've thought about this, a change in the application development model that drives so much value and so much productivity that it offsets some of the potential cost issues of changing the development paradigm. >> And I think you're seeing hints of that. Now we saw this when we went from systems of record, OLTP systems, to systems of engagement, mobile systems, and suddenly new ways to develop it. I think now the interesting thing is we move over to systems of action and we're moving from programmatic to training. And this is this interesting thing if you have those data bytes of data you can't have a pair of human eyeballs in front of that, you have to have a machine learning algorithm. That's the only thing that's voracious enough to consume this data in a timely enough fashion to get us answers, but you can't program it. We saw those old approaches in old school A.I., old school autonomous vehicle programs, they go about 10 feet, boom, and they'd flip over, right? Now you know they're on our streets and they are functioning. They're a little bit raw right now but that improvement cycle is fantastic because they're training, they're not programming. >> Great opportunity to your point about Moore's Law but also all this new functionality that has yet been defined, is right on the doorstep. Andrew, Kirk thank you so much for sharing. 
>> A great opportunity, to your point about Moore's Law, but also all this new functionality that has yet to be defined is right on the doorstep. Andrew, Kirk, thank you so much for sharing. >> Andrew: Thank you. >> Great insight. Love Hewlett Packard Labs, love the R&D conversation; it gives us a chance to go play in the wild and dream about the future you guys are out there creating. Congratulations, and thanks for spending the time on The Cube, we appreciate it. >> Thanks. >> The Cube's coverage will continue here, live in Las Vegas for HPE Discover 2017, Hewlett Packard Enterprise's annual event. We'll be right back with more, stay with us. (bright music)
Eric Herzog | VMworld 2014
>> Narrator: Live from San Francisco, California, it's The Cube at VMworld 2014, brought to you by VMware, Cisco, EMC, HP, and Nutanix. Now here are your hosts, John Furrier and Dave Vellante. >> John: Okay, welcome back, we're live in San Francisco. This is The Cube at VMworld 2014, our fifth year. I'm John Furrier with Dave Vellante, extracting the signal from the noise. We love talking to the executives, the entrepreneurs, the VCs; all the action is here on the ground. Our next guest is Eric Herzog, the CMO, and I think you're running biz dev as well, for Violin Memory Systems. Violin recently went public and is now in a complete transformation, and you're at the helm there, coming from EMC, so you know a little bit about storage and flash. Welcome to The Cube. >> Eric: Well, thank you very much. I always enjoy coming on The Cube; I've been doing it now for four or five years, and it's been great. You guys do an outstanding job, and we really appreciate it. >> John: One of the things we're excited about obviously is flash. Everyone we get up here who's been in storage, or on the periphery of storage with cloud and hybrid cloud, is raving about the economic disruption of flash and the performance of flash. Flash is super hot. Docker is getting a lot of the press right now because of the deal, but flash is still under the hood, and that's where the action is. So what's the update? Give us your take on what's going on in flash and at Violin. What are you guys up to? >> Eric: The big thing is that flash is at that economic tipping point. If you go back to the late '70s and early '80s, as everyone remembers, everything was tape; all the data centers were tape. Hard drives were more expensive, but they were faster, and you got to the economic tipping point where using a hard-drive-based array was much better than using a tape subsystem. Then tape became backup and archive, which tape is still great at. In fact, I saw from one of the analysts who tracks such things that tape is actually still the cheapest medium, but I don't see any CIO rushing to the all-tape data center. What you've got now is flash at that same economic tipping point: between the savings on storage, servers, software licensing, power, rack space, floor space, and so on, when you do the economic analysis, which you can literally do with a calculator, it pays to go flash. In fact, flash is almost free these days. So the economics are ridiculously compelling in terms of cost. >> John: Now on the performance side you're starting to see some segmentation. Yesterday we were talking about capacity flash and performance flash. What does that mean? How are they different? Is it just that flash is flash? You're starting to see these conversations become workload-specific; is that where it's going? Are we still in the flash adoption phase? What's your take? >> Eric: We're now at the maturation phase. Flash is shifting away from everyone assuming it's all the same. Just think of the old hard drives: even today you've got 7200 RPM, 10,000 RPM, and 15,000 RPM, and it really makes a difference as you use those various capacities and the various performance characteristics around them. It's all the same medium, same media, same heads, but they make changes. Flash is doing the same thing. There are people focusing on performance flash, Violin being one of those; we have one of the highest-performing systems out there, as measured not by Violin but by third parties. And you've got other people that want to go with what I'd call cheap-and-deep flash: not as cheap as hard drives, but let's make flash faster than hard drives without being uber fast, so you can put other workloads on it that are more capacity-sensitive than performance-sensitive.
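Earlier in this exchange Herzog says the flash-versus-disk decision is something you can literally do with a calculator once the savings on servers, software licensing, power, rack space, and floor space are counted alongside the purchase price. The sketch below is a minimal, hedged illustration of that kind of back-of-the-envelope comparison; every dollar figure, capacity, and ratio in it is a made-up placeholder, not Violin, EMC, or analyst data.

```python
# Hypothetical back-of-the-envelope 3-year TCO comparison (illustrative numbers only).

def tco_per_usable_tb(price_per_raw_tb, data_reduction, watts_per_raw_tb,
                      rack_units_per_100_raw_tb, usable_tb=100.0,
                      dollars_per_kwh=0.12, dollars_per_ru_month=25.0, years=3):
    """Rough cost per usable TB: acquisition + power + rack space over the period."""
    raw_tb = usable_tb / data_reduction                      # raw capacity needed
    capex = raw_tb * price_per_raw_tb                        # purchase cost
    kwh = raw_tb * watts_per_raw_tb / 1000.0 * 24 * 365 * years
    power = kwh * dollars_per_kwh                            # energy cost
    space = rack_units_per_100_raw_tb * (raw_tb / 100.0) * dollars_per_ru_month * 12 * years
    return (capex + power + space) / usable_tb

# Made-up inputs: flash costs more per raw TB but reduces data, draws less power,
# and packs more capacity per rack unit than spinning disk.
hdd = tco_per_usable_tb(price_per_raw_tb=300, data_reduction=1.0,
                        watts_per_raw_tb=10.0, rack_units_per_100_raw_tb=8)
afa = tco_per_usable_tb(price_per_raw_tb=1200, data_reduction=4.0,
                        watts_per_raw_tb=3.0, rack_units_per_100_raw_tb=2)
print(f"disk array:      ${hdd:,.0f} per usable TB over 3 years")
print(f"all-flash array: ${afa:,.0f} per usable TB over 3 years")
```

With these particular placeholder inputs the flash array comes out cheaper per usable terabyte, which is the tipping-point effect being described; with different assumptions the answer can flip, which is exactly why the per-workload analysis matters.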
>> Dave: So I want to unpack performance a little bit. People talk about IOPS, they talk about latency. How do you guys look at performance, and how should customers be looking at performance? >> Eric: It's really a package. The number one enemy of most applications, particularly in the midrange up to the global enterprise, is absolutely latency. IOPS are important, but if you don't have good latency, IOPS don't overcome that. You need both good IOPS and really strong latency to optimize the application, whether that's an Oracle workload, an SAP workload, or a SQL workload; those types of workloads are often very latency-sensitive. The lower the latency, the better the application functions and the more you can do with it. >> Dave: So who are the kings and queens and princes of latency? You would put you guys in that mix? >> Eric: We are in that category. We can guarantee under half a millisecond of latency, or five hundred microseconds, whichever term you want to use, whether the array is empty or full. We also have customers that have done host-based aggregation in production: one of the 25 largest companies in the world has multiple petabytes in production, they aggregate our arrays on the host side, and they're able to deliver two million sustained IOPS, regardless of workload, across all those petabytes at 0.15 milliseconds of latency. That's not what we claim on an individual array's spec sheet, but they're really getting it, and they've proven it to us several times. So that's the performance side of the equation: latency and IOPS. Bandwidth is not as much of an issue, because you can get bandwidth off hard drives; hard drives are very good in high-bandwidth situations. You're not going to use all-flash in media and entertainment applications, or in oil and gas, or in a lot of genomic research, because those are very bandwidth-intensive and you can get great bandwidth off low-cost hard drives. A giant NAS cluster, for example, is better in those workloads. But in database workloads and virtualized workloads: for example, we have a customer that had 14 virtual machines on a certain physical server. They switched to our flash and were able to get 50 on the same exact physical hardware, same size virtual machines, same IOPS for those virtual machines. They went from 14 to 50 just by switching to flash: same VMware, same exact server infrastructure, all they did was swap out the storage. That's an example of how you get, A, the performance and, B, the economics, because obviously putting 50 virtual machines on the same physical hardware saves you money. >> Dave: I would think the big benefit, too, is consistency, right? You hear from customers, "just give me consistent, predictable performance." Are you hearing the same thing from customers? >> Eric: Yes, absolutely. When you look out at the flash world, you're going to see that certain vendors have a write cliff, and when you hit the write cliff they're going to have uneven performance. They'll be better than a hard-drive system for sure, but they'll still show a sawtooth, not as dramatic as you'd see in a hard-drive subsystem, but a sawtooth. What we do is guarantee consistent IOPS and consistent latency whether the array is empty, half full, or all the way full, and very few vendors in the all-flash community can do that.
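As a sanity check on figures like the ones above, throughput and latency are tied together by Little's Law: the average number of I/Os in flight equals IOPS multiplied by latency. The short sketch below simply applies that identity; the queue-depth and latency values used are illustrative, not Violin specifications.

```python
# Little's Law for storage: outstanding I/Os = IOPS x latency (in seconds).

def outstanding_ios(iops: float, latency_ms: float) -> float:
    """Average I/Os that must be in flight to sustain `iops` at `latency_ms`."""
    return iops * (latency_ms / 1000.0)

def iops_ceiling(queue_depth: float, latency_ms: float) -> float:
    """Maximum IOPS a fixed queue depth can sustain at a given per-I/O latency."""
    return queue_depth / (latency_ms / 1000.0)

# The aggregated multi-array deployment described above:
print(outstanding_ios(2_000_000, 0.15))   # about 300 I/Os in flight in aggregate
# A single host at queue depth 32 against a 0.5 ms array (illustrative):
print(iops_ceiling(32, 0.5))              # about 64,000 IOPS for that one host
```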
>> Dave: I want to talk a little bit about the stack. You came from a company where you were a very senior executive at EMC running the midrange business, VNX, an awesome stack that's been around forever, with a lot of value in it. It takes a long time to harden a stack. A lot of the flash guys, you guys included, came out solving a problem and started selling, but a stack takes a long time to mature. So how should we be thinking about the stack? >> Eric: The RAID stack is always crucial. RAID is not just about performance: redundant array of independent disks, and its number one function when RAID came out, just across the bay here at UC Berkeley, was resiliency. That's the number one thing a RAID stack does. The second thing it does, of course, is give you performance, because whether it's hard drives, flash drives, or hybrids, you aggregate the performance across the pieces of media. One of the benefits you're going to see from certain vendors in the flash space, we being one of them, is a long history. We're on our fourth-generation flash configuration, and we basically rev our generations every two years, so we're looking at a RAID stack that's in the eight-year timeframe. Some of the other flash startups have been shipping for two years, so you have a two-year-old RAID stack. An eight-year-old RAID stack has much more resiliency; it's had more test time. For us in particular, our sweet spot is the upper midrange to the global enterprise: if you look at the Fortune Global 500 list, over 50 of those companies use Violin. When you're a big company that's one thing, but when you're a small company like us, to have 50 of the Global Fortune 500 using your products, the stack has got to be pretty resilient or they wouldn't be using it. >> Dave: I probably spoke one-on-one, or maybe one-on-two or one-on-three, with over 500 customers in the first half of this year about flash, and I would ask every one of them who had used an all-flash array. It was actually pretty low penetration still, not surprising. Violin came up a lot, TMS came up a lot, then Pure a little bit, and then bits and pieces, but Violin was consistently there; you guys did a good job early on getting into this space. But I want to ask you about what I sometimes call the urinary Olympics, particularly around data reduction, and you guys are now throwing your hat into that ring. How should we be thinking about data reduction? Compression and dedupe obviously drive pricing down, and I think that's part of the reason we're at that tipping point, that and, you know, MLC. There's a lot of finger-pointing: inline, not inline, post-process. Give us your point of view. >> Eric: The bottom line is that data dedupe will help you in two primary workloads: virtual desktop and virtual server. Beyond that, it doesn't help you much. Compression helps you in database-oriented workloads, and there are certain data types that are not compressible at all. MPEGs, JPEGs, and other such data types won't compress because they're already pre-compressed by the nature of the data type. So everyone needs to be wary: just like the miles per gallon you get when you buy that brand-new car, it will vary, and it will vary by workload. If you've got a workload that's already heavily compressed, you're not going to get a benefit from anyone's compression, including ours. If you've got a workload that's already been deduped, you're not going to get a benefit from anyone's dedupe. You have to segment your workloads.
>> Eric: The other thing, Dave, in addition to what's driving that price point, which is compression and dedupe, is multiple workloads. For Violin in particular, and we've already talked about this publicly, our average array shipping is well over 30 terabytes, which is not true of a lot of other vendors. When you've got 30 terabytes and the average database is four to five terabytes, people don't put one database on our stuff. People who sell five-terabyte arrays, and a recent large company just announced a new five-terabyte array, are going to get one database put on them. With us, at 30 to 40 terabytes on average, people run three, four, five databases. Does anyone really buy a VMAX, or a NetApp 8000-class box, or a high-end IBM box and run one workload on it in the hybrid world or the hard-drive world? No. Now that people are running multiple, mixed workloads on flash arrays, that plus the dedupe and compression is driving this economic switchover, and it's why flash is the right choice for your data center. >> Dave: Well, you guys obviously do a lot in database generally and Oracle database specifically. Oracle is big on pushing hybrid columnar compression and trying to lock its competitors out of participating in that. What are you seeing in Oracle environments? I've talked to a lot of customers, and the instances of hybrid columnar are still very limited; it's in theory, on the roadmap. What are you seeing, what are your thoughts, and what do you tell customers? Customers must say, well, Oracle's locking you out. >> Eric: A couple of things. First of all, on the price points it won't matter much, because people run Violin arrays with mixed and multiple workloads already. Even if you buy Oracle compression, or compression from any of the database vendors themselves, it still benefits us: we don't sell a lot of five- and ten-terabyte arrays, we sell lots of 30, 40, and 70-terabyte arrays, and we can even scale our arrays up to 280 terabytes, which most of the other guys can't do, and I'm talking raw capacity, not deduped or compressed capacity. At the same time, while the database guys are trying to do that, one thing I'd encourage end users to do is just look at the list price; it's readily available, Oracle's is available, and it's a pretty high-ticket item. Whether it's Violin or any of the other flash vendors that have compression, it won't compress as well as Oracle's or any other database vendor's will, but their price is pretty high. So if you get reasonable compression from a storage vendor, it's going to be a lot less expensive than getting it from the database vendor. Maybe the database vendors and Oracle will change their strategy, but right now it's a very high-ticket item, and when you get it from the storage vendor, even if it doesn't compress as much, it's still a lot cheaper. You have to take that into account as part of the financial analysis when you're looking at your database deployment.
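Herzog's warning that data-reduction mileage varies by workload comes down to one division: effective price per usable terabyte is the raw price divided by whatever reduction ratio that workload actually achieves. Here is a minimal sketch, using made-up prices and ratios purely for illustration; none of these numbers come from the interview.

```python
# Effective $/TB after data reduction. All prices and ratios are hypothetical placeholders.

RAW_PRICE_PER_TB = 1000.0  # illustrative all-flash raw list price, $ per raw TB

# Direction of travel by workload: VDI dedupes well, databases compress moderately,
# already-compressed media (MPEG/JPEG) barely reduces at all.
reduction_by_workload = {
    "virtual desktop (dedupe-friendly)": 5.0,
    "OLTP database (compressible)": 2.0,
    "media archive (pre-compressed)": 1.05,
}

for workload, ratio in reduction_by_workload.items():
    effective = RAW_PRICE_PER_TB / ratio
    print(f"{workload:35s} -> ${effective:8.2f} per usable TB (at {ratio}x reduction)")
```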
>> Dave: Now, you made a big personal bet on Violin. I was there in the front row when you announced the latest VNX, which was a great announcement; you guys ticked a lot of boxes, and it was a lot of hard work, I realize that. But my one big question was, what about all-flash? And the answer was, well, we have all-flash too. You said all the right marketing things, and then several months later here you are at Violin. That's a big personal bet. You were a senior executive at EMC, not a bad life; I know there's a lot of travel, but a pretty good life, a lot of people working with you and for you, a lot of great customers. Why did you make that choice? >> Eric: A couple of things. First of all, Violin's got an incredible set of customers. When they divulged the customer list to me under NDA, I was shocked; I couldn't believe who the customers were. I worked at IBM as well as EMC, so of course all the big boys are their customers and always will be, but the number of really big companies they had was very impressive. Second, incredible technology. This year has been all about the software stack, which Violin had been mediocre at; now it's got a whole set of software potential. And as you know, Dave, I've done seven startups, five of them acquired, and I can smell a stinker. This is not a stinker; it passed the smell test after doing seven startups. >> So what was the attraction? Obviously the IPO went off, at least in terms of going public, but there was a little hitch on the climb. >> Absolutely, Violin being an emerging player, and the market, the TAM, is huge. So there's the market opportunity. But with that IPO stumble, if you will, you still came on board. That was not an issue for you? Like, okay, I'm going in guns blazing? >> Eric: Well, in addition to doing seven startups, this is my fourth turnaround, and all of them have ended up very well. IBM was one of my turnarounds. I was at Maxtor as the senior VP of marketing; that was another turnaround, albeit at a very large company, Maxtor being at five billion at the time of the acquisition. So I've done a number of turnarounds as well. It's an attractive thing to do, it's a fun thing to do. >> You felt you could really do this? >> Eric: Yeah. I know I'm a good man, but I'm not that old yet. >> It's pretty straightforward, right? You get the customers, give them some good product, collect some cash, do it again. >> Eric: Well, it's all about execution. Violin got a lot of things really right: they did really well by the customers, and customers love them. Great tech support, great field support, the SE teams, even a group of consulting engineers, and the consulting engineers are actually ex-Oracle and ex-Microsoft guys; they're learning storage, but they know all about the database community, and we've got a couple of ex-VMware guys as well. That's a big thing. But the key thing is you've got to execute on all cylinders, and we had a great technology leadership group that did the first act and got the company to its first hundred million, but it wasn't the right team to grow the business. And by the way, you guys interview VCs all the time; you know it's very common that you get to a certain point and the founding executive team needs to move aside: great technology guys, but not the best businessmen. That's a strong attraction. >> John: We were just talking to some tier-one VCs up here, Greylock and NEA, and a question that came in over text that we wanted to ask was this: at these big valuations in the private companies, it's hard for the employees to make money. So the silver lining in your opportunity is that there's a lot of growth opportunity and money-making opportunity for the management team and the investors, right? That's a good position to attract some talent. >> Eric: Yeah, that's the appeal.
When you think about it, there are certain people who are really good at an IBM, EMC, Microsoft, HP, or VMware and are never going to do well in a startup, and there are other people who are hybrids and can do both big and small companies. The attraction for those who can do both is that you can bring the seasoned management you learned at an IBM, an EMC, a Microsoft, a VMware to the small company, which has great technology but often doesn't have the discipline and rigor that a big company does. What you have to do is balance the drive for new technology and new customers with the business model without becoming overly bureaucratic. That's the attraction of a turnaround, as well as for people who do lots of startups: being able to do that and grow the company. The key thing is you've got to grow it properly, and that's the upside. >> Well, your track record is phenomenal; we've been following your career, a tech athlete for sure. Now with Wall Street you've got to do the dance, keep things nice and snap these guys back in line, right? That's a key focus as well, right? >> Eric: Yeah, it's about financial execution right now. We brought out a whole bunch of new products: our Windows Flash Array, inline dedupe and compression, a whole class of what I'd call unmatched enterprise-class data services in the all-flash array space, and you've got to be able to leverage all of that. That's the key thing: you've got the technology, but if you don't execute on the business side, you go out of business. We've got the right team in place now to take the technology where it needs to go and deliver the business value to the shareholders and the stockholders. >> Eric Herzog, CMO of Violin Memory Systems. You know, my philosophy and my experience, although not as extensive as yours, is that in a growing market a few missteps can be forgiven with a great product. You guys certainly have a good product, so you get a mulligan, with a growth-market wind at your back. Congratulations on getting things on track; it's really exciting to see a good company. This is The Cube here at VMworld 2014; we'll be right back after this short break.